anisotropic reflection models james t. kajiya corrigenda: "an extended set of fortran basic linear algebra subprograms" jack j. dongarra jeremy du croz and sven hammarling and richard j. hanson algorithm 677: c1 surface interpolation laura bacchelli montefusco giulio casciola a remark on algorithm 644: "a portable package for bessel functions of a complex argument and nonnegative order" algorithm 644 computes all major bessel functions of a complex argument and of nonnegative order. single-precision routine cbesy and double-precision routine zbesy are modified to reduce the computation time for the y bessel function by approximately 25% over a wide range of arguments and orders. quick check (driver) programs that exercise the package are also modified to make tests more meaningful over larger regions of the complex plane. d. e. amos fitting "standard" distributions to data is necessarily good: dogma or myth? this paper questions the conventional wisdom that "standard" distributions such as the normal, gamma, or beta families should necessarily be used to model inputs for simulation and proposes an alternative. relevant criteria are the quality of the fits and the impact of the input distributions on effective use of variance reduction techniques and on variate-generation speed. the proposed alternative, a "quasi-empirical" distribution, looks good on all these measures. bennett l. fox on the computation of polynomial representations of nilpotent lie groups philip feinsilver uwe granz rene schott remark on algorithm 716 the curve-fitting package tspack has been converted to double precision. also, portability has been improved by eliminating some potential errors. f. j. testa robert j. renka real values of the w-function approximations for real values of w(x), where w is defined by solutions of w exp(w) = x, are presented. all of the approximations have maximum absolute (|w|>1) or relative (|w|<1) errors of o(10^-4). with these approximations an efficient algorithm, consisting of a single iteration of a rapidly converging iteration scheme, gives estimates of w(x) accurate to at least 16 significant digits (15 digits if double precision is used). the fortran code resulting from the algorithm is written to account for the different floating-point-number mantissa lengths on different computers, so that w(x) is computed to the floating-point precision available on the host machine. d. a. barry p. j. culligan-hensley s. j. barry can any stationary iteration using linear information be globally convergent? all known globally convergent iterations for the solution of a nonlinear operator equation f(x) = 0 are either nonstationary or use nonlinear information. it is asked whether there exists a globally convergent stationary iteration which uses linear information. it is proved that even if global convergence is defined in a weak sense, there exists no such iteration for as simple a class of problems as the set of all analytic complex functions having only simple zeros. it is conjectured that even for the class of all real polynomials which have real simple zeros there does not exist a globally convergent stationary iteration using linear information. g. w. wasilkowski multicommodity flows in planar undirected graphs and shortest paths this paper deals with the multicommodity flow problems for two classes of planar undirected graphs. the first class c12 consists of graphs in which each source-sink pair is located on one of two specified face boundaries.
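a minimal sketch of the kind of iteration described in the "real values of the w-function" entry above: a plain newton iteration on w exp(w) = x with a crude starting guess. the published algorithm reaches full precision with a single pass of a higher-order scheme started from tailored approximations, so the python code below (x >= 0 only) illustrates the underlying idea rather than the authors' method.

import math

def lambert_w0(x, tol=1e-15, max_iter=50):
    """principal branch w(x) for x >= 0, via newton's method on w*exp(w) = x.
    illustrative only; the paper's scheme needs just one higher-order step."""
    if x < 0:
        raise ValueError("this sketch handles only x >= 0")
    if x == 0.0:
        return 0.0
    w = math.log1p(x)                            # crude but adequate starting guess
    for _ in range(max_iter):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))   # f / f' with f(w) = w*exp(w) - x
        w -= step
        if abs(step) <= tol * max(abs(w), 1.0):
            break
    return w

if __name__ == "__main__":
    for x in (0.5, 1.0, 10.0, 1e6):
        w = lambert_w0(x)
        print(x, w, w * math.exp(w) - x)         # residual should be near zero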
the second class c01 consists of graphs in which some of the source-sink pairs are located on a specified face boundary and all the other pairs share a common sink located on the boundary. we show that the multicommodity flow problem for a graph in c12 (resp. c01) can be reduced to the shortest path problem for an undirected (resp. a directed) graph obtained from the dual of the original undirected graph. h. suzuki t. nishizeki n. saito initializing generalized feedback shift register pseudorandom number generators the generalized feedback shift register pseudorandom number generators proposed by lewis and payne provide a very attractive method of random number generation. unfortunately, the published initialization procedure can be extremely time-consuming. this paper considers an alternative method of initialization based on a natural polynomial representation for the terms of a feedback shift register sequence that results in substantial improvements in the initialization process. bruce jay collings g. barry hembree algorithm 722; functions to support the ieee standard for binary floating-point arithmetic this paper describes c programs for the support functions copysign(x,y), logb(x), scalb(x,n), nextafter(x,y), finite(x), and isnan(x) recommended in the appendix to the ieee standard for binary floating-point arithmetic. in the case of logb, the modified definition given in the later ieee standard for radix-independent floating-point arithmetic is followed. these programs should run without modification on most systems conforming to the binary standard. w. j. cody jerome t. coonen when trees collide: an approximation algorithm for the generalized steiner problem on networks ajit agrawal philip klein r. ravi graph coloring algorithms for fast evaluation of curtis decompositions marek perkowski rahul malvi stan grygiel mike burns alan mishchenko a fast adaptive grid scheme for elliptic partial differential equations we describe the recursive subdivision (rs) method---an efficient and effective adaptive grid scheme for two-dimensional elliptic partial differential equations (pdes). the rs method generates a new grid by recursively subdividing a rectangular domain. we use a heuristic approach which attempts to equidistribute a given density function over the domain. the resulting grid is used to generate an adaptive grid domain mapping (agdm), which may be applied to transform the pde problem to another coordinate system. the pde is then solved in the transformed coordinate system using a uniform grid. we believe parallelism is most easily exploited when computations are carried out on uniform grids; the agdm framework allows the power of adaptation to be applied while still preserving this uniformity. our method generates good adaptive grid domain mappings at a small cost compared to the cost of the entire computation. we describe the rs algorithm in detail, briefly describe the agdm framework, and illustrate the effectiveness of our scheme on several realistic test problems. calvin j. ribbens the solution of almost block diagonal linear systems arising in spline collocation at gaussian points with monomial basis functions numerical techniques based on piecewise polynomial (that is, spline) collocation at gaussian points are exceedingly effective for the approximate solution of boundary value problems, both for ordinary differential equations and for time-dependent partial differential equations.
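to make the "initializing generalized feedback shift register pseudorandom number generators" entry above concrete, here is a minimal python sketch of a lewis-payne style gfsr generator built on the trinomial x^p + x^q + 1 (the classic p = 98, q = 27 is assumed). the naive lcg seeding below is exactly the kind of ad hoc initialization whose cost and quality the paper addresses, so treat it as an illustration, not the proposed procedure.

class GFSR:
    """minimal generalized feedback shift register generator:
    w[n] = w[n-p] xor w[n-q], producing 32-bit words."""
    def __init__(self, p=98, q=27, seed=12345):
        self.p, self.q = p, q
        state, words = seed, []
        for _ in range(p):                 # naive fill of the register with lcg output
            state = (69069 * state + 1) & 0xFFFFFFFF
            words.append(state)
        self.w, self.i = words, 0

    def next_word(self):
        p, q, i = self.p, self.q, self.i
        x = self.w[i] ^ self.w[(i + p - q) % p]   # w[n-p] xor w[n-q]
        self.w[i] = x
        self.i = (i + 1) % p
        return x

if __name__ == "__main__":
    g = GFSR()
    print([hex(g.next_word()) for _ in range(4)])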
there are several widely available computer codes based on this approach, all of which have at their core a particular choice of basis representation for the piecewise polynomials used to approximate the solutions. until recently, the most popular approach was to use a b-spline representation, but it has been shown that the b-spline basis is inferior, both in operation counts and conditioning, to a certain monomial basis, and the latter has come more into favor. in this paper, we describe a new package for solving the linear algebraic equations which arise in spline collocation at gaussian points with such a monomial basis. it is shown that the new package, which implements an alternate column and row pivoting algorithm, is a distinct improvement over existing packages from the points of view of speed and storage requirements. in addition, we describe a second package, an important special case of the first, for solving the almost block diagonal systems which arise when condensation is applied to the systems arising in spline collocation at gaussian points, and also in other methods for solving two-point boundary value problems, such as implicit runge-kutta methods and multiple shooting. fouad majaess patrick keast graeme fairweather karin r. bennett algorithm 723; fresnel integrals an implementation of approximations for fresnel integrals and associated functions is described. the approximations were originally developed by w. j. cody, but a fortran implementation using them has not previously been published. w. van snyder collocation software for second-order elliptic partial differential equations we consider the collocation method for linear, second-order elliptic problems on rectangular and general two-dimensional domains. an overview of the method is given for general domains, followed by a discussion of the improved efficiencies and simplifications possible for rectangular domains. a very-high-level description is given of three specific collocation algorithms that use hermite bicubic basis functions, (1) gencol (collocation on general two-dimensional domains), (2) hermcol (collocation on rectangular domains with general linear boundary conditions), and (3) intcol (collocation on rectangular domains with uncoupled boundary conditions). the linear system resulting from intcol has half the bandwidth of that from hermcol, which provides substantial benefit in solving the system. we provide some examples showing the range of applicability of the algorithms and some performance profiles illustrating their efficiency. fortran implementations of these algorithms are given in the companion papers [10, 11]. e. n. houstis w. f. mitchell j. r. rice random behavior: mechanical and psychological foster lindley better algorithms for unfair metrical task systems and applications amos fiat manor mendel algorithm 706; dcutri: an algorithm for adaptive cubature over a collection of triangles an adaptive algorithm for computing an approximation to the integral of each element in a vector function f(x,y) over a two-dimensional region made up of triangles is presented. a fortran implementation of the algorithm is included. the basic cubature rule used over each triangle is a 37-point symmetric rule of degree 13. based on the same evaluation points the local error for each element in the approximation vector and for each triangle is computed using a sequence of null rule evaluations. a sophisticated error-estimation procedure tries, in a cautious manner, to decide whether we have asymptotic behavior locally for each function.
different actions are taken depending on that decision, and the procedure takes advantage of the basic rule's polynomial degree when computing the error estimate in the asymptotic case. jarle berntsen terje o. espelid further algorithmic aspects of the local lemma michael molloy bruce reed drawing compound digraphs and its application to an idea organizer (abstract) kozo sugiyama algorithm 667: sigma - a stochastic-integration global minimization algorithm filippo aluffi-pentini valerio parisi francesco zirilli the rectilinear steiner tree problem: algorithms and examples using permutations of the terminal set christine r. leverenz miroslaw truszczynski sublinear time algorithms for metric space problems piotr indyk fast pseudorandom generators for normal and exponential variates fast algorithms for generating pseudorandom numbers from the unit-normal and unit- exponential distributions are described. the methods are unusual in that they do not rely on a source of uniform random numbers, but generate the target distributions directly by using their maximal-entropy properties. the algorithms are fast. the normal generator is faster than the commonly used unix library uniform generator "random" when the latter is used to yield real values. their statistical properties seem satisfactory, but only a limited suite of tests has been conducted. they are written in c and as written assume 32-bit integer arithmetic. the code is publicly available as c source and can easily be adopted for longer word lengths and/or vector processing. c. s. wallace algorithm 564: a test problem generator for discrete linear l1 approximation problem k. l. hoffman d. r. shier algorithm 653: translation of algorithm 539: pc-blas, basic linear algebra subprograms for fortran usage with the intel 8087, 80287 numeric data processor r. j. hanson f. t. krogh algorithm 690; chebyshev polynomial software for elliptic-parabolic systems of pdes pdecheb is a fortran 77 software package that semidiscretizes a wide range of time-dependent partial differential equations in one space variable. the software implements a family of spacial discretization formulas, based on piecewise chebyshev polynomial expansions with c0 continuity. the package has been designed to be used in conjunction with a general integrator for initial value problems to provide a powerful software tool for the solution of parabolic-elliptic pdes with coupled differential algebraic equations. examples are provided to illustrate the use of the package with the dassl d.a.e integrator of petzold [18]. m. berzins p. m. dew formal solutions of differential equations in the neighborhood of singular points (regular and irregular) this paper presents an algorithm for building a fundamental system of formal solutions in the neighbourhood of every singularity of a linear homogenous differential operator. j. della dora e. tournier automatic building of graphs for rectangular dualisation rectangular dualisation is a technique used to generate rectangular topologies for use in top-down floorplanning of integrated circuits. this paper presents an efficient algorithm that transforms an arbitrary graph, representing a custom integrated circuit, into one suitable for rectangular dualisation. the algorithm makes use of efficient techniques in graph processing such as planar embedding and introduces a novel procedure to transform a tree of biconnected sub-graphs into a block neighbourhood graph that is a path. marwan a. 
jabri walsh-spectral test for gfsr pseudorandom numbers by applying weyl's criterion for k-distributivity to gfsr sequences, we derive a new theoretical test for investigating the statistical property of gfsr sequences. this test provides a very useful measure for examining the k-distribution, that is, the statistical independence of the k-tuple of successive terms of gfsr sequences. in the latter half of this paper, we describe an efficient procedure for performing this test and furnish experimental results obtained from applying it to several gfsr generators with prime period lengths. shu tezuka a reduced gradient algorithm for nonlinear network problems p. beck l. lasdon m. engquist the efficient solution of linear complementarity problems for tridiagonal minkowski matrices c. w. cryer best "ordering" for floating-point addition this correspondence examines the influence of the summation order on the numerical accuracy of the resulting sum when the addition is performed using floating-point, finite-precision arithmetic. a simple statistical model is used to find the relative performance of various addition procedures. it is shown that phrasing the question in terms of finding a best ordering is overly restrictive since the most natural and accurate procedures for performing the addition utilize the storage of intermediate sums as well as performing orderings. t. g. robertazzi s. c. schwartz a new proof of hilbert's theorem on homogeneous functions paul gordan inverse pairs of test matrices algorithms that are readily programmable are provided for constructing inverse pairs of matrices with elements in a field, a division ring, or a ring. wilhelm s. ericksen an adaptive mesh-moving and local refinement method for time-dependent partial differential equations we discuss mesh-moving, static mesh-regeneration, and local mesh-refinement algorithms that can be used with a finite difference or finite element scheme to solve initial-boundary value problems for vector systems of time-dependent partial differential equations in two space dimensions and time. a coarse base mesh of quadrilateral cells is moved by an algebraic mesh-movement function so as to follow and isolate spatially distinct phenomena. the local mesh-refinement method recursively divides the time step and spatial cells of the moving base mesh in regions where error indicators are high until a prescribed tolerance is satisfied. the static mesh-regeneration procedure is used to create a new base mesh when the existing one becomes too distorted. the adaptive methods have been combined with a maccormack finite difference scheme for hyperbolic systems and an error indicator based upon estimates of the local discretization error obtained by richardson extrapolation. results are presented for several computational examples. david c. arney joseph e. flaherty accurate and efficient testing of the exponential and logarithm functions table-driven techniques can be used to test highly accurate implementations of exp and log. the largest errors observed in exp and log, measured accurately to within 1/500 unit in the last place, are reported in our tests. methods to verify the tests' reliability are discussed. results of applying the tests to our own as well as to a number of other implementations of exp and log are presented.
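the "accurate and efficient testing of the exponential and logarithm functions" entry above measures implementation errors in units in the last place against a more accurate reference; the python sketch below shows only that basic measurement (using the standard decimal module as the reference and math.ulp for the scale), not the paper's table-driven test procedure.

import math
from decimal import Decimal, getcontext

getcontext().prec = 50     # the reference carries far more digits than double precision

def exp_error_ulps(x):
    """error of math.exp(x) in units in the last place, against a 50-digit reference."""
    computed = math.exp(x)
    reference = Decimal(x).exp()
    return float((Decimal(computed) - reference) / Decimal(math.ulp(computed)))

def log_error_ulps(x):
    computed = math.log(x)
    reference = Decimal(x).ln()
    return float((Decimal(computed) - reference) / Decimal(math.ulp(computed)))

if __name__ == "__main__":
    worst = max(abs(exp_error_ulps(x / 7.0)) for x in range(-700, 700))
    print("largest observed exp error:", worst, "ulp")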
ping-tak peter tang factoring a binary polynomial of degree over one million on 22 may 2000, the factorization of a pseudorandom polynomial of degree 1 048 543 over the binary field z2 was completed on a 4-processor linux pc, using roughly 100 cpu-hours. the basic approach is a combination of the factorization software bipolar and a parallel version of cantor's multiplication algorithm. the pub-library (paderborn university bsp library) is used for the implementation of the parallel communication. olaf bonorden joachim von zur gathen jurgen gerhard olaf muller remark on algorithm 751 the triangulation package tripack has been revised to run more efficiently and to eliminate some potential errors. also, a portable triangulation plotting routine was added. robert j. renka fast evaluation of elementary mathematical functions with correctly rounded last bit abraham ziv timing comparisons of the householder qr transformations with rank-1 and rank-2 updates timothy j. rolfe algorithm 620: references and keywords for collected algorithms of the acm john r. rice richard j. hanson figures of merit m. tompa make it practical isao sasano zhenjiang hu masato takeichi mizuhito ogawa generating multivariate distributions for statistical applications methods for generating random vectors from multivariate distributions are required in a variety of statistical research contexts. examples of topics include examination of the robustness of multivariate tests such as roy's largest root test, assessment of error rates in discriminant analysis, and comparing multivariate goodness-of-fit methods. this presentation assimilates the material in a coherent framework and identifies some useful generation algorithms. the slant is admittedly to statistical applications but this does not preclude its use in other areas. mark e. johnson using the sledge package on sturm-liouville problems having nonempty essential spectra we describe the performance of the sturm-liouville software package sledge on a variety of problems having continuous spectra. the code's output is shown to be in good accord with a wide range of known theoretical results. michael s. p. eastham charles t. fulton steven pruess remark on algorithm 620 dennis e. hamilton improving bipartite graphs using multi-strategy simulated annealing panagiotis k. linos michael breen naomi one: north american openmath initiative goes online the north american openmath initiative (naomi) is a recently formed organization to promote, develop, and implement the openmath standard in north america. naomi held its first workshop in vancouver this february, and launched full-scale operations. stephen p. braham you could learn a lot from a quadratic: ii: digital dentistry henry g. baker a note on the recursive calculation of incomplete gamma functions it is known that the recurrence relation for incomplete gamma functions {Γ(a + n, x)}, 0 ≤ a < 1, n = 0, 1, 2 ..., when x is positive, is unstable---more so the larger x. nevertheless, the recursion can be used in the range 0 ≤ n ≤ x practically without error growth, and in larger ranges 0 ≤ n ≤ N with a loss of accuracy that can be controlled by suitably limiting N. walter gautschi algebraic methods in the theory of lower bounds for boolean circuit complexity we use algebraic methods to get lower bounds for complexity of different functions based on constant depth unbounded fan-in circuits with the given set of basic operations.
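the gautschi note above concerns the forward recurrence Γ(a+n+1, x) = (a+n)·Γ(a+n, x) + x^(a+n)·e^(-x) for the upper incomplete gamma function. the python sketch below (which assumes scipy is available for the starting value and for reference values) simply runs that recurrence and compares it with directly computed values, so one can watch the accuracy hold for n up to about x, as the note states.

import math
from scipy.special import gammaincc, gamma   # regularized upper incomplete gamma

def upper_gamma(a, x):
    """unregularized upper incomplete gamma via scipy's regularized version."""
    return gammaincc(a, x) * gamma(a)

def recurrence_values(a, x, nmax):
    """forward recurrence: G(a+n+1, x) = (a+n)*G(a+n, x) + x**(a+n)*exp(-x)."""
    vals = [upper_gamma(a, x)]               # seed with 0 <= a < 1
    for n in range(nmax):
        s = a + n
        vals.append(s * vals[-1] + x**s * math.exp(-x))
    return vals

if __name__ == "__main__":
    a, x, nmax = 0.3, 20.0, 40
    rec = recurrence_values(a, x, nmax)
    for n in (5, 15, 20, 30, 40):
        direct = upper_gamma(a + n, x)
        # relative error; expected to stay small for n <= x (cf. the note above)
        print(n, abs(rec[n] - direct) / direct)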
in particular, we prove that depth k circuits with gates not, or, and mod-p, where p is a prime, require exp(Ω(n^(1/2k))) gates to calculate mod-r functions for any r ≠ p^m. this statement contains as special cases yao's parity result [ya 85] and razborov's new majority result [ra 86] (a mod-m gate is an oracle which outputs zero if the number of ones is divisible by m). r. smolensky on the storage requirement in the out-of-core multifrontal method for sparse factorization two techniques are introduced to reduce the working storage requirement for the recent multifrontal method of duff and reid used in the sparse out-of-core factorization of symmetric matrices. for a given core size, the reduction in working storage allows some large problems to be solved without having to use auxiliary storage for the working arrays. even if the working arrays exceed the core size, it will reduce the amount of input-output traffic necessary to manipulate the working vectors. experimental results are provided to demonstrate significant storage reduction on practical problems using the proposed techniques. joseph w. h. liu on the expected behavior of disjoint set union algorithms we show that the expected time of the weighted quickfind (qfw) disjoint set union and find algorithm to perform (n - 1) randomly chosen unions is cn + o(n/log n), where c = 2.0847 …. this implies, through an observation of tarjan and van leeuwen, linear expected time bounds to perform o(n) unions and finds for a class of other union-find algorithms. we also prove that the expected time of the unweighted quickfind (qf) algorithm is n^2/8 + o(n(log n)^2), and settle several related open questions of knuth and schonhage. b. bollobás i. simon a quadratic-tensor model algorithm for nonlinear least-squares problems with linear constraints a new algorithm is presented for solving nonlinear least-squares and nonlinear equation problems. the algorithm is based on approximating the nonlinear functions using the quadratic-tensor model proposed by schnabel and frank. the problem statement may include simple bounds or more general linear constraints on the unknowns. the algorithm uses a trust-region defined by a box containing the current values of the unknowns. the objective function (euclidean length of the functions) is allowed to increase at intermediate steps. these increases are allowed as long as our predictor indicates that a new set of best values exists in the trust-region. there is logic provided to retreat to the current best values, should that be required. the computations for the model-problem require a constrained nonlinear least-squares solver. this is done using a simpler version of the algorithm. in its present form the algorithm is effective for problems with linear constraints and dense jacobian matrices. results on standard test problems are presented in the appendix. the new algorithm appears to be efficient in terms of function and jacobian evaluations. r. j. hanson fred t. krogh reliable solution of special event location problems for odes computing the solution of the initial value problem in ordinary differential equations (odes) may be only part of a larger task. one such task is finding where an algebraic function of the solution (an event function) has a root (an event occurs). this is a task which is difficult both in theory and in software practice. for certain useful kinds of event functions, it is possible to avoid two fundamental difficulties.
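for reference alongside the bollobás-simon entry above, this is a small python sketch of the weighted quick-find (qfw) structure they analyze: every element stores its set label directly, and a union relabels the smaller of the two sets, which is what makes (n - 1) random unions take linear expected time.

import random

class WeightedQuickFind:
    """quick-find with weighting: union relabels the smaller of the two sets."""
    def __init__(self, n):
        self.label = list(range(n))               # label[i] = name of i's set
        self.members = {i: [i] for i in range(n)}

    def find(self, i):
        return self.label[i]

    def union(self, i, j):
        a, b = self.label[i], self.label[j]
        if a == b:
            return
        if len(self.members[a]) < len(self.members[b]):
            a, b = b, a                           # always relabel the smaller set
        for k in self.members[b]:
            self.label[k] = a
        self.members[a].extend(self.members.pop(b))

if __name__ == "__main__":
    n = 1000
    uf = WeightedQuickFind(n)
    while len(uf.members) > 1:                    # performs n - 1 effective unions
        uf.union(random.randrange(n), random.randrange(n))
    print("components:", len(uf.members))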
it is described how to achieve the reliable solutions of such problems in a way that allows the capability to be grafted onto popular codes for the initial value problem. l. f. shampine i. gladwell r. w. brankin fast backtracking principles applied to find new cages brendan mckay wendy myrvold jacqueline nadon l1 solution of overdetermined systems of linear equations nabih n. abdelmalek performance evaluation of programs related to the real gamma function methods are presented for evaluating the performance of programs for the functions Γ(x), ln Γ(x), and ψ(x). accuracy estimates are based on comparisons using the multiplication theorem. ideas for checking robustness are also given, and details on specific implementations of test programs are included. w. j. cody optimal starting approximation and iterative algorithm for inverse error function bogdan a. popov automatic sampling with the ratio-of-uniforms method applying the ratio-of-uniforms method for generating random variates results in very efficient, fast, and easy-to-implement algorithms. however, parameters for every particular type of density must be precalculated analytically. in this article we show that the ratio-of-uniforms method is also useful for the design of a black-box algorithm suitable for a large class of distributions, including all with log-concave densities. using polygonal envelopes and squeezes results in an algorithm that is extremely fast. in contrast to any other ratio-of-uniforms algorithm, the expected number of uniform random numbers is less than two. furthermore, we show that this method is in some sense equivalent to transformed density rejection. josef leydold semi-membership algorithms: some recent advances derek denny-brown yenjo han lane a. hemaspaandra leen torenvliet experimental analysis of dynamic algorithms for the single-source shortest-path problem daniele frigioni mario ioffreda umberto nanni giulio pasquale linear recurrences with carry as uniform random number generators raymond couture pierre l'ecuyer a note on the quality of random variates generated by the ratio of uniforms method the one-dimensional distribution of pseudorandom numbers generated by the ratio of uniforms method using linear congruential generators (lcgs) as the source of uniform random numbers is investigated in this note. due to the two-dimensional lattice structure of lcgs there is always a comparably large gap without a point in the one-dimensional distribution of any ratio of uniforms method. lower bounds for these probabilities only depending on the modulus and the beyer quotient of the lcg are proved for the case that cauchy, normal, or exponential random numbers are generated. these bounds justify the recommendation not to use the ratio of uniforms method combined with lcgs. wolfgang hörmann a modified adams method for nonstiff and mildly stiff initial value problems adams predictor-corrector methods, and explicit runge-kutta formulas, have been widely used for the numerical solution of nonstiff initial value problems. both of these classes of methods have certain drawbacks, however, and it has long been the aim of numerical analysts to derive a class of formulas that has the advantages of both adams and runge-kutta methods and the disadvantages of neither! in this paper we derive a class of modified adams formulas that attempts to achieve this aim. when used in a certain precisely defined predictor-corrector mode, these new formulas require three function evaluations per step, but have much better stability than adams formulas.
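both ratio-of-uniforms entries above start from the basic acceptance region {(u, v): 0 < u <= sqrt(f(v/u))}. the python sketch below is the textbook form of that method for the standard normal density, with the usual bounding rectangle; it is not leydold's polygonal-envelope algorithm, nor does it reproduce hörmann's lcg analysis.

import math, random

B = math.sqrt(2.0 / math.e)        # half-width of the bounding box for the normal

def normal_ratio_of_uniforms():
    """standard normal variate by the ratio-of-uniforms method: accept (u, v)
    uniform in (0,1] x [-B, B] when v*v <= -4*u*u*log(u); return v/u."""
    while True:
        u = random.random()
        if u == 0.0:
            continue
        v = random.uniform(-B, B)
        if v * v <= -4.0 * u * u * math.log(u):
            return v / u

if __name__ == "__main__":
    sample = [normal_ratio_of_uniforms() for _ in range(100000)]
    mean = sum(sample) / len(sample)
    var = sum((s - mean) ** 2 for s in sample) / len(sample)
    print("mean ~ 0:", round(mean, 3), " variance ~ 1:", round(var, 3))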
this improved stability makes the modified adams formulas particularly effective for mildly stiff problems, and some numerical evidence of this is given. we also consider the performance of the new class of methods on the well-known detest test set to show their potential on general nonstiff initial value problems. j. r. cash s. semnani the numerical solution of separably stiff systems by precise partitioning david s. watkins ralph w. hansonsmith algorithm 635: an algorithm for the solution of systems of complex linear equations in the l∞ norm with constraints on the unknowns roy l. streit another alternative for bordered systems william knight a faster deterministic maximum flow algorithm we describe a deterministic version of a 1990 cheriyan, hagerup, and mehlhorn randomized algorithm for computing the maximum flow on a directed graph with n nodes and m edges which runs in time o(mn + n^(2+ε)) for any constant ε. this improves upon alon's 1989 bound of o(mn + n^(8/3) log n) [a] and gives an o(mn) deterministic algorithm for all m > n^(1+ε). thus it extends the range of m/n for which an o(mn) algorithm is known, and matches the 1988 algorithm of goldberg and tarjan [gt] for smaller values of m/n. v. king s. rao r. tarjan table-driven implementation of the exponential function in ieee floating-point arithmetic algorithms and implementation details for the exponential function in both single- and double-precision of ieee 754 arithmetic are presented here. with a table of moderate size, the implementations need only working-precision arithmetic and are provably accurate to within 0.54 ulp as long as the final result does not underflow. when the final result suffers gradual underflow, the error is still no worse than 0.77 ulp. ping-tak peter tang an algorithm for building rectangular floor-plans previous reports [1, 3] have shown how to build an optimal floor-plan assembly starting with a planar structure graph in terms of components and their connections. the existing methods are based on exhaustively inspecting all possible rectangular duals until an optimal one is found. however, expensive computational resources are wasted when no rectangular dual exists. this paper presents a graph-theoretical formulation for the existence of rectangular floor-plans. it is shown that any triangulated graph (planar graph with all regions triangular) admits a rectangular dual if and only if it does not contain complex triangular faces. this result is the basis of a fast algorithm for checking admissibility of solutions. sany m. leinwand yen-tai lai robust rational function approximation algorithm for model generation carlos p. coelho joel r. phillips l. miguel silveira random number generators are chaotic c. herring j. i. palmore the nearest polynomial with a given zero, and similar problems hans j. stetter an extensible differential equation solver we describe a method for solving linear second order differential equations in terms of hypergeometric and other (less well known) special functions. intended as a method for computer algebra systems, the solver described in this article uses a database of functions to construct an ansatz for the solution. a user may extend the solver by appending functions to the database. an overview of a macsyma language implementation, with sample calculations, is given. barton l. willis random matroids we introduce a new random structure generalizing matroids.
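tang's table-driven exponential (described above) reduces the argument to a small remainder plus a multiple of (ln 2)/32 and combines a stored table of 2^(j/32) with a short polynomial. the python sketch below follows that general recipe with a plain degree-4 taylor polynomial; it shows the structure only and does not reproduce the carefully split constants, tables, and error bounds that give the 0.54-ulp guarantee.

import math

L = 32
LN2_OVER_L = math.log(2.0) / L
TABLE = [2.0 ** (j / L) for j in range(L)]     # 2**(j/32), j = 0..31

def table_exp(x):
    """table-driven exp: write x = (32*m + j)*ln2/32 + r with |r| <= ln2/64,
    so exp(x) = 2**m * 2**(j/32) * exp(r), and exp(r) is a short polynomial."""
    n = round(x / LN2_OVER_L)                  # nearest multiple of ln2/32
    m, j = divmod(n, L)
    r = x - n * LN2_OVER_L                     # small remainder
    poly = 1.0 + r * (1.0 + r * (0.5 + r * (1.0 / 6.0 + r / 24.0)))  # ~exp(r)
    return math.ldexp(TABLE[j] * poly, m)

if __name__ == "__main__":
    for x in (-5.0, -0.1, 0.3, 1.0, 10.0):
        approx, exact = table_exp(x), math.exp(x)
        print(x, approx, abs(approx - exact) / exact)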
these random matroids allow us to develop general techniques for solving hard combinatorial optimization problems with random inputs. john h. reif paul g. spirakis resistance scaling and random walk dimensions for finitely ramified sierpinski carpets christian schulzky astrid franz karl heinz hoffmann non-blocking networks p. feldman j. friedman n. pippenger expressing combinatorial optimization problems by linear programs mihalis yannakakis implementing gauss jordan on a hypercube multicomputer we consider the solution of dense algebraic systems on the ncube hypercube via the gauss jordan method. advanced loop interchange techniques are used to determine the appropriate algorithm for mimd architectures. for a computer with p = n processors, we show that gauss jordan is competitive to gaussian elimination when pivoting is not used. we experiment with three mappings of columns to processors: block, wrap and reflection. we demonstrate that load balancing the processors results in a considerable reduction of execution time. a. gerasoulis n. missirlis i. nelken r. peskin evaluation and inversion of the ratios of modified bessel functions, i1(x)/i0(x) and i1.5(x)/i0.5(x) geoffrey w. hill a mathematical programming generator this paper describes a mathematical programming generator that interprets problem statements written in the algebraic notation found in journal articles and text-books and outputs statements in the mps format used by ibm's mpsx mathematical programming system. the system has been implemented in the apl programming language. although originally designed for stand-alone use, it is currently being used as a component in an expert system that will help users formulate large linear programming models. the paper describes the syntax of the problem definition language and gives some illustrative examples. there are several unique features. first, the user can define objective function, constraint and right-hand-side coefficients as apl expressions. this leads to concise problem statements and also reduces data storage and processing requirements. second, the system supports an integrated data base query language. finally, there are a number of aids for model maintenance and sensitivity analysis. the last section of the paper describes the use of mpgen in the expert system context. edward a. stohr algorithm 768: tensolve: a software package for solving systems of nonlinear equations and nonlinear least-squares problems using tensor methods this article describes a modular software package for solving systems of nonlinear equations and nonlinear least-squares problems, using a new class of methods called tensor methods. it is intended for small- to medium-sized problems, say with up to 100 equations and unknowns, in cases where it is reasonable to calculate the jacobian matrix or to approximate it by finite differences at each iteration. the software allows the user to choose between a tensor method and a standard method based on a linear model. the tensor method approximates f(x) by a quadratic model, where the second-order term is chosen so that the model is hardly more expensive to form, store, or solve than the standard linear model. moreover, the software provides two different global strategies: a line search approach and a two-dimensional trust region approach. test results indicate that, in general, tensor methods are significantly more efficient and robust than standard methods on small- and medium-sized problems in iterations and function evaluations.
ali bouaricha robert b. schnabel on the second eigenvalue of random regular graphs j. friedman j. kahn e. szemeredi editorial: a publishing plan fulfilled t. r. girill a survey of linear programming in randomized subexponential time michael goldwasser an implementation of a class of stabilized explicit methods for the time integration of parabolic equations j. g. verwer the conjugate gradient method in the presence of clustered eigenvalues jose d. flores optimal quadratures in h_p spaces k. sikorski f. stenger a set of library routines for solving parabolic equations in one space variable p. m. dew j. e. walsh astrogeometry, error estimation, and other applications of set-valued analysis andrei finkelstein olga kosheleva vladik kreinovich bayesian networks this brief tutorial on bayesian networks serves to introduce readers to some of the concepts, terminology, and notation employed by articles in this special section. in a bayesian network, a variable takes on values from a collection of mutually exclusive and collectively exhaustive states. a variable may be discrete, having a finite or countable number of states, or it may be continuous. often the choice of states itself presents an interesting modeling question. for example, in a system for troubleshooting a problem with printing, we may choose to model the variable "print output" with two states---"present" and "absent"---or we may want to model the variable with finer distinctions such as "absent," "blurred," "cut off," and "ok." david heckerman michael p. wellman approximate inclusion-exclusion n. linial n. nisan suitening our nomenclature l. ramshaw parint: a software package for parallel integration e. de doncker a. gupta j. ball p. ealy alan genz algorithm 724; program to calculate f-percentiles let 0 < p < 1 be given and let f be the cumulative distribution function of the f-distribution with (m, n) degrees of freedom. this fortran 77 routine is a complement to [1] where a method was presented to find the inverse of the f-distribution function, finv(m, n, p), using a series expansion technique to find the inverse for the beta distribution function. roger w. abernathy robert p. smith the solution of a combustion problem with rosenbrock methods solving flame propagation problems with the method of lines leads to large systems of ordinary differential equations. these systems are usually solved by backward differentiation formula (bdf) methods, such as by lsode of hindmarsh. recently, rosenbrock methods turned out to be rather successful for integrating small systems with inexpensive function and jacobian evaluations. however, no test has been done on the performance of some recently developed rosenbrock codes in a situation in which the dimension of the system is large, for example, over one hundred equations. these rosenbrock codes performed quite well on the stiff detest and other small systems. the aim of this paper is to investigate the performance of the rosenbrock methods in solving the flame propagation problem by the method of lines. a. ostermann p. kaps t. d. bui time- and space-optimality in b-trees a b-tree is compact if it is minimal in number of nodes, hence has optimal space utilization, among equally capacious b-trees of the same order. the space utilization of compact b-trees is analyzed and compared with that of noncompact b-trees and with (node)-visit-optimal b-trees, which minimize the expected number of nodes visited per key access.
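the algorithm 724 entry above inverts the f-distribution through the inverse of the beta distribution; the python sketch below shows that reduction (assuming scipy for the inverse regularized incomplete beta function), using the standard identity that if b solves I_b(m/2, n/2) = p then the p-quantile of f(m, n) is n·b/(m·(1-b)). this is the textbook reduction, not abernathy and smith's series-expansion routine.

from scipy.special import betaincinv
from scipy.stats import f as f_dist            # only used to check the reduction

def f_percentile(m, n, p):
    """p-quantile of the f-distribution with (m, n) degrees of freedom,
    obtained from the inverse regularized incomplete beta function."""
    b = betaincinv(m / 2.0, n / 2.0, p)         # solves I_b(m/2, n/2) = p
    return n * b / (m * (1.0 - b))

if __name__ == "__main__":
    m, n, p = 5, 12, 0.95
    print(f_percentile(m, n, p), f_dist.ppf(p, m, n))   # the two should agree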
compact b-trees can be as much as a factor of 2.5 more space efficient than visit-optimal b-trees; and the node-visit cost of a compact tree is never more than 1 + the node-visit cost of an optimal tree. the utility of initializing a b-tree to be compact (which initialization can be done in time linear in the number of keys if the keys are presorted) is demonstrated by comparing the space utilization of a compact tree that has been augmented by random insertions with that of a tree that has been grown entirely by random insertions. even after increasing the number of keys by a modest amount, the effects of compact initialization are still felt. once the tree has grown so large that these effects are no longer discernible, the tree can be expeditiously compacted in place using an algorithm presented here; and the benefits of compactness resume. arnold l. rosenberg lawrence snyder algorithm 549: weierstrass' elliptic functions [s21] ulrich eckhardt construction of nilpotent lie algebras over arbitrary fields in this paper we present a general description of a computationally efficient algorithm for constructing every n-dimensional nilpotent lie algebra as a central extension of a nilpotent lie algebra of dimension less than n. as an application of the algorithm, we present a complete list of all real nilpotent six-dimensional lie algebras. since 1958, four such lists have been developed: namely, those of morozov [2], shedler [3], vergne [5] and skjelbred and sund [4]. no two of these lists agree exactly. our list resolves all the discrepancies in the other four lists. moreover, it contains each earlier list as a subset. robert e. beck bernard kolman intrinsically parallel multiscale algorithms for hypercubes most algorithms implemented on parallel computers have been optimal serial algorithms, slightly modified or parallelized. an exciting possibility is the search for intrinsically parallel algorithms. these are algorithms which do not have a sensible serial equivalent --- any serial equivalent is so inefficient as to be of little use. we describe a multiscale algorithm for the solution of pde systems that is designed specifically for massively parallel supercomputers. unlike conventional multigrid algorithms, the new algorithm utilizes the same number of processors at all times. convergence rates are much faster than for standard multigrid methods --- the solution error decreases by up to three digits per iteration. the basic idea is to solve many coarse scale problems simultaneously, combining the results in an optimal way to provide an improved fine scale solution. on massively parallel machines the improved convergence rate is attained at no extra computational cost since processors that would otherwise be sitting idle are utilized to provide the better convergence. furthermore the algorithm is ideally suited to simd computers as well as mimd computers. on serial machines the algorithm is much slower than standard multigrid because of the extra time spent on multiple coarse scales, though in certain cases the improved convergence rate may justify this --- primarily in cases where other methods do not converge. the algorithm provides an extremely fast solution of various standard elliptic equations on machines such as the 65,536 processor connection machine, and uses only (log(n)) parallel machine instructions to solve such equations. the discovery of this algorithm was motivated entirely by new hardware. 
it was a surprise to the authors to find that developments in computer architecture might lead to new mathematics. undoubtedly further intrinsically parallel algorithms await discovery. p. frederickson o. a. mcbryan local error estimates and regularity tests for the implementation of double adaptive quadrature paola favati giuseppe fiorentino grazia lotti francesco romani variable precision exponential function the exponential function presented here returns a result which differs from e^x by less than one unit in the last place, for any representable value of x which is not too close to values for which e^x would overflow or underflow. (for values of x which are not within this range, an error condition is raised.) it is a "variable precision" function in that it returns a p-digit approximation for a p-digit argument, for any p ≥ 0 (p-digit means p-decimal-digit). the program and analysis are valid for all p ≥ 0, but current implementations place a restriction on p. the program is presented in a pascal-like programming language called numerical turing which has special facilities for scientific computing, including precision control, directed roundings, and built-in functions for getting and setting exponents. t. e. hull a. abrham a modification of talbot's method for the simultaneous approximation of several values of the inverse laplace transform in recent years many results have been obtained in the field of the numerical inversion of laplace transforms. among them, a very accurate and general method is due to talbot: this method approximates the value of the inverse laplace transform f(t), for t fixed, using the complex values of the laplace transform f(s) sampled on a suitable contour of the complex plane. on the basis of the interest raised by talbot's method implementation, the author has been induced to investigate more deeply the possibilities of this method and has been able to generalize talbot's method, to approximate simultaneously several values of f(t) using the same sampling values of the laplace transform. in this way, the only unfavorable aspect of the classical talbot method, that is, that of recomputing all of the samples of f(s) for each t, has been eliminated. mariarosaria rizzardi stiffness detection and estimation of dominant spectrum with explicit runge-kutta methods a new stiffness detection scheme based on explicit runge-kutta methods is proposed. it uses a krylov subspace approximation to estimate the eigenvalues of the jacobian of the differential system. the numerical examples indicate that this technique is a worthwhile alternative to other known stiffness detection schemes, especially when the systems are large and when it is desirable to know more about the spectrum of the jacobian than just the spectral radius. kersti ekeland brynjulf owren eivor Øines stochastic bounds for queueing systems with multiple markov modulated sources zhen liu don towsley a mixed integer mathematical programming model solution using branch and bound techniques (abstract only) an optimal allocation of work release and labor assignments on a paced assembly line using the objective criterion of minimal standard hours has been formulated as a mixed integer mathematical programming model. the results of our research lead to the development of an algorithm which solves the mathematical model formulation and a computer implementation demonstrates its use in an actual operating environment.
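the hull-abrham entry above returns a p-digit exponential for a p-digit argument. python's decimal module supports the same kind of user-selected decimal precision, so the sketch below illustrates the idea (taylor summation carried out with a few guard digits, then rounded back to p digits); it makes no attempt to reproduce the numerical turing program or its error analysis.

from decimal import Decimal, localcontext

def exp_p_digits(x, p):
    """e**x rounded to p significant decimal digits; the summation itself
    is done with a handful of guard digits and then rounded back to p."""
    with localcontext() as ctx:
        ctx.prec = p + 5                       # guard digits for the summation
        xd = Decimal(x)
        term, total, k = Decimal(1), Decimal(1), 0
        while True:
            k += 1
            term = term * xd / k               # x**k / k!
            total += term
            if abs(term) < Decimal(10) ** (-(p + 5)) * abs(total):
                break
    with localcontext() as ctx:
        ctx.prec = p
        return +total                          # unary plus rounds to p digits

if __name__ == "__main__":
    for p in (5, 15, 30):
        print(p, exp_p_digits("1.5", p))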
the algorithm developed for solving the problem uses a branch and bound approach as its basic solution technique. basically, the ingredients of a branch and bound algorithm are a lower bounding strategy, a branching strategy (which partitions the set of feasible solutions into subsets), and a search strategy (which determines the order in which the subsets are investigated). the algorithm presented here uses the method of relaxing difficult constraints as a lower bounding strategy. the relaxed constraints are those which contain integer variables. this approach results in an ordinary linear programming problem model formulation which is used for computing lower bounds. the branching strategy is a binary partitioning in which a particular assembly mode is either in the solution subset, or not. the search strategy used is the backtracking procedure in which the branching variables are heuristically preordered. the mixed integer mathematical model follows. the objective function is v(x) = cx, where c is a (1 x n) vector and x is an (n x 1) vector, subject to: Ax = d, where A is an (m x n) matrix and d is an (m x 1) vector; Hx ≤ h, where H is a (1 x n) vector and h is a (1 x 1) vector; Bx - Ds ≤ 0, where B is an (r x n) matrix, D is an (r x n) matrix, and s is an (r x 1) vector; Rx - Ps ≥ 0, where R is an (s x n) matrix; Os ≤ e, where O is a (t x n) matrix and e is a (t x 1) vector. all variables are non-negative, the elements of the vector x arise from a continuous function, and the elements of the vector s are (0,1) integer values. the computer software used for solving the above model consists of two fortran iv modules under the command of a language processor. both software packages can be operated either interactively or under a batch processing mode. one of the computer software packages is labeled the "initialization program" which is used to generate the coefficient vectors and matrices, i.e., c, A, d, H, h, B, D, R, O, and e presented in the above model. the other is labeled the "action program" which is used to obtain an "optimal" solution for the problem. the computer software packages have been executed on a xerox-560 computer in a limited fashion. in conclusion, there is an important by-product from our research so far: a way to find alternative optimal solutions. the branch and bound algorithm which we utilize is a partial enumeration technique in which subsets of feasible solutions are eliminated from consideration by comparing a lower bound on the objective function value (for a given subset) to the best solution obtained up to that point. theoretically, alternative optimal solutions may be among those subsets of feasible solutions. lung chiang wu harry k. edwards improvements and extensions to simulation interval procedures frank j. matejcik acm algorithms policy fred t. krogh algorithm 738; a software package for unconstrained optimization using tensor methods this paper describes a software package for finding the unconstrained minimizer of a nonlinear function of n variables. the package is intended for problems where n is not too large---say, n ≤ 100---so that the cost of storing one n × n matrix, and factoring it at each iteration, is acceptable. the software allows the user to choose between a recently developed "tensor method" for unconstrained optimization and an analogous standard method based on a quadratic model. the tensor method bases each iteration on a specially constructed fourth-order model of the objective function not significantly more expensive to form, store, or solve than the standard quadratic model.
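the branch-and-bound entry above lists the three standard ingredients: a lower bound from relaxing the integrality constraints, binary branching on a 0-1 variable, and a backtracking search. the python sketch below is a generic toy version of that recipe for a small 0-1 minimization problem, with scipy's linprog as the lp solver; it shows the control structure only and has nothing to do with the assembly-line model or the fortran iv codes of the paper.

import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub):
    """minimize c.x subject to A_ub x <= b_ub with x binary (0/1),
    using lp-relaxation lower bounds and depth-first binary branching."""
    n = len(c)
    best_val, best_x = math.inf, None
    stack = [{}]                                    # each node fixes some variables
    while stack:
        fixed = stack.pop()
        bounds = [((fixed[i], fixed[i]) if i in fixed else (0, 1)) for i in range(n)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        if not res.success or res.fun >= best_val:  # infeasible node or bound pruned
            continue
        frac = [i for i in range(n) if min(res.x[i], 1 - res.x[i]) > 1e-6]
        if not frac:                                # integral relaxation: new incumbent
            best_val, best_x = res.fun, [round(v) for v in res.x]
            continue
        i = frac[0]                                 # branch on the first fractional variable
        stack.append({**fixed, i: 0})
        stack.append({**fixed, i: 1})
    return best_val, best_x

if __name__ == "__main__":
    # toy knapsack: maximize 8x0 + 11x1 + 6x2 + 4x3, i.e. minimize the negative,
    # subject to 5x0 + 7x1 + 4x2 + 3x3 <= 14 and x binary
    print(branch_and_bound([-8, -11, -6, -4], [[5, 7, 4, 3]], [14]))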
in our experience, the tensor method requires significantly fewer iterations and function evaluations to solve most unconstrained optimization problems than standard methods based on quadratic models, and also solves a somewhat wider range of problems. for these reasons, it may be a useful addition to numerical software libraries. t. chow e. eskow r. schnabel algorithm 708; significant digit computation of the incomplete beta function ratios an algorithm is given for evaluating the incomplete beta function ratio ix(a,b) and its complement 1 - ix(a,b). a new continued fraction and a new asymptotic series are used with classical results. a transportable fortran subroutine based on this algorithm is currently in use. it is accurate to 14 significant digits when precision is not restricted by inherent error. armido r. didonato alfred h. morris corrigendum: computing selected eigenvalues of sparse unsymmetric matrices using subspace iteration i. s. duff j. a. scott las vegas algorithms for linear and integer programming when the dimension is small this paper gives an algorithm for solving linear programming problems. for a problem with n constraints and d variables, the algorithm requires an expected o(d^2 n) + (log n)·o(d)^(d/2+o(1)) + o(d^4 √n log n) arithmetic operations, as n -> ∞. the constant factors do not depend on d. also, an algorithm is given for integer linear programming. let φ bound the number of bits required to specify the rational numbers defining an input constraint or the objective function vector. let n and d be as before. then, the algorithm requires expected o(2^d d n + 8^d d √(n ln n) ln n) + d^(o(d)) φ ln n operations on numbers with d^(o(1)) φ bits, as n -> ∞, where the constant factors do not depend on d or φ. the technique can be extended to other convex programming problems. for example, an algorithm for finding the smallest sphere enclosing a set of n points in e^d has the same time bound. kenneth l. clarkson the use of taylor series to test accuracy of function programs this paper discusses the use of local taylor series expansions for determining the accuracy of computer programs for special functions. the main example is testing of programs for exponential integrals. additional applications include testing of programs for certain bessel functions, dawson's integral, and error functions. w. j. cody l. stoltz algorithm 611: subroutines for unconstrained minimization using a model/trust-region approach david m. gay representation issues for design topological optimization by genetic methods p. j. gage i. m. kroo algorithm 571: statistics for von mises' and fisher's distributions of directions: i1(x)/i0(x), i1.5(x)/i0.5(x) and their inverses [s14] geoffrey w. hill acm algorithms policy fred t. krogh on stopping criteria in verified nonlinear systems or optimization algorithms traditionally, iterative methods for nonlinear systems use heuristic domain and range stopping criteria to determine when accuracy tolerances have been met. however, such heuristics can cause stopping at points far from actual solutions, and can be unreliable due to the effects of round-off error or inaccuracies in data. in verified computations, rigorous determination of when a set of bounds has met a tolerance can be done analogously to the traditional approximate setting. nonetheless, the range tolerance possibly cannot be met. if the criteria are used to determine when to stop subdivision of n-dimensional bounds into subregions, then failure of a range tolerance results in excessive, unnecessary subdivision, and could make the algorithm impractical.
on the other hand, interval techniques can detect when inaccuracies or round-off will not permit residual bounds to be narrowed. these techniques can be incorporated into range thickness stopping criteria that complement the range stopping criteria. in this note, the issue is first introduced and illustrated with a simple example. the thickness stopping criterion is then formally introduced and analyzed. third, inclusion of the criterion within a general verified global optimization algorithm is studied. an industrial example is presented. finally, consequences and implications are discussed. r. b. kearfott g. w. walster approximation algorithms for constrained node-weighted steiner tree problems we consider a class of optimization problems, where the input is an undirected graph with two weight functions defined for each node, namely the node's profit and its cost. the goal is to find a connected set of nodes of low cost and high profit. we present approximation algorithms for three natural optimization criteria that arise in this context, all of which are np-hard. the budget problem asks for maximizing the profit of the set subject to a budget constraint on its cost. the quota problem requires minimizing the cost of the set subject to a quota constraint on its profit. finally, the prize collecting problem calls for minimizing the cost of the set plus the profit (here interpreted as a penalty) of the complement set. for all three problems, our algorithms give an approximation guarantee of o(log n), where n is the number of nodes. to the best of our knowledge, these are the first approximation results for the quota problem and for the prize collecting problem, both of which are at least as hard to approximate as set cover. for the budget problem, our results improve on a previous o(log^2 n) result of guha, moss, naor, and schieber. our methods involve new theorems relating tree packings to (node) cut conditions. we also show similar theorems (with better bounds) using edge cut conditions. these imply bounds for the analogous budget and quota problems with edge costs which are comparable to known (constant factor) bounds. anna moss yuval rabani fixed leading coefficient implementation of sd-formulas for stiff odes r. sacks-davis algorithm 660: qshep2d: quadratic shepard method for bivariate interpolation of scattered data robert j. renka admissible term orderings used in computer algebra systems heinz kredel beyond floating point c. w. clenshaw f. w. j. olver undaunted sets (extended abstract) varol akman optimizing locality for ode solvers runge-kutta methods are popular methods for the solution of systems of ordinary differential equations and are provided by many scientific libraries. the performance of runge-kutta methods depends not only on the specific application problem to be solved but also on the characteristics of the target machine. for processors with a memory hierarchy, the locality of the data referencing pattern has a large impact on the efficiency of a program. in this paper, we describe program transformations for runge-kutta methods resulting in programs with improved locality behavior. the transformations are based on properties of the solution method but are independent of the specific application problem or the specific target machine, so that the resulting implementation is suitable as a library function. we show that the locality improvement leads to performance gains on different target machines.
we also demonstrate how the locality of memory references can be further increased by exploiting the dependence structure of the right-hand side function of specific ordinary differential equations. thomas rauber gudula ruger algorithm 672: generation of interpolatory quadrature rules of the highest degree of precision with preassigned nodes for general weight functions t. n. l. patterson solving equations nonlinear in only one of n + 1 variables charles b. dunham c. z. zhu random number generators are chaotic the study of highly unstable nonlinear dynamical systems---chaotic systems---has emerged recently as an area of major interest and applicability across the mathematical, physical and social sciences. this interest has been triggered by advances in the past decade, particularly in the mathematical understanding of complex systems. an important insight that has become widely recognized in recent years is that deterministic systems can give rise to chaotic behavior. surprisingly, many of these systems are extremely simple, yet they exhibit complex chaotic behavior. charles herring julian i. palmore solving combinatorial optimization problems using parallel simulated annealing and parallel genetic algorithms pooja p. mutalik leslie r. knight joe l. blanton roger l. wainwright online algorithms for selective multicast and maximal dense trees baruch awerbuch tripurari singh solving systems of linear one-sided equations in integer monoid and group rings one of the applications of grobner bases in commutative polynomial rings is to solve linear equations. here we show how similar results can be obtained for systems of one-sided linear equations in the more general setting of monoid and group rings. birgit reinert on the calculation of the effects of roundoff errors esko ukkonen area requirements (abstract) giuseppe di battista algorithm 627: a fortran subroutine for solving volterra integral equations john m. bownds lee appelbaum generating random spanning trees more quickly than the cover time david bruce wilson using mathematica to aid simulation analysis paul a. savory algorithm 622: a simple macroprocessor john r. rice calvin and william a. ward interpolants for runge-kutta formulas a general procedure for the construction of interpolants for runge-kutta (rk) formulas is presented. as illustrations, this approach is used to develop interpolants for three explicit rk formulas, including those employed in the well-known subroutines rkf45 and dverk. a typical result is that no extra function evaluations are required to obtain an interpolant with o(h^5) local truncation error for the fifth-order rk formula used in rkf45; two extra function evaluations per step are required to obtain an interpolant with o(h^6) local truncation error for the sixth-order formula used in dverk. w. h. enright k. r. jackson s. p. nørsett p. g. thomsen algorithm 623: interpolation on the surface of a sphere robert j. renka corrigenda: "an efficient derivative-free method for solving nonlinear equations" d. le coordinate representation of order types requires exponential storage we give doubly exponential upper and lower bounds on the size of the smallest grid on which we can embed every planar configuration of n points in general position up to order type. the lower bound is achieved by the construction of a widely dispersed "rigid" configuration which is then modified to one in general position by recent techniques of sturmfels and white, while the upper bound uses recent results of grigor'ev and vorobjou on the solution of simultaneous inequalities.
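the "interpolants for runge-kutta formulas" entry above is about dense output between accepted rk steps. the simplest member of that family is the cubic hermite interpolant built from the step's endpoint values and slopes, sketched below in python; it is only third-order accurate, so it illustrates the idea rather than the o(h^5) and o(h^6) interpolants constructed in the paper.

def hermite_interpolant(t0, y0, f0, t1, y1, f1):
    """cubic hermite dense output for one rk step [t0, t1]: it matches y and y'
    at both ends, so no extra function evaluations are needed."""
    h = t1 - t0
    def u(t):
        s = (t - t0) / h                       # normalized position in the step
        h00 = (1 + 2 * s) * (1 - s) ** 2       # standard hermite basis functions
        h10 = s * (1 - s) ** 2
        h01 = s * s * (3 - 2 * s)
        h11 = s * s * (s - 1)
        return h00 * y0 + h10 * h * f0 + h01 * y1 + h11 * h * f1
    return u

if __name__ == "__main__":
    import math
    # one "step" of y' = y, y(0) = 1, pretending the endpoint values are exact
    t0, t1 = 0.0, 0.5
    y0, y1 = 1.0, math.exp(0.5)
    dense = hermite_interpolant(t0, y0, y0, t1, y1, y1)    # here f(t, y) = y
    for t in (0.1, 0.25, 0.4):
        print(t, dense(t), math.exp(t))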
this provides a sharp answer to a question first posed by chazelle. j. e. goodman r. pollack b. sturmfels open problems samir khuller minimum partitioning simple rectilinear polygons in o(n log log n) time w. t. liou j. j. tan r. c. lee algorithm 593: a package for the helmholtz equation in nonrectangular planar regions wlodzimierz proskurowski a differential-equations approach to functional equivalence j. shackle a modified schur-complement method for handling dense columns in interior- point methods for linear programming the main computational work in interior-point methods for linear programming (lp) is to solve a least-squares problem. the normal equations are often used, but if the lp constraint matrix contains a nearly dense column the normal- equations matrix will be nearly dense. assuming that the nondense part of the constraint matrix is of full rank, the schur complement can be used to handle dense columns. in this article we propose a modified schur-complement method that relaxes this assumption. encouraging numerical results are presented. knud d. andersen acm algorithms policy fred t. krogh a hybrid hypercube algorithm for the symmetric tridiagonal eigenvalue problem two versions of an algorithm for finding the eigenvalues of symmetric, tridiagonal matrices are described. they are based on the use of the sturm sequences and the bisection algorithm. the algorithms were implemented on the fps t-series. some speedup factor results are presented. j. a. jackson l. m. liebrock l. r. ziegler some hamilton paths and a minimal change algorithm peter eades michael hickey ronald c. read the convex hull of ellipsoids the treatment of curved algebraic surfaces becomes more and more the f ocus of attention in computational geometry. we present a video that illustrates the computation of the convex hull of a set of ellipsoids. the underlying algorithm is an application of our work on determining a cell in a 3-dimensional arrangement of quadrics, see \cite{ghs-ccaq-01}. in the video, the main emphasis is on a simple and comprehensible visualization of the geometric aspects of the algorithm. in addition, we give some insights into the underlying mathematical problems. nicola geismann michael hemmer elmar schömer a quantitative evaluation of the feasibility of, and suitable hardware architectures for, an adaptive, parallel finite-element system pamela zave george e. cole epsilon geometry: building robust algorithms from imprecise computations d. salesin j stolfi l. guibas the computation of spectral density functions for singular sturm-liouville problems involving simple continuous spectra the software package sledge has as one of its options the estimation of spectral density functions p(t) for a wide class of singular strurm- liouville problems. in this article the underlaying theory and implementation issues are discussed. several examples exhibiting quite varied asymptotic behavior in p are presented. c. t. fulton s. pruess polynomial-time aggregation of integer programming problems ravindran kannan supereffective slow-down of parallel computations victor y. pan franco p. preparata simple multivariate time series for simulations of complex systems recent work has made the generation of univariate time series for inputs to stochastic systems quite simple. the time series are all random linear combinations of i.i.d. random variables with exponential, gamma and hyperexponential marginal distributions. 
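editorial note on the hybrid hypercube abstract above (jackson, liebrock, ziegler): the serial kernel they parallelize is sturm-sequence counting combined with bisection. a minimal python sketch of that kernel follows; names and tolerances are illustrative assumptions, not their fps t-series implementation.

# sturm-sequence bisection for eigenvalues of a symmetric tridiagonal matrix
# (diagonal d, off-diagonal e); a serial sketch of the kernel the abstract
# parallelizes, not the authors' code.
def count_smaller(d, e, x):
    """number of eigenvalues strictly less than x (sturm count)."""
    count, q = 0, 1.0
    for i in range(len(d)):
        off = e[i - 1]**2 if i > 0 else 0.0
        q = d[i] - x - (off / q if q != 0.0 else off / 1e-300)
        if q < 0.0:
            count += 1
    return count

def kth_eigenvalue(d, e, k, lo, hi, tol=1e-12):
    """k-th smallest eigenvalue (k = 1, 2, ...) by bisection on the sturm count."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_smaller(d, e, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# usage: d = [2,2,2], e = [1,1] has eigenvalues 2 - sqrt(2), 2, 2 + sqrt(2)
print(kth_eigenvalue([2.0, 2.0, 2.0], [1.0, 1.0], 1, -10.0, 10.0))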
the simplicity of structure of these time series models makes it practical to combine them to model multivariate situations. thus one can model, for example, alternating sequences of response and think times at a terminal in which response and think times are not only autocorrelated, but also crosscorrelated. peter a. w. lewis edge concentration: a method for clustering directed graphs the display of a directed graph is a commonly used visual aid for representing relationships. however, some graphs contain so many edges that their display by traditional graph layout algorithms is virtually impossible because of the overwhelming number of crossings. graphs representing large software systems and their configurations are particularly prone to this problem. examples of such graphs include: graphs depicting a system's configuration, call graphs, graphs depicting import and export relationships between modules, and graphs depicting the "includes" relation among a system's source files. this paper proposes the elimination of some edges by replacing sets of edges that have the same set of source and target nodes by a special node called an edge concentration node. reducing the number of edges often has the desirable side effect of reducing the number of crossings. an algorithm that determines a reasonable set of edge concentrations of a graph in (n4) operations for each level in the graph is presented where n is the number of nodes in that level. several examples from the area of software configuration management are shown to demonstrate the effectiveness of using edge concentrations. f. j. newbery generating box-constrained optimization problems we present a method for generating box-constrained nonlinear programming test problems. the technique allows the user to control some properties of the generated test problems that are know to influence the behavior of algorithms for their solution. a corresponding set of fortran 77 routines is described in a companion algorithm (774). francisco facchinei joaquim júdice joão soares the software engineering learning facility the software engineering learning facility (self) is a web-based environment designed to enhance learning the art of software development. the system consists of three components. the _practice component_ enables students to solve problems related to language constructs and algorithms. the _process component_ guides students through a waterfall model of software development, emphasizing product development and verification. the _performance component_ monitors progress and provides feedback for improving each student's personal software development process. this paper reports on the motivation and design of the system, the state of its implementation, and lessons learned. larry morell david middleton finding all isolated solutions to polynomial systems using hompack although the theory of polynomial continuation has been established for over a decade (following the work of garcia, zangwill, and drexler), it is difficult to solve polynomial systems using continuation in practice. divergent paths (solutions at infinity), singular solutions, and extreme scaling of coefficients can create catastrophic numerical problems. further, the large number of paths that typically arise can be discouraging. in this paper we summarize polynomial-solving homotopy continuation and report on the performance of three standard path-tracking algorithms (as implemented in hompack) in solving three physical problems of varying degrees of difficulty. 
our purpose is to provide useful information on solving polynomial systems, including specific guidelines for homotopy construction and parameter settings. the m-homogeneous strategy for constructing polynomial homotopies is outlined, along with more traditional approaches. computational comparisons are included to illustrate and contrast the major hompack options. the conclusions summarize our numerical experience and discuss areas for future research. alexander p. morgan andrew j. sommese layne t. watson acm algorithms policy fred t. krogh lapack is now available jack dongarra calculating approximate curve arrangements using rounded arithmetic we present here an algorithm for the curve arrangement problem: determine how a set of planar curves subdivides the plane. this algorithm uses rounded arithmetic and generates an approximate result. it can be applied to a broad class of planar curves, and it is based on a new definition of approximate curve arrangements. this result is an important step towards the creation of practical computer programs for reasoning about algebraic curves of high degree. v. milenkovic optimizing systems for effective block-processing kumar n. lalgudi marios c. papaefthymiou miodrag potkonjak uniform asymptotic expansions for exponential integrals e_n(x) and bickley functions kin (x) d. e. amos an empirical study of dynamic graph algorithms the contributions of this paper are both of theoretical and of experimental nature. from the experimental point of view, we conduct an empirical study on some dynamic connectivity algorithms which where developed recently. in particular, the following implementations were tested and compared with simple algorithms: simple sparsification by eppstein et al. and the recent randomized algorithm by henzinger and king. in our experiments, we considered both random and non- random inputs. moreover, we present a simplified variant of the algorithm by henzinger and king, which for random inputs was always faster than the original implementation. for non-random inputs, simple sparsification was the fastest algorithm for small sequences of updates; for medium and large sequences of updates, the original algorithm by henzinger and king was faster. from the theoretical point of view, we analyze the average case running time of simple sparsification and prove that for dynamic random graphs its logarithmic overhead vanishes. david alberts giuseppe cattaneo giuseppe f. italiano analysis of an intrinsic overload control for a class of queueing systems we consider a priority queueing system which consists of two queues sharing a processor and in which there is delayed feedback. such a model arises from systems which employ a priority assignment scheme to achieve overload control. an analytic expression for the stationary probability of the queue lengths is derived. an algorithm is proposed to compute the queue lengths distribution. some numerical results are illustrated. y. t. wang n-evaluation conjecture for multipoint iterations for the solution of scalar nonlinear equations g. w. wasilkowski performance results of the simplex algorithm for a set of real-world linear programming models this paper provides performance results using the sperry univac 1100 series linear programming product fmps to solve a set of 16 real-world linear programming problems. 
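editorial note on the hompack abstract above (morgan, sommese, watson): the basic shape of polynomial-solving continuation can be conveyed by a toy univariate tracker. the sketch below uses the common start system γ(x^d - 1) and a plain newton corrector at each step in t; it omits the predictor, adaptive step control, m-homogenization, and end-game handling that the abstract discusses, and all names are illustrative.

# a toy dense-homotopy path tracker for all roots of one univariate polynomial;
# an illustrative sketch in the spirit of the hompack discussion above, not the
# hompack algorithms themselves (no m-homogenization, no adaptive step control).
import cmath, random

def poly(coeffs, x):                  # coeffs[k] multiplies x**k
    return sum(c * x**k for k, c in enumerate(coeffs))

def dpoly(coeffs, x):
    return sum(k * c * x**(k - 1) for k, c in enumerate(coeffs) if k > 0)

def track_roots(coeffs, steps=200, newton_iters=3):
    d = len(coeffs) - 1
    gamma = cmath.exp(2j * cmath.pi * random.random())   # random constant to avoid singular paths
    start = [cmath.exp(2j * cmath.pi * k / d) for k in range(d)]   # roots of x**d - 1
    roots = []
    for x in start:
        for i in range(1, steps + 1):
            t = i / steps
            # newton correction on h(x, t) = (1 - t) * gamma * (x**d - 1) + t * p(x)
            for _ in range(newton_iters):
                h = (1 - t) * gamma * (x**d - 1) + t * poly(coeffs, x)
                dh = (1 - t) * gamma * d * x**(d - 1) + t * dpoly(coeffs, x)
                x = x - h / dh
            # (a real tracker would also use a predictor and adapt the step in t)
        roots.append(x)
    return roots

# usage: roots of p(x) = x**3 - 1 should be the three cube roots of unity
print(track_roots([-1, 0, 0, 1]))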
as such, this paper provides a data point for the actual performance of a commercial simplex algorithm on real-world linear programming problems and shows that the simplex algorithm is a linear time algorithm in actual performance. correlations and performance relationships not previously available are also provided. edward h. mccall remark on "algorithm 506: hqr3 and exchng: fortran subroutines for calculating and ordering the eigenvalues of a real upper hessenberg matrix" david s. flamm robert a. walker faster mixing via average conductance laszlo lovasz ravi kannan nonlinear programming on generalized networks we describe a specialization of the primal truncated newton algorithm for solving nonlinear optimization problems on networks with gains. the algorithm and its implementation are able to capitalize on the special structure of the constraints. extensive computational tests show that the algorithm is capable of solving very large problems. the testing of numerous tactical issues is described, including maximal basis, projected line search, and pivot strategies. comparisons with nlpnet, a nonlinear network code, and minos, a general-purpose nonlinear programming code, are also included. david p. ahlfeld john m. mulvey ron s. dembo stavros a. zenios remark on algorithm 706: dcutri - an algorithm for adaptive cubature over a collection of triangles we present corrections to algorithm 706 (acm trans. math. softw. 18, 3, sept. 1992, pages 329--342; calgo supplement 123). terje o. espelid computation of the multivariate normal integral this paper presents a direct computation of the multivariate normal integral by the gauss quadrature method. an error control method is given. results are presented for multivariate integrals consisting of up to twelve normal distributions. a computer program in fortran is given. zvi drezner how to compute multivariate pade approximants we present here various ways of generalizing the pade approximation to multivariate functions. to compute them, we use a computer algebra system: reduce. c. chaffy directed s-t numberings, rubber bands, and testing digraph k-vertex connectivity let g = (v, e) be a directed graph and n denote |v|. we show that g is k-vertex connected iff for every subset x of v with |x| = k, there is an embedding of g in the (k-1)-dimensional space r^{k-1}, f : v -> r^{k-1}, such that no hyperplane contains k points of {f(v) | v ∈ v}, and for each v ∈ v - x, f(v) is in the convex hull of {f(w) | (v, w) ∈ e}. this result generalizes to directed graphs the notion of convex embeddings of undirected graphs introduced by linial, lovász and wigderson in "rubber bands, convex embeddings and graph connectivity," combinatorica 8 (1988), 91-102. using this characterization, a directed graph can be tested for k-vertex connectivity by a monte carlo algorithm in time o((m(n) + n m(k)) log n) with error probability < 1/n, and by a las vegas algorithm in expected time o((m(n) + n m(k)) k), where m(n) denotes the number of arithmetic steps for multiplying two n x n matrices (m(n) = o(n^{2.3755})). our monte carlo algorithm improves on the best previous deterministic and randomized time complexities for k > n^{0.19}; e.g., for k = n^{0.5}, the factor of improvement is > n^{0.62}. both algorithms have processor-efficient parallel versions that run in o((log n)^2) time on the erew pram model of computation, using a number of processors equal to (log n) times the respective sequential time complexities. our monte carlo parallel algorithm improves on the number of processors used by the best previous (monte carlo) parallel algorithm by a factor of at least n^2/(log n)^3 while having the same running time. generalizing the notion of s-t numberings, we give a combinatorial construction of a directed s-t numbering for any 2-vertex connected directed graph. joseph cheriyan john h. reif automatic differentiation in prose f. w. pfeiffer an accurate elementary mathematical library for the ieee floating point standard the algorithms used by the ibm israel scientific center for the elementary mathematical library using the ieee standard for binary floating point arithmetic are described. the algorithms are based on the "accurate tables method." this methodology achieves high performance and produces very accurate results. it overcomes one of the main problems encountered in elementary mathematical functions computations: achieving last bit accuracy. the results obtained are correctly rounded for almost all argument values. our main idea in the accurate tables method is to use "nonstandard tables," which are different from the natural tables of equally spaced points in which the rounding error prevents obtaining last bit accuracy. in order to achieve a small error we use the following idea: perturb the original, equally spaced, points in such a way that the table value (or table values in case we need several tables) will be very close to numbers which can be exactly represented by the computer (much closer than the usual double precision representation). thus we were able to control the error introduced by the computer representation of real numbers and extended the accuracy without actually using extended precision arithmetic. shmuel gal a computational study of a multiple-choice knapsack algorithm r. d. armstrong d. s. kung p. sinha a. a. zoltners solving linear algebraic equations on an mimd computer r. e. lord j. s. kowalik s. p. kumar general software for two-dimensional nonlinear partial differential equations david k. melgaard richard f. sincovec the design of a new frontal code for solving sparse, unsymmetric systems we describe the design, implementation, and performance of a frontal code for the solution of large, sparse, unsymmetric systems of linear equations. the resulting software package, ma42, is included in release 11 of the harwell subroutine library and is intended to supersede the earlier ma32 package. we discuss in detail the extensive use of higher-level blas kernels within ma42 and illustrate the performance on a range of practical problems on a cray y-mp, an ibm 3090, and an ibm risc system/6000. we examine extending the frontal solution scheme to use multiple fronts to allow ma42 to be run in parallel. we indicate some directions for future development. i. s. duff j. a. scott the full degree spanning tree problem randeep bhatia samir khuller robert pless yoram j. sussmann superconvergent interpolants for collocation methods applied to mixed-order bvodes continuous approximations to boundary value problems in ordinary differential equations (bvodes), constructed using collocation at gauss points, are more accurate at the mesh points than at off-mesh points. from these approximations, it is possible to construct improved continuous approximations by extending the high accuracy that is available at the mesh points to off-mesh points.
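editorial note on the drezner abstract above (computation of the multivariate normal integral): in the bivariate case the probability reduces to a one-dimensional integral that gauss quadrature handles directly. the sketch below shows only that reduction; it is not drezner's method, which treats up to twelve dimensions, and the truncation point and node count are arbitrary assumptions.

# bivariate normal probability p(x <= a, y <= b) for correlation rho, reduced to a
# one-dimensional integral and evaluated with gauss-legendre quadrature; a small
# sketch in the spirit of the drezner abstract above, not his algorithm.
import numpy as np
from scipy.stats import norm

def bvn_cdf(a, b, rho, npts=64, lower=-8.0):
    # p = integral_{-inf}^{a} phi(x) * Phi((b - rho*x) / sqrt(1 - rho^2)) dx
    nodes, weights = np.polynomial.legendre.leggauss(npts)
    # map nodes from [-1, 1] onto the (truncated) interval [lower, a]
    x = 0.5 * (a - lower) * nodes + 0.5 * (a + lower)
    w = 0.5 * (a - lower) * weights
    inner = norm.cdf((b - rho * x) / np.sqrt(1.0 - rho**2))
    return float(np.sum(w * norm.pdf(x) * inner))

# usage: for rho = 0 the answer factorizes into Phi(a) * Phi(b)
print(bvn_cdf(0.5, -0.2, 0.0), norm.cdf(0.5) * norm.cdf(-0.2))
print(bvn_cdf(0.5, -0.2, 0.6))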
one possibility is the bootstrap approach, which improves the accuracy of the approximate solution at the off-mesh points in a sequence of steps until the accuracy at the mesh points and off-mesh points is consistent. a bootstrap approach for systems of mixed-order bvodes is developed to improve approximate solutions produced by colnew, a gauss-collocation-based software package. an implementation of this approach is discussed and numerical results presented which confirm that the improved approximations satisfy the predicted error bounds and are relatively inexpensive to construct. wayne h. enright ramanan sivasothinathan algorithm 581: an improved algorithm for computing the singular value decomposition [f1] tony f. chan random samples with known sample statistics: with application to variance reduction a number of sampling schemes are described which aim to improve the accuracy of estimators in simulation experiments. the schemes negatively correlate key sample statistics in pairs of runs or blocks of runs. use is made of generators which produce random samples from certain distributions when sample statistics like the mean and dispersion are prescribed. special cases considered include normal, inverse gaussian and gamma generators. these can be used in their own right or as the basis of generators of other distributions. guidelines are given which indicate the conditions under which the schemes might be effective. russell c.h. cheng algorithm 654: fortran subroutines for computing the incomplete gamma function ratios and their inverse armido r. didonato alfred h. morris parallel additive lagged fibonacci random number generators srinivas aluru area requirement and symmetry display in drawing graphs g. di battista r. tamassia i. g. tollis integration of a primal simplex network algorithm with a large-scale mathematical programming system in this paper we discuss the implementation of a primal simplex algorithm for network problems in the mpsiii mathematical programming system. because of the need to interface with the rest of the mps this implementation is unorthodox, but computationally effective, and has a number of advantages over "stand alone" network optimizers. it is argued that a similar approach is appropriate for other general-purpose mathematical programming systems, and has applications beyond pure network optimization. j. a. tomlin j. s. welch algorithm 815: fortran subroutines for computing approximate solutions of feedback set problems using grasp we propose fortran subroutines for approximately solving the feedback vertex and arc set problems on directed graphs using a greedy randomized adaptive search procedure (grasp). implementation and usage of the package is outlined and computational experiments are reported illustrating solution quality as a function of running time. paola festa panos m. pardalos mauricio g. c. resende approximation algorithms for max-3-cut and other problems via complex semidefinite programming a number of recent papers on approximation algorithms have used the square roots of unity, -1 and 1, to represent binary decision variables for problems in combinatorial optimization, and have relaxed these to unit vectors in real space using semidefinite programming in order to obtain near optimal solutions to these problems. in this paper, we consider using the cube roots of unity, 1, e^{i2π/3}, and e^{i4π/3}, to represent ternary decision variables for problems in combinatorial optimization. here the natural relaxation is that of unit vectors in complex space. we use an extension of semidefinite programming to complex space to solve the natural relaxation, and use a natural extension of the random hyperplane technique introduced by the authors in [8] to obtain near-optimal solutions to the problems. in particular, we consider the problem of maximizing the total weight of satisfied equations x_u - x_v ≡ c (mod 3) and inequations x_u - x_v ≢ c (mod 3), where x_u ∈ {0, 1, 2} for all u. this problem can be used to model the max-3-cut problem and a directed variant we call max-3-dicut. for the general problem, we obtain a .79373-approximation algorithm. if the instance contains only inequations (as it does for max-3-cut), we obtain a performance guarantee of 7/12 + (3/(4π^2)) arccos^2(-1/4) ≈ .83601. this compares with proven performance guarantees of .800217 for max-3-cut (by frieze and jerrum [7]) and 1/3 + 10^-8 for the general problem (by andersson, engebretsen, and hastad [2]). it matches the guarantee of .836008 for max-3-cut found independently by de klerk, pasechnik, and warners [4]. we show that all these algorithms are in fact identical in the case of max-3-cut. michel x. goemans david williamson the computation of eigenvalues and solutions of mathieu's differential equation for noninteger order two algorithms for calculating the eigenvalues and solutions of mathieu's differential equation for noninteger order are described. in the first algorithm, leeb's method is generalized, expanding the mathieu equation in fourier series and diagonalizing the symmetric tridiagonal matrix that results. numerical testing was used to parameterize the minimum matrix dimension that must be used to achieve accuracy in the eigenvalue of one part in 10^12. this method returns a set of eigenvalues below a given order and their associated solutions simultaneously. a second algorithm is presented which uses approximations to the eigenvalues (taylor series and asymptotic expansions) and then iteratively corrects the approximations using newton's method until the corrections are less than a given tolerance. a backward recursion of the continued fraction expansion is used. the second algorithm is faster and is optimized to obtain accuracy of one part in 10^14, but has only been implemented for orders less than 10.5. randall b. shirts iterative blow-up time approximation for initial value problems michio sakakihara response time distributions for a multi-class queue with feedback a single server queue with feedback and multiple customer classes is analyzed. arrival processes are independent poisson processes. each round of service is exponentially distributed. after receiving a round of service, a customer may depart or rejoin the end of the queue for more service. the number of rounds of service required by a customer is a random variable with a general distribution. our main contribution is characterization of response time distributions for the customer classes. our results generalize in some respects previous analyses of processor-sharing models. they also represent initial efforts to understand response time behavior along paths with loops in locally balanced queueing networks. simon s. lam a. udaya shankar an adaptive algorithm for the approximate calculation of multiple integrals jarle berntsen terje o. espelid alan genz digital inversive pseudorandom numbers a new algorithm, the digital inversive method, for generating uniform pseudorandom numbers is introduced.
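editorial note on the goemans-williamson abstract above: once the complex semidefinite relaxation has been solved, the rounding step maps each unit vector to one of three labels by cutting the circle into equal sectors around a random complex gaussian direction. the sketch below illustrates only that rounding step and assumes the unit vectors are already given (for example, from an sdp solver); it is not their full algorithm and carries none of its guarantees.

# rounding complex unit vectors to ternary labels, in the spirit of the complex
# random hyperplane technique in the abstract above.  the sdp itself is not
# solved here; v is assumed given (rows = unit vectors in complex n-space).
import numpy as np

def round_to_ternary(v, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n = v.shape[1]
    # standard complex gaussian direction
    g = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    # inner products <v_u, g>; their arguments are uniform on [0, 2*pi) marginally
    angles = np.angle(v @ np.conj(g)) % (2 * np.pi)
    # cut the circle into three equal sectors, giving labels in {0, 1, 2}
    return (np.floor(angles / (2 * np.pi / 3)).astype(int)) % 3

# usage: three vertices already placed at the cube roots of unity (a 3-cycle's
# ideal max-3-cut embedding) always receive three distinct labels
v = np.array([[1.0 + 0j], [np.exp(2j * np.pi / 3)], [np.exp(4j * np.pi / 3)]])
print(round_to_ternary(v))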
this algorithm starts from an inversive recursion in a large finite field and derives pseudorandom numbers from it by the digital method. if the underlying finite field has q elements, then the sequences of digital inversive pseudorandom numbers with maximum possible period length q can be characterized. sequences of multiprecision pseudorandom numbers with very large period lengths are easily obtained by this new method. digital inversive pseudorandom numbers satisfy statistical independence properties that are close to those of truly random numbers in the sense of asymptotic discrepancy. if q is a power of 2, then the digital inversive method can be implemented in a very fast manner. jurgen eichenauer-herrmann harald niederreiter applying series expansion to the inverse beta distribution to find percentiles of the f-distribution let 0 < p < 1 and f be the cumulative distribution function (cdf) of the f-distribution. we wish to find x_p such that f(x_p | n_1, n_2) = p, where n_1 and n_2 are the degrees of freedom. traditionally, x_p is found using a numerical root-finding method, such as newton's method. in this paper, a procedure based on a series expansion for finding x_p is given. the series expansion method has been applied to the normal, chi-square, and t distributions, but because of computational difficulties, it has not been applied to the f-distribution. these problems have been overcome by making the standard transformation to the beta distribution. the procedure is explained in sections 3 and 4. empirical results of a comparison of cpu times are given in section 5. the series expansion is compared to some of the standard root-finding methods. a table is given for p = .90. roger w. abernathy robert p. smith numerical computation with general two-dimensional domains john r. rice certification of algorithm 708: significant-digit computation of the incomplete beta algorithm 708 (bratio) was run on 2730 test cases. comparison of these results with the results from an algorithm using a continued fraction of tretter and walster was performed using a high-precision version of the latter algorithm implemented in maple. accuracy of bratio ranged from 9.64 significant digits to full machine double precision, 15.65 significant digits, with the lower value occurring when a was nearly equal to b, and a was large. barry w. brown lawrence b. levy remark on algorithm 587 the subroutine wnnls of algorithm 587 has exposed some shortcomings, especially in solving rank-deficient problems. they may lead to fatal errors and (or) false results. five improvements are proposed. the effect of the changes is tested on four rank-deficient test problems. it is emphasized that wnnls still remains vulnerable to bad scaling of the problem. v. dadurkevicius parallel implementation of the lanczos method for sparse matrices: analysis of data distributions e. m. garzon i. garcía local expansion of vertex-transitive graphs and random generation in finite groups laszlo babai saving time and space in spline calculations george c. bush the stiff integrators in the nag library m. berzins r. w. brankin i. gladwell a polynomial solution to the undirected two paths problem yossi shiloach the k-distribution of generalized feedback shift register pseudorandom numbers a necessary and sufficient condition is established for the generalized feedback shift register (gfsr) sequence introduced by lewis and payne to be k-distributed.
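editorial note on the abernathy-smith abstract above: the "standard transformation to the beta distribution" they mention is y = n_1 x / (n_1 x + n_2), which is beta(n_1/2, n_2/2) distributed when x is f(n_1, n_2). the sketch below simply inverts the beta cdf with a library routine to recover the f percentile; it is not their series-expansion procedure.

# percentile of the f-distribution via the standard beta transformation mentioned
# in the abstract above: if x ~ f(n1, n2) then y = n1*x / (n1*x + n2) ~ beta(n1/2, n2/2).
# this sketch just inverts the beta cdf with a library routine.
from scipy.special import betaincinv
from scipy.stats import f as fdist

def f_percentile(p, n1, n2):
    y = betaincinv(n1 / 2.0, n2 / 2.0, p)    # y such that the regularized incomplete beta equals p
    return (n2 * y) / (n1 * (1.0 - y))       # undo the transformation

# usage: compare with scipy's own inverse cdf for p = .90
print(f_percentile(0.90, 5, 10), fdist.ppf(0.90, 5, 10))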
based upon the theorem, a theoretical test for k-distributivity is proposed and performed in a reasonable amount of computer time, even for k = 16 and a high degree of resolution (for which statistical tests are impossible because of the astronomical amount of computer time required). for the special class of gfsr generators considered by arvillias and maritsas based on the primitive trinomial dp \\+ dq \\+ 1 with q = an integral power of 2, it is shown that the sequence is k-distributed if and only if the lengths of all subregisters are at least k. the theorem also leads to a simple and efficient method of initializing the gfsr generator so that the sequence to be generated is k-distributed. m. fushimi s. tezuka a dynamic self-stabilizing algorithm for finding strongly connected components mehmet hakan karaata fawaz al-anzi a dyadic determinant function this paper proposes to define a dyadic determinant function as the n-th exterior power of an array. when the array is a square matrix and n is its row (or column) number, we obtain as a limit case the usual monadic determinant function. this approach provides a natural definition of the determinant of a non-square array as used in lagrange's identity or laplace's rule of expanding determinant. on the other hand, the successive exterior powers of a matrix provide a direct means of computing the coefficients of its characteristic polynomial. sylvain baron a modular system of algorithms for unconstrained minimization we describe a new package, uncmin, for finding a local minimizer of a real valued function of more than one variable. the novel feature of uncmin is that it is a modular system of algorithms, containing three different step selection strategies (line search, dogleg, and optimal step) that may be combined with either analytic or finite difference gradient evaluation and with either analytic, finite difference, or bfgs hessian approximation. we present the results of a comparison of the three step selection strategies on the problems in more, garbow, and hillstrom in two separate cases: using finite difference gradients and hessians, and using finite difference gradients with bfgs hessian approximations. we also describe a second package, revmin, that uses optimization algorithms identical to uncmin but obtains values of user- supplied functions by reverse communication. robert b. schnabel john e. koonatz barry e. weiss what is a solution of an ode? robert m. corless a new look at fault tolerant network routing consider a communication network g in which a limited number of link and/or node faults f might occur. a routing for the network (a fixed path between each pair of nodes) must be chosen without any knowledge of which components might become faulty. choosing a good routing corresponds to bounding the diameter of the surviving route graph r(g, )/f, where two nonfaulty nodes are joined by an edge if there are no faults on the route between them. we prove a number of results concerning the diameter of surviving route graphs. we show that if is a minimal length routing, then the diameter of r(g, )/f can be on the order of the number of nodes of g, even if f consists of only a single node. however, if g is the n-dimensional cube, the diameter of r(g, )/f≤3 for any minimal length routing and any set of faults f with |f| danny dolev joe halpern barbara simons ray strong algorithm 607: text exchange system: a transportable system for management and exchange of programs and other text w. v. snyder r. j. 
hanson remark on "algorithm 346: f-test probabilities" r. s. cormack i. d. hill solveblok: a package for solving almost block diagonal linear systems carl de boor richard weiss a graph labelling proof of the backpropagation algorithm r. rojas the mathematical basis and a prototype implementation of a new polynomial rootfinder with quadratic convergence formulas developed originally by weierstrass have been used since the 1960s by many others for the simultaneous determination of all the roots of a polynomial. convergence to simple roots is quadratic, but individual approximations to a multiple root converge only linearly. however, it is shown here that the mean of such individual approximations converges quadratically to that root. this result, along with some detail about the behavior of such approximations in the neighborhood of the multiple root, suggests a new approach to the design of polynomial rootfinders. it should also be noted that the technique is well suited to take advantage of a parallel environment. this article first provides the relevant mathematical results: a short derivation of the formulas, convergence proofs, an indication of the behavior near a multiple root, and some error bounds. it then provides the outline of an algorithm based on these results, along with some graphical and numerical results to illustrate the major theoretical points. finally, a new program based on this algorithm, but with a more efficient way of choosing starting values, is described and then compared with corresponding programs from imsl and nag with good results. this program is available from mathon (combin@cs.utoronto.ca). . t. e. hull r. mathon corrigenda: "minimizing multimodal functions of continuous variables with the simulated annealing algorithm" a. corana m. and c. martini and s. ridella overlapping batch means: something for nothing? nonoverlapping batch means (nolbm) is a we11-known approach for estimating the variance of the sample mean. in this paper we consider an overlapping batch means (olbm) estimator that, based on the same assumptions and batch size as nolbm, has essentially the same mean and only 2/3 the asymptotic variance of nolbm. confidence interval procedures for the mean based on nolbm and olbm are discussed. both estimators are compared to the classical estimator of the variance of the mean based on sums of covariances. marc s. meketon bruce schmeiser adversarial queueing theory allan borodin jon kleinberg prabhakar raghavan madhu sudan david p. williamson expokit: a software package for computing matrix exponentials expokit provides a set of routines aimed at computing matrix exponentials. more precisely, it computes either a small matrix exponential in full, the action of a large sparse matrix exponential on an operand vector, or the solution of a system of linear obes with constant inhomogeneity. the backbone of the sparse routines consists of matrix-free krylov subspace projection methods (arnoldi and lanczos processes), and that is why the toolkit is capable of coping with sparse matrices of large dimension. the software handles real and complex matrices and provides specific routines for symmetric and hermitian matrices. the computation of matrix exponentials is a numerical issue of critical importance in the area of markov chains and furthermore, the computed solution is subject to probabilistic constraints. in addition to addressing general matrix exponentials, a distinct attention is assigned to the computation of transient states of markov chains. 
roger b. sidje hybrid computation of bivariate rational interpolation hiroshi kai matu-tarow noda on solving almost block diagonal (staircase) linear systems j. k. reid a. jennings analysis and comparison of two general sparse solvers for distributed memory computers this paper provides a comprehensive study and comparison of two state-of-the-art direct solvers for large sparse sets of linear equations on large-scale distributed-memory computers. one is a multifrontal solver called mumps, the other is a supernodal solver called superlu. we describe the main algorithmic features of the two solvers and compare their performance characteristics with respect to uniprocessor speed, interprocessor communication, and memory requirements. for both solvers, preorderings for numerical stability and sparsity play an important role in achieving high parallel efficiency. we analyse the results with various ordering algorithms. our performance analysis is based on data obtained from runs on a 512-processor cray t3e using a set of matrices from real applications. we also use regular 3d grid problems to study the scalability of the two solvers. theory and application of marsaglia's monkey test for pseudorandom number generators a theoretical analysis is given for a new test, the "monkey" test, for pseudorandom number sequences, which was proposed by marsaglia. selected results, using the test on several pseudorandom number generators in the literature, are also presented. ora e. percus paula a. whitlock algorithm 625: a two-dimensional domain processor john r. rice expanders, sorting in rounds and superconcentrators of limited depth expanding graphs and superconcentrators are relevant to theoretical computer science in several ways. here we use finite geometries to construct explicitly highly expanding graphs with essentially the smallest possible number of edges. our graphs enable us to improve significantly previous results on a parallel sorting problem, by describing an explicit algorithm to sort n elements in k time units using (nαk) processors, where, e.g., α2 = 7/4. using our graphs we can also construct efficient n-superconcentrators of limited depth. for example, we construct an n superconcentrator of depth 3 with (n4/3) edges; better than the previous known results. n alon efficient gaussian elimination method for symbolic determinants and linear systems tateaki sasaki hirokazu murao fast distributed agreement (preliminary version) sam toueg kenneth j. perry t. k. srikanth practical integer division with karatsuba complexity tudor jebelean algorithm 798: high-dimensional interpolation using the modified shepard method a new implementation of the modified quadratic shepard method for the interpolation of scattered data is presented. qshep5d is a c++ translation of the original fortran 77 program qshep3d developed by renka (for 2d and 3d interpolation) which has been upgraded for 5d interpolation. this software development was motivated by the need for interpolated 5d hypervolumes of environmental response variables produced by forest growth and production models. michael w. berry karen s. minser algorithm 562: shortest path lengths [h] u. pape hamiltonian paths in infinite graphs david harel approximate solutions to problems in pspace anne condon error estimation in automatic quadrature routines a new algorithm for estimating the error in quadrature approximations is presented. 
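editorial note on the algorithm 798 abstract above (berry, minser): the modified shepard method is built on the classical inverse-distance-weighted average, which is all the sketch below implements (no local quadratic nodal functions, no locality radius); names and defaults are illustrative.

# basic inverse-distance-weighted shepard interpolation in d dimensions; a sketch
# of the underlying idea only (qshep3d/qshep5d in the abstract above add local
# quadratic nodal functions and a locality radius, which this omits).
import numpy as np

def shepard(points, values, x, power=2.0, eps=1e-12):
    """interpolate scattered data (points: m x d, values: m) at query point x (d,)."""
    d2 = np.sum((points - x)**2, axis=1)
    exact = d2 < eps                          # query coincides with a data point
    if np.any(exact):
        return float(values[np.argmax(exact)])
    w = d2 ** (-power / 2.0)                  # weights 1 / ||x - x_i||**power
    return float(np.sum(w * values) / np.sum(w))

# usage: 5-dimensional scattered samples of f(x) = sum(x)
rng = np.random.default_rng(0)
pts = rng.random((200, 5))
vals = pts.sum(axis=1)
print(shepard(pts, vals, pts[0]), vals[0])   # exact reproduction at a data point
q = np.full(5, 0.4)
# crude global estimate of f(q) = 2.0; basic shepard is strongly smoothed toward
# the sample mean, which is exactly why the "modified" local variants exist
print(shepard(pts, vals, q))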
based on the same integrand evaluations that we need for approximating the integral, one may, for many quadrature rules, compute a sequence of null rule approximations. these null rule approximations are then used to produce an estimate of the local error. the algorithm allows us to take advantage of the degree of precision of the basic quadrature rule. in the experiments we show that the algorithm works satisfactorily for a selection of different quadrature rules on all test families of integrals. jarle berntsen terje o. espelid a family of genetic algorithm packages on a workstation for solving combinatorial optimization problems roger l. wainwright a class of numerical integration rules with first order derivatives mohamad adnan al-alaoui learning stochastic functions by smooth simultaneous estimation to learn, it suffices to estimate the error of all candidate hypotheses simultaneously. we study the problem of when this "simultaneous estimation" is possible and show that it leads to new learning procedures and weaker sufficient conditions for a broad class of learning problems. we modify the standard probably approximately correct (pac) setup to allow concepts that are "stochastic functions." a deterministic function maps a set x into a set y, whereas a stochastic function is a probability distribution on x x y. we approach the simultaenous estimation problem by concentrating on a subset of all estimators, those that satisfy a natural "smoothness" constraint. the common empirical estimator falls within this class. we show that smooth simultaneous estimability can be characterized by a sampling-based criterion. also, we describe a canonical estimator for this class of problems. this canonical estimator has a unique form: it uses part of the samples to select a finite subset of hypotheses that approximates the class of candidate hypotheses, and then it uses the rest of the samples to estimate the error of each hypothesis in the subset. finally, we show that a learning procedure based on the canonical estimator will work in every case where empirical error minimization does. kevin l. buescher p. r. kumar a method of univariate interpolation that has the accuracy of a third-degree polynomial hiroshi akima decomposing a permutation into a conjugated tensor product sebastian egner markus puschel thomas beth algorithm 720; an algorithm for adaptive cubature over a collection of 3-dimensional simplices an adaptive algorithm for computing an approximation to the integral of each element in a vector of functions over a 3-dimensional region covered by simplices is presented. the algorithm is encoded in fortran 77. locally, a cubature formula of degree 8 with 43 points is used to approximate an integral. the local error estimate is based on the same evaluation points. the error estimation procedure tries to decide whether the approximation for each function has asymptotic behavior, and different actions are taken depending on that decision. the simplex with the largest error is subdivided into 8 simplices. the local procedure is then applied to each new region. this procedure is repeated until convergence. jarle berntsen ronald cools terje o. espelid a linear-time heuristic for improving network partitions an iterative mincut heuristic for partitioning networks is presented whose worst case computation time, per pass, grows linearly with the size of the network. in practice, only a very small number of passes are typically needed, leading to a fast approximation algorithm for mincut partitioning. 
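editorial note on the algorithm 720 abstract above (berntsen, cools, espelid): the driver it describes (estimate a local error for each region, always subdivide the region with the largest estimated error) is the generic globally adaptive strategy. the sketch below shows that driver for one-dimensional intervals with a crude simpson-versus-midpoint error estimate, which is an assumption of this illustration, not their degree-8 cubature over 3-dimensional simplices or their null-rule error estimator.

# generic globally adaptive integration driver: keep a priority queue of regions
# and always subdivide the one with the largest estimated error; 1-d sketch only.
import heapq, math

def simpson(f, a, b):
    return (b - a) / 6.0 * (f(a) + 4.0 * f(0.5 * (a + b)) + f(b))

def adaptive(f, a, b, tol=1e-8, max_regions=10000):
    def region(lo, hi):
        s = simpson(f, lo, hi)
        err = abs(s - (hi - lo) * f(0.5 * (lo + hi)))   # crude estimate: simpson vs midpoint
        return (-err, lo, hi, s)                        # negate so heapq pops the largest error
    heap = [region(a, b)]
    while len(heap) < max_regions and -heap[0][0] > tol:
        _, lo, hi, _ = heapq.heappop(heap)
        mid = 0.5 * (lo + hi)
        heapq.heappush(heap, region(lo, mid))
        heapq.heappush(heap, region(mid, hi))
    return sum(r[3] for r in heap)

# usage: integral of exp(x) on [0, 1] is e - 1
print(adaptive(math.exp, 0.0, 1.0), math.e - 1.0)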
to deal with cells of various sizes, the algorithm progresses by moving one cell at a time between the blocks of the partition while maintaining a desired balance based on the size of the blocks rather than the number of cells per block. efficient data structures are used to avoid unnecessary searching for the best cell to move and to minimize unnecessary updating of cells affected by each move. c. m. fiduccia r. m. mattheyses a factorization algorithm for linear ordinary differential equations the reducibility and factorization of linear homogeneous differential equations are of great theoretical and practical importance in mathematics. although it has been known for a long time that factorization is in principle a decision procedure, its use in an automatic differential equation solver requires a more detailed analysis of the various steps involved. especially important are certain auxiliary equations, the so-called associated equations. an upper bound for the degree of its coefficients is derived. another important ingredient is the computation of optimal estimates for the size of polynomial and rational solutions of certain differential equations with rational coefficients. applying these results, the design of the factorization algorithm lodef and its implementation in the scratchpad ii computer algebra system is described. f. schwarz line search algorithms with guaranteed sufficient decrease the development of software for minimization problems is often based on a line search method. we consider line search methods that satisfy sufficient decrease and curvature conditions, and formulate the problem of determining a point that satisfies these two conditions in terms of finding a point in a set t(μ). we describe a search algorithm for this problem that produces a sequence of iterates that converge to a point in t(μ) and that, except for pathological cases, terminates in a finite number of steps. numerical results for an implementation of the search algorithm on a set of test functions show that the algorithm terminates within a small number of iterations. jorge j. more david j. thuente exact solution of large-scale, asymmetric traveling salesman problems a lowest-first, branch-and-bound algorithm for the asymmetric traveling salesman problem is presented. the method is based on the assignment problem relaxation and on a subtour elimination branching scheme. the effectiveness of the algorithm derives from reduction procedures and parametric solution of the relaxed problems associated with the nodes of the branch-decision tree. large- size, uniformly, randomly generated instances of complete digraphs with up to 2000 vertices are solved on a decstation 5000/240 computer in less than 3 minutes of cpu time. in addition, we solved on a pc 486/33 no wait flow shop problems with up to 1000 jobs in less than 11 minutes and real-world stacker crane problems with up to 443 movements in less than 6 seconds. g. carpaneto m. dell'amico p. toth a rejection technique for sampling from log-concave multivariate distributions different universal methods (also called automatic or black-box methods) have been suggested for sampling form univariate log-concave distributions. the descriptioon of a suitable universal generator for multivariate distributions in arbitrary dimensions has not been published up to now. the new algorithm is based on the method of transformed density rejection. 
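editorial note on the more-thuente line search abstract above: the sufficient-decrease condition they enforce can be illustrated with a plain backtracking search, shown below; their algorithm additionally enforces the curvature condition and has finite-termination guarantees that this sketch does not, and all names here are illustrative.

# backtracking line search enforcing the sufficient-decrease (armijo) condition;
# a minimal sketch for contrast with the abstract above, not the more-thuente algorithm.
import numpy as np

def backtracking(f, grad, x, p, alpha0=1.0, c1=1e-4, shrink=0.5, max_iter=50):
    """return a step alpha with f(x + alpha*p) <= f(x) + c1*alpha*grad(x).p"""
    fx, slope = f(x), float(np.dot(grad(x), p))
    assert slope < 0.0, "p must be a descent direction"
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * p) <= fx + c1 * alpha * slope:
            return alpha
        alpha *= shrink
    return alpha

# usage: one steepest-descent step on the quadratic f(x) = x.x
f = lambda x: float(np.dot(x, x))
grad = lambda x: 2.0 * x
x = np.array([3.0, -4.0])
alpha = backtracking(f, grad, x, -grad(x))
print(alpha, f(x + alpha * (-grad(x))))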
to construct a hat function for the rejection algorithm the multivariate density is transformed by a proper transformation t into a concave function (in the case of log- concave density t(x) = log(x).) then it is possible to construct a dominating function by taking the minimum of serveral tangent hyperplanes that are transformed back by t-1 into the original scale. the domains of different pieces of the hat function are polyhedra in the multivariate case. although this method can be shown to work, it is too slow and complicated in higher dimensions. in this article we split the rn into simple cones. the hat function is constructed piecewise on each of the cones by tangent hyperplanes. the resulting function is no longer continuous and the rejection constant is bounded from below but the setup and the generation remains quite fast in higher dimensions; for example, n = 8. the article describes the details of how this main idea can be used to construct algorithm tdrmv that generates random tuples from a multivariate log-concave distribution with a computable density. although the developed algorithm is not a real black box method it is adjustable for a large class of log-concave densities. josef leydold table-driven implementation of the logarithm function in ieee floating-point arithmetic algorithms and implementation details for the logarithm functions in both single and double precision of ieee 754 arithmetic are presented here. with a table of moderate size, the implementation need only working- precision arithmetic and are provably accurate to within 0.57 ulp. ping-tak peter tang algorithm 744; a stochastic algorithm for global optimization with constraints a stochastic algorithm is presented for finding the global optimum of a function of n variables subject to general constraints. the algorithm is intended for moderate values of n, but it can accommodate objective and constraint functions that are discontinuous and can take advantage of parallel processors. the performance of this algorithm is compared to that of the nelder-mead simplex algorithm and a simulated annealing algorithm on a variety of nonlinear functions. in addition, one-, two-, four-, and eight-processor versions of the algorithm are compared using 64 of the nonlinear problems with constraints collected by hock and schittkowski. in general, the algorithm is more robust than the simplex algorithm, but computationally more expensive. the algorithm appears to be as robust as the simulated annealing algorithm, but computationally cheaper. issues discussed include algorithm speed and robustness, applicability to both computer and mathematical models, and parallel efficiency. f. michael rabinowitz boundary-valued shape-preserving interpolating splines this article describes a general-purpose method for computing interpolating polynomial splines with arbitrary constraints on their shape and satisfying separable or nonseparable boundary conditions. examples of applications of the related fortran code are periodic shape-preserving spline intepolation and the construction of visually pleasing closed curves. p. costantini the initial transient problem - estimating transient and steady state performance in this paper, deutsch, richards and fernandez-torres (1983) fit relaxed time series (rarma) models to transient and steady state data from a simulated m/m/1 queue and a nonstationary queueing network. both systems are initially empty and idle. 
the statistical identification procedure (richards 1983) for the rarma model class is illustrated. improved estimates of the steady state mean of the m/m/1, and of the polynomial in time of the network's nonstationary "steady state" performance, are obtained by including the identified transient model in the rarma regression. s. j. deutsch j. e. richards c. fernandez-torres algorithm 578: solution of real linear equations in a paged virtual store [f4] j. j. du croz s. m. nugent j. k. reid d. b. taylor mathematical software for sturm-liouville problems software is described for the sturm-liouville eigenproblem. eigenvalues, eigenfunctions, and spectral density functions can be estimated with global error control. the method of approximating the coefficients forms the mathematical basis. the underlying algorithms are briefly described, and several examples are presented. steven pruess charles t. fulton remark on "algorithm 535: the qz algorithm to solve the generalized eigenvalue problem" b. s. garbow remark on algorithm 726: orthpol - a package of routines for generating orthogonal polynomials and gauss-type quadrature rules walter gautschi corrigendum: "a note on complex division" g. w. stewart on the sojourn time distribution in a finite capacity processor shared queue we consider a processor shared m/m/1 queue that can accommodate at most a finite number k of customers. using singular perturbation techniques, we construct asymptotic approximations to the distribution of a customer's sojourn time. we assume that k is large and treat several different cases of the model parameters and also treat different time scales. extensive numerical comparisons are used to back up our asymptotic formulas. (author's abstract) charles knessl bootstrap confidence intervals for ratios of expectations we are concerned with computing a confidence interval for the ratio e[y]/e[x], where (x, y) is a pair of random variables. this ratio estimation problem arises in, for instance, regenerative simulation. as an alternative to confidence intervals based on asymptotic normality, we study and compare different variants of the bootstrap for one-sided and two-sided intervals. we point out situations where these techniques provide confidence intervals with coverage much closer to the nominal value than do the classical methods. denis choquet pierre l'ecuyer christian léger algorithm 642: a fast procedure for calculating minimum cross-validation cubic smoothing splines the procedure cubgcv is an implementation of a recently developed algorithm for fast o(n) calculation of a cubic smoothing spline fitted to n noisy data points, with the degree of smoothing chosen to minimize the expected mean square error at the data points when the variance of the error associated with the data is known, or, to minimize the generalized cross validation (gcv) when the variance of the error associated with the data is unknown. the data may be unequally spaced and nonuniformly weighted. the algorithm exploits the banded structure of the matrices associated with the cubic smoothing spline problem. bayesian point error estimates are also calculated in o(n) operations. m. f. hutchinson algorithm 579: cpsc: complex power series coefficients [d4] b. fornberg twisted gfsr generators ii the twisted gfsr generators proposed in a previous article have a defect in k-distribution for k larger than the order of recurrence.
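editorial note on the choquet-l'ecuyer-leger abstract above: the plainest version of what they study is the percentile bootstrap for e[y]/e[x], resampling (x, y) pairs together. the sketch below implements only that baseline; the refined one-sided and two-sided variants the abstract compares are not reproduced, and the sample sizes and replication count are arbitrary assumptions.

# percentile-bootstrap confidence interval for the ratio e[y]/e[x]; baseline sketch only.
import numpy as np

def bootstrap_ratio_ci(x, y, level=0.90, n_boot=2000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample (x_i, y_i) pairs together
        ratios[b] = y[idx].mean() / x[idx].mean()
    lo, hi = np.quantile(ratios, [(1 - level) / 2, (1 + level) / 2])
    return y.mean() / x.mean(), (lo, hi)

# usage: synthetic data where the true ratio e[y]/e[x] is 2
rng = np.random.default_rng(1)
x = rng.exponential(1.0, size=500)
y = 2.0 * x + rng.normal(0.0, 0.2, size=500)
print(bootstrap_ratio_ci(x, y, rng=rng))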
in this follow-up article, we introduce and analyze a new tgfsr variant having better k-distribution properties. we provide an efficient algorithm to obtain the order of equidistribution, together with a tight upper bound on the order. we discuss a method to search for generators attaining this bound, and we list some such generators. the upper bound turns out to be (sometimes far) less than the maximum order of equidistribution for a generator of that period length, but far more than that for a gfsr with a working area of the same size. makoto matsumoto yoshiharu kurita conductance and the rapid mixing property for markov chains: the approximation of the permanent resolved the permanent of an n x n matrix a with 0-1 entries a_{ij} is defined by per(a) = \sum_\sigma \prod_{i=0}^{n-1} a_{i,\sigma(i)}, where the sum is over all permutations \sigma of [n] = {0, …, n - 1}. evaluating per(a) is equivalent to counting perfect matchings (1-factors) in the bipartite graph g = (v_1, v_2, e), where v_1 = v_2 = [n] and (i, j) ∈ e iff a_{ij} = 1. the permanent function arises naturally in a number of fields, including algebra, combinatorial enumeration and the physical sciences, and has been an object of study by mathematicians for many years (see [14] for background). despite considerable effort, and in contrast with the syntactically very similar determinant, no efficient procedure for computing this function is known. convincing evidence for the inherent intractability of the permanent was provided in the late 1970s by valiant [19], who demonstrated that it is complete for the class #p of enumeration problems and thus as hard as counting any np structures. interest has therefore recently turned to finding computationally feasible approximation algorithms (see, e.g., [11], [17]). the notion of approximation we shall use in this paper is as follows: let f be a function from input strings to natural numbers. a fully-polynomial randomised approximation scheme (fpras) for f is a probabilistic algorithm which, when presented with a string x and a real number ε > 0, runs in time polynomial in |x| and 1/ε and outputs a number which with high probability estimates f(x) to within a factor of (1 + ε). a promising approach to finding an fpras for the permanent was recently proposed by broder [7], and involves reducing the problem of counting perfect matchings in a graph to that of generating them randomly from an almost uniform distribution. the latter problem is then amenable to the following dynamic stochastic technique: construct a markov chain whose states correspond to perfect and 'near-perfect' matchings, and which converges to a stationary distribution which is uniform over the states. transitions in the chain correspond to simple local perturbations of the structures. then, provided convergence is fast enough, we can generate matchings by simulating the chain for a small number of steps and outputting the structure corresponding to the final state. when applying this technique, one is faced with the task of proving that a given markov chain is rapidly mixing, i.e., that after a short period of evolution the distribution of the final state is essentially independent of the initial state. 'short' here means bounded by a polynomial in the input size; since the state space itself may be exponentially large, the chain must typically be close to stationarity after visiting only a small fraction of the space. recent work on the rate of convergence of markov chains has focussed on stochastic concepts such as coupling [1] and stopping times [3].
while these methods are intuitively appealing and yield tight bounds for simple chains, the analysis involved becomes extremely complicated for more interesting processes which lack a high degree of symmetry. using a complex coupling argument, broder [7] claims that the perfect matchings chain above is rapidly mixing provided the bipartite graph is dense, i.e., has minimum vertex degree at least n/2. this immediately yields a fpras for the dense permanent. however, the coupling proof is hard to penetrate; more seriously, as has been observed by mihail [13], it contains a fundamental error which is not easily correctable. in this paper, we propose an alternative technique for analysing the rate of convergence of markov chains based on a structural property of the underlying weighted graph. under fairly general conditions, a finite ergodic markov chain is rapidly mixing iff the conductance of its underlying graph is not too small. this characterisation is related to recent work by alon [4] and alon and milman [5] on eigenvalues and expander graphs. while similar characterisations of rapid mixing have been noted before (see, e.g., [2]), independent estimates of the conductance have proved elusive for non-trivial chains. using a novel method of analysis, we are able to derive a lower bound on the conductance of broder's perfect matchings chain under the same density assumption, thus verifying that it is indeed rapidly mixing. the existence of a fpras for the dense permanent is therefore established. reductions from approximate counting to almost uniform generation similar to that mentioned above for perfect matchings also hold for the large class of combinatorial structures which are self-reducible [10]. consequently, the markov chain approach is potentially a powerful general tool for obtaining approximation algorithms for hard combinatorial enumeration problems. moreover, our proof technique for rapid mixing also seems to generalise to other interesting chains. we substantiate this claim by considering an example from the field of statistical physics, namely the monomer-dimer problem (see, e.g., [8]). here a physical system is modelled by a set of combinatorial structures, or configurations, each of which has an associated weight. most interesting properties of the model can be computed from the partition function, which is just the sum of the weights of the configurations. by means of a reduction to the associated generation problem, in which configurations are selected with probabilities proportional to their weights, we are able to show the existence of a fpras for the monomer-dimer partition function under quite general conditions. significantly, in such applications the generation problem is often of interest in its own right. our final result concerns notions of approximate counting and their robustness. we show that, for all self-reducible np structures, randomised approximate counting to within a factor of (1 + nβ), where n is the input size, is possible in polynomial time either for all β ∈ r or for no β ∈ r. we are therefore justified in calling such a counting problem approximable iff there exists a polynomial time randomised procedure which with high probability estimates the number of structures within ratio (1 + nβ) for some arbitrary β ∈ r. 
the connection with the earlier part of the paper is our use of a markov chain simulation to reduce almost uniform generation to approximate counting within any factor of the above form: once again, the proof that the chain is rapidly mixing follows from the conductance characterisation. mark jerrum alistair sinclair grey group relation and their general problems (i) xiaozhong li kaiquan shi generating sorted lists of random numbers jon louis bentley james b. saxe remark on "algorithm 603: colrow and arceco: fortran packages for solving certain almost block diagonal linear systems by modified alternate row and column elimination" j. c. diaz g. fairweather and p. keast beating the logarithmic lower bound: randomized preemptive disjoint paths and call control algorithms ran adler yossi azar algorithm 699; a new representation of patterson's quadrature formulae fred t. krogh w. van snyder remark on "algorithm 535: the qz algorithm to solve the generalized eigenvalue problem for complex matrices [f2]" b. s. garbow algorithm 666: chabis: a mathematical software package for locating and evaluating roots of systems of nonlinear equations chabis is a mathematical software package for the numerical solution of a system of n nonlinear equations in n variables. first, chabis locates at least one solution of the system within an n-dimensional polyhedron. then, it applies a new generalized method of bisection to this n-polyhedron in order to obtain an approximate solution of the system according to a predetermined accuracy. in this paper we briefly describe the user interface to chabis and present several details of its implementation, as well as an example of its usage. michael n. vrahatis algorithm 651: algorithm hfft - high-order fast-direct solution of the helmholtz equation hfft is a software package for solving the helmholtz equation on bounded two- and three-dimensional rectangular domains with dirichlet, neumann, or periodic boundary conditions. the software is the result of combining new fourth-order accurate compact finite difference (hodie) discretizations and a fast-direct solution technique (the fourier method). in this paper we briefly describe the user interface to hfft and present an example of its usage and several details of its implementation. ronald f. boisvert algorithm 617 dafne: a differential-equations algorithm for nonlinear equations filippo aluffi-pentini valerio parisi francesco zirilli generation of large-scale quadratic programs for use as global optimization test problems a method is presented for the generation of test problems for global optimization algorithms. given a bounded polyhedron in r^n and a vertex, the method constructs nonconvex quadratic functions (concave or indefinite) whose global minimum is attained at the selected vertex. the construction requires only the use of linear programming and linear systems of equations. panos m. pardalos a nonlinear runge-kutta formula for initial value problems d. j. evans b. b. sanugi with what accuracy can we measure masses if we have an (approximately known) mass standard vladik kreinovich an algorithm for fast polynomial approximation ronald w gatterdam computing the volume is difficult z furedi i barany on the number of eulerian orientations of a graph we give efficient randomized schemes to sample and approximately count eulerian orientations of any eulerian graph.
eulerian orientations are natural flow-like structures, and welsh has pointed out that computing their number (i) corresponds to evaluating the tutte polynomial at the point (0, -2) [8,19] and (ii) is equivalent to evaluating "ice-type partition functions" in statistical physics [20]. our algorithms are based on a reduction to sampling and approximately counting perfect matchings for a class of graphs for which the methods of broder [3, 10] and others [4, 6] apply. a crucial step of the reduction is the "monotonicity lemma" (lemma 3.3) which is of independent combinatorial interest. roughly speaking, the monotonicity lemma establishes the intuitive fact that "increasing the number of constraints applied on a flow problem can only decrease the number of solutions". in turn, the proof of the lemma involves a new decomposition technique which decouples problematically overlapping structures (a recurrent obstacle in handling large combinatorial populations) and allows detailed enumeration arguments. as a byproduct, (i) we exhibit a class of graphs for which perfect and near-perfect matchings are polynomially related, and hence the permanent can be approximated, for reasons other than "short augmenting paths" (previously the only known approach); and (ii) we obtain a further direct sampling scheme for eulerian orientations which is faster than the one suggested by the reduction to perfect matchings. finally, with respect to our approximate counting algorithm, we give the complementary hardness result, namely, that exactly counting eulerian orientations is #p-complete, and provide some connections with eulerian tours. milena mihail peter winkler cauchy principal value integral using hybrid integral hiroshi kai matu-tarow noda algorithm 569: colsys: collocation software for boundary-value odes [d2] u. ascher j. christiansen r. d. russell towards a strong communication complexity theory or generating quasi-random sequences from two communicating slightly-random sources u v vazirani the need for an industry standard of accuracy for elementary-function programs cheryl m. black robert burton and thomas h. miller algorithm 680: evaluation of the complex error function g. p. m. poppe c. m. j. wijers when is double rounding innocuous? samuel a. figueroa fitting johnson curves to univariate and multivariate data moment matching and percentile matching are the standard methods for fitting a distribution from johnson's translation system (the s_l, s_u, s_b, and s_n families) to a univariate data set (johnson and kotz 1970). one method for fitting a multivariate johnson distribution to a vector-valued data set is to: (a) fit each marginal distribution separately with a univariate johnson distribution; and then (b) fit a multivariate normal distribution to the transformed vectors using the sample correlation coefficients between the transformed coordinates. to make this approach numerically feasible, wilson (1983) implemented an interactive percentile matching algorithm based on a modified newton-raphson procedure. when the algorithm was applied to 80 trivariate data sets arising in a large-scale policy analysis project, excessive manual intervention was required in some situations to obtain acceptable fits (martin 1983). james r.
wilson a statistical model for net length estimation the prerouting estimation of net length is very important to the physical design since the estimated length can be employed as a figure-of-merit of the placement process, as an evaluation of the placement result, and in the calculation of capacitance in the predictive timing analysis. the traditional methods (bounded rectangle, minimal spanning tree, and minimal steiner tree) result in either poor estimation or time-consuming processing. moreover, they do not consider the interference between each net and the technology influence. a statistical model for net length estimation has been developed to overcome these deficiencies. this model provides an unbiased estimate with 7 percent relative root-mean-square error on average. lai-chering suen on the number of halving planes let s ⊂ r^3 be an n-set in general position. a plane containing three of the points is called a halving plane if it dissects s into two parts of equal cardinality. it is proved that the number of halving planes is at most o(n^2.998). as a main tool, for every set y of n points in the plane a set n of size o(n^4) is constructed such that the points of n are distributed almost evenly in the triangles determined by y. i. barany z. furedi l. lovasz optimization in a distributed processing environment using genetic algorithms with multivariate crossover we set out to demonstrate the effectiveness of distributed genetic algorithms using multivariate crossover in optimizing a function of a sizable number of independent variables. our results show that this algorithm has unique potential in optimizing such functions. the multivariate crossover meta-strategy, however, did not result in a singularly better performance of the algorithm than did simpler crossover strategies. aaron h. konstam stephen j. hartley william l. carr algorithm 644: a portable package for bessel functions of a complex argument and nonnegative order this algorithm is a package of subroutines for computing bessel functions h_v^(1)(z), h_v^(2)(z), i_v(z), j_v(z), k_v(z), y_v(z) and airy functions ai(z), ai′(z), bi(z), bi′(z) for orders v ≥ 0 and complex z in -π < arg z ≤ π. eight callable subroutines and their double-precision counterparts are provided. exponential scaling and sequence generation are auxiliary options. d. e. amos ctadel: a generator of multi-platform high performance codes for pde-based scientific applications robert van engelen lex wolters gerard cats remark on "algorithms 508 and 509: matrix bandwidth and profile reduction and a hybrid profile reduction algorithm" john g. lewis algorithm 736; hyperelliptic integrals and the surface measure of ellipsoids the algorithm for computing a class of hyperelliptic integrals and for determining the surface measure of ellipsoids is described in detail by dunkl and ramirez [1994]. an efficient implementation of their algorithm is presented here. charles f. dunkl donald e. ramirez how to vectorize the algebraic multilevel iteration we consider the algebraic multilevel iteration (amli) for the solution of systems of linear equations as they arise from a finite-difference discretization on a rectangular grid. the key operation is the matrix-vector product, which can efficiently be executed on vector and parallel-vector computer architectures if the nonzero entries of the matrix are concentrated in a few diagonals. in order to maintain this structure for all matrices on all levels, coarsening in alternating directions is used.
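the amli abstract above turns on matrix-vector products with matrices whose nonzeros lie in a few diagonals. as an illustrative sketch only (plain numpy, with names of my choosing, not the amli code), a diagonal-storage product reduces to a handful of long vector operations, which is why this layout vectorizes well:

    # minimal sketch of a matrix-vector product in diagonal (dia) storage: each nonzero
    # diagonal is one contiguous array, so the product is a few long vector operations.
    import numpy as np

    def dia_matvec(diagonals, offsets, x):
        """y = a @ x, where the diagonal of a with offset o is stored as a 1-d array."""
        n = x.size
        y = np.zeros_like(x, dtype=float)
        for d, o in zip(diagonals, offsets):
            if o >= 0:   # superdiagonal: affects rows 0 .. n-1-o
                y[:n - o] += d[:n - o] * x[o:]
            else:        # subdiagonal:   affects rows -o .. n-1
                y[-o:] += d[:n + o] * x[:n + o]
        return y

    # usage: the 1-d laplacian stencil [-1, 2, -1] on 6 points
    n = 6
    main = 2.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    x = np.arange(1.0, n + 1)
    print(dia_matvec([main, off, off], [0, 1, -1], x))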
in some cases it is necessary to introduce additional dummy grid hyperplanes. the data movements in the restriction and prolongation are crucial, as they produce massive memory conflicts on vector architectures. by using a simple performance model the best of the possible vectorization strategies is automatically selected at runtime. examples show that on a fujitsu vpp300 the presented implementation of amli reaches about 85% of the useful performance, and scalability with respect to computing time can be achieved. lutz grosz a remark on bartels and conn's linearly constrained, discrete l1 problems two modifications of bartels and conn's algorithm for solving linearly constrained discrete l1 problems are described. the modifications are designed to improve performance of the algorithm under conditions of degeneracy. roger w. koenker pin t. ng on likely solutions of a stable matching problem boris pittel algorithm 559: the stationary point of a quadratic function subject to linear constraints [e4] j. t. betts acm algorithms policy fred t. krogh how fair is fair queuing a. g. greenberg n. madras the compound poisson distribution algorithm 714; celefunt: a portable test package for complex elementary functions this paper discusses celefunt, a package of fortran programs for testing complex elementary functions. w. j. cody remark on algorithm 715 david t. price dynamic graph drawing (abstract) robert f. cohen approximation algorithms for shortest path motion planning this paper gives approximation algorithms for solving the following motion planning problem: given a set of polyhedral obstacles and points s and t, find a shortest path from s to t that avoids the obstacles. the paths found by the algorithms are piecewise linear, and the length of a path is the sum of the lengths of the line segments making up the path. approximation algorithms will be given for versions of this problem in the plane and in three-dimensional space. the algorithms return an ε-short path, that is, a path with length within a factor (1 + ε) of the shortest. let n be the total number of faces of the polyhedral obstacles, and ε a given value satisfying 0 < ε ≤ π. the algorithm for the planar case requires o((n log n)/ε) time to build a data structure of size o(n/ε). given points s and t, an ε-short path from s to t can be found with the use of the data structure in time o(n/ε + n log n). the data structure is associated with a new variety of voronoi diagram. given obstacles s ⊂ e^3 and points s, t ∈ e^3, an ε-short path between s and t can be found in o(n^2 λ(n) log(n/ε)/ε^4 + n^2 log(np) log(n log p)) time, where p is the ratio of the length of the longest obstacle edge to the distance between s and t. the function λ(n) = α(n)^o(α(n)^o(1)), where α(n) is a form of inverse of ackermann's function. for log(1/ε) and log p that are o(log n), this bound is o(n^2 log n λ(n)/ε^4). k. clarkson remark on algorithm 630 a. buckley remark on algorithm 723: fresnel integrals w. van snyder a review of recent developments in solving odes mathematical models when simulating the behavior of physical, chemical, and biological systems often include one or more ordinary differential equations (odes). to study the system behavior predicted by a model, these equations are usually solved numerically. although many of the current methods for solving odes were developed around the turn of the century, the past 15 years or so has been a period of intensive research.
the emphasis of this survey is on the methods and techniques used in software for solving odes. odes can be classified as stiff or nonstiff, and may be stiff for some parts of an interval and nonstiff for others. we discuss stiff equations, why they are difficult to solve, and methods and software for solving both nonstiff and stiff equations. we conclude this review by looking at techniques for dealing with special problems that may arise in some odes, for example, discontinuities. although important theoretical developments have also taken place, we report only those developments which have directly affected the software and provide a review of this research. we present the basic concepts involved but assume that the reader has some background in numerical computing, such as a first course in numerical methods. gopal k. gupta ron sacks-davis peter e. tescher nitpack: an interactive tree package p. w. gaffney j. w. wooter k. a. kessel w. r. mckinney on threshold pivoting in the multifrontal method for sparse indefinite systems a simple modification to the numerical pivot selection criteria in the multifrontal scheme of duff and reid for sparse symmetric matrix factorization is presented. for a given threshold value, the modification allows a broader choice of block 2 x 2 pivots owing to a less restrictive pivoting condition. it also extends the range of permissible threshold values from [0, 1/2) to [0, 0.6404). moreover, the bound on element growth for stability consideration in the modified scheme is nearly the same as that of the original strategy. joseph w. h. liu approximation schemes for minimum latency problems sanjeev arora george karakostas modeling languages versus matrix generators for linear programming robert fourer the de bruijn multiprocessor network: a versatile sorting network m. r. samatham d. k. pradhan global optimization by center-concentrated sampling toru kambayashi algorithm 604: a fortran program for the calculation of an extremal polynomial frederick w. sauer modification of the minimum-degree algorithm by multiple elimination the most widely used ordering scheme to reduce fills and operations in sparse matrix computation is the minimum-degree algorithm. the notion of multiple elimination is introduced here as a modification to the conventional scheme. the motivation is discussed using the k-by-k grid model problem. experimental results indicate that the modified version retains the fill-reducing property of (and is often better than) the original ordering algorithm and yet requires less computer time. the reduction in ordering time is problem dependent, and for some problems the modified algorithm can run a few times faster than existing implementations of the minimum-degree algorithm. the use of external degree in the algorithm is also introduced. joseph w. h. liu acm algorithms policy fred t. krogh kinder, gentler average-case analysis for convex hulls and maximal vectors rex a. dwyer turing award lecture richard h. karp a shortest-path algorithm with expected time o(n2 log n log* n) we present an algorithm which finds the shortest distance between all pairs of points in a non- negatively weighted directed graph in the average time stated in the title. this algorithm, an extension of spira's solution [sp], executes in the stated time over quite general classes of probability distributions on graphs. peter bloniarz a locally parameterized continuation process werner c. rheinboldt john v. 
burkardt algorithm 726; orthpol - a package of routines for generating orthogonal polynomials and gauss-type quadrature rules a collection of subroutines and examples of their uses, as well as the underlying numerical methods, are described for generating orthogonal polynomials relative to arbitrary weight functions. the object of these routines is to produce the coefficients in the three-term recurrence relation satisfied by the orthogonal polynomials. once these are known, additional data can be generated, such as zeros of orthogonal polynomials and gauss-type quadrature rules, for which routines are also provided. walter gautschi on ordered languages and the optimization of linear functions by greedy algorithms the optimization problem for linear functions on finite languages is studied, and an (almost) complete characterization of those functions for which a primal and a dual greedy algorithm work well with respect to a canonically associated linear programming problem is given. the discussion in this paper is within the framework of ordered languages, and the characterization uses the notion of rank feasibility of a weighting with respect to an ordered language. this yields a common generalization of a sufficient condition, obtained recently by korte and lovasz for greedoids, and the greedy algorithm for ordered sets in faigel's paper [6]. ordered greedoids are considered the appropriate generalization of greedoids, and the connection is established between ordered languages, polygreedoids, and coxeteroids. answering a question of bjorner, the author shows in particular that a polygreedoid is a coxeteroid if and only if it is derived from an integral polymatroid. ulrich faigle design and implementation of a very small linear algebra program package microcomputers, when properly programmed, have sufficient memory and speed to successfully perform serious calculations of modest size--linear equations, least squares, matrix inverse or generalized inverse, and the symmetric matrix eigenproblem. john c. nash an efficient filter-based approach for combinational verification rajarshi mukherjee jawahar jain koichiro takayama masahiro fujita jacob a. abraham donald s. fussell finding minimum-cost circulations by canceling negative cycles a classical algorithm for finding a minimum-cost circulation consists of repeatedly finding a residual cycle of negative cost and canceling it by pushing enough flow around the cycle to saturate an arc. we show that a judicious choice of cycles for canceling leads to a polynomial bound on the number of iterations in this algorithm. this gives a very simple strongly polynomial algorithm that uses no scaling. a variant of the algorithm that uses dynamic trees runs in (nm(log n)min{log(nc), m log n}) time on a network of n vertices, m arcs, and arc costs of maximum absolute value c. this bound is comparable to those of the fastest previously known algorithms. andrew v. goldberg robert e. tarjan efficient algorithms for inverting evolution evolution can be mathematically modelled by a stochastic process that operates on the dna of species. such models are based on the established theory that the dna sequences, or genomes, of all extant species have been derived from the genome of the common ancestor of all species by a process of random mutation and natural selection. a stochastic model of evolution can be used to construct phylogenies, or evolutionary trees, for a set of species. 
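the orthpol abstract above stops at the three-term recurrence coefficients; the usual next step, alluded to there, is the golub-welsch construction of gauss nodes and weights from the symmetric tridiagonal jacobi matrix. the sketch below is illustrative python, not the fortran package, and uses the known legendre recurrence coefficients as a stand-in for whatever orthpol would produce.

    # sketch of the standard golub-welsch step: given recurrence coefficients alpha_k,
    # beta_k of (monic) orthogonal polynomials, the gauss nodes are the eigenvalues of
    # the jacobi matrix and the weights come from the first eigenvector components.
    import numpy as np

    def gauss_from_recurrence(alpha, beta, mu0):
        """alpha, beta: recurrence coefficients; mu0: integral of the weight function."""
        n = len(alpha)
        j = np.diag(alpha) + np.diag(np.sqrt(beta[1:n]), 1) + np.diag(np.sqrt(beta[1:n]), -1)
        nodes, vecs = np.linalg.eigh(j)
        weights = mu0 * vecs[0, :] ** 2
        return nodes, weights

    # legendre weight on [-1, 1]: alpha_k = 0, beta_k = k^2 / (4k^2 - 1), mu0 = 2
    n = 5
    k = np.arange(n, dtype=float)
    alpha = np.zeros(n)
    beta = np.empty(n)
    beta[0] = 2.0
    beta[1:] = k[1:] ** 2 / (4.0 * k[1:] ** 2 - 1.0)
    nodes, weights = gauss_from_recurrence(alpha, beta, 2.0)
    print(nodes)                    # the 5-point gauss-legendre nodes
    print(weights, weights.sum())   # weights sum to 2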
maximum likelihood estimation (mle) methods seek the evolutionary tree which is most likely to have produced the dna under consideration. while these methods are intellectually satisfying, they have not been widely accepted because of their computational intractability. in this paper, we address the intractability of mle methods as follows: we introduce a metric on stochastic process models of evolution. we show that this metric is meaningful by proving that in order for any algorithm to distinguish between two stochastic models that are close according to this metric, it needs to be given many observations. we complement this result with a simple and efficient algorithm for inverting the stochastic process of evolution, that is, for building a tree from observations on two-state characters. (we will use the same techniques in a subsequent paper to solve the problem for multistate characters, and hence for building a tree from dna sequence data.) the tree we build is provably close, in our metric, to the tree generating the data and gets closer as more observations become available. though there have been many heuristics suggested for the problem of finding good approximations to the most likely tree, our algorithm is the first one with a guaranteed convergence rate, and further, this rate is within a polynomial of the lower-bound rate we establish. ours is also the first polynomial-time algorithm that is proven to converge at all to the correct tree. martin farach sampath kannan a constant-factor approximation algorithm for the k-median problem (extended abstract) moses charikar sudipto guha Éva tardos david b. shmoys algorithm 786: multiple-precision complex arithmetic and functions this article describes a collection of fortran routines for multiple- precision complex arithmetic and elementary functions. the package provides good exception handling, flexible input and output, trace features, and results that are almost always correctly rounded. for best efficiency on different machines, the user can change the arithmetic type used to represent the multiple-precision numbers. david m. smith controlling the defect in existing variable-order adams codes for initial- value problems p. m. hanson w. h. enright weyl closure of a d-ideal harrison tsai vectorized dissection on the hypercube dissection ordering is used with gaussian elimination on a hypercube parallel processor with vector hardware to solve matrices arising from finite- difference and finite-element discretizations of 2-d elliptic partial differential equations. these problems can be put into a matrix-vector form, ax = f, where the matrix a takes the place of the differential operator, x is the solution vector, and f is the source vector. the domain is divided among the nodes with neighboring subdomains sharing a strip called a separator. each processor is given its own part of the source vector and computes its own part of the stiffness matrix, a. the elimination starts out in parallel; communication is only needed after most of the elimination is finished when the edges need to be eliminated. back substitution is initially done on the domain edges, and then totally in parallel without communication on each node. the hypercube code involved was optimized to work with vector hardware. example problems and timings are given with comparisons to nonvector runs. t-h. olesen j. 
petersen incremental modular decomposition modular decomposition is a form of graph decomposition that has been discovered independently by researchers in graph theory, game theory, network theory, and other areas. this paper reduces the time needed to find the modular decomposition of a graph from o(n^3) to o(n^2). together with a new algorithm for transitive orientation given in [21], this leads to fast new algorithms for a number of problems in graph recognition and isomorphism, including recognition of comparability graphs and permutation graphs. the new algorithm works by inserting each vertex successively into the decomposition tree, using o(n) time to insert each vertex. john h. muller jeremy spinrad computation of exponential integrals of a complex argument previous work on exponential integrals of a real argument has produced a computational algorithm which implements backward recurrence on a three-term recurrence relation (miller algorithm). the process on which the algorithm is based involves the truncation of an infinite sequence with a corresponding analysis of the truncation error. this error is estimated by means of (asymptotic) formulas, which are not only applicable to the real line, but also extendible to the complex plane. this fact makes the algorithm extendible to the complex plane also. however, the rate of convergence decreases sharply when the complex argument is close to the negative real axis. in order to make the algorithm more efficient, analytic continuation into a strip about the negative real axis is carried out by limited use of power series. the analytic details needed for a portable computational algorithm are presented. donald e. amos rotation distance, triangulations, and hyperbolic geometry d d sleator r e tarjan w p thurston transient diffusion approximation for some queueing systems there are many situations where information about the transient behaviour of computer systems is wanted. in this paper a diffusion approximation to the transient behaviour of some queueing systems is proposed. the solution of the forward diffusion equation on the real positive line with the elementary return barrier having almost general holding time distribution provides the approximation for the gi/g/1 queue. the solution of the diffusion equation in a finite region approximates the transient behaviour for the two-server cyclic system and gives approximations for the first overflow time in the gi/g/1/n system and for the maximum number of customers in the gi/g/1 system. also, approximations for the busy period of the gi/g/1 system and of the two-server cyclic system are presented. andrzej duda adaptive packet routing for bursty adversarial traffic william aiello eyal kushilevitz rafail ostrovsky adi rosen radix-b extensions to some common empirical tests for pseudorandom number generators empirical testing of computer-generated pseudo-random sequences is widely practiced. extensions to the coupon collector's and gap tests are presented that examine the distribution and independence of radix-b digit patterns in sequences with modulus of the form b^w. an algorithm is given and the test is applied to a number of popular generators. theoretical expected values are derived for a number of defects that may be present in a pseudorandom sequence and additional empirical evidence is given to support these values. the test has a simple model and a known distribution function.
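as a toy illustration of the kind of test the radix-b abstract above extends (this is the plain gap test on a single base-b digit of an lcg, not the authors' pattern-based algorithm; all names are mine), the expected cell probabilities are the geometric values p(1 - p)^r that the chi-square comparison would use:

    # rough illustration of a gap test on one radix-b digit of a linear congruential
    # generator: record the gaps between occurrences of a marked digit value and compare
    # the observed gap-length counts with the geometric probabilities of an ideal source.
    def lcg(seed, a=1103515245, c=12345, m=2**31):
        x = seed
        while True:
            x = (a * x + c) % m
            yield x

    def gap_test(stream, n_gaps, base=8, digit_pos=3, marked=0, max_gap=20):
        p = 1.0 / base                       # probability of the marked digit value
        counts = [0] * (max_gap + 1)         # last cell collects gaps of length >= max_gap
        gap = 0
        while n_gaps > 0:
            d = (next(stream) // base**digit_pos) % base   # the radix-b digit under test
            if d == marked:
                counts[min(gap, max_gap)] += 1
                gap = 0
                n_gaps -= 1
            else:
                gap += 1
        expected = [p * (1 - p)**r for r in range(max_gap)] + [(1 - p)**max_gap]
        total = sum(counts)
        chi2 = sum((c - e * total)**2 / (e * total) for c, e in zip(counts, expected))
        return chi2, max_gap                 # compare chi2 against chi-square, max_gap d.o.f.

    print(gap_test(lcg(42), n_gaps=5000))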
it is easily and efficiently implemented and easily adaptable to testing only the bits of interest, given a certain application. brad c. johnson algorithm 814: fortran 90 software for floating-point multiple precision arithmetic, gamma and related functions a collection of fortran 90 routines for evaluating the gamma function and related functions using the fm multiple-precision arithmetic package. david m. smith implicitization of a general union of parametric varieties f. orecchia investigating des with crack and related programs the aim of the package to be described is to try modularizing the investigation of differential equations for which there are no complete algorithms available yet. all that is available for such problems are algorithms for special situations, e.g. when first integrals with a simple structure exist (e.g. polynomial in first derivatives) or when the problem has infinitesimal symmetries. in all such cases, finally a system of differential equations has to be solved which is overdetermined in the sense that more conditions have to be satisfied than there are unknown functions. to do a variety of such investigations efficiently, like a symmetry analysis, application of symmetries, determination of first integrals, differential factors, equivalent lagrangians, the strategy is to have one package (crack) for simplifying des and solving simple des as effectively as possible and to use this program as the main tool for all the above-mentioned investigations. for each investigation there is then only a short program necessary to just formulate the necessary conditions in the form of an overdetermined de-system and to call crack to solve this, possibly in a number of successive calls. the examples below shall indicate the range of possible applications. thomas wolf andreas brand lower bounds for solving linear diophantine equations on random access machines the problem of recognizing the language l_n (l_{n,k}) of solvable linear diophantine equations with n variables (and solutions from {0, …, k}^n) is considered. the languages ∪_{n∈N} l_n and ∪_{n∈N} l_{n,k} (the knapsack problem) are np-complete. the Ω(n^2) lower bound for l_{n,1} on linear search algorithms due to dobkin and lipton is generalized to an Ω(n^2 log(k + 1)) lower bound for l_{n,k}. the method of klein and meyer auf der heide is further improved to carry over the Ω(n^2) lower bound for l_{n,1} to random access machines (rams) in such a way that it holds for a large class of problems and for very small input sets. by this method, lower bounds that depend on the input size, as is necessary for l_n, are proved. thereby, an Ω(n^2 log(k + 1)) lower bound is obtained for rams recognizing l_n or l_{n,k}, for inputs from {0, …, (nk)^(o(n^2))}^n. friedhelm meyer auf der heide algorithm 610: a portable fortran subroutine for derivatives of the psi function d. e. amos fortran codes for estimating the one-norm of a real or complex matrix, with applications to condition estimation fortran 77 codes sonest and conest are presented for estimating the 1-norm (or the infinity-norm) of a real or complex matrix, respectively. the codes are of wide applicability in condition estimation since explicit access to the matrix, a, is not required; instead, matrix-vector products ax and a^t x are computed by the calling program via a reverse communication interface. the algorithms are based on a convex optimization method for estimating the 1-norm of a real matrix devised by hager.
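the sonest/conest codes above drive hager's estimator through a reverse-communication interface; the sketch below shows the underlying iteration itself in python, with the caller simply passing callbacks for the products ax and a^t x (function and variable names are mine):

    # minimal sketch of hager's 1-norm estimator, the method underlying the codes above.
    # the fortran codes use reverse communication; here the caller passes two callbacks.
    import numpy as np

    def hager_norm1_estimate(matvec, rmatvec, n, max_iter=5):
        x = np.full(n, 1.0 / n)
        est = 0.0
        for _ in range(max_iter):
            y = matvec(x)
            est = np.abs(y).sum()              # current estimate of ||a||_1 (a lower bound)
            xi = np.sign(y)
            xi[xi == 0] = 1.0
            z = rmatvec(xi)                    # subgradient information
            j = int(np.argmax(np.abs(z)))
            if np.abs(z[j]) <= z @ x:          # no ascent direction: local maximum reached
                break
            x = np.zeros(n)                    # move to the most promising unit vector
            x[j] = 1.0
        return est

    # usage: compare with the exact 1-norm (maximum column sum)
    rng = np.random.default_rng(0)
    a = rng.standard_normal((50, 50))
    est = hager_norm1_estimate(lambda v: a @ v, lambda v: a.T @ v, 50)
    print(est, np.abs(a).sum(axis=0).max())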
we derive new results concerning the behavior of hager's method, extend it to complex matrices, and make several algorithmic modifications in order to improve the reliability and efficiency. nicholas j. higham certification of algorithm 700 numerical tests of the sleign software for sturm-liouville problems marco marletta fast monte carlo algorithms for permutation groups laszlo babai gene cooperman larry finkelstein eugene luks Ákos seress on-line algorithms for steiner tree problems (extended abstract) piotr berman chris coulston algorithm 688; epdcol: a more efficient pdecol code the software package pdecol [7] is a popular code among scientists wishing to solve systems of nonlinear partial differential equations. the code is based on a method-of-lines approach, with collocation in the space variable to reduce the problem to a system of ordinary differential equations. there are three principal components: the basis functions employed in the collocation; the method used to solve the system of ordinary differential equations; and the linear equation solver which handles the linear algebra. this paper will concentrate on the third component, and will report on the improvement in the performance of pdecol resulting from replacing the current linear algebra modules of the code by modules which take full advantage of the special structure of the equations which arise. savings of over 50 percent in total execution time can be realized. p. keast p. h. muir linearly constrained discrete l1 problems richard h. bartels andrew r. conn an improved algorithm for ordered sequential random sampling k. aiyappan nair algorithm 697: univariate interpolation that has the accuracy of a third-degree polynomial hiroshi akima a simple but realistic model of floating-point computation w. s. brown well … it isn't quite that simple robert m. corless david j. jeffrey a sparse matrix package - part ii: special cases j. m. mcnamee an o(n^0.4)-approximation algorithm for 3-coloring this paper presents a polynomial-time algorithm to color any 3-colorable n-node graph with o(n^(2/5) log^(8/5) n) colors, improving the best previously known bound of o((n/log n)^(1/2)) colors. by reducing the number of colors needed to color a 3-colorable graph, the algorithm also improves the bound for k-coloring for fixed k ≥ 3 from the previous o((n/log n)^(1-1/(k-1))) colors to o(n^(1-1/(k-4/3)) log^(8/5) n) colors. an extension of the algorithm further improves the bounds. precise values appear in a table at the end of this paper. a. blum procedures for optimization problems with a mixture of bounds and general linear constraints philip e. gill walter murray michael a. saunders margaret h. wright algorithm 631: finding a bracketed zero by larkin's method of rational interpolation victor norton remark on "algorithm 573: nl2sol - an adaptive nonlinear least-squares algorithm" david m. gay the construction of fast portable multiplicative congruential random number generators c. d. kemp generations of permutations with non-unique elements t. j. rolfe algorithm 605: pbasic: a verifier program for american national standard minimal basic t. r. hopkins planar graph decomposition and all pairs shortest paths an algorithm is presented for generating a succinct encoding of all pairs shortest path information in a directed planar graph g with real-valued edge costs but no negative cycles. the algorithm runs in o(pn) time, where n is the number of vertices in g, and p is the minimum cardinality of a subset of the faces that cover all vertices, taken over all planar embeddings of g.
the algorithm is based on a decomposition of the graph into o(pn) outerplanar subgraphs satisfying certain separator properties. linear-time algorithms are presented for various subproblems including that of finding an appropriate embedding of g and a corresponding face-on-vertex covering of cardinality o(p), and of generating all pairs shortest path information in a directed outerplanar graph. greg n. frederickson a new 3d adaptive finite element scheme with 1-irregular hexahedral element meshes don morton john m. tyler algorithm 701; goliath - a software system for the exact analysis of rectangular rank-deficient sparse rational linear systems peter alfeld david j. eyre statistical techniques of timing verification timing verification of vlsi designs using statistical techniques such as those implemented in hitchcock's timing analysis [1] permits a far more precise assessment of machine performance than previous techniques. the accuracy of these results is affected by proper user specification of statistical input parameters and by the algorithmic treatment of the system design. since these items are both system and technology dependent, the system designer must understand them and apply appropriate statistical techniques in order to ensure a properly verified design. this paper both outlines the mathematical derivations and illustrates the magnitude of the improvements to be obtained. james h. shelly david r. tryon algorithm 778: l-bfgs-b: fortran subroutines for large-scale bound-constrained optimization l-bfgs-b is a limited-memory algorithm for solving large nonlinear optimization problems subject to simple bounds on the variables. it is intended for problems in which information on the hessian matrix is difficult to obtain, or for large dense problems. l-bfgs-b can also be used for unconstrained problems and in this case performs similarly to its predecessor, algorithm l-bfgs (harwell routine va15). the algorithm is implemented in fortran 77. ciyou zhu richard h. byrd peihuang lu jorge nocedal algorithm 715; specfun - a portable fortran package of special function routines and test drivers specfun is a package containing transportable fortran special function programs for real arguments and accompanying test drivers. components include bessel functions, exponential integrals, error functions and related functions, and gamma functions and related functions. w. j. cody on sparse matrix reordering for parallel factorization to minimize the amount of computation and storage for parallel sparse factorization, sparse matrices have to be reordered prior to factorization. we show that none of the popular ordering heuristics proposed before, namely, multiple minimum degree and nested dissection, perform consistently well over a range of matrices arising in diverse application domains. spectral partitioning has been previously proposed as a means of generating small vertex separators for nested dissection of sparse matrices, so that the resulting ordering is amenable to efficient distributed parallel factorization with good load balance and low inter-processor communication. we show that nested dissection using spectral partitioning performs well for matrices arising from finite-element discretizations, but results in excessive fill compared to the minimum degree ordering for unstructured matrices such as power matrices and those arising from circuit simulation.
the relative effectiveness of these two ordering schemes for parallel factorization is shown to vary widely for matrices arising from different application domains. we present an ordering strategy that performs consistently well for all matrix types. its ordering is comparable or better than either minimum degree or nested dissection for all matrices evaluated. performance results on the intel ipsc/860 are reported. b. kumar p. sadayappan c.-h. huang an exact analysis of the distribution of cycle times in a class of queueing networks prediction of detailed characteristics of the time delays experienced by customers in queueing networks is of great importance in various modelling and performance evaluation activities: operations research, computer systems and communication networks. their statistical properties have been investigated predominantly by simulation techniques with the exception of mean value analyses for which little's law is applied. theoretical studies of the probability distributions of time delays tend to be based on their laplace transforms, which are of limited use, can be inverted analytically only in very simple cases and present substantial computation problems for numerical inversion. an exact derivation is presented for the distribution of cycle times in so called tree-like queueing networks. the analysis is performed for a network structure which is such that it is not necessary to mark a special customer, so avoiding expansion of the state space. cycle time distribution is derived initially in the form of its laplace transform, from which its moments follow. a recurrence relation for a uniformly convergent discrete representation of the distribution then follows by a similar argument. finally, the numerical results obtained for some simple test networks are presented and compared with those corresponding to an approximate method, hence indicating the accuracy of the latter. p g harrison analysis of non-strict functional implementations of the dongarra-sorensen eigensolver we study the producer-consumer parallelism of eigensolvers composed of a tridiagonalization function, a tridiagonal solver, and a matrix multiplication, written in the non-strict functional programming language id. we verify the claim that non-strict functional languages allow the natural exploitation of this type of parallelism, in the framework of realistic numerical codes. we compare the standard top-down dongarra-sorensen solver with a new, bottom-up version. we show that this bottom-up implementation is much more space efficient than the top-down version. also, we compare both versions of the dongarra-sorensen solver with the more traditional ql algorithm, and verify that the dongarra-sorensen solver is much more efficient, even when run in a serial mode. we show that in a non-strict functional execution model, the dongarra-sorensen algorithm can run completely in parallel with the householder function. moreover, this can be achieved without any change in the code components. we also indicate how the critical path of the complete eigensolver can be improved. s. sur w. böhm partial orders for planarity and drawings (abstract) hubert de fraysseix pierre rosenstiehl algorithm 743; wapr: a fortran routine for calculating real values of the w-function we implement w-function approximation scheme described by barry et al. a range of tests of the approximations is included so that the code can be assessed on any given machine. 
users can calculate w(x) by specifying x itself or by specifying an offset from -exp(-1), the latter option necessitated by rounding errors that can arise for x close to -exp(-1). results of running the code on a sun workstation are included. d. a. barry s. j. barry p. j. culligan-hensley high-precision division and square root we present division and square root algorithms for calculations with more bits than are handled by the floating-point hardware. these algorithms avoid the need to multiply two high-precision numbers, speeding up the last iteration by as much as a factor of 10. we also show how to produce the floating-point number closest to the exact result with relatively few additional operations. alan h. karp peter markstein equilibrium states of runge-kutta schemes: part ii a theory is given that accounts for the observed behavior of runge-kutta codes when the stepsize is restricted by stability. the theory deals with the general case when the dominant eigenvalues of the jacobian may be a complex conjugate pair. this extends and generalizes the results of part i of this paper, which deal with the real case. familiarity with part i is assumed, but not essential. george hall partitioning by probability condensation a partitioning model is formulated in which components are assigned probabilities of being placed in bins separated by partitions. the expected number of nets crossing partitions is a quadratic function of these probabilities. minimization of this expected value forces condensation of the probabilities into a "definite" state representing a very good partitioning. the bipartitioning case is treated explicitly. j. blanks visibility representations of planar graphs (abstract) ioannis g. tollis remark: corrections and errors in john ivie's some macsyma programs for solving recurrence relations pedro celis algorithm 772: stripack: delaunay triangulation and voronoi diagram on the surface of a sphere stripack is a fortran 77 software package that employs an incremental algorithm to construct a delaunay triangulation and, optionally, a voronoi diagram of a set of points (nodes) on the surface of the unit sphere. the triangulation covers the convex hull of the nodes, which need not be the entire surface, while the voronoi diagram covers the entire surface. the package provides a wide range of capabilities including an efficient means of updating the triangulation with nodal additions or deletions. for n nodes, the storage requirement for the triangulation is 13n integer storage locations in addition to 3n nodal coordinates. using an off-line algorithm and work space of size 3n, the triangulation can be constructed with time complexity o(n log n). robert j. renka an algorithm for linear programming which requires o(((m+n)n^2 + (m+n)^1.5 n)l) arithmetic operations we present an algorithm for linear programming which requires o(((m + n)n^2 + (m + n)^1.5 n)l) arithmetic operations where m is the number of inequalities, and n is the number of variables. each operation is performed to a precision of o(l) bits. l is bounded by the number of bits in the input. p. m. vaidya expanding graphs and the average-case analysis of algorithms for matchings and related problems hall's theorem states that a bipartite graph has a perfect matching if and only if every set of vertices has at least as many neighbours as members. equivalently, it states that every non-maximum matching has an augmenting path if the graph is an expander with expansion 1.
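the augmenting-path fact quoted above is easiest to see next to the textbook operation it concerns. the sketch below is the classical single-augmenting-path (kuhn) method for bipartite matching, written in python with names of my choosing; it is the baseline operation, not the hopcroft-karp or other algorithms analysed in that paper.

    # minimal sketch of the classical augmenting-path algorithm for bipartite matching:
    # repeatedly look for a path from a free left vertex to a free right vertex that
    # alternates between non-matching and matching edges, and flip it.
    def max_bipartite_matching(adj, n_right):
        """adj[u] = list of right-side neighbours of left vertex u."""
        match_r = [-1] * n_right                  # match_r[v] = left partner of v, or -1

        def try_augment(u, visited):
            for v in adj[u]:
                if v in visited:
                    continue
                visited.add(v)
                # v is free, or its current partner can be re-matched elsewhere
                if match_r[v] == -1 or try_augment(match_r[v], visited):
                    match_r[v] = u
                    return True
            return False

        matching_size = 0
        for u in range(len(adj)):
            if try_augment(u, set()):
                matching_size += 1
        return matching_size, match_r

    # usage: a 3 x 3 example that has a perfect matching
    adj = [[0, 1], [0], [1, 2]]
    print(max_bipartite_matching(adj, 3))   # (3, [1, 0, 2])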
we use this insight to demonstrate that if a graph is an expander with expansion more than one, then every non-maximum matching has a short augmenting path and, therefore, the bipartite matching algorithm performs much better on such graphs than in the worst case. we then apply this idea to the average case analysis of various augmenting path algorithms and to the approximation of the permanent. in particular, we demonstrate that the following algorithms perform much better on the average than in the worst case. in fact, they will rarely exhibit their worst-case running times. hopcroft-karp's algorithm for bipartite matchings. micali-vazirani's and even-kariv's algorithms for non-bipartite matchings. gabow-tarjan's parallel algorithm for bipartite matchings. dinic's algorithm for k-factors and 0-1 network flows. jerrum-sinclair's approximation scheme for the permanent. it seems rather surprising that the algorithms which are the fastest known for worst-case inputs also do exceedingly well on almost every graph. r. motwani two fortran packages for assessing initial value methods we present a discussion and description of a collection of fortran routines designed to aid in the assessment of initial value methods for ordinary differential equations. although the overall design characteristics are similar to those of earlier testing packages [2,6] that were used for the comparison of methods [5,7], the details and objectives of the current collection are quite different. our principal objective is the development of testing tools that can be used to assess the efficiency and reliability of a standard numerical method without requiring significant modifications to the method and without the tools themselves affecting the performance of the method. w. h. enright j. d. pryce checking the calculation of gradients philip wolfe bounds for width two branching programs branching programs for the computation of boolean functions were first studied in the master's thesis of masek [7]. in a rather straightforward manner they generalize the concept of a decision tree to a decision graph. let p be a branching program with edges labelled by the boolean variables x_1, …, x_n and their complements. given an input a = (a_1, …, a_n) ∈ {0,1}^n, program p computes a function value f_p(a) in the following way. the nodes of p play the role of states or configurations. in particular, sinks play the role of final states or stopping configurations. the length of program p is the length of the longest path in p. following cobham [2], the capacity of the program is defined to be the logarithm to the base 2 of the number of nodes in p. length and capacity are lower bounds on time and space requirements for any reasonable model of sequential computation. clearly, any n-variable boolean function can be computed by a branching program of length n if the capacity is not constrained. since space lower bounds in excess of log n remain a fundamental challenge, we consider restricted branching programs in the hope of gaining insight into this problem and the closely related problem of time-space trade-offs. allan borodin danny dolev faith e. fich wolfgang paul how accurate should numerical routines be? charles b. dunham estimating small cell-loss ratios in atm switches via importance sampling the cell-loss ratio at a given node in an atm switch, defined as the steady-state fraction of packets of information that are lost at that node due to buffer overflow, is typically a very small quantity that is hard to estimate by simulation.
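estimating a very small probability by straightforward simulation is hopeless, and the abstract above continues with importance sampling as the remedy. the toy python sketch below shows only the basic change-of-measure idea on a rare exponential tail probability; it is unrelated to the markov-modulated on/off model of the paper, and all names are mine.

    # tiny illustration of importance sampling for a rare event: estimate p = pr[x > t]
    # for x ~ exp(1) by sampling from a "tilted" exponential with a smaller rate and
    # weighting each sample by the likelihood ratio f(x)/g(x).
    import math, random

    def tail_prob_is(t, n=100_000, rate=None, seed=1):
        rng = random.Random(seed)
        rate = rate if rate is not None else 1.0 / t      # tilted rate: mean pushed out to t
        total = 0.0
        for _ in range(n):
            x = rng.expovariate(rate)                     # sample from the tilted density g
            if x > t:
                total += math.exp(-x) / (rate * math.exp(-rate * x))   # weight f(x)/g(x)
        return total / n

    t = 20.0
    print(tail_prob_is(t), math.exp(-t))   # estimate vs the exact value e^(-20)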
cell losses are rare events, and importance sampling is sometimes the appropriate tool in this situation. however, finding the right change of measure is generally difficult. in this article, importance sampling is applied to estimate the cell-loss ratio in an atm switch modeled as a queuing network that is fed by several sources emitting cells according to a markov-modulated on/off process, and in which all the cells from the same source have the same destination. the change of measure is obtained via an adaptation of a heuristic proposed by chang et al. [1994] for intree networks. the numerical experiments confirm important efficiency improvements even for large nonintree networks and a large number of sources. experiments with different variants of the importance sampling methodology are also reported, and a number of practical issues are illustrated and discussed. minimizing capacity violations in a transshipment network the problem of minimizing capacity violation is a variation of the transshipment problem. it is equivalent to the problem of computing maximum mean surplus cuts which arises in the dual approach to the minimum cost network circulation problem. mccormick and ervolina [15] proposed an algorithm which computes a sequence of cuts with increasing mean surpluses, and stops when an optimal one is found. the mean surplus of this cut is equal to the minimum possible maximum capacity violation. mccormick and ervolina proved that the number of iterations in this algorithm is o(m). one iteration, i.e., finding the subsequent cut, amounts to computing maximum flow in an appropriate network. we prove that the number of iterations in this algorithm is o(n). this gives the best known upper bound o(n^2 m) for the problem. we also show a tight analysis of this algorithm for the case with integral capacities and demands, and present some improvements. tomasz radzik randomness-optimal sampling, extractors, and constructive leader election david zuckerman algorithm 557: pagp, a partitioning algorithm for (linear) goal programming problems [h] j. l. arthur a. ravindran sequential random sampling fast algorithms for selecting a random set of exactly k records from a file of n records are constructed. selection is sequential: the sample records are chosen in the same order in which they occur in the file. all procedures run in o(k) time. the "geometric" method has two versions: with or without o(k) auxiliary space. a further procedure uses hashing techniques and requires o(k) space. j. h. ahrens u. dieter a fully dynamic algorithm for maintaining the transitive closure valerie king garry sagert construction of extractors using pseudo-random generators (extended abstract) luca trevisan two fast implementations of the "minimal standard" random number generator although superficially time-consuming, on 32-bit computers the minimal standard random number generator can be implemented with surprising economy. david f. carta forests, frames, and games: algorithms for matroid sums and applications this paper presents improved algorithms for matroid partitioning problems, such as finding a maximum cardinality set of edges of a graph that can be partitioned into k forests. the notion of a clamp in a matroid sum is introduced. efficient algorithms for problems involving clamps are presented. applications of these algorithms to problems arising in the study of structural rigidity of graphs, the shannon switching game and others are given.
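carta's note above concerns the minimal standard generator x_{k+1} = 16807 x_k mod (2^31 - 1). the sketch below is schrage's factorization, the classic way to form the product without large intermediates; it is shown only to illustrate the arithmetic and is not necessarily either of the two implementations carta describes.

    # the "minimal standard" generator is x_{k+1} = 16807 * x_k mod (2^31 - 1).  schrage's
    # factorization splits the modulus so every intermediate fits in a signed 32-bit word.
    A, M = 16807, 2**31 - 1
    Q, R = M // A, M % A          # q = 127773, r = 2836; r < q guarantees no overflow

    def minstd_next(x):
        hi, lo = divmod(x, Q)
        x = A * lo - R * hi       # stays within signed 32-bit range because r < q
        return x if x > 0 else x + M

    # well-known check: starting from 1, the 10000th value is 1043618065
    x = 1
    for _ in range(10000):
        x = minstd_next(x)
    print(x)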
harold gabow herbert westermann discrete-time optimal control problems with general constraints this paper presents a computational procedure for solving combined discrete-time optimal control and optimal parameter selection problems subject to general constraints. the approach adopted is to convert the problem into a nonlinear programming problem which can be solved using standard optimization software. the main features of the procedure are the way the controls are parametrized and the conversion of all constraints into a standard form suitable for computation. the software is available commercially as a fortran program dmiser3 together with a companion program miser3 for solving continuous-time problems. m. e. fisher l. s. jennings sequential and parallel algorithms to find a k_5 minor andre kezdy patrick mcguinness algorithm 694: a collection of test matrices in matlab nicholas j. higham linear expected-time algorithms for connectivity problems (extended abstract) researchers in recent years have developed many graph algorithms that are fast in the worst case, but little work has been done on graph algorithms that are fast on the average. (exceptions include the work of angluin and valiant [1], karp [7], and schnorr [9].) in this paper we analyze the expected running time of four algorithms for solving graph connectivity problems. our goal is to exhibit algorithms whose expected time is within a constant factor of optimum and to shed light on the properties of random graphs. in section 2 we develop and analyze a simple algorithm that finds the connected components of an undirected graph with n vertices in o(n) expected time. in sections 3 and 4 we describe algorithms for finding the strong components of a directed graph and the blocks of an undirected graph in o(n) expected time. the time required for these three problems is Ω(m) in the worst case, where m is the number of edges in the graph, since all edges must be examined; but our results show that only o(n) edges must be examined on the average. in section 5 we present an algorithm for finding a minimum weight spanning forest in an undirected graph with edge weights in o(m) expected time. richard m. karp robert endre tarjan computing frobenius maps and factoring polynomials a new probabilistic algorithm for factoring univariate polynomials over finite fields is presented whose asymptotic running time improves upon previous results. to factor a polynomial of degree n over f_q, the algorithm uses o((n^2 + n log q)·(log n)^2 log log n) arithmetic operations in f_q. the main technical innovation is a new way to compute frobenius and trace maps in the ring of polynomials modulo the polynomial to be factored. joachim von zur gathen victor shoup are orthogonal transformations worthwhile for least squares problems? orthogonal transformations are important in many areas of numerical analysis. even though they may be used for the solution of large least squares systems, their suitability for this task has been exaggerated. for most problems normal equations are adequate and computationally less time-consuming. richard l. branham an improved primal simplex variant for pure processing networks in processing networks, ordinary network constraints are supplemented by proportional flow restrictions on arcs entering or leaving some nodes. this paper describes a new primal partitioning algorithm for solving pure processing networks using a working basis of variable dimension.
in testing against mpsx/370 on a class of randomly generated problems, a fortran implementation of this algorithm was found to be an order of magnitude faster. besides indicating the use of our methods in stand-alone fashion, the computational results also demonstrate the desirability of using these methods as a high-level module in a mathematical programming system. michael d. chang chou-hong j. chen michael engquist algorithm 650: efficient square root implementation on the 68000 two square root algorithms (for integer and floating point data types) are presented, which are simpler and more efficient than standard procedures. these could be effectively used as the basis of hardware-based square root generators as well as for software implementations. one possible application for an efficient square root routine would be in calculating trigonometric and exponential functions. (this application may be primarily of academic interest, however, since standard transcendental function generators would generally be more efficient.) three accompanying mc68000 implementations of the algorithm for 32-bit integer and ieee single- and double-precision data are available on the calgo listing. these programs return rounding status in the condition code register, and they exhibit the following approximate runtime performance at 8 mhz: 105-134 μs (integer); 180-222 μs (single precision); 558-652 μs (double precision). kenneth c. johnson designing software for one-dimensional partial differential equations users of software for solving partial differential equations are often surprised by its inability to formulate their problems. computer scientists speak of partial differential equations (pdes) as canonical coupled systems, typically in divergence form. physicists, chemists, and engineers start with the same description, but then add real-world things like "rankine-hugoniot" conditions, integro-differential operators, and eulerian (moving) coordinate systems. this paper describes a generalization of the classical pde formulation that allows users to formulate virtually any pde problem. the extended formulation has been used successfully on a wide range of nontrivial problems in the physical sciences that cannot even be written down in the classical form. n. l. schryer burnside's theorem d. binger algorithm 734; a fortran 90 code for unconstrained nonlinear minimization this paper describes a fortran 90 implementation of acm transactions on mathematical software algorithm 630, a minimization algorithm designed for use in a limited-memory environment. it includes implementation of the buckley-lenir method, nocedal's limited memory algorithm, and an experimental limited-memory implementation of a factored update due to powell, as well as a fairly standard quasi-newton implementation due originally to shanno. this algorithm uses a number of the new features of fortran 90 to offer capabilities that were not formerly available. a. g. buckley numerical experience with sequential quadratic programming algorithms for equality constrained nonlinear programming computational experience is given for a sequential quadratic programming algorithm when lagrange multiplier estimates, hessian approximations, and merit functions are varied to test for computational efficiency. indications of areas for further research are given. david f.
shanno kang hoh phua parallel implementation of domain decomposition techniques on intel's hypercube parallel implementation of domain decomposition techniques for elliptic pdes in rectangular regions is considered. this technique is well suited for parallel processing, since in the solution process the subproblems either are independent or can be easily converted into decoupled problems. more than 80% of execution time is spent on solving these independent and decoupled problems. the hypercube architecture is used for concurrent execution. the performance of the parallel algorithm is compared against the sequential version. the speed-up, efficiency, and communication factors are studied as functions of the number of processors. extensive tests are performed to find, for a given mesh size, the number of subregions and nodes that minimize the overall execution time. m. haghoo w. proskurowski algorithm 606: nitpack: an interactive tree package p. w. gaffney j. wooten and k. a. kessel and w. r. mckinney algorithm 662: a fortran software package for the numerical inversion of the laplace transform based on weeks' method b. s. garbow g. giunta j. n. lyness a. murli a fast normal random number generator a method is presented for generating pseudorandom numbers with a normal distribution. the technique uses the ratio of uniform deviates method discovered by kinderman and monahan with an improved set of bounding curves. an optimized quadratic fit reduces the expected number of logarithm evaluations to 0.012 per normal deviate. the method gives a theoretically correct distribution and can be implemented in 15 lines of fortran. timing and source size comparisons are made with other methods for generating normal deviates. the proposed algorithm compares favorably with some of the better algorithms. joseph l. leva higher-dimensional voronoi diagrams in linear expected time this work is the first to validate theoretically the suspicions of many researchers --- that the "average" voronoi diagram is combinatorially quite simple and can be constructed quickly. specifically, assuming that dimension d is fixed, and that n input points are chosen independently from the uniform distribution on the unit d-ball, it is proved that the expected number of simplices of the dual of the voronoi diagram is o(n) (exact constants are derived for the high-order term), and a relatively simple algorithm exists for constructing the voronoi diagram in o(n) expected time. it is likely that the methods developed in the analysis will be applicable to other related quantities and other probability distributions. r. a. dwyer corrections to "a discrete-time paradigm to evaluate skew performance in a multimedia atm multiplexer" a. lombardo g. morabito g. schembra a mathematical software environment several hundred scientists and engineers work at amoco production company's tulsa research center. naturally, scientific computing is an important part of their research in petroleum exploration and production. general purpose mathematical and statistical software is often overlooked when it could be used in place of writing code. our strategy for increasing the awareness and usage of existing software is through advertisement of software via a reusable software catalog, through regular in-house presentations of a mathematical software seminar, and through the development of a mathematical software environment, a collection of programs and procedures created to encourage the use of general purpose mathematical software. donald l.
williams the design of ma48: a code for the direct solution of sparse unsymmetric linear systems of equations we describe the design of a new code for the direct solution of sparse unsymmetric linear systems of equations. the new code utilizes a novel restructuring of the symbolic and numerical phases, which increases speed and saves storage without sacrifice of numerical stability. other features include switching to full-matrix processing in all phases of the computation enabling the use of all three levels of blas, treatment of rectangular or rank-deficient matrices, partial factorization, and integrated facilities for iterative refinement and error estimation. i. s. duff j. k. reid saving time and space in spline calculations george c bush angular resolution of straight-line drawings (abstract) michael kaufmann a variable order runge-kutta method for initial value problems with rapidly varying right-hand sides explicit runge-kutta methods (rkms) are among the most popular classes of formulas for the approximate numerical integration of nonstiff, initial value problems. however, high-order runge-kutta methods require more function evaluations per integration step than, for example, adams methods used in pece mode, and so, with rkms, it is especially important to avoid rejected steps. steps are often rejected when certain derivatives of the solutions are very large for part of the region of integration. this corresponds, for example, to regions where the solution has a sharp front or, in the limit, some derivative of the solution is discontinuous. in these circumstances the assumption that the local truncation error is changing slowly is invalid, and so any step-choosing algorithm is likely to produce an unacceptable step. in this paper we derive a family of explicit runge-kutta formulas. each formula is very efficient for problems with smooth solutions as well as problems having rapidly varying solutions. each member of this family consists of a fifth-order formula that contains imbedded formulas of all orders 1 through 4. by computing solutions at several different orders, it is possible to detect sharp fronts or discontinuities before all the function evaluations defining the full runge-kutta step have been computed. we can then either accept a lower order solution or abort the step, depending on which course of action seems appropriate. the efficiency of the new algorithm is demonstrated on the detest test set as well as on some difficult test problems with sharp fronts or discontinuities. j. r. cash alan h. karp algorithm 681: intbis, a portable interval newton/bisection package we present a portable software package for finding all real roots of a system of nonlinear equations within a region defined by bounds on the variables. where practical, the package should find all roots with mathematical certainty. though based on interval newton methods, it is self-contained. it allows various control and output options and does not require programming if the equations are polynomials; it is structured for further algorithmic research. its practicality does not depend in a simple way on the dimension of the system or on the degree of nonlinearity. r. baker kearfott manuel novoa remark on algorithm 566 we report the development of second-derivative fortran routines to supplement algorithm 566 developed by j. more et al. (acm trans. math. softw. 7, 14-41, 136--140, 1981). algorithm 566 provides function and gradient subroutines of 18 test functions for multivariate minimization.
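as a hedged illustration of the kind of routine algorithm 566 supplies and of the hessian segments this remark adds, here is a minimal python/numpy sketch (not the fortran code itself) of one classical test function, the two-dimensional rosenbrock function, with its analytic gradient and hessian.

    import numpy as np

    def rosenbrock(x):
        # f(x) = 100*(x2 - x1^2)^2 + (1 - x1)^2, a standard minimization test function
        return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

    def rosenbrock_grad(x):
        # analytic gradient
        return np.array([
            -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
            200.0 * (x[1] - x[0]**2),
        ])

    def rosenbrock_hess(x):
        # analytic hessian: the second-derivative information the remark adds to the test set
        return np.array([
            [1200.0 * x[0]**2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
            [-400.0 * x[0], 200.0],
        ])

    # eigenvalues of the hessian along a minimization path indicate conditioning
    print(np.linalg.eigvalsh(rosenbrock_hess(np.array([-1.2, 1.0]))))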
our supplementary hessian segments enable users to test optimization software that requires second derivative information. eigenvalue analysis throughout the minimization is now possible, with the goal of better understanding progress by different minimization algorithms and the relation of progress to eigenvalue distribution and condition number. victoria z. averbukh samuel figueroa tamar schlick binomial random variate generation existing binomial random-variate generators are surveyed, and a new generator designed for moderate and large means is developed. the new algorithm, btpe, has fixed memory requirements and is faster than other such algorithms, both when a single variate and when many variates are needed. voratas kachitvichyanukul bruce w. schmeiser a problem in counting digits jack distad ronald w gatterdam existence and uniqueness theorems for formal power series solutions of analytic differential systems c. j. rust g. j. reid a. d. wittkopf remark on "algorithm 507: procedures for quintic natural spline interpolation" r. j. hanson algorithm 609: a portable fortran subroutine for the bickley functions ki_n(x) d. e. amos algorithm 649: a package for computing trigonometric fourier coefficients based on lyness's algorithm we present a package that allows the computation of the trigonometric fourier coefficients of a smooth function. the function can be provided as a subprogram or as a data list of function values at equally spaced points. the computational cost of the algorithm does not depend on the required number of fourier coefficients. numerical results of comparative tests with a standard integrator for oscillatory functions are also reported. g. giunta a. murli a concurrent neural network algorithm for the traveling salesman problem a binary neuromorphic data structure is used to encode the n-city traveling salesman problem (tsp). in this representation the computational complexity, in terms of number of neurons, is reduced from hopfield and tank's o(n^2) to o(n log_2 n). a continuous synchronous neural network algorithm, in conjunction with the lagrange multiplier, is used to solve the problem. the algorithm has been implemented on the ncube hypercube multiprocessor. this algorithm converges faster and has a higher probability to reach a valid tour than previously available results. n. toomarian making hard constraints soft fred t krogh a bisection method for systems of nonlinear equations a. eiger k. sikorski f. stenger optimal asynchronous newton method for the solution of nonlinear equations a. bojanczyk a functional description of analyze: a computer-assisted analysis system for linear programming models harvey greenberg algorithm 552: solution of the constrained l_1 linear approximation problem [f4] i. barrodale f. d. k. roberts remark on "algorithm 539: basic linear algebra subprograms for fortran usage" david s. dodson roger g. grimes on taylor series and stiff equations david barton note on local methods of univariate interpolation hiroshi akima a logarithmic poisson execution time model for software reliability measurement a new software reliability model is developed that predicts expected failures (and hence related reliability quantities) as well as or better than existing software reliability models, and is simpler than any of the models that approach it in predictive validity. the model incorporates both execution time and calendar time components, each of which is derived. the model is evaluated, using actual data, and compared with other models. j. d. musa k.
okumoto remark on algorithm 752 the triangulation-based scattered-data fitting package srfpack was updated for (a) compatibility with a revised interface to the triangulation package tripack, (b) the elimination of potential errors in the treatment of tension factors and in the extrapolation procedure, and (c) the addition of a more accurate local gradient-estimation procedure and a simple but portable contour-plotting capability. robert j. renka for unknown-but-bounded errors, interval estimates are often better than averaging g. william walster vladik kreinovich a fast planar partition algorithm, ii k. mulmuley a revised simplex method with integer q-matrices we describe a modification of the simplex formulas in which q-matrices are used to implement exact computations with an integer multiprecision library. our motivation comes from the need for efficient and exact incremental solvers in the implementation of constraint solving languages such as prolog. we explain how to reformulate the problem and the different steps of the simplex algorithm. we compare some measurements obtained with integer and rational computations. david-olivier azulay jean-françois pique computing selected eigenvalues of sparse unsymmetric matrices using subspace iteration this paper discusses the design and development of a code to calculate the eigenvalues of a large sparse real unsymmetric matrix that are the rightmost, leftmost, or are of the largest modulus. a subspace iteration algorithm is used to compute a sequence of sets of vectors that converge to an orthonormal basis for the invariant subspace corresponding to the required eigenvalues. this algorithm is combined with chebychev acceleration if the rightmost or leftmost eigenvalues are sought, or if the eigenvalues of largest modulus are known to be the rightmost or leftmost eigenvalues. an option exists for computing the corresponding eigenvectors. the code does not need the matrix explicitly since it only requires the user to multiply sets of vectors by the matrix. sophisticated and novel iteration controls, stopping criteria, and restart facilities are provided. the code is shown to be efficient and competitive on a range of test problems. i. s. duff j. a. scott an adaptive quadrature routine j r d'errico further results on equivalence and decomposition in queueing network models this paper addresses three aspects related to the notion of exact equivalence in queueing models. in many cases the parameters of a system equivalent to a given model involve only a small subset of conditional probabilities of the state of the original model given the equivalent one. it is shown that meaningful bounds may be obtained for the conditional probabilities of interest with little computational effort. such bounds are useful in assessing processing capacities as well as the accuracy of approximate solutions. as a second point it is shown that the notion of exact equivalence may be easily extended to networks with non-exponential servers. this is done both for the method of supplementary variables and for the imbedded markov chain technique. qualitative analysis of approximation methods is also discussed. finally, numerical methods based on the notion of exact equivalence, i.e. operating on conditional probabilities, are considered. alexandre brandwajn a short note on results of testing two random number generators joseph m.
saur dual integer linear programs and the relationship between their optima we consider dual pairs of packing and covering integer linear programs. best possible bounds are found between their optimal values. tight inequalities are obtained relating the integral optima and the optimal rational solutions. r aharoni p erdös n linial loss-less condensers, unbalanced expanders, and extractors an extractor is a procedure which extracts randomness from a defective random source using a few additional random bits. explicit extractor constructions have numerous applications and obtaining such constructions is an important derandomization goal. trevisan recently introduced an elegant extractor construction, but the number of truly random bits required is suboptimal when the input source has low min-entropy. significant progress toward overcoming this bottleneck has been made, but so far has required complicated recursive techniques that lose the simplicity of trevisan's construction. we give a clean method for overcoming this bottleneck by constructing loss-less condensers, which compress the n-bit input source without losing any min-entropy, using o(log n) additional random bits. our condensers are built using a simple modification of trevisan's construction, and yield the best extractor constructions to date. loss-less condensers also produce unbalanced bipartite expander graphs with small (polylogarithmic) degree d and very strong expansion of (1-ε)d. we give other applications of our construction, including dispersers with entropy loss o(log n), depth two super-concentrators whose size is within a polylog of optimal, and an improved hardness of approximation result. amnon ta-shma christopher umans david zuckerman estimation of the inverse function for random variate generation a regression method for estimating the inverse of a continuous cumulative probability function f(x) is presented. it is assumed that an ordered sample, x_1, …, x_n, of identically and independently distributed random variables is available. a reference distribution f0(x) with known inverse f0^-1(p) is used to calculate the quantities w_i = i ln[f(x_i)/f0(x_(i+1))]. these quantities are used to estimate the function γ(p) = p d ln f0[f^-1(p)]/dp, from which an estimate of f^-1(p) is derived. the method produces an estimate in a form that is convenient for random variate generation. the procedure is illustrated using data from a study of oil and gas lease bidding. stephen c. hora implementation of the gibbs-poole-stockmeyer and gibbs-king algorithms john g. lewis implementing complex elementary functions using exception handling algorithms are developed for reliable and accurate evaluations of the complex elementary functions required in fortran 77 and fortran 90, namely, cabs, csqrt, cexp, clog, csin, and ccos. the algorithms are presented in a pseudocode that has a convenient exception-handling facility. a tight error bound is derived for each algorithm. corresponding fortran programs for an ieee environment have also been developed to illustrate the practicality of the algorithms, and these programs have been tested very carefully to help confirm the correctness of the algorithms and their error bounds. the results of these tests are included in the paper, but the fortran programs are not. t. e. hull thomas f.
fairgrieve ping-tak peter tang remark on algorithm 620 tim hopkins efficient parallel solution of linear systems the most efficient known parallel algorithms for inversion of a nonsingular n × n matrix a or solving a linear system ax = b over the rationals require (log n)^2 time and m(n)·n^0.5 processors (where m(n) is the number of processors required in order to multiply two n × n rational matrices in time o(log n).) furthermore, all known polylog time algorithms for those problems are unstable: they require the calculation to be done with perfect precision; otherwise they give no results at all. this paper describes parallel algorithms that have good numerical stability and remain efficient as n grows large. in particular, we describe a quadratically convergent iterative method that gives the inverse (within the relative precision 2^(-n^o(1))) of an n × n rational matrix a with condition ≤ n^o(1) in (log n)^2 time using m(n) processors. this is the optimum processor bound and a factor n^0.5 improvement of known processor bounds for polylog time matrix inversion. it is the first known polylog time algorithm that is numerically stable. the algorithm relies on our method of computing an approximate inverse of a that involves o(log n) parallel steps and n^2 processors. also, we give a parallel algorithm for solution of a linear system ax = b with a sparse n × n symmetric positive definite matrix a. if the graph g(a) (which has n vertices and has an edge for each nonzero entry of a) is s(n)-separable, then our algorithm requires only o((log n)(log s(n))^2) time and |e| + m(s(n)) processors. the algorithm computes a recursive factorization of a so that the solution of any other linear system ax = b′ with the same matrix a requires only o(log n log s(n)) time and |e| + s(n)^2 processors. v pan j reif note on kovacic's algorithm felix ulmer jacques-arthur weil efficient and adaptive lagrange-multiplier methods for continuous nonlinear optimization tao wang benjamin w. wah graph partitioning and ordering for interactive proximity queries andy wilson eric larsen dinesh manocha ming c. lin constrained nonlinear least squares: an exact penalty approach with projected structured quasi-newton updates this paper is concerned with the development, numerical implementation, and testing of an algorithm for solving constrained nonlinear least squares problems. the algorithm is an adaptation of the least squares case of an exact penalty method for solving nonlinearly constrained optimization problems due to coleman and conn. it also uses the ideas of nocedal and overton for handling quasi-newton updates of projected hessians, those of dennis, gay, and welsch for approaching the structure of nonlinear least squares hessians, and those of murray and overton for performing line searches. this method has been tested on a selection of problems listed in the collection of hock and schittkowski. the results indicate that the approach taken here is a viable alternative for least squares problems to the general nonlinear methods studied by hock and schittkowski. nezam mahdavi-amiri richard h. bartels bezoutian formulas a la macaulay for the multivariate resultant c. d'andrea a. dickenstein efficient algorithms for the hitchcock transportation problem we consider the hitchcock transportation problem on n supply points and k demand points when n is much greater than k. the problem is solved in o(n^2 k log n + n^2 log^2 n) time if n > k log k.
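the hitchcock transportation problem just stated can be written down as a small linear program; the following hedged python sketch (a plain lp solve via scipy, not the splitter-finding algorithm of this entry) sets it up for n = 3 supply points and k = 2 demand points with made-up supplies, demands, and costs.

    import numpy as np
    from scipy.optimize import linprog

    supply = np.array([20.0, 30.0, 25.0])      # n = 3 supply points (illustrative values)
    demand = np.array([45.0, 30.0])            # k = 2 demand points; totals balance
    cost = np.array([[4.0, 6.0],               # cost[i, j]: shipping one unit from i to j
                     [2.0, 5.0],
                     [3.0, 1.0]])

    n, k = cost.shape
    c = cost.reshape(-1)                       # decision variables x[i, j], flattened row-wise

    # equality constraints: each supply is shipped out, each demand is met
    a_eq = np.zeros((n + k, n * k))
    for i in range(n):
        a_eq[i, i * k:(i + 1) * k] = 1.0
    for j in range(k):
        a_eq[n + j, j::k] = 1.0
    b_eq = np.concatenate([supply, demand])

    res = linprog(c, A_eq=a_eq, b_eq=b_eq, bounds=(0, None))
    print(res.x.reshape(n, k), res.fun)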
further, applying a geometric method named splitter finding and randomization, we improve the time complexity for a case in which the ratio c of the least supply to the maximum supply satisfies the inequality log cn < n/(k^4 log n). indeed, if n < k^5 log^3 k and c = poly(n), the problem is solved in o(kn) time, which is optimal. takeshi tokuyama jun nakano a simple min-cut algorithm we present an algorithm for finding the minimum cut of an undirected edge-weighted graph. it is simple in every respect. it has a short and compact description, is easy to implement, and has a surprisingly simple proof of correctness. its runtime matches that of the fastest algorithm known. the runtime analysis is straightforward. in contrast to nearly all approaches so far, the algorithm uses no flow techniques. roughly speaking, the algorithm consists of about |v| nearly identical phases each of which is a maximum adjacency search. mechthild stoer frank wagner algorithm 698; dcuhre: an adaptive multidimensional integration routine for a vector of integrals jarle berntsen terje o. espelid alan genz reducing edge connectivity to vertex connectivity zvi galil giuseppe f. italiano algorithm 785: a software package for computing schwarz-christoffel conformal transformation for doubly connected polygonal regions a software package implementing schwarz-christoffel conformal transformation (or mapping) of doubly connected polygonal regions is fully described in this article from mathematical, numerical, and practical perspectives. the package solves the so-called accessory parameter problem associated with the mapping function as well as evaluates forward and inverse maps. the robustness of the package is reflected by the flexibility in choosing the accuracy of the parameters to be computed, the speed of computation, the ability of mapping "difficult" regions (to be specified in section 2), and being user friendly. several examples are presented to demonstrate the capabilities of the package. chenglie hu algorithm 615: the best subset of parameters in least absolute value regression r. d. armstrong p. o. beck m. t. kung algorithm 603: colrow and arceco: fortran packages for solving certain almost block diagonal linear systems by modified alternate row and column elimination j. c. díaz g. fairweather p. keast statistical independence properties of inversive pseudorandom vectors over parts of the period this article deals with the inversive method for generating uniform pseudorandom vectors. statistical independence properties of the generated pseudorandom vector sequences over parts of the period are considered based on the discrete discrepancy of corresponding point sets. an upper bound for the average value of these discrete discrepancies is established. frank emmerich admit-1: automatic differentiation and matlab interface toolbox admit-1 enables the computation of sparse jacobian and hessian matrices, using automatic differentiation technology, from a matlab environment. given a function to be differentiated, admit-1 will exploit sparsity if present to yield sparse derivative matrices (in sparse matlab form). a generic automatic differentiation tool, subject to some functionality requirements, can be plugged into admit-1; examples include adol-c (c/c++ target functions) and admat (matlab target functions). admit-1 also allows for the calculation of gradients and has several other related functions. this article provides an introduction to the design and usage of admit-1. thomas f.
coleman arun verma eigenvalue and eigenfunction computations for sturm-liouville problems paul b. bailey anton zettl advanced notations in mathematica this paper outlines the functionality and implementation of the notation package, an extension to the mathematica front end, allowing the user to introduce advanced notations. following this, several advanced example notations are presented. these include complete and functioning notations for both dirac's bra-ket notation as well as for tensorial expressions. proper functioning notations for both of these objects have not been previously presented in a major symbolic computation system, if at all. jason harris recent results in non-uniform random variate generation a selective survey is given of new methods in non-uniform random variate generation. luc devroye computing hyperelliptic integrals for surface measure of ellipsoids an algorithm for computing a class of hyperelliptic integrals and for determining the surface measure of ellipsoids is described. the algorithm is used to construct an omnibus optimal-design criterion. charles f. dunkl donald e. ramirez a technique for solving ordinary differential equations using riemann's p-functions this paper presents an algorithmic approach to symbolic solution of 2nd order linear odes. the algorithm consists of two parts. the first part involves complete algorithms for hypergeometric equations and hypergeometric equations of confluent type. these algorithms are based on riemann's p-functions and hukuhara's p-functions respectively. another part involves an algorithm for transforming a given equation to a hypergeometric equation or a hypergeometric equation of confluent type. the transformation is possible if a given equation satisfies certain conditions, otherwise it works only as one of heuristic methods. however our method can solve many equations which seem to be very difficult to solve by conventional methods. shunro watanabe solving ordinary differential equations with discontinuities c. w. gear o. osterby spreadsheet calculations of probabilities from the f, t, χ2, and normal distribution by computing probabilities from the normalization of the f distribution (instead of by numerical integration methods), statistical capabilities in spreadsheet operations can be greatly expanded and enhanced. frank g. landram james r. cook marvin johnston use of genetic algorithms for optimization in digital control of dynamic systems this paper presents a method to optimize proportional- integral-derivative (pid) control parameters, given a discrete model of the controlled process. this method is based on holland's genetic algorithm (ga). it does not require a mathematical model of the controller to represent its dynamic behavior. it gives a solution that is not only optimal but also meets engineering constraints. genetic algorithms do a global search without derivatives for points in a multi-dimensional search space. this method works for non-linear as well as linear systems. the objective function of the ga is based on the integrated product of time and absolute error (itae). the performance of the ga is compared to that of the other optimization methods. the results show that it is simple and effective. rajeshwar prasad srivastava an alternative implementation of variable step-size multistep formulas for stiff odes k. r. jackson r. 
sacks-davis a practical parallel algorithm for solving band symmetric positive definite systems of linear equations we give a practical parallel algorithm for solving band symmetric positive definite systems of linear equations in o(m log n) time using nm/log n processors. here n denotes the system size and m its bandwidth. hence, the algorithm is efficient. for tridiagonal systems, the algorithm runs in o(log n) time using n/log n processors. furthermore, an improved version runs in o(log m log n) time using nm^2/(log m log n) processors. ilan bar-on interpolation of data on the surface of a sphere robert j. renka private approximation of np-hard functions the notion of private approximation was introduced recently by feigenbaum, fong, strauss and wright. informally, a private approximation of a function f is another function f̂ that approximates f in the usual sense, but does not yield any information on x other than what can be deduced from f(x). as such, f̂(x) is useful for private computation of f(x) (assuming that f̂ can be computed more efficiently than f). in this work we examine the properties and limitations of this new notion. specifically, we show that for many np-hard problems, the privacy requirement precludes non-trivial approximation. this is the case even for problems that otherwise admit very good approximation (e.g., problems with ptas). on the other hand, we show that slightly relaxing the privacy requirement, by means of leaking "just a few bits of information" about x, again permits good approximation. shai halevi robert krauthgamer eyal kushilevitz kobbi nissim a generalized envelope method for sparse factorization by rows a generalized form of the envelope method is proposed for the solution of large sparse symmetric and positive definite matrices by rows. the method is demonstrated to have practical advantages over the conventional column-oriented factorization using compressed column storage or the multifrontal method using full frontal submatrices. joseph w. h. liu applications of best approximation charles dunham comparison of polynomial-oriented computer algebra systems robert h. lewis me28: a sparse unsymmetric linear equation solver for complex equations i. s. duff space-efficient implementations of graph search methods robert e. tarjan implementing a random number package with splitting facilities a portable set of software tools is described for uniform random variate generation. it provides for multiple generators running simultaneously, and each generator has its sequence of numbers partitioned into many long (disjoint) substreams. simple procedure calls allow the user to make any generator "jump" ahead to the beginning of its next substream, back to the beginning of its current substream, or back to the beginning of its first substream. … implementation issues are discussed. … a pascal implementation for 32-bit computers is described. ---from the authors' abstract pierre l'ecuyer serge côte remark on algorithm 526 albrecht preusser provably monotone approximations charles b dunham algorithm 594: software for relative error analysis john l. larson mary e. pasternak john a. wisniewski lower bounds and fast algorithms for sequence acceleration george m. trojan a faster algorithm for finding the minimum cut in a graph we consider the problem of finding the minimum capacity cut in a network g with n nodes. this problem has applications to network reliability and survivability and is useful in subroutines for other network optimization problems.
one can use a maximum flow problem to find a minimum cut separating a designated source node s from a designated sink node t, and by varying the sink node one can find a minimum cut in g as a sequence of at most 2n - 2 maximum flow problems. we then show how to reduce the running time of these 2n - 2 maximum flow algorithms to the running time for solving a single maximum flow problem. the resulting running time is o(nm log(n^2/m)) for finding the minimum cut in either a directed or undirected network. the algorithm also determines the arc connectivity of either a directed or undirected network in o(nm) steps. jianxiu hao james b. orlin corrigendum: "algorithm 650: efficient square root implementation on the 68000" [acm trans. math. software 13 (1987), no. 2, 138--151] kenneth c. johnson the distribution of queuing network states at input and output instants k. c. sevcik i. mitrani algorithm 669: brkf45: a fortran subroutine for solving first-order systems of nonstiff initial value problems for ordinary differential equations j. r. cash combinatorial algorithms: generation, enumeration, and search donald l. kreher douglas r. stinson householder reduction of linear equations this tutorial discusses householder reduction of n linear equations to a triangular form which can be solved by back substitution. the main strength of the method is its unconditional numerical stability. we explain how householder reduction can be derived from elementary-matrix algebra. the method is illustrated by a numerical example and a pascal procedure. we assume that the reader has a general knowledge of vector and matrix algebra but is less familiar with linear transformation of a vector space. ---author's abstract per brinch hansen generating quadratic bilevel programming test problems this paper describes a technique for generating sparse or dense quadratic bilevel programming problems with a selectable number of known global and local solutions. the technique described here does not require the solution of any subproblems. in addition, since most techniques for solving these problems begin by solving the corresponding relaxed quadratic program, the global solutions are constructed to be different from the global solution of this relaxed problem in a selectable number of upper- and lower-level variables. finally, the problems that are generated satisfy the requirements imposed by all of the solution techniques known to the authors. paul h. calamai luis n. vicente a rejection technique for sampling from t-concave distributions a rejection algorithm that uses a new method for constructing simple hat functions for a unimodal, bounded density f is introduced called "transformed density rejection." it is based on the idea of transforming f with a suitable transformation t such that t(f(x)) is concave. f is then called t-concave, and tangents of t(f(x)) at the mode and at a point on the left and right side are used to construct a hat function with a table-mountain shape. it is possible to give conditions for the optimal choice of these points of contact. with t(x) = -1/√x, the method can be used to construct a universal algorithm that is applicable to a large class of unimodal distributions, including the normal, beta, gamma, and t-distribution. wolfgang hörmann note on a pseudorandom number generator james fullerton comment on gamma deviate generation philip a. houle a note on complex division an algorithm for computing the quotient of two complex numbers is modified to make it more robust in the presence of underflows. g. w.
stewart finding the repeated median regression line the repeated median regression line is a robust regression estimate, having a maximal 50% breakdown point. this paper presents an o(n(log n)^2) algorithm for finding the repeated median regression line through n points in the plane. andrew stein michael werman theoretical and practical aspects of combinatorial problem solving martin groetschel a general approximation technique for constrained forest problems we present a general approximation technique for a large class of graph problems. our technique mostly applies to problems of covering, at minimum cost, the vertices of a graph with trees, cycles or paths satisfying certain requirements. in particular, many basic combinatorial optimization problems fit in this framework, including the shortest path, minimum spanning tree, minimum-weight perfect matching, traveling salesman and steiner tree problems. our technique produces approximation algorithms that run in o(n^2 log n) time and come within a factor of 2 of optimal for most of these problems. for instance, we obtain a 2-approximation algorithm for the minimum-weight perfect matching problem under the triangle inequality. our running time of o(n^2 log n) compares favorably with the best strongly polynomial exact algorithms running in o(n^3) time for dense graphs. a similar result is obtained for the 2-matching problem and its variants. we also derive the first approximation algorithms for many np-complete problems, including the non-fixed point-to-point connection problem, the exact path partitioning problem and complex location-design problems. moreover, for the prize-collecting traveling salesman or steiner tree problems, we obtain 2-approximation algorithms, therefore improving the previously best-known performance guarantees of 2.5 and 3, respectively [4]. michel x. goemans david p. williamson kink-free deformations of polygons we consider a discrete version of the whitney-graustein theorem concerning regular equivalence of closed curves. two regular polygons p and p', i.e. polygons without overlapping adjacent edges, are called regularly equivalent if there is a continuous one-parameter family p_s, 0 ≤ s ≤ 1, of regular polygons with p_0 = p and p_1 = p'. geometrically the one-parameter family is a kink-free deformation transforming p into p'. the winding number of a polygon is a complete invariant of its regular equivalence class. we develop a linear algorithm that determines a linear number of elementary steps to deform a regular polygon into any other regular polygon with the same winding number. g. vegter an abstract parallel graph reduction machine kenneth r. traub algorithm 639: to integrate some infinite oscillating tails james lyness gwendolen hines algorithm 641: exact solution of general systems of linear equations jörn springer approximation algorithms for clustering problems david b. shmoys algorithm 572: solution of the helmholtz equation for the dirichlet problem on general bounded three-dimensional regions [d3] dianne p. o'leary olof widlund the curve of least energy b. k. p. horn tnpack - a truncated newton minimization package for large-scale problems: i. algorithm and usage tamar schlick aaron fogelson a comparison of several methods of integrating stiff ordinary differential equations on parallel computing architectures many physical systems lead to initial value problems where the system of stiff ordinary differential equations is loosely coupled.
thus, in some cases the variables may be directly mapped onto sparsely connected parallel architectures such as the hypercube. this paper investigates various methods of implementing gear's algorithm on parallel computers. two conventional corrector methods utilize either functional or newton-raphson iteration. we consider both alternatives and show that they exhibit similar speedups on an n node hypercube. in addition a polynomial corrector is investigated. it has the advantage of not having to solve a linear system as in the newton-raphson method, yet it converges faster than functional iteration. a. bose i. nelken j. gelfand algorithm 624: triangulation and interpolation at arbitrarily distributed points in the plane robert j. renka the solution of certain two-dimensional markov models a class of two-dimensional birth-and-death processes, with applications in many modelling problems, is defined and analysed in the steady-state. these are processes whose instantaneous transition rates are state-dependent in a restricted way. generating functions for the steady-state distribution are obtained by solving a functional equation in two variables. that solution method lends itself readily to numerical implementation. some aspects of the numerical solution are discussed, using a particular model as an example. g. fayolle p. j. b. king i. mitrani gauss-legendre quadrature greg fee remark on algorithm 590: fortran subroutines for computing deflating subspaces with specified spectrum p. hr. petkov n. d. christov m. m. konstantinov gprof: a call graph execution profiler large complex programs are composed of many small routines that implement abstractions for the routines that call them. to be useful, an execution profiler must attribute execution time in a way that is significant for the logical structure of a program as well as for its textual decomposition. this data must then be displayed to the user in a convenient and informative way. the gprof profiler accounts for the running time of called routines in the running time of the routines that call them. the design and use of this profiler is described. susan l. graham peter b. kessler marshall k. mckusick eigenvalues, flows and separators of graphs f. r. k. chung s.-t. yau computing gröbner fans of toric ideals birkett huber on the reduction of linear systems of difference equations this paper deals with linear systems of difference equations whose coefficients admit generalized factorial series representations at z = ∞. we shall give a criterion by which a given system is determined to have a regular singularity. in the same manner, we give an algorithm, implementable in a computer algebra system, which reduces the system of difference equations to an irreducible form in a finite number of steps. m. a. barkatou algorithm 602: hurry: an acceleration algorithm for scalar sequences and series theodore fessler william ford and david a. smith an mebdf code for stiff initial value problems in two recent papers one of the present authors has proposed a class of modified extended backward differentiation formulae for the numerical integration of stiff initial value problems. in this paper we describe a code based on this class of formulae and discuss its performance on a large set of stiff test problems. j. r. cash s. considine a fast implementation of the minimum degree algorithm using quotient graphs alan george joseph w. h.
liu strongly polynomial-time and nc algorithms for detecting cycles in dynamic graphs this paper is concerned with the problem of recognizing, in a graph with rational vector-weights associated with the edges, the existence of a cycle whose total weight is the zero vector. this problem is known to be equivalent to the problem of recognizing the existence of cycles in dynamic graphs and to the validity of some systems of recursive formulas. it was previously conjectured that combinatorial algorithms exist for the cases of two- and three-dimensional vector-weights. the present paper gives strongly polynomial algorithms for any fixed dimension. moreover, these algorithms also establish membership in the class nc. on the other hand, it is shown that when the dimension of the weights is not fixed, the problem is equivalent to the general linear programming problem under strongly polynomial and logspace reductions. e. cohen n. megiddo an example of error propagation reinterpreted as subtractive cancellation - revisited james s. dukelow computation of the incomplete gamma function ratios and their inverse an algorithm is given for computing the incomplete gamma function ratios p(a, x) and q(a, x) for a ≥ 0, x ≥ 0, a + x ≠ 0. temme's uniform asymptotic expansions are used. the algorithm is robust; results accurate to 14 significant digits can be obtained. an extensive set of coefficients for the temme expansions is included. an algorithm, employing third-order schröder iteration supported by newton-raphson iteration, is provided for computing x when a, p(a, x), and q(a, x) are given. three iterations at most are required to obtain 10 significant digit accuracy for x. armido r didonato alfred h morris algorithm 668: h2pec: sampling from the hypergeometric distribution voratas kachitvichyanukul bruce w. schmeiser outward rotations: a tool for rounding solutions of semidefinite programming relaxations, with applications to max cut and other problems uri zwick random walks on weighted graphs and applications to on-line algorithms don coppersmith peter doyle prabhakar raghavan marc snir on constructing karnaugh maps don thompson a method for computing all solutions to systems of polynomial equations alexander p. morgan a compact row storage scheme for cholesky factors using elimination trees for a given sparse symmetric positive definite matrix, a compact row-oriented storage scheme for its cholesky factor is introduced. the scheme is based on the structure of an elimination tree defined for the given matrix. this new storage scheme has the distinct advantage of having the amount of overhead storage required for indexing always bounded by the number of nonzeros in the original matrix. the structural representation may be viewed as storing the minimal structure of the given matrix that will preserve the symbolic cholesky factor. experimental results on practical problems indicate that the amount of savings in overhead storage can be substantial when compared with sherman's compressed column storage scheme. joseph w. liu a fast and portable uniform quasi-random generator of very large period based on a generalized multi-moduli congruential method daniel guinier on the time overhead of counters and traversal markers the problem of minimizing the time overhead of counters and traversal markers is studied. the methodology used to study the problem is based upon markov processes.
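the incomplete gamma ratios p(a, x), q(a, x) and their inverse described in the didonato-morris entry above can be exercised from python through scipy's special-function library; this hedged sketch uses scipy rather than their fortran algorithm and simply checks that p + q = 1 and that the inverse recovers x.

    from scipy.special import gammainc, gammaincc, gammaincinv

    a, x = 2.5, 1.7
    p = gammainc(a, x)          # regularized lower incomplete gamma ratio p(a, x)
    q = gammaincc(a, x)         # regularized upper ratio q(a, x); p + q should be 1
    x_back = gammaincinv(a, p)  # invert p(a, .) at the computed value; should recover x

    print(p, q, p + q)
    print(x, x_back)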
a fundamental result is presented that characterizes those flowcharts where the problem can be solved uniquely in the case of counters. two methods are described that avoid poor choices of counter placement by choosing counter placements from a dominating set. with respect to the problem of traversal marker placement, the basic result that traversal markers must be placed on the complement of a uni-connected sub-graph is extended to flowcharts with circuits. finally it is shown that complements of uni-connected subgraphs that are minimal in size may not have minimal time overheads. ira r. forman n-processors graphs distributively achieve perfect matchings in o(log^2 n) beats a perfect matching in a graph g(v,e), also called a 1-factor, is a collection p of non-intersecting edges engaging (incident with) all the vertices; in case g is bipartite with vertex classes m and f, p should engage all the vertices of m. the combinatorial problem of finding a perfect matching in g (and its rich ramifications) was extensively studied (and applied) from existential, algorithmic and probabilistic points of view. here we replace sequential algorithms by distributive, parallel ones. we imagine that n processors without a shared memory or a central coordinator (except a clock) reside at the n vertices and communicate by messages. for each particular problem, the edges (giving direct connections) are given and define the input graph g. the collection of all these graphs is made into a probability space gn (in several ways, as explicated below). in one synchronized step (beat), each processor can send a message to a neighbor along an edge of g. one can cheaply implement such steps, uniformly for all graphs, with an efficient switchboard. eli shamir eli upfal collocation software for boundary-value odes u. ascher j. christiansen r. d. russell you could learn a lot from a quadratic: newton squares the circle henry g. baker a symbolic computation method of analytic solution of the mixed dirichlet-neumann-robin problem for laplace's equation the authors show how to use symbolic computing methods to obtain an analytic representation of the solution of laplace's equation (in r^2 or r^3) satisfying boundary conditions of a very general type, viz., involving conditions of dirichlet, neumann and robin's type on disjoint subsets of the boundary of the solution domain. herbert h. snyder ralph w. wilkerson a note on comparison of nonlinear least squares algorithms zafar khan sudhir chawla dinesh dave integration of liouvillian functions with special functions in this paper, we discuss a decision procedure for the indefinite integration of transcendental liouvillian functions in terms of elementary functions and logarithmic integrals. we also discuss a decision procedure for the integration of a large class of transcendental liouvillian functions in terms of elementary functions and error-functions. p. h. knowles sparse matrix test problems we describe the harwell-boeing sparse matrix collection, a set of standard test matrices for sparse matrix problems. our test set comprises problems in linear systems, least squares, and eigenvalue calculations from a wide variety of scientific and engineering disciplines. the problems range from small matrices, used as counter-examples to hypotheses in sparse matrix research, to large test cases arising in large-scale computation. we offer the collection to other researchers as a standard benchmark for comparative studies of algorithms.
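a hedged sketch of using one such test matrix as a benchmark: scipy ships a reader for the harwell-boeing format, and the file name below is an assumption for illustration, not part of the collection itself.

    import numpy as np
    from scipy.io import hb_read
    from scipy.sparse.linalg import spsolve

    # hypothetical local file in harwell-boeing format; the path is an assumption
    a = hb_read("example.rua").tocsr()

    n = a.shape[0]
    b = a @ np.ones(n)            # right-hand side with known solution of all ones
    x = spsolve(a, b)             # direct sparse solve as a simple benchmark kernel
    print(n, a.nnz, np.max(np.abs(x - 1.0)))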
the procedures for obtaining and using the test collection are discussed. we also describe the guidelines for contributing further test problems to the collection. i. s. duff roger g. grimes john g. lewis constructing lower and upper bounded delay routing trees using linear programming jaewon oh iksoo pyo massoud pedram towards an analysis of local optimization algorithms tassos dimitriou russell impagliazzo random variate generation the presentation begins with a discussion of basic concepts and concludes with a state of the art survey of methods for generating random variates on a digital computer. this paper updates the more than 300 references cited in last year's paper. copies of the visual aids are given. bruce schmeiser simple distributed algorithms for the cycle cutset problem arun jagota rina dechter high-order least-squares fits joseph l. sowers algorithm 691; improving quadpack automatic integration routines two automatic adaptive integrators from quadpack (namely, qag and qags) are modified by substituting the gauss-kronrod rules used for local quadrature with recursive monotone stable (rms) formulas. extensive numerical tests, both for one-dimensional and two-dimensional integrals, show that the resulting programs are faster, perform fewer function evaluations, and are more suitable. paola favati grazia lotti francesco romani finding maximum independent sets in sparse and general graphs richard beigel algorithm 597: sequence of modified bessel functions of the first kind w. j. cody towards efficient implementation of singly-implicit methods it has been observed that for problems of low dimension the transformations used in the implementation of singly-implicit runge-kutta methods consume an unreasonable share of the total computational costs. two proposals for reducing these costs are presented here. the first makes use of an alternative transformation for which the combined operation counts of the transformations together with the iterations themselves are lower than for the standard implementation scheme for singly-implicit methods. the second proposal is to use a runge-kutta method for which the first row of the coefficient matrix is zero but which still possesses acceptable stability properties. it is hoped that by combining these two proposals increased efficiency in the implementation of runge-kutta methods for stiff problems can be achieved. j. c. butcher the electrical resistance of a graph captures its commute and cover times view an n-vertex, m-edge undirected graph as an electrical network with unit resistors as edges. we extend known relations between random walks and electrical networks by showing that resistance in this network is intimately connected with the lengths of random walks on the graph. for example, the commute time between two vertices s and t (the expected length of a random walk from s to t and back) is precisely characterized by the effective resistance r_st between s and t: commute time = 2m·r_st. additionally, the cover time (the expected length of a random walk visiting all vertices) is characterized by the maximum resistance r in the graph to within a factor of log n: m·r ≤ cover time ≤ o(m·r log n). for many graphs, the bounds on cover time obtained in this manner are better than those obtained from previous techniques such as the eigenvalues of the adjacency matrix. in particular, using this approach, we improve known bounds on cover times for various classes of graphs, including high-degree graphs, expanders, and multi-dimensional meshes.
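the identity commute time = 2m·r_st quoted above can be checked numerically on a small graph; this hedged python sketch computes the effective resistance from the pseudoinverse of the graph laplacian and compares it with a simulated commute time (the graph and the number of trials are purely illustrative).

    import numpy as np

    # small unweighted graph: a 4-cycle with one chord
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    n, m = 4, len(edges)

    # graph laplacian and its moore-penrose pseudoinverse
    lap = np.zeros((n, n))
    for u, v in edges:
        lap[u, u] += 1; lap[v, v] += 1
        lap[u, v] -= 1; lap[v, u] -= 1
    lp = np.linalg.pinv(lap)

    s, t = 1, 3
    r_st = lp[s, s] + lp[t, t] - 2 * lp[s, t]   # effective resistance between s and t
    print("2 m r_st  =", 2 * m * r_st)          # for this graph r_st = 1, so the value is 10

    # monte carlo estimate of the commute time s -> t -> s
    rng = np.random.default_rng(0)
    nbrs = [[v for (u, v) in edges if u == i] + [u for (u, v) in edges if v == i] for i in range(n)]

    def hit(a, b):
        steps, cur = 0, a
        while cur != b:
            cur = nbrs[cur][rng.integers(len(nbrs[cur]))]
            steps += 1
        return steps

    trials = 20000
    print("simulated =", sum(hit(s, t) + hit(t, s) for _ in range(trials)) / trials)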
moreover, resistance seems to provide an intuitively appealing and tractable approach to these problems. a. k. chandra p. raghavan w. l. ruzzo r. smolensky the first annual large dense linear system survey alan edelman experimental evaluation of efficient sparse matrix distributions manuel ujaldón shamik d. sharma emilio l. zapata joel saltz algorithm 608: approximate solution of the quadratic assignment problem david h. west an analysis of atkinson's algorithm greg butler necessary and sufficient conditions for hyperplane transversals we will prove that a finite family b = {b_1, b_2, …, b_n} of connected compact sets in r^d has a hyperplane transversal if and only if for some k there exists a set of points p = {p_1, p_2, …, p_n} (i.e. a k-dimensional labeling of the family) which spans r^k and every k + 2 sets of b are met by a k-flat consistent with the order type of p. this is a common generalization of theorems of hadwiger, katchalski, goodman-pollack and wenger. r. pollack r. wenger "interval rational = algebraic" revisited: a more computer realistic result anatoly v. lakeyev vladik kreinovich special issue on the workshop on mathematical performance modeling and analysis (mama '99) mark s. squillante remark on "algorithm 569: colsys: collocation software for boundary-value odes [d2]" j.-fr. hake when floating-point addition isn't commutative fred ris ed barkmeyer craig schaffert peter farkas algorithm 630: bbvscg - a variable-storage algorithm for function minimization a. buckley a. lenir fast approximation algorithms for multicommodity flow problems tom leighton clifford stein fillia makedon Éva tardos serge plotkin spyros tragoudas computational geometry column 32 joseph o'rourke acm algorithms policy fred t. krogh queueing delays on virtual circuits using a sliding window flow control scheme a tandem queue model is developed that models the end-to-end delay behavior in networks that employ sliding window flow control. messages arriving to find the window filled are assumed to be queued outside the network to await their turn to enter. the model is analyzed using a hierarchical decomposition method; the key step entails incorporating the fact that the interdeparture process is not exponentially distributed. end-to-end delay in virtual circuits can then be obtained by modelling a virtual route as a tandem queue [1] using the method of "adjusted rates." g. varghese w. chou a. a. nilsson an extended set of fortran basic linear algebra subprograms this paper describes an extension to the set of basic linear algebra subprograms. the extensions are targeted at matrix-vector operations that should provide for efficient and portable implementations of algorithms for high-performance computers. jack j. dongarra jeremy du croz sven hammarling richard j. hanson computer generation of poisson deviates from modified normal distributions j. h. ahrens u. dieter reservoir-sampling algorithms of time complexity o(n(1 + log(N/n))) one-pass algorithms for sampling n records without replacement from a population of unknown size N are known as reservoir-sampling algorithms. in this article, vitter's reservoir-sampling algorithm, algorithm z, is modified to give a more efficient algorithm, algorithm k. additionally, two new algorithms, algorithm l and algorithm m, are proposed. if the time for scanning the population is ignored, all the four algorithms have expected cpu time o(n(1 + log(N/n))), which is optimum up to a constant factor. expressions of the expected cpu time for the algorithms are presented.
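for orientation, this is a hedged python sketch of the classical one-pass reservoir-sampling baseline (essentially vitter's algorithm r) that the algorithms in the li entry improve upon; it is not algorithm k, l, or m from that article.

    import random

    def reservoir_sample(stream, n, seed=None):
        # keep a uniform random sample of n records from a stream of unknown length
        rng = random.Random(seed)
        reservoir = []
        for t, record in enumerate(stream):
            if t < n:
                reservoir.append(record)
            else:
                # record t (0-based) replaces a reservoir slot with probability n / (t + 1)
                j = rng.randrange(t + 1)
                if j < n:
                    reservoir[j] = record
        return reservoir

    print(reservoir_sample(range(10_000), 5, seed=42))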
among the four, algorithm l is the simplest, and algorithm m is the most efficient when N and N/n are large and N is o(n^2). kim-hung li genocop: a genetic algorithm for numerical optimization problems with linear constraints z. michalewicz c. z. janikow real-time integration h. k. hodge pseudo-random number generators for a calculator d. e. ross open problems: 3 s. khuller provably monotone approximations, ii charles b. dunham solving problems symbolically by using poisson series processors j. f. san juan a. abad s. serrano a. gavín call routing by simulated annealing xin yao nitin kanani a distribution-free random number generator via a matrix-exponential representation edward f. brown note on blended linear multistep formulas the blended linear multistep methods of skeel and kong (1977) are fairly simply expressed in nordsieck form, but the underlying multistep method turns out to be much more complicated. robert d. skeel thu v. vu an interface optimization and application for the numerical solution of optimal control problems an interface between the application problem and the nonlinear optimization algorithm is proposed for the numerical solution of distributed optimal control problems. by using this interface, numerical optimization algorithms can be designed to take advantage of inherent problem features like the splitting of the variables into states and controls and the scaling inherited from the functional scalar products. further, the interface allows the optimization algorithm to make efficient use of user-provided function evaluations and derivative calculations. matthias heinkenschloss luís n. vicente sample space definition the sample space definition of simulation experiments characterizes a simulation experiment in terms of the sample space and probability distribution of the inputs, an output function defined on the inputs, a sampling plan, a statistics function defined on the outputs, and parameters of interest that the statistics estimate. this definition is particularly useful for studying variance reduction techniques. barry l. nelson bruce w. schmeiser the problem of determining trustworthy software-hardware computations an inductive method of algorithm investigation, described by means of algorithm algebra, is developed for solving the problem of determining trustworthy software-hardware computations. volodymyr ovsiak software for doubled-precision floating-point computations seppo linnainmaa accurate approximate solution of partial differential equations at off-mesh points numerical methods for partial differential equations often determine approximations that are more accurate at the set of discrete meshpoints than they are at the "off-mesh" points in the domain of interest. these methods are generally most effective if they are allowed to adjust the location of the mesh points to match the local behavior of the solution. different methods will typically generate their respective approximations on incompatible, unstructured meshes, and it can be difficult to evaluate the quality of a particular solution, or to visualize important properties of a solution. in this paper we will introduce a generic approach which can be used to generate approximate solution values at arbitrary points in the domain of interest for any method that determines approximations to the solution and low-order derivatives at meshpoints.
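a hedged one-dimensional analogue of the idea in the enright entry: given solution values and first derivatives at mesh points, a piecewise cubic hermite interpolant supplies approximate values at off-mesh points (here for a known function, so the off-mesh error can be inspected; this is an illustration, not the authors' method).

    import numpy as np
    from scipy.interpolate import CubicHermiteSpline

    # pretend these meshpoint values and derivatives came from an ode/pde solver
    mesh = np.linspace(0.0, 2.0, 9)
    u = np.sin(mesh)          # "solution" values at mesh points
    du = np.cos(mesh)         # first derivatives at mesh points

    interp = CubicHermiteSpline(mesh, u, du)

    off_mesh = np.array([0.13, 0.77, 1.41, 1.93])
    err = np.abs(interp(off_mesh) - np.sin(off_mesh))
    print(err.max())          # comparable in order to the meshpoint accuracy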
this approach is based on associating a set of "collocation" points with each mesh element and requiring that the local approximation interpolate the meshpoint data and almost satisfy the partial differential equation at the collocation points. the accuracy associated with this interpolation/collocation approach is consistent with the "meshpoint accuracy" of the underlying method. the approach that we develop applies to a large class of methods and problems. it uses local information only and is therefore particularly suitable for implementation in a parallel or network computing environment. numerical examples are given for some second-order problems in two and three dimensions. w. h. enright applying parallel computation algorithms in the design of serial algorithms nimrod megiddo .879-approximation algorithms for max cut and max 2sat michel x. goemans david p. williamson algorithm 703; mebdf: a fortran subroutine for solving first-order systems of stiff initial value problems for ordinary differential equations j. r. cash s. considine using schubert to enumerate conics in p^3 x. hernández j. m. miret calculating equilibrium probabilities for λ(n)/ck/1/n queues equilibrium state distributions are determined for queues with load-dependent poisson arrivals and service time distributions representable by cox's generalized method of stages. the solution is obtained by identifying a birth-death process that has the same equilibrium state distribution as the original queue. special cases of two-stage (c2) and erlang-k (ek) service processes permit particularly efficient algorithms for calculating the load-dependent service rates of the birth-death process corresponding to the original queue. knowing the parameters of the birth-death process, the equilibrium state probabilities can be calculated straightforwardly. this technique is particularly useful when subsystems are reduced to flow-equivalent servers representing the complementary network. raymond marie solving minimum-cost flow problems by successive approximation we introduce a framework for solving minimum-cost flow problems. our approach measures the quality of a solution by the amount that the complementary slackness conditions are violated. we show how to extend techniques developed for the maximum flow problem to improve the quality of a solution. this framework allows us to achieve o(min(n^3, n^(5/3) m^(2/3), nm log n) log(nc)) running time. a. goldberg r. tarjan a random polynomial time algorithm for approximating the volume of convex bodies we present a randomised polynomial time algorithm for approximating the volume of a convex body k in n-dimensional euclidean space. the proof of correctness of the algorithm relies on recent theory of rapidly mixing markov chains and isoperimetric inequalities to show that a certain random walk can be used to sample nearly uniformly from within k. m. dyer a. frieze algorithm 687: a decision tree for the numerical solution of initial value ordinary differential equations c. a. addison w. h. enright p. w. gaffney i. gladwell p. m.
hanson a differential-equations algorithm for nonlinear equations filippo aluffi-pentini valerio parisi francesco zirilli algorithms 164: tower chart (skyscraper diagram) for contingency tables carina heiselbetz iterative aggregation/disaggregation techniques for nearly uncoupled markov chains iterative aggregation/disaggregation methods provide an efficient approach for computing the stationary probability vector of nearly uncoupled (also known as nearly completely decomposable) markov chains. three such methods that have appeared in the literature recently are considered and their similarities and differences are outlined. specifically, it is shown that the method of takahashi corresponds to a modified block gauss-seidel step and aggregation, whereas that of vantilborgh corresponds to a modified block jacobi step and aggregation. the third method, that of koury et al., is equivalent to a standard block gauss-seidel step and iteration. for each of these methods, a lemma is established, which shows that the unique fixed point of the iterative scheme is the left eigenvector corresponding to the dominant unit eigenvalue of the stochastic transition probability matrix. in addition, conditions are established for the convergence of the first two of these methods; convergence conditions for the third having already been established by stewart et al. all three methods are shown to have the same asymptotic rate of convergence. wei-lu cao william j. stewart a comparison of adaptive refinement techniques for elliptic problems adaptive refinement has proved to be a useful tool for reducing the size of the linear system of equations obtained by discretizing partial differential equations. we consider techniques for the adaptive refinement of triangulations used with the finite element method with piecewise linear functions. several such techniques that differ mainly in the method for dividing triangles and the method for indicating which triangles have the largest error have been developed. we describe four methods for dividing triangles and eight methods for indicating errors. angle bounds for the triangle division methods are compared. all combinations of triangle divisions and error indicators are compared in a numerical experiment using a population of eight test problems with a variety of difficulties (peaks, boundary layers, singularities, etc.). the comparison is based on the l-infinity norm of the error versus the number of vertices. it is found that all of the methods produce asymptotically optimal grids and that the number of vertices needed for a given error rarely differs by more than a factor of two. william f. mitchell index of nilpotency of binomial ideals i. ojeda martinez de castilla r. piedra sanchez algorithms column this column will discuss recent developments in the field of algorithms. i expect that the results reported here will be from recent conferences or workshops. this column could be useful to the reader who may not keep track of new developments in the field of algorithms. guest columns on special topics will be welcomed. two problems that have received a lot of attention during the last few years are the problems of facility location and the related problem of k-medians. i will describe the problems, provide a brief summary of prior work, and then describe one of the main results from a paper [1] to appear at stoc 2001 to be held in crete, greece.
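for context on the aggregation/disaggregation entry above: every scheme it compares converges to the same fixed point, the stationary vector π with π = πp. the sketch below computes that vector by plain power iteration on a toy nearly uncoupled chain; it is the slow baseline those schemes accelerate, not the takahashi, vantilborgh, or koury et al. methods, and the example matrix is made up for illustration.

```python
import numpy as np

def stationary_power(P, tol=1e-12, max_iter=100_000):
    """Left dominant eigenvector of a row-stochastic matrix P (pi = pi P),
    computed by plain power iteration."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = pi @ P
        new /= new.sum()               # guard against round-off drift
        if np.abs(new - pi).max() < tol:
            return new
        pi = new
    return pi

# toy nearly uncoupled chain: two tightly coupled blocks, weak eps links
eps = 1e-4
P = np.array([[0.7 - eps, 0.3,       eps,       0.0],
              [0.4,       0.6 - eps, 0.0,       eps],
              [eps,       0.0,       0.2,       0.8 - eps],
              [0.0,       eps,       0.5,       0.5 - eps]])
print(stationary_power(P))
```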
samir khuller numerical differentiation of analytic functions bengt fornberg a recursive formulation of cholesky factorization of a matrix in packed storage a new compact way to store a symmetric or triangular matrix called rpf for recursive packed format is fully described. novel ways to transform rpf to and from standard packed format are included. a new algorithm, called rpc for recursive packed cholesky, that operates on the rpf format is presented. algorithm rpc is based on level-3 blas and requires variants of algorithms trsm and syrk that work on rpf. we call these rp_trsm and rp_syrk and find that they do most of their work by calling gemm. it follows that most of the execution time of rpc lies in gemm. the advantage of this storage scheme compared to traditional packed and full storage is demonstrated. first, the rpf storage format uses the minimal amount of storage for the symmetric or triangular matrix. second, rpc gives a level-3 implementation of cholesky factorization whereas standard packed implementations are only level 2. hence, the performance of our rpc implementation is decidedly superior. third, unlike fixed block size algorithms, rpc requires no block size tuning parameter. we present performance measurements on several current architectures that demonstrate improvements over the traditional packed routines. also smp parallel computations on the ibm smp computer are made. the graphs that are attached in section 7 show that the rpc algorithms are superior by a factor between 1.6 and 7.4 for order around 1000, and between 1.9 and 10.3 for order around 3000 over the traditional packed algorithms. for some architectures, the rpc performance results are almost the same or even better than the traditional full-storage algorithms results. the exact analysis of sparse rectangular linear systems peter alfeld david j. eyre a test problem generator for the steiner problem in graphs in this paper we present a new binary-programming formulation for the steiner problem in graphs (spg), which is well known to be np-hard. we use this formulation to generate test problems with known optimal solutions. the technique uses the kkt optimality conditions on the corresponding quadratically constrained optimization problem. b. n. khoury p. m. pardalos d.-z. du a survey of software for partial differential equations marek machura roland a. sweet implementing weighted b-matching algorithms: towards a flexible software design m. muller-hannemann a. schwartz linear programming with two variables per inequality in poly-log time g s lueker n megiddo v ramachandran beware of linear congruential generators with multipliers of the form a = ±2^q ± 2^r linear congruential random-number generators with mersenne prime modulus and multipliers of the form a = ±2^q ± 2^r have been proposed recently. their main advantage is the availability of a simple and fast implementation algorithm for such multipliers. this note generalizes this algorithm, points out statistical weaknesses of these multipliers when used in a straightforward manner, and suggests in what context they could be used safely. pierre l'ecuyer richard simard acm algorithms distribution service expanded no author j += j michael wolfe property testing in bounded degree graphs oded goldreich dana ron algorithm 645: subroutines for testing programs that compute the generalized inverse of a matrix j. c. nash r. l. c. wang algorithm 614: a fortran subroutine for numerical integration in h_p k. sikorski f. stenger j.
schwing an investigation of iterative routing algorithms zahir moosa douglas edwards algorithm 599: sampling from gamma and poisson distributions j. h. ahrens k. d. kohrt u. dieter computing betti numbers via combinatorial laplacians joel friedman portable vectorized software for bessel function evaluation a suite of computer programs for the evaluation of bessel functions and modified bessel functions of orders zero and one for a vector of real arguments is described. distinguishing characteristics of these programs are that (a) they are portable across a wide range of machines, and (b) they are vectorized in the case when multiple function evaluations are to be performed. the performance of the new programs are compared with software from the fnlib collection of fullerton on which the new software is based. ronald f. boisvert bonita v. saunders a frontal code for the solution of sparse positive-definite symmetric systems arising from finite-element applications we describe the design, implementation, and performance of a frontal code for the solution of large sparse symmetric systems of linear finite-element equations. the code is intended primarily for positive-definite systems, since numerical pivoting is not performed. the resulting software package, ma62, will be included in the harwell subroutine library. we illustrate the performance of our new code on a range of problems arising from real engineering and industrial applications. the performance of the code is compared with that of the harwell subroutine library general frontal solver ma42 and with other positive-definite codes from the harwell subroutine library. iain s. duff jennifer a. scott modularity of cycles and paths in graphs certain problems related to the length of cycles and paths modulo a given integer are studied. linear-time algorithms are presented that determine whether all cycles in an undirected graph are of length p mod q and whether all paths between two specified nodes are of length p mod q, for fixed integers p.q. these results are compared to those for directed graphs. e. m. arkin c. h. papadimitriou m. yannakakis algorithm 600: translation of algorithm 507. procedures for quintic natural spline interpolation john g. herriot christian h. reinsch hypercube implementation of the simplex algorithm large, sparse, linear systems of equations arise frequently when constructing mathematical models of natural phenomena. most often, these linear systems are fully constrained and can be solved via direct or iterative techniques. however, one important problem class requires solutions to underconstrained linear systems that maximize some objective function. these linear optimization problems are natural formulations of many business plans and often contain hundreds of equations with thousands of variables. historically, linear optimization problems have been solved via the simplex method. despite the excellent performance of the simplex method, the size of the optimization problems and the frequency of their solution make linear optimization a computationally taxing endeavor. this paper examines the performance of parallel variants of the simplex algorithm on the intel ipsc, a message-based parallel system. linear optimization test data are drawn from commercial sources and represent realistic problems. analysis shows that the speedup obtained is sensitive to both the structure of the underlying data and the data partitioning. c. b. stunkel d. a. 
reed algorithm 621: software with low storage requirements for two-dimensional, nonlinear, parabolic differential equations b. p. sommeijer p. van der houwen further investigation into spectral analysis for confidence intervals in steady state simulations in using spectral analysis for confidence intervals in steady state simulations, the parameter 1 must be specified. this quantity determines the order of the covariances which are needed in computation of the standard error. this paper investigates the effect of 1 on the standard error using a small experiment conducted on four queueing systems. neil b. marks the multifrontal solution of indefinite sparse symmetric linear i. s. duff j. k. reid corrigendum: "box-bisection for solving second-degree systems and the problem of clustering" alexander morgan vadim shapiro remark on "algorithm 246: graycode [z]" m. c. er bounds on threshold dimension and disjoint threshold coverings (abstract only) the threshold dimension1 of a graph g is the smallest number of threshold graphs needed to cover the edges of g. if t(n) is the greatest threshold dimension of any graph of n vertices, we show that for some constant c, n-c n log n < t(n) < n- n + 1 we establish the same bounds for edge-disjoint coverings of graphs by threshold graphs. the results have applications to manipulating systems of simultaneous linear inequalities and to space bounds for synchronization problems2. p. erdos e. ordman y. zalcstein permanents, pfaffian orientations, and even directed circuits (extended abstract) william mccuaig neil robertson p. d. seymour robin thomas algorithm 655: iqpack: fortran subroutines for the weights of interpolatory quadratures we present fortran subroutines that implement the method described in [3] for the stable evaluation of the weights of interpolatory quadratures with prescribed simple or multiple knots. given a set of knots and their multiplicities, the package generates the weights by using the zeroth moment μ0 of w, the weight function in the integrand, and the (symmetric tridiagonal) jacobi matrix j associated with the polynomials orthogonal on (a, b) with respect to w. there are utility routines that generate μ0 and j for classical weight functions, but quadratures can be generated for any μ0 and j supplied by the user. utility routines are also provided that (1) evaluate a computed quadrature, applied to a user-supplied integrand, (2) check the polynomial order of precision of a quadrature formula, and (3) compute the knots and weights of simple gaussian quadrature formula. sylvan elhay jaroslav kautsky table-driven implementation of the expm1 function in ieee floating-point arithmetic algorithms and implementation details for the function ex \\- 1 in both single and double precision of ieee 754 arithmetic are presented here. with a table of moderate size, the implementations need only working-precision arithmetic and are provably accurate to within 0.58 ulp. ping tak peter tang algorithm 656: an extended set of basic linear algebra subprograms: model implementation and test programs pdfind is a fortran-77 implementation of an algorithm that finds a positive definite linear combination of two symmetric matrices, or determines that such a combination does not exist. the algorithm is designed to be independent of the data structures used to store the matrices. the user must provide a subroutine, chlsky, which acts as an interface between pdfind and the matrix data structures. 
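the expm1 entry above exists because e^x - 1 computed naively as exp(x) - 1 loses accuracy for small x. the sketch below is not tang's table-driven scheme; it only contrasts the naive form with the classic compensation trick, using standard library calls, to show why a dedicated expm1 routine matters.

```python
import math

def expm1_naive(x):
    return math.exp(x) - 1.0            # catastrophic cancellation for tiny x

def expm1_compensated(x):
    """Classic compensation trick for modest |x| (not Tang's algorithm):
    with u = round(e^x), the quotient (u - 1) * x / log(u) repairs most of
    the cancellation in u - 1."""
    u = math.exp(x)
    if u == 1.0:
        return x                        # e^x rounded to 1: x is the best answer
    if u - 1.0 == -1.0:
        return -1.0                     # x very negative: e^x - 1 is -1 to working precision
    return (u - 1.0) * x / math.log(u)

x = 1e-12
print(expm1_naive(x))                   # only a few correct digits
print(expm1_compensated(x))             # agrees closely with the reference below
print(math.expm1(x))                    # library reference value
```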
chlsky also provides the user control over the number of iterations of the algorithm. implementations of chlsky are included which call linpack routines for full matrices as well as symmetric banded matrices. jack j. dongarra jeremy du croz sven hammarling richard j. hanson on grey relational mapping shuze wang xiqiang liu on computing a separating transcendence basis let k(x1,…,xn)/k be a finitely generated field extension, and g1,…,gr ∈ k(x). for k(x1,…,xn)/k(g1,…,gr) being separably generated (which in particular includes the case char(k) = 0) we give a method to compute the transcendence degree and a separating transcendence basis of this extension by means of simple linear algebra techniques. rainer steinwandt commentary on: solving symbolic equations with press richard fateman alan bundy richard o'keefe leon sterling symbolic newton method for two-parameter eigenvalue problem michio sakakihara shigekazu nakagawa recent advances in uniform random number generation pierre l'ecuyer an example of error propagation reinterpreted as subtractive cancellation james a. delaney a bvp solver based on residual control and the matlab pse our goal was to make it as easy as possible to solve a large class of boundary value problems (bvps) for ordinary differential equations in the matlab problem solving environment (pse). we present here theoretical and software developments resulting in bvp4c, a capable bvp solver that is exceptionally easy to use. corrigenda: "some tests of generalized bisection" r. baker kearfott algorithms for parallel k-vertex connectivity and sparse certificates joseph cheriyan ramakrishna thurimella remark on "algorithm 500: minimization of unconstrained multivariate functions [e4]" d. f. shanno k. h. phua algorithm 716; tspack: tension spline curve-fitting package the primary purpose of tspack is to construct a smooth function which interpolates a discrete set of data points. the function may be required to have either one or two continuous derivatives. if the accuracy of the data does not warrant interpolation, a smoothing function (which does not pass through the data points) may be constructed instead. the fitting method is designed to avoid extraneous inflection points (associated with rapidly varying data values) and preserve local shape properties of the data (monotonicity and convexity), or to satisfy the more general constraints of bounds on function values or first derivatives. the package also provides a parametric representation for constructing general planar curves and space curves. r. j. renka multicolor reordering of sparse matrices resulting from irregular grids many iterative algorithms for the solution of large linear systems may be effectively vectorized if the diagonal of the matrix is surrounded by a large band of zeroes, whose width is called the zero stretch. in this paper, a multicolor numbering technique is suggested for maximizing the zero stretch of irregularly sparse matrices. the technique, which is a generalization of a known multicoloring algorithm for regularly sparse matrices, executes in linear time, and produces a zero stretch approximately equal to n/(2σ), where σ is the number of colors used in the algorithm. for triangular meshes, it is shown that σ ≤ 3, and that it is possible to obtain σ = 2 by applying a simple backtracking scheme. rami g. melhem k. v. s.
ramarao a portable random number generator well suited for the rejection method up to now, all known efficient portable implementations of linear congruential random number generators with modulus 2^31 - 1 have worked only with multipliers that are small compared with the modulus. we show that for nonuniform distributions, the rejection method may generate random numbers of bad quality if combined with a linear congruential generator with small multiplier. a method is described that works for any multiplier smaller than 2^30. it uses the decomposition of multiplier and seed in high-order and low-order bits to compute the upper and lower half of the product. the sum of the two halves gives the product of multiplier and seed modulo 2^31 - 1. coded in ansi-c and fortran77 the method results in a portable implementation of the linear congruential generator that is as fast or faster than other portable methods. w. hörmann g. derflinger transitive orientation in o(n^2) time this paper presents an algorithm for the transitive graph orientation problem which runs in o(n^2) time. the best previous algorithms for this problem required o(n^3) time. transitive orientation is the slowest part of several graph recognition problems, so the new algorithm immediately improves the complexity of algorithms for recognizing comparability graphs, permutation graphs, and circular permutation graphs. jeremy spinrad algorithm 705; a fortran-77 software package for solving the sylvester matrix equation axb^t + cxd^t = e this paper documents a software package for solving the sylvester matrix equation (1) axb^t + cxd^t = e all quantities are real matrices; a and c are m x m; b and d are n x n; and x and e are m x n. the unknown is x. two symmetric forms of eq. (1) are treated separately for efficiency. they are the continuous-time symmetric sylvester equation (2) axe^t + exa^t + c = 0 and the discrete-time equation (3) axa^t - exe^t + c = 0, in which c is symmetric. the software also provides a means for estimating the condition number of these three equations. the algorithms employed are more fully described in an accompanying paper [3]. judith d. gardiner matthew r. wette alan j. laub james j. amato cleve b. moler algorithm 740; fortran subroutines to compute improved incomplete cholesky factorizations efficient and reliable code to compute incomplete cholesky factors of sparse matrices for use as preconditioners in a conjugate gradient algorithm is described. this code implements two recently developed, improved incomplete factorization algorithms. an efficient implementation of the standard incomplete cholesky factorization is also included. mark t. jones paul e. plassmann exploratory data analysis in a study of the performance of nonlinear optimization routines david c. hoaglin virginia klema stephen c. peters graphs that are almost binary trees (preliminary version) this paper studies embeddings of graphs in binary trees. the cost of such an embedding is the maximum distance in the binary tree between images of adjacent graph vertices. several techniques for bounding the costs of such embeddings from above are derived; notable among these is an algorithm for embedding any outerplanar graph in a binary tree with a cost that is within a factor of 3 of optimal. a number of techniques for bounding the costs of such embeddings from below are developed; notable here are two techniques for inferring the presence of large separators in graphs.
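the algorithm 705 entry above solves axb^t + cxd^t = e by transformation methods; a cheap way to check any such solver on small problems is the kronecker-product identity vec(axb^t) = (b ⊗ a)vec(x) with column-major vec. the sketch below uses that identity directly (o((mn)^3) work, so it is a verification tool only, not the package's algorithm); the function name and random test data are illustrative.

```python
import numpy as np

def solve_axbt_cxdt(A, B, C, D, E):
    """Solve A X B^T + C X D^T = E via vec(A X B^T) = (B kron A) vec(X),
    using column-major vec.  Dense O((mn)^3) work: for small checks only."""
    m, n = E.shape
    K = np.kron(B, A) + np.kron(D, C)              # (mn) x (mn) system matrix
    x = np.linalg.solve(K, E.flatten(order="F"))
    return x.reshape((m, n), order="F")

rng = np.random.default_rng(0)
m, n = 4, 3
A, C = rng.standard_normal((m, m)), rng.standard_normal((m, m))
B, D = rng.standard_normal((n, n)), rng.standard_normal((n, n))
X_true = rng.standard_normal((m, n))
E = A @ X_true @ B.T + C @ X_true @ D.T
print(np.allclose(solve_axbt_cxdt(A, B, C, D, E), X_true))   # True generically
```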
finally, a number of characterizations are established of those families of graphs that are almost binary trees, in the sense that every graph in the family is embeddable in a binary tree within bounded cost. hong jia-wei arnold l. rosenberg latin supercube sampling for very high-dimensional simulations this article introduces latin supercube sampling (lss) for very high-dimensional simulations such as arise in particle transport, finance, and queueing. lss is developed as a combination of two widely used methods: latin hypercube sampling (lhs) and quasi-monte carlo (qmc). in lss, the input variables are grouped into subsets, and a lower-dimensional qmc method is used within each subset. the qmc points are presented in random order within subsets. qmc methods have been observed to lose effectiveness in high-dimensional problems. this article shows that lss can extend the benefits of qmc to much higher dimensions, when one can make a good grouping of input variables. some suggestions for grouping variables are given for the motivating examples. even a poor grouping can still be expected to do as well as lhs. the article also extends lhs and lss to infinite-dimensional problems. the paper includes a survey of qmc methods, randomized versions of them (rqmc), and previous methods for extending qmc to higher dimensions. furthermore it shows that lss applied with rqmc is more reliable than lss with qmc. art b. owen new nag library software for first-order partial differential equations new nag fortran library routines are described for the solution of systems of nonlinear, first-order, time-dependent partial differential equations in one space dimension, with scope for coupled ordinary differential or algebraic equations. the method-of-lines is used with spatial discretization by either the central-difference keller box scheme or an upwind scheme for hyperbolic systems of conservation laws. the new routines have the same structure as existing library routines for the solution of second-order partial differential equations, and much of the existing library software is reused. results are presented for several computational examples to show that the software provides physically realistic numerical solutions to a challenging class of problems. s. v. pennington m. berzins strong deviations from randomness in m-sequences based on trinomials the fixed vector of any m-sequence based on a trinomial is explicitly obtained. local nonrandomness around the fixed vector is analyzed through model-construction and experiments. we conclude that the initial vector near the fixed vector should be avoided. makoto matsumoto yoshiharu kurita algorithm 812: bpoly: an object-oriented library of numerical algorithms for polynomials in bernstein form the design, implementation, and testing of a c++ software library for univariate polynomials in bernstein form is described. by invoking the class environment and operator overloading, each polynomial in an expression is interpreted as an object compatible with the arithmetic operations and other common functions (subdivision, degree elevation, differentiation and integration, composition, greatest common divisor, real-root solving, etc.) for polynomials in bernstein form. the library allows compact and intuitive implementation of lengthy manipulation of bernstein-form polynomials, which often arise in computer graphics and computer-aided design and manufacturing applications.
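the bpoly entry above lists the usual operations on polynomials in bernstein form; two of them, de casteljau evaluation and degree elevation, are small enough to sketch. the python below is illustrative only and is not the c++ library's interface.

```python
def de_casteljau(coeffs, t):
    """Evaluate a polynomial given by Bernstein coefficients on [0,1] at t."""
    b = list(coeffs)
    n = len(b) - 1
    for r in range(1, n + 1):
        for i in range(n - r + 1):
            b[i] = (1.0 - t) * b[i] + t * b[i + 1]
    return b[0]

def degree_elevate(coeffs):
    """Bernstein coefficients of the same polynomial written in degree n + 1."""
    n = len(coeffs) - 1
    out = [coeffs[0]]
    for i in range(1, n + 1):
        out.append((i / (n + 1)) * coeffs[i - 1] + (1 - i / (n + 1)) * coeffs[i])
    out.append(coeffs[-1])
    return out

c = [1.0, 3.0, 2.0, 0.5]                         # a cubic in Bernstein form
print(de_casteljau(c, 0.3))
print(de_casteljau(degree_elevate(c), 0.3))      # same value from the degree-4 form
```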
a series of empirical tests indicates that the library functions are typically very accurate and reliable, even for polynomials of surprisingly high degree. yi-feng tsai rida t. farouki approximation with exact lagrange-type interpolation c. dunham z. zhu a reservation based cyclic server queue with limited service in this paper, we examine a problem which is an extension of the limited service in a queueing system with a cyclic server. in this service mechanism, each queue, after receiving service in cycle j, makes a reservation for its service requirement in cycle j \\+ 1. in this paper, we consider symmetric case only, i.e., the arrival rates to all the queues are the same. the main contribution to queueing theory is that we propose an approximation for the queue length and sojourn-time distributions for this discipline. most approximate studies on cyclic queues, which have been considered before, examine the means only. our method is an iterative one, which we prove to be convergent by using stochastic dominance arguments. we examine the performance of our algorithm by comparing it to simulations and show that the results are very good. duan-shin lee bhaskar sengupta properly rounded variable precision square root the square root function presented here returns a properly rounded approximation to the square root of its argument, or it raises an error condition if the argument is negative. properly rounded means rounded to the nearest, or to nearest even in case of a tie. it is variable precision in that it is designed to return a p-digit approximation to a p-digit argument, for any p > 0\\. (precision p means p decimal digits.) the program and the analysis are valid for all p > 0, but current implementations place some restrictions on p. t. e. hull a. abrham algorithm 646: pdfind: a routine to find a positive definite linear combination of two real symmetric matrices pdfind is a fortran-77 implementation of an algorithm that finds a positive definite linear combination of two symmetric matrices, or determines that such a combination does not exist. the algorithm is designed to be independent of the data structures used to store the matrices. the user must provide a subroutine, chlsky, which acts as an interface between pdfind and the matrix data structures. chlsky also provides the user control over the number of iterations of the algorithm. implementations of chlsky are included which call linpac routines for full matrices as well as symmetric banded matrices. charles r. crawford implementing finite element software on hypercube machines g. a. lyzenga a. raefsky b. nour-omid some tests of generalized bisection this paper addresses the task of reliably finding approximations to all solutions to a system of nonlinear equations within a region defined by bounds on each of the individual coordinates. various forms of generalized bisection were proposed some time ago for this task. this paper systematically compares such generalized bisection algorithms to themselves, to continuation methods, and to hybrid steepest descent/quasi-newton methods. a specific algorithm containing novel "expansion" and "exclusion" steps is fully described, and the effectiveness of these steps is evaluated. a test problem consisting of a small, high-degree polynomial system that is appropriate for generalized bisection, but very difficult for continuation methods, is presented. 
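the generalized-bisection entry above discards boxes whose interval evaluation excludes zero and subdivides the rest; the toy one-dimensional sketch below shows that discard/bisect loop with a naive interval horner evaluation. it omits the interval newton, expansion, and exclusion machinery the entry describes, and the tolerance and test polynomial are arbitrary.

```python
def horner_interval(coeffs, lo, hi):
    """Crude interval enclosure of a polynomial (highest degree first) on [lo, hi]."""
    rlo = rhi = 0.0
    for c in coeffs:
        cands = [rlo * lo, rlo * hi, rhi * lo, rhi * hi]
        rlo, rhi = min(cands) + c, max(cands) + c
    return rlo, rhi

def bisect_roots(coeffs, lo, hi, tol=1e-6):
    """Return small intervals that may contain zeros of the polynomial."""
    flo, fhi = horner_interval(coeffs, lo, hi)
    if flo > 0.0 or fhi < 0.0:
        return []                          # enclosure excludes 0: discard this box
    if hi - lo < tol:
        return [(lo, hi)]                  # box is small enough: keep as a candidate
    mid = 0.5 * (lo + hi)
    return bisect_roots(coeffs, lo, mid, tol) + bisect_roots(coeffs, mid, hi, tol)

# zeros of x^3 - x on [-2, 2] are -1, 0, 1
print(bisect_roots([1.0, 0.0, -1.0, 0.0], -2.0, 2.0))
```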
this problem forms part of a set of 17 test problems from published literature on the methods being compared; this test set is fully described here. r. baker kearfott superconcentrators, generalizers and generalized connectors with limited depth we show that the minimum possible size of an n-superconcentrator with depth 2k ≥ 4 is Θ(nλ(k, n)), where λ(k, .) is the inverse of a certain function at the k-th level of the primitive recursive hierarchy. it follows that the minimum possible depth of an n-superconcentrator with linear size is Θ(β(n)), where β is the inverse of a function growing more rapidly than any primitive recursive function. similar results hold for generalizers. we give a simple explicit construction for a (d1, …, dk)-generalizer with depth k and size (d1 + … + dk)d1…dk. this is applied to give a simple explicit construction for a generalized n-connector with depth 2k 3 and size (2d1 + 3d2 + … + 3dk-1 + 2dk)d1…dk. these are the best explicit constructions currently available. we also show that, for each fixed k ≥ 2, the minimum possible size of a generalized n-connector with depth k is Ω(n^(1+1/k)) and o((n log n)^(1+1/k)). danny dolev cynthia dwork nicholas pippenger avi wigderson on dual minimum cost flow algorithms (extended abstract) jens vygen algorithm 652: hompack: a suite of codes for globally convergent homotopy algorithms there are algorithms for finding zeros or fixed points of nonlinear systems of equations that are globally convergent for almost all starting points, i.e., with probability one. the essence of all such algorithms is the construction of an appropriate homotopy map and then tracking some smooth curve in the zero set of this homotopy map. hompack provides three qualitatively different algorithms for tracking the homotopy zero curve: ordinary differential equation-based, normal flow, and augmented jacobian matrix. separate routines are also provided for dense and sparse jacobian matrices. a high-level driver is included for the special case of polynomial systems. layne t. watson stephen c. billups alexander p. morgan the value of strong inapproximability results for clique aravind srinivasan finding boundaries of the domain of definition for functions along a one-dimensional ray a program for finding boundaries of function domains along a one-dimensional ray has been developed. the program decomposes the function into subexpressions which are recursively tested for intervals where they are undefined, negative, or fractional, or points where they equal zero. the intervals in which the subexpressions are undefined are then united to create the boundaries of the domain of definition of the whole function. the advantages of the use of such a program in solution of systems of nonlinear algebraic equations and nonlinear optimization are demonstrated. orit shacham mordechai schacham some optimal inapproximability results johan håstad new dynamic algorithms for shortest path tree computation paolo narváez kai-yeung siu hong-yi tzeng an evaluation of mathematical software that solves nonlinear least squares problems k. l. hiebert generating a sample from a k-cell table with changing probabilities in o(log2 k) time george s. fishman l. stephen yarberry multivariate interpolation of large sets of scattered data this paper presents a method of constructing a smooth function of two or more variables that interpolates data values at arbitrarily distributed points.
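the scattered-data entry beginning just above builds on shepard's method, a weighted mean of the data values with inverse-distance weights. the sketch below is the plain global form of that weighting, for orientation only; the modified method discussed in the entry localizes the weights and adds nodal gradients, which this omits, and the function name and test data are illustrative.

```python
import numpy as np

def shepard(points, values, q, power=2.0, eps=1e-12):
    """Global Shepard interpolant: weighted mean of data values with weights
    1/d^power.  Reproduces the data values exactly at the nodes."""
    d2 = np.sum((points - q) ** 2, axis=1)
    if d2.min() < eps:                       # query point coincides with a node
        return float(values[int(np.argmin(d2))])
    w = d2 ** (-power / 2.0)
    return float(np.dot(w, values) / w.sum())

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([0.0, 1.0, 1.0, 2.0])        # samples of f(x, y) = x + y
print(shepard(pts, vals, np.array([0.5, 0.5])))   # 1.0 by symmetry
print(shepard(pts, vals, np.array([1.0, 0.0])))   # exactly 1.0 at a node
```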
shepard's method for fitting a surface to data values at scattered points in the plane has the advantages of a small storage requirement and an easy generalization to more than two independent variables, but suffers from low accuracy and a high computational cost relative to some alternative methods. localizations of this method have reasonably low computational costs, but remain relatively inaccurate. we describe a modified shepard's method that, without sacrificing the advantages, has accuracy comparable to other local methods. computational efficiency is also improved by using a cell method for nearest-neighbor searching. test results for two and three independent variables are presented. robert j. renka a new approach to all pairs shortest paths in planar graphs an algorithm is presented for generating a succinct encoding of all pairs shortest path information in a directed planar graph g with real-valued edge costs but no negative cycles. the algorithm runs in (pn) time, where n is the number of vertices in g, and p is the minimum cardinality of a subset of the faces that cover all vertices, taken over all planar embeddings of g. linear- time algorithms are presented for various subproblems including that of finding an appropriate embedding of g and a corresponding face-on-vertex covering of cardinality (p), and of generating all pairs shortest path information in a directed outerplanar graph. g. n. frederickson note on the end game in homotopy zero curve tracking homotopy algorithms to solve a nonlinear system of equations f(x) = 0 involve tracking the zero curve of a homotopy map p(a, λ, x) from λ = 0 until λ = 1. when the algorithm nears or crosses the hyperplane λ = 1, an "end game" phase is begun to compute the solution x satisfying p(a, λ, x ) = f(x ) = 0. this note compares several end game strategies, including the one implemented in the normal flow code fixpnf in the homotopy software package hompack. maria sosonkina layne t. watson david e. stewart algorithm of simplification of nonlinear equations systems jean-michel nataf a note on einstein metrics - a simple application of a symbolic algebra system birger nielsen hendrik pedersen corrigenda: "double integration using one-dimensional adaptive quadrature routines: a software interface problem" f. n. fritsch algorithm 595: an enumerative algorithm for finding hamiltonian circuits in a directed graph silvano martello pagp, a partitioning algorithm for (linear) goal programming problems jeffrey l. arthur a. ravindran a polynomial algorithm for the two-variable integer programming problem ravindran kannan an efficient algorithm for computing a minimum node cutset from a vertex- disjoint path set for timing optimization wing ning li parallel algorithms for minimum cuts and maximum flows in planar networks algorithms are given that compute maximum flows in planar directed networks either in o((log n)3) parallel time using o(n4) processors or o((log n)2) parallel time using o(n6) processors. the resource consumption of these algorithms is dominated by the cost of finding the value of a maximum flow. when such a value is given, or when the computation is on an undirected network, the bound is o((log n)2) time using o(n3 ) processors. no efficient parallel algorithm is known for the maximum flow problem in general networks. donald b. johnson remark on algorithm 562: shortest path lengths u. pape algorithm 577: algorithms for incomplete elliptic integrals [s21] b. c. carlson elaine m. 
notis how to stabilize a fixed point keith briggs generating random variates from a distribution of phase type an algorithm for generating phase type random variates is presented. a phase type distribution is any distribution which may be that of the time until absorption in a finite state markov process. the exponential sojourn times allow us to record only the number of visits, k, to each state before absorption and then generate appropriate erlang-k random variates. a related algorithm for the superposition of renewal processes of phase type is also discussed. marcel f. neuts miriam e. pagano an adaptive nonlinear least-squares algorithm john e. dennis david m. gay roy e. welsch symbolic and algebraic computation in robust stability analysis nainn-ping ke deterministic distributed resource discovery (brief announcement) the resource discovery problem was introduced by harchol-balter, leighton and lewin in [hll99], as a part of their work on web caching. they developed a randomized algorithm for the problem in the weakly connected directed graph model, that was implemented within lcs at mit, and then licensed to akamai technologies. the directed graph is a logical one (as opposed to the underlying communication graph). it represents the nodes' "knowledge" about the topology of the underlying communication network. in [hll99] the notion of topology "knowledge" is simplified, by modeling it as a "knowledge" of an id of another node. in the general case such a "knowledge" may include a whole route, as well as any additional information needed in order to establish a connection (e.g. a cryptographic public key). it is assumed (here, and in [hll99]) that the logical graph g is weakly connected. this can result from topology changes: e.g., a loss of a connection to a name server, or a gain of new knowledge that is uneven between different nodes, since it is obtained by distributed algorithms, and in a non-homogeneous network. note that weak connectivity is a necessary condition for the solvability of the problem. dealing efficiently with the weakly connected graph was, in fact, the main contribution in [hll99]. the alternative of transforming the graph into an undirected one, and then solving the problem on the resulting undirected graph, is possible. if |e0| = o(n) it even leads to efficient solutions: a time optimal algorithm (for undirected graphs) appears in [sv82], it was designed for crcw pram, but can be translated into the model of [hll99] using o(|e0| log n) messages. the assumption that |e0| = o(n) may be suitable if topology changes are assumed to be limited. many practical distributed systems, however, attempt to deal with the case that the network may be partitioned. thus, e0 should be as big as possible, to enable the disconnected components to regain connectivity. the current paper proposes a deterministic algorithm for the problem in the same model as that of [hll99]. our algorithm is near optimal in all the measures: time, message, and communication complexities. each previous algorithm had a complexity that was higher at least in one of the measures. specifically, previous deterministic solutions required either time linear in the diameter of the initial network, or communication complexity o(n^3) (with message complexity o(n^2)), or message complexity o(|e0| log n) (where e0 is the edge set of the initial graph).
compared to the main randomized algorithm of harchol-balter, leighton, and lewin, the time complexity is reduced from o(log^2 n) to o(log n log* n), the message complexity from o(n log^2 n) to o(n log n log* n), and the communication complexity from o(n^2 log^3 n) to o(|e0| log^2 n + n^2 log n). shay kutten david peleg a multi-level solution algorithm for steady-state markov chains a new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state markov chains is presented. the method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. it is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the gauss-seidel and optimal sor algorithms for a variety of test problems. it is shown how the well-known iterative aggregation-disaggregation algorithm of takahashi can be interpreted as a special case of the new method. graham horton scott t. leutenegger algorithm 665: machar: a subroutine to dynamically determine machine parameters numerical software written in high-level languages often relies on machine-dependent parameters to improve portability. machar is an evolving fortran subroutine for dynamically determining thirteen fundamental parameters associated with a floating-point arithmetic system. the version presented here operates correctly on a large number of different floating-point systems, including those implementing the new ieee floating-point standard. w. j. cody remark on "algorithm 396: student's t-quantiles [s14]" g. w. hill graphing elementary riemann surfaces robert m. corless david j. jeffrey algorithm 596: a program for a locally parameterized werner c. rheinboldt john v. burkardt mathematica solutions to the issac system challenge 1997 m. trott computer implementation of multiple comparisons with the best and selection procedures the lack of computer software for ranking and selection has often been cited as a reason for it not having caught on in practice (hayes 1982). actually, this lack of software was a manifestation of some deeper perceived difficulties with practical usage, we believe. it was perceived that bechhofer's (1954) indifference zone selection type inference was unavailable for single-stage experiments. the multiple comparisons with the best (mcb) result of hsu (1981) showed that not only is it available for single-stage experiments, it can be given simultaneously with gupta's (1965) subset selection inference without decreasing the minimum confidence level of either. it has also been perceived that the unequal sample size case requires more complex methodologies than the equal sample size case. it is pointed out in hsu (1983) that, so long as the analysis is performed on the computer, there needs to be no difference in methodology between the two cases. based on these recent developments, a computer package has been written for multiple comparisons with the best. given the data (equal sample size or not) and the desired confidence level p* (= 1 - α), the first page output includes subset selection inference and indifference zone selection inference (extended to single-stage experiments), while the second page output gives the confidence intervals inference of hsu (1981, 1982).
all the inferences are guaranteed to be correct simultaneously with a probability of at least p*. the result of analysis of some water filter data using an earlier version of the package has been reported in lorenz, hsu, and tuovinen (1982). jason c. hsu uncluttering force-directed graph layouts david p. dobkin alejo hausner emden r. gansner stephen c. north evaluation of step directions in optimization algorithms we present a method for comparing that part of optimization algorithms that chooses each step direction. it is an example of a general approach to algorithm evaluation in which one tests specific parts of the algorithm, rather than making overall evaluations on a set of standard test problems. our testing procedure can be useful for developing new algorithms and for writing and evaluating optimization software. we use the method to compare two versions of the conjugate gradient algorithm, and to compare these with an algorithm based on conic functions. william c. davidon jorge nocedal a proof of h. k. hodge's induction formula and the number of zeros in which n! ends c. rosselló lsqr: an algorithm for sparse linear equations and sparse least squares christopher c. paige michael a. saunders on monotone paths among obstacles with applications to planning assemblies we study the class of problems associated with the detection and computation of monotone paths among a set of disjoint obstacles. we give an o(ne) algorithm for finding a monotone path (if one exists) between two points in the plane in the presence of polygonal obstacles. (here, e is the size of the visibility graph defined by the n vertices of the obstacles.) if all of the obstacles are convex, we prove that there always exists a monotone path between any two points s and t. we give an o(n log n) algorithm for finding such a path for any s and t, after an initial o(e + n log n) preprocessing. we introduce the notions of "monotone path map", and "shortest monotone path map" and give algorithms to compute them. we apply our results to a class of separation and assembly problems, yielding polynomial-time algorithms for planning an assembly sequence (based on separations by single translations) of arbitrary polygonal parts in two dimensions. e. m. arkin r. connelly j. s. mitchell rejection-inversion to generate variates from monotone discrete distributions for discrete distributions a variant of rejection from a continuous hat function is presented. the main advantage of the new method, called rejection-inversion, is that no extra uniform random number to decide between acceptance and rejection is required, which means that the expected number of uniform variates required is halved. using rejection-inversion and a squeeze, a simple universal method for a large class of monotone discrete distributions is developed. it can be used to generate variates from the tails of most standard discrete distributions. rejection-inversion applied to the zipf (or zeta) distribution results in algorithms that are short and simple and at least twice as fast as the fastest methods suggested in the literature. w. hörmann g. derflinger interactive ellpack: an interactive problem-solving environment for elliptic partial differential equations ellpack is a versatile, very high-level language for solving elliptic partial differential equations. solving elliptic problems with ellpack typically involves a process in which one repeatedly computes a solution, analyzes the results, and modifies the solution technique.
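the entry above on the number of zeros in which n! ends concerns a quantity given by legendre's classic count of the factors of 5 in n!; the snippet below computes that count for context (it does not reproduce the paper's induction formula).

```python
def trailing_zeros_of_factorial(n):
    """Number of terminal zeros of n!: the number of factors of 5 in 1..n,
    since factors of 2 are always more plentiful."""
    count, p = 0, 5
    while p <= n:
        count += n // p
        p *= 5
    return count

print(trailing_zeros_of_factorial(25))    # 6
print(trailing_zeros_of_factorial(100))   # 24
```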
although this process is best suited for an interactive environment, ellpack itself is batch oriented. with this in mind, we have developed interactive ellpack, an extension of ellpack that provides true interactive elliptic problem solving by allowing the user to interactively build grids, choose solution methods, and analyze computed results. interactive ellpack features a sophisticated interface with windowing, color graphics output, and graphics input. wayne r. dyksen calvin j. ribbens construction of test problems in quadratic bivalent programming a method of constructing test problems for constrained bivalent quadratic programming is presented. for any feasible integer point for a given domain, the method generates quadratic functions whose minimum over the given domain occurs at the selected point. certain properties of unconstrained quadratic zero-one programs that determine the difficulty of the test problems are also discussed. in addition, a standardized random test problem generator for unconstrained quadratic zero-one programming is given. panos m. pardalos more interesting h. k. hodge a generalized algorithm for centrality problems on trees a general framework is presented for rapidly analyzing tree networks to compute a measure of the centrality or eccentricity of all vertices in the tree. several problems, which have been previously described in the literature, fit this framework. some of these problems have no published solution better than performing a separate traversal for each vertex whose eccentricity is calculated. the method presented in this paper performs just two traversals and yields the eccentricities of all vertices in the tree. natural sufficient conditions for the algorithm to work in linear time on any given problem are stated. arnon rosenthal jose a. pino on computational aspects of bounded linear least squares problems the paper describes numerical experiments with active set methods for solving bounded linear least squares problems. it concentrates on two problems that arise in the implementation of the active set method. one problem is the choice of a good starting point. the second problem is how to move out of a "dead point." the paper investigates the use of simple iterative methods to solve these problems. the results of our experiments indicate that the use of gauss-seidel iterations to obtain a starting point is likely to provide large gains in efficiency. another interesting conclusion is that dropping one constraint at a time is advantageous to dropping several constraints at a time. achiya dax an iterative solution to special linear systems on a vector hypercube an intel hypercube implementation of a new stationary iterative method developed by one of us (jdp) is presented. this algorithm finds the solution vector x for the invertible n × n linear system ax = (i - b)x = f where a has real spectrum. the solution method converges quickly because the jacobi iteration matrix b is replaced by an equivalent iteration matrix with a smaller spectral radius. the parallel algorithm partitions a row-wise among all the processors in order to keep memory load to a minimum and to avoid duplicate computations. with the introduction of vector hardware to the hypercube, more modifications have been made to the implementation algorithm in order to exploit that hardware and reduce run-time even further. example problems and timings will be presented. l. de pillis j. petersen j.
de pillis towards the effective parallel computation of matrix pseudospectra given a matrix a, the computation of its pseudospectrum λ_ε(a) is a far more expensive task than the computation of characteristics such as the condition number and the matrix spectrum. as research of the last 15 years has shown, however, the matrix pseudospectrum provides valuable information that is not included in other indicators. so, we ask how to compute it efficiently and build a tool that would help engineers and scientists make such analyses? in this paper we focus on parallel algorithms for computing pseudospectra. the most widely used algorithm for computing pseudospectra is embarrassingly parallel; nevertheless, it is extremely costly and one cannot hope to achieve absolute high performance with it. we describe algorithms that have drastically improved performance while maintaining a high degree of large grain parallelism. we evaluate the effectiveness of these methods in the context of a matlab-based environment for parallel programming using mpi on small, off-the-shelf parallel systems. c. bekas e. kokiopoulou i. koutis e. gallopoulos approximating sets with equivalence relations in various considerations of computer science (for instance in image processing and databases) one faces the following situation: given a set (of points or of documents) one considers a descending sequence of equivalence relations ("approximation spaces of order i"). these equivalence relations determine a sequence of closure operations cl_i. given a set x, the approximation sequence of x is simply the sequence of sets cl_i(x). we characterize here those sets x which satisfy the condition: x = ∩_i cl_i(x). w marek h rasiowa algorithm 573: nl2sol - an adaptive nonlinear least-squares algorithm [e4] john e. dennis david m. gay roy e. welsch remark on algorithm 354: numerical inversion of laplace transform robert piessens algorithm 790: cshep2d: cubic shepard method for bivariate interpolation of scattered data we describe a new algorithm for scattered data interpolation. the method is similar to that of algorithm 660 but achieves cubic precision and c2 continuity at very little additional cost. an accompanying article presents test results that show the method to be among the most accurate available. robert j. renka gams, a user's guide anthony brook david kendrick alexander meeraus variable precision rough sets approach to predicting factors affecting delay likelihoods jack david katzberg pauline katzberg wojciech ziarko an o(ln n) parallel algorithm for the subset sum problem z qiang algorithm 695: software for a new modified cholesky factorization elizabeth eskow robert b. schnabel introduction to this classic reprint and commentaries bob waite software for an implementation of weeks' method for the inverse laplace transform a software package based on a modification of weeks' method is presented for calculating function values f(t) of the inverse laplace transform. this method requires transform values f(z) at arbitrary points in the complex plane, and is suitable when f(t) has continuous derivatives of all orders; it is especially attractive when f(t) is required at a number of different abscissas t. b. s. garbow g. giunta j. n. lyness a. murli an observation on bisection software for the symmetric tridiagonal eigenvalue problem in this article we discuss a small modification of the bisection routines in eispack and lapack for finding a few of the eigenvalues of a symmetric tridiagonal matrix a.
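the pseudospectra entry above calls the standard grid algorithm embarrassingly parallel: each grid point z needs one smallest singular value of zi - a, independently of all the others. the serial sketch below shows that baseline and, implicitly, why its cost motivates the paper's better-structured parallel methods; the example matrix and grid are arbitrary.

```python
import numpy as np

def pseudospectrum_grid(A, re, im):
    """sigma_min(zI - A) on a rectangular grid of shift points z.
    The epsilon-pseudospectrum is where this value is <= epsilon.
    Every grid point is independent: the 'embarrassingly parallel' baseline."""
    n = A.shape[0]
    I = np.eye(n)
    sig = np.empty((len(im), len(re)))
    for i, y in enumerate(im):
        for j, x in enumerate(re):
            z = complex(x, y)
            sig[i, j] = np.linalg.svd(z * I - A, compute_uv=False)[-1]
    return sig

A = np.array([[0.0, 1.0], [0.0, 0.0]])        # Jordan block: highly sensitive spectrum
re = np.linspace(-1.0, 1.0, 41)
im = np.linspace(-1.0, 1.0, 41)
print(pseudospectrum_grid(A, re, im).min())   # smallest sigma_min seen over the grid
```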
when the principal minors of the matrix a yield good approximations to the desired eigenvalues, these modifications can yield about 30% reduction in the computation times. translational polygon containment and minimal enclosure using linear programming based restriction victor j. milenkovic corrigendum: remark on "algorithm 539: basic linear algebra subroutines for fortran usage" david s. dodson automatic partitioning techniques for solving partial differential equations on irregular adaptive meshes jerome galtier sampling and integration of near log-concave functions david applegate ravi kannan a remark on algorithm 643: fexact: an algorithm for performing fisher's exact test in r x c contingency tables douglas b. clarkson yuan-an fan harry joe matrix multiplication: a case study of enhanced data cache utilization n. eiron m. rodeh i. steinwarts sweeping arrangements of curves we consider arrangements of curves that intersect pairwise in at most k points. we show that a curve can sweep any such arrangement and maintain the k-intersection property if and only if k equals 1 or 2. we apply this result to an eclectic set of problems: finding boolean formulae for polygons with curved edges, counting triangles and digons in arrangements of pseudocircles, and finding extension curves for arrangements. we also discuss implementing the sweep. j. snoeyink j. hershberger size-time complexity of boolean networks for prefix computations the prefix problem consists of computing all the products x0_x1…xj (j=0, …, n \\- 1), given a sequence x = (x0, x1, …, xn \\- 1) of elements in a semigroup. in this paper we completely characterize the size-time complexity of computing prefixes with boolean networks, which are synchronized interconnections of boolean gates and one-bit storage devices. this complexity crucially depends upon a property of the underlying semigroup, which we call cycle-freedom (no cycle of length greater than one in the cayley graph of the semigroup). denoting by s and t size and computation time, respectively, we have s = ((n/t) log(n/t)), for non-cycle-free semigroups, and s = (n/t), for cycle-free semigroups. in both cases, t ∈ [ (logn), o(n)]. g. bilardi f. p. preparata remark on "algorithm 334: normal random deviates" allen e. tracht algorithm 721; mtieu1 and mtieu2: two subroutines to compute eigenvalues and solutions to mathieu's differential equation for noninteger and integer order two fortran routines are described which calculate eigenvalues and eigenfunctions of mathieu's differential equation for noninteger as well as integer order, mtieu1 uses standard matrix techniques with dimension parameterized to give accuracy in the eigenvalue of one part in 1012. mtieu2 used continued fraction techniques and is optimized to give accuracy in the eigenvalue of one part in 1014. the limitations of the algorithms are also discussed and illustrated. randall b. shirts a note about overhead costs in ode solvers g. k. gupta algorithm 661: qshep3d: quadratic shepard method for trivariate interpolation of scattered data robert j. renka a new inversive congruential pseudorandom number generator with power of two modulus pseudorandom numbers are important ingredients of stochastic simulations. their quality is fundamental for the strength of the simulation outcome. the inversive congruential method for generating uniform pseudorandom numbers is a particularly attractive alternative to linear congruential generators, which show many undesirable regularities. 
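the prefix-computation entry above sizes boolean networks that compute all products x0x1…xj; the doubling pattern below (simulated sequentially) is one standard logarithmic-depth way to wire such a computation and makes the object being counted concrete. it is an illustration only, not the construction analyzed in the entry.

```python
def prefix_scan(xs, op):
    """All prefixes x0, x0*x1, ..., via the doubling pattern: log2(n) rounds,
    each combining elements 2^r apart (the shape a depth-limited prefix
    circuit realizes in hardware)."""
    ys = list(xs)
    n = len(ys)
    shift = 1
    while shift < n:
        # one "parallel" round: every position i >= shift also reads i - shift
        ys = [ys[i] if i < shift else op(ys[i - shift], ys[i]) for i in range(n)]
        shift *= 2
    return ys

print(prefix_scan([3, 1, 4, 1, 5, 9, 2, 6], lambda a, b: a + b))
# [3, 4, 8, 9, 14, 23, 25, 31]
```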
in the present paper a new inversive congruential generator with power of two modulus is introduced. periodicity and statistical independence properties of the generated sequences are analyzed. the results show that these inversive congruential generators perform very satisfactorily. jurgen eichenauer-herrmann holger grothe algorithm 629: an integral equation program for laplace's equation in three dimensions kendall e. atkinson random trees and the analysis of branch and bound procedures douglas r. smith symbolic manipulation of integrodifferential expressions and factorization of linear ordinary differential operators over transcendental extensions of a differential field s. p. tsarev testing isomorphism of cone graphs (extended abstract) we give algorithms to decide graph isomorphism in a subclass of graphs which we call cone graphs. a cone graph is an undirected graph for which there exists a vertex r which uniquely determines a breadth-first search (bfs) tree. equivalently, all shortest paths from r to any other graph vertex are unique. our algorithms may be used either nondeterministically or probabilistically. used as probabilistic algorithms, they always return a correct answer, but only with an expected bound on the running time. christoph m. hoffmann improvement of complex arithmetic by use of double elements c. b. dunham enhanced simulated annealing for globally minimizing functions of many continuous variables a new global optimization algorithm for functions of many continuous variables is presented, derived from the basic simulated annealing method. our main contribution lies in dealing with high-dimensionality minimization problems, which are often difficult to solve by all known minimization methods with or without gradient. in this article we take a special interest in the variables discretization issue. we also develop and implement several complementary stopping criteria. the original metropolis iterative random search, which takes place in a euclidean space r^n, is replaced by another similar exploration, performed within a succession of euclidean spaces r^p, with p < … time tends to zero exponentially with n. ---authors' abstract noga alon nimrod megiddo random number generation with primitive pentanomials this paper presents generalized feedback shift register (gfsr) generators with primitive polynomials x^p + x^(p-1) + x^q + x^(q-1) + 1. the recurrence of these generators can be efficiently computed. we adopt fushimi's initialization scheme, which guarantees the k-distribution property. statistical and timing results are presented. pei-chi wu evaluation of a test set for stiff ode solvers lawrence f. shampine an experimental study of methods for parallel preconditioned krylov methods high performance multiprocessor architectures differ both in the number of processors and in the delay costs for synchronization and communication. in order to obtain good performance on a given architecture for a given problem, adequate parallelization, good balance of load, and an appropriate choice of granularity are essential. we discuss the implementation of a parallel version of pcgpak for both shared memory architectures and hypercubes.
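the pcgpak experiments just mentioned parallelize the preconditioned conjugate gradient kernel; for orientation, a compact serial sketch of pcg with a simple jacobi (diagonal) preconditioner is given here. it is not the pcgpak code, and the stopping test is deliberately crude.

```python
import numpy as np

def pcg_jacobi(a, b, tol=1e-10, max_iter=200):
    """solve a @ x = b for symmetric positive definite a with a diagonal preconditioner."""
    x = np.zeros_like(b)
    r = b - a @ x
    m_inv = 1.0 / np.diag(a)          # jacobi preconditioner m = diag(a)
    z = m_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        ap = a @ p
        alpha = rz / (p @ ap)
        x += alpha * p
        r -= alpha * ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

a = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
print(pcg_jacobi(a, b), np.linalg.solve(a, b))   # the two results should agree
```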
our parallel implementation is sufficiently efficient to allow us to complete the solution of our test problems on 16 processors of the encore multimax/320 in an amount of time that is a small multiple of that required by a single head of a cray x/mp, despite the fact that the peak performance of the multimax processors is not even close to the supercomputer range. we illustrate the effectiveness of our approach on a number of model problems from reservoir engineering and mathematics. d. baxter j. saltz m. schultz s. eisenstat k. crowley an alternative to chan's deflation for bordered systems w. govaerts note on generalization in experimental algorithmics a recurring theme in mathematical software evaluation is the generalization of rankings of algorithms on test problems to build knowledge-based recommender systems for algorithm selection. a key issue is to profile algorithms in terms of the qualitative characteristics of benchmark problems. in this methodological note, we adapt a novel all-pairs algorithm for the profiling task; given performance rankings for m algorithms on n problem instances, each described with p features, identify a (minimal) subset of p that is useful for assessing the selective superiority of an algorithm over another, for all pairs of m algorithms. we show how techniques presented in the mathematical software literature are inadequate for such profiling purposes. in conclusion, we also address various statistical issues underlying the effective application of this technique. naren ramakrishnan raul e. valdes-perez algorithm 547: fortran routines for discrete cubic spline interpolation and smoothing [e1], [e3] charles s. duris algorithm 712; a normal random number generator joseph l. leva corrigendum: "interpolation with interval and point tension controls using cubic weighted v-splines" thomas a. foley runge-kutta starters for multistep methods c. w. gear pseudorandom vector generation by the inversive method pseudorandom vectors are of importance for parallelized simulation methods. in this article we carry out a detailed analysis of the inversive method for the generation of uniform pseudorandom vectors. this method can be viewed as an analog of the inversive congruential method for pseudorandom number generation. we study, in particular, the periodicity properties and the behavior under the serial test for sequences of pseudorandom vectors generated by the inversive method. harald niederreiter on translating a set of rectangles given a collection of disjoint objects in the plane, we are interested in translating them by a common vector. if we have a primitive for translating one object at a time, then the order in which the objects can individually be translated is often geometrically constrained. in this paper we study the nature of these constraints and exhibit optimal algorithms for finding valid motion ordering for several different classes of objects. these algorithms find use in computer display applications. leo j. guibas f. frances yao a nested partitioning procedure for numerical multiple integration jerome h. friedman margaret h. wright algorithm 632; a program for the 0 - 1 multiple knapsack problem silvano martello paolo toth testing for cycles in infinite graphs with periodic structure k. iwano k. 
steiglitz integral closure of noetherian rings patrizia gianni barry trager an algorithm for computing exponential solutions of first order linear differential systems eckhard pflugel a fourth order spline method for singular two-point boundary value problems (abstract only) we describe a new spline method for the (weakly) singular two-point boundary value problem: (x^α y′)′ = f(x, y), 0 < x < 1, α ∈ (0, 1), (1) y(0) = a, y(1) = b. we construct our spline approximation s(x) for the solution y(x) of the two-point boundary value problem (1) such that, while (x^α s′)′ ∈ c[0, 1], with the uniform mesh x_i = ih, i = 0(1)n, in each subinterval [x_i, x_{i+1}] our spline approximation s(x) linearly spans a certain set of (non-polynomial) basis functions in the representation of the solution y(x) of the two-point boundary value problem, and satisfies the interpolation conditions l_i(s) = l_i(y), l_{i+1}(s) = l_{i+1}(y), (2) m_i(s) = m_i(y), m_{i+1}(s) = m_{i+1}(y), where l_i(y) = y(x_i) and m_i(y) = (x^α y′)′|_{x=x_i}. the resulting method provides order h^2 uniformly convergent approximations for the solution over [0, 1]. we then describe a modification of the above method. in the modified method, we generate the solution at the nodal points by using the recently proposed fourth order method of chawla [m.m. chawla, "a fourth order finite difference method based on uniform mesh for singular two-point boundary value problems", j. comput. appl. math., to appear] and then use the "conditions of continuity" to obtain the smoothed approximations for the linear functionals m_i(y) needed for the construction of the spline solution. we show that the resulting method provides order h^4 uniformly convergent spline approximations for the solution y(x) over [0, 1]. the second- and fourth-order convergence of the methods described above is verified computationally. m. m. chawla r. subramanian h. l. sathi remark on algorithm 669 desmond j. higham algorithm 659: implementing sobol's quasirandom sequence generator we compare empirically the accuracy and speed of the low-discrepancy sequence generators of sobol' and faure. these generators are useful for multidimensional integration and global optimization. we discuss our implementation of the sobol' generator. paul bratley bennett l. fox implementation of dynamic trees with in-subtree operations tomasz radzik granularity issues for solving polynomial systems via globally convergent algorithms on a hypercube polynomial systems of equations frequently arise in many applications such as solid modelling, robotics, computer vision, chemistry, chemical engineering, and mechanical engineering. locally convergent iterative methods such as quasi-newton methods may diverge or fail to find all meaningful solutions of a polynomial system. recently a homotopy algorithm has been proposed for polynomial systems that is guaranteed globally convergent (always converges from an arbitrary starting point) with probability one, finds all solutions to the polynomial system, and has a large amount of inherent parallelism. there are several ways the homotopy algorithms can be decomposed to run on a hypercube. the granularity of a decomposition has a profound effect on the performance of the algorithm. the results of decompositions with two different granularities are presented. the experiments were conducted on an ipsc-16 hypercube using actual industrial problems. d. c. s. allison a. chakraborty l. t.
watson computing the block triangular form of a sparse matrix we consider the problem of permuting the rows and columns of a rectangular or square, unsymmetric sparse matrix to compute its block triangular form. this block triangular form is based on a canonical decomposition of bipartite graphs induced by a maximum matching and was discovered by dulmage and mendelsohn. we describe implementations of algorithms to compute the block triangular form and provide computational results on sparse matrices from test collections. several applications of the block triangular form are also included. alex pothen chin-ju fan on computational efficiency of the iterative methods for the simultaneous approximation of polynomial zeros a measure of efficiency of simultaneous methods for determination of polynomial zeros, defined by the coefficient of efficiency, is considered. this coefficient takes into consideration (1) the r-order of convergence in the sense of the definition introduced by ortega and rheinboldt (iterative solution of nonlinear equations in several variables. academic press, new york, 1970) and (2) the number of basic arithmetic operations per iteration, taken with certain weights depending on processor time. the introduced definition of computational efficiency was used for comparison of the simultaneous methods with various structures. g. v. milovanovic m. s. petkovic desi methods for stiff initial-value problems recently, the so-called desi (diagonally extended singly implicit) runge-kutta methods were introduced to overcome some of the limitations of singly implicit methods. preliminary experiments have shown that these methods are usually more efficient than the standard singly implicit runge-kutta (sirk) methods and, in many cases, are competitive with backward differentiation formulae (bdf). this article presents an algorithm for determining the full coefficient matrix from the stability function, which is already chosen to make the method a-stable. because of their unconventional nature, desi methods have to be implemented in a special way. in particular, the effectiveness of these methods depends heavily on how starting values are chosen for the stage iterations. these and other implementation questions are discussed in detail, and the design choices we have made form the basis of an experimental code for the solution of stiff problems by desi methods. we present here a small subset of the numerical results obtained with our code. many of these results are quite satisfactory and suggest that desi methods have a useful role in the solution of this type of problem. j. c. butcher j. r. cash m. t. diamantakis a comparison of four pseudorandom number generators implemented in ada william n. graham algorithm 633: an algorithm for linear dependency analysis of multivariate data r. c. ward g. j. davis v. e. kane fast algorithms for k-shredders and k-node connectivity augmentation (extended abstract) joseph cheriyan ramakrishna thurimella implementing the complex arcsine and arccosine functions using exception handling we develop efficient algorithms for reliable and accurate evaluations of the complex arcsine and arccosine functions. a tight error bound is derived for each algorithm; the results are valid for all machine-representable points in the complex plane. the algorithms are presented in a pseudocode that has a convenient exception-handling facility.
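for contrast with the carefully error-bounded algorithms of the hull/fairgrieve/tang entry above, the naive textbook principal-branch formulas can be written in a few lines; this sketch has no special handling of overflow, underflow, or accuracy near the branch cuts.

```python
import cmath

def naive_arcsin(z):
    # principal branch: asin(z) = -i * log(i*z + sqrt(1 - z*z))
    return -1j * cmath.log(1j * z + cmath.sqrt(1.0 - z * z))

def naive_arccos(z):
    # acos(z) = pi/2 - asin(z)
    return cmath.pi / 2 - naive_arcsin(z)

z = 0.5 + 0.25j
print(naive_arcsin(z), cmath.asin(z))   # library routine shown for comparison
```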
corresponding fortran 77 programs for an ieee environment have also been developed to illustrate the practicality of the algorithms, and these programs have been tested very carefully to help confirm the correctness of the algorithms and their error bounds. the results of these tests are included in the article, but the fortran 77 programs are not (these programs are available from fairgrieve). tests of other widely available programs fail at many points in the complex plane, and otherwise are slower and produce much less accurate results. t. e. hull thomas f. fairgrieve ping tak peter tang performance evaluation of programs for the error and complementary error functions this paper presents methods for performance evaluation of computer programs for the functions erf(x), erfc(x), and exp(x^2) erfc(x). accuracy estimates are based on comparisons using power series expansions and an expansion in the repeated integrals of erfc(x). some suggestions for checking robustness are also given. details of a specific implementation of a test program are included. w. j. cody an improved incomplete cholesky factorization incomplete factorization has been shown to be a good preconditioner for the conjugate gradient method on a wide variety of problems. it is well known that allowing some fill-in during the incomplete factorization can significantly reduce the number of iterations needed for convergence. allowing fill-in, however, increases the time for the factorization and for the triangular system solutions. additionally, it is difficult to predict a priori how much fill-in to allow and how to allow it. the unpredictability of the required storage/work and the unknown benefits of the additional fill-in make such strategies impractical to use in many situations. in this article we motivate, and then present, two "black-box" strategies that significantly increase the effectiveness of incomplete cholesky factorization as a preconditioner. these strategies require no parameters from the user and do not increase the cost of the triangular system solutions. efficient implementations of these algorithms are described. these algorithms are shown to be successful for a variety of problems from the harwell-boeing sparse matrix collection. mark t. jones paul e. plassmann corrections to "the computation and communication complexity of a parallel banded system solver" d. h. lawrie a. h. sameh algorithms for drawing trees (abstract) peter eades a combined unifrontal/multifrontal method for unsymmetric sparse matrices we discuss the organization of frontal matrices in multifrontal methods for the solution of large sparse sets of unsymmetric linear equations. in the multifrontal method, work on a frontal matrix can be suspended, the frontal matrix can be stored for later reuse, and a new frontal matrix can be generated. there are thus several frontal matrices stored during the factorization, and one or more of these are assembled (summed) when creating a new frontal matrix. although this means that arbitrary sparsity patterns can be handled efficiently, extra work is required to sum the frontal matrices together, and this can be costly because indirect addressing is required. the (uni)frontal method avoids this extra work by factorizing the matrix with a single frontal matrix. rows and columns are added to the frontal matrix, and pivot rows and columns are removed. data movement is simpler, but higher fill-in can result if the matrix cannot be permuted into a variable-band form with small profile.
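to make the no-fill baseline behind the jones/plassmann incomplete cholesky entry above concrete, here is a dense-storage sketch of ic(0), that is, incomplete cholesky restricted to the sparsity pattern of a; it only illustrates the idea and is not their "black-box" fill strategy.

```python
import numpy as np

def ic0(a):
    """incomplete cholesky with zero fill: l keeps the lower-triangular pattern of a."""
    n = a.shape[0]
    l = np.zeros_like(a)
    for k in range(n):
        l[k, k] = np.sqrt(a[k, k] - np.dot(l[k, :k], l[k, :k]))
        for i in range(k + 1, n):
            if a[i, k] != 0.0:        # drop any entry outside the pattern of a
                l[i, k] = (a[i, k] - np.dot(l[i, :k], l[k, :k])) / l[k, k]
    return l

a = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
l = ic0(a)
print(np.allclose(l @ l.T, a))        # exact here: a tridiagonal matrix needs no fill
```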
we consider a combined unifrontal/multifrontal algorithm to enable general fill-in reduction orderings to be applied without the data movement of previous multifrontal approaches. we discuss this technique in the context of a code designed for the solution of sparse systems with unsymmetric pattern. timothy a. davis iain s. duff band reduction algorithms revisited in this paper we explain some of the changes that have been incorporated in the latest version of the lapack subroutine for reducing a symmetric banded matrix to tridiagonal form. these modifications improve the performance for larger-bandwidth problems and reduce the number of operations when accumulating the transformations onto the identity matrix, by taking advantage of the structure of the initial matrix. we show that similar modifications can be made to the lapack subroutines for reducing a symmetric positive definite generalized eigenvalue problem to a standard symmetric banded eigenvalue problem and for reducing a general banded matrix to bidiagonal form to facilitate the computation of the singular values of the matrix. linda kaufman numerical methods for infinite markov processes the estimation of steady state probability distributions of discrete markov processes with infinite state spaces by numerical methods is investigated. the aim is to find a method applicable to a wide class of problems with a minimum of prior analysis. a general method of numbering discrete states in infinite domains is developed and used to map the discrete state spaces of markov processes into the positive integers, for the purpose of applying standard numerical techniques. a method based on a little used theoretical result is proposed and is compared with two other algorithms previously used for finite state space markov processes. p. j.b. king i. mitrani exact real computer arithmetic with continued fractions we introduce a representation of the computable real numbers by continued fractions. this deals with the subtle points of undecidable comparison an integer division, as well as representing the infinite 1/0 and undefined 0/0 numbers. two general algorithms for performing arithmetic operations are introduced. the algebraic algorithm, which computes sums and products of continued fractions as a special case, basically operates in a positional manner, producing one term of output for each term of input. the transcendental algorithm uses a general formula of gauss to compute the continued fractions of exponentials, logarithms, trigonometric functions, as well as a wide class of special functions. a prototype system has been implemented in lelisp, and the performance of these algorithms is promising. jean vuillemin algorithm 799: revolve: an implementation of checkpointing for the reverse or adjoint mode of computational differentiation in its basic form, the reverse mode of computational differentiation yields the gradient of a scalar- valued function at a cost that is a small multiple of the computational work needed to evaluate the function itself. however, the corresponding memory requirement is proportional to the run-time of the evaluation program. therefore, the practical applicability of the reverse mode in its original formulation is limited despite the availability of ever larger memory systems. this observation leads to the development of checkpointing schedules to reduce the storage requirements. 
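a toy sketch of the checkpointing idea that motivates revolve: store the forward state only every c steps and recompute the missing steps from the nearest checkpoint during the reverse sweep. the uniform spacing used here is deliberately naive, whereas revolve produces provably optimal schedules.

```python
def forward_step(x):
    return 0.5 * x + 1.0              # stand-in for one expensive time step

def reverse_sweep(x0, n_steps, checkpoint_every, use_state):
    """replay the states x_{n-1}, ..., x_0 in reverse order with sparse checkpoints."""
    checkpoints = {}
    x = x0
    for i in range(n_steps):
        if i % checkpoint_every == 0:
            checkpoints[i] = x        # o(n / c) memory instead of o(n)
        x = forward_step(x)
    for i in reversed(range(n_steps)):
        base = (i // checkpoint_every) * checkpoint_every
        x = checkpoints[base]
        for _ in range(i - base):     # recompute forward from the nearest checkpoint
            x = forward_step(x)
        use_state(i, x)               # e.g. accumulate adjoints here

reverse_sweep(0.0, 10, checkpoint_every=3,
              use_state=lambda i, x: print(i, round(x, 4)))
```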
this article presents the function revolve, which generates checkpointing schedules that are provably optimal with regard to a primary and a secondary criterion. this routine is intended to be used as an explicit "controller" for running a time-dependent applications program. andreas griewank andrea walther some statistics on gaussian elimination with partial pivoting a. j. macleod a test for cancellation errors in quasi-newton methods it has recently been shown that cancellation errors in a quasi-newton method can increase without bound as the method converges. a simple test is presented to determine when cancellation errors could lead to significant contamination of the approximating matrix. chaya gurwitz a test problem generator for discrete linear l 1 approximation problems k. l. hoffman d. r. shier on the significance of the theory of convex polyhedra for formal algebra this paper was originally published as ber die bedeutung der theorie der konvexen polyeder f r die formale algebra in jahresbericht d. deutschen math. ver. **_30_** (1922): 98-99. it is based on a talk given in jena to the german mathematical society on 23 september 1921. quoted in ostrowski's later paper on multiplication and factorization of polynomials. i lexicographic orderings and extreme aggregates of terms. aequationes math. **_13_** (1975): 201-228. a. ostrowski two exercises j. järvinen t. pasanen implementation of a lattice method for numerical multiple integration an implementation of a method for numerical multiple integration based on a sequence of imbedded lattice rules is given. besides yielding an approximation to the integral, this implementation also provides an error estimate which does not require much extra computation. the results of some numerical experiments conclude the paper. stephen joe ian h. sloan a unified approach to path problems robert endre tarjan combinatorial optimization: an integer programming perspective vijay chandru m. r. rao static analysis of large programs (invited talk) (abstract only): some experiences our research group at microsoft has spent some effort over the last few years attempting to apply static analysis methods to large application programs (over a million lines of code). in the first part of the talk, i will share some of the insights we have gained along the way. the first insight is that the static analysis method of interest must scale to large programs. it must scale in terms of performance, both running time and memory requirements. the interesting complexity metric is average-case behaviour. the analysis must also scale in terms of the quality of information produced. this metric is hard to measure, and depends on the problem to be solved. the second insight is that large commercial applications differ from the benchmark programs typically used in the literature in many ways beyond sheer size: for instance, they routinely circumvent the type system, they make use of every conceivable language feature, they use large shared libraries, they contain some very large automatically generated functions, they define functions with large numbers of call sites, and they include many indirect call sites. all of these characteristics make analysis hard. in particular, they make the implementation of a scalable analysis an exercise in careful engineering. 
in the second part of the talk, i will make the claim that it is possible to develop scalable static analysis methods for large programs, using the following approach: first, implement an efficient algorithm, and carefully engineer it to scale. important requirements are modularity, a complete parser, and some form of garbage collection. next, identify the conceptual limitations of the efficient algorithm, and examine the test programs to identify common cases or idioms in the code that expose these limitations. finally, extend the original algorithm in a principled manner to account for these common cases, without compromising performance. i will use my work on pointer analysis of large programs as an example of this approach. the key insight is that it is possible to identify common idioms in the code that are the bottlenecks for efficient algorithms. a scalable static analysis method can be developed if these idioms can be identified and accommodated. manuvir das a fast algorithm for code movement optimisation d. m. dhamdhere exploiting zeros on the diagonal in the direct solution of indefinite sparse symmetric linear systems we describe the design of a new code for the solution of sparse indefinite symmetric linear systems of equations. the principal difference between this new code and earlier work lies in the exploitation of the additional sparsity available when the matrix has a significant number of zero diagonal entries. other new features have been included to enhance the execution speed, particularly on vector and parallel machines. i. s. duff j. k. reid software considerations for the "black box" solver fidisol for partial differential equations fidisol is a program package for the solution of nonlinear systems of two- dimensional and three-dimensional elliptic and parabolic partial differential equations (pdes) with nonlinear boundary conditions (bcs) on the boundaries of a rectangular domain. a finite difference method (fdm) with an arbitrary grid and arbitrary consistency order is used, these are either prescribed by the user or are self-adapted for a given relative tolerance. fidisol has been designed to be fully vectorizable on vector computers. in this paper we discuss several problems from the viewpoint of software development and user interface, for example, how to deliver the pdes and bcs to fidisol and how to allow a flexible use by a suitable parameter list. willi schönauer eric schnepf a fast parallel algorithm for the maximal independent set problem a parallel algorithm is presented that accepts as input a graph g and produces a maximal independent set of vertices in g. on a p-ram without the concurrent write or concurrent read features, the algorithm executes in o((log n)4) time and uses o((n/(log n))3) processors, where n is the number of vertices in g. the algorithm has several novel features that may find other applications. these include the use of balanced incomplete block designs to replace random sampling by deterministic sampling, and the use of a "dynamic pigeonhole principle" that generalizes the conventional pigeonhole principle. richard m. karp avi wigderson on a single server queue with non-renewal input appie van de liefvoort aby tehranipour generating numerical ode formulas via a symbolic calculus of divided differences divided differences of vector functions can be applied to generate numerical update formulas for odes, as proposed by w. kahan. a computer algebra system renders the symbolic manipulations practical to perform. 
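the geddes entry above manipulates divided differences symbolically in maple; purely as a numerical reminder of the object involved, a small sketch of a newton divided-difference table and the resulting interpolant follows (hypothetical sample data).

```python
def divided_differences(xs, ys):
    """return the newton divided-difference coefficients f[x0], f[x0,x1], ..."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, t):
    """evaluate the newton form at t by horner-like nesting."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (t - xs[i]) + coef[i]
    return result

xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 5.0, 10.0]   # samples of 1 + x**2
coef = divided_differences(xs, ys)
print(coef, newton_eval(xs, coef, 1.5))                 # 1.5**2 + 1 = 3.25
```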
the technique is presented here using the maple system. some examples illustrate the potential applications of this hybrid symbolic-numeric approach to solving initial-value problems. k. o. geddes algorithm 658: a program for solving separable elliptic equations this paper presents a program serrg2 that solves separable elliptic equations on a rectangle. the program uses a matrix decomposition technique to directly solve the linear system arising from a rayleigh-ritz-galerkin approach with tensor product b-splines to solve the separable partial differential equation. this approach permits high-order discretizations, variable meshes, and multiple knots. linda kaufman daniel d. warner algorithm 678 btpec: sampling from the binomial distribution the fortran implementation of an exact, uniformly fast algorithm for generating binomial random variables is presented. the algorithm is numerically stable and is faster than other published algorithms. the code uses only standard fortran statements and is portable to most computers; it has been tested on the ibm 370, 3033, 4381, dec vax 11/780, sun 3/50, cdc 6500-6600, encore multimax, and apple macintosh plus. a driver program is also included. voratas kachitvichyanukul bruce w. schmeiser hurry: an acceleration algorithm for scalar sequences and series theodore fessler william f. ford david a. smith integer arithmetic h. k. hodge fully dynamic algorithms for edge connectivity problems zvi galil giuseppe f. italiano sprint2d: adaptive software for pdes sprint2d is a set of software tools for solving both steady and unsteady partial differential equations in two space variables. the software consists of a set of coupled modules for mesh generation, spatial discretization, time integration, nonlinear equations, linear algebra, spatial adaptivity, and visualization. the software uses unstructured triangular meshes and adaptive local error control in both space and time. the class of problems solved includes systems of parabolic, elliptic, and hyperbolic equations, the latter by use of riemann-solver-based methods. this article describes the software and shows how the adaptive techniques may be used to increase the reliability of the solution for a burgers' equation problem, an electrostatics problem from elastohydrodynamic lubrication, and a challenging gas jet problem. m. berzins r. fairlie s. v. pennington j. m. ware l. e. scales numerical comparisons of some explicit runge-kutta pairs of orders 4 through 8 we performed numerical testing of six explicit runge-kutta pairs ranging in order from a (3,4) pair to a (7,8) pair. all the test problems had smooth solutions, and we assumed dense output was not required. the pairs were implemented in a uniform way. in particular, the stepsize selection for all pairs was based on the locally optimal formula. we tested the efficiency of the pairs, to what extent tolerance proportionality held, the accuracy of the local error estimate and stepsize prediction, and the performance on mildly stiff problems. we also showed, for these pairs, how the performance could be altered noticeably by making simple changes to the stepsize selection strategy. as part of the work, we demonstrated new ways of presenting numerical comparisons. ---from the author's abstract p. w. sharp multiplicative, congruential random-number generators with multiplier ±2^k1 ± 2^k2 and modulus 2^p - 1 the demand for random numbers in scientific applications is increasing.
however, the most widely used multiplicative, congruential random-number generators with modulus 2^31 - 1 have a cycle length of about 2.1 × 10^9. moreover, developing portable and efficient generators with a larger modulus such as 2^61 - 1 is more difficult than with modulus 2^31 - 1. this article presents the development of multiplicative, congruential generators with modulus m = 2^p - 1 and four forms of multipliers: 2^k1 - 2^k2, 2^k1 + 2^k2, m - 2^k1 + 2^k2, and m - 2^k1 - 2^k2, with k1 > k2. the multipliers for moduli 2^31 - 1 and 2^61 - 1 are measured by spectral tests, and the best ones are presented. the generators with these multipliers are portable and very fast. they have also passed several empirical tests, including the frequency test, the run test, and the maximum-of-t test. pei-chi wu algorithm 565: pdetwo/psetm/gearb: solution of systems of two-dimensional nonlinear partial differential equations [d3] david k. melgaard richard f. sincovec the search for and classification of integrable abel ode classes edgardo s. cheb-terrab theodore kolokolnikov austin d. roche tables of 64-bit mersenne twisters we give new parameters for a mersenne twister pseudorandom number generator for 64-bit word machines. takuji nishimura control-theoretic techniques for stepsize selection in implicit runge-kutta methods the problem of stepsize selection in implicit runge-kutta schemes is analyzed from a feedback control point of view. this approach leads to a better understanding of the relation between stepsize and error. a new dynamical model describing this relation is derived. the model is used as a basis for a new stepsize selection rule. this rule achieves better error control at little extra expense. the properties of the new model and the improved performance of the new error control are demonstrated using both analysis and numerical examples. kjell gustafsson fast algorithms for convex quadratic programming and multicommodity flows s kapoor p m vaidya on approximating arbitrary metrics by tree metrics yair bartal algorithm 592: a fortran subroutine for computing the optimal estimate of f(x) p. w. gaffney more efficient computation of the complex error function gautschi has developed an algorithm that calculates the value of the faddeeva function w(z) for a given complex number z in the first quadrant, up to 10 significant digits. we show that by modifying the tuning of the algorithm and testing the relative rather than the absolute error we can improve the accuracy of this algorithm to 14 significant digits throughout almost the whole of the complex plane, as well as increase its speed significantly in most of the complex plane. the efficiency of the calculation is further enhanced by using a different approximation in the neighborhood of the origin, where the gautschi algorithm becomes ineffective. finally, we develop a criterion to test the reliability of the algorithm's results near the zeros of the function, which occur in the third and fourth quadrants. g. p. m. poppe c. m. j. wijers an algorithm for finding hamilton cycles in random graphs this paper describes a polynomial time algorithm ham that searches for hamilton cycles in undirected graphs. on a random graph its asymptotic probability of success is that of the existence of such a cycle. if all graphs with n vertices are considered equally likely, then using dynamic programming on failure leads to an algorithm with polynomial expected time.
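to make the wu generators described above concrete: with a mersenne-prime modulus and a multiplier built from two powers of two, one step of the recurrence reduces to shifts, a subtraction, and a cheap modular reduction. the exponents below are placeholders for illustration, not the spectrally tested multipliers reported in the article.

```python
# sketch of a multiplicative congruential generator x <- a*x mod (2**p - 1) with
# a = 2**k1 - 2**k2, so the product needs only two shifts; exponents are placeholders.
P, K1, K2 = 31, 28, 8
M = (1 << P) - 1                      # mersenne modulus 2**p - 1

def step(x):
    y = (x << K1) - (x << K2)         # a * x computed with shifts only
    return y % M                      # a real implementation folds high/low p-bit halves

x = 123456789
for _ in range(5):
    x = step(x)
    print(x / M)                      # uniform-looking values in (0, 1)
```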
finally, it is used in an algorithm for solving the symmetric bottleneck travelling salesman problem with probability tending to 1 as n tends to ∞. b bollobás t i fenner a m frieze algorithms: reducing the order of a power series joseph l. sowers on the discrepancy of gfsr pseudorandom numbers a new summation formula based on the orthogonal property of walsh functions is devised. using this formula, the k-dimensional discrepancy of the generalized feedback shift register (gfsr) pseudorandom numbers is derived. the relation between the discrepancy and k-distribution of gfsr sequences is also obtained. finally, the definition of optimal gfsr pseudorandom number generators is introduced. shu tezuka algorithm 732; solvers for self-adjoint elliptic problems in irregular two-dimensional domains software is provided for the rapid solution of certain types of elliptic equations in rectangular and irregular domains. specifically, solutions are found in two dimensions for the nonseparable self-adjoint elliptic problem ∇·(g∇y) = f, where g and f are given functions of x and y, in two-dimensional polygonal domains with dirichlet boundary conditions. helmholtz and poisson problems in polygonal domains and the general variable-coefficient problem (i.e., g ≢ 1) in a rectangular domain may be treated as special cases. the method of solution combines the use of the capacitance matrix method, to treat the irregular boundary, with an efficient iterative method (using the laplacian as preconditioner) to deal with nonseparability. each iterative step thus involves solving the poisson equation in a rectangular domain. the package includes separate, easy-to-use routines for the helmholtz problem and the general problem in rectangular and general polygonal domains, and example driver routines for each. both single- and double-precision routines are provided. second-order-accurate finite differencing is employed. storage requirements increase approximately as p^2 + n^2, where p is the number of irregular boundary points and where n is the linear domain dimension. the preprocessing time (the capacitance matrix calculation) varies as pn^2 log n, and the solution time varies as n^2 log n. if the equations are to be solved repeatedly in the same geometry, but with different source or diffusion functions, the capacitance matrix need only be calculated once, and hence the algorithm is particularly efficient for such cases. patrick f. cummins geoffrey k. vallis fast set operations using treaps guy e. blelloch margaret reid-miller lifting markov chains to speed up mixing fang chen laszlo lovasz igor pak performance evaluation of programs for certain bessel functions this paper presents methods for performance evaluation of the k bessel functions. accuracy estimates are based on comparisons involving the multiplication theorem. some ideas for checking robustness are also given. the techniques used here are easily extended to the y bessel functions and, with a little more effort, to the i and j functions. details on a specific implementation for testing the k bessel functions are included. w. j. cody l. stoltz a global optimization algorithm using stochastic differential equations sigma is a set of fortran subprograms for solving the global optimization problem, which implements a method founded on the numerical solution of a cauchy problem for a stochastic differential equation inspired by statistical mechanics.
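the sigma package just mentioned integrates a stochastic differential equation; as a loose, hedged illustration of that general idea (not the sigma algorithm itself), this sketch applies an euler-maruyama discretization of noisy gradient flow, with a slowly cooled noise level, to a small double-well test function of my own choosing.

```python
import numpy as np

def annealed_langevin(grad, x0, steps=50000, h=1e-3, t0=2.0, seed=0):
    """euler-maruyama steps of dx = -f'(x) dt + sqrt(2 t(k)) dw with slowly decreasing t(k)."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    for k in range(steps):
        t = t0 / np.log(k + 2.0)          # logarithmic cooling of the noise level
        x += -h * grad(x) + np.sqrt(2.0 * t * h) * rng.standard_normal()
    return x

# asymmetric double well f(x) = (x**2 - 1)**2 + 0.5*x: shallow minimum near x = +0.93,
# deeper (global) minimum near x = -1.06
grad = lambda x: 4.0 * x * (x * x - 1.0) + 0.5
print(annealed_langevin(grad, x0=1.5))    # often ends near the deeper well; seed-dependent
```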
this paper gives a detailed description of the method as implemented in sigma and reports the results obtained by sigma attacking, on two different computers, a set of 37 test problems which were proposed elsewhere by the present authors to test global optimization software. the main conclusion is that sigma performs very well, solving 35 of the problems, including some very hard ones. unfortunately, the limited results available to us at present do not appear sufficient to enable a conclusive comparison with other global optimization methods. filippo aluffi-pentini valerio parisi francesco zirilli on recursive calculation of the generalized inverse of a matrix the generalized inverse of a matrix is an extension of the ordinary square matrix inverse which applies to any matrix (e.g., singular, rectangular). the generalized inverse has numerous important applications such as regression analysis, filtering, optimization and, more recently, linear associative memories. in this latter application known as distributed associative memory, stimulus vectors are associated with response vectors and the result of many associations is spread over the entire memory matrix, which is calculated as the generalized inverse. addition/deletion of new associations requires recalculation of the generalized inverse, which becomes computationally costly for large systems. a better solution is to calculate the generalized inverse recursively. the proposed algorithm is a modification of the well known algorithm due to rust et al. [2], originally introduced for nonrecursive computation. we compare our algorithm with greville's recursive algorithm and conclude that our algorithm provides better numerical stability at the expense of little extra computation time and additional storage. saleem mohideen vladimir cherkassky an alternative to chan's deflation for bordered systems w govaerts j d pryce the traveling salesman problem in graphs with 3-edge cutsets this paper analyzes decomposition properties of a graph that, when they occur, permit a polynomial solution of the traveling salesman problem and a description of the traveling salesman polytope by a system of linear equalities and inequalities. the central notion is that of a 3-edge cutset, namely, a set of 3 edges that, when removed, disconnects the graph. conversely, our approach can be used to construct classes of graphs for which there exists a polynomial algorithm for the traveling salesman problem. the approach is illustrated on two examples, halin graphs and prismatic graphs. g. cornuejols d. naddef w. pulleyblank how i implemented the karmarkar algorithm in one evening e. r. swart computing the mdmt decomposition the mdmt factorization of an n×n symmetric indefinite matrix a can be used to solve a linear system with a as the coefficient matrix. this factorization can be computed efficiently using an algorithm given in 1977 by bunch and kaufman. the lapack project has been implementing block versions of well-known algorithms for solving dense linear systems and eigenvalue problems. the block version of the mdmt decomposition algorithm in lapack requires the user to specify a block size b by supplying an n×b scratch array. it then makes (n/b 2)2/2 invocations of a matrix-matrix product subroutine with one matrix no larger than b×b, n2/(2b) b invocations of a matrix-vector product routine with a matrix no larger than b×b, and between n n/b and 2(n n/b) invocations of a matrix-vector product routine with matrices with less than b columns. 
because the user can query lapack about an optimal block size, our concern is focused on users who cannot change the amount of available scratch space or who neglect to use this facility and are unaware of a performance degradation with small block sizes. this article suggests two alternative algorithms. the first is a block algorithm requiring b×b scratch space and is about 5% slower than lapack's current block algorithm with large b. the user does not have to specify the block size. the second algorithm is a rejuvenation of an old implementation of the mdmt decomposition algorithm that requires n matrix- vector products. the performance of the various algorithms on a specific machine is dependent on the manufacturer's implementation of the different basic linear algebra subroutines that they each invoke. our data indicate that on the cray y-mp, alliant, and convex the time for the rejuvenated algorithm is either less than or within 10% of that of lapack's block algorithm with large b. linda kaufman matlab-implementation of mm3a1 - algorithm for optimal singular adaptive observation of miso linear discrete systems lyubomir sotirov nikola nikolov stella sotirova remark on algorithm 723: fresnel integrals stuart anderson algorithm 717; subroutines for maximum likelihood and quasi-likelihood estimation of parameters in nonlinear regression models we present fortran 77 subroutines that solve statistical parameter estimation problems for general nonlinear models, e.g., nonlinear least-squares, maximum likelihood, maximum quasi-likelihood, generalized nonlinear least-squares, and some robust fitting problems. the accompanying test examples include members of the generalized linear model family, extensions using nonlinear predictors ("nonlinear glim"), and probabilistic choice models, such as linear-in- parameter multinomial probit models. the basic method, a generalization of the nl2sol algorithm for nonlinear least-squares, employs a model/trust-region scheme for computing trial steps, exploits special structure by maintaining a secant approximation to the second-order part of the hessian, and adaptively switches between a gauss-newton and an augmented hessian approximation. gauss- newton steps are computed using a corrected seminormal equations approach. the subroutines include variants that handle simple bounds on the parameters, and that compute approximate regression diagnostics. david s. bunch david m. gay roy e. welsch on extracting randomness from weak random sources (extended abstract) amnon ta-shma a class of logarithmic integrals victor adamchik getting rid of correlations among pseudorandom numbers: discarding versus tempering we consider the impact of discarding and tempering on modern huge period high speed linear generators, and illustrate how a simple strategy yields unexpected &mdashh; and unwanted --- success in a fair coin gambling which is simulated by a recently proposed generator. it becomes clear that discarding is no general rule to get rid of unwanted correlations. stefan wegenkittl makoto matsumoto five-year cumulative author index (vol. 10--14. 1984--1988). anonymous testing unconstrained optimization software jorge j. more burton s. garbow kenneth e. hillstrom algorithm 670: a runge-kutta-nystr m code r. w. brankin i. gladwell j. r. dormand p. j. prince w. l. 
seward efficient and portable combined random number generators in this paper we present an efficient way to combine two or more multiplicative linear congruential generators (mlcgs) and propose several new generators. the individual mlcgs making up the proposed combined generators satisfy stringent theoretical criteria for the quality of the sequence they produce (based on the spectral test) and are easy to implement in a portable way. the proposed simple combination method is new and produces a generator whose period is the least common multiple of the individual periods. each proposed generator has been submitted to a comprehensive battery of statistical tests. we also describe portable implementations, using 16-bit or 32-bit integer arithmetic. the proposed generators have most of the beneficial properties of mlcgs. for example, each generator can be split into many independent generators, and it is easy to skip a long subsequence of numbers without doing the work of generating them all. p. l'ecuyer an optimal randomized logarithmic time connectivity algorithm for the erew pram (extended abstract) improving a long chain of works, we obtain a randomized erew pram algorithm for finding the connected components of a graph g=(v,e) with n vertices and m edges in o(log n) time using an optimal number of o((m+n)/log n) processors. the result returned by the algorithm is always correct. the probability that the algorithm will not complete in o(log n) time is at most n^-c for any desired c > 0. the best deterministic erew pram connectivity algorithm, obtained by chong and lam, runs in o(log n log log n) time using m + n processors. shay halperin uri zwick formal solutions of scalar singularly-perturbed linear differential equations y. o. macutan computer evaluation of cyclicity in planar cubic system victor f. edneral an acceptance-complement analogue of the mixture-plus-acceptance-rejection method for generating random variables richard a. kronmal arthur v. peterson the tree structure of exponential calculations - addendum r mcbeth algorithm 702; tnpack - a truncated newton minimization package for large-scale problems: i. algorithm and usage we present a fortran package of subprograms for minimizing multivariate functions without constraints by a truncated newton algorithm. the algorithm is especially suited for problems involving a large number of variables. truncated newton methods allow approximate, rather than exact, solutions to the newton equations. truncation is accomplished in the present version by using the preconditioned conjugate gradient algorithm (pcg) to solve approximately the newton equations. the preconditioner m is factored in pcg using a sparse modified cholesky factorization based on the yale sparse matrix package. in this paper we briefly describe the method and provide details for program usage. tamar schlick aaron fogelson solving systems of nonlinear equations using the nonzero value of the topological degree two algorithms are described here for the numerical solution of a system of nonlinear equations f(x) = θ_n, where θ_n = (0, 0, …, 0) ∈ r^n and f is a given continuous mapping of a region d in r^n into r^n. the first algorithm locates at least one root of the system within an n-dimensional polyhedron, using the nonzero value of the topological degree of f at θ_n relative to the polyhedron; the second algorithm applies a new generalized bisection method in order to compute an approximate solution to the system.
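as a small illustration of the combination idea in the l'ecuyer entry at the top of this block: two portable mlcgs are stepped independently and their states are combined by a modular difference. the moduli and multipliers below are commonly quoted 32-bit choices for this kind of combined generator and are used here only as placeholders, not as a statement of the article's exact recommendation.

```python
# sketch of combining two multiplicative lcgs by a modular difference of their states.
M1, A1 = 2147483563, 40014
M2, A2 = 2147483399, 40692

def make_combined(seed1=12345, seed2=67890):
    s1, s2 = seed1, seed2
    def rand():
        nonlocal s1, s2
        s1 = (A1 * s1) % M1
        s2 = (A2 * s2) % M2
        z = (s1 - s2) % (M1 - 1)
        if z == 0:
            z = M1 - 1
        return z / M1                  # uniform value in (0, 1)
    return rand

rand = make_combined()
print([round(rand(), 6) for _ in range(5)])
```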
the size of the original n-dimensional polyhedron is arbitrary, and the method is globally convergent in a residual sense. these algorithms, in the various function evaluations, only make use of the algebraic sign of f and do not require computations of the topological degree. moreover, they can be applied to nondifferentiable continuous functions f and do not involve derivatives of f or approximations of such derivatives. michael n. vrahatis cross-bispectrum computation and variance estimation k. s. lii k. n. helland remark on algorithm 30 milan novotny finite element methods j. tinsley oden control theoretic techniques for stepsize selection in explicit runge-kutta methods kjell gustafsson efficient and portable combined tausworthe random number generators shu tezuka pierre l'ecuyer corrigendum: algorithm 725: computation of the multivariate normal integral zvi drezner algorithm 773: ssrfpack: interpolation of scattered data on the surface of a sphere with a surface under tension ssrfpack is a fortran 77 software package that constructs a smooth interpolatory or approximating surface to data values associated with arbitrarily distributed points on the surface of a sphere. it employs automatically selected tension factors to preserve shape properties of the data and avoid overshoot and undershoot associated with steep gradients. robert j. renka algorithm 643: fexact: a fortran subroutine for fisher's exact test on unordered r × c contingency tables the computer code for mehta and patel's (1983) network algorithm for fisher's exact test on unordered r×c contingency tables is provided. the code is written in double precision fortran 77. this code provides the fastest currently available method for executing fisher's exact test, and is shown to be orders of magnitude superior to any other available algorithm. many important details of data structures and implementation that have contributed crucially to the success of the network algorithm are recorded here. cyrus r. mehta nitin r. patel algorithm 684: c1- and c2-interpolation on triangles with quintic and nonic bivariate polynomials albrecht preusser algorithm 601: a sparse-matrix package - part ii: special cases j. m. mcnamee verifying partial orders we present a randomized algorithm which uses o(n(log n)^1/3) expected comparisons to verify that a given partial order holds on n elements from an unknown total order. c. kenyon-mathieu v. king a partial pivoting strategy for sparse symmetric matrix decomposition it is well known that the partial pivoting strategy by bunch and kaufman is very effective for factoring dense symmetric indefinite matrices using the diagonal pivoting method. in this paper, we study a threshold version of the strategy for sparse symmetric matrix decomposition. the use of this scheme is explored in the multifrontal method of duff and reid for sparse indefinite systems. experimental results show that it is at least as effective as the existing pivoting strategy used in the current multifrontal implementation. joseph w. h.
liu matlab - implementation of m2a5-algorithm for optimal singular adaptive observation of siso linear discrete systems lyubomir sotirov nikola nikolov stella sotirova an instability problem in chebyshev expansions for special functions allan macleod space and time efficient implementations of parallel nested dissection deganit armon john reif algorithm 711; btn: software for parallel unconstrained optimization btn is a collection of fortran subroutines for solving unconstrained nonlinear optimization problems. it currently runs on both intel hypercube computers (distributed memory) and sequent computers (shared memory), and can take advantage of vector processors if they are available. the software can also be run on traditional computers to simulate the performance of a parallel computer. btn is a general-purpose algorithm, capable of solving problems with a large numbers of variables and suitable for users inexperienced with parallel computing. it is designed to be as easy to use as traditional algorithms for this problem, requiring only that a (scalar) subroutine be provided to evaluate the objective function and its gradient vector of first derivatives. the algorithm is based on a block truncated-newton method. truncated-newton methods obtain the search direction by approximately solving the newton equations via some iterative method. the particular method used in btn is a block version of the lanczos method, which is numerically stable for nonconvex problems. in addition to the optimization software, a parallel derivative checker is also provided. stephen g. nash ariela sofer spanning tree based state encoding for low power dissipation winfried nöth reiner kolla equilibrium states of runge kutta schemes understanding the behavior of runge-kutta codes when stability considerations restrict the stepsize provides useful information for stiffness detection and other implementation details. analysis of equilibrium states on test problems is presented which provides predictions and insights into this behavior. the implications for global error are also discussed. george hall an algorithm for generating chi random variables an algorithm is presented for generating random variables from the chi family of distributions withdegrees of freedom parameter ly 2 1. it is based on the ratio of uniforms method and can be usedeffectively for the gamma family. john f. monahan small sets supporting fary embeddings of planar graphs answering a question of rosenstiehl and tarjan, we show that every plane graph with n vertices has a fary embedding (i.e., straight-line embedding) on the 2n \\- 4 by n \\- 2 grid and provide an (n) space, (n log n) time algorithm to effect this embedding. the grid size is asymptotically optimal and it had been previously unknown whether one can always find a polynomial sized grid to support such an embedding. on the other hand we show that any set f, which can support a fary embedding of every planar graph of size n, has cardinality at least n \\+ (1 - (1)) n which settles a problem of mohar. hubert de fraysseix jános pach richard pollack software for estimating sparse hessian matrices the solution of a nonlinear optimization problem often requires an estimate of the hessian matrix for a function f. in large scale problems, the hessian matrix is usually sparse, and then estimation by differences of gradients is attractive because the number of differences can be small compared to the dimension of the problem. 
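the coleman/garbow/moré software described above estimates a sparse hessian from few gradient differences by grouping structurally orthogonal columns; a small dense-pattern sketch of that grouping idea follows, using a greedy grouping and a hypothetical tridiagonal test function, so it will generally need more groups than their routines.

```python
import numpy as np

def group_columns(pattern):
    """greedily group columns whose sparsity patterns do not overlap in any row."""
    groups = []
    for j in range(pattern.shape[1]):
        for g in groups:
            if not any((pattern[:, j] & pattern[:, k]).any() for k in g):
                g.append(j)
                break
        else:
            groups.append([j])
    return groups

def estimate_hessian(grad, x, pattern, h=1e-6):
    """finite-difference hessian estimate using one gradient difference per group."""
    n = len(x)
    hess = np.zeros((n, n))
    g0 = grad(x)
    for g in group_columns(pattern):
        d = np.zeros(n)
        d[g] = h
        dg = (grad(x + d) - g0) / h
        for j in g:
            rows = np.nonzero(pattern[:, j])[0]
            hess[rows, j] = dg[rows]   # rows touched by column j are unambiguous
    return hess

# tridiagonal example: f(x) = sum(x**2) + sum(x[:-1] * x[1:])  (hypothetical test function)
grad = lambda x: 2 * x + np.r_[x[1:], 0.0] + np.r_[0.0, x[:-1]]
n = 6
pattern = np.eye(n, dtype=bool) | np.eye(n, k=1, dtype=bool) | np.eye(n, k=-1, dtype=bool)
print(np.round(estimate_hessian(grad, np.linspace(0.0, 1.0, n), pattern), 3))
```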
in this paper we describe a set of subroutines whose purpose is to estimate the hessian matrix with the least possible number of gradient evaluations. thomas f. coleman burton s. garbow jorge j. more algorithm 700: a fortran software package for sturm - liouville problems paul b. bailey anton zettl algorithm 731; a moving-grid interface for systems of one-dimensional time- dependent partial differential equations in the last decade, several numerical techniques have been developed to solve time-dependent partial differential equations (pdes) in one dimension having solutions with steep gradients in space and in time. one of these techniques, a moving-grid method based on a lagrangian description of the pde and a smoothed-equidistribution principle to define the grid positions at each time level, has been coupled with a spatial discretization method that automatically discreizes the spatial part of the user-defined pde following the method of lines approach. we supply two fortran subroutines, cwresu and cwresx, which compute the residuals of the differential algebraic equations (dae) system obtained from semidiscretizing, respectively, the pde and the set of moving-grid equations. these routines are combined in an enveloping routine skmres, which delivers the residuals of the complete dae system. to solve this stiff, nonlinear dae system, a robust and efficient time-integrator must be applied, for example, a bdf method such as implemented in the dae solvers sprint [berzins and furzeland 1985; 1986; berzins et al. 1989] and dassl [brenan et al. 1989; petzold 1983]. some numerical examples are shown to illustrate the simple and effective use of this software interface. j. g. blom p. a. zegeling control of interpolatory error in retarded differential equations kenneth w. neves problems, puzzles, challenges g. j. fee m. b. monagan implementation of hopf and double-hopf continuation using bordering methods we discuss the computational study of curves of hopf and double-hopf points in the software package content developed at cwi, amsterdam. these are important points in the numerical study of dynamical systems characterized by the occurrence of one or two conjugate pairs of pure imaginary eigenvalues in the spectrum of the jacobian matrix. the bialternate product of matrices is extensively used in three codes for the numerical continuation of curves of hopf points and in one for the continuation of curves of double-hopf points. in the double-hopf and two of the single-hopf cases this is combined with a bordered matrix method. we use this software to find special points on a hopf curve in a model of chemical oscillations and by computing a hopf and a double-hopf curve in a realistic model of a neuron. w. govaerts yu. a. kuznetsov b. sijnave algorithm 742; l2cxft: a fortran subroutine for least-squares data fitting with nonnegative second divided differences a fortran subroutine applies the method of demetriou and powell [1991] to restore convexity in n measurements of a convex function contaminated by random errors. the method minimizes the sum of the squares of the errors, subject to nonnegativity of second divided differences, in two phases. first, an approximation close to the optimum is derived in o(n) operations. then, this approximation is used as the starting point of a dual-feasible quadratic programming algorithm that completes the calculation of the optimum. 
the constraints allow b-splines to be used, which reduce the problem to an equivalent one with fewer variables where the knots of the splines are determined automatically from the data points due to the constraint equations. the subroutine benefits from this reduction, since common submatrices that occur during the calculation are updated suitably. iterative refinement improves the accuracy of some calculations when round-off errors accumulate. the subroutine has been applied to a variety of data having substantial differences and has performed fast and stably even for small data spacing, large n, and single-precision arithmetic. driver programs and examples with output are provided to demonstrate the use of the subroutine. i. c. demetriou a numerical technique for analytic continuation (abstract only) the principle of analytic continuation for a complex analytic function is well known. a numerical technique which directly employs the cauchy-riemann equations will be presented for performing analytic continuation in a region. the technique can be readily implemented in a concurrent programming environment in which the region is subdivided with a grid and a process created at each grid position. since any process can transition to its final state as soon as its immediate neighbors have obtained a suitable configuration, analytic continuation is able to "flow" naturally throughout the region. this work is motivated by considerations of adaptability to various regions and not by efficiency. however, the nature of the interprocess communication is critical. this is also described. mark temte algorithm 689; discretized collocation and iterated collocation for nonlinear volterra integral equations of the second kind this paper describes a fortran code for calculating approximate solutions to systems of nonlinear volterra integral equations of the second kind. the algorithm is based on polynomial spline collocation, with the possibility of combination with the corresponding iterated collocation. it exploits certain local superconvergence properties for the error estimation and the stepsize strategy. j. g. blom h. brunner a block 6(4) runge-kutta formula for nonstiff initial value problems a new selection is made for an efficient two-step block runge-kutta formula of order 6. the new formula is developed using some of the efficiency criteria recently investigated by shampine, and as a result, a block formula with much improved performance is obtained. an important property of this new formula is that there is a "natural" interpolating polynomial available. this can be used to compute approximate solution values at off-step points without the need to compute any additional function evaluations. the quality of this interpolant is examined, and it is shown to have certain desirable properties. the performance of the new block runge- kutta formula is evaluated using the detest test set and is shown to be more efficient than certain other standard runge-kutta formulas for this particular test set. j. r. cash using gaussian elimination for computation of the central difference equation coefficients sandra j cynar provably monotone approximations, romannumeral 3 charles b. dunham lower bounds on the complexity of graph properties in this simple model, a decision tree algorithm must determine whether an unknown digraph on nodes {1, 2, …, n} has a given property by asking questions of the form "is edge in the graph?". 
the complexity of a property is the number of questions which must be asked in the worst case. aanderaa and rosenberg conjectured that any monotone, nontrivial, (isomorphism-invariant) n-node digraph property has complexity Ω(n^2). this bound was proved by rivest and vuillemin and subsequently improved to n^2/4 + o(n^2). in part i, we give a bound of n^2/2 + o(n^2). whether these properties are evasive remains open. in part ii, we investigate the power of randomness in recognizing these properties by considering randomized decision tree algorithms in which coins may be flipped to determine the next edge to be queried. yao's lower bound on the randomized complexity of any monotone nontrivial graph property is improved from Ω(n log^(1/12) n) to Ω(n^(5/4)), and improved bounds for the complexity of monotone, nontrivial bipartite graph properties are shown. valerie king remark on "an example of error propagation reinterpreted as subtractive cancellation" by j. a. delaney (signum newsletter 1/96) v. drygalla fixed versus variable order runge-kutta popular codes for the numerical solution of nonstiff ordinary differential equations (odes) are based on a (fixed order) runge-kutta method, a variable order adams method, or an extrapolation method. extrapolation can be viewed as a variable order runge-kutta method. it is plausible that variation of order could lead to a much more efficient runge-kutta code, but numerical comparisons have been contradictory. we reconcile previous comparisons by exposing differences in testing methodology and incompatibilities of the implementations tested. an experimental runge-kutta code is compared to a state-of-the-art extrapolation code. with some qualifications, the extrapolation code shows no advantage. extrapolation does not appear to be a particularly effective way to vary the order of runge-kutta methods. although extrapolation is an acceptable way to solve nonstiff problems, our tests raise the question of whether there is any point in pursuing it as a separate method. l. f. shampine l. s. baca a fast algorithm for the two-variable integer programming problem sidnie dresher feit grey curve and its general problems jinlu liou difficulties in fitting scientific data charles b. dunham a primal null-space affine-scaling method this article develops an affine-scaling method for linear programming in standard primal form. its descent search directions are formulated in terms of the null-space of the linear programming matrix, which, in turn, is defined by a suitable basis matrix. we describe some basic properties of the method and an experimental implementation that employs a periodic basis change strategy in conjunction with inexact computation of the search direction by an iterative method, specifically, the conjugate-gradient method with diagonal preconditioning. the results of a numerical study on a number of nontrivial problems representative of problems that arise in practice are reported and discussed. a key advantage of the primal null-space affine-scaling method is its compatibility with the primal simplex method. this is considered in the concluding section, along with implications for the development of a more practical implementation. k. kim j. l. nazareth remark on algorithm 299 i. d. hill m. c. pike a schwarz splitting variant of cubic spline collocation methods for elliptic pdes we consider the formulation of the schwarz alternating method for a new class of elliptic cubic spline collocation discretization schemes.
the convergence of the method is studied using jacobi and gauss-seidel iterative methods for implementing the interaction among subdomains. the schwarz cubic spline collocation (scsc) method is formulated for hypercube architectures and implemented on the ncube (128 processors) machine. the performance and convergence of the hypercube scsc algorithm are studied with respect to domain partition and subdomain overlapping area. the numerical results indicate that the partition and mapping of the scsc on the ncube are almost optimal while the speedup obtained is similar to other domain decomposition techniques. e. n. houstis j. r. rice e. a. vavalis algorithm 686: fortran subroutines for updating the qr decomposition l. reichel w. b. gragg the queue-length distribution of the m/ck/1 queue the exact closed-form analytic expression of the probability distribution of the number of units in a single-server queue with poisson arrivals and coxian service time distribution is obtained. h. g. perros remarks on implementation of o(n^(1/2)τ) assignment algorithms we examine an implementation and a number of modifications of a 1973 algorithm of hopcroft and karp for permuting a sparse matrix so that there are no zeros on the diagonal. we describe our implementation of the original hopcroft and karp algorithm and compare this with modifications which we prove to have the same o(n^(1/2)τ) behavior, where the matrix is of order n with τ entries. we compare the best of these with an efficient implementation of an algorithm whose worst-case behavior is o(nτ). iain s. duff torbjörn wiberg algorithm 648: nsdtst and stdtst: routines for assessing the performance of iv solvers w. h. enright j. d. pryce a new method for planar graph drawings on a grid (abstract) goos kant increasing robustness in global adaptive quadrature through interval selection heuristics henry d. shapiro determination of maximal symmetry groups of classes of differential equations a symmetry of a differential equation is a transformation which leaves invariant its family of solutions. as the functional form of a member of a class of differential equations changes, its symmetry group can also change. we give an algorithm for determining the structure and dimension of the symmetry group(s) of maximal dimension for classes of partial differential equations. it is based on the application of differential elimination algorithms to the linearized equations for the unknown symmetries. existence and uniqueness theorems are applied to the output of these algorithms to give the dimension of the maximal symmetry group. classes of differential equations considered include odes of the form u_xx = f(x, u, u_x), reaction-diffusion systems of the form u_t - u_xx = f(u, v), v_t - v_xx = g(u, v), and nonlinear telegraph systems of the form v_t = u_x, v_x = c(u, x) u_x + b(u, x). gregory j. reid allan d. wittkopf simulation in exponential families an acceptance-rejection algorithm for the simulation of random variables in statistical exponential families is described. this algorithm does not require any prior knowledge of the family, except sufficient statistics and the value of the parameter. it allows simulation from many members of the exponential family. we present some bounds on computing time, as well as the main properties of the empirical measures of samples simulated by our methods (functional glivenko-cantelli and central limit theorems).
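the barbe and broniatowski abstract above (which continues below) builds on the acceptance-rejection principle; the following generic python sketch shows only that principle, with an assumed truncated-normal target and a uniform proposal, and is not their exponential-family algorithm.

import math, random

def rejection_sample(f, sample_g, g, c):
    # draw from the proposal g and accept with probability f(x) / (c * g(x))
    while True:
        x = sample_g()
        if random.random() <= f(x) / (c * g(x)):
            return x

# target: standard normal restricted to [-4, 4]; proposal: uniform on [-4, 4]
f = lambda x: math.exp(-0.5 * x * x)      # unnormalized target density
g = lambda x: 1.0 / 8.0                   # uniform proposal density
c = 8.0                                   # c * g(x) = 1 >= f(x) everywhere
sample_g = lambda: random.uniform(-4.0, 4.0)

draws = [rejection_sample(f, sample_g, g, c) for _ in range(10000)]
print(sum(draws) / len(draws))            # close to 0 for the truncated normal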
this algorithm is applied in order to evaluate the distribution of m-estimators under composite alternatives; we also propose its use in bayesian statistics in order to simulate from posterior distributions. philippe barbe michel broniatowski on the statistical independence of nonlinear congruential pseudorandom numbers recently, several nonlinear congruential methods for generating uniform pseudorandom numbers have been proposed and analysed. in the present note, further statistical independence properties of a general class of nonlinear congruential pseudorandom number generators are established. the results that are obtained are essentially best possible in an asymptotic sense and show that the generated pseudorandom numbers model truly random numbers very closely in terms of asymptotic discrepancy. jurgen eichenauer- herrmann harald niederreiter toward parallel mathematical software for elliptic partial differential equations three approaches to parallelizing important components of the mathematical software package ellpack are considered: an explicit approach using compiler directives available only on the target machine, an automatic approach using an optimizing and parallelizing precompiler, and a two-level approach based on extensive use of a set of low level computational kernels. the focus is on shared memory architectures. each approach to parallelization is described in detail, along with a discussion of the effort involved. performance on a test problem, using up to sixteen processors of a sequent symmetry s81, is reported and discussed. implications for the parallelization of a broad class of mathematical software are drawn. calvin j. ribbens layne t. watson colin desa data-streams and histograms histograms have been used widely to capture data distribution, to represent the data by a small number of step functions. dynamic programming algorithms which provide optimal construction of these histograms exist, albeit running in quadratic time and linear space. in this paper we provide linear time construction of 1 + ε approximation of optimal histograms, running in polylogarithmic space. our results extend to the context of data-streams, and in fact generalize to give 1 + ε approximation of several problems in data-streams which require partitioning the index set into intervals. the only assumptions required are that the cost of an interval is monotonic under inclusion (larger interval has larger cost) and that the cost can be computed or approximated in small space. this exhibits a nice class of problems for which we can have near optimal data-stream algorithms. sudipto guha nick koudas kyuseok shim a method for the integration of solutions of ore equations sergei a. abramov mark van hoeij an elementary solution of a minimax problem arising in algorithms for automatic mesh selection we fill in some details in the solution of a minimax problem arising in automatic mesh selection. we supply two proofs of a result that, while simple, deserves the attention, in particular because we will need it to establish the complexity of an algorithm for factorization of bivariate approximate polynomials in an upcoming paper. robert m. corless algorithm 563: a program for linearly constrained discrete i1 problems richard h. bartels andrew r. conn corrigenda: "two fortran packages for assessing initial value methods" w. h. enright j. d. 
pryce simulation run length planning for stochastic loss models rayadurgam srikant ward whitt variance reduction: basic transformations a stochastic computer simulation model is a description of a system of interrelated random variables. realizations of the system can be used to estimate parameters of interest. variance reduction techniques (vrts) transform simulation models into similar models that permit more precise estimation of the parameters. the basic types of transformations are defined and a simple example is given. barry l. nelson bruce w. schmeiser interpolatory integration formulas for optimal composition a set of symmetric, closed, interpolatory integration formulas on the interval [-1, 1] with positive weights and increasing degree of precision is introduced. these formulas, called recursive monotone stable (rms) formulas, allow applying higher order or compound rules without wasting previously computed functional values. an exhaustive search shows the existence of 27 families of rms formulas, stemming from the simple trapezoidal rule. paola favati grazia lotti francesco romani a binary algorithm for the jacobi symbol we present a new algorithm to compute the jacobi symbol, based on stein's binary algorithm for the greatest common divisor, and we determine the worst case behavior of this algorithm. our implementation of the algorithm runs approximately 7--25% faster than traditional methods on inputs of size 100-- 1000 decimal digits. jeffrey shallit jonathan sorenson fast hardware random number generator for the tausworthe sequence many simulation programs require m-dimensional uniformly distributed random numbers. a linear recurrence modulo two generator, based on n-bits and producing l-bit numbers (l ≤ n), according to tausworthe theory, may yield a sequence of m-tuples uniformly distributed in m (n/l) dimensions. when using software computing algorithms on a binary computer, for large n (e.g. n 159), the generation speed is for many purposes too slow. to overcome this disadvantage we present a new concept of a hardware random number generator, to give the tausworthe sequence with high generation speed independent of the number of bits per word n. for a 32-bit data word computer we have performed statistical tests on three generators, two of them gave good results. meir barel fortran packages for solving certain almost block diagonal linear systems by modified alternate row and column elimination j. c. diaz g. fairweather p. keast an automatic layout facility (abstract) giuseppe liotta new results on deriving protocol specifications from service specifications previous papers describe an algorithm for deriving a specification of protocol entities from a given service specification. a service specification defines a particular ordering for the execution of service primitives at the different service access points using operators for sequential, parallel and alternative executions. the derived protocol entities ensure the correct ordering by exchanging appropriate synchronization messages, between one another through the underlying communication medium. this paper presents several new results which represent important improvements to the above protocol derivation approach. first the language restriction to finite behaviors is removed by allowing for the definition of procedures which can be called recursively. secondly, a new derivation algorithm has been developed which is much simpler than the previous one. 
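for the shallit and sorenson abstract above, here is a short python sketch of the traditional jacobi-symbol computation (quadratic reciprocity plus extraction of factors of 2) that their binary, gcd-style algorithm is benchmarked against; this is the textbook method, not their algorithm.

def jacobi(a, n):
    # jacobi symbol (a/n) for a positive odd integer n
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:            # pull out factors of 2 from a
            a //= 2
            if n % 8 in (3, 5):      # (2/n) = -1 when n = 3, 5 (mod 8)
                result = -result
        a, n = n, a                  # quadratic reciprocity for odd a, n
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0   # gcd(a, n) > 1 gives symbol 0

print(jacobi(2, 15), jacobi(3, 7))   # 1 -1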
third, the resulting protocol specifications are much more optimized than those obtained previously. f. khendek g. von bochmann c. kant algorithms for orthogonal drawings (abstract) roberto tamassia detecting and decomposing self-overlapping curves paint one side of a rubber disk black and the other side white; stretch the disk any way you wish in three-dimensional space, subject to the condition that from any point in space, if you look down you see either the white side of the disk or nothing at all. now make the stretched disk transparent but color its boundary black; project its boundary into a plane that lies below the disk. the resulting curve is self-overlapping. we show how to test whether a given curve is self-overlapping, and how to count how many essentially different stretchings of the disk could give rise to the same curve. p. w. shor c. j. van wyk acm algorithms policy fred t. krogh an improved las vegas primality test we present a modification of the goldwasser-kilian-atkin primality test, which, when given an input n, outputs either prime or composite, along with a certificate of correctness which may be verified in polynomial time. atkin's method computes the order of an elliptic curve whose endomorphism ring is isomorphic to the ring of integers of a given imaginary quadratic field q(√-d). once an appropriate order is found, the parameters of the curve are computed as a function of a root modulo n of the hilbert class equation for the hilbert class field of q(√-d). the modification we propose determines instead a root of the watson class equation for q(√-d) and applies a transformation to get a root of the corresponding hilbert equation. this is a substantial improvement, in that the watson equations have much smaller coefficients than do the hilbert equations. e. kaltofen t. valente n. yui highly continuous runge-kutta interpolants to augment the discrete runge-kutta solution to the initial value problem, piecewise hermite interpolants have been used to provide a continuous approximation with a continuous first derivative. we show that it is possible to construct interpolants with arbitrarily many continuous derivatives which have the same asymptotic accuracy and basic cost as the hermite interpolants. we also show that the usual truncation coefficient analysis can be applied to these new interpolants, allowing their accuracy to be examined in more detail. as an illustration, we present some globally c^2 interpolants for use with a popular 4th- and 5th-order runge-kutta pair of dormand and prince, and we compare them theoretically and numerically with existing interpolants. d. j. higham algorithm 676 odrpack: software for weighted orthogonal distance regression in this paper, we describe odrpack, a software package for the weighted orthogonal distance regression problem. this software is an implementation of the algorithm described in [2] for finding the parameters that minimize the sum of the squared weighted orthogonal distances from a set of observations to a curve or surface determined by the parameters. it can also be used to solve the ordinary nonlinear least squares problem. the weighted orthogonal distance regression procedure has application to curve and surface fitting and to measurement error models in statistics.
the algorithm implemented is an efficient and stable trust region (levenberg-marquardt) procedure that exploits the structure of the problem so that the computational cost per iteration is equal to that for the same type of algorithm applied to the ordinary nonlinear least squares problem. the package allows a general weighting scheme, provides for finite difference derivatives, and contains extensive error checking and report generating facilities. paul t. boggs janet r. donaldson richaard h. byrd robert b. schnabel a fourth-order-accurate fourier method for the helmholtz equation in three dimensions we present fourth-order-accurate compact discretizations of the helmholtz equation on rectangular domains in two and three dimensions with any combination of dirichlet, neumann, or periodic boundary conditions. the resulting systems of linear algebraic equations have the same block- tridiagonal structure as traditional central differences and hence may be solved efficiently using the fourier method. the performance of the method for a variety of test cases, including problems with nonsmooth solutions, is presented. the method is seen to be roughly twice as fast as deferred corrections and, in many cases, results in a smaller discretization error. ronald f. boisvert algorithm 598: an algorithm to compute solvent of the matrix equation ax2 \\+ bx \\+ c = 0 george j. davis some linear and nonlinear methods for pseudorandom number generation harald niederreiter a matlab differentiation matrix suite a software suite consisting of 17 matlab functions for solving differential equations by the spectral collocation (i.e., pseudospectral) method is presented. it includes functions for computing derivatives of arbitrary order corresponding to chebyshev, hermite, laguerre, fourier, and sinc interpolants. auxiliary functions are included for incorporating boundary conditions, performing interpolation using barycentric formulas, and computing roots of orthogonal polynomials. it is demonstrated how to use the package for solving eigenvalue, boundary value, and initial value problems arising in the fields of special functions, quantum mechanics, nonlinear waves, and hydrodynamic stability. a set of ada packages for high-precision calculations the packages described here are designed to perform efficient high-accuracy calculations in cases where there can be serious loss of significance due to rounding errors. numbers are represented by a value part with a variable number of digits in the mantissa and an error estimate which is updated throughout the calculation and which gives a range of possible values for the result of the calculation. for economy and speed, intermediate results are truncated so that the least- significant digits correspond to values only a few orders of magnitude smaller than the error estimate. type definitions and the standard arithmetic operations, together with conversions to and from integers, are provided in one package. the mathematical constants π and euler's gamma are incorporated to a maximum precision of 320 digits in a package that also contains all of the standard elementary functions. separate packages contain input and output procedures and conversion to and from floating-point types. b. g. s. doman c. j. pursglove w. m. coen an efficient derivative-free method for solving nonlinear equations an algorithm is presented for finding a root of a real function. 
the algorithm combines bisection with second and third order methods using derivatives estimated from objective function values. globaql convergence is ensured and the number of function evaluations is bounded by four times the number needed by bisection. numerical comparisons with existing algorithms indicate the superiority of the new algorithm in all classes of problems. d. le algorithm 769: fortran subroutines for approximate solution of sparse quadratic assignment problems using grasp we describe fortran subroutines for finding approximate solutions of sparse instances of the quadratic assignment problem (qap) using a greedy randomized adaptive search procedure (grasp). the design and implementation of the code are described in detail. computational results comparing the new subroutines with a dense version of the code (algorithm 754, acm toms) show that the speedup increases with the sparsity of the data. panos m. pardalos leonidas s. pitsoulis mauricio g. c. resende bounds for the positive eigenvectors of nonnegative matrices and for their approximations by decomposition p.-j. courtois p semal integrated resource assignment and scheduling of task graphs using finite domain constraints krzysztof kuchcinski grey codes, towers of hanoi, hamiltonian path on the n-cube, and chinese rings leroy j. dickey acm algorithms policy fred t. krogh corporate analysis by apl this paper describes how apl has been used in finland in the field of corporate analysis. the concept of corporate analysis is shortly introduced. a corporate analysis package, trennus, is introduces. however, the emphasis of this paper is practical. it describes the concepts used in programming large windows based software. the tools and ideas are universal and can be applied in any large software. arto juvonen unto niemi rankings of partial derivatives c. j. rust g. j. reid solving large full sets of linear equations in a paged virtual store j. j. du croz s. m. nugent j. k. reid d. b. taylor squeezing the most out of an algorithm in cray fortran jack j. dongarra stanley c. eisenstat algorithm 553: m3rk, an explicit time integrator for semidiscrete parabolic equations [d3] j. g. verwer algorithm 741; least-squares solution of a linear, bordered, block-diagonal system of equations a package of fortran subroutines is presented for the least-squares solution of a system of overdetermined, full-rank, linear equations with single- bordered block-diagonal structure. this structure allows for a natural sequential processing, one block diagonal at a time, so that large systems can be handled even on smaller machines. orthogonal transformations in the form of householder reflections are used to factor the system. the routines make heavy use of the levels 1, 2, and 3 blas. richard d. ray an operational view on renewal theory in this paper we derive a formula for the moments of the residual life in operational context, and show that the paradox of residual life holds also in a finite queueing model. in addition, we prove the renewal theorem, show that forward and backward times are independent, and state the memoryless property. as applications we point out how to derive takacs recurrence formula for the moments of the waiting time and how to base the markovian state theory on this. wolfgang kowalk spectral partitioning: the more eigenvectors, the better charles j. 
alpert so-zen yao expanders obtained from affine transformations a bipartite graph g_n = (u, v, e) is said an (n, k, δ) expander if |u| = |v| = n, |e| ≤ kn, and for any x ⊆ u, |Γ_gn(x)| ≥ (1 + δ(1 - |x|/n))|x|, where Γ_gn(x) is the set of nodes in v connected to nodes in x with edges in e. in this paper we show that the problem of estimating the coefficient δ of a bipartite graph is reduced to that of estimating the eigenvalue of a matrix related to the graph. as a result we give an explicit construction of (n, 5, 1 - (5/8)√2) expanders. by applying gabber and galil's construction to these expanders we obtain n-superconcentrators with 248n edges. s jimbo a maruoka a study of the comparative effects of various means of motion cueing during a simulated compensatory tracking task nasa's langley research center conducted a simulation experiment to ascertain the comparative effects of motion cues (combinations of platform motion and g-seat normal acceleration cues) on compensatory tracking performance. in the experiment, a full six-degree-of-freedom yf-16 model was used as the simulated pursuit aircraft. the langley visual motion simulator (with in-house developed wash-out), and a langley developed g-seat were principal components of the simulation. the results of the experiment were examined utilizing univariate and multivariate techniques. the statistical analyses demonstrate that the platform motion and g-seat cues provide additional information to the pilot that allows substantial reduction of lateral tracking error. also, the analyses show that the g-seat cue helps reduce vertical error. the differences in pilot control behavior make it impossible to statistically determine if the motion platform and g-seat cues have an additive effect, or whether the motion or g-seat cues can be interchanged. however, it is anticipated that additional data will overcome this problem. burnell t. mckissick billy r. ashworth russell v. parrish dennis j. martin indefinite integration with validation we present an overview of two approaches to validated one-dimensional indefinite integration. the first approach is to find an inclusion of the integrand, then integrate this inclusion to obtain an inclusion of the indefinite integral. inclusions for the integrand may be obtained from taylor polynomials, tschebyscheff polynomials, or other approximating forms which have a known error term. the second approach finds an inclusion of the indefinite integral directly as a linear combination of function evaluations plus an interval-valued error term. this requires a self-validating form of a quadrature formula such as gaussian quadrature. in either approach, composite formulae improve the accuracy of the inclusion. the result of the validated indefinite integration is an algorithm which may be represented as a character string, a subroutine in a high-level programming language such as pascal-sc or fortran, or as a collection of data. an example is given showing the application of validated indefinite integration in constructing a validated inclusion of the error function, erf(x). george corliss gary krenz fast algorithms for solving path problems robert endre tarjan an evaluation of some new cyclic linear multistep formulas for stiff odes we evaluate several sets of cyclic linear multistep formulas (clmfs). one of these sets was derived by tischer and sacks-davis. three new sets of formulas have been derived and we present their characteristics.
the formulas have been evaluated by comparing the performance of four versions of a code which implements clmfs. the four versions are very similar and each version implements one of the sets of clmfs being studied. we compare the performance of these codes with that of a widely used code, lsode. one of the new sets of clmfs is not only much more efficient in solving stiff problems that have a jacobian with eigenvalues close to the imaginary axis but is almost as efficient as lsode in solving other problems. this is a significant improvement over the ony other clmf code available, stint from tendler, bickart, and picel. p. e. tischer g. k. gupta regenerative randomization: theory and application examples juan a. carrasco angel calderón evaluation and comparison methods for confidence interval procedures keebom kang bruce schmeiser algorithm 674: fortran codes for estimating the one-norm of a real or complex matrix, with applications to condition estimation we omitted giving this article an acm algorithm number when it was first published in its entirety in the december 1988 issue of toms, vol. 14, no. 4, pp. 381--396. to correct this, we do so here, and reprint the title as a pointer to the original article. nicholas j. higham remark on "algorithm 498: airy functions using chebyshev series approximations" m. razaz j. l. schonfelder on the existence of equilibria in noncooperative optimal flow control the existence of nash equilibria in noncooperative flow control in a general product-form network shared by k users is investigated. the performance objective of each user is to maximize its average throughput subject to an upper bound on its average time-delay. previous attempts to study existence of equilibria for this flow control model were not successful, partly because the time-delay constraints couple the strategy spaces of the individual users in a way that does not allow the application of standard equilibrium existence theorems from the game theory literature. to overcome this difficulty, a more general approach to study the existence of nash equilibria for decentralized control schemes is introduced. this approach is based on directly proving the existence of a fixed point of the best reply correspondence of the underlying game. for the investigated flow control model, the best reply correspondence is shown to be a function, implicitly defined by means of k interdependent linear programs. employing an appropriate definition for continuity of the set of optimal solutions of parameterized linear programs, it is shown that, under appropriate conditions, the best reply function is continuous. brouwer's theorem implies, then, that the best reply function has a fixed point. yannis a. korilis aurel a. lazar an approach to the k paths problem the k paths problem asks whether it is possible to find k edge-disjoint paths joining k given pairs of vertices in a graph. we show that this problem can be solved in polynomial time for k≤5 if the input graph is k+2 -connected. it follows from this result that under the same restriction, the subgraph homeomorphism problem is polynomial for any pattern graph with five or fewer edges. allen cypher algorithm 588: fast hankel transforms using related and lagged convolutions walter l. anderson an automatic continuation strategy for the solution of singularly perturbed nonlinear boundary value problems in a recent paper, the present authors derived an automatic continuation algorithm for the solution of linear singular perturbation problems. 
the algorithm was incorporated into two general-purpose codes for solving boundary value problems, and it was shown to deal effectively with a large test set of linear problems. the present paper describes how the conintuation algorithm for linear problems can be extended to deal with the nonlinear case. the results of exstensive numerical testing on a set of nonlinear singular perturbation problems are given, and these clearly demonstrate the efficacy of continuation for solving such problems. crypto backup and key escrow david paul maher verifiable partial key escrow mihir bellare shafi goldwasser the world of data compression eric scheirer acm president's letter: government classification of private ideas peter j. denning processing encrypted data a severe problem in the processing of encrypted data is that very often, in order to perform arithmetic operations on the data, one has to convert the data back to its nonencrypted origin before performing the required operations. this paper addresses the issue of processing data that have been encrypted while the data are in an encrypted mode. it develops a new approach for encryption models that can facilitate the processing of such data. the advantages of this approach are reviewed, and a basic algorithm is developed to prove the feasibility of the approach. niv ahituv yeheskel lapid seev neumann responses to nist's proposal ronald l. rivest martin e. hellman john c. anderson john w. lyons an efficient representation for sparse sets sets are a fundamental abstraction widely used in programming. many representations are possible, each offering different advantages. we describe a representation that supports constant-time implementations of clear-set, add- member, and delete-member. additionally, it supports an efficient forall iterator, allowing enumeration of all the members of a set in time proportional to the cardinality of the set. we present detailed comparisons of the costs of operations on our representation and on a bit vector representation. additionally, we give experimental results showing the effectiveness of our representation in a practical application: construction of an interference graph for use during graph-coloring register allocation. while this representation was developed to solve a specific problem arising in register allocation, we have found it useful throughout our work, especially when implementing efficient analysis techniques for large programs. however, the new representation is not a panacea. the operations required for a particular set should be carefully considered before this representation, or any other representation, is chosen. preston briggs linda torczon a monte carlo study of cichelli hash-function solvability r. charles bell bryan floyd superimposing encrypted data much has been written about the necessity of processing data in the encrypted form. however, no satisfactory method of processing encrypted data has been published to date. ahitub et al. [2] have analyzed the possibilities of using some special algorithms to add encrypted data. rivest et al. [10] have suggested the use of an algorithm based on homomorphic functions for processing encrypted data. the main limitation of this algorithm is that such functions can be broken by solving a set of linear equations, as noted by [2]. the public-key crytosystem described in [11] can be used to multiply encrypted data but cannot be used to add encrypted data and is therefore not appropriate for some practical applications such as bank transactions. 
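an illustrative python sketch of the sparse/dense two-array representation described in the briggs and torczon abstract above: membership, add-member, delete-member, and clear-set are all constant time, and iteration visits only the current members. the names and the swap-based delete are our assumptions rather than their exact code.

class SparseSet:
    def __init__(self, universe_size):
        self.dense = [0] * universe_size    # members, packed at the front
        self.sparse = [0] * universe_size   # sparse[v] = position of v in dense
        self.n = 0                          # current cardinality

    def __contains__(self, v):
        i = self.sparse[v]
        return i < self.n and self.dense[i] == v

    def add(self, v):
        if v not in self:
            self.dense[self.n] = v
            self.sparse[v] = self.n
            self.n += 1

    def remove(self, v):
        if v in self:
            i = self.sparse[v]
            last = self.dense[self.n - 1]   # move the last member into the hole
            self.dense[i] = last
            self.sparse[last] = i
            self.n -= 1

    def clear(self):
        self.n = 0                          # O(1): no re-initialization needed

    def __iter__(self):                     # forall iterator: O(|set|)
        return iter(self.dense[:self.n])

s = SparseSet(100)
s.add(7); s.add(42); s.remove(7)
print(list(s), 42 in s, 7 in s)             # [42] True False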
abadi, feigenbaum and kilian [1] presented some general theorems concerning the problem of computing with encrypted data and formulated a framework to prove precise statements about what an encrypted instance hides and reveals; they also described encryption schemes for some well-known functions. k. w. yu tong lai yu the design of substitution-permutation networks resistant to differential and linear cryptanalysis in this paper we examine a class of product ciphers referred to as substitution-permutation networks. we investigate the resistance of these cryptographic networks to two important attacks: differential cryptanalysis and linear cryptanalysis. in particular, we develop upper bounds on the differential characteristic probability and on the probability of a linear approximation as a function of the number of rounds of substitutions. further, it is shown that using large s-boxes with good diffusion characteristics and replacing the permutation between rounds by an appropriate linear transformation is effective in improving the cipher security in relation to these two attacks. h. m. heys s. e. tavares tree rebalancing in optimal time and space a simple algorithm is given which takes an arbitrary binary search tree and rebalances it to form another of optimal shape, using time linear in the number of nodes and only a constant amount of space (beyond that used to store the initial tree). this algorithm is therefore optimal in its use of both time and space. previous algorithms were optimal in at most one of these two measures, or were not applicable to all binary search trees. when the nodes of the tree are stored in an array, a simple addition to this algorithm results in the nodes being stored in sorted order in the initial portion of the array, again using linear time and constant space. q. f stout b. l warren joint encryption and error correction schemes t. r. n. rao a modulus oriented hash function for the construction of minimal perfect tables maurizio panti salvatore valenti c++ dynamic arrays vs. linked lists dynamically allocated linked lists are generally portrayed as a more flexible structure than arrays. however, dynamically allocated arrays, as available in c++, can be just as flexible and easier to use. this paper compares the use of dynamically allocated linked lists to dynamically allocated arrays as available in c++. marsha zaidman witness-based cryptographic program checking and robust function sharing yair frankel peter gemmell moti yung arithmetic coding revisited over the last decade, arithmetic coding has emerged as an important compression tool. it is now the method of choice for adaptive coding on myltisymbol alphabets because of its speed, low storage requirements, and effectiveness of compression. this article describes a new implementation of arithmetic coding that incorporates several improvements over a widely used earlier version by witten, neal, and cleary, which has become a de facto standard. these improvements include fewer multiplicative operations, greatly extended range of alphabet sizes and symbol probabilities, and the use of low- precision arithmetic, permitting implementation by fast shift/add operations. we also describe a modular structure that separates the coding, modeling, and probability estimation components of a compression system. to motivate the improved coder, we consider the needs of a word-based text compression program. we report a range of experimental results using this and other models. complete source code is available. 
alistair moffat radford m. neal ian h. witten maintaining order in a linked list we present a new representation for linked lists. this representation allows one to efficiently insert objects into the list and to quickly determine the order of list elements. the basic data structure, called an indexed 2-3 tree, allows one to do n inserts in o(nlogn) steps and to determine order in constant time. we speed up the algorithm by dividing the data structure up into log*n layers. the improved algorithm does n insertions and comparisons in o(nlog*n) steps. the paper concludes with two applications: determining ancestor relationships in a growing tree and maintaining a tree structured environment (context tree). paul f. dietz a family of conservative codes with block delimiters for decoding without a phase-locked loop the family of conservative codes is a new scheme for encoding and decoding digital data for very high speed serial communication. these codes are characterized by having a constant number of transitions in each codeword and a known delimiting transition (rising or falling edge) at the end of each codeword. the conservative encoding scheme is primarily intended for binary data transmission in a single mode fiber optic network. additional constraints imposed on the encoding are balancing each codeword, i.e., making the number of zeros and ones equal, and limiting the maximum run- lengths of the high and low level. the purpose of these is to limit the dc shift of ac-coupled receivers and thereby increase the decoding reliability and decrease the optical receiver complexity. it is shown that both constraints can be imposed simultaneously without a significant degradation in encoding efficiency. the decoding and serial-to-parallel conversion are achieved by directly using the serial signal transitions without recovering the receiving clock explicitly with a phase-locked loop. it is possible to record information from multiple asynchronous sources without a training period. yoram ofek reducing the retrieval time of hashing method by using predictors many methods for resolving collisions in hashing techniques have been proposed. they are classified into two main categories: open addressing and chaining. in this paper, other methods are presented that are intermediate between the two categories. the basic idea of our methods is the use of one or more predictors reserved per cell instead of a link field as in the chaining method. the predictors are used to maintain loose synonym chains. after describing the methods, the efficiencies are estimated theoretically and verified experimentally. in comparison with the chaining method, we prove that our methods significantly reduce the average number of probes necessary to retrieve a key without expending extra space. seiichi nishihara katsuo ikeda array processing - principles and practice alan wilson application of splay trees to data compression the splay-prefix algorithm is one of the simplest and fastest adaptive data compression algorithms based on the use of a prefix code. the data structures used in the splay-prefix algorithm can also be applied to arithmetic data compression. applications of these algorithms to encryption and image processing are suggested. d. w. jones the soft heap: an approximate priority queue with optimal error rate a simple variant of a priority queue, called a soft heap, is introduced. the data structure supports the usual operations: insert, delete, meld, and findmin. 
its novelty is to beat the logarithmic bound on the complexity of a heap in a comparison-based model. to break this information-theoretic barrier, the entropy of the data structure is reduced by artificially raising the values of certain keys. given any mixed sequence of n operations, a soft heap with error rate ε (for any 0 < ε ≤ 1/2) ensures that, at any time, at most εn of its items have their keys raised. the amortized complexity of each operation is constant, except for insert, which takes o(log 1/ε) time. the soft heap is optimal for any value of ε in a comparison-based model. the data structure is purely pointer-based; no arrays are used. items are moved across the data structure not individually, as is customary, but in groups, in a data-structuring equivalent of "car pooling." keys must be raised as a result, in order to preserve the heap ordering of the data structure. the soft heap can be used to compute exact or approximate medians and percentiles optimally. it is also useful for approximate sorting and for computing minimum spanning trees of general graphs. bernard chazelle a taxonomy for key escrow encryption systems dorothy e. denning dennis k. branstad quantum cryptography d wiedemann external hashing schemes for collections of data structures richard j. lipton arnold l. rosenberg andrew c. yao a digital multisignature scheme using bijective public-key cryptosystems a new digital multisignature scheme using bijective public-key cryptosystems that overcomes the problems of previous signature schemes used for multisignatures is proposed. the principal features of this scheme are (1) the length of a multisignature message is nearly equivalent to that for a single-signature message; (2) by using a one-way hash function, multisignature generation and verification are processed in an efficient manner; (3) the order of signing is not restricted; and (4) this scheme can be constructed on any bijective public-key cryptosystem as well as the rsa scheme. in addition, it is shown that the new scheme is as safe as the public-key cryptosystem on which it is built. some variations based on the scheme are also presented. tatsuaki okamoto debating encryption standards corporate communications of the acm staff a national debate on encryption exportability traditionally, cryptography has been an exclusively military technology controlled by the national security agency (nsa). therefore, u.s. international traffic in arms regulations (itars) require licenses for all export of modern cryptographic methods. some methods, such as the data encryption standard (des), are easily obtained for export to the coordinating committee for multilateral export controls (cocom) countries, but not soviet bloc countries, or most third world nations. clark weissman a database encryption system with subkeys a new cryptosystem that is suitable for database encryption is presented. the system has the important property of having subkeys that allow the encryption and decryption of fields within a record. the system is based on the chinese remainder theorem. george i. davida david l. wells john b. kam efficient generation of shared rsa keys we describe efficient techniques for a number of parties to jointly generate an rsa key. at the end of the protocol an rsa modulus n = pq is publicly known. none of the parties know the factorization of n. in addition a public encryption exponent is publicly known and each party holds a share of the private exponent that enables threshold decryption.
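a small python illustration of the chinese-remainder machinery behind the davida, wells, and kam subkey abstract above: a record is stored as one crt-combined integer, and knowing a single field's modulus is enough to read that field back. this is only the arithmetic skeleton under assumed toy moduli, not their cryptosystem, and it is not secure as written.

from math import prod

def crt_combine(residues, moduli):
    # combine field values into one integer x with x = residues[i] (mod moduli[i])
    M = prod(moduli)
    total = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m) is the modular inverse
    return total % M

moduli = [10007, 10009, 10037]             # pairwise coprime, one "subkey" per field
fields = [1234, 567, 8901]                 # the record's field values
record = crt_combine(fields, moduli)

print(record % moduli[1])                  # 567: field 1 recovered from its modulus alone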
our protocols are efficient in computation and communication. all results are presented in the _honest but curious_ scenario (passive adversary). dan boneh matthew franklin quantum public key distribution reinvented charles h. bennett gilles brassard comments on "weighted sum codes for error detection and their comparison with existing codes" peter farkaš acm forum robert l. ashenhurst reassessing the crypto debate this month we summarize the main issues relating to the increasing use of cryptography in the world---for example, in enhancing confidentiality, integrity, and authenticity in computers and communications. earlier columns have considered some related topics [1]. peter g. neumann cryptography and data security overview of panel discussion cryptography is the science of secret writing. it has been used since the dawn of writing itself to conceal messages from an adversary. throughout history, the use of cryptography has been largely confined to diplomatic and military communications. but in the last 10 years the tremendous advances in communication technology have created a serious need for cryptographic protection of private sector communications. electronic funds transfer systems and satellite voice/digital networks are particularly vulnerable to compromise, without the use of cryptography. recognizing the need for data protection apart from national security concerns, the national bureau of standards issued a call for a data encryption algorithm in 1973. during the subsequent discussions and debates, the mathematicians and computer scientists involved realized that cryptography was not an established science in the unclassified literature. they began the task of creating the foundations for this young discipline. in 1977, the nbs published the data encryption standard (des). its use is required to protect sensitive governmental information not related to national security. several national and international standards organizations have also adopted the des. michael willett fully persistent lists with catenation james r. driscoll daniel d. k. sleator robert e. tarjan weight-biased leftist trees and modified skip lists seonghun cho sarataj sahni bi as an assertion language for mutable data structures samin s. ishtiaq peter w. o'hearn public cryptography outlook ten years ago last march, the national bureau of standards issued its first solicitation for an algorithm to be used in public cryptography. while significant progress has been made in the development of the technology in this field, the public's use of cryptography has proceeded more slowly than some had originally expected. yet, looking back, it seems surprising that anyone would have thought that public cryptography could have developed any faster than it has. when the des was published as a standard in january, 1977, the private and non-classified government communities knew little about cryptography and its capabilities. open publications were primarily academic papers, and consequently the public had a poor understanding of what problems cryptography could solve and what problems were beyond its scope. miles e. smid doubly-linked opportunities jim carraway ultimate cryptography whitfield diffie a locally adaptive data compression scheme a data compression scheme that exploits locality of reference, such as occurs when words are used frequently over short intervals and then fall into long periods of disuse, is described. 
the scheme is based on a simple heuristic for self-organizing sequential search and on variable-length encodings of integers. we prove that it never performs much worse than huffman coding and can perform substantially better; experiments on real files show that its performance is usually quite close to that of huffman coding. our scheme has many implementation advantages: it is simple, allows fast encoding and decoding, and requires only one pass over the data to be compressed (static huffman coding takes two passes). jon louis bentley daniel d. sleator robert e. tarjan victor k. wei the digital signature standard corporate nist the study of an ordered minimal perfect hashing scheme c. c. chang cryptanalysis of private-key encryption schemes based on burst-error- correcting codes hung-min sun shiuh-pyng shieh adaptively secure multi-party computation ran canetti uri feige oded goldreich moni naor how to simultaneously exchange secrets by general assumptions the simultaneous secret exchange protocol is the key tool for contract signing protocols and certified mail protocols. this paper proposes efficient simultaneous secret exchange protocols (or gradual secret releasing protocols) that are based on general assumptions such as the existence of one-way permutations and one-way functions, while the existing efficient simultaneous secret exchange protocols are based on more constrained assumptions such as specific number theoretic problems and the existence of oblivious transfer primitives (or trap-door one-way permutations). moreover, while the existing simultaneous secret exchange protocols have an additional requirement that the underlying commit (encryption) function is "ideal", the above-mentioned "general assumptions" are provably sufficient for our schemes. therefore, our protocols are provably secure under the general assumptions. in addition, our protocols are at least as efficient as the existing practical protocols, when efficient one-way permutations and one-way functions are used. tatsuaki okamoto kazuo ohta information leakage of boolean functions and its relationship to other cryptographic criteria this paper presents some results on the cryptographic strength of boolean functions from the information theoretic point of view. it is argued that a boolean function is resistant to statistical analysis if there is no significant static and dynamic information leakage between its inputs and its output(s). in particular we relate information leakage to nonlinearity, higher order sac, correlation immunity and resilient functions. it is shown that reducing information leakage increases resistance to the differential attack and the linear attack. we note that some conventional cryptographic criteria require zero static or dynamic information leakage in only one domain. such a requirement can result in a large information leakage in another domain. to avoid this weakness, it is better to jointly constrain all kinds of information leakage in the function. in fact, we claim that information leakage can be used as a fundamental measure of the strength of a cryptographic algorithm. m. zhang s. e. tavares l. l. campbell an unknown key-share attack on the mqv key agreement protocol the mqv key agreement protocol, a technique included in recent standards, is shown in its basic form to be vulnerable to an unknown key-share attack. 
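an illustrative python sketch of the two ingredients named in the bentley, sleator, tarjan, and wei abstract above: a move-to-front list (a self-organizing sequential-search heuristic) feeding a variable-length encoding of integers (elias gamma here, chosen only as one concrete prefix-free code). recently used symbols sit near the front of the list, get small indices, and therefore short codewords.

def elias_gamma(k):                     # variable-length code for an integer k >= 1
    b = bin(k)[2:]
    return "0" * (len(b) - 1) + b       # 1 -> '1', 2 -> '010', 3 -> '011', ...

def mtf_encode(text, alphabet):
    table = list(alphabet)
    bits = []
    for ch in text:
        i = table.index(ch)             # self-organizing sequential search
        bits.append(elias_gamma(i + 1)) # recently used symbols get short codes
        table.insert(0, table.pop(i))   # move the symbol to the front
    return "".join(bits)

text = "abracadabra abracadabra"
alphabet = sorted(set(text))
code = mtf_encode(text, alphabet)
print(len(code), "bits versus", 8 * len(text), "bits of raw ascii")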
although the attack's practical impact on security is minimal---a key confirmation step easily prevents it---the attack is noteworthy in the principles it illustrates about protocol design. first, minor efficiency improvements can significantly alter the security properties of a protocol. second, protocol analysis must consider potential interactions with all parties, not just those that are normally online. finally, attacks must be assessed in terms of system requirements, not just in isolation. burton s. kaliski dense multiway trees b-trees of order m are a "balanced" class of m-ary trees, which have applications in the areas of file organization. in fact, they have been the only choice when balanced multiway trees are required. although they have very simple insertion and deletion algorithms, their storage utilization, that is, the number of keys per page or node, is at worst 50 percent. in the present paper we investigate a new class of balanced m-ary trees, the dense multiway trees, and compare their storage utilization with that of b-trees of order m. surprisingly, we are able to demonstrate that weakly dense multiway trees have an (log2 n) insertion algorithm. we also show that inserting mh \\- 1 keys in ascending order into an initially empty dense multiway tree yields the complete m-ary tree of height h, and that at intermediate steps in the insertion sequence the intermediate trees can also be considered to be as dense as possible. furthermore, an analysis of the limiting dynamic behavior of the dense m-ary trees under insertion shows that the average storage utilization tends to 1; that is, the trees become as dense as possible. this motivates the use of the term "dense." k. culik th. ottmann d. wood implementation of fortran 90 pointers richard bleikamp technical opinion: designing cryptography for the new century susan landau systolic implementations of a move-to-front text compressor c. d. thomborson b. w.-y. wei non-expansive hashing nathan linial ori sasson data structures: pointers vs. arrays. when, where and why in the inevitable search for the "perfect" structure, the beginning programmer is faced with a multitude of possibilities. stacks, arrays, queues, linked lists, trees, and, on a higher level, the decision whether the "best" choice would involve a physical implementation of arrays or pointers. the choice, particularly for the novice, is not easy. the purpose of the abstract is to facilitate that choice. having perused a dozen textbooks dealing with introduction to data structures (the greater majority of which employ pascal) one finds the "expected" comparative analysis between dynamic and static variables. the prevalent tendency indicates that the utilization of pointer (dynamic) variables more effectively controls memory allocation with the result being a generally more effective, possibly more expedient execution. the strange occurrence that i could not help the questioning was involved with the reasons why most treatments of data structures devote more time to the static approach as opposed to the dynamic one. and, on a different note, why so few textbooks actually compared the two parallel approaches for the same problem. to alleviate this, i considered solving a simple problem both ways. we wish to create a binary search tree from a list of up to 50000 numbers and, in addition, remove any duplicates from the list. 
a reasonable approach to this would be to statically declare an array of 50000 nodes, using subroutines to "getnode" (using an available pool of nodes created within the program), create a tree (i.e., the root), setleft and setright (to properly place an item in the tree). such a program involves 5 modules, about 80 lines of code, a run time of under 2 seconds, 1089 page faults, and a peak page file (indicating the use of memory) of 3253. on the other hand, if one writes the program using pointer variables, the "getnode" procedure mentioned above can be replaced by a simple "new" function (for obtaining a new node), no storage for the numbers is required within the program and we see the following results: 72 lines of code, 4 modules, a run time of under 2 seconds, 861 page faults, and a peak page file of 924. an interesting observation is that the only "significant" difference in the "statistics" of the two versions is in the peak page file. there was emphatically no difference in run time. it would seem useful, therefore, to allow the novice programmer the option of attempting both approaches, while encouraging such comparisons as the one noted above. the "best" approach may not be an acceptable one for a particular user who feels burdened by a concept not fully understood. thus, the when, where, and why can only be asked and answered by the programmer who actually works through both approaches. domenick j. pinto frequent value locality and value-centric data cache design youtao zhang jun yang rajiv gupta arithmetic coding for data compression the state of the art in data compression is arithmetic coding, not the better-known huffman method. arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding. ian h. witten radford m. neal john g. cleary unconditional security in quantum cryptography basic techniques to prove the unconditional security of quantum cryptography are described. they are applied to a quantum key distribution protocol proposed by bennett and brassard [1984]. the proof considers a practical variation on the protocol in which the channel is noisy and photons may be lost during the transmission. each individual signal sent into the channel must contain a single photon or any two-dimensional system in the exact state described in the protocol. no restriction is imposed on the detector used at the receiving side of the channel, except that whether or not the received system is detected must be independent of the basis used to measure this system. a polynomial time generator for minimal perfect hash functions a perfect hash function phf is an injection f from a set w of m objects into the set consisting of the first n nonnegative integers where n ≥ m. if n = m, then f is a minimal perfect hash function, mphf. phfs are useful for the compact storage and fast retrieval of frequently used objects such as reserved words in a programming language or commonly employed words in a natural language. the mincycle algorithm for finding phfs executes with an expected time complexity that is polynomial in m and has been used successfully on sets of cardinality up to 512. given three pseudorandom functions h0, h1, and h2, the mincycle algorithm searches for a function g such that f(w) = (h0(w) + g(h1(w)) + g(h2(w))) mod n is a phf. thomas j. sager comments on an example for procedure parameters d w krumme systolic implementations of a move-to-front text compressor clark d. thomborson belle w.-y.
wei implementation of abstract data types with arrays of unbounded dimensions a. barrero deterministic skip lists we explore techniques based on the notion of a skip list to guarantee logarithmic search, insert and delete costs. the basic idea is to insist that between any pair of elements above a given height are a small number of elements of precisely that height. the desired behaviour can be achieved by either using some extra space for pointers, or by adding the constraint that the physical sizes of the nodes be exponentially increasing. the first approach leads to simpler code, whereas the second is ideally suited to a buddy system of memory allocation. our techniques are competitive in terms of time and space with balanced tree schemes, and, we feel, inherently simpler when taken from first principles. j. ian munro thomas papadakis robert sedgewick commutativity theorems: examples in search of algorithms john j. wavrik a demodulizer for haskell kate golder optimal single row router the single row approach represents a systematic suboptimal approach to the general multilayer rectilinear wire problem. with this approach, the single row wiring problem (i.e., one where all the points are collinear) forms the backbone of the general multilayer wiring problem. we consider the problem of generating minimum width layout for single row wiring problems. our algorithm, which is not grid-based, is enumerative and uses a strong bounding criterion. raghunath raghavan sartaj sahni multiple matching of rectangular patterns ramana m. idury alejandro a. schäffer a linear time algorithm for residue computation and a fast algorithm for division with a sparse divisor an algorithm is presented to compute the residue of a polynomial over a finite field of degree n modulo a polynomial of degree o(log n) in o(n) algebraic operations. this algorithm can be implemented on a turing machine. the implementation is based on a turing machine procedure that divides a polynomial of degree n by a sparse polynomial with k nonzero coefficients in o(kn) steps. this algorithm can be adapted to compute the residue of a number of length n modulo a number of length o(log n) in o(n) bit operations. michael kaminski on the completeness of object-creating database transformation languages object-oriented applications of database systems require database transformations involving nonstandard functionalities such as set manipulation and object creation, that is, the introduction of new domain elements. to deal with these functionalities, abiteboul and kanellakis [1989] introduced the "determinate" transformations as a generalization of the standard domain-preserving transformations. the obvious extensions of complete standard database programming languages, however, are not complete for the determinate transformations. to remedy this mismatch, the "constructive" transformations are proposed. it is shown that the constructive transformations are precisely the transformations that can be expressed in said extensions of complete standard languages. thereto, a close correspondence between object creation and the construction of hereditarily finite sets is established. a restricted version of the main completeness result for the case where only list manipulations are involved is also presented.
jan van den bussche dirk van gucht marc andries marc gyssens computing a centerpoint of a finite planar set of points in linear time the notion of a centerpoint of a finite set of points in two and higher dimensions is a generalisation of the concept of the median of a (finite) set of points on the real line. in this paper, we present an algorithm for computing a centerpoint of a set of n points in the plane. the algorithm has complexity o(n) which significantly improves the o(n log3 n) complexity of the previously best known algorithm. we use suitable modifications of the ham- sandwich-cut algorithm and the prune-and-search technique to achieve this improvement. shreesh jadhav asish mukhopadhyay improvement in a lazy context: an operational theory for call-by-need andrew moran david sands three logics for branching bisimulation three temporal logics are introduced that induce on labeled transition systems the same identifications as branching bisimulation, a behavioral equivalence that aims at ignoring invisible transitions while preserving the branching structure of systems. the first logic is an extension of hennessy-milner logic with an "until" operator. the second one is another extension of hennessy- milner logic, which exploits the power of backward modalities. the third logic is ctl* without the next-time operator. a relevant side-effect of the last characterization is that it sets a bridge between the state- and action-based approaches to the semantics of concurrent systems. rocco de nicola frits vaandrager detection is easier than computation (extended abstract) perhaps the most important application of computer geometry involves determining whether a pair of convex objects intersect. this problem is well understood in a model of computation where the objects are given as input and their intersection is returned as output. however, for many applications, we may assume that the objects already exist within the computer and that the only output desired is a single piece of data giving a common point if the objects intersect or reporting no intersection if they are disjoint. for this problem, none of the previous lower bounds are valid and we propose algorithms requiring sublinear time for their solution in 2 and 3 dimensions. bernard chazelle david p. dobkin high performance cache management for sequential data access erhard rahm donald ferguson counting networks and multi-processor coordination james aspnes maurice herlihy nir shavit an output sensitive algorithm for discrete convex hulls sariel har-peled two familiar transitive closure algorithms which admit no polynomial time, sublinear space implementations any boolean straight-line program which computes the transitive closure of an nxn boolean matrix by successive squaring requires time exceeding any polynomial in n if the space used is o(n). this is the first demonstration of a "natural" algorithm which (1) has a polynomial time implementation and (2) has a small (e.g., o(log2n)) space implementation, but (3) has no implementation running in polynomial time and small space simultaneously. it is also shown that any implementation of warshall's transitive closure algorithm requires Ω(n) space. 
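(a minimal python illustration, not taken from the paper, of the successive-squaring closure computation discussed in the preceding abstract: o(log n) boolean matrix squarings, each of which uses a full n-by-n working matrix---exactly the time/space profile whose simultaneous improvability the result rules out.)

def transitive_closure(adj):
    """adj is an n x n 0/1 matrix; returns the boolean reachability matrix."""
    n = len(adj)
    # start from a or i, so that each squaring doubles the path length covered
    reach = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    covered = 1
    while covered < n:                      # ceil(log2 n) squarings suffice
        reach = [[any(reach[i][k] and reach[k][j] for k in range(n))
                  for j in range(n)] for i in range(n)]
        covered *= 2
    return reach

if __name__ == "__main__":
    g = [[0, 1, 0],
         [0, 0, 1],
         [0, 0, 0]]
    print(transitive_closure(g))            # row 0 is all true: node 0 reaches 1 and 2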
martin tompa space-efficient ray-shooting and intersection searching: algorithms, dynamization, and applications siu wing cheng ravi janardan spatial machines: a more realistic approach to parallel computation yosee feldman ehud shapiro separating and collapsing results on the relativized probabilistic polynomial- time hierarchy the probabilistic polynomial-time hierarchy (bph) is the hierarchy generated by applying the bp-operator to the meyer-stockmeyer polynomial-time hierarchy (ph), where the bp-operator is the natural generalization of the probabilistic complexity class bpp. the similarity and difference between the two hierarchies bph and ph is investigated. oracles a and b are constructed such that both ph(a) and ph(b) are infinite while bph(a) is not equal to ph(a) at any level and bph(b) is identical to ph(b) at every level. similar separating and collapsing results in the case that ph(a) is finite having exactly k levels are also considered. ker-i ko two tapes are better than one for nondeterministic machines it is known that k tapes are no better than two tapes for nondeterministic machines. we show here that two tapes are better than one. in fact, we show that two pushdown stores are better than one tape. also, k tapes are no better than two for nondeterministic reversal-bounded machines. we show here that two tapes are better than one for such machines. in fact, we show that two reversal- bounded pushdown stores are better than one reversal-bounded tape. we also show that for one-tape nondeterministic machines, unrestricted machines are better than reversal-bounded machines. pavol d zvi galil functional fun michael j. clancy marcia c. linn in-place techniques for parallel convex hull algorithms (preliminary version) mujtaba r. ghouse michael t. goodrich pessimization is unsolvable c b dunham the history and status of the p versus np question michael sipser distributed bisimulations a new equivalence between concurrent processes is proposed. it generalizes the well-known bisimulation equivalence to take into account the distributed nature of processes. the result is a noninterleaving semantic theory; concurrent processes are differentiated from processes that are non- deterministic but sequential. the new equivalence, together with its observational version, is investigated for a subset of the language ccs, and various algebraic characterizations are obtained. ilaria castellani matthew hennessy on contention resolution protocols and associated probabilistic phenomena consider an on-line scheduling problem in which a set of abstract processes are competing for the use of a number of resources. further assume that it is either prohibitively expensive or impossible for any two of the processes to directly communicate with one another. if several processes simultaneously attempt to allocate a particular resource (as may be expected to occur, since the processes cannot easily coordinate their allocations), then none succeed. in such a framework, it is a challenge to design efficient contention resolution protocols. two recently-proposed approaches to the problem of pram emulation give rise to scheduling problems of the above kind. in one approach, the resources (in this case, the shared memory cells) are duplicated and distributed randomly. we analyze a simple and efficient deterministic algorithm for accessing some subset of the duplicated resources. in the other approach, we analyze how quickly we can access the given (nonduplicated) resource using a simple randomized strategy. 
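(purely as an illustration of the collision model behind such randomized strategies---this toy simulation is not the protocol analyzed in the paper, and the attempt probability 1/n is an assumption made for the sketch---each still-unserved process requests the resource independently in every round, and a request succeeds only if it is the sole request of that round.)

import random

def rounds_until_all_served(n, rng=random):
    unserved = n
    rounds = 0
    while unserved > 0:
        rounds += 1
        requests = [p for p in range(unserved) if rng.random() < 1.0 / n]
        if len(requests) == 1:    # exactly one request this round: no collision
            unserved -= 1
    return rounds

if __name__ == "__main__":
    random.seed(0)
    trials = [rounds_until_all_served(32) for _ in range(100)]
    print(sum(trials) / len(trials))   # average number of rounds to serve 32 processes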
we obtain precise bounds on the performance of both strategies. we anticipate that our results will find other applications. p. d. mackenzie c. g. plaxton r. rajaraman call by need computations to root-stable form aart middeldorp special issue on parallelism the articles presented in our special issue on parallel processing on the supercomputing scale reflect, to some extent, splits in the community developing these machines. there are several schools of thought on how best to implement parallel processing at both the hard- and software levels. controversy exists over the wisdom of aiming for general- or special-purpose parallel machines, and what architectures, networks, and granularities would serve these best. the efficiency of processing elements that are loosely or tightly coupled is still in question. and, in the software amphitheatre, there is debate over whether new languages should be written from scratch, whether old languages should be modified into parallel processing dialects, or whether investments in old, sequential programs should be leveraged by recompiling them into parallel programs. these issues were readily apparent at this year's international conference on parallel processing (icpp), as they have been during the 15 years since the annual conference first met. few expect resolutions to these questions within the next 15 years, which is not to say that these subfields are not progressing rapidly; quite the contrary. as we now see, an outpouring of recent commercial unveilings of both super and supermini parallel processors represents the expected potpourri of design philosophies. related to the general- versus special-purpose issue is the direct, applications-oriented question: what do parallel processor architects expect their machines to be used for? and, taking this a step further, what would they like to see next-generation machines, with one hundred times more speed and memory, do? i asked icpp attendees and other computer scientists these questions. many answered straightforwardly that they see simulation as the main application now and in the future. others deflected the question. said one such architect quite flatly, "i just build the things. you'd be better off asking users." yes and no; i wanted to know what was on architects' minds. why were their imaginations so dry? but then others plunged in quite daringly, answering at a completely different level as reported below. perhaps the range of all responses reflects differences in priorities that are to be expected in a young field in flux. some of their thoughts convey that they are somewhat overwhelmed by new choices and freedoms. it seems that the advent of parallelism may be more than just the beginning of a new era within computer science. "historically, computer scientists have been unable to predict many of the uses of faster machines," says darpa/information science technology office program manager stephen h. kaisler, "because each such machine opens up new regimes of computing to explore." indeed, parallel computing itself---not just its promise of improved speed---is all at once exposing new, unforeseen possibilities: a wide vista of architectures, languages, and operating systems. "to date, we've been playing around with a very limited range of these, but now many novel, promising combinations are within our grasp," says george adams, a research scientist at the research institute for advanced computer science who is studying parallel processors for nasa ames research center.
until recently, the technology was not advanced enough to create a machine with large numbers of processing elements, for example. today, says adams, cheap chips and improved communications permit running new permutations of languages and operating systems on such machines. these combinations were never before testable. according to adams, the balance between cpu capability and memory should be of prime concern for next-generation parallel processors and supercomputers, which he predicts are on a collision course and will become one and the same. scientists at ames are most often limited not by cpu speed, he says, but by memory size. because data space for problems involving simulation of physical systems cannot be held entirely in memory, portions must reside on disk. this causes wall-clock problem solution time to "suffer drastically," says adams. since disk access is typically 100,000 times slower than memory access, users prefer not to wait for a value to be retrieved. instead, they often recalculate values even if hundreds of mathematical operations are involved. so, if a parallel supercomputer with two orders of magnitude more cpu speed and memory suddenly appeared, "these scientists would likely cheer," says adams. "then they would ask for (still) more memory." for paul castleman, president and ceo of bbn advanced computers, there is really no reason to limit cpu and memory increases for the next generation of general-purpose parallel processors to two orders of magnitude: "that's thinking relatively conservatively" for the next decade, he says. "we have in the works a prototype processor that is 1000 times faster than today's, and we are beginning work on configurations that are tens of thousands times faster." but a product's usability is the final determinant, says castleman, "not the macho of how many more mips you can get . . . not whether you've souped up your sports car to run that much faster, but whether it feels comfortable to use." the solution is in the software development environment, which is why dec and ibm have done so well, he says. consequently, bbn advanced computers is now putting most of its effort into software tools for both scientific and nonscientific users. graphics, for example, can further a user's understanding of many simultaneous processes---each using information from a common database \---with graphs of processing elements' [pes') results. an economist may watch one pe number crunching dollars flowing into consumption and another pe measuring capital accumulation: or a physical plant operator may observe calculations of pressure that is causing a tank to overflow while another pe handles variables affecting a chemical reaction. besides being an aid to users, if graphics tools are also provided, each user's applications programmer would employ these utilities to generate the necessary aggregate graphics. but darpa's kaisler says that, in exploiting the first wave of commercially available parallel processors, little effort has been expended toward using these machines for research and development in computer science. "what is needed is a new effort, a new push to open up new avenues of algorithm development," he says, "beginning from first principles about what constitutes an algorithm and how to map it to the computational model provided by a specific parallel processor." the impact of commercial unveilings draws different if not diametrically opposed conclusions, however. 
a cautionary note about "hardware revelations" comes from david gelernter, associate professor of computer science at yale university. hardware designers thus stricken will build machines from "the bottom up" that no one will know how to program, he says. already, a dozen models have a dozen different low-level parallel programming systems. moreover, gelernter bemoans the fact that "machine-independent methods for parallel programming have been slow to emerge, and that consequently programmers have been forced to accommodate themselves to the machines rather than vice versa." he proposes that researchers imagine new kinds of programs before they imagine new kinds of machines. once again the computer science equivalent of the age-old chicken-before-the-egg question arises. how far can hard- and software developments proceed independently? when should they be combined? parallelism seems to bring these matters to the surface with particular urgency. the first article in our special issue is "data parallel algorithms," by w. daniel hillis and guy l. steele, jr. these authors, from thinking machines corporation, discuss algorithms and a new programming style for fine-grained single instruction multiple data (simd) parallel processors like the connection machine®. they cite examples such as parsing and finding the ends of linked lists---problems that they had assumed were inherently sequential--- as milestones in their transition from "serial to parallel thinkers." the next article, "advanced compiler optimizations for supercomputers," is by david a. padua and michael j. wolfe, of the university of illinois and kuck and associates, respectively. they represent those who believe sequential algorithms should be recompiled to accommodate vector, concurrent, and multiprocessor architectures. in a discussion of data dependence testing in loops, they show how parallelism in sequential codes for operations, statements, and iterations can be automatically detected for vector supercomputers. further, they discuss improving data dependence graphs and optimizing code for parallel computers. in a more theoretical vein, robert thomas and randall rettberg of bbn advanced computers discuss contention, or "hot spots," the phenomenon that some have predicted might cripple certain parallel processors. in their article, "contention is no obstacle to shared-memory multiprocessing," the authors describe engineering approaches to controlling the backup of data in networks and switches. besides reporting specific methods used to control contention, they offer benchmarks on their own machine, the butterfly . two applications-oriented articles complete the set. "toward memory-based reasoning," by craig stanfill and david waltz, suggests that memory of specific events, rather than of rules like those used in expert systems, is the right foundation for intelligent machines. in "parallel free-text search on the connection machine system," craig stanfill and brewster kahle harness unique properties of massive parallelism to implement a successful document- retrieval search paradigm. they use simple queries and relevance feedback techniques to produce intriguing results on a reuters news database of 16,000 stories. karen a. frenkel shape analysis for mobile ambients the ambient calculus is a calculus of computation that allows active processes to move between sites. 
we present an analysis inspired by state-of-the-art pointer analyses that safely and accurately predicts which processes may turn up at what sites during the execution of a composite system. the analysis models sets of processes by sets of regular tree grammars enhanced with context-dependent counts, and it obtains its precision by combining a powerful redex materialisation with a strong redex reduction (in the manner of the strong updates performed in pointer analyses). the underlying ideas are flexible and scale up to general tree structures admitting powerful restructuring operations. hanne riis nielson flemming nielson on the adequacy of graph rewriting for simulating term rewriting j. r. kennaway j. w. klop m. r. sleep f. j. de vries bounds on the time to reach agreement in the presence of timing uncertainty hagit attiya cynthia dwork nancy lynch larry stockmeyer a context sensitive tabular parsing algorithm frank hadlock beyond ml john c. reynolds fast parallel orthogonalization d kozen how to check if a finitely generated commutative monoid is a principal ideal commutative monoid we give a characterization of commutative monoids satisfying that all their ideals are principal. using this characterization we construct an algorithm for deciding whether a finitely generated commutative monoid satisfies this condition. jose carlos rosales pedro a. garcía-sanchez juan ignacio garcía-garcía on a fast and portable uniform quasi-random number generator fatin sezgin a denotational semantics for prolog a denotational semantics is presented for the language prolog. metapredicates are not considered. conventional control sequencing is assumed for prolog's execution. the semantics is nonstandard, and goal continuations are used to explicate the sequencing. tim nicholson norman foo distribution of distances and triangles in a point set and algorithms for computing the largest common point sets tatsuya akutsu hisao tamaki takeshi tokuyama two-way one-counter automata accepting bounded languages h. petersen a practical turing machine representation r m nirenberg parallel speedup of sequential machines: a defense of parallel computation thesis i parberry bisimulation through probabilistic testing (preliminary report) we propose a language for testing concurrent processes and examine its strength in terms of the processes that are distinguished by a test. by using probabilistic transition systems as the underlying semantic model, we show how a testing algorithm with a probability arbitrarily close to 1 can distinguish processes that are not bisimulation equivalent. we also show a similar result (in a slightly stronger form) for a new process relation called 2/3-bisimulation --- lying strictly between that of simulation and bisimulation. finally, the ultimate strength of the testing language is shown to identify an even stronger process relation, called probabilistic bisimulation. k. g. larsen a. skou terminating turing machine computations and the complexity and/or decidability of correspondence problems, grammars, and program schemes h. b. hunt a generalization of dehn-sommerville relations to simple stratified spaces ketan mulmuley a packing problem with applications to lettering of maps michael formann frank wagner a constructive proof of the countability of * james m.
swenson a time-space tradeoff for sorting on a general sequential model of computation in a general sequential model of computation, no restrictions are placed on the way in which the computation may proceed, except parallel operations are not allowed. we show that in such an unrestricted environment, time·space = Ω(n^2/log n) is required in order to sort n elements, each in the range [1, n^2]. a. borodin s. cook the pattern-of-calls expansion is the canonical fixpoint for recursive definitions michael a. arbib ernest g. manes learning read-once formulas using membership queries l. hellerstein marek karpinski one-way functions and pseudorandom generators one-way functions are those which are easy to compute, but hard to invert on a non-negligible fraction of instances. the existence of such functions with some additional assumptions was shown to be sufficient for generating perfect pseudorandom strings [blum, micali 82], [yao 82], [goldreich, goldwasser, micali 84]. below, among a few other observations, a weaker assumption about one-way functions is suggested, which is not only sufficient, but also necessary for the existence of pseudorandom generators. the main theorem can be understood without reading the sections 3-6. l a levin a weight-size trade-off for circuits with mod m gates vince grolmusz the shortest vector problem in l2 is np-hard for randomized reductions (extended abstract) miklós ajtai realistic input models for geometric algorithms mark de berg matthew katz a. frank van der stappen jules vleugels decidability of the purely existential fragment of the theory of term algebras this paper is concerned with the question of the decidability and the complexity of the decision problem for certain fragments of the theory of free term algebras. the existential fragment of the theory of term algebras is shown to be decidable through the presentation of a nondeterministic algorithm, which, given a quantifier-free formula p, constructs a solution for p if it has one and indicates failure if there are no solutions. it is shown that the decision problem is in np by proving that, if a quantifier-free formula p has a solution, then there is one that can be represented as a dag in space at most cubic in the length of p. the decision problem is shown to be complete for np by reducing 3-sat to that problem. thus it is established that the existential fragment of the theory of pure list structures in the language of nil, cons, car, cdr, =, ≤ (subexpression) is np-complete. it is further shown that even a slightly more expressive fragment of the theory of term algebras, the one that allows bounded universal quantifiers, is undecidable. k. n. venkataraman including queueing effects in amdahl's law r. nelson on the decidability of distributed decision tasks maurice herlihy sergio rajsbaum maintenance of geometric extrema let s be a set, f: s×s -> r a bivariate function, and f(x,s) the maximum value of f(x,y) over all elements y ∈ s. we say that f is decomposable with respect to the maximum if f(x,s) = max {f(x,s1), f(x,s2), …, f(x,sk)} for any decomposition s = s1 ∪ s2 ∪ … ∪ sk. computing the maximum (minimum) value of a decomposable function is inherent in many problems of computational geometry and robotics. in this paper, a general technique is presented for updating the maximum (minimum) value of a decomposable function as elements are inserted into and deleted from the set s.
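(as a rough illustration of how decomposability can be exploited---this is the classic partition-into-blocks idea, not the paper's semi-online structure, which is described next---the sketch below keeps the elements in blocks and answers f(x,s) as the maximum of per-block answers, so that deleting an element only touches its own block. the class name, block size, and example function are assumptions made for the sketch; a real application would give each block a precomputed structure such as a convex hull instead of scanning it.)

class DecomposableMax:
    def __init__(self, f, block_size=64):
        self.f, self.block_size, self.blocks = f, block_size, []

    def insert(self, y):
        if not self.blocks or len(self.blocks[-1]) >= self.block_size:
            self.blocks.append([])            # open a fresh block
        self.blocks[-1].append(y)

    def delete(self, y):
        for block in self.blocks:             # only the owning block is modified
            if y in block:
                block.remove(y)
                return
        raise KeyError(y)

    def query(self, x):
        """f(x,s) as the maximum of the per-block maxima."""
        return max(self.f(x, y) for block in self.blocks for y in block)

if __name__ == "__main__":
    d = DecomposableMax(lambda x, y: abs(x - y))   # farthest element from x
    for v in [3.0, -7.5, 12.25, 0.5]:
        d.insert(v)
    print(d.query(2.0))    # 10.25, achieved by y = 12.25
    d.delete(12.25)
    print(d.query(2.0))    # 9.5, achieved by y = -7.5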
our result holds for a semi-online model of dynamization: when an element is inserted, we are told how long it will stay. applications of this technique include efficient algorithms for dynamically computing the diameter or closest pair of a set of points, minimum separation among a set of rectangles, smallest distance between a set of points and a set of hyperplanes, and largest or smallest area (perimeter) rectangles determined by a set of points. these problems are fundamental to application areas such as robotics, vlsi masking, and optimization. david dobkin subhash suri the satanic notations: counting classes beyond #p and other definitional adventures lane a. hemaspaandra heribert vollmer theory of computing: achievements, challenges, and opportunities michael c. loui decidability of the multiplicity equivalence of multitape finite automata t. harju j. karhumäki deadlock-free routing in arbitrary networks via the flattest common supersequence method ambrose k. laing robert cypher on the simplification and equivalence problems for straight-line programs oscar h. ibarra brian s. leininger searching for primitive roots in finite fields v. shoup strongly equivalent logic programs a logic program pi1 is said to be equivalent to a logic program pi2 in the sense of the answer set semantics if pi1 and pi2 have the same answer sets. we are interested in the following stronger condition: for every logic program pi, pi1 ∪ pi has the same answer sets as pi2 ∪ pi. the study of strong equivalence is important, because we learn from it how one can simplify a part of a logic program without looking at the rest of it. the main theorem shows that the verification of strong equivalence can be accomplished by checking the equivalence of formulas in a monotonic logic, called the logic of here-and-there, which is intermediate between classical logic and intuitionistic logic. vladimir lifschitz david pearce agustín valverde dense hierarchies of grammatical families h. a. maurer a. salomaa d. wood finite state verifiers i: the power of interaction an investigation of interactive proof systems (ipss) where the verifier is a 2-way probabilistic finite state automaton (2pfa) is initiated. in this model, it is shown: (1) ipss in which the verifier uses private randomization are strictly more powerful than ipss in which the random choices of the verifier are made public to the prover. (2) ipss in which the verifier uses public randomization are strictly more powerful than 2pfa's alone, that is, without a prover. (3) every language which can be accepted by some deterministic turing machine in exponential time can be accepted by some ips. additional results concern two other classes of verifiers: 2pfa's that halt in polynomial expected time, and 2-way probabilistic pushdown automata that halt in polynomial time. in particular, ipss with verifiers in the latter class are as powerful as ipss where verifiers are polynomial-time probabilistic turing machines. in a companion paper [7], zero knowledge ipss with 2pfa verifiers are investigated. cynthia dwork larry stockmeyer voronoi diagrams for direction-sensitive distances oswin aichholzer danny z. chen d. t. lee asish mukhopadhyay evanthia papadopoulou franz aurenhammer approximation schemes for covering and packing problems in image processing and vlsi a unified and powerful approach is presented for devising polynomial approximation schemes for many strongly np-complete problems.
such schemes consist of families of approximation algorithms for each desired performance bound on the relative error ε > 0, with running time that is polynomial when ε is fixed. though the polynomiality of these algorithms depends on the degree of approximation ε being fixed, they cannot be improved, owing to a negative result stating that there are no fully polynomial approximation schemes for strongly np-complete problems unless np = p. the unified technique that is introduced here, referred to as the shifting strategy, is applicable to numerous geometric covering and packing problems. the method of using the technique and how it varies with problem parameters are illustrated. a similar technique, independently devised by b. s. baker, was shown to be applicable for covering and packing problems on planar graphs. dorit s. hochbaum wolfgang maass optimal dynamic scheduling of task tree on constant-dimensional architectures xiangdong yu dipak ghosal entropy and sorting we reconsider the old problem of sorting under partial information, and give polynomial time algorithms for the following tasks. (1) given a partial order p, find (adaptively) a sequence of comparisons (questions of the form, "is x < y?") which sorts (i.e. finds an unknown linear extension of) p using o(log(e(p))) comparisons in worst case (where e(p) is the number of linear extensions of p). (2) compute (on line) answers to any comparison algorithm for sorting a partial order p which force the algorithm to use Ω(log(e(p))) comparisons. (3) given a partial order p of size n, estimate e(p) to within a factor exponential in n. (we give upper and lower bounds which differ by the factor n^n/n!.) our approach, based on entropy of the comparability graph of p and convex minimization via the ellipsoid method, is completely different from earlier attempts to deal with these questions. jeff kahn jeong han kim simplifying termination proofs for rewrite systems by preprocessing bernhard gramlich a single-exponential upper bound for finding shortest paths in three dimensions we derive a single-exponential time upper bound for finding the shortest path between two points in 3-dimensional euclidean space with (not necessarily convex) polyhedral obstacles. prior to this work, the best known algorithm required double-exponential time. given that the problem is known to be pspace-hard, the bound we present is essentially the best (in the worst-case sense) that can reasonably be expected. john h. reif james a. storer deciding first-order properties of locally tree-decomposable structures we introduce the concept of a class of graphs, or more generally, relational structures, being locally tree-decomposable. there are numerous examples of locally tree-decomposable classes, among them the class of planar graphs and all classes of bounded valence or of bounded tree-width. we also consider a slightly more general concept of a class of structures having bounded local tree-width. we show that for each property φ of structures that is definable in first-order logic and for each locally tree-decomposable class c of structures, there is a linear time algorithm deciding whether a given structure a ∈ c has property φ. for classes c of bounded local tree-width, we show that for every k ≥ 1 there is an algorithm solving the same problem in time o(n^(1+1/k)) (where n is the cardinality of the input structure).
markus frick martin grohe classifying learnable geometric concepts with the vapnik-chervonenkis dimension a blumer a ehrenfeucht d haussler m warmuth on solving equations and disequations we are interested in the problem of solving a system <si = ti : 1 ≤ i ≤ n, pj ≠ qj : 1 ≤ j ≤ m> of equations and disequations, also known as disunification. solutions to disunification problems are substitutions for the variables of the problem that make the two terms of each equation equal, but leave those of the disequations different. we investigate this in both algebraic and logical contexts where equality is defined by an equational theory and more generally by a definite clause equality theory e. we show how e-disunification can be reduced to e-unification, that is, solving equations only, and give a disunification algorithm for theories given a unification algorithm. in fact, this result shows that for theories in which the solutions of all unification problems can be represented finitely, the solutions of all disunification problems can also be represented finitely. we sketch how disunification can be applied to handle negation in logic programming with equality in a similar style to colmerauer's logic programming with rational trees, and to represent many solutions to ac-unification problems by a few solutions to aci-disunification problems. wray l. buntine hans-jurgen burckert incremental reduction in the lambda calculus an incremental algorithm is one that takes advantage of the fact that the function it computes is to be evaluated repeatedly on inputs that differ only slightly from one another, avoiding unnecessary duplication of common computations. we define here a new notion of incrementality for reduction in the untyped λ-calculus and describe an incremental reduction algorithm, Λinc. we show that Λinc has the desirable property of performing non-overlapping reductions on related terms, yet is simple enough to allow a practical implementation. the algorithm is based on a novel λ-reduction strategy that may prove useful in a non-incremental setting as well. incremental λ-reduction can be used to advantage in any setting where an algorithm is specified in a functional or applicative manner. john field tim teitelbaum on the union of κ-curved objects alon efrat matthew j. katz another advantage of free choice (extended abstract): completely asynchronous agreement protocols recently, fischer, lynch and paterson [3] proved that no completely asynchronous consensus protocol can tolerate even a single unannounced process death. we exhibit here a probabilistic solution for this problem, which guarantees that as long as a majority of the processes continues to operate, a decision will be made (theorem 1). our solution is completely asynchronous and is rather strong: as in [4], it is guaranteed to work with probability 1 even against an adversary scheduler who knows all about the system. michael ben-or the alternating fixpoint of logic programs with negation we introduce and describe the alternating fixpoint of a logic program with negation. the underlying idea is to monotonically build up a set of negative conclusions until the least fixpoint is reached, using a transformation related to the one that defines stable models, developed by gelfond and lifschitz. from a fixed set of negative conclusions, we can derive the positive conclusions that follow (without deriving any further negative ones), by traditional horn clause semantics. the union of positive and negative conclusions is called the alternating fixpoint partial model.
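(a minimal sketch of the two-pass construction described next, restricted to ground programs; rules are assumed to be given as (head, positive body, negative body) triples, and the encoding and names are choices made for the sketch, not the paper's formulation.)

def derive(rules, assumed_false):
    """least set of atoms derivable when 'not a' is read as 'a in assumed_false'."""
    derived, changed = set(), True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if (head not in derived and set(pos) <= derived
                    and set(neg) <= assumed_false):
                derived.add(head)
                changed = True
    return derived

def alternating_fixpoint(rules):
    atoms = {a for head, pos, neg in rules for a in [head, *pos, *neg]}
    false_set = set()                              # underestimate of negative conclusions
    while True:
        over = atoms - derive(rules, false_set)    # first pass: an overestimate
        under = atoms - derive(rules, over)        # second pass: a new underestimate
        if under == false_set:                     # least fixpoint reached
            return derive(rules, false_set), false_set   # (true atoms, false atoms)
        false_set = under

prog = [("p", [], ["q"]),     # p <- not q
        ("r", ["p"], [])]     # r <- p
print(alternating_fixpoint(prog))   # ({'p', 'r'}, {'q'}), i.e. the well-founded model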
the name "alternating" was chosen because the transformation runs in two passes; the first pass transforms an underestimate of the set of negative conclusions into an (intermediate) overestimate; the second pass transforms the overestimates into a new underestimate; the composition of the two passes is monotonic. our main theorem is that the alternating fixpoint partial model is exactly the well-founded partial model. we also show that a system in fixpoint logic, which permits rule bodies to be first order formulas but requires inductive relations to be positive within them, can be transformed straightforwardly into a normal logic program whose alternating fixpoint partial model corresponds to the least fixpoint of the fixpoint logic system. thus alternating fixpoint logic is at least as expressive as fixpoint logic. the converse is shown to hold for finite structures. a. van gelder partial order programming (extended abstract) we introduce a programming paradigm in which statements are constraints over partial orders. a partial order programming problem has the form minimize u subject to u1 ⊒ v1, u2 ⊒ v2, … where u is the goal, and u1 ⊒ v1, u2 ⊒ v2, … is a collection of constraints called the program. a solution of the problem is a minimal value for u determined by values for u1, v1, etc. satisfying the constraints. the domain of values here is a partial order, a domain d with ordering relation ⊒. the partial order programming paradigm has interesting properties: it generalizes mathematical programming and also computer programming paradigms (logic, functional, and others) cleanly, and offers a foundation both for studying and combining paradigms. it takes thorough advantage of known results for continuous functionals on complete partial orders, when the constraints involve expressions using only continuous and monotone operators. the semantics of these programs coincide with recent results on the relaxation solution method for constraint problems. it presents a framework that may be effective in modeling, or knowledge representation, of complex systems. d. s. parker points and triangles in the plane and halving planes in space we prove that for any set s of n points in the plane and n^(3-α) triangles spanned by the points of s there exists a point (not necessarily of s) contained in at least n^(3-3α)/(512 log^5 n) of the triangles. this implies that any set of n points in three-dimensional space defines at most 6.4 n^(8/3) log^(5/3) n halving planes. boris aronov bernard chazelle herbert edelsbrunner leonidas j. guibas micha sharir rephael wenger multiplicative complexity of polynomial multiplication over finite fields let m_q(n) denote the number of multiplications required to compute the coefficients of the product of two polynomials of degree n over a q-element field by means of bilinear algorithms. it is shown that m_q(n) ≥ 3n - o(n). in particular, if q/2 < n ≤ q + 1, we establish the tight bound m_q(n) = 3n + 1 - [q/2]. the technique we use can be applied to analysis of algorithms for multiplication of polynomials modulo a polynomial as well. michael kaminski nader h. bshouty nearest neighbor problems gordon wilfong probabilistic simulations (preliminary version) the results of this paper concern the question of how fast machines with one type of storage media can simulate machines with a different type of storage media. most work on this question has focused on the question of how fast one deterministic machine can simulate another.
in this paper we shall look at the question of how fast a probabilistic machine can simulate another. this approach should be of interest in its own right, in view of the great attention that probabilistic algorithms have recently attracted. nicholas pippenger low level complexity for combinatorial games there have been numerous attempts to discuss the time complexity of problems and classify them into hierarchical classes such as p, np, pspace, exp, etc. a great number of familiar problems have been reported which are complete in np (nondeterministic polynomial time). even and tarjan considered generalized hex and showed that the problem to determine who wins the game if each player plays perfectly is complete in polynomial space. schaefer derived some two-person games from np-complete problems which are complete in polynomial space. a rough discussion such as to determine whether or not a given problem belongs to np is independent of the machine model and the way of defining the size of problems, since any of the commonly used machine models can be simulated by any other with a polynomial loss in running time and by no matter what criteria the size is defined, they differ from each other by polynomial order. however, in precise discussion, for example, in the discussion whether the computation of a problem requires o(n^k) time or o(n^(k+1)) time, the complexity heavily depends on machine models and the definition of size of problems. from these points, we introduce a somewhat stronger notion of reducibility. akeo adachi shigeki iwata takumi kasai reducing computational complexity with array predicates this article describes how array predicates were used to reduce the computational complexity of four apl primitive functions when one of their arguments is a permutation vector. the search primitives, indexof and set membership, and the sorting primitives, upgrade and downgrade, execute in linear time on such arguments. our contribution, a method for static determination of array properties, lets us generate code that is optimized for special cases of primitives. our approach eliminates runtime checks which would otherwise slow down the execution of all cases of the affected primitives. we use the same analysis technique to reduce the type complexity of certain array primitives. robert bernecky competitive algorithms for on-line problems an on-line problem is one in which an algorithm must handle a sequence of requests, satisfying each request without knowledge of the future requests. examples of on-line problems include scheduling the motion of elevators, finding routes in networks, allocating cache memory, and maintaining dynamic data structures. a competitive algorithm for an on-line problem has the property that its performance on any sequence of requests is within a constant factor of the performance of any other algorithm on the same sequence. this paper presents several general results concerning competitive algorithms, as well as results on specific on-line problems. mark manasse lyle mcgeoch daniel sleator generating hard instances of lattice problems (extended abstract) m. ajtai probabilistic temporal logics for finite and bounded models we present two (closely-related) propositional probabilistic temporal logics based on temporal logics of branching time as introduced by ben-ari, pnueli and manna and by clarke and emerson.
the first logic, ptlf, is interpreted over finite models, while the second logic, ptlb, which is an extension of the first one, is interpreted over infinite models with transition probabilities bounded away from 0. the logic ptlf allows us to reason about finite-state sequential probabilistic programs, and the logic ptlb allows us to reason about (finite-state) concurrent probabilistic programs, without any explicit reference to the actual values of their state-transition probabilities. a generalization of the tableau method yields exponential-time decision procedures for our logics, and complete axiomatizations of them are given. several meta-results, including the absence of a finite-model property for ptlb, and the connection between satisfiable formulae of ptlb and finite state concurrent probabilistic programs, are also discussed. sergiu hart micha sharir on embedding a microarchitectural design language within haskell john launchbury jeffrey r. lewis byron cook an optimal algorithm for the (≤ k)-levels, with applications to separation and transversal problems this paper gives an optimal o(n log n + nk) time algorithm for constructing the levels 1, …, k in an arrangement of n lines in the plane. this algorithm is extended to compute these levels in an arrangement of n unbounded x-monotone polygonal convex chains, of which each pair intersects at most a constant number of times. these algorithms can be used to solve the following separation and transversal problems. for a set of n blue points and a set of n red points, find a line that separates the two sets in such a way that the sum, m, of the number of red points above the line and the number of blue points below the line is minimized. such an optimal line can be found in o(nm log m + n log n) time. for a set of n line segments in the plane, find a line that intersects the maximum number of the line segments. such an optimal line can be found in o(nm log m + n log n) time for vertical segments and in o(nm log m + n log^2 n) expected time for arbitrary line segments, where m denotes the number of line segments not intersected by the optimal line. hazel everett jean-marc robert marc van kreveld effective axiomatizations of hoare logics edmund m. clarke steven m. german joseph y. halpern on exact specification by examples some recent work [7, 14, 15] in computational learning theory has discussed learning in situations where the teacher is helpful, and can choose to present carefully chosen sequences of labelled examples to the learner. we say a function t in a set h of functions (a hypothesis space) defined on a set x is specified by s ⊆ x if the only function in h which agrees with t on s is t itself. the specification number (t) of t is the least cardinality of such an s. for a general hypothesis space, we show that the specification number of any hypothesis is at least equal to a parameter from [14] known as the testing dimension of h. we investigate in some detail the specification numbers of hypotheses in the set hn of linearly separable boolean functions: we present general methods for finding upper bounds on (t) and we characterise those t which have largest (t). we obtain a general lower bound on the number of examples required and we show that for all nested hypotheses, this lower bound is attained. we prove that for any t ε hn, there is exactly one set of examples of minimal cardinality (i.e., of cardinality (t)) which specifies t.
we then discuss those t ε hn which have limited dependence, in the sense that some of the variables are redundant (i.e., there are irrelevant attributes), giving tight upper and lower bounds on (t) for such hypotheses. in the final section of the paper, we address the complexity of computing specification numbers and related parameters. martin anthony graham brightwell dave cohen john shawe-taylor functional interpretations of feasibly constructive arithmetic s. cook a. urquhart existential second-order logic over strings existential second-order logic (eso) and monadic second-order logic (mso) have attracted much interest in logic and computer science. eso is a much more expressive logic over successor structures than mso. however, little was known about the relationship between mso and syntactic fragments of eso. we shed light on this issue by completely characterizing this relationship for the prefix classes of eso over strings (i.e., finite successor structures). moreover, we determine the complexity of model checking over strings, for all eso-prefix classes. let eso(q) denote the prefix class containing all sentences of the shape ∃r qφ, where r is a list of predicate variables, q is a first-order quantifier prefix from the prefix set q, and φ is quantifier-free. we show that eso(∃*∀∃*) and eso(∃*∀∀) are the maximal standard eso-prefix classes contained in mso, thus expressing only regular languages. we further prove the following dichotomy theorem: an eso prefix-class either expresses only regular languages (and is thus in mso), or it expresses some np-complete languages. we also give a precise characterization of those eso-prefix classes that are equivalent to mso over strings, and of the eso-prefix classes which are closed under complementation on strings. thomas eiter georg gottlob yuri gurevich the asymptotic complexity of merging networks let m(m,n) be the minimum number of comparators needed in a comparator network that merges m elements x1 ≤ x2 ≤ … ≤ xm and n elements y1 ≤ y2 ≤ … ≤ yn, where n ≥ m. batcher's odd-even merge yields the following upper bound: m(m,n) ≤ (1/2)(m+n) log2 m + o(n); in particular, m(n,n) ≤ n log2 n + o(n). we prove the following lower bound that matches the upper bound above asymptotically as n ≥ m -> ∞: m(m,n) ≥ (1/2)(m+n) log2 m - o(m); in particular, m(n,n) ≥ n log2 n - o(n). our proof technique extends to give similarly tight lower bounds for the size of monotone boolean circuits for merging, and for the size of switching networks capable of realizing the set of permutations that arise from merging. peter bro miltersen mike paterson jun tarui compatible tetrahedralizations we give some special-case tetrahedralization algorithms. we first consider the problem of finding a tetrahedralization compatible with a fixed triangulation of the boundary of a polyhedron. we then adapt our solution to the related problem of compatibly tetrahedralizing the interior and exterior of a polyhedron. we also show how to tetrahedralize the region between nested convex polyhedra with o(n log n) tetrahedra and no steiner points. marshall bern optimal disk i/o with parallel block transfer j. s. vitter e. a. m. shriver the complexity of the equivalence problem for straight-line programs we look at several classes of straight-line programs and show that the equivalence problem is either undecidable or computationally intractable for all but the trivial classes.
for example, there is no algorithm to determine if an arbitrary program (with positive, negative, or zero integer inputs) using only constructs x ← 1, x ← x + y, x ← x/y (integer division) outputs 0 for all inputs. the result holds even if we consider only programs which compute total 0/1-functions. for programs using constructs x ← 0, x ← c, x ← cx, x ← x/c, x ← x + y, x ← x - y, skip l, if p(x) then skip l, and halt, the equivalence problem is decidable in [equation] time (λ is a fixed positive constant and n is the maximum of the sizes of the programs). the bound cannot be reduced to a polynomial in n unless p = np. in fact, we prove the following rather surprising result: the equivalence problem for programs with one input/output variable and one intermediate variable using only constructs x ← x + y and x ← x/2 is np-hard. we also show the decidability of the equivalence problem for a certain class of programs and use this result to prove the following: let in be the set of natural numbers and f be any total one-to-one function from in onto in × in. (f is called a pair generator. such functions are useful in recursive function and computability theory.) then f cannot be computed by any program using only constructs x ← 0, x ← c, x ← x + y, x ← x - y, x ← x * y, x ← x/y, skip l, if p(x) then skip l, and halt. oscar h. ibarra brian s. leininger formal language, grammar and set-constraint-based program analysis by abstract interpretation patrick cousot radhia cousot a framework for defining logics the edinburgh logical framework (lf) provides a means to define (or present) logics. it is based on a general treatment of syntax, rules, and proofs by means of a typed λ-calculus with dependent types. syntax is treated in a style similar to, but more general than, martin-löf's system of arities. the treatment of rules and proofs focuses on his notion of a judgment. logics are represented in lf via a new principle, the judgments as types principle, whereby each judgment is identified with the type of its proofs. this allows for a smooth treatment of discharge and variable occurrence conditions and leads to a uniform treatment of rules and proofs whereby rules are viewed as proofs of higher-order judgments and proof checking is reduced to type checking. the practical benefit of our treatment of formal systems is that logic-independent tools, such as proof editors and proof checkers, can be constructed. robert harper furio honsell gordon plotkin an inverted taxonomy of sorting algorithms an alternative taxonomy (to that of knuth and others) of sorting algorithms is proposed. it emerges naturally out of a top-down approach to the derivation of sorting algorithms. work done in automatic program synthesis has produced interesting results about sorting algorithms that suggest this approach. in particular, all sorts are divided into two categories: hardsplit/easyjoin and easysplit/hardjoin. quicksort and merge sort, respectively, are the canonical examples in these categories. insertion sort and selection sort are seen to be instances of merge sort and quicksort, respectively, and sinking sort and bubble sort are in-place versions of insertion sort and selection sort. such an organization introduces new insights into the connections and symmetries among sorting algorithms, and is based on a higher level, more abstract, and conceptually simple basis. it is proposed as an alternative way of understanding, describing, and teaching sorting algorithms. susan m.
merritt land*: an and with local bindings, a guarded let* special form oleg kiselyov the complexity of lalr(k) testing seppo sippu eljas soisalon-soininen esko ukkonen extended projection - new method to extract efficient programs from constructive proofs yukihide takayama logics for probabilistic programming (extended abstract) this paper introduces a logic for probabilistic programming, prob-dl (for probabilistic dynamic logic; see section 2 for a formal definition). this logic has "dynamic" modal operators in which programs appear, as in pratt's [1976] dynamic logic dl. however the programs of prob-dl contain constructs for probabilistic branching and looping whereas dl is restricted to nondeterministic programs. the formula {a} p of prob-dl denotes "with measure ≥ , formula p holds after executing program a." in section 3, we show that prob-dl has a complete and consistent axiomatization (using techniques derived from parikh's [1978] completeness proof for propositional dynamic logic). section 4 presents a probabilistic quantified boolean logic (prob-qbf) which also has applications to probabilistic programming. john h. reif concurrent dynamic logic in this paper concurrent dynamic logic (cdl) is introduced as an extension of dynamic logic tailored toward handling concurrent programs. properties of cdl are discussed, both on the propositional and first-order level, and the extension is shown to possess most of the desirable properties of dl. its relationships with the μ-calculus, game logic, dl with recursive procedures, and ptime are further explored, revealing natural connections between concurrency, recursion, and alternation. david peleg the orbit problem is decidable the "accessibility problem" for linear sequential machines (harrison [7]) is the problem of deciding whether there is an input x that sends such a machine from a given state q1 to a given state q2. harrison [7] showed that this problem is reducible to the "orbit problem": given a ∈ q^(n×n), does there exist i ∈ n such that a^i x = y? we will call this the "orbit problem" because the question can be rephrased as: does y belong to the orbit of x under a, where the "orbit of x under a" is the set {a^i x : i = 0, 1, 2, ...}. (a^0 is the identity matrix i.) in harrison's original problem the elements of a, x, and y were members of an arbitrary "computable" field. in view of the lack of structure of such fields, we study only the rationals. shank [13] proves that the orbit problem is decidable for the rational case when n = 2. the current paper establishes that for the general rational case, the problem is decidable - and in fact polynomial-time decidable. we wish to give a brief idea of our approach to the problem. ravindran kannan richard j. lipton k one-way heads cannot do string-matching tao jiang ming li dynamic word problems gudmund skovbjerg frandsen peter bro miltersen sven skyum primality testing using elliptic curves we present a primality proving algorithm---a probabilistic primality test that produces short certificates of primality on prime inputs. we prove that the test runs in expected polynomial time for all but a vanishingly small fraction of the primes. as a corollary, we obtain an algorithm for generating large certified primes with distribution statistically close to uniform. under the conjecture that the gap between consecutive primes is bounded by some polynomial in their size, the test is shown to run in expected polynomial time for all primes, yielding a las vegas primality test.
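the goldwasser-kilian test sketched in the elliptic-curve abstract above is involved; for contrast, here is a minimal python sketch of the standard miller-rabin probable-prime test, which gives fast probabilistic evidence of primality but, unlike the elliptic-curve method, produces no certificate. the round count is an illustrative choice.

```python
import random

def is_probable_prime(n, rounds=20):
    """miller-rabin: declares composites composite with prob >= 1 - 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:            # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False         # a witnesses compositeness
    return True

print(is_probable_prime(2**61 - 1))  # True (a mersenne prime)
```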
our test is based on a new methodology for applying group theory to the problem of prime certification, and the application of this methodology using groups generated by elliptic curves over finite fields. we note that our methodology and methods have been subsequently used and improved upon, most notably in the primality proving algorithm of adleman and huang using hyperelliptic curves and in practical primality provers using elliptic curves. shafi goldwasser joe kilian variable reordering for shared binary decision diagrams using output probabilities m. a. thornton j. p. williams r. drechsler n. drechsler hard-core theorems for complexity classes nancy lynch proved that if a decision problem a is not solvable in polynomial time, then there exists an infinite recursive subset x of its domain on which the decision is almost everywhere complex. in this paper, general theorems of this kind that can be applied to several well-known automata-based complexity classes, including a common class of randomized algorithms, are proved. shimon even alan l. selmen yacov yacobi property testing and its connection to learning and approximation in this paper, we consider the question of determining whether a function f has property p or is ε-far from any function with property p. a property testing algorithm is given a sample of the value of f on instances drawn according to some distribution. in some cases, it is also allowed to query f on instances of its choice. we study this question for different properties and establish some connections to problems in learning theory and approximation. in particular, we focus our attention on testing graph properties. given access to a graph g in the form of being able to query whether an edge exists or not between a pair of vertices, we devise algorithms to test whether the underlying graph has properties such as being bipartite, k-colorable, or having a p-clique (clique of density p with respect to the vertex set). our graph property testing algorithms are probabilistic and make assertions that are correct with high probability, while making a number of queries that is independent of the size of the graph. moreover, the property testing algorithms can be used to efficiently (i.e., in time linear in the number of vertices) construct partitions of the graph that correspond to the property being tested, if it holds for the input graph. oded goldreich shari goldwasser dana ron the membership problem in aperiodic transformation monoids the problem of testing membership in aperiodic or "group-free" transformation monoids is the natural counterpart to the well-studied membership problem in permutation groups. the class a of all finite aperiodic monoids and the class g of all finite groups are two examples of varieties, the fundamental complexity units in terms of which finite monoids are classified. the collection of all varieties v forms an infinite lattice under the inclusion ordering, with the subfamily of varieties that are contained in a forming an infinite sublattice. for each v a, the associated problem memb(v) of testing membership in transformation monoids that belong to v, is considered. remarkably, the computational complexity of each such problem turns out to look familiar. moreover, only five possibilities occur as v ranges over the whole aperiodic sublattice: with one family of np-hard exceptions whose exact status is still unresolved, any such memb(v) is either pspace-complete, np- complete, p-complete or in ac0. 
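a rough python sketch in the spirit of the graph property testers described in the property-testing abstract above: sample a small set of vertices whose size depends only on ε, query just the edges among them, and accept if the induced subgraph is 2-colorable. the sample-size constant is illustrative only, not the bound proved in the paper.

```python
import random
from collections import deque

def induced_is_bipartite(sample, has_edge):
    """bfs 2-coloring of the subgraph induced on the sampled vertices."""
    color = {}
    for s in sample:
        if s in color:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in sample:
                if v != u and has_edge(u, v):
                    if v not in color:
                        color[v] = 1 - color[u]
                        queue.append(v)
                    elif color[v] == color[u]:
                        return False
    return True

def test_bipartite(n, has_edge, eps=0.1):
    """accept if the induced subgraph on a small random sample is bipartite."""
    k = min(n, int(20 / eps**2))          # illustrative sample size only
    sample = random.sample(range(n), k)
    return induced_is_bipartite(sample, has_edge)

print(test_bipartite(60, lambda u, v: u != v))            # complete graph: rejected (False)
print(test_bipartite(60, lambda u, v: (u + v) % 2 == 1))  # complete bipartite graph: accepted (True)
```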
these results thus uncover yet another surprisingly tight link between the theory of monoids and computational complexity theory. m. beaudry p. mckenzie d. therien balanced routing: towards the distance bound on grids manfred kunde the complexity of searching a sorted array of strings arne andersson torben hagerup johan håstad ola petersson are lr parsers too powerful? p machanick the role of term symmetry in e-completion procedures a major portion of the work and time involved in completing an incomplete set of reductions using an e-completion procedure such as the one described by knuth and bendix or its extension to associative-commutative equational theories as described by peterson and stickel is spent calculating critical pairs and subsequently testing them for confluence and coherence. a pruning technique which removes from consideration those critical pairs that represent redundant or superfluous information can make a marked difference in the run time and efficiency of an e-completion procedure to which it is applied. in this paper, a technique is proposed for removing critical pairs from consideration at various points before, during, or after their formation. this method is based on the property of term symmetry, which will be defined and explored with respect to e-unification and e-completion procedures. informally, term symmetry exists between two terms when one can be transformed into the other through variable renaming. by identifying and eliminating various forms of term symmetry which arise between syntactic structures such as, reductions, critical pairs, subterms, and unifiers, it is possible to derive an e-completion procedure that produces the same results without processing these symmetric redundancies. ralph w. wilkerson blayne e. mayfield an efficient dynamic selection method j. t. postmus a. h.g. rinnooy kan g. t. timmer search for the maximum of a random walk andrew m. odlyzko neural network for partitionable variational inequalities often in recent times industries have asked mathematicians to determine their "true" utility functions directly from the available data about used resources and about the corresponding profits in order to optimize the latter with respect to the former. the possibility of determining the utility function directly from the data is very important because in this way the exact situation of the company is described. moreover, the biggest companies divide their investments into several activities. the optimization of their utility function can lead to problems that involve separable or partitionable functions. two-layered feed- forward neural networks are able to approximate any separable function while fitting the data mantaining the separable structure with the desired approximation error. thus the theory of partitionable variational inequalities can be used in order to find the optimum of the utility function, subject to some constraints. the presence of the partitionable structure is important because it simplifies the resolution algorithms and makes them more efficient. moreover, with stronger assumptions about the function, the above results can be generalized to a larger class of utility functions: one problem of dimension n can be split into n problems of dimension one. giulia rotondo a fixpoint semantics for nondeterministic data flow criteria for adequacy of a data flow semantics are discussed and kahn's successful semantics for functional (deterministic) data flow is reviewed. 
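a minimal python sketch of the notion of term symmetry used in the e-completion abstract above: two first-order terms (nested tuples, with variables written as strings starting with an uppercase letter) are symmetric if one maps onto the other by a consistent one-to-one renaming of variables. this term representation is an assumption of the sketch, not the paper's.

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def renaming(s, t, fwd=None, bwd=None):
    """return a variable renaming taking term s to term t, or None if none exists."""
    fwd = {} if fwd is None else fwd
    bwd = {} if bwd is None else bwd
    if is_var(s) and is_var(t):
        if fwd.setdefault(s, t) != t or bwd.setdefault(t, s) != s:
            return None               # renaming must be consistent and injective
        return fwd
    if isinstance(s, tuple) and isinstance(t, tuple) and len(s) == len(t):
        for a, b in zip(s, t):
            if renaming(a, b, fwd, bwd) is None:
                return None
        return fwd
    return fwd if s == t else None    # identical constants / function symbols

# f(X, g(Y, a)) vs f(U, g(V, a)): symmetric; f(X, X) vs f(U, V): not
print(renaming(('f', 'X', ('g', 'Y', 'a')), ('f', 'U', ('g', 'V', 'a'))))  # {'X': 'U', 'Y': 'V'}
print(renaming(('f', 'X', 'X'), ('f', 'U', 'V')))                          # None
```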
problems arising from nondeterminism are introduced and the paper's approach to overcoming them is introduced. the approach is based on generalizing the notion of input-output relation, essentially to a partially ordered multiset of input-output histories. the brock-ackerman anomalies concerning the input- output relation model of nondeterministic data flow are reviewed, and it is indicated how the proposed approach avoids them. a new anomaly is introduced to motivate the use of multisets. a formal theory of asynchronous processes is then developed. the main result is that the operation of forming a process from a network of component processes is associative. this result shows that the approach is not subject to anomalies such as that of brock and ackerman. john staples v. l. nguyen skinny and fleshy failures of relative completeness the notion of relative completeness of logics of programs was delineated almost ten years ago, in particular by wand, cook and clarke. more recently, it has been felt that cook's notion hinges on a fragile balance between the semantics of a programming language and first-order expressiveness in structures. this fragility underlies the negative results about relative completeness. the main negative result in this area, clarke's theorem, states that for a programming language with a sufficiently rich control structure there is no effective formalism for partial-correctness that is both sound and relatively complete (in the sense of cook). a conclusion often drawn from this result is that the non-existence of an adequate cook-complete formalization of the partial-correctness theory of a programming language is an indicator of the complexity of that language. our most novel result is that this is not always the case. we exhibit a programming language whose control structure is trivial, and yet for which no cook-complete hoare logic exists. the poverty of the language is precisely what permits certain structures to be expressive, structures which would not be expressive had the program constructs been used more freely. we also discuss the failure of relative completeness for "fleshy" programming languages, of the kind of clarke's. we point out the relevance of the lambda calculus here, from which we derive the failure of cook-completeness for a programming language orthogonal (i.e. incomparable) to clarke's, whose control-structure complexity resides to a great extent in a modest use of generic procedures. a variant of the same idea also provides an alternative proof of non-relative-completeness for (a variant of) clarke's language. finally, we note in passing a simple proof of wand's theorem, which states the failure of local completeness (without cook's expressiveness condition) for while programs. our proof uses rudimentary facts from mathematical logic. its interest, compared with previous proofs [wan78, bt82, apt81] is that it refers to a structure whose first-order theory is not decidable. moreover, our proof shows that the failure of local completeness for while programs is not necessarily due to a tension between first-order definability and definability by program looping, as one might have been tempted to conclude from previous proofs. d. leivant t. fernando continuations may be unreasonable we show that two lambda calculus terms can be observationally congruent (i.e., agree in all contexts) but their continuation-passing transforms may not be. 
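to make the continuation-passing transforms mentioned in the abstract above concrete, here is a small python sketch (not the meyer-riecke development) of a direct-style function and a hand-written cps version of it; the cps form threads an explicit continuation argument k instead of returning.

```python
def fact(n):
    """direct style."""
    return 1 if n == 0 else n * fact(n - 1)

def fact_cps(n, k):
    """continuation-passing style: k receives the result instead of a return."""
    if n == 0:
        return k(1)
    return fact_cps(n - 1, lambda r: k(n * r))

print(fact(5))                       # 120
print(fact_cps(5, lambda r: r))      # 120, with the identity continuation
print(fact_cps(5, lambda r: r + 1))  # 121: the caller controls what happens next
```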
we also show that two terms may be congruent in all untyped contexts but fail to be congruent in a calculus with call/cc operators. hence, familiar reasoning about functional terms may be unsound if the terms use continuations explicitly or access them implicitly through new operators. we then examine one method of connecting terms with their continuized form, extending the work of meyer and wand [8]. albert meyer jon g. riecke temporal abstract interpretation we study the abstract interpretation of temporal calculi and logics in a general syntax, semantics and abstraction independent setting. this is applied to the @@@@-calculus, a generalization of the μ-calculus with new reversal and abstraction modalities as well as a new time-symmetric trace-based semantics. the more classical set-based semantics is shown to be an abstract interpretation of the trace-based semantics, which leads to the understanding of model-checking and its application to data-flow analysis as incomplete temporal abstract interpretations. soundness and incompleteness of the abstractions are discussed. the sources of incompleteness, even for finite systems, are pointed out, which leads to the identification of relatively complete sublogics, a la ctl. patrick cousot radhia cousot balanced lines, halving triangles, and the generalized lower bound theorem a recent result by pach and pinchasi on so-called balanced lines of a finite two-colored point set in the plane is related to other facts on halving triangles in 3-space and to a special case of the generalized lower bound theorem for convex polytopes. micha sharir emo welzl the power of parallel pointer manipulation t. w. lam w. l. ruzzo using formal grammars to encode expert problem solving knowledge clyde matthews theory of computation: how to start leonid levin size-time complexity of boolean networks for prefix computations the prefix problem consists of computing all the products x0 x1 … xj (j = 0, …, n − 1), given a sequence x = (x0, x1, …, xn−1) of elements in a semigroup. in this paper we completely characterize the size-time complexity of computing prefixes with boolean networks, which are synchronized interconnections of boolean gates and one-bit storage devices. this complexity crucially depends upon two properties of the underlying semigroup, which we call cycle-freedom (no cycle of length greater than one in the cayley graph of the semigroup), and memory-induciveness (arbitrarily long products of semigroup elements are true functions of all their factors). a nontrivial characterization is given of non-memory-inducive semigroups as those whose recurrent subsemigroup (formed by the elements with self-loops in the cayley graph) is the direct product of a left-zero semigroup and a right-zero semigroup. denoting by s and t size and computation time, respectively, we have s = θ((n/t) log(n/t)) for memory-inducive non-cycle-free semigroups, and s = θ(n/t) for all other semigroups. we have t ∈ [Ω(log n), o(n)] for all semigroups, with the exception of those whose recurrent subsemigroup is a right-zero semigroup, for which t ∈ [Ω(1), o(n)]. the preceding results are also extended to the vlsi model of computation. area-time optimal circuits are obtained for both boundary and nonboundary i/o protocols. g. bilardi f. p. preparata generalized selection and ranking (preliminary version) selection in a set requires time linear in the size of the set when there are no a priori constraints on the total orders possible for the set.
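a small python sketch of the prefix problem defined in the boolean-networks abstract above, computed by the usual parallel doubling scheme in which each of the roughly log2 n rounds combines elements at distance 2^j; this is a generic illustration over any associative operation, not the bilardi-preparata network construction.

```python
def parallel_prefix(x, op):
    """compute x[0], x[0]*x[1], ..., x[0]*...*x[n-1] in about log2(n) doubling rounds."""
    y = list(x)
    step = 1
    while step < len(y):
        # in a real parallel model every position i >= step updates simultaneously
        y = [y[i] if i < step else op(y[i - step], y[i]) for i in range(len(y))]
        step *= 2
    return y

print(parallel_prefix([3, 1, 4, 1, 5, 9, 2, 6], lambda a, b: a + b))
# [3, 4, 8, 9, 14, 23, 25, 31]
print(parallel_prefix(list("abcd"), lambda a, b: a + b))  # ['a', 'ab', 'abc', 'abcd']
```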
constraints often come for free, however, with sets which arise in applications. linear time selection [bl] can be suboptimal for such problems. we therefore generalize the well known selection problem to admit constraints on the input sets, with a view toward settling the complexity issues which arise. the generalization also applies to the other quantile problems of ranking a given element in the input set and verification of the claim that a given element has a specified rank. greg n. frederickson donald b. johnson p-uniform circuit complexity much complexity-theoretic work on parallelism has focused on the class nc, which is defined in terms of logspace-uniform circuits. yet p-uniform circuit complexity is in some ways a more natural setting for studying feasible parallelism. in this paper, p-uniform nc (punc) is characterized in terms of space-bounded auxpdas and alternating turing machines with bounded access to the input. the notions of general-purpose and special-purpose computation are considered, and a general-purpose parallel computer for punc is presented. it is also shown that nc = punc if all tally languages in p are in nc; this implies that the nc = punc question and the nc = p question are both instances of the aspace(s(n)) = aspace,time(s(n), s(n)^o(1)) question. as a corollary, it follows that nc = punc implies pspace = dtime(2^(n^o(1))). eric w. allender an nc parallel 3d convex hull algorithm in this paper we present an o(log n) time parallel algorithm for computing the convex hull of n points in r3. this algorithm uses o(n^(1+δ)) processors on a crew pram, for any constant 0 < δ ≤ 1. so far, all adequately documented parallel algorithms proposed for this problem use time at least o(log^2 n). in addition, the algorithm presented here is the first parallel algorithm for the three-dimensional convex hull problem that is not based on the serial divide-and-conquer algorithm of preparata and hong, whose crucial operation is the merging of the convex hulls of two linearly separated point sets. the contributions of this paper are therefore (i) an o(log n) time parallel algorithm for the three-dimensional convex hull problem, and (ii) a parallel algorithm for this problem that does not follow the traditional divide-and-conquer paradigm. nancy m. amato franco p. preparata using degrees of freedom analysis to solve geometric constraint systems glenn a. kramer on the combinatorial and algebraic complexity of quantifier elimination in this paper, a new algorithm for performing quantifier elimination from first order formulas over real closed fields is given. this algorithm improves the complexity of the asymptotically fastest algorithm for this problem known to this date. a new feature of this algorithm is that the roles of the algebraic part (the dependence on the degrees of the input polynomials) and the combinatorial part (the dependence on the number of polynomials) are separated. another new feature is that the degrees of the polynomials in the equivalent quantifier-free formula that is output are independent of the number of input polynomials. as special cases of this algorithm, new and improved algorithms for deciding a sentence in the first order theory over real closed fields, and also for solving the existential problem in the first order theory over real closed fields, are obtained.
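a sequential planar baseline for the convex-hull abstracts above (not the amato-preparata nc algorithm): andrew's monotone-chain construction, which runs in o(n log n) and is the kind of serial computation the parallel algorithms aim to beat.

```python
def cross(o, a, b):
    """z-component of (a - o) x (b - o); positive for a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```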
saugata basu marie-françoise roy convexity algorithms in parallel coordinates with a system of parallel coordinates, objects in rn can be represented with planar "graphs" (i.e., planar diagrams) for arbitrary n [21]. in r2, embedded in the projective plane, parallel coordinates induce a point -> line duality. this yields a new duality between bounded and unbounded convex sets and hstars (a generalization of hyperbolas), as well as a duality between convex union (convex merge) and intersection. from these results, algorithms for constructing the intersection and convex merge of convex polygons in o(n) time and the convex hull on the plane in o(log n) for real-time and o(n log n) worst-case construction, where n is the total number of points, are derived. by virtue of the duality, these algorithms also apply to polygons whose edges are a certain class of convex curves. these planar constructions are studied prior to exploring generalizations to n-dimensions. the needed results on parallel coordinates are given first. alfred inselberg mordechai reif tuval chomut multicommodity max-flow min-cut theorems and their use in designing approximation algorithms tom leighton satish rao on the complexity of ram with various operation sets we prove that polynomial time bounded rams with the instruction set [shift, +, x, boolean ] accept exactly the languages in pspace. this generalizes previous results: [5] showed the same for the instruction set that does not include multiplication, [5] and [7] proved the weaker theorems, that rams (and even prams) with this instruction set could be simulated in exptape. the pram result is a simple corollary to our theorems. we also introduce other powerful string- manipulating instructions for rams, show a nontrivial simulation of turing machines by these rams, and show that in a sense such simulations are optimal. janos simon mario szegedy on-line load balancing with applications to machine scheduling and virtual circuit routing james aspnes yossi azar amos fiat serge plotkin orli waarts lower bounds for sorting networks nabil kahale tom leighton yuan ma c. greg plaxton torsten suel endre szemeredi on the efficiency of subsumption algorithms the costs of subsumption algorithms are analyzed by an estimation of the maximal number of unification attempts (worst-case unification complexity) made for deciding whether a clause c subsumes a clause d. for this purpose the clauses c and d are characterized by the following parameters: number of variables in c, number of literals in c, number of literals in d, and maximal length of the literals. the worst- case unification complexity immediately yields a lower bound for the worst- case time complexity. first, two well-known algorithms (chang-lee, stillman) are investigated. both algorithms are shown to have a very high worst-case time complexity. then, a new subsumption algorithm is defined, which is based on an analysis of the connection between variables and predicates in c. an upper bound for the worst- case unification complexity of this algorithm, which is much lower than the lower bounds for the two other algorithms, is derived. examples in which exponential costs are reduced to polynomial costs are discussed. finally, the asymptotic growth of the worst-case complexity for all discussed algorithms is shown in a table (for several combinations of the parameters). g. gottlob a. 
leitsch multiplying streams of large matrices in parallel and distributed environment gregory rowe the halting problem for turing machines d m wilson research some properties circulant matrix and fast convolution algorithms ivan ivanov some perspectives on computational complexity in past decades, the theory of computational complexity has flourished in terms of both the revelation of its internal structures and the unfolding of its numerous applications. in this paper we discuss several persistent and interwoven themes underlying many of these accomplishments. chief among them are the interplay between communication and computation, the power of problem reduction, and the increasingly prominent role played by classical mathematics. we will also speculate on a few promising directions for future development of computational complexity. andrew chi-chih yao ccs expressions, finite state processes, and three problems of equivalence we examine the computational complexity of testing finite state processes for equivalence, in the calculus of communicating systems (ccs). this equivalence problem in ccs is presented as a refinement of the familiar problem of testing whether two nondeterministic finite state automata (n.f.s.a.) accept the same language. three notions of equivalence, proposed for ccs, are investigated: (1) observation equivalence, (2) congruence, and (3) failure equivalence. we show that observation equivalence (≈) can be tested in cubic time and is the limit of a sequence of equivalence notions (≈k), where ≈1 is the familiar n.f.s.a. equivalence and, for each fixed k, ≈k is pspace-complete. we provide an o(n log n) test for congruence for n state processes of bounded fanout, by extending the algorithm that minimizes the states of d.f.s.a.'s. finally, we show that, even for a very restricted type of process, testing for failure equivalence is pspace-complete. paris c. kanellakis scott a. smolka computing algebraic formulas with a constant number of registers we show that, over an arbitrary ring, the functions computed by polynomial-size algebraic formulas are also computed by polynomial-length algebraic straight-line programs which use only 3 registers (or 4 registers, depending on some definitions). we also show that polynomial-length products of 3 × 3 matrices compute precisely those functions that polynomial-size formulas compute (whereas, for general rings, polynomial-length 3-register straight-line programs compute strictly more functions than polynomial-size formulas). this can be viewed as an extension of the results of barrington in [ba1,ba2] from the boolean setting to the algebraic setting of an arbitrary ring. richard cleve a polynomial linear search algorithm for the n-dimensional knapsack problem we present a linear search algorithm which decides the n-dimensional knapsack problem in n^4 log(n) + o(n^3) steps. this algorithm works for inputs consisting of n numbers for some arbitrary but fixed integer n. this result solves an open problem posed for example in [6] and [7] by dobkin/lipton and a.c.c. yao, respectively. it destroys the hope of proving large lower bounds for this np-complete problem in the model of linear search algorithms. friedhelm meyer auf der heide a new string-pattern matching algorithm using partitioning and hashing efficiently sun kim a probabilistic pdl in this paper we give a probabilistic analog ppdl of propositional dynamic logic.
we prove a small model property and give a polynomial space decision procedure for formulas involving well-structured programs. we also give a deductive calculus and illustrate its use by calculating the expected running time of a simple random walk program. dexter kozen polynomial time solutions of some problems of computational algebra the first structure theory in abstract algebra was that of finite dimensional lie algebras (cartan-killing), followed by the structure theory of associative algebras (wedderburn-artin). these theories determine, in a non-constructive way, the basic building blocks of the respective algebras (the radical and the simple components of the factor by the radical). in view of the extensive computations done in such algebras, it seems important to design efficient algorithms to find these building blocks. we find polynomial time solutions to a substantial part of these problems. we restrict our attention to algebras over finite fields and over algebraic number fields. we succeed in determining the radical (the "bad part" of the algebra) in polynomial time, using (in the case of prime characteristic) some new algebraic results developed in this paper. for associative algebras we are able to determine the simple components as well. this latter result generalizes factorization of polynomials over the given field. correspondingly, our algorithm over finite fields is las vegas. some of the results generalize to fields given by oracles. some fundamental problems remain open. an example: decide whether or not a given rational algebra is a noncommutative field. k friedl l rónyai algorithm 167: dif, the difference between two matrices richard h. oates bandwidth constrained np-complete problems bandwidth restrictions are considered on several np-complete problems, including the following problems: (1) 3-satisfiability, (2) independent set and vertex cover, (3) simple max cut, (4) partition into triangles, (5) 3-dimensional matching, (6) exact cover by 3 sets, (7) dominating set, (8) graph grundy numbering (for graphs of finite degree), (9) 3-colorability, (10) directed and undirected hamiltonian circuit, (11) bandwidth minimization, and (12) feedback vertex set and feedback arc set. it is shown that each of the problems (1)-(12), when restricted to graphs (formulas, triples, or sets) of bandwidth bounded by a function f, is log space hard for the complexity class ntisp(poly, f(n)). (ntisp(poly, f(n)) denotes the family of problems solvable nondeterministically in polynomial time and simultaneous f(n) space, e.g., ntisp(poly, poly) = np and ntisp(poly, log n) = nspace(log n).) in fact, (1)-(9) are log space complete for ntisp(poly, f(n)) when the bandwidth is bounded by the function f. this means, for example, that (1)-(9) provide several new examples of problems complete for nspace(log n), and hence solvable in polynomial time deterministically, when restricted to bandwidth log2 n. in general, for a function f, if any of the problems (1)-(12), when restricted to bandwidth f(n), could be solved deterministically in polynomial time, then ntisp(poly, f(n)) ⊆ p. (this does not seem particularly likely even when f(n) = log2 n.) this indicates that several np-complete problems become easier with diminishing bandwidth. however, they remain intractable unless the bandwidth is restricted to c·log2 n, for some c > 0.
burkhard monien ivan hal sudborough the complexity of the union of ( , )-covered objects alon efrat relativized polynomial time hierarchies having exactly k levels ker-i ko enhanced sharing analysis techniques: a comprehensive evaluation roberto bagnara enea zaffanella patricia m. hill reliable computation with cellular automata we construct a one- dimensional array of cellular automata on which arbitrarily large computations can be implemented reliably, even though each automaton at each step makes an error with some constant probability. to compute reliably with unreliable components, von neumann proposed boolean circuits whose intricate interconnection pattern (arising from the error- correcting organization) he had to assume to be immune to errors. in a uniform cellular medium, the error- correcting organization exists only in "software", therefore errors threaten to disable it. the real technical novelty of the paper is therefore the construction of a self-repairing organization. peter gacs program analysis: a toolmaker's perspective reinhard wilhelm properties of a family of booster types gary l. peterson time distribution analysis for binary search of a linked list firooz khosraviyani mohammad h. moadab douglas f. hale p-complete geometric problems m. atallah p. callahan m. goodrich a fast parallel algorithm for the maximal independent set problem a parallel algorithm is presented which accepts as input a graph g and produces a maximal independent set of vertices in g. on a p-ram without the concurrent write or concurrent read features, the algorithm executes in o((log n)4) time and uses o((n/log n)3) processors, where n is the number of vertices in g. the algorithm has several novel features that may find other applications. these include the use of balanced incomplete block designs to replace random sampling by deterministic sampling, and the use of a "dynamic pigeonhole principle" that generalizes the conventional pigeonhole principle. richard m. karp avi wigderson symbolic and algebraic computation research in italy p gianni a miola t mora mechanizing a theory of program composition for unity compositional reasoning must be better understood if non-trivial concurrent programs are to be verified. chandy and sanders [2000] have proposed a new approach to reasoning about composition, which charpentier and chandy [1999] have illustrated by developing a large example in the unity formalism. the present paper describes extensive experiments on mechanizing the compositionality theory and the example, using the proof tool isabelle. broader issues are discussed, in particular, the formalization of program states. the usual representation based upon maps from variables to values is contrasted with the alternatives, such as a signature of typed variables. properties need to be transferred from one program component's signature to the common signature of the system. safety properties can be so transferred, but progress properties cannot be. using polymorphism, this problem can be circumvented by making signatures sufficiently flexible. finally the proof of the example itself is outlined. lawrence c. paulson dynamically maintaining configurations in the plane (detailed abstract) for a number of common configurations of points (lines) in the plane, we develop datastructures in which insertions and deletions of points (or lines, respectively) can be processed rapidly without sacrificing much of the efficiency of query answering of known static structures for these configurations. 
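a round-based python simulation of luby's randomized maximal-independent-set algorithm, offered as a simpler relative of the deterministic parallel construction in the karp-wigderson abstract above (it is not their algorithm); each round every undecided vertex draws a random value and joins the set if it beats all undecided neighbours.

```python
import random

def luby_mis(n, edges):
    """maximal independent set on vertices 0..n-1; edges is a set of (u, v) pairs."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    undecided, mis = set(range(n)), set()
    while undecided:
        draw = {v: random.random() for v in undecided}
        # every undecided vertex "in parallel" checks whether it is a local minimum
        winners = {v for v in undecided
                   if all(draw[v] < draw[u] for u in adj[v] if u in undecided)}
        mis |= winners
        removed = winners | {u for w in winners for u in adj[w]}
        undecided -= removed
    return mis

random.seed(0)
print(luby_mis(6, {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)}))  # e.g. an mis of a 6-cycle
```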
as a main result we establish a fully dynamic maintenance algorithm for convex hulls that can process insertions and deletions of single points in only o(log3n) steps or less per transaction, where n is the number of points currently in the set. the algorithm has several intriguing applications, including that one can "peel" a set of n points in only o(log3n) steps and that one can maintain two sets at a costs of only o(log3n) or less per insertion and deletion such that it never takes more than o(log2n) steps to determine whether the two sets are separable by a straight line. also efficient algorithms are obtained for dynamically maintaining the common intersection of a set of half-spaces and for dynamically maintaining the maximal elements of a set of plane points. the results are all derived by means of one master technique, which is applied repeatedly and which seems to capture an appropriate notion of "decomposability" for configurations. mark h. overmars jan van leeuwen weak alternating automata are not that weak automata on infinite words are used for specification and verification of nonterminating programs. different types of automata induce different levels of expressive power, of succinctness, and of complexity. alternating automata have both existential and universal branching modes and are particularly suitable for specification of programs. in a weak alternating automata the state space is partitioned into partially ordered sets, and the automaton can proceed from a certain set only to smaller sets. reasoning about weak alternating automata is easier than reasoning about alternating automata with no restricted structure. known translations of alternating automata to weak alternating automata involve determinization, and therefore involve a double- exponential blow-up. in this paper we describe a quadratic translation, which circumvents the need for determinization, of buchi and co-buchi alternating automata to weak alternating automata. beyond the independent interest of such a translation, it gives rise to a simple complementation algorithm for nondeterministic buchi automata. orna kupferman moshe y. vardi an efficient and simple polygon intersection algorithm maharaj mukherjee topological matching there is a lot of practical and theoretical interest in designing algorithms to process digital pictures. of particular interest are problems arising when one starts with an nxn array of pixels and stores it, one pixel per processor, in some sort of array-like parallel computer. in this paper we give an optimal (n) time solution, based on a simpler (n) time solution for a more powerful computer called a mesh computer. beyer suggested that this problem was a prime candidate for a non-linear recognition problem, but our result shows that this is not true. quentin f. stout confluence properties of weak and strong calculi of explicit substitutions categorical combinators [curien 1986/1993; hardin 1989; yokouchi 1989] and more recently λ -calculus [abadi 1991; hardin and levy 1989], have been introduced to provide an explicit treatment of substitutions in the λ-calculus. we reintroduce here the ingredients of these calculi in a self- contained and stepwise way, with a special emphasis on confluence properties. 
the main new results of the paper with respect to curien [1986/1993], hardin [1989], abadi [1991], and hardin and levy [1989] are the following: (1) we present a confluent weak calculus of substitutions, where no variable clashes can be feared;(2) we solve a conjecture raised in abadi [1991]: λ -calculus is not confluent (it is confluent on ground terms only). this unfortunate result is "repaired" by presenting a confluent version of λ -calculus, named the λenv-caldulus in hardin and levy [1989], called here the confluent λ -calculus. pierre-louis curien therese hardin jean-jacques levy chain mulitplication of matrices of approximately or exactly the same size we present a different approach to finding an optimal computation order; it exploits both the difference between the size of the matrices and the difference between the number of nonzero elements in the matrices. therefore, this technique can be usefully applied where the matrices are almost or exactly the same size. we show that using the proposed technique, an optimal computation order can be determined in time o(n) if the matrices have the same size, and in time o(n) otherwise. nicola santoro communication-efficient parallel sorting (preliminary version) michael t. goodrich approximate motion planning and the complexity of the boundary of the union of simple geometric figures we study rigid motions of a rectangle amidst polygonal obstacles. the best known algorithms for this problem have running time (n2) where n is the number of obstacle corners. we introduce the tightness of a motion planning problem as a measure of the difficulty of a planning problem in an intuitive sense and describe an algorithm with running time ((a/b * 1/ε crit + 1)n(log n)2), where a ≥ b are the lengths of the sides of a rectangle and εcrit is the tightness of the problem. we show further that the complexity (= number of vertices) of the boundary of n bow-ties (c.f. figure 1.1) is (n). similar results for the union of other simple geometric figures such as triangles and wedges are also presented. helmut alt rudolf fleischer michael kaufmann kurt mehlhorn stefan näher stefan schirra christian uhrig implementing radixsort arne andersson stefan nilsson empty types in polymorphic lambda calculus the model theory of simply typed and polymorphic (second-order) lambda calculus changes when types are allowed to be empty. for example, the "polymorphic boolean" type really has exactly two elements in a polymorphic model only if the "absurd" type ∀t.t is empty. the standard β-ε axioms and equational inference rules which are complete when all types are nonempty are not complete for models with empty types. without a little care about variable elimination, the standard rules are not even sound for empty types. we extend the standard system to obtain a complete proof system for models with empty types. the completeness proof is complicated by the fact that equational "term models" are not so easily obtained: in contrast to the nonempty case, not every theory with empty types is the theory of a single model. a. r. meyer j. c. mitchell e. moggi r. statman hypergeometric dispersion and the orbit problem we describe an algorithm for finding the positive integer solutions n of orbit problems of the form αn = β where α and β are given elements of a field k. our algorithm corrects the bounds given in [7], and shows that the problem is not polynomial in the euclidean norms of the polynomials involved. 
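the abstract on chain multiplication of matrices above presupposes a way of finding an optimal parenthesization; the classical o(n^3) dynamic program below (a python sketch of the textbook method, not the paper's faster technique for equal or near-equal sizes) is the baseline it improves on.

```python
def matrix_chain_order(dims):
    """dims[i], dims[i+1] are the row/column counts of matrix i; returns (cost, order)."""
    n = len(dims) - 1
    cost = [[0] * n for _ in range(n)]
    split = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):                 # chain length
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = float("inf")
            for k in range(i, j):                  # try every split point
                c = cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                if c < cost[i][j]:
                    cost[i][j], split[i][j] = c, k

    def paren(i, j):
        if i == j:
            return f"m{i}"
        k = split[i][j]
        return f"({paren(i, k)} {paren(k + 1, j)})"

    return cost[0][n - 1], paren(0, n - 1)

print(matrix_chain_order([10, 30, 5, 60]))  # (4500, '((m0 m1) m2)')
```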
combined with a simplified version of the algorithm of [8] for the "specification of equivalence", this yields a complete algorithm for computing the dispersion of polynomials in nested hypergeometric extensions of rational function fields. this is a necessary step in computing symbolic sums, or solving difference equations, with coefficients in such fields. we also solve the related equations ;p(αn&l; t;/italic>) = 0 and p(n, αn) = 0 where p is a given polynomial and α is given. sergei a. abramov manuel bronstein the bisector surface of freeform rational space curves gershon elber myung-soo kim block-matrix operations using orthogonal trees hypercube algorithms are presented for distributed block-matrix operations. these algorithms are based entirely on an interconnection scheme which involves two orthogonal sets of binary trees. this switching topology makes use of all hypercube interconnection links in a synchronized manner. an efficient novel matrix-vector multiplication algorithm based on this technique is described. also, matrix transpose operations moving just pointers rather than actual data, have been implemented for some applications by taking advantage of the above tree structures. for the cases where actual physical vector and matrix transposes are needed, possible techniques, including extensions of the above scheme, are discussed. the algorithms support submatrix partitionings of the data, instead of being limited to row and/or column partitionings. this allows efficient use of nodal vector processors as well as shorter interprocessor communication packets. it also produces a favorable data distribution for applications which involve near neighbor operations such as image processing. the algorithms are based on an interprocessor communication paradigm which involves variable length, tagged block data transfers. they have been implemented on an intel ipsc hypercube system with the support of the hypercube library developed at the christian michelsen institute. a. c. elster a. p. reeves a sequence of series for the lambert w function robert m. corless david j. jeffrey donald e. knuth complexity classes defined by counting quantifiers jacobo torán polynomial methods for allocating complex components james smith giovanni de micheli a note on modifiable grammars george h. roberts on the parallel decomposability of geometric problems there is a large and growing body of literature concerning the solution of geometric problems on mesh-connected arrays of processors [5,9,14,17]. most of these algorithms are optimal (i.e., run in time (_n1/d) on a d-dimensional n-processor array), and they all assume that the parallel machine is trying to solve a problem of size n on an n-processor array. what happens when we have parallel machine for efficiently solving a problem of size p, and we are interested in using it to solve a problem of size n < p? the answer to that question has to do with a fundamental, and yet (at least so far) little- studied property of geometric problems: their parallel-decomposability. more specifically, given that a problem of size p can be solved on a parallel machine p faster by a factor of (say) s(p) than on a ram alone, then that problem is fully parallel-decomposable for p if a ram to which the parallel machine p is attached can solve arbitrarily large problems with a speedup of also s(p) when compared to a ram alone. the issue has been settled for the sorting problem when p is a linear systolic array [1,2,3,11]. 
here we show that many geometric problems are fully parallel-decomposable for (multidimensional) mesh-connected arrays of processors. m. j. atallah j. j. tsay on the decidability of sparse univariate polynomial interpolation a. borodin p. tiwari a new solution to the n <= 8 queens problem this paper proposes a new solution to the problem of how many queens cover one chessboard. for the purpose of this paper, a staunton style chess queen is assumed, and not the ruler of any country. this algorithm is implemented in pidgin algol (1), which will be demonstrated to support concurrency. c. clay undecidability on quantum finite automata masami amano kazuo iwama unordered parallel distance-1 and sitance-2 fft algorithms of radix 2 and (4-2) mahn-ling woo rosemary a. renaut piecing together complexity to illustrate the "remarkable extent to which complexity theory operates by means of analogs from computability theory," richard karp created this conceptual map or jigsaw puzzle. to lay out the puzzle in the plane, he used a "graph planarity algorithm." the more distantly placed parts might not at first seem related, "but in the end, the theory of np-completeness does bring them all together," karp says. the upper right portion of the puzzle shows concepts related to combinatorial explosions and the notion of a "good" or "efficient" algorithm. in turn, "complexity" connects these concepts to the upper left portion, which represents the concerns of early computability theorists. the traveling salesman problem is closer to the upper right corner because it is probably intractable. it therefore borders on "np-completeness" and "combinatorial explosion." to some extent, however, certain divisions blur. "linear programming," for example, has an anomalous status---the most widely used algorithms for solving such problems in practice are not good in the theoretical sense, and those that are good in the theoretical sense are often not good in practice. one example is the ellipsoid method that was the object of so much attention six years ago. it ran in polynomial time, but the polynomial was of such a high degree that the method proved good in the technical sense, but ineffective in practice. "the reason is that our notion of polynomial-time algorithms doesn't exactly capture the notion of an intuitively efficient algorithm," karp explains. "when you get up to n5 or n6, then it's hard to justify saying that it is really efficient. so edmonds's concept of a good algorithm isn't quite a perfect formal counterpart of good in the intuitive sense." further, the simplex algorithm is good in every practical sense, karp says, but not good according to the standard paradigm of complexity theory. the most recent addition to linear programming solutions, an algorithm devised by narendra karmarkar that some think challenges the simplex algorithm, is good in the technical sense and also appears to be quite effective in practice, says karp. the good algorithm segment is adjacent to "heuristics" because a heuristic algorithm may work well, but lack a theoretical pedigree. some heuristic algorithms are always fast, but sometimes fail to give good solutions. others always give an optimal solution, but are not guaranteed to be fast. the simplex algorithm is of the latter type. 
"undecidability, " "combinatorial explosion," and "complexity" are on the same plane because they are analogs of one another; undecidability involves unbounded search, whereas combinatorial explosions are by definition very long but not unbounded searches. complexity theory bridges the gap because, instead of asking whether a problem can be solved at all, it poses questions about the resources needed to solve a problem. the lower left region contains the segments karp has been concerned with most recently and that contain open-ended questions. "randomized algorithm," for example, is situated opposite "probabilistic analysis" because both are alternatives to worst-case analyses of deterministic algorithms. randomized algorithms might be able to solve problems in polynomial time that deterministic ones cannot and that could mean an extension of the notion of good algorithms. perhaps through software designs for non-von neumann machines, algorithms can be made more efficient in practice through parallelism. finally, some parts of the puzzle are not yet defined. says karp, "they correspond to the unknown territory that remains to be explored in the future." karen a. frenkel using "test model-checking" to verify the runway-pa8000 memory model rajnish ghughal abdel mokkedem ratan nalumasu ganesh gopalakrishnan concurrent transition system semantics of process networks using concurrent transition systems [sta86], we establish connections between three models of concurrent process networks, kahn functions, input/output automata, and labeled processes. for each model, we define three kinds of algebraic operations on processes: the product operation, abstraction operations, and connection operations. we obtain homomorphic mappings, from input/output automata to labeled processes, and from a subalgebra (called "input/output processes") of labeled processes to kahn functions. the proof that the latter mapping preserves connection operations amounts to a new proof of the "kahn principle." our approach yields: (1) extremely simple definitions of the process operations; (2) a simple and natural proof of the kahn principle that does not require the use of "strategies" or "scheduling arguments"; (3) a semantic characterization of a large class of labeled processes for which the kahn principle is valid, (4) a convenient operational semantics for nondeterminate process networks. e. w. stark semantic models for concurrency michael w. mislove an execution model for exploiting and-or parallelism in logic programs (abstract) several models have been developed for parallel execution of logic programming languages. most of them involve variations of two basic mechanisms: and parallelism and or parallelism. our work [1] is situated between the and/or process model of conery [2] and the restricted and parallelism of degroot [3]. the model we have developped exploit both the and -and or- parallelism using a compile-time program-level and clause-level data dependence analysis to generate an execution graph that embodies the possible parallel executions. the execution graph is a directed acyclic graph, containing one node per atom of the clause body and two nodes for the head clause. simple tests on the terms provided at run-time determine which of the different possible executions graph is to be used. 
the purpose of the program-level data dependence analysis is to tag the variables of each atom in the clause, that is, to distinguish the variables which will be yield ground, non ground or the both case after the atom reduction. this tagging allows us to get rid of the eventualities that can never happen during the execution of the program. this improves the clause- level analysis and consequently generates a reduced graph. the result of clause-level data dependence analysis is the execution graph. it is based on the last analysis and on the dependence of atoms in the clause. this is constructed in three phases. the first phase, presents an algorithm which automatically extracts the maximum of parallelism from the body of the clause. it starts from the initial context, and by assuming that execution of an atom transforms the context by eliminating all the variables present in the atom, assuming they become ground. we get the maximum of parallelism by choosing to execute first the atom(or atoms) that allow to instantiate the maximum * permanent address: u.s.t.h.b, bp32 bab ezzouar algiers algeria number of variables in the clause using the tags and the weight of the atoms. the second phase, enriches the graph by considering the possibility that an atom can produce after its execution independent non ground terms. in this case another ordering is deduced, consequently some edges are added to the graph. the third phase deals with the possibility where the execution of the atom yields non ground but depended terms. in this case too, it means anticipating other ordering for the atoms, that is, we will consider that the atoms executed after an atom which supplies non ground dependent terms will be processed sequentially. in our approach, it is the only case where the degree of parallelism is less than that exploited in the dynamic approach of conery. finally the graph is completed in the case where one or more nodes in the graph have no consumers (these are the nodes with no outcoming edge). we add edges which supply the logical values of the atom to the node representing the head of the clause. the model support also the or-parallelism. the merge of the streams in the graph nodes is realized by a dynamic join algorithm that combines the multiple streams of partial solutions. we have adopted the algorithm proposed by li and martin in [4]. this model avoids the loss of parallelism due to the use of the static approach. the first improvement of this model, will be to limit the effect of the production of dependent terms to the atoms effectively affected. more details concerning this work can be found in [1]. m. belmesk bounding the number of geometric permutations induced by k-transversals we prove that a (k 1)-separated family of n compact convex sets in & lt;f>rd can be met byk-transversals in at mostod&l; t;sup>d22k+1-2knk+1& lt;rp post="par">kd-k or, for fixed k and d, o(nk(k+1)(d k)) different order types. this is the first non-trivial bound for 1< k 0, s > 0 satisfy sq′ p rq″. here sq denotes a scaling of q by factor s, and q′, q″ are some translates of q. this function λ gives us a new distance function between bodies which, unlike previously studied measures, is invariant under affine transformations. if homothetic bodies are identified, the logarithm of this function is a metric. (two bodies are homothetic if one can be obtained from the other by scaling and translation). 
for integer ≥ 3, define λ( ) to be the minimum value such that for each convex polygon p there exists a convex -gon q with λ(p, q) ≤ λ( ). among other results, we prove that 2.118… ≤ λ(3) ≤ 2.25 and λ( ) = 1 + ( -2). we give an (n22 n) time algorithm which for any input convex n-gon p, finds a triangle t that minimizes λ(t, p) among triangles. but in linear time, we can find a triangle t with λ(t, p) ≤ 2.25. our study is motivated by the attempt to reduce the complexity of the polygon containment problem, and also the motion planning problem. in each case, we describe algorithms which will run faster when certain implicit slackness parameters of the input are bounded away from 1. these algorithms illustrate a new algorithmic paradigm in computational geometry for coping with complexity. rudolf fleischer kurt mehlhorn gunter rote emo welzl chee yap video: objects that cannot be taken apart with two hands jack snoeyink monotone circuits for matching require linear depth it is proven that monotone circuits computing the perfect matching function on n-vertex graphs require (n) depth. this implies an exponential gap between the depth of monotone and nonmonotone circuits. ran raz avi wigderson the minimum consistent dfa problem cannot be approximated within any polynomial the minimum consistent dfa problem is that of finding a dfa with as few states as possible that is consistent with a given sample (a finite collection of words, each labeled as to whether the dfa found should accept or reject). assuming that p np, it is shown that for any constant k, no polynomial-time algorithm can be guaranteed to find a consistent dfa with fewer than optk states, where opt is the number of states in the minimum state dfa consistent with the sample. this result holds even if the alphabet is of constant size two, and if the algorithm is allowed to produce an nfa, a regular expression, or a regular grammar that is consistent with the sample. a similar nonapproximability result is presented for the problem of finding small consistent linear grammars. for the case of finding minimum consistent dfas when the alphabet is not of constant size but instead is allowed to vary with the problem specification, the slightly stronger lower bound on approximability of opt(1-ε)log logopt is shown for any ε > 0. leonard pitt manfred k. warmuth logics capturing local properties well-known theorems of hanf and gaifman establishing locality of first-order definable properties have been used in many applications. these theorems were recently generalized to other logics, which led to new applications in descriptive complexity and database theory. however, a logical characterization of local properties that correspond to hanf's and gaifman's theorems is still lacking. such a characterization only exists for structures of bounded valence. in this paper, we give logical characterizations of local properties behind hanf's and gaifman's theorems. we first deal with an infinitary logic with counting terms and quantifiers that is known to capture hanf-locality on structures of bounded valence. we show that testing isomorphism of neighborhoods can be added to it without violating hanf- locality, while increasing its expressive power. we then show that adding local second-order quantification to it caputures precisely all hanf-local properties. to capture gaifman-locality, one must also add a (potentially infinite) case statement. we further show that the hierarchy based on the number of variants in the case statement is strict. 
leonid libkin abstract semantics for a higher-order functional language with logic variables radha jagadeesan keshav pingali recent progress in understanding minimax search lookahead search has been known for a long time to be effective in tackling problems which can be cast in minimax form (mainly game playing, but others include maintaining unstable balance against gravity, and business decisions). recent results have shown that the benefit of lookahead depends on the structure inherent in the problem, and even that there exist some minimax problems for which lookahead search is disadvantageous. this paper reviews those results and then discusses algorithms which can be interpreted as recognising the structure of local areas of the search in order to control search expansion. such algorithms can be orders of magnitude more cost-effective than search using alpha-beta alone. d. f. beal polytypic pattern matching johan jeuring checking computations in polylogarithmic time laszlo babai lance fortnow leonid a. levin mario szegedy computational geometry column 8 j. o'rourke optimal separations between concurrent-write parallel machines we obtain tight bounds on the relative powers of the priority and common models of parallel random-access machines (prams). specifically we prove that: the element distinctness function of n integers, though solvable in constant time on a priority pram with n processors, requires Ω(α(n,p)) time to solve on a common pram with p ≥ n processors, where α(n,p) = n log n / (p log((n/p) log n + 1)). one step of a priority pram with n processors can be simulated on a common pram with p processors in o(α(n,p)) steps. as an example, the results show that the time separation between priority and common prams each with n processors is Θ(log n/log log n). r. b. boppana a new deterministic parallel sorting algorithm with an experimental evaluation david r. helman joseph jájá david a. bader concepts and experiments in computational reflection this paper brings some perspective to various concepts in computational reflection. a definition of computational reflection is presented, the importance of computational reflection is discussed and the architecture of languages that support reflection is studied. further, this paper presents a survey of some experiments in reflection which have been performed. examples of existing procedural, logic-based and rule-based languages with an architecture for reflection are briefly presented. the main part of the paper describes an original experiment to introduce a reflective architecture in an object-oriented language. it stresses the contributions of this language to the field of object-oriented programming and illustrates the new programming style made possible. the examples show that a lot of programming problems that were previously handled on an ad hoc basis can, in a reflective architecture, be solved more elegantly. pattie maes types as abstract interpretations patrick cousot identifying μ-formula decision trees with queries thomas r. hancock nondeterministic polynomial-time computations and models of arithmetic: a semantic, or model theoretic, approach is proposed to study the problems p =? np and np =? co-np. this approach seems to avoid the difficulties that recursion-theoretic approaches appear to face in view of the result of baker et al. on relativizations of the p =? np question; moreover, semantical methods are often simpler and more powerful than syntactical ones.
the connection between the existence of certain partial extensions of nonstandard models of arithmetic and the question np =? co-np is discussed. several problems are stated about nonstandard models, and a possible link between the davis-matijasevič-putnam-robinson theorem on diophantine sets and the np =? co-np question is mentioned. attila máte computing the volume, counting integral points, and exponential sums alexander i. barvinok some techniques for solving recurrences george s. lueker a parallel block algorithm for exact triangularization of rectangular matrices a new block algorithm for triangularization of regular or singular matrices with dimension m × n is proposed. taking benefit of fast block multiplication algorithms, it achieves the best known sequential complexity o(m^(ω-1) n) for any sizes and any rank. moreover, the block strategy improves locality with respect to previous algorithms, as exhibited by practical performance. jean-guillaume dumas jean-louis roch a supernormal-form theorem for context-free grammars h. a. maurer a. salomaa d. wood how good is the goemans-williamson max cut algorithm? howard karloff one, two, three . . . infinity: lower bounds for parallel computation in this paper we compare the power of the two most commonly used concurrent-write models of parallel computation, the common pram and the priority pram. these models differ in the way they resolve write conflicts. if several processors want to write into the same shared memory cell at the same time, in the common model they have to write the same value. in the priority model, they may attempt to write different values; the processor with smallest index succeeds. we consider prams with n processors, each having arbitrary computational power. we provide the first separation results between these two models in two extreme cases: when the size m of the shared memory is small (m ≤ n^ε, ε < 1), and when it is infinite. in the case of small memory, the priority model can be faster than the common model by a factor of Ω(log n), and this lower bound holds even if the common model is probabilistic. in the case of infinite memory, the gap between the models can be a factor of Ω(log log log n). we develop new proof techniques to obtain these results. the technique used for the second lower bound is strong enough to establish the first tight time bounds for the priority model, which is the strongest parallel computation model. we show that finding the maximum of n numbers requires Ω(log log n) steps, generalizing a result of valiant for parallel computation trees. f e fich f meyer auf der heide p ragde a wigderson a sharp threshold in proof complexity we give the first example of a sharp threshold in proof complexity. more precisely, we show that for any sufficiently small ε>0 and Δ>2.28, random formulas consisting of (1-ε)n 2-clauses and Δn 3-clauses, which are known to be unsatisfiable almost certainly, almost certainly require resolution and davis-putnam proofs of unsatisfiability of exponential size, whereas it is easily seen that random formulas with (1+ε)n 2-clauses (and Δn 3-clauses) have linear size proofs of unsatisfiability almost certainly. a consequence of our result also yields the first proof that typical random 3-cnf formulas at ratios below the generally accepted range of the satisfiability threshold (and thus expected to be satisfiable almost certainly) cause natural davis-putnam algorithms to take exponential time to find satisfying assignments.
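as a small, hedged illustration of the random-formula model in the sharp-threshold abstract above (a toy sampler of ours; the names and default parameters are invented, not the authors' code):

import random

def random_mixed_cnf(n, eps=0.1, delta=2.3, seed=0):
    # sample roughly (1-eps)*n random 2-clauses and delta*n random 3-clauses
    # over variables 1..n, the mixture studied in the abstract above
    rng = random.Random(seed)
    def clause(k):
        chosen = rng.sample(range(1, n + 1), k)              # k distinct variables
        return tuple(v if rng.random() < 0.5 else -v for v in chosen)
    return [clause(2) for _ in range(int((1 - eps) * n))] + \
           [clause(3) for _ in range(int(delta * n))]

print(len(random_mixed_cnf(100)))   # total clause count, about (1 - eps + delta) * n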
dimitris achlioptas paul beame michael molloy knowledge on the average - perfect, statistical and logarithmic william aiello mihir bellare ramarathnam venkatesan so many algorithms. so little time garrison w. greenwood quantum lower bounds by polynomials we examine the number of queries to input variables that a quantum algorithm requires to compute boolean functions on {0,1}^n in the black-box model. we show that the exponential quantum speed-up obtained for partial functions (i.e., problems involving a promise on the input) by deutsch and jozsa, simon, and shor cannot be obtained for any total function: if a quantum algorithm computes some total boolean function f with small error probability using t black-box queries, then there is a classical deterministic algorithm that computes f exactly with o(t^6) queries. we also give asymptotically tight characterizations of t for all symmetric f in the exact, zero-error, and bounded-error settings. finally, we give new precise bounds for and, or, and parity. our results are a quantum extension of the so-called polynomial method, which has been successfully applied in classical complexity theory, and also a quantum extension of results by nisan about a polynomial relationship between randomized and deterministic decision tree complexity. robert beals harry buhrman richard cleve michele mosca ronald de wolf a general sequential time-space tradeoff for finding unique elements an optimal Ω(n^2) lower bound is shown for the time-space product of any r-way branching program that determines those values which occur exactly once in a list of n integers in the range [1, r] where r ≥ n. this Ω(n^2) tradeoff also applies to the sorting problem and thus improves the previous time-space tradeoffs for sorting. because the r-way branching program is such a powerful model, these time-space product tradeoffs also apply to all models of sequential computation that have a fair measure of space such as off-line multi-tape turing machines and off-line log-cost rams. p. beame fully abstract compositional semantics for logic programs we propose a framework for discussing fully abstract compositional semantics, which exposes the interrelations between the choices of observables, compositions, and meanings. every choice of observables and compositions determines a unique fully abstract equivalence. a semantics is fully abstract if it induces this equivalence. we study the semantics of logic programs within this framework. we find the classical herbrand-base semantics of logic programs inadequate, since it identifies programs that should be distinguished and vice versa. we therefore propose alternative semantics, and consider the cases of no compositions, composition by program union, and composition of logic modules (programs with designated exported and imported predicates). although equivalent programs can be in different vocabularies, we prove that our fully abstract semantics are always in a subvocabulary of that of the program. this subvocabulary, called the essential vocabulary, is common to all equivalent programs. the essential vocabulary is also the smallest subvocabulary in which an equivalent program can be written. h. gaifman e. shapiro some complexity issues on the simply connected regions of the two-dimensional plane arthur w.
chou ker-i ko mso definable string transductions and two-way finite-state transducers we extend a classic result of büchi, elgot, and trakhtenbrot: mso definable string transductions, i.e., string-to-string functions that are definable by an interpretation using monadic second-order (mso) logic, are exactly those realized by deterministic two-way finite-state transducers, i.e., finite-state automata with a two-way input tape and a one-way output tape. consequently, the equivalence of two mso definable string transductions is decidable. in the nondeterministic case however, mso definable string transductions, i.e., binary relations on strings that are mso definable by an interpretation with parameters, are incomparable to those realized by nondeterministic two-way finite-state transducers. this is a motivation to look for another machine model, and we show that both classes of mso definable string transductions are characterized in terms of hennie machines, i.e., two-way finite-state transducers that are allowed to rewrite their input tape, but may visit each position of their input only a bounded number of times. joost engelfriet hendrik jan hoogeboom parallel algorithms for evaluating sequences of set-manipulation operations given an off-line sequence s of n set-manipulation operations, we investigate the parallel complexity of evaluating s (i.e., finding the response to every operation in s and returning the resulting set). we show that the problem of evaluating s is in nc for various combinations of common set-manipulation operations. once we establish membership in nc (or, if membership in nc is obvious), we develop techniques for improving the time and/or processor complexity. mikhail j. atallah michael t. goodrich s. rao kosaraju an improved algorithm for the resolution of singularities this paper contains several improvements of villamayor's algorithm for the problem of resolution of the singularities of a hypersurface. the first improves the management of the charts which represent the blown up variety. the second improves the way new resolution problems are created in the recursion, based on hironaka's theory of idealistic exponents. the remaining two improve the way discrete information is used, based on the adaptation by encinas and villamayor of abhyankar's theory of good points. gábor bodnár josef schicho combinational equivalence checking using satisfiability and recursive learning joão marques-silva thomas glass testing of the long code and hardness for clique johan håstad efficient computation of minimal polynomials in algebraic extensions of finite fields victor shoup the millionth computer program larry a. dunning ronald l. lancaster factoring numbers using singular integers leonard m. adleman call by name, assignment, and the lambda calculus we define an extension of the call-by-name lambda calculus with additional constructs and reduction rules that represent mutable variables and assignments. the extended calculus has neither a concept of an explicit store nor a concept of evaluation order; nevertheless, we show that programs in the calculus can be implemented using a single-threaded store. we also show that the new calculus has the church-rosser property and that it is a conservative extension of classical lambda calculus with respect to operational equivalence; that is, all algebraic laws of the functional subset are preserved.
martin odersky dan rabin paul hudak rounds in communication complexity revisited noam nisan avi wigderson castles in the air revisited we show that the total number of faces bounding any single cell in an arrangement of n (d-1)-simplices in r^d is o(n^(d-1) log n), thus almost settling a conjecture of pach and sharir. we present several applications of this result, mainly to translational motion planning in polyhedral environments. we then extend our analysis technique to derive other results on complexity in simplex arrangements. for example, we show that the number of vertices in such an arrangement, which are incident to the same cell on more than one "side," is o(n^(d-1) log n). we also show that the number of repetitions of a "k-flap," formed by intersecting d-k simplices, along the boundary of the same cell, summed over all cells and all k-flaps, is o(n^(d-1) log n). we use this quantity, which we call the excess of the arrangement, to derive bounds on the complexity of m distinct cells of such an arrangement. boris aronov micha sharir computational geometry column 27 joseph o'rourke summary of paper: feature & face editing in hybrid solid modeler randi m. summer efficient partition trees jiri matousek what are principal typings and what are they good for? trevor jim iterative tree automata, alternating turing machines, and uniform boolean circuits: relationships and characterization abdelaziz fellah sheng yu a decidable propositional probabilistic dynamic logic a propositional version of feldman and harel's pr(dl) is defined, and shown to be decidable. the logic allows propositional-level formulas involving probabilistic programs, and contains full real-number theory for dealing with probabilities. the decidability proof introduces model schemes, which seem to be the most basic structures relating programs and probabilities. yishai a. feldman recognizing substrings of lr(k) languages in linear time lr parsing techniques have long been studied as efficient and powerful methods for processing context free languages. a linear time algorithm for recognizing languages representable by lr(k) grammars has long been known. recognizing substrings of a context-free language is at least as hard as recognizing full strings of the language, as the latter problem easily reduces to the former. in this paper we present a linear time algorithm for recognizing substrings of lr(k) languages, thus showing that the substring recognition problem for these languages is no harder than the full string recognition problem. an interesting data structure, the forest structured stack, allows the algorithm to track all possible parses of a substring without losing the efficiency of the original lr parser. we present the algorithm, prove its correctness, analyze its complexity, and mention several applications that have been constructed. joseph bates alon lavie time-space optimal convex hull algorithms hla min s. q. zheng improved low-degree testing and its applications sanjeev arora madhu sudan counting triangle crossings and halving planes every collection of t ≥ 2n^2 triangles with a total of n vertices in r^3 has Ω(t^4/n^6) crossing pairs. this implies that one of their edges meets Ω(t^3/n^6) of the triangles. from this it follows that n points in r^3 have only o(n^(8/3)) halving planes. tamal k.
dey herbert edelsbrunner efficient 3-d range searching in external memory darren erik vengroff jeffrey scott vitter dataflow for logic program as substitution manipulator this paper shows a method of constructing a dataflow, which denotes the deductions of a logic program, by means of a sequence domain based on equivalence classes of substitutions. the dataflow involves fair merge functions to represent unions of atom subsets over a sequence domain, as well as functions as manipulations of unifiers for the deductions of clauses. a continuous functional is associated with the dataflow on condition that the dataflow completely and soundly denotes the atom generation in terms of equivalent substitutions sets. its least fixpoint is interpreted as denoting the whole atom generation based on manipulations of equivalent substitutions sets. s. yamasaki quickly detecting relevant program invariants explicitly stated program invariants can help programmers by characterizing certain aspects of program execution and identifying program properties that must be preserved when modifying code. unfortunately, these invariants are usually absent from code. previous work showed how to dynamically detect invariants from program traces by looking for patterns in and relationships among variable values. a prototype implementation, daikon, accurately recovered invariants from formally-specified programs, and the invariants it detected in other programs assisted programmers in a software evolution task. however, daikon suffered from reporting too many invariants, many of which were not useful, and also failed to report some desired invariants. this paper presents, and gives experimental evidence of the efficacy of, four approaches for increasing the relevance of invariants reported by a dynamic invariant detector. one of them --- exploiting unused polymorphism --- adds desired invariants to the output. the other three --- suppressing implied invariants, limiting which variables are compared to one another, and ignoring unchanged values --- eliminate undesired invariants from the output and also improve runtime by reducing the work done by the invariant detector. michael d. ernst adam czeisler william g. griswold david notkin exact learning of read-k disjoint dnf and not-so-disjoint dnf a polynomial-time algorithm is presented for exactly learning the class of read-k disjoint dnf formulas---boolean formulas in disjunctive normal form where each variable appears at most k times and every assignment to the variables satisfies at most one term of f. the (standard) protocol used allows the learning algorithm to query whether a given assignment of boolean variables satisfies the dnf formula to be learned (membership queries), as well as to obtain counterexamples to the correctness of its current hypothesis which can be any arbitrary dnf formula (equivalence queries). the formula output by the learning algorithm is logically equivalent to the formula to be learned. we show that this result also applies for a generalization of read-k disjoint dnf which we call read-k sat-j dnf; these are dnf formulas in which every variable appears at most k times and every assignment satisfies at most j terms. howard aizenstein leonard pitt symbolic parametrization of pipe and canal surfaces a canal surface s, generated by a parametrized curve m(t), in r^3 is the envelope of the set of spheres with radius r(t) centered at m(t). this concept generalizes the classical offsets (for r(t) = const) of plane curves.
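spelling out the envelope condition behind the canal-surface definition just given (standard calculus, our notation): a point x lies on the canal surface when, for some parameter t,

    ‖x − m(t)‖^2 − r(t)^2 = 0   and   ⟨x − m(t), m′(t)⟩ + r(t)·r′(t) = 0,

i.e. x is on the sphere of radius r(t) centered at m(t) and the family of spheres is stationary there as t varies; for constant r the second equation reduces to the familiar offset condition ⟨x − m(t), m′(t)⟩ = 0.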
in this paper we develop elementary symbolic methods for generating a rational parametrization of canal surfaces generated by rational curves m(t) with rational radius variation r(t). in a pipe surface r(t) is constant. gunter landsmann josef schicho franz winkler erik hillgarter prescriptive frameworks for multi-level lambda-calculi flemming nielson hanne riis nielson the union of congruent cubes in three dimensions a {\em dihedral (trihedral) wedge} is the intersection of two (resp. three) half-spaces in $\reals^3$. it is called {\em $\alpha$-fat} if the angle (resp., solid angle) determined by these half-spaces is at least $\alpha>0$. if, in addition, the sum of the three face angles of a trihedral wedge is at least $\gamma >4\pi/3$, then it is called {\em $(\gamma,\alpha)$-substantially fat}. we prove that, for any fixed $\gamma>4\pi/3, \alpha>0$, the combinatorial complexity of the union of $n$ (a) $\alpha$-fat dihedral wedges, (b) $(\gamma,\alpha)$-substantially fat trihedral wedges is at most $o(n^{2+\eps})$, for any $\eps>0$, where the constants of proportionality depend on $\eps$, $\alpha$ (and $\gamma$). we obtain as a corollary that the same upper bound holds for the combinatorial complexity of the union of $n$ (nearly) congruent cubes in $\reals^3$. these bounds are not far from being optimal. j. pach ido safruti micha sharir dynamic point location in general subdivisions the dynamic planar point location problem is the task of maintaining a dynamic set s of n non-intersecting, except possibly at endpoints, line segments in the plane under the following operations: locate(q, a point): report the segment immediately above q, i.e., the first segment intersected by an upward vertical ray starting at q; insert(s, a segment): add segment s to the collection of segments; delete(s, a segment): remove segment s from the collection of segments. we present a solution which requires space o(n), has query and insertion time o(log n log log n) and deletion time o(log^2 n). a query time below o(log^2 n) was previously only known for monotone subdivisions and horizontal segments and required non-linear space. hanna baumgarten hermann jung kurt mehlhorn two-way counter machines and diophantine equations eitan m. gurari oscar h. ibarra decidability of bisimulation equivalence for processes generating context-free languages j. c. m. baeten j. a. bergstra j. w. klop the expressibility of languages and relations by word equations classically, several properties and relations of words, such as "being a power of the same word", can be expressed by using word equations. this paper is devoted to a general study of the expressive power of word equations. as main results we prove theorems which allow us to show that certain properties of words are not expressible as components of solutions of word equations. in particular, "the primitiveness" and "the equal length" are such properties, as well as being "any word over a proper subalphabet". juhani karhumäki filippo mignosi wojciech plandowski optical computational geometry y. b. karasik m. sharir ensembles reconnaissables de mots biinfinis the purpose of automata theory is to study and classify those properties of words that may be defined by a finite structure, say a finite automaton or a finite monoid. it seems natural to consider the same problem for infinite words. this amounts to studying the asymptotic behaviour of finite automata. as is well-known, this breaks the equivalence between determinism and non-determinism of finite automata.
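a standard witness for the sentence just stated (a textbook example, not taken from this paper): over the alphabet {a, b}, the set of infinite words containing only finitely many a's, i.e. (a+b)* b^ω, is accepted by a nondeterministic büchi automaton that guesses the position of the last a, but by no deterministic büchi automaton; it is, however, a boolean combination of deterministic ones, which is exactly the shape of the büchi-mc naughton theorem discussed next.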
the study of the infinite behaviour of finite automata is based on a deep theorem due to büchi and mc naughton: the recognizable sets of infinite words are the finite boolean combinations of deterministic ones (i.e. recognized by deterministic automata). the aim of this paper is to build an analogous theorem for two-sided infinite sequences. we define a biinfinite word as the equivalence class under the shift of a two-sided infinite sequence. the recognizable sets of biinfinite words are defined in a natural way and one is led to a two-sided notion of determinism. this notion seems to be new and justifies the consideration of biinfinite words. the main result of this paper is the extension to biinfinite words of the theorem of büchi and mc naughton: the recognizable sets of biinfinite words are the finite boolean combinations of deterministic ones (theorem 3.1). there exist three available proofs of büchi-mc naughton's theorem. the original one by mc naughton [4] is hard to read. the proof given by eilenberg in his book [2] has been constructed by schützenberger and eilenberg from mc naughton's proof; it is similar to that of rabin [5]. finally, schützenberger gave a further proof in [6], which makes the argument more direct by using the methods of the theory of finite monoids. the proof of our main result follows closely schützenberger's method. this method makes it possible to reduce the two-sided case to the one-sided case, which seems very difficult to do directly. in the first section, we briefly recall the theory of the one-sided infinite behaviour of finite automata. in particular, we give schützenberger's proof of büchi-mc naughton's theorem. the elements of this proof are used in the proof of our main result. in the second section we define the notions of biinfinite word, biautomaton and deterministic biautomaton. the last section contains the proof of our main result. maurice nivat dominique perrin improving abstract interpretations by combining domains in this paper we consider static analyses based on abstract interpretation of logic programs over combined domains. it is known that analyses over combined domains potentially provide more information than obtainable by performing the independent abstract interpretations. however, the construction of a combined analysis often requires redefining the basic operations for the combined domain. we demonstrate for logic programs that in practice it is possible to obtain precision in a combined analysis without redefining the basic operations. we also propose a way of performing the combination which can be more precise than the straightforward application of the classical "reduced product" approach, while keeping the original components of the basic operations. the advantage of the approach is that proofs of correctness for the new domains are not required and implementations can be reused. we illustrate our results by showing that a combined sharing analysis---constructed from "old" proposals---compares well with other "new" proposals suggested in recent literature both from the point of view of efficiency and accuracy. m. codish a. mulkers m. bruynooghe m. garcía de la banda m. hermenegildo semantic parallelization (abstract only): a non-standard denotational approach for imperative programs parallelization the parallelization of sequential programs is an adequate approach to exploiting the architectural features offered by parallel computers.
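as a hedged, minimal sketch of the "reduced product" idea mentioned in the combined-domains abstract above (a classical sign × parity toy of ours, not the analyses of that paper):

BOT, TOP = "bot", "top"

def alpha(n):
    # abstraction of a concrete integer into the product domain sign x parity
    sign = "zero" if n == 0 else ("pos" if n > 0 else "neg")
    parity = "even" if n % 2 == 0 else "odd"
    return (sign, parity)

def reduce_pair(av):
    # the reduction step: let each component sharpen the other
    sign, parity = av
    if sign == BOT or parity == BOT:
        return (BOT, BOT)            # any contradiction collapses to bottom
    if sign == "zero" and parity == "odd":
        return (BOT, BOT)            # zero cannot be odd
    if sign == "zero" and parity == TOP:
        return ("zero", "even")      # zero is known to be even
    return (sign, parity)

print(reduce_pair(("zero", TOP)))    # ('zero', 'even')
print(reduce_pair(("zero", "odd")))  # ('bot', 'bot')
print(alpha(-4))                     # ('neg', 'even')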
this paradigm has been successfully (i.e., industrially) applied on current supercomputers such as the cray family or japanese machines; there, programs generally (but not only) written in sequential fortran77 are analyzed by smart vectorizers which detect parallel constructs (loops) and map them onto adequate hardware (pipeline processors). we propose to enhance this technique, pioneered by kuck, with the notion of semantic parallelization; the principle is to consider the program transformations induced by parallelization as a non-standard denotational semantics of the programming language. put another way, we propose to define a new (i.e., non-standard) semantics of the programming language which maps sequential programs into their parallel equivalents; moreover, this semantic definition is integrated within the denotational framework [gordo79]. we show how to apply this concept to a toy language all, presented in [jouve86a]. first, we propose to parallelize complex commands such as loops with non-index variables in subscripts. for that goal we use semantic information automatically extracted from the program source. this information is represented as systems of linear inequalities on the program variables; we use it to better understand the memory locations touched by each statement in the program. this semantic knowledge is important for deciding whether two commands are in mutual conflict (because they have a non-empty intersection of their sets of addresses) or not; this notion is fundamental for parallelizing these statements. second, we detect reductions (in the apl sense) such as the sum of a vector, which can be efficiently implemented on "tree processors". we designed a symbolic evaluator of the all language to detect these constructs [jouve86b]. we evaluate loop bodies and then try to determine (using a pattern-matching technique) whether the corresponding function is associative (i.e., is a reduction operator). lastly, we try to deal with some types of programs using indirections (implemented by "scatter/gather" operations). we propose here a new notion, namely array predicates. these predicates are formulas in full presburger arithmetic; they are automatically built from the program source and give an approximation of the values stored in an array. this information is subsequently used to decide whether a loop which uses this array as an indirection vector has any conflict. thanks to results in domain theory and abstract interpretation [jouve87a], we give correctness proofs for these transformations with respect to the standard semantics. a practical byproduct of our approach follows from the fact that a denotational specification is also an executable prototype if all functions have finite domains (i.e., may be implemented as pair lists). we implemented a semantic parallelizer in the ml higher-order functional language (with some critical routines coded in franzlisp). we have been able to automatically parallelize some toy examples which could not be tackled with previous techniques. p. jouvelot small representation of line arrangements in this video we illustrate a technique for lossy compression of arrangements. the video also demonstrates the visualization techniques that facilitated the research. a detailed description of the algorithm appears in the full paper in these proceedings~\cite{dt-01}. david p.
dobkin ayellet tal concept analysis - a new framework for program understanding gregor snelting two-level grammars are more expressive than type 0 grammars: or are they? dick grune rapid computation of bernoulli and related numbers k. hare on the design of some systolic algorithms the design of systolic algorithms is discussed, that is, algorithms that may efficiently be executed by a synchronous array of cells that perform local communications only. systolic algorithms are designed through techniques developed in the context of sequential programming. heuristics are given that guide the programmer in the design of a variety of efficient solutions. jan l. a. van de snepscheut johan b. swenker on the power of bounded concurrency ii: pushdown automata this is the second in a series of papers on the inherent power of bounded cooperative concurrency, whereby an automaton can be in some bounded number of states that cooperate in accepting the input. in this paper, we consider pushdown automata. we are interested in differences in power of expression and in exponential (or higher) discrepancies in succinctness between variants of pda's that incorporate nondeterminism (e), pure parallelism (a), and bounded cooperative concurrency (c). technically, the results are proved for cooperating push-down automata with cooperating states, but they hold for appropriate versions of most concurrent models of computation. we exhibit exhaustive sets of upper and lower bounds on the relative succinctness of these features for three classes of languages: deterministic context-free, regular, and finite. for example, we show that c represents exponential savings in succinctness in all cases except when both e and a are present (i.e., except for alternating automata), and that e and a represent unlimited savings in succinctness in all cases. tirza hirst david harel the learning complexity of smooth functions of a single variable we study the on-line learning of classes of functions of a single real variable formed through bounds on various norms of functions' derivatives. we determine the best bounds obtainable on the worst-case sum of squared errors (also "absolute" errors) for several such classes. don kimber philip m. long playing twenty questions with a procrastinator andris ambainis stephen a. bloch david l. schweizer decomposing a sequence of matrices that differ only in one submatrix wlodzimierz proskurowski simultaneous writes of parallel random access machines do not help to compute simple arithmetic functions the ability of the strongest parallel random access machine model wram is investigated. in this model different processors may simultaneously try to write into the same cell of the common memory. it has been shown that a parallel ram without this option (pram), even with arbitrarily many processors, can almost never achieve sublogarithmic time. on the contrary, every function with a small domain like binary values in case of boolean functions can be computed by a wram in constant time. the machine makes fast table look-ups using its simultaneous write ability. the main result of this paper implies that in general this is the "only way" to perform such fast computations and that a domain of small size is necessary. otherwise simultaneous writes do not give an advantage. functions with large domains for which any change of one of the n arguments also changes the result are considered, and a logarithmic lower time bound for wrams is proved. 
this bound can be achieved by machines that do not perform simultaneous writes. a simple example of such a function is the sum of n natural numbers. rüdiger reischuk practical parallel algorithms for personalized communication and integer sorting a fundamental challenge for parallel computing is to obtain high-level, architecture-independent algorithms which efficiently execute on general-purpose parallel machines. with the emergence of message passing standards such as mpi, it has become easier to design efficient and portable parallel algorithms by making use of these communication primitives. while existing primitives allow an assortment of collective communication routines, they do not handle an important communication event when most or all processors have non-uniformly sized personalized messages to exchange with each other. we focus in this paper on the h-relation personalized communication whose efficient implementation will allow high performance implementations of a large class of algorithms. while most previous h-relation algorithms use randomization, this paper presents a new deterministic approach for h-relation personalized communication with asymptotically optimal complexity for h > p^2. as an application, we present an efficient algorithm for stable integer sorting. the algorithms presented in this paper have been coded in split-c and run on a variety of platforms, including the thinking machines cm-5, ibm sp-1 and sp-2, cray research t3d, meiko scientific cs-2, and the intel paragon. our experimental results are consistent with the theoretical analysis and illustrate the scalability and efficiency of our algorithms across different platforms. in fact, they seem to outperform all similar algorithms known to the authors on these platforms. david a. bader david r. helman joseph jájá enumerating k distances for n points in the plane matthew t. dickerson r. l. scot drysdale finding (good) normal bases in finite fields t. beth w. geiselmann f. meyer dynamic deflection routing on arrays (preliminary version) andrei broder eli upfal simplifying array processing languages a language like apl was a masterpiece of simplification when seen through the eyes of a computer user of the seventies. the virtues of simplicity are usually held to be many. this paper firstly discusses simplicity in general, reviews some of the writing on simplicity coming from the computing world, and briefly construes the development of apl, and the later j, as being essentially efforts in simplification. possibilities for further simplification are then canvassed. firstly, simplification of the usually accepted but unfortunate naming conventions adopted by array processing languages is proposed. secondly, simplification of the arithmetic is very briefly outlined, more detailed treatment of this topic being available elsewhere. thirdly, syntactic means for having all functions and operations dyadic are treated, and the advantages of adopting such means evaluated. fourthly, the possibilities for a newly distinctive kind of function (called extractions) are described. these are considered as a kind of systematic renaming to supply arguments to functions. fifthly, and in the context of j's simplifications, the need for hyperoperators is asserted. finally, the nature of interpreters for array processing languages is reviewed, and suggestions made for facilities to be provided by such interpreters to aid the process of developing array processing code.
neville holmes a constant-time optimal parallel string-matching algorithm given a pattern string, we describe a way to preprocess it. we design a crcw-pram constant time optimal parallel algorithm for finding all occurrences of the (preprocessed) pattern in any given text. zvi galil computer-aided complexity classification of combinatorial problems we describe a computer program that has been used to maintain a record of the known complexity results for a class of 4536 machine scheduling problems. the input of the program consists of a listing of known "easy" problems and a listing of known "hard" problems. the program employs the structure of the problem class to determine the implications of these results. the output provides a listing of essential results in the form of maximal easy and minimal hard problems as well as listings of minimal and maximal open problems, which are helpful in indicating the direction of future research. the application of the program to a restricted class of 120 single-machine problems is demonstrated. possible refinements and extensions to other research areas are suggested. it is also shown that the problem of determining the minimum number of results needed to resolve the status of all remaining open problems in a complexity classification such as ours is itself a hard problem. b. j. lageweg j. k. lenstra e. l. lawler a. h. g. rinnooy kan algorithmic elimination of spurious nondeterminism from mealy machines m chibnik on the power of finite automata with both nondeterministic and probabilistic states (preliminary version) anne condon lisa hellerstein samuel pottle avi wigderson half-order modal logic: how to prove real-time properties tom henzinger exact time bounds for computing boolean functions on prams without simultaneous writes m. dietzfelbinger m. kutylowski r. reischuk comparison of arithmetic functions with respect to boolean circuit depth helmut alt fast and reliable parallel hashing holger bast torben hagerup the typed polymorphic label-selective λ-calculus formal calculi of record structures have recently been a focus of active research. however, scarcely anyone has studied formally the dual notion---i.e., argument-passing to functions by keywords, and its harmonization with currying. we have. recently, we introduced the label-selective λ-calculus, a conservative extension of λ-calculus that uses a labeling of abstractions and applications to perform unordered currying. in other words, it enables some form of commutation between arguments. this improves program legibility, thanks to the presence of labels, and efficiency, thanks to argument commuting. in this paper, we propose a simply typed version of the calculus, then extend it to one with ml-like polymorphic types. for the latter calculus, we establish the existence of principal types and we give an algorithm to compute them. thanks to the fact that label-selective λ-calculus is a conservative extension of λ-calculus by adding numeric labels to stand for argument positions, its polymorphic typing provides us with a keyword argument-passing extension of ml obviating the need for records. in this context, conventional ml syntax can be seen as a restriction of the more general keyword-oriented syntax limited to using only implicit positions instead of keywords. jacques garrigue hassan aït-kaci np is as easy as detecting unique solutions for all known np-complete problems the number of solutions in instances having solutions may vary over an exponentially large range.
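a rough, dynamically typed analogue (ours, not the calculus itself) of the unordered currying described in the label-selective abstract above, using python keyword arguments and functools.partial; the function and label names are invented:

from functools import partial

def price(*, weight, distance, express):
    # a keyword-only function standing in for a labelled abstraction
    return (0.5 * weight + 0.1 * distance) * (2.0 if express else 1.0)

# labels can be consumed one at a time and in any order,
# which is the flavour of unordered currying
step = partial(price, express=False)     # commit the `express` label first
step = partial(step, distance=120)       # then `distance`
print(step(weight=3.0))                  # finally `weight` -> 13.5
print(price(distance=120, weight=3.0, express=False))   # same result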
furthermore, most of the well-known ones, such as satisfiability, are parsimoniously interreducible, and these can have any number of solutions between zero and an exponentially large number. it is natural to ask whether the inherent intractability of np-complete problems is caused by this wide variation. in this paper we give a negative answer to this using randomized reductions. we show that the problems of distinguishing between instances of sat having zero or one solution, or finding solutions to instances of sat having unique solutions, are as hard as sat itself. several corollaries about the difficulty of specific problems follow. for example, if the parity of the number of solutions of sat can be computed in rp then np = rp. some further problems can be shown to be hard for np or d^p via randomized reductions. l g valiant v v vazirani deciding linear-trigonometric problems in this paper, we present a decision procedure for certain linear-trigonometric problems for the reals and integers formalized in a suitable first-order language. the inputs are restricted to formulas, where all but one of the quantified variables occur linearly and at most one occurs both linearly and in a specific trigonometric function. moreover we may allow in addition the integer-part operation in formulas. besides ordinary quantifiers, we also allow counting quantifiers. furthermore we also determine the qualitative structure of the connected components of the satisfaction set of the mixed linear-trigonometric variable. we also consider the decision of these problems in subfields of the real algebraic numbers. hirokazu anai volker weispfenning ip = pspace: simplified proof lund et al. [1] have proved that ph is contained in ip. shamir [2] improved this technique and proved that pspace = ip. in this note, a slightly simplified version of shamir's proof is presented, using degree reductions instead of simple qbfs. a. shen the equivalence problem for some non-real-time deterministic pushdown automata esko ukkonen doubly lexical orderings of matrices a doubly lexical ordering of the rows and columns of any real-valued matrix is defined. this notion extends to graphs. these orderings are used to prove and unify results on several classes of matrices and graphs, including totally balanced matrices and chordal graphs. an almost-linear time doubly lexical ordering algorithm is given. a lubiw data flow analysis of communicating finite state machines wuxu peng s. puroshothaman computability and data types newcomb greenleaf operational semantics of rewriting with the on-demand evaluation strategy kazuhiro ogata kokichi futatsugi finding stabbing lines in 3-dimensional space m. pellegrini p. shor symbolic computation and number of zeros of a real parameter polynomial in the unit disk b gleyse fast algorithms for n-dimensional restrictions of hard problems let m be a parallel ram with p processors and arithmetic operations addition and subtraction recognizing l ⊆ ℕ^n in t steps. then l can be recognized by a (sequential!) linear search algorithm (lsa) in o(n(log(n) + t + log(p))) steps. thus many n-dimensional restrictions of np-complete problems (binary programming, traveling salesman problem, etc.) and even that of the uniquely optimum traveling salesman problem, which is Δ^p_2-complete, can be solved in polynomial time by an lsa.
this result generalizes the construction of a polynomial lsa for the n-dimensional restriction of the knapsack problem previously shown by the author, and destroys the hope of proving nonpolynomial lower bounds for any problem which can be recognized by a pram as above with 2^poly(n) processors in poly(n) time. f meyer auf der heide making zero-knowledge provers efficient mihir bellare erez petrank simulation as an iterated task eli gafni an information-theoretic approach to time bounds for on-line computation (preliminary version) static, descriptional complexity (program size) [16, 9] can be used to obtain lower bounds on dynamic, computational complexity (such as running time). we describe and discuss this "information-theoretic approach" in the following section. paul introduced it in [13], to obtain restricted lower bounds on the time complexity of sorting. we use the approach here to obtain lower time bounds for on-line simulation of one abstract storage unit by another. a major goal of our work is to promote the approach. wolfgang j. paul joel i. seiferas janos simon the complexity of the equivalence problem for commutative semigroups and symmetric vector addition systems this paper shows that the equivalence problems for commutative semigroups and symmetric vector addition systems are decidable in space c^(n log n) for some fixed constant c, solving an open question by cardoza, lipton, mayr, and meyer. from the exponential-space completeness of the word problems, it follows that our upper bound is nearly optimal. d t huynh an analytic model for parallel gaussian elimination on a binary n-cube architecture this paper summarizes an analytical technique which predicts the time required to execute a given parallel program, with given data, on a given parallel architecture. for illustration purposes, the particular parallel program chosen is parallel gaussian elimination and the particular parallel architecture chosen is a binary n-cube. the analytical technique is based upon a product-form queuing network model which is solved using an iterative method. the technique is validated by comparing performance predictions produced by the model against actual hypercube measurements. v. a. f. almeida l. w. dowdy m. r. leuze surveyor's forum: determining a search walter wilson asymmetric rendezvous on the plane edward j. anderson sándor p. fekete approximating shortest paths on a convex polytope in three dimensions given a convex polytope p with n faces in r^3, points s, t ∈ ∂p, and a parameter 0 < ε ≤ 1, we present an algorithm that constructs a path on ∂p from s to t whose length is at most (1+ε)·d_p(s,t), where d_p(s,t) is the length of the shortest path between s and t on ∂p. the algorithm runs in o(n log(1/ε) + 1/ε^3) time, and is relatively simple. the running time is o(n + 1/ε^3) if we only want the approximate shortest path distance and not the path itself. we also present an extension of the algorithm that computes approximate shortest path distances from a given source point on ∂p to all vertices of p. pankaj k. agarwal sariel har-peled micha sharir kasturi r. varadarajan a simple measure of software complexity lem o. ejiogu shifted normal forms of polynomial matrices bernhard beckermann george labahn gilles villard determination of mass properties of polygonal csg objects in parallel chandrasekhar narayanaswami william randolph franklin algorithm 586: itpack 2c: a fortran package for solving large sparse linear systems by adaptive accelerated iterative methods david r. kincaid john r. respess david m. young roger g.
grimes efficient low-contention parallel algorithms the queue-read, queue-write (qrqw) pram model [gmr94] permits concurrent reading and writing, but at a cost proportional to the number of readers/writers to a memory location in a given step. the qrqw model reflects the contention properties of most parallel machines more accurately than either the well-studied crcw or erew models: the crcw model does not adequately penalize algorithms with high contention to shared memory locations, while the erew model is too strict in its insistence on zero contention at each step. of primary practical and theoretical interest, then, is the design of fast and efficient qrqw algorithms for problems for which all previous algorithms either suffer from high contention, fail to be fast, or fail to be work-optimal. this paper describes low-contention, fast, work-optimal qrqw pram algorithms for the fundamental problems of finding a random permutation, parallel hashing, load balancing, and sorting. there is no fast, work-optimal erew algorithm known for finding a random permutation or for parallel hashing. for load balancing, we improve upon the erew result whenever the ratio of the maximum to the average load is not too large. we show that the logarithmic dependence of the qrqw running time on this ratio is inherent by providing a matching lower bound. we demonstrate the performance advantage of a qrqw random permutation algorithm, compared with the popular erew algorithm, by implementing and running both algorithms on the maspar mp-1. finally, we extend the work-time framework for the design of parallel algorithms to account for contention, and relate it to the qrqw pram model. we use our qrqw load balancing algorithm, as well as the qrqw linear compaction algorithm in [gmr94], to provide automatic tools for processor allocation---an issue that needs to be handled when translating an algorithm from its work-time presentation into the explicit pram description. phillip b. gibbons yossi matias vijaya ramachandran computing surveys' electronic symposium on the theory of computation p. degano r. gorrieri a. marchetti-spaccamela p. wegner nondeterministic linear tasks may require substantially nonlinear deterministic time in the case of sublinear work space the log-size parabolic clique problem is a version of the clique problem solvable in linear time by a log-space nondeterministic turing machine. however, no deterministic machine (in a very general sense of this term) with sequential-access read-only input tape and work space n^σ solves the log-size parabolic clique problem within time n^(1+τ) if σ + 2τ < 1/2. yuri gurevich saharon shelah a term calculus for unitary approach to normalization claudia faggian computational complexity of combinatorial surfaces we investigate the computational problems associated with combinatorial surfaces. specifically, we present an algorithm (based on the brahana-dehn-heegaard approach) for transforming the polygonal schema of a closed triangulated surface into its canonical form in o(n log n) time, where n is the total number of vertices, edges and faces. we also give an o(n log n + gn) algorithm for constructing canonical generators of the fundamental group of a surface of genus g. this is useful in constructing homeomorphisms between combinatorial surfaces. gert vegter chee k. yap algorithmic elimination of spurious nondeterminism from mealy machines mara chibnik the turing machine of ackermann's function m. a. mcbeth proof of a conjecture of r. kannan r.
kannan conjectured that every non-deterministic two-way finite automaton can be positionally simulated by a deterministic two-way finite automaton. the conjecture is proved here by reduction to a similar problem about finite semigroups. the method and the result are then generalized to alternating two-way finite automata. j.-c. birget layout algorithm for vlsi design andrea s. lapaugh are lower bounds easier over the reals? herve fournier pascal koiran unique decomposability of shuffled strings a string x is said to be decomposed into strings y1, y2, ..., yn if x is in y1 ⧢ y2 ⧢ ... ⧢ yn, where ⧢ is the shuffle operator. we consider the problem of decomposing x into y1, y2, ..., yn such that y1, y2, ..., yn belong to predetermined languages l1, l2, ..., ln, respectively. conditions under which such a decomposition is unique are presented, as well as uniquely decomposable l1, l2, ..., ln which are of practical significance from the viewpoint of time-multiplexed communication. kazuo iwama verbose typing robert ennals probabilistic analysis for combinatorial functions of moving points li zhang harish devarajan julien basch piotr indyk emerging opportunities for theoretical computer science alfred v. aho david s. johnson richard m. karp s. rao kosaraju catherine c. mcgeoch christos h. papadimitriou pavel pevzner coping with complexity (invited talk) (summary only) jerry saltzer mechanizing proof: computing, risk, and trust peter g. neumann independence results in computer science? (preliminary version) although there has been considerable additional work discussing limitations of formal proof techniques for computer science ([yo-73&77], [har-76], [har&ho-77], [haj-77&79], [go-79]), these papers show only very general consequences of incompleteness: the stated results hold for all sufficiently powerful formal systems for computer science. only the work of o'donnell and of lipton directly addresses the question of just how powerful formal axioms for computer science should be, and these two authors make rather radically different suggestions. we investigate this latter question: how powerful should a set of axioms be if it is to be adequate for computer science? in particular, in this paper we investigate the adequacy of the system of [li-78] as a formal system for computer science. deborah joseph paul young production trees: a compact representation of parsed programs abstract syntax trees were devised as a compact alternative to parse trees, because parse trees are known to require excessive amounts of storage to represent parsed programs. however, the savings that abstract syntax trees actually achieve have never been precisely described because the necessary analysis has been missing. without it, one can only measure particular cases that may not adequately represent all the possible behaviors. we introduce a data structure, production trees, that are more compact than either abstract syntax trees or parse trees. further, we develop the necessary analysis to characterize the storage requirements of parse trees, abstract syntax trees, and production trees and relate the size of all three to the size of the program's text. the analysis yields the parameters needed to characterize these storage behaviors over their entire range. we flesh out the analysis by measuring these parameters for a sample of "c" programs.
for these programs, production trees were from 1/15 to 1/23 the size of the corresponding parse tree, 1/2.7 the size of a (minimal) abstract syntax tree, and averaged only 2.83 times the size of the program text. vance e. waddle what does o(n) mean y gurevich extremal polygon containment problems sivan toledo on the complexity of learning minimum time-bounded turing machines ker-i ko a proof of generalized induction r mcbeth termination for direct sums of left-linear complete term rewriting systems y. toyama j. w. klop h. p. barendregt type inference in the presence of type abstraction a number of recent programming language designs incorporate a type checking system based on the girard-reynolds polymorphic λ-calculus. this allows the construction of general purpose, reusable software without sacrificing compile-time type checking. a major factor constraining the implementation of these languages is the difficulty of automatically inferring the lengthy type information that is otherwise required if full use is made of these languages. there is no known algorithm to solve any natural and fully general formulation of this "type inference" problem. one very reasonable formulation of the problem is known to be undecidable. here we define a restricted version of the type inference problem and present an efficient algorithm for its solution. we argue that the restriction is sufficiently weak to be unobtrusive in practice. h.-j. boehm the sequence equivalence problem is decidable for 0s systems a. ehrenfeucht g. rozenberg on the learnability of discrete distributions michael kearns yishay mansour dana ron ronitt rubinfeld robert e. schapire linda sellie fast algorithm for calculation of dirac's gamma-matrices traces v. a. ilyin a. p. kryukov a. y. rodioniov a. y. taranov an overview of computational complexity an historical overview of computational complexity is presented. emphasis is on the fundamental issues of defining the intrinsic computational complexity of a problem and proving upper and lower bounds on the complexity of problems. probabilistic and parallel computation are discussed. stephen a. cook almost optimal set covers in finite vc-dimension: (preliminary version) we give a deterministic polynomial time method for finding a set cover in a set system (x,r) of vc-dimension d such that the size of our cover is at most a factor of o(d log(dc)) from the optimal size, c. for constant vc-dimension set systems, which are common in computational geometry, our method gives an o(log c) approximation factor. this improves the previous o(log |x|) bound of the greedy method and beats recent complexity-theoretic lower bounds for set covers (which don't make any assumptions about vc-dimension). we give several applications of our method to computational geometry, and we show that in some cases, such as those that arise in 3-d polytope approximation and 2-d disc covering, we can quickly find o(c)-sized covers. herve bronnimann michael t. goodrich planar intersection, common inscribed sphere and dupin blending cyclides ching-kuang shene dynamic typing in a statically typed language statically typed programming languages allow earlier error checking, better enforcement of disciplined programming styles, and the generation of more efficient object code than languages where all type consistency checks are performed at run time. however, even in statically typed languages, there is often the need to deal with data whose type cannot be determined at compile time.
to handle such situations safely, we propose to add a type dynamic whose values are pairs of a value v and a type tag t where v has the type denoted by t. instances of dynamic are built with an explicit tagging construct and inspected with a type safe typecase construct. this paper explores the syntax, operational semantics, and denotational semantics of a simple language that includes the type dynamic. we give examples of how dynamically typed values can be used in programming. then we discuss an operational semantics for our language and obtain a soundness theorem. we present two formulations of the denotational semantics of this language and relate them to the operational semantics. finally, we consider the implications of polymorphism and some implementation issues. martín abadi luca cardelli benjamin pierce gordon plotkin the unwinding number from the oxford english dictionary we find that _to unwind_ can mean "to become free from a convoluted state". further down we find the quotation "the solution of all knots, and unwinding of all intricacies", from h. brooke (the fool of quality, 1809). while we do not promise that the unwinding number, defined below, will solve _all_ intricacies, we do show that it may help for quite a few problems. our original interest in this area came from a problem in which an early version of derive was computing the wrong answer when simplifying sin(sin^(-1) z), which should always be just z. for z > 1, derive was getting -z as the answer. this bug has of course long since been fixed. what was happening was that in order to improve internal efficiency, all the inverse trig functions were represented as arctangents. consulting an elementary book of tables, one finds the identity sin^(-1) z = tan^(-1)(z/sqrt(1 - z^2)). (1) in the same vein, one finds that sin(tan^(-1) w) = w/sqrt(1 + w^2). (2) substituting equations (1) and (2) into sin(sin^(-1) z) and simplifying, we get (z/sqrt(1 - z^2)) (1/sqrt(1/(1 - z^2))), (3) which derive quite properly refused to simplify to z, because this is _not_ always equal to z (see [2]). the fix in this case was to replace equation (2) with sin(tan^(-1) w) = w sqrt(1/(1 + w^2)), (4) which differs from the original only on the branch cut. see [7] for more discussion. this change allows the simplification of sin(sin^(-1) z) to z. verifying that this approach worked, and indeed trying to understand the problem to begin with, led us to attempt various definitions of a 'branch function'. this introductory problem turned out to be the tip of an iceberg of problems connected with using the principal branch of multivalued elementary functions. robert m. corless david j. jeffrey the harmonic online k-server algorithm is competitive e. f. grove (optimal) duplication is not elementary recursive andrea asperti paolo coppola simone martini a simpler approach to the busy beaver problem charles b. dunham logical models of objects and of processes william p. coleman decidability of bisimulation equivalence for normed pushdown processes colin stirling on resetting dlba's oscar h. ibarra proof-carrying code (abstract): design, implementation and applications george c. necula a forward move algorithm for ll and lr parsers a wide variety of algorithms have been suggested for the repair of syntactic errors in a computer program. since there is usually more than one possible repair for any syntax error, many algorithms employ a cost function to guide the repair, and some [1,3,4,6] guarantee that the repair chosen will be least-cost, according to some definition.
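a minimal numeric sketch (my own illustration, not code from the corless/jeffrey unwinding-number entry above) of the branch-cut issue it describes: with python's principal-branch complex functions, formula (2) returns -z for z > 1 while the corrected formula (4) returns z, because sqrt(1 - z^2)*sqrt(1/(1 - z^2)) is -1 rather than +1 past the cut. the variable names are illustrative only.

```python
import cmath

z = 2.0 + 0j                      # a real point past the branch cut (z > 1)
w = z / cmath.sqrt(1 - z**2)      # identity (1): sin^-1 z = tan^-1 w

# formula (2): sin(tan^-1 w) = w / sqrt(1 + w^2) -- wrong on the cut
bad = w / cmath.sqrt(1 + w**2)

# formula (4): sin(tan^-1 w) = w * sqrt(1 / (1 + w^2)) -- differs only on the cut
good = w * cmath.sqrt(1 / (1 + w**2))

print(bad)    # approximately (-2+0j): the erroneous -z
print(good)   # approximately (2+0j): the expected z

# the culprit: on this branch sqrt(1 - z^2) * sqrt(1/(1 - z^2)) is -1, not +1
print(cmath.sqrt(1 - z**2) * cmath.sqrt(1 / (1 - z**2)))
```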
(the others, although guided by costs, do not guarantee least-cost in all cases.) fischer et al. [4,6,7] define a "locally least-cost" repair using insertions and deletions, and provide algorithms for ll and lr parsers. a locally least-cost repair is a least-cost sequence of deletions and insertions such that one more symbol in the original string will be accepted by the parser. backhouse [2,3] uses a similar definition. in both cases, the repair algorithms operate by examining a single symbol in the input at any time. jon mauney charles n. fischer schemata for interrogating solid boundaries michael karasick derek lieber at^2 bounds for a class of vlsi problems and string matching we present a class of boolean functions which have regional mappings, a generalization of the transitivity property defined in [12]. we prove that all transitive problems have regional mappings, as do a variety of interesting computational problems such as merging two sorted lists of arbitrary length, generalized integer multiplication and matrix-vector products. we present a general area-time lower bound for vlsi implementations of problems with regional mappings and confirm that the lower bound matches the previously known bound for transitive problems. for generalized integer multiplication, we present a custom vlsi implementation which provides a matching upper bound. the results improve at^2 bounds on a number of open problems. in related work, we consider the problem of finding occurrences of a p-bit pattern in an n-bit text string. we show that every chip capable of solving this string matching problem requires at^2 = Ω(pn). a custom vlsi design, using an idea similar to that used for generalized integer multiplication, provides a matching upper bound when n is polynomial in p. micah adler john byers operational semantics and extensionality simona ronchi della rocca universal algorithms for store-and-forward and wormhole routing robert cypher friedhelm meyer auf der heide christian scheideler berthold vöcking the minimum consistent dfa problem cannot be approximated within any polynomial the minimum consistent dfa problem is that of finding a dfa with as few states as possible that is consistent with a given sample (a finite collection of words, each labeled as to whether the dfa found should accept or reject). assuming that p ≠ np, it is shown that for any constant k, no polynomial time algorithm can be guaranteed to find a consistent dfa of size opt^k, where opt is the size of a smallest dfa consistent with the sample. this result holds even if the alphabet is of constant size two, and if the algorithm is allowed to produce an nfa, a regular grammar, or a regular expression that is consistent with the sample. similar hardness results are described for the problem of finding small consistent linear grammars. l. pitt m. k. warmuth a moment of perfect clarity i: the parallel census technique lane a. hemaspaandra christian glaßer on polynomial time bounded truth-table reducibility of np sets to sparse sets m. ogiwara o. watanabe concurrent distributed pascal: a hands-on introduction to parallelism michael j. jipping jeffrey r. toppen stephen weeber von zur gathen's factorization challenge michael b. monagan hard examples for resolution exponential lower bounds are proved for the length of resolution refutations of sets of disjunctions constructed from expander graphs, using the method of tseitin.
since these sets of clauses encode biconditionals, they have short (polynomial-length) refutations in a standard axiomatic formulation of propositional calculus. alasdair urquhart regular languages under f-gsm mappings a j dos reis feasibility testing for systems of real quadratic equations we consider the problem of deciding whether a given system of quadratic homogeneous equations over the reals has a non-trivial solution. we design an approximative algorithm whose complexity is polynomial in the number of variables and exponential in the number of equations. some applications to general systems of polynomial equations and inequalities over the reals are discussed. alexander i. barvinok an efficient approach to removing geometric degeneracies our aim is to perturb the input so that an algorithm designed under the hypothesis of input non-degeneracy can execute on arbitrary instances. the deterministic scheme of [emca] was the first efficient method and was applied to two important predicates. here it is extended in a consistent manner to another two common predicates, thus making it valid for most algorithms in computational geometry. it is shown that this scheme incurs no extra algebraic complexity over the original algorithm while it increases the bit complexity by a factor roughly proportional to the dimension of the geometric space. the second contribution of this paper is a variant scheme for a restricted class of algorithms that is asymptotically optimal with respect to the algebraic as well as the bit complexity. both methods are simple to implement and require no symbolic computation. they also conform to certain criteria ensuring that the solution to the original input can be restored from the output on the perturbed input. this is immediate when the input to solution mapping obeys a continuity property and requires some case-specific work otherwise. finally we discuss extensions and limitations to our approach. ioannis emiris john canny 168 global character string search and replace j thornburg degenerate convex hulls in high dimensions without extra storage gunter rote ham-sandwich cuts in r^d lo and steiger resolved the complexity question for computing a planar ham-sandwich cut by giving an optimal linear-time algorithm. we show how to generalize the ideas to every fixed dimension d > 2 by describing an algorithm that computes a ham-sandwich cut in r^d in time o(n^(d-1-a(d))), for some a(d) > 0 (going to zero as d increases). for d = 3,4, the running time is almost proportional to ψ_(d-1)(n/2;n), where ψ_d(k;n) denotes the maximal number of k-sets over sets of n points in r^d, and with the current best bounds, we get o(n^(3/2) log^2 n/log* n) running time for d = 3 and o(n^(8/3+ε)) for d = 4. we also give a linear time algorithm for three dimensional ham-sandwich cuts when the three sets are suitably separated. jiri matousek chi-yuan lo william steiger dehn-sommerville relations, upper bound theorem, and levels in arrangements in this note, we generalize the h-vector for simple, bounded convex polytopes [14] to the h-matrix for simple, bounded k-complexes. we observe that the h-matrix is invariant with respect to the defining linear function, and that the dehn-sommerville relations and mcmullen's upper bound theorem [13] for convex polytopes follow from the invariance of the 0-th row and column of this matrix. the invariance of the other entries in the h-matrix should, perhaps, be investigated more.
one new consequence is that, given any non-degenerate linear function z, the number of local z-minima on the l-th level of any d-dimensional arrangement is bounded by (l+d-1 choose d-1), with exact equality if the l-th level is bounded and simple. ketan mulmuley regular languages do not form a lattice under gsm mappings a j dos reis the cycle burning problem c b dunham solvability by radicals is in polynomial time every high school student knows how to express the roots of a quadratic equation in terms of radicals; what is less well-known is that this solution was found by the babylonians a millennium and a half before christ [ne]. three thousand years elapsed before european mathematicians determined how to express the roots of cubic and quartic equations in terms of radicals, and there they stopped, for their techniques did not extend. lagrange published a treatise which discussed why the methods that worked for polynomials of degree less than five did not work for quintic polynomials [lag]; these methods require double exponential time. through the years other mathematicians developed alternate algorithms all of which, however, remained exponential. a major impasse was the problem of factoring polynomials, for until the recent breakthrough of lenstra, lenstra, and lovasz [l3], all earlier algorithms had exponential running time. their algorithm, which factors polynomials over the rationals in polynomial time, gave rise to a hope that some of the classical questions of galois theory might have polynomial time solutions. galois transformed the question of solvability by radicals from a problem concerning fields to a problem about groups. what we do is to change the inquiry into several problems concerning the solvability of certain primitive groups. susan landau gary lee miller on the degree of polynomials that approximate symmetric boolean functions (preliminary version) in this paper, we provide matching (up to a constant factor) upper and lower bounds on the degree of polynomials that represent symmetric boolean functions with an error 1/3. let Γ(f) = min{|2k - n + 1| : f_k ≠ f_(k+1) and 0 ≤ k ≤ n - 1}, where f_i is the value of f on inputs with exactly i 1's. we prove that the minimum degree over all the approximating polynomials of f is Θ((n(n-Γ(f)))^0.5). we apply the techniques and tools from approximation theory to derive this result. ramamohan paturi 2w-ary algorithm for extended problem of integer gcd hirokazu murao the future of computational complexity theory: part ii this is the final part of a 2-part column on the future of computational complexity theory. the ground rules were that the contributors had no restrictions (except a 1-page limit). for readers interested in more formal reports, in addition to the two urls mentioned in the previous issue (ftp://ftp.cs.washington.edu/tr/1996/o3/uw-cse-96-o3-o3.ps.z and http://theory.lcs.mit.edu/-oded/toc-sp.html) i would also point to the recent "strategic directions" report (http://geisel.csl.uiuc.edu/-loui/complete.html). coming during the next few issues: the search for the perfect theory journal; thomas jefferson exposed as a theoretical computer scientist; and mitsunori ogihara's survey of dna-based computation. (the "lance fortnow in a clown suit" article promised in the previous column actually ran stand-alone last issue due to scheduling; it probably is still available at a library near you!)
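a small sketch (illustrative only, not from paturi's paper) that computes the quantity Γ(f) defined in the symmetric-boolean-functions entry above, taking f as its value vector f_0..f_n; it makes the definition concrete on two standard examples. the function and example names are mine.

```python
def gamma(f_values):
    """Γ(f) = min{|2k - n + 1| : f_k != f_(k+1), 0 <= k <= n-1} for symmetric f."""
    n = len(f_values) - 1
    jumps = [abs(2 * k - n + 1)
             for k in range(n)
             if f_values[k] != f_values[k + 1]]
    return min(jumps) if jumps else None  # None if f is constant

# majority on n = 5 bits flips at k = 2, so Γ = |2*2 - 5 + 1| = 0, and the
# approximating degree Θ((n(n - Γ))^0.5) is Θ(n)
print(gamma([0, 0, 0, 1, 1, 1]))   # 0
# OR on n = 5 bits flips at k = 0, so Γ = |0 - 5 + 1| = 4 and the degree is Θ(sqrt(n))
print(gamma([0, 1, 1, 1, 1, 1]))   # 4
```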
finally, some recent work by edith hemaspaandra, harald hempel, and myself, and of buhrman and fortnow, partially resolves one of the open questions from complexity theory column 11. e. allender j. feigenbaum j. goldsmith t. pitassi s. rudich time and space lower bounds for non-blocking implementations (preliminary version) prasad jayanti king tan sam toueg a closer look at iteration: the self stabilizing capability of loops let p be an iterative program on domain d, with state s. a close look at the structure of the loop invariant of p reveals that p has a desirable self stabilizing property: if during the computation, an error causes the state to take a contaminated value s', then, under some conditions, it is possible to recover from this contamination without knowing any previous correct state. this feature contrasts with the classical error recovery mechanisms which equate error recovery with the retrieval of a previously saved correct state, and the backtracking of the computation. ali mili algebraic laws for nondeterminism and concurrency since a nondeterministic and concurrent program may, in general, communicate repeatedly with its environment, its meaning cannot be presented naturally as an input/output function (as is often done in the denotational approach to semantics). in this paper, an alternative is put forth. first, a definition is given of what it is for two programs or program parts to be equivalent for all observers; then two program parts are said to be observation congruent if they are, in all program contexts, equivalent. the behavior of a program part, that is, its meaning, is defined to be its observation congruence class. the paper demonstrates, for a sequence of simple languages expressing finite (terminating) behaviors, that in each case observation congruence can be axiomatized algebraically. moreover, with the addition of recursion and another simple extension, the algebraic language described here becomes a calculus for writing and specifying concurrent programs and for proving their properties. matthew hennessy robin milner conditions for exact resultants using the dixon formulation a structural criteria on polynomial systems is developed for which the generalized dixon formulation of multivariate resultants defined by kapur, saxena and yang (1994) computes the resultant exactly. the concept of a dixon- exact support (the set of exponent vectors of terms appearing in a polynomial system) is introduced so that the dixon formulation produces the exact resultant for generic unmixed polynomial systems whose support is dixon-exact. a geometric operation, called direct-sum, on the supports is defined that preserves the property of supports being dixon-exact. generic n-degree systems and multigraded systems are shown to be a special case of generic unmixed polynomial systems whose support is dixon-exact. using a scaling techniques discussed by kapur and saxena (1997), a wide class of polynomial systems can be identified for which the dixon formulation produces exact resultants. this analysis can be used to classify terms appearing in the convex hull (also called the newton polytope) of the support of a polynomial system that can cause extraneous factors in the computation of a projection operation by the generalized dixon formulation. 
for the bivariate case, a complete analysis of the terms corresponding to the exponent vectors in the newton polytope of the support of a polynomial system is given vis a vis their role in producing extraneous factors in a projection operator. a necessary and sufficient condition is developed for a support to be dixon-exact. such an analysis is likely to give insights for the general case of elimination of arbitrarily many variables. arthur d. chtcherba deepak kapur the computational complexity of universal hashing y. mansour n. nisan p. tiwari a faster strongly polynomial minimum cost flow algorithm we present a new strongly polynomial algorithm for the minimum cost flow problem, based on a refinement of the edmonds-karp scaling technique. our algorithm solves the uncapacitated minimum cost flow problem as a sequence of o(n log n) shortest path problems on networks with n nodes and m arcs and runs in o(n log n (m + n log n)) steps. using a standard transformation, this approach yields an o(m log n (m + n log n)) algorithm for the capacitated minimum cost flow problem. this algorithm improves the best previous strongly polynomial algorithm due to galil and tardos, by a factor of m/n. our algorithm is even more efficient if the number of arcs with finite upper bounds, say m', is much less than m. in this case, the number of shortest path problems solved is o((m' + n) log n). james orlin a linear-time algorithm for triangulating simple polygons r e tarjan c j van wyk sorting in linear time? arne andersson torben hagerup stefan nilsson rajeev raman programmed grammars with multi-production core and their applications (abstract) two-dimensional programmed grammars were introduced in [1]. in this paper, programmed grammars with multi-production core are defined. various properties of this type of grammar are studied and illustrated by examples. in addition, the time and space complexity analyses are also investigated with illustrative examples. applications to the generation of simple three-dimensional objects are also studied. the main advantages are: (1) this type of grammar provides a concise and a convenient way to generate simple three-dimensional objects in a structured fashion; (2) the derivation process can be clearly envisioned. furthermore, this type of grammar also has useful applications in region filling, robotics, pattern recognition, pictorial information systems, visual languages and artificial intelligence. edward t. lee shangyong zhu constructive deterministic pram simulation on a mesh-connected computer we present a constructive deterministic simulation of a pram with n processors and m = n^α shared variables, 1 < α ≤ 2, on an n-node mesh-connected computer where each node hosts a processor and a memory module. at the core of the simulation is a hierarchical memory organization scheme (hmos) that governs the distribution of the pram variables (each replicated into a number of copies) among the modules. the hmos consists of a cascade of explicit bipartite graphs whose expansion properties, combined with suitable access and routing protocols, yield a time performance that, for α < 3/2, is close to the √n bound imposed by the network's diameter, and that, for α ≥ 3/2, is a function of α never exceeding o(n^(5/8)). andrea pietracaprina geppino pucci jop f.
sibeyn determining the idle time of a tiling karin högstedt larry carter jeanne ferrante a polynomial time solution for labeling a rectilinear map chung keung poon binhai zhu francis chin sparse sets in np-p this paper investigates the structural properties of sets in np-p and shows that the computational difficulty of lower density sets in np depends explicitly on the relations between higher deterministic and nondeterministic time-bounded complexity classes. the paper exploits the recently discovered upward separation method, which shows for example that there exist sparse sets in np-p if and only if exptime ≠ nexptime. in addition, the paper uses relativization techniques to determine logical possibilities, limitations of these proof techniques, and, for the first time, to exhibit structural differences between relativized np and conp. j. hartmanis v. sewelson n. immerman the ω-sequence equivalence problem for d0l systems is decidable the following problem is shown to be decidable. given are homomorphisms h1 and h2 from Σ* to Σ* and strings σ1 and σ2 over Σ such that h_i^n(σ_i) is a proper prefix of h_i^(n+1)(σ_i) for i = 1, 2 and all n ≥ 0, i.e. for i = 1, 2, h_i generates from σ_i an infinite string α_i with prefixes h_i^n(σ_i) for all n ≥ 0. test whether α_1 = α_2. from this result easily follows the decidability of limit language equivalence (ω-equivalence) for d0l systems. karel culik tero harju inference of inequality constraints in logic programs (extended abstracts) alexander brodsky yehoshua sagiv generation and recognition of formal languages by modifiable grammars boris burshteyn a lower bound for integer greatest common divisor computations it is proved that no finite computation tree with operations { +, -, *, /, mod, < } can decide whether the greatest common divisor (gcd) of a and b is one, for all pairs of integers a and b. this settles a problem posed by grötschel et al. moreover, if the constants explicitly involved in any operation performed in the tree are restricted to be "0" and "1" (and any other constant must be computed), then we prove an Ω(log log n) lower bound on the depth of any computation tree with operations { +, -, *, /, mod, < } that decides whether the gcd of a and b is one, for all pairs of n-bit integers a and b. a novel technique for handling the truncation operation is implicit in the proof of this lower bound. in a companion paper, other lower bounds for a large class of problems are proved using a similar technique. yishay mansour baruch schieber prasoon tiwari on the limited power of linear probes and other optimization oracles peter gritzmann victor klee john westwater boolean operations on sets using surface data toshiaki satoh hiroaki chiyokura a heretical view on type embedding r. t. boute completion of a set of rules modulo a set of equations jean-pierre jouannaud helene kirchner distance-sat: complexity and algorithms olivier bailleux pierre marquis improved string matching with k mismatches z galil r giancarlo open problems: 15 samir khuller algorithmic aspects of type inference with subtypes we study the complexity of type inference for programming languages with subtypes. there are three language variations that affect the problem: (i) basic functions may have polymorphic or more limited types, (ii) the subtype hierarchy may be fixed or vary as a result of subtype declarations within a program, and (iii) the subtype hierarchy may be an arbitrary partial order or may have a more restricted form, such as a tree or lattice.
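a tiny sketch (my own illustration) of the setting in the culik/harju ω-sequence entry above: a d0l system iterates a homomorphism on a start string, the proper-prefix condition makes successive iterates extend one another, and the limit is an infinite word; the paper shows that equality of two such limit words is decidable. the example systems below are standard (both generate the thue-morse word) and were chosen by me.

```python
def apply_hom(h, word):
    return "".join(h[c] for c in word)

def limit_prefix(h, start, length):
    """iterate h on start until the prefix-stable word reaches `length` characters."""
    w = start
    while len(w) < length:
        nxt = apply_hom(h, w)
        assert nxt.startswith(w) and len(nxt) > len(w)   # proper-prefix condition
        w = nxt
    return w[:length]

h1 = {"a": "ab", "b": "ba"}           # thue-morse morphism
h2 = {"a": "abba", "b": "baab"}       # its square: a different d0l system, same limit
print(limit_prefix(h1, "a", 32))
print(limit_prefix(h1, "a", 32) == limit_prefix(h2, "a", 32))   # True
```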
the naive algorithm for inferring a most general polymorphic type, under variable subtype hypotheses, requires deterministic exponential time. if we fix the subtype ordering, this upper bound grows to nondeterministic exponential time. we show that it is np-hard to decide whether a lambda term has a type with respect to a fixed subtype hierarchy (involving only atomic type names). this lower bound applies to monomorphic or polymorphic languages. we give pspace upper bounds for deciding polymorphic typability if the subtype hierarchy has a lattice structure or the subtype hierarchy varies arbitrarily. we also give a polynomial time algorithm for the limited case where there are no function constants and the type hierarchy is either variable or any fixed lattice. patrick lincoln john c. mitchell generalized fair termination we present a generalization of the known fairness and equifairness notions, called @@@@-fairness, in three versions: unconditional, weak and strong. for each such version, we introduce a proof rule for the @@@@-fair termination induced by it, using well-foundedness and countable ordinals. each such rule is proved to be sound and semantically complete. we suggest directions for further research. nissim francez dexter kozen complete axiomatization of algorithmic properties of program schemes with bounded nondeterministic interpretations propositional algorithmic logic pal is a propositional counterpart of algorithmic logics. it investigates properties of program connectives: begin...end, while...do, if...then...else, or .... pal supplies tools for reasoning about programs constructed from program variables by means of program connectives and about their algorithmic properties. sound rules of inference and tautologies of pal are as important in analysis of programs (e.g. verification) as tautologies of classical propositional calculus. on the other hand propositional algorithmic theories are of highest interest, since they can capture properties of data structures and also algorithmic properties of behaviours of concurrent systems. we proved that the semantical and the syntactical consequence operations coincide (completeness property) and that the model existence theorem holds. grazyna mirkowska surveyor's forum: determining a search jacques cohen combinations of abstract domains for logic programming abstract interpretation [7] is a systematic methodology to design static program analysis which has been studied extensively in the logic programming community, because of the potential for optimizations in logic programming compilers and the sophistication of the analyses which require conceptual support. with the emergence of efficient generic abstract interpretation algorithms for logic programming, the main burden in building an analysis is the abstract domain which gives a safe approximation of the concrete domain of computation. however, accurate abstract domains for logic programming are often complex because of the variety of analyses to perform, their interdependence, and the need to maintain structural information. the purpose of this paper is to propose conceptual and software support for the design of abstract domains. it contains two main contributions: the notion of open product and a generic pattern domain. the open product is a new way of combining abstract domains allowing each combined domain to benefit from information from the other components through the notions of queries and open operations.
the open product is general-purpose and can be used for other programming paradigms as well. the generic pattern domain pat(r) automatically upgrades a domain d with structural information yielding a more accurate domain pat(d) without additional design or implementation cost. the two contributions are orthogonal and can be combined in various ways to obtain sophisticated domains while imposing minimal requirements on the designer. both contributions are characterized theoretically and experimentally and were used to design very complex abstract domains such as pat(oprop⊗omode⊗ops) which would be very difficult to design otherwise. on this last domain, designers need only contribute about 20% (about 3,400 lines) of the complete system (about 17,700 lines). agostino cortesi baudouin le charlier pascal van hentenryck is there a use for linear logic? philip wadler unbounded fan-in circuits and associative functions we consider the computation of finite semigroups using unbounded fan-in circuits. there are constant-depth, polynomial size circuits for semigroup product iff the semigroup does not contain a nontrivial group as a subset. in the case that the semigroup in fact does not contain a group, then for any primitive recursive function f, circuits of size o(n f^(-1)(n)) and constant depth exist for the semigroup product of n elements. the depth depends upon the choice of the primitive recursive function f. the circuits not only compute the semigroup product, but every prefix of the semigroup product. a consequence is that the same bounds apply for circuits computing the sum of two n-bit numbers. ashok k. chandra steven fortune richard lipton effective transformations on infinite trees, with applications to high undecidability, dominoes, and fairness elementary translations between various kinds of recursive trees are presented. it is shown that trees of either finite or countably infinite branching can be effectively put into one-one correspondence with infinitely branching trees in such a way that the infinite paths of the latter correspond to the "p-abiding" infinite paths of the former. here p can be any member of a very wide class of properties of infinite paths. for many properties p, the converse holds too. two of the applications involve (a) the formulation of large classes of highly undecidable variants of classical computational problems, and in particular, easily describable domino problems that are Π^1_1-complete, and (b) the existence of a general method for proving termination of nondeterministic or concurrent programs under any reasonable notion of fairness. david harel on probabilistic networks for selection, merging, and sorting tom leighton yuan ma torsten suel a super scalar sort algorithm for risc processors ramesh c. agarwal average case intractability of matrix and diophantine problems (extended abstract) ramarathnam venkatesan sivaramakrishnan rajagopalan experiments with a generic reduction computer a k dewdney on the inference of canonical context-free grammars l f fass finding k farthest pairs and k closest/farthest bichromatic pairs for points in the plane we study the problem of enumerating k farthest pairs for n points in the plane and the problems of enumerating k closest/farthest bichromatic pairs of n red and n blue points in the plane. we propose a new technique for geometric enumeration problems which iteratively reduces the search space by a half and provides efficient algorithms.
as applications of this technique, we develop algorithms, using higher order voronoi diagrams, for the above problems, which run in o(min{n^2, n log n + k^(4/3) log n/log^(1/3) k}) time and o(n + k^(4/3)/(log k)^(1/3) + k log n) space. since, to the authors' knowledge, no nontrivial algorithms have been known for these problems, our algorithms are currently fastest when k = o(n^(3/2)). naoki katoh kazuo iwano the complexity of the circularity problem for attribute grammars: a note on a counterexample for a simpler construction joost engelfriet on the relationship between formal semantics and static analysis p. n. benton results on k-sets and j-facets via continuous motion artur andrzejak boris aronov sariel har-peled raimund seidel emo welzl improved projection for cad's of r^3 this paper presents an improved projection operator for the construction of cad's of r^3. it is shown that, typically, it suffices to include in projection only leading coefficients (along with discriminants and resultants) rather than all coefficients. cases in which the leading coefficient alone does not suffice can be dealt with, in a sense, even more efficiently. generalizing the improved projection operator to dimension greater than three is a topic of ongoing research. christopher w. brown pracniques: meansort this paper presents an efficient algorithm based on quicksort. the quicksort algorithm is known to be one of the most efficient sorting techniques; however, one of the drawbacks of this method is its worst case situation of o(n^2) comparisons. the algorithm presented here improves the average behavior of quicksort and reduces the occurrence of worst case situations. dalia motzkin on the power of bounded concurrency i: finite automata we investigate the descriptive succinctness of three fundamental notions for modeling concurrency: nondeterminism and pure parallelism, the two facets of alternation, and bounded cooperative concurrency, whereby a system configuration consists of a bounded number of cooperating states. our results are couched in the general framework of finite-state automata, but hold for appropriate versions of most concurrent models of computation, such as petri nets, statecharts or finite-state versions of concurrent programming languages. we exhibit exhaustive sets of upper and lower bounds on the relative succinctness of these features over Σ* and Σ^ω, establishing that: (1) each of the three features represents an exponential saving in succinctness of the representation, in a manner that is independent of the other two and additive with respect to them. (2) of the three, bounded concurrency is the strongest, representing a similar exponential saving even when substituted for each of the others. for example, we prove exponential upper and lower bounds on the simulation of deterministic concurrent automata by afas, and triple-exponential bounds on the simulation of alternating concurrent automata by dfas. doron drusinsky david harel lalr(k) testing is pspace-complete the problem of testing whether or not an arbitrary context-free grammar is lalr(k) for a fixed integer k ≥ 1 (i.e. only the subject grammar is a problem parameter) is shown to be pspace-complete. the result is in contrast with testing the membership in several other easily parsed classes of grammars, such as lr(k), slr(k), lc(k) and ll(k) grammars, for which deterministic polynomial time membership tests are known.
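a minimal sketch (my own rendering of the idea, not motzkin's published code) of the meansort entry above: run quicksort but partition around the arithmetic mean of the current segment, which tends to split numeric keys into balanced halves and so makes the o(n^2) worst case less likely.

```python
def meansort(a):
    if len(a) <= 1:
        return a
    pivot = sum(a) / len(a)                # arithmetic mean of the segment
    left = [x for x in a if x < pivot]
    right = [x for x in a if x >= pivot]
    if not left or not right:              # all keys equal: nothing to do
        return a
    return meansort(left) + meansort(right)

print(meansort([5, 3, 9, 1, 9, 2, 7]))     # [1, 2, 3, 5, 7, 9, 9]
```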
the pspace-hardness of the problem is proved using a transformation from the finite state automaton non- universality problem. a nondeterministic algorithm for constructing sets of lr(k) items leads to a polynomially space bounded algorithm for lalr(k) testing. esko ukkonen eljas soisalon-soininen pattern-matching and text-compression algorithms maxime crochemore thierry lecroq a threshold of ln n for approximating set cover (preliminary version) uriel feige absolute error bounds for learning linear functions online in this note, we consider the problem of learning a linear function from ** to ** online. two previous algorithms have been presented which achieve an optimal sum of squared error terms. we show that neither algorithm performs well with respect to absolute error. we also show that in both problem settings, very simple ideas given an algorithm which achieves optimal or close to optimal worst case absolute error. ethan j. bernstein matrix structure, polynomial arithmetic, and erasure-resilient encoding/decoding we exploit various matrix structures to decrease the running time and memory space of the known practical deterministic schemes for erasure-resilient encoding/decoding. polynomial interpolation and multipoint evaluation enable both encoding and decoding in nearly linear time but the overhead constants are large (particularly, for interpolation), and more straightforward quadratic time algorithms prevail in practice. we propose faster algorithms. at the encoding stage, we decrease the running time per information packet from c log2 r, for a large constant c, or from r (for practical encoding) to log r. for decoding, our improvement is by the factors c and n/log n, respectively, for the input of size n. our computations do not involve polynomial interpolation. multipoint polynomial evaluation is either also avoided or is confined to decoding. victor y. pan refining nondeterminism in relativizations of complexity classes xu mei-rui john e. doner ronald v. book metric invariants of tetrahedra via polynomial elimination we use grobner bases and the pedersen-roy-szpirglas real solution counting method to analyze systems of algebraic equations satisfied by metric invariants of the tetrahedron. the transformation from the geometric to the algebraic setting is effected using the cayley-menger determinant formula for the volume of a simplex. we prove that, in general, the four face areas, circumradius and volume together do not uniquely determine a tetrahedron, and that there exist non-regular tetrahedra that are uniquely determined just by the four face areas and circumradius. this settles two open problems posed by the american mathematical monthly in 1999. petr lisonek robert b. israel on the complexity of unique solutions christos h. papadimitriou using abstract interpretation to define a strictness type inference system b. monsuez every robust crcw pram can efficiently simulate a priority pram torben hagerup t. radzik probabilistic checking of proofs: a new characterization of np we give a new characterization of np: the class np contains exactly those languages l for which membership proofs (a proof that an input x is in l) can be verified probabilistically in polynomial time using logarithmic number of random bits and by reading sublogarithmic number of bits from the proof. we discuss implications of this characterization; specifically, we show that approximating clique and independent set, even in a very weak sense, is np- hard. 
sanjeev arora shmuel safra type analysis of logic programs in the presence of type definitions lunjin lu on finding the exact solution of a zero-one knapsack problem given a 0-1 knapsack problem with input drawn from a certain probability distribution, we show that for every ε > 0, there is a self-checking polynomial-time algorithm that finds an optimal solution with probability at least 1 - ε. we also prove some upper and lower bounds on random variables related to the problem. a. v. goldberg a. marchetti-spaccamela strong normalizability for the combined system of the typed lambda calculus and an arbitrary convergent term rewrite system we give a proof of strong normalizability of the typed λ-calculus extended by an arbitrary convergent term rewriting system, which provides the affirmative answer to the open problem proposed in breazu-tannen [1]. klop [6] showed that a combined system of the untyped λ-calculus and a convergent term rewriting system is not church-rosser in general, though both are church-rosser. it is well-known that the typed λ-calculus is convergent (church-rosser and terminating). breazu-tannen [1] showed that a combined system of the typed λ-calculus and an arbitrary church-rosser term rewriting system is again church-rosser. our strong normalization result in this paper shows that the combined system of the typed λ-calculus and an arbitrary convergent term rewriting system is again convergent. our strong normalizability proof is easily extended to the case of the second order (polymorphically) typed lambda calculus and the case in which the μ-reduction rule is added. m. okada asymptotic conditional probabilities for first-order logic motivated by problems that arise in computing degrees of belief, we consider the problem of computing asymptotic conditional probabilities for first-order formulas. that is, given first-order formulas φ and θ, we consider the number of structures with domain {1,…,n} that satisfy θ, and compute the fraction of them in which φ is true. we then consider what happens to this fraction as n grows; this parallels earlier work on the limiting probability of first-order formulas, except that now we are considering asymptotic conditional probabilities. although work has been done on special cases of asymptotic conditional probabilities, no general theory has been developed. this is probably due in part to the fact that it has been known that, if there is a binary predicate symbol in the vocabulary, asymptotic conditional probabilities do not always exist. we show that in this general case, almost all the questions one might want to ask (such as deciding whether the asymptotic probability exists) are highly undecidable. on the other hand, we show that the situation with unary predicates only is much better. if the vocabulary consists only of unary predicate and constant symbols, it is decidable whether the limit exists, and if it does, there is an effective algorithm for computing it. the complexity depends on two parameters: whether there is a fixed finite vocabulary or an infinite one, and whether there is a bound on the depth of quantifier nesting. adam j. grove joseph y. halpern daphne koller a trivial knot whose spanning disks have exponential size if a closed curve in space is a trivial knot (intuitively, one can untie it without cutting) then it is the boundary of some disk with no self-intersections. in this paper we investigate the minimum number of faces of a polyhedral spanning disk of a polygonal knot with n segments.
we exhibit a knot whose minimal spanning disk has exp(cn) faces, for some positive constant c. jack snoeyink space-bounded probabilistic computation: old and new stories ioan i. macarie abstract continuations: a mathematical semantics for handling full jumps continuation semantics is the traditional mathematical formalism for specifying the semantics of non-local control operations. modern lisp-style languages, however, contain advanced control structures like full functional jumps and control delimiters for which continuation semantics is insufficient. we solve this problem by introducing an abstract domain of rests of computations with appropriate operations. beyond being useful for the problem at hand, these abstract continuations turn out to have applications in a much broader context, e.g., the explication of parallelism, the modeling of control facilities in parallel languages, and the design of new control structures. matthias felleisen mitch wand daniel friedman bruce duba completeness and incompleteness of trace-based network proof systems most trace-based proof systems for networks of processes are known to be incomplete. extensions to achieve completeness are generally complicated and cumbersome. in this paper, a simple trace logic is defined and two examples are presented to show its inherent incompleteness. surprisingly, both examples consist of only one process, indicating that network composition is not a cause of incompleteness. axioms necessary and sufficient for the relative completeness of a trace logic are then presented. j. widom d. gries f. b. schneider factorization in zx : the searching phase in this paper we describe ideas used to accelerate the searching phase of the berlekamp---zassenhaus algorithm, the algorithm most widely used for computing factorizations in z[x]. our ideas do not alter the theoretical worst-case complexity, but they do have a significant effect in practice: especially in those cases where the cost of the searching phase completely dominates the rest of the algorithm. a complete implementation of the ideas in this paper is publicly available in the library ntl [16]. we give timings of this implementation on some difficult factorization problems. john abbott victor shoup paul zimmermann a bound on the shift function in terms of the busy beaver function bryant a. julstrom the isomorphism conjecture fails relative to a random oracle stuart a. kurtz stephen r. mahaney james s. royer fast deterministic approximate and exact parallel sorting torben hagerup rajeev raman relating typability and expressiveness in finite-rank intersection type systems (extended abstract) assaf j. kfoury harry g. mairson franklyn a. turbak j. b. wells author index author index time-space tradeoffs for some algebraic problems we study the time- space relationship of several algebraic problems such as matrix multiplication and matrix inversion. several results relating the algebraic properties of a set of functions to the structure of the graph of any straight-line program, that computes this set, are shown. some of our results are the following. multiplying m × n by n × p matrices with space s requires at least time t ≥ Ω(mnp/s). inverting an n × n matrix with space s requires at least time t ≥ Ω(n4/s). joseph ja'ja' fixed point languages, equality languages, and representation of recursively enumerable languages j. engelfriet g. 
rozenberg uniform closure properties of p-computable functions e kaltofen convergence complexity of optimistic rate based flow control algorithms (extended abstract) yehuda afek yishay mansour zvi ostfeld dynamic sets and their application in vdm shaoying liu john a. mcdermid arrangements of segments that share endpoints: single face results esther m. arkin dan halperin klara kedem joseph s. b. mitchell nir naor on acc0[pk] frege proofs alexis maciel toniann pitassi fractal structure of random programs peter kokol janez brest algorithms for generating fundamental cycles in a graph narsingh deo g. prabhu m. s. krishnamoorthy the tower of hanoi problem with arbitrary start and end positions e staples confluent and other types of thue systems ronald v. book a taxonomy of proof systems (part 1) oded goldreich maps and descendant husband trees (abstract only) map is an acronym for matching and permutation. a map is a finite set in which each member, x, has a spouse and a successor. the spouse function is a matching and the successor function is a permutation. the only tabulation required to describe a map is the successor function. the only primitive structure change is a successor swap. maps are efficient hosts to many data structures, including linked lists, trees, forests and digraphs. in each guest, each member, relation or operation is represented by a few members, relations or successor swaps in the host. a descendant husband tree, an example of a map, implements a binary tree, using only one pointer per node. left-child, right-child and mother are computable in a few steps. insertion or deletion of a subtree is accomplished with a few successor swaps. preorder traversal of a tree requires only two steps per node visited and no auxiliary storage. don morrison remark on algorithm 644 d. e. amos ip = pspace in this paper, it is proven that when both randomization and interaction are allowed, the proofs that can be verified in polynomial time are exactly those proofs that can be generated with polynomial space. adi shamir a composition approach to time analysis of first order lazy functional programs bror bjerner s. holmström typing a multi-language intermediate code andrew d. gordon don syme iterated pushdown automata and complexity classes an iterated pushdown is a pushdown of pushdowns of ... of pushdowns. an iterated exponential function is 2 to the 2 to the ... to the 2 to some polynomial. the main result is that nondeterministic 2-way and multi-head iterated pushdown automata characterize deterministic iterated exponential time complexity classes. this is proved by investigating both nondeterministic and alternating auxiliary iterated pushdown automata, for which similar characterization results are given. in particular it is shown that alternation corresponds to one more iteration of pushdowns. these results are applied to the 1-way iterated pushdown automata: (1) they form a proper hierarchy with respect to the number of iterations, (2) their emptiness problem is complete in deterministic iterated exponential time. joost engelfriet constructing arrangements optimally in parallel (preliminary version) michael t. goodrich finite automata over real-number alphabets (abstract only): some theoretical results and applications in this paper we look at finite state machines whose inputs are drawn from the real numbers.
such machines are basic models of the hybrid devices commonly found in microprocessor-based control environments. martin e. kaliski a generalization of ogden's lemma christopher bader arnaldo moura an o(n log n)-size fault-tolerant sorting network (extended abstract) yuan ma a solution to line routing problems on the continuous plane d. w. hightower some applications of a technique of sakoda and sipser b. ravikumar on the sum of squares of cell complexities in hyperplane arrangements boris aronov jiri matousek micha sharir a unified approach to approximating resource allocation and scheduling we present a general framework for solving resource allocation and scheduling problems. given a resource of fixed size, we present algorithms that approximate the maximum throughput or the minimum loss by a constant factor. our approximation factors apply to many problems, among which are: (i) real-time scheduling of jobs on parallel machines, (ii) bandwidth allocation for sessions between two endpoints, (iii) general caching, (iv) dynamic storage allocation, and (v) bandwidth allocation on optical line and ring topologies. for some of these problems we provide the first constant factor approximation algorithm. our algorithms are simple and efficient and are based on the local-ratio technique. we note that they can equivalently be interpreted within the primal-dual schema. amotz bar-noy reuven bar-yehuda ari freund joseph (seffi) naor baruch schieber an automata-theoretic approach to branching-time model checking translating linear temporal logic formulas to automata has proven to be an effective approach for implementing linear-time model-checking, and for obtaining many extensions and improvements to this verification method. on the other hand, for branching temporal logic, automata-theoretic techniques have long been thought to introduce an exponential penalty, making them essentially useless for model-checking. recently, bernholtz and grumberg [1993] have shown that this exponential penalty can be avoided, though they did not match the linear complexity of non-automata-theoretic algorithms. in this paper, we show that alternating tree automata are the key to a comprehensive automata-theoretic framework for branching temporal logics. not only can they be used to obtain optimal decision procedures, as was shown by muller et al., but, as we show here, they also make it possible to derive optimal model-checking algorithms. moreover, the simple combinatorial structure that emerges from the automata-theoretic approach opens up new possibilities for the implementation of branching-time model checking and has enabled us to derive improved space complexity bounds for this long-standing problem. orna kupferman moshe y. vardi pierre wolper unification via se-style of explicit substitution mauricio ayala-rincón fairouz kamareddine junk facts and the slowsort m. david condic a taxonomy of proof systems (part 2) oded goldreich annotated type and effect systems flemming nielson functional reactive programming from first principles functional reactive programming, or frp, is a general framework for programming hybrid systems in a high-level, declarative manner. the key ideas in frp are its notions of behaviors and events. behaviors are time-varying, reactive values, while events are time-ordered sequences of discrete-time event occurrences.
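a language-neutral sketch (fran itself is embedded in haskell; the names and combinators below are illustrative, not the fran api) of the behavior/event idea in the frp entry above: behaviors are functions of continuous time, events are time-stamped occurrence lists, and a stream-style implementation samples behaviors at a fixed interval, which is the kind of approximation the paper analyzes.

```python
import math

# behaviors: functions from time to values
time = lambda t: t
wave = lambda t: math.sin(t)

def lift2(f, b1, b2):
    """pointwise lifting of an ordinary function to behaviors."""
    return lambda t: f(b1(t), b2(t))

# events: finite lists of (time, value) occurrences
clicks = [(0.5, "click"), (2.0, "click")]

def switch(b, ev, handler):
    """behave like b until the first occurrence of ev, then like handler(value)."""
    def out(t):
        past = [(t0, v) for (t0, v) in ev if t0 <= t]
        return handler(past[-1][1])(t) if past else b(t)
    return out

# sampling at a fixed interval approximates the continuous semantics
b = switch(wave, clicks, lambda _v: lift2(lambda x, y: x + y, time, wave))
print([round(b(k * 0.25), 3) for k in range(10)])
```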
frp is the essence of fran, a domain-specific language embedded in haskell for programming reactive animations, but frp is now also being used in vision, robotics and other control systems applications. in this paper we explore the formal semantics of frp and how it relates to an implementation based on streams that represent (and therefore only approximate) continuous behaviors. we show that, in the limit as the sampling interval goes to zero, the implementation is faithful to the formal, continuous semantics, but only when certain constraints on behaviors are observed. we explore the nature of these constraints, which vary amongst the frp primitives. our results show both the power and limitations of this approach to language design and implementation. as an example of a limitation, we show that streams are incapable of representing instantaneous predicate events over behaviors. zhanyong wan paul hudak on some deterministic space complexity problems in this paper we give a complete problem in dspace(n). the problem is whether there exists a cycle in the connected component containing (0,0,...,0) in the graph gp of the zeroes of a polynomial p over gf(2) under a suitable natural coding. hence the deterministic space complexity of this problem is O(n) but not o(n). we give as well several problems for which we can obtain very close upper and lower deterministic space bounds. for example, the deterministic space complexity to determine whether there exists a cycle in the graph of the set of assignments satisfying a boolean formula is O(n/log n) but not o(n/log^2 n). hong jia-wei a note on one-way functions and polynomial-time isomorphisms k i ko t j long d z du simplifying a polygonal subdivision while keeping it simple we study the problem of simplifying a polygonal subdivision, subject to a given error bound, $\epsilon$, and subject to maintaining the topology of the input, while not introducing new (steiner) vertices. in particular, we require that the simplified chains may not cross themselves or cross other chains. in gis applications, for example, we are interested in simplifying the banks of a river without the left and right banks getting "tangled" and without "islands" becoming part of the land mass. maintaining topology during subdivision simplification is an important constraint in many real gis applications. we give both theoretical and experimental results. (a) we prove that the general problem we are trying to solve is in fact difficult to solve, even approximately: we show that it is min pb-complete and that, in particular, assuming p $\neq$ np, in the general case we cannot obtain in polynomial time an approximation within a factor $n^{1/5-\delta}$ of an optimal solution. (b) we propose some heuristic methods for solving the problem, which we have implemented. our experimental results show that, in practice, we get quite good simplifications in a reasonable amount of time. regina estkowski joseph s. b. mitchell computing amorphous program slices using dependence graphs david binkley almost optimal polyhedral separators we animate two deterministic polynomial time methods for finding a separator for two nested convex polyhedra in 3d. while this problem is np-complete, we show a reduction to set cover. we then animate the greedy method and the weighted method.
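a minimal sketch (illustrative only) of the greedy set-cover heuristic that the polyhedral-separator entry just above animates after reducing separator finding to set cover: repeatedly pick the set covering the most still-uncovered elements, which is within a ln(n) factor of the optimal cover size. the example family is mine.

```python
def greedy_set_cover(universe, sets):
    uncovered = set(universe)
    cover = []
    while uncovered:
        # pick the set with the largest intersection with what remains
        best = max(sets, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            raise ValueError("the given sets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

family = [{1, 2, 3}, {3, 4, 5}, {5, 6, 7}, {1, 4, 7}, {2, 6}]
print(greedy_set_cover(range(1, 8), family))   # [{1, 2, 3}, {5, 6, 7}, {3, 4, 5}]
```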
herve bronnimann tetrahedral mesh generation by delaunay refinement jonathan richard shewchuk handling face coincidence in a boolean algorithm for breps julio carrera comparison-sorting and selecting in totally monotone matrices an m × n matrix a is called totally monotone if for all i1 < i2 and j1 < j2, a[i1, j1] > a[i1, j2] implies a[i2, j1] > a[i2, j2]. we consider the complexity of comparison-based selection and sorting algorithms in such matrices. although our selection algorithm counts only comparisons, its advantage over all previous work is that it can also handle selection of elements of different (and arbitrary) ranks in different rows (or even selection of elements of several ranks in each row), in time which is slightly better than that of the best known algorithm for selecting elements of the same rank in each row. we also determine the decision tree complexity of sorting each row of a totally monotone matrix up to a factor of at most log n by proving a quadratic lower bound and by slightly improving the upper bound. no nontrivial lower bound was previously known for this problem. in particular for the case m = n we prove a tight Ω(n^2) lower bound. this bound holds for any decision-tree algorithm, and not only for a comparison-based algorithm. the lower bound is proved by an exact characterization of the bitonic totally monotone matrices, whereas our new algorithms depend on techniques from parallel comparison algorithms. noga alon yossi azar objects that cannot be taken apart with two hands it has been conjectured that every configuration c of convex objects in 3-space with disjoint interiors can be taken apart by translation with two hands: that is, some proper subset of c can be translated to infinity without disturbing its complement. we show that the conjecture holds for five or fewer objects and give a counterexample with six objects. we extend the counterexample to a configuration that cannot be taken apart with two hands using arbitrary isometries (rigid motions). jack snoeyink jorge stolfi an indexed model of recursive types for foundational proof-carrying code the proofs of "traditional" proof carrying code (pcc) are type-specialized in the sense that they require axioms about a specific type system. in contrast, the proofs of foundational pcc explicitly define all required types and explicitly prove all the required properties of those types assuming only a fixed foundation of mathematics such as higher-order logic. foundational pcc is both more flexible and more secure than type-specialized pcc. for foundational pcc we need semantic models of type systems on von neumann machines. previous models have been either too weak (lacking general recursive types and first-class function-pointers), too complex (requiring machine-checkable proofs of large bodies of computability theory), or not obviously applicable to von neumann machines. our new model is strong, simple, and works either in λ-calculus or on pentiums. andrew w. appel david mcallester on the relationship between permission and obligation in two interesting papers ((1983), (1986)) thorne mccarty has presented a semantics for the central deontic concepts, permission and obligation, based upon a semantics for an action language. the latter, in turn, was constructed along lines deriving from pratt-harel dynamic logic.
i shall here offer some critical comments on mccarty's analysis of the relationship between obligation and permission, and of his account of so-called "free-choice" permissions; these come in section ii, below. in section i an outline sketch is given of the main features of mccarty's semantics with which the criticism will be concerned. a. j. i. jones shortest paths in the plane with polygonal obstacles we present a practical algorithm for finding minimum-length paths between points in the euclidean plane with (not necessarily convex) polygonal obstacles. prior to this work, the best known algorithm for finding the shortest path between two points in the plane required o(n^2 log n) time and o(n^2) space, where n denotes the number of obstacle edges. assuming that a triangulation or a voronoi diagram for the obstacle space is provided with the input (if it is not, either one can be precomputed in o(n log n) time), we present an o(kn) time algorithm, where k denotes the number of "islands" (connected components) in the obstacle space. the algorithm uses only o(n) space and, given a source point s, produces an o(n) size data structure such that the distance between s and any other point x in the plane (x is not necessarily an obstacle vertex or a point on an obstacle edge) can be computed in o(1) time. the algorithm can also be used to compute shortest paths for the movement of a disk (so that optimal movement for arbitrary objects can be computed to the accuracy of enclosing them with the smallest possible disk). james a. storer john h. reif feasibility of "perfect" function evaluation c. b. dunham topologically sweeping an arrangement h edelsbrunner l j guibas probing convex polytopes d dobkin h edelsbrunner c k yap automata theoretic techniques for modal logics of programs: (extended abstract) we present a new technique for obtaining decision procedures for modal logics of programs. the technique centers around a new class of finite automata on infinite trees for which the emptiness problem can be solved in polynomial time. the decision procedures then consist of constructing an automaton af for a given formula f, such that af accepts some tree if and only if f is satisfiable. we illustrate our technique by giving an exponential decision procedure for deterministic propositional dynamic logic and a variant of the μ-calculus of kozen. moshe y. vardi pierre wolper improving symbolic traversals by means of activity profiles gianpiero cabodi paolo camurati stefano quer a parallelized search strategy for solving a multicriteria aircraft routing problem james j. grimm gary b. lamont andrew j. terzuoli technical correspondence: i. the equal-means case diane crawford np might not be as easy as detecting unique solutions richard beigel harry buhrman lance fortnow unified method for determining canonical forms of a matrix let _k_ be a field and _a_ a matrix with entries in _k._ it is well known that if det(_xi - a_) splits in _k,_ there exists a regular matrix _p_ with entries in _k,_ such that _p_^-1 _ap_ is a diagonal matrix of jordan blocks. two problems arise that have been well treated in the literature. what should be done if the roots of _xi - a_ do not belong to _k?_ how can one compute _p?_ in this note, i answer these questions in a unified way. j. m. de olazábal an improved context-free recognizer a new algorithm for recognizing and parsing arbitrary context-free languages is presented, and several new results are given on the computational complexity of these problems.
the new algorithm is of both practical and theoretical interest. it is conceptually simple and allows a variety of efficient implementations, which are worked out in detail. two versions are given which run in faster than cubic time. surprisingly close connections between the cocke-kasami-younger and earley algorithms are established which reveal that the two algorithms are "almost" identical. susan l. graham michael harrison and walter l. ruzzo space-bounded probabilistic game automata anne condon equations between regular terms and an application to process logic regular terms with the kleene operations ∪, ;, and * can be thought of as operators on languages, generating other languages. an equation r1 = r2 between two such terms is said to be satisfiable just in case languages exist which make this equation true. we show that the satisfiability problem even for *-free regular terms is undecidable. similar techniques are used to show that a very natural extension of the process logic of harel, kozen and parikh is undecidable. ashok chandra joe halpern albert meyer rohit parikh inadequacy of computable loop invariants hoare logic is a widely recommended verification tool. there is, however, a problem of finding easily checkable loop invariants; it is known that decidable assertions do not suffice to verify while programs, even when the pre- and postconditions are decidable. we show here a stronger result: decidable invariants do not suffice to verify single-loop programs. we also show that this problem arises even in extremely simple contexts. let n be the structure consisting of the set of natural numbers together with the functions s(x) = x+1 and d(x) = 2x. there is an asserted single-loop program over n, in the variables x, y, z, with decidable pre- and postconditions, such that any loop invariant i(x,y,z) for this asserted program is undecidable. andreas blass yuri gurevich a computational geometry workbench we are constructing a workbench for computational geometry. this is intended to provide a framework for the implementation, testing, demonstration and application of algorithms in computational geometry. the workbench is being written in smalltalk/v using an apple macintosh ii. the object-oriented model used in smalltalk is well-suited to algorithms manipulating geometric objects. in addition, the programming environment can be easily extended, and provides excellent graphics facilities, data abstraction, encapsulation, and incremental modification. we have completed the design and implementation of the workbench platform, insofar as such a system can ever be considered complete. among the features of the system are: an interactive graphical environment, including operations for the creation and editing of geometric figures and for running algorithms on these figures; support for high-level, representation-independent geometric objects (points, lines, polygons, …), geometric data structures (segment trees, range trees, …), non-geometric data structures (finger trees, splay trees, heaps, …), and "standard" algorithmic tools in as general a form as possible (algorithms currently available in the system include tarjan and van wyk's triangulation of a simple polygon, fortune's voronoi diagram, preparata's chain decomposition, and melkman's convex hull algorithm); tools for the animation of geometric algorithms; high-level graphical and symbolic debugging facilities; portability, due to the separation of the machine-independent code and the machine-dependent user-interface;
and automatic handling of basic operations (device-independent graphics, storage management), allowing the implementor to focus on algorithmic issues. our group is currently working on extensions in two directions: implementing additional algorithms from two-dimensional computational geometry, and providing the framework for implementations of three-dimensional algorithms. we are also conducting comparison studies of different algorithms and data structures, including a comparison of different triangulation and convex hull algorithms for large input sizes and an empirical test of the dynamic optimality conjecture of sleator and tarjan using both splay and finger trees in the tarjan and van wyk triangulation. the workbench is being demonstrated during this symposium. a. knight j. may j. mcaffer t. nguyen j.-r. sack confluent reductions: abstract properties and applications to term rewriting systems gerard huet a threshold of ln n for approximating set cover given a collection f of subsets of s = {1,…,n}, set cover is the problem of selecting as few as possible subsets from f such that their union covers s, and max k-cover is the problem of selecting k subsets from f such that their union has maximum cardinality. both these problems are np-hard. we prove that (1 - o(1)) ln n is a threshold below which set cover cannot be approximated efficiently, unless np has slightly superpolynomial time algorithms. this closes the gap (up to low-order terms) between the ratio of approximation achievable by the greedy algorithm (which is (1 - o(1)) ln n), and previous results of lund and yannakakis, that showed hardness of approximation within a ratio of log_2 n / 2 ≈ 0.72 ln n. for max k-cover, we show an approximation threshold of (1 - 1/e) (up to low-order terms), under the assumption that p ≠ np. uriel feige rectilinear and polygonal p-piercing and p-center problems micha sharir emo welzl making hard constraints soft fred t. krogh factoring multivariate polynomials over finite fields this paper describes an algorithm for the factorization of multivariate polynomials with coefficients in a finite field that is polynomial-time in the degrees of the polynomial to be factored. the algorithm makes use of a new basis reduction algorithm for lattices over gf(q). arjen k. lenstra on the formalization of architectural types with process algebras architectural styles play an important role in software engineering as they convey codified principles and experience which help the construction of software systems with high levels of efficiency and confidence. we address the problem of formalizing and analyzing architectural styles in an operational setting by introducing the intermediate abstraction of architectural type. we develop the concept of architectural type in a process algebraic framework because of its modeling adequacy and the availability of means, such as milner's weak bisimulation equivalence, which allow us to reason compositionally and efficiently about the well-formedness of architectural types. marco bernardo paolo ciancarini lorenzo donatiello on resetting dlba's oscar h. ibarra min-max sort: a simple sorting method (abstract only) a simple sorting algorithm which can be considered as a double-ended selection sort is presented. the algorithm, called min-max sort, is based on the optimal method for simultaneously finding the smallest and the largest elements in an array [2].
the smallest and the largest elements found are pushed respectively to the left and right of the array, and the process is repeated on the middle portion. the correctness and termination of min-max sort are easily demonstrated through the use of standard loop invariant ideas. the algorithm fits the "hard split/easy join" paradigm of merritt [1]. let n be even. given an array a[1..n], it is said to be in a "rainbow pattern" if for each i := 1 to n/2 we have a[i] ≤ a[n+1-i]. the following step 1 establishes the rainbow pattern in a. step 1: for i := 1 to n div 2 do if a[i] > a[n+1-i] then swap(a[i],a[n+1-i]). step 1 requires n/2 comparisons. observe that when an array is in a rainbow pattern it is guaranteed that the smallest element of the array is in the left half of the array and the largest in the second half. step 2 is to find p and q such that a[p] ≤ a[i] for i = 1 to n/2, and a[q] ≥ a[j] for j = n/2+1 to n, and swap(a[1],a[p]) and swap(a[q],a[n]). step 2 requires (n/2-1)+(n/2-1) comparisons. thus we have brought the smallest and the largest elements of a into the first and the last positions, respectively, at a cost of 3n/2 - 2 comparisons (in contrast to 2n - 2 comparisons required in the straightforward method). observe that the rainbow pattern in a is still preserved, except perhaps for the two pairs (a[p],a[n+1-p]) and (a[q],a[n+1-q]). the rainbow pattern is reinstated, in just two comparisons, by step 3: if a[p] > a[n+1-p] then swap(a[p],a[n+1-p]), and if a[n+1-q] > a[q] then swap(a[q],a[n+1-q]). the general form of step 2 is step 2′: find p and q such that a[p] ≤ a[i] for all i = k to n/2, and a[q] ≥ a[j] for all j = n/2+1 to n+1-k. the algorithm step 1; for k := 1 to n div 2 - 1 do begin step 2′; step 3; end; sorts the array a. step 1 requires n/2 comparisons; step 2′ requires 2(n/2-k) comparisons and step 3 requires two comparisons. thus the total number of comparisons required to sort the array a is n/2 + Σ_{k=1}^{n/2-1} [2(n/2-k) + 2] = (n^2 + 4n - 8)/4. the size of the central portion decreases steadily by 2 and the invariant assures that the smallest and largest of the central portion can be found with a number of comparisons approximately equal to the size of the portion. the algorithm is included in a discussion by zahn [3]. narayan murthy relationships between and among models mylopoulos: when we talk about a model, it can be either a program snapshot or execution, or a program, a data base, a conceptual schema, or a knowledge base. we can think of a program as consisting of units of some sort, e.g., procedures, assertions, data types; and they are related by relationships of various kinds. some relationships are user-defined and dependent on the domain the model is dealing with. on the other hand, some of the relationships used to describe the model are primitive, in the sense that their semantics are well-defined and embedded in the modelling framework in terms of which the model has been defined. some examples from the three areas being represented here are isa, part-of, instance-of. certain other relationships include procedural attachment, used in ai to associate procedures to data classes to specify operations on instances of the classes. in programming languages, considering statements as the units, statement sequencing is a primitive relationship between these units. considering algol-like begin blocks as the units, scoping rules are relationships between units (blocks).
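returning to the min-max sort abstract above: the step-by-step description there translates directly into code. a minimal, hypothetical python sketch of the procedure as described (not the author's code; indices are 0-based here, while the abstract uses 1-based a[1..n]):

    def min_max_sort(a):
        # double-ended selection sort: establish the rainbow pattern, then repeatedly
        # move the minimum of the left half and the maximum of the right half of the
        # shrinking central portion to its two ends.
        n = len(a)
        assert n % 2 == 0, "the abstract assumes n is even"
        # step 1: establish the rainbow pattern a[i] <= a[n-1-i]
        for i in range(n // 2):
            if a[i] > a[n - 1 - i]:
                a[i], a[n - 1 - i] = a[n - 1 - i], a[i]
        for k in range(n // 2 - 1):
            left, right = k, n - 1 - k
            # step 2': locate the minimum of the left half and the maximum of the
            # right half of the central portion, then swap them to its ends
            p = min(range(left, n // 2), key=lambda i: a[i])
            q = max(range(n // 2, right + 1), key=lambda i: a[i])
            a[left], a[p] = a[p], a[left]
            a[q], a[right] = a[right], a[q]
            # step 3: reinstate the rainbow pattern for the two possibly disturbed pairs
            for i in (p, n - 1 - q):
                j = n - 1 - i
                if a[i] > a[j]:
                    a[i], a[j] = a[j], a[i]
        return a

    print(min_max_sort([5, 2, 8, 1, 9, 3, 7, 4]))  # [1, 2, 3, 4, 5, 7, 8, 9]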
procedure activation rules between blocks are another example of a relationship that can be considered as primitive and embedded in the modelling framework. simula concatenation, which allows the definition of classes to be given in terms of other classes, gives a relationship between classes. the association of operations to a data type can also be treated as a relationship that has been used in pls. the ω sequence problem for dol systems is decidable karel culik tero harju a comparison of sequential delaunay triangulation algorithms peter su robert l. scot drysdale efficient algorithms for generalized intersection searching on non-iso-oriented objects generalized intersection searching problems are a class of geometric query-retrieval problems where the questions of interest concern the intersection of a query object with aggregates of geometric objects (rather than with individual objects). this class contains, as a special case, the well-studied class of standard intersection searching problems and is rich in applications. unfortunately, the solutions known for the standard problems do not yield efficient solutions to the generalized problems. recently, efficient solutions have been given for generalized problems where the input and query objects are iso-oriented (i.e., axes-parallel) or where the aggregates satisfy additional properties (e.g., connectedness). in this paper, efficient algorithms are given for several generalized problems involving non-iso-oriented objects. these problems include: generalized halfspace range searching, segment intersection searching, triangle stabbing, and triangle range searching. the techniques used include: computing suitable sparse representations of the input, persistent data structures, and filtering search. prosenjit gupta ravi janardan michiel smid infinitary control flow analysis: a collecting semantics for closure analysis flemming nielson hanne riis nielson algorithms for subset testing and finding maximal sets in this paper we consider two related problems: subset testing and finding maximal sets. first, consider a sequence of n operations, where each operation either creates a set, inserts (deletes) an element into (from) a set, queries whether a particular element is in a set, queries whether or not one set is a subset of another, or queries whether or not the intersection of two sets is empty. we show that for any integer k, one can implement subset and intersection testing in time o(n^{1-(1/k)} log n) and all of the other operations in time o(n^{1/k} log n). it requires o(n^{(k+1)/k}) space. when k = 2, this yields a worst case time complexity of o(n^{1/2} log n) per operation, and uses o(n^{3/2}) space. second, consider a set of sets, where the total size of the input is o(n). we show that one can find those sets that are maximal (a set is maximal if it is not contained in any other set) in time o(mn), where m is the number of maximal sets. daniel m. yellin type systems luca cardelli solving projective complete intersection faster in this paper, we present a new method for solving square polynomial systems with no zero at infinity. we analyze its complexity, which indicates substantial improvements, compared with the previously known methods for solving such systems. we describe a framework for symbolic and numeric computations, developed in c++, in which we have implemented this algorithm. we mention the techniques that are involved in order to build efficient codes and compare with existing software.
we end by some applications of this method, considering in particular an autocalibration problem in computer vision and an identification problem in signal processing, and report on the results of our first implementation. bernard mourrain philippe trebuchet testing polynomials which are easy to compute (extended abstract) we exploit the fact that the set of all polynomials p ∈ ℤ[x1,…,xn] of degree ≤ d which can be evaluated with ≤ v nonscalar steps can be embedded into a zariski-closed affine set w(d,n,v), with dim w(d,n,v) ≤ (v+1+n)^2 and deg w(d,n,v) ≤ (2vd)^{(v+1+n)^2}. as a consequence we prove that for u := 2v(d+1)^2 and s := 6(v+1+n)^2 there exist a_1,…,a_s ∈ [u]^n := {1,2,…,u}^n such that for all polynomials p ∈ w(d,n,v): p(a_1) = p(a_2) = … = p(a_s) = 0 implies p = 0. this means that a_1,…,a_s is a correct test sequence for a zero test on all polynomials in w(d,n,v). moreover, "almost every" sequence a_1,…,a_s ∈ [u]^n is such a correct test sequence for w(d,n,v). the existence of correct test sequences a_1,…,a_s ∈ [u]^n is established by a counting argument, without constructing a correct test sequence. we even show that it is beyond the known methods to establish (i.e., to construct and to prove correct) such a short correct test sequence for w(d,n,v). we prove that, given such a short, correct test sequence for w(d,n,v), we can efficiently construct a multivariate polynomial p ∈ ℤ[x1,…,xn] with deg(p) ≤ d and small integer coefficients such that p ∉ w(d,n,v). for v > n log d, lower bounds of this type are beyond our present methods in algebraic complexity theory. j. heintz c. p. schnorr the tight deterministic time hierarchy let k be a constant ≥ 2, and let us consider only deterministic k-tape turing machines. we assume t2(n) > n and that t2 is computable in time t2. then there is a language which is accepted in time t2, but not accepted in any time t1 with t1(n) = o(t2(n)). furthermore, we obtain a strong hierarchy (isomorphic to the rationals q) for languages accepted in fixed space and variable time. martin furer efficient parallel computation of arrangements of hyperplanes in d dimensions torben hagerup h. jung e. welzl turtle graphics: hidden features in apl2 twenty-five years ago martin gardner wrote an article in "mathematical games" of the scientific american with the title _fantastic patterns traced by programmed "worms"_ [gar 1]. later on these worms were called "turtles". these turtle graphics are well known from the logo-system [ab/dis]. these graphics are also vector graphics, made not by setting absolute coordinates but by setting relative increments of distances and angles. with tiny apl2 idioms i have developed many 2d-graphics. it has happened in a short time, in a normal manner and in dialogue form. my top is "_one-liner as eye liner_". hendrik rama problems, puzzles, challenges this new section is for the submission of problems, puzzles and challenges for our computer algebra systems to solve, and for publishing the computer solutions to the above. we start the problem section by presenting some problems that appear at first glance to be purely numerical, that is, they look like they should be solvable by using a purely numerical package. however, you may find that this just leads to an error message when you try to solve them using a purely numerical package. can our computer algebra systems come to the rescue and solve the following problems? g. j. fee m. b.
monagan individual sequence prediction - upper bounds and application for complexity chamy allenberg monotone circuits for matching require linear depth r. raz a. wigderson table-automata / finite co-finite languages paul cull an operator calculus this paper extends a line of apl development presented in a sequence of papers [1-7] over the past six years. the main topics addressed are the interactions of operators such as rank, composition, derivative, and inverse (i.e., the beginnings of a calculus of operators), a simplification in the complement of attributes tentatively presented in [6], and a treatment of the shapes of individual results (as defined in [7]) in the case of empty frames. brief treatments are also given to a number of smaller matters: a transliteration or token substitution facility, the treatment of niladic functions, a custom (variant) operator, the obsolescence of certain system variables, and some changes in the function definition operator and in the treatment of supernumerary axes. kenneth e. iverson roland pesch j. henri schueler work-time-optimal parallel algorithms for string problems artur czumaj zvi galil leszek gasieniec kunsoo park wojciech plandowski a parallel repetition theorem ran raz specialization of inductively sequential functional logic programs maría alpuente michael hanus salvador lucas germán vidal two infinite sets of primes with fast primality tests infinite sets p and q of primes are described, p ⊆ q. for any natural number n it can be decided if n ∈ p in (deterministic) time o((log n)^9). this answers affirmatively the question of whether there exists an infinite set of primes whose membership can be tested in polynomial time, and is the main result of the paper. also, for every n ∈ q, we show how to produce at random, in expected time o((log n)^3), a certificate of length o(log n) which can be verified in (deterministic) time o((log n)^3); this is less than the time needed for two exponentiations and is much faster than existing methods. finally it is important that p is relatively dense (at least cn^{2/3}/log n elements less than n). elements of q in a given range may be generated quickly, but it would be costly for an adversary to search q in this range; this could be useful in cryptography. jános pintz william steiger endre szemeredi a lower bound for parallel string matching dany breslauer zvi galil layout of the batcher bitonic sorter (extended abstract) shimon even s. muthukrishnan michael s. paterson suleyman cenk sahinalp how to compute fast a function and all its derivatives: a variation on the theorem of baur-strassen jacques morgenstern a note on the preconditioning for factorization of homogeneous polynomials s. moritsugu e. goto an algorithm for parallel computation of partial sums an algorithm for parallel computation of several partial sums is proposed. the partial sums are partitioned into sets of r sums each. all sets are computed in parallel. in each set the redundancy shown to be inherent to the problem is utilized to reduce the required computation. in addition to a memory-accumulator architecture proposed earlier to implement this algorithm, a permuter-adder tree implementation is considered. both approaches admit of implementation in vlsi, ccd or software form. the addition operation could be replaced by any commutative and associative binary operation, with implications for a wide class of applications. adly t. fam crossing families boris aronov paul erdos wayne goddard daniel j.
kleitman michael klugerman jános pach leonard j. schulman trading group theory for randomness in a previous paper [bs] we proved, using the elements of the theory of nilpotent groups, that some of the fundamental computational problems in matrix groups belong to np. these problems were also shown to belong to conp, assuming an unproven hypothesis concerning finite simple groups. the aim of this paper is to replace most of the (proven and unproven) group theory of [bs] by elementary combinatorial arguments. the result we prove is that relative to a random oracle b, the mentioned matrix group problems belong to (np ∩ conp)^b. the problems we consider are membership in and order of a matrix group given by a list of generators. these problems can be viewed as multidimensional versions of a close relative of the discrete logarithm problem. hence np ∩ conp might be the lowest natural complexity class they may fit in. we remark that the results remain valid for black box groups where group operations are performed by an oracle. the tools we introduce seem interesting in their own right. we define a new hierarchy of complexity classes am(k) "just above np", introducing arthur vs. merlin games, the bounded-away version of papadimitriou's games against nature. we prove that in spite of their analogy with the polynomial time hierarchy, the finite levels of this hierarchy collapse to am=am(2). using a combinatorial lemma on finite groups [be], we construct a game by which the nondeterministic player (merlin) is able to convince the random player (arthur) about the relation |g| = n provided arthur trusts conclusions based on statistical evidence (such as a solovay-strassen type "proof" of primality). one can prove that am consists precisely of those languages which belong to np^b for almost every oracle b. our hierarchy has an interesting, still unclarified relation to another hierarchy, obtained by removing the central ingredient from the user vs. expert games of goldwasser, micali and rackoff. l babai the activity of a variable and its relation to decision trees the construction of sequential testing procedures from functions of discrete arguments is a common problem in switching theory, software engineering, pattern recognition, and management. the concept of the activity of an argument is introduced, and a theorem is proved which relates it to the expected testing cost of the most general type of decision trees. this result is then extended to trees constructed from relations on finite sets and to decision procedures with cycles. these results are used, in turn, as the basis for a fast heuristic selection rule for constructing testing procedures. finally, some bounds on the performance of the selection rule are developed. b. e. moret m. thomason and r. c. gonzalez a generalized euler-poincare equation jeff a. heisserman a theorem on probabilistic constant depth computations miklos ajtai michael ben-or the two-processor scheduling problem is in r-nc the two-processor scheduling problem is perhaps the most basic problem in scheduling theory, and several efficient algorithms have been discovered for it. however, these algorithms are inherently sequential in nature. we give a fast parallel (r-nc) algorithm for this problem. interestingly enough, our algorithm for this purely combinatoric-looking problem draws on some powerful algebraic methods.
u vazirani v v vazirani discovery through rough set theory wojciech ziarko enumerating order types for small sets with applications order types are a means to characterize the combinatorial properties of a finite point configuration. in particular, the crossing properties of all straight-line segments spanned by a planar $n$-point set are reflected by its order type. we establish a complete and reliable data base for all possible order types of size $n=10$ or less. the data base includes a realizing point set for each order type in small integer grid representation. to our knowledge, no such project has been carried out before. we substantiate the usefulness of our data base by applying it to several problems in computational and combinatorial geometry. problems concerning triangulations, simple polygonalizations, complete geometric graphs, and $k$-sets are addressed. this list of possible applications is not meant to be exhaustive. we believe our data base to be of value to many researchers who wish to examine their conjectures on small point configurations. oswin aichholzer franz aurenhammer hannes krasser pac-learnability of determinate logic programs the field of inductive logic programming (ilp) is concerned with inducing logic programs from examples in the presence of background knowledge. this paper defines the ilp problem, and describes the various syntactic restrictions that are commonly used for learning first-order representations. we then derive some positive results concerning the learnability of these restricted classes of logic programs, by reduction to a standard propositional learning problem. more specifically, k-clause predicate definitions consisting of determinate, function-free, non-recursive horn clauses with variables of bounded depth are polynomially learnable under simple distributions. similarly, recursive k-clause definitions are polynomially learnable under simple distributions if we allow existential and membership queries about the target concept. saso dzeroski stephen muggleton stuart russell bandwidth-based lower bounds on slowdown for efficient emulations of fixed-connection networks this paper presents a new method for obtaining lower bounds on the slowdown of efficient emulations between network machines based on their communication bandwidth. the proofs measure the communication complexity of a message pattern by viewing its graph as a network machine and measuring the communication bandwidth β. this approach yields an intuitive lower bound on the time to route a communication pattern represented by multigraph c on a host machine h (with uniform load) as t ≥ Ω(β(c)/β(h)), and thus a lower bound on the slowdown of emulating guest machine g on host h as the ratio of their communication bandwidths. clyde p. kruskal kevin j. rappoport efficient construction of lr(k) states and tables a new method for building lr(k) states and parsing tables is presented. the method aims at giving a feasible construction of a collection of lr(k) parsing tables, especially when k > 1, for nontrivial grammars. to this purpose, the algorithm first attempts to build a set of normal states for the given grammar, each one associated with a single parsing action in {accept, reduce, shift}. when such an action cannot be uniquely determined, that is, when up to k input symbols have to be examined (inadequacy), further states, belonging to a new type, called look-ahead states, are computed. the action associated with inadequate states is a new parsing action, look.
states are built without actual computation of the first_k and eff_k functions; that is, nonterminals are kept in the context string of items composing each state, and their expansion to terminals is deferred until indispensable to solve inadequacy. the aforementioned method is illustrated; then the canonical collection of states and the canonical tables are compared with those obtained from the proposed method. a sufficient condition is stated, by which the size of parsing tables, obtained by applying this new method, is smaller than that of canonical tables. experimental results show that such a condition is verified by the grammars of several programming languages and that significant speed is gained by avoiding the computation of the first_k function. m. anicona g. dodero v. gianuzzi m. morgavi the pl hierarchy collapses mitsunori ogihara large integer project brian cardarello matthew jarvis chhean saur polynomial learnability of semilinear sets naoki abe optimal bounds for decision problems on the crcw pram optimal Ω(log n/log log n) lower bounds on the time for crcw prams with polynomially bounded numbers of processors or memory cells to compute parity and a number of related problems are proven. a strict time hierarchy of explicit boolean functions of n bits on such machines that holds up to o(log n/log log n) time is also exhibited. that is, for every time bound t within this range a function is exhibited that can be easily computed using polynomial resources in time t but requires more than polynomial resources to be computed in time t - 1. finally, it is shown that almost all boolean functions of n bits require log n - log log n + Ω(1) time when the number of processors is at most polynomial in n. the bounds do not place restrictions on the uniformity of the algorithms nor on the instruction sets of the machines. paul beame johan hastad locality of order-invariant first-order formulas a query is local if the decision of whether a tuple in a structure satisfies this query only depends on a small neighborhood of the tuple. we prove that all queries expressible by order-invariant first-order formulas are local. martin grohe thomas schwentick sparse dynamic programming i: linear cost functions dynamic programming solutions to a number of different recurrence equations for sequence comparison and for rna secondary structure prediction are considered. these recurrences are defined over a number of points that is quadratic in the input size; however only a sparse set matters for the result. efficient algorithms for these problems are given, when the weight functions used in the recurrences are taken to be linear. the time complexity of the algorithms depends almost linearly on the number of points that need to be considered; when the problems are sparse this results in a substantial speed-up over known algorithms. david eppstein zvi galil raffaele giancarlo giuseppe f. italiano approximability and nonapproximability results for minimizing total flow time on a single machine hans kellerer thomas tautenhahn gerhard j. woeginger polynomial-size nonobtuse triangulation of polygons marshall bern david eppstein building and using polyhedral hierarchies david dobkin ayellet tal two nonlinear lower bounds we prove the following lower bounds for on-line computation. 1) simulating two-tape nondeterministic machines by one-tape machines requires Ω(n log log n) time. 2) simulating k-tape (deterministic) machines by machines with k pushdown stores requires Ω(n log^{1/(k+1)} n) time.
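as a concrete illustration of the sparsity idea in the sparse dynamic programming abstract above: the classical hunt-szymanski approach to the longest common subsequence computes over only the matching positions of the two strings, so its work is governed by the number of matches rather than by the full quadratic table. this is the standard textbook technique, not the algorithms of that paper; a minimal python sketch:

    import bisect
    from collections import defaultdict

    def lcs_length_sparse(a, b):
        # longest common subsequence length via the match points (i, j) with
        # a[i] == b[j]: reduce to a longest strictly increasing subsequence of
        # the j-values, maintained patience-sorting style.
        positions = defaultdict(list)          # character -> positions in b
        for j, ch in enumerate(b):
            positions[ch].append(j)
        tails = []   # tails[k] = smallest end position in b of a common subsequence of length k+1
        for ch in a:
            # visit the matches of this character right-to-left so that several
            # matches in the same row cannot extend one another
            for j in reversed(positions.get(ch, [])):
                k = bisect.bisect_left(tails, j)
                if k == len(tails):
                    tails.append(j)
                else:
                    tails[k] = j
        return len(tails)

    print(lcs_length_sparse("abcbdab", "bdcaba"))  # prints 4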
pavol ďuriš wolfgang paul zvi galil ruediger reischuk data flow analysis of recursive procedures p. fairfield m. a. hennell superdeterministic pdas: a subcase with a decidable inclusion problem s. a. greibach e. p. friedman computational complexity and knowledge complexity (extended abstract) oded goldreich rafail ostrovsky erez petrank on the planar intersection of natural quadrics ching-kuang shene john k. johnstone on the integer complexity of boolean matrix multiplication d m atkinson n santoro j urrutia the strong exponential hierarchy collapses the polynomial hierarchy, composed of the levels p, np, p^np, np^np, etc., plays a central role in classifying the complexity of feasible computations. it is not known whether the polynomial hierarchy collapses. we resolve the question of collapse for an exponential-time analogue of the polynomial-time hierarchy. composed of the levels e (i.e., ∪_c dtime[2^{cn}]), ne, p^ne, np^ne, etc., the strong exponential hierarchy collapses to its Δ_2 level: e ⊆ p^ne = np^ne = np^{np^ne} = ⋯. our proof stresses the use of partial census information and the exploitation of nondeterminism. extending our techniques, we also derive new quantitative relativization results. we show that if the (weak) exponential hierarchy's Δ_{j+1} and Σ_{j+1} levels, respectively e^{Σ^p_j} and ne^{Σ^p_j}, do separate, this is due to the large number of queries ne makes to its Σ^p_j database. our techniques provide a successful method of proving the collapse of certain complexity classes. l. a. hemachandra some exact complexity results for straight-line computations over semirings mark jerrum marc snir the topological structure of asynchronous computability maurice herlihy nir shavit on the relationship between ll(1) and lr(1) grammars john c. beatty folding flat silhouettes and wrapping polyhedral packages: new results in computational origami erik d. demaine martin l. demaine joseph s. b. mitchell algebraic methods for interactive proof systems a new algebraic technique for the construction of interactive proof systems is presented. our technique is used to prove that every language in the polynomial-time hierarchy has an interactive proof system. this technique played a pivotal role in the recent proofs that ip = pspace [28] and that mip = nexp [4]. carsten lund lance fortnow howard karloff maintaining the minimal distance of a point set in polylogarithmic time michiel smid sharing a processor among many job classes g. fayolle i. mitrani r. iasnogorodski stabbing and ray shooting in 3 dimensional space in this paper we consider the following problems: given a set t of triangles in 3-space, with |t| = n, answer the query "given a line l, does l stab the set of triangles?" (query problem). find whether a stabbing line exists for the set of triangles (existence problem). given a ray r, which is the first triangle in t hit by r? the following results are shown. there is an Ω(n^3) lower bound on the descriptive complexity of the set of all stabbers for a set of triangles. the existence problem for triangles on a set of planes with g different plane inclinations can be solved in o(2n^2 log n) time (theorem 2). the query problem is solvable in quasi-quadratic o(n^{2+ε}) preprocessing and storage and logarithmic o(log n) query time (theorem 4). if we are given m rays we can answer ray shooting queries in o(m^{5/6-δ} n^{5/6+5δ} log^2 n + m log^2 n + n log n log m) randomized expected time and o(m + n) space (theorem 5).
in time o((n+m)^{5/3+4δ}) it is possible to decide whether two non-convex polyhedra of complexity m and n intersect (corollary 1). given m rays and n axis-oriented boxes we can answer ray shooting queries in randomized expected time o(m^{3/4-δ} n^{3/4+3δ} log^4 n + m log^4 n + n log n log m) and o(m + n) space (theorem 6). marco pellegrini technical opinion: noncomputability is easy to understand rafael morales-bueno an interval logic for higher-level temporal reasoning during the last several years, we have explored temporal logic as a framework for specifying and reasoning about concurrent programs, distributed systems, and communications protocols. previous papers [schwartz/melliar-smith81, 82, vogt82a,b] report on our efforts using temporal reasoning primitives to express very high-level abstract requirements that a program or system is to satisfy. based on our experiences with those primitives, we have developed an interval logic more suitable for expressing higher-level temporal properties. richard l. schwartz p. m. melliar-smith friedrich h. vogt the undecidability of the semi-unification problem a. j. kfoury j. tiuryn p. urzyczyn polynomial time inference of a subclass of context-free transformations this paper deals with a class of prolog programs, called context-free term transformations (cft). we present a polynomial time algorithm to identify a subclass of cft, whose program consists of at most two clauses, from positive data; the algorithm uses the 2-mmg (2-minimal multiple generalization) algorithm, which is a natural extension of plotkin's least generalization algorithm, to reconstruct the pair of heads of the unknown program. using this algorithm, we show the consistent and conservative polynomial time identifiability of the class of tree languages defined by cftfbuniq together with tree languages defined by pairs of two tree patterns, both of which are proper subclasses of cft, in the limit from positive data. hiroki arimura hiroki ishizaka takeshi shinohara sharing the load of logic-program evaluation we propose a method of parallelizing bottom-up evaluation of logic programs. the method does not introduce interprocess communication or synchronization overhead. we demonstrate that it can be applied when evaluating several classes of logic programs, e.g., the class of linear single rule programs. this extends the work reported in [ws] by significantly expanding the classes of logic programs that can be evaluated in parallel. we also prove that there are classes of programs to which the parallelization method cannot be applied. o. wolfson depth reduction for noncommutative arithmetic circuits eric allender jia jiao some structural properties of polynomial reducibilities and sets in np in this abstract and discussion of forthcoming papers, we will be concerned with variations on a common theme: without assuming a solution to p vs np, what can one say of a general nature that relates structural properties of general classes of sets in np to reducibilities among these sets? by "structural" we mean, in ways that will become clearer as we proceed, properties which arise from general definitions rather than properties which may arise from a perhaps more "natural" computational point of view. although quite a bit is known about such questions relative to oracles or relative to the assumption that p ≠ np, in so far as possible we wish to obtain absolute results; that is, results which are about sets in np (not relativized) and which can be obtained without assuming a solution to p vs np.
in a final section summarizing our results we will make some general comments about the historical antecedents and possible future significance of this approach. paul young church-rosser thue systems and formal languages since about 1971, much research has been done on thue systems that have properties that ensure viable and efficient computation. the strongest of these is the church-rosser property, which states that two equivalent strings can each be brought to a unique canonical form by a sequence of length-reducing rules. in this paper three ways in which formal languages can be defined by thue systems with this property are studied, and some general results about the three families of languages so determined are established. robert mcnaughton paliath narendran friedrich otto perturbation analysis of horner's method for nice cases c. b. dunham a class of sorting algorithms based on quicksort bsort, a variation of quicksort, combines the interchange technique used in bubble sort with the quicksort algorithm to improve the average behavior of quicksort and eliminate the worst case situation of o(n^2) comparisons for sorted or nearly sorted lists. bsort works best for nearly sorted lists or lists nearly sorted in reverse. roger l. wainwright a strassen-newton algorithm for high-speed parallelizable matrix inversion this paper describes techniques to compute matrix inverses by means of algorithms that are highly suited to massively parallel computation. in contrast, conventional techniques such as pivoted gaussian elimination and lu decomposition are efficient only on vector computers or fairly low-level parallel systems. these techniques are based on an algorithm suggested by strassen in 1969. variations of this scheme employ matrix newton iterations and other methods to improve the numerical stability while at the same time preserving a very high level of parallelism. one-processor cray-2 implementations of these schemes range from one that is up to 55% faster than a conventional library routine to one that, while slower than a library routine, achieves excellent numerical stability. the problem of computing the solution to a single set of linear equations is discussed, and it is shown that this problem can also be solved efficiently using these techniques. d. h. bailey h. r. p. ferguson on the performance of a direct parallel method for solving separable elliptic equations based on block cyclic reduction kevin kelleher s. lakshmivarahan sudarshan dhall generosity helps, or an 11-competitive algorithm for three servers we propose a new algorithm called equipoise for the k-server problem, and we prove that it is 2-competitive for two servers and 11-competitive for three servers. for k=3, this is a tremendous improvement over previously known constants. the algorithm uses several techniques for designing on-line algorithms - convex hulls, work functions and forgiveness. the results presented in this paper were earlier announced at the dimacs workshop on on-line algorithms [8]. marek chrobak lawrence l. larmore simple and efficient bounded concurrent timestamping and the traceable use abstraction in a timestamping system, processors repeatedly choose timestamps so that the order of the timestamps obtained reflects the real-time order in which they were requested. concurrent timestamping systems permit requests by multiple processors to be issued concurrently; in bounded timestamping systems the sizes of the timestamps and the size and number of shared variables are bounded.
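a brief aside on the strassen-newton abstract above: the matrix newton iteration it refers to refines an approximate inverse x of a by x ← x(2i − ax), which is built almost entirely from matrix products and therefore parallelizes well (and combines naturally with strassen-style multiplication). a minimal, hypothetical numpy sketch of that iteration (not the authors' cray-2 implementation):

    import numpy as np

    def newton_inverse(a, iterations=50):
        # refine an approximate inverse by the quadratically convergent
        # newton iteration x <- x (2i - a x); the classical starting guess
        # a.T / (||a||_1 * ||a||_inf) guarantees convergence for nonsingular a.
        n = a.shape[0]
        x = a.T / (np.linalg.norm(a, 1) * np.linalg.norm(a, np.inf))
        two_i = 2.0 * np.eye(n)
        for _ in range(iterations):
            x = x @ (two_i - a @ x)
        return x

    a = np.array([[4.0, 1.0], [2.0, 3.0]])
    print(newton_inverse(a) @ a)   # close to the 2x2 identity matrix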
an algorithm is wait-free if there exists an a priori bound on the number of steps a processor must take in order to make progress, independent of the action or inaction of other processors. letting n denote the number of processors, we construct a simple wait-free bounded concurrent timestamping system requiring o(n) steps (accesses to shared memory) for a processor to read the current timestamps and determine the order among them, and o(n) steps to generate a timestamp, independent of the actions of the other processors. in addition, we introduce and implement the traceable use abstraction, a new primitive providing "inventory control" over values introduced by processors in the course of an algorithm execution. this abstraction has proved to be of great value in converting unbounded algorithms to bounded ones [attiya and rachman 1998; dwork et al. 1992; 1993]. cynthia dwork orli waarts alternation ashok k. chandra dexter c. kozen larry j. stockmeyer minimal surfaces, crystals, and norms on r^n frank morgan diophantine linear system solving thom mulders arne storjohann efficiency of synchronous versus asynchronous distributed systems eshrat arjomandi michael j. fischer nancy a. lynch on dynamic voronoi diagrams and the minimum hausdorff distance for point sets under euclidean motion in the plane we show that the dynamic voronoi diagram of k sets of points in the plane, where each set consists of m points moving rigidly, has complexity o(n^2 k^2 λ_s(k)) for some fixed s, where λ_s(n) is the maximum length of an (n, s) davenport-schinzel sequence. this improves the result of aonuma et al., who show an upper bound of o(n^3 k^4 log* k) for the complexity of such voronoi diagrams. we then apply this result to the problem of finding the minimum hausdorff distance between two point sets in the plane under euclidean motion. we show that this distance can be computed in time o((m + n)^6 log(mn)), where the two sets contain m and n points respectively. daniel p. huttenlocher klara kedem jon m. kleinberg on triangulating three-dimensional polygons gill barequet matthew dickerson david eppstein instance complexity pekka orponen ker-i ko uwe schöning osamu watanabe on ω-automata and temporal logic we study here the use of different representations for infinitary regular languages in extended temporal logic. we focus on three different kinds of acceptance conditions for finite automata on infinite words, due to buchi, streett, and emerson and lei (el), and we study their computational properties. our finding is that buchi, streett, and el automata span a spectrum of succinctness. el automata are exponentially more succinct than buchi automata, and complementation of el automata is doubly exponential. streett automata are of intermediate complexity. while translating from streett automata to buchi automata involves an exponential blow-up, so does the translation from el automata to streett automata. furthermore, even though streett automata are exponentially more succinct than buchi automata, complementation of streett automata is only exponential. as a result, we show that the decision problem for etl_el, where temporal connectives are represented by el automata, is expspace-complete, and the decision problem for etl_s, where temporal connectives are represented by streett automata, is pspace-complete. s. safra m. y.
vardi approximating grammar probabilities: solution of a conjecture it is proved that the production probabilities of a probabilistic context-free grammar may be obtained as the limit of the estimates inferred from an increasing sequence of randomly drawn samples from the language generated by the grammar. r. chaudhuri a. n. v. rao the complexity of propositional linear temporal logics the complexity of satisfiability and determination of truth in a particular finite structure are considered for different propositional linear temporal logics. it is shown that these problems are np-complete for the logic with f and are pspace-complete for the logics with f, x, with u, with u, s, x operators and for the extended logic with regular operators given by wolper. a. p. sistla e. m. clarke more on the complexity of negation-limited circuits robert beals tetsuro nishino keisuke tanaka stability and stabilizability of discrete event dynamic systems cuneyt m. ozveren alan s. willsky panos j. antsaklis type fixpoints: iteration vs. recursion zdzislaw splawski pawel urzyczyn fundamental discrepancies between average-case analyses under discrete and continuous distributions: a bin packing case study e. g. coffman c. courcoubetis m. r. garey d. s. johnson l. a. mcgeoch p. w. shor r. r. weber m. yannakakis strongly equivalent logic programs a logic program p1 is said to be equivalent to a logic program p2 in the sense of the answer set semantics if p1 and p2 have the same answer sets. we are interested in the following stronger condition: for every logic program p, p1 ∪ p has the same answer sets as p2 ∪ p. the study of strong equivalence is important, because we learn from it how one can simplify a part of a logic program without looking at the rest of it. the main theorem shows that the verification of strong equivalence can be accomplished by checking the equivalence of formulas in a monotonic logic, called the logic of here-and-there, which is intermediate between classical logic and intuitionistic logic. fooling a two-way automaton or one pushdown store is better than one counter for two-way machines (preliminary version) we define a language l and show that it cannot be recognized by any two-way deterministic counter machine. it is done by fooling any given such machine; i.e., showing that if it accepts l' ⊇ l, then l' - l ≠ ∅. for this purpose, an argument stronger than the well known crossing sequence argument needs to be introduced. since l is accepted by a two-way deterministic pushdown automaton, we consequently show that one pushdown stack is more powerful than one counter for deterministic two-way machines. pavol ďuriš zvi galil inequalities about factors of integer polynomials maurice mignotte randomized algorithms for binary search and load balancing with geometric applications j. reif s. sen a model for comparing the space usage of lazy evaluators adam bakewell colin runciman a randomized art-gallery algorithm for sensor placement this paper describes a placement strategy to compute a set of "good" locations where visual sensing will be most effective. throughout this paper it is assumed that a polygonal 2-d map of a workspace is given as input. this polygonal map, also known as a floor plan or layout, is used to compute a set of locations where expensive sensing tasks (such as 3-d image acquisition) could be executed. a map-building robot, for example, can visit these locations in order to build a full 3-d model of the workspace.
the sensor placement strategy relies on a randomized algorithm that solves a variant of the art-gallery problem [oro87, she92, urr97]: find the minimum set of guards inside a polygonal workspace from which the entire workspace boundary is visible. to better take into account the limitations of physical sensors, the algorithm computes a set of guards that satisfies incidence and range constraints. although the computed set of guards is not guaranteed to have minimum size, the algorithm does compute with high probability a set whose size is at most a factor $o((n + h) \cdot \log(c\,(n + h)))$ from the optimal size $c$, where $n$ is the number of edges in the input polygonal map and $h$ the number of obstacles in its interior (holes). h. gonzalez-banos technical correspondence diane crawford computing timed transition relations for sequential cycle-based simulation gianpiero cabodi paolo camurati claudio passerone stefano quer a methodology of parsing mathematical notation for mathematical computation yanjie zhao tetsuya sakurai hiroshi sugiura tatsuo torii denotational abstract interpretation of logic programs logic-programming languages are based on a principle of separation of "logic" and "control". this means that they can be given simple model-theoretic semantics without regard to any particular execution mechanism (or proof procedure, viewing execution as theorem proving). although the separation is desirable from a semantical point of view, it makes sound, efficient implementation of logic-programming languages difficult. the lack of "control information" in programs calls for complex data-flow analysis techniques to guide execution. since data-flow analysis furthermore finds extensive use in error-finding and transformation tools, there is a need for a simple and powerful theory of data-flow analysis of logic programs. this paper offers such a theory, based on f. nielson's extension of p. cousot and r. cousot's abstract interpretation. we present a denotational definition of the semantics of definite logic programs. this definition is of interest in its own right because of its compactness. stepwise we develop the definition into a generic data-flow analysis that encompasses a large class of data-flow analyses based on the sld execution model. we exemplify one instance of the definition by developing a provably correct groundness analysis to predict how variables may be bound to ground terms during execution. we also discuss implementation issues and related work. kim marriott harald søndergaard neil d. jones on partitions and presortedness of sequences jingsen chen svante carlsson new lower bounds for parallel computation lower bounds are proven on the parallel-time complexity of several basic functions on the most powerful concurrent-read concurrent-write pram with unlimited shared memory and unlimited power of individual processors (denoted by priority(∞)): it is proved that with a number of processors polynomial in n, Ω(log n) time is needed for addition, multiplication or bitwise or of n numbers, when each number has n' bits. hence even the bit complexity (i.e., the time complexity as a function of the total number of bits in the input) is logarithmic in this case. this improves a beautiful result of meyer auf der heide and wigderson [22]. they proved a log n lower bound using ramsey-type techniques. using ramsey theory, it is possible to get an upper bound on the number of bits in the inputs used.
however, for the case of polynomially many processors, this upper bound is more than a polynomial in n. an Ω(log n) lower bound is given for priority(∞) with n^{o(1)} processors on a function with inputs from {0, 1}, namely for the function f(x1, …, xn) = Σ_{i=1}^{n} x_i a^i, where a is fixed and x_i ∈ {0, 1}. finally, by a new efficient simulation of priority(∞) by unbounded fan-in circuits, it is proven that with less than an exponential number of processors priority(∞) cannot compute parity in constant time, and that with n^{o(1)} processors Ω(√log n) time is needed. the simulation technique is of independent interest since it can serve as a general tool to translate circuit lower bounds into pram lower bounds. further, the lower bounds in (1) and (2) remain valid for probabilistic or nondeterministic concurrent-read concurrent-write prams. ming li yaacov yesha a restated pumping lemma for context-free languages don colton the polynomial-time hierarchy and sparse oracles questions about the polynomial-time hierarchy are studied. in particular, the questions, "does the polynomial-time hierarchy collapse?" and "is the union of the hierarchy equal to pspace?" are considered, along with others comparing the union of the hierarchy with certain probabilistic classes. in each case it is shown that the answer is "yes" if and only if for every sparse set s, the answer is "yes" when the classes are relativized to s if and only if there exists a sparse set s such that the answer is "yes" when the classes are relativized to s. thus, in each case the question is answered if it is answered for any arbitrary sparse oracle set. long and selman first proved that the polynomial-time hierarchy collapses if and only if for every sparse set s, the hierarchy relative to s collapses. this result is re-proved here by a different technique. jose l. balcázar ronald v. book uwe schöning regular attribute grammars and finite state machines n. p. chapman towards a syntactic characterization of ptas sanjeev khanna rajeev motwani finite state verifiers ii: zero knowledge the zero knowledge properties of interactive proof systems (ipss) are studied in the case that the verifier is a 2-way probabilistic finite state automaton (2pfa). the following results are proved: (1) there is a language l such that l has an ips with 2pfa verifiers but l has no zero knowledge ips with 2pfa verifiers. (2) consider the class of 2pfa's that are sweeping and that halt in polynomial expected time. there is a language l such that l has a zero knowledge ips with respect to this class of verifiers, and l cannot be recognized by any verifier in the class on its own. a new definition of zero knowledge is introduced. this definition captures a concept of "zero knowledge" for ipss that are used for language recognition. cynthia dwork larry stockmeyer triangulations intersect nicely oswin aichholzer gunter rote treating failure as value l. wong b. c. ooi counting problems in genomics the past decade has seen the emergence of an overwhelming amount of genome data and the desperate need to catalog, organize, analyze, and interpret it. this "tsunami of information", as gene myers, jr. of celera genomics calls it, is continuously being fed by the masses of raw genome data erupting from the successful genome projects of various organisms, especially humans.
it has forced the creation of the interdisciplinary field of genomics --- the union between computer science and biology, in which programmers acknowledge biology as an information science and biologists admit that there is too much data for them to fairly and efficiently sift through alone. andrea christoforou counting linear extensions is #p-complete graham brightwell peter winkler verifying secrets and relative secrecy systems that authenticate a user based on a shared secret (such as a password or pin) normally allow anyone to query whether the secret is a given value. for example, an atm machine allows one to ask whether a string is the secret pin of a (lost or stolen) atm card. yet such queries are prohibited in any model whose programs satisfy an information-flow property like noninterference. but there is a complexity-based justification for allowing these queries. a type system is given that provides the access control needed to prove that no well-typed program can leak secrets in polynomial time, or even leak them with nonnegligible probability if secrets are of sufficient length and randomly chosen. however, there are well-typed deterministic programs in a synchronous concurrent model capable of leaking secrets in linear time. dennis volpano geoffrey smith randomized versus nondeterministic communication complexity our main result is the demonstration of a boolean function f with nondeterministic and co-nondeterministic complexities o(log n) and ε-error randomized complexity (log^2 n), for 0 ≤ ε < 1/2. this is the first separation of this kind for a decision problem. paul beame joan lawry implementation of the typed call-by-value λ-calculus using a stack of regions we present a translation scheme for the polymorphically typed call-by-value λ-calculus. all runtime values, including function closures, are put into regions. the store consists of a stack of regions. region inference and effect inference are used to infer where regions can be allocated and de-allocated. recursive functions are handled using a limited form of polymorphic recursion. the translation is proved correct with respect to a store semantics, which models a region-based run-time system. experimental results suggest that regions tend to be small, that region allocation is frequent and that overall memory demands are usually modest, even without garbage collection. mads tofte jean-pierre talpin load balancing requires (log* n) expected time in order to obtain very fast parallel algorithms, it is almost always necessary to have some sort of load balancing procedure, so that processors which have finished their required tasks can help processors which have not. if the overloaded processors are not helped, then the expected time of the entire algorithm suffers. in general, we would like to distribute the remaining work as evenly as possible among the processors, or more formally, given at most n independent tasks distributed in an arbitrary way among n processors, we would like to redistribute the tasks so that each processor contains o(1) tasks. we show here that even on the strongest randomized crcw pram model, for a simple random distribution of tasks, load balancing requires (log* n) expected time. gil, matias, and vishkin [9] give an o(log* n) expected time randomized algorithm which solves the load balancing problem in the worst case, so the lower bound is tight. by reduction we show that both padded sort [12] and linear approximate compaction [13] require (log* n) expected time.
we note that our basic technique is one of the few parallel lower bound techniques known which only require 0/1 inputs. we also note that the bounds given in this paper do not place any restriction on the instruction set of the machine, the amount of information which can be stored in a memory cell, or on the number of memory cells. philip d. mackenzie randomized competitive algorithms for the list update problem sandy irani nick reingold jeffery westbrook daniel d. sleator lower bounds for the low hierarchy eric allender lane a. hemachandra logics with counting and local properties the expressive power of first-order logic over finite structures is limited in two ways: it lacks a recursion mechanism, and it cannot count. overcoming the first limitation has been a subject of extensive study. a number of fixpoint logics have been introduced and shown to be subsumed by an infinitary logic l^ω_{∞ω}. this logic is easier to analyze than fixpoint logics, and it still lacks counting power, as it has a 0-1 law. on the counting side, there is no analog of l^ω_{∞ω}. there are a number of logics with counting power, usually introduced via generalized quantifiers. most known expressivity bounds are based on the fact that counting extensions of first-order logic preserve the locality properties. this article has three main goals. first, we introduce a new logic l^*_{∞ω}(c) that plays the same role for counting as l^ω_{∞ω} does for recursion---it subsumes a number of extensions of first-order logic with counting, and has nice properties that make it easy to study. second, we give a simple direct proof that l^*_{∞ω}(c) expresses only local properties: those that depend on the properties of small neighborhoods, but cannot grasp a structure as a whole. this is a general way of saying that a logic lacks a recursion mechanism. third, we consider a finer analysis of locality of counting logics. in particular, we address the question of how local a logic is, that is, how big are those neighborhoods that local properties depend on. we get a uniform answer for a variety of logics between first-order and l^*_{∞ω}(c). this is done by introducing a new form of locality that captures the tightest condition that the duplicator needs to maintain in order to win a game. we also use this technique to give bounds on outputs of l^*_{∞ω}(c)-definable queries. leonid libkin parallel searching in generalized monge arrays with applications a. aggarwal d. kravets j. park s. sen making commitments in the face of uncertainty: how to pick a winner almost every time (extended abstract) baruch awerbuch yossi azar amos fiat tom leighton ε-approximations with minimum packing constraint violation (extended abstract) we present efficient new randomized and deterministic methods for transforming optimal solutions for a type of relaxed integer linear program into provably good solutions for the corresponding np-hard discrete optimization problem. without any constraint violation, the ε-approximation problem for many problems of this type is itself np-hard. our methods provide polynomial-time ε-approximations while attempting to minimize the packing constraint violation. our methods lead to the first known approximation algorithms with provable performance guarantees for the s-median problem, the tree pruning problem, and the generalized assignment problem.
these important problems have numerous applications to data compression, vector quantization, memory-based learning, computer graphics, image processing, clustering, regression, network location, scheduling, and communication. we provide evidence via reductions that our approximation algorithms are nearly optimal in terms of the packing constraint violation. we also discuss some recent applications of our techniques to scheduling problems. jyh-han lin jeffrey scott vitter the parallel complexity of exponentiating polynomials over finite fields f e fich m tompa comparison of two-dimensional fft methods on the hypercube complex two-dimensional ffts up to size 256 x 256 points are implemented on the intel ipsc/system 286 hypercube with emphasis on comparing the effects of data mapping, data transposition or communication needs, and the use of distributed ffts. two new implementations of the 2d-fft include the local-distributed method which performs local ffts in one direction followed by distributed ffts in the other direction, and a vector-radix implementation that is derived from decimating the dft in two dimensions instead of one. in addition, the transpose-split method involving local ffts in both directions with an intervening matrix transposition and the block 2d-fft involving distributed fft butterflies in both directions are implemented and compared with the other two methods. timing results show that on the intel ipsc/system 286, there is hardly any difference between the methods, with the only differences arising from the efficiency or inefficiency of communication. since the intel cannot overlap communication and computation, this forces the user to buffer data. in some of the methods, this causes processor blocking during communication. issues of vectorization, communication strategies, data storage and buffering requirements are investigated. a model is given that compares vectorization and communication complexity. while timing results show that the transpose-split method is in general slightly faster, our model shows that the block method and vector-radix method have the potential to be faster if the communication difficulties were taken care of. therefore if communication could be "hidden" within computation, the latter two methods can become useful with the block method vectorizing the best and the vector-radix method having 25% fewer multiplications than row-column 2d-fft methods. finally the local-distributed method is a good hybrid method requiring no transposing and can be useful in certain circumstances. this paper provides some general guidelines in evaluating parallel distributed 2d-fft implementations and concludes that while different methods may be best suited for different systems, better implementation techniques as well as faster algorithms still perform better when communication becomes more efficient. c. y. chu highly parallelizable problems o. berkman z. galil b. schieber u. vishkin a one-way array algorithm for matroid scheduling matthias f. m. stallmann direct proof of a theorem by kalkbrener, sweedler, and taylor erich kaltofen from λσ to λv: a journey through calculi of explicit substitutions this paper gives a systematic description of several calculi of explicit substitutions. these systems are orthogonal and have easy proofs of termination of their substitution calculus. the last system, called λv, entails a very simple environment machine for strong normalization of λ-terms. pierre lescanne
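as a rough illustration of the environment-machine idea behind calculi of explicit substitutions such as the λv system in the lescanne entry above, the following sketch evaluates closed call-by-value λ-terms by recording substitutions in an explicit environment instead of rewriting terms. it is a minimal sketch, not lescanne's machine; the term encoding and function names are invented for this example.

```python
# a minimal sketch (not lescanne's machine): call-by-value evaluation of
# lambda-terms where substitution is delayed in an explicit environment.
# terms: ("var", name) | ("lam", name, body) | ("app", fun, arg)

def evaluate(term, env=None):
    """return a closure ("clo", name, body, env) for a closed term."""
    env = env or {}
    kind = term[0]
    if kind == "var":                 # look the variable up instead of substituting it away
        return env[term[1]]
    if kind == "lam":                 # a lambda packages its body with the current environment
        return ("clo", term[1], term[2], env)
    if kind == "app":                 # evaluate operator and operand, then extend the environment
        _, name, body, closure_env = evaluate(term[1], env)
        argument = evaluate(term[2], env)
        new_env = dict(closure_env)
        new_env[name] = argument      # the substitution is recorded, never performed on the term
        return evaluate(body, new_env)
    raise ValueError("unknown term: %r" % (term,))

if __name__ == "__main__":
    # (\x. \y. x) applied to (\z. z): yields a closure over \y. x with x bound to \z. z
    k = ("lam", "x", ("lam", "y", ("var", "x")))
    i = ("lam", "z", ("var", "z"))
    print(evaluate(("app", k, i)))
```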
on the distribution of independent formulae of number theory it follows by godel's incompleteness theorem [6] that any effective sound system of logic for elementary arithmetic must be incomplete. we show that in any effective sound system of logic for elementary arithmetic, there exist valid unprovable formulae that are quite small relative to the complexity of the logical system. also, such formulae are quite dense. in fact, the situation is about as bad as it could possibly be. that is, no infinite axiom system for elementary arithmetic can be much more compact than a listing of all the valid formulae. the unprovable formulae we construct express predicates in the classes σ2 and π2 of the kleene arithmetic hierarchy [12]. the construction yields a set of short formulae, at least one of which must be valid and unprovable, but the construction does not tell us which one is valid and unprovable. we also construct small valid unprovable formulae expressing a relation in the class π1 of the kleene arithmetic hierarchy. these latter formulae are not as small. we do not know how small the independent formulae corresponding to the class π1 are. the constructions are based on the concept of a restricted oracle, first introduced in [10] and further developed in [11]. the proofs make use of the recent result of matijasevic [9] concerning the relationship between recursively enumerable sets and diophantine equations. david a. plaisted properties of the delaunay triangulation oleg r. musin logical and mathematical reasoning about imperative programs: preliminary report daniel leivant practical methods for approximate geometric pattern matching under rigid motions: (preliminary version) we present practical methods for approximate geometric pattern matching in d-dimensions along with experimental data regarding the quality of matches and running times of these methods versus those of a branch-and-bound search. our methods are faster than previous methods but still produce good matches. michael t. goodrich joseph s. b. mitchell mark w. orletsky converses of pumping lemmas richard johnsonbaugh david p. miller on kilbury's modification of earley's algorithm we improve on j. kilbury's proposal to interchange "predictor" and "scanner" in earley's parser. this modification of earley's parser can trivially be combined with those suggested by s. graham, m. harrison, and w. ruzzo, leading to smaller parse tables and almost the power of lookahead 1. along these lines we can also obtain earley-parsers having partial lookahead r ≥ 1, without storing right contexts. parse trees with shared structure can be stored in the parse tables directly, rather than constructing the trees from "dotted rules." hans leiss the future of computational complexity theory: part i as you probably already know, there is an active discussion going on---in forums ranging from lunch-table conversations to workshops on "strategic directions" to formal reports---regarding the future of theoretical computer science. since your complexity columnist does not know the answer, i've asked a number of people to contribute their comments on the narrower issue of the future of complexity theory. the only ground rule was a loose 1-page limit; each contributor could choose what aspect(s) of the future to address, and the way in which to address them.
the first installment of contributions appears in this issue, and one or two more installments will appear among the next few issues. also coming during the next few issues: the search for the perfect theory journal, and (for the sharp-eyed) lance fortnow dons a clown suit. finally, let me mention that work of russell impagliazzo resolves one of the open questions from complexity theory column 11. christos h. papadimitriou oded goldreich avi wigderson alexander a. razborov michael sipser a hierarchy of temporal properties z. manna a. pnueli a type system for dynamic web documents many interactive web services use the cgi interface for communication with clients. they will dynamically create html documents that are presented to the client who then resumes the interaction by submitting data through incorporated form fields. this protocol is difficult to statically type-check if the dynamic documents are created by arbitrary script code using printf-like statements. previous proposals have suggested using static document templates which trade flexibility for safety. we propose a notion of typed, higher-order templates that simultaneously achieve flexibility and safety. our type system is based on a flow analysis of which we prove soundness. we present an efficient runtime implementation that respects the semantics of only well-typed programs. this work is fully implemented as part of the system for defining interactive web services. anders sandholm michael i. schwartzbach symmetry and complexity laszlo babai robert beals pal takacsi-nagy on the theory of average case complexity this paper takes the next step in developing the theory of average case complexity initiated by leonid a. levin. previous works [levin 84, gurevich 87, venkatesan and levin 88] have focused on the existence of complete problems. we widen the scope to other basic questions in computational complexity. our results include: the equivalence of search and decision problems in the context of average case complexity; an initial analysis of the structure of distributional-np under reductions which preserve average polynomial-time; a proof that if all distributional-np is in average polynomial-time then non-deterministic exponential-time equals deterministic exponential time (i.e., a collapse in the worst case hierarchy); definitions and basic theorems regarding other complexity classes such as average log-space. s. ben-david b. chor o. goldreich programs complexity: comparative analysis hierarchy, classification i ion r arhire m macesanu riemann hypothesis and finding roots over finite fields it is shown that assuming the generalized riemann hypothesis, the roots of f(x) ≡ 0 mod p, where p is a prime and f(x) is an integral abelian polynomial, can be found in deterministic polynomial time. the method developed for solving this problem is also applied to prime decomposition in abelian number fields, and the following result is obtained: assuming the generalized riemann hypothesis, for abelian number fields k of finite extension degree over the rational number field q, the decomposition pattern of a prime p in k, i.e. the ramification index and the residue class degree, can be computed in deterministic polynomial time, providing p does not divide the extension degree of k over q. it is also shown, as a theorem fundamental to our algorithm, that for q, p prime and m the order of p mod q, there is a q-th nonresidue in the finite field f_{p^m} that can be written as a_0 + a_1 w + … + a_{m-1} w^{m-1}, where |a_i| ≤ c q^2 log^2(pq), c is an absolute effectively computable constant, and 1, w, …, w^{m-1} form a basis of f_{p^m} over f_p. more explicitly, w is a root of the q-th cyclotomic polynomial over f_p. this result partially generalizes, to finite field extensions over f_p, a classical result in number theory stating that assuming the generalized riemann hypothesis, the least q-th nonresidue mod p for p, q prime and q dividing p − 1 is bounded by c log^2 p, where c is an absolute, effectively computable constant. m-d a huang
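the huang entry above is about finding roots of polynomials mod p deterministically under the generalized riemann hypothesis; for contrast, here is a minimal sketch of the standard randomized approach (rabin / cantor–zassenhaus style, not huang's algorithm): strip f down to its distinct roots with gcd(f, x^p − x), then split that product of linear factors using random shifts. the helper names are invented, and p is assumed to be an odd prime.

```python
import random

def trim(a, p):
    """reduce coefficients mod p and drop trailing zeros (lowest-degree-first lists)."""
    a = [c % p for c in a]
    while a and a[-1] == 0:
        a.pop()
    return a

def poly_divmod(a, b, p):
    """quotient and remainder of a by b over f_p (b must be nonzero)."""
    a, b = trim(a, p), trim(b, p)
    q = [0] * max(len(a) - len(b) + 1, 1)
    inv = pow(b[-1], -1, p)
    while len(a) >= len(b):
        k = len(a) - len(b)
        c = a[-1] * inv % p
        q[k] = c
        for i, bc in enumerate(b):
            a[i + k] = (a[i + k] - c * bc) % p
        a = trim(a, p)
    return trim(q, p), a

def poly_gcd(a, b, p):
    """monic gcd over f_p."""
    a, b = trim(a, p), trim(b, p)
    while b:
        a, b = b, poly_divmod(a, b, p)[1]
    return [c * pow(a[-1], -1, p) % p for c in a] if a else a

def poly_mulmod(a, b, m, p):
    res = [0] * max(len(a) + len(b) - 1, 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] = (res[i + j] + x * y) % p
    return poly_divmod(res, m, p)[1]

def poly_powmod(base, e, m, p):
    result, base = [1], poly_divmod(base, m, p)[1]
    while e:
        if e & 1:
            result = poly_mulmod(result, base, m, p)
        base = poly_mulmod(base, base, m, p)
        e >>= 1
    return result

def roots_mod_p(f, p):
    """all roots in f_p of the polynomial f (coefficient list, lowest degree first)."""
    # gcd(f, x^p - x) keeps exactly one linear factor per distinct root of f
    xp = poly_powmod([0, 1], p, f, p)
    g = poly_gcd(f, trim([(c - (i == 1)) % p for i, c in enumerate(xp + [0, 0])], p), p)
    roots, stack = [], [g]
    while stack:
        h = stack.pop()
        if len(h) <= 1:
            continue
        if len(h) == 2:                        # monic linear factor x + h[0]
            roots.append((-h[0]) % p)
            continue
        while True:                            # split with a random shift: gcd(h, (x+a)^((p-1)/2) - 1)
            a = random.randrange(p)
            w = poly_powmod([a, 1], (p - 1) // 2, h, p)
            d = poly_gcd(h, trim([(c - (i == 0)) % p for i, c in enumerate(w + [0])], p), p)
            if 0 < len(d) - 1 < len(h) - 1:    # proper factor found (succeeds with prob. about 1/2)
                stack.append(d)
                stack.append(poly_divmod(h, d, p)[0])
                break
    return sorted(roots)

if __name__ == "__main__":
    p = 101
    f = [29, 12, 1, 1]                         # (x - 3)(x - 7)(x - 90) mod 101
    print(roots_mod_p(f, p))                   # [3, 7, 90]
```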
some comments on a recent note by ravikumar ernst l. leiss a lower bound on the size of shellsort networks r. cypher specification of a functional synchronous dataflow language for parallel implementations with the denotational semantics guilhem de wailly fernand boe'ri a parallel implementation for optimal lambda-calculus reduction marco pedicini francesco quaglia on rational solution of the state equation of a finite automaton (abstract only) in this paper we contribute some interesting results on the state space approach following the work of lee [2], yang and huang [1]. we prove that the necessary and sufficient condition for the state equation of a finite automaton m to have a rational solution is that the lexicographic godel numbers of the strings belonging to each of the end-sets of m form an ultimately periodic set. a state space approach was proposed by lee [2] as an alternative way to analyze finite automata. the approach is based on the transformation of a set of words into a formal power series over the field of integers modulo 2 and also obtaining a state equation in some linear space associated with the automaton. some useful algorithms associated with the state space approach were discussed by yang and huang [1] along with a condition for the state equation to have a rational solution when the automaton has either 4 or 8 states. the question of existence of a rational solution of a state equation is certainly an interesting one and it is not difficult to see that the condition given by yang and huang [1] is by no means necessary even for automata with only 4 states. the main objective of our present work is to give the necessary and sufficient condition for the state equation of an automaton to have a rational solution and thus provide a complete answer to the question left open by yang and huang [1]. we also show that the condition obtained in [1] is a special case of our theorem. furthermore, we discuss a practical method for determining whether the state equation of an automaton has a rational solution and also how to obtain the rational solution in case it exists. r. chaudhuri h. höft on the space complexity of randomized synchronization the "wait-free hierarchy" provides a classification of multiprocessor synchronization primitives based on the values of n for which there are deterministic wait-free implementations of n-process consensus using instances of these objects and read-write registers. in a randomized wait-free setting, this classification is degenerate, since n-process consensus can be solved using only o(n) read-write registers. in this paper, we propose a classification of synchronization primitives based on the space complexity of randomized solutions to n-process consensus.
a historyless object, such as a read-write register, a swap register, or a test&set register, is an object whose state depends only on the last nontrivial operation that was applied to it. we show that, using historyless objects, Ω(√n) object instances are necessary to solve n-process consensus. this lower bound holds even if the objects have unbounded size and the termination requirement is nondeterministic solo termination, a property strictly weaker than randomized wait-freedom. we then use this result to relate the randomized space complexity of basic multiprocessor synchronization primitives such as shared counters, fetch&add registers, and compare&swap registers. viewed collectively, our results imply that there is a separation based on space complexity for synchronization primitives in randomized computation, and that this separation differs from that implied by the deterministic "wait-free hierarchy." faith fich maurice herlihy nir shavit how to design dynamic programming algorithms sans recursion kirk pruhs imperfect random sources and discrete controlled processes we consider a simple model for a class of discrete control processes, motivated in part by recent work about the behavior of imperfect random sources in computer algorithms. the process produces a string of characters from {0, 1} of length n and is a "success" or "failure" depending on whether the string produced belongs to a prespecified set l. in an uninfluenced process each character is chosen by a fair coin toss, and hence the probability of success is |l|/2^n. we are interested in the effect on the probability of success in the presence of a player (controller) who can intervene in the process by specifying the value of certain characters in the string. we answer the following questions in both worst and average case: (1) how much can the player increase the probability of success given a fixed number of interventions? (2) in terms of |l| what is the expected number of interventions needed to guarantee success? in particular our results imply that if |l|/2^n = 1/w(n) where w(n) tends to infinity with n (so the probability of success with no interventions is o(1)), then with (n log w(n)) interventions the probability of success is 1 − o(1). our main results and the proof techniques are related to a well-known theorem of kruskal, katona, and harper in extremal set theory. d. lichtenstein n. linial m. saks a computational model of everything nicholas carriero david gelernter graphical representation and graph transformation hartmut ehrig gabriele taentzer a class of convex programs with applications to computational geometry we consider the solution of convex programs in a small number of variables but large number of constraints, where all but a small number of the constraints are linear. we develop a general framework for obtaining algorithms for these problems which run in time linear in the number of constraints. we give an application to computing minimum spanning ellipsoids in fixed dimension. martin dyer parallel algorithms for arrangements r. anderson p. beame e. brisson sequencing jobs with readiness times and tails on parallel machines nodari vakhania
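the dyer entry above mentions computing minimum spanning (enclosing) ellipsoids as an application; as a rough illustration of what that geometric primitive computes, here is a small numeric sketch using khachiyan-style multiplicative weight updates, a generic textbook method rather than dyer's linear-time algorithm. the function name, tolerance, and iteration cap are invented for this example.

```python
import numpy as np

def min_enclosing_ellipsoid(points, tol=1e-6, max_iter=10000):
    """approximate the minimum-volume ellipsoid {x : (x-c)^T a (x-c) <= 1}
    enclosing the given points (one point per row)."""
    n, d = points.shape
    q = np.column_stack([points, np.ones(n)]).T          # lift points to dimension d+1
    u = np.full(n, 1.0 / n)                              # weights on the points
    for _ in range(max_iter):
        x = q @ np.diag(u) @ q.T
        m = np.einsum("ij,jk,ki->i", q.T, np.linalg.inv(x), q)
        j = int(np.argmax(m))                            # point sticking out the most
        step = (m[j] - d - 1.0) / ((d + 1.0) * (m[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step                                 # shift weight toward that point
        if np.linalg.norm(new_u - u) < tol:
            u = new_u
            break
        u = new_u
    c = points.T @ u                                     # ellipsoid center
    a = np.linalg.inv(points.T @ np.diag(u) @ points - np.outer(c, c)) / d
    return a, c

if __name__ == "__main__":
    pts = np.random.default_rng(0).normal(size=(200, 2))
    a, c = min_enclosing_ellipsoid(pts)
    # every point should satisfy (x-c)^T a (x-c) <= 1, up to the approximation tolerance
    vals = np.einsum("ij,jk,ik->i", pts - c, a, pts - c)
    print(vals.max())
```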
some optimal inapproximability results we prove optimal, up to an arbitrary ε > 0, inapproximability results for max-ek-sat for k ≥ 3, maximizing the number of satisfied linear equations in an over-determined system of linear equations modulo a prime p and set splitting. as a consequence of these results we get improved lower bounds for the efficient approximability of many optimization problems studied previously. in particular, for max-e2-sat, max-cut, max-di-cut, and vertex cover. johan håstad acceptance trees a simple model, at, for nondeterministic machines is presented which is based on certain types of trees. a set of operations, , is defined over at and it is shown to be completely characterized by a set of inequations over . at is used to define the denotational semantics of a language for defining nondeterministic machines. the significance of the model is demonstrated by showing that this semantics reflects an intuitive operational semantics of machines based on the idea that machines should only be differentiated if there is some experiment that differentiates between them. m. hennessy there is a universal topological plane jacob e. goodman richard pollack rephael wenger tudor zamfirescu the conjecture of fliess on commutative context-free languages the conjecture of fliess concerning commutative context-free languages is disproved using a counterexample. juha kortelainen succinct model semantics: a simple model for inclusive interpretations sei chun huizhu lu jonghoon chun a complete roundness classification procedure kurt mehlhorn thomas c. shermer chee k. yap big omega versus the wild functions paul m. b. vitányi lambert meertens characterizing linear size circuits in terms of privacy eyal kushilevitz rafail ostrovsky adi rosen asynchronous shared memory parallel computation n. nishimura toward a complete transformational toolkit for compilers pim is an equational logic designed to function as a "transformational toolkit" for compilers and other programming tools that analyze and manipulate imperative languages. it has been applied to such problems as program slicing, symbolic evaluation, conditional constant propagation, and dependence analysis. pim consists of the untyped lambda calculus extended with an algebraic data type that characterizes the behavior of lazy stores and generalized conditionals. a graph form of pim terms is by design closely related to several intermediate representations commonly used in optimizing compilers. in this article, we show that pim's core algebraic component, pimt, possesses a complete equational axiomatization (under the assumption of certain reasonable restrictions on term formation). this has the practical consequence of guaranteeing that every semantics-preserving transformation on a program representable in pimt can be derived by application of pimt rules. we systematically derive the complete pimt logic as the culmination of a sequence of increasingly powerful equational systems starting from a straightforward "interpreter" for closed pimt terms. this work is an intermediate step in a larger program to develop a set of well-founded tools for manipulation of imperative programs by compilers and other systems that perform program analysis. j. a. bergstra t. b. dinesh j. field j. heering optimal bounds for decision problems on the crcw pram we prove optimal (log n/log log n) lower bounds on the time for crcw pram's with polynomially bounded numbers of processors or memory cells to compute parity and a number of related problems. we also exhibit a strict time hierarchy of explicit boolean functions of n bits on such machines which holds up to (log n/log log n) time.
furthermore, we show that almost all boolean functions of n bits require log n − log log n + (1) time when the number of processors is at most polynomial in n. our bounds do not place restrictions on the uniformity of the algorithms nor on the instruction sets of the machines. p. beame j. hastad factorization over finitely generated fields this paper considers the problem of factoring polynomials over a variety of domains. we first describe the current methods of factoring polynomials over the integers, and extend them to the integers mod p. we then consider the problem of factoring over algebraic domains. having produced several negative results, showing that, if the domain is not properly specified, then the problem is insoluble, we then show that, for a properly specified finitely generated extension of the rationals or the integers mod p, the problem is soluble. we conclude by discussing the problems of factoring over algebraic closures. james h. davenport barry m. trager programming language foundations of computation theory harry mairson borel sets and circuit complexity it is shown that for every k, polynomial-size, depth-k boolean circuits are more powerful than polynomial-size, depth-(k − 1) boolean circuits. connections with a problem about borel sets and other questions are discussed. michael sipser the polynomial hierarchy and fragments of bounded arithmetic s r buss placement algorithms for hierarchical cooperative caching madhukar r. korupolu c. greg plaxton rajmohan rajaraman an optimal parallel algorithm for the maximal element problem (abstract) given a set (p1, p2, …, pn) of n points in a plane, a point pi dominates another point pj iff x(pi) ≥ x(pj) and y(pi) ≥ y(pj), where x(p) and y(p) denote the coordinates of a point p. our problem is to find the points which are dominated by no other point. these points are called maximal elements. in this abstract we suggest an (log n) parallel algorithm with (n) processors in the erew parallel model which is weaker than crew. (note that most parallel algorithms in the literature use the power of the crew pram model.) the main idea is to reduce the problem to the parallel prefix problem [2]. our algorithm runs as follows: first, from p = {p1, …, pn}, find a subset p' = {p1, …, pm}, where each point pi is dominated by no other point having the same x-coordinate as pi. this can be done in (log n) time with (n) processors in the erew model applying the parallel sort algorithm by [1]. next sort p' according to the x-coordinate of each point and store each y-coordinate in an array a in the increasing order of the corresponding x-coordinate. assume that the y-coordinate of pi is stored in a[i] for i = 1, 2, …, m. then for each a[i], find an index j such that i ≤ j and a[j] ≥ a[k] for each i ≤ k ≤ m. if i = j, then pi is one of the maximal elements. this can be done in (log n) time with (n) processors on the erew parallel model by applying a parallel prefix algorithm [2]. c. rhee s. k. dhall s. lakshmivarahan
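a small sequential sketch of the maximal-element computation described in the rhee–dhall–lakshmivarahan abstract above: sort by x-coordinate and keep exactly the points whose y-coordinate is at least every y-coordinate to their right, i.e. a suffix-maximum scan, which is the step their algorithm parallelizes with parallel prefix. the function name is invented for this example.

```python
def maximal_elements(points):
    """points: list of (x, y) pairs; return those dominated by no other point."""
    # sort by x ascending; ties on x are ordered so the larger y comes last
    pts = sorted(points)
    maxima, best_y = [], float("-inf")
    # scan right to left, maintaining the running suffix maximum of the y-coordinates
    for x, y in reversed(pts):
        if y >= best_y:            # no point with larger-or-equal x has a larger y
            maxima.append((x, y))
            best_y = y
    maxima.reverse()
    return maxima

if __name__ == "__main__":
    print(maximal_elements([(1, 5), (2, 3), (3, 4), (4, 1), (2, 6)]))
    # expected: [(2, 6), (3, 4), (4, 1)]
```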
constructing a perfect matching is in random nc r m karp e upfal a wigderson an algebra for data flow anomaly detection an algebra a is developed that is specialized for the detection of data flow anomalies by interpreting the regular expression for the paths in a program as an a expression. two methods are subsequently presented that use a but do not require the explicit computation of the regular expression for the paths. one method is based on the prime program decomposition. the other is based upon the iterative algorithms of global data flow analysis. in addition the use of the algebra to get better warning messages than just the detection of anomalies is presented. ira r. forman a simple realization of a parallel device recognizing regular trace languages this paper studies parallel devices recognizing trace languages. we introduce a concept of an asynchronous automaton with ε-moves and show that for a given regularly defined trace language there exists a simple version of an asynchronous automaton with ε-moves recognizing this language. ryszard janicki thomasz muldner definability in dynamic logic we study the expressive power of various versions of dynamic logic and compare them with each other as well as with standard languages in the logical literature. one version of dynamic logic is equivalent to the infinitary logic l^{ck}_{ω1ω}, but regular dynamic logic is strictly less expressive. in particular, the ordinals ω^ω and ω^ω·2 are indistinguishable by formulas of regular dynamic logic. albert r. meyer rohit parikh a time-randomness tradeoff for oblivious routing three parameters characterize the performance of a probabilistic algorithm: t, the runtime of the algorithm; q, the probability that the algorithm fails to complete the computation in the first t steps; and r, the amount of randomness used by the algorithm, measured by the entropy of its random source. we present a tight tradeoff between these three parameters for the problem of oblivious packet routing on n-vertex bounded-degree networks. we prove a (1 − q) log n/t − log q − (1) lower bound for the entropy of a random source of any oblivious packet routing algorithm that routes an arbitrary permutation in t steps with probability 1 − q. we show that this lower bound is almost optimal by proving the existence, for every e^3 log n ≤ t ≤ n^{1/2}, of an oblivious algorithm that terminates in t steps with probability 1 − q and uses (1 − q + (1)) log n/t − log q independent random bits. we complement this result with an explicit construction of a family of oblivious algorithms that use less than a factor of log n more random bits than the optimal algorithm achieving the same run-time. danny krizanc david peleg eli upfal an efficient algorithm for csg to b-rep conversion maged s. tawfik a parallel multi-operation scheduling problem with machine order constraints weizhen mao analytic derivation of comparisons in binary search timothy j. rolfe theory of neuromata a finite automaton---the so-called neuromaton, realized by a finite discrete recurrent neural network, working in parallel computation mode, is considered. both the size of neuromata (i.e., the number of neurons) and their descriptional complexity (i.e., the number of bits in the neuromaton representation) are studied. it is proved that a constant time delay of the neuromaton output does not play a role within a polynomial descriptional complexity. it is shown that any regular language given by a regular expression of length n is recognized by a neuromaton with (n) neurons. further, it is proved that this network size is, in the worst case, optimal. on the other hand, generally there is not an equivalent polynomial length regular expression for a given neuromaton. then, two specialized constructions of neural acceptors of the optimal descriptional complexity (n) for a single n-bit string recognition are described.
they both require o(n^{1/2}) neurons and either o(n) connections with constant weights or o(n^{1/2}) edges with larger weights. a hopfield condition, stating when a regular language is a hopfield language, is formulated. a construction of a hopfield neuromaton is presented for a regular language satisfying the hopfield condition. the class of hopfield languages is shown to be closed under union, intersection, concatenation and complement, and it is not closed under iteration. finally, the problem whether a regular language given by a neuromaton (or by a hopfield acceptor) is nonempty, is proved to be pspace-complete. as a consequence, the same result for a neuromaton equivalence problem is achieved. jiri sima jiri wiedermann typed representation of objects by functions a systematic representation of objects grouped into types by constructions similar to the composition of sets in mathematics is proposed. the representation is by lambda expressions, which supports the representation of objects from function spaces. the representation is related to a rather conventional language of type descriptions in a way that is believed to be new. ordinary control expressions (i.e., case- and let-expressions) are derived from the proposed representation. j. steensgaard-madsen towards scalable compositional analysis james c. corbett george s. avrunin minimal degrees for polynomial reducibilities the existence of minimal degrees is investigated for several polynomial reducibilities. it is shown that no set has minimal degree with respect to polynomial many-one or turing reducibility. this extends a result of ladner in which only recursive sets are considered. a polynomial reducibility ≤_{ht} is defined. this reducibility is a strengthening of polynomial turing reducibility, and its properties relate to the p = ? np question. for this new reducibility, a set of minimal degree is constructed under the assumption that p = np. however, the set constructed is nonrecursive, and it is shown that no recursive set is of minimal ≤_{ht} degree. steven homer efficient simulation of finite automata by neural nets let k(m) denote the smallest number with the property that every m-state finite automaton can be built as a neural net using k(m) or fewer neurons. a counting argument shows that k(m) is at least ((m log m)^{1/3}), and a construction shows that k(m) is at most o(m^{3/4}). the counting argument and the construction allow neural nets with arbitrarily complex local structure and thus may require neurons that themselves amount to complicated networks. mild, and in practical situations almost necessary, constraints on the local structure of the network give, again by a counting argument and a construction, lower and upper bounds for k(m) that are both linear in m. noga alon a. k. dewdney teunis j. ott program synthesis based on boyer-moore theorem proving techniques program synthesis based on theorem proving usually relies on resolution theorem proving techniques. we develop a deductive approach to program synthesis based on non-resolution theorem proving techniques -- boyer-moore techniques. it combines knowledge base, theorem proving, and program synthesis in a systematic way of knowledge development, and can be used in program construction, program verification, and program optimization. sun yong-qiang lu ru-zhan bi hua tradeoffs between communication and space this paper initiates the study of communication complexity when the processors have limited work space.
the following tradeoffs between the number c of communication steps and the space s are proved: for multiplying two n × n matrices in the arithmetic model with two-way communication, cs = (n). for convolution of two degree n polynomials in the arithmetic model with two-way communication, cs = (n^2). for multiplying an n × n matrix by an n-vector in the boolean model with one-way communication, cs = (n^2). in contrast, the discrete fourier transform and sorting can be accomplished in (n) communication steps and (log n) space simultaneously, and the search problems of karchmer and wigderson associated with any language in nc^k can be solved in (log^k n) communication steps and (log^k n) space simultaneously. t. lam p. tiwari m. tompa a nonlinear lower bound for random-access machines under logarithmic cost for on-line random-access machines under logarithmic cost, the simple task of storing arbitrary binary inputs has nonlinear complexity. even if all kinds of powerful internal operations are admitted and reading of storage locations is free of charge, just successively changing the storage contents for properly storing arbitrary n-bit inputs requires an average cost of order n · log* n. arnold schönhage verification of a production cell using an automatic verification environment for vhdl ronald herrmann thomas reielts a framework for the recursive definition of data structures jean-louis giavitto parallelism in sequential functional languages guy blelloch john greiner pp is closed under intersection richard beigel nick reingold daniel spielman on playing "twenty questions" with a liar motivated by the problem of searching in the presence of adversarial errors, we consider a version of the game "twenty questions" played on the set {0,…,n-1} where the player giving answers may lie in her answers. the questioner is allowed q questions and the responder may lie in up to [rq] of the answers, for some fixed and previously known fraction r. under three different models of this game and for two different question classes, we give precise conditions (i.e., tight bounds on r and, in most cases, optimal bounds on q) under which the questioner has a winning strategy in the game. aditi dhagat peter gács peter winkler parallel prefix computation richard e. ladner michael j. fischer lower bounds for algebraic computation trees a topological method is given for obtaining lower bounds for the height of algebraic computation trees, and algebraic decision trees. using this method we are able to generalize, and present in a uniform and easy way, almost all the known nonlinear lower bounds for algebraic computations. applying the method to decision trees we extend all the apparently known lower bounds for linear decision trees to bounded degree algebraic decision trees, thus answering the open questions raised by steele and yao [20]. we also show how this new method can be used to establish lower bounds on the complexity of constructions with ruler and compass in plane euclidean geometry. michael ben-or algebraic approaches to nondeterminism - an overview michal walicki sigurd meldal
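the ladner–fischer entry above concerns parallel prefix computation; as a rough illustration of the idea (the recursive-doubling scan pattern, not their circuit construction, and using o(n log n) work rather than the work-optimal variant), here is a sketch in which every round's updates are mutually independent and could be executed in parallel. the function name is invented for this example.

```python
def prefix_sums(values):
    """inclusive prefix sums via recursive doubling: after round k, position i
    holds the sum of the last 2^k inputs ending at i; each round's updates read
    only the previous round's array, so a pram could apply them concurrently."""
    a = list(values)
    n, step = len(a), 1
    while step < n:
        prev = a[:]                      # snapshot of the previous round
        for i in range(step, n):         # these updates are independent of one another
            a[i] = prev[i - step] + prev[i]
        step *= 2
    return a

if __name__ == "__main__":
    print(prefix_sums([3, 1, 4, 1, 5, 9, 2, 6]))
    # expected: [3, 4, 8, 9, 14, 23, 25, 31]
```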
mechanizing unity in isabelle unity is an abstract formalism for proving properties of concurrent systems, which typically are expressed using guarded assignments [chandy and misra 1988]. unity has been mechanized in higher-order logic using isabelle, a proof assistant. safety and progress primitives, their weak forms (for the substitution axiom), and the program composition operator (union) have been formalized. to give a feel for the concrete syntax, this article presents a few extracts from the isabelle definitions and proofs. it discusses a small example, two-process mutual exclusion. a mechanical theory of unions of programs supports a degree of compositional reasoning. original work on extending program states is presented and then illustrated through a simple example involving an array of processes. lawrence c. paulson learnability of description logics this paper considers the learnability of subsets of first-order logic. prior work has established two boundaries of learnability: haussler [1989] has shown that conjunctions in first-order logic cannot be learned in the valiant model, even if the form of the conjunction is highly restricted; on the other hand, valiant [1984] has shown that propositional conjunctions are learnable. in this paper, we study the learnability of the restricted first-order logics known as description logics. description logics are also subsets of predicate calculus, but are expressed using a different syntax, allowing a different set of syntactic restrictions to be explored. in this paper, we first define a simple description logic, summarize some results on its expressive power, and then analyze its learnability. it is shown that the full logic cannot be tractably learned; however, syntactic restrictions that enable tractable learning exist. the learnability results hold even if the alphabets of primitive classes and roles (over which descriptions are constructed) are infinite; our positive result thus generalizes not only the result of valiant [1984] on learning monomials to learning concepts in our (conjunctive) first order language, but also the result of blum [1990] on learning monomials over infinite attribute spaces. william w. cohen haym hirsh quicksort algorithms with an early exit for sorted subfiles the quicksort algorithm is known to be one of the most efficient internal sorting techniques. quicksort has received considerable attention almost from the moment of its invention. this paper reviews some of the important improvements to quicksort that have appeared in the literature. historically, the improvements to quicksort have been in one of the following areas: (1) algorithms for determining a better pivot value, (2) algorithms that consider the size of the generated subfiles, and (3) various schemes used to partition the file. despite improvements in these areas, the worst case situation of (n^2) comparisons for sorted or nearly sorted files still remains. this paper proposes a fourth research area for quicksort improvement designed to remove the worst case behavior due to sorted or nearly sorted files. during the partitioning process (using any scheme) determine if the left and right subfiles are in sorted order. this is a minor but very effective modification to the quicksort algorithm. a new quicksort algorithm, qsorte, is presented that provides an early exit for sorted subfiles. test results on randomly generated lists, nearly sorted lists, sorted lists, and sorted lists in reverse order are given for quicksort, quickersort, bsort, qsorte and several other algorithms. results show qsorte performs just as well as quicksort for random files and in addition has (n) comparisons for sorted or nearly sorted files and (n) comparisons for sorted or nearly sorted files in reverse. roger l. wainwright
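a minimal sketch of the early-exit idea from the wainwright abstract above: before partitioning a subfile, check whether it is already sorted (ascending or descending) and, if so, finish it immediately instead of recursing. the partition scheme and names below are generic illustrations, not the paper's exact qsorte algorithm.

```python
def qsorte(a, lo=0, hi=None):
    """quicksort with an early exit for sorted (or reverse-sorted) subfiles."""
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        if all(a[i] <= a[i + 1] for i in range(lo, hi)):
            return                                   # subfile already sorted: early exit
        if all(a[i] >= a[i + 1] for i in range(lo, hi)):
            a[lo:hi + 1] = a[lo:hi + 1][::-1]        # reverse-sorted subfile: reverse and exit
            return
        p = _partition(a, lo, hi)
        if p - lo < hi - p:                          # recurse on the smaller side first
            qsorte(a, lo, p - 1)
            lo = p + 1
        else:
            qsorte(a, p + 1, hi)
            hi = p - 1

def _partition(a, lo, hi):
    """lomuto partition with a median-of-three pivot."""
    mid = (lo + hi) // 2
    if a[lo] > a[mid]:
        a[lo], a[mid] = a[mid], a[lo]
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    if a[mid] > a[hi]:
        a[mid], a[hi] = a[hi], a[mid]
    a[mid], a[hi] = a[hi], a[mid]                    # park the median at position hi as pivot
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

if __name__ == "__main__":
    data = [5, 3, 8, 1, 9, 2, 7]
    qsorte(data)
    print(data)                                      # [1, 2, 3, 5, 7, 8, 9]
    worst = list(range(1000, 0, -1))                 # reverse-sorted input
    qsorte(worst)
    print(worst[0], worst[-1])                       # 1 1000
```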
deriving algorithms from type inference systems: application to strictness analysis the role of non-standard type inference in static program analysis has been much studied recently. early work emphasised the efficiency of type inference algorithms and paid little attention to the correctness of the inference system. recently more powerful inference systems have been investigated but the connection with efficient inference algorithms has been obscured. the contribution of this paper is twofold: first we show how to transform a program logic into an algorithm and, second, we introduce the notion of lazy types and show how to derive an efficient algorithm for strictness analysis. chris hankin daniel le metayer impossibility results for asynchronous pram (extended abstract) maurice herlihy 2d-bubblesorting in average time o(√n lg n) doug ierardi on deletion in delaunay triangulations olivier devillers the linear-array conjecture in communication complexity is false eyal kushilevitz nathan linial rafail ostrovsky an o(n^2 log n) time algorithm for the minmax angle triangulation we show that a triangulation of a set of n points in the plane that minimizes the maximum angle can be computed in time o(n^2 log n) and space o(n). in the same amount of time and space we can also handle the constrained case where edges are prescribed. the algorithm iteratively improves an arbitrary initial triangulation and is fairly easy to implement. herbert edelsbrunner tiow seng tan roman waupotitsch alternation and the power of nondeterminism while nondeterminism is widely believed to be more powerful than determinism in various contexts (the most famous being the conjecture that np strictly contains p), no proof of the added power of nondeterminism is available for any significant issue. the weaker conjecture (than np strictly contains p) that there is a language accepted by a nondeterministic linear time bounded multitape turing machine that cannot be accepted by a deterministic linear time bounded multi-tape tm still seems quite hard (paul 1982). the aim of this paper is to show how the existence of the polynomial-time hierarchy of meyer and stockmeyer (1972) and the related concept of alternation (chandra, kozen and stockmeyer (1981)) can be exploited to prove the power of nondeterminism over determinism in some contexts. it is hoped that this approach may be useful in proving stronger results. ravi kannan using the groebner basis algorithm to find proofs of unsatisfiability matthew clegg jeffery edmonds russell impagliazzo safe fusion of functional expressions large functional programs are often constructed by decomposing each big task into smaller tasks which can be performed by simpler functions. this hierarchical style of developing programs has been found to improve programmers' productivity because smaller functions are easier to construct and reuse. however, programs written in this way tend to be less efficient. unnecessary intermediate data structures may be created. more function invocations may be required. to reduce such performance penalties, wadler proposed a transformation algorithm, called deforestation, which could automatically fuse certain composed expressions together in order to eliminate intermediate tree-like data structures. however, his technique is only applicable to a subset of first-order expressions. this paper will generalise the deforestation technique to make it applicable to all first-order and higher-order functional programs. our generalisation is made possible by the adoption of a model for safe fusion which views each function as a producer and its parameters as consumers. through this model, static program properties are proposed to classify producers and consumers as either safe or unsafe. this classification is used to identify sub-terms that can be safely fused/eliminated. we present the generalised transformation algorithm as a set of syntax-directed rewrite rules, illustrate it with examples, and provide an outline of its termination proof. wei-ngan chin
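a toy illustration of the producer–consumer fusion idea from the chin abstract above (not the paper's rewrite rules): composing two list-processing functions normally materializes an intermediate list, while the fused version computes the same result in a single pass with no intermediate structure. the function names are invented for this example.

```python
def squares(xs):
    """producer: builds an intermediate list of squares."""
    return [x * x for x in xs]

def sum_of_evens(ys):
    """consumer: builds another intermediate list before summing."""
    return sum([y for y in ys if y % 2 == 0])

def unfused(xs):
    # two passes and one intermediate list
    return sum_of_evens(squares(xs))

def fused(xs):
    # the same pipeline after fusion: one pass, no intermediate list
    total = 0
    for x in xs:
        y = x * x
        if y % 2 == 0:
            total += y
    return total

if __name__ == "__main__":
    data = list(range(10))
    assert unfused(data) == fused(data) == 120
    print(fused(data))
```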
a logarithmic time sort for linear size networks a randomized algorithm that sorts on an n node network with constant valence in o(log n) time is given. more particularly, the algorithm sorts n items on an n-node cube-connected cycles graph, and, for some constant k, for all large enough α, it terminates within kα log n time with probability at least 1 − n^{−α}. john h. reif leslie g. valiant extending formal language hierarchies to higher dimensions dora giammarresi antonio restivo embedded implicational dependencies and their inference problem it is shown that the general inference problem for embedded implicational dependencies (eids) is undecidable. for the more important case of finite inference (i.e., inference for finite data bases), the problem is not even recursively enumerable (r.e.); rather, it is complete in co-r.e. these results hold even for typed eids without equality, as well as for (untyped) template dependencies. the case for typed template dependencies remains open. the complexity of the inference problem for full dependencies has also been characterized - it is complete in exponential time for full implicational dependencies, and even for full typed template dependencies. ashok k. chandra harry r. lewis johann a. makowsky primitive recursion without implicit predecessor m. j. fischer r. p. fischer r. beigel sparsity structure and gaussian elimination i. s. duff a. m. erisman c. w. gear j. k. reid formal justification of a proof system for communicating sequential processes krzysztof r. apt two-way string-matching maxime crochemore dominique perrin visualization of an algorithm for convexifying a simple planar polygon with rigid motions elif tosun deciding combinations of theories robert e. shostak a confluent calculus of macro expansion and evaluation syntactic abbreviations or macros provide a powerful tool to increase the syntactic expressiveness of programming languages. the expansion of these abbreviations can be modeled with substitutions. this paper presents an operational semantics of macro expansions and evaluation where substitutions are handled explicitly. the semantics is defined in terms of a confluent, simple, and intuitive set of rewriting rules. the resulting semantics is also a basis for developing correct implementations. ana bove laura arbilla quasi-optimal upper bounds for simplex range searching and new zone theorems bernard chazelle micha sharir emo welzl anytime, anywhere the ambient calculus is a process calculus where processes may reside within a hierarchy of locations and modify it. the purpose of the calculus is to study mobility, which is seen as the change of spatial configurations over time. in order to describe properties of mobile computations we devise a modal logic that can talk about space as well as time, and that has the ambient calculus as a model. luca cardelli andrew d. gordon
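a drastically simplified toy model of the spatial part of the ambient calculus from the cardelli–gordon entry above: a tree of named ambients in which an ambient can move into a sibling or out of its parent, mimicking the in and out capabilities. it omits the open capability, process terms, and the modal logic entirely; the class and method names are invented for this example.

```python
class Ambient:
    """a named location containing child ambients (a toy spatial model, not the full calculus)."""
    def __init__(self, name, children=()):
        self.name, self.children, self.parent = name, list(children), None
        for child in self.children:
            child.parent = self

    def find(self, name):
        if self.name == name:
            return self
        for child in self.children:
            hit = child.find(name)
            if hit:
                return hit
        return None

    def enter(self, sibling_name):
        """analogue of 'in m': move this ambient inside a sibling named m."""
        target = next(c for c in self.parent.children if c.name == sibling_name)
        self.parent.children.remove(self)
        target.children.append(self)
        self.parent = target

    def exit(self):
        """analogue of 'out m': move this ambient out of its parent, next to it."""
        old_parent = self.parent
        old_parent.children.remove(self)
        old_parent.parent.children.append(self)
        self.parent = old_parent.parent

    def show(self, depth=0):
        return "  " * depth + self.name + "\n" + "".join(c.show(depth + 1) for c in self.children)

if __name__ == "__main__":
    # agent a sits beside firewall f inside net; it enters f, then leaves again
    root = Ambient("net", [Ambient("a"), Ambient("f", [Ambient("data")])])
    root.find("a").enter("f")
    print(root.show())
    root.find("a").exit()
    print(root.show())
```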
an Ω(√(log log n)) lower bound for routing in optical networks optical communication is likely to significantly speed up parallel computation because the vast bandwidth of the optical medium can be divided to produce communication networks of very high degree. however, the problem of contention in high-degree networks makes the routing problem in these networks theoretically (and practically) difficult. in this paper we examine valiant's h-relation routing problem, which is a fundamental problem in the theory of parallel computing. the h-relation routing problem arises both in the direct implementation of specific parallel algorithms on distributed-memory machines and in the general simulation of shared memory models such as the pram on distributed-memory machines. in an h-relation routing problem each processor has up to h messages that it wishes to send to other processors and each processor is the destination of at most h messages. we present a lower bound for routing an h-relation (for any h > 1) on a complete optical network of size n. our lower bound applies to any randomized distributed algorithm for this task. specifically, we show that the expected number of communication steps required to route an arbitrary h-relation is Ω(h + √(log log n)). this is the first known lower bound for this problem which does not restrict the class of algorithms under consideration. leslie ann goldberg mark jerrum philip d. mackenzie simple, fast, and practical non-blocking and blocking concurrent queue algorithms maged m. michael michael l. scott linearity and the pi-calculus naoki kobayashi benjamin c. pierce david n. turner putting type annotations to work martin odersky konstantin läufer automatic verification of scheduling results in high-level synthesis hans eveking holger hinrichsen gerd ritter dynamic point location in arrangements of hyperplanes ketan mulmuley sandeep sen soft typing with conditional types we present a simple and powerful type inference method for dynamically typed languages where no type information is supplied by the user. type inference is reduced to the problem of solvability of a system of type inclusion constraints over a type language that includes function types, constructor types, union, intersection, and recursive types, and conditional types. conditional types enable us to analyze control flow using type inference, thus facilitating computation of accurate types. we demonstrate the power and practicality of the method with examples and performance results from an implementation. alexander aiken edward l. wimmers t. k. lakshman a practical evaluation of kinetic data structures julien basch leonidas j. guibas craig d. silverstein li zhang space-bounded probabilistic turing machine complexity classes are closed under complement (preliminary version) for tape constructible functions s(n) ≥ log n, if a language l is accepted by an s(n) tape bounded probabilistic turing machine, then there is an s(n) tape bounded probabilistic turing machine that accepts the complement of l. janos simon an internal semantics for modal logic in kripke semantics for modal logic, "possible worlds" and the possibility relation are both primitive notions. this has both technical and conceptual shortcomings. from a technical point of view, the mathematics associated with kripke semantics is often quite complicated.
from a conceptual point of view, it is not clear how to use kripke structures to model knowledge and belief, where one wants a clearer understanding of the notions that are primitive in kripke semantics. we introduce modal structures as models for modal logic. we use the idea of possible worlds, but by directly describing the "internal semantics" of each possible world. it is much easier to study the standard logical questions, such as completeness, decidability, and compactness, using modal structures. furthermore, modal structures offer a much more intuitive approach to modelling knowledge and belief. r fagin m y vardi intersection of regular languages and state complexity jean-camille birget bounds on the greedy routing algorithm for array networks we extend previous work on greedy routing for array networks by providing bounds on the average delay and the average number of packets in the system. we analyze the dynamic routing problem, where packets are generated at each node according to a poisson process. each packet is sent to a destination chosen uniformly at random. packets are routed greedily, first moving to the correct column and then to the correct row. packets require unit time to travel across a directed edge between nodes; only a single packet can cross an edge at any given time, and packets waiting for an edge are buffered. our bounds are based on comparisons with computationally more simple queueing networks, and the methods used are generally applicable to other network systems. a primary contribution of the paper is a new lower bound technique that also improves on the previous lower bounds by stamoulis and tsitsiklis for heavily loaded hypercube networks. [11] on heavily loaded array networks, our lower and upper bounds differ by only a small constant factor. we further examine extensions of the problem where our methods prove useful. for example, we consider variations where edges can have different transmission rates or the destination distribution is non-uniform. in particular, we study to what extent optimally configured array networks outperform the standard array network. michael mitzenmacher computing subsets of equivalence classes for large fsms gianpiero cabodi stefano quer paolo camurati efficient partial enumeration for timing analysis of asynchronous systems eric verlind gjalt de jong bill lin nodes and arcs, the ideal structural primitives for language? for a language to be reasonably small while providing referencing facilities which are adequate for direct natural expression, it is necessary that all operands be built from a small number of primitive structural types. for the referencing capabilities to be used in a clearly readable way the structure of the data must accurately reflect the structure of the conceptual objects being modeled without representational distortions. for objects from a small set of primitive structural types to be combined in a way that accurately reflects the structure of many kinds of object it is necessary that the structural primitives have a minimum of internal structure. these considerations and others indicate that nodes and arcs of directed graphs are probably a permanent optimum choice of structural primitives for a comprehensive programming and system description language. they provide a highly "plastic material" which can be "shaped and hardened" by declarations to accurately model the structure of many kinds of parts of different problem environments. 
current major languages emphasize function but distort the structure in system descriptions. accurate structural description simplifies application programs by eliminating irrelevant representational conventions and providing higher level, declarative information about the problem environment enabling the computer to be more helpful. it provides a basis for combining the advantages of many languages, leading to simpler and more coherently designed software systems. accurate description of preexisting data and systems can ease many interfacing and compatibility problems. the dawn language based on the above ideas is described briefly. it is a general language with a fully integrated semantic data model for data base and system control. a number of interdependencies among various kinds of language improvement are also described. the process of eliminating representational irrelevancies can probably be completed soon since those remaining are becoming sparse and objectively identifiable when similar language constructs are compared. edward s. lowry institutions: abstract model theory for specification and programming there is a population explosion among the logical systems used in computing science. examples include first-order logic, equational logic, horn- clause logic, higher-order logic, infinitary logic, dynamic logic, intuitionistic logic, order-sorted logic, and temporal logic; moreover, there is a tendency for each theorem prover to have its own idiosyncratic logical system. the concept of institution is introduced to formalize the informal notion of "logical system." the major requirement is that there is a satisfaction relation between models and sentences that is consistent under change of notation. institutions enable abstracting away from syntactic and semantic detail when working on language structure "in-the-large"; for example, we can define language features for building large logical system. this applies to both specification languages and programming languages. institutions also have applications to such areas as database theory and the semantics of artificial and natural languages. a first main result of this paper says that any institution such that signatures (which define notation) can be glued together, also allows gluing together theories (which are just collections of sentences over a fixed signature). a second main result considers when theory structuring is preserved by institution morphisms. a third main result gives conditions under which it is sound to use a theorem prover for one institution on theories from another. a fourth main result shows how to extend institutions so that their theories may include, in addition to the original sentences, various kinds of constraint that are useful for defining abstract data types, including both "data" and "hierarchy" constraints. further results show how to define institutions that allow sentences and constraints from two or more institutions. all our general results apply to such "duplex" and "multiplex" institutions. joseph a. goguen rod m. burstall on a fast integer square root algorithm timothy j. rolfe on the degree of boolean functions as real polynomials every boolean function may be represented as a real polynomial. in this paper we characterize the degree of this polynomial in terms of certain combinatorial properties of the boolean function. our first result is a tight lower bound of (log n) on the degree needed to represent any boolean function that depends on n variables. 
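as a small illustration of the representation this abstract is about (every boolean function has a unique multilinear real polynomial agreeing with it on {0,1}^n, and the degree of that polynomial is the quantity being bounded), the sketch below computes that representation by inclusion-exclusion over subsets; it is not code from the paper, and the helper names are hypothetical.

```python
from itertools import combinations

def multilinear_coeffs(f, n):
    """coefficients c[S] of the unique multilinear polynomial
    p(x) = sum_S c[S] * prod_{i in S} x_i agreeing with f on {0,1}^n,
    obtained by inclusion-exclusion: c[S] = sum_{T subset S} (-1)^{|S|-|T|} f(1_T)."""
    coeffs = {}
    for k in range(n + 1):
        for S in combinations(range(n), k):
            total = 0
            for j in range(k + 1):
                for T in combinations(S, j):
                    x = [1 if i in T else 0 for i in range(n)]
                    total += (-1) ** (k - j) * f(x)
            if total != 0:
                coeffs[S] = total
    return coeffs

def degree(f, n):
    """degree of the multilinear polynomial representing f."""
    return max((len(S) for S in multilinear_coeffs(f, n)), default=0)

# the OR of 3 variables has degree 3; parity of 3 variables also has degree 3.
print(degree(lambda x: int(any(x)), 3))   # 3
print(degree(lambda x: sum(x) % 2, 3))    # 3
```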
our second result states that for every boolean function f the following measures are all polynomially related: (1) the decision tree complexity of f; (2) the degree of the polynomial representing f; (3) the smallest degree of a polynomial approximating f in the l∞ norm. noam nisan mario szegedy provably-secure programming languages for remote evaluation dennis volpano on the power of one-way communication in this paper, a very simple model of parallel computation is considered, and the question of how restricting the flow of data to be one way compares with two-way flow is studied. it is shown that the one-way version is surprisingly very powerful in that it can solve problems that seemingly require two-way communication. whether or not one-way communication is strictly weaker than two-way is an open problem, although the conjecture in this paper is in the positive. it is shown, however, that proving this conjecture is at least as hard as some well-known open problems in complexity theory. jik h. chang oscar h. ibarra anastasios vergis computing the minimum hausdorff distance for point sets under translation we consider the problem of computing a translation that minimizes the hausdorff distance between two sets of points. for points in ℝ^1 in the worst case there are Θ(mn) translations at which the hausdorff distance is a local minimum, where m is the number of points in one set and n is the number in the other. for points in ℝ^2 there are Θ(mn(m + n)) such local minima. we show how to compute the minimal hausdorff distance in time o(mn log mn) for points in ℝ^1 and in time o(m^2n^2α(mn)) for points in ℝ^2. the results for the one-dimensional case are applied to the problem of comparing polygons under general affine transformations, where we extend the recent results of arkin et al. on polygon resemblance under rigid body motion. the two-dimensional case is closely related to the problem of finding an approximate congruence between two point sets under translation in the plane, as considered by alt et al. daniel p. huttenlocher klara kedem time-space-optimal string matching (preliminary report) in this paper we describe a new linear-time string-matching algorithm requiring neither dynamic storage allocation nor other high-level capabilities. the algorithm can be implemented to run in linear time even on a six-head two-way finite automaton. moreover, the automaton requires only "{=}-branching" [1]. (decisions depend on which of the six scanned pattern or text symbols and positions are the same, but not on the particular symbols or how many symbols there are. hence the same algorithm works even for an infinite alphabet.) a "real-time" implementation is possible on such a multihead finite automaton with a few more heads. zvi galil joel seiferas a simpler construction for showing the intrinsically exponential complexity of the circularity problem for attribute grammars mehdi jazayeri speedups of deterministic machines by synchronous parallel machines this paper presents the new speedups dtime(t) ⊆ atime(t/log t) and dtime(t) ⊆ pram-time(√t). these improve the results of hopcroft, paul, and valiant that dtime(t) ⊆ dspace(t/log t), and of paul and reischuk that dtime(t) ⊆ atime(t log log t/log t). the new approach unifies not only these two previous results, but also the result of paterson and valiant that size(t) ⊆ depth(o(t/log t)). patrick w.
dymond martin tompa on the state complexity of intersection of regular languages sheng yu qingyu zhuang a framework for combining analysis and verification we present a general framework for combining program verification and program analysis. this framework enhances program analysis because it takes advantage of user assertions, and it enhances program verification because assertions can be refined using automatic program analysis. both enhancements in general produce a better way of reasoning about programs than using verification techniques alone or analysis techniques alone. more importantly, the combination is better than simply running the verification and analysis in isolation and then combining the results at the last step. in other words, our framework explores synergistic interaction between verification and analysis. in this paper, we start with a representation of a program, user assertions, and a given analyzer for the program. the framework we describe induces an algorithm which exploits the assertions and the analyzer to produce a generally more accurate analysis. further, it has some important features: it is flexible: any number of assertions can be used anywhere; it is open: it can employ an arbitrary analyzer; it is modular: we reason with conditional correctness of assertions; it is incremental: it can be tuned for the accuracy/efficiency tradeoff. nevin heintze joxan jaffar r ambiguity in context-free grammars bruce s. n. cheung complex properties of grammars fred g. abramson yuri breitbart forbes d. lewis the competitiveness of on-line assignments consider the on-line problem where a number of servers are ready to provide service to a set of customers. each customer's job can be handled by any of a subset of the servers. customers arrive one-by- one and the problem is to assign each customer to an appropriate server in a manner that will balance the load on the servers. this problem can be modeled in a natural way by a bipartite graph where the vertices of one side (customers) appear one at a time and the vertices of the other side (servers) are known in advance. we derive tight bounds on the competitive ratio in both deterministic and randomized cases. let n denote the number of servers. in the deterministic case we provide an on-line algorithm that achieves a competitive ratio of k = [log2 n] (up to an additive 1) and prove that this is the best competitive ratio that can be achieved by any deterministic on-line algorithm. in a similar way we prove that the competitive ratio for the randomized case is k=ln(n) (up to an additive 1). we conclude that for this problem, randomized algorithms differ from deterministic ones by precisely a constant factor. yossi azar joseph naor raphael rom pcp characterizations of np: towards a polynomially-small error-probability irit dinur eldar fischer guy kindler ran raz shmuel safra equational theories and database constraints we present a novel way to formulate database dependencies as sentences of first-order logic, using equational statements instead of horn clauses. dependency implication is directly reduced to equational implication. our approach is powerful enough to express functional and inclusion dependencies, which are the most common database constraints. we present a new proof procedure for these dependencies. we use our equational formulation to derive new upper and lower bounds for the complexity of their implication problems. 
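a brief aside on the implication problem mentioned just above: for functional dependencies alone, implication is usually decided by the textbook attribute-closure procedure sketched below; this is not the paper's equational proof procedure, and the names are hypothetical.

```python
def closure(attrs, fds):
    """compute the closure of a set of attributes under functional
    dependencies given as (lhs, rhs) pairs of frozensets."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def implies(fds, candidate):
    """do the given fds imply the functional dependency candidate = (lhs, rhs)?"""
    lhs, rhs = candidate
    return rhs <= closure(lhs, fds)

fds = [(frozenset("a"), frozenset("b")), (frozenset("b"), frozenset("c"))]
print(implies(fds, (frozenset("a"), frozenset("c"))))  # True: a -> b and b -> c entail a -> c
```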
s s cosmadakis p c kanellakis a complexity theory based on boolean algebra a projection of a boolean function is a function obtained by substituting for each of its variables a variable, the negation of a variable, or a constant. reducibilities among computational problems under this relation of projection are considered. it is shown that much of what is of everyday relevance in turing-machine-based complexity theory can be replicated easily and naturally in this elementary framework. finer distinctions about the computational relationships among natural problems can be made than in previous formulations and some negative results are proved. s. skyum l. g. valiant on the church-rosser property for the direct sum of term rewriting systems the direct sum of two term rewriting systems is the union of systems having disjoint sets of function symbols. it is shown that if two term rewriting systems both have the chruch-rosser property, then the direct sum of these systems also has this property. yoshihito toyama lower bounds for randomized exclusive write prams philip d. mackenzie on the decidability of grammar problems h. b. hunt countable nondeterminism and random assignment four semantics for a small programming language involving unbounded (but countable) nondeterminism are provided. these comprise an operational semantics, two state transformation semantics based on the egli-milner and smyth orders, respectively, and a weakest precondition semantics. their equivalence is proved. a hoare-like proof system for total correctness is also introduced and its soundness and completeness in an appropriate sense are shown. finally, the recursion theoretic complexity of the notions introduced is studied. admission of countable nondeterminism results in a lack of continuity of various semantic functions, and this is shown to be necessary for any semantics satisfying appropriate conditions. in proofs of total correctness, one resorts to the use of (countable) ordinals, and it is shown that all recursive ordinals are needed. k. r. apt g. d. plotkin constructing 3-d discrete medial axis xinhua yu john a. goldak lingxian dong a model and temporal proof system for networks of processes a model and a sound and complete proof system for networks of processes in which component processes communicate exclusively through messages is given. the model, an extension of the trace model, can describe both synchronous and asynchronous networks. the proof system uses temporal-logic assertions on sequences of observations --- a generalization of traces. the use of observations (traces) makes the proof system simple, compositional and modular, since internal details can be hidden. the expressive power of temporal logic makes it possible to prove temporal properties (safety, liveness, precedence, etc.) in the system. the proof system is language- independent and works for both synchronous and asynchronous networks. van nguyen david gries susan owicki parametric pattern router this paper presents a new pattern router based on an enumeration scheme which sequentially produces alternative paths for a pin pair. first, we represent a pattern or shape of a route by a sequence of symbols representing directions of constituent line segments. then, we investigate a condition that a sequence of symbols represents a valid pattern. we also count all valid patterns. given two points, we can specify any route between them by a pattern of the route and lengths of segments. 
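the pattern-plus-lengths encoding just described can be sketched in a few lines: a route is a sequence of direction symbols together with a length for each segment, and a candidate route is valid for a pin pair when it ends at the target pin. the sketch below is only this encoding, not the paper's enumeration scheme, and its names are hypothetical.

```python
# direction symbols for rectilinear wiring: east, west, north, south.
STEPS = {"e": (1, 0), "w": (-1, 0), "n": (0, 1), "s": (0, -1)}

def follow_pattern(start, pattern, lengths):
    """trace a route given a pattern (sequence of direction symbols) and the
    length of each constituent segment; return the visited corner points."""
    assert len(pattern) == len(lengths)
    x, y = start
    points = [(x, y)]
    for sym, length in zip(pattern, lengths):
        dx, dy = STEPS[sym]
        x, y = x + dx * length, y + dy * length
        points.append((x, y))
    return points

def connects(start, goal, pattern, lengths):
    """does this pattern, with these segment lengths, join the two pins?"""
    return follow_pattern(start, pattern, lengths)[-1] == goal

# the pattern "ene" with lengths (3, 2, 4) routes pin (0, 0) to pin (7, 2).
print(connects((0, 0), (7, 2), "ene", (3, 2, 4)))  # True
```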
if we vary their lengths as parameters, then we can generate various routes. therefore, the parametric pattern router with all possible patterns can always find a path between two points if one exists. the user of the router may define a set of admissible routing patterns according to circumstances. tetsuo asano modular resetting of synchronous data-flow programs gregoire hamon marc pouzert space-time tradeoffs and first order problems in a model of programs we introduce a model of programs for comparison-based problems. this model gives a measure of space usage and is "uniform". we first obtain upper and lower bounds on the selection problems which demonstrate the tradeoffs between time and space. we next introduce the class of first order problems and characterize them semantically. a surprisingly simple classification of first order problems into three complexity classes is shown. finally we extend the first order problems to the weak second order problems and show that these can be solved in polynomial time by programs in our model augmented with push-down stores. c. k. yap constructing nonresidues in finite fields and the extended riemann hypothesis johannes buchmann victor shoup the analysis of programming structure the standard course in theory of computation introduces students to turing machines and computability theory. this model prescribes what _can_ be computed, and what _cannot_ be computed, but the negative results have far more consequences. to take the common example, suppose an operating systems designer wants to determine whether or not a program will halt given enough memory or other resources. even a turing machine program cannot be designed to solve this problem---and turing machines have far more memory than any physical computer. the negative results of computability theory are also _robust_ (a principle enshrined as church's thesis): since many other models of computation, including λ-calculus, post systems, and -recursive functions, compute the same class of functions on the natural numbers, negative results from one description apply to all other descriptions.but the discipline of programming and the architecture of modern computers impose other constraints on what can be computed. the constraints are ubiquitous. for example, a combination of hardware and software in operating systems prevents programs from manipulating protected data structures except through the system interface. in programming languages, there are programs that "cannot be written," _e.g.,_ a sort procedure in pascal that works on arrays of any size. in databases, there is no datalog program to calculate the parity of a relation (see [1]). each of these settings involves a uniprocessor machine, but the constraints become even more pronounced in distributed systems: for instance, there is no mutual exclusion protocol for _n_ processors using fewer than _n_ atomic read/write registers [5]. all of these problems are computable in turing's sense: one can encode each of these problems as computation over the natural numbers, and one can write programs to solve the problems. so in what sense is church's thesis applicable? it is important to remember that computability theory only describes properties of the set of computable functions on the natural numbers (although there have been attempts to extend computability theory and complexity theory to higher-order functions; see, _e.g.,_ [13, 12, 20].) 
if one adopts computability theory as the _only_ theory of computation, one is naturally forced to encode other forms of computation as functions on the natural numbers. alan perlis' phrase "turing tarpit" highlights this potential misuse of computability theory: the encoding of computation into one framework forces many relevant distinctions to become lost. any attempt to explain other computing constraints must necessarily look for theories beyond computability theory. semantics aims to fill this niche: it is the mathematical analysis and synthesis of _programming structures._ the definition is admittedly broad and not historically based: semantics was originally a means of describing programming languages, and the definition covers areas not usually called "semantics." this essay attempts to flesh out this definition of semantics with examples, comparisons, and sources of theories. while most of the ideas will be familiar to the practicing semanticist, the perspective may be helpful to those in and out of the field. john c. mitchell jon g. riecke a pivoting algorithm for convex hulls and vertex enumeration of arrangements and polyhedra david avis komei fukuda modular competitiveness for distributed algorithms james aspnes orli waarts (sm)^2-ii: a new version of the sparse matrix solving machine hideharu amano taisuke boku tomohiro kudoh hideo aiso operational and semantic equivalence between recursive programs jean-claude raoult jean vuillemin many hard examples for resolution for every choice of positive integers c and k such that k ≥ 3 and c2^(-k) ≥ 0.7, there is a positive number ε such that, with probability tending to 1 as n tends to ∞, a randomly chosen family of cn clauses of size k over n variables is unsatisfiable, but every resolution proof of its unsatisfiability must generate at least (1 + ε)^n clauses. vasek chvatal endre szemeredi sparse dynamic programming ii: convex and concave cost functions dynamic programming solutions to two recurrence equations, used to compute a sequence alignment from a set of matching fragments between two strings, and to predict rna secondary structure, are considered. these recurrences are defined over a number of points that is quadratic in the input size; however, only a sparse set matters for the result. efficient algorithms are given for solving these problems, when the cost of a gap in the alignment or a loop in the secondary structure is taken as a convex or concave function of the gap or loop length. the time complexity of our algorithms depends almost linearly on the number of points that need to be considered; when the problems are sparse, this results in a substantial speed-up over known algorithms. david eppstein zvi galil raffaele giancarlo giuseppe f. italiano approximate matching of polygonal shapes (extended abstract) helmut alt bernd behrends johannes blömer representing problems as string transformations k l williams m r meybodi orthologic and quantum logic: models and computational elements motivated by a growing need to understand the computational potential of quantum devices we suggest an approach to the relevant issues via quantum logic and its model theory. by isolating such notions as quantum parallelism and interference within a model-theoretic setting, quite divorced from their customary physical trappings, we seek to lay bare their logical underpinnings and possible computational ramifications. in the first part of the paper, a brief account of the relevant model theory is given, and some new results are derived.
in the second part, we model the simplest classical gate, namely the n-gate, propose a quantization scheme (which translates between classical and quantum models, and from which emerges a logical interpretation of the notion of quantum parallelism), and apply it to the classical n-gate model. a class of physical instantiations of the resulting quantum n-gate model is also briefly discussed. j. p. rawling s. a. selesnick a space improvement in the alternating semantic evaluator it is possible to reduce the space requirements of pass-oriented attribute grammar evaluators by abandoning the traditional bias towards evaluating an attribute as early as possible. two techniques for achieving this improvement are presented. empirical data show the superiority of these techniques over traditional evaluation methods. mehdi jazayeri diene pozefsky on the computational complexity of bisimulation faron moller scott a. smolka finding minimum-cost circulations by canceling negative cycles a classical algorithm for finding a minimum-cost circulation consists of repeatedly finding a residual cycle of negative cost and canceling it by pushing enough flow around the cycle to saturate an arc. we show that a judicious choice of cycles for canceling leads to a polynomial bound on the number of iterations in this algorithm. this gives a very simple strongly polynomial algorithm that uses no scaling. a variant of the algorithm that uses dynamic trees runs in o(nm(log n) min{log(nc), m log n}) time on a network of n vertices, m arcs, and arc costs of maximum absolute value c. this bound is comparable to those of the fastest previously known algorithms. andrew goldberg robert tarjan the boolean formula value problem is in alogtime s. r. buss range searching with efficient hierarchical cuttings we present an improved space/query time tradeoff for the general simplex range searching problem, matching known lower bounds up to small polylogarithmic factors. in particular, we construct a linear-space simplex range searching data structure with o(n^(1-1/d)) query time, which is optimal for d = 2 and probably also for d > 2. further we show that multilevel range searching data structures can be built with only a polylogarithmic overhead in space and query time per level (the previous solutions require at least a small fixed power of n). we show that hopcroft's problem (detecting an incidence among n lines and n points) can be solved in time n^(4/3)2^(o(log* n)). in all these algorithms, we apply chazelle's results on computing optimal cuttings. jiri matousek on the np-completeness of cryptarithms d epstein voronoi diagram in statistical parametric space by kullback-leibler divergence kensuke onishi hiroshi imai a very fast substring search algorithm this article describes a substring search algorithm that is faster than the boyer-moore algorithm. this algorithm does not depend on scanning the pattern string in any particular order. three variations of the algorithm are given that use three different pattern scan orders. these include: (1) a "quick search" algorithm; (2) a "maximal shift" and (3) an "optimal mismatch" algorithm. daniel m. sunday logic programming as constructivism: a formalization and its application to databases the features of logic programming that seem unconventional from the viewpoint of classical logic can be explained in terms of constructivistic logic. we motivate and propose a constructivistic proof theory of non-horn logic programming.
then, we apply this formalization for establishing results of practical interest. first, we show that 'stratification' can be motivated in a simple and intuitive way. relying on similar motivations, we introduce the larger classes of 'loosely stratified' and 'constructively consistent' programs. second, we give a formal basis for introducing quantifiers into queries and logic programs by defining 'constructively domain independent' formulas. third, we extend the generalized magic sets procedure to loosely stratified and constructively consistent programs, by relying on a 'conditional fixpoint' procedure. f. bry figures of merit: the sequel martin tompa computational complexity theory michael c. loui proof verification and the hardness of approximation problems we show that every language in np has a probabilistic verifier that checks membership proofs for it using a logarithmic number of random bits and by examining a constant number of bits in the proof. if a string is in the language, then there exists a proof such that the verifier accepts with probability 1 (i.e., for every choice of its random string). for strings not in the language, the verifier rejects every provided "proof" with probability at least 1/2. our result builds upon and improves a recent result of arora and safra [1998] whose verifiers examine a nonconstant number of bits in the proof (though this number is a very slowly growing function of the input length). as a consequence, we prove that no max snp-hard problem has a polynomial time approximation scheme, unless np = p. the class max snp was defined by papadimitriou and yannakakis [1991] and hard problems for this class include vertex cover, maximum satisfiability, maximum cut, metric tsp, steiner trees and shortest superstring. we also improve upon the clique hardness results of feige et al. [1996] and arora and safra [1998] and show that there exists a positive ε such that approximating the maximum clique size in an n-vertex graph to within a factor of n^ε is np-hard. sanjeev arora carsten lund rajeev motwani madhu sudan mario szegedy monotone circuits for connectivity require super-logarithmic depth we prove that every monotone circuit which tests st-connectivity of an undirected graph on n nodes has depth Ω(log^2 n). this implies a superpolynomial n^Ω(log n) lower bound on the size of any monotone formula for st-connectivity. the proof draws intuition from a new characterization of circuit depth in terms of communication complexity. it uses counting arguments and extremal set theory. within the same framework, we also give a very simple and intuitive proof of a depth analogue of a theorem of krapchenko concerning formula size lower bounds. mauricio karchmer avi wigderson a deterministic algorithm for the three-dimensional diameter problem jiri matousek otfried schwarzkopf denotational semantics for program analysis shan-jon chao barrett r. bryant a new class of heuristic algorithms for weighted perfect matching the minimum-weight perfect matching problem for complete graphs of n vertices with edge weights satisfying the triangle inequality is considered. for each nonnegative integer k ≤ log_3 n, and for any perfect matching algorithm that runs in t(n) time and has an error bound of ε(n) times the optimal weight, an o(max{n, t(3^(-k)n)})-time heuristic algorithm with an error bound of (7/3)^k(1 + ε(3^(-k)n)) - 1 is given. by the selection of k as appropriate functions of n, heuristics that have better running times and/or error bounds than existing ones are derived. m.
d. grigoriadis b. kalantari exponential lower bounds for the pigeonhole principle paul beame russell impagliazzo jan krajicek toniann pitassi pavel pudlák alan woods descriptive complexity theory over the real numbers erich grädel klaus meer faster output-sensitive parallel convex hulls for d 3: optimal sublogarithmic algorithms for small outputs neelima gupta sandeep sen the complexity of the equivalence problem for simple programs eitan m. gurari oscar h. ibarra on the arithmetic power of context-free languages our research focuses on characterizing the computational limits of context- free languages based on a non-classical definition of "computation". in our study we identify a computation as a process of recognizing a string of a's and b's in which the number b's is a function of the number of a's. we feel that this ability to "compute" arithmetic functions in this sense is a fundamental property that should be examined for all models of computation. our poster presents a proof of what we have called the arithmetic power theorem for context-free languages that we state as follows: _arithmetic power theorem for context-free languages_: a language l of the form l=ai bf(i) is context-free only if f(i)= (i) or f(i) = (1). our approach in proving this theorem was directed towards a reformulation of our question based on subsets of l that we called pumping sets. the notion of a pumping set is a generalization, and in many ways is an extension, of the well-known pumping lemma that exposes the ability to "pump" a given string in a context-free language into another string in the same language via replication. pumping sets are not found in the standard textbooks on computational theory [1][2][3] and to our knowledge are absent from the existing research on context-free grammars. however, the study of sets of pumped strings (known as _paired loops_) has yielded interesting results in the area of _slender languages_ [4]. we define a pumping set as follows: a pumping set of a derivation is the set of all strings obtained by replicating a path between two occurrences of some non- terminal in the derivation tree. this is illustrated in the following example: we call the derivation in the definition of pumping set a generator of a pumping set. it was necessary to establish several properties which would allow us to use pumping sets to characterize the ability of a production rule in a grammar (or a sequence of such rules) to affect the growth of b's in the strings of our language l. albert goldfain analytic constraint solving and interval arithmetic in this paper we describe the syntax, semantics, and implementation of the constraint logic programming language clp(f) and we prove that the implementation is sound. a clp(f) constraint is a conjunction of equations and inequations in a first order theory of analytic univariate functions over the reals. the theory allows vector-valued functions over closed intervals to be constrained in several ways, including specifying functional equations (possibly involving the differentiation operator) that must hold at each point in the domain, arithmetic constraints on the value of a function at a particular point in its domain, and bounds on the range of a function over its domain. after describing the syntax and semantics of the constraint language for clp(f) and giving several examples, we show how to convert these analytic constraints into a subclass of simpler functional constraints which involve neither differentiation nor evaluation of functions. 
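as background for the interval-arithmetic side of this abstract, the sketch below shows the elementary interval operations (sound enclosures for sums and products of ranges, plus intersection) on which interval constraint solvers rest; it is a generic illustration, not the clp(f) implementation, and the class name is hypothetical.

```python
class Interval:
    """a closed real interval [lo, hi] with enclosure arithmetic
    (outward rounding is ignored in this small illustration)."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def intersect(self, other):
        lo, hi = max(self.lo, other.lo), min(self.hi, other.hi)
        return Interval(lo, hi) if lo <= hi else None  # empty: constraints inconsistent

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x, y = Interval(-1, 2), Interval(3, 4)
print(x + y)                                # [2, 6]
print(x * y)                                # [-4, 8]
print((x + y).intersect(Interval(5, 10)))   # [5, 6]
```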
we then present an algorithm for solving these latter constraints and prove that it is sound. this implies the soundness of the clp(f) interpreter. we also provide some timing results from an implementation of clp(f) based on gnu prolog. the current implementation is able to solve a wide variety of analytic constraints, but on particular classes of constraints (such as initial value problems for autonomous odes), it is not competitive with other non-constraint based, interval solvers such as lohner's awa system. clp(f) should be viewed as a first step toward the long term goal of developing a practical, declarative, logic-based approach to numerical analysis. timothy j. hickey a generalized class of polynomials that are hard to factor a class of univariate polynomials is defined which make the berlekamp-hensel factorization algorithm take an exponential amount of time. the class contains as subclasses the swinnerton-dyer polynomials discussed by berlekamp and a subset of the cyclotomic polynomials. aside from shedding light on the complexity of polynomial factorization this class is also useful in testing implementations of the berlekamp-hensel and related algorithms. erich kaltofen david r. musser b. david saunders reversal complexity of counter machines it has long been known that deterministic 1-way counter machines recognize exactly all r.e. sets. here we investigate counter machines with general recursive bounds on counter reversals. our main result is that for bounds which are at least linear, counter reversal is polynomially related to turing machine time, for both 1-way and 2-way counter machines and in both the deterministic and the nondeterministic cases. this leads to natural characterizations of the classes p and np, and hence of the p ? np question, on the counter machine model. we also establish reversal complexity hierarchies for counter machines, using a variety of techniques which include translation of turing machine time hierarchies, padding arguments as well as more ad hoc counting arguments. tat-hung chan an efficient expected time parallel algorithm for voronoi construction b. c. vemuri r. varadarajan n. mayya point-sets with few k-sets helmut alt stefan felsner ferran hurtado marc noy bisimilarity for a first-order calculus of objects with subtyping andrew d. gordon gareth d. rees triangulation and csg representation of polyhedra with arbitrary genus tamal k. dey a short note on shor's factoring algorithm this note shows that shor's algorithm for factoring in polynomial time on a quantum computer can be made to work with zero error probability. harry buhrman some notions on notations amir m ben-amram optimal linear-time algorithm for the shortest illuminating line segment in a polygon given a simple polygon, we present an optimal linear-time algorithm that computes the shortest illuminating line segment, if one exists; else it reports that none exists. this solves an intriguing open problem by improving the onlogn)-time algorithm for computing such a segment. gautam das giri narasimhan parallel rams with owned global memory and deterministic context-free language recognition we identify and study a natural and frequently occurring subclass of concurrent read, exclusive write parallel random access machines (crew-prams). called concurrent read, owner write, or crow-prams, these are machines in which each global memory location is assigned a unique "owner" processor, which is the only processor allowed to write into it. 
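the "owner write" rule just stated can be made concrete in a few lines: each global location records its owning processor, writes by any other processor are rejected, and reads are unrestricted. this is only an illustration of the crow discipline, not code from the paper, and the class and method names are hypothetical.

```python
class OwnedMemory:
    """global memory in which each location has a unique owner processor;
    only the owner may write (the crow rule), while any processor may read."""
    def __init__(self, owners):
        self.owner = dict(owners)              # location -> owning processor id
        self.cell = {loc: 0 for loc in owners}

    def read(self, loc, pid):
        return self.cell[loc]                  # concurrent reads are allowed

    def write(self, loc, pid, value):
        if self.owner[loc] != pid:
            raise PermissionError(f"processor {pid} does not own location {loc}")
        self.cell[loc] = value

mem = OwnedMemory({"x": 0, "y": 1})
mem.write("x", 0, 42)      # ok: processor 0 owns x
print(mem.read("x", 1))    # 42; any processor may read
# mem.write("y", 0, 7) would raise PermissionError: y is owned by processor 1
```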
considering the difficulties that would be involved in physically realizing a full crew-pram, we first examine the crow-pram model and demonstrate its stability under several definitional changes. second, we precisely characterize the power of the crow-pram by showing that the class of languages recognizable by it in time o(log n) (and implicitly with a polynomial number of processors) is exactly the class logdcfl of languages log space reducible to deterministic context-free languages. third, using the same basic machinery, we show that the recognition problem for deterministic context-free languages can be solved quickly on a deterministic auxiliary pushdown automaton having random access to its input tape, a log n space work tape, and pushdown store of small maximum height. for example, time o(n^(1+ε)) is achievable with pushdown height o(log^2 n). these results extend and unify work of von braunmohl, cook, mehlhorn, and verbeek; klein and reif; and rytter. patrick w. dymond walter l. ruzzo complexity theory column 5: the not-ready-for-prime-time conjectures lane a. hemaspaandra uniform characterizations of complexity classes heribert vollmer machine models and linear time complexity kenneth w. regan randomized parallel algorithms for trapezoidal diagrams kenneth l. clarkson richard cole robert e. tarjan on efficient unsuccessful search this paper introduces a general technique for speeding up unsuccessful search using very little extra space (2 bits per key). this technique is applicable to many data structures including linear lists, and search trees. for linear lists we get on-line algorithms for processing a sequence of successful and unsuccessful searches which are competitive with strong off-line algorithms. in a virtual memory environment our self-adjusting algorithm for multi-way search trees is competitive with an optimal static multi-way tree and will often outperform the static tree. lucas chi kwong hui charles martel symbolic model checking for event-driven real-time systems jin yang aloysius k. mok farn wang on the communication complexity of distributed algebraic computation zhi-quan luo john n. tsitsiklis time complexity of the towers of hanoi problem c gerety p cull symmetric monoidal sketches martin hyland john power distributed k-selection: from a sequential to a distributed algorithm a methodology for transforming sequential recursive algorithms to distributive ones is suggested. the assumption is that the program segments between recursive calls have a distributive implementation. the methodology is applied to two k-selection algorithms and yields new distributed k-selection algorithms. some complexity issues of the resulting algorithms are discussed. liuba shrira nissim francez michael rodeh morphing: combining structure and randomness ian p. gent holger h. hoos patrick prosser toby walsh some distributions that allow perfect packing a probability distribution μ on [0, 1] allows perfect packing if n items of size x1, … , xn, independent and identically distributed according to μ, can be packed in unit size bins in such a way that the expected wasted space is o(n). a large class of distributions that allow perfect packing is exhibited. as a corollary, the intervals [a, b] for which the uniform distribution on [a, b] allows perfect packing are determined. wansoo t. rhee michel talagrand when do extra majority gates help? suppose that f is computed by a constant depth circuit with 2^m and-, or-, and not-gates, and m majority-gates.
we prove that f is computed by a constant depth circuit with 2^(m^o(1)) and-, or-, and not-gates, and a single majority-gate, which is at the root. one consequence is that if f is computed by an ac^0 circuit plus polylog majority-gates, then f is computed by a probabilistic perceptron having polylog order. another consequence is that if f agrees with the parity function on three-fourths of all inputs, then f cannot be computed by a constant depth circuit with 2^(n^o(1)) and-, or-, and not-gates, and n^o(1) majority-gates. richard beigel substring parsing for arbitrary context-free grammars jan rekers wilco koorn the correct definition of finite elasticity: corrigendum to identification of unions tatsuya motoki takeshi shinohara keith wright approximate nearest neighbors: towards removing the curse of dimensionality piotr indyk rajeev motwani parallel sorting by over partitioning hui li kenneth c. sevcik beyond np: the qsat phase transition ian p. gent toby walsh fast programs for initial segments and polynomial time computation in weak models of arithmetic (preliminary abstract) in this paper we study two alternative approaches for investigating whether np complete sets have fast algorithms. one is to ask whether there are long initial segments on which such sets are easily decidable by relatively short programs. the other approach is to ask whether there are weak fragments of arithmetic for which it is consistent to believe that p = np. we show, perhaps surprisingly, that the two questions are equivalent: it is consistent to believe that p = np in certain models of weak arithmetic theories iff it is true (in the standard model of computation) that there are infinitely many initial segments on which satisfiability is polynomially decidable by programs that are much shorter than the length of the initial segment. deborah joseph paul young module algebra an axiomatic algebraic calculus of modules is given that is based on the operators combination/union, export, renaming, and taking the visible signature. four different models of module algebra are discussed and compared. j. a. bergstra j. heering p. klint context-free languages thomas h. spencer analysis of free schedule in periodic graphs wolfgang backes uwe schwiegelshohn lothar thiele semantics study and reality of computing kohei honda nondeterministic communication with a limited number of advice bits j. hromkovic g. schnitger strategic directions in research in theory of computing anne condon faith fich greg n. frederickson andrew v. goldberg david s. johnson michael c. loui steven mahaney prabhakar raghavan john e. savage alan l. selman david b. shmoys on the complexity of functions for random access machines tight bounds are proved for sort, merge, insert, gcd of integers, gcd of polynomials, and rational functions over a finite inputs domain, in a random access machine with arithmetic operations, direct and indirect addressing, unlimited power for answering yes/no questions, branching, and tables with bounded size. these bounds are also true even if additions, subtractions, multiplications, and divisions of elements by elements of the field are not counted. in a random access machine with finitely many constants and a bounded number of types of instructions, it is proved that the complexity of a function over a countably infinite domain is equal to the complexity of the function in a sufficiently large finite subdomain. nader h.
bshouty symbolic reachability analysis of large finite state machines using don't cares youpyo hong peter a. beerel lower bounds for randomized k-server and motion-planning algorithms howard karloff yuval rabani yiftach ravid inverse currying transformation on attribute grammars inverse currying transformation of an attribute grammar moves a context condition to places in the grammar where the violation of the condition can be detected as soon as the semantic information used in the condition is computed. it thereby takes into account the evaluation order chosen for the attribute grammar. inverse currying transformations can be used to enhance context sensitive parsing using predicates on attributes, to eliminate sources of backtrack when parsing according to ambiguous grammars, and to facilitate semantics-supported error correction. reinhard wilhelm enhanced operational semantics pierpaolo degano corrado priami a predicate transformer approach to semantics of parallel programs c. s. jutla e. knapp j. r. rao convex decompositions of polyhedra an important direction of research in computational geometry has been to find methods for decomposing complex structures into simpler components. in this paper, we examine the problem of decomposing a three-dimensional polyhedron p into a minimal number of convex pieces. letting n be the number of vertices in p and N the number of edges which exhibit a reflex angle (i.e. the notches of p), our main result is an o(nN^3) time algorithm for computing a convex decomposition of p. the algorithm produces o(N^2) convex parts, which is optimal in the worst case. in most situations where the problem arises (e.g. graphics, tool design, pattern recognition), the number of notches N seems greatly dominated by the number of vertices n; the algorithm is therefore viable in practice. bernard m. chazelle worlds to die for lane a. hemaspaandra ajit ramachandran marius zimand nonclausal deduction in first-order temporal logic this paper presents a proof system for first-order temporal logic. the system extends the nonclausal resolution method for ordinary first-order logic with equality, to handle quantifiers and temporal operators. soundness and completeness issues are considered. the use of the system for verifying concurrent programs is discussed and variants of the system for other modal logics are also described. martín abadi zohar manna polytopes in arrangements boris aronov tamal k. dey lowness: a yardstick for np-p lane a. hemaspaandra deterministic sorting in nearly logarithmic time on the hypercube and related computers r. cypher c. g. plaxton bounded quantification is undecidable f≤ is a typed λ-calculus with subtyping and bounded second-order polymorphism. first proposed by cardelli and wegner, it has been widely studied as a core calculus for type systems with subtyping. curien and ghelli proved the partial correctness of a recursive procedure for computing minimal types of f≤ terms and showed that the termination of this procedure is equivalent to the termination of its major component, a procedure for checking the subtype relation between f≤ types. this procedure was thought to terminate on all inputs, but the discovery of a subtle bug in a purported proof of this claim recently reopened the question of the decidability of subtyping, and hence of typechecking.
this question is settled here in the negative, using a reduction from the halting problem for two-counter turing machines to show that the subtype relation of f≤ is undecidable. benjamin c. pierce how hard are n^2-hard problems? stephen a. bloch jonathan f. buss judy goldsmith a decision method for the equivalence of some non-real-time deterministic pushdown automata a generalization of the alternate stacking procedure of valiant for deciding the equivalence of some deterministic pushdown automata (dpda) is introduced. to analyze the power of the generalized procedure we define a subclass of dpda's, called the proper dpda's. this class properly contains the non-singular dpda's and the real time strict dpda's, and the corresponding class of languages properly contains the real time strict deterministic languages. the equivalence problem for proper automata is reducible to the problem of deciding whether or not an automaton is proper. the main result of the paper is that the generalized procedure yields an equivalence test for proper dpda's, at least one of which is also a finite-turn machine. esko ukkonen predicates are predicate transformers: a unified compositional theory for concurrency j. zwiers w. roever a functional theory of local names λν is an extension of the λ-calculus with a binding construct for local names. the extension has properties analogous to classical λ-calculus and preserves all observational equivalences of λ. it is useful as a basis for modeling wide-spectrum languages that build on a functional core. martin odersky a note on efficient zero-knowledge proofs and arguments (extended abstract) in this note, we present new zero-knowledge interactive proofs and arguments for languages in np. to show that x ∈ l, with an error probability of at most 2^(-k), our zero-knowledge proof system requires o(|x|^c1) + o(lg^c2 |x|)·k bit commitments, where c1 and c2 depend only on l. this construction is the first in the ideal bit commitment model that achieves large values of k more efficiently than by running k independent iterations of the base interactive proof system. under suitable complexity assumptions, we exhibit zero knowledge arguments that require o(lg^c |x| · k · l) bits of communication, where c depends only on l, and l is the security parameter for the prover. this is the first construction in which the total amount of communication can be less than that needed to transmit the np witness. our protocols are based on efficiently checkable proofs for np [4]. joe kilian a dynamic load balancer for a parallel branch and bound algorithm this paper presents a load-balancing scheme for a parallel branch and bound (pbb) algorithm utilizing a dynamic load balancer (dlb). the pbb algorithm is used to solve a resource scheduling problem on the hypercube. the dlb is included in the pbb algorithm and is distributed to each node to balance the workload during run time. the dlb is evaluated by implementing two pbb algorithms, with dlb and without dlb. results show that the pbb with dlb is 3.14 times faster than the algorithm without dlb. r. p. ma f-s. tsung m-h. ma a syntactic theory of message passing recent developments by hewitt and others have stimulated interest in message-passing constructs as an alternative to the more conventional applicative semantics on which most current languages are based. the present work illuminates the distinction between applicative and message-passing semantics by means of the μ-calculus, a syntactic model of message-passing systems similar in mechanism to the λ-calculus.
algorithms for the translation of expressions from the λ- to the μ-calculus are presented, and differences between the two approaches are discussed. message-passing semantics seem particularly applicable to the study of multiprocessing. the μ-calculus, through the mechanism of conduits, provides a simple model for a limited but interesting class of parallel computations. multiprocessing capabilities of the μ-calculus are illustrated, and multiple-processor implementations are discussed briefly. stephen a. ward robert h. halstead complexity of partial satisfaction k. j. lieberherr e. specker simple path selection for optimal routing on processor arrays christos kaklamanis danny krizanc satish rao an algorithm for finding canonical sets of ground rewrite rules in polynomial time in this paper, it is shown that there is an algorithm that, given a finite set e of ground equations, produces a reduced canonical rewriting system r equivalent to e in polynomial time. this algorithm, based on congruence closure, performs simplification steps guided by a total simplification ordering on ground terms, and it runs in time o(n^3). jean gallier paliath narendran david plaisted stan raatz wayne snyder intensions and extensions in a reflective tower this article presents a model of the reflective tower based on the formal semantics of its levels. they are related extensionally by their mutual interpretation and intensionally by reification and reflection. the key points obtained here are: a formal relation between the semantic domains of each level; a formal identification of reification and reflection; the visualisation of intensional snapshots of a tower of interpreters; a formal justification and a generalization of brown's meta-continuation; a (structural) denotational semantics for a compositional subset of the model; the distinction between making continuations jumpy and pushy; the discovery of the tail-reflection property; and a scheme implementation of a properly tail-reflective and single-threaded reflective tower. section 1 presents the new approach taken here: rather than implementing reification and reflection leading to a tower, we consider an infinite tower described by the semantics of each level and relate these by reification and reflection. meta-circularity then gives sufficient conditions for implementing it. section 2 investigates some aspects of the environments and control in a reflective tower. an analog of the funarg problem is pointed out, in relation with the correct environment at reification time. jumpy and pushy continuations are contrasted, and the notions of ephemeral level and proper tail-reflection are introduced. our approach is compared with related work and after a conclusion, some issues are proposed. olivier danvy karoline malmkjaer composing processes kohei honda relativizing complexity classes with sparse oracles baker, gill, and solovay constructed sparse sets a and b such that p(a) ≠ np(a) and np(b) ≠ co-np(b). in contrast to their results, we prove that p = np if and only if for every tally language t, p(t) = np(t), and that np = co-np if and only if for every tally language t, np(t) = co-np(t). we show that the polynomial hierarchy collapses if and only if there is a sparse set s such that the polynomial hierarchy relative to s collapses. similar results are obtained for several other complexity classes. timothy j. long alan l.
selman implicit parameters this paper introduces a language feature, called implicit parameters, that provides dynamically scoped variables within a statically-typed hindley-milner framework. implicit parameters are lexically distinct from regular identifiers, and are bound by a special with construct whose scope is dynamic, rather than static as with let. implicit parameters are treated by the type system as parameters that are not explicitly declared, but are inferred from their use. we present implicit parameters within a small call-by-name λ-calculus. we give a type system, a type inference algorithm, and several semantics. we also explore implicit parameters in the wider settings of call-by-need languages with overloading, and call-by-value languages with effects. as a witness to the former, we have implemented implicit parameters as an extension of haskell within the hugs interpreter, which we use to present several motivating examples. jeffrey r. lewis john launchbury erik meijer mark b. shields regular resolution lower bounds for the weak pigeonhole principle we prove that any regular resolution proof for the weak pigeonhole principle, with n holes and any number of pigeons, is of length Ω(2^{n^ε}) (for some global constant ε > 0). toniann pitassi ran raz subtypes and quantification dennis m. volpano explicit substitutions the λσ-calculus is a refinement of the λ-calculus where substitutions are manipulated explicitly. the λσ-calculus provides a setting for studying the theory of substitutions, with pleasant mathematical properties. it is also a useful bridge between the classical λ-calculus and concrete implementations. m. abadi p. l. curien j. j. levy combining tentative and definite executions for very fast dependable parallel computing z. m. kedem k. v. palem a. raghunathan p. g. spirakis exponential determinization for ω-automata with strong-fairness acceptance condition (extended abstract) in [saf88] an exponential determinization procedure for büchi automata was shown, yielding tight bounds for decision procedures of some logics ([ej88, saf88, sv89, kt89]). in [sv89] the complexity of determinization and complementation of ω-automata was further investigated, leaving as an open question the complexity of the determinization of a single class of ω-automata. for this class of ω-automata with strong fairness as acceptance condition (streett automata), [sv89] managed to show an exponential complementation procedure, but showed that the blow-up of the translation of these automata to any of the classes known to admit exponential determinization is inherently exponential. this might suggest that the blow-up of the determinization of streett automata is inherently doubly exponential. surprisingly, we show an exponential determinization construction for any streett automaton. in fact, the complexity of our construction is roughly the same as the complexity achieved in [saf88] for büchi automata. moreover, a simple observation extends this upper bound to the complexity of the complementation problem. since any ω-automaton that admits exponential determinization can be easily converted into a streett automaton, we get one procedure that can be used for all of these conversions. this construction is optimal (up to a constant factor in the exponent) for all of these conversions. our results imply that streett automata (with strong fairness as acceptance condition) can be used instead of büchi automata (with the weaker acceptance condition) without any loss of efficiency.
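as a small aside on the two acceptance conditions compared in the abstract above: on an ultimately periodic ("lasso") run, büchi acceptance asks only that the cycle meet the accepting set, while streett (strong fairness) acceptance asks, for every pair (l, u), that a cycle meeting l also meet u. the sketch below states both checks; it is illustrative only and is not part of the paper's determinization construction.

```python
def buchi_accepts(cycle_states, accepting):
    """a lasso run is büchi-accepting iff its cycle visits an accepting state."""
    return bool(set(cycle_states) & set(accepting))

def streett_accepts(cycle_states, pairs):
    """strong fairness: for every pair (l, u), if the cycle meets l
    (hence infinitely often on a lasso) then it also meets u."""
    cycle = set(cycle_states)
    return all(not (cycle & set(l)) or (cycle & set(u)) for l, u in pairs)

cycle = ["q1", "q2"]
print(buchi_accepts(cycle, {"q2"}))                 # True
print(streett_accepts(cycle, [({"q1"}, {"q3"})]))   # False: q1 recurs, q3 never does
print(streett_accepts(cycle, [({"q0"}, {"q3"})]))   # True: the pair is vacuously satisfied
```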
shmuel safra a polynomial-time algorithm for learning k-variable pattern languages from examples michael kearns leonard pitt computing the singular-value decomposition on the illiac iv franklin t. luk two-prover one-round proof systems: their power and their problems (extended abstract) we characterize the power of two-prover one-round (mip(2,1)) proof systems, showing that mip(2,1)=nexptime. however, the following intriguing question remains open: does parallel repetition decrease the error probability of mip(2,1) proof systems?. we use techniques based on quadratic programming to study this problem, and prove the parallel repetition conjecture in some special cases. interestingly, our work leads to a general polynomial time heuristic for any np-problem. we prove the effectiveness of this heuristic for several problems, such as computing the chromatic number of perfect graphs. uriel feige laszlo lovasz the random access hierarchy to be considered fast, algorithms for operations on large data structures should operate in polylog time, i.e., with the number of steps bounded by a polynomial in log(n) where n is the size of the data structure. example: an ordered list of reasonably short strings can be searched in log2 (n) time via binary search. to measure the time and space complexity of such operations, the usual turing machine with its serial-access input tape is replaced by a random access model. to compare such problems and define completeness, the appropriate relation is loglog reducibility: the relation generated by random- access transducers whose work tapes have length at most log(log(n)). the surprise is that instead of being a refinement of the standard log space, polynomial time, polynomial space, ... hierarchy, the complexity classes for these random-access turing machines form a distinct parallel hierarchy, namely, polylog time, polylog space, exppolylog time, ... . propositional truth evaluation, context-free language recognition and searching a linked list are complete for polylog space. searching ordered lists and searching unordered lists are complete for polylog time and nondeterministic polylog time respectively. in the serial-access hierarchy, log-space reducibility is not fine enough to classify polylog-time problems and there can be no complete problems for polylog space even with polynomial-time turing reducibility dale myers communication complexity of secure computation (extended abstract) a secret-ballot vote for a single proposition is an example of a secure distributed computation. the goal is for m participants to jointly compute the output of some n-ary function (in this case, the sum of the votes), while protecting their individual inputs against some form of misbehavior. in this paper, we initiate the investigation of the communication complexity of unconditionally secure multi-party computation, and its relation with various fault-tolerance models. we present upper and lower bounds on communication, as well as tradeoffs among resources. first, we consider the "direct sum problem" for communications complexity of perfectly secure protocols: can the communication complexity of securely computing a single function f : fn -> f at k sets of inputs be smaller if all are computed simultaneously than if each is computed individually? we show that the answer depends on the failure model. 
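the secret-ballot example that opens this abstract (m participants jointly computing a sum without revealing individual inputs) is often illustrated with additive secret sharing among honest-but-curious parties, sketched below. this is a generic illustration of the flavor of the problem, not the paper's protocols or bounds; the modulus and function names are hypothetical.

```python
import random

MODULUS = 2 ** 61 - 1  # an arbitrary public modulus, assumed larger than any possible sum

def share(secret, m):
    """split a secret into m additive shares that sum to it modulo MODULUS."""
    shares = [random.randrange(MODULUS) for _ in range(m - 1)]
    shares.append((secret - sum(shares)) % MODULUS)
    return shares

def secure_sum(inputs):
    """each party shares its input; party j locally adds the j-th shares it
    receives; publishing only these partial sums reveals just the total."""
    m = len(inputs)
    all_shares = [share(x, m) for x in inputs]
    partial = [sum(all_shares[i][j] for i in range(m)) % MODULUS for j in range(m)]
    return sum(partial) % MODULUS

votes = [1, 0, 1, 1]
print(secure_sum(votes))  # 3, computed without any party publishing its own vote
```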
a factor of o(n/log n) can be gained in the privacy model (where processors are curious but correct); specifically, when f is n-ary addition (mod 2), we show a lower bound of Ω(n^2 log n) for computing f o(n) times simultaneously. no gain is possible in a slightly stronger fault model (fail-stop mode); specifically, when f is n-ary addition over gf(q), we show an exact bound of Θ(kn^2 log q) for computing f at k sets of inputs simultaneously (for any k ≥ 1). however, if one is willing to pay an additive cost in fault tolerance (from t to t-k+1), then a variety of known non-cryptographic protocols (including "provably unparallelizable" protocols from above!) can be systematically compiled to compute one function at k sets of inputs with no increase in communication complexity. our compilation technique is based on a new compression idea of polynomial-based multi-secret sharing. lastly, we show how to compile private protocols into error-detecting protocols at a big savings of a factor of o (n3 matthew franklin moti yung on tape versus core an application of space efficient perfect hash functions to the invariance of space in complexity theory the use of informal estimates can be justified by appealing to the invariance thesis which states that all standard models of computing devices are sufficiently equivalent. this thesis would require, among others, that a ram can be simulated by a turing machine with constant factor overhead in space. such a simulation is hard to obtain if the traditional space measure for ram-space is used. the simulation uses a new method for condensing space, based on perfect hashing. c. slot p. van emde boas approximation algorithms for metric facility location and k-median problems using the primal-dual schema and lagrangian relaxation we present approximation algorithms for the metric uncapacitated facility location problem and the metric k-median problem achieving guarantees of 3 and 6 respectively. the distinguishing feature of our algorithms is their low running time: o(m log m) and o(m log m(l + log n)) respectively, where n and m are the total number of vertices and edges in the underlying complete bipartite graph on cities and facilities. the main algorithmic ideas are a new extension of the primal-dual schema and the use of lagrangian relaxation to derive approximation algorithms. kamal jain vijay v. vazirani a complexity theoretic approach to randomness we study a time bounded variant of kolmogorov complexity. this notion, together with universal hashing, can be used to show that problems solvable probabilistically in polynomial time are all within the second level of the polynomial time hierarchy. we also discuss applications to the theory of probabilistic constructions. michael sipser the (true) complexity of statistical zero knowledge m. bellare s. micali r. ostrovsky probabilistic analysis of bandwidth minimization algorithms we study the probabilistic performance of heuristic algorithms for the np-complete bandwidth minimization problem. let (equation) be a graph with (equation). define the bandwidth of g by (equation) where τ ranges over all permutations on v. let a be a bandwidth minimization algorithm and let a(g) denote the bandwidth of the layout produced by a on the graph g. we say that a is a level algorithm if for all graphs (equation) the layout τ produced by a on g satisfies (equation) the level algorithms were first introduced by cuthill and mckee [1] and have proved quite successful in practice.
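as an illustration only (the paper's modified level algorithms are not reproduced here), a cuthill-mckee-style level algorithm is essentially a breadth-first ordering that visits neighbours by increasing degree; the small graph below is invented for the example.

```python
from collections import deque

def level_order(adj, start):
    """Cuthill-McKee-style level ordering: breadth-first search from `start`,
    visiting each vertex's unvisited neighbours in order of increasing degree.
    `adj` maps each vertex to a list of its neighbours (graph assumed connected)."""
    order, seen = [], {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in sorted(adj[v], key=lambda u: len(adj[u])):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

def bandwidth(adj, order):
    """Bandwidth of the layout: max |pos(u) - pos(v)| over all edges (u, v)."""
    pos = {v: i for i, v in enumerate(order)}
    return max(abs(pos[u] - pos[v]) for u in adj for v in adj[u])

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
order = level_order(adj, 0)
print(order, bandwidth(adj, order))   # [0, 1, 2, 3, 4] 2
```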
however, it is easy to construct examples that cause the level algorithms to perform poorly. consequently worst-case analysis provides no insight into their practical success. in this paper we use probabilistic analysis in order to gain an understanding of these algorithms and to help us design better algorithms. let (equation) be the graph defined by (equation) and let g be a random spanning subgraph of bn in which the vertices have been randomly re-labelled. we show that if a is a level algorithm and (equation) then (equation) almost always holds, where ε is any positive constant. we also introduce a class of algorithms called the modified level algorithms and show that if a' is a modified level algorithm and (equation) then (equation) almost always holds. a particular modified level algorithm mla1 is analyzed and we show that when (equation). we also study several other properties of random subgraphs of bn. jonathan turner exact primitives for smallest enclosing ellipses bernd gärtner sven schönherr a simple on-line bin-packing algorithm the one-dimensional on-line bin-packing problem is considered. a simple o(1)-space and o(n)-time algorithm, called harmonicm, is presented. it is shown that this algorithm can achieve a worst-case performance ratio of less than 1.692, which is better than that of the o(n)-space and o(n log n)-time algorithm first fit. also shown is that 1.691 … is a lower bound for all o(1)-space on-line bin-packing algorithms. finally a revised version of harmonicm, an o(n)-space and o(n)-time algorithm, is presented and is shown to have a worst-case performance ratio of less than 1.636. c. c. lee d. t. lee complexity of parallel qr factorization an optimal algorithm to perform the parallel qr decomposition of a dense matrix of size n is proposed. it is deduced that the complexity of such a decomposition is asymptotically 2n, when an unlimited number of processors is available. m. cosnard y. robert fast algorithms for n-dimensional restrictions of hard problems let m be a parallel ram with p processors and arithmetic operations addition and subtraction recognizing l ⊆ n^n in t steps. (inputs for m are given integer by integer, not bit by bit.) then l can be recognized by a (sequential) linear search algorithm (lsa) in o(n^4(log n + t + log p)) steps. thus many n-dimensional restrictions of np-complete problems (binary programming, traveling salesman problem, etc.) and even that of the uniquely optimum traveling salesman problem, which is Δ^p_2-complete, can be solved in polynomial time by an lsa. this result generalizes the construction of a polynomial lsa for the n-dimensional restriction of the knapsack problem previously shown by the author, and destroys the hope of proving nonpolynomial lower bounds on lsas for any problem that can be recognized by a pram as above with 2^poly(n) processors in poly(n) time. friedhelm meyer auf der heide proportionate progress: a notion of fairness in resource allocation s. k. baruah n. k. cohen c. g. plaxton d. a. varvel turing award lecture: it's time to reconsider time richard edwin stearns pushing disks together - the continuous-motion case marshall bern amit sahai degrees of inferability most theories of learning consider inferring a function f from either (1) observations about f or (2) questions about f.
we consider a scenario whereby the learner observes f and asks queries to some set a. ex[a] is the set of concept classes ex-learnable by an inductive inference machine with oracle a. a and b are ex-equivalent if ex[a] = ex[b]. the equivalence classes induced are the degrees of inferability. we prove several results about these degrees: (1) there are an uncountable number of degrees. (2) for a r.e., rec ∈ bc[a] iff ∅ ≤_t a, and there is evidence this holds for all sets a. (3) for a, b r.e., a ≡_t b iff ex[a] = ex[b]. (4) there exist a, b low2 r.e., a |_r b, ex[a] = ex[b]. (hence (3) is optimal). peter cholak efim kinber rod downey martin kummer lance fortnow stuart kurtz william gasarch theodore a. slaman the cost of the missing bit: communication complexity with help laszlo babai thomas p. hayes peter g. kimmel a polynomial factorization challenge joachim von zur gathen a necessary condition for a doubly recursive rule to be equivalent to a linear recursive rule nonlinear recursive queries are usually less efficient in processing than linear recursive queries. it is therefore of interest to transform non-linear recursive queries into linear ones. we obtain a necessary and sufficient condition for a doubly recursive rule of a certain type to be logically equivalent to a single linear recursive rule obtained in a specific way. weining zhang c. t. yu coverage estimation for symbolic model checking yatin hoskote timothy kam pei-hsin ho xudong zhao on the sectional area of convex polytopes david avis prosenjit bose godfried toussaint thomas c. shermer binhai zhu jack snoeyink on the decidability of accessibility problems (extended abstract) rajeev motwani rina panigrahy vijay saraswat suresh ventkatasubramanian the equivalence problem for real-time dpdas the equivalence problem for deterministic real-time pushdown automata is shown to be decidable. this result is obtained by showing that valiant's parallel stacking technique using a replacement function introduced in this paper succeeds for deterministic real-time pushdown automata. equivalence is also decidable for two deterministic pushdown automata, one of which is real-time. michio oyamaguchi an iteration theorem for simple precedence languages yael krevner amiram yehudai how reductions to sparse sets collapse the polynomial-time hierarchy: a primer: part ii restricted polynomial-time reductions paul young aggregating strategies volodimir g. vovk on construction of k-wise independent random variables howard karloff yishay mansour time-adaptive algorithms for synchronization rajeev alur hagit attiya gadi taubenfeld the 1-2-3 routing algorithm or the single channel 2-step router on 3 interconnection layers in this paper an algorithm is presented for the single channel routing on 3 interconnection layers. first some general characteristics of routing on 3 interconnection layers are presented. then the specifications of the routing problem on 3 interconnection layers that will be considered are introduced. pins will be allowed to come out on both the diffusion/poly layer and the second metal layer with the routing done on both the first and second metal layer. if only the first metal layer were to be used horizontally then the routing problem could be solved by a simple left-edge channel algorithm. however the 1-2-3 algorithm presented here will solve identical problems with a smaller number of tracks and vias since it makes use of some specific characteristics of routing on 3 interconnection layers.
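for context, the classical left-edge channel algorithm that the abstract cites as the single-horizontal-layer baseline can be sketched as follows; the net intervals below are invented for the example, vertical constraints are ignored, and the 1-2-3 algorithm itself is not reproduced here.

```python
def left_edge(intervals):
    """Classical left-edge channel routing (no vertical constraints):
    sort net intervals by left end and greedily pack them into tracks.
    Returns a list of tracks, each a list of (left, right) intervals."""
    tracks = []          # tracks[i] holds the intervals placed on track i
    track_end = []       # rightmost column currently occupied on each track
    for left, right in sorted(intervals):
        for i, end in enumerate(track_end):
            if end < left:                   # fits after the last net on track i
                tracks[i].append((left, right))
                track_end[i] = right
                break
        else:                                # no existing track fits: open a new one
            tracks.append([(left, right)])
            track_end.append(right)
    return tracks

nets = [(1, 4), (2, 6), (5, 8), (7, 9), (3, 5)]
for i, t in enumerate(left_edge(nets)):
    print(f"track {i}: {t}")
```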
exponential separation of quantum and classical communication complexity ran raz studying overheads in massively parallel min/max-tree evaluation ranier feldmann peter mysliwiete burkhard monien a note on probabilistically verifying integer and polynomial products probabilistic algorithms are presented for testing the result of the product of two n-bit integers in o(n) bit operations and for testing the result of the product of two polynomials of degree n over any integral domain in 4n + o(n) algebraic operations with the error probability o(1/n^{1-ε}) for any ε > 0. the last algorithm does not depend on the constants of the underlying domain. michael kaminski lower bounds for parallel computation on linked structures f. fich v. ramachandran improved bounds for routing and sorting on multi-dimensional meshes we show improved bounds for 1-1 routing and sorting on multi-dimensional meshes and tori. in particular, we give a fairly simple deterministic algorithm for sorting on the d-dimensional mesh of side length n that achieves a running time of 3 dn/2+o(nd-dimensional mesh and torus, respectively, that make one copy of each element. we also show lower bounds for sorting with respect to a large class of indexing schemes, under a model of the mesh where each processor can hold an arbitrary number of packets. finally, we describe algorithms for permutation routing whose running times come very close to the diameter lower bound. torsten suel foundational aspects of syntax dale miller catuscia palmidessi quantum complexity theory ethan bernstein umesh vazirani polygon triangulation in o(n log log n) time with simple data-structures we give a new o(n log log n)-time deterministic algorithm for triangulating simple n-vertex polygons, which avoids the use of complicated data-structures. in addition, for polygons whose vertices have integer coordinates of polynomially bounded size, the algorithm can be modified to run in o(n log* n) time. the major new techniques employed are the efficient location of horizontal visibility edges which partition the interior of the polygon into regions of approximately equal size, and a linear-time algorithm for obtaining the horizontal visibility partition of a subchain of a polygonal chain, from the horizontal visibility partition of the entire chain. this latter technique has other interesting applications, including a linear-time algorithm to convert a steiner triangulation of a polygon into a true triangulation. this research was partially supported by dimacs and the following grants: nserc 583584, nserc 580485, nsf-stc88-09648, onr-n00014-87-0467. david g. kirkpatrick maria m. klawe robert e. tarjan a method for deciding whether the galois group is abelian we propose a polynomial time algorithm to decide whether the galois group of an irreducible polynomial in q[x] is abelian, and, if so, determine all its elements along with their action on the set of its roots. this algorithm does not require factorization of polynomials over number fields. instead we shall use the quadratic newton lifting and the truncated expressions of the roots over a p-adic number field q_p, for an appropriate prime p in z. pilar fernandez-ferreiros maria de los angeles gomez-molleda the prime factor non-binary discrete fourier transform and use of crystal_router as a general purpose communication routine we have implemented one of the fast fourier transform algorithms, the prime factor algorithm (pfa), on the hypercube.
on sequential computers, the pfa and other discrete fourier transforms (dft) such as the winograd algorithm (wfa) are known to be very efficient. however, both algorithms require full data shuffling and are thus challenging to any distributed memory parallel computers. we use a concurrent communication algorithm, called the crystal_router for communicating shuffled data. we will show that the speed gained in reduced arithmetic compared to binary fft is sufficient to overcome the extra communication requirement up to a certain number of processors. beyond this point the standard cooley-tukey fft algorithm has the best performance. we comment briefly on the application of the dft to signal processing in synthetic aperture radar (sar). g. aloisio n. veneziani j. s. kim g. c. fox computer vision for computer interaction figure 1 shows a vision of the future from the 1939 world's fair. the human-machine interface that was envisioned is wonderful. both machines are equipped with cameras; the woman interacts with the machine using an intuitive gesture. that degree of naturalness is a goal today for researchers designing human-machine interfaces. william t. freeman paul a. beardsley hiroshi kage ken-ichi tanaka kazuo kyuma craig d. weissman dynamic scheduling of a fixed bandwidth communications channel for controlling multiple robots we describe a distributed software system for controlling a group of miniature robots using a low capacity communication system. space and power limitations on the robots drastically limit the capacity of the communication system and require sharing bandwidth and other resources among the robots. we have developed a scheduling and resource allocation system that is capable of dynamically assigning resources to each robot. paul e. rybski sascha a. stoeter maria gini dean f. hougen nikolaos papanikolopoulos twinkle, twinkle, shooting star yasuhiro yamaguchi human vision, anti-aliasing, and the cheap 4000 line display despite its other advantages, one of the major objections to raster graphics has been the poor image quality and aliasing effects caused by discrete sampling. these effects include "jaggies" or stair-stepping, crawling, line breakup, and scintillation. several solutions have been proposed in the literature, however, most suffer severe drawbacks and are only partially successful at eliminating aliasing effects. one solution, area anti-aliasing, is not only effective, it produces results comparable to higher resolution systems. using widely available data on human visual response, it is shown how this technique actually increases the perceived resolution of a display beyond the hardware resolution by factors of up to 16x. the requirements of such a system are discussed, as well as some of the problems encountered. william j leler the design of interactive simulations we propose a methodology for developing simulations in the interactive mode. rather than use an iconic model for doing so, we show how the use of systems theory representation enhances the interaction. this not only facilitates debugging, but is of fundamental assistance in the model development itself. although recent advances in the realistic portrayal of systems has made great advances, the systems representation has been neglected in considering interaction. our hope is that our observations will lead to more research in this area. alfred w. 
jones inclusion problems in parallel learning and games (extended abstract) martin kummer frank stephan qsplat: a multiresolution point rendering system for large meshes advances in 3d scanning technologies have enabled the practical creation of meshes with hundreds of millions of polygons. traditional algorithms for display, simplification, and progressive transmission of meshes are impractical for data sets of this size. we describe a system for representing and progressively displaying these meshes that combines a multiresolution hierarchy based on bounding spheres with a rendering system based on points. a single data structure is used for view frustum culling, backface culling, level-of-detail selection, and rendering. the representation is compact and can be computed quickly, making it suitable for large data sets. our implementation, written for use in a large-scale 3d digitization project, launches quickly, maintains a user-settable interactive frame rate regardless of object complexity or camera position, yields reasonable image quality during motion, and refines progressively when idle to a high final image quality. we have demonstrated the system on scanned models containing hundreds of millions of samples. szymon rusinkiewicz marc levoy the display of characters using gray level sample arrays character fonts on raster scanned display devices are usually represented by arrays of bits that are displayed as a matrix of black and white dots. this paper reviews a filtering and sampling method as applied to characters for building multiple bit per pixel arrays. these arrays can be used as alternative character representations for use on devices with gray scale capability. discussed in this paper are both the filtering algorithms that are used to generate gray scale fonts and some consequences of using gray levels for the representation of fonts including: 1\\. the apparent resolution of the display is increased when using gray scale fonts allowing smaller fonts to be used with higher apparent positional accuracy and readability. this is especially important when using low resolution displays. 2\\. fonts of any size and orientation can be generated automatically from suitable high precision representations. this automatic generation removes the tedious process of "bit tuning" fonts for a given display. john e. warnock homomorphic factorization of brdfs for high-performance rendering _a bidirectional reflectance distribution function (brdf) describes how a material reflects light from its surface. to use arbitrary brdfs in real-time rendering, a compression technique must be used to represent brdfs using the available texture-mapping and computational capabilities of an accelerated graphics pipeline. we present a numerical technique, homomorphic factorization, that can decompose arbitrary brdfs into products of two or more factors of lower dimensionality, each factor dependent on a different interpolated geometric parameter. compared to an earlier factorization technique based on the singular value decomposition, this new technique generates a factorization with only positive factors (which makes it more suitable for current graphics hardware accelerators), provides control over the smoothness of the result, minimizes relative rather than absolute error, and can deal with scattered, sparse data without a separate resampling and interpolation algorithm._ michael d. 
mccool jason ang anis ahmad efficient exploration for optimizing immediate reward dale schuurmans lloyd greenwald the utility of planning shlomo zilberstein using a simulation model to evaluate the configuration of a sortation facility dale masel david goldsmith music lessons wilson smith millennium fever alan kapler rendering fur with three dimensional textures j. t. kajiya t. l. kay a simulation approach to design a motoreducer assembly and testing facility this paper describes the application of a digital computer simulation model to design a large facility for assembling and testing approximately 30,000 motoreducers a year. the simulation model is designed and calibrated entirely on actual operating data, and used in conjunction with statistically designed experiments to evaluate the effects of various controllable factors on the size and configuration of the department level facility. the paper shows that a relatively simple and straightforward simulation model can provide quite insightful and valuable results, yet still be well within the capabilities and budget of the "ordinary practitioner" of management science in facilities planning. harold j. steudel multiple-valued complex functions and computer algebra i recently taught a course on complex analysis. that forced me to think more carefully about branches. being interested in computer algebra, it was only natural that i wanted to see how such programs dealt with these problems. i was also inspired by a paper by stoutemyer ([3]).while programs like derive, maple, mathematica and reduce are very powerful, they also have their fair share of problems. in particular, branches are somewhat of an achilles' heel for them. as is well- known, the complex logarithm function is properly defined as a multiple-valued function. and since the general power and exponential functions are defined in terms of the logarithm function, they are also multiple valued. but for actual computations, we need to make them single valued, which we do by choosing a branch. in section 2, we will consider some transformation rules for branches of multiple-valued complex functions in painstaking detail.the purpose of this short article is not to do a comprehensive comparative study of different computer algebra system. (for an attempt at that, see [4].) my goal is simply to make the readers aware of some of the problems, and to encourage the readers to sit down and experiment with their favourite programs.i would like to thank willi-hans steeb and michael wester for helpful comments. helmer aslaksen links: what is an intelligent tutoring system? reva freedman syed s. ali susan mcroy gross motion planning - a survey motion planning is one of the most important areas of robotics research. the complexity of the motion-planning problem has hindered the development of practical algorithms. this paper surveys the work on gross-motion planning, including motion planners for point robots, rigid robots, and manipulators in stationary, time-varying, constrained, and movable-object environments. the general issues in motion planning are explained. recent approaches and their performances are briefly described, and possible future research directions are discussed. yong k. 
hwang narendra ahuja star wars episode 1: the phantom menace yves metraux optimizing triangle strips for fast rendering francine evans steven skiena amitabh varshney a generalized carrier-null method for conservative parallel simulation the carrier-null message approach to conservative-distributed discrete-event simulation can significantly reduce the number of synchronization messages required to avoid deadlock. in this paper we show that the original approach does not apply to simulations with arbitrary communication graphs and we propose a modified carrier-null approach which does. we present and discuss some preliminary results obtained using our approach to simulate digital logic circuits. kenneth r. wood stephen j. turner presenting gpss/h results with the graphical kernel system (gks) this paper discusses the interfacing of the latest version of gpss, namely gpss/h, and gks, a graphics system which allows programs to support a wide variety of graphics devices. a computer program to convert a running gpss/h program to a form compatible for graphical emulation has been developed. an animation program which takes as input the data generated by the modified gpss/h program has also been developed. thus, the gpss programmer needs only to take care of correct modelling of the system under study and need not know anything at all about graphics and animation. raphael parambi aseem chandawarkar hardware support for non-photorealistic rendering special features such as ridges, valleys and silhouettes, of a polygonal scene are usually displayed by explicitly identifying and then rendering `edges' for the corresponding geometry. the candidate edges are identified using the connectivity information, which requires preprocessing of the data. we present a non-obvious but surprisingly simple to implement technique to render such features without connectivity information or preprocessing. at the hardware level, based only on the vertices of a given flat polygon, we introduce new polygons, with appropriate color, shape and orientation, so that they eventually appear as special features. ramesh raskar fight club bob hoffman factor screening of multiple responses lori s. cook a projective drawing system osama tolba julie dorsey leonard mcmillan fractal landscapes in context f. kenton musgrave on the consistency assumption, monotone criterion and the monotone restriction james b. h. kwa feature-based image metamorphosis thaddeus beier shawn neely ace: going from prototype to product with an expert system ace (automated cable expertise) is a knowledge-based expert system that provides trouble-shooting and diagnostic reports for telephone company managers. its application domain is telephone cable maintenance. ace departs from standard expert system architecture in that a separate data base system is used as its primary source of information. ace designers were influenced by the r1/xcon project, and ace uses techniques similar to those of r1/xcon. this paper reports the progress of ace as it moves out of experimentation and into a live software environment, and characterizes it in terms of current expert system technology. j. r. wright f. d. miller g. v. e. otto e. m. siegfried g. t. vesonder j. e. zielinski news douglas blank lessons from a restricted turing test stuart m. shieber a model for simulating the photographic development process on digital images joe geigel f. kenton musgrave ai: what simulationists really need to know david p. miller r. james firby paul a. 
fishwick jeff rothenberg fast line scan-conversion a major bottleneck in many graphics displays is the time required to scan- convert straight line segments. most manufacturers use hardware based on bresenham's [5] line algorithm. in this paper an algorithm is developed based on the original bresenham scan- conversion together with the symmetry first noted by gardner [18] and a recent double-step technique [31]. this results in a speed-up of scan-conversion by a factor of approximately 4 as compared to the original bresenham algorithm. hardware implementations are simple and efficient since the property of using only shift and increment operations is preserved. j. g. rokne brian wyvill xiaolin wu a tutorial on simulation programming with simpas simpas is a portable, strongly-typed, event-oriented, discrete system simulation language embedded in pascal. it extends pascal by adding statements for event declaration and scheduling, entity declaration, creation and destruction, linked list declaration and manipulation, and statistics collection. a library of standard pseudo-random number generators is also provided. this paper gives a tutorial on simulation programming using simpas. we briefly discuss event- oriented simulation language concepts, give an overview of the programming language pascal, and then describe in detail the simulation extensions that simpas provides. r. m. bryant languages for expert system building: a comparison a study of some languages which may be used for expert system building is conducted. the characteristics necessary in an expert system building language are detailed. languages that are unable to provide for many different types of expert systems to be built are not given much weight. six relatively standard high level languages are studied, which include fortran, modula-2, ada**, pascal, lisp and prolog. some of the newly developed expert system building languages are also studied. the study was done in conjunction with the development of the expert system known as fess and a discussion of the reasons for the choice of its development language are given. the construction of the system has provided further insights for the language study. it is found that a best expert system building language may yet be developed. ** ada is a trademark of the dod (ada joint program office) lawrence o. hall abraham kandel parallel implementation of an integrated edge-preserving smoothing algorithm in clusters of workstations chih-cheng hung asim yarkhan kwai wong s. a. von laven tommy coleman a parallel distributed simulation of a large-scale pcs network: keeping secrets brian a. malloy albert t. montroy the implementation of temporal intervals in qualitative simulation graphs in this paper we develop and implement a simulation modeling methodology that combines discrete event simulation with qualitive simulation. our main reason for doing so is to extend the application of discrete event simulation to systems found in business for which precise quantitative information is lacking. the approach discussed in this paper is the implementation of temporal interval specifications in the discrete event model and the construction of a temporal interval clock for the qualitative simulation model. ricki g. ingalls douglas j. morrice andrew b. whinston using self-diagnosis to adapt organizational structures the specific organization used by a multi-agent system is crucial for its effectiveness and efficiency. 
in dynamic environments, or when the objectives of the system shift, the organization must therefore be able to change as well. in this paper we propose using a general diagnosis engine to drive this process of adaptation, using the tæms modeling language as the primary representation of organizational information. results from experiments employing such a system in the producer-consumer-transporter domain are also presented. bryan horling brett benyo victor lesser computational efficiency of batching methods david goldsman bruce w. schmeiser the 21st acm north american computer chess championship after twenty years of traveling from city to city across the united states, the acm north american computer chess championship came back to the place of its birth, the new york hilton hotel, where the competitions began in 1970. this latest five-round event ended in a two-way tie for first place between mephisto and deep thought/88. finishing in a two-way tie for third place were hitech and m chess. a total of 10 teams participated, and the level of play was at the low grandmaster level. a special three-round end-game championship was won by mephisto, who also captured the prize for the best small computing system. a total of $8000 in prizes was divided up among the winners. monty newborn danny kopec theorem proving via general mappings peter b. andrews real-time display of quadric on the i.m.o.g.e.n.e. machine sylvain karpf christophe chaillou eric nyiri michel meriaux ray tracing complex models containing surface tessellations john m. snyder alan h. barr a continuous clustering method for vector fields h. garcke t. preuber m. rumpf a. telea u. weikard j. van wijk verification and normalization of sentences marco costa jose neves orlando sousa simas santos a cognitive approach to judicial opinion structure: applying domain expertise to component analysis empirical research on basic components of american judicial opinions has only scratched the surface. lack of a coordinated pool of legal experts or adequate computational resources are but two reasons responsible for this deficiency. we have undertaken a study to uncover fundamental components of judicial opinions found in american case law. the study was aided by a team of twelve expert attorney-editors with a combined total of 135 years of legal editing experience. the scientific hypothesis underlying the experiment was that after years of working closely with thousands of judicial opinions, expert attorneys would develop a refined and internalized schema of the content and structure of legal cases. in this study participants were permitted to describe both concept-related and format-related components. the resultant components, representing a combination of these two broad categories, are reported on in this paper. additional experiments are currently under way which further validate and refine this set of components and apply them to new search paradigms. jack g. conrad daniel p. dabney ray tracing of steiner patches steiner patches are triangular surface patches for which the cartesian coordinates of points on the patch are defined parametrically by quadratic polynomial functions of two variables. it has recently been shown that it is possible to express a steiner patch in an implicit equation which is a degree four polynomial in x,y,z. furthermore, the parameters of a point known to be on the surface can be computed as rational polynomial functions of x,y,z.
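as a hedged illustration of the ray-intersection step that such an implicit degree-four form enables (and which the abstract goes on to describe), one can substitute a parametric ray into f(x, y, z) = 0 and take the smallest positive real root of the resulting univariate quartic; the surface below is an arbitrary illustrative quartic, not an actual steiner patch.

```python
import numpy as np

def ray_implicit_hit(f, origin, direction, samples=5):
    """Intersect the ray origin + t*direction with the implicit surface
    f(x, y, z) = 0, assuming f restricted to the ray is a polynomial of
    degree samples - 1 in t.  The univariate polynomial is recovered by
    interpolation at `samples` points, then solved with numpy.roots."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    ts = np.arange(samples, dtype=float)           # sample points t = 0..deg
    vals = [f(*(o + t * d)) for t in ts]           # f evaluated along the ray
    coeffs = np.polyfit(ts, vals, samples - 1)     # exact fit: degree = #points - 1
    roots = np.roots(coeffs)
    hits = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 1e-9]
    return min(hits) if hits else None

# illustrative degree-four implicit surface (a quartic "shell"), not a Steiner patch
f = lambda x, y, z: (x**2 + y**2 + z**2 - 1.0)**2 - 0.25
print(ray_implicit_hit(f, origin=(0.0, 0.0, -3.0), direction=(0.0, 0.0, 1.0)))
```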
these findings lead to a straightforward algorithm for ray tracing steiner patches in which the ray intersection equation is a degree four polynomial in the parameter of the ray. the algorithm presented represents a major simplification over existing techniques for ray tracing free-form surface patches. thomas w. sederberg david c. anderson a sequential procedure for simultaneous estimation of several means kimmo e. e. raatikainen power: using uml/ocl for modeling legislation - an application report the dutch tax and customs administration (dtca in dutch: belastingdienst) conducts a research program power in which methods and tools are developed that support a systematic translation of (new) legislation into the dtca's processes. the methods and tools developed help to improve the quality of (new) legislation and codify the knowledge used in the translation processes in which legislation and regulations are transformed into procedures, computer programs and other designs. thereby the time-to-market of the implementation of legislation will be reduced. in this article we focus on the method we developed for modeling legislation. we will elaborate upon the principles behind the method and explain the use of catalysis and uml/ocl in the modeling process. the coupling of models of legislation and task models originating from business policy is demonstrated and finally we will show the way knowledge-based components in function of applications are generated automatically. tom m. van engers rik gerrits margherita boekenoogen erwin glassee patries kordelaar battlezone - eighteen years later: remaking a classic carey chico a tutorial view of simulation model development working from the background of simulation language developments, we develop an understanding of the current status of simulation model development. factors characterizing the current status include a shift in emphasis from program to model, more commitment to modeling tools, and the lingering impedance of simulation language isolation. current and future needs are identified. specific approaches to meeting these needs are cited in an extensive description of current research, and in summary we conclude that the technology of simulation model development is in a transitional period that portends more rapid changes for the future. richard e. nance a generalization of algebraic surface drawing the mathematical description of three dimensional surfaces usually falls in one of two classifications: parametric and algebraic. the form is defined as all points which satisfy some equation: f(x,y,z) = 0. this form is ideally suited for image space shaded picture drawing: the pixel coordinates are substituted for x and y and the equation is solved for z. algorithms for drawing such objects have been developed primarily for first and second order polynomial functions. this paper presents a new algorithm applicable to other functional forms, in particular to the summation of several gaussian density distributions. the algorithm was created to model electron density maps of molecular structures but can be used for other artistically interesting shapes. james f. blinn network routing models applied to aircraft routing problems zhiqiang chen andrew t. holle bernard m. e. moret jared saia ali boroujerdi jam: a bdi-theoretic mobile agent architecture marcus j. huber incompleteness in knowledge bases the first topic for discussion at the high level abstraction workshop was what should be modelled?
that is, what aspects of the world (slice of reality / enterprise) must be dealt with in a high level conceptual model. i would like to address this question from the point of view of incomplete knowledge bases. although my remarks pertain mainly to ai applications, i suspect that any application using an information system of some sort will eventually have to face similar issues. the knowledge bases required in many applications can be characterized by a lack of complete information about the world of interest. hector j. levesque rendering with concentric mosaics heung-yeung shum li-wei he 1984 and beyond norton h. goldstein using simulation to compile diagnostic rules from a manufacturing process representation l. a. becker r. barlett f. soroushian the bert robot karl brown book reviews karen sutherland deep shadow maps we introduce deep shadow maps, a technique that produces fast, high-quality shadows for primitives such as hair, fur, and smoke. unlike traditional shadow maps, which store a single depth at each pixel, deep shadow maps store a representation of the fractional visibility through a pixel at all possible depths. deep shadow maps have several advantages. first, they are prefiltered, which allows faster shadow lookups and much smaller memory footprints than regular shadow maps of similar quality. second, they support shadows from partially transparent surfaces and volumetric objects such as fog. third, they handle important cases of motion blur at no extra cost. the algorithm is simple to implement and can be added easily to existing renderers as an alternative to ordinary shadow maps. tom lokovic eric veach simulation model verification and validation: increasing the users' confidence stewart robinson development and deployment of a multi-agent system for public service access p. charlton y. arafa e. mamdani industrial applications of distributed ai most work done in distributed artificial intelligence (dai) had targeted sensory networks, including air traffic control, urban traffic control, and robotic systems. the main reason is that these applications necessitate distributed interpretation and distributed planning by means of intelligent sensors. planning includes not only the activities to be undertaken, but also the use of material and cognitive resources to accomplish interpretation tasks and planning tasks. these application areas are also characterized by a natural distribution of sensors and receivers in space. in other words, the sensory data- interpretation tasks and action planning are inter-dependent in time and space. for example, in air traffic control, a plan for guiding an aircraft must be coordinated with the plans of other nearby aircraft to avoid collisions. b. chaib-draa vision-assisted image editing eric n. mortensen a computational framework for dialectical reasoning pierre st-vincent daniel poulin paul bratley coordinating mobile robot group behavior using a model of interaction dynamics dani goldberg maja j. mataric a rendering algorithm for visualizing 3d scalar fields paolo sabella temporal anti-aliasing in computer generated animation the desirability of incorporating temporal anti-aliasing, or motion blur, into computer generated animation is discussed and two algorithms for achieving this effect are described. the first approximates continuous object movement and determines intervals during which each object covers each pixel. 
hidden surface removal is then performed, allowing the calculation of visible object intensity functions and subsequent filtering. the second form of algorithm detailed involves supersampling the moving image and then filtering the resulting intensity function to "multiply-expose" each output picture. the effects of filter types and the relationship of the algorithms to forms of spatial anti-aliasing are discussed. jonathan korein norman badler java based conservative distributed simulation alois ferscha michael richter simjava - a framework for modeling queueing networks in java wolfgang kreutzer jane hopkins marcel van mierlo on face detection in the compressed domain _we propose a fast face detection algorithm that works directly on the compressed dct domain. unlike the previous dct domain processing designs that are mainly based on skin-color detection, our algorithm analyzes both color and texture information contained in the dct parameters, therefore could generate more reliable detection results. our texture analysis is mainly based on statistical model training and detection. a number of fundamental problems,_ e.g., _block quantization, preprocessing in the dct domain, and feature vector selection and classification in the dct domain, are discussed_. huitao luo alexandros eleftheriadis understanding supercritical speedup simulations running under time warp using lazy cancellation can beat the bound given by the critical path. we explain this phenomenon to be the result of a kind of intra-object parallelism. specifically, we show that instances of beating the critical path, called supercritical speedup, are made possible by messages that are independent of some event which precedes the message. this insight leads to a new definition for the critical path which is a tighter lower bound on time-warp simulations using lazy cancellation than any previously proposed. these results suggest criteria for choosing between lazy and aggressive cancellation. michial a. gunter the evolution of cycl, the cyc representation language douglas b. lenat r. v. guha a hierarchical illumination algorithm for surfaces with glossy reflection larry aupperle pat hanrahan a two-list method for synchronization of event driven simulation the traditional mechanism for maintaining a list of pending events in a discrete event simulation is the simple linked list. however, in large scale simulations this list often becomes cumbersome to maintain since the number of pending events may become quite large. as a result, the execution time required by the simple linked list is often a significant portion of total simulation time. several articles have been published suggesting improved synchronization procedures. the most efficient procedures reported are the time indexed procedure and the two level procedure. both methodologies are designed for use in languages such as pascal or pl/l, and as a result neither algorithm translates well into fortran. further, both procedures require external parameter definition, which is a major handicap to their adoption by a general purpose language. this paper introduces a new synchronization procedure, the two list procedure, which is much faster than simple linked lists for large pending event files. this procedure was designed for implementation in fortran, and properly implemented it is transparent to the user; thus it is ideal for adoption by general purpose simulation languages. john h. blackstone gary l. hogg don t. 
phillips examples of maple applied to problems from the american mathematical monthly gaston h. gonnet application of simulation modeling to emergency population evacuation kambiz farahmand backtracking: robots that fly, part i christopher welty lou hoebel ronin romance classics bruce pukema a sketch of logic without truth 1. jørgensen's dilemma - 2. the proposed solution - 3. short history of a philosophical prejudice - 4. the abstract, syntactical and semantic notion of consequence - 4.1. the abstract notion of consequence - 4.2. the syntactical notion of consequence - 4.3. the semantic notion of consequence - 4.4. the meaning given by rules of use in a context - 4.5. which operators does logic require? - 5. deontic logic - 6. the consequences for computer scientists. c. e. alchourrón a. a. martino contour tracing by piecewise linear approximations we present a method for tracing a curve that is represented as the contour of a function in euclidean space of any dimension. the method proceeds locally by following the intersections of the contour with the facets of a triangulation of space. the algorithm does not fail in the presence of high curvature of the contour; it accumulates essentially no round-off error and has a well-defined integer test for detecting a loop. in developing the algorithm, we explore the nature of a particular class of triangulations of euclidean space, namely, those generated by reflections. david p. dobkin allan r. wilks silvio v. f. levy william p. thurston cpu wayne gilbert automated generation of intent-based 3d illustrations doree duncan seligmann steven feiner an experimental evaluation of computer graphics imagery accurate simulation of light propagation within an environment and perceptually based imaging techniques are necessary for the creation of realistic images. a physical experiment that verifies the simulation of reflected light intensities for diffuse environments was conducted. measurements of radiant energy flux densities are compared with predictions using the radiosity method for those physical environments. by using color science procedures the results of the light model simulation are then transformed to produce a color television image. the final image compares favorably with the original physical model. the experiment indicates that, when the physical model and the simulation were viewed through a view camera, subjects could not distinguish between them. the results and comparison of both test procedures are presented within this paper. gary w. meyer holly e. rushmeier michael f. cohen donald p. greenberg kenneth e. torrance improving the representation of legal case texts with information extraction methods the prohibitive cost of assigning indices to textual cases is a major obstacle for the practical use of ai and law systems supporting reasoning and arguing with cases. while progress has been made toward extracting certain facts from well-structured case texts or classifying case abstracts under key number concepts, these methods still do not suffice for the complexity of indexing concepts in cbr systems. in this paper, we lay out how a better example representation may facilitate classification-based indexing. our hypotheses are that (1) abstracting from the individual actors and events in cases, (2) capturing actions in multi-word features, and (3) recognizing negation, can lead to a better representation of legal case texts for automatic indexing. we discuss how to implement these techniques with state-of-the-art nlp tools.
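a minimal sketch of the kind of preprocessing argued for above (role abstraction, multi-word features, negation marking) might look like the following; the role dictionary, negation cues and scope rule are simplifications invented for the example and are not the authors' system.

```python
import re

# illustrative role dictionary: replace named parties with generic roles
ROLES = {"smith": "PLAINTIFF", "acme": "DEFENDANT"}
NEGATION_CUES = {"not", "no", "never", "denied"}

def features(sentence):
    """Tokenize, abstract party names to roles, mark tokens inside a crude
    negation scope (up to the next punctuation), and emit unigrams plus bigrams."""
    tokens, negated = [], False
    for raw in re.findall(r"[a-z]+|[.,;]", sentence.lower()):
        if raw in ".,;":
            negated = False                 # negation scope ends at punctuation
            continue
        tok = ROLES.get(raw, raw)
        if negated:
            tok = "NOT_" + tok
        if raw in NEGATION_CUES:
            negated = True
        tokens.append(tok)
    bigrams = [a + "_" + b for a, b in zip(tokens, tokens[1:])]
    return tokens + bigrams

print(features("Smith did not breach the agreement, Acme claims."))
```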
preliminary experimental results suggest that a combination of domain-specific knowledge and information extraction techniques can be used to generalize from the examples and derive more powerful features. stefanie bruninghaus kevin d. ashley dealing with large sets of stimuli in learning tasks (abstract only) most previous ai research in learning has been aimed at achieving competence in sophisticated logical domains such as mathematical reasoning. such systems usually handle small problems very well. however, many applications of learning may be required to deal with large numbers of inputs (<1000) that are available simultaneously from sensors and other sources. in addition, responses may have to be selected from a large set. borrowing ideas from stimulus sampling theory, we are working on programs that will handle problems in this domain. operations include handling probabilistic relations between inputs and behaviors, changing relations through reinforcement, selectively sampling stimuli, and grouping related stimuli. roger ferguson william w. mcmillan dynamically reparameterized light fields this research further develops the light field and lumigraph image-based rendering methods and extends their utility. we present alternate parameterizations that permit 1) interactive rendering of moderately sampled light fields of scenes with significant, unknown depth variation and 2) low- cost, passive autostereoscopic viewing. using a dynamic reparameterization, these techniques can be used to interactively render photographic effects such as variable focus and depth-of- field within a light field. the dynamic parameterization is independent of scene geometry and does not require actual or approximate geometry of the scene. we explore the frequency domain and ray- space aspects of dynamic reparameterization, and present an interactive rendering technique that takes advantage of today's commodity rendering hardware. aaron isaksen leonard mcmillan steven j. gortler reducing manual labor: an experimental analysis of learning aids for a text editor it is by now a truism to say that computational systems should be designed with ease of use in mind. indeed, shneiderman [10] collects together nearly a dozen lists of advice produced by authors in the past decade on how to meet this laudable goal. the advice that is given seems often to be quite good, but it is almost always qualitative in nature, rather vague, and sometimes contradictory. remarkably little work has been done examining the actual usefulness of carrying out some of the advice, the extent of savings that can be realized by so doing, or the theoretical rationale behind it. in this paper we will present the rationale for investigating two variables that may aid the user, in particular the novice user, and we will describe an extensive experiment designed to examine the actual effects of these two variables. the goals of the work are (1) to understand better the acquisition, representation, and utilization of knowledge by novice or occasional users of computer-based information systems, and (2) to put to the test some ideas derived from current views of memory and attention. donald j. foss mary beth rosson penny l. smith analytic antialiasing with prism splines michael d. mccool skin: a constructive approach to modeling free-form shapes lee markosian jonathan m. cohen thomas crulli john hughes automatic illustration of 3d geometric models: lines debra dooley michael f. 
cohen an investigation of a standard simulation-knowledge interface carol s. russell adel s. elmaghraby james h. graham parameterized environment maps ziyad s. hakura john m. snyder jerome e. lengyel sensitivity analysis of discrete event systems with autocorrelated inputs benjamin melamed reuven y. rubinstein computational fluid dynamics in a traditional animation environment patrick witting a model for frequency domain experiments we present a meta-model which is useful for understanding simulation frequency domain experiments. this model consists of polynomial gain followed by a linear filter with additive noise. the assumptions for performing frequency domain experiments are thus made explicit. we demonstrate how the model leads to a straightforward mechanism for factor screening via statistical hypothesis testing. paul j. sanchez arnold h. buss boundaries, identity, and aggregation (abstract): plurality issues in multiagent systems les gasser announcements amruth kumar using the visual differences predictor to improve performance of progressive global illumination computation a novel view-independent technique for progressive global illumination computing that uses prediction of visible differences to improve both efficiency and effectiveness of physically-sound lighting solutions has been developed. the technique is a mixture of stochastic (density estimation) and deterministic (adaptive mesh refinement) algorithms used in a sequence and optimized to reduce the differences between the intermediate and final images as perceived by the human observer in the course of lighting computation. the quantitative measurements of visibility were obtained using the model of human vision captured in the visible differences predictor (vdp) developed by daly [1993]. the vdp responses were used to support the selection of the best component algorithms from a pool of global illumination solutions, and to enhance the selected algorithms for even better progressive refinement of image quality. the vdp was also used to determine the optimal sequential order of component-algorithm execution, and to choose the points at which switchover between algorithms should take place. as the vdp is computationally expensive, it was applied exclusively at the design and tuning stage of the composite technique, and so perceptual considerations are embedded into the resulting solution, though no vdp calculations were performed during lighting simulation. the proposed illumination technique is also novel, providing intermediate image solutions of high quality at unprecedented speeds, even for complex scenes. one advantage of the technique is that local estimates of global illumination are readily available at the early stages of computing, making possible the development of a more robust adaptive mesh subdivision, which is guided by local contrast information. efficient object space filtering, also based on stochastically-derived estimates of the local illumination error, is applied to substantially reduce the visible noise inherent in stochastic solutions. valdimir volevich karol myszkowski andrei khodulev edward a. kopylov the directional parameter plane transform of a height field the linear parameter plane transform (ppt) of a height field attributes an inverted cone of empty space to each height field cell. it is known that height field ray-tracing efficiency can be improved by traversing rays in steps across inverted cones of empty space.
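the stepping rule that such cones of empty space permit can be sketched generically as follows; the terrain, the single conservative opening ratio and the termination test are illustrative, and this is plain cone stepping rather than the directional ppt the abstract develops.

```python
import math

def cone_trace(height, cone_ratio, origin, direction, t_max=100.0, eps=1e-4):
    """March a ray across a height field using per-cell cones of empty space.
    height(x, y) returns the terrain height; cone_ratio(x, y) is the opening of
    the empty inverted cone above that point (horizontal radius per unit of
    elevation).  Returns the ray parameter of the first hit, or None."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    horiz = math.hypot(dx, dy)
    t = 0.0
    while t < t_max:
        x, y, z = ox + t * dx, oy + t * dy, oz + t * dz
        gap = z - height(x, y)
        if gap <= eps:
            return t                          # ray has reached the surface
        c = cone_ratio(x, y)
        # largest advance that keeps the ray inside the empty cone whose apex
        # sits on the surface directly below the current ray point
        denom = horiz - c * dz
        step = c * gap / denom if denom > 1e-9 else t_max - t
        t += max(step, eps)                   # enforce forward progress
    return None

terrain = lambda x, y: 0.2 * math.sin(x) * math.cos(y)
cones = lambda x, y: 1.0                      # one conservative opening ratio
print(cone_trace(terrain, cones, origin=(0.0, 0.0, 2.0), direction=(0.7, 0.0, -0.714)))
```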
however, steps across inverted cones of empty space along rays close to the base of a steep ridge will be short, even if there are no obstructions along the line of sight, because the cones will be narrow. this weakness can be virtually eliminated by allowing the opening angles of the inverted cones of empty space to vary between sectors, i.e., by directionalizing the linear ppt. an efficient algorithm for computing the linear directional ppt of a height field is given and its properties are investigated. david w. paglieroni a clustering algorithm for radiosity in complex environments we present an approach for accelerating hierarchical radiosity by clustering objects. previous approaches constructed effective hierarchies by subdividing surfaces, but could not exploit a hierarchical grouping on existing surfaces. this limitation resulted in an excessive number of initial links in complex environments. initial linking is potentially the most expensive portion of hierarchical radiosity algorithms, and constrains the complexity of the environments that can be simulated. the clustering algorithm presented here operates by estimating energy transfer between collections of objects while maintaining reliable error bounds on each transfer. two methods of bounding the transfers are employed with different tradeoffs between accuracy and time. in contrast with the o(s^2) time and space complexity of the initial linking in previous hierarchical radiosity algorithms, the new methods have complexities of o(s log s) and o(s) for both time and space. using these methods we have obtained speedups of two orders of magnitude for environments of moderate complexity while maintaining comparable accuracy. brian smits james arvo donald greenberg artificial texturing: an aid to surface visualization texture is an important surface characteristic which provides a great deal of information about the nature of a surface, and several computationally complex algorithms have been implemented to replicate realistic textures in computer shaded images. perceptual psychologists have recognized the importance of surface texture as a cue to space perception, and have attempted to delineate which factors provide primary shape information. a rendering algorithm is presented which uses these factors to produce a texture specifically designed to aid visualization. since the texture is not attempting to replicate an actual texture pattern, it is called "artificial" texturing. this algorithm avoids the computational limitations of other methods because this "artificial" approach does not require complex mappings from a defined texture space to the transformed image to be rendered. a simple filtering method is presented to avoid unacceptable aliasing. dino schweitzer efficient bump mapping hardware mark peercy john airey brian cabral computing all factorizations in z_n[x] we present a new algorithm for determining all factorizations of a polynomial f in the domain z_n[x], a non-unique factorization domain, given in terms of parameters. from the prime factorization of n, the problem is reduced to factorization in z_{p^k}[x] where p is a prime and k ≥ 1. if p^k does not divide the discriminant of f and one factorization is given, our algorithm determines all factorizations with complexity o(n^3 m(k log p)) where n denotes the degree of the input polynomial and m(t) denotes the complexity of multiplication of two t-bit numbers.
our algorithm improves on the method of von zur gathen and hartlieb, which has complexity o(_n_^7 _k_(_k_ log _p_ + log _n_^2)). the improvement is achieved by processing all factors at the same time instead of one at a time and by computing the kernels and determinants of matrices over z_p^k in an efficient manner. howard cheng george labahn an alternative to constraint logic programming for managing domain arithmetics in prolog expert systems david roach hal berghel feature selection, perceptron learning, and a usability case study for text categorization hwee tou ng wei boon goh kok leong low knowledge acquisition and knowledge base refinement problems in developing the kbs legal expert system julia barragán luciano barragán realizing 3d visualization using crossed-beam volumetric displays david ebert edward bedwell stephen maher laura smoliar elizabeth downing building expert systems through the integration of mental models z. chen vizard - visualization accelerator for realtime display gunter knittel wolfgang straßer cheops: a compact explorer for complex hierarchies luc beaudoin marc-antoine parent louis c. vroomen digital illusion: theme park visualization clark dodsworth direct wysiwyg painting and texturing on 3d shapes pat hanrahan paul haeberli exact: algorithm and hardware architecture for an improved a-buffer andreas schilling wolfgang straßer beyond paint rosalee wolfe handscape jay lee victor su sandia ren hiroshi ishii james hsiao rujira hongladaromp letter from the chair jeffrey m. bradshaw focused analysis and training environments f. bradley armstrong barbara werner mazziotti ken powell visual modeling for computer animation: graphics with a vision over the past decade, the visual modeling research program at the university of toronto has consistently championed the concerted exploration of computer graphics and computer vision. our premise has been this: graphics, the forward/synthesis/models-to-images problem, and vision, the inverse/analysis/images-to-models problem, pose mutually converse challenges, which may best be tackled synergistically through the development of advanced modeling techniques catering to the needs of both fields. with illustrated case studies of three projects spanning a twelve-year period, this brief article presents a personal retrospective on image-based modeling for computer animation. as we shall see, one of the projects has also created opportunities for computer animation to contribute to the study of visual perception in living systems. i shall begin by reviewing an early computer animation project that pioneered the use of image-based modeling to combine natural and synthetic imagery. the animation _cooking with kurt,_ produced in 1987 at the schlumberger palo alto research center, introduced a paradigm in which computer vision was applied to acquire 3d models of objects from their images. the acquired models were then dynamically animated in a simulated physical scene reconstructed from the image of a real scene. the approach demonstrated a promising alternative to the established convention of keyframe animating manually constructed geometric models. the human face is a natural objective for the image-based modeling approach. i will next describe a facial animation project that uses specialized imaging devices to capture models of human heads with functional, biomechanically simulated faces conforming closely to human subjects.
facial animation is by no means the sole benefactor of an exciting area of advanced graphics modeling that lies at the intersection of virtual reality and artificial life [6]. accordingly, i will also describe a virtual "seaquarium" populated by artificial marine animals whose 3d shapes and appearances were captured from images of real fishes. as i discuss in the final section, lifelike, self-animating creatures now serve as biomimetic autonomous agents in the study of animal perception. demetri terzopoulos convergence characteristics of keep-best reproduction kay wiese scott d. goodwin on detection and representation of multiscale low-level image structure narendra ahuja links syed s. ali susan mcroy songsak channarvkal it's all about the nose christos demosthenous real-time interactive graphics scott s. fisher glen fraser scot thrane refsland cofess: cooperative fuzzy expert systems for intelligent recognition on small computers in this paper we investigate the implementation of a cooperative fuzzy expert systems for intelligent recognition of patterns. first we discuss the communication method developed to communicate between the various expert systems on small computers, and then we describe in detail the functions and the structure of each expert system. we also show how fuzzy set theory and fuzzy logic is utilized in the implementation of the system. abraham kandel moti schneider softy puffs: paper chase shannon gilley some intelligent software supply chain agents mark e. nissen anshu mehra spatial representations and route descriptions agnès gryl marie-rose gonçalves a human factors study of color notation systems for computer graphics toby berk arie kaufman lee brownston building a simulator with gpss/h douglas s. smith daniel t. brunner robert c. crain parameterized ray-tracing c. h. sequin e. k. smyrl controlled simplification of genus for polygonal models jihad el-sana amitabh varshney residual speech signal compression: an experiment in the practical application of neural network technology neural networks are a popular area of research today. however, neural network algorithms have only recently proven valuable to application problems. this paper seeks to aid in the process of transferring neural network technology from research to a development environment by describing our experience in applying this technology. the application studied here is speaker identity verification (siv), which is the task of verifying a speaker's identity by comparing the speaker's voice pattern to a stored template. in this paper, we describe the application of the back-propagation neural network algorithm to one aspect of the siv problem, called residual compression (rc). the rc problem is to extract useful features from a part of the speech signal that was not utilized by previous siv systems. here, we describe a neural network architecture, pre-processing algorithm, training methodology, and empirical results for this problem. we also present a few guidelines for the use of neural networks in applied settings. lorien pratt kathleen d. cebulka peter clitherow the disciple integrated shell and methodology for rapid development of knowledge-based agents mihai boicu kathryn wright dorin marcu seok won lee michael bowman gheorghe tecuci a volume density optical model peter l. williams nelson max localized-hierarchy surface splines (less) carlos gonzalez-ochoa jorg peters faster integration of the equations of motion breton m. saunders the lumigraph steven j. 
gortler radek grzeszczuk richard szeliski michael f. cohen ai update douglas blank teaching surface design made easy yan zhou yuan zhao john l. lowther ching- kuang shene a cellular texture basis function steven worley specializing shaders brian guenter todd b. knoblock erik ruf the audition derek flood das werk hybrid stereo camera: an ibr approach for synthesis of very high resolution stereoscopic image sequences this paper introduces a novel application of ibr technology for efficient rendering of high quality cg and live action stereoscopic sequences. traditionally, ibr has been applied to render novel views using image and depth based representations of the plenoptic functions. in this work, we present a restricted form of ibr in which lower resolution images for the views to be generated at a very high resolution are assumed to be available. specifically, the paper addresses the problem of synthesizing stereo imax(r)1 3d motion picture images at a standard resolution of 4-6k. at such high resolutions, producing cg content is extremely time consuming and capturing live action requires bulky cameras. we propose a _hybrid stereo camera_ concept in which one view is rendered at the target high resolution but the other is rendered at a much lower resolution. methods for synthesizing the second view sequence at the target resolution using image analysis and ibr techniques are the focus of this work. the high quality results from the techniques presented in this paper have been visually evaluated in the imax 3d large screen projection environment. the paper also highlights generalizations and extensions of the hybrid stereo camera concept. harpreet s. sawhney yanlin guo keith hanna rakesh kumar sean adkins samuel zhou a framework for designing an animated simulation system based on model- animator-scheduler paradigm james t. lin kuang-chau yeh liang-chyau sheu en derive patrice mugnier ali ensad bruno follet a pictorial aid for programming expert systems edward n. schwartz a practical evaluation of popular volume rendering algorithms michael meißner jian huang dirk bartz klaus mueller roger crawfis adaptive hierarchical visibility in a tiled architecture feng xie michael shantz on the use of simulation in the design and installation of a power and free conveyor system if the introduction of the first power and free conveyor system in a plant is a risky adventure, then the introduction of the second power and free conveyor system in a plant where the first one failed is an extremely hazardous undertaking. the people proposing this power and free conveyor system decided to decrease their risks by initiating a simulation study while the system was early in the design phase. the primary purpose of the proposed power and free system is to minimize the manual handling of parts between the two plating tanks compared to the system it replaced. the basic objective of this simulation study was to evaluate the feasibility of the proposed system to perform the required tasks while operating at specified production levels with efficient use of the planned number of workers. george l. good j. thomas bauner the hierarchical simulation language hsl: a versatile tool for process- oriented simulation d. p. sanderson r. sharma r. rozin s. treu computer graphics in brazil marcio lobo netto editorial/letter from the chair lewis johnson implementation of multilinear operators in reduce and applications in mathematics marcel roelofs peter k. h. 
gragert behavioral self-organization in lifelike agents jiming liu hong qin a framework for realistic image synthesis donald p. greenberg generalized clustering, supervised learning, and data assignment clustering algorithms have become increasingly important in handling and analyzing data. considerable work has been done in devising effective but increasingly specific clustering algorithms. in contrast, we have developed a generalized framework that accommodates diverse clustering algorithms in a systematic way. this framework views clustering as a general process of iterative optimization that includes modules for supervised learning and instance assignment. the framework has also suggested several novel clustering methods. in this paper, we investigate experimentally the efficacy of these algorithms and test some hypotheses about the relation between such unsupervised techniques and the supervised methods embedded in them. annaka kalton pat langley kiri wagstaff jungsoon yoo transforming spheres - in three parts thomas g. west some label efficient learning results david helmbold sandra panizza building block shaders gregory d. abram turner whitted algorithms for division free perspective correct rendering well known implemetations for perspective correct rendering of planar polygons require a division per rendered pixel. such a division is better to be avoided as it is an expensive operation in terms of silicon gates and clock cycles. in this paper we present a family of efficient midpoint algorithms that can be used to avoid division operators. these algorithms do not require more than a small number of additions per pixel. we show how these can be embedded in scan line algorithms and in algorithms that use mipmaps. experiments with software implementations show that the division free algorithms are a factor of two faster, provided that the polygons are not too small. these algorithms are however most profitable when realised in hardware. b. barenbrug f. j. peters c. w. a. m. van overveld streaming qsplat: a viewer for networked visualization of large, dense models szymon rusinkiewicz marc levoy smart virtual prototypes: distributed 3d product simulations for web based environments a technology that enables development of functional and interactive 3d product design models for web based environments is presented. a novel prototyping feature of these models is that they have capabilities to simulate complicated functions of the target product. simulation can be executed either locally in the host environment or in a distributed manner between the client and server environments. in the latter case, 3d visualization model of the product is executed on the client side, and complex functional and behavioral simulations on the server side. downloadable product models can be supplied with intelligent software agents and other plug-ins that perform specific tasks, such as recording of interaction data for usability tests. because of their capabilities, these product models are called smart virtual prototypes. also outlined are the integrated design-time and run-time environments for developing smart virtual prototypes. marko salmela harri kyllönen a contour display generation algorithm for vlsi implementation recent articles have discussed the current trend towards designing raster graphics algorithms into vlsi chips. the purpose of these design efforts is to capture some of the real-time animation capability found in vector graphics systems. 
currently, real-time vector graphics animation is limited primarily to operations involving coordinate transformations. in order to enhance this animation capability, frequently encountered vector graphics algorithms that require the high speed, parallel computation capability of vlsi must be identified. real-time contour display generation from grid data is one such algorithm. this paper describes the specifics of a contour display generation algorithm, the architectural framework of a processor that performs this algorithm and the architectural requirements of such a processor. the contouring algorithm is based on a data structure, the contouring tree, whose regularity and amenability for parallel computation make it an ideal candidate for vlsi. the architectural framework for a contouring processor chip that performs this algorithm for the real-time environment of interactive graphics is discussed, particularly the issues of memory size and data distribution. a model of the contouring process is created in order to determine the necessary physical parameters of the contouring processor in this architectural framework. conclusions are drawn concerning the feasibility of producing a vlsi chip that performs this contouring algorithm. michael j. zyda i-collide: an interactive and exact collision detection system for large-scale environments we present an exact and interactive collision detection system, i-collide, for large-scale environments. such environments are characterized by the number of objects undergoing rigid motion and the complexity of the models. the algorithm does not assume the objects' motions can be expressed as a closed form function of time. the collision detection system is general and can be easily interfaced with a variety of applications. the algorithm uses a two- level approach based on pruning multiple-object pairs using bounding boxes and performing exact collision detection between selected pairs of polyhedral models. we demonstrate the performance of the system in walkthrough and simulation environments consisting of a large number of moving objects. in particular, the system takes less than 1/20 of a second to determine all the collisions and contacts in an environment consisting of more than 1000 moving polytopes, each consisting of more than 50 faces on an hp-9000/750. jonathan d. cohen ming c. lin dinesh manocha madhav ponamgi multi-resolution dynamic meshes with arbitrary deformations ariel shamir chandrajit bajaj valerio pascucci digital signal processors for computation intensive statistics and simulation p. r. pukite j. pukite high-level planning and low-level execution: towards a complete robotic agent karen zita haigh manuela m. veloso a framework for the simulation experimentation process thomas c. fall learner characteristics that predict success in using a text-editor tutorial today, it is not unusual for secretaries to use computer-based word- processing systems to deal with manuscripts, correspondence, and memos. in the future, such functions as updating personal calendars, filing, and leaving messages undoubtedly will be handled by computers. for all these functions, people without a technical background are required to interact effectively with a computer system. a person's introduction to computers often begins with an attempt to learn how to use a text editor. thus, knowing how to use a text editor is a requirement for a growing number of jobs. 
the importance of text editing is underscored by the recent psychological research devoted to understanding this skill [1,2,3,5]. dennis e. egan cheryll bowers louis m. gomez phong normal interpolation revisited phong shading is one of the best known, and at the same time simplest techniques to arrive at realistic images when rendering 3d geometric models. however, despite (or maybe due to) its success and its widespread use, some aspects remain to be clarified with respect to its validity and robustness. this might be caused by the fact that the phong method is based on geometric arguments, illumination models, and clever heuristics. in this article we address some of the fundamentals that underlie phong shading, such as the computation of vertex normals for nonmanifold models and the adequacy of linear interpolation, and we apply a new interpolation technique to achieve an efficient and qualitatively improved result. c. w. a. m. van overveld b. wyvill an implementation of operators for symbolic algebra systems in this paper we propose a design and implementation of operators and their associated functionality for symbolic algebra systems. we believe that operators should blend harmoniously (syntactically and semantically) with the underlying language, in such a way that users will find them convenient and appealing to use. it is "vox populi" that operators are needed in a symbolic algebra system, although there is little consensus on what these should be, what the semantics should be, allowable operations, syntax, etc. all of these ideas, and examples, have been implemented and work as described in our current version of maple [cha85]. during the first maple retreat (1983) we established a basic design for operators. the implementation of this design was delayed until some remaining crucial details were finally solved during the 1985 maple retreat (sept 1985). in this sense, this paper is the result of the collective work of all the participants of these two retreats. what is an operator? we would like to define an operator to be an abstract data type which describes (at various possible degrees: totally, partially or minimally) an operation to be performed on its arguments. this abstract data type is closely associated with the operations of application and composition, but will also allow most (or all) of the other algebraic operations. we found it useful to have some "witness" examples that we want to solve in an elegant and general form. the two main examples were: (a) how to represent the first derivative of f(x) at 0, i.e. f′(0) (the above really boils down to an effective representation of the differentiation operator) (b) how to represent and to operate with a non-commutative multiplication operator, for example matrix multiplication. of course many systems solve the above problems, but in some cases (in particular for the first example) as an ad-hoc solution. by an ad-hoc solution we mean that, for the differentiation example, this operator cannot be written in terms of the primitives given by the language. it is important to note that there are three issues to resolve: a purely representational/syntactic argument: how to input/output these operators. a purely functional argument: how to perform all the operations we want performed. an integrational argument: how to join operators harmoniously with a symbolic algebra system. g. h.
gonnet overlapping multi-processing and graphics hardware acceleration: performance evaluation recently, multi-processing has been shown to deliver good performance in rendering. however, in some applications, processors spend too much time executing tasks that could be more efficiently done through intensive use of new graphics hardware. we present in this paper a novel solution combining multi-processing and advanced graphics hardware, where graphics pipelines are used both for classical visualization tasks and to advantageously perform geometric calculations while remaining computations are handled by multi- processors. the experiment is based on an implementation of a new parallel wavelet radiosity algorithm. the application is executed on the sgi origin2000 connected to the sgi infinitereality2 rendering pipeline. a performance evaluation is presented. keeping in mind that the approach can benefit all available workstations and super-computers, from small scale (2 processors and 1 graphics pipeline) to large scale (p processors and n graphics pipelines), we highlight some important bottlenecks that impede performance. however, our results show that this approach could be a promising avenue for scientific and engineering simulation and visualization applications that need intensive geometric calculations. xavier cavin laurent alonso jean-claude paul modeling a hospital main cafeteria william a. stout a model for efficient and flexible image computing as common as imaging operations are, the literature contains little about how to build systems for image computation. this paper presents a system which addresses the major issues of image computing. the system includes an algorithm for performing imaging operations which guarantees that we only compute those regions of the image that will affect the result. the paper also discusses several other issues critical when creating a flexible image computing environment and describes solutions for these problems in the context of our model. these issues include how one handles images of any resolution and how one works in arbitrary coordinate systems. it also includes a discussion of the standard memory models, a presentation of a new model, and a discussion of each one's advantages and disadvantages. michael a. shantzis a 3d stereo window system for virtual environments michael a. wingfield an experimental distributed microprocessor-based knowledge base system (abstract only) the current research of our data base and knowledge base interesting group is aimed to set up an experimental data-driven, function distributed knowledge base machine, based on multiprocessor, including dedicated processors which implement a variety of database/knowledge base functions, such as sort/merge, retrieve, join, logical language parsing and inference, etc. by the joint effort in the fields of database system and artificial intelligence, the system will become a suitable environment for investigating the knowledge base system which supports the automatic programming, cai, intelligent problem solving, etc. the behavior of different knowledge base models in a distributed environment will also be investigated. we consider our project as a part of the general trend toward the goal of the so called fifth-generation computer system. 
fu tong evaluation may be easier than generation (extended abstract) moni naor parallel algorithms for simulating continuous time markov chains we have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time markov chains. this paper reviews the basic method and compares four different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. the methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. experimental results are reported for the touchstone delta multiprocessor, using up to 256 processors. david m. nicol philip heidelberger metamodel estimation using integrated correlation methods this paper develops a generalized approach for combining the use of the schruben-margolin correlation induction strategy and control variates in a simulation experiment designed to estimate a metamodel that is linear in the unknown parameters relating the response variable of interest to selected exogenous decision variables. this generalized approach is based on standard techniques of regression analysis. under certain broad assumptions, the combined use of the schruben-margolin correlation induction strategy and control variates is shown to give a more efficient estimator of the metamodel coefficients than each of the following conventional correlation-based variance reduction techniques: independent streams, common random numbers, control variates, and the schruben-margolin strategy. jeffrey d. tew james r. wilson controlled precision volume integration kevin novins james arvo a don't care back propagation algorithm applied to satellite image recognition antonette logar edward corwin samuel watters ronald weger ronald welch isle: intelligent scalable logistics environment lisa sokol steven geyer robert lasken katherine murphy creativity and expert systems: a divergence of the minds donald o. walter assyst - computer support for guideline sentencing assyst (applied sentencing system) was developed so that judges, prosecutors, defense attorneys, probation officers and other key members of the federal criminal justice system could easily compute, record, archive and examine the implications of guidelines promulgated by the united states sentencing commission [1]. although assyst was not developed using the formalisms employed by most expert system designers, it does meet at least some of the criteria of an expert system [2] and in particular an expert system for legal analysis [3]. assyst is a domain specific knowledge representation of the rules for determinate sentencing in the federal judicial system. it elicits from the user all the information required to make a final determinate sentence. unlike expert systems which resolve decision conflicts, assyst will only suggest tentative solution sets to grouping rules required by sentencing guidelines and, in this respect, it is a deterministic rather than a conflict resolving system. assyst has been a success. in june of 1988, every u.s. probation main office received a copy of the software. the response has been very favorable. officers who have taken the time to learn the system have reported substantial decreases in the amount of time spent per case. the software has proliferated throughout the federal criminal justice system since, as a publication by the federal government, it is not copyrightable. in a related paper, sergot et al.
[4] have discussed the formalization of the british nationality act in prolog. in their article, sergot et al. demonstrated how the language of the statute could be translated into definite horn clauses which form the basic logic structure of prolog's programming language. the difficulty usually encountered in the logical interpretation of statutes, according to sergot et al. is that the statute language is often vague. phrases such as "having reasonable excuse" or "being in good character" are two examples from the british nationality act. the sentencing reform act of 1984 [5], the statutory basis for federal sentencing guidelines, also contains many vague phrases. as an example, the enacting legislation requires the u.s. sentencing commission to consider the following offender characteristics: mental and emotional condition, community ties, and criminal history. there is no exposition in the statute on any of these factors. it was the responsibility of the commission and the guideline writers to translate the intent of the legislation into coherent and consistent rules of determinate sentencing. analogous to sergot et al., our programming effort attempted to formalize legislative logic, in this case determinate sentencing, into rigorous programming logic. unlike sergot et al., our burden was ameliorated by others who transformed the vague language of this statute into the rules of sentencing guidelines. sentencing guidelines were proposed by congress to reduce sentencing disparity among federal prisoners and to provide a consistent set of rules to codify characteristics of both the offense and the offender. in the past, judges have had a great deal of latitude in their sentencing decisions. while some would argue that this discretion was necessary in order to treat the individual circumstances of the crime, others have argued that the degree of latitude introduced inconsistent sentencing decisions. this paper does not attempt to resolve the imbroglio between proponents and opponents of sentencing guidelines. rather it is intended to show how a computer application became an integral component of the continued development of the guideline process. e. simon g. gaes buddies robin roepstorff modelling constructs of simscript ii.5 this tutorial will present the highlights of the simscript ii.5 approach to building discrete event simulation models. the approach will be to construct a small example problem, implement the program in simscript ii.5, and then to display the modularity which is possible with simscript by adding several "real-world" complexities to the model. edward c. russell the automatic element routine generator: an automatic programming tool for functional simulator design cheng-i chuang stephen a. szygenda james d. baker concepts and criteria to assess acceptability of simulation studies: a frame of reference tuncer i. ören simulation studio springer w. cox closed loop methodology applied to simulation by using modern system theory to model physical systems, questions concerning the design, analysis, test, and evaluation of such systems can be answered. this tutorial will compare the traditional open loop methodology of simulation to the modern closed loop methodology. advantages and disadvantages of each methodology will be discussed. systems will be typed as being deterministic or stochastic. the use of partial differential equations, ordinary differential equations, and algebraic equations to describe the system will be discussed.
in order to solve system questions, the need to solve the estimation, identification, and control (eic) problems will be motivated. the use of the eic solutions with respect to the traditional open loop methodology (olm) and the modern control loop methods (clm) will be examined. al fermelia feature-based cellular texturing for architectural models cellular patterns are all around us, in masonry, tiling, shingles, and many other materials. such patterns, especially in architectural settings, are influenced by geometric features of the underlying shape. bricks turn corners, stones frame windows and doorways, and patterns on disconnected portions of a building align to achieve a particular aesthetic goal. we present a strategy for feature-based cellular texturing, where the resulting texture is derived from both patterns of cells and the geometry to which they are applied. as part of this strategy, we perform texturing operations on features in a well-defined order that simplifies the interdependence between cells of adjacent patterns. _occupancy maps_ are used to indicate which regions of a feature are already occupied by cells of its neighbors, and which regions remain to be textured. we also introduce the notion of a _pattern generator_ --- the cellular texturing analogy of a shader used in local illumination --- and show how several can be used together to build complex textures. we present results obtained with an implementation of this strategy and discuss details of some example pattern generators. justin legakis julie dorsey steven gortler planet paranoid wolfgang morell environment matting and compositing douglas e. zongker dawn m. werner brian curless david h. salesin time critical isosurface refinement and smoothing v. pascucci c. l. bajaj latent semantic space: iterative scaling improves precision of inter-document similarity measurement we present a novel algorithm that creates document vectors with reduced dimensionality. this work was motivated by an application characterizing relationships among documents in a collection. our algorithm yielded inter-document similarities with an _average precision_ up to 17.8% higher than that of singular value decomposition (svd) used for latent semantic indexing. the best performance was achieved with dimensional reduction rates that were 43% higher than svd on average. our algorithm creates basis vectors for a reduced space by iteratively "scaling" vectors and computing eigenvectors. unlike svd, it breaks the symmetry of documents and terms to capture information more evenly across documents. we also discuss correlation with a probabilistic model and evaluate a method for selecting the dimensionality using log-likelihood estimation. rie kubota ando the role of simulation in machine learning research this paper discusses the role of simulation in machine learning studies and presents a view of simulation-based machine learning. based on the concept of the intelligent agent, it is shown how each of a variety of learning subsystems interacts with a simulated performance engine and how they may interact with each other. in particular, in the context of ongoing research into the coordination of various approaches to learning into an integrated facility called the learning testbed, the centrality of the simulation performance engine netsim to the development of the learning testbed is discussed.
netsim is a fine-grained simulation of the call placement process in a circuit-switched telecommunications network which allows observation of the effectiveness of various traffic control strategies on network performance when time-varying traffic patterns are encountered. the users of the netsim program are three learning programs, which embody three different approaches to how a specialized domain, such as network traffic control, might be learned. william j. frawlley continuous systems simulation using gpss/h eugenia kalisz the computer aided simulation modeling environment: an overview ray j. paul texture shaders michael d. mccool wolfgang heidrich updating computer animation (panel): an interdisciplinary approach jane veeder put that where? voice and gesture at the graphics interface _a person stands in front of a large projection screen on which is shown a checked floor. they say, "make a table," and a wooden table appears in the middle of the floor."on the table, place a vase," they gesture using a fist relative to palm of their other hand to show the relative location of the vase on the table. a vase appears at the correct location."next to the table place a chair." a chair appears to the right of the table."rotate it like this," while rotating their hand causes the chair to turn towards the table."view the scene from this direction," they say while pointing one hand towards the palm of the other. the scene rotates to match their hand orientation._in a matter of moments, a simple scene has been created using natural speech and gesture. the interface of the future? not at all; koons, thorisson and bolt demonstrated this work in 1992 [23]. although research such as this has shown the value of combining speech and gesture at the interface, most computer graphics are still being developed with tools no more intuitive than a mouse and keyboard. this need not be the case. current speech and gesture technologies make multimodal interfaces with combined voice and gesture input easily achievable. there are several commercial versions of continuous dictation software currently available, while tablets and pens are widely supported in graphics applications. however, having this capability doesn't mean that voice and gesture should be added to every modeling package in a haphazard manner. there are numerous issues that must be addressed in order to develop an intuitive interface that uses the strengths of both input modalities.in this article we describe motivations for adding voice and gesture to graphical applications, review previous work showing different ways these modalities may be used and outline some general interface guidelines. finally, we give an overview of promising areas for future research. our motivation for writing this is to spur developers to build compelling interfaces that will make speech and gesture as common on the desktop as the keyboard and mouse. mark billinghurst gmsim: a tool for compositional gsmp modeling frode b. nilson micro-simpas: a microprocessor based simulation language simpas [1] is a discrete system simulation language based on pascal. ucsd pascal [6] is a pascal-based operating system designed for microcomputer use. micro-simpas is a version of simpas which runs under ucsd pascal. this paper discusses the conversion of an existing simpas implementation into micro- simpas, and discusses our experience in using micro-simpas for the construction of some simple simulations. 
we also discuss new features of micro-simpas for the interactive display of simulation output. our experience with such routines has been favorable; we provide samples of their use in the paper. raymond m. bryant a methodology for agent-oriented analysis and design michael wooldridge nicholas r. jennings david kinny building virtual structures with physical blocks we describe a tangible interface for building virtual structures using physical building blocks. we demonstrate two applications of our system. in one version, the blocks are used to construct geometric models of objects and structures for a popular game, quake ii . in another version, buildings created with our blocks are rendered in different styles, using intelligent decoration of the building model. david anderson james l. frankel joe marks darren leigh eddie sullivan jonathan yedidia kathy ryall two methods for display of high contrast images high contrast images are common in night scenes and other scenes that include dark shadows and bright light sources. these scenes are difficult to display because their contrasts greatly exceed the range of most display devices for images. as a result, the image constrasts are compressed or truncated, obscuring subtle textures and details. humans view and understand high contrast scenes easily, "adapting" their visual response to avoid compression or truncation with no apparent loss of detail. by imitating some of these visual adaptation processes, we developed methods for the improved display of high- contrast images. the first builds a display image from several layers of lighting and surface properties. only the lighting layers are compressed, drastically reducing contrast while preserving much of the image detail. this method is practical only for synthetic images where the layers can be retained from the rendering process. the second method interactively adjusts the displayed image to preserve local contrasts in a small "foveal" neighborhood. unlike the first method, this technique is usable on any image and includes a new tone reproduction operator. both methods use a sigmoid function for contrast compression. this function has no effect when applied to small signals but compresses large signals to fit within an asymptotic limit. we demonstrate the effectiveness of these approaches by comparing processed and unprocessed images. jack tumblin jessica k. hodgins brian k. guenter constraint-driven generation of model structures this article presents a framework for generating model structures with respect to a set of constraints and modelling requirements. the framework is based on multifacetted modelling and artificial intelligence concepts. two knowledge representations, the system entity structure and the production rule formalism are incorporated into an automatic procedure for generating model configurations. the procedure is implemented in the turbo prolog environment. a simple case study based on a local area network (lan) modelling problem is discussed to illustrate the conceptual framework. jerzy w. rozenblit yueh-min huang inductive modelling in law: example based expert systems in administrative law jørgen karpf a rule network for efficient implementation of a mixed-initiative reasoning scheme midst (mixed inferencing dempster shafer tool) is a rule-based expert system shell that incorporates mixed-initiative reasoning and uncertain reasoning based on the dempster-shafer evidence combination scheme. 
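for reference, the dempster-shafer evidence combination scheme just mentioned is conventionally built on dempster's rule of combination; a standard textbook statement (not taken from the midst paper) for two basic probability assignments m_1 and m_2 over the same frame of discernment is

\[
(m_1 \oplus m_2)(A) = \frac{\sum_{B \cap C = A} m_1(B)\, m_2(C)}{1 - \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)}, \qquad A \neq \emptyset, \qquad (m_1 \oplus m_2)(\emptyset) = 0,
\]

where the denominator normalizes away the mass assigned to conflicting evidence.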
this paper discusses the design and implementation of a rule network for midst that facilitates an efficient implementation of the mixed-initiative reasoning scheme. g. biswas x. yu diagnosis of complex industrial devices bechir el ayeb too many triangles wolfgang seibold geoff wyvill a tutorial on see why and witness a. r. gilman c. billingham organization of the basic agent steven a. vere quadratic bezier triangles as drawing primitives j. bruijns real-time principal direction line drawings of arbitrary 3d surfaces ahna girshick victoria interrante intelligent software technology for the new decade our collective has been working in the al domain for more than 15 years. during most of that period our research focused on two "classical" subjects: 1\. natural language interfaces and 2. knowledge representation and processing. from the early 1980s, in addition to working on the conceptual aspects of al, we have been increasingly involved in developing advanced technology of intelligent systems. participating in the start project [1] stimulated our research and development [r&d;] in this direction. our task in the project was to design a more elaborate version of an industrial-type technology based on pilot software "factories." these would allow a multiple ---10-fold and more---increase in the efficiency alexander s. narin'yani a distributed graphics system for large tiled displays recent interest in large displays has led to renewed development of tiled displays, which are comprised of several individual displays arranged in an array and used as one large logical display. stanford's "interactive mural" is an example of such a display, using an overlapping four by two array of projectors that back-project onto a diffuse screen to form a 6' by 2' display area with a resolution of over 60 dpi. writing software to make effective use of the large display space is a challenge because normal window system interaction metaphors break down. one promising approach is to switch to immersive applications; another approach, the one we are investigating, is to emulate office, conference room or studio environments which use the space to display a collection of visual material to support group activities. in this paper we describe a virtual graphics system that is designed to support multiple simultaneous rendering streams from both local and remote sites. the system abstracts the physical number of computers, graphics subsystems and projectors used to create the display. we provide performance measurements to show that the system scales well and thus supports a variety of different hardware configurations. the system is also interesting because it uses transparent "layers," instead of windows, to manage the screen. greg humphreys pat hanrahan prolexs divide and rule: a legal application a. oskamp r. f. walker j. a. schrickx p. h. van den berg issues in modeling and simulation: policies and technologies david r. pratt drew w. beasley procedure based help desk system in this paper, we describe an outline of "procedure based help desk system". preparing enough amounts of contents for help desk system is important for constructing an efficient help desk system. however, the preparation of contents is a hard job for contents-creators (usually, who is an expert of the work.). to support making help desk contents, we developed "procedure based help desk system". primary functions of this system are to easily generate help desk contents about software usage (they will be called as "procedure data".). 
then the system classifies procedure data and constructs procedure database. also the system provides useful functions to refer accumulated procedure data. akira takano yuko yurugi atsushi kanaegami modeling the interaction of light between diffuse surfaces a method is described which models the interaction of light between diffusely reflecting surfaces. current light reflection models used in computer graphics do not account for the object-to-object reflection between diffuse surfaces, and thus incorrectly compute the global illumination effects. the new procedure, based on methods used in thermal engineering, includes the effects of diffuse light sources of finite area, as well as the "color- bleeding" effects which are caused by the diffuse reflections. a simple environment is used to illustrate these simulated effects and is presented with photographs of a physical model. the procedure is applicable to environments composed of ideal diffuse reflectors and can account for direct illumination from a variety of light sources. the resultant surface intensities are independent of observer position, and thus environments can be preprocessed for dynamic sequences. cindy m. goral kenneth e. torrance donald p. greenberg bennett battaile ray tracing deterministic 3-d fractals j. c. hart d. j. sandin l. h. kauffman deep compression for streaming texture intensive animations daniel cohen-or yair mann shachar fleishman knowledge-based distributed simulation generator yung-chang wong shu-yuen hwang vex: a volume exploratorium: an integrated toolkit for interactive volume visualization larry gelberg david kamins jeff vroom dimension-based analysis of hypotheticals from supreme court oral argument in this paper we examine a sequence of hypotheticals taken from a supreme court oral argument. we use the idea of a "dimension," developed previously in our case-based reasoning system hypo, to analyze the hypotheticals and to speculate on how the justices might have arrived at them. the case we consider is taken from the area of fourth amendment law concerning warrantless search and seizure. e. l. rissland building cognitively rich agents using the sim_agent toolkit aaron sloman brian logan medmodel: healthcare simulation software steven h. denney polar forms for geometrically continuous spline curves of arbitrary degree hans-peter seidel towards knowledge-based robotics systems a spatial planning technique is examined whereby the robot system plans in a general space which greatly enhances the flexibility of manipulator operation by leaving resolution of trajectory and placement minutae to "motion execution time". this permits the robot planning system to deal with elements in the working environment which are in continuous or semi-continuous motion, as well as those normally considered to be at rest. a multi-modal representation and knowledge-base (kb) are used. david r. dodds fundamentals of digital simulation modeling this paper and the tutorial session with which it is associated treat the fundamental concepts of digital simulation. the topics discussed include system modeling, simulation models and their advantages and disadvantages relative to mathematical models, the development of simulation and current applications, the role of simulation modeling in systems analysis and simulation languages. the paper and the tutorial are presented at a level which requires no previous exposure to digital simulation. 
however, familiarity with the fundamentals of probability, probability distributions and inferential statistics will facilitate the participant's understanding of the material presented. j. w. schmidt a conceptual activity cycle-based simulation modeling method jingsheng shi the role and reality of graphics standards (panel) graphics standards are formally sanctioned guidelines for graphics programming. just as ansi standards provide fortran programmers with a common ground among compilers, the role of graphics standards is to define a set of common procedures or subroutines for developing graphics applications. the primary objectives underlying graphics standards are program portability and a common methodology for application program development. graphic standards are clearly desirable and efforts to bring them about are well underway. nevertheless, the role of graphics standards must be matched with the reality of software development environments. a common problem faced by all standards committees is the very long lead time required to arrive at consensus. most standards groups are staffed by volunteers, chartered to develop standards models around generally-accepted current professional practices. often, this is a time-consuming process relying on compromise and diplomacy and hampered by long intervals between meetings. as a result, standards inevitably lag behind the industry, where competitive hardware vendors play leapfrog with one another, pushing the state of the art. much of the concern surrounding standards implementations stems from the low level of standards literacy in the graphics user community. beyond a vague understanding of functionality, most graphics software shoppers have little understanding on the different levels of standards, how to measure a good implementation, and methods of programming efficiently using a graphics standards methodology. as a result, standards- based packages face the difficult challenge of balancing the theoretical world of standards with the down to-earth needs of application developers working against deadlines. this session will measure the several standards proposals now in the ansi pipeline against the requirements and literacy level of the buying public. the evolution of a standard through ansi and the implications of certification procedures will be presented and discussed. the problems of implementing a graphics standard will also be discussed, along with the general buying criteria to follow when shopping for standards-based graphics software. james r. warner on the power of learning robustly sanjay jain carl smith rolf wiehagen letter from the chair jeff bradshaw polarization and birefringency considerations in rendering in this work we render non-opaque anisotropic media. a mathematical formalism is described in which polarization effects resulting from light/material interactions are represented as transformation matrices. when applying the matrices a skewing is performed to ensure that like reference coordinates are used. the intensity and direction of an extraordinary ray is computed. david c. tannenbaum peter tannenbaum michael j. wozny 3d painting for non-photorealistic rendering daniel teece verification and validation of simulation models robert g. sargent synthetic aperture radar image formation with neural networks ted frison s. 
walt mccandless robert renze structured spatial domain image and data comparison metrics often, images or datasets have to be compared, to facilitate choices of visualization and simulation parameters respectively. common comparison techniques include side-by-side viewing and juxtaposition, in order to facilitate visual verification of verisimilitude. in this paper, we propose quantitative techniques which accentuate differences in images and datasets. the comparison is enabled through a collection of partial metrics which, essentially, measure the lack of correlation between the datasets or images being compared. that is, they attempt to expose and measure the extent of the inherent structures in the difference between images or datasets. besides yielding numerical attributes, the metrics also produce images, which can visually highlight differences. our metrics are simple to compute and operate in the spatial domain. we demonstrate the effectiveness of our metrics through examples for comparing images and datasets. nivedita sahasrabudhe john e. west raghu machiraju mark janus news douglas blank neural network region-based inexact relational medical image matching steven m. vajdic a. bouzerdoum m. j. brooks a. r. downing h. e. katz adaptively sampled distance fields: a general representation of shape for computer graphics adaptively sampled distance fields (adfs) are a unifying representation of shape that integrate numerous concepts in computer graphics including the representation of geometry and volume data and a broad range of processing operations such as rendering, sculpting, level-of-detail management, surface offsetting, collision detection, and color gamut correction. its structure is uncomplicated and direct, but is especially effective for quality reconstruction of complex shapes, e.g., artistic and organic forms, precision parts, volumes, high order functions, and fractals. we characterize one implementation of adfs, illustrating its utility on two diverse applications: 1) artistic carving of fine detail, and 2) representing and rendering volume data and volumetric effects. other applications are briefly presented. sarah f. frisken ronald n. perry alyn p. rockwood thouis r. jones drawing and animation using skeletal strokes the use of skeletal strokes is a new vector graphics realization of the brush and stroke metaphor using arbitrary pictures as "ink". it is based on an idealized 2d deformation model defined by an arbitrary path. its expressiveness as a general brush stroke replacement and efficiency for interactive use make it suitable as a basic drawing primitive in drawing programs as well as windowing and page description systems. this paper presents our drawing and animation system, "skeletal draw", based on skeletal strokes. the effectiveness of the system in stylish picture creation is illustrated with various pictures made with it. decisions made in the handling of sub-strokes in a higher order stroke and recursive strokes are discussed. the general anchoring mechanism in the skeletal stroke framework allows any arbitrary picture deformation to be abstracted into a single stroke. its extension to piecewise continuous anchoring and the anchoring of shear angle and stroke width are explained. we demonstrated how this mechanism allows us to build up powerful pseudo-3d models which are particularly useful in the production of 2 1/2 d cartoon drawings and animation. 
animation sequences have been made to illustrate the ideas, including a vector graphics based motion blurring technique. siu chi hsu irene h. h. lee modeling and rendering of weathered stone julie dorsey alan edelman henrik wann jensen justin legakis hans køhling pedersen towards a model of learning via dialogue nadim obeid computing budget allocation for simulation experiments with different system structure chun-hung chen yu yuan hsiao-chang chen enver yucesan liyi dai 3d graphics and the wave theory a continuing trend in computer representation of three dimensional synthetic scenes is the ever more accurate modelling of complex illumination effects. such effects provide cues necessary for a convincing illusion of reality. the best current methods simulate multiple specular reflections and refractions, but handle at most one scattering bounce per light ray. they cannot accurately simulate diffuse light sources, nor indirect lighting via scattering media, without prohibitive increases in the already very large computing costs. conventional methods depend implicitly on a particle model; light propagates in straight and conceptually infinitely thin rays. this paper argues that a wave model has important computational advantages for the complex situations. in this approach, light is represented by wave fronts which are stored as two dimensional arrays of complex numbers. the propagation of such a front can be simulated by a linear transformation. several advantages accrue. propagations in a direction orthogonal to the plane of a front are convolutions which can be done by fft in o(n log n) time rather than the n^2 time for a similar operation using rays. a typical speedup is about 10,000. the wavelength of the illumination sets a resolution limit which prevents unnecessary computation of elements smaller than will be visible. the generated wavefronts contain multiplicities of views of the scene, which can be individually extracted by passing them through different simulated lenses. lastly the wavefront calculations are ideally suited for implementation on available array processors, which provide more cost effective calculation for this task than general purpose computers. the wave method eliminates the aliasing problem; the wavefronts are inherently spatially filtered, but substitutes diffraction effects and depth of focus limitations in its stead. hans p. moravec transfer between text editors this paper describes a successful test of a quantitative model that accounts for large positive transfer effects between similar screen editors, between different line editors and from line editors to a screen editor, and between text and graphic editors. the model is tested in an experiment using two very similar full-screen text-editors differing only in the structure of their editing commands, verb-noun vs noun-verb. quantitative predictions for training time were derived from a production system model based on the polson and kieras (1985) model of text editing. peter g. polson susan bovair david kieras a letter from the western front daniel kanemoto from collective to individual commitments lambèr royakkers frank dignum performance of the perceptron algorithm for the classification of computer users s. a. bleha j. knopp m. s. obaidat skitters and jacks: interactive 3d positioning tools let scene composition be the precise placement of shapes relative to each other, using affine transformations.
by this definition, the steps of scene composition are the selection of objects to be moved, the choice of transformation, and the specification of the parameters of the transformation. these parameters can be divided into two classes: anchors (such as an axis of rotation) and end conditions (such as a number of degrees to rotate). i discuss the advantages of using cartesian coordinate frames to describe both kinds of parameters. coordinate frames used in this way are called jacks. i also describe an interactive technique for placing jacks, using a three- dimensional cursor, called a skitter. eric allan bier detecting null alleles with vasarely charts (case study) microsatellite genotypes can have problems that are difficult to detect with existing tools. one such problem is null alleles. this paper presents a new visualization tool that helps to find and characterize these errors. the paper explains how the tool is used to analyze groups of genotypes and proposes other possible uses. carl j. manaster elizabeth nanthakumar phillip a. morin real-time shadows, reflections, and transparency using a z buffer/ray tracer hybrid abe megahed interactive image cube visualization and analysis james m. torson plakon - an approach to domain-independent construction r. cunis a. gunter i. syska h. peters h. bode introduction to siman/cinema sherri a. conrad david t. sturrock jacob p. poorte automatic grasp planning: an operation space approach matthew t. mason randy c. brost matching, unification and complexity deepak kapur paliath narendran the cambrian burgess shale creatures: early evolution of animals tetsuhiko awaji integrated planning of robotic and computer vision based spatial reasoning tasks this paper describes research that is designed to integrate computer vision with complex robot planning in order to achieve a system capable of autonomously reasoning in spatially reconfigurable environments. in its basic form, the computer vision based spatial reasoning system provides the capability to view objects of known structure and kinematic design and to reason about their spatial locations and orientations relative to a robotic manipulator. for the case of the specific robotic task panel to be discussed, the spatial reasoning system has knowledge of the structure and allowable motions of each of the substructures such as the movable doors, a drawer and a circular latch. this model based knowledge is used to direct an intelligent robot path planner toward accomplishing a final goal in which one of the substructures is manipulated or toward an intermediate goal such as repositioning the visual sensor for the purpose of obtaining better viewpoints so that more accurate visual reasoning can take place. michael magee william hoff lance gatrell martin marietta william wolfe social plans: a preliminary report (abstract) anand s. rao michael p. georgeff elizabeth a. sonenberg tree resolution and generalized semantic tree a resolution proof or a derivation of the empty clause from a set of clauses s = {c1, c2, …, ck} is called a tree resolution if no clause ci is used in more than one resolvent. we show that an unsatisfiable set of clauses s has a tree resolution proof if and only if there is a general semantic tree for s in which no clause appears in more than one terminal node. as an important application of this result, we derive a simple algorithm for obtaining a tree resolution proof, if one exists. the tree resolution proofs are important because they allow us to obtain the shortest "explanation". 
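the kundu entry above defines a tree resolution as a refutation in which no clause is used in more than one resolvent; the sketch below only checks that defining property on a proof written in an invented step format, and is not the paper's algorithm for finding such a proof.

```python
# a minimal check of the defining property quoted above (no clause used in more than one
# resolvent), on a proof written in an invented step format; this is not kundu's algorithm
# for constructing such a proof.
from collections import Counter

def is_tree_resolution(steps):
    """steps: list of (left_parent_id, right_parent_id, resolvent_id) resolution steps.
    returns True if every clause id occurs as a parent at most once."""
    parent_uses = Counter()
    for left, right, _resolvent in steps:
        parent_uses[left] += 1
        parent_uses[right] += 1
    return all(count <= 1 for count in parent_uses.values())

if __name__ == "__main__":
    # refutation of {p, ~p v q, ~q}: two resolution steps, each clause used once.
    proof = [("c1", "c2", "d1"),   # p  with  ~p v q   gives  q
             ("d1", "c3", "d2")]   # q  with  ~q       gives  the empty clause
    print(is_tree_resolution(proof))   # True
```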
s kundu cace: computer-assisted case evaluation in the brooklyn district attorney's office s. s. weiner a lighting model aiming at drive simulators eihachiro nakamae kazufumi kaneda takashi okamoto tomoyuki nishita the wielandt length of some 3-groups m. f. newman e. a. o'brien topological considerations in isosurface generation a popular technique for rendition of isosurfaces in sampled data is to consider cells with sample points as corners and approximate the isosurface in each cell by one or more polygons whose vertices are obtained by interpolation of the sample data. that is, each polygon vertex is a point on a cell edge, between two adjacent sample points, where the function is estimated to equal the desired threshold value. the two sample points have values on opposite sides of the threshold, and the interpolated point is called an intersection point. when one cell face has an intersection point in each of its four edges, then the correct connection among intersection points becomes ambiguous. an incorrect connection can lead to erroneous topology in the rendered surface, and possible discontinuities. we show that disambiguation methods, to be at all accurate, need to consider sample values in the neighborhood outside the cell. this paper studies the problems of disambiguation, reports on some solutions, and presents some statistics on the occurrence of such ambiguities. a natural way to incorporate neighborhood information is through the use of calculated gradients at cell corners. they provide insight into the behavior of a function in well-understood ways. we introduce two gradient consistency heuristics that use calculated gradients at the corners of ambiguous faces, as well as the function values at those corners, to disambiguate at a reasonable computational cost. these methods give the correct topology on several examples that caused problems for other methods we examined. allen van gelder jane wilhelms volume rendering on scalable shared-memory mimd architectures jason nieh marc levoy intelligent jurisprudence research: a new concept intelligent jurisprudence research (ijr) is a concept that consists in performing jurisprudence research with a computational tool that employs artificial intelligence (ai) techniques. jurisprudence research is the search employed by judicial professionals when seeking for past legal situations that may be useful to a legal activity. when humans perform jurisprudence research, they employ analogical reasoning in comparing a given actual situation with past decisions, noting the affinities between them. in the process of remembering a similar situation when faced to a new one, case-based reasoning (cbr) systems simulate analogical reasoning. therefore, cbr is an appropriate technology to deal with the chosen problem. rosina weber a morphable model for the synthesis of 3d faces volker blanz thomas vetter arc-consistency for dynamic constraint satisfaction problems over real intervals duong tuan anh kanchana kanchanasut theory formation in artificial intelligence theories in terms of causal mechanisms and causal relationships are a critical component of problem solving in artificial intelligence. a theory for explaining a given observation should satisfy constraints based on causal knowledge. in this paper, we present a new approach to theory formation. under this approach, a theory is formed by reasoning with causal constraints. the reasoning method is constraint-satisfaction. 
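a toy sketch of the constraint-satisfaction view of theory formation described in the fu entry above: candidate causal mechanisms are boolean variables, and a coherent set must cover every observation while respecting exclusion constraints. the mechanism and observation names below are invented for illustration.

```python
# toy constraint-satisfaction sketch of theory formation: candidate causal mechanisms are
# boolean variables; a coherent set must cover every observation and respect exclusion
# constraints. mechanism and observation names are invented for illustration.
from itertools import product

MECHANISMS = ["leak", "pump_fault", "sensor_drift"]
EXPLAINS = {                       # mechanism -> observations it can account for
    "leak": {"low_pressure"},
    "pump_fault": {"low_pressure", "noise"},
    "sensor_drift": {"noise"},
}
MUTUALLY_EXCLUSIVE = [("leak", "sensor_drift")]    # an example causal constraint

def causal_hypotheses(observations):
    """enumerate every set of active mechanisms satisfying all constraints."""
    for bits in product([False, True], repeat=len(MECHANISMS)):
        active = {m for m, on in zip(MECHANISMS, bits) if on}
        covered = set().union(*(EXPLAINS[m] for m in active)) if active else set()
        if not observations <= covered:
            continue                                # must explain every observation
        if any(a in active and b in active for a, b in MUTUALLY_EXCLUSIVE):
            continue                                # must respect exclusions
        yield active

if __name__ == "__main__":
    for hypothesis in causal_hypotheses({"low_pressure", "noise"}):
        print(sorted(hypothesis))
```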
each coherent set of causal mechanisms discovered by the method instantiates the domain causal model to generate a causal hypothesis. if the domain causal model is true, then it can be shown that one of the causal hypotheses generated is true. in the case of using multi-level constraints, a theory is refined into more details by reasoning top-down through the levels of constraints. l.-m. fu pola x sophie bordone an interactive framework for visualizing foreign currency exchange options (case study) analyzing options is a complex, multi-variate process. option behavior depends on a variety of market conditions which vary over the time course of the option. the goal of this project is to provide an interactive visual environment which allows the analyst to explore these complex interactions, and to select and construct specific views for communicating information to non-analysts (e.g., marketing managers and customers). in this paper we describe an environment for exploring 2- and 3-dimensional representations of options data, dynamically varying parameters, examining how multi-variate relationships develop over time, and exploring the likelihood of the development of different outcomes over the life of the option. we also demonstrate how this tool has been used by analysts to communicate to non- analysts how particular options no longer deliver the behavior they were originally intended to provide. d. l. gresh b. e. rogowitz m. s. tignor e. j. mayland potential performance of parallel conservative simulation of vlsi circuits and systems mark rawling rhys francis david abramson commonsense reasoning based on fuzzy logic although most human reasoning is approximate rather than precise in nature, traditional logical systems focus almost exclusively on those modes of reasoning which lend themselves to precise formalization. in recent years, however, in our attempt to design systems which are capable of performing tasks requiring a high level of cognitive skill, it has become increasingly clear that in order to attain this goal we need logical systems which can deal with knowledge that is imprecise, incomplete or not totally reliable. prominent among the systems which have been suggested for this purpose are those based on default reasoning (reiter, 1983), circumscription (mccarthy, 1980), nonmonotonic reasoning (mcdermott and doyle, 1980, 1982), and probabilistic logic (nilsson, 1984). these and related systems are basically extensions of first-order predicate calculus and probability theory, and are rooted in bivalent logic. in a departure from reliance on bivalent logical systems, we have developed an approach to commonsense reasoning based on fuzzy logic (zadeh, 1983, 1984). in this approach, a central role is played by the concept of dispositionality and the closely related concept of usuality. furthermore, an extensive use is made of syllogistic reasoning (zadeh, 1985), in which the premises are propositions containing fuzzy quantifiers such as most, many, usually, etc. the point of departure in the fuzzy-logic-based approach to commonsense reasoning is the assumption that commonsense knowledge consists for the most part of dispositions, that is, propositions which are preponderantly, but not necessarily always, true. for example: birds can fly. slimness is attractive. glue is sticky if it is not dry. typically, a disposition contains one or more implicit fuzzy quantifiers. 
for example, birds can fly may be interpreted as most birds can fly or, equivalently, as usually (if x is a bird then x can fly). a proposition of the general form usually (p) or usually (if p then q), where p and q are propositions, is said to be usuality-qualified. in this sense, commonsense knowledge may be viewed as a collection of usuality-qualified propositions in which the fuzzy quantifier usually is typically implicit rather than explicit. our approach to inference from commonsense knowledge may be viewed as an application of fuzzy logic, under the assumption that a disposition may be expressed in the canonical form q a's are b's, where q is a fuzzy quantifier, e.g., most, almost all, usually, etc., and a and b are fuzzy predicates such as small, tall, slim, young, etc. fuzzy logic provides a basis for inference from dispositions of this type through the use of fuzzy syllogistic reasoning (zadeh, 1985). as the name implies, fuzzy syllogistic reasoning is an extension of classical syllogistic reasoning to fuzzy predicates and fuzzy quantifiers. in its generic form, a fuzzy syllogism may be expressed as the inference schema q1 a's are b's, q2 c's are d's / q3 e's are f's, in which a, b, c, d, e and f are interrelated fuzzy predicates and q1, q2 and q3 are fuzzy quantifiers. the interrelations between a, b, c, d, e and f provide a basis for a classification of fuzzy syllogisms. the more important of these syllogisms are the following (∧ denotes conjunction, ∨ disjunction): (a) intersection/product syllogism: c = a ∧ b, e = a, f = c ∧ d; (b) chaining syllogism: c = b, e = a, f = d; (c) consequent conjunction syllogism: a = c = e, f = b ∧ d; (d) consequent disjunction syllogism: a = c = e, f = b ∨ d; (e) antecedent conjunction syllogism: b = d = f, e = a ∧ c; (f) antecedent disjunction syllogism: b = d = f, e = a ∨ c. in the context of expert systems, these and related syllogisms provide a set of inference rules for combining evidence through conjunction, disjunction and chaining (zadeh, 1983). one of the basic problems in fuzzy syllogistic reasoning is the following: given a, b, c, d, e and f, find the maximally specific (i.e., most restrictive) fuzzy quantifier q3 such that the proposition q3 e's are f's is entailed by the premises. in the case of (a), (b) and (c), this leads to the following syllogisms: intersection/product syllogism. (1) q1 a's are b's, q2 (a and b)'s are c's / (q1 ⊗ q2) a's are (b and c)'s, where ⊗ denotes the product in fuzzy arithmetic (kaufmann and gupta, 1985). it should be noted that (1) may be viewed as an analog of the basic probabilistic identity p(b,c/a) = p(b/a)p(c/a,b). a concrete example of the intersection/product syllogism is the following: (2) most students are young, most young students are single / most² students are young and single, where most² denotes the product of the fuzzy quantifier most with itself. chaining syllogism. q1 a's are b's, q2 b's are c's / (q1 ⊗ q2) a's are c's. this syllogism may be viewed as a special case of the intersection/product syllogism. it results when b ⊂ a and q1 and q2 are monotone increasing, that is, ≥q1 = q1 and ≥q2 = q2, where ≥q1 should be read as at least q1, and likewise for ≥q2. a simple example of the chaining syllogism is the following: most students are undergraduates, most undergraduates are single / most² students are single. note that undergraduates ⊂ students and that in the conclusion f = single, rather than young and single, as in (2). consequent conjunction syllogism.
the consequent conjunction syllogism is an example of a basic syllogism which is not a derivative of the intersection/product syllogism. its statement may be expressed as follows: (3) q1 a's are b's, q2 a's are c's / q a's are (b and c)'s, where q is a fuzzy quantifier which is defined by the inequalities (4) 0 ∨ (q1 ⊕ q2 ⊖ 1) ≤ q ≤ q1 ∧ q2, in which ∨, ∧, ⊕ and ⊖ are the operations of ∨ (max), ∧ (min), + and - in fuzzy arithmetic. an illustration of (3) is provided by the example most students are young, most students are single / q students are single and young, where 2most ⊖ 1 ≤ q ≤ most. this expression for q follows from (4) by noting that most ∧ most = most and 0 ∨ (2most ⊖ 1) = 2most ⊖ 1. the three basic syllogisms stated above are merely examples of a collection of fuzzy syllogisms which may be developed and employed for purposes of inference from commonsense knowledge. in addition to its application to commonsense reasoning, fuzzy syllogistic reasoning may serve to provide a basis of rules for combining uncertain evidence in expert systems (zadeh, 1983). l. a. zadeh modeling generalized cylinders via fourier morphing generalized cylinders provide a compact representation for modeling many components of natural objects as well as a great variety of human-made industrial parts. this paper presents a new approach to modeling generalized cylinders based on cross-sectional curves defined using fourier descriptors. this modeling is based on contour interpolation and is implemented using a subdivision technique. the definition of generalized cylinders uses a three-dimensional trajectory which provides an adequate control for the smoothness of bend with a small number of parameters and includes the orientation of each cross-section (i.e., the local coordinate system) in the interpolation framework. fourier representations of cross-sectional curves are obtained from contours in digital images, and corresponding points are identified by considering angular and arc-length parametrizations. changes in cross-section shape through the trajectory are performed using fourier morphing. the technique proposed provides a comprehensive definition that allows the modeling of a wide variety of shapes, while maintaining a compact characterization to facilitate the description of shapes and displays. alberto s. aguado eugenia montiel ed zaluska tokitama hustle momoyo iwase a tex-reduced-interface w. antweiler a. strotmann v. winkelmann strategic directions in simulation research (panel) ernest h. page david m. nicol osman balci richard m. fujimoto paul a. fishwick pierre l'ecuyer roger smith searching for important factors: sequential bifurcation under uncertainty russell c. h. cheng an integrated architecture for learning and planning in robotic domains michael barbehenn seth hutchinson an analysis and algorithm for polygon clipping you-dong liang brian a. barsky division george m. nadeau ian quinn a workstation model for an interactive graphics system by introducing the concept of an abstract graphics device called the workstation, an existing graphics system is generalized to support multiple devices in applications software.
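a small numeric illustration of the consequent conjunction bound restated in the zadeh entry above (0 ∨ (q1 ⊕ q2 ⊖ 1) ≤ q ≤ q1 ∧ q2), under the simplifying assumption that a fuzzy quantifier can be coarsely modeled as an interval of proportions; this is a sketch, not zadeh's full fuzzy-arithmetic treatment.

```python
# interval-valued illustration of the consequent conjunction bound from the zadeh entry
# above; modeling a fuzzy quantifier as a plain interval of proportions is a simplifying
# assumption, not zadeh's full fuzzy-arithmetic treatment.
def consequent_conjunction_bounds(q1, q2):
    """q1, q2: (lo, hi) proportion intervals for 'q1 a's are b's' and 'q2 a's are c's'.
    returns the (lo, hi) interval for q in 'q a's are (b and c)'s'."""
    lo = max(0.0, q1[0] + q2[0] - 1.0)    # 0 v (q1 + q2 - 1) at the lower ends
    hi = min(q1[1], q2[1])                # q1 ^ q2 at the upper ends
    return lo, hi

if __name__ == "__main__":
    most = (0.75, 1.0)                    # crude interval reading of "most"
    # "most students are young" and "most students are single" entail
    # "q students are single and young" with 2most - 1 <= q <= most:
    print(consequent_conjunction_bounds(most, most))    # (0.5, 1.0)
```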
mike heck martin plaehn galaxy cluster dynamics john dubinski joel welling anjana kar greg foss a strategy for transforming generate and test logic programs khaled bsaïes infinite-source, ample-server control variate models for fininte-source, finite-server repairable item systems a new modeling idea for comparing infinite-source, ample-server models (∞/∞) and finite-source, finite-server models (f/f) is considered. this comparison provides an estimate of the error when approximating an f/f system with an ∞/∞ system, and allows analytical solutions of the ∞/∞ model to be used as control variates. this approach is applied to estimate the difference in performance between an m/g/∞ queueing system (∞/∞) and the classical machine repair problem (f/f) and the difference in performance between infinite-source, ample-server multiechelon repairable item inventory systems and finite-source, finite-server multiechelon systems. using an ∞/∞ model as a control variate is shown to be an effective variance reduction technique for estimating the performance of many f/f systems. mohamed a. ahmed douglas r. miller enabling level-of-detail matching for exterior scene synthesis randy k. scoggins robert j. moorhead raghu machiraju xvl: a compact and qualified 3d representation with lattice mesh and surface for the internet computer graphics systems and cad/cam systems are widely used and an abundance of 3d-data in various fields exists. however, based on the vrml technique, it is difficult to send such 3d-data through the internet, because of the large data size. transmission of practical and highly detailed 3d-data through the internet becomes a primary requirement. therefore, a compact and qualified 3d-data representation method is greatly required. this paper describes xvl (extended vrml with lattice), a new framework for compact 3d-data representation with high quality surface shape. by utilizing a free-form surface technique, qualified surfaces are transferred with a limited amount of data size and rendered. free-form surfaces transferred by an efficient data structure are called lattice structure. this data structure contains only vertices, topologies, and attributes, and can be converted to the original surface. because the lattice structure is regarded as a polygon mesh, it can be easily integrated to a vrml file. these surfaces and lattice have the same topology and are thus interchangeable in the lattice structure. by using weighting attributes, a sophisticated surface shape can be represented. some practical xvl applications, such as an intuitive surface design system, are also introduced. akira wakita makoto yajima tsuyoshi harada hiroshi toriya hiroaki chiyokura superior augmented reality registration by integrating landmark tracking and magnetic tracking andrei state gentaro hirota david t. chen william f. garrett mark a. livingston derivatives of likelihood ratios and smoothed perturbation analysis for the routing problem p. bremaud w.-b. gong siggraph vrml 3d ph.d conetree this paper describes the work-in-progress of a vrml 3d conetree to visualize ph.d. graduates and their ph.d. thesis advisors in the field of computer graphics. as part of the siggraph history project chaired by carl machover, this vrml 3d conetree debuts publicly at the 25th conference celebration at siggraph 98, july 1998, in orlando, fl. this conetree allows a vrml-enabled browser to visualize ph.d. contributors to the field of computer graphics and to trace their influences and academic progeny. 
the database developed may seed further studies of the field's contributors. the initial phase of this project (using ph.d advisors/graduates) provides a hierarchical (if arbitrary) view of part of siggraph's academic structure. norman i. badler roger w. webster graphical search and replace david kurlander eric a. bier anatomy-based modeling of the human musculature ferdi scheepers richard e. parent wayne e. carlson stephen f. may cmunited: a team of robotics soccer agents collaborating in an adversarial environment manuela veloso peter stone kwun han sorin achim frequency domain metamodelling of a feedback queue schruben and cogliano [1987] introduced frequency domain experiments as a tool for metamodel identification. to design a frequency domain experiment, the experimenter must choose appropriate values for certain experimental variables such as oscillation frequency, window size, and range of oscillation. in this paper, we demonstrate that these experimental variables affect the outcome of frequency domain experiments and the magnitude of these effects are model dependent. tapas k. som robert g. sargent lee w. schruben efficient simulation/optimization of dispatching priority with "fake" processing time susumu morito keun hyung lee smooth b-spline illumination maps for bidirectional ray tracing in this paper we introduce b-spline illumination maps and their generalizations and extensions for use in realistic image generation algorithms. the b-spline lighting functions (i.e., illumination maps) are defined as weighted probability density functions. the lighting functions can be estimated from random data and may be used in bidirectional distributed ray tracing programs as well as radiosity oriented algorithms. the use of these lighting functions in a bidirectional ray tracing system that can handle dispersion as well as the focusing of light through lenses is presented. richard a. redner mark e. lee samuel p. uselton computer-generated watercolor cassidy j. curtis sean e. anderson joshua e. seims kurt w. fleischer david h. salesin image snapping michael gleicher applications of irradiance tensors to the simulation of non-lambertian phenomena james arvo anytime algorithm development tools joshua grass shlomo zilberstein a metafile for efficient sequential and random display of graphics graphics metafiles have been in use at the los alamos national laboratory since early 1977. the first metafile format was defined in 1976 and has been updated several times to allow efficient graphics support in the los alamos computing environment. history and current applications of the common graphics system (cgs) metafile are given. design objectives, details of the format, and random access extensions incorporated in the metafile are described. theodore n. reed visibility culling using hierarchical occlusion maps hansong zhang dinesh manocha tom hudson kenneth e. hoff evolution of a multi-paradigm blackboard based network diagnostic platform b. pagurek j. bryant n. dawes a simple model to structure the information of parties in online adr arno r. lodder fast spheres, shadows, textures, transparencies, and image enhancements in pixel-planes henry fuchs jack goldfeather jeff p. hultquist susan spach john d. austin frederick p. brooks john g. eyles john poulton lie-bäcklund symmetries of coupled nonlinear schrödinger equations v. p. gerdt n. v. khutornoy a. yu. zjarkov burden of proof in legal argumentation arthur m.
farley kathleen freeman rasterization of nonparametric curves we examine a class of algorithms for rasterizing algebraic curves based on an implicit form that can be evaluated cheaply in integer arithmetic using finite differences. these algorithms run fast and produce "optimal" digital output, where previously known algorithms have had serious limitations. we extend previous work on conic sections to the cubic and higher order curves, and we solve an important undersampling problem. john d. hobby doppler estimation by ma model and its comparison with fft m. tabiani s. h. r. sadjadpour the simplest subdivision scheme for smoothing polyhedra given a polyhedron, construct a new polyhedron by connecting every edge- midpoint to its four neighboring edge-midpoints. this refinement rule yields a c1 surface and the surface has a piecewise quadratic parametrozation except at a finite number of isolated points. we analyze and improve the construction. jörg peters ulrich reif real-time texture synthesis by patch-based sampling we present an algorithm for synthesizing textures from an input sample. this patch-based sampling algorithm is fast and it makes high-quality texture synthesis a real-time process. for generating textures of the same size and comparable quality, patch-based sampling is orders of magnitude faster than existing algorithms. the patch-based sampling algorithm works well for a wide variety of textures ranging from regular to stochastic. by sampling patches according to a nonparametric estimation of the local conditional mrf density function, we avoid mismatching features across patch boundaries. we also experimented with documented cases for which pixel-based nonparametric sampling algorithms cease to be effective but our algorithm continues to work well. lin liang ce liu ying-qing xu baining guo heung-yeung shum monitoring deployed agent teams recent years are seeing an increasing need for on-line monitoring of deployed distributed teams of cooperating agents, e.g., for visualization, or performance tracking. however, in deployed systems, we often cannot rely on the agents to communicate their state to the monitoring system: (a) we rarely can change the behavior of already-deployed agents to communicate the required information (e.g., in legacy or proprietary systems); (b) different monitoring goals require different information to be communicated (e.g., agents' beliefs vs. plans); and (c) communications may be expensive, unreliable, or insecure. this paper presents a non-intrusive approach based on plan-recognition, in which the monitored agents' state is inferred from observations of their routine actions. in particular, we focus on inference of the team state based on its observed \emph{routine} communications, exchanged as part of coordinated task execution. the paper includes the following key novel contributions: (i) a \emph{linear time} probabilistic plan-recognition algorithm, well-suited for processing communications as observations; (ii) an approach to exploiting general knowledge of teamwork to predict agent responses during normal execution, to reduce monitoring uncertainty; and (iii) a monitoring algorithm that trades expressivity for scalability, representing only certain useful monitoring hypotheses, but allowing for any number of agents and their different activities, to be represented in a single coherent entity. 
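as a concrete special case of the idea in the hobby entry above (rasterizing a curve from its implicit form with cheap integer finite differences), the sketch below is the classic midpoint circle algorithm; hobby's method addresses general algebraic curves and the undersampling problem, which this toy does not.

```python
# classic midpoint circle rasterization: the implicit form x^2 + y^2 - r^2 is tracked with
# an integer decision variable updated by small finite differences, so the inner loop uses
# only additions and comparisons. hobby's algorithm treats general algebraic curves and
# undersampling; this special case is only for orientation.
def circle_octant(r):
    """integer pixels on x^2 + y^2 = r^2 for the octant x >= y >= 0."""
    pts = []
    x, y = r, 0
    d = 1 - r                      # integer midpoint decision variable (standard init)
    while y <= x:
        pts.append((x, y))
        y += 1
        if d < 0:
            d += 2 * y + 1         # midpoint inside: keep x, cheap incremental update
        else:
            x -= 1
            d += 2 * (y - x) + 1   # midpoint outside: step x inward as well
    return pts

if __name__ == "__main__":
    print(circle_octant(5))        # [(5, 0), (5, 1), (5, 2), (4, 3)]
```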
our empirical evaluation illustrates that monitoring based on observed routine communications enables significant monitoring accuracy, while not being intrusive. the results also demonstrate a key lesson: a combination of complementary low-quality techniques is cheaper, and better, than a single, highly-optimized monitoring approach. gal a. kaminka david v. pynadath milind tambe a backplane approach for cosimulation in high-level system specification environments s. schmerler y. tanurhan k. d. muller-glaser is visualization really necessary?: the role of visualization in science, engineering, and medicine nahum d. gershon richard mark friedhoff john gass robert langridge hans-peter meinzer justin d. pearlman stretching the boundaries of simulation software james o. henriksen telstra's avcord: a novel approach to large scale knowledge acquisition and rule processing m. j. cannard setup of the konsum art.server margarete jahrmann max moswitzer beat: the behavior expression animation toolkit the behavior expression animation toolkit (beat) allows animators to input typed text that they wish to be spoken by an animated human figure, and to obtain as output appropriate and synchronized nonverbal behaviors and synthesized speech in a form that can be sent to a number of different animation systems. the nonverbal behaviors are assigned on the basis of actual linguistic and contextual analysis of the typed text, relying on rules derived from extensive research into human conversational behavior. the toolkit is extensible, so that new rules can be quickly added. it is designed to plug into larger systems that may also assign personality profiles, motion characteristics, scene constraints, or the animation styles of particular animators. justine cassell hannes hogni vilhjalmsson timothy bickmore expertfit: total support for simulation input modeling averill m. law michael g. mccomas model-based knowledge acquisition for heuristic classification systems e.plaza r. l. de mà ntaras static analysis of consistency and redundancy of a rule based expert system r. loganatharaj antialiasing of interlaced video animation john amanatides don p. mitchell model-driven reasoning for diagnosis j yan representation of scientific hypotheses for use by the expert system, research assistant: the setup david j. weiner e. judith weiner a unified framework for conservative and optimistic distributed simulation a great deal of research in the area of distributed discrete event simulation has focussed on evaluating the performance of variants of conservative and optimistic methods on different types of applications. application characteristics like lookahead, communication patterns, etc. have been found to affect the suitability of a specific protocol to simulate a given model. for many systems, it may be the case that different subsystems possess contradictory characteristics such that whereas some subsystems may be simulated efficiently using a conservative protocol, others may be more amenable to optimistic methods. furthermore, the suitability of a protocol for a given subsystem may change dynamically. we propose a parallel simulation protocol that allows different parts of a system to be simulated using different protocols, allowing these protocols to be switched dynamically. a proof of correctness is presented, along with some preliminary performance discussion. vikas jha rajive l. bagrodia the elucidation of non-monotonous reasoning and related notions in artificial intelligence s. m. 
humphrey generation and display of geometric fractals in 3-d we present some straightforward algorithms for the generation and display in 3-d of fractal shapes. these techniques are very general and particularly adapted to shapes which are much more costly to generate than to display, such as those fractal surfaces defined by iteration of algebraic transformations. in order to deal with the large space and time requirements of calculating these shapes, we introduce a boundary-tracking algorithm particularly adapted for array-processor implementation. the resulting surfaces are then shaded and displayed using z-buffer type algorithms. a new class of displayable geometric objects, with great diversity of form and texture, is introduced by these techniques. alan norton bike dietmar offenhuber markus decker skydivers bob hoffman a physically based approach to 2-d shape blending thomas w. sederberg eugene greenwood automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone michael lesk wavelength dependent reflectance functions a wavelength based bidirectional reflectance function is developed for use in realistic image synthesis. a geodesic sphere is employed to represent the brdf, and a novel data structure is used to store this description and to recall it for rendering purposes. a virtual goniospectrophotometer is implemented by using a monte carlo ray tracer to cast rays into a surface. an optics model that incorporates phase is used in the ray tracer to simulate interference effects. an adaptive subdivision technique is applied to elaborate the data structure from rays scattered into the hemisphere above the surface. the wavelength based brdf and virtual goniospectrophotometer are utilized to analyze and make pictures of thin films, idealized pigmented materials, and pearlescent paints. jay s. gondek gary w. meyer jonathan g. newman hyperspeech: navigating in speech-only hypermedia barry arons remote reality via omni-directional imaging terrance e. boult strategy diffusion in autonomous agent system: detecting key agents for wide range consensus masakatsu ohta toshiyuki iida a micro-macro simulation model for a signalized network a micro-macro discrete simulation model sketches traffic behavior within a network of signalized intersections. this model utilizes a continuous simulation sub-model which describes the behavior of a single platoon of vehicles, and permits the formulation of a simple exponential expression approximating the platoon motion. most components of the discrete model are treated microscopically, but in order to maneuver the platoons the simple exponential is applied macroscopically. this is done to avoid the time-consuming numerical solutions of the non-linear car-following differential equation, which is the basis for the sub-model. the prime objective in constructing the model is to assist in finding an optimal synchronization of all the signals for the minimum delay for traffic flowing through the network. arie kaufman murdoch: publish/subscribe task allocation for heterogeneous agents brian p. gerkey maja j. mataric fast volume rendering using an efficient, scalable parallel formulation of the shear-warp algorithm minesh b. amin ananth grama vineet singh generating qualitatively different plans through metatheoretic biases karen l. myers thomas j. lee artificial life trip yoichiro kawaguchi tangerine dream a historical view of hybrid simulation/analytic models robert g.
sargent the implementation and experiences of a structure-oriented text editor. this paper presents a generalized approach to data editing in interactive systems. we describe the ed3 editor, which is a powerful tool for text editing combining the ability to handle hierarchical structures with screen-oriented text editing facilities. extensions for handling simple pictures and formatted data records in a uniform way are part of our approach. examples of ed3 applications are presented. o. strömfors l. jonesjö simulation for weapons logistics system planning this study investigates the usefulness of a multi-staged discrete simulation model in combination with an interactive cost model for the development of systems parameters in a naval weapons logistic system environment. the model involves operations at a variety of decision levels and physical stages over the life time of the system. in this initial examination, the content of the storages (pipeline, cycle and safety stocks), the character of the related queues and the level of fulfillment of operational (combat and exercise) needs are the basis for determining the performance rating of the system over its life cycle. manipulation of storage capacities allows the selection of a "best" system. william j. maddocks simulation of a robotic assembly process with visual decision making a combined discrete and continuous simulation model of a robotic assembly process is presented. motion and handling sequences are simulated as discrete events, whereas vision and decision making processes are simulated as continuous processes. the siman simulation language is used to model the assembly process on an ibm compatible personal computer. d. l. kimbler g. k. bennett abstraction morphisms for world modelling in high autonomy systems cheng-jye luh bernard p. zeigler constant-time filtering with space-variant kernels alain fournier eugene fiume dynamic scan-converted images with a frame buffer display device a color interactive display system which produces images of three-dimensional polygons and labels on a frame buffer display device is being developed. the entire image is scan converted and written into the frame buffer whenever it is modified. since an entire image cannot be written into the frame buffer faster than 4.6 frames per second for the particular device chosen, an illusion of continuous motion cannot be supported. however, a rate of 3 frames per second has been found sufficient to provide feedback to continuous user input. in order to achieve this frame rate for a reasonably complex picture, the display device has been microprogrammed to accept run length encoded data and text, and the instruction set of the computer has been extended by microprogramming special-purpose instructions which perform visible surface calculations. these microprograms currently can process a scene which consists of up to 170 polygons at 3 frames per second. j. h. jackson a real-time optical 3d tracker for head-mounted display systems jih-fang wang vernon chi henry fuchs multi-user view-dependent rendering jihad el-sana roaming terrain: real-time optimally adapting meshes mark duchaineau murray wolinsky david e. sigeti mark c. miller charles aldrich mark b. mineev- weinstein compatibility: a barrier to applying technology to documentation this "paper" is a summary of the points i will raise during the sigdoc panel on using the new technology at acm '81. 
because we plan a panel discussion with audience participation, i am only outlining here the subject areas i propose to discuss. also, the problem of compatibility is not one for which there is a single "answer"\---rather, there is a variety of approaches which have different advantages and disadvantages. stephanie rosenbaum mind-warping: towards creating a compelling collaborative augmented reality game computer gaming offers a unique test-bed and market for advanced concepts in computer science, such as human computer interaction (hci), computer-supported collaborative work (cscw), intelligent agents, graphics, and sensing technology. in addition, computer gaming is especially well-suited for explorations in the relatively young fields of wearable computing and augmented reality (ar). this paper presents a developing multi-player augmented reality game, patterned as a cross between a martial arts fighting game and an agent controller, as implemented using the wearable augmented reality for personal, intelligent, and networked gaming (warping) system. through interactions based on gesture, voice, and head movement input and audio and graphical output, the warping system demonstrates how computer vision techniques can be exploited for advanced, intelligent interfaces. thad starner bastian leibe brad singletary jarrell pair knowledge acquisition at the metalevel: creation of custom-tailored knowledge- acquisition tools mark a. musen the robot's sense of touch: some lessons from human taction a consideration of the logical functioning of human taction may suggest methods for solving problems of automated taction. the features of the human tactile system that may be relevant include its spatial resolution, mechanical modification of stimuli, temporal and spatial inhibition, adaptation characteristics, and exploration strategies. artificial tactile systems would be least likely to gain from lower-level, physiology- coupled features of human taction, but might be significantly advanced if higher-level transformations of sensory input were implemented. william w. mcmillan a framework for the efficient petri net simulation of real-time systems william m. evanco jeffrey yang flexicon: an evaluation of a statistical ranking model adapted to intelligent legal text management the flexicon system was designed to provide legal professionals with an effective and easy-to-use legal text management tool. this paper discusses the structured knowledge representation model designed for the flexicon system serving both as an internal knowledge representation scheme, in conjunction with statistical ranking, and as an external representation used to summarize legal text for rapid evaluation of the search results. the model is evaluated and compared to alternative information retrieval models. experimental test data is presented to demonstrate the model's retrieval effectiveness in comparison to boolean search. daphne gelbart j. c. smith surface reconstruction from unorganized points hugues hoppe tony derose tom duchamp john mcdonald werner stuetzle who serves whom? dynamic resource matching in an activity-scanning simulation system photios g. ioannou julio c. martinez ai (panel session): what simulationists really need to know david p. miller jeff rothenberg david w. franke paul a. fishwick r. james firby a practical approach to error recovery for multiagent planning systems a practical approach to error recovery strategy for real-time multiagent environments is presented. 
in a dynamic domain, when things do not happen as a pre-generated plan has expected, parts of the plan should be recovered to cope with any unexpected situations rather than abandoning the original plan and replanning from scratch. in order to deal with such situations, domain heuristics are used to order and classify conditions, and a resource management scheme is developed. two tables, wedge table and action table, are introduced for prompt location of the contaminated plan parts. plan rationale and wedge structure of a plan tree are recorded in these two tables. in order to solve the truth maintenance problem, action dependency calculation is established. this error recovery approach for multiagent planning environments not only promises successful achievement of a goal but also secures the safety of facilities in a domain. it enhances system performance and flexibility due to the versatility in real-time plan execution. hyugoo han kai-hsiung chang strategies for success in parallel simulation applications (keynote speech) while the pads community has traditionally focused on \---and done a greater job with---the technical aspects of developing simulations that run fast and can be connected to other simulations, it has paid little or no attention to the overall strategies required to produce a marketable, useful, and successful parallel simulation. this lack of market focus has led to many fears of the demise of the pads community, complaints of its lack of general acceptance by the broader simulation community, and predictions that it will become merely another venue for simulation interconnections. these fears, complaints, and predictions are unnecessary. there are several examples of successful parallel simulations---in domains as far apart as aviation modeling and wargames. what can we learn from their successes? how can we translate their general acceptance into other parallel simulation domains? are there market opportunities that we are missing? in short, what is the parallel simulation community lacking, and what does it need to do in order to be more successful? the purpose of this talk is to begin a discussion on the answers to these questions (as opposed to definitively answering them). we will draw on numerous examples of successful applications, and make some concrete suggestions for furthering the community. frederick wieland wired for speed: efficient routes in vrml 2.0 daniel j. woods alan norton gavin bell thm-net: an approach to office systems modeling a formal model for office systems analysis and modeling is proposed. it aims at describing the static as well as the dynamic aspects of an office (information) system. the modeling concepts are based on general net theory, i.e. predicate/transition nets, and the concepts of the semantic data model thm. whereas the places of the so-called thm-net are used to represent entities, relationships, and events, the transitions, described by pre-/post- conditions, specify the consumption/generation of entities, relationships, and events, respectively. the thm-net is supplemented with a thm data schema description. thus our approach provides the modeling power of modern semantic data models without neglecting the dynamic aspects which are very crucial for office (information) systems. angelika horndasch rudi studer methodology for simulation application to virtual manufacturing environments tracey l. geller suzanne e. lammers gerald t. 
mackulak the round earth project andrew johnson definition of visualization tracing ray differentials homan igehy converging flows stanley craig bowman an image synthesizer ken perlin automatic synthesis of graphical object descriptions a technique is presented for automatically synthesizing graphical object descriptions from high-level specifications. the technique includes mechanisms for describing, selecting, and combining primitive elements of object descriptions. underlying these mechanisms are a referential framework for describing information used in the construction of object descriptions and a computational model of the object-synthesis process. this technique has been implemented in two prototype systems to synthesize object descriptions in near-real time. one system creates graphical displays of information that resides in a conventional database. the other system is a computer graphicist's tool for creating backgrounds of complex, three- dimensional scenes. mark friedell cracking the cracking problem with coons patches we present a new approach to solving the cracking problem. the cracking problem arises in many contexts in scientific visualization and computer graphics modeling where there is need for an approximation based upon domain decomposition that is fine in certain regions and coarse in others. this includes surface rendering, approximation of images and multiresolution terrain visualization. in general, algorithms based upon adaptive refinement strategies must deal with this problem. the new approach presented here is simple and general. it is based upon the use of a triangular coons patch. both the basic idea of using a triangular coons patch in this context and the particular coons patch that is used constitute the novel contributions of this paper. gregory m. nielson dave holliday tom roxborough computing exact shadow irradiance using splines michael m. stark elaine cohen tom lyche richard f. riesenfeld combatting maelstroms in networks of communicating agents james e. hanson jeffrey o. kephart classification characteristics of som and art2 j. j. aleshunas daniel c. st. clair w. e. bond proposal of "effective floating-point number" for approximate algebraic computation fujio kako tateaki sasaki social implications of artificial intelligence the introduction and use of apparently intelligent systems and machines require a new paradigm for society and its economic order. the panel discusses the ramifications of industrial ai. robotics, expert systems, medical consultation, and other ai programs and machines are already in use. what are the effects of these products? expert systems may be called on to make medical or military judgments in real time. industrial robots may obviate the need for a blue-collar work force. natural language interfaces and automated programming tools may allow naive users a range of powerful tools which appear genuinely intelligent. are there differences between the employment of these tools and older (unintelligent) tools? ira pohl the out of box experience: lessons learned creating compelling vrml 2.0 content sam chen rob myers rick pasetto party from final fantasy viii satoshi tsukamoto xml linking steven j. 
derose restructuring a parallel simulation to improve cache behavior in a shared- memory multiprocessor: the value of distributed synchronization synchronization is a significant cost in many parallel programs, and can be a major bottleneck if it is handled in a centralized fashion using traditional shared-memory constructs such as barriers. in a parallel time- stepped simulation, the use of global synchronization primitives limits scalability, increases the sensitivity to load imbalance, and reduces the potential for exploiting locality to improve cache behavior. this paper presents the results of an initial one- application study quantifying the costs and performance benefits of distributed, nearest neighbors synchronization. the application studied, mp3d, is a particle-based wind tunnel simulation. our results for this one application on current shared-memory multiprocessors show a significant decrease in synchronization time using these techniques. we prototyped an application-independent library that implements distributed synchronization. the library allows a variety of parallel simulations to exploit these techniques without increasing the application programming beyond that of conventional approaches. david r. cheriton hendrik a. goosen hugh holbrook philip machanick joining existing simulation programs the main purpose of this paper is to make people (developers, users, managers, researchers) aware that there are numerous factors and issues that affect joining separate simulation models and programs. the primary content of the paper is the identification of the different factors and issues that affect joining existing simulation programs. two fundamental ways of joining existing simulation models are identified and discussed: where none of the simulation models interact with any of the others during simulation runs and where some or all of the simulation models do interact with others during simulation runs. the concept of "amount of reusability" of simulation models and programs is introduced. also, the need for research is discussed. robert g. sargent learning quantitative knowledge for multiagent coordination david jensen michael atighetchi regis vincent victor lesser a unified approach to interference problems using a triangle processor fujio yamaguchi improving radiosity solutions through the use of analytically determined form- factors d. r. baum h. e. rushmeire j. m. winget les pecheurs de perles hiroyuki okui identifying the scope of modeling in multi-agent settings sanguk noh piotr j. gmytrasiewicz methods for preventing cloth self-intersection john mcdonald robert welland how to make a visually realistic 3d display margaret a. hagen panel on transportation and logistics modeling john s. carson mani s. manivannan eric miller mark brazier h. donald ratliff continuous simulation of air base assets (csaa) - "integrating logistics support operations" - a proposed methodology stephen r. parker patrick williams variable coupling of agents to their environment (abstract): combining situated and symbolic automata george kiss mtv-forests thomas haegele piotr karwas barriers to the practical use of simulation analysis simulation techniques are used in only a small fraction of instances in which they seem applicable. this paper considers the reasons for such "non- uses." in particular the paper considers simulator programming, the simulation/database interface, and two statistical topics as past, present and future limiting factors in the practical use of simulation techniques. 
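a minimal sketch of the nearest-neighbor synchronization idea described in the cheriton et al. entry above: in a time-stepped simulation each worker waits only for its ring neighbors to finish the previous step instead of joining a global barrier. the thread and ring layout is invented for illustration and is not their library.

```python
# nearest-neighbor synchronization for a time-stepped simulation: each worker advances to
# step s only after its ring neighbors have finished step s - 1, with no global barrier.
# the thread/ring layout here is invented for illustration.
import threading

N_WORKERS, N_STEPS = 4, 5
step_done = [0] * N_WORKERS                 # highest step each worker has completed
cond = threading.Condition()

def worker(i):
    left, right = (i - 1) % N_WORKERS, (i + 1) % N_WORKERS
    for step in range(1, N_STEPS + 1):
        with cond:                          # wait only for the two neighbors
            cond.wait_for(lambda: step_done[left] >= step - 1
                          and step_done[right] >= step - 1)
        # ... per-step work on this worker's subdomain would go here ...
        with cond:                          # publish completion of this step
            step_done[i] = step
            cond.notify_all()

if __name__ == "__main__":
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_WORKERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("all workers completed", N_STEPS, "steps:", step_done)
```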
harry m. markowitz communications approaches for simulation-ai interactions yu cao james h. graham adel s. elmaghraby coping with ongoing knowledge acquisition from collaborating hierarchies of experts b. g. silverman r. g. wenig t. wu modular simulation package for product design studies a simulation package designed to facilitate the creation and execution of large fortran based simulations has been developed by the xerox corporation. after a discussion of the model assumptions required to use the package, this paper discusses the key components of the package. this explanation includes the structure of the simulation models and data, the preprocessors used to interface the models to the simulation executive, and the capabilities of the executive. to provide a concrete understanding of the package a simple model is developed, interfaced, and executed. dennis b. ulrich the high level architecture: is there a better way? wayne j. davis gerald l. moeller a biologically inspired robotic model for learning by imitation aude billard maja j. mataric accelerated mpeg compression of dynamic polygonal scenes this paper describes a methodology for using the matrix-vector multiply and scan conversion hardware present in many graphics workstations to rapidly approximate the optical flow in a scene. the optical flow is a 2-dimensional vector field describing the on-screen motion of each pixel. an application of the optical flow to mpeg compression is described which results in improved compression with minimal overhead. dan s. wallach sharma kunapalli michael f. cohen interactive visualization of 3d-vector fields using illuminated stream lines malte zöckler detlev stalling hans-christian hege keynote address (summary only) alan kay asked us to remember the words of bob barton, "good ideas don't often scale", with respect to using the same window paradigm for everything from macintoshes to ultra-high resolution large screens. he further suggested that good user interfaces should be useable by children under six years of age. adults don't make good subjects because they have too much patience. "they've learned to suffer. that's what schools are for". a videotape of a 22 month-old girl adroitly using macpaint drove home his point. kay promoted the idea of "agents", computer-created creatures with personality and some ability to act on their own. as an experiment in using advanced technology to further this idea, kay brought together a group of people including a disney animator to spend a weekend with an e&s; ct-6 real-time shaded graphics system. out of this came some very interesting animation of a bouncing rabbit's-eye-view ramble through an infinite forest and a swim in a shallow sea in the company of a couple of realistically swimming sharks. we were exhorted to strive for impressionistic imagery such as that seen in the dance of the sugar-plum fairies in disney's fantasia, rather than spending teraflops trying to achieve the ever-receding goal of absolute realism. kay closed by noting that while art imitates life, computer art/animation can imitate creation itself. alan kay sea dance steven churchill a survey of enhancements to the alpha-beta algorithm current game- playing programs have developed numerous move ordering and search reduction techniques in order to improve the effectiveness of the alpha-beta algorithm. 
a critical review of these search modifications is provided, and a recursive formula to estimate the search time is proposed, one which reflects the characteristics of the strongly ordered trees produced through use of improved search enhancements. t. a. marsland m. campbell visualization at the australian national university drew whitehouse learning in broker agent xiaocheng luan yun peng timothy finin curriculum descant: stories and plays about the ethical and social implications of artificial intelligence richard g. epstein deepak kumar input modeling when simple models fail barry l. nelson marne c. cario chester a. harris stephanie a. jamison j. o. miller james steinbugl jaehwan yang peter ware 3d scan-conversion algorithms for voxel-based graphics an assortment of algorithms, termed three-dimensional (3d) scan-conversion algorithms, is presented. these algorithms scan-convert 3d geometric objects into their discrete voxel-map representation within a cubic frame buffer (cfb). the geometric objects that are studied here include three-dimensional lines, polygons (optionally filled), polyhedra (optionally filled), cubic parametric curves, bicubic parametric surface patches, circles (optionally filled), and quadratic objects (optionally filled) like those used in constructive solid geometry: cylinders, cones, and spheres. all algorithms presented here do scan- conversion with computational complexity which is linear in the number of voxels written to the cfb. all algorithms are incremental and use only additions, subtractions, tests and simpler operations inside the inner algorithm loops. since the algorithms are basically sequential, the temporal complexity is also linear. however, the polyhedron-fill and sphere-fill algorithms have less than linear temporal complexity, as they use a mechanism for writing a voxel run into the cfb. the temporal complexity would then be linear with the number of pixels in the object's 2d projection. all algorithms have been implemented as part of the cube architecture, which is a voxel-based system for 3d graphics. the cube architecture is also presented. arie kaufman eyal shimony mars: runtime support for coordinated applications neal sample carl bartlett matthew haines predictive learning and cognitive momentum: a foundation for intelligent, autonomous systems steve donaldson surveyor's forum: image models govind sharma minimal cost complexity pruning of meta-classifiers andreas l. prodromidis salvatore j. stolfo vector field comparisons using earth mover's distance yingmei lavin rajesh batra lambertus hesselink an efficient method for volume rendering using perspective projection kevin l. novins francois x. sillion donald p. greenberg visualize a port in africa james n. robinson exotica yoichiro kawaguchi steven churchill tangerine dream expert-vsim (abstract only): an expert simulation environment expert- vsim is the intended product of a current research effort. it is a software system that provides a complete, intelligent environment for the construction and simulation of discrete event models. the initial stage consisted of the construction of a simulation environment called vsim [1] and the second involves the integration of an expert system. vsim provides a highly interactive environment. included is a graphics interface which constitutes the main model construction tool. network models are used to describe real-world systems through an entity-activity world view. 
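for orientation alongside the marsland and campbell survey entry above, the sketch below is a baseline negamax alpha-beta on an invented toy game tree; the surveyed enhancements (refined move ordering, transposition tables, aspiration windows) are deliberately omitted and would plug in where noted.

```python
# baseline negamax alpha-beta on an invented toy tree (internal nodes are lists of child
# positions, leaves are static scores for the player to move). the enhancements surveyed
# above are omitted; better ordering of the child loop is where most of them take effect.
def alphabeta(node, alpha, beta):
    if not isinstance(node, list):          # leaf: static evaluation
        return node
    best = float("-inf")
    for child in node:                      # move ordering would sort this loop
        score = -alphabeta(child, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                   # cutoff: the opponent avoids this branch
            break
    return best

if __name__ == "__main__":
    # depth-2 example: the mover picks a branch, then the opponent picks within it.
    tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
    print(alphabeta(tree, float("-inf"), float("inf")))   # 3
```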
models can be built and tested incrementally, increasing the confidence of correctness. in addition, the multi level structure of vsim allows the user to define subnetworks as nodes within a larger network. a definition language complements the graphics interface and is used to describe the details of the model as well as to create user-defined nodes. a prototype for the vsim stage has been completed and thoroughly tested. the design of the second stage is in active development and near completion. this design includes the creation of a simulation and statistics expert system to be integrated with vsim to obtain the final software product: expert-vsim. this expert system is rule-based implemented in prolog. the presence of such an expert system is of significant importance in the modeling and simulation process. it provides database and inference capabilities that assist the user in: the selection of probability distributions for input variables using common heuristics and procedures [2] to choose from the known theoretical distributions and estimate their parameters (with or without that availability of data), or to allow the user to fit a distribution to experimental data; the analysis of the simulation results, the construction of confidence intervals for the observed variables, and the comparison of simulation runs; the construction of the model itself by providing an on-line database containing the descriptions of the standard and user-defined node types together with the ability to traverse and inspect the multi-level structure of the model; the extension of the expert system itself by allowing the user to add to the set of rules and facts through the prolog interface; the independent use of the statistical analysis functions and procedures available to analyze and process data; and the management of experimental and simulation data through the database capabilities of the system. enrique v. kortright z3: an economical hardware technique for high-quality antialiasing and transparency norman p. jouppi chun-fa chang implementing knowledge bases on secondary storage (abstract only) in the past, many knowledge representation (kr) schemes in artificial intelligence (ai) research have assumed a primary storage-resident knowledge base. this approach to knowledge base implementation is becoming less feasible with current demands and expectations for ai-based software, in particular, large knowledge systems. the incorporation of secondary storage-resident knowledge bases into knowledge systems requires considerable rethinking with respect to knowledge structures and knowledge system design. it is the premise of this paper that there are many potential applications for knowledge systems that are based on a tightly-coupled system design, in which traditional ai kr schemes are modified to incorporate file processing techniques, e.g., hashing and signatures. researchers in both ai and the database field are currently experimenting with a variety of approaches to "merging" ai and database technology. essentially, there are four approaches: develop a simple interface between an ai development system, such as prolog, and a database system, such as ingres, i.e., a loose- coupling; extend a database system to accommodate ai tasks, e.g., add inferencing capabilities; extend an ai development system by adding database capabilities; and develop a tightly-coupled system with its own ai and database capabilities. 
although these four approaches to incorporating secondary storage residency into knowledge systems are very important ones, there is also much potential for incorporating specialized, file processing techniques, i.e., developing systems for applications that do not demand full database-level capabilities \---a simplified approach to (4) above. for example, there may be applications that demand large knowledge bases, and therefore require the efficient use of secondary storage, but that do not require concurrent access, recovery capabilities, and so on. in these cases, we believe that knowledge systems with specialized input/output processing, tailored to a particular kr scheme, will be completely adequate and self-contained, and possibly more efficient and cost-effective. knowledge bases implemented in primary storage on computers with large virtual storage capabilities cannot substitute for the above. there are many issues to be addressed in designing a knowledge system with secondary storage processing, based on either file or database processing techniques. one important issue concerns the lack of (traditional) primary keys during inferencing and query resolution [1]. we are currently investigating partitioning schemes with hashed partitions, where each partition contains "homogeneous" facts and/or rules and uses traditional ai kr schemes, e.g., frame-like knowledge structures, such as prolog structures, and tuples in prolog relations. another related issue is partition size, i.e., knowledge-partition resolution and granularity. partition size can be determined by several criteria, including available primary storage, "natural" partition size of "like" knowledge, traditional bucket size criteria for file processing, and inferencing techniques. furthermore, the implementation language has a bearing on some of these issues. for example, with prolog there must be an explicit accommodation of the default backtracking technique, whereas with lisp the programmer is free to tailor the backtracking algorithm to the knowledge structures from the outset. as another example, consider the default indexing techniques used with prolog. some prolog implementations index on the first component of a relation only [2]. additional code must be introduced to supplement and properly override such defaults. by tightly coupling traditional kr schemes with partitioning, indexing, and hashing schemes it will be possible to develop an efficient knowledge system capable of managing a large knowledge base. jerry d. smith continuous tone representation of three-dimensional objects illuminated by sky light tomoyuki nishita eihachiro nakamae soar/ifor: intelligent agents for air simulation and control paul e. nielsen a database driven server for an internet based plant layout presentation system the work presented in this paper is part of a virtual reality research project of the heinz nixdorf institut and the siemens ag kwu. jurgen gausemeier holger krumm thorsten molt peter ebbesmeyer peter gehrmann moebius: the city of fire victor wong classification of textures using higher-order fractal dimensions a. ait- kheddache fastsplats: optimized splatting on rectilinear grids jian huang roger crawfis naeem shareef klaus mueller knowledge acquisition for classification expert systems expert systems are generally described by a mixture of terms that confuse implementation language with knowledge structure and the search process. 
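to make the partitioning idea in the secondary-storage discussion above concrete, here is a small sketch of a fact base whose facts are hashed by predicate and first argument into partition files, so that a query loads only one partition from disk. the class name, file layout, and hashing choices are assumptions for illustration, not the design described above.

```python
import json, os, hashlib

class PartitionedFactStore:
    """a minimal sketch of a hashed-partition fact base on secondary storage:
    facts are grouped into 'homogeneous' partitions by predicate and first
    argument, so a lookup touches only one partition file."""

    def __init__(self, root, n_partitions=64):
        self.root, self.n = root, n_partitions
        os.makedirs(root, exist_ok=True)

    def _bucket(self, predicate, first_arg):
        key = f"{predicate}/{first_arg}".encode()
        h = int(hashlib.md5(key).hexdigest(), 16) % self.n
        return os.path.join(self.root, f"part_{h:03d}.jsonl")

    def assert_fact(self, predicate, *args):
        with open(self._bucket(predicate, args[0]), "a") as f:
            f.write(json.dumps([predicate, *args]) + "\n")

    def candidates(self, predicate, first_arg):
        """load only the partition that can contain matching facts."""
        path = self._bucket(predicate, first_arg)
        if not os.path.exists(path):
            return []
        with open(path) as f:
            facts = [json.loads(line) for line in f]
        return [fact for fact in facts
                if fact[0] == predicate and fact[1] == first_arg]


store = PartitionedFactStore("kb_demo")
store.assert_fact("parent", "tom", "bob")
store.assert_fact("parent", "tom", "liz")
print(store.candidates("parent", "tom"))
```

partition size, bucket count, and the choice of hash key are exactly the tuning questions raised in the discussion above; the sketch hard-codes one plausible answer.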
this confusion makes it difficult to analyze new problems and to derive a set of knowledge engineering principles. a rigorous, logical description of expert systems reveals that a small set of terms and relations can be used to describe many rule-based expert systems. in particular, one common method for solving problems is by classification\\---heuristically relating data abstractions to a preenumerated network of solutions. this model can be used as a framework for knowledge acquisition, particularly in the early stages for organizing the expert's vocabulary and decomposing problems. william j. clancey current issues in metamodeling (panel) robert g. sargent the simulation of a pipelined event set processor the availability of inexpensive, sophisticated processing elements affords the computer system designer the opportunity to create tailored computer systems for specialized applications. in this paper, the authors present a design for a discrete event simulation computer, in which event set manipulation is performed by a pipelined set of microprocessors. a simulation model for the system is presented and the selection of optimal parameters for the system are discussed. the results of the simulation and suggestions for further research are presented. john craig comfort anita miller if there is artificial intelligence? is there such a thing as artificial stupidity j. liebowitz model evolution: a rotary index table case history in this paper, an indexed rotary table employed in the assembly of instrument clusters for automobiles is modeled. the purpose of the paper is to illustrate the procedures involved in model development. the evolution and rationale behind various slam ii® models of the indexed rotary table are described. the paper demonstrates that alternative modeling concepts and viewpoints are important, and that modeling procedures and analysis can lead to a greater understanding of the system under study. a. alan b. pritsker controllable morphing of compatible planar triangulations two planar triangulations with a correspondence between the pair of vertex sets are compatible (_isomorphic_) if they are topologically equivalent. this work describes methods for morphing compatible planar triangulations with identical convex boundaries in a manner that guarantees compatibility throughout the morph. these methods are based on a fundamental representation of a planar triangulation as a matrix that unambiguously describes the triangulation. morphing the triangulations corresponds to interpolations between these matrices.we show that this basic approach can be extended to obtain better control over the morph, resulting in valid morphs with various natural properties. two schemes, which generate the linear trajectory morph if it is valid, or a morph with trajectories close to linear otherwise, are presented. an efficient method for verification of validity of the linear trajectory morph between two triangulations is proposed. we also demonstrate how to obtain a morph with a natural evolution of triangle areas and how to find a smooth morph through a given intermediate triangulation. dimensioning analysis: toward automatic understanding of engineering drawings dov dori tactical simulation in an object-oriented animated graphics environment mikel d. petty j. michael moshell charles e. 
hughes real-time vision-based camera tracking for augmented reality applications dieter koller gudrun klinker eric rose david breen ross whitaker mihran tuceryan simulation of advanced manufacturing systems gerald w. evans william e. biles pgvt: an algorithm for accurate gvt estimation the time warp mechanism uses memory space to save event and state information for rollback processing. as the simulation advances in time, old state and event information can be discarded and the memory space reclaimed. this reclamation process is called fossil collection and is guided by a global time value called global virtual time (gvt). that is, gvt represents the greatest minimum time of the fully committed events (the time before which no rollback will occur). gvt is then used to establish a boundary for fossil collection. this paper presents a new algorithm for gvt estimation called pgvt. pgvt was designed to support accurate estimates of the actual gvt value and it operates in an environment where the communication subsystem does not support fifo message delivery and where message delivery failure may occur. we show that pgvt correctly estimates gvt values and present some performance comparisons with other gvt algorithms. loy m. d'souza xianzhi fan philip a. wilsey do computer games need to be 3d? richard rouse efficient reinforcement learning in this paper we propose a new formal model for studying reinforcement learning, based on valiant's pac framework. in our model the learner does not have direct access to every state of the environment. instead, every sequence of experiments starts in a fixed initial state and the learner is provided with a "reset" operation that interrupts the current sequence of experiments and starts a new one (from the initial state). we do not require the agent to learn the optimal policy but only a good approximation of it with high probability. more precisely, we require the learner to produce a policy whose expected value from the initial state is ε-close to that of the optimal policy, with probability no less than 1 - δ. for this model, we describe an algorithm that produces such an (ε,δ)-optimal policy for any environment, in time polynomial in n, k, 1/ε, 1/δ, 1/(1 - β) and r_max, where n is the number of states of the environment, k is the maximum number of actions in a state, β is the discount factor and r_max is the maximum reward on any transition. claude-nicolas fiechter cooperative problem-solving guided by intentions and perception (abstract) birgit burmeister kurt sundermeyer curlybot phil frei victor su hiroshi ishii event processing for complicated routes in vrml 2.0 masaaki taniguchi web-based simulation: revolution or evolution? the nature of the emerging field of web-based simulation is examined in terms of its relationship to the fundamental aspects of simulation research and practice. the presentation, assuming a form of debate, is based on a panel session held at the first international conference on web-based modeling and simulation, which was sponsored by the society for computer simulation during 11-14 january 1998 in san diego, california. while no clear "winner" is evident in this debate, the issues raised here certainly merit ongoing attention and contemplation. ernest h. page arnold buss paul a. fishwick kevin j. healy richard e. nance ray j. paul control of initialization bias in multivariate simulation response lee w.
shruben quasi-linear z buffer eugene lapidous guofang jiao the turing test is for the birds gary fostel nigavaps - outbreeding in genetic algorithms carlos fernandes rui tavares agostinho c. rosa analogy by generalization - and the quest of the grail antonio l. furtado path specification and path coherence this paper presents an interactive method for specifying a path in space and time through a three- dimensional environment. a sequence is generated by showing the series of views along the path. the sequence is previewed on a vector scope, and after it is interactively refined, each frame is rendered on a raster device. the path is represented by a b-spline to provide smooth, continuous motion. the timing along the path is also defined by a b-spline so that changes in velocity are smooth. the use of "path coherence" is introduced. the utilization of the available data from the a priori temporal and spatial path definition holds great promise for frame to frame coherence. the path coherence can be used to reduce the number of polygons which need to be considered in a viewed environment. this reduction makes the previewing of complex environments appear less cluttered. furthermore, the computational expense of the culling and sorting operations in the visible line/surface determination is reduced. one sample usage of this is a tree-structured partitioned environment where the priority ordering of the environment must be changed only when the path crosses a partition boundary. kim l. shelley donald p. greenberg acquisition of macro-operators from worked examples in problem solving s. yamada s. tsuji using discrete-event simulation to model human performance in complex systems romn laughery news lisa meeden you can't beat the clock: studies in problem solving this paper has a broad purpose and a narrow purpose. the broad purpose is to teach some general techniques of problem solving, and the narrow purpose is to teach a particular approach to modeling. these purposes are attained by presenting a sequence of three problems and solutions to these problems. many of the techniques of problem solving presented are taken from, or suggested by, the works of polya (1957, 1973, 1981). readers interested in expanding their problem solving abilities should strongly consider reading these works. the particular modeling viewpoint advocated in this paper is that time-oriented approaches to modeling are frequently preferable to space- oriented approaches. hence, the title of this paper. unfortunately, for most people, space-oriented approaches to modeling are more natural than time- oriented approaches. by emphasizing the advantages of the time-oriented approach, we hope to broaden the perspective of the reader. james o. henriksen the mirage problem in digital halftone resolution venkateshwar e. rao inside the loom description classifier robert m. macgregor amalgamating regulation- and case-based advice systems through suggested answers this paper proposes a method whereby regulation- and case- based systems may be amalgamated in logic-based advice systems through the use of suggested answers, originally used in a non- legal domain. these are based on relationships, declared at the meta-level, between generalized cases and the vague concepts of the regulations; satisfaction of the conditions derived from cases is not, however, seen as implying that the vague concepts hold, merely as being indicative of them. 
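the path specification entry above represents both the camera path and its timing as b-splines so that position and velocity change smoothly; a minimal sketch of that arrangement, using the standard uniform cubic b-spline matrix and hypothetical control-point data, is given below.

```python
import numpy as np

def cubic_bspline(ctrl, t):
    """evaluate a uniform cubic b-spline defined by `ctrl` (n x d array)
    at parameter t in [0, 1]; the same routine serves the spatial path
    and the one-dimensional timing curve."""
    ctrl = np.asarray(ctrl, dtype=float)
    nseg = len(ctrl) - 3                       # number of cubic segments
    u = np.clip(t, 0.0, 1.0) * nseg
    i = min(int(u), nseg - 1)                  # segment index
    s = u - i                                  # local parameter in [0, 1]
    basis = (1.0 / 6.0) * np.array([[-1,  3, -3, 1],
                                    [ 3, -6,  3, 0],
                                    [-3,  0,  3, 0],
                                    [ 1,  4,  1, 0]])
    weights = np.array([s**3, s**2, s, 1.0]) @ basis
    return weights @ ctrl[i:i + 4]

# hypothetical camera path (3d control points) and timing curve; repeating
# the end control values makes the timing spline interpolate 0 and 1
path_ctrl = [[0, 0, 0], [1, 0, 0], [2, 1, 0], [3, 3, 1], [4, 3, 2], [5, 2, 2]]
time_ctrl = [[0.0], [0.0], [0.0], [0.5], [1.0], [1.0], [1.0]]  # ease-in/out

for frame in range(5):
    t = frame / 4.0
    u = float(cubic_bspline(time_ctrl, t))     # timing spline: time -> parameter
    eye = cubic_bspline(path_ctrl, u)          # spatial spline: parameter -> position
    print(frame, eye)
```

because both curves are cubic b-splines, the sampled eye positions and their frame-to-frame velocities stay smooth, which is what the paper's path-coherence argument relies on.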
the resulting systems allow the user free interpretation of the vague concepts, but provide help with interpretation, if required, based on the cases; they also provide clear separation of regulations and cases. d. e. wolstenholme fast object-precision shadow generation for area light source norman chin steven feiner presidents' forum computer graphics has been an integral part of the computer environment since the mid-1950's with origins in military, academic and industrial applications. presidents of several leading computer graphics companies will discuss some of the issues they see facing this dynamic industry. one president believes that the 1980's will belong to the systems integrators because of the rapid rate of change in both technology and the competitive environment. vertical integration will not be a successful strategy for the 1980's. another president is concerned that industry research analysts are misusing the term "cad/cam" and that, in fact, most of today's cad/cam systems are simply productive drafting systems. he is concerned about the confusion between realistic pictures and solid modelling and about the proper mix of functionality and interactivity. still another president believes the crucial issues in computer graphics are those that concern users; those that have access to the tools and those of future development of hardware and software to meet their needs. he suggests that only about one to five percent of the potential users have adopted computer graphics. he is concerned about providing increased access to computer graphics facilities and about removing the widespread psychological barrier of "i can't use the computer to do what i want!". other current issues will also be addressed. carl machover donald feddersen ralph t. linsalata peter preuss richard n. spann animating algorithms with xtango john stasko integrating discourse and domain knowledge for document drafting document drafting is a key component of legal expertise. effective legal document drafting requires knowledge both of legal domain knowledge and of the structure of legal discourse. automating the task of legal document drafting therefore requires explicit representation of both these types of knowledge. this paper proposes an architecture that integrates these two disparate knowledge sources in a modular architecture under which representation and control are optimized for each task. this architecture is being implemented in docuplanner 2.0, a system for interactive document drafting. l. karl branting charles b. callaway bradford w. mott james c. lester fiction 2000: technology, tradition, and the essence of story andrew glassner turner whitted image measurement and recognition (abstract only) images, like snowflakes, come in virtually infinite variety. in this abstract, we will focus our attention on image edge detection and image grammars. there are three steps in the syntactic method for image recognition [1]: decompose the images into subimages or primitives describe the structural relationship among the primitives by some operators determine a grammar governing the ways of combination of the primitives and the operators the process of reducing an image to line drawings of boundaries is known as edge detection. several methods have been used so far in this process. local methods: this category of procedures uses the concept of discrete gradient transformation to detect local intensity (grey level value) variations. 
a point is classified as belonging to an edge if the discrete gradient exceeds some threshold value. regional methods: one first considers what a theoretical edge should look like and then finds, for each region, how far from this configuration it is by using a best fit criterion such as the least square one. global methods: a linear shift invariant filtering operation is performed on the image. sequential methods: one first finds some elements belonging to an edge and then proceeds from these points onwards in a sequential way in order to find the remaining points belonging to the same edge. this kind of methods is also known as contour following. heuristic methods: one searches a narrow band between the successive points in the plan and let the program connect them. for instance, dynamic programming may be used for extracting curves in a noisy environment. dynamic methods: one uses a sequence of thresholds and considers the "stable regions" of darker pixels which have small variation whilst the image is thresholded at different values. relaxation methods: one considers each pixel and its neighbors simultaneously and iteratively. define a probability for a pixel to belong to some edge class. the probability converges after iterations and determines whether a pixel belongs to the edge class. images grammars may be classified into one-, two-, or higher dimensional according to the connectivity of primitives can be expressed. early studies borrowed much theoretical construct from the field of formal languages and were mostly concentrated on one-dimensional, that are, string grammars, in which the only relation between primitives is the linear concatenation. edward t. lee r. t. wu c. c. huang tests for the verification and validation of computer simulation models this paper discusses the quantitative as well as qualitative tests which can be run in trying to convince the user of a simulation model that the results are valid robert e. shannon ontomap: portal for upper-level ontologies currently the evaluation of the feasibility of general-purposeontologies and upper-level models is expensive mostly because oftechnical problems such as different representation formalisms andterminologies used. additionally, there are no formal mappingsbetween the upper-level ontologies that could ease any kind ofstudies and comparisons. we present the ontomap project(http://www.ontomap.org), a project with the pragmatic goal tofacilitate the access, understanding, and reuse of such resources.a semantic framework on the conceptual level is implemented that issmall and easy enough to be learned on-the-fly. we tried to designthe framework so that it captures most of the semantics usuallyencoded in upper-level models. technically, ontomap is a web-siteproviding access to several upper-level ontologies and manualmapping between them. atanas kiryakov kiril iv. simov marin dimitrov art + design + computer graphics technology isaac v. kerlow behavioral control for real-time simulated human agents a system for controlling the behaviors of an interactive human-like agent, and executing them in real-time, is presented. it relies on an underlying model of continuous behavior as well as a discrete scheduling mechanism for changing behavior over time. a multiprocessing framework executes the behaviors and renders the motion of the agents in real- time. finally we discuss the current state of our implementation and some areas of future work. john p. granieri welton becket barry d. reich jonathan crabtree norman i. 
badler lights from highlights and shadows pierre poulin alain fournier using morphing for information visualization wolfgang muller marc alexa guidelines for simulation project success kenneth j. musselman replicas - a new continuous system simulation language a new continuous system simulation language - replicas, the rational, efficient programming language for the implementation of computerized analysis and simulation - is proposed for general engineering, scientific and econometric applications. the use of gear's integration method coupled with a non-linear quasi-newton solver relying on broyden's method results in a reliable and efficient simulation system invoked by a language which requires only that the user define a mathematical model in terms of first-order, ordinary differential equations. extensions to gear's method accommodate discontinuities, extreme stiffness and steady-state within a single evaluation procedure. peter mclaughlin the time dimension of neural network models richard rohwer multiagent systems: an emerging subdiscipline of ai victor r. lesser the legend of dragon kenichi iwata shuji hiramatsu takahiro fuji hideki mizoguchi yasuharu yoshizawa kouji miyata momoko ikeda takashi kanai sketching with projective 2d strokes freehand sketching has long had appeal as an artistic medium for conceptual design because of its immediacy in capturing and communicating design intent and visual experience. we present a sketching paradigm that supports the early stages of design by preserving the fluidity of traditional freehand drawings. in addition, it attempts to fill the gap between 2d drawing programs, which have fixed views, and 3d modeling programs that allow arbitrary views. we implement our application as a two- dimensional drawing program that utilizes a projective representation of points --- i.e. points that lie on the surface of a unit sphere centered at the viewpoint. this representation facilitates the production of novel re- projections generated from an initial perspective sketch and gives the user the impression of being immersed in the drawing or space. we describe a method for aligning a sketch drawn outside the system using its vanishing points, allowing the integration of computer sketching and freehand sketching on paper in an iterative manner. the user interface provides a virtual camera, projective grids to guide in the construction of proportionate scenes, and the ability to underlay sketches with other drawings or photographic panoramas. osama tolba julie dorsey leonard mcmillan tracking and modifying human motion with dynamic simulation victor b. zordan jessica k. hodgins a system supporting the human divergent thinking process by provision of relevant and heterogeneous pieces of information based on an outsider model kazushi nishimoto shinji abe tsutomu miyasato fumio kishino the f-buffer: a rasterization-order fifo buffer for multi-pass rendering multi-pass rendering is a common method of virtualizing graphics hardware to overcome limited resources. most current multi-pass rendering techniques use the rgba framebuffer to store intermediate results between each pass. this method of storing intermediate results makes it difficult to correctly render partially-transparent surfaces, and reduces the performance of shaders that need to preserve more than one intermediate result between passes. we propose an alternative approach to storing intermediate results that solves these problems. 
this approach stores intermediate colors (or other values) that are generated by a rendering pass in a fifo buffer as the values exit the fragment pipeline. on a subsequent pass, the contents of the fifo buffer are fed into the top of the fragment pipeline. we refer to this fifo buffer as a fragment- stream buffer (or f-buffer), because this approach has the effect of associating intermediate results with particular rasterization fragments, rather than with an (x,y) location in the framebuffer. implementing an f-buffer requires some changes to current mainstream graphics architectures, but these changes can be minor. we describe the design space associated with implementing an f-buffer, and compare the f-buffer to recirculating pipeline designs. we implement f-buffers in the mesa software renderer, and demonstrate our programmable-shading system running on top of this renderer. william r. mark kekoa proudfoot method of displaying optical effects within water using accumulation buffer a precise shading model is required to display realistic images. recently research on global illumination has been widespread. in global illumination, problems of diffuse reflection have been solved fairly well, but some optical problems after specular reflection and refraction still remain. some natural phenomena stand out in reflected/refracted light from the wave surface of water. refracted light from water surface converges and diverges, and creates shafts of light due to scattered light from particles. the color of the water is influenced by scattering/absorption effects of water molecules and suspensions. for these effects, the intensity and direction of incident light to particles plays an important role, and it is difficult to calculate them in conventional ray-tracing because light refracts when passing through waves. therefore, the pre-processing tracing from light sources is necessary. the method proposed here can effectively calculate optical effects, shaft of light, caustics, and color of the water without such pre-processing by using a scanline z-buffer and accumulation buffer. tomoyuki nishita eihachiro nakamae a knowledge dictionary for expert systems and reorganization techniques william t. harding richard t. redmond autostat: output statistical analysis for automod users john s. carson hybrid sort-first and sort-last parallel rendering with a cluster of pcs we investigate a new hybrid of sort-first and sort-last approach for parallel polygon rendering, using as a target platform a cluster of pcs. unlike previous methods that statically partition the 3d model and/or the 2d image, our approach performs dynamic, view-dependent and coordinated partitioning of both the 3d model and the 2d image. using a specific algorithm that follows this approach, we show that it performs better than previous approaches and scales better with both processor count and screen resolution. overall, our algorithm is able to achieve interactive frame rates with efficiencies of 55.0% to 70.5% during simulations of a system with 64 pcs. while it does have potential disadvantages in client-side processing and in dynamic data management---which also stem from its dynamic, view-dependent nature---these problems are likely to diminish with technology trends in the future. 
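the parallel rendering entry above merges per-node framebuffers; the sketch below shows the simplest sort-last form of that merge, a per-pixel depth-compare composite over full-screen partial images. real systems distribute this step (for example with direct-send or binary-swap), and the hybrid algorithm above additionally partitions both the model and the image dynamically, so this is only the final compositing arithmetic.

```python
import numpy as np

def sort_last_composite(partials):
    """merge per-node partial framebuffers: each partial is a (color, depth)
    pair of full-screen arrays, and the final pixel comes from whichever
    node produced the nearest depth sample."""
    colors = np.stack([c for c, _ in partials])    # (n, h, w, 3)
    depths = np.stack([d for _, d in partials])    # (n, h, w)
    winner = np.argmin(depths, axis=0)             # nearest fragment per pixel
    h, w = winner.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return colors[winner, yy, xx]

# two hypothetical nodes, each with a full-screen color + depth buffer
h, w = 4, 4
node_a = (np.full((h, w, 3), 0.2), np.full((h, w), 0.5))
node_b = (np.full((h, w, 3), 0.8), np.full((h, w), 0.7))
node_b[1][2:, :] = 0.1                             # node b is nearer in the lower half
final = sort_last_composite([node_a, node_b])
print(final[:, :, 0])                              # 0.2 in the top rows, 0.8 below
```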
rudrajit samanta thomas funkhouser kai li jaswinder pal singh ontological engineering for b2b e-commerce in this paper we discuss the nature of our overall enterprise tocreate ontologies in the product and service knowledge space forbusiness-to-business (b2b) electronic commerce. we describe onecrucial problem: the mapping problem, i.e., mapping amongontologies, taxonomies, and classification systems, some of whichare more semantically sound and coherent than others. this problemwe consider to be in need of a sustained research program iftenable solutions are to be found, since the lack of a solutionwill preclude widespread adoption of ontologies by the commercialworld. finally, we summarize the general issues we faced andindicate prospective future research. leo obrst robert e. wray howard liu ray tracing: graphics for the masses paul rademacher tuple centres for the coordination of internet agents andrea omicini franco zambonelli multi-color and artistic dithering victor ostromoukhov roger d. hersch hardware-accelerated texture advection for unsteady flow visualization bruno jobard gordon erlebacher m. yousuff hussaini comparison of monte carlo and deterministic methods for non-adaptive optimization hisham a. al-mharmah james m. calvin scalable distributed visualization using off-the-shelf components this paper describes a visualization architecture for scalable computer systems. the architecture is currently being prototyped for use in beowulf- class clustered systems. a set of opengl frame buffers are driven in parallel by a set of cpus. the visualization architecture merges the contents of these frame buffers by user-programmable associative and commulative combining operations. the system hardware is built from off-the-shelf components including opengl accelerators, field programmable gate arrays (fpgas), and gigabit network interfaces and switches. a second-generation prototype supports 60 hz operation at 1024 × 1024 pixel resolution with interactive latency up to 1000 nodes. alan heirich laurent moll filtering edges for gray-scale displays while simple line-drawing techniques produce "jagged" lines on raster images, more complex anti-aliasing, or filtering, techniques use gray-scale to give the appearance of smooth lines and edges. unfortunately, these techniques are not frequently used because filtering is thought to require considerable computation. this paper presents a simple algorithm that can be used to draw filtered lines; the inner loop is a variant of the bresenham point-plotting algorithm. the algorithm uses table lookup to reduce the computation required for filtering. simple variations of the algorithm can be used to draw lines with different thicknesses and to smooth edges of polygons. satish gupta robert f. sproull focus on color: the 1995 siggraph educators' slide set rosalee wolfe artdefo: accurate real time deformable objects doug l. james dinesh k. pai enabling classification and shading for 3d texture mapping based volume rendering using opengl and extensions we present a new technique which enables direct volume rendering based on 3d texture mapping hardware, enabling shading as well as classification of the interpolated data. our technique supports accurate lighting for a one directional light source, semi- transparent classification, and correct blending. 
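classification of interpolated samples and correct blending, as mentioned in the volume rendering entry above, reduce to a transfer-function lookup followed by front-to-back alpha compositing; the cpu sketch below shows only that arithmetic and is not the hardware technique of the paper (the transfer function here is a made-up example).

```python
import numpy as np

def classify(samples, tf):
    """post-classification: map interpolated scalar samples in [0, 1]
    through a transfer-function table tf of shape (256, 4) giving rgba."""
    idx = np.clip((samples * 255).astype(int), 0, 255)
    return tf[idx]

def composite_ray(rgba):
    """front-to-back alpha compositing of classified samples along one ray."""
    color, alpha = np.zeros(3), 0.0
    for r, g, b, a in rgba:
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                       # early ray termination
            break
    return color, alpha

# hypothetical transfer function: low values faint blue, high values opaque red
tf = np.zeros((256, 4))
tf[:, 0] = np.linspace(0.0, 1.0, 256)
tf[:, 2] = np.linspace(1.0, 0.0, 256)
tf[:, 3] = np.linspace(0.0, 0.1, 256)

ray_samples = np.linspace(0.1, 0.9, 64)        # interpolated scalars along a ray
print(composite_ray(classify(ray_samples, tf)))
```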
to circumvent the limitations of one general classification, we introduce multiple classification spaces which are very valuable to understand the visualized data, and even mandatory to comprehensively grasp the 3d relationship of different materials present in the volumetric data. furthermore, we illustrate how multiple classification spaces can be realized using existing graphics hardware. in contrast to previously reported algorithms, our technique is capable of performing all the above mentioned tasks within the graphics pipeline. therefore, it is very efficient: the three dimensional texture needs to be stored only once and no load is put onto the cpu. besides using standard opengl functionality, we exploit advanced per pixel operations and make use of available opengl extensions. michael meißner ulrich hoffmann wolfgang straßer explosion potion yin-fang liao chess programs (editor's note): from basement to marketplace selection using a one-eyed cursor in a fish tank vr environment this study investigates the use of a 2d cursor presented to one eye for target selection in fish tank vr and other stereo environments. it is argued that 2d selection of 3d objects should be less difficult than 3d selection. vision research concerning binocular rivalry and the tendency we have to project images onto surfaces suggests that this mode of viewing will not seem particularly unnatural. a fitt's law experiment was done to directly compare target acquisition with a one-eyed 2d cursor and target acquisition using a 3d cursor. in both cases we used the same input device (polhemus fastrak) so that the device lag and gain parameters were exactly matched. the results show a large improvement in target acquisition time using the 2d cursor. the practical implications of this is that the 2d selection method using a one- eyed cursor in preferable to the 3d selection method. theoretical implications relate to methods for extending fitts' law from the one-dimensional task for which it was designed to 2d and 3d tasks. we conclude that the existing approaches to this problem are not adequate. colin ware kathy lowther real interactivity in interactive entertainment talin global illumination using local linear density estimation this article presents the density estimation framework for generating view- independent global illumination solutions. it works by probabilistically simulating the light flow in an environment with light particles that trace random walks origination at luminaires and then using statistical density estimation techniques to reconstruct the lighting on each surface. by splitting the computation into separate transport and reconstruction stages, we gain many advantages including reduced memory usage, the ability to simulate nondiffuse transport, and natural parallelism. solutions to several theoretical and practical difficulties in implementing this framework are also described. light sources that vary spectrally and directionally are integrated into a spectral particle tracer using nonuniform rejection. a new local linear density estimation technique eliminates boundary bias and extends to arbitrary polygons. a mesh decimation algorithm with perceptual calibration is introduced to simplify the gouraud-shaded representation of the solution for interactive display. bruce walter philip m. hubbard peter shirley donald p. greenberg behavioral synergy without explicit integration maja j. 
mataric clamping: a method of antialiasing textured surfaces by bandwidth limiting in object space an object space method is given for interpolating between sampled and locally averaged signals, resulting in an antialiasing filter which provides a continuous transition from a sampled signal to its selectively dampened local averages. this method is applied to the three standard euclidean dimensions and time, resulting in spatial and frame to frame coherence. the theory allows filtering of a variety of functions, including continuous and discrete representations of planar texture. alan norton alyn p. rockwood philip t. skolmoski designing computer simulation experiments this tutorial focuses on that part of a simulation study concerning setting the various specific model parameters and the experimental conditions under which the model will be exercised. other issues in setting up and designing simulation experiments, such as variance reduction, ranking and selection, and optimization, are also mentioned. the focus is on careful choice of such parameters beforehand, with an eye toward the statistical analysis of the simulation's results. output analysis is not treated per se, being covered in another tutorial in this conference (by gordon clark), but the design and analysis activities must be done hand in hand. w. david kelton a new algorithm for handling continuous-valued attributes in decision tree generation and its application to drawing recognition wei lu masao sakauchi simulation environment of the 1990's (panel) voratas kachitvicyanukul james o. henriksen richard e. nance c. dennis pegden charles r. standrige brian w. unger application of hierarchical modeling concepts to a multi-analysis environment joel luna learning to reason we introduce a new framework for the study of reasoning. the learning (in order) to reason approach developed here views learning as an integral part of the inference process, and suggests that learning and reasoning should be studied together. the learning to reason framework combines the interfaces to the world used by known learning models with the reasoning task and a performance criterion suitable for it. in this framework, the intelligent agent is given access to its favorite learning interface, and is also given a grace period in which it can interact with this interface and construct a representation kb of the world w. the reasoning performance is measured only after this period, when the agent is presented with queries α from some query language, relevant to the world, and has to answer whether w implies α. the approach is meant to overcome the main computational difficulties in the traditional treatment of reasoning which stem from its separation from the "world". since the agent interacts with the world when constructing its knowledge representation it can choose a representation that is useful for the task at hand. moreover, we can now make explicit the dependence of the reasoning performance on the environment the agent interacts with. we show how previous results from learning theory and reasoning fit into this framework and illustrate the usefulness of the learning to reason approach by exhibiting new results that are not possible in the traditional setting. first, we give learning to reason algorithms for classes of propositional languages for which there are no efficient reasoning algorithms, when represented as a traditional (formula-based) knowledge base.
second, we exhibit a learning to reason algorithm for a class of propositional languages that is not know to be learnable in the traditional sense. roni khardon dan roth evaluating intelligent tutoring with gaming-simulations julika siemer marios c. angelides view-dependent culling of dynamic systems in virtual environments stephen chenney david forsyth optimal quadratic-form estimator of the variance of the sample mean wheyming tina song neng-hui shih mingjian yuan an introduction to using prosim for business process simulation and analysis malay a. dalal madhav erraguntla perakath benjamin standards pipeline png, vrml 97, biif, imaging standards george s. carson simulating decorative mosaics this paper presents a method for simulating decorative tile mosaics. such mosaics are challenging because the square tiles that comprise them must be packed tightly and yet must follow orientations chosen by the artist. based on an existing image and user- selected edge features, the method can both reproduce the image's colours and emphasize the selected edges by placing tiles that follow the edges. the method uses centroidal voronoi diagrams which normally arrange points in regular hexagonal grids. by measuring distances with an manhattan metric whose main axis is adjusted locally to follow the chosen direction field, the centroidal diagram can be adapted to place tiles in curving square grids instead. computing the centroidal voronoi diagram is made possible by leveraging the z-buffer algorithm available in many graphics cards. alejo hausner a real-time low-latency hardware light-field renderer matthew j. p. regan gavin s. p. miller steven m. rubin chris kogelnik localization and identification of visual landmarks this project focused on designing and evaluating methods for reading barcodes on visual landmarks. such landmarks have many applications, including visual tracking, servo-ing, and mobile robot navigation. our goal was to improve a preliminary version of the barcode reader developed the previous summer in the middlebury robotics and vision lab. dan knights jeff lanza book review lynellen d. s. perry stratified sampling of spherical triangles james arvo computing the antipenumbra of an area light source seth j. teller the lilog knowledge representation system toni bollinger udo pletat an adaptive subdivision method for surface-fitting from sampled data francis j m schmitt brian a. barsky wen-hui du using focus for generating felicitous locative expressions we are concerned with using discourse focus to generate felicitous natural language responses to "where is"-type queries by a user with respect to a map. in ordinary language this is typically achieved by using a locative expression whose syntax involves using a preposition (such as in or at) and its object (which serves as a reference point). the selection of an appropriate reference point is important when generating such locative expressions. we attempt to use discourse focus to model the user's mental body position in the selection of an appropriate reference point. this enables the user to use body-oriented inference strategies associated with small scale space to make better sense of the overall spatial organization of the geographic entities and the large scale space of which these geographic entities are a part. s. m. haller s. s. ali unsupervised updating of a classification tree in a dynamic environment daniel boley vivian borst sufficiency analysis for the calculus of variations robert m. 
corless using bivariate bezier distributions to model simulation input processes mary ann flanigan wagner james r. wilson an experience with a prolog-based object-oriented language this paper presents an experience with a programming language spool which is based on the combination of object-oriented programming and logic programming. this language inherits the capability of knowledge base organization from object- oriented programming and its expressive power from logic programming. the experience of the application of spool to the program annotation system showed that this combination was quite useful to formalize domain knowledge into declarative data types and make them reusable in different contexts. it also showed the need for further study such as better linguistic support to exploit the full power of this combination. koichi fukunaga shin-ichi hirose a new simple and efficient antialiasing with subpixel masks andreas schilling the turing test and the economist stuart c. shapiro mighty joe young mary reardon turtle trouble josh nizzi combining physical and visual simulation - creation of the planet jupiter for the film "2010" larry yaeger craig upson robert myers imbedding gpss in a general purpose programming language gpss has proven to be an excellent simulation language, but was not designed to perform the logical and computational tasks of a programming language. strategies for improvement can take one of two paths: building adequate analytic constructs into the existing gpss language, or building gpss-like constructs into an existing general purpose programming language. the second choice can be made easier by use of a language-building utility program to translate simulation constructs into the selected programming language. it is also simplified by the fact that some gpss statements (such as input-output and control) may be dropped, since any good programming language will provide native facilities for these functions. such a fusion between the constructs of a good simulation language, and those of a good programming language, provides a far more flexible system than either alone. this paper discusses pl/i gpss, an implementation of gpss in a pl/i environment which uses pl/i variables as transaction variables and permits use of general pl/i expressions for most gpss statement parameters. jerrold rubin adaptive web site agents michael j. pazzani daniel billsus a hierarchical symptom classification for model based causal reasoning model based causal reasoning has been widely used for physical systems diagnosis. the system fault is localized with the causal relation of system structure and behavior. in such applications, if the system fault is not localized with the observed behavior, then a subsequent observation is made. this research studies a hierarchical symptom classification for guiding a subsequent observation in model based causal reasoning. the diagnostic symptoms are mapped to the system functional hierarchy and the symptoms are classified by partitioning the functional hierarchy. the dependency relation of symptoms guides subsequent observation. this strategy enhances the control of subsequent observation by hierarchically structuring and classifying the symptoms. c. lee p. liu s. clark m. y. chiu alex an expert system for truck loading an expert system has been built that plans the loading of boxed products into semi-trailer trucks. 
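the gpss-embedding entry above argues for expressing simulation constructs directly in a general purpose language; a minimal next-event scheduler in that spirit is sketched below, with a toy single-facility model. the names and structure are illustrative, not the pl/i gpss implementation described above, and the toy model keeps no queue.

```python
import heapq

class Simulation:
    """a minimal next-event scheduler, illustrating how gpss-like constructs
    (advance, seize/release of a facility) can be written directly in a
    general purpose language."""

    def __init__(self):
        self.clock = 0.0
        self._events = []              # (time, sequence, callback)
        self._seq = 0

    def schedule(self, delay, callback):
        heapq.heappush(self._events, (self.clock + delay, self._seq, callback))
        self._seq += 1

    def run(self, until):
        while self._events and self._events[0][0] <= until:
            self.clock, _, callback = heapq.heappop(self._events)
            callback()


# toy model: transactions arrive every 4 time units and hold a facility for 3
sim, busy, served = Simulation(), [False], [0]

def arrive():
    sim.schedule(4.0, arrive)          # generate the next arrival
    if not busy[0]:                    # seize the facility if it is free
        busy[0] = True
        sim.schedule(3.0, depart)      # service time (an ADVANCE block)

def depart():
    busy[0] = False                    # release the facility
    served[0] += 1

sim.schedule(0.0, arrive)
sim.run(until=100.0)
print("transactions served:", served[0])
```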
the system seeks to optimize the volumetric utilization of a trailer while minimizing the likelihood of damage to the products while in transit. it is currently operating in both interactive and batch modes in the production data processing environment of a large manufacturer of household appliances. this paper describes and compares the methods of expert loaders and the expert system, as well as general information about the implementation. ron lemaster advancing game graphics: a war of escalation steve ogden transportability to other languages: the natural language processing project in the ai program at mcc we discuss a recently launched, long-term project in natural language processing, the primary concern of which is that natural language applications be transportable among human languages. in particular, we seek to develop system tools and linguistic processing techniques that are themselves language-independent to the maximum extent practical. in this paper we discuss our project goals and outline our intended approach, address some cross- linguistic requirements, and then present some new linguistic data that we feel support our approach. jonathan slocum carol f. justus introduction to siman this paper discusses the concepts and methods simulating manufacturing systems using the siman simulation language. siman is a new general purpose simulation language which incorporates special purpose features for modeling manufacturing systems. these special purpose features greatly simplify and enhance the modeling of the material handling component of a manufacturing system. c. dennis pegden a bayesian batch means methodology for analysis of simulation output the purpose of this research is to investigate the use of bayesian methodology in the analysis of simulation output. specifically, the bayesian methodology is introduced in the context of the batch means procedure for building a confidence interval for the output mean. we assume that the output process is at steady state or equivalently that the output process is second order stationary. we also assume that the length is fixed at say n. so the output process can be given by x1, x2, ..., xn,a sequence of observations from a continuous state stationary stochastic process with mean m, variance o2x and autocorrelation function {pi}@@@@i l this bayesian batch means methodology has been thoroughly tested. the five measures of effectiveness suggested in schriber and andrews (1981) are reported for a variety of simulated theoretical output processes. in addition, each run is compared with various batch means procedures. richard w. andrews thomas j. schriber adaptive forward differencing for rendering curves and surfaces sheue-ling lien michael shantz vaughan pratt the integration of retrieval, reasoning and drafting for refugee law: a third generation legal knowledge based system we identify an argument to be the basic unit of reasoning of a system that supports the construction of arguments and drafting of determinations in refugee law. collaboration with the refugee review tribunal of australia has led to the development of a framework for argument construction that includes over 200 generic arguments. however, these arguments may not encompass all arguments used in any particular case. the construction of non-generic arguments involves the integration of information retrieval within reasoning. this retrieval is passage based from a wide variety of text sources. 
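for orientation alongside the bayesian batch means entry above, the sketch below implements the classical (non-bayesian) batch means confidence interval for a steady-state mean, which is the baseline such procedures are usually compared against; the batch count and the normal critical value are arbitrary illustrative choices.

```python
import math, random

def batch_means_ci(output, n_batches=20, z=1.96):
    """classical batch-means confidence interval for the steady-state mean
    of a stationary simulation output series. with few batches a student-t
    quantile should replace the normal value 1.96."""
    n = len(output) // n_batches * n_batches       # drop any ragged tail
    size = n // n_batches
    means = [sum(output[i * size:(i + 1) * size]) / size for i in range(n_batches)]
    grand = sum(means) / n_batches
    var = sum((m - grand) ** 2 for m in means) / (n_batches - 1)
    half = z * math.sqrt(var / n_batches)
    return grand - half, grand + half


data = [5 + random.gauss(0, 1) for _ in range(10000)]   # synthetic stationary output
print(batch_means_ci(data))                             # interval around 5
```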
the framework also acts as the illocutionary structure in a document drafting process. in conceptualising this system we have found it useful to propose a classification of knowledge based systems in law. john yearwood andrew stranieri news douglas blank towards a semantics of desires (abstract) george kiss han reichgelt device-directed rendering rendering systems can produce images that include the entire range of visible colors. imaging hardware, however, can reproduce only a subset of these colors: the device gamut. an image can only be correctly displayed if all of its colors lie inside of the gamut of the target device. current solutions to this problem are either to correct the scene colors by hand, or to apply gamut mapping techniques to the final image. we propose a methodology called device- directed rendering that performs scene color adjustments automatically. device-directed rendering applies classic minimization techniques to a symbolic representation of the image that describes the relationship of the scene lights and surfaces to the pixel colors. this representation can then be evaluated to produce an image that is guaranteed to be in gamut. although our primary application has been correcting out-of-gamut colors, this methodology can be generally applied to the problem of adjusting a scene description to accommodate constraints on the output image pixel values. andrew s. glassner kenneth p. fishkin david h. marimont maureen c. stone reproducing color images as duotones joanna l. power brad s. west eric j. stollnitz david h. salesin adjoint equations and random walks for illumination computation in this paper we introduce the potential equation that along with the rendering equation forms an adjoint system of equations and provides a mathematical frame work for all known approaches to illumination computation based on geometric optics. the potential equation is more natural for illumination computations that simulate light propagation starting from the light sources, such as progressive radiosity and particle tracing. using the mathematical handles provided by this framework and the random-walk solution model, we present a number of importance sampling schemes for improving the computation of flux estimation. of particular significance is the use of approximately computed potential for directing a majority of the random walks through regions of importance in the environment, thus reducing the variance in the estimates of luminous flux in these regions. finally, results from a simple implementation are presented to demonstrate the high-efficiency improvements made possible by the use of these techniques. s. n. pattanaik s. p. mudur tablet-based valuators that provide one, two, or three degrees of freedom the ability of the user of a graphics system to interactively control the motions of 3-d objects enhances his or her spatial perception and comprehension of those objects. this paper describes several logical devices based on the tablet, each particularly suited to control some type of interactive manipulation of 2-d and 3-d objects in real time. the "turntable" and the "stirrer" convert rotary motion of the tablet pen or puck into values for one-axis rotational control; the "rack", a two-axis device, is used for scaling control; for three-axis rotational control, the tablet-based three- axis trackball is particularly suited. kenneth b. evans peter p. 
tanner marceli wein perception of change for a socially enhanced robot imitator yuval marom gillian hayes expert systems seminar in recent years there has been significant progress in developing computer programs that can perform technical tasks normally thought to require the knowledge, experience and judgment of human specialists. known as expert systems or knowledge-based systems, these programs have achieved high levels of performance at such diverse but specialized tasks as medical diagnosis, mass-spectrum analysis, chemical synthesis, mineral exploration, and computer system configuration. this seminar will give an overview of the fundamental principles of knowledge representation and plausible inference used in these systems. particular emphasis will be placed on rule-based systems, which provide a particularly attractive, uniform approach to representing and using judgmental knowledge. the following outline summarizes the topics to be covered. bruce g. buchanan richard o. duda from a modular visualization environment to an environment for computational problem solving ken brodlie helen wright learning with unreliable boundary queries avrim blum prasad chalasani sally a. goldman donna k. slonim game theoretic reasoning in multi-agent coordination by negotiation with a trusted third party shih-hung wu von-wun soo a knowledge-based methodology for developing knowledge acquisition tools chih- cheng chien cheng-seen ho the ohco model of text: merits and concerns stuart a. selber genetic algorithms stephanie forrest on the utility of plan-space (causal) encodings amol d. mali subbarao kambhampati some patterns of technological change in high-performance computers high-performance computer technology is undergoing a period of unusually rapid change, and this paper attempts to describe the patterns of these changes in a systematic way. pattern recognition is the basis of technology forecasting, and it is through technology forecasting that we obtain the anticipatory information that allows us to avoid problems and create opportunities. we will first identify the stages in which technological changes occur, and then define "change" as the first derivative of an information function that describes the state of a technology. we then explore the driving forces that cause three generic patterns of technological change: incremental, exponential, and logistic. some areas in high-performance computer technology that are following these patterns are then identified, and a forecast is developed. finally, some limits of high-performance computing are discussed. j. worlton markov chains and computer-aided geometric design: part i - problems and constraints ronald n. goldman language support for parallel discrete-event simulations rajive l. bagrodia the methodology roles in the realization of a model development environment the definition of "methodology" is followed by a very brief review of past work in modeling methodologies. the dual role of a methodology is explained: (1) conceptual guidance in the modeling task, and (2) definition of needs for environment designers. a model development environment based on the conical methodology serves for specific illustration of both roles. richard e. nance james d. arthur advanced modeling techniques for computer graphics david s. ebert using response surface methodology to link force structure budgets to campaign objectives james b. grier t. glenn bailey jack a. 
jackson survey on special purpose computer architectures for ai b wah g j li real-time interactive storytelling glen d. fraser a multiprocessor raster display for interactive graphics system design the design of increasingly complex vlsi circuits and multilayer printed circuit boards has increased the demands on computer aided design methods including interactive graphics systems. earlier minicomputer based systems with direct view storage tube displays lack the processing power and display characteristics that allow high degrees of interactivity and visual discrimination required for high productivity in complex design situations. declining semiconductor ram costs and increasing capabilities of microprocessors have allowed raster display technology to keep pace with the demands of current interactive graphics systems. the recently introduced lexidata graphics system 8000 combines state of the art color and black and white raster display technology with the latest microprocessor technology to provide a powerful element for the design of interactive graphics systems to meet today's greater needs. the gs8000 includes a very fast schottky display processor to convert vectors to raster and fill areas and a 16 bit microprocessor to service interactive devices and process a world coordinate data base for editing and display. the system hardware offloads a significant processing and memory burden from the host computer allowing the use of a less powerful host or more users on a single host. an extensive software library resident in the graphics system 8000 reduces the software development required of the user allowing him to concentrate on the applications software necessary for the creation, maintenance and non-graphic processing of his design data base on his host computer. walter m. anderson temporal difference learning and td-gammon ever since the days of shannon's proposal for a chess-playing algorithm [12] and samuel's checkers-learning program [10] the domain of complex board games such as go, chess, checkers, othello, and backgammon has been widely regarded as an ideal testing ground for exploring a variety of concepts and approaches in artificial intelligence and machine learning. such board games offer the challenge of tremendous complexity and sophistication required to play at expert level. at the same time, the problem inputs and performance measures are clear-cut and well defined, and the game environment is readily automated in that it is easy to simulate the board, the rules of legal play, and the rules regarding when the game is over and determining the outcome. gerald tesauro hierarchically organised formalisations formalisations to date have tended to ignore a salient feature of statutes: they exhibit a multi-layered logical structure. this, rather than the unsuitability of logic for representing statute law, is at the root of at least some of the knowledge representation problems researchers have uncovered. in addition, the failure of any formalisation to mirror this kind of structure means that it inevitably will be an impoverished translation of the original article. t. routen partial scan selection for user-specified fault coverage clay gloster franc brglez spatialized normal cone hierarchies david e. johnson elaine cohen exquisite fun: a digital sketchbook peggy reinecke rebecca hermann trophomotion amanda roth editorial: looking back.
looking ahead jim foley digital facial engraving victor ostromoukhov parallel texture caching homan igehy matthew eldridge pat hanrahan dynamic nurbs with geometric constraints for interactive sculpting this article develops a dynamic generalization of the nonuniform rational b-spline (nurbs) model. nurbs have become a de facto standard in commercial modeling systems because of their power to represent free-form shapes as well as common analytic shapes. to date, however, they have been viewed as purely geometric primitives that require the user to manually adjust multiple control points and associated weights in order to design shapes. dynamic nurbs, or d-nurbs, are physics-based models that incorporate mass distributions, internal deformation energies, and other physical quantities into the popular nurbs geometric substrate. using d-nurbs, a modeler can interactively sculpt curves and surfaces and design complex shapes to required specifications not only in the traditional indirect fashion, by adjusting control points and weights, but also through direct physical manipulation, by applying simulated forces and local and global shape constraints. d-nurbs move and deform in a physically intuitive manner in response to the user's direct manipulations. their dynamic behavior results from the numerical integration of a set of nonlinear differential equations that automatically evolve the control points and weights in response to the applied forces and constraints. to derive these equations, we employ lagrangian mechanics and a finite-element-like discretization. our approach supports the trimming of d-nurbs surfaces using d-nurbs curves. we demonstrate d-nurbs models and constraints in applications including the rounding of solids, optimal surface fitting to unstructured data, surface design from cross sections, and free-form deformation. we also introduce a new technique for 2d shape metamorphosis using constrained d-nurbs surfaces. demetri terzopoulos hong qin consistent mesh parameterizations a basic element of digital geometry processing algorithms is the establishment of a smooth parameterization for a given model. in this paper we propose an algorithm which establishes parameterizations for a set of models. the parameterizations are called consistent because they share the same base domain and respect features. they give immediate correspondences between models and allow remeshes with the same connectivity. such remeshes form the basis for a large class of algorithms, including principal component analysis, wavelet transforms, detail and texture transfer between models, and _n_-way shape blending. we demonstrate the versatility of our algorithm with a number of examples. emil praun wim sweldens peter schröder knowledge-based object recognition system girija chetty narendra deshpande learning to model behaviors from boolean responses anish biswas sandip sen inverse kinematics positioning using nonlinear programming for highly articulated figures an articulated figure is often modeled as a set of rigid segments connected with joints. its configuration can be altered by varying the joint angles. although it is straightforward to compute figure configurations given joint angles (forward kinematics), it is more difficult to find the joint angles for a desired configuration (inverse kinematics).
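a minimal, hedged sketch of the forward/inverse kinematics distinction just described, using a hypothetical two-link planar arm rather than the jack system the abstract goes on to discuss; the link lengths, starting guess, and optimizer choice are all assumptions for illustration:

```python
# illustrative sketch only: a hypothetical two-link planar arm, not the jack
# system discussed in this abstract. forward kinematics is a direct formula;
# inverse kinematics is posed as a small nonlinear program (assumed setup).
import numpy as np
from scipy.optimize import minimize

L1, L2 = 1.0, 0.8  # assumed link lengths

def forward(angles):
    """end-effector position for given joint angles (forward kinematics)."""
    t1, t2 = angles
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return np.array([x, y])

def inverse(goal, start=(0.1, 0.1)):
    """joint angles that bring the end-effector near the goal, found by
    minimizing squared positional error (inverse kinematics)."""
    objective = lambda q: np.sum((forward(q) - np.asarray(goal)) ** 2)
    return minimize(objective, start, method="BFGS").x

print(forward(inverse((1.2, 0.6))))  # approximately (1.2, 0.6) if reachable
```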
since the inverse kinematics problem is of special importance to an animator wishing to set a figure to a posture satisfying a set of positioning constraints, researchers have proposed several different approaches. however, when we try to follow these approaches in an interactive animation system where the object on which to operate is as highly articulated as a realistic human figure, they fail in either generality or performance. so, we approach this problem through nonlinear programming techniques. it has been successfully used since 1988 in the spatial constraint system within jack, a human figure simulation system developed at the university of pennsylvania, and proves to be satisfactorily efficient, controllable, and robust. a spatial constraint in our system involves two parts: one constraint on the figure, the end- effector, and one on the spatial environment, the goal. these two parts are dealt with separately, so that we can achieve a neat modular implementation. constraints can be added one at a time with appropriate weights designating the importance of this constraint relative to the others and are always solved as a group. if physical limits prevent satisfaction of all the constraints, the system stops with the (possibly local) optimal solution for the given weights. also, the rigidity of each joint angle can be controlled, which is useful for redundant degrees of freedom. jianmin zhao norman i. badler images and reversals: talking less, drawing more thomas g. west towards an expert/novice learning system with application to infectious disease d. kopec l. latour m. brody mathml: a proposal for representing mathematics in html neil soiffer introduction to simulation robert e. shannon a new point-location algorithm and its practical efficiency: comparison with existing algorithms ta. asano on loebner's lessons stuart m. shieber parallel processing image synthesis and anti-aliasing the continuing evolution of microelectronics provides the tools for developing new methods of synthesizing digital images by utilizing parallel processing architectures which hold the promise of reliability, flexibility and low cost. beginning with the earliest real-time flight simulators, parallel processing architectures for image synthesis have been built, but "anti-aliasing" remains a problem. a parallel processing architecture is described and simulated which consists of a serial chain of processors which produces as output a depth sorted list of those objects which are at least potentially visible at each pixel. the lists are then filtered to provide the final shading at each pixel. richard weinberg volume visualization at the center for supercomputing research and development peter shirley henry neeman graphics on the wayback machine maurice molyneaux live: an architecture for learning from the environment wei-min shen two-handed direct manipulation on the responsive workbench lawrence d. cutler bernd frölich pat hanrahan discrete-event simulation and the event horizon part 2: event list management jeffrey s. steinman improving opening book performance through modeling of chess opponents steven walczak 2-d shape blending: an intrinsic solution to the vertex path problem thomas w. sederberg peisheng gao guojin wang hong mu the design of a multi-microprocessor based simulation computer - ii this paper presents further results in development of a discrete event simulation computer based on a network of micro processors. 
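as background for the event-set (priority queue) processing this abstract turns to below, a single-process sketch of an event list; the attached-processor design itself is not reproduced here:

```python
# minimal discrete-event event list using a binary heap -- context for the
# priority-queue processing discussed below; the paper offloads this work to
# attached processors, which this single-process sketch does not attempt.
import heapq

class EventList:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so equal-time events keep insertion order

    def schedule(self, time, handler):
        heapq.heappush(self._heap, (time, self._seq, handler))
        self._seq += 1

    def run(self, until):
        clock = 0.0
        while self._heap and self._heap[0][0] <= until:
            clock, _, handler = heapq.heappop(self._heap)
            handler(clock)
        return clock

events = EventList()
events.schedule(2.0, lambda t: print("arrival at", t))
events.schedule(1.0, lambda t: print("setup at", t))
events.run(until=10.0)
```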
the network is being designed by identifying simulation tasks which may be performed in parallel with other computation required by the simulation, and then assigning those subtasks to attached processing elements in the network. the tasks of priority queue processing and state accounting are considered in this paper. a three attached processor simulation computer has been designed, using two processors for the event set and the third for state statistics accumulation. in a simulation model of this system, a forty to fifty percent reduction in the execution of a benchmark simulation program is easily achieved. (the benchmark program itself uses an adaptive scheduling algorithm). further observations and suggestions for future research are presented. john craig comfort time-critical multiresolution scene rendering we describe a framework for time-critical rendering of graphics scenes composed of a large number of objects having complex geometric descriptions. our technique relies upon a scene description in which objects are represented as multiresolution meshes. we perform a constrained optimization at each frame to choose the resolution of each potentially visible object that generates the best quality image while meeting timing constraints. the technique provides smooth level-of-detail control and aims at guaranteeing a uniform, bounded frame rate even for widely changing viewing conditions. the optimization algorithm is independent from the particular data structure used to represent multiresolution meshes. the only requirements are the ability to represent a mesh with an arbitrary number of triangles and to traverse a mesh structure at an arbitrary resolution in a short predictable time. a data structure satisfying these criteria is described and experimental results are discussed. enrico gobbetti eric bouvier a linear time exact hidden surface algorithm this paper presents a new hidden surface algorithm. its output is the set of the visible pieces of edges and faces, and is as accurate as the arithmetic precision of the computer. thus calculating the hidden surfaces for a higher resolution device takes no more time. if the faces are independently and identically distributed, then the execution time is linear in the number of faces. in particular, the execution time does not increase with the depth complexity. this algorithm overlays a grid on the screen whose fineness depends on the number and size of the faces. edges and faces are sorted into grid cells. only objects in the same cell can intersect or hide each other. also, if a face completely covers a cell then nothing behind it in the cell is relevant. three programs have tested this algorithm. the first verified the variable grid concept on 50,000 intersecting edges. the second verified the linear time, fast speed, and irrelevance of depth complexity for hidden lines on 10,000 spheres. this also tested depth complexities up to 30, and showed that perspective scenes with the farther objects smaller are even faster to calculate. the third verified this for hidden surfaces on 3000 squares. wm randolph franklin zeus: a toolkit and approach for building distributed multi-agent systems hyacinth s nwana divine t. ndumu lyndon c. lee jaron c. collis dealing with uncertainty in fine motion: a neural approach enrique cervera angel p. del pobil edward marta miguel a. 
serna a fast relighting engine for interactive cinematic lighting design we present new techniques for interactive cinematic lighting design of complex scenes that use procedural shaders. deep-framebuffers are used to store the geometric and optical information of the visible surfaces of an image. the geometric information is represented as collections of oriented points, and the optical information is represented as bi-directional reflection distribution functions, or brdfs. the brdfs are generated by procedurally defined surface texturing functions that spatially vary the surfaces' appearances. the deep-framebuffer information is rendered using a multi-pass algorithm built on the opengl graphics pipeline. in order to handle both physically-correct as well as non-realistic reflection models used in the film industry, we factor the brdf into independent components that map onto both the lighting and texturing units of the graphics hardware. a similar factorization is used to control the lighting distribution. using these techniques, lighting calculations can be evaluated 2500 times faster than previous methods. this allows lighting changes to be rendered at rates of 20hz in static environments that contain millions of objects with dozens of unique procedurally defined surface properties and scores of lights. reid gershbein pat hanrahan perspectives on simulation using gpss thomas j. schriber data representation and qualitative optimization - some issues in enterprise modeling sulin ba aimo hinkkanen andrew b. whinston a comparison of restart implementations marnix j. j. garvels dirk p. kroese the analyst - a workstation for analysis and design this paper describes a system currently being developed called the "analyst". the analyst is a support system for analysis and design methods. the method support facilities are being implemented using expert system or knowledge based techniques. the user can add rules for additional analysis of the application facts. the explicit representation of rules and facts (i.e. the knowledge base) makes it relatively easy to add methods to cover different phases or aspects of the software life cycle. mark stephens ken whitehead book reviews karen t. sutherland automatic indexing one of the first projects in computer analysis of natural language was to devise procedures for representing the subject content of a document by a few text-derived terms, a process called automatic indexing. although many developments have taken place over the years, the essential techniques of automatic indexing continue to focus on answering three basic questions: (1) how can automatic indexing be made to more adequately represent the subject content of a document? (2) how can automatic indexing improve recall by increasing the number of relevant documents retrieved? (3) how can automatic indexing improve precision by decreasing the number of non-relevant documents retrieved? during this tutorial session, we will review the advances that have been made, and the procedures that have been devised, to improve the effectiveness of automatic indexing. we will then consider the question of whether online retrieval systems can rely entirely on automatic indexing to achieve adequate retrieval effectiveness or whether some manual pre-indexing is still necessary. harold borko an architecture for easy web page updating john aycock michael levy prolog as a simulation language prolog is a rather new language and is very different from traditional languages.
prolog is favored by the japanese for their fifth generation computer systems. the acronym prolog is derived from programming in logic and emphasizes the derivation of the language from predicate logic. prolog can be considered as a general purpose very high level language, best suited for general symbol manipulation, intelligent and flexible database handling or problems, where some kind of search is required. examples of application areas are computer aided design, database and "knowledge-base" management, natural language processing and rapid prototyping. it is the purpose of this paper to demonstrate, how prolog can be used as a tool in a simulation project. the paper consists of two parts: a survey of the language prolog and a description of t-prolog, a prolog based simulation language, using a process interaction approach. heimo h. adelsberger empirical investigation of knowledge representation servers: design issues and applications experience with krs brain r. gaines measurement in motion marge cappo kathy darling computer rendering of fractal curves and surfaces fractals are a class of highly irregular shapes that have myriad counterparts in the real world, such as islands, river networks, turbulence, and snowflakes. classic fractals include brownian paths, cantor sets, and plane- filling curves. nearly all fractal sets are of fractional dimension and all are nowhere differentiable. previously published procedures for calculating fractal curves employ shear displacement processes, modified markov processes, and inverse fourier transforms. they are either very expensive or very complex and do not easily generalize to surfaces. this paper presents a family of simple methods for generating and displaying a wide class of fractal curves and surfaces. in so doing, it introduces the concept of statistical subdivision in which a geometric entity is split into smaller entities while preserving certain statistical properties. loren c. carpenter the "parallel vectors" operator: a vector field visualization primitive in this paper we propose an elementary operation on a pair of vector fields as a building block for defining and computing global line-type features of vector or scalar fields. while usual feature definitions often are procedural and therefore implicit, our operator allows precise mathematical definitions. it can serve as a basis for comparing feature definitions and for reuse of algorithms and implementations. applications focus on vortex core methods. ronald peikert martin roth towards a legal analogical reasoning system: knowledge representation and reasoning methods analogy has many important functions in the domain of law. since the number of legal rules is restricted and their content is often incomplete, it is necessary at times for a lawyer to opt for an analogical application of a legal rule to a given case in order to decide the case properly. he may apply the rule, though it may not have originally been deemed related to such an event, on the basis of some similarity between the event of the case and the requirement of the relevant legal rule. this type of reasoning is called legal analogy. this paper analyzes an actual case of legal analogy in the field of japanese civil law in order to clarify the reasoning methods used in analogy, as well as knowledge to justify the analogy. finally it will be shown how the knowledge is utilized in a symbolic reasoning system both in terms of inverse and standard resolution. 
hajime yoshino makoto haraguchi seiichiro sakurai sigeru kagayama it knows what you're going to do: adding anticipation to a quakebot the complexity of ai characters in computer games is continually improving; however they still fall short of human players. in this paper we describe an ai bot for the game quake ii that tries to incorporate some of the missing capabilities. this bot is distinguished by its ability to build its own map as it explores a level, use a wide variety of tactics based on its internal map, and in some cases, anticipate its opponents' actions. the bot was developed in the soar architecture and uses dynamical hierarchical task decomposition to organize its knowledge and actions. it also uses internal prediction based on its own tactics to anticipate its opponents' actions. this paper describes the implementation, its strengths and weaknesses, and discusses future research. john e. laird haptic rendering: programming touch interaction with virtual objects haptic rendering is the process of computing and generating forces in response to user interactions with virtual objects. recent efforts by our team at mit's ai laboratory have resulted in the development of haptic interface devices and algorithms for generating the forces of interaction with virtual objects. this paper focuses on the software techniques needed to generate sensations of contact interaction and material properties. in particular, the techniques we describe are appropriate for use with the phantom haptic interface, a force generating display device developed in our laboratory. we also briefly describe a technique for representing and rendering the feel of arbitrary polyhedral shapes and address issues related to rendering the feel of non-homogeneous materials. a number of demonstrations of simple haptic tasks which combine our rendering techniques are also described. k. salisbury d. brock t. massie n. swarup c. zilles learning state features from policies to bias exploration in reinforcement learning bryan singer manuela veloso a repository of knowledge about handling exceptions in multi-agent systems a critical challenge to creating effective agent-based systems is allowing them to operate effectively in environments where failures ('exceptions') can occur. an important barrier to achieving this has been the lack of systematized dissemination of exception handling techniques. this paper describes a semi-formal, web-accessible repository, built as an augmentation of the mit process handbook, that is designed to enable learning about, adding to, and exploiting multiagent system exception handling expertise. mark klein scinema event jeff linnell linear discriminant analysis using genetic algorithms aaron h. konstam voxel space automata: modeling with stochastic growth processes in voxel space n. greene the discriminability of colored-patterns: less than meets the eye brian wandell an efficient representation for irradiance environment maps we consider the rendering of diffuse objects under distant illumination, as specified by an environment map. using an analytic expression for the irradiance in terms of spherical harmonic coefficients of the lighting, we show that one needs to compute and use only 9 coefficients, corresponding to the lowest-frequency modes of the illumination, in order to achieve average errors of only 1%. in other words, the irradiance is insensitive to high frequencies in the lighting, and is well approximated using only 9 parameters.
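a hedged sketch of the 9-coefficient evaluation described here, written as the quadratic form in the surface normal that the abstract introduces next; the constants and matrix layout are the commonly quoted ones for this representation and should be read as assumptions rather than text from the paper:

```python
# sketch of evaluating diffuse irradiance from 9 spherical-harmonic lighting
# coefficients as a quadratic form in the surface normal. the layout and the
# constants c1..c5 follow the commonly quoted prefiltering formulation and are
# assumptions here, not text taken from this abstract.
import numpy as np

c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def irradiance_matrix(L):
    """L is a dict of the 9 sh lighting coefficients, keyed by (l, m)."""
    return np.array([
        [c1 * L[2, 2],  c1 * L[2, -2], c1 * L[2, 1],  c2 * L[1, 1]],
        [c1 * L[2, -2], -c1 * L[2, 2], c1 * L[2, -1], c2 * L[1, -1]],
        [c1 * L[2, 1],  c1 * L[2, -1], c3 * L[2, 0],  c2 * L[1, 0]],
        [c2 * L[1, 1],  c2 * L[1, -1], c2 * L[1, 0],  c4 * L[0, 0] - c5 * L[2, 0]],
    ])

def irradiance(M, normal):
    """irradiance at a surface point with unit normal (x, y, z)."""
    n = np.append(np.asarray(normal, dtype=float), 1.0)  # homogeneous normal
    return float(n @ M @ n)
```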
in fact, we show that the irradiance can be procedurally represented simply as a quadratic polynomial in the cartesian components of the surface normal, and give explicit formulae. these observations lead to a simple and efficient procedural rendering algorithm amenable to hardware implementation, a prefiltering method up to three orders of magnitude faster than previous techniques, and new representations for lighting design and image-based rendering. ravi ramamoorthi pat hanrahan a survey of ranking, selection, and multiple comparison procedures for discrete-event simulation james r. swisher sheldon h. jacobson map/1 tutorial map/1tm is a simulation-based modeling and analysis program developed by pritsker and associates for use in designing and evaluating discrete part manufacturing systems. map/1 is a generalized model of a discrete part manufacturing system, which is parameterized to represent a specific manufacturing facility. parameters which describe the manufacturing facility configuration and associated operating rules are input to the map/1 analysis program, which produces a set of predefined output reports and provides measures of facility performance. using map/1, in-depth simulation analyses may be performed to investigate alternative system configurations and operating procedures. robin j. miner laurie j. rolston commercial and industrial ai toshinori munakata the logic of common sense vladimir lifschitz computer animation with cinema j. p. poorte d. a. davis experimental investigation of a forest of quadtrees (abstract only) forest of quadtrees have recently been recognized as an efficient data structure to store quadtrees. of special interest is the case in which a single 2m by 2m square region is contained in a 2n by 2n image. a study of the effect of placement of the square region inside the image was done to determine the usefulness of forest of quadtrees. the results indicate that a forest of quadtrees improves over the normal quadtree for many positions of the square region as the worst case occurs when both structures are equal. an analysis of the best and worst case position will be presented. narinder lakhani s. s. iyengar adaptation of scan and slit-scan techniques to computer animation the adaptation and generalization of scan and slit-scan animation stand techniques for use in computer generated animation is discussed. scan and slit-scan techniques are based on moving artwork, camera, and, for slit-scan, a thin aperture while the camera shutter is open. these processes can be described as selectively sampling an environment over time and recording the result as a single image. sequences of such images form the animated film. rather than use mechanical means to accomplish this, it is possible to develop algorithms which mimic this processes but are based on sampling dynamic environment descriptions to generate computer produced images. the use of computer graphics allows these techniques to be generalized in ways difficult or impossible even with very elaborate animation stands. use of multiple independent scanning apertures and three-dimensional environments are natural generalizations. the exact algorithms used depend on the characteristics of the graphics systems used. an approach based on using a real-time shaded graphics system and an approach using frame buffer systems are outlined. the first approach can also be applied to refresh vector graphics systems. frederic i. parke the effective use of animation in simulation model validation christopher l. 
swider kenneth w. bauer thomas f. schuppe correlation of markov chains simulated in parallel paul glasserman pirooz vakili progressive negotiation for time-constrained autonomous agents abdel-illah mouaddib timewarp rigid body simulation the traditional high-level algorithms for rigid body simulation work well for moderate numbers of bodies but scale poorly to systems of hundreds or more moving, interacting bodies. the problem is unnecessary synchronization implicit in these methods. jefferson's timewarp algorithm [22] is a technique for alleviating this problem in parallel discrete event simulation. rigid body dynamics, though a continuous process, exhibits many aspects of a discrete one. with modification, the timewarp algorithm can be used in a uniprocessor rigid body simulator to give substantial performance improvements for simulations with large numbers of bodies. this paper describes the limitations of the traditional high-level simulation algorithms, introduces jefferson's algorithm, and extends and optimizes it for the rigid body case. it addresses issues particular to rigid body simulation, such as collision detection and contact group management, and describes how to incorporate these into the timewarp framework. quantitative experimental results indicate that the timewarp algorithm offers significant performance improvements over traditional high-level rigid body simulation algorithms, when applied to systems with hundreds of bodies. it also helps pave the way to parallel implementations, as the paper discusses. brian mirtich a framework for assisted exploration with collaboration we approach the problem of exploring a virtual space by exploiting positional and camera- model constraints on navigation to provide extra assistance that focuses the user's explorational wanderings on the task objectives. our specific design incorporates not only task-based constraints on the viewer's location, gaze, and viewing parameters, but also a personal "guide" that serves two important functions: keeping the user oriented in the navigation space, and "pointing" to interesting subject areas as they are approached. the guide's cues may be ignored by continuing in motion, but if the user stops, the gaze shifts automatically toward whatever the guide was interested in. this design has the screndipitous feature that it automatically incorporates a nested collaborative paradigm simply by allowing any given viewer to be seen as the "guide" of one or more viewers following behind; the leading automated guide (we tend to select a guide dog for this avatar) can remind the leading live human guide of interesting sites to point out, while each real human collaborator down the chain has some choices about whether to follow the local leader's hints. we have chosen vrml as our initial development medium primarily because of its portability, and we have implemented a variety of natural modes for leading and collaborating, including ways for collaborators to attach to and detach from a particular leader. eric a. wernert andrew j. hanson on liouvillian solutions of homogeneous linear differential equations this paper deals with the problem of finding liouvillian solutions of an nth order homogeneous linear differential equation l(y)=0 with coefficients in a differential field k whose field of constants is c. for second order linear differential equations such an algorithm has been given by j. kovacic and implemented. 
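before the abstract turns to the nth-order case below, a short note on the logarithmic-derivative substitution it relies on, stated here as standard background rather than quoted from the paper:

```latex
% the standard logarithmic-derivative substitution behind the approach above,
% written for the second-order case; standard background, not quoted text.
\[
  y'' + a\,y' + b\,y = 0, \qquad u = \frac{y'}{y}
  \;\Longrightarrow\;
  u' + u^{2} + a\,u + b = 0,
  \qquad y = e^{\int u}.
\]
```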
a general decision procedure for finding liouvillian solutions of nth order equations has been given by m.f. singer, but the resulting algorithm, although constructive, is not in implementable form even for second order equations. the algorithm uses the fact that, if l(y)=0 has a liouvillian solution, then l(y)=0 has a solution z such that u=z′/z is algebraic over k, which means that l(y)=0 has a solution z of the form exp(∫u), where u is algebraic over k. since the logarithmic derivative u=z′/z of a solution z is a solution of the riccati equation r(y)=0 associated to l(y)=0, the problem thus reduces to finding an algebraic solution u of r(y)=0. this task is now split into two parts: (i) to find the set deg(n) of possible degrees n for the minimal polynomial p(x) of u over k, and (ii) to compute, for each possible degree of p(x), the possible coefficients of p(x). if we denote c(ii) the complexity of the second step and #deg(n) the size of the set deg(n), we see that the complexity of the whole procedure is of the form c(ii)·#deg(n) and thus exponential in #deg(n). this shows that the only way to make the procedure effective is to get sharp bounds on the size of the set deg(n), which is the scope of this paper. initially, we construct, using representation theory of linear groups, a set deg(n) where all n are of bounded size and only divisible by a small set of primes. because of the divisibility condition the size of the set deg(n) is asymptotically small. we derive the upper bound 2n^4·π(n+1) for #deg(n), where n is the degree of l(y)=0 and π(n) denotes the number of primes less than or equal to n. this improves the bound on #deg(n) obtained from jordan's theorem. the bound on the size of the primes that divide n is also a bound for the primes that divide the algebraic degree of the logarithmic derivative of at least one solution of l(y)=0. from the conditions on the size of the primes, some structure of the differential galois group can also be derived. next, we study the action of the differential galois group on u to get sharp bounds for #deg(n). the resulting set deg(n) is the best possible one for n=2 and probably small enough to allow an implementation of the singer algorithm for n=3. we show that, for an algebraic solution u of r(u)=0, the degree n of the minimal polynomial p(x) of u equals the size of the orbit of u under the action of the differential galois group of l(y)=0. we then bound the size of the orbit of u by a value that can be effectively computed from a classification of the finite subgroups of pgl(n,c). f. ulmer j. calmet illuminating micro geometry based on precomputed visibility many researchers have been arguing that geometry, bump maps, and brdfs present a hierarchy of detail that should be exploited for efficient rendering purposes. in practice, however, this is often not possible due to inconsistencies in the illumination for these different levels of detail. for example, while bump map rendering often only considers direct illumination and no shadows, geometry-based rendering and brdfs will mostly also respect shadowing effects, and in many cases even indirect illumination caused by scattered light. in this paper, we present an approach for overcoming these inconsistencies. we introduce an inexpensive method for consistently illuminating height fields and bump maps, as well as simulating brdfs based on precomputed visibility information. with this information we can achieve a consistent illumination across the levels of detail.
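one hedged illustration of precomputed visibility for a height field, in the spirit of the method just described though not claimed to be its exact formulation: per-texel horizon angles in a few azimuthal directions, tested against the light elevation at shading time:

```python
# a hedged sketch of one common form of precomputed visibility for a height
# field: per-texel horizon angles in a few azimuthal directions, compared with
# the light elevation at shading time. this names a standard horizon-mapping
# idea for context; it is not claimed to be the paper's exact formulation.
import numpy as np

def horizon_angles(height, directions=8, max_steps=32):
    """per-texel maximum occlusion angle (radians) for each azimuth."""
    h, w = height.shape
    out = np.zeros((directions, h, w))
    for d in range(directions):
        dx = np.cos(2 * np.pi * d / directions)
        dy = np.sin(2 * np.pi * d / directions)
        for y in range(h):
            for x in range(w):
                best = 0.0
                for step in range(1, max_steps):
                    sx, sy = int(round(x + dx * step)), int(round(y + dy * step))
                    if not (0 <= sx < w and 0 <= sy < h):
                        break
                    rise = height[sy, sx] - height[y, x]
                    best = max(best, np.arctan2(rise, step))
                out[d, y, x] = best
    return out

def lit(horizons, x, y, light_azimuth, light_elevation):
    """true if the light clears the precomputed horizon at texel (x, y)."""
    d = int(round(light_azimuth / (2 * np.pi) * horizons.shape[0])) % horizons.shape[0]
    return light_elevation > horizons[d, y, x]
```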
the method we propose offers significant performance benefits over existing algorithms for computing the light scattering in height fields and for computing a sampled brdf representation using a virtual gonioreflectometer. the performance can be further improved by utilizing graphics hardware, which then also allows for interactive display. finally, our method also approximates the changes in illumination when the height field, bump map, or brdf is applied to a surface with a different curvature. wolfgang heidrich katja daubert jan kautz hans-peter seidel maintaining quality and quantity production constraints in a multi-agent batch scheduling environment jim butler key problems and thorny issues in multidimensional visualization georges grinstein sharon laskowski alfred inselberg software/modelware application requirements (panel) david withers phil cohen laura giussani tom schuppe marvin seppanen animation from observation: motion capture and motion editing michael gleicher observations on the complexity of composable simulation ernest h. page jeffrey m. opper octree based assembly sequence generation this paper describes a system for the automatic recognition of assembly features and the generation of assembly/disassembly sequences. the paper starts by reviewing the nature and use of assembly features. one of the conclusions drawn from this survey is that the majority of assembly features involve sets of spatially adjacent faces. two principal types of adjacency relationships are identified and an algorithm is presented for identifying assembly features which arise from "_spatial_" and "_contact_" face adjacency relationships (known as _s-adjacency_ and _c-adjacency_ respectively). the algorithm uses an octree representation of a b-rep model to support the geometric reasoning required to locate assembly features on disjoint bodies. a pointerless octree representation is generated by recursively sub-dividing the assembly model's bounding box into octants which are used to locate: (1) those portions of faces which are _c-adjacent_ (i.e., they effectively touch within the tolerance of the octree), and (2) those portions of faces which are _s-adjacent_ to a nominated face. the resulting system can locate and partition spatially adjacent faces in a wide range of situations and at different resolutions. the assembly features located are recorded as attributes in the b-rep model and are then used to generate a disassembly sequence plan for the assembly. this sequence plan is represented by a _transition state tree_ which incorporates knowledge of the availability of feasible gripping features. by way of illustration, the algorithm is applied to several trial components. raymond c. w. sung jonathan r. corney doug e. r. clark robotics: the new automation tool industrial robots have seen limited use by industries for over a decade but until the auto industry introduced robotics for spot welding applications in the late 60s the robot was not considered seriously. now they are readily accepted throughout industry. the reasons for their current popularity are the rapidly increasing costs of labor and the seemingly declining productivity of today's workers. in addition, robots today are credited for additional product and quality improvements not seriously considered in the past, benefits similar to those that have long been associated with hard automation. this paper looks at robotics from a user's viewpoint and addresses some of the benefits and concerns attributable to their use.
to this end, this paper will describe several applications that are currently used in production with results relative to production gains, side benefits and operator acceptance. harold r. marcotte building an interactive, three-dimensional virtual world raymond mazza providing paradigm orientation without implementational handcuffs howard e. shrobe integration of various emotion eliciting factors for life-like agents kwangyong lee executing the dod modeling and simulation strategy - making simulation systems of systems a reality james w. hollenbach william l. alexander learning monotone log-term dnf formulas based on the uniform distribution pac learning model, the learnability for monotone disjunctive normal form formulas with at most o(logn) terms (o(logn)-term mdnf) is investigated. using the technique of restriction, an algorithm that learns o(logn)-term mdnf in polynomial time is given. yoshifumi sakai akira maruoka time and norms: a formalisation in the event-calculus rafael hernandez marin giovanni sartor progressive compression for lossless transmission of triangle meshes lossless transmission of 3d meshes is a very challenging and timely problem for many applications, ranging from collaborative design to engineering. additionally, frequent delays in transmissions call for progressive transmission in order for the end user to receive useful successive refinements of the final mesh. in this paper, we present a novel, fully progressive encoding approach for lossless transmission of triangle meshes with a very fine granularity. a new valence-driven decimating conquest, combined with patch tiling and an original strategic retriangulation is used to maintain the regularity of valence. we demonstrate that this technique leads to good mesh quality, near-optimal connectivity encoding, and therefore a good rate-distortion ratio throughout the transmission. we also improve upon previous lossless geometry encoding by decorrelating the normal and tangential components of the surface. for typical meshes, our method compresses connectivity down to less than 3.7 bits per vertex, 40% better in average than the best methods previously reported [5, 18]; we further reduce the usual geometry bit rates by 20% in average by exploiting the smoothness of meshes. concretely, our technique can reduce an ascii vrml 3d model down to 1.7% of its size for a 10-bit quantization (2.3% for a 12-bit quantization) while providing a very progressive reconstruction. pierre alliez mathieu desbrun constructing material interfaces from data sets with volume-fraction information kathleen s. bonnell kenneth i. joy bernd hamann daniel r. schikore mark duchaineau mental models, text interpretation, and knowledge acquisition ashok k. goel kurt p. eiselt an interactive tool for placing curved surfaces without interpenetration john m. snyder the physiology of prolog expert system inference engine the current literature on expert systems development in prolog is replete with sample inference engines. however, the available models tend to be fragmentary and simplistic. important issues such as how to incorporate arithmetic evaluation into the reasoning process are often ignored. therefore, we present and describe a more realistic prolog expert system inference engine. the inference engine is an enhanced version of a standard model. it includes features and employs techniques that are non- existent in current models. we give special consideration to the relationship that the inference engine bears to the underlying prolog engine. 
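as a deliberately simplified, language-neutral illustration of the backward chaining such an inference engine performs (propositional only, written in python rather than prolog, and omitting the unification and sld resolution the abstract discusses next):

```python
# a deliberately simplified, propositional backward chainer -- illustrative of
# the control flow such an inference engine uses, but omitting the unification
# and sld resolution that the abstract goes on to discuss (and using python
# rather than prolog). rules map a goal to alternative lists of subgoals.
RULES = {
    "grant_loan": [["good_credit", "stable_income"]],
    "good_credit": [["no_defaults"], ["high_savings"]],
}
FACTS = {"stable_income", "high_savings"}

def prove(goal):
    """true if the goal is a known fact or some rule's subgoals all hold."""
    if goal in FACTS:
        return True
    for body in RULES.get(goal, []):
        if all(prove(subgoal) for subgoal in body):
            return True
    return False

print(prove("grant_loan"))  # True: high_savings -> good_credit, plus stable_income
```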
the role of unification and sld resolution in the life of a prolog expert system inference engine is also discussed. david roach hal berghel ap2-earth: a simulation based system for the estimating and planning of earth moving operations dany hajjar simaan abourizk spacial classification and multi-spectral fusion with neural networks craig harston grand challenges in ai raj reddy hot topics in graphics hardware nick england multi-dimensional input techniques and articulated figure positioning by multiple constraints a six degree-of-freedom input device presents some novel possibilities for manipulating and positioning three-dimensional objects. some experiments in using such a device in conjunction with a real- time display are described. a particular problem which arises in positioning an articulated figure is the solution of three-dimensional kinematics subject to multiple joint position goals. a method using such an input device to interactively determine positions and a constraint satisfaction algorithm which simultaneously achieves those constraints is described. examples which show the power and efficiency of this method for key-frame animation positioning are demonstrated. norman i. badler kamran h. manoochehri david baraff real-time animation and motion capture in web human director (whd) motion capture systems usually work in conjunction with complex 3d applications, such as 3d studio max by kinetix or maya by alias/wavefront. once models have been created in these applications, motion capture systems provide the necessary data input to animate these models. the context of this paper introduces a simple motion capture system, which is integrated into a web-based application, thus allowing hanim humanoids to be animated using vrml and java. since web browser/vrml plugin context is commonly available on computers, the presented motion capture application is easy to use on any platform. taking benefit of a standard language as vrml makes the exploitation of produced animation easier than other commercial application with their specific formats. christian babski daniel thalmann cognitive modeling for games and animation john funge capability-based agent matchmaking anthony cassandra damith chandrasekara marian nodine toward adding knowledge to learning algorithms for indexing legal cases case-based reasoning systems have shown great promise for legal argumentation, but their development and wider availability are still slowed by the cost of manually representing cases. in this paper, we present our recent progress toward automatically indexing legal opinion texts for a cbr system. our system smile uses a classification-based approach to find abstract fact situations in legal texts. to reduce the complexity inherent in legal texts, we take the individual sentences from a marked-up collection of case summaries as examples. we illustrate how integrating a legal thesaurus and linguistic information with a machine learning algorithm can help to overcome the difficulties created by legal language. the paper discusses results from a preliminary experiment with a decision tree learning algorithm. experiments indicate that learning on the basis of sentences, rather than full documents, is effective. they also confirm that adding a legal thesaurus to the learning algorithm leads to improved performance for some, but not all, indexing concepts. stefanie bruninghaus kevin d. ashley simulation of manufacturing systems averill m. law michael g. 
mccomas linkwinds: interactive scientific data analysis and visualization allan s. jacobson andrew l. berkin martin n. orton wild wild west yves metraux computer graphics, are we forcing people to evolve? roger e. wilson brenda laurel terence mckenna leonard shlain data filtering for automatic classification of rocks from reflectance spectra the ability to identify the mineral composition of rocks and soils is an important tool for the exploration of geological sites. for instance, nasa intends to design robots that are sufficiently autonomous to perform this task on planetary missions. spectrometer readings provide one important source of data for identifying sites with minerals of interest. reflectance spectrometers measure intensities of light reflected from surfaces over a range of wavelengths. spectral intensity patterns may in some cases be sufficiently distinctive for proper identification of minerals or classes of minerals. for some mineral classes, carbonates for example, specific short spectral intervals are known to carry a distinctive signature. finding similar distinctive spectral ranges for other mineral classes is not an easy problem. we propose and evaluate data-driven techniques that automatically search for spectral ranges optimized for specific minerals. in one set of studies, we partition the whole interval of wavelengths available in our data into sub-intervals, or bins, and use a genetic algorithm to evaluate a candidate selection of subintervals. as alternatives to this computationally expensive search technique, we present an entropy-based heuristic that gives higher scores for wavelengths more likely to distinguish between classes, as well as other greedy search procedures. results are presented for four different classes, showing reasonable improvements in identifying some, but not all, of the mineral classes tested. jonathan moody ricardo silva joseph vanderwaart collaborative plan construction for multiagent mutual planning (abstract) ei-ichi osawa mario tokoro design-time simulation of a large-scale, distributed object system we present a case study in using simulation at design time to predict the performance and scalability properties of a large-scale distributed object system. the system, called consul, is a network management system designed to support hundreds of operators managing millions of network devices. it is essential that a system such as consul be designed with performance and scalability in mind, but due to consul's complexity and scale, it is hard to reason about performance and scalability using ad hoc techniques. we built a simulation of consul's design to guide the design process by enabling performance and scalability analysis of various design alternatives. a major challenge in doing design-time simulation is that many parameters for the simulation are based on estimates rather than measurements. we developed analysis methods that derive conclusions that are valid in the presence of estimation errors. in this article, we describe our scalability analysis method for design simulations of distributed object systems. the main idea is to use relative and comparative reasoning to analyze design alternatives and compare transaction behaviors. we demonstrate the analysis approach by describing its application to consul.
svend frølund pankaj garg delta's virtual physics laboratory (case study): a comprehensive learning platform on physics & astronomy perhaps the most effective instrument for simplifying and clarifying the comprehension of any complex mathematical or scientific theory is visualisation. moreover, using interactivity & 3d real-time representations, one can easily explore and hence learn quickly in the virtual environments. the concept of virtual and safe laboratories has vast potential in education. with the aid of computer simulations & 3d visualisations, many dangerous or cumbersome experiments may be implemented in the virtual environments, with rather small effort. nonetheless, visualisation alone is of little use if the respective simulation is not scientifically accurate. hence a rigorous combination of precise computation and sophisticated visualisation, presented through an intuitive user interface, is required to realise a virtual laboratory for education. here we introduce delta's virtual physics laboratory, comprising a wide range of applications in the field of physics & astronomy, which can be implemented and used as an interactive learning tool on the world wide web. sepideh chakaveh udo zlender detlef skaley konstantinos fostiropoulos dieter breitschwerdt hybrid pattern recognition system capable of self-modification charles w. glover nageswara s. v. rao e. m. oblow space-efficient search algorithms richard e. korf inverse global illumination: recovering reflectance models of real scenes from photographs yizhou yu paul debevec jitendra malik tim hawkins moving right along: a computational model of metaphoric reasoning about events srinivas narayanan resent car hotaka koike cognitive modeling: knowledge, reasoning and planning for intelligent characters john funge xiaoyuan tu demetri terzopoulos panned/zoomed landscape video sequences composited with computer-generated still images eihachiro nakamae xueying qin guofang jiao katsumi tadamura przemyslaw rokita yuji usagawa model diagnosis using the condition specification: from conceptualization to implementation c. michael overstreet ernest h. page richard e. nance collaborative augmented reality: exploring dynamical systems anton fuhrmann helwig löffelmann dieter schmalstieg regression metamodeling in simulation using bayesian methods russell c. h. cheng the dual dfa learning problem (extended abstract): hardness results for programming by demonstration and learning first-order representations william w. cohen high performance presence-accelerated ray casting we present a novel presence acceleration for volumetric ray casting. a highly accurate estimation for object presence is obtained by projecting all grid cells associated with the object boundary on the image plane. memory space and access time are reduced by run-length encoding of the boundary cells, while boundary cell projection time is reduced by exploiting projection templates and multiresolution volumes. efforts have also been made towards a fast perspective projection as well as interactive classification. we further present task partitioning schemes for effective parallelization of both boundary cell projection and ray traversal procedures. good load balancing has been reached by taking full advantage of both the optimizations in the serial rendering algorithm and shared-memory architecture. our experimental results on a 16-processor sgi power challenge have shown interactive rendering rates for 256^3 volumetric data sets at 10-30 hz.
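a small, assumed illustration of run-length encoding a per-row boundary mask, the kind of compaction this abstract applies to its boundary cells; the paper's actual data structure is not shown here:

```python
# a small sketch of run-length encoding a per-row boolean "boundary cell" mask,
# in the spirit of the compaction described above; the exact data structure
# used in the paper is not reproduced, so treat this as an assumed illustration.
def rle_encode(row):
    """encode a list of 0/1 flags as (start_index, run_length) pairs of 1-runs."""
    runs, start = [], None
    for i, flag in enumerate(row):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            runs.append((start, i - start))
            start = None
    if start is not None:
        runs.append((start, len(row) - start))
    return runs

def rle_cells(runs):
    """iterate the occupied cell indices back out of the run-length form."""
    for start, length in runs:
        yield from range(start, start + length)

row = [0, 1, 1, 1, 0, 0, 1, 0]
print(rle_encode(row))                    # [(1, 3), (6, 1)]
print(list(rle_cells(rle_encode(row))))   # [1, 2, 3, 6]
```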
this paper describes the theory and implementation of our algorithm, and shows its superiority over the shear- warp factorization approach. ming wan arie kaufman steve bryson implicit fairing of irregular meshes using diffusion and curvature flow mathieu desbrun mark meyer peter schröder alan h. barr parallel pattern recognition using fuzzy cooperative expert systems moti schneider eliahu shnaider abe kandel the effect of synchronization requirements on the performance of distributed simulations recent experiments have shown that conservative methods can achieve good performance by exploiting the characteristics of the system being simulated. in this paper we focus on the interrelationship between run time and synchronization requirements of a distributed simulation. a metric that considers the effect of lookahead and the physical rate of transmission of messages, and an arrival approximation that models the effect of synchronization requirements on the run time are developed. it is shown that even when good lookahead is exploited in the system, poor run-time performance is achieved if an inefficient mapping of lps to processors is used. murali s. shanker b. eddy patuwo issues and techniques in touch-sensitive tablet input william buxton ralph hill peter rowley p'tit parc bruno follet concurrent simulation: an alternative to distributed simulation the advent of a new generation of multiprocessors allows new approaches to parallel simulation. previous work in this area has concentrated on distributed simulation; this approach uses spatial decomposition to allow simulations to be run on networks of machines, where the message flow between processors in the network is related closely to the topology of the system being simulated. this paper presents an alternate approach, concurrent simulation, which is based on temporal decomposition. this allows natural use to be made of the shared memory facilities and load-balancing capabilities of the new multiprocessors, and it overcomes some fundamental limitations of the distributed approach. douglas w. jones trends in robotics george a. bekey autostereoscopic displays and computer graphics autostereoscopic displays present a three-dimensional image to a viewer without the need for glasses or other encumbering viewing aids. three classes of autostereoscopic displays are described: reimaging displays, volumetric displays and parallax displays. reimaging displays reproject an existing three-dimensional object to a new location or depth. volumetric displays illuminate points in a spatial volume. parallax displays emit directionally varying image information into the viewing zone. parallax displays are the most common autostereoscopic displays and are most compatible with computer graphics. different display technologies of the three types are described. computer graphics techniques useful for three-dimensional image generation are outlined. michael halle the luminous room: some of it, anyway john underkoffler daniel chak gustavo s. santos jessica laszlo hiroshi ishii an agent-oriented multiagent planning system this paper describes a multiagent planning system, mupac, that formulates cooperative plans efficiently. it contains three features: meta-level planning, breakable and unbreakable action representations, and an integrated agent screening and assignment procedure. the meta-level planning transforms an original goal statement into a skeletal plan, which is easier to follow and helps reduce the chance of conflicts at low-level actions. 
the breakable/unbreakable action representation specifies specific agent-action requirements. it also specifies concurrency and cooperation possibilities among actions. it makes plan generation and agent assignment straight forward, thus reducing the reasoning time of finding parallelism and cooperation among agents. the integrated agent screening and assignment procedure formulates plans following the skeletal plan. the performance of mupac has been discussed along four aspects: planning efficiency, planning flexibility, agent cooperation, and plan quality. results have shown that significant improvement has been achieved. kai-hsiung chang william b. day suebskul phiphobmongkol a simulation model of a wheat elevator system using slam this paper shows that simulation can be used as an analytical tool for agricultural marketing and the potential of the slam simulation language to model such systems. this analysis shows the effects upon the elevator complex from two different harvest season scenarios. the model also has the versatility of being able to derive the effects of different decision strategies on the elevator complex without designing a totally different model. the effects of elevator breakdowns, building of more elevator storage facilities, smaller and larger dumping areas, and different initial elevator inventory levels can be easily investigated. also, different elevator systems (country, terminal, and export) can be developed from the existing model. thomas r. harris behrokh khoshnevis a realistic camera model for computer graphics craig kolb don mitchell pat hanrahan hungry robots tony belpaeme andreas birk performance of clustering algorithms on time-dependent spatial patterns a single linkage clustering algorithm adapted for points distributed over time (slot) is used to recover clusters from coplanar points associated with time parameters. cluster number and shape determine separability and hence effectiveness of the algorithm. performance in simulation experiments also depended on the probability of recording cluster points. slot links points observed at different times if they are within some limiting distance, and the distance parameter becomes critical when detectability is low. performance comparisons are made with other algorithms, and results are presented in the context of a rule-based expert system for solving problems involving cluster analysis of time-dependent spatial patterns. ben pinkowski le ciel est a tout le monde anne bourdais ali ensad frederic grably bruno follet polygonal inductive generalisation system d. a. newlands g. i. webb lester: using paradigm cases in a quasi-precedential legal domain we are developing lester (legal expert system for termination of employment review), a case-based reasoning program to advise in the area of unjust discharge from employment under collective bargaining agreements. lester uses paradigm cases to reason in a legal domain that is not governed by a strong concept of precedent. this paper describes the domain and gives an overview of the current version of the program. k. a. lambert m. h. grunewald language assessment criteria for discrete simulation criteria are suggested for use in conducting comparative assessments of languages for use in discrete simulation. the criteria are grouped within the categories of simulation-specific criteria and general criteria. a discussion is provided concerning the significance the various assessment criteria have in modeling and simulation. 
suggestions are offered concerning the use of the criteria in a language selection process. james w. hooper stereo analyst: visualizing large stereoscopic imagery in real-time jason rosenberg gena hillhouse younian wang a multiple cooperating intelligent agents project progress report the multiple cooperating agents project (mcap) at east stroudsburg university will be described. mcap is an ongoing set of experiments in artificial intelligence designed around a set of up to four cooperating robots. the tool-manipulating robots operate in a world of workbenches, screws, nails, screwdrivers and hammers. natural language, english, is used as the human interface. research is underway in natural language recognition and generation, interaction of intelligent agents, modeling, planning and activity execution. system architecture will be discussed as well as the current status of those integrated subsystems making up the kernel. implementation data and future plans will also be presented. richard d. amori deriving empirical equations from simulation results analysis of variance is performed in a formal sensitivity analysis of a simulation model. this technique identifies those variables that significantly affect the response variable(s). then curves are fitted to the results to explain the response in terms of the significant variables. finally, the equations are available to optimize the system under study. leo j. boelhouwer advisory systems for pro se litigants increasing numbers of litigants represent themselves in court. advisory systems designed to help litigants understand the available legal remedies and satisfy the substantive and procedural requirements to obtain those remedies have the potential to assist these litigants and thereby reduce the burden that they impose on the courts. this paper presents a four-component model of advisory systems for pro se litigants. this model was implemented in the protection order advisory (poa), an advisory system for pro se protection order applicants. poa illustrates how existing inference, document-drafting, and interface-design techniques can be used to construct advisory systems for pro se litigants in a wide range of legal domains for which (1) determining whether a prima facie case is satisfied does not require open-textured reasoning, and (2) the documents required to initiate an action are characterized by homogeneity and simple structure. l. karl branting hierarchical splatting: a progressive refinement algorithm for volume rendering david laur pat hanrahan revival of lost creatures, planet of ocean yukiko homma m. c. leon brian blau verbal communication: using approximate sound propagation to design an inter-agents communication language jean-sebastien monzani daniel thalmann using background knowledge to speed reinforcement learning in physical agents this paper describes icarus, an agent architecture that embeds a hierarchical reinforcement learning algorithm within a language for specifying agent behavior. an icarus program expresses an approximately correct theory about how to behave with options at varying levels of detail, while the icarus agent determines the best options by learning from experience. we describe icarus and its learning algorithm, then report on two experiments in a vehicle control domain. the first examines the benefit of new distinctions about state, whereas the second explores the impact of added plan structure.
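as a hedged aside, the option-style learning just described can be made concrete with a small sketch. the following is a hypothetical illustration, not the icarus system or its behavior language: smdp-style q-learning over a few hand-written options in a toy lane-keeping task, where the options stand in for the background knowledge; the environment, option names, and rewards are all invented for the example.

import random

# toy 1-d "lane keeping" task: states are lane offsets -3..3, the goal is offset 0
STATES = list(range(-3, 4))
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.1

# options are small hand-written policies (the "background knowledge"):
# each maps a state to a primitive move and carries a termination test
OPTIONS = {
    "nudge_left":  (lambda s: -1, lambda s: s <= 0),
    "nudge_right": (lambda s: +1, lambda s: s >= 0),
    "hold":        (lambda s:  0, lambda s: True),
}

def step(state, move):
    """primitive transition: clamp the move and pay -1 per step, 0 at the goal."""
    nxt = max(-3, min(3, state + move))
    return nxt, (0.0 if nxt == 0 else -1.0)

def run_option(state, name):
    """execute an option until it terminates; return (s', discounted return, k)."""
    policy, beta = OPTIONS[name]
    ret, k = 0.0, 0
    while True:
        state, r = step(state, policy(state))
        ret += (GAMMA ** k) * r
        k += 1
        if beta(state) or k >= 10:
            return state, ret, k

Q = {(s, o): 0.0 for s in STATES for o in OPTIONS}

def greedy(s):
    return max(OPTIONS, key=lambda o: Q[(s, o)])

# smdp q-learning over options: the target uses the return accumulated while
# the option ran and discounts the bootstrap term by gamma**k
for episode in range(500):
    s = random.choice(STATES)
    for _ in range(20):
        o = random.choice(list(OPTIONS)) if random.random() < EPS else greedy(s)
        s2, ret, k = run_option(s, o)
        target = ret + (GAMMA ** k) * max(Q[(s2, o2)] for o2 in OPTIONS)
        Q[(s, o)] += ALPHA * (target - Q[(s, o)])
        s = s2
        if s == 0:
            break

print({s: greedy(s) for s in STATES})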
we show that background knowledge increases learning rate and asymptotic performance, and decreases plan size by three orders of magnitude, relative to the typical formulation of the learning problem in our test domain. daniel shapiro pat langley ross shachter automated learning of rules for heuristic classification systems a kent spackman a proposal for adding reality to expert systems human reasoning involves much more than just "true" and "false"; we use degrees of truth. for example, i am reasonably sure that telepathy is impossible, but i am not certain. expressing "reasonably certain" is easy in fuzzy logic, but is impossible in traditional logic --- either i believe telepathy possible or not possible. traditional logic is a forced choice system whereas fuzzy logic is not. since uncertainty of information in the knowledge base leads to uncertainty in the conclusions, the inference engine must be able to analyze the transmission of uncertainty from premises to conclusions. this uncertainty should be understandable to the user. we have before us an excellent example of acting via computer, assuming certainty, with the 500 point dow jones drop on october 19, 1987. had fuzzy logic been built into these trading programs, there might not have been such a panic, although neither human stupidity nor greed can be underestimated. without some method, based on a sound methodology, to deal with uncertainty, we can expect more "black mondays", whether it is in the space program, or banking, or the stock market, etc. it is the belief of the author fuzzy logic can lead the way. t. f. higginbotham the contour spectrum chandrajit l. bajaj valerio pascucci daniel r. schikore on being a teammate: experiences acquired in the design of robocup teams stacy marsella jafar adibi yaser al-onaizan gal a. kaminka ion muslea milind tambe a simulation model and analysis: integrating agv's with non-automated material handling companies that integrate old and new technologies need analysis methods with complex logic to evaluate the resulting system, making simulation a primary analysis tool. this paper presents the logic for a general purpose simulation model representing an automatic guided vehicle (agv) system integrated with traditional material handling equipment. to accurately represent type of system, logic includes vehicle loading/unloading and conventional equipment passing capability. the model's flexibility accommodates any combination of straight aisles and intersections through minor adjustments to the general model. the logic concepts are implemented using the siman simulation language. an application is presented which demonstrates using the model to analyze the impact of interfacing agv and traditional traffic upon aisle congestion and overall system performance. catherine m. harmonosky randall p. sadowskl high school weekly timetabling by evolutionary algorithms carlos fernandes joão paulo caldeira fernando melicio agostinho rosa special session on effective information presentation techniques are computer graphics displays as effective as they should be? are they as appealing as they could be? how can displays of information acquire more impact? how can they be made more memorable. questions like these are being asked frequently in the computer graphics community. because of advanced display equipment, lower costs, effective communication networks, and increasingly sophisticated user groups, computer graphics is entering a new stage in its development. 
visualized information will reach more people in this decade than ever before. computer graphics displays will become centrally involved with concept formation and decision making on a mass scale. for this reason computer graphics systems should incorporate more successfully the knowledge of visual communication professionals who have developed valid, effective principles for conveying facts and concepts through typography, color, symbols, motion, photography, etc. aaron marcus mervyn kurlansky susan marcus jack reineck gay reineck hardware support for adaptive subdivision surface rendering adaptive subdivision of triangular meshes is highly desirable for surface generation algorithms including adaptive displacement mapping in which a highly detailed model can be constructed from a coarse triangle mesh and a displacement map. the communication requirements between the cpu and the graphics pipeline can be reduced if more detailed and complex surfaces are generated, as in displacement mapping, by an adaptive tessellation unit which is part of the graphics pipeline. generating subdivision surfaces requires a large amount of memory in which multiple arbitrary accesses are required to neighbouring vertices to calculate the new vertices. in this paper we present a meshing scheme and new architecture for the implementation of adaptive subdivision of triangular meshes that allows for quick access using a small memory, making it feasible in hardware, while at the same time allowing for new vertices to be adaptively inserted. the architecture is regular and characterized by an efficient data management that minimizes the data storage and avoids the wait cycles that would be associated with the multiple data accesses required for traditional subdivision. this architecture is presented as an improvement for adaptive displacement mapping algorithms, but could also be used for adaptive subdivision surface generation in hardware. m. bóo m. amor m. doggett j. hirche w. strasser modeling acoustics in virtual environments using the uniform theory of diffraction realistic modeling of reverberant sound in 3d virtual worlds provides users with important cues for localizing sound sources and understanding spatial properties of the environment. unfortunately, current geometric acoustic modeling systems do not accurately simulate reverberant sound. instead, they model only direct transmission and specular reflection, while diffraction is either ignored or modeled through statistical approximation. however, diffraction is important for correct interpretation of acoustic environments, especially when the direct path between sound source and receiver is occluded. the uniform theory of diffraction (utd) extends geometrical acoustics with diffraction phenomena: illuminated edges become secondary sources of diffracted rays that in turn may propagate through the environment. in this paper, we propose an efficient way for computing the acoustical effect of diffraction paths using the utd for deriving secondary diffracted rays and associated diffraction coefficients. our main contributions are: 1) a beam tracing method for enumerating sequences of diffracting edges efficiently and without aliasing in densely occluded polyhedral environments; 2) a practical approximation to the simulated sound field in which diffraction is considered only in shadow regions; and 3) a real-time auralization system demonstrating that diffraction dramatically improves the quality of spatialized sound in virtual environments.
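a hedged aside on contribution 2: the shadow-region approximation can be sketched in two dimensions as follows. this is not the authors' beam-tracing system and it omits the actual utd diffraction coefficient; a diffracted path via a wall endpoint is added only when the direct source-receiver segment is blocked, with a placeholder attenuation standing in for the utd term, and all names and geometry are invented for the example.

import math

# 2-d toy scene: one occluding wall whose two endpoints act as diffracting edges
WALL = ((0.0, -1.0), (0.0, 1.0))
SPEED_OF_SOUND = 343.0  # m/s, used only to report propagation delays

def orient(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def blocked(p1, p2, q1, q2):
    """true if segment p1-p2 strictly crosses segment q1-q2."""
    d1, d2 = orient(q1, q2, p1), orient(q1, q2, p2)
    d3, d4 = orient(p1, p2, q1), orient(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def contributions(src, rcv):
    """direct sound if visible; otherwise crude edge-diffracted terms only."""
    if not blocked(src, rcv, *WALL):
        r = dist(src, rcv)
        return [("direct", 1.0 / r, r / SPEED_OF_SOUND)]
    paths = []
    for edge in WALL:  # each wall endpoint becomes a secondary source
        r1, r2 = dist(src, edge), dist(edge, rcv)
        amp = 0.3 / (r1 * r2)  # placeholder: spreading times a utd-style coefficient
        paths.append(("diffracted", amp, (r1 + r2) / SPEED_OF_SOUND))
    return paths

print(contributions((-2.0, 0.0), (2.0, 0.0)))  # occluded: diffracted paths only
print(contributions((-2.0, 2.0), (2.0, 2.0)))  # unoccluded: direct path only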
nicolas tsingos thomas funkhouser addy ngan ingrid carlbom are the tiki gods angry? stuart gordon paul simpson volume rendering and data feature enhancement wolfgang krueger document preparation by an experimental text and facsimile integrated workstation since '78, an experimental workstation has been built up [1 to 5] at the siemens central research laboratories at munich which allows composing, filing, and transmission of illustrated office documents, offering to the user's disposal the following packet of facilities we expect to become standard in the future "paperless" office. we decided to design a system model in order to verify this conception by experiment. the result of this design is an assembly of function modules, which, in the general, correspond to peripherals, linked by a local network (scope fig. 1): • dialog screen • document screen • keyboard, for text input • tablet, for window control, menu selection, graphics and handprint input • printer, for printout of unformatted or formatted text, graphics, and images • scanner, for facsimile input of black & white as well as 6gray-level images, optionally to by integrated with printer into an intelligent copier • harddisk, for system software and temporary storage of user files • floppy, for backup, transport and letter mail • transceiver, for remote connection. any of these peripherals can, if required, be duplicated for time sharing users, or for increased performance. since an essential feature of our system conception is the integration of text and facsimile facilities, we defined the name textfax which stands as well for the above conception as for the experimental system we are now going to discuss in detail. wolfgang harak wolfgang postl animating facial expressions recognition and simulation of actions performable on rigidly-jointed actors such as human bodies have been the subject of our research for some time. one part of an ongoing effort towards a total human movement simulator is to develop a system to perform the actions of american sign language (asl). however, one of the "channels" of asl communication, the face, presents problems which are not well handled by a rigid model. an integrated system for an internal representation and simulation of the face is presented, along with a proposed image analysis model. results from an implementation of the internal model and simulation modules are presented, as well as comments on the future of computer controlled recognition of facial actions. we conclude with a discussion on extensions of the system, covering relations between flexible masses and rigid (jointed) ones. applications of this theory into constrained actions, such as across rigid nonmoving sheets of bone (forehead, eyes) are also discussed. stephen m. platt norman i. badler a study of interactive 3d point location in a computer simulated virtual environment james boritz kellogg s. booth bringing visualization to the user alvy ray smith complexity, ontology, and the causal markov assumption paul b. losiewicz applications of the universal joint task list to joint exercise results sam h. parry michael c. mcaneny richard j. 
dromerhauser fast shadows and lighting effects using texture mapping mark segal carl korobkin rolf van widenfelt jim foran paul haeberli hypnos nick eberle using layered support graphs for verifying external adequacy in rule-based expert systems gabriel valiente oopm/rt: a multimodeling methodology for real-time simulation when we build a model of real-time systems, we need ways of representing the knowledge about the system and also time requirements for simulating the model. considering these different needs, our question is "how can we determine the optimal model that simulates the system by a given deadline while still producing valid outputs at an acceptable level of detail?" we have designed oopm/rt (object-oriented physical modeler for real-time simulation) methodology. the oopm/rt framework has three phases: (1) generation of multimodels in oopm using both structural and behavioral abstraction techniques, (2) generation of at (abstraction tree) which organizes the multimodels based on the abstraction relationship to facilitate the optimal model selection process, and (3) selection of the optimal model that guarantees the deliver simulation results by the given amount of time. a more- detailed model (low abstraction model) is selected when we have enough time to simulate, while a less-detailed model (high abstraction model) is selected when the deadline is immediate. the basic idea of selection is to trade structural information for a faster runtime while minimizing the loss of behavioral information. we propose two possible approaches for the selection: an integer-programming-based approach and a search-based approach. by systematically handling simulation deadlines while minimizing the modeler's interventions, oopm/rt provides an efficient modeling environment for real- time systems. kangsun lee paul a. fishwick casmir - a community of software agents collaborating in order to retrieve multimedia data bredan berney elaine ferneley fitting virtual lights for non-diffuse walkthroughs bruce walter gun alppay eric lafortune sebastian fernandez donald p. greenberg improving human computer interaction in a classroom environment using computer vision in this paper we discuss our use of multi-modal input to improve human computer interaction. specifically we look at the methods used in the intelligent classroom to combine multiple input modes, and examine in particular the visual input modes. the classroom provides context that improves the functioning of the visual input modes. it also determines which visual input modes are needed when. we examine a number of visual input modes to see how they fit into the general scheme, and look at how the classroom controls their operation. joshua flachsbart david franklin kristian hammond simulation methodology: lessons of the past, challenges for the future this panel discussion is patterned after a similar session held at the recent annual meeting of the society for general systems research, held in detroit (may, 1983). the objective of that panel session was to raise the level of awareness of systems researchers regarding the role of modelling and simulation as a key tool in systems methodologies. audience reaction to the session was intense and generated much debate. consequently, the idea arose of presenting a matching version to the simulation community. it will be interesting to compare reactions emerging from the different perspectives of systems research and simulation modelling practice. 
for interested readers, the panel contributions are being expanded and deepened by the authors to take the form of articles for publication in the post-proceedings of the detroit sgsr meeting and the journal behavioral science. the current panel discussion aims to examine the methodology of current practice, and from this base, go on to point out the challenges that lie in the near and long term future. the session will consist of an integrated sequence of presentations by simulation methodologists. beginning with problems and approaches in contemporary large-scale simulation modelling, it will move on to more ambitious computer-supported methodologies under development. bernard p. zeigler report from the joint siggraph/sigcomm workshop on graphics and networking ralph droms bob haber fengmin gong chris maeda an empirical approach to solving the general utility problem in speedup learning anurag chaudhry lawrence b. holder towards a smooth scattered data reconstruction with sharp features jing ren synthetic image generation with a lens and aperture camera model michael potmesil indranil chakravarty artificial neural network models for texture classification via the radon transform a. d. kulkarni p. byars integrating pomdp and reinforcement learning for a two layer simulated robot architecture larry d. pyeatt adele e. howe communication oriented organizational modeling nardo b. j. van der rijst the quickhull algorithm for convex hulls the convex hull of a set of points is the smallest convex set that contains the points. this article presents a practical convex hull algorithm that combines the two-dimensional quickhull algorithm with the general-dimension beneath-beyond algorithm. it is similar to the randomized, incremental algorithms for convex hull and delaunay triangulation. we provide empirical evidence that the algorithm runs faster when the input contains nonextreme points and that it uses less memory. computational geometry algorithms have traditionally assumed that input sets are well behaved. when an algorithm is implemented with floating-point arithmetic, this assumption can lead to serious errors. we briefly describe a solution to this problem when computing the convex hull in two, three, or four dimensions. the output is a set of "thick" facets that contain all possible exact convex hulls of the input. a variation is effective in five or more dimensions. c. bradford barber david p. dobkin hannu huhdanpaa constructing and dynamically maintaining perspective-based agent models in a multi-agent environment k. s. barber j. kim animating fracture james f. o'brien jessica k. hodgins ice - intelligent computer explanation (abstract only) to quote a popular advertisement: "of the 235 million people in america, only a fraction can use a computer." on-line help systems can transfer needed information to those people and to experienced users as well. this paper surveys existing help systems, examining: the kind of help provided: e.g., documentation, debugging assistance, tutorial assistance, spelling checking. the method of communication used: e.g., menus, canned text, natural language, graphics. the intelligence of the help system: e.g., what it knows about the user or about problem solving. the form of knowledge representation used. the inference mechanisms used. the form of knowledge acquisition used. in particular, a new design for intelligent computer explanation will be described.
margaret christensen the definition and rendering of terrain maps gavin s p miller atlas supplement to the 1972 county and city data book a series of maps presenting the spatial distribution of the tabular data from the 1972 county and city data book is discussed. using an automated mapping procedure developed on a minicomputer by two non-computer scientists, 196 choropleth maps of county-level data for the state of washington were prepared. the maps provide governmental decision- makers and planners a means to quickly comprehend patterns in the data that are not readily noticeable in the tabular presentation of the data. lucky m. tedrow eugene a. hoerauf a reinforcement learning model of selective visual attention this paper proposes a model of selective attention for visual search tasks, based on a framework for sequential decision-making. the model is implemented using a fixed pan-tilt-zoom camera in a visually cluttered lab environment, which samples the environment at discrete time steps. the agent has to decide where to fixate next based purely on visual information, in order to reach the region where a target object is most likely to be found. the model consists of two interacting modules. a reinforcement learning module learns a policy on a set of regions in the room for reaching the target object, using as objective function the expected value of the sum of discounted rewards. by selecting an appropriate gaze direction at each step, this module provides top-down control in the selection of the next fixation point. the second module performs "within fixation" processing, based exclusively on visual information. its purpose is twofold: to provide the agent with a set of locations of interest in the current image, and to perform the detection and identification of the target object. detailed experimental results show that the number of saccades to a target object significantly decreases with the number of training epochs. the results also show the learned policy to find the target object is invariant to small physical displacements as well as object inversion. silviu minut sridhar mahadevan experiments in load migration and dynamic load balancing in speedes linda f. wilson wei shen expert dictionaries: knowledge based tools for explanation and maintenance of complex application environments the paper describes a new type of tool, which has been defined to support complex applications (such as the ones typically found in office environments). the relevant features of this tool are the following: it collects "domain" (also called "deep") knowledge (relevant for the application domain), and uses it to explain the application "behaviour", either automated or not it makes use of an extensional representation of the domain knowledge based on hypertexts it can support the activity of maintenance (of the application) and of training (to understand how the application works). we have chosen for the tool the name "expert dictionary". a first implementation of this tool for a specific application target (document generation) has been carried on with hypertalk, the programming language of hypercard. franca garzotto paolo paolini a visible polygon reconstruction algorithm s. sechrest d. p. greenberg evolution in the first person zak margolis john oyzon elouise oyzon pathematic agents: rapid development of believable emotional agents in intelligent virtual environments carlos martinho ana paiva explanatory lifelike avatars: performing user-centered tasks in 3d learning environments james c. 
lester luke s. zettlemoyer joel p. gregoire william h. bares bill gates' basement brummbaer matzerath a goal driven knowledge based system for a domain of private international law a. w. koers d. kracht an efficient data structure for random walk algorithms in faceted porous media modern x-ray computerized micro-tomography (cmt) facilities allow researchers interested in composite materials and porous media to image their samples in 3d with micrometer resolution. the datasets obtained for representative samples are frequently very large (1024^3 voxels in gray-scale levels). performing a tessellation on such datasets would produce hundreds of millions of facets, which would be impossible to handle in memory on rather powerful computers. various numerical methods are classically used for the prediction of some effective properties of porous and other composite media from the phase properties and the micro-structure (diffusivities, conductivities). the choice of a monte carlo random walk scheme is justified by its minimal memory cost in addition to image storage. in order to employ it, one must be able to perform ray-tracing in large and precise 3d images. the new framework we present allows that feature by using a memory-sparing data structure dedicated to such algorithms. we only store in memory the vertices provided by the marching cube algorithm. so, since the facets are not stored, the needed memory size is divided by a factor of five, without any significant increase in computation time: the extraction of properties from very large micro-porous media samples is now possible. this study allows us to claim that a simulation making intensive use of ray-tracing in tessellated media obtained with the marching-cube algorithm is not as expensive (in terms of memory and time cost) as it could seem. we show that the marching-cube algorithm, when it is used dynamically to connect vertices upon request, is still a very powerful mesh generator since it then consumes very little memory, and that it can be trivially implemented. jean-françois delesse bertrand le sa gerard vignoles artistic screening victor ostromoukhov roger d. hersch creation and rendering of realistic trees jason weber joseph penn editor's introduction r. daniel bergeron training agents to recognize text by example henry lieberman bonnie a. nardi david wright jitterbug bob hoffman computer aided cinematography techniques for model validation graphical display is a tool for visualizing patterns through quantification of dimensions or parameters that may not be observable by tabular methods. this paper focuses on graphical display where greater than 2 dimensions or parameters are being iteratively evaluated. the specific techniques included in this paper are: (1) two-dimensional plotting of hydrologic flow, with time as a third dimension; (2) three-dimensional plotting of vegetational biomass on a landscape coordinate system; and (3) three-dimensional plotting of carbon monoxide concentrations in a coal gasification facility versus hour-of-day and location with sequential plotting of each day for an extended time period. each of these examples of cinematography illustrates patterns in data with time- or space-varying parameters. specific statistical parameters such as the coefficient of variation have been used to identify potential patterns of interest.
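the frame-by-frame three-dimensional plotting described in items (2) and (3) can be sketched as follows; this is a minimal illustration that assumes matplotlib and numpy are available and substitutes a synthetic field for the hydrologic, biomass, or carbon monoxide data.

import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; each frame is written to disk
import matplotlib.pyplot as plt

# synthetic stand-in for a time-varying field sampled on a landscape grid
x = np.linspace(0.0, 10.0, 40)
y = np.linspace(0.0, 10.0, 40)
X, Y = np.meshgrid(x, y)

for t in range(24):  # one frame per simulated hour
    Z = np.exp(-((X - 5) ** 2 + (Y - 5) ** 2) / (2 + 0.5 * t)) * np.cos(0.3 * t)
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.plot_surface(X, Y, Z, cmap="viridis")
    ax.set_zlim(-1, 1)  # fixed axes so successive frames are comparable
    ax.set_title(f"hour {t:02d}")
    fig.savefig(f"frame_{t:02d}.png", dpi=80)
    plt.close(fig)

# frame_00.png ... frame_23.png can then be flipped through or assembled
# into a short film with any external encoder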
graphic cinematography is useful for (a) recognition of pattern where a single two-dimensional or three-dimensional display fails to reveal important features, (b) rapid review of data, (c) easy display of various combinations of parameters, and (d) evaluations of large volume temporal data. m. e. vansuch r. h. strand m. p. farrell computer graphics as an enabling technology for cooperative, global applications aderito fernandes marcos alain chesnais jose encarnação dehumanized people and humanized programs: a natural language view of being there dan fass bidding in reinforcement learning: a paradigm for multi-agent systems ron sun chad sessions in response hugh gene loebner interactive computer modelling of truck/shovel operations in an open-pit mine an interactive computer model of truck/shovel operations in an open-pit copper mine has been developed to aid in the design and evaluation of a computer-based truck dispatching system. interactive features include computer graphics and alphanumeric displays of on-going simulation results and a facility for user command entry. these interactive features allow the model user to act as the decision maker during simulation experiments. a non-interactive option allows execution of high speed simulation experiments with no displays and with automatic decision making. l. k. nenonen p. w. u. graefe a. w. chan epistemological and heuristic adequacy revisited matthew l. ginsberg acm algorithms policy f. t. krogh instant radiosity alexander keller run length control using parallel spectral method kimmo e. e. raatikainen letter from the chair jeffrey m. bradshaw scale-dependent reproduction of pen-and-ink illustrations mike salisbury corin anderson dani lischinski david h. salesin modelling higher cognitive functions with hebbian cell assemblies marcin chady algorithmic determination of structure of infinite lie pseudogroups of symmetries of pdes i. g. lisle g. j. reid a. boulton carousel hiroshi shiokawa transformation approach for consistency in object-oriented knowledge bases hsin-hsen yao hwa soo kim is a visualization language possible? thomas g. west principles and applications of pencil tracing mikio shinya t. takahashi seiichiro naito the two-user responsive workbench: support for collaboration through individual views of a shared space maneesh agrawala andrew c. beers ian mcdowall bernd fröhlich mark bolas pat hanrahan cutting legal loops riverrun, past eve and adam's, from swerve of shore to bend of bay, brings us by a commodious vicus of recirculation back to howth castle and environs. ***** a way a lone a last a loved a long the joyce, finnegans wake 1, 628 (1955) recursion is the act of defining an object or solving a problem in terms of itself. a careless recursion can lead to an infinite regress. we avoid the bottomless circularity inherent in this tactic by demanding that the recursion be stated in terms of some "simpler" object, and by providing the definition or solution of some trivial base case. properly used, recursion is a powerful problem solving technique, both in artificial domains like mathematics and computer programming, and in real life. friedman & felleisen, the little lisper ix (1986) d. h. berman incremental system development of large discrete-event simulation models lars g. randell lars g. holst gunnar s.
bolmsjö selective sampling for nearest neighbor classifiers michael lindenbaum shaul markovich dmitry rusakov only wooksang chang knowledge-based processing/interpretation of oceanographic satellite data an expert system is being developed as a step towards a more automated environment for processing satellite data of a region of the atlantic ocean and for interpreting the data with respect to mesoscale events. oceanic events of interest include the gulf stream boundaries, warm-core rings, and cold-core rings. m g thomason r e blake m lybanon alternative modeling perspectives: finding the creative spark j. o. henriksen parallel lumigraph reconstruction this paper presents three techniques for reconstructing lumigraphs/lightfields on commercial ccnuma parallel distributed shared memory computers. the first method is a parallel extension of the software-based method proposed in the lightfield paper. this expands the ray/two-plane intersection test along the film plane, which effectively becomes scan conversion. the second method extends this idea by using a shear/warp factorization that accelerates rendering. the third technique runs on an sgi reality monster using up to eight graphics pipes and texture mapping hardware to reconstruct images. we characterize the memory access patterns exhibited using the hardware-based method and use this information to reconstruct images from a tiled uv plane. we describe a method to use quad-cubic reconstruction kernels. we analyze the memory access patterns that occur when viewing lumigraphs. this allows us to ascertain the cost/benefit ratio of various tilings of the texture plane. peter-pike sloan charles hansen near real-time shaded display of rigid objects described is a visible surface algorithm and an implementation that generates shaded display of objects with hundreds of polygons rapidly enough for interactive use --- several images per second. the basic algorithm, introduced in [fuchs, kedem and naylor, 1980], is designed to handle rigid objects and scenes by preprocessing the object data base to minimize visibility computation cost. the speed of the algorithm is further enhanced by its simplicity, which allows it to be implemented within the internal graphics processor of a general purpose raster system. henry fuchs gregory d. abram eric d. grant illustrating transparent surfaces with curvature-directed strokes victoria interrante henry fuchs stephen pizer future dependent events, as shown by the example of slam random events in discrete simulation models are classified into ordinary and future dependent events. an event occurring between t0 and t is classified as an ordinary event, if the probability distribution for the realization in time is known at time t0. if changes of the system after time t0 can affect the probability distribution of an event, it is called a future dependent event. to show the difference we give two examples. in contrast to ordinary events, future dependent events are not well supported by the currently most used languages like gpss, simula, simscript and slam. in taking slam as an example, we are going to demonstrate how to adapt a simulation language to be able to handle future dependent events with as slight an effort as possible. heimo h. adelsberger real-time programmable shading one of the main techniques used by software renderers to produce stunningly realistic images is programmable shading---executing an arbitrarily complex program to compute the color at each pixel.
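to make the per-pixel idea concrete, the following is a purely illustrative software shader, not the shading system this abstract goes on to describe: an arbitrary python function is evaluated at every pixel of a small head-on sphere image, combining a lambert term with a procedural stripe, and the result is written out as a plain ppm file so no imaging library is needed.

import math

WIDTH, HEIGHT = 160, 120
LIGHT = (0.577, 0.577, -0.577)  # roughly normalized light direction

def shade(u, v, normal):
    """the 'program' run at each pixel: a lambert term times a procedural stripe."""
    lambert = max(0.0, sum(n * l for n, l in zip(normal, LIGHT)))
    stripe = 0.5 + 0.5 * math.sin(40.0 * u)  # cheap procedural texture
    return lambert * stripe, lambert * 0.4, lambert * (1.0 - stripe)

rows = []
for j in range(HEIGHT):
    row = []
    for i in range(WIDTH):
        u, v = (i / WIDTH) * 2 - 1, (j / HEIGHT) * 2 - 1
        rr = u * u + v * v
        if rr < 1.0:  # inside a unit sphere seen head-on
            r, g, b = shade(u, v, (u, v, -math.sqrt(1.0 - rr)))
        else:         # background colour
            r, g, b = 0.1, 0.1, 0.15
        row.append((int(255 * r), int(255 * g), int(255 * b)))
    rows.append(row)

with open("shaded.ppm", "w") as f:
    f.write(f"P3 {WIDTH} {HEIGHT} 255\n")
    for row in rows:
        f.write(" ".join(f"{r} {g} {b}" for r, g, b in row) + "\n")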
thus far, programmable shading has only been available on software rendering systems that run on general-purpose computers. rendering each image can take from minutes to hours. parallel rendering engines, on the other hand, have steadily increased in generality and in performance. we believe that they are nearing the point where they will be able to perform moderately complex shading at real-time rates. some of the obstacles to this are imposed by hardware, such as limited amounts of frame-buffer memory and the enormous computational resources that are needed to shade in real time. other obstacles are imposed by software. for example, users generally are not granted access to the hardware at the level required for programmable shading. this paper first explores the capabilities that are needed to perform programmable shading in real times. we then describe the design issues and algorithms for a prototype shading architecture on pixelflow, an experimental graphics engine under construction. we demonstrate through examples and simulation that pixelflow will be able to perform high- quality programmable shading at real-time (30 to 60 hz) rates. we hope that our experience will be useful to shading implementors on other hardware graphics systems. anselmo lastra steven molnar marc olano yulan wang an experimental comparison of rgb, yiq, lab, hsv, and opponent color models the increasing availability of affordable color raster graphics displays has made it important to develop a better understanding of how color can be used effectively in an interactive environment. most contemporary graphics displays offer a choice of some 16 million colors; the user's problem is to find the right color. folklore has it that the rgb color space arising naturally from color display hardware is user-hostile and that other color models such as the hsv scheme are preferable. until now there has been virtually no experimental evidence addressing this point. we describe a color matching experiment in which subjects used one of two tablet- based input techniques, interfaced through one of five color models, to interactively match target colors displayed on a crt. the data collected show small but significant differences between models in the ability of subjects to match the five target colors used in this experiment. subjects using the rgb color model matched quickly but inaccurately compared with those using the other models. the largest speed difference occurred during the early convergence phase of matching. users of the hsv color model were the slowest in this experiment, both during the convergence phase and in total time to match, but were relatively accurate. there was less variation in performance during the second refinement phase of a match than during the convergence phase. two-dimensional use of the tablet resulted in faster but less accurate performance than did strictly one- dimensional usage. significant learning occurred for users of the opponent, yiq, lab, and hsv color models, and not for users of the rgb color model. michael w. schwarz william b. cowan john c. beatty spatial input/display correspondence in a stereoscopic computer graphic work station an interactive stereoscopic computer graphic workspace is described. a conventional frame store is used for three-dimensional display, with left/right eye views interlaced in video and viewed through plzt shutter glasses. 
the video monitor is seen reflected from a half-silvered mirror which projects the graphics into a workspace, into which one can reach and manipulate the image directly with a "magic wand". the wand uses a magnetic six degree-of-freedom digitizer. in an alternative configuration, a graphics tablet was placed within the workspace for input-intensive tasks. christopher schmandt vicious strategies for vickrey auctions we show that the vickrey auction, despite its theoretical benefits, is inappropriate if "antisocial" agents participate in the auction process. more specifically, an antisocial attitude for economic agents that makes reducing the profit of competitors their main goal, besides maximizing their own profit, is introduced. under this novel condition, agents need to deviate from the dominant truth-telling strategy. this paper presents a strategy for bidders in repeated vickrey auctions who are intending to inflict losses on fellow agents in order to be more successful, not in absolute terms, but relative to the group of bidders. felix brandt gerhard wei shape-based volumetric collision detection nikhil gagvani deborah silver physically based computer animation: introduction andrew rosenbloom the jungle boy ming-huei shih semi-regular mesh extraction from volumes zoë j. wood peter schröder david breen mathieu desbrun prudence in language learning stuart a. kurtz james s. royer relating the performance of partial-order planning algorithms to domain features craig a. knoblock qiang yang simulation with simnet ii hamdy a. taha hardware accelerated rendering of csg and transparency this paper describes algorithms for implementing accurate rendering of csg and transparency in a hardware 3d accelerator. the algorithms are based on a hardware architecture which performs front-to-back z-sorted shading; a multiple-pass algorithm which allows an unlimited number of z-sorted object layers is also described. the multiple-pass algorithm has been combined with an image partitioning algorithm to improve efficiency, and to improve performance of the resulting hardware implementation. michael kelley kirk gould brent pease stephanie winner alex yen a conceptual framework for research in the analysis of simulation output thomas j. schriber richard w. andrews viewpoint: a troubleshooting-specific knowledge acquisition tool j. e. caviedes m. k. reed reconstructing multishell solids from voxel-based contours i. gargantini g. schrack a. kwok elements in transformations #2 ying tan jeffrey stolet acquiring knowledge by explaining observed problem solving j. d. martin m. redmond applications of the dempster-shafer theory of evidence for simulation the dempster-shafer theory of belief functions shows promise as a means of incorporating incompleteness of evidence into simulation models. the key feature of the dempster-shafer theory is that precision in inputs is required only to a degree justified by available evidence. the output belief function contains an explicit measure of the firmness of output probabilities. this paper gives an overview of belief function theory, presents the basic methodology for application to simulation, and gives a simple example of a simulation involving belief functions. kathryn b. laskey marvin s. cohen simultaneous presentation in text generation (abstract only) the early part of the 1980s has seen increasing interest and research in computer generation of text which has led to the creation of several systems to handle various aspects of the text generation process.
some of these systems include mckeown's text system [6], appelt's kamp [1], mcdonald's mumble [5] and mann's penman system [3] (which includes matthiessen's nigel [4]). as proposed by thompson [7], the text generation process can profitably be viewed as consisting of a strategic component which decides what to say and a tactical component which determines how to say it. in terms of processing, the strategic component uses the speaker's intentions, real-world knowledge, and knowledge of the user's beliefs and knowledge to produce a message structure containing the message elements to be presented and the relation between these message elements. the tactical component takes the message structure and produces the final text so as to convey both the propositional content as well as the speaker's attitude and intentions. in the various text generation systems that have been produced the message elements in the message structure are processed serially. that is, one element is realized, then the next, etc. there are, however, situations where one message element appears to "interrupt" another. when this occurs the impression is that the two message elements are presented in parallel. i call this situation simultaneous presentation. simultaneous presentation is realized in text by parenthetical phrases, appositives, and unrestricted relative clauses. there are three uses for simultaneous presentation: correction of perceived ambiguous or unsuccessful reference, definition of terms and naming of concepts, and emphasis on the co-reference of two items. preliminary results indicate that the three uses may be distinguished by the identification intention proposed by appelt for concept activation actions [2]. these three uses have in common the fact that each involves the co- reference of the two elements presented simultaneously. indeed, it appears that simultaneous presentation is an explicit indication of co- reference. my research demonstrates that simultaneous presentation is a useful and necessary capability if a text generation system is to have full expressive power. the conditions under which simultaneous presentation can and cannot be used felicitously will be considered as well as the effects of the use of simultaneous presentation on the final text produced. in addition, the implications of simultaneous presentation for the text generation process and the requirements it places on a text generation system will also be studied. the final result will be to produce a system capable of using simultaneous presentation. i believe it should be possible to produce such a system by building on currently existing strategic and tactical text generators (mckeown's text system and mcdonald's mumble are one possibility). such a system should have particular utility in the generation of explanations and will be tested in such a domain. kenneth r. lee generating antialiased images at low sampling densities don p. mitchell a brief introduction to discrete-event simulation programming languages philip kiviat the visual effect of a seismic wavefield zhen po partitioned multiagent systems in information oriented domains claudia v. goldman jeffrey s. rosenschein trends in high performance graphic systems(panel session) accompanying the rapid development of integrated circuit fabrication technology has been a parallel, but slower, development of ic design techniques and systems. recent approaches to ic design enable individual designers to consider developing their own vlsi circuits. 
such capability may open the door to a more varied set of system designs than could previously be considered in most design environments. the panel members are all currently designing graphic systems oriented towards vlsi implementation. each will give a short presentation describing one of their system designs. following these presentations, three members of the previous session (forest baskett, andreas bechtolsheim, and fred parke) will discuss common issues, problems, and future prospects. henry fuchs d. cohen bob sproull jim clark fred parke guest editorial - simulation for training: foundations and techniques osman balci modular visualization environments: past, present, and future gordon cameron automated design data capture using relaxation techniques h. r. myler a. j. gonzalez smooth hierarchical surface triangulations tran s. gieng bernd hamann kenneth i. joy gregory l. schussman issac j. trotts enhancing information retrieval by automatic acquisition of textual relations using genetic programming we have explored a novel method to find textual relations in electronic documents using genetic programming and semantic networks. this can be used for enhancing information retrieval and simplifying user interfaces. the automatic extraction of relations from text enables easier updating of electronic dictionaries and may reduce interface area both for search input and hit output on small screens such as cell phones and pdas (personal digital assistants). agneta bergström patricija jaksetic peter nordin a simulation-based production testbed albert jones michael iuliano simulation of group work processes in manufacturing willi bernhard axel schilling achieving scalability and expressiveness in an internet-scale event notification service this paper describes the design of siena, an internet-scale event notification middleware service for distributed event-based applications deployed over wide-area networks. siena is responsible for selecting the notifications that are of interest to clients (as expressed in client subscriptions) and then delivering those notifications to the clients via access points. the key design challenge for siena is maximizing expressiveness in the selection mechanism without sacrificing scalability of the delivery mechanism. this paper focuses on those aspects of the design of siena that fundamentally impact scalability and expressiveness. in particular, we describe siena's data model for notifications, the covering relations that formally define the semantics of the data model, the distributed architectures we have studied for siena's implementation, and the processing strategies we developed to exploit the covering relations for optimizing the routing of notifications. antonio carzaniga david s. rosenblum alexander l. wolf three approaches to heuristic search in networks three different approaches to heuristic search in networks are analyzed. in the first approach, as formulated initially by hart, nilsson, and raphael, and later modified by martelli, the basic idea is to choose for expansion that node for which the evaluation function has a minimum value. a second approach has recently been suggested by nilsson. in this method, in contrast to the earlier one, a node that is expanded once is not expanded again; instead, a "propagation" of values takes place. the third approach is an adaptation for networks of an and/or graph "marking" algorithm, originally due to martelli and montanari. five algorithms are presented. 
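as a point of reference for the first approach, expanding the open node with minimum f = g + h can be sketched as follows; the graph, arc costs, and heuristic are invented for the example, and this sketch is not a transcription of any of the five algorithms compared in the paper.

import heapq

# small undirected example network: node -> list of (neighbour, arc cost)
GRAPH = {
    "s": [("a", 2), ("b", 5)],
    "a": [("s", 2), ("b", 2), ("g", 6)],
    "b": [("s", 5), ("a", 2), ("g", 1)],
    "g": [("a", 6), ("b", 1)],
}
H = {"s": 4, "a": 3, "b": 1, "g": 0}  # admissible heuristic estimates to the goal

def best_first(start, goal):
    """repeatedly expand the open node with minimum f = g + h; count expansions."""
    open_heap = [(H[start], 0, start, [start])]
    best_g = {start: 0}
    expansions = 0
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry; a cheaper path was found meanwhile
        expansions += 1
        if node == goal:
            return path, g, expansions
        for nbr, cost in GRAPH[node]:
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(open_heap, (g2 + H[nbr], g2, nbr, path + [nbr]))
    return None, float("inf"), expansions

print(best_first("s", "g"))  # expected: (['s', 'a', 'b', 'g'], 5, <expansion count>)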
algorithms a and c illustrate the first approach; propa and propc, the second one; and marka, the third one. the performances of these algorithms are compared for both admissible and inadmissible heuristics using the following two criteria: (i) cost of the solution found; (ii) time of execution in the worst case, as measured by the number of node expansions (a, c), or node "selections" (propa, propc), or arc "markings" (marka). the relative merits and demerits of the algorithms are summarized and indications are given regarding which algorithm to use in different situations. a. bagchi a. mahanti a re-usable broker agent architecture with dynamic maintenance capabilities catholijn m. jonker jan treur dialectic semantics for argumentation frameworks we provide a formalism for the study of dialogues, where a dialogue is a two- person game, initiated by the proponent who defends a proposed thesis. we examine several different winning criteria and several different dialogue types, where a dialogue type is determined by a set of positions, an attack relation between positions and a legal-move function. we examine two proof theories, where a proof theory is determined by a dialogue type and a winning criterion. for each of the proof theories we supply a corresponding declarative semantics. h. jakobovits d. vermeir monological reason-based logic: a low level integration of rule-based reasoning and case-based reasoning this paper contains an informal introduction to a theory about legal reasoning (reason-based logic) that takes the notion of a reason to be central. arguing for a conclusion comes down to first collecting the reasons that plead for and against the conclusion, and second weighing them. the paper describes how we can establish the presence of a reason and how we can argue whether the reasons for or the reasons against the conclusion prevail. it also addresses the topic of meta-level reasoning about the use of rules in concrete cases. it is shown how both rule-based reasoning and case-based reasoning are naturally incorporated in the theory of reason-based logic. jaap hage guest editor's introduction to special issue on computational geometry jurg nievergelt beyond sgml roger price validation (panel) stewart v. hoover an intelligent backtracking schema in a logic programming environment ilyas cicekli a system for monte carlo experimentation a new computer system for monte carlo experimentation is presented in this thesis. the new system speeds and simplifies the process of coding and preparing a monte carlo experiment; it also encourages the proper design of monte carlo experiments, and the careful analysis of the experimental results. a new functional language is the core of this system. monte carlo experiments, and their experimental designs, are programmed in this new language; those programs are compiled into fortran output. the fortran output is then compiled and executed. the experimental results are analyzed with a standard statistics package such as s, isp, or minitab or with a user supplied program. both the experimental results and the experimental design may be directly loaded into the workspace of those packages. the new functional language frees programmers from many of the details of programming an experiment. experimental designs such as factorial, fractional factorial or latin square are easily described by the control structures and expressions of the language. 
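as a hypothetical illustration of the kind of experiment such control structures describe (plain python here, not the functional language itself, which compiles to fortran output), a full factorial design over sample size, distribution parameter, and use of antithetic variates might look like this:

import math
import random

def estimate_mean(n, rate, antithetic, rng):
    """crude monte carlo estimate of the mean of an exponential(rate) variate."""
    total = 0.0
    for _ in range(n // 2 if antithetic else n):
        u = rng.uniform(1e-12, 1.0 - 1e-12)
        total += -math.log(u) / rate
        if antithetic:
            total += -math.log(1.0 - u) / rate  # the antithetic partner of u
    return total / n

# full factorial design: every combination of the three factors, replicated 200 times
design = [(n, rate, anti)
          for n in (100, 1000)
          for rate in (0.5, 2.0)
          for anti in (False, True)]

rng = random.Random(12345)
for n, rate, anti in design:
    reps = [estimate_mean(n, rate, anti, rng) for _ in range(200)]
    mean = sum(reps) / len(reps)
    var = sum((x - mean) ** 2 for x in reps) / (len(reps) - 1)
    print(f"n={n:5d} rate={rate:3.1f} antithetic={anti!s:5} mean={mean:6.3f} var={var:.2e}")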
specific mathematical models, such as arima(p,n,q) models, regression models with specific collinearity properties, tabular data generated by logit or log-linear models are generated by the routines of the language. numerous random number generators and many standard statistic routines are included. it is easy to use standard variance reduction techniques, such as common or antithetic variables, conditional monte carlo, weighted samples, importance sampling or control variates. david alan grier wildfire visualization (case study) james ahrens patrick mccormick james bossert jon reisner judith winterkamp modelling john s. carson vizcraft (case study): a multimensional visualization tool for aircraft configuration design we describe a visualization tool to aid aircraft designers during the conceptual design stage. the conceptual design for an aircraft is defined by a vector of 10-30 parameters. the goal is to find a vector that minimizes an objective function while meeting a series of constraints. vizcraft integrates the simulation code that evaluates the design with visualizations for analyzing the design individually or in contrast to other designs. vizcraft allows the designer to easily switch between the view of a design in the form of a parameter set, and a visualization of the corresponding aircraft. the user can easily see which, if any, constraints are violated. vizcraft also allows the user to view a database of designs using parallel coordinates. a. goel c. baker c. a. shaffer b. grossman r. t. haftka w. h. mason l. t. watson corrigendum: topological considerations in isosurface generation allen van gelder jane wilhelms parallel processing of the shear-warp factorization with the binary-swap method on a distributed-memory multiprocessor system kentaro sano hiroyuki kitajima hiroaki kobayashi tadao nakamura a characterization of ten rasterization techniques n. gharachorloo s. gupta r. f. sproull i. e. sutherland the umass intelligent home project victor lesser michael atighetchi brett benyo bryan horling anita raja regis vincent thomas wagner ping xuan shelley xq. zhang implementation results and analysis of a parallel progressive radiosity pascal guitton jean roman gilles subrenat learning to use a text processing system: evidence from "thinking aloud" protocols there is growing interest in cognitive science in the the mental processes that underly learning and using computer systems (e. g., bott {1}; mayer, {2}; card, moran & newell {3}). in this paper we report generalizations about the problems people who are not experienced with computers have learning to use a text-processing system. we are especially interested in unaided self- instruction, because of the practical interest in reducing the role of experienced personel in the training process. we analyze these difficulties in terms of the interaction between the cognitive characteristics of the learner, and the design of self-instruction, and the interface. finally, we are also interested in implications of these problems for designing better training methods and computer interfaces that are easier to learn. clayton lewis robert mack conception of cognitive interfaces for legal knowledge: evolution of the jurisque project on the risks of avalanches this work falls within the field of model-based legal information retrieval. we were brought to conceive new types of interfaces after having evaluated a legal database available on the internet. 
although legally validated, this legal database jurisque-1 was considered as unsuitable to the expertise's necessities of mountains' professionals (whether legal practitioners or not) working on practical issues of responsibility in cases of avalanches. we thus proposed to develop _cognitive interfaces_ which have the peculiarity to integrate a _model_ of the field into the management of various resources (software packages or knowledge) and ensure that they communicate with one another. filipe borges danièle bourcier evelyne andreewsky raoul borges determining the usefulness of information from its use during problem solving elise h. turner john phelps austlll's aide - natural language legislative rulebases aide (`austlii inferencing development environment') provides a quasi-natural language form of knowledge representation which is reasonably close to statutory language, but at the same time represents knowledge so that it can be used by an inferencing engine using predicate calculus. aide also provides a supportive development environment for the user by parse tree differentiation to assist the user choosing the correct parsing of rules. the development environment and the user environment are both web-based, facilitating collaborative development of knowledgebases, and the integration of inferencing dialogues with legal source texts on the web. russell allen philip chuns andrew mowbray graham greenleaf gi-cube: an architecture for volumetric global illumination and rendering the power and utility of volume rendering is increased by global illumination. we present a hardware architecture, gi-cube, designed to accelerate volume rendering, empower volumetric global illumination, and enable a host of ray- based volumetric processing. the algorithm reorders ray processing based on a partitioning of the volume. a cache enables efficient processing of coherent rays within a hardware pipeline. we study the flexibility and performance of this new architecture using both high and low level simulations. frank dachille arie kaufman a hierarchical error controlled octree data structure for large-scale visualization dmitriy v. pinskiy joerg meyer bernd hamann kenneth i. joy eric brugger mark duchaineau on the power of the frame buffer raster graphics displays are almost always refreshed out of a frame buffer in which a digital representation of the currently visible image is kept. the availability of the frame buffer as a two-dimensional memory array representing the displayable area in a screen coordinate system has motivated the development of algorithms that take advantage of this memory for more than just picture storage. the classic example of such an algorithm is the depth buffer algorithm for determining visible surfaces of a three-dimensional scene. this paper constitutes a first attempt at a disciplined analysis of the power of a frame buffer seen as a computational engine for use in graphics algorithms. we show the inherent power of frame buffers to perform a number of graphics algorithms in terms of the number of data fields (registers) required per pixel, the types of operations allowed on these registers, and the input data. in addition to upper bounds given by these algorithms, we prove lower bounds for most of them and show most of these algorithms to be optimal. 
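for readers who have not seen it, the classic depth-buffer example mentioned above can be sketched with exactly one colour and one depth register per pixel; this toy rasterizer is illustrative only and is not the register-counting model analysed in the paper.

# minimal depth-buffer (z-buffer) sketch: one colour and one depth value per pixel
WIDTH, HEIGHT = 40, 20
color = [["." for _ in range(WIDTH)] for _ in range(HEIGHT)]
depth = [[float("inf") for _ in range(WIDTH)] for _ in range(HEIGHT)]

def edge(ax, ay, bx, by, px, py):
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def draw_triangle(verts, glyph):
    """rasterize a flat-shaded triangle; keep a fragment only if it is nearer."""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = verts
    area = edge(x0, y0, x1, y1, x2, y2)
    if area == 0:
        return
    for y in range(HEIGHT):
        for x in range(WIDTH):
            w0 = edge(x1, y1, x2, y2, x, y) / area
            w1 = edge(x2, y2, x0, y0, x, y) / area
            w2 = edge(x0, y0, x1, y1, x, y) / area
            if w0 < 0 or w1 < 0 or w2 < 0:
                continue  # pixel lies outside the triangle
            z = w0 * z0 + w1 * z1 + w2 * z2  # interpolated depth
            if z < depth[y][x]:  # the per-pixel visibility test
                depth[y][x] = z
                color[y][x] = glyph

draw_triangle([(2, 2, 5.0), (35, 4, 5.0), (12, 18, 5.0)], "A")   # farther triangle
draw_triangle([(20, 1, 2.0), (38, 16, 2.0), (8, 14, 2.0)], "B")  # nearer triangle
print("\n".join("".join(row) for row in color))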
one result of this study is the introduction of new frame buffer algorithms for computing realistic shadows and for determining the convex intersection of half spaces, an operation important in computational geometry and in rendering objects defined using planes rather than polygons. another result is that it shows clearly the relationships between different and important areas of research in computer graphics, such as visible surface determination, compositing, and hardware for smart frame buffers. alain fournier donald fussell polygonization of non-manifold implicit surfaces jules bloomenthal keith ferguson toward the essence of information leonel morales diaz intelligence report generation based on computer intensive data analysis (abstract only) computer intensive methods of data analyses provide a basis for automatic production of expert interpretations expressed in english. these methods exploit monte carlo sampling techniques along with the bootstrap and the jackknife methods to replace ones previously developed to minimize computation. the attractiveness of these methods grows at rates dictated by the fall in the cost of computation and the limits on the amount of data to be analyzed. computer intensive techniques produce parameter-free estimates of multi-component statistical quantities, for which the error in each component is estimated. such estimations, along with expressions of confidence interval serve to both characterize the amount of order present in a table of raw data and generate an english report. alvin j. surkan quen pin-ngern knowledge acquisition: issues, techniques and methodology yihwa i. liou the 20th annual acm north american computer chess championship despite entering ranked almost a class above the field, a last-round loss forced deep thought to settle for a first-place tie with hitech at the 20th annual acm north american computer chess championship. the five-round swiss- style tournament was held november 12-15 at bally's-reno in conjunction with supercomputing '89. it marked the twentieth consecutive year that acm has organized this major chess event. until 1988, the tournament took place at the annual acm conferences. in 1988 and again this year, however, the event was hosted by the joint acm sigarch/ieee computer society supercomputing conference. ten teams participated in the strongest computer chess tournament in history. every program was playing at least at the expert level. this year's tournament offered $5000 in prizes. hitech and deep thought's programmers each won $2000 for their first-place tie while mephisto x and bebe's programmers split the $1000 third-place prize. in addition to the cash prizes, trophies were awarded to the first three finishers. a special trophy was given to mephisto x as the "best small computing system." a technical session chaired by tony marsland was held during the championship. the topic of the session was endgame play by computers. once upon a time computers played the endgame particularly badly, but this is no longer the case. the session considered some of the improvements and some of the problems that remain. david levy served as tournament director, returning after a layoff of almost a decade. he served as td for the first time in 1971, continuing into the early 1980s when his own programs began to compete. levy will take on deep thought in london in a four-game match in december.* in 1978, he won a bet made in 1968 that no computer would defeat him during the following ten years. 
this time he appears to be the underdog. attending the championship as an honored guest was ben mittman. mittman was head of northwestern university's vogelback computing center during the years that slate, atkin, and gorlen's programs dominated the acm events. some give him credit for being northwestern university's greatest and most successful "coach." from 1971 through 1983, ben was also involved in the organization of the tournaments; from 1977 through 1983, he served as the first president of the international computer chess association. he was also the first editor of what is now called the icca journal, the main journal for technical papers on computer chess. this year the championship is scheduled to be a part of supercomputing '90 in new york city on november 11-14. the 1990 event will see the first major change in the tournament rules. for the last 20 years, the rules have specified that each player is given two hours to make the first 40 moves and an additional hour for each 20 moves thereafter. games frequently lasted more than six hours. this year, each computer will be required to make all its moves in two hours, thus guaranteeing that no game will last more than four hours. in addition to the main championship, a special endgame tournament will be held testing the programs' abilities in this special part of the game. for the first time at supercomputing '90, all games will be played during the day beginning at 1:00 p.m.---except for one 7:00 p.m. sunday evening game on the 11th. the event will be a five-round swiss-style tournament. for information contact professor monty newborn, school of computer science, mcgill university, 3480 university street, montreal, quebec, canada, h3a 2a7. monty newborn the project approach to simulation language comparison we present a new approach to comparing simulation software which is based on the use of a representative project. we introduce our approach and the need for it. we then demonstrate it by considering space and communication group's dock-to-stores system and six simulation languages (map/1, siman, simfactory, slam ii, witness, xcell+). finally, we summarize our results and provide a perspective on the work performed. f. bradley armstrong scott sumner antialiasing of curves by discrete pre-filtering a. e. fabris a. r. forrest on pac learning using winnow, perceptron, and a perceptron-like algorithm rocco a. servedio a case study of verification, validation, and accreditation for advanced distributed simulation the techniques and methodologies for verification and validation of software-based systems have arguably realized their greatest utility within the context of simulation. advanced distributed simulation (ads), a major initiative within the defense modeling and simulation community, presents a variety of challenges to the classical approaches. a case study of the development process and concomitant verification and validation activities for the joint training confederation (jtc) is presented. the jtc is one of the largest current ads efforts, and the primary application of the aggregate level simulation protocol. a dichotomy between classical verification and validation approaches and the requirements of a prototypical ads environment is illustrated. mechanisms and research directions to resolve these differences are briefly discussed. ernest h. page bradford s. canova john a.
tufarolo the checklist in the field catriona macaulay real-time occlusion culling for models with large occluders satyan coorg seth teller relational knowledge-based system hou-mei henry chang the knowledge agency stefan haustein sascha ludecke christian schwering integrating constraints and direct manipulation michael gleicher a standard simulation environment: a review of preliminary requirements mary ann flanigan wagner suleyman sevinc oryal tanir peter l. haigh james d. arthur richard e. nance herbert d. schwetman on learning read-k-satisfy-j dnf we study the learnability of read-k-satisfy-j (rksj) dnf formulae. these are dnf formulae in which the maximal number of occurrences of a variable is bounded by k, and the number of terms satisfied by any assignment is at most j. we show that this class of functions is learnable in polynomial time, using equivalence and membership queries, as long as k·j = o(n/log log n). learnability was previously known only in the case that both k and j are constants. we also present a family of boolean functions that have short (poly(n)) read-2-satisfy-1 dnf formulae but require cnf formulae of size greater than 2^√n. therefore, our result does not seem to follow from the recent learnability result of [bsh93]. avrim blum roni khardon eyal kushilevitz leonard pitt dan roth using rat navigation models to learn orientation from visual input on a mobile robot rodents possess extraordinary navigation abilities that are far in excess of what current state-of-the-art robot agents are capable of. this paper describes research that is part of a larger project aimed at developing a robot navigation system that is capable of robust autonomous navigation in real time by using biologically plausible constructs inspired by the many neurological and behavioral studies conducted on freely navigating rats. specifically, this paper discusses the implementation of a rat-inspired system that allows a robot to learn and recognize its allocentric orientation based on what it perceives. the system described here is a pragmatic, minimalist implementation of recently proposed models of the rat head direction system, that is, the neural assemblies that neuroscientific studies suggest are responsible for encoding the orientation of the rat's head in a global reference frame. this paper describes an implementation of the system on an autonomous robot that operates in real time, in office environments, and with limited laboratory computational hardware. experiments conducted in a messy laboratory environment with a range of visual features demonstrate that the system, while simple in structure, is able to learn and recognize allocentric orientations after 5 minutes of exploration. the results demonstrate the worth of bridging robotics with biological research and pave the way for developing a more complete and competent robot architecture. brett browning adaptive shadow maps shadow maps provide a fast and convenient method of identifying shadows in scenes but can introduce aliasing. this paper introduces the adaptive shadow map (asm) as a solution to this problem. an asm removes aliasing by resolving pixel size mismatches between the eye view and the light source view. it achieves this goal by storing the light source view (i.e., the shadow map for the light source) as a hierarchical grid structure as opposed to the conventional flat structure.
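as a rough illustration of the hierarchical-grid idea just described (a sketch only, not the authors' implementation; the class name, fields, and resolutions are invented), a shadow-map tile can be refined on demand when the resolution requested for a region exceeds what the tile currently stores:

    class AsmNode:
        """quadtree-style shadow-map tile; subdivides when a finer resolution is requested."""
        def __init__(self, resolution, max_resolution=4096):
            self.resolution = resolution          # texels along one side of this tile
            self.max_resolution = max_resolution  # user-specifiable memory/quality bound
            self.children = None                  # four child tiles once refined

        def request(self, required_resolution):
            # refine only while the eye view needs more shadow-map detail here
            if required_resolution <= self.resolution or self.resolution >= self.max_resolution:
                return self
            if self.children is None:
                self.children = [AsmNode(self.resolution * 2, self.max_resolution) for _ in range(4)]
            # a real implementation would pick the child covering the queried region
            return self.children[0].request(required_resolution)

    tile = AsmNode(resolution=256)
    leaf = tile.request(required_resolution=1024)   # forces two levels of refinement
    assert leaf.resolution == 1024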
as pixels are transformed from the eye view to the light source view, the asm is refined to create higher-resolution pieces of the shadow map when needed. this is done by evaluating the contributions of shadow map pixels to the overall image quality. the improvement process is view-driven, progressive, and confined to a user-specifiable memory footprint. we show that asms enable dramatic improvements in shadow quality while maintaining interactive rates. randima fernando sebastian fernandez kavita bala donald p. greenberg ape: learning user's habits to automate repetitive tasks the ape (adaptive programming environment) project focuses on applying machine learning techniques to embed a software assistant into the visualworks smalltalk interactive programming environment. the assistant is able to learn the user's habits and to automatically offer to perform repetitive tasks on the user's behalf. this paper describes our assistant and focuses more particularly on the learning issue. it explains why state-of-the-art machine learning algorithms fail to provide an efficient solution for learning a user's habits, and shows, through experiments on real data, that a new algorithm we have designed for this learning task achieves better results than related algorithms. jean-david ruvini christophe dony cognitive classification janet aisbett greg gibbon automod tutorial matthew w. rohrer multi-level texture caching for 3d graphics hardware traditional graphics hardware architectures implement what we call the _push architecture_ for texture mapping. local memory is dedicated to the accelerator for fast local retrieval of texture during rasterization, and the application is responsible for managing this memory. the push architecture has a bandwidth advantage, but disadvantages of limited texture capacity, escalation of accelerator memory requirements (and therefore cost), and poor memory utilization. the push architecture also requires the programmer to solve the bin-packing problem of managing accelerator memory each frame. more recently, graphics hardware on pc-class machines has moved to an implementation of what we call the _pull architecture._ texture is stored in system memory and downloaded by the accelerator as needed. the pull architecture has advantages of texture capacity, stems the escalation of accelerator memory requirements, and has good memory utilization. it also frees the programmer from accelerator texture memory management. however, the pull architecture suffers from escalating requirements for bandwidth from main memory to the accelerator. in this paper we propose multi-level texture caching to provide the accelerator with the bandwidth advantages of the push architecture combined with the capacity advantages of the pull architecture. we have studied the feasibility of 2-level caching and found the following: (1) significant re-use of texture between frames; (2) l2 caching requires significantly less memory than the push architecture; (3) l2 caching requires significantly less bandwidth from host memory than the pull architecture; (4) l2 caching enables implementation of smaller l1 caches that would otherwise bandwidth-limit accelerators on the workloads in this paper. results suggest that an l2 cache achieves the original advantage of the pull architecture --- stemming the growth of local texture memory --- while at the same time stemming the current explosion in demand for texture bandwidth between host memory and the accelerator.
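the two-level arrangement described above can be sketched as follows (an illustrative toy model, not the paper's hardware design; the names, capacities, eviction policy, and access stream are invented). it counts how many texel-block requests are served by each cache level versus fetched over the host bus:

    from collections import OrderedDict

    class TextureCache:
        """tiny lru cache keyed by texture-block id."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.blocks = OrderedDict()

        def lookup(self, block_id):
            if block_id in self.blocks:
                self.blocks.move_to_end(block_id)
                return True
            return False

        def insert(self, block_id):
            self.blocks[block_id] = True
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)   # evict least recently used block

    l1, l2 = TextureCache(capacity=4), TextureCache(capacity=64)
    hits = {"l1": 0, "l2": 0, "host": 0}

    def fetch(block_id):
        if l1.lookup(block_id):
            hits["l1"] += 1
        elif l2.lookup(block_id):
            hits["l2"] += 1          # served locally, no host-bus traffic
            l1.insert(block_id)
        else:
            hits["host"] += 1        # only this path consumes host-to-accelerator bandwidth
            l2.insert(block_id)
            l1.insert(block_id)

    for b in [0, 1, 0, 2, 0, 1, 3, 0]:   # a toy access stream with re-use
        fetch(b)
    print(hits)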
michael cox narendra bhandari michael shantz a generalized approach to document markup text processing and word processing systems typically require users to intersperse additional information in the natural text of the document being processed. this added information, called "markup," serves two purposes: 1. it separates the logical elements of the document; and 2. it specifies the processing functions to be performed on those elements. c. f. goldfarb vrml for urban visualization lee a. belfore rajesh vennam where do intelligent agents come from? cristobal baray kyle wagner filtering by repeated integration paul s. heckbert view-dependent simplification of arbitrary polygonal environments david luebke carl erikson learning from a consistently ignorant teacher one view of computational learning theory is that of a learner acquiring the knowledge of a teacher. we introduce a formal model of learning capturing the idea that teachers may have gaps in their knowledge. the goal of the learner is still to acquire the knowledge of the teacher, but now the learner must also identify the gaps. this is the notion of learning from a consistently ignorant teacher. we consider the impact of knowledge gaps on learning, for example, monotone dnf and d-dimensional boxes, and show that learning is still possible. negatively, we show that knowledge gaps make learning conjunctions of horn clauses as hard as learning dnf. we also present general results describing when known learning algorithms can be used to obtain learning algorithms using a consistently ignorant teacher. michael frazier sally goldman nina mishra leonard pitt en route to more efficient conservative parallel event simulation meng-lin yu global and local deformations of solid primitives new hierarchical solid modeling operations are developed, which simulate twisting, bending, tapering, or similar transformations of geometric objects. the chief result is that the normal vector of an arbitrarily deformed smooth surface can be calculated directly from the surface normal vector of the undeformed surface and a transformation matrix. deformations are easily combined in a hierarchical structure, creating complex objects from simpler ones. the position vectors and normal vectors in the simpler objects are used to calculate the position and normal vectors in the more complex forms; each level in the deformation hierarchy requires an additional matrix multiply for the normal vector calculation. deformations are important and highly intuitive operations which ease the control and rendering of large families of three-dimensional geometric shapes. alan h. barr integrating uncertainty into a language for knowledge based systems fm is an object oriented language designed to serve as a testbed for experiments in the development of conceptual structure in continuous domains. possibilistic truth representation is fully integrated into a language for building knowledge based systems offering support for object, rule, and data-access based programming styles as well as the more traditional procedural form. a prototype implementation of fm has been written in franzlisp and has been used to construct two simple expert systems, one a data structure consultant and the other a knowledge based automobile driver. this paper describes the facilities provided in fm for representing and reasoning with uncertain information. bruce d'ambrosio visual simulation of lightning a method for rendering lightning using conventional raytracing techniques is discussed.
the approach taken is directed at producing aesthetic images for animation, rather than providing a realistic physically based model for rendering. a particle system is used to generate the path of the lightning channel, and subsequently to animate the lightning. a technique, using implicit surfaces, is introduced for illuminating objects struck by lightning. todd reed brian wyvill separating world and regulation knowledge: where is the logic joost breuker nienke den haan edge inference with applications to antialiasing an edge, when point-sampled for display by a raster device and not aligned with a display axis, appears as a staircase. this common aliasing artifact often occurs in computer images generated by two- and three-dimensional algorithms. the precise edge information often is no longer available but, from the set of vertical and horizontal segments which form the staircase, an approximation to the original edge with a precision beyond that of the raster may be inferred. this constitutes a smoothing of the staircase edge. among other applications, the inferred edges may be used to reshade the pixels they intersect, thereby antialiasing the inferred edges. the antialiased inferred edges prove a more attractive approximation to the real edges than their aliased counterparts. presented here are algorithms for the detection and smoothing of edges and the filtering of an image in accordance with the inferred edges. jules bloomenthal a multimodel methodology for qualitative model engineering qualitative models arising in the artificial intelligence domain often concern real systems that are difficult to represent with traditional means. however, some promise for dealing with such systems is offered by research in simulation methodology. such research produces models that combine both continuous and discrete-event formalisms. nevertheless, the aims and approaches of the ai and the simulation communities remain rather mutually ill-understood. consequently, there is a need to bridge theory and methodology in order to have a uniform language when either analyzing or reasoning about physical systems. this article introduces a methodology and formalism for developing multiple, cooperative models of physical systems of the type studied in qualitative physics. the formalism combines discrete-event and continuous models and offers an approach to building intelligent machines capable of physical modeling and reasoning. paul a. fishwick bernard p. zeigler cino thomas inesi an expert system for the application of import and export regulations g. van nevel f. balfroid r. venken spaceprobe: a system for representing complex knowledge spaceprobe is an experimental system for representing and reasoning about such things as beliefs, wants, fictions, hypotheses (both possibilities and counterfactuals), situations, generalizations, and time in a uniform and principled way. it is based on the theory of knowledge partitioning and simulative reasoning [2, 4, 5]. j dinsmore peripherality based level of detail switching as a visualization enhancement of high-risk simulations gerald pitts daniel cornell lara needs seat pascal vuong majid loukil sophie bordone a bayesian platform for automating scientific induction (dissertation) kevin b. korb smp - a symbolic manipulation program smp is a new general-purpose symbolic manipulation computer program which has been developed during the past year by the authors, with help from g.c. fox, j.m. greif, e.d. mjolsness, l.j. romans, t. shaw and a.e. terrano.
the primary motivation for the construction of the program was the necessity of performing very complicated algebraic manipulations in certain areas of theoretical physics. the need to deal with advanced mathematical constructs required the program to be of great generality. in addition, the anticipated size of the calculations demanded that the program should operate quickly and be capable of handling very large amounts of data. the resulting program is expected to be valuable in a wide variety of applications. in this paper, we describe some of the basic concepts and principles of smp. the extensive capabilities of smp are described, with examples, in the "smp handbook" (available on request from the authors). chris a. cole stephen wolfram simulation of multiple time-pressured agents scott d. anderson automating transfer function design for comprehensible volume rendering based on 3d field topology analysis (case study) this paper describes initial results of a 3d field topology analysis for automating transfer function design aiming at comprehensible volume rendering. the conventional reeb graph-based approach to describing topological features of 3d surfaces is extended to capture the topological skeleton of a volumetric field. based on the analysis result, which is represented in the form of a hyper reeb graph, a procedure is proposed for designing appropriate color/opacity transfer functions. two analytic volume datasets are used to preliminarily prove the feasibility of the present design methodology. issei fujishiro taeko azuma yuriko takeshima a global synchronization network for a non-deterministic simulation architecture marc bumble lee coraor reconstruction of solids j. d. boissonnat knowledge-based learning integrating acquisition and learning empirical learning algorithms are hampered by their inability to use domain knowledge to guide the induction of new rules. this paper describes knowledge-based learning, an approach to learning that selects the examples and relevant attributes for an empirical algorithm. knowledge-based learning can be used for developing rules for engineering expert systems. engineers often have some rules for problem solving, but also many experiences (examples) that facilitate solving problems. knowledge-based learning systems are able to use both forms of knowledge. bradley l. whitehall robert e. stepp stephen c-y. lu algernon - a tractable system for knowledge-representation j. m. crawford b. j. kuipers on the complexity of teaching sally a. goldman michael j. kearns layered disclosure: why is the agent doing what it's doing? peter stone patrick riley manuela veloso light-water interaction using backward beam tracing mark watt memory access patterns of occlusion-compatible 3d image warping william r. mark gary bishop continuous tone representation of three-dimensional objects taking account of shadows and interreflection tomoyuki nishita eihachiro nakamae interactive graphics and discrete event simulation languages (panel session) julian reitman okies: a troubleshooter in the factory okies is an expert system that troubleshoots newly assembled at&t 3b2 computer systems. all at&t 3b2 models and configurations are analyzed by okies. the expert system uses an architecture-based design to apply the same knowledge to different machines. an architectural model of the machine is constructed when the session begins. this model is used to determine which tests are applicable, the components that compose the machine, and how the machine should be fixed.
okies was built as a production system using the ops/83[1] language. all inference is done by matching. no search or backtracking is performed. the first okies prototype used rules generated by a conceptual clustering system. a diagnosis is interactively developed by first having the user pick among problem descriptions. the expert system then refines this by asking the user questions. for instance, the user is asked to examine hardware connections, run tests and report on error messages. if the expert system still can not classify the problem it requests further tests or presents a new problem classification. after determining the problem a treatment is prescribed. the treatment depends on the problem found, the machine's configuration, and the machine's prior history. the current organization of okies is presented along with a description of the process by which it was built. initially the system was developed ad hoc with structure later imposed. specifically, generalization and decision trees were used to organize the knowledge. douglas gordin douglas foxvog james rowland pamela surko gregg vesonder snap-dragging eric a. bier maureen c. stone volumetric shape description of range data using "blobby model" shigeru muraki a synergy of agent components: social comparison for failure detection gal a. kaminka milind tambe directional flow visualization of vector fields ed boring alex pang justice: a judicial search tool using intelligent concept extraction a legal knowledge based system called justice is presented which provides conceptual information retrieval for legal cases. justice can identify heterogeneous representations of concepts across all major australian jurisdictions. the knowledge representation scheme used for legal and common sense concepts is inspired by human processes for the identification of concepts and the expected order and location of concepts. these are supported by flexible search functions and various string utilities. justice is a client-based legal software agent which works with both plaintext and html representations of legal cases over file systems, and the world wide web. in creating justice an ontology for legal cases was developed, and is implicit within justice. further, the identification of concepts within data is shown to be a process enabling conceptual information retrieval and search, conceptualised summarisation, automated statistical analysis, and the conversion of informal documents into formalised semi-structured representations. justice was tested on the precision, recall and usefulness of its concept identifications; achieving good results. the results show the promise of the approach and establish justice as an intelligent legal research aid offering improved multifaceted access to the concepts within legal cases. james osborn leon sterling rendering techniques: past, present and future alan watt from coarse to fine skin and face detection a method for fine skin and face detection is described that starts from a coarse color segmentation. some regions represent parts of human skin and are selected by minimizing an error between the color distribution of each region and the output of a compression decompression neural network, which learns skin color distribution for several populations of different ethnicity. this ann is used to find a collection of skin regions, which is used in a second learning step to provide parameters for a gaussian mixture model. 
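the second learning step mentioned above can be illustrated with a small sketch (not the authors' code; the training data, component counts, and skin prior are placeholders): fit one gaussian mixture to skin-colored pixels and one to non-skin pixels, then classify a pixel by comparing posterior scores.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # placeholder training data: rows are pixel colors in some color space
    skin_pixels = rng.normal(loc=[0.6, 0.4, 0.3], scale=0.05, size=(500, 3))
    other_pixels = rng.uniform(0.0, 1.0, size=(500, 3))

    skin_gmm = GaussianMixture(n_components=3, random_state=0).fit(skin_pixels)
    other_gmm = GaussianMixture(n_components=3, random_state=0).fit(other_pixels)

    def is_skin(pixel, skin_prior=0.3):
        """bayes decision: compare log p(color|skin) + log prior against the non-skin alternative."""
        x = np.asarray(pixel, dtype=float).reshape(1, -1)
        log_skin = skin_gmm.score_samples(x)[0] + np.log(skin_prior)
        log_other = other_gmm.score_samples(x)[0] + np.log(1.0 - skin_prior)
        return log_skin > log_other

    print(is_skin([0.6, 0.4, 0.3]), is_skin([0.05, 0.9, 0.9]))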
a finer classification is performed using a bayesian framework and makes the skin and face detection invariant to scale and lighting conditions. finally, a face shape based model is used to decide whether a skin region is a face or not. hichem sahbi nozha boujemaa perlin noise pixel shaders while working on a method for supporting real-time procedural solid texturing, we developed a general purpose multipass pixel shader to generate the perlin noise function. we implemented this algorithm on sgi workstations using accelerated opengl pixelmap and pixeltransfer operations, achieving a rate of 2.5 hz for a 256x256 image. we also implemented the noise algorithm on the nvidia geforce2 using register combiners. our register combiner implementation required 375 passes, but ran at 1.3 hz. this exercise illustrated a variety of abilities and shortcomings of current graphics hardware. the paper concludes with an exploration of directions for expanding pixel shading hardware to further support iterative multipass pixel-shader applications. john c. hart structured reactive controllers: controlling robots that perform everyday activity michael beetz interactive beautification: a technique for rapid geometric design takeo igarashi satoshi matsuoka sachiko kawachiya hidehiko tanaka a complete simplification package for the absolute value function in reduce h caprasse generating automatically tuned bitmaps from outlines consider the problem of generating bitmaps from character shapes given as outlines. the obvious scan-conversion process does not produce acceptable results unless important features such as stem widths are carefully controlled during the scan-conversion process. this paper describes a method for automatically extracting the necessary feature information and generating high-quality bitmaps without resorting to hand editing. almost all of the work is done in a preprocessing step, the result of which is an intermediate form that can be quickly converted into bitmaps once the font size and device resolution are known. a heuristically defined system of linear equations describes how the ideal outlines should be distorted in order to produce the best possible results when scan converted in a straightforward manner. the lovász basis reduction algorithm then reduces the system of equations to a form that makes it easy to find an approximate solution subject to the constraint that some variables must be integers. the heuristic information is of such a general nature that it applies equally well to roman fonts and japanese kanji. john d. hobby using split event sets to form and schedule event combinations in discrete event simulation n. manjikian w. m. loucks an introduction to quest martin r. barnes virtual voyage: interactive navigation in the human colon lichan hong shigeru muraki arie kaufman dirk bartz taosong he a personal news agent that talks, learns and explains daniel billsus michael j. pazzani databases and artificial intelligence: enabling technologies for simulation modeling martha a. centeno charles r. standridge the empirical study of knowledge elicitation techniques nigel shadbolt a. mike burton with a wysh and a prayer: an experiment in cooperative development of legal knowledgebases this paper describes an ongoing experiment in collaborative construction of legal knowledgebases over the world wide web. the home page for this research is at http://wysh.austlii.edu.au, and further papers on this work may be found there.
graham greenleaf philip chung daniel austin russell allen andrew mowbray quadril: a computer language for the description of quadric-surface bodies most man-made objects can be closely approximated by bodies whose surfaces are composed of portions of second-order (quadric) surfaces. these surfaces include elliptic, hyperbolic, and parabolic cylinders, as well as quadric cones, paraboloids, hyperboloids, ellipsoids, and pairs of planes. simple planes (first-order surfaces) may be included as degenerate quadric surfaces. because these quadric-surface bodies are so useful for modelling man-made objects, it is important that any computer-aided design (cad) system be able to work with such bodies. the "quadril" language described here was designed to accept descriptions of quadric-surface bodies in character-string form. quadril has a mixture of english-like and algebraic syntax. it may be used to specify quadric-surface bodies and then to display them on various media. quadril will accept descriptions of quadric-surface bodies either as "volumetric" combinations of basic bodies, or as boolean functions of bounding surfaces. english-like syntax is used for specifying what surfaces and basic bodies are used, while algebraic syntax is used to transform the canonical forms of the surfaces or bodies into the shape, position, and orientation that the user desires. volumetric combination of bodies involves the operations of union (+), intersection (*), and subtraction (-). boolean specification of volumes is in terms of a boolean tree with the bounding surfaces as leaf nodes. the tree is expressed as a character string. quadril permits using user-created "structures" as component bodies ("objects") in greater structures. the display of the quadric-surface bodies may also be specified in quadril. the user is considered fixed in space, while the body is transformed to give the desired view. joshua zev levin news lisa meeden voting for movies: the anatomy of a recommender system sumit ghosh manisha mundhe karina hernandez sandip sen ray tracing volume densities this paper presents new algorithms to trace objects represented by densities within a volume grid, e.g. clouds, fog, flames, dust, particle systems. we develop the light scattering equations, discuss previous methods of solution, and present a new approximate solution to the full three-dimensional radiative scattering problem suitable for use in computer graphics. additionally we review dynamical models for clouds used to make an animated movie. james t. kajiya brian p von herzen matchmaker: manifold breps for non-manifold r-sets jarek rossignac david cardoze computer gaming's new worlds jeff minter use of seminar gaming to specify and validate simulation models seminar gaming can be a useful method to use in specifying and validating a simulation model. it provides an interactive forum where a real or proposed system associated with a complex problem domain can by systematically studied by a variety of expert participants. it allows the proper integration of various technical, operational, and social/political considerations into the specification of the simulation model. due to the visibility of the gaming process and the personal involvement of expert participants, a seminar game can contribute to model validation simultaneously with its specification. the method is especially useful in establishing model credibility and acceptability from the outset of its development. 
this paper describes the seminar gaming process and its application in specifying a simulation model to varying levels of detail. the contribution of the process to model validation is outlined. edward a. davis decimation of triangle meshes william j. schroeder jonathan a. zarge william e. lorensen integrating distributed simulation objects joseph a. heim synchronous relaxation for parallel simulations with applications to circuit-switched networks synchronous relaxation, a new, general-purpose, efficient method for parallel simulation, is proposed. the method is applied to obtain a new parallel algorithm for simulating large circuit-switched communication networks. to show that the synchronous relaxation method is efficient, we present the results of circuit-switched network simulation experiments, and analytic approximations derived from a mathematical model of the simulation method. stephen g. eick albert g. greenberg boris d. lubachevsky alan weiss fuzzy information processing and robotics research from the time the term "robot" was conceived by the czech novelist karel capek, human beings have dreamed of creating a machine with stamina, durability and a form of intelligence enabling it to perform a variety of tasks skillfully, perhaps in a hostile environment demanding precision, reliability and repetition. such a machine would provide many useful services on a day-to-day basis, resembling man with its integration of (artificial) intelligence and the human-like abilities of sight, tactile sensing, voice recognition and others. a robot with all of these characteristics is not yet in existence and awaits the coming of new technologies to support it; however, it is conceivable that such an "intelligent" robot may be readily available during the next two decades. numerically controlled machines have been called by some researchers the "first generation" robots, and those available commercially (such as in the auto industry), the second generation. robots equipped with various types of sensors with feedback or adaptive controllers are the subject of current research and comprise what might be called the third generation robots. the capabilities of future generations of robots appear to be entirely subject to our imagination, and the availability of new technologies. it is perhaps not too far-fetched to think of a robot with the "sensitivity" required to give injections to patients in a hospital. a robot might be programmed to roam the corridors of a hotel, vacuuming or polishing floors; snake-like robots might be used to grasp an object of any shape and hardness without dropping it and without structural damage to it. paul p. wang vc dimension of an integrate-and-fire neuron model anthony m. zador barak a. pearlmutter proof animation: reaching new heights in animation nancy j. earle james o. henriksen automated reasoning with legal xml documents we have integrated the jess expert system tool from sandia labs [2] with the xerces xml parser. we submit to this software contracts and court filings for litigation involving those contracts. these are written as per a contract standard submitted to the legal xml standards group [5] and the court filing proposed standards. the software determines if a summary judgment request can be granted based on the submitted affidavits, contracts, and other documents. laurence l. leff system concept development with virtual prototyping james c.
schaaf faye lynn thompson evolutionary co-operative design between human and computer: implementation of "the genetic sculpture park" the genetic sculpture park seeks to blur the distinction between artist and observer and to empower the novice in the creation of complex computer graphic models. each visitor to the park experiences a unique set of forms and engages in a co-operative dialogue with the computer to produce more aesthetically pleasing designs. inspired by darwin's theory of evolution, genetic algorithms are used to allow visitors to 'breed' forms tailored to their own individual sense of aesthetics. this paper recounts investigations into evolutionary design methodologies ("3d head models") and describes their implementation in an interactive java/vrml world. duncan rowland frank biocca generalized implicit functions for computer graphics stan sclaroff alex pentland using an artificial neural system to determine the knowledge base of an expert system this paper gives a mapping of rule based expert systems into artificial neural expert systems. while the mapping is not one-to-one, it does show that the two systems are essentially equivalent. there are, of course, many examples of artificial neural systems that are not expert systems. we can use the reverse mapping of an artificial neural expert system to a rule based expert system to determine the knowledge base of the rule based expert system, i.e., to determine the exact nature of the rules. this yields an automated procedure for determining the knowledge base of an expert system that shows much promise. we have implemented this expert system tool on several larger microcomputers, including an intel sugarcube. the sugarcube implementation is a very natural one. g. m. whitson cathy wu pam taylor 3-d transformations of images in scanline order currently texture mapping onto projections of 3-d surfaces is time consuming and subject to considerable aliasing errors. usually the procedure is to perform some inverse mapping from the area of the pixel onto the surface texture. it is difficult to do this correctly. there is an alternate approach where the texture surface is transformed as a 2-d image until it conforms to a projection of a polygon placed arbitrarily in 3-space. the great advantage of this approach is that the 2-d transformation can be decomposed into two simple transforms, one in horizontal and the other in vertical scanline order. like texture mapping, sophisticated light calculation is also time consuming and difficult to perform correctly on projected polygons. instead of calculating the lighting based on the position of the polygon, lights, and eye, the lights and eye can be transformed to a corresponding position for a unit square which we can consider to be a canonical polygon. after this canonical polygon is correctly textured and shaded, it can be easily conformed to the projection of the 3-d surface. ed catmull alvy ray smith automatic parsing of degenerate quadric-surface intersections in general, two quadric surfaces intersect in a nonsingular quartic space curve. under special circumstances, however, this intersection may "degenerate" into a quartic with a double point, or a composite of lines, conics, and twisted cubics whose degrees, counted over the complex projective domain, sum to four. such degenerate forms are important since they occur with surprising frequency in practice and, unlike the generic case, they admit rational parameterizations.
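one classical way to detect such degeneracies, sketched here for illustration (a standard necessary test on the pencil of the two quadrics, not necessarily the exact criterion the paper derives), is to write each quadric as a symmetric 4x4 matrix and examine the quartic det(λa + b); a vanishing discriminant signals a repeated root and hence a non-generic intersection, as with the tangent sphere/cylinder pair below:

    import sympy as sp

    lam = sp.symbols("lam")

    # unit sphere x^2 + y^2 + z^2 - 1 = 0 in homogeneous matrix form
    A = sp.diag(1, 1, 1, -1)
    # unit cylinder x^2 + y^2 - 1 = 0, tangent to the sphere along a circle
    B = sp.diag(1, 1, 0, -1)

    pencil = sp.expand((lam * A + B).det())   # a degree-4 polynomial in lam
    print(sp.factor(pencil))                  # -lam*(lam + 1)**3: a repeated root
    print(sp.discriminant(pencil, lam))       # 0 indicates a degenerate intersection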
invoking concepts from classical algebraic geometry, we formulate the condition for a degenerate intersection in terms of the vanishing of a polynomial expression in the quadric coefficients. when this is satisfied, we apply a multivariate polynomial factorization algorithm to the projecting cone of the intersection curve. factors of this cone which correspond to intersection components "at infinity" may be removed a priori. a careful examination of the remaining cone factors then facilitates the identification and parameterization of the various real, affine intersection elements that may arise: isolated points, lines, conics, cubics, and singular quartics. the procedure is essentially automatic (avoiding the tedium of case- by-case analyses), encompasses the full range of quadric forms, and is amenable to implementation in exact (symbolic) arithmetic. r. t. farouki c. neff m. a. o'conner supernova bob hoffman policy based agent management using conversation patterns in this paper we provide a framework for building management services for software agents using conversation patterns. these patterns classify agent interaction, using the principles of object oriented software design patterns, encapsulating pure communication requirements and responsibilities. role theory and reusable policy specifications regulate the way agents participate, providing a rich source of information for conversation management. the methodology promotes platform independence and fits the needs of a modular, distributed environment. so, management services use the plug- and- play concept. we also introduce the concept of co- operation patterns, which are built upon conversation patterns, but also describe the social relationships between agents based on beliefs, desires and intentions. christos stergiou geert arys achieving reliability in simulation software ronald c. van wagenen charles r. harrell state of the art in parallel simulation richard fujimoto david nicol learning form-meaning mappings for language nancy chang an attribute binding model an attribute binding model is presented and discussed to illustrate the use of such reference models in the ansi x3h3 graphics software standardization effort. the model helps in defining and illustrating issues, and in explaining the proposed standard to those who will accept and use it. using the model components and attributes can be categorized and characterized. michael t. garrett dissolving descartes: perception and the construction of reality (address) mark pesce perceptual grouping and attention in a multi-agent world randall w. hill large-scale machine translation: an interlingua approach deryle w. lonsdale alexander m. franz john r. r. leavitt immersive 4d visualization of complex dynamics estela a. gavosto james r. miller john sheu inference propagation in emitter, system hierarchies emitter and system hierarchies are represented by inference nets and propositional relationships. emitters are the primitive objects of the domain and systems consist of relationships among emitters. evidence gathered concerning the identification of emitters must be used to classify both emitters and systems. evidential reasoning and inference nets are used to to combine information at each level. methods of direct and indirect transfer of evidence between levels are presented. t sudkamp specifying composite illustrations with communicative goals ibis (intent-based illustration system) generates illustrations automatically, guided by communicative goals. 
communicative goals specify that particular properties of objects, such as their color, size, or location, are to be conveyed in the illustration. ibis is intended to be part of an interactive multimedia explanation generation system. it has access to a knowledge base that contains a collection of objects, including information about their geometric properties, material, and location. as the goals are interpreted by a rule-based control component, the system generates a precise definition of the final illustration. if ibis determines that a set of goals cannot be satisfied in a single picture, then it attempts to create a composite illustration that has multiple viewports. for example, a composite illustration may contain a nested inset illustration showing an object in greater detail than is possible in the parent picture. each component illustration is defined by its placement, size, viewing specification, lighting specification, and list of objects to be displayed and their graphical style. d. d. seligmann s. feiner dodge perfection terry windell abbe daniel irene kim rafael castelblanco tom wichitsripornkul clay budin bob hoffman mark voelpel an overview of the lers1 learning system j. w. grzymala-busse planning and resource allocation for hard real-time, fault-tolerant plan execution ella m. atkins tarek f. abdelzaher kang g. shin edmund h. durfee priority rendering with a virtual reality address recalculation pipeline virtual reality systems are placing never-before-seen demands on computer graphics hardware, yet few graphics systems are designed specifically for virtual reality. an address recalculation pipeline is a graphics display controller specifically designed for use with head-mounted virtual reality systems. it performs orientation viewport mapping after rendering, which means the user's head orientation does not need to be known accurately until less than a microsecond before the first pixel of an update frame is actually sent to the head-mounted display device. as a result, the user-perceived latency to head rotations is minimal. using such a controller with image composition, it is possible to render different objects within the world at different rates, making it possible to concentrate the available rendering power on the sections of the scene that change the most. the concentration of rendering power is known as priority rendering. reductions of one order of magnitude in the number of objects rendered for an entire scene have been observed when using priority rendering. when non-interactive background scenes rendered with a high-quality rendering algorithm such as ray tracing are added to the world, highly realistic virtual worlds are possible with little or no latency. matthew regan ronald pose taking the work out of simulation modeling: an application of technology integration gregory s. baker an interactive pattern recognition laboratory (iprl) this paper describes an interactive pattern recognition laboratory. the laboratory was designed for both research and teaching. for the researcher, it provides standard pattern recognition functions, a hierarchically organized pattern recognition data base, and a multidimensional graphic display capability. for the student it provides, in addition to the above capabilities, a vehicle for developing new pattern recognition algorithms. in addition to not having to develop support software, the student may compare the performance of his algorithms in the same environment as the existing ones.
stavros christodoulakis a neural network as a quality control monitor of an intelligent system ray r. hashemi john r. talburt meena velusamy 1991 steven a. coons award lecture andries van dam an introduction to extend david krahl algebraic properties of knowledge representation systems new concepts of knowledge representation systems, such as object and attribute factors, connectedness relations, and seven kinds of homomorphisms of knowledge representation systems, are introduced. some properties of these homomorphisms, related to factors and connectedness, are shown. the theory presented here may be used to aggregate sets of objects, attributes, and descriptors of the original system in order to produce a simpler system, which preserves the description function of the original system. in some applications, the new system is a sufficient representation of the original one. j w grzymala-busse ai-techniques and concept analysis this paper shows a different way of using ai techniques that turned out to be very useful in a particular project and may offer potential for the future. anja oskamp from logic to dialectics in legal argument henry prakken a tutorial on tess: the extended simulation system tess, the extended simulation system, integrates simulation, data management and graphics capabilities to provide a framework for performing simulation projects. capabilities for building slam ii networks graphically, animating simulation runs without programming, and generating graphs of all simulation results are provided. report generation and the post-run analysis of simulation results are included. forms input for simulation run controls and user-defined data are provided. the fourth generation tess language provides a single user interface to all tess capabilities. charles r. standridge steven a. walker david vaughan assisted articulation of closed polygonal models marek teichmann seth teller dynamic programming as graph searching: an algebraic approach stefania gnesi ugo montanari alberto martelli uniform frequency images: adding geometry to images to produce space-efficient textures adam hunter jonathan d. cohen style sheet support for hypermedia documents jacco van ossenbruggen lynda hardman lloyd rutledge anton eliëns symbiotic jobscheduling for a simultaneous multithreading processor allan snavely dean m. tullsen task-structure analysis for knowledge modeling b. chandrasekaran todd r. johnson jack w. smith visualization environments: short-term potential critique of siman as a programming language (abstract only) this paper examines the simulation language siman, describes its capabilities and critiques it as a programming language. siman is one of the latest of the fourth generation general purpose simulation languages. its design was oriented toward the simulation of production and manufacturing systems. we discuss the structural organization of the language including its control, data, name and syntactic structures. the basic program in siman consists of a simulation model and a simulation experiment which are compiled separately and then linked. this is the correct format for a simulation language but in some cases there is not enough power in the experimental frame to appropriately control the simulation model. in other cases, the strong checking of parameter sets in the experimental frame makes expansion of the model very difficult. there are some problems about syntactic consistency and regularity.
there is a blocks program available for the pc version of siman that greatly eases the creation of the model part of the siman simulation. this program is not available for the mainframe version of siman and the resulting confusion about comma, semicolon, and colon delimiters causes frustrating little bugs. we also discuss many of the strengths of the language both in terms of simulation and in terms of language design principles. examples used to illustrate the power and weaknesses of the language will include simulation of several computer systems that utilize various priority scheduling schemes for input jobs. we examine cpu utilization, various statistics on the delays by type of job and maximum delays. the computer systems will be modeled with various assumptions on the cpu speed, the distribution of the job classes, the required time of each job class and we allow for cpu preemption by higher priority jobs. a strength of the language is that once the basic model has been generated all of the above simulations can be done by trivial modifications in the experiment file. in fact, the model need not be recompiled. as a very simple example of siman's power we will model the queuing problems for a terminal room where some of the terminals may be designated as express only. another example will examine the delay characteristic for an entry control facility where the decision variables are the number of entry booths that must be constructed. we also must determine if the entry testing procedure is sufficiently quick and accurate to accommodate the present arrival patterns of the employees given the physical constraints that put an absolute upper bound on the number of booths. the major strength of the siman simulation language is for simulation of manufacturing systems. we present another example to show how easily complex manufacturing shops can be simulated. this example will also show some of the deficiencies in data entry and manipulation for the language. siman also does continuous simulation and we present a epidemic simulation demonstrating this capability. siman, in general, is a good differential equation solver that also provides adequate graphical output for the presentation of the simulation results. finally, we will also discuss some experiences we have had using siman as part of a faculty computer literacy program. some participants with almost no computer experience were able to develop moderately sophisticated models in their area of expertise that were very useful to them in research or teaching. david j. thuente efficient algorithms for local and global accessibility shading this paper discusses the use of two different approaches for computing the "accessibility" of a surface. these metrics characterize how easily a surface may be touched by a spherical probe. the paper also presents various acceleration techniques for accessibility. the idea of surface accessibility is extended to include "global accessibility" which measures the ability of a spherical probe to enter a structure from outside as well as to fit locally on the surface. the visual effect of shading using accessibility is shown to resemble the patina on certain tarnished surfaces which have then been cleaned. gavin miller volume rendering based interactive navigation within the human colon (case study) we present an interactive navigation system for virtual colonoscopy, which is based solely on high performance volume rendering. 
previous colonic navigation systems have employed either a surface rendering or a z-buffer-assisted volume rendering method that depends on the surface rendering results. our method is a fast direct volume rendering technique that exploits distance information stored in the potential field of the camera control model, and is parallelized on a multiprocessor. experiments have been conducted on both a simulated pipe and patients' data sets acquired with a ct scanner. ming wan qingyu tang arie kaufman zhengrong liang mark wax multiresolution rendering with displacement mapping stefan gumhold tobias huttner subdivision schemes for fluid flow henrik weimer joe warren interactive modification of real and virtual lights for augmented reality celine loscos george drettakis luc robert evaluating planners, plans, and planning agents martha e. pollack an extension of liouville's theorem on integration in finite terms in this paper we give an extension of the liouville theorem [risc69, p. 169] and give a number of examples which show that integration with special functions involves some phenomena that do not occur in integration with the elementary functions alone. our main result generalizes liouville's theorem by allowing, in addition to the elementary functions, special functions such as the error function, fresnel integrals and the logarithmic integral to appear in the integral of an elementary function. the basic conclusion is that these functions, if they appear, appear linearly. m. f. singer b. d. saunders b. f. caviness book preview jennifer bruer a methodology for simulating computer systems simulation languages, while providing the modeler with the essential tools for model development, do not provide well defined philosophies for modeling specific classes of systems. although some languages strongly suggest a particular modeling approach, deriving from a particular world view, a methodology must be developed by the practitioner. a methodology for developing simulation models of computer systems is discussed. in all computer systems there are universal processes which may be broken down into various hardware and software steps. standard model elements which simulate universal communication and input/output processes are explained. other software to support model development and end user model execution is also presented. the methodology presented here has proven to reduce model implementation time, produce more reliable models, and relax modeler training requirements. peter l. haigh overview of the basic image interchange format (biif) george s. carson separation for boundary to csg conversion vadim shapiro donald l. vossler modeling formalisms for dynamic structure systems we present a new concept for a system network to represent systems that are able to undergo structural change. change in structure is defined in general terms, and includes the addition and deletion of systems and the modification of the relations among components. the structure of a system network is stored in the network executive. any change in structure-related information is mapped into modifications in the network structure. based on these concepts, we derive three new system specifications that provide a shorthand notation to specify classes of dynamic structure systems. these new formalisms are: dynamic structure discrete time system, dynamic structure differential equation specified systems, and dynamic structure discrete event system specification. 
we demonstrate that these formalisms are closed under coupling, making hierarchical model construction possible. formalisms are described using set theoretic notation and general systems theory concepts. fernando j. barros jurisconsulto: retrieval in jurisprudencial text bases using juridical terminology in the legal domain, jurisprudence has an important role as a juridical source; its decisions support the application of the law to a concrete case. the problem is that brazilian courts produce an enormous amount of decisions every year, making these text sources ever larger and forcing juridical professionals to spend more time in the search for a relevant decision. sophisticated ai techniques are needed to minimize search time and improve the quality and appropriateness of the retrieved information. this paper describes a case-based approach for the intelligent retrieval of jurisprudencial texts. the approach enables the retrieval of adequate texts with characteristics similar to information supplied by the user in natural language. new documents are automatically included into the knowledge base by extracting relevant information. in order to enable the processing of informal textual knowledge in natural language, a controlled vocabulary and a juridical thesaurus based on common juridical terminology are integrated into the retrieval and extraction process. the approach is based on sentences in criminal proceedings in the domain of brazilian law. tânia c. d'agostini bueno christiane gresse von wangenheim eduardo da silva mattos hugo cesar hoeschl ricardo m. barcia a scan-line hidden surface removal procedure for constructive solid geometry this paper presents a new methodology for resolving visible surface images of solid models derived from boolean combinations of volumetric building blocks. the algorithm introduced here is an extension of well-established scan-line hidden surface removal procedures, and it integrates knowledge of a boolean construction tree in the surface resolution process. several hidden surface coherence properties are discussed in terms of their possible exploitation in the intricate solid model visualization process. while many of the earlier coherence techniques depend on a polygon environment in which surfaces and volumes do not intersect, the boolean process cannot afford that luxury because it is inherently required to handle intersecting volumes and surfaces. initial tests indicate that substantial performance improvements over previous methods can be achieved with the algorithm described in this paper, and that these improvements increase as model complexity increases. an underlying philosophy of a dual solid modeling system is proposed in this paper. it suggests that two solid modelers are necessary to successfully satisfy both analytical precision requirements and user interface visualization requirements. the visual solid modeling task addressed in this paper provides greatly improved response capabilities, as compared to other systems, by striving to optimize the constructive solid geometry (csg) solid model computations specifically for display purposes. peter r atherton delab - a simulation laboratory delab is a simulation laboratory designed to provide support to programmers who build complex simulation programs and to system analysts who use these programs. in this paper we present the structure of the laboratory and report on the current status of the effort to implement it.
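delab and denet are built around a discrete event core; as a point of reference only, the heart of any such simulator is a time-ordered event list, which a minimal python sketch (not denet, with made-up event payloads) might implement as:

```python
import heapq

class Simulator:
    """minimal discrete event engine: events are (time, sequence, action) tuples."""
    def __init__(self):
        self.clock = 0.0
        self._queue = []
        self._seq = 0            # tie-breaker so equal-time events stay ordered

    def schedule(self, delay, action):
        self._seq += 1
        heapq.heappush(self._queue, (self.clock + delay, self._seq, action))

    def run(self, until=float("inf")):
        while self._queue and self._queue[0][0] <= until:
            self.clock, _, action = heapq.heappop(self._queue)
            action(self)         # an action may schedule further events

# toy usage: a source that emits an arrival every 2 time units up to t = 10
def arrival(sim):
    print(f"arrival at t={sim.clock:.1f}")
    if sim.clock + 2.0 <= 10.0:
        sim.schedule(2.0, arrival)

sim = Simulator()
sim.schedule(0.0, arrival)
sim.run(until=10.0)
```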
the laboratory has been implemented in a 'bottom up' fashion. first we have developed the denet simulation language which is a modula-2 based discrete event simulation language. once the language became operational, a database management system was added to the laboratory. for each simulation study a relational database is automatically created. when a simulation terminates it stores a description of the run in the database. the system analyst can later retrieve this data by means of a relational query language. denet has been successfully used in a number of real life simulation studies. the database management system is currently evaluated by a number of researchers in our department who employ it in their simulation studies. the requests, criticism, and encouragement provided by users of both the language and the management system have guided our iterative effort to design and implement an effective simulation laboratory. miron livny plic: bridging the gap between streamlines and lic this paper explores mapping strategies for generating lic-like images from streamlines and streamline-like images from lic. the main contribution of this paper is a technique which we call pseudo-lic or plic. by adjusting a small set of key parameters, plic can generate flow visualizations that span the spectrum of streamline-like to lic-like images. among the advantages of plic are: image quality comparable with lic, performance speedup over lic, use of a template texture that is independent of the size of the flow field, handles the problem of multiple streamlines occupying the same pixel in image space, reduced aliasing, applicability to time varying data sets, and variable speed animation. vivek verma david kao alex pang an importance-driven radiosity algorithm brian e. smits james r. arvo david h. salesin the figure understander: a system for integrating text and diagram input to a knowledge base raman rajagopalan benjamin kuipers feature selection for ensembles david w. opitz analysis of rule sets generated by the cn2, id3, and multiple convergence symbolic learning methods elizabeth m. boll daniel c. st. clair hormone-based control for self-reconfigurable robots wei-min shen yimin lu peter will from range scans to 3d models each year, we see a growing number of 3d range scanning products on the siggraph exhibition floor. you may find yourself asking "how do these technologies work?" and "how can i make use of the shape data they produce?" in this article, i will describe a few of the more common range scanning technologies. then, i will step through a pipeline that takes the range data into a single geometric model and will conclude with a discussion of the future of range scanning. brian curless adapting an agent to a similar environment paul scerri nancy e. reed letters thomas haegele matthias wittmann carolin grosser three-pass affine transforms for volume rendering pat hanrahan text compression as a test for artificial intelligence matthew v. mahoney formalising bio-spatial knowledge there is now a growing literature on qualitative spatial representations covering many aspects of spatial representation including mereology, topology, orientation and distance. in this paper i will briefly outline some of these approaches to qualitative spatial representation and then apply these theories to the task of formalising a non-trivial domain: that of representing cell structure.
the paper is thus a contribution to the evaluation of qualitative spatial representations and spatial ontologies and may form the basis of a bio-informatic information system. anthony g. cohn rendering interactive holographic images mark lucente tinsley a. galyean rayman - no parking francois petavy salad bowl: a carrot's tale michael s. blum real algebraic closure of an ordered field: implementation in axiom renaud rioboo real-time simulation of dust behavior generated by a fast traveling vehicle simulation of physically realistic complex dust behavior is very useful in training, education, art, advertising, and entertainment. there are no published models for real-time simulation of dust behavior generated by a traveling vehicle. in this paper, we use particle systems, computational fluid dynamics, and behavioral simulation techniques to simulate dust behavior in real time. first, we analyze the forces and factors that affect dust generation and the behavior after dust particles are generated. then, we construct physically-based empirical models to generate dust particles and control the behavior accordingly. we further simplify the numerical calculations by dividing dust behavior into three stages, and establishing simplified particle system models for each stage. we employ motion blur, particle blending texture mapping, and other computer graphics techniques to achieve the final results. our contributions include constructing physically-based empirical models to generate dust behavior and achieving simulation of the behavior in real time. jim x. chen xiadong fu j. wegman interactive behaviors for bipedal articulated figures cary b. phillips norman i. badler separable image warping with spatial lookup tables g. wolberg t. e. boult drive-in house takehiko nagakura marlos christedeulides michael webb kent larson methods of colored barcodes creating ivan a. dychka yevgeniya s. sulema general purpose visual simulation system: a functional description john l. bishop osman balci image-based visual hulls in this paper, we describe an efficient image-based approach to computing and shading visual hulls from silhouette image data. our algorithm takes advantage of epipolar geometry and incremental computation to achieve a constant rendering cost per rendered pixel. it does not suffer from the computation complexity, limited resolution, or quantization artifacts of previous volumetric approaches. we demonstrate the use of this algorithm in a real-time virtualized reality application running off a small number of video streams. wojciech matusik chris buehler ramesh raskar steven j. gortler leonard mcmillan time in neural networks j.-c. chappelier a. grumbach imagistic reasoning kenneth yip feng zhao elisha sacks computer graphic modeling of american sign language the essential grammatical information of american sign language (asl) is conveyed through changes in the movement and spatial contouring of the hands and arms. an interactive computer graphic system is described for the analysis and modeling of sign language movement. this system consists of four components. the first component reconstructs actual movements in three dimensions and allows the user to interactively segment and transform the data for later analysis. the second component allows a user to interactively create synthetic signs by specifying angle functions in a jointed model. the third component provides a novel technique for manipulating movement quality independently of spatial path.
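the sign-language abstract above drives a jointed model from angle functions; as a hedged illustration of that idea only (a planar two-link arm rather than the system's full hand/arm model, with invented link lengths and angle functions), forward kinematics from joint-angle functions looks like:

```python
import math

def two_link_fk(theta_shoulder, theta_elbow, l1=0.3, l2=0.25):
    """return elbow and wrist positions of a planar two-link arm.
    angles are in radians; link lengths l1, l2 are made-up constants."""
    elbow = (l1 * math.cos(theta_shoulder), l1 * math.sin(theta_shoulder))
    wrist = (elbow[0] + l2 * math.cos(theta_shoulder + theta_elbow),
             elbow[1] + l2 * math.sin(theta_shoulder + theta_elbow))
    return elbow, wrist

# sample a synthetic "sign": hypothetical angle functions of time drive the joints
for step in range(5):
    t = step / 4.0
    theta_s = 0.5 * math.sin(2 * math.pi * t)
    theta_e = 0.8 * t
    print(t, two_link_fk(theta_s, theta_e))
```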
the fourth component allows the building of complex stimuli and real-time stimulus sequencing for psycholinguistic experiments. the emphasis is on interactive techniques and data structures that allow analysis and modeling of the complex hand and arm movements of american sign language. jeffrey loomis howard poizner ursula bellugi alynn blakemore john hollerbach georges jose grisius toon spri fernando tunon thierry lechien dario scire tim yates sinead walsh bill guischer controlling and augmenting legal inferencing: ysh, a case study if legal inferencing systems are to be used for immediate practical application, they are best constructed by embedding them in other technologies which can assist in augmenting and controlling the course of inferencing. adoption of a (quasi) natural language knowledge representation assists easier development of user interpretative facilities, user control of the course of inferencing and explanation facilities. the paper explains how the datalex workstation software, particularly its inference engine, ysh, implements these approaches. graham greenleaf andrew mowbray annual report shows high activity every year, each director on the siggraph executive committee compiles an annual report for the annual siggraph organization report. to keep the overall report from becoming a book, the siggraph chair must pick highlights from each area. this issue's _computer graphics_ column contains my full annual report as presented to the siggraph executive committee. i hope that it helps to give you a better understanding of the structure of the professional chapters, the events we sponsor at the annual conference and what it is we do over the course of a typical year. please keep in mind that this report is for the 1998-99 year so some information dates back to siggraph 98. as always, if you have any questions or comments, feel free to email me at **lang@siggraph.org.** scott lang kizamu: a system for sculpting digital characters this paper presents kizamu, a computer-based sculpting system for creating digital characters for the entertainment industry. kizamu incorporates a blend of new algorithms, significant technical advances, and novel user interaction paradigms into a system that is both powerful and unique. to meet the demands of high-end digital character design, kizamu addresses three requirements posed to us by a major production studio. first, animators and artists want _digital clay_: a medium with the characteristics of real clay and the advantages of being digital. second, the system should run on standard hardware at interactive rates. finally, the system must accept and generate standard 3d representations thereby enabling integration into an existing animation production pipeline. at the heart of the kizamu system are adaptively sampled distance fields (adfs), a volumetric shape representation with the characteristics required for digital clay. in this paper, we describe the system and present the major research advances in adfs that were required to make kizamu a reality. ronald n. perry sarah f. frisken image moment-based stroke placement michio shiraishi yasushi yamaguchi free-form deformation of solid geometric models thomas w. sederberg scott r. parry real-time interactive graphics in computer gaming scott s.
fisher glen fraser amy jo kim view-independent environment maps wolfgang heidrich hans-peter seidel editing 3d objects without 3d geometry rui yamada mitsuharu ohki symbolic calculation of zero dynamics for nonlinear control systems bram de jager heresy: a virtual image-space 3d rasterization architecture tzi-cker chiueh the integration of subjective and objective data in the animation of human movement animation of human movement can be based either on analog inputs derived directly from actual movements or on symbolic inputs chosen to produce the desired movement. the former type of input can be quite accurate and objective but is a description of the required movement whereas the latter is often quite imprecise and subjective but provides an analysis of the required movements. two existing systems for a computer based animation are being used to explore the problems involved in integrating such inputs. specifically, animation driven by analog signals from electro-goniometers is integrated with animation derived from labanotation commands; the results are illustrated with a short movie. t. w. calvert j. chapman a. patla fuzzy input coding for an artificial neural - network modelling visual speech movements hans-heinrich bothe tiled polygon traversal using half-plane edge functions existing techniques for traversing a polygon generate fragments one (or more) rows or columns at a time. (a fragment is all the information needed to paint one pixel of the polygon.) this order is non-optimal for many operations. for example, most frame buffers are tiled into rectangular pages, and there is a cost associated with accessing a different page. pixel processing is more efficient if all fragments of a polygon on one page are generated before any fragments on a different page. similarly, texture caches have reduced miss rates if fragments are generated in tiles (and even tiles of tiles) whose size depends upon the cache organization. we describe a polygon traversal algorithm that generates fragments in a tiled fashion. that is, it generates all fragments of a polygon within a rectangle (tile) before generating any fragments in another rectangle. for a single level of tiling, our algorithm requires one additional saved context (the values of all interpolator accumulators, such as z depth, red, green, blue, etc.) over a traditional traversal algorithm based upon half-plane edge functions. an additional level of tiling requires another saved context for the special case of rectangle copies, or three more for the general case. we describe how to use this algorithm to generate fragments in an optimal order for several common scenarios. joel mccormack robert mcnamara an application of color graphics to the display of surface curvature in developing a mathematical representation for a surface, designers currently must use line drawing graphics to examine the curvature of a line in a plane, a two-dimensional analysis. by combining a result from differential geometry with the use of color raster graphics, the method described in this paper provides a means for the designer to examine surface curvature, a three- dimensional analysis. in particular, a formulation for the gaussian and average curvatures is given and it is shown how these indicate the presence or absence of protrusions, hollows, etc. in a surface, i.e., how, where, and by how much the surface curves. 
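for reference, the gaussian and mean (average) curvatures that the color-coding abstract above refers to are the standard differential-geometry quantities; in terms of the first fundamental form coefficients (e, f, g) and second fundamental form coefficients (l, m, n) of a parametric surface they read (a textbook statement, not the paper's own notation):

```latex
K = \frac{LN - M^2}{EG - F^2}, \qquad
H = \frac{EN - 2FM + GL}{2\,(EG - F^2)} .
```

the sign of k is what carries the shape information the color map displays: k > 0 at elliptic points (protrusions or hollows), k < 0 at saddle points, and k = 0 at parabolic or flat points.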
showing a fourth variable, curvature in this case, over a three-dimensional surface is difficult, if not impossible with traditional line drawing computer graphics. the method described solves this problem by using color as a fourth dimension. examples are given, including both known shapes (torus) and automotive parts (hood, fender). john c. dill surface simplification using quadric error metrics michael garland paul s. heckbert a language for bitmap manipulation in this paper we propose that bitmaps, or raster images, should be given full citizen status in the world of computer science. we introduce a calculus of bitmap operations and mumble, a programming language appropriate for describing bitmap computations. we illustrate the use of mumble by several interesting graphical applications. we also discuss the structure of bop, an efficient implementation of the bitmap calculus that is the underpinning of our system. leo j. guibas jorge stolfi constrained texture mapping for polygonal meshes recently, time and effort have been devoted to automatic texture mapping. it is possible to study the parameterization function and to describe the texture mapping process in terms of a functional optimization problem. several methods of this type have been proposed to minimize deformations. however, these existing methods suffer from several limitations. for instance, it is difficult to put details of the texture in correspondence with features of the model, since most of the existing methods can only constrain iso-parametric curves. we introduce in this paper a new optimization-based method for parameterizing polygonal meshes with minimum deformations, while enabling the user to interactively define and edit a set of constraints. each user-defined constraint consists of a relation linking a 3d point picked on the surface and a 2d point of the texture. moreover, the non-deformation criterion introduced here can act as an extrapolator, thus making it unnecessary to constrain the border of the surface, in contrast with classic methods. to minimize the criterion, a conjugate gradient algorithm is combined with a compressed representation of sparse matrices, making it possible to achieve a fast convergence. bruno levy progressive compression of arbitrary triangular meshes in this paper we present a mesh compression method based on a multiresolution decomposition whose detail coefficients have a compact representation and thus smaller entropy than the original mesh. given an arbitrary triangular mesh with an irregular connectivity, we use a hierarchical simplification scheme, which generates a multiresolution model. by reversing the process we define a hierarchical progressive refinement process, where a simple prediction plus a correction is used for inserting vertices to form a finer level. we show how the connectivity of an arbitrary triangulation can be encoded efficiently by a coloring technique, and recovered incrementally during the progressive reconstruction of the original mesh. daniel cohen-or david levin offir remez graphics (panel session): its role and limitations john comfort three dimensional terrain modeling and display for environmental assessment k. kaneda f. kato e. nakamae t. nishita h. tanaka takao noguchi links: ai planning resources on the web robert st. amant r. michael young are faithfulness and accuracy necessary? "we are approximate beings…animals don't require precise measurements and high accuracy to function. machines do." 
nahum gershon developing computational models of discretion to build legal knowledge based systems few legal knowledge based systems have been constructed which provide numerical advice. none have been built in discretionary domains. our research, directed towards the domains of sentencing and family law property division has lead to the development of three distinct forms of judicial discretion. to model these different discretionary domains we use diverse artificial intelligence tools including case-based reasoning and knowledge discovery from databases. we carry out a detailed comparison of two discretionary legal knowledge based systems. judge's apprentice is a case- based reasoner which recommends ranges of sentences for convicted israeli rapists and robbers. splitup uses knowledge discovery from databases to learn what percentage of marital property the partners to a divorce in australia will receive. the systems are compared with regard to reasoning, explanation, evaluation and coping with conflicting cases. yaakov hacohen kerner uri schild john zeleznikow integrating perception, action and learning jim antonisse harry wechsler an action-based ontology of legal relations this paper aims at linking the ai notion of action to the notion of legal relation. legal rules concern actions to accomplish or to exclude and states to achieve or to avoid. the body of the rule establishes who and under which conditions must respect the obligation. in this paper, we introduce a definition of _obligation_ and use it as an ontological basis to describe the legal relations appearing in the a-hohfeld language. guido boella lyda favali leonardo lesmo creating full view panoramic image mosaics and environment maps richard szeliski heung-yeung shum hybrid propositional encodings of planning amol d. mali designing special-purpose input devices for an increasing number of applications, we may be reaching the point of diminishing returns with general purpose computer input devices, such as the keyboard and mouse. at digital image design incorporated (did) we've created special purpose input devices to perform tasks that are commonly addressed with software and general purpose devices alone. these more specific devices have led to significant advantages in accomplishing the tasks, and we've developed an approach to designing these devices that i hope will help others in doing similar work.when someone says "build us a system to speed up computer character animation," the typical solution is purely software. this is not necessarily optimal or even cost effective, and the only way to determine that is to consider developing a hardware device during the initial project exploration. let's go through the process of designing a special purpose device together. real-world details are critical to this process, so we'll do it in the problem domain of animation, where did and others have previously developed special purpose input devices. i'll generalize where appropriate. w. bradford paley maintenance planning and scheduling using network simulations there are many opportunities for applying modeling and simulation techniques in an air force maintenance depot. this paper provides an overview of the scope and types of maintenance done at sacramento air logistics center and describes some of our q-gert and computer-generated graphics uses. the paper emphasizes the uses in the planning and scheduling of our aircraft maintenance activities and identifies areas for future application of simulations. robert e. 
mortenson slam iitm tutorial in 1979, the state-of-the-art in simulation languages was extended with the introduction of slamtm, the first language that provided three different modeling viewpoints in a single integrated framework.(5) slam permits discrete event, continuous, and network modeling perspectives and/or any combination of the three to be implemented in a single model. slam represented a significant breakthrough in simulation methods development, as it provided the flexibility to use the most appropriate world view for the system being studied. this improved upon the more traditional situation in which simulation modelers were restricted to the modeling perspective embodied in the language they were using. the success of this new approach was readily apparent. jean j. o'reilly a. alan b. pritsker agent design patterns: elements of agent application design yariv aridor danny b. lange "they all suck!" (wheras we've been running too fast to do anything but exhale) carrie sim generating photomosaics: an empirical study nicholas tran agent aided aircraft maintenance onn shehory katia sycara gita sukthankar vick mukherjee new development of optimal computing budget allocation for discrete event simulation hsiao-chang chen liyi dai chun-hung chen enver yucesan symbolic evaluation in the nonlinear mechanical systems d. m. klimov v. m. rudenko v. v. leonov analysis of bottling and storage operations at brown forman distillers corporation this paper describes a simulation model developed for the analysis of the bottling and storage operations and facilities of the brown- forman distillers corporation in louisville, kentucky. the paper evolved from a continuing study of the bottling and storage operations at the brown forman bottling facility, which began in the fall of 1978. s. m. alexander g. r. weckman simulation with gpss/h robert c. crain verification and validation of simulation models robert g. sargent second-order surface analysis using hybrid symbolic and numeric operators results from analyzing the curvature of a surface can be used to improve the implementation, efficiency, and effectiveness of manufacturing and visualization of sculptured surfaces. we develop a robust method using hybrid symbolic and numeric operators to create trimmed surfaces, each of which is solely convex, concave, or saddle and partitions the original surface. the same method is also used to identify regions whose curvature lies within prespecified bounds. gershon elber elaine cohen the jester paul charette mark sagar greg decamp jessica vallot decomposing polygon meshes for interactive applications xuetao li tong wing toon zhiyong huang concordance programs for literacy analysis ian lancashire anaphora resolution in the extraction of treatment history language from court opinions by partial parsing this paper describes an information extraction system that identifies and analyzes statements in court opinions regarding the value of cited cases as precedents. the system employs partial parsing techniques in conjunction with a semantic grammar to identify the language associated with such rulings. the most novel aspect of the system lies in its anaphora resolution module that combines syntactic, semantic, and domain-specific inference rules with local discourse information to link such language to case references. 
khalid al-kofahi brian grom peter jackson hardware accelerated rendering of antialiasing using a modified a-buffer algorithm stephanie winner mike kelley brent pease bill rivard alex yen topics in document research david m. levy seeing the forest for the trees: hierarchical displays of hypertext structures most recent hypertext systems support hierarchy only as a restricted subset of directed graph structure. consequently they do not provide many of the capabilities for graphical information hiding and structure manipulation that a tree makes possible. this paper describes display techniques developed for igd, a hypertext system that supports the creation of large graphical documents whose arbitrary directed graph structure is embedded in a strict hierarchy. igd offers the full generality of arbitrary keyworded links, while simultaneously allowing hierarchies to be easily manipulated and displayed with much of their structural detail selectively abstracted. steven feiner a qualitative simulation approach for fuzzy dynamical models this article deals with simulation of approximate models of dynamic systems. we propose an approach that is appropriate when the uncertainty intrinsic in some models cannot be reduced by traditional identification techniques, due to the impossibility of gathering experimental data about the system itself. the article presents a methodology for qualitative modeling and simulation of approximately known systems. the proposed solution is based on the fuzzy sets theory, extending the power of traditional numerical-logical methods. we have implemented a fuzzy simulator that integrates a fuzzy, qualitative approach and traditional, quantitative methods. andrea bonarini gianluca bontempi light field rendering marc levoy pat hanrahan virtual imaginations require real bodies dena elisabeth eber simulation of expansion alternatives for the erika engine-block finishing facility a simulation of the erika engine-block finishing operations at michigan casting center, flat rock, michigan aided ford motor company management in evaluating expansion alternatives. various problems associated with simulation-language choice, data collection, and model validation are discussed. outcomes of the simulation for various expansion alternatives are presented. edward j. williams new uses of interactive 3d for museums (panel session) (abstract only) robert temel kris covarrubias jim spadaccini manfred koob volume seedlings michael f. cohen james painter mihir mehta kwan-liu ma clock aaron lim knowledge acquisition in a machine fault diagnosis shell m. krishnamurthi a. j. underbrink noise-tolerant distribution-free learning of general geometric concepts nader h. bshouty sally a. goldman h. david mathias subhash suri hisao tamaki the impact of diversity on performance in multi-robot foraging tucker balch top-down hierarchical planning of coherent visual discourse michelle x. zhou steven k. feiner approximating polyhedra with spheres for time-critical collision detection philip m. hubbard project t.o.p.i.c. corporate w. germany univ. of constance selecting partners bikramjit banerjee sandip sen first experiences with the sb-one knowledge representation workbench in natural-language applications alfred kobsa developing the interactive first person p.o.v.: using characters as a sensory lens ella tallyn john f. meech liouvillian solutions of third order linear differential equations: new bounds and necessary conditions michael f. singer felix ulmer passages john s. 
banks a performance study of the cancelback protocol for time warp this work presents results from an experimental evaluation of the space-time tradeoffs in time warp augmented with the cancelback protocol for memory management. an implementation of the cancelback protocol on time warp is described that executes on a shared memory multiprocessor, a 32 processor kendall square research machine (ksr1). the implementation supports canceling back more than one object when memory has been exhausted. the limited memory performance of the system is evaluated for three different workloads with varying degrees of symmetry. these workloads provide interesting stress cases for evaluating limited memory behavior. we, however, make certain simplifying assumptions (e.g., uniform memory requirement by all the events in the system) to keep the experiments tractable. the experiments are extensively monitored to determine the extent to which various overheads affect performance. it is observed that (i) depending on the available memory and asymmetry in the workload, canceling back several (called the salvage parameter) events at one time may improve performance significantly, by reducing certain overheads, (ii) a performance nearly equivalent to that with unlimited memory can be achieved with only a modest amount of memory depending on the degree of asymmetry in the workload. samir r. das richard m. fujimoto unsolved problems and opportunities for high-quality, high-performance 3d graphics on a pc platform david b. kirk mighty joe young - research and development highlights christian rouet the holodeck ray cache: an interactive rendering system for global illumination in nondiffuse environments we present a new method for rendering complex environments using interactive, progressive, view-independent, parallel ray tracing. a four-dimensional holodeck data structure serves as a rendering target and caching mechanism for interactive walk-throughs of nondiffuse environments with full global illumination. ray sample density varies locally according to need, and on-demand ray computation is supported in a parallel implementation. the holodeck file is stored on disk and cached in memory by a server using a least-recently-used (lru) beam-replacement strategy. the holodeck server coordinates separate ray evaluation and display processes, optimizing disk and memory usage. different display systems are supported by specialized drivers, which handle display rendering, user interaction, and input. the display driver creates an image from ray samples sent by the server and permits the manipulation of local objects, which are rendered dynamically using approximate lighting computed from holodeck samples. the overall method overcomes many of the conventional limits of interactive rendering in scenes with complex surface geometry and reflectance properties, through an effective combination of ray tracing, caching, and hardware rendering. gregory ward maryann simmons rendering with coherent layers jed lengyel john snyder topological considerations in isosurface generation extended abstract jane wilhelms allen van gelder evolutionary control via sensorimotor input and actuation todd m. schrider the role of computer simulation in the development of a new elevator product this paper discusses a new elevator dispatching strategy called advanced traffic management (or atm). a substantial computer simulation effort proved to be a major influence in the development of this new product for introduction to the marketplace.
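the holodeck abstract above caches beams in memory under a least-recently-used replacement policy; a minimal lru cache of the general kind involved (generic python, not the holodeck's actual beam store) is:

```python
from collections import OrderedDict

class LRUCache:
    """keep at most `capacity` entries, evicting the least recently used one."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)         # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

# toy usage: cache ray-sample beams keyed by hypothetical (cell, direction) indices
beams = LRUCache(capacity=2)
beams.put((0, 0), "samples-a")
beams.put((0, 1), "samples-b")
beams.get((0, 0))                            # touch (0, 0)
beams.put((1, 0), "samples-c")               # evicts (0, 1)
```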
a simplified version of atm will be presented, and the simulation modeling process will be outlined. the main contributions of the simulation effort were in initial product verification and in design improvements resulting in increased u.s. patent protection. bruce a. powell case #m1251 geof pelaia the casm environment revisited ray j. paul vlatka hlupic a ray tracing algorithm for progressive radiosity j. r. wallace k. a. elmquist e. a. haines case-based learning in inductive inference there is proposed a formalization of case-based learning in terms of recursion-theoretic inductive inference. this approach is directly derived from some recently published case-based learning algorithms. the intention of the present paper is to exhibit the relationship between case-based learning and inductive inference and to specify this relation with mathematical precision. in particular, it is the author's intention to invoke inductive inference results for pointing to the crucial questions in case-based learning which allow to improve the power of case-based learning algorithms considerably. there are formalized several approaches to case-based learning. first, they vary in the way of presenting cases to a learning algorithm. second, they are different with respect to the underlying semantics of case bases together with similarity measures. third, they are distinguished by the flexibility in using similarity functions. the investigations presented relate the introduced learning types to identification types in recursion-theoretic inductive inference. klaus p. jantke function design document: toward better documentation for data processing contracts discussions of data processing contracts often emphasize the goals of: 1)negotiating the best possible deal; and, 2) protecting the parties in case of dispute. considering the expense, delay and frustration wrought by disputes, especially when litigation results, an equally valuable goal is the preparation of contracts in such a way as to avoid disputes. the author proposes that this goal be furthered by development of a documentation standard, to be known for purposes of this discussion as a "function design document" (fdd). the fdd would be produced before a contract for software services & products (ss & p)1 was entered into. the fdd would ideally require both parties to clearly define the functionality of the particular ss & p. a contract could then be entered into under which each party knew its own obligations as well as the expectations of the other party. consideration is also given to the way in which the fdd should be handled, from a legal standpoint, since the possibility of a dispute cannot be ignored. steven brower forecasting investment opportunities through dynamic simulation stephen r. parker classification of game knowledge samuel baskinger scott briening anthony emma graig fisher vincent johnson christopher moyer a comparison of lisp, prolog, and ada programming productivity in ai area an experiment comparing programming productivity of lisp, prolog, and ada used in the ai (artificial intelligence) area is reported. lisp and prolog have been the main languages used for ai programming because of their symbol manipulation and list processing facilities. there is a possibility, however, that general purpose languages such as ada or c can be used to develop large- scale, practical-use ai software because of their high performance and good maintainability. 
the purpose of this experiment is to present quantitative productivity data comparing these languages used for ai programming. in the experiment, several ai software components are programmed in three languages: lisp, prolog, and ada, and the languages' effects on programming productivity are examined. the main results of the experiment are: lisp and prolog programming productivity in coding and debugging stages are about two times that of ada; the programming effort is in proportion to program size. this paper presents the experimental data supporting this and a number of additional conclusions. f. hattori k. kushima t. wasano why cows go moon andrew welihozkiy scale-sensitive dimensions, uniform convergence, and learnability learnability in valiant's pac learning model has been shown to be strongly related to the existence of uniform laws of large numbers. these laws define a distribution-free convergence property of means to expectations uniformly over classes of random variables. classes of real-valued functions enjoying such a property are also known as uniform glivenko-cantelli classes. in this paper, we prove, through a generalization of sauer's lemma that may be interesting in its own right, a new characterization of uniform glivenko-cantelli classes. our characterization yields dudley, gine, and zinn's previous characterization as a corollary. furthermore, it is the first based on a simple combinatorial quantity generalizing the vapnik-chervonenkis dimension. we apply this result to obtain the weakest combinatorial condition known to imply pac learnability in the statistical regression (or "agnostic") framework. furthermore, we find a characterization of learnability in the probabilistic concept model, solving an open problem posed by kearns and schapire. these results show that the accuracy parameter plays a crucial role in determining the effective complexity of the learner's hypothesis class. noga alon shai ben-david nicolò cesa-bianchi david haussler simulation model to evaluate maintenance strategies for large network of fielded systems a dynamic simulation model of a field maintenance organization has been developed and validated. the objective of the model is to provide an analytical tool for evaluating the operational feasibility of new maintenance strategies prior to their introduction into the field. this simulation model is an accurate replica of the policies and time critical maintenance activities within a field organization, providing a rendering in detail of each technician and each facility as events take place in time. this paper discusses the structure of the model, the functions simulated, the validation experiment, and an application of the model. gene a. wong confidence intervals using orthonormally weighted standardized time series we extend the standardized time series area method for constructing confidence intervals for the mean of a stationary stochastic process. the proposed intervals are based on orthonormally weighted standardized time series area variance estimators. the underlying area estimators possess two important properties: they are first-order unbiased, and they are asymptotically independent of each other. these properties are largely the result of a careful choice of weighting functions, which we explicitly describe.
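for context only, the simplest relative of the estimators described above is the classical batch-means confidence interval for a steady-state mean; the paper's orthonormally weighted area estimators are not reproduced here, but a batch-means sketch in python (scipy's t quantile, hypothetical ar(1) output data) shows the general shape of such procedures:

```python
import numpy as np
from scipy import stats

def batch_means_ci(samples, n_batches=10, confidence=0.95):
    """classical batch-means confidence interval for the mean of a
    stationary output sequence (not the paper's weighted-area method)."""
    samples = np.asarray(samples, dtype=float)
    batch_size = len(samples) // n_batches
    batches = samples[: batch_size * n_batches].reshape(n_batches, batch_size)
    means = batches.mean(axis=1)
    grand_mean = means.mean()
    half_width = (stats.t.ppf(0.5 + confidence / 2.0, n_batches - 1)
                  * means.std(ddof=1) / np.sqrt(n_batches))
    return grand_mean - half_width, grand_mean + half_width

# toy usage on a correlated ar(1) output stream
rng = np.random.default_rng(0)
x = np.empty(10_000)
x[0] = 0.0
for i in range(1, len(x)):
    x[i] = 0.8 * x[i - 1] + rng.normal()
print(batch_means_ci(x))
```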
the asymptotic independence of the area estimators yields more degrees of freedom than various predecessors; this, in turn, produces smaller mean and variance of the length of the resulting confidence intervals. we illustrate the efficacy of the new procedure via exact and monte carlo examples. we also provide suggestions for efficient implementation of the method. robert d. foley david goldsman frameless rendering: double buffering considered harmful the use of double-buffered displays, in which the previous image is displayed until the next image is complete, can impair the interactivity of systems that require tight coupling between the human user and the computer. we are experimenting with an alternate rendering strategy that computes each pixel based on the most recent input (i.e., view and object positions) and immediately updates the pixel on the display. we avoid the image tearing normally associated with single-buffered displays by randomizing the order in which pixels are updated. the resulting image sequences give the impression of moving continuously, with a rough approximation of motion blur, rather than jerking between discrete positions. we have demonstrated the effectiveness of this frameless rendering method with a simulation that shows conventional double-buffering side-by-side with frameless rendering. both methods are allowed the same computation budget, but the double-buffered display only updates after all pixels are computed while the frameless rendering display updates pixels as they are computed. the frameless rendering display exhibits fluid motion while the double-buffered display jumps from frame to frame. the randomized sampling inherent in frameless rendering means that we cannot take advantage of image and object coherence properties that are important to current polygon renderers, but for renderers based on tracing independent rays the added cost is small. gary bishop henry fuchs leonard mcmillan ellen j. scher zagier living in a dynamic world r. l. andersson guaranteed ray intersections with implicit surfaces d. kalra a. h. barr applying cartoon animation techniques to graphical user interfaces if judiciously applied, animation techniques can enhance the look and feel of computer applications that present a graphical human interface. such techniques can smooth the rough edges and abrupt transitions common in many current graphical interfaces, and strengthen the illusion of direct manipulation that many interfaces strive to present. to date, few applications include such animation techniques. one possible reason is that animated interfaces are difficult to implement: they are difficult to design, place great burdens on programmers, and demand high-performance from underlying graphics systems.this article describes how direct manipulation human computer interfaces can be augmented with techniques borrowed from cartoon animators. in particular, we wish to improve the visual feedback of a direct manipulation interface by smoothing the changes of an interface, giving manipulated objects a feeling of substance and providing cues that anticipate the result of a manipulation. 
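one concrete flavor of the "smoothing" the cartoon-animation abstract above describes is slow-in/slow-out interpolation between two interface states; a minimal sketch (generic python, not the authors' toolkit) is:

```python
def smoothstep(t):
    """ease-in/ease-out weight: 0 at t=0, 1 at t=1, zero slope at both ends."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def animate(start, end, n_frames):
    """interpolate a widget position with slow-in/slow-out timing."""
    for frame in range(n_frames + 1):
        w = smoothstep(frame / n_frames)
        yield (start[0] + w * (end[0] - start[0]),
               start[1] + w * (end[1] - start[1]))

# a dragged icon glides to its drop target instead of jumping there
for pos in animate(start=(10.0, 10.0), end=(200.0, 120.0), n_frames=10):
    print(pos)
```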
our approach is to add support for animation techniques such as object distortion and keyframe interpolation, and to provide prepackaged animation effects such as animated widgets for common user interface interactions. to determine if these tools and techniques are practical and effective, we built a prototype direct manipulation drawing editor with an animated interface and used the prototype editor to carry out a set of human factors experiments. the experiments show that the techniques are practical even on standard workstation hardware, and that the effects can indeed enhance direct manipulation interfaces. bruce h. thomas paul calder cone-spheres nelson max nada mas bruno follet shared variables in distributed simulation although users may want to employ shared variables when they program distributed simulation applications, almost none of the currently existing distributed simulation systems do offer this facility. in this paper, we present new algorithms which provide the illusion of consistent shared variables in distributed simulation systems without physically shared memory. horst mehl stefan hammes qota: a fast, multi-purpose algorithm for terrain following in virtual environments john w. barrus richard c. waters computing the distribution function of a conditional expectation via monte carlo: discrete conditioning spaces shing-hoi lee peter w. glynn slacker andreas procopiou an architecture for understanding in planning, action, and learning richard alterman tamitha carpenter roland zito-wolf the irradiance jacobian for partially occluded polyhedral sources the irradiance at a point on a surface due to a polyhedral source of uniform brightness is given by a well-known analytic formula. in this paper we derive the corresponding analytic expression for the irradiance jacobian, the derivative of the vector representation of irradiance. although the result is elementary for unoccluded sources, within penumbrae the irradiance jacobian must incorporate more information about blockers than either the irradiance or vector irradiance. the expression presented here holds for any number of polyhedral blockers and requires only a minor extension of standard polygon clipping to evaluate. to illustrate its use, three related applications are briefly described: direct computation of isolux contours, finding local irradiance extrema, and iso-meshing. isolux contours are curves of constant irradiance across a surface that can be followed using a predictor-corrector method based on the irradiance jacobian. similarly, local extrema can be found using a descent method. finally, iso-meshing is a new approach to surface mesh generation that incorporates families of isolux contours. james arvo simulation program development by stepwise refinement in unity marc abrams ernest h. page richard e. nance fundamentals of simulation using micro saint catherine e. drury k. ronald laughery computer puppetry: an importance-based approach computer puppetry maps the movements of a performer to an animated character in real-time. in this article, we provide a comprehensive solution to the problem of transferring the observations of the motion capture sensors to an animated character whose size and proportion may be different from the performer's. our goal is to map as many of the _important_ aspects of the motion to the target character as possible, while meeting the online, real-time demands of computer puppetry. we adopt a kalman filter scheme that addresses motion capture noise issues in this setting.
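the computer-puppetry abstract above mentions a kalman filter for motion-capture noise; as a hedged, scalar illustration of that idea only (one coordinate of one sensor, a constant-position model, and made-up noise levels rather than the paper's actual scheme):

```python
def kalman_smooth(measurements, process_var=1e-3, sensor_var=1e-1):
    """scalar kalman filter with a constant-position model:
    predict (inflate uncertainty), then correct toward each measurement."""
    x, p = measurements[0], 1.0        # initial state estimate and variance
    estimates = [x]
    for z in measurements[1:]:
        p = p + process_var            # predict
        k = p / (p + sensor_var)       # kalman gain
        x = x + k * (z - x)            # update with measurement z
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# noisy marker coordinate -> smoothed trajectory
print(kalman_smooth([0.00, 0.12, 0.08, 0.25, 0.21, 0.33]))
```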
we provide the notion of dynamic importance of an end-effector that allows us to determine what aspects of the performance must be kept in the resulting motion. we introduce a novel inverse kinematics solver that realizes these important aspects within tight real-time constraints. our approach is demonstrated by its application to broadcast television performances. hyun joon shin jehee lee sung yong shin michael gleicher the use of graphical models in model validation the use of graphical models for model specification and in modelling is increasing rapidly. this paper discusses the use of these graphical models in model validation. robert g. sargent maintaining multiple views in feature modeling klaas jan de kraker maurice dohmen willem f. bronsvoort dyna, an integrated architecture for learning, planning, and reacting richard s. sutton the rack peter p. tanner kenneth b. evans a multiresolution spline with application to image mosaics peter j. burt edward h. adelson automating parallel simulation using parallel time streams this paper describes a package for parallel steady-state stochastic simulation that was designed to overcome problems caused by long simulation times experienced in our ongoing research in performance evaluation of high-speed and integrated-services communication networks, while maintaining basic statistical rigors of proper analysis of simulation output data. the package, named akaroa, accepts ordinary (nonparallel) simulation programs, and all further stages of stochastic simulation should be transparent for users. the package employs a new method of sequential estimation for the multiple-replications-in-parallel scenario. all basic functions, including the transformation of originally nonparallel simulators into ones suitable for parallel execution, control of the precision of estimates, and stopping of parallel simulation processes when the required precision of the overall steady-state estimates is achieved, are automated. the package can be used on multiprocessor systems and/or heterogeneous computer networks, involving an arbitrary number of processors. the design issues, architecture, and implementation of akaroa, as well as the results of its preliminary performance studies are presented. victor yau knowledge granularity for task oriented agents yiming ye john k. tsotsos the use of genetic algorithms and neural networks to investigate the baldwin effect michael jones aaron konstam control knowledge in planning: benefits and tradeoffs yi-cheng huang bart selman henry kautz a multiagent planner using meta-knowledge and agent constraints multiagent planning is a process of generating action assignments for multiple agents to achieve a given goal. most of the current approaches create single agent plans first, and try to synchronize them by using communication primitives. because parallelism is devised after single agent plans are constructed, cooperation among agents is difficult to achieve. since most conflicts are resolved at low level actions, the search space of conflict identification and resolution can be large. another problem is the lack of using agent constraints in plan generation. agents are assumed to be similar while in real life they may differ in the ways they achieve a goal. this paper describes a multiagent planning approach that solves some of the above difficulties. this approach has three features. they are meta-level planning, breakable/unbreakable action representations, and an integrated agent assignment scheme.
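the akaroa abstract above automates multiple replications in parallel with sequential stopping; a much-reduced sketch of that control logic (a sequential python loop standing in for truly parallel replications, with a hypothetical m/m/1 model and an invented precision target, not akaroa's actual rules) could look like:

```python
import math
import random
import statistics

def run_replication(seed, n_customers=1_000):
    """hypothetical stand-in for one replication: mean waiting time of an
    m/m/1 queue with arrival rate 0.9 and service rate 1.0."""
    rng = random.Random(seed)
    clock = depart = total_wait = 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(0.9)              # next arrival time
        start = max(clock, depart)                 # service begins
        total_wait += start - clock
        depart = start + rng.expovariate(1.0)      # service completion
    return total_wait / n_customers

def replicate_until_precise(rel_precision=0.05, min_reps=5, max_reps=200):
    """add replications until the ci half-width falls within rel_precision
    of the mean; 1.96 approximates the normal quantile."""
    results = []
    for rep in range(max_reps):
        results.append(run_replication(seed=rep))
        if len(results) >= min_reps:
            mean = statistics.mean(results)
            half = 1.96 * statistics.stdev(results) / math.sqrt(len(results))
            if half <= rel_precision * abs(mean):
                break
    return mean, half, len(results)

print(replicate_until_precise())
```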
using these approaches, conflicts among low level actions can be reduced. cooperation among agents is possible and the output plan will be suitable for different types of agents. kai-hsiung chang suebskul phiphobmongkol monomial orderings and gröbner bases fritz schwarz the local time warp approach to parallel simulation the two main approaches to parallel discrete event simulation -- conservative and optimistic -- are likely to encounter some limitations when the size and complexity of the simulation system increases. for such large scale simulations, the conservative approach appears to be limited by blocking overhead and sensitivity to lookahead, whereas the optimistic approach may become prone to cascading rollbacks, state saving overhead, and demands for larger memory space. these drawbacks restrict the synchronization schemes based on each of the two approaches from scaling up. a combined approach may resolve these limitations, while preserving and utilizing potential advantages of each method. however, the schemes proposed so far integrate the two views at the same level, i.e. local to a logical process, and hence may not be able to fully solve the problems. in this paper we propose the local time warp method for parallel discrete-event simulation and present a novel synchronization scheme for it called hctw. the new scheme hierarchically combines a conservative time window algorithm with time warp and aims at reducing cascade rollbacks, sensitivity to lookahead, and the scalability problems. local time warp is believed to be suitable for parallel machines equipped with thousands of processors and thus an appropriate candidate for simulation of large and complex systems. hassan rajaei rassul ayani lars-erik thorelli a progressive multi-pass method for global illumination shenchang eric chen holly e. rushmeier gavin miller douglass turner an implicit formulation for precise contact modeling between flexible solids marie-paule gascuel sorb heath hanlin an overview of hierarchical control flow graph models douglas g. fritz robert g. sargent design galleries: a general approach to setting parameters for computer graphics and animation j. marks b. andalman p. a. beardsley w. freeman s. gibson j. hodgins t. kang b. mirtich h. pfister w. ruml k. ryall j. seims s. shieber breathing time warp time warp and breathing time buckets are two general-purpose optimistic synchronization strategies for supporting parallel discrete-event simulations. however, each one of these approaches has potential fatal shortcomings. time warp may exhibit rollback explosions that can cause an avalanche of antimessages. breathing time buckets, on the other hand, may not be able to process enough events per synchronization cycle to remain efficient. a new strategy, called breathing time warp, has been developed in the synchronous parallel environment for emulation and discrete-event simulation (speedes) operating system. this new strategy solves both of these problems by mixing the two algorithms together, resulting in the best of both methods. this paper describes the implementation of the breathing time warp algorithm in speedes, and then shows how this new approach sometimes improves the performance of parallel discrete-event simulations. jeff s. steinman total energy plant - simulation model a case of total energy plant (tep) has been modeled and the operational characteristics simulated.
applying the simulated data to the model will assist the tep dispatcher in deciding on the operational schedule for the plant energy generating components, to attain an optimal cost of generating consumer demands for various types of energies. r. d. doering y. a. hosni syntactic approach to image analysis (abstract only) during the past several years, the syntactic approach [1,2] has attracted growing attention as a promising avenue of approach in image analysis. the object of image analysis is to extract as much information as possible from a given image or a set of images. in this abstract, we will focus our attention on the use of semantic information and grammatical inference. in an attributed grammar, there are still a set of nonterminals, a set of terminals and a start symbol just as in conventional grammars. the productions are different: each production is augmented with a semantic rule. two kinds of attributes are included in the semantic rules: inherited attributes and synthesized attributes. one example of the attributes is the length of a specific line segment used as a primitive. all the attributes identified for a pattern are expressed in a "total attribute vector". instead of using attributes, stochastic grammars associate with each production a probability. that means, one sub-pattern may generate one subpattern with some probability, and another with a different probability. a string may have two or more possible parses. in this case of ambiguity, the probabilities associated with the several possible productions are compared to determine the best fit one. probabilities are multiplied in multiple steps of stochastic derivations. besides these, fuzzy languages[3-6] have also been introduced into pattern recognition. by using similarity measures as membership functions, this approach describes patterns in a more understandable way than stochastic grammars. moreover, fuzzy languages make use of individual characteristics of a class of patterns rather than collective characteristics as in stochastic languages, and therefore it is probably easier to develop grammars than stochastic languages. yet a lot of work still needs to be done in order to develop sufficient theories in this field for practical uses. an appropriate grammar is the core of any type of syntactic pattern recognition process. grammars may be established by inferring from a priori knowledge about the objects or scenes to be recognized. another way to establish a pattern grammar is by direct inference from some sample input patterns. once a grammar is derived from some sample input patterns, other patterns similar to them or belonging to the same class can be parsed according to the grammar. therefore grammatical inference enables a system to learn most information from an input pattern, and, furthermore, to apply the obtained knowledge to future recognition processes. it seems to be the ultimate aim of image analysis. inference can be supervised or unsupervised. in supervised inference, a "teacher" who is able to discriminate valid and invalid strings helps in reducing the length of sentences or inserting substrings until some iterative regularity is detected. in unsupervised inference, no prior knowledge about the grammar is assumed. the difficulty of inference is proportional to the complexity of the grammar, and the inference problem does not have a unique solution unless some additional constraints are placed upon the grammars.
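to make the "probabilities are multiplied" remark above concrete, here is a tiny sketch of scoring one derivation under a stochastic grammar (the toy productions are invented for illustration; this is not an inference algorithm):

```python
# a toy stochastic grammar: each left-hand side maps to alternatives
# with probabilities that sum to one.
grammar = {
    "shape":  [(("stroke", "shape"), 0.6), (("stroke",), 0.4)],
    "stroke": [(("line",), 0.7), (("arc",), 0.3)],
}

def derivation_probability(steps):
    """multiply the probabilities of the productions used in a derivation.
    `steps` is a list of (nonterminal, chosen right-hand side) pairs."""
    prob = 1.0
    for lhs, rhs in steps:
        for alternative, p in grammar[lhs]:
            if alternative == rhs:
                prob *= p
                break
        else:
            raise ValueError(f"no production {lhs} -> {rhs}")
    return prob

# probability of deriving "line arc" via shape -> stroke shape -> ... -> line arc
steps = [("shape", ("stroke", "shape")), ("stroke", ("line",)),
         ("shape", ("stroke",)), ("stroke", ("arc",))]
print(derivation_probability(steps))   # 0.6 * 0.7 * 0.4 * 0.3
```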
some theoretical algorithms have been developed for inferring regular (finite-state) grammars, but they still have severe limitations for practical use because of the large amount of computation due to the combinatorial effect. context-free grammars are even harder to deal with since many decidable properties of regular grammars are undecidable for context-free grammars, such as the equivalence of two context-free grammars. therefore, inference algorithms have been developed only for some specific types of context-free grammars and most of them rely on heuristic methods. the syntactic approach to image analysis may be applied to many areas including space object surveillance and identification [7]. t. k. ho edward t. lee t. t. ho a study on estimation of high resolution component for image enhancement using wavelet yasumasa itoh yoshinori izumi yutaka tanaka three-dimensional distance field metamorphosis given two or more objects of general topology, intermediate objects are constructed by a distance field metamorphosis. in the presented method the interpolation of the distance field is guided by a warp function controlled by a set of corresponding anchor points. some rules for defining a smooth least-distorting warp function are given. to reduce the distortion of the intermediate shapes, the warp function is decomposed into a rigid rotational part and an elastic part. the distance field interpolation method is modified so that the interpolation is done in correlation with the warp function. the method provides the animator with a technique that can be used to create a set of models forming a smooth transition between pairs of a given sequence of keyframe models. the advantage of the new approach is that it is capable of morphing between objects having a different topological genus where no correspondence between the geometric primitives of the models needs to be established. the desired correspondence is defined by an animator in terms of a relatively small number of anchor points. daniel cohen-or amira solomovic david levin portrait of the artists in a young industry today's big computer games are often graphics-intensive behemoths with sgi-rendered imagery filled with 3d figures and created with big budgets. but games and graphics haven't always been so closely linked. i was pleased to be invited to write this retrospective, as even though my own background in games began in programming, far from art, i've always enjoyed the special synergy that can develop between artists and programmers working together on a computer game. the programmers are amazed as their crude stick-figure sketches turn into glorious images in the hands of a talented artist. then it is the artists' turn to smile, as their series of still frames comes to life and responds to their control enabled by the magic of the programmer. how has the role of graphics evolved in games, and what lies ahead? noah falstein discrete event simulation languages current status and future directions simulation software is changing. within the past several years, significant developments in simulation software have taken place: 1. new simulation languages have been developed. 2. new software packages have been developed for use in conjunction with simulation (for purposes other than building models, per se). 3. new features have been added to existing languages. 4. vendors new to the simulation community have marketed implementations of existing software packages. 5.
simulation environments, comprising integrated collections of simulation software tools have been built. as a consequence of these developments, those readers whose perceptions of simulation software are several years old should consider themselves out of date. those readers whose perceptions are five or more years old should consider themselves extremely out of date. furthermore, enormous amounts of time and energy are presently being expended on research and development of simulation software. thus, we can expect dramatic changes to take place in the near future. simulation software of the 1990's will be as far removed from present software as present software is removed from building models "from scratch" in languages such as fortran. this paper, a tutorial, summarizes the present state of simulation software, identifies pressures for changes, and describes an emerging consensus on the major characteristics of simulation software of the future. james o. henriksen surveyor's forum: image models narendra ahuja b. j. schachter 3d palette: a virtual reality content creation tool mark billinghurst sisinio baldis lydia matheson mark philips object lesson dylan sisson andrew woods kyle hanson smooth connection of trimmed nurbs surfaces an automatic smooth surface connection method that has the capability of tension control is presented. given two trimmed nurbs surfaces, the new method constructs a smooth _connection surface_ to connect the trimming regions of the trimmed surfaces at the trimming curves. the connection satisfies the pseudo-_g_1 or pseudo-_c_1 smoothness requirement, a condition not as strong as _g_1 or _c_1, but smooth enough for most industrial applications. the construction process consists of four major steps: _connection curves construction and alignment, initial blends construction, setting up continuity constraints_, and _internal and external boundary smoothing_. the advantages of the new method include: (1) providing the users with more flexibility in adjusting the shape of the connection surface, (2) the representation of the connection surface is compatible with most of the current data-exchange standards, (3) including the classical blending as a special case but with more flexibility on the setting of the rail curves, and (4) smoother shape of the resulting connection surface through an energy optimization process. test cases that cover important applications are included. pifu zhang fuhua cheng speech word recognition with backpropagation and fuzzy-artmap neural networks lipo wang modeling motion blur in computer-generated images this paper describes a procedure for modeling motion blur in computer- generated images. motion blur in photography or cinematography is caused by the motion of objects during the finite exposure time the camera shutter remains open to record the image on film. in computer graphics, the simulation of motion blur is useful both in animated sequences where the blurring tends to remove temporal aliasing effects and in static images where it portrays the illusion of speed or movement among the objects in the scene. the camera model developed for simulating motion blur is described in terms of a generalized image-formation equation. this equation describes the relationship between the object and corresponding image points in terms of the optical system-transfer function. the use of the optical system-transfer function simplifies the description of time-dependent variations of object motion that may occur during the exposure time of a camera. 
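as a hedged, purely illustrative companion to the motion-blur model described above, the python sketch below blurs a single scanline with a box kernel whose support is velocity times exposure time, i.e. the simplest transfer function one can write down for uniform horizontal motion during the shutter interval; the scanline and parameter values are invented and this is not the paper's camera model.

```python
# minimal 1-d sketch of exposure-time motion blur: for uniform horizontal
# motion, the transfer function reduces to a box filter whose support is the
# distance travelled while the shutter is open. values are invented.

def motion_blur_row(row, velocity_px_per_s, exposure_s):
    """average each pixel over the positions it sweeps during the exposure."""
    support = max(1, int(round(velocity_px_per_s * exposure_s)))
    blurred = []
    for x in range(len(row)):
        window = [row[min(x + k, len(row) - 1)] for k in range(support)]
        blurred.append(sum(window) / support)
    return blurred

row = [0, 0, 0, 255, 255, 255, 0, 0, 0, 0]   # a bright bar on a dark scanline
print(motion_blur_row(row, velocity_px_per_s=40, exposure_s=0.1))  # 4-pixel smear
```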
this approach allows us to characterize the motion of objects by a set of system-transfer functions which are derived from the path and velocity of objects in the scene and the exposure time of a camera. michael potmesil indranil chakravarty interactive shape metamorphosis image metamorphoses (morphing) is a powerful and easy-to-use tool for generating new 2d images from existing 2d images. in recent years morphing has become popular as an artistic tool and is used extensively in the entertainment industry. in this paper we describe a new technique for controlled, feature-based metamorphosis of certain types of surfaces in 3-space; it applies well-understood 2d methods to produce shape metamorphosis between 3d models in a 2d parametric space. we also describe an interactive implementation on a parallel graphics multicomputer, which allows the user to define, modify and examine the 3d morphing process in real time. david t. chen andrei state david banks robust rendering of general ellipses and elliptical arcs dieter w. fellner christoph helmberg occlusion horizons for driving through urban scenery laura downs tomas möller carlo h. sequin solid model input through orthographic views this paper describes the results of basic studies on procedures for creating solid models of component geometry from two-dimensional orthographic projections. an interactive graphic program was developed to allow the input of three orthographic views of a component geometry by digitizing from a drawing. the views may contain straight lines and circular arcs, solid or dashed. no restrictions are placed on the order or direction of lines and arcs in any view. using an extension of the wesley-markowski procedure, the program constructs a three-dimensional solid model of the object. when the projections are ambiguous, multiple solid models are produced. the solid model may contain planar, cylindrical, conical, spherical and toroidal surfaces. topological information of the solid model is stored in a winged edge structure. geometric information is stored as vertex coordinates and surface equations. the procedure for 2d-3d conversion provides a powerful new method for manual input of solid models, a common interface to all turnkey graphics systems, and, properly integrated with existing technology for scanning of drawings, a powerful new method for acquisition of cad/cam data bases from existing drawings. the procedure is described, examples of typical input and output are shown, and possible extensions are discussed. hiroshi sakurai david c. gossard improving documentation the following report focuses on some of the documentation problems faced by many data centers as well as some proposed solutions. many of the difficulties experienced at today's data centers stem from the need to better motivate programmers to generate accurate and useful documentation, and to simplify and standardize documentation tasks. this report presents four recommendations to solve these problems. the recommendations are: 1) the creation of a computerized documentation system, 2) the use of matching labels in source code and documentation, 3) the use of tiered documentation formats, 4) the creation of the post of documentation administrator. scott l. mcgregor about the art in this issue steven cherry virtues and limitations of multifusion based action selection michael r. benjamin direct illumination with lazy visibility evaluation david hart philip dutre donald p. greenberg u.s. 
army modsim on jade's timewarp dirk baezner chuck rohs harry jones learning structured reactive navigation plans from executing mdp navigation policies autonomous robots, such as robot office couriers, need navigation routines that support flexible task execution and effective action planning. this paper describes \xfl, a system that learns structured symbolic navigation plans. given a navigation task, \xfl\ learns to structure continuous navigation behavior and represents the learned structure as compact and transparent plans. the structured plans are obtained by starting with monolithical default plans that are optimized for average performance and adding subplans to improve the navigation performance for the given task. compactness is achieved by incorporating only subplans that achieve significant performance gains. the resulting plans support action planning and opportunistic task execution. \xfl\ is implemented and extensively evaluated on an autonomous mobile robot. michael beetz thorsten belker collision detection framework using model simplification tiow-seng tan ket-fah chong kok-lim low bunny chris wedge abstract task specifications for conversation policies renee elio afsaneh haddadi ajit singh the effect of state-saving in optimistic simulation on a cache-coherent non- uniform memory access architecture christopher d. carothers kalyan s. perumalla richard m. fujimoto inspector gadget mary reardon improving the application development process with modular visualization environments hambleton d. lord musicbottles hiroshi ishii h. r. fletcher j. lee s. choo j. berzowska c. wisneski c. cano a. hernandez c. bulthaup the interaction technique notebook: adding shadows to a 3d cursor scott e. hudson finding parents in a heap fred heller generalization in partially connected layered neural networks we study the learning from examples in a partially connected single layer perceptron and a two-layer network. partially connected student networks learn from fully connected teacher networks. we study the generalization in the annealed approximation. we consider a single layer perceptron with binary weights. when a student is weakly diluted, there is a first order phase transition from the poor learning to the good learning state similar to that of fully connected perceptron. with a strong dilution, the first order phase transition disappears and the generalization error decreases continuously. we also study learning of a two-layer committee machine with binary weights. contrary to the perceptron learning, there always exist a first order transition irrespective of dilution. the permutation symmetry is broken at the transition point and the generalization error is reduced to a non-zero minimum value. kyung-hoon kwon kukjin kang jong-hoon oh the whole is greater than the sum of its parts or the effects of interfacing waterloo script with the ibm 6670 information distributor waterloo script is a powerful text formatting language with extensive capabilities. the ibm 6670 information distributor is a laser printer that produces high-quality output. however, without special considerations, text formatted with waterloo script cannot be printed correctly on the 6670. an interface is needed in order to allow use of both technologies together. this paper presents the evolution of clemson's script/6670 post processor and examines its impact on users as well as the academic computing support staff at clemson university. janet m. 
hall the relation-based knowledge representation of king kong samuel bayer marc vilain evaluating the probability of a good selection barry l. nelson souvik banerjee sensible agents: an implemented multi-agent system and testbed sensible agents have been engineered to solve distributed problems in complex, uncertain, and dynamic domains. each sensible agent is composed of four modules: the action planner, perspective modeler, conflict resolution advisor, and autonomy reasoner. these modules give sensible agents the abilities to plan, model, resolve individual conflicts, and change agent system organization. two component suites provide a variety of user- oriented features: the sensible agent run- time environment (sarte) and the sensible agent testbed. the sarte provides facilities for instantiating sensible agents, deploying a sensible agent system, and monitoring run- time operations. the sensible agents testbed facilitates automated generation of parameter combinations for controlled experiments, deterministic and non- deterministic simulation, and configuration of sensible agents and data acquisition. experimentation is a crucial step in gaining insight into the behavior of agents, as well as evidence toward or against hypotheses. using a real- world example, this paper explains and demonstrates: (1) the functional capabilities of sensible agents, (2) the sensible agent run- time environments facilities for monitoring and control of sensible agent systems and (3) the experimental set- up, monitoring, and analysis capabilities of the sensible agent testbed. k. s. barber r. mckay m. macmahon c. e. martin d. n. lam a. goel d. c. han j. kim virtual clay modeling system ken-ichi kameyama application of simulation to scheduling, sequencing, and material handling edward j. williams ramu narayanaswamy a subset coloring algorithm and its applications to computer graphics we consider the following problem: we are given a diagram made up of intersecting circles, where each region is colored either black or white. we wish to display this diagram on a bitmap device, where we are allowed to (i) paint a given circle white and (ii) invert the colors within a given circle, changing white to black and vice versa. (these operations are frequently provided in graphics hardware or software.) we ask: using only these paint and invert operations, is it possible to draw the diagram? a generalization of this problem leads to an analogous coloring problem on a subset of the power set of n elements. we give a polynomial-time algorithm that answers the question above, and produces a "short" sequence of instructions to draw the diagram, if one exists. a simple modification of the algorithm permits us to handle the case where there are more colors than just black and white, and the colors are represented by bit strings. this corresponds to the conventions frequently used with color raster devices. d. rubinstein j. shallit m. szegedy automatic model synthesis: using automatic programming and expert systems techniques toward simulation modeling a knowledge-based model construction (kbmc) system is described which has been developed to automate the model construction phase of the simulation life- cycle. the system utilizes a knowledge-based approach to automatic programming to build a simulation model and extends the knowledge-based approach to include model specification acquisition. the system's underlying rule base, implemented in the production system paradigm of ops83, incorporates several types of knowledge. 
domain knowledge is used in conjunction with simulation modeling knowledge to facilitate a structured interactive dialog for the acquisition of a complete model specification from a user. modeling knowledge and target language (siman) knowledge are then used to automatically construct an executable discrete simulation model from this specification. this paper presents an overview of the kbmc system and focuses on various issues involved in the conceptualization and implementation of such a system. karen j. murray sallie v. sheppard tribu bruno follet natural and efficient viewing parameters the viewing scheme in the core graphics system is based on a unified approach to the specification of the viewing parameters for all types of planar geometric projections. as a result, the specification of some of the viewing parameters is inconsistent with traditional ways of specification and will often be found unnatural for use in certain application areas. additionally, this choice of viewing parameters leads to inefficient implementations of viewing parameter modification, particularly in a high-performance graphics system which supports transformations in hardware or firmware. the natural ways to specify the different viewing parameters are discussed and efficiency considerations for viewing implementations are described. the core system viewing scheme is evaluated in terms of its naturalness and efficiency of implementation. an alternate viewing scheme is proposed that provides viewing parameters that are more natural for many applications and that can he modified more efficiently. james c. michener ingrid b. carlbom improving planning efficient by conceptual clustering automated acquisition and organization of plan knowledge has been investigated by many researchers. vere's thoth (1980) induces a minimal set of relational operators that cover a training set of state to state transitions. for example, having observed the many individual transitions required to build a block tower, thoth might formulate abstract operator descriptions that correspond to the 'classic' operators of stack, pick-up, etc. however, thoth does not have a strong notion of 'good' operator organization, other than to discover a minimal set of abstractions that cover the training examples. nonetheless, thoth's ability to autonomously discover operator 'classes' makes it an early conceptual ancestor of the clustering approach that we propose. unlike thoth, strips (fikes, hart & nilsson, 1972) begins with a set of abstract operator descriptions and conjoins them using means-ends analysis to form plans. moreover, strips generalizes the applicability of these plans (using analytic methods in contrast to thoth's empirical approach) and stores them for reuse. however, recent work in learning to plan indicates that a strips approach to saving plans in an unconstrained manner may actually have detrimental effects on planning time: the time to search for applicable past experience may eventually surpass the cost of planning from scratch (minton, 1988). anderson and farley (1988) suggests a possible way to mitigate the cost of finding applicable past knowledge. their system, planerus, generates a hierarchy based on common add conditions of strips-like operators. add condition indices allow planerus to find operators that reduce differences in a means-ends planner. 
in principle a discrimination net over add conditions can be very efficient, but like thoth, planerus appears to lack a strong prescription of operator class quality: its indexing method appears to require an exponential number of indices in the worst case because it groups operators based on combinations of one or more shared conditions. in this regard minton (1988) points out that even with indexing schemes, systems must also be willing to dispose of past experiences (e.g., abstractions, add-condition combinations) that prove to be of low utility (e.g., infrequent). thoth, strips, and planerus are important precursors to our work, but we hope to extend the ideas illustrated by these systems in several directions. first, a system like planerus is designed primarily to facilitate goal-driven behavior, as its exclusive reliance on add-condition indexing indicates. however, work in reactive or situated planning (schoppers, 1989) suggests that the current situation should also influence the selection of applicable operators: an ideal operator is one that achieves desirable goals and requires minimal alterations to the current situation to do so. thus, we propose that when using strips-like operators, pre conditions, as well as add conditions should be used to retrieve operators that make progress towards the goal and that best fit the current conditions of the environment. in addition, operator class discovery and indexing should be controlled by a strong heuristic prescription of high utility operator and plan classes. without these prescriptions, planning with or without the benefit of previous experience remains a search-intensive, often intractable enterprise (ginsberg, 1989). hua yang douglas h. fisher hubertus franke integrating analogical reasoning in a natural language understander the research described in this paper addresses the problem of integrating analogical reasoning and argumentation into a natural language understanding system. we present an approach to completing an implicit argument-by-analogy as found in a natural language editorial text. the transformation of concepts from one domain to another, which is inherent in this task, is a complex process requiring basic reasoning skills and domain knowledge, as well as an understanding of the structure and use of both analogies and arguments. the integration of knowledge about natural language understanding, argumentation, and analogical reasoning is demonstrated in a proof of concept system called ariel. ariel is able to detect the presence of an analogy in an editorial text, identify the source and target components, and develop a conceptual representation of the completed analogy in memory. the design of our system is modular in nature, permitting extensions to the existing knowledge base and making the argumentation and analogical reasoning components portable to other understanding systems. stephanie e. august lawrence p. mcnamee representation and extraction of volumetric attributes using trivariate splines: a mathematical framework our goal in this paper is to leverage traditional strengths from the geometric design and scientific visualization communities to produce a tool valuable to both. we present a method for representing and specifying attribute data across a trivariate nurbs volume. some relevant attribute quantities include material composition and density, optical indices of refraction and dispersion, and data from medical imaging. 
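the proposal above, that strips-like operators be retrieved by both their add conditions (progress toward the goal) and their preconditions (fit to the current situation), can be sketched in a few lines of python; the operators, state and goal below are invented blocks-world placeholders, not part of any of the systems discussed.

```python
# hedged sketch of the retrieval idea discussed above: index strips-like
# operators by their add conditions, then rank candidates by how many of their
# preconditions already hold in the current situation. all literals invented.

OPERATORS = {
    "stack":   {"pre": {"holding(a)", "clear(b)"}, "add": {"on(a,b)"}},
    "pickup":  {"pre": {"clear(a)", "ontable(a)", "handempty"},
                "add": {"holding(a)"}},
    "putdown": {"pre": {"holding(a)"}, "add": {"ontable(a)", "handempty"}},
}

def retrieve(goal, state):
    """operators that add some goal literal, best-fitting ones first."""
    candidates = [name for name, op in OPERATORS.items()
                  if op["add"] & goal]                       # add-condition index
    return sorted(candidates,
                  key=lambda n: len(OPERATORS[n]["pre"] & state),
                  reverse=True)                              # precondition fit

state = {"clear(a)", "ontable(a)", "handempty", "clear(b)"}
goal = {"on(a,b)", "handempty"}
print(retrieve(goal, state))   # ['stack', 'putdown']: stack helps the goal and best fits the state
```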
the method is independent of the granularity of the physical geometry, allowing for a decoupling of the resolution of the carried data from that of the volume. volume attributes can be modeled or fit to data. a method is presented for efficient evaluation of trivariate nurbs. we incorporate methods for data analysis and visualization including isosurface extraction, planar slicing, volume ray tracing, and optical path tracing, all of which are grounded in refinement theory for splines. the applications for these techniques are diverse, including such fields as optics, fluid dynamics, and medical visualization. william martin elaine cohen visualization (panel): the hard problems david zeltzer ann m. bisantz krzysztof lenk jock d. mackinlay randall w. simons of metaphor and the difficulty of computer discourse gerald j. johnson on the small-sample optimality of multiple-regeneration estimators james m. calvin peter w. glynn marvin k. nakayama extending graphics standards to meet industry requirements (panel session) salim abi-ezzi gregory d. laib richard puk tube terminology and style in the 80's we will be addressing a different audience. this places demands on documentation that we heretofore have not experienced. computers have historically been used to address operational problems, using discrete elements and handling specific tasks. this isn't the type of information managers will be using in the 80's. so we see new kinds of systems emerging--- design (hence documentation) aimed at a new audience---managers. glenna james introduction to gpss summary information about key aspects of the simulation modeling language gpss is provided.the class of problems to which gpss applies especially well is described; commentary on the semantics and syntax the language is offered; the learning-oriented literature for gpss is summarized; various gpss implementations are commented on; the time-sharing networks offering gpss are cited; and public courses on the language are listed. the gpss tutorial itself will delve into the fundamental details of gpss and present examples of simple gpss models. copies of the transparencies used for the tutorial will be distributed to those in attendance. (these transparency copies are excluded from reproduction here because of page-count limits.) thomas j. schriber technique for automatically correcting words in text research aimed at correcting words in text has focused on three progressively more difficult problems:(1) nonword error detection; (2) isolated-word error correction; and (3) context-dependent work correction. in response to the first problem, efficient pattern-matching and n-gram analysis techniques have been developed for detecting strings that do not appear in a given word list. in response to the second problem, a variety of general and application- specific spelling correction techniques have been developed. some of them were based on detailed studies of spelling error patterns. in response to the third problem, a few experiments using natural-language-processing tools or statistical-language models have been carried out. this article surveys documented findings on spelling error patterns, provides descriptions of various nonword detection and isolated-word error correction techniques, reviews the state of the art of context-dependent word correction techniques, and discusses research issues related to all three areas of automatic error correction in text. 
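since the survey above covers isolated-word error correction, a classic ingredient of such correctors is an edit distance between a misspelled string and candidate dictionary words; the standard dynamic-programming sketch below (python, with an invented word list) ranks candidates by levenshtein distance and is offered only as an illustration of the general technique, not of any particular corrector from the survey.

```python
# standard dynamic-programming edit distance, the kind of primitive used by
# many isolated-word spelling correctors. word list is invented.

def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

dictionary = ["correction", "collection", "connection", "correlation"]
misspelling = "corection"
print(sorted(dictionary, key=lambda w: edit_distance(misspelling, w))[:2])
# 'correction' ranks first
```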
karen kukich some techniques for minimizing and optimizing the rule base of an expert system the construction of an expert system can be divided into two somewhat independent phases, knowledge engineering and software engineering. in the knowledge engineering phase, the heuristics and the data base for the system must be deduced through interviews with a domain expert. in the software engineering phase, a working program must be constructed. in a rule- based system this working program consists of a set of rules that embody the heuristics and data base of the expert. in the course of constructing an expert system that gives academic advice to mathematics and computer science majors1 we have found some methods for structuring the rule base and for structuring the rules within the rule base that have proven to be very useful. we will discuss some of these methods. conceptually, rules in such a system consist of if- then statements. each rule has a single consequent and usually several general antecedents and some specific antecedents. when the goal of the system is to make determinations in its knowledge domain (for example, to give advice), it searches the rule base for a rule with an appropriate consequent. a rule fires when it is reached by such a search and all of its antecedents are true. if any antecedent is not true, the rule does not fire and the search continues. a first principle, then, in the construction of a rule base is that every possible problem configuration for the system's knowledge domain should cause some rule in the rule base to fire, i.e. the rule base should be complete. we have developed some pictorial representations, called "k-trees" and "question lattices", that we describe elsewhere1 that help us to check the completeness of the rule base. at worst, there should be a rule that fires when there is insufficient information to make a determination. it is apparent that the order of the rules in the rule base is very important to the performance of the system. creating the rule base is a kind of programming. it would seem that with such limited programming constructs available only simple programming techniques would be needed. this, in fact, is not the case. performance and accuracy of the expert system depend in large part upon the organization of the rule base. organizing the rule base is an important programming technique. there are a variety of methods for structuring the rule base that minimize the size of the final system and that cause the system to ask questions in the appropriate order. one way to decrease the size of a rule base is to use "mop- up" rules. when rules are grouped by goals there is often one consequent that appears with greater frequency than any other. all of the rules with this particular consequent can be replaced by a single rule with that consequent and set of general antecedents and no specific antecedents. this new "mop-up" rule is then placed after all of the more specific rules in the rule base. a small specific example from our expert academic advising system will illustrate the point. a student at manhattanville college must complete a senior thesis in his or her first area (major). there are, therefore, essentially four rules involved in giving advice on the senior thesis to a senior mathematics or computer science major. the rules are: if it is fall of the senior year and a senior thesis has not been completed, the student should be advised that the thesis must be completed some time in the senior year. 
if it is fall of the senior year and a senior thesis has been completed (this can happen because some zealous students actually write their senior theses in the spring of their junior year), the student should be commended for the completion of the work. if it is spring of the senior year and a senior thesis has not been completed, the student should be advised that his or her senior thesis must be completed this term. if it is spring of the senior year and a senior thesis has been completed, the student should be commended for the completion of the work. clearly, rules 2 and 4 have identical consequents. figure 1 shows the way these rules actually appear in the computer science section of our expert system. notice that in the system, rules 2 and 4 have been combined in the third rule in the figure. the new rule is a "mop-up" rule for this recommendation. the new rule comes after the other two and has fewer antecedents. this rule will fire if neither of the other two does. this "mop-up" rule has saved one rule, a 25% savings. that number seems rather silly in such a small example but, in our experience, "mop-up" rules generally represent savings of between 20% and 40% in the rules of a specific grouping. another way to reduce the size of the rule base is to notice that often entire groupings of rules are duplicated with the exception of only one or two specific antecedents. when this occurs it is possible to perform a "contraction" on the grouping of rules. this is done by replacing the property or properties that differ among the rules in the groupings by a new property that summarizes the existing properties. this new property must be added to the taxonomy. this is a simple process, however, and is a natural step in the development of an expert system. the new "summary" property is then evaluated by several rules. once this property has been evaluated, the groupings of rules that were formerly needed can be replaced by a single grouping of the size of one of the former groupings. a savings is realized whenever fewer rules are required to instantiate the new property than were needed in the groupings that were eliminated. while potential contractions are most easily spotted when the rules are represented pictorially1, we will present an example of rules that evaluate one of these "summary" properties and indicate how this property creates a contraction that saves a large number of rules. we have a sequence of four intermediate level computer science courses that alternate through the fall and spring semesters of consecutive years. the recommendations that the advising system gives that need knowledge about those courses, therefore, would require four groupings of rules; one set for each of the possible courses under consideration. we realized that except for the specific intermediate level course involved as an antecedent, the four groupings were identical. we, therefore, defined a new property called current that represents the currently offered course. the rules that instantiate current are shown in figure 2. notice that the last rule in figure 2 is another example of a "mop-up" rule. here, one rule replaces four. when they are written out, each grouping of rules initially contains 21 rules. when the specific courses in the groupings are replaced by current, the four groupings "contract" to one. the 5 rules in figure 2 thus replace 63 rules.
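a hedged python rendering of the "mop-up rule" idea described above: the specific rules (extra antecedents) come first, the general mop-up rule with the shared consequent comes last, and the first rule whose antecedents all hold fires. the predicates paraphrase the senior-thesis example and are not the advising system's actual rule syntax.

```python
# first-match rule base illustrating a "mop-up" rule: the two specific rules
# come first, and the final rule (fewer antecedents, the shared consequent)
# fires whenever neither of them does. paraphrased from the thesis example.

RULES = [
    (["is_senior", "is_fall",   "thesis_not_done"],
     "advise: thesis must be completed this year"),
    (["is_senior", "is_spring", "thesis_not_done"],
     "advise: thesis must be completed this term"),
    (["is_senior"],                                   # mop-up rule
     "commend student for completing the thesis"),
]

def advise(facts):
    for antecedents, consequent in RULES:      # rules are tried in order
        if all(a in facts for a in antecedents):
            return consequent                  # first rule that fires wins
    return "no recommendation"

print(advise({"is_senior", "is_fall", "thesis_not_done"}))
print(advise({"is_senior", "is_spring"}))      # thesis done -> mop-up fires
```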
another concern is the ordering of antecedents within individual rules. this ordering affects both the efficiency of the rule base and the way in which the expert system asks questions. in general, a rule is most efficient when its most general antecedents precede its more specific antecedents. the reason for this is that there is usually no sense in checking many specific antecedents if a rule will fail to fire because a general antecedent is not satisfied. the system tries to check for the truth of antecedents in the order in which the antecedents appear in the rules. as soon as an antecedent is found to be false, the system determines that the rule will not fire and continues its search of the rule base. in order to test whether a rule fires, considerable amounts of computation may have to be done and many questions may have to be asked. if this work can be avoided simply because a general antecedent that is already instantiated (or that must be instantiated for future use) is false, then that antecedent should precede any others in the rule. if antecedents are evaluated by questions, the order of the antecedents helps to determine the order in which questions will be asked. frequently, many of the properties in an expert system are, in some way, interrelated. knowing the value of one of these properties often allows us to compute the values of all or some of the others by auxiliary rules in the rule base. if this is the case, we certainly wish to avoid having the system ask the user redundant questions. moreover, we may wish to ask different users questions about the related properties in different orders. an example of such inter-related properties from the mathematics portion of our advising system is the set of properties recording whether a student has completed calculus 1, calculus 2, and calculus 3. since each of the preceding courses is a prerequisite for the following one in the sequence, the inter-relationships among these courses are the obvious ones. the auxiliary rules that embody these relationships are shown in figure 3. we have set up our system so that it tries to instantiate the properties cal1, cal2, and cal3 first by rules and then by asking questions. looking at these rules we see that they force questions to be asked by the system in a different order depending upon whether the user is or is not a freshman. if a student is not a freshman the rules force questions to be asked first about calculus 3, then about calculus 2, and then about calculus 1. otherwise, the questions are asked in the opposite order. these auxiliary rules force questions to be asked in the desired order and ensure that redundant questions are not asked. having said all of this, there remains the question of whether rule-base size should always be minimized. we believe the answer to this question is no. often entire rules and antecedents in rules are redundant and can be eliminated by the kind of ordering techniques that we have described. nevertheless, we often choose to leave these rules and/or antecedents in the rule-base as internal self-documentation. this makes the rule base much more readable and understandable. the cost in time of search must be weighed against the value of the internal documentation and seems to be justified unless system performance degrades unacceptably. gerald kiernan arnold koltun george psihountas edward schwartz jabberwocky caleb strauss is an mve the right environment for your visualization application? kent lee jun ni tom halverson eric van wyk judith r.
brown an application of machine learning to the problem of parameter setting in non- destructive testing this article presents an aid system for the setting of non-destructive testing instruments. some problems inherent in this field are briefly discussed, before showing how they led us to introduce machine learning techniques into the system. the approach uses learning from examples. the goal of the learning module is to determine dependencies between parameters of different experiments in order to automatically generate a set of rules. a prototype, called mandrin, has been implemented and is being evaluated on a real application: an x-ray tomograph. the first results are presented in the last section. j. c. royer a. merle c. de sainte marie genesis of a tex-based markup metalanguage alan e. wittbecker optical mark reading - making it easy for users one of the technologies in common use at university computer centers is optical mark reading (omr). at clemson university this form of input is becoming more widely used; as a result, faculty and staff who have had no previous contact with computing are beginning to depend on computer processing of their multiple-choice tests and survey forms. student government elections have been processed using omr input. as omr usage has increased, the need has arisen for additional user services support in the areas of omr documentation, education, utility programming, and office procedures. this paper will briefly trace the history of omr usage at clemson university. a detailed explanation will be made of how clemson faculty and staff are able to maximize their usage of omr with minimum knowledge of computing through the use of standardized omr forms and test-scoring programs. potential uses of omr equipment in a university environment will also be discussed. andrew m. smith solving engine maintenance capacity problems with simulation robert gatland eric yang kenneth buxton multivariate logit models and neural networks as knowledge acquisition tools for expert systems martin lukanowicz martin natter a versatile navigation interface for virtual humans in collaborative virtual environments igor pandzic tolga capin nadia magnenat-thalmann daniel thalmann classic learning description logics, also called terminological logics, are commonly used in knowledge-based systems to describe objects and their relationships. we investigate the learnability of a typical description logic, classic, and show that classic sentences are learnable in polynomial time in the exact learning model using equivalence queries and membership queries (which are in essence, "subsumption queries"). we show that membership queries alone are insufficient for polynomial time learning of classic sentences. combined with earlier negative results of cohen and hirsh showing that, given standard complexity theoretic assumptions, equivalence queries alone are insufficient (or random examples alone in the pac setting are insufficient), this shows that both sources of information are necessary for efficient learning in that neither type alone is sufficient. in addition, we show that a modification of the algorithm deals robustly with persistent malicious two-sided classification noise in the membership queries with the probability of a misclassification bounded below 1/2. michael frazier leonard pitt combining antithetic variates and control variates in simulation experiments antithetic variates and control variates are two well-known variance reduction techniques. 
we consider combining antithetic variates and control variates to estimate the mean response in a stochastic simulation experiment. when applying antithetic variates to generate control variates across paired replications, we show that the integrated control-variate estimator is unbiased and yields, under the assumption of common correlations induced for all control variates, a smaller variance than the conventional control-variate estimator without using antithetic variates. we examine the proposed estimator and two alternative integrated control-variate estimators when applying antithetic variates on control variates and show that the proposed estimator is the optimal integrated control-variate estimator. we implement these three integrated control-variate estimators and the conventional control-variate estimator in a simulation model of a stochastic network to evaluate the performance of each control-variate estimator. empirical results show that the proposed estimator outperforms the other control-variate estimators. wei-ning yang wei-win liou computing surveys' electronic symposium on hypertext and hypermedia: editorial helen ashman rosemary michelle simpson seeing and understanding: representing the visual world yiannis aloimonos c. fermuller a. rosenfeld fpga implementation of a novel, fast motion estimation algorithm for real-time video compression a novel block matching algorithm for motion estimation in a video frame sequence, well suited for a high performance fpga implementation, is presented in this paper. the algorithm is up to 40% faster when compared to one of the fastest existing algorithms, viz., the one-at-a-time step search algorithm, without compromising either the image quality or the compression effected. the speed advantage is preserved even in the event of a sudden scene change in a video sequence. the proposed algorithm is also capable of dynamically detecting the direction of motion of image blocks. the fpga implementation of the algorithm is capable of processing color pictures of sizes up to 1024x768 pixels at the real time video rate of 25 frames/second and conforms to mpeg-2 standards. s. ramachandran s. srinivasan array-driven simulation of real databases william s. keezer trianglecaster: extensions to 3d-texturing units for accelerated volume rendering gunter knittel visual effects: incredible effects vs. credible science george suhayda identifying partners and sustenance of stable, effective coalitions partha sarathi dutta sandip sen distributed, parallel simulation of multiple, deliberative agents multi-agent systems comprise multiple, deliberative agents embedded in and recreating patterns of interactions. each agent's execution consumes considerable storage and calculation capacities. for testing multi-agent systems, distributed parallel simulation techniques are required that take the dynamic pattern of composition and interaction of multi-agent systems into account. analyzing the behavior of agents in virtual, dynamic environments necessitates relating the simulation time to the actual execution time of agents. since the execution time of deliberative components can hardly be foretold, conservative techniques based on lookahead are not applicable. on the other hand, optimistic techniques become very expensive if mobile agents and the creation and deletion of model components are affected by a rollback.
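the combined antithetic-and-control-variate estimator described a few entries above can be illustrated with a deliberately small monte carlo experiment: antithetic pairs (u, 1-u) drive the paired replications, the pair average of a control with known mean is used for the usual regression adjustment, and the result is compared with the exact answer. the integrand, the control and the sample size are invented, and this is only a schematic of the general idea, not the paper's estimator.

```python
# schematic combination of antithetic variates and control variates:
# antithetic pairs (u, 1-u) give paired replications, and the pair averages of
# a control with known mean feed the usual regression adjustment. invented example.
import math, random

random.seed(1)
N = 2000
ys, cs = [], []
for _ in range(N):
    u = random.random()
    ys.append((math.exp(u) + math.exp(1 - u)) / 2)   # antithetic pair average
    cs.append((u * u + (1 - u) ** 2) / 2)            # control: known mean 1/3

y_bar = sum(ys) / N
c_bar = sum(cs) / N
cov = sum((y - y_bar) * (c - c_bar) for y, c in zip(ys, cs)) / (N - 1)
var = sum((c - c_bar) ** 2 for c in cs) / (N - 1)
estimate = y_bar - (cov / var) * (c_bar - 1 / 3)     # control-variate adjustment

print(round(estimate, 4), "vs exact", round(math.e - 1, 4))
```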
the developed simulation layer of james (a java based agent modeling environment for simulation) implements a moderately optimistic strategy which splits simulation and external deliberation into different threads and allows simulation and deliberation to proceed concurrently by utilizing simulation events as synchronization points. a. m. uhrmacher k. gugler on the relationship between autoepistemic logic and parallel circumscription the purpose of this paper is to investigate the relationship between two approaches to the formalization of non-monotonic reasoning - j. mccarthy's approach based on the notion of circumscription [mc] and the autoepistemic logic approach of r. moore [mr]. since these two approaches differ considerably in scope, we will limit our attention to the situation where some common ground can be found. namely, we consider only the propositional case of parallel circumpsription, when all predicates of a formula t are circumscribed simultaneously. m gelfond h przymusinska designing simulation experiments: taguchi methods and response surface metamodels john s. ramberg susan m. sanchez paul j. sanchez ludwig j. hollick exploiting deep parallel memory hierarchies for ray casting volume rendering michael e. palmer stephen taylor brian totty live paint: painting with procedural multiscale textures ken perlin luiz velho high level knowledge sources in usable speech recognition systems the authors detail an integrated system which combines natural language processing with speech understanding in the context of a problem solving dialogue. the minds system uses a variety of pragmatic knowledge sources to dynamically generate expectations of what a user is likely to say. s. l. young a. g. hauptmann w. h. ward e. t. smith p. werner general convergence results for linear discriminant updates adam j. grove nick littlestone dale schuurmans a self interpreter for balinda lisp an intepreter for balinda lisp, a parallel lisp dialect designed for the biddle machine [1], is presented. the interpreter is itself written in balinda lisp, and is not at present executable. however, it constitutes a detailed specification of the language and a description of how to execute it, and will be used in subsequent implementations on biddle and other computers. a number of system primitives required to implement the language have also been specified. c. k. yuen w. f. wong representing visual conditions in a legal knowledge based system legal kbss are based on knowledge contained in legal texts such as legislation, regulations and case histories and the practice of domain experts charged with operationalising this legislation. legal texts and their opertionalisation can be analysed using textual analysis tools which lead to the production of a rule base which can be manipulated to establish a desired goal. in this paper we describe an approach to the development of legal kbss where the legal texts include visual conditions which do not lend themselves to simple interpretation using textual analysis tools. the approach focuses on the use of preprocessors to generate descriptors derived from the geometrical interpretation of the visual data in question. these descriptors can then be used as direct input to a kbs without the need to include complex mathematics, to which kbs representations are not well suited, within individual rules. 
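in the spirit of the preprocessing idea just described (and of the collision-avoidance setting the abstract goes on to mention), the python sketch below turns two moving vessels' positions and velocities into two plain descriptors, the distance and time at the closest point of approach, that a rule base could test directly; the vessel data and thresholds are invented and this is not the project's code.

```python
# hypothetical geometric preprocessor: reduce vessel kinematics to simple
# descriptors (closest point of approach and time to it) that a rule base can
# test directly, keeping the geometry out of the rules themselves.
import math

def cpa_descriptors(p_own, v_own, p_other, v_other):
    """return (distance at cpa, time to cpa) for straight-line motion."""
    dx, dy = p_other[0] - p_own[0], p_other[1] - p_own[1]
    dvx, dvy = v_other[0] - v_own[0], v_other[1] - v_own[1]
    dv2 = dvx * dvx + dvy * dvy
    t_cpa = 0.0 if dv2 == 0 else max(0.0, -(dx * dvx + dy * dvy) / dv2)
    cx, cy = dx + dvx * t_cpa, dy + dvy * t_cpa
    return math.hypot(cx, cy), t_cpa

# invented situation: own ship heading east, other ship heading south
d_cpa, t_cpa = cpa_descriptors((0, 0), (10, 0), (30, 28), (0, -10))
risk_of_collision = d_cpa < 5.0 and t_cpa < 6.0   # thresholds are illustrative
print(round(d_cpa, 2), round(t_cpa, 2), risk_of_collision)  # close approach -> True
```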
the approach was developed as part of a much larger project concerned with the production of a legal kbs to advise the navigators of ocean going vessels on how best to avoid collision with other vessels as prescribed by international maritime law. a fragment of the legislation on which this kbs is based is used as an example. frans coenen trevor bench-capon peter smeaton a formal framework for inter-agent dialogues we present a logic-based formalism for modeling of dialogues between intelligent and autonomous software agents, building on a theory of abstract dialogue games which we present. the formalism enables representation of complex dialogues as sequences of moves in a combination of dialogue games, and allows dialogues to be embedded inside one another. the formalism can be readily operationalized and its modular nature enables different types of dialogues to be represented. peter mcburney simon parsons 3d behavioral model design for simulation and software engineering modeling is used to build structures that serve as surrogates for other objects. as children, we learn to model at a very young age. an object such as a small toy train teaches us about the structure and behavior of an actual train. vrml is a file standard for representing the structure of objects such as trains, while the behavior would be represented in a computer language such as ecmascript or java. vrml is an abbreviation for virtual reality modeling language [2], which represents the standard 3d language for the web. our work is to extend the power of vrml so that it is used not only for defining shape models, but also for creating structures for behavior. "behavior shapes" are built using metaphors mapped onto wellknown dynamic model templates such as finite state machines, functional block models and petri nets. the low level functionality of the design still requires a traditional programming language, but this level is hidden underneath a modeling level that is visualized by the user. we have constructed a methodology called rube which provides guidelines on building behavioral structures in vrml. the result of our endeavors has yielded a set of vrml prototypes that serve as dynamic model templates. we demonstrate several examples of behaviors using primitive shape and architectural metaphors. paul a. fishwick accurate color reproduction for computer graphics applications bruce j. lindbloom the palace of soviets takehiko nagakura cscat: the compaq supply chain analysis tool rick g. ingalls cynthia kasales inbetweening for computer animation utilizing moving point constraints this paper presents an approach to computerized inbetweening which allows the animator more control over an interpolation sequence than existing keyframe techniques. in our approach, the animator specifies in addition to a set of keyframe constraints, a set of new constraints called moving points. moving points are curves in space and time which constrain both the trajectory and dynamics of certain points on the keyframes. the sets of keyframes and moving points form a constraint or patch network specification of the desired dynamics. several algorithms are presented for inbetweening or completing such a patch network. by measuring these algorithms with respect to a set of evaluation criteria, the algorithm which best meets our interpolation needs is selected. william t. reeves opentag and tmx: xml in the localization industry william burns walter smith using animation to enhance a marine-terminal monte carlo simulator rodney w. 
cyr the process of discovery: hypertext and scholarship mark bernstein george p. landow elli mylonas john b. smith total order planning is more efficient than we thought vincent vidal pierre regnier a device independent graphics imaging model for use with raster devices in building graphic systems for use with raster devices, it is difficult to develop an intuitive, device independent model of the imaging process, and to preserve that model over a variety of device implementations. this paper describes an imaging model and an associated implementation strategy that: 1\\. integrates scanned images, text, and synthetically generated graphics into a uniform device independent metaphor; 2\\. isolates the device dependent portions of the implementation to a small set of primitives, thereby minimizing the implementation cost for additional devices; 3\\. has been implemented for binary, grey-scale, and full color raster display systems, and for high resolution black and white printers and color raster printers. john warnock douglas k. wyatt port activity simulation: an overview said ali hassan simis ii - an environment for material flow systems simulation a user-oriented model for simulation of automatic transportation systems is described. special emphasis is laid on the description of the user model world and the problems in its design. helmut ludwigs simstat: a tool for simulation analysis warren e. blaisdell jorge haddock structured graphics for distributed systems k. a. lantz w. i. nowicki towards real-time photorealistic rendering: challenges and solutions andreas schilling graphical techniques for output analysis david alan grier issues and requirements for building a generic animation d. michelle benjamin barbara werner mazziotti f. bradley armstrong artificial intelligence meets artificial insemination: an application of rule extraction from neural networks emanoil pop alan b. tickle ross hayward don simonetta joachim diederich volumetric backprojection frank dachille klaus mueller arie kaufman a maximum-likelihood interpretation of batch means estimators kevin j. healy integrating interactive graphics techniques with future technologies (panel session) theresa marie rhyne eric gidney tomasz imielinski pattie maes ronald vetter on the influence of the representation granularity in heuristic forma recombination carlos cotta jose m. troya mapping communicative goals into conceptual tasks to generate graphics in discourse we address the problem of realizing communicative plans in graphics. our approach calls for mapping communicative goals to conceptual tasks and then using task-based graphic design for selecting graphical techniques. in this paper, we present the mapping rules in several dimensions: data aggregation and selection, task synthesis, and task aggregation. those rules have been incorporated in autobrief, a research system for multimedia explanation. stephan kerpedjiev steven f. roth sketch: an interface for sketching 3d scenes robert c. zeleznik kenneth p. herndon john f. hughes teddy: a sketching interface for 3d freeform design takeo igarashi satoshi matsuoka hidehiko tanaka performance behavior: a study in the fragility of expertise b. r. huguenard m. j. prietula f. j. lerch spectral processing of point-sampled geometry we present a new framework for processing point-sampled objects using spectral methods. 
by establishing a concept of local frequencies on geometry, we introduce a versatile spectral representation that provides a rich repository of signal processing algorithms. based on an adaptive tesselation of the model surface into regularly resampled displacement fields, our method computes a set of windowed fourier transforms creating a spectral decomposition of the model. direct analysis and manipulation of the spectral coefficients supports effective filtering, resampling, power spectrum analysis and local error control. our algorithms operate directly on points and normals, requiring no vertex connectivity information. they are computationally efficient, robust and amenable to hardware acceleration. we demonstrate the performance of our framework on a selection of example applications including noise removal, enhancement, restoration and subsampling. mark pauly markus gross dragon gate geoff wyvill surface splatting modern laser range and optical scanners need rendering techniques that can handle millions of points with high resolution textures. this paper describes a point rendering and texture filtering technique called _surface splatting_ which directly renders opaque and transparent surfaces from point clouds without connectivity. it is based on a novel screen space formulation of the elliptical weighted average (ewa) filter. our rigorous mathematical analysis extends the texture resampling framework of heckbert to irregularly spaced point samples. to render the points, we develop a surface splat primitive that implements the screen space ewa filter. moreover, we show how to optimally sample image and procedural textures to irregular point data during pre- processing. we also compare the optimal algorithm with a more efficient view- independent ewa pre- filter. surface splatting makes the benefits of ewa texture filtering available to point-based rendering. it provides high quality anisotropic texture filtering, hidden surface removal, edge anti-aliasing, and order- independent transparency. matthias zwicker hanspeter pfister jeroen van baar markus gross actor-based computing: vision forestalled, vision fulfilled arthur allen the yang-baxter equation and a systematic search for poisson brackets on associative algebras walter oevel klaus strack grow & fold: compression of tetrahedral meshes andrzej szymczak jarek rossignac knowledge criteria for the evaluation of legal beliefs l. mommers h. j. van den herik rendering parametric surfaces in pen and ink georges winkenbach david h. salesin surround-screen projection-based virtual reality: the design and implementation of the cave carolina cruz-neira daniel j. sandin thomas a. defanti active mental entities: a new approach to building intelligent autonomous agents pietro baroni daniela fogli giovanni guida silvano mussi applications of pixel textures in visualization and realistic image synthesis wolfgang heidrich rudiger westermann hans-peter seidel thomas ertl the rgyb color geometry background: the gamut of a color crt is defined by its three primary colors, each produced by a phosphor/electron gun combination. light from the primaries combines additively, so the color gamut is a subset of a three dimensional vector space [1]. with the primaries as basis vectors normalized to 1.0, the color gamut is a unit cube, known as the rgb color geometry, since the three primaries are usually red, green, and blue. 
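the spectral framework for point-sampled geometry summarized a few entries above rests on resampling local patches regularly and then filtering them in the frequency domain; as a loose one-dimensional analogue (not that paper's algorithm), the python sketch below removes high-frequency noise from a regularly resampled height profile with an fft low-pass filter. it assumes numpy is available, and the signal and cutoff are synthetic.

```python
# loose 1-d analogue of spectral filtering of resampled geometry: transform a
# regularly sampled height profile, damp its high-frequency coefficients, and
# transform back. synthetic signal, not the surface algorithm described above.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
heights = np.sin(x) + 0.3 * np.sin(4 * x) + 0.1 * rng.standard_normal(x.size)

spectrum = np.fft.rfft(heights)
cutoff = 8                              # keep only the lowest few frequencies
spectrum[cutoff:] = 0.0                 # crude low-pass "noise removal"
smoothed = np.fft.irfft(spectrum, n=heights.size)

print(float(np.abs(heights - smoothed).max()))   # size of what was filtered out
```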
user interaction via rgb is generally thought to be counterintuitive, and transformations of rgb, such as smith's hsv geometry [10] which is derived from centuries old artists' models [2], are more popular. more recent color theories, based on psychophysical and physiological models of early visual processing, suggest that more intuitive geometries may be possible. the rgyb geometry is based on two recent discoveries about the human visual system. first, the three color signals from the cone receptors are organized into three opponent channels [1, 7]. a single achromatic channel indicates lightness or brightness. two chromatic channels, red/green and yellow/blue, signal the chromatic quantities. second, signals on the achromatic channel are easily distinguishable from signals on the chromatic ones [6]. consequently, it is usual to represent colors as a set of surfaces of colors that vary in chromaticity, each at a different level of brightness. examples are as diverse as cie chromaticity coordinates, the cieluv uniform color space, the munsell color system, and computer graphics color spaces such as hsv and hls [10, 12]. colin ware william cowan simscript ii.5 tutorial this tutorial will present the highlights of the simscript ii.5 approach to building discrete event simulation models. the approach will be to construct a small example problem, implement the program in simscript ii.5, and then to display the modularity which is possible with simscript by adding several "real-world" complexities to the model. edward c. russell reading between the lines - a method for extracting dynamic 3d with texture marc proesmans luc van gool bankxx: a program to generate argument through case-base research in this paper we describe a system, called bankxx, which generates arguments by performing a heuristic best-first search of a highly interconnected network of legal knowledge. the legal knowledge includes cases represented from a variety of points of view---cases as collections of facts, cases as dimensionally- analyzed fact situations, cases as bundles of citations, and cases as prototypical factual scripts---as well as legal theories represented in terms of domain factors. bankxx performs its search for useful information using one of three evaluation functions encoded at different levels of abstraction: the domain level, an "argument-piece" level, and the overall argument level. evaluation at the domain level uses easily accessible information about the nodes, such as their type; evaluation at the argument- piece level uses information about generally useful components of case-based argument, such as best cases and supporting legal theories; evaluation at the overall-argument level uses factors, called argument dimensions, which address the overall substance and quality of an argument, such as the centrality of its supporting cases or the success record of its best theory. bankxx is instantiated in the area of personal bankruptcy governed by chapter 13 of the u.s. bankruptcy code, which permits a debtor to be discharged from debts through completion of a court-approved payment plan. in particular, our system addresses the requirement that such chapter 13 plans be "proposed in good faith." edwina l. rissland david b. skalak m. timur friedman an empirical study of non-binary genetic algorithm-based neural approaches for classification parag c. pendharkar james a. rodger implicitization using moving curves and surfaces thomas w. 
sederberg falai chen corporate modeling and planning politics at conrail a careful analysis of the corporate planning process in a given company will generally provide valuable insight to any group charged with the responsibility of developing corporate planning models. the location of decision-making authority (herein referred to as the politics of corporate planning or corporate politics) is one of the most important factors to consider when developing a corporate planning model. roger h. mehl isosurface extraction in time-varying fields using a temporal branch-on-need tree (t-bon) the temporal branch-on-need tree (t-bon) extends the three-dimensional branch-on-need octree for time-varying isosurface extraction. at each time step, only those portions of the tree and data necessary to construct the current isosurface are read from disk. this algorithm can thus exploit the temporal locality of the isosurface and, as a geometric technique, spatial locality between cells in order to improve performance. experimental results demonstrate the performance gained and memory overhead saved using this technique. philip sutton charles d. hansen computation of the axial view of a set of isothetic parallelepipeds we present a new technique to display a scene of three-dimensional isothetic parallelepipeds (3d-rectangles), viewed from infinity along one of the coordinate axes (axial view). in this situation, there always exists a topological sorting of the 3d-rectangles based on the relation of occlusion (a dominance relation). the arising total order is used to generate the axial view, where the two-dimensional view of each 3d-rectangle is incrementally added, starting from the closest 3d-rectangle. the proposed scene-sensitive algorithm runs in time o(n log2n + d log n), where n is the number of 3d-rectangles and d is the number of edges of the display. this improves over the previously best known technique based on the same approach. franco p. preparata jeffrey s. vitter mariette yvinec measuring and predicting visual fidelity this paper is a study of techniques for measuring and predicting visual fidelity. as visual stimuli we use polygonal models, and vary their fidelity with two different model simplification algorithms. we also group the stimuli into two object types: animals and man made artifacts. we examine three different experimental techniques for measuring these fidelity changes: naming times, ratings, and preferences. all the measures were sensitive to the type of simplification and level of simplification. however, the measures differed from one another in their response to object type. we also examine several automatic techniques for predicting these experimental measures, including techniques based on images and on the models themselves. automatic measures of fidelity were successful at predicting experimental ratings, less successful at predicting preferences, and largely failures at predicting naming times. we conclude with suggestions for use and improvement of the experimental and automatic measures of visual fidelity.
benjamin watson alinda friedman aaron mcgaffey seeding, evolutionary growth and reseeding: supporting the incremental development of design environments gerhard fischer ray mccall jonathan ostwald brent reeves frank shipman a logical approach to high-level agent control recent work in animated human-like agents has made impressive progress toward generating agents with believable appearances and realistic motions for the interactive applications of inhabited virtual worlds. it remains difficult, however, to instruct animated agents to perform specific tasks or take initiatives. this paper addresses the challenge of instructability by introducing cognitive modelling - a novel logical approach based on a highly developed logical theory of actions, i.e. event calculus. cognitive models go beyond behavioural models in that they govern an agent's behaviour by reasoning about its knowledge, actions and events. to facilitate the construction of cognitive models, we develop a language (bsl) from the event calculus formalism. using bsl, we can specify an agent's domain knowledge, design behaviour controllers and then control the agent's behaviour in terms of goals and/or user's instructions. this approach allows an agent's behaviours to be specified and controlled more naturally and intuitively, more succinctly and at a much higher level of abstraction than would otherwise be possible. it also provides a logical characterisation of planning via an abductive reasoning process. furthermore, we integrate sensing capability into our underlying theoretical framework, thus enabling animated agents to generate appropriate behaviour even in complex, dynamic virtual worlds. an animated human-like interface agent for virtual environments is used to demonstrate the approach. the architecture for implementing the approach is also described. l. chen k. bechkoum g. clapworthy fast algorithms for volume ray tracing john danskin pat hanrahan an automatic beautifier for drawings and illustrations theo pavlidis christopher j. van wyk texturing issues in scene generation with constructive solid geometry joseph giovatto 3d imaging for rapid response on remote sites j. a. beraldin f. blais l. cournoyer m. rioux s. h. el-hakim linearly anticipatory autonomous agents paul davidsson forward into the past: a revival of old visual talents with computer visualization thomas g. west third commentary on "what is text really?" r. stanley dicks stuart little donald levy choosing classes in conceptual modeling jeffrey parsons yair wand acm president's letter: performance analysis: experimental computer science at its best peter j. denning weighted jackknife-after-bootstrap: a heuristic approach jin wang j. sunil rao jun shao whirlygig jason wen volume sculpting we present a modeling technique based on the metaphor of interactively sculpting complex 3d objects from a solid material, such as a block of wood or marble. the 3d model is represented in a 3d raster of voxels where each voxel stores local material property information such as color and texture. sculpting is done by moving 3d voxel-based tools within the model. the affected regions are indicated directly on the 2d projected image of the 3d model. by reducing the complex operations between the 3d tool volume and the 3d model down to primitive voxel-by-voxel operations, coupled with the utilization of a localized ray casting for image updating, our sculpting tool achieves real-time interaction.
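as an illustrative aside, the following is a minimal sketch of the kind of voxel-by-voxel carving operation described in the volume sculpting abstract above; it assumes a dense numpy grid and a spherical subtractive tool, which are our own illustrative choices rather than the paper's actual tool primitives.

```python
# minimal sketch of voxel-by-voxel sculpting on a dense grid (illustrative only).
# assumes numpy; the spherical "carve" tool and density representation are our
# own simplifications, not the primitives described in the paper.
import numpy as np

def make_block(n):
    """a solid block of material: density 1.0 everywhere, plus a color channel."""
    density = np.ones((n, n, n), dtype=np.float32)
    color = np.full((n, n, n, 3), 0.8, dtype=np.float32)  # uniform grey material
    return density, color

def carve_sphere(density, center, radius):
    """subtract a spherical tool from the model, one voxel at a time."""
    n = density.shape[0]
    zs, ys, xs = np.mgrid[0:n, 0:n, 0:n]
    dist2 = (xs - center[0])**2 + (ys - center[1])**2 + (zs - center[2])**2
    density[dist2 <= radius**2] = 0.0  # remove material inside the tool
    return density

if __name__ == "__main__":
    density, color = make_block(64)
    # drag the tool along a short path, carving as it moves
    for t in np.linspace(0.0, 1.0, 10):
        carve_sphere(density, center=(16 + 32 * t, 32, 32), radius=6)
    print("remaining material fraction:", density.mean())
```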
furthermore, volume sampling techniques and volume manipulations are employed to ensure that the process of sculpting does not introduce aliasing into the models. sidney w. wang arie e. kaufman adaptive view dependent tessellation of displacement maps displacement mapping is an effective technique for encoding the high levels of detail found in today's triangle based surface models. extending the hardware rendering pipeline to be capable of handling displacement maps as geometric primitives, will allow highly detailed models to be constructed without requiring large numbers of triangles to be passed from the cpu to the graphics pipeline. we present a new approach based on recursive tessellation that adapts to the surface complexity described by the displacement map. we also ensure that the resolution of the displaced mesh is tessellated with respect to the current view point. our tessellation scheme performs all tests only on triangle edges to avoid generating cracks on the displaced surface. the main decision for vertex insertion is based on two comparisons involving the average height surrounding the vertices and the normals at the vertices. individually, the tests will fail to tessellate a mesh satisfactorily, but their combination achieves good results. we propose several additions to the typical hardware rendering pipeline in order to achieve displacement map rendering in hardware. the mesh tessellation is placed within the rendering pipeline so that we can take advantage of the pre-existing vertex transformation units to perform the setup calculations for our view dependent test. our method adds only simple arithmetic and comparison operations to the graphics pipeline and makes use of existing units for calculations wherever possible. michael doggett johannes hirche improving edge detection by an objective edge evaluation qiuming zhu context-aware office assistant this paper describes the design and implementation of the office assistant --- an agent that interacts with visitors at the office door and manages the office owner's schedule. we claim that rich context information about users is key to making a flexible and believable interaction. we also argue that natural face-to-face conversation is an appropriate metaphor for human- computer interaction. hao yan ted selker analytic representations of simulation in much the same way that a mathematical model of a real or conceptual system is used to improve the design and operation of the system, mathematical models of simulation are used to improve the design and operation of simulation experiments. the participants in this discussion are advocates of various analytic representations of simulation. contained in the paper that follows are each participant's responses to a set of questions concerning the representation of simulation they represent. bruce w. schmeiser visualizing gridded datasets with large number of missing values (case study) much of the research in scientific visualization has focused on complete sets of gridded data. this paper presents our experience dealing with gridded data sets with large number of missing or invalid data, and some of our experiments in addressing the shortcomings of standard off-the-shelf visualization algorithms. in particular, we discuss the options in modifying known algorithms to adjust to the specifics of sparse datasets, and provide a new technique to smooth out the side-effects of the operations. 
we apply our findings to data acquired from nexrad (next generation radars) weather radars, which usually have no more than 3 to 4 percent of all possible cell points filled. suzana djurcilov alex pang a level-set approach for the metamorphosis of solid models david e. breen ross t. whitaker illumination from curved reflectors don mitchell pat hanrahan computer graphics tools for the study of minimal surfaces recent research indicates machine computation and mathematical theory have proceeded hand in hand and have proved to be of great benefit to one another. m. j. callahan d. hoffman j. t. hoffman plenoptic sampling this paper studies the problem of plenoptic sampling in image-based rendering (ibr). from a spectral analysis of light field signals and using the sampling theorem, we mathematically derive the analytical functions to determine the minimum sampling rate for light field rendering. the spectral support of a light field signal is bounded by the minimum and maximum depths only, no matter how complicated the spectral support might be because of depth variations in the scene. the minimum sampling rate for light field rendering is obtained by compacting the replicas of the spectral support of the sampled light field within the smallest interval. given the minimum and maximum depths, a reconstruction filter with an optimal and constant depth can be designed to achieve anti-aliased light field rendering. plenoptic sampling goes beyond the minimum number of images needed for anti- aliased light field rendering. more significantly, it utilizes the scene depth information to determine the minimum sampling curve in the joint image and geometry space. the minimum sampling curve quantitatively describes the relationship among three key elements in ibr systems: scene complexity (geometrical and textural information), the number of image samples, and the output resolution. therefore, plenoptic sampling bridges the gap between image-based rendering and traditional geometry-based rendering. experimental results demonstrate the effectiveness of our approach. jin-xiang chai shing-chow chan heung- yeung shum xin tong an extension of manifold boundary representations to the r-sets in this paper we study the relationship between manifold solids (r-sets whose boundaries are two-dimensional closed manifolds) and r-sets. we begin by showing that an r-set may be viewed as the limit of a certain sequence of manifold solids, where distance is measured using the hausdorff metric. this permits us to introduce a minimal set of generalized euler operators, sufficient for the construction and manipulation of r-sets. the completeness result for ordinary euler operators carries over immediately to the generalized euler operators on the r-sets and the modification of the usual boundary data structures, corresponding to our extension to nonmanifold r-sets, is straightforward. we in fact describe a modification of a well-known boundary data structure in order to illustrate how the extension can be used in typical solid modeling algorithms, and describe an implementation. the results described above largely eliminate what has been called an inherent mismatch between the modeling spaces defined by manifold solids and by r-sets. we view the r-sets as a more appropriate choice for a modeling space: in particular, the r-sets provide closure with respect to regularized set operations and a complete set of generalized euler operators for the manipulation of boundary representations, for graphics and other purposes. 
it remains to formulate and prove a theorem on the soundness of the generalized euler operators. h. desaulniers n. f. stewart suitor: an attentive information system attentive systems pay attention to what users do so that they can attend to what users need. such systems track user behavior, model user interests, and anticipate user desires and actions. because the general class of attentive systems is broad --- ranging from human butlers to web sites that profile users --- we have focused specifically on attentive information systems, which observe user actions with information resources, model user information states, and suggest information that might be helpful to users. in particular, we describe an implemented system, simple user interest tracker (suitor), that tracks computer users through multiple channels --- gaze, web browsing, application focus --- to determine their interests and to satisfy their information needs. by observing behavior and modeling users, suitor finds and displays potentially relevant information that is both timely and non- disruptive to the users' ongoing activities. paul p. maglio rob barrett christopher s. campbell ted selker design for a real-time high-quality volume rendering workstation marc levoy conference review janyce wiebe navigating through a sea of information: an experiment in the development of an expert system for consulting questions by mastering the ability to sail forth and navigate the seas, people have expanded their wealth and knowledge throughout the ages. the ability to sail forth, however, was greatly aided by the invention of navigational tools. today, we who work in centers of information dissemination are faced with a similar challenge---navigating through the sea of information that confronts us every day. like the early sailors, who had only a few crude instruments to guide them on their journeys, we, too, have had little to guide us through our extensive database of knowledge. although our destinations may not seem as exotic as the ancient ports-of-call, they are no less important in exploring knowledge and pushing back the frontiers of discovery. arnold alagar a calendar with common sense digital devices today have little understanding of their real-world context, and as a result they often make stupid mistakes. to improve this situation we are developing a database of world knowledge called thoughttreasure at the same time that we develop intelligent applications. in this paper we present one such application, sensical, a calendar with a degree of common sense. we discuss the pieces of common sense important in calendar management and present methods for extracting relevant information from calendar items. erik t. mueller automated conversion of curvilinear wire-frame models to surface boundary models; a topological approach john a. brewer s. mark courter evaluating risk: flexibility and feasibility in multi-agent contracting john collins maksim tsvetovat rashmi sundareswara joshua van tonder maria gini bamshad mobasher an approach to automating knowledge acquisition for expert systems: annotated traces diagnostic heirarchies lee a. becker luke immes simulation-based approach to the warehouse location problem for a large-scale real instance kazuyoshi hidaka hiroyuki okano a level-set method for flow visualization rudiger westermann christopher johnson thomas ertl brain drain, reconsidering spatial ability thomas g. 
west simulation modeling at multiple levels of abstraction perakath benjamin madhav erraguntla dursun delen richard mayer a tool in modelling disagreement in law: preferring the most specific argument henry prakken gmss graphic modelling and simulation system gmss is a simulation modelling system providing a tool kit of functions to support the automation needs of simulation analysis. the goal of gmss is to put simulation modelling into the hands of the decision maker. r. r. willis w. p. austell planning motions with intentions we apply manipulation planning to computer animation. a new path planner is presented that automatically computes the collision-free trajectories for several cooperating arms to manipulate a movable object between two configurations. this implemented planner is capable of dealing with complicated tasks where regrasping is involved. in addition, we present a new inverse kinematics algorithm for the human arms. this algorithm is utilized by the planner for the generation of realistic human arm motions as they manipulate objects. we view our system as a tool for facilitating the production of animation. yoshihito koga koichi kondo james kuffner jean-claude latombe spray rendering as a modular visualization environment alex pang craig m. wittenbrink anisotropic nonlinear diffusion in flow visualization many applications produce three-dimensional points that must be further processed to generate a surface. surface reconstruction algorithms that start with a set of unorganized points are extremely time-consuming. sometimes, however, points are generated such that there is additional information available to the reconstruction algorithm. we present spiraling edge, a specialized algorithm for surface reconstruction that is three orders of magnitude faster than algorithms for the general case. in addition to sample point locations, our algorithm starts with normal information and knowledge of each point's neighbors. our algorithm produces a localized approximation to the surface by creating a star-shaped triangulation between a point and a subset of its nearest neighbors. this surface patch is extended by locally triangulating each of the points along the edge of the patch. as each edge point is triangulated, it is removed from the edge and new edge points along the patch's edge are inserted in its place. the updated edge spirals out over the surface until the edge encounters a surface boundary and stops growing in that direction, or until the edge reduces to a small hole that is filled by the final triangle. t. preußer m. rumpf the role of learning in autonomous robots rodney a. brooks simobject: from rapid prototype to finished model - a breakthrough in graphical model building john g. goble on modularity in integrated architectures steven minton grouping and parameterizing irregularly spaced points for curve fitting given a large set of irregularly spaced points in the plane, an algorithm for partitioning the points into subsets and fitting a parametric curve to each subset is described. the points could be measurements from a physical phenomenon, and the objective in this process could be to find patterns among the points and describe the phenomenon analytically. the points could be measurements from a geometric model, and the objective could be to reconstruct the model by a combination of parametric curves. the algorithm proposed here can be used in various applications, especially where given points are dense and noisy. 
examples demonstrating the behavior of the algorithm under noise and density of the points are presented and discussed. a. ardeshir goshtasby adaptive unwrapping for interactive texture painting takeo igarashi dennis cosgrove fuzzy logic based non-parametric color image segmentation with optional block processing naoko ito yoshihisa shimazu teruo yokoyama yutaka matushita fuzzy logic (abstract): issues, contentions and perspectives lotfi zadeh interactive arrangement of botanical l-system models joanna l. power a. j. bernheim brush przemyslaw prusinkiewicz david h. salesin star wars episode 1: the phantom menace rob coleman ned gorman christian rouet scott squires john knoll modeling the urban development process and the related transportation impacts a conceptually realistic but highly idealized approach was taken to create a simulation model of the structural interactions between the many elements of urban land use and transportation decisions. the objective is to use the model to discover elements and links that are amenable to land use intensification policies. while land use decisions are based on system perspective, transportation decisions are individualized. while land use decisions are permanent, transportation decisions are susceptible to be influenced. through planning and other governmental policies, it is hoped that "infilling" can be achieved and the transportation conditions ensued will be energy efficient and environmentally compatible. tenny n. lam richard corsi katherine wages freedom minoru sasaki tuneo sakai kakuzo urao news douglas blank on genetic algorithms eric b. baum dan boneh charles garrett refutational proofs of geometry theorems via characteristic set computation a refutational approach to geometry theorem proving using ritt-wu's algorithm for computing a characteristic set is discussed. a geometry problem is specified as a quantifier-free formula consisting of a finite set of hypotheses implying a conclusion, where each hypothesis is either a geometry relation or a subsidiary condition ruling out degenerate cases, and the conclusion is another geometry relation. the conclusion is negated, and each of the hypotheses (including the subsidiary conditions) and the negated conclusion is converted to a polynomial equation. characteristic set computation is used for checking the inconsistency of a finite set of polynomial equations over an algebraic closed field. the method is contrasted with a related refutational method that used buchberger's grobner basis algorithm for the inconsistency check. d. kapur h. k. wan graphics in overlapping bitmap layers one of the common uses of bitmap terminals is storing multiple programming contexts in multiple, possibly overlapping, areas of the screen called windows. windows traditionally store the visible state of a programming environment, such as an editor or debugger, while the user works with some other program. this model of interaction is attractive for one-process systems, but to make full use of a multiprogramming environment, windows must be asynchronously updated, even when partially or wholly obscured by other windows. for example, a long compilation may run in one window, displaying messages as appropriate, while the user edits a file in another window. this paper describes a set of low-level graphics primitives to manipulate overlapping asynchronous windows, called layers, on a bitmap display terminal. 
unlike previous window software, these primitives extend the domain of the general bitmap operator bitblt to include bitmaps that are partially or wholly obscured. rob pike devs-scheme simulation of stream water quality piotr l. jankowski jerzy w. rozenblit incremental and hierarchical hilbert order edge equation polygon rasterization _a rasterization algorithm must efficiently generate pixel fragments from geometric descriptions of primitives. in order to accomplish per-pixel shading, shading parameters must also be interpolated across the primitive in a perspective-correct manner. if some of these parameters are to be interpreted in later stages of the pipeline directly or indirectly as texture coordinates, then translating spatial and parametric coherence into temporal coherence will improve texture cache performance. finally, if framebuffer access is also organized around cached blocks, then organizing rasterization so fragments are generated in block-sequential order will maximize framebuffer cache performance. hilbert-order rasterization accomplishes these goals, and also permits efficient incremental evaluation of edge and interpolation equations._ michael d. mccool chris wales kevin moule a look-ahead learning algorithm for inductive learning through examples ray r. hashemi frederick r. jelovsek a new output analysis approach for systems experiencing disruptions mary c. court jorge haddock nested expansions and hardy fields john shackell feature-based control of visibility error: a multi-resolution clustering algorithm for global illumination françois sillion george drettakis stable fluids jos stam combatting rendering latency latency or lag in an interactive graphics system is the delay between user input and displayed output. we have found latency and the apparent bobbing and swimming of objects that it produces to be a serious problem for head-mounted display (hmd) and augmented reality applications. at unc, we have been investigating a number of ways to reduce latency; we present two of these. slats is an experimental rendering system for our pixel-planes 5 graphics machine guaranteeing a constant single ntsc field of latency. this guaranteed response is especially important for predictive tracking. just-in-time pixels is an attempt to compensate for rendering latency by rendering the pixels in a scanned display based on their position in the scan. marc olano jon cohen mark mine gary bishop from data to knowledge: implications of data mining joseph s. fulda dynamic real-time deformations using space & time adaptive sampling this paper presents a robust, adaptive method for animating dynamic visco-elastic deformable objects that provides a guaranteed frame rate. our approach uses a novel automatic _space and time adaptive_ level of detail technique, in combination with a large-displacement (green) strain tensor formulation. the body is partitioned in a non-nested multiresolution hierarchy of tetrahedral meshes. the local resolution is determined by a _quality condition_ that indicates where and when the resolution is too coarse. as the object moves and deforms, the sampling is refined to concentrate the computational load into the regions that deform the most. our model consists of a continuous differential equation that is solved using a local explicit finite element method. we demonstrate that our adaptive green strain tensor formulation suppresses unwanted artifacts in the dynamic behavior, compared to adaptive mass-spring and other adaptive approaches.
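as an illustrative aside, here is a minimal sketch of the large-displacement (green) strain tensor mentioned in the abstract above, computed for a single tetrahedron from its rest and deformed vertex positions; the paper's adaptive multiresolution finite element machinery is not reproduced, and the function names are ours.

```python
# minimal sketch: green (large-displacement) strain tensor for one tetrahedron.
# this only illustrates the strain measure named in the abstract; the adaptive
# space/time resolution scheme of the paper is not reproduced here.
import numpy as np

def deformation_gradient(rest, deformed):
    """rest, deformed: (4, 3) arrays of tetrahedron vertex positions."""
    dm = (rest[1:] - rest[0]).T          # 3x3 matrix of rest-state edge vectors
    ds = (deformed[1:] - deformed[0]).T  # 3x3 matrix of deformed edge vectors
    return ds @ np.linalg.inv(dm)        # F maps rest edges to deformed edges

def green_strain(f):
    """E = 1/2 (F^T F - I); zero for any rigid motion, unlike linearized strain."""
    return 0.5 * (f.T @ f - np.eye(3))

if __name__ == "__main__":
    rest = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
    # stretch by 20% along x, then rotate 90 degrees about z
    stretch = np.diag([1.2, 1.0, 1.0])
    rot = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
    deformed = rest @ (rot @ stretch).T
    e = green_strain(deformation_gradient(rest, deformed))
    print(np.round(e, 6))  # reports only the stretch; the rotation leaves E unchanged
```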
in particular, damped elastic vibration modes are shown to be nearly unchanged for several levels of refinement. results are presented in the context of a virtual reality system. the user interacts in real-time with the dynamic object through the control of a rigid tool, attached to a haptic device driven with forces derived from the method. gilles debunne mathieu desbrun marie-paule cani alan h. barr a prolog system for case-based classification (abstract) it is becoming apparent from the research of kolodner, schank, and others that case- based reasoning is an important tool for artificial intelligence. we have implemented a case-based system in prolog that classifies objects described by attribute-value pairs. it uses prolog's facilities to manage a case database and to encode features. the basis for classification is a "nearest neighbor" measure on a universal scale of similarity. the user specifies a scaling factor for naturally numeric data, thus allowing comparisons across dimensions. for non-numeric data, the user can specify numeric values on the standard scale for each value of the feature. for features that take on unordered values, the user can specify a standard difference to use in the case of inequality between values. cases can have missing values for any feature, a circumstance that arises often in real applications. in a comparison between cases where one has a missing value for a feature, a standard difference for that feature is used. the user can control the importance of features in classification by manipulating scale factors and the numbers assigned to non-numeric values. if the system misclassifies an object, it saves its description with the correct classification. as the user interacts with the program, new features can be introduced. the program has been tested successfully with numeric audiology data and with non-numeric descriptors. this program should make case-based classification more accessible to potential users. william w. mcmillan christopher j. gardiner parallel discrete event simulation: a modeling methodological perspective ernest h. page richard e. nance real-time hatching drawing surfaces using hatching strokes simultaneously conveys material, tone, and form. we present a real-time system for non-photorealistic rendering of hatching strokes over arbitrary surfaces. during an automatic preprocess, we construct a sequence of mipmapped hatch images corresponding to different tones, collectively called a _tonal art map_. strokes within the hatch images are scaled to attain appropriate stroke size and density at all resolutions, and are organized to maintain coherence across scales and tones. at runtime, hardware multitexturing blends the hatch images over the rendered faces to locally vary tone while maintaining both spatial and temporal coherence. to render strokes over arbitrary surfaces, we build a lapped texture parametrization where the overlapping patches align to a curvature-based direction field. we demonstrate hatching strokes over complex surfaces in a variety of styles. emil praun hugues hoppe matthew webb adam finkelstein the threshold of event simultaneity frederick wieland interactive image understanding cooperation between a human expert and an image processing system can give much better results than either the human or computer working alone. 
the computer must display geometrical information that exploits the perceptual characteristics of the human user, while the human must convey to the computer system ideas that can result in practical computation. we discuss new image understanding systems with human interfaces that support a powerful dialog about image features and characteristics. andrew j. hanson the future of visual interaction design? shannon ford frank m. marchak computing the separating surface for segmented data gregory m. nielson richard franke the design of digital filters using reduce manfred hollenhorst physically-based modeling: past, present, and future d. terzopoulos j. platt a. barr d. zeltzer a. witkin j. blinn modeling and rendering of metallic patinas julie dorsey pat hanrahan an enriched knowledge model for formal ontological analysis this paper presents and motivates an extended ontology knowledge model which explicitly represents semantic information about concepts. this knowledge model is grounded on the meta-properties of formal ontological analysis and it results from enriching the usual conceptual model with semantic information which precisely characterises the concept's properties and expected ambiguities, including which properties are prototypical of a concept and which are exceptional, the behaviour of properties over time and the degree of applicability of properties to subconcepts. this enriched conceptual model permits a precise characterisation of what is represented by class membership mechanisms and helps a knowledge engineer to determine, in a straightforward manner, the meta-properties holding for a concept. meta-properties are recognised to be the main tool for a formal ontological analysis that allows building ontologies with a clean and untangled taxonomic structure. moreover, this enriched semantics facilitates the development of reasoning mechanisms on the state of affairs that instantiates the ontologies. such reasoning mechanisms can be used in order to solve ambiguities that can arise when ontologies are integrated and one needs to reason with the integrated knowledge. valentina a. m. tamma trevor bench-capon adaptivity and learning in intelligent real-time systems jurgen lind christopher g. jung christian gerber cooperative plan identification: constructing concise and effective plan descriptions r. michael young a general framework for large scale systems development aleks o. göllu farokh h. eskafi combining hierarchical radiosity and discontinuity meshing dani lischinski filippo tampieri donald p. greenberg extensions of groups defined by power-commutator presentations s. p. glasby semiring-based constraint satisfaction and optimization we introduce a general framework for constraint satisfaction and optimization where classical csps, fuzzy csps, weighted csps, partial constraint satisfaction, and others can be easily cast. the framework is based on a semiring structure, where the set of the semiring specifies the values to be associated with each tuple of values of the variable domain, and the two semiring operations (+ and x) model constraint projection and combination respectively. local consistency algorithms, as usually used for classical csps, can be exploited in this general framework as well, provided that certain conditions on the semiring operations are satisfied.
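as an illustrative aside, a small sketch of the semiring idea described above: a semiring packages a value set with a combination operator (x) and a projection operator (+), and swapping the semiring switches between classical, fuzzy and weighted constraint solving. the class names and brute-force solver below are our own illustration, not the paper's notation.

```python
# minimal sketch of semiring-based soft constraints (illustrative only).
# "+" chooses among values (projection), "x" combines constraint values,
# as in the abstract; the concrete api below is our own.
from dataclasses import dataclass
from itertools import product
from typing import Any, Callable

@dataclass(frozen=True)
class Semiring:
    plus: Callable[[Any, Any], Any]   # projection / preference comparison
    times: Callable[[Any, Any], Any]  # constraint combination
    zero: Any                         # worst value
    one: Any                          # best / neutral value

# classical, fuzzy and weighted csps as semiring instances
BOOLEAN = Semiring(lambda a, b: a or b, lambda a, b: a and b, False, True)
FUZZY = Semiring(max, min, 0.0, 1.0)
WEIGHTED = Semiring(min, lambda a, b: a + b, float("inf"), 0.0)

def best_assignment(semiring, domains, constraints):
    """enumerate assignments, combine constraint values with 'times',
    and keep the 'plus'-best overall value (brute force, for illustration)."""
    variables = list(domains)
    best = semiring.zero
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        value = semiring.one
        for c in constraints:
            value = semiring.times(value, c(assignment))
        best = semiring.plus(best, value)
    return best

if __name__ == "__main__":
    domains = {"x": [0, 1, 2], "y": [0, 1, 2]}
    constraints = [
        lambda a: 1.0 if a["x"] == a["y"] else 0.0,  # unit cost for equal values
        lambda a: float(abs(a["x"] - a["y"])),       # cost grows with distance
    ]
    print(best_assignment(WEIGHTED, domains, constraints))  # -> 1.0
```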
we then show how this framework can be used to model both old and new constraint solving and optimization schemes, thus allowing one to both formally justify many informally taken choices in existing schemes, and to prove that local consistency techniques can be used also in newly defined schemes. stefano bistarelli ugo montanari francesca rossi an efficient collision detection algorithm using range data for walk-through systems sonou lee junhyeok heo kwangyun wohn texture mapping progressive meshes given an arbitrary mesh, we present a method to construct a progressive mesh (pm) such that all meshes in the pm sequence share a common texture parametrization. our method considers two important goals simultaneously. it minimizes texture stretch (small texture distances mapped onto large surface distances) to balance sampling rates over all locations and directions on the surface. it also minimizes texture deviation ("slippage" error based on parametric correspondence) to obtain accurate textured mesh approximations. the method begins by partitioning the mesh into charts using planarity and compactness heuristics. it creates a stretch-minimizing parametrization within each chart, and resizes the charts based on the resulting stretch. next, it simplifies the mesh while respecting the chart boundaries. the parametrization is re-optimized to reduce both stretch and deviation over the whole pm sequence. finally, the charts are packed into a texture atlas. we demonstrate using such atlases to sample color and normal maps over several models. pedro v. sander john snyder steven j. gortler hugues hoppe viable inference systems because uncertain and provisional reasoning is an inherent part of intelligent systems, a diverse (and competing) array of formalisms for handling uncertain inference has been developed in theory and practice. each of these formalisms tends to be argued and justified on its own ground, starting from assumptions which ensure its "correctness" or "optimality." it is argued here, however, that the key concept for an intelligent system is viability - it is as unproductive to seek "correctness" in open inference systems as it would be to prove the "optimality" of a biological organism. several inference formalisms are examined in the light of this criticism. the viability of systems based on these formalisms is seen to rest critically on their robustness to changes in their context. the assumptions which usually function to hold that context fixed must thus be critically examined. joseph deken high-level design using helix studies of architectures and performance trade-offs prior to implementation of an electronic design are critical to the success of the design. typically, software simulators are developed specifically for a single project and are discarded at the end of the project. the use of a general purpose behavioral simulation system can increase the productivity of such projects by 1) reducing the effort required to begin simulation, and 2) offering more capability than is typically available from a more specialized simulator. this paper discusses the use of a behavioral simulation system, helix, in the analysis of alternative network designs and protocols. david r. coelho interactive lens visualization techniques this paper describes new techniques for minimally immersive visualization of 3d scalar and vector fields, and visualization of document corpora.
in our glyph-based visualization system, the user interacts with the 3d volume of glyphs using a pair of button-enhanced 3d position and orientation trackers. the user may also examine the volume using an interactive lens, which is a rectangle that slices through the 3d volume and displays scalar information on its surface. a lens allows the display of scalar data in the 3d volume using a contour diagram, and a texture-based volume rendering. christopher d. shaw james a. hall david s. ebert d. aaron roberts geometric compression for interactive transmission olivier devillers pierre- marie gandoin the future of virtual reality: head mounted displays versus spatially immersive displays (panel) ed lantz a chinese-english microcomputer system computing in the people's republic of china has recently begun to gain momentum. because of china's large population, its computing requirements are of worldwide interest. this article discusses the background of chinese computing and the implementation of a chinese-english microcomputer system which is now being used in several places including the prc. n. p. archer m. w. l. chan s. j. huang r. t. liu computer rendering of stochastic models a recurrent problem in generating realistic pictures by computers is to represent natural irregular objects and phenomena without undue time or space overhead. we develop a new and powerful solution to this computer graphics problem by modeling objects as sample paths of stochastic processes. of particular interest are those stochastic processes which previously have been found to be useful models of the natural phenomena to be represented. one such model applicable to the representation of terrains, known as "fractional brownian motion," has been developed by mandelbrot. the value of a new approach to object modeling in computer graphics depends largely on the efficiency of the techniques used to implement the model. we introduce a new algorithm that computes a realistic, visually satisfactory approximation to fractional brownian motion in faster time than with exact calculations. a major advantage of this technique is that it allows us to compute the surface to arbitrary levels of details without increasing the database. thus objects with complex appearances can be displayed from a very small database. the character of the surface can be controlled by merely modifying a few parameters. a similar change allows complex motion to be created inexpensively. alain fournier don fussell loren carpenter mre: a flexible approach to multi-resolution modeling anand natrajan paul f. reynolds sudhir srinivasan the video "geodesics and waves" konrad polthier markus schmies martin steffens christian teitzel computer graphics: introduction adam lake the duck father tomoyuki harashima explanation-based learning: a survey of programs and perspectives explanation-based learning (ebl) is a technique by which an intelligent system can learn by observing examples. ebl systems are characterized by the ability to create justified generalizations from single training instances. they are also distinguished by their reliance on background knowledge of the domain under study. although ebl is usually viewed as a method for performing generalization, it can be viewed in other ways as well. in particular, ebl can be seen as a method that performs four different learning tasks: generalization, chunking, operationalization, and analogy. this paper provides a general introduction to the field of explanation-based learning. 
considerable emphasis is placed on showing how ebl combines the four learning tasks mentioned above. the paper begins with a presentation of an intuitive example of the ebl technique. subsequently ebl is placed in its historical context and the relation between ebl and other areas of machine learning is described. the major part of this paper is a survey of selected ebl programs, which have been chosen to show how ebl manifests each of the four learning tasks. attempts to formalize the ebl technique are also briefly discussed. the paper concludes with a discussion of the limitations of ebl and the major open questions in the field. thomas ellman body story briohny pogue awesim: the integrated simulation system a. alan b. pritsker jean j. o'reilly cmpack: a complete software system for autonomous legged soccer robots this paper describes a completely implemented, fully autonomous software system for soccer playing quadruped robots. the system includes real-time color vision, probabilistic localization, quadruped locomotion/motion, and a hierarchical behavior system. each component was based on well tested algorithms and approaches from other domains. our design exposed strengths and weaknesses in each component, and led to improvements and extensions that made them more capable in general, as well as better suited for our testing domain. integrating the components revealed design assumptions that were violated. we describe the problems that arose and how we addressed them. the integrated system was then used at the annual robocup robotic soccer competition where we placed third, losing only a single game. we reflect on how our system addressed its goals and what was learned through implementation and testing on real robots. scott lenser james bruce manuela veloso a survey of methods for eliciting the knowledge of experts robert r. hoffman a model for self-adaptation in a robot colony a model for self-adaptation in a colony of robots to changes occurring in the environment is presented. the model employs two levels of computations: one at a metalevel and the other at a lower level. the metalevel computations produce a learning behaviour in the robot which is responsible for the adaptation and those at the low-level control the robot motions. inductive learning is employed as the learning strategy. an instance of the 'producer-consumer' problem is presented in this context. t v d kumar n parameswaran visual simulation of smoke in this paper, we propose a new approach to numerical smoke simulation for computer graphics applications. the method proposed here exploits physics unique to smoke in order to design a numerical method that is both fast and efficient on the relatively coarse grids traditionally used in computer graphics applications (as compared to the much finer grids used in the computational fluid dynamics literature). we use the inviscid euler equations in our model, since they are usually more appropriate for gas modeling and less computationally intensive than the viscous navier-stokes equations used by others. in addition, we introduce a physically consistent vorticity confinement term to model the small scale rolling features characteristic of smoke that are absent on most coarse grid simulations. our model also correctly handles the interaction of smoke with moving objects. ronald fedkiw jos stam henrik wann jensen algebraic decomposition of regular curves s. arnborg h.
feng time-dependent visual adaptation for fast realistic image display human vision takes time to adapt to large changes in scene intensity, and these transient adjustments have a profound effect on visual appearance. this paper offers a new operator to include these appearance changes in animations or interactive real-time simulations, and to match a user's visual responses to those the user would experience in a real-world scene. large, abrupt changes in scene intensities can cause dramatic compression of visual responses, followed by a gradual recovery of normal vision. asymmetric mechanisms govern these time-dependent adjustments, and offer adaptation to increased light that is much more rapid than adjustment to darkness. we derive a new tone reproduction operator that simulates these mechanisms. the operator accepts a stream of scene intensity frames and creates a stream of color display images. all operator components are derived from published quantitative measurements from physiology, psychophysics, color science, and photography. kept intentionally simple to allow fast computation, the operator is meant for use with real-time walk-through renderings, high dynamic range video cameras, and other interactive applications. we demonstrate its performance on both synthetically generated and acquired "real-world" scenes with large dynamic variations of illumination and contrast. sumanta n. pattanaik jack tumblin hector yee donald p. greenberg an object-oriented framework for the integration of interactive animation techniques robert c. zeleznik d. brookshire conner matthias m. wloka daniel g. aliaga nathan t. huang philip m. hubbard brian knep henry kaufman john f. hughes andries van dam anytime coordination for progressive planning agents abdel-illah mouaddib temporal reasoning david p. miller collaboration between robotic agents at the smart office f. mizoguchi h. ohwada h. nishiyama h. hiraishi fast backface culling using normal masks hansong zhang kenneth e. hoff shape transformation for polyhedral objects james r. kent wayne e. carlson richard e. parent gertis: a dempster-shafer approach to diagnosing hierarchical hypotheses gertis---a prototype expert system---not only demonstrates the feasibility of applying the dempster-shafer-based reasoning model to diagnosing hierarchically related hypotheses, but also suggests ways to generate better explanations by using knowledge about the structure of the hypothesis space and knowledge about the intended effects of the rules. john yen hierarchical z-buffer visibility ned greene michael kass gavin miller digital halftones by dot diffusion this paper describes a technique for approximating real-valued pixels by two-valued pixels. the new method, called dot diffusion, appears to avoid some deficiencies of other commonly used techniques. it requires approximately the same total number of arithmetic operations as the floyd-steinberg method of adaptive grayscale, and it is well suited to parallel computation; but it requires more buffers and more complex program logic than other methods when implemented sequentially. a "smooth" variant of the method may prove to be useful in high-resolution printing. donald e. knuth methodology for the increased computational efficiency of discrete-event simulation in 3 dimensional space eugene p. paulo linda c. malone integrating tools and infrastructures for generic multi-agent systems in this paper, we present madkit/sedit, an agent infrastructure combined with a generic design tool for multi-agent systems.
this toolkit is based on a organizational metaphor to integrate highly heterogeneous agent systems. we explain the principles of madkit, the underlying agent platform, and show how it can integrate various agent architectures and provides structuration for multiple simultaneous systems and semantics. the architecture, based on a minimal agent runtime, agentified platform services and modular application host, is presented. the sedit design tool, built itself as a mas is also discussed. we present its key points in terms of multi-model support, and integration with the infrastructure, from design to maintenance. we illustrate our approach by discussing some consequences of this architecture, and describe our motivation for this design: integration and reuse, organizational patterns, and overall versatility. a summary is given of some key madkit-based applications to date. olivier gutknecht jacques ferber fabien michel impel brandon morse simulation of multivariate extreme values s. nadarajah faster 3d game graphics by not drawing what is not seen kenneth e. hoff improv: a system for scripting interactive actors in virtual worlds ken perlin athomas goldberg improving interaction with radiosity-based lighting simulation programs claude puech francois sillion christophe vedel multi-pass pipeline rendering: realism for dynamic environments paul j. diefenbach norman i. badler texture synthesis on surfaces many natural and man-made surface patterns are created by interactions between texture elements and surface geometry. we believe that the best way to create such patterns is to synthesize a texture directly on the surface of the model. given a texture sample in the form of an image, we create a similar texture over an irregular mesh hierarchy that has been placed on a given surface. our method draws upon texture synthesis methods that use image pyramids, and we use a mesh hierarchy to serve in place of such pyramids. first, we create a hierarchy of points from low to high density over a given surface, and we connect these points to form a hierarchy of meshes. next, the user specifies a vector field over the surface that indicates the orientation of the texture. the mesh vertices on the surface are then sorted in such a way that visiting the points in order will follow the vector field and will sweep across the surface from one end to the other. each point is then visited in turn to determine its color. the color of a particular point is found by examining the color of neighboring points and finding the best match to a similar pixel neighborhood in the given texture sample. the color assignment is done in a coarse-to-fine manner using the mesh hierarchy. a texture created this way fits the surface naturally and seamlessly. greg turk a negotiation shell mihai barbuceanu the creation of new kinds of interactive environments in art, education, and entertainment (panel session) stephen wilson david backer myron krueger peter richards sonia sheridan david ucko overview of three-dimensional computer graphics donald h. house a two-and-a-half-d motion-blur algorithm nelson l. max douglas m. lerner shock resistant time warp alois ferscha james johnson a continuous process simulation using gpss this paper describes the methodology and results of a simulation of an oil purification process used by a food processing company in toronto, ontario. 
the process involves reacting a mixture of edible oils with a caustic solution to precipitate out various free fatty acid and gum components in the oils in the form of soap solids. the process is discrete in that batches of crude oil are processed individually but the processing of each batch is done on a continuous basis. the simulation model developed for this process simulates one crude oil batch at a time. thus the model simulates an essentially continuous operation. the success of gpss, a discrete event simulation language, in simulating this continuous process forms the basis of this paper. robert greer lavery trichromatic approximation for computer graphics illumination models carlos f. borges shape extraction for a polygon mesh tong-wing woon xuetao li tiow-seng tan a distributed real-time knowledge base for teams of autonomous systems in manufacturing environments johann schweiger andreas koller frisk spider harald zwart alastair hearsum matt taylor lee houlker robin carlisle rachel mills julia fetterman sally mattinson efficient programming in maple: a case study bruno salvy rendering volumetric data using sticks representation scheme claudio montani roberto scopigno motion compensated compression of computer animation frames brian k. guenter hee cheol yun russell m. mersereau grape: an environment to build display processes tom nadas alain fournier scripting distributed agents wilfred c. jamison doug lea sticky splines: definition and manipulation of spline structures with maintained topological relations this paper describes an augmentation to the spline concept to account for topological relations between different spline curves. these topological relations include incidence relations, constraining the extremes of spline curves to other spline curves, and also more general geometric relations, for example, involving the tangents of spline curves in their extremes. to maintain these incidence relations, some spline curves may have to be transformed (translated, rotated, scaled), or even deformed (i.e., the shape of the curve may change) as a result of modifying other spline curves. a data structure and algorithms are given to implement the propagation of these transformations and deformations. based on the augmented spline concept, to be called sticky splines, both a script system to represent spline structures and an interactive system for editing drawings while automatically maintaining their topological structure are presented. c. w. a. m. van overveld marie luce viaud interactive texture mapping jerome maillot hussein yahia anne verroust verification & validation in military simulations dean s. hartley a conceptual framework for simulation experiment design and analysis yu-hui tao barry l. nelson variance based classifier comparison in text categorization (poster session) text categorization is one of the key functions for utilizing vast amounts of documents. it can be seen as a classification problem, which has been studied in pattern recognition and machine learning fields for a long time and several classification methods have been developed such as statistical classification, decision tree, support vector machines and so on. many researchers applied those classification methods to text categorization and reported their performance (e.g., decision tree[3], bayes classifier[2], support vector machine[1]).
yang conducted a comprehensive study comparing text categorization methods and reported that k nearest neighbor and support vector machines work well for text categorization[4]. in previous studies, classification methods were usually compared using a single pair of training and test data. however, a classification method with a more complex family of classifiers requires more training data, and small training data may result in deriving an unreliable classifier; that is, the performance of the derived classifier varies much depending on the training data. therefore, we need to take the size of training data into account when comparing and selecting a classification method. in this paper, we discuss how to select a classifier from those derived by various classification methods and how the size of training data affects the performance of the derived classifier. in order to evaluate the reliability of a classification method, we consider the variance of the accuracy of the derived classifier. we first construct a statistical model. in text categorization, each document is usually represented with a feature vector that consists of weighted frequencies of terms. in the vector space model, a document is a point in a high-dimensional feature space and a classifier separates the feature space into subspaces, each of which is labeled with a category. atsuhiro takasu kenro aihara on the suitability of market-based mechanisms for telematics applications christian gerber christian ruß gero vierke eps ii: estate planning with prototypes d. a. schlobohm l. t. mccarty time management, simultaneity and time-critical computation in interactive unsteady visualization environments steve bryson sandy johan three sources of simulation inaccuracy (and how to overcome them) stewart robinson imt-rb/ed: a system based on multi-reorganization (abstract) with the advance of ai and the coming of the idea of intelligent mt systems, several important practical and theoretical advances have been made in the field of machine translation. recently a system called imt-rb/ec is being implemented in our institute. it is the grammar rule based system of imt/ec (intelligent english-chinese translation system) and is a relatively independent and complete subsystem. as a rule based system, many ai techniques have been used in it. this paper presents a method for the multi-reorganization of the rule base system and the general principles of imt-rb/ec. this system is composed of six parts: rule base, parsing mechanism, rule accessing mechanism and man-machine interfaces. a transformation rule is composed of five parts: source expression, target expression, condition, context change and chinese translation. the knowledge and the algorithms are separated, and the rules are divided into layers. we make the system a subset of grammar rules; the learning mechanism can add new rules, delete unsatisfied rules, or change a rule according to the running result or the knowledge of an expert. with different attributes (dynamic or quiescent factors), we realized the multi-reorganization. the result of multi-reorganization will be used as the heuristic information. the performance of the system is greatly improved by applying these techniques. ye yiming chen zhao-xiong zhang xiong gao qinshi tutorial: artificial intelligence and simulation j. rothenberg kards: hybrid knowledge acquisition for a security risk model elizabeth gurrie joachim diederich alan tickle alison anderson on the efficiency of the splitting and roulette approach for sensitivity analysis viatcheslav b.
melas building shallow expert systems using back-propagation learning y. cheng integrated systems based on behaviors rodney a. brooks representing developing legal doctrine a. v. d. l. gardner in-house vs. off-the-shelf graphic software tools in this article, we discuss the software tools that rhythm & hues uses to create such award-winning effects as those featured in _babe,_ the coca-cola polar bears commercials and ride films such as _seafari._ the primary topic for discussion is the question of internal "in-house" proprietary software versus external "off-the-shelf" commercial software, as it relates to our company. to help explain what we are doing today, we will look at the environment when rhythm & hues was founded, how the industry has changed and the challenges we face today. since this is a complex subject, there are varying opinions about the analysis of the situation within the company, so the description below should not be viewed as fact, but as a presentation of the authors' views. linda martino paul allen newell reconfigurable physical agents masahiro fujita hiroaki kitano koji kageyama application of modern software techniques to modeling and simulation it is commonly agreed that software developments tend to be high risk activities; simulation is recognized as being even more "exciting". great emphasis is being placed on developing methodologies which lower the risk of software development. since a major portion of simulation activity is software oriented, it is natural to look to these modern software methodologies for solutions applicable to the modeling and simulation community. the total solution to current dilemmas is many years away. however, significant progress is being made to develop a comprehensive software development methodology that is readily extendable to simulation activities. the major additional burden of modeling and simulation is the handling of the system modeling part of the process. this paper introduces an integrated software methodology (isomet) currently in use which has been successfully applied to simulation projects. modifications to the methodology to accommodate specialized modeling considerations are discussed. ronald m. huhn edward r. comer a simple model of ocean waves alain fournier william t. reeves low cost illumination computation using an approximation of light wavefronts we present an efficient method to simulate the propagation of wavefronts and approximate the behavior of light in an environment of freeform surfaces. the proposed method can emulate the behavior of a wavefront emanating from a point or spherical light source, and possibly refracted and/or reflected from a freeform surface. moreover, it allows one to consider and to render images with extreme illumination conditions such as caustics. the proposed method can be embedded into rendering schemes that are based on scan conversion. using a direct freeform surface z-buffer renderer, we also demonstrate the use of the wavefront approximation in illumination computation. gershon elber an investigation of out-of-core parallel discrete-event simulation anna l. poplawski david m. nicol massively parallel and distributed simulation of a class of discrete event systems: a different perspective in this paper we propose a new approach to parallel and distributed simulation of discrete event systems. most parallel and distributed discrete event simulation algorithms are concerned with the simulation of one "large" discrete event system.
in this case the computational intensity is due to the size and complexity of the simulated system. in contrast, we are interested in simulating a "large" number of "medium sized" systems. these are variants of a "nominal system" with different system parameter values or operation policies. the computational intensity in our case is due to the "large" number of simulated variants. many simulation projects such as factor screening, performance modeling, and optimization require system performance evaluations at many parameter values; and others, we believe, could significantly benefit from them. there is considerable work in the literature on stochastic coupling of trajectories of parametric families of stochastic processes. our approach can be viewed as the simulation of the coupled trajectories. we use a single clock mechanism that drives all trajectories simultaneously; hence the approach is called single clock multiple system (scms) simulation. the single clock synchronizes all trajectories such that the "same" event occurs at the "same" time at all systems. this synchronization is the basis of our parallel and distributed algorithms. we focus on a particular implementation of the scms simulation using the so-called standard clock (sc) technique and also on the massively parallel implementation of the sc algorithms on the simd connection machine. orders of magnitude of speedup are possible. furthermore, the possibility of concurrent performance evaluation and comparison at many system parameter values offers new and significant opportunities for performance optimization. pirooz vakili models (fractal and otherwise) for perception and generation of images perception is best understood as the interpretation of sensory data in terms of models of how the world is structured and how it behaves; these models are exactly those that are most useful for generation of computer images. by recognizing and exploiting this commonality we have been able to make surprising progress in both fields. alex p. pentland providing a low latency user experience in a high latency application brook conner loring holden a satisficing multiagent plan coordinating algorithm for dynamic domains pradeep m. pappachan edmund h. durfee from contours to surfaces: testbed and initial results this paper is concerned with the problem of reconstructing the surface of three-dimensional objects, given a collection of planar contours representing cross-sections through the objects. this is an important problem, with applications in clinical medicine, bio-medical research and instruction, and industrial inspection. current solutions to this problem have raised interesting theoretical questions about search techniques and the exploitation of domain-specific aspects of such search problems. in this paper, we survey known reconstruction techniques, describe a testbed for evaluating these techniques and present an improvement on the simple divide-and-conquer method analyzed by fuchs, kedem and uselton [5]. kenneth r. sloan james painter book reviews karen t. sutherland constructive modeling of frep solids using spline volumes we present an approach to constructive modeling of frep solids [2] defined by real-valued functions using 4d uniform rational cubic b-spline volumes as primitives. while the first three coordinates are used to represent the spatial component of the volume to be sculpted, the fourth coordinate is used as a scalar, which corresponds to a function value or a volume density.
thus, the shape can be manipulated by changing the scalar control coefficients of the spline volume. this modeling process is interactive as the isosurface can be polygonized and visualized in real time. the distance property we obtain, combined with the properties of the spline volumes, allows us to use the resulting 3d solid as a leaf of a constructive tree and to apply to it set-theoretic, blending and other operations defined using r-functions [2]. additional deformations can be achieved by moving arbitrary points in the coordinate space and applying space mapping at any level of the constructive tree. the final constructive solid is defined by a single real-valued function evaluated by the tree traversing procedure. schmitt benjamin pasko alexander schlick christophe wet and messy fur armin bruderlin characterization of static 3d graphics workloads tzi-cker chiueh wei-jen lin frame-to-frame coherence and the hidden surface computation: constraints for a convex world h. hubschman s. w. zucker optimal depth buffer for low-cost graphics hardware eugene lapidous guofang jiao a viewer for mathematical structures and surfaces in 3d david p. dobkin stephen c. north nathaniel j. thurston moving objects in space: exploiting proprioception in virtual-environment interaction mark r. mine frederick p. brooks carlo h. sequin selecting text spans for document summaries: heuristics and metrics vibhu mittal mark kantrowitz jade goldstein jaime carbonell confidence intervals for univariate discrete-event simulation output using the kalman filter randall b. howard mark a. gallagher kenneth w. bauer peter s. maybeck simulation of natural scenes using textured quadric surfaces because of the high complexity of the real world, realistic simulation of natural scenes is very costly in computation. the topographical subtlety of common natural features such as trees and clouds remains a stumbling block to cost-effective computer modeling. a new scene model, composed of quadric surfaces bounded with planes and overlaid with texturing, provides an efficient and effective means of representing a wide range of natural features. the new model provides a compact and functional data base which minimizes the number of scene elements. efficient hidden surface algorithms for quadric surfaces bounded by planes are included. a mathematical texturing function represents natural surface detail in a statistical manner. techniques have been developed to simulate natural scenes with the artistic efficiency of an impressionist painter. geoffrey y. gardner gpss/vi duane ball relational learning of pattern-match rules for information extraction mary elaine califf raymond j. mooney pen-and-ink rendering in volume visualisation s. m. f. treavett m. chen cg crowds: the emergence of the digital extra juan buhler jonathan gibbs christophe hery dale mcbeath saty raghavachary seamless texture mapping of subdivision surfaces by model pelting and texture blending subdivision surfaces solve numerous problems related to the geometry of character and animation models. however, unlike on parametrised surfaces there is no natural choice of texture coordinates on subdivision surfaces. existing algorithms for generating texture coordinates on non-parametrised surfaces often find solutions that are locally acceptable but globally are unsuitable for use by artists wishing to paint textures.
in addition, for topological reasons there is not necessarily any choice of assignment of texture coordinates to control points that can satisfactorily be interpolated over the entire surface. we introduce a technique, pelting, for finding both optimal and intuitive texture mapping over almost all of an entire subdivision surface and then show how to combine multiple texture mappings together to produce a seamless result. dan piponi george borshukov the making of black-hole and nebula clouds for the motion picture "sphere" with volumetric rendering and the f-rep of solids gokhan kisacikoglu biometric identification anil jain lin hong sharath pankanti computer typesetting at a university as text processing becomes more widespread at a university, users become intrigued with the possibilities of making publishing easier, quicker, and less costly. authors, administrative offices, editors of newsletters, and even students creating attractive resumes want to use computerized typesetting. not all universities can justify buying their own typesetting equipment, but files can be created on a university computer and sent to a commercial typesetter for processing. such a system will be discussed. hannah kaufman continuous anti-aliased rotation and zoom of raster images raster graphics images are difficult to smoothly rotate and zoom because of geometric digitization error. a new algorithm is presented for continuous rotation and zoom, free from the disturbing aliasing artifacts introduced by traditional methods. applications include smooth animation. no matrix multiplication of pixel coordinates is executed. instead row and column parallel operations which resemble local digital filters are used. this suggests real time implementation with simple hardware. anti-aliasing is inherent in the algorithm which operates solely on pixel data, not the underlying geometric structures whose images the pixels may depict. zoom magnification is achieved without replicating pixels and is easily attained for any rational scale factor including but not restricted to the integer values which most existing commercial raster graphics systems use. the algorithm is based on a digitized code for lines on rasters, generalized to an interpolation scheme capable of executing all linear geometric transformations. samples of images which have been rotated and zoomed by a software implementation of the algorithm are presented. carl f. r. weiman comparison of second-order polynomial model selection methods: an experimental survey grace w. rumantir rendering on a budget: a framework for time-critical rendering we present a technique for optimizing the rendering of high depth-complexity scenes. prioritized-layered projection (plp) does this by rendering an estimation of the visible set for each frame. the novelty in our work lies in the fact that we do not explicitly compute visible sets. instead, our work is based on computing on demand a priority order for the polygons that maximizes the likelihood of rendering visible polygons before occluded ones for any given scene. given a fixed budget, e.g. time or number of triangles, our rendering algorithm makes sure to render geometry respecting the computed priority. there are two main steps to our technique: (1) an occupancy-based tessellation of space; and (2) a solidity-based traversal algorithm. plp works by computing an occupancy-based tessellation of space, which tends to have smaller cells where there are more geometric primitives, e.g., polygons.
in this spatial tessellation, each cell is assigned a solidity value, which is directly proportional to its likelihood of occluding other cells. in its simplest form, a cell's solidity value is directly proportional to the number of polygons contained within it. during our traversal algorithm, cells are marked for projection, and the geometric primitives contained within them are actually rendered. the traversal algorithm makes use of the cells' solidity and other view-dependent information to determine the ordering in which to project cells. by tailoring the traversal algorithm to the occupancy-based tessellation, we can achieve very good frame rates with low preprocessing and rendering costs. in this paper, we describe our technique and its implementation in detail. we also provide experimental evidence of its performance and briefly discuss extensions of our algorithm. james t. klosowski cláudio t. silva the attack of the autistic peripherals dave taylor hypermedia as integration: recollections, reflections, and exhortations randall h. trigg proving temporal properties of hybrid systems sanjai narain jeff rothenberg anti-aliasing through the use of coordinate transformations the use of the point-line distance in evaluating the 2-dimensional anti-aliasing convolution is studied. we derive transformations of the point-spread function (psf) that give the effective convolution in terms of the point-line distance when the class of object space primitives is limited to lines and polygons. because the quality of filtering is embedded in a table indexed by the point-line distance, this approach allows one to use arbitrarily complex psf's; only the width, and not the shape, of the psf affects the amount of computation. we apply the cordic algorithm to point-line distance evaluation, and show its merits. also, we show the more standard use of the cordic algorithm for coordinate rotation, polar-to-rectangular and rectangular-to-polar conversion, and calculating the norm of a vector. rounded end points can be achieved by using the point-segment distance; computational methods are given, including a cordic implementation. the cordic algorithms for the aforementioned geometric operations are prime candidates for vlsi implementation because of their inherent parallel/pipeline nature. kenneth turkowski the split-up system: integrating neural networks and rule-based reasoning in the legal domain john zeleznikow andrew stranieri josie: an integration of specialized representation and reasoning tools robert nado jeffrey van baalen richard fikes integrating perception, language-handling, learning and planning in the childlike system ganesh mani leonard uhr links syed s. ali an efficient algorithm for hidden surface removal k. mulmuley deliberate evolution in multi-agent systems (extended abstract) frances m. t. brazier catholijn m. jonker jan treur niek j. e. wijngaards developer's choice in the legal domain: the sisyphean journey with dbr or down hill with rules (a working paper for the case-rules panel at the third international conference of artificial intelligence and law) donald h. berman hide-and-seek: effective use of memory in perception/action systems glenn s. wasson gabriel j. ferrer worthy n. martin simulation model design in physical environments volker brauer rule-based machine learning of spatial data concepts steve stearns daniel c. st. clair the rhet system james f.
allen the accumulation buffer: hardware support for high-quality rendering paul haeberli kurt akeley cellular texture generation kurt w. fleischer david h. laidlaw bena l. currin alan h. barr robust mesh watermarking emil praun hugues hoppe adam finkelstein levels of reasoning as the basis for a formalisation of argumentation andrew stranieri john zeleznikow proteus* - adaptive polling system for proactive management of atm networks using collaborative intelligent agents jide odubiyi george meekins song huang tracy yin un temps pour elle bruno follet a temporal connectionist approach to natural language christian jacquemin fundamentals of digital simulation modeling this paper and the tutorial session with which it is associated treat the fundamental concepts of digital simulation. the topics discussed include system modeling, simulation models and their advantages and disadvantages relative to mathematical models, the development of simulation and current applications, the role of simulation modeling in systems analysis and simulation languages. the paper and the tutorial are presented at a level which requires no previous exposure to digital simulation. however, familiarity with the fundamentals of probability, probability distributions and inferential statistics will facilitate the participant's understanding of the material presented. j. w. schmidt computational experience with the batch means method christos alexopoulos george s. fishman andrew f. seila karen and jennifer stephen shearer nicole berger progressive meshes hugues hoppe explorations of new visual systems kostas terzidis achieving color uniformity across multi-projector displays aditi majumder zhu he herman towles greg welch planar 2-pass texture mapping and warping alvy ray smith size preserving pattern mapping yair kurzion torsten möller roni yagel annie: a simulated neural network for empirical studies and application prototyping wendy l. huxhold troy f. henson j. dan bowman connected component labeling using quadtrees hanan samet an extended transformation approach to inductive logic programming inductive logic programming (ilp) is concerned with learning relational descriptions that typically have the form of logic programs. in a transformation approach, an ilp task is transformed into an equivalent learning task in a different representation formalism. propositionalization is a particular transformation method, in which the ilp task is compiled to an attribute-value learning task. the main restriction of propositionalization methods such as linus is that they are unable to deal with nondeterminate local variables in the body of hypothesis clauses. in this paper we show how this limitation can be overcome by systematic first-order feature construction using a particular individual-centered feature bias. the approach can be applied in any domain where there is a clear notion of individual. we also show how to improve upon exhaustive first-order feature construction by using a relevancy filter. the proposed approach is illustrated on the "trains" and "mutagenesis" ilp domains. nada lavrac peter a. flach distribution selection and validation stephen g. vincent w. david kelton putting the expert in charge: graphical knowledge acquisition for fault diagnosis and repair l. l. rodi j. a. pierce r. e. dalton orientable textures for image-based pen-and-ink illustration michael p. salisbury michael t. wong john f. hughes david h.
salesin the cave: audio visual experience automatic virtual environment carolina cruz-neira daniel j. sandin thomas a. defanti robert v. kenyon john c. hart introduction to processmodel and processmodel 9000 bruce d. gladwin charles r. harrell simplification and optimization transformations of chains of recurrences eugene v. zima session 9a: knowledge-based software engineering d. r. barstow recognizing objects in range images and finding their position in space we present a method for recognizing polyhedral objects from range images. an object is said to be recognized as one of the models of a library of object models when many features of the model can be made to match the features of the observed object by the same rotation-translation transformation (the object pose). in the proposed approach, the number of considered pairs of image and model features is reduced by selecting at random only a few of all the possible image features and matching them to appropriate model features. the rotation and translation required for each match are computed, and a robust lms (least median of squares) method is applied to determine clusters in translation and rotation spaces. the validity of the object pose suggested by the clusters is verified by a similarity measure which evaluates how well a model in the suggested pose would fit the original range image. the pose estimation and verification are performed for all models in the model library. the recognized model is the model which yields the smallest value of the similarity measure, and the pose of the object is found in the process. jun ohya daniel dementhon larry s. davis linking simulation model specification and parallel execution through unity marc abrams ernest h. page richard e. nance dart: expert systems for automated computer fault diagnosis in anticipation of computer faults, most manufacturers prepare specialized diagnostic aids that allow field engineers without complete knowledge of a system to diagnose the majority of its failures. unfortunately, these diagnostics are not infallible for they do not take into account every fault and combination of faults for every possible system configuration. thus, in some cases, it is necessary to call in someone with expert knowledge about the design of the system. these experts are expensive and often not immediately available, and there is inevitably a delay and a loss of working time to the system users. dart is a joint project of the heuristic programming project and ibm that explores the application of artificial intelligence techniques to the diagnosis of computer faults. the primary goal of the dart project is to develop programs that capture the special design knowledge and diagnostic abilities of these experts and to make them available to field engineers. the practical goal is the construction of an automated diagnostician capable of pinpointing the functional units responsible for observed malfunctions in arbitrary system configurations. michael genesereth james s. bennett clifford r. hollander checkpoint and recovery methods in the parasol simulation system edward mascarenhas felipe knop reuben pasquini vernon rego visualizing volume data using physical models david r. nadeau michael j. bailey texture synthesis over arbitrary manifold surfaces algorithms exist for synthesizing a wide variety of textures over rectangular domains. however, it remains difficult to synthesize general textures over arbitrary manifold surfaces.
in this paper, we present a solution to this problem for surfaces defined by dense polygon meshes. our solution extends wei and levoy's texture synthesis method [25] by generalizing their definition of search neighborhoods. for each mesh vertex, we establish a local parameterization surrounding the vertex, use this parameterization to create a small rectangular neighborhood with the vertex at its center, and search a sample texture for similar neighborhoods. our algorithm requires as input only a sample texture and a target model. notably, it does not require specification of a global tangent vector field; it computes one as it goes - either randomly or via a relaxation process. despite this, the synthesized texture contains no discontinuities, exhibits low distortion, and is perceived to be similar to the sample texture. we demonstrate that our solution is robust and is applicable to a wide range of textures. li-yi wei marc levoy simulation for decision making: an introduction a. thesen l. e. travis advanced methods for simulation output analysis christos alexopoulos accelerating 3d convolution using graphics hardware (case study) many volume filtering operations used for image enhancement, data processing or feature detection can be written in terms of three-dimensional convolutions. it is not possible to yield interactive frame rates on today's hardware when applying such convolutions on volume data using software filter routines. as modern graphics workstations have the ability to render two-dimensional convolved images to the frame buffer, this feature can be used to accelerate the process significantly. this way generic 3d convolution can be added as a powerful tool in interactive volume visualization toolkits. matthias hopf thomas ertl online radiosity in interactive virtual reality applications frank schöffel kansas state robotics frank blecha tim beese damon kuntz jonathan cameron david sexton david gustafson the system dynamics approach to analysis of complex industrial and management systems the tutorial is designed to expand a person's ability to understand and to model complex industrial and management systems. modeling of physical systems such as aircraft flight or engineering design structures is not addressed. the system dynamics approach to examining, conceptualizing, and modeling complex systems is emphasized. complex managerial and social systems are of special interest. the informational feedback characteristics of these systems are discussed and methods to focus on feedback structures are presented. special attention is given to the dynamic interaction among structure, policy, and time delays in decisions and actions that determine system behavior and performance. thomas d. clark interactive display of large-scale nurbs models we present serial and parallel algorithms for interactive rendering of large scale and complex nurbs models on current graphics systems. the algorithms tessellate the nurbs surfaces into triangles and render them using triangle rendering engines. the main characteristics of the algorithms are improved polygonization algorithms, exploitation of spatial and temporal coherence and back-patch culling. polygonization anomalies like cracks and angularities are avoided. we analyze a number of issues in parallelization of these techniques, as well. the algorithms work well in practice and are able to display models consisting of thousands of surfaces at interactive frame rates on the highly parallel graphics system, pixel-planes 5.
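as a concrete (and deliberately simplified) illustration of the spline-to-triangles step in the nurbs-rendering entry just described: the sketch below uniformly tessellates a single bicubic bezier patch and discards triangles facing away from an assumed view direction. it is not the algorithm of that paper, which adapts the tessellation and culls whole back-facing patches; the control-point layout, the resolution n and the view vector are assumptions.

import numpy as np

def bernstein3(t):
    # the four cubic bernstein basis values at parameter t
    return np.array([(1 - t) ** 3, 3 * t * (1 - t) ** 2, 3 * t ** 2 * (1 - t), t ** 3])

def eval_patch(P, u, v):
    # tensor-product evaluation: sum_i sum_j b_i(u) b_j(v) P[i, j]
    return np.einsum('i,j,ijk->k', bernstein3(u), bernstein3(v), P)

def tessellate(P, n=16, view=np.array([0.0, 0.0, 1.0])):
    # P is a 4x4x3 array of control points; returns a list of front-facing triangles
    us = np.linspace(0.0, 1.0, n + 1)
    grid = np.array([[eval_patch(P, u, v) for v in us] for u in us])
    triangles = []
    for i in range(n):
        for j in range(n):
            a, b, c, d = grid[i, j], grid[i + 1, j], grid[i + 1, j + 1], grid[i, j + 1]
            for tri in ((a, b, c), (a, c, d)):
                normal = np.cross(tri[1] - tri[0], tri[2] - tri[0])
                if np.dot(normal, view) >= 0.0:  # crude per-triangle back-face test
                    triangles.append(tri)
    return triangles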
subodh kumar dinesh manocha anselmo lastra global tele-immersion at the electronic visualization laboratory jason leigh andrew johnson thomas defanti maxine brown samroeng thongrong image-based objects manuel m. oliveira gary bishop multi-resolution multi-field ray tracing: a mathematical overview a rigorous mathematical review of ray tracing is presented. the concept of a generic voxel decoder acting on flexible voxel formats is introduced. the necessity of interpolating opacity weighted colors is proved, using a new definition of the blending process in terms of functional integrals. the continuum limit of the discrete opacity accumulation formula is presented, and its convexity properties are investigated. the issues pertaining to interpolation/classification order are discussed. the lighting equation is expressed in terms of opacity weighted colors. the multi-resolution (along the ray) correction of the opacity-weighted color is derived. the mathematics of filtering on the image plane are studied, and an upper limit of the local pixel size on the image plane is obtained. interpolation of pixel values on the image plane is shown to be inequivalent to blending of interpolated samples. c. gasparakis tracking text in mixed-mode documents j patrick bixler the versatility of color mapping extracting information from large amounts of data by using tables of numbers is difficult. often, such data can be presented more effectively with graphics. the reduction in the cost of memory has allowed more powerful display systems to provide for the simultaneous display of hundreds, thousands, and even millions of colors. effective and efficient manipulation of the colors in the display system is necessary to manage the use of such a large number of colors. these extended color capabilities can also be used to enrich the understanding of presentations of complex data sets. applications which previously might have required the user to mentally correlate several displays can now display the same information in a single image with a corresponding increase in user understanding and accuracy of interpretation. samuel p. uselton mark e. lee randy a. brown on the partitionability of hierarchical radiosity the hierarchical radiosity algorithm (hra) is one of the most efficient sequential algorithms for physically based rendering. unfortunately, it is hard to implement in parallel. there exist fairly efficient shared-memory implementations but things get worse in a distributed memory (dm) environment. in this paper we examine the structure of the hra in a graph partitioning setting. various measurements performed on the task access graph of the hra indicate the existence of several bottlenecks in a potential dm implementation. we compare "optimal" partitioning results obtained by the partitioning software metis with a trivial and a spatial partitioning algorithm, and show that the spatial partitioning copes with most of the bottlenecks well. robert garmann towards interactive bump mapping with anisotropic shift-variant brdfs in this paper a technique is presented that combines interactive hardware accelerated bump mapping with shift-variant anisotropic reflectance models. we show an evolutionary path: how some simpler reflectance models can be rendered at interactive rates on current low-end graphics hardware, and how features from future graphics hardware can be exploited for more complex models.
we show how our method can be applied to some well known reflectance models, namely the banks model, ward's model, and an anisotropic version of the blinn-phong model, but it is not limited to these models. furthermore, we take a close look at the necessary capabilities of the graphics hardware, identify problems with current hardware, and discuss possible enhancements. jan kautz hans-peter seidel new mathematical morphology operators and their applications george meghabghab abe kandel the detailed semantics of graphics input devices the concept of virtual input devices, enunciated by wallace, has been the accepted basis for producing device-independent interactive graphics systems. it was used by gspc for the core system, and it underlies the draft international standard gks. during the recently concluded technical review of gks, the input facilities became a bone of contention. the discussions revealed many inadequacies in the virtual input device concept, and were finally resolved using a refined and extended model of input, which is presented here by some of the participants in the discussions. examples are included, showing how the gks facilities derive from the model, and the core's "stroke" device is used to show how the model controls future extensions to gks. the model is also used to describe the other differences between the input facilities of the core system and gks. david s. h. rosenthal james c. michener gunther pfaff rens kessener malcolm sabin a device-independent network graphics system the design and implementation of a basic graphics system for a heterogeneous network environment is described. the design has been influenced by the siggraph core system, gks, and proposals being considered by the ansi technical committee on computer graphics programming languages. it permits hierarchical object definition, direct and indirect attribute specification, screen window management and complex styles of interaction. important parts of the implementation include a device-independent database for graphical objects, a workstation driver which produces device code, and a device kernel which manages the display list. problems relating to device independence and network partitioning are discussed. deborah u. cahn albert c. yen a retrospective (panel): six perennial issues in computer graphics graphics has been an industry for more than 15 years. some workers trace its origins almost )0 years. the dramatic gains in silicon technology along with more highly developed understanding of the mathematics of graphics have transformed the architecture of computer graphic systems and produced a bewildering array of products and services. this retrospective panel will try to put several key things in perspective. dr. alan kay, recently of atari computer and currently an apple fellow, will characterize the role of graphics in the overall world of computing and information processing. carl machover of machover associates will trace the development of display technology and its employment in computer graphics systems. dr. david evans, chairman of the board of evans and sutherland, will create the retrospective on our ability to produce realism in imagery. in turn dr. robert sproull, of sutherland, sproull and associates, will trace the development of transformations that have found their way into silicon technology. dr. james foley, of george washington university, will track our progress in the technology of interaction, while dr.
robin forrest of the university of east anglia will trace geometric modeling and dr. ed catmull, director of development at lucasfilm, will put our progress in animation in perspective. robert m. dunn minimum cost adaptive synchronization: experiments with the parasol system we present a novel adaptive synchronization algorithm, called the minimum average cost (mac) algorithm, in the context of the parasol parallel simulation system. parasol is a multithreaded system for parallel simulation on shared- and distributed-memory environments, designed to support domain-specific simulation object libraries. the proposed mac algorithm is based on minimizing the cost of synchronization delay and rollback at a process, whenever its simulation driver must decide whether to either proceed optimistically or to delay processing. in the former case the risk is rollback cost, in the event of a straggler's arrival. in the latter case the risk is unnecessary delay, in the event a latecomer is not a straggler. in addition to the mac algorithm and an optimal delay computation model, we report on some early experiments comparing the performance of mac-based adaptive synchronization to optimistic synchronization. edward mascarenhas felipe knop reuben pasquini vernon rego an inexpensive scheme for calibration of a colour monitor in terms of cie standard coordinates the commission internationale de l'eclairage system of colorimetry is a method of measuring colours that has been standardized, and is widely used by industries involved with colour. knowing the cie coordinates of a colour allows it to be reproduced easily and exactly in many different media. for this reason graphics installations which utilize colour extensively ought to have the capability of knowing the cie coordinates of displayed colours, and of displaying colours of given cie coordinates. such a capability requires a function which transforms video monitor gun voltages (rgb colour space) into cie coordinates (xyz colour space), and vice versa. the function incorporates certain monitor parameters. the purpose of this paper is to demonstrate the form that such a function takes, and to show how the necessary monitor parameters can be measured using little more than a simple light meter. because space is limited, and because each user is likely to implement the calibration differently, few technical details are given, but principles and methods are discussed in sufficient depth to allow the full use of the system. in addition, several visual checks which can be used for quick verification of the integrity of the calibration are described. the paper begins with an overview of the cie system of colorimetry. it continues with a general discussion of transformations from rgb colour space to xyz colour space, after which a detailed step-by-step procedure for monitor calibration is presented. william b. cowan a general two-pass method integrating specular and diffuse reflection f. sillion c. puech the back system - an overview christof peltason a layered approach for an autonomous robotic soccer system manuela veloso peter stone sorin achim news lisa meeden shading of regions on vector display devices given an arbitrary simple polygon with n vertices, we present an algorithm for shading the interior of the polygon with a set of parallel lines where the slope and the distance between lines are prespecified. if the number of shading line segments is m, the algorithm described in the paper runs in o(n log n + m) time.
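the shading entry just described computes its hatch segments far more efficiently, but the underlying geometry can be illustrated with a plain scanline-intersection sketch: intersect each shading line with every polygon edge, sort the crossings, and pair them up. the hatch direction is assumed to have already been rotated to horizontal, and gap is the requested line spacing; this is an o(n*m) illustration, not the o(n log n + m) method of the paper.

def hatch(polygon, gap):
    # polygon: list of (x, y) vertices of a simple polygon; returns hatch segments
    ys = [y for _, y in polygon]
    segments = []
    y = min(ys) + gap / 2.0
    while y < max(ys):
        xs = []
        n = len(polygon)
        for i in range(n):
            (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
            if (y0 <= y < y1) or (y1 <= y < y0):  # edge crosses the shading line
                xs.append(x0 + (y - y0) * (x1 - x0) / (y1 - y0))
        xs.sort()
        # consecutive pairs of crossings bound the interior spans
        segments.extend(((xs[k], y), (xs[k + 1], y)) for k in range(0, len(xs) - 1, 2))
        y += gap
    return segments

# example: a unit square hatched every 0.25 units
print(hatch([(0, 0), (1, 0), (1, 1), (0, 1)], 0.25))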
the algorithm is generalizable to shade any region or regions of an arbitrary planar subdivision. d. t. lee a simple naturalistic hair model we propose a new computer graphics model to render naturalistic human hair. in this model, a simplified progressive polyline simulation was used for fast processing. the problems of inter-hair collisions and hair-head collisions were avoided by certain assumptions, and the bend in the hair was provided by angle-dependent pseudo-force equations. several hair-colour and shadowing techniques were tested. despite the simplicity of the hair model itself, when it was combined with the coloring and shadowing techniques described here, the proposed model did effectively render hair, as illustrated by the still images. john rankin richard hall a unified representation for numerical and qualitative simulations models fall naturally into two main categories, those of discrete systems and those of continuous systems. our model-based reasoning work deals with continuous systems, augmented to provide for the possibility that one continuous system model transitions to another, as when a threshold event occurs such as a thermostat turning on. one aspect of model-based reasoning is simulation. a model is defined and its behavior(s) inferred through qualitative or numerical simulation. the simulated trajectories then facilitate tasks that one expects model-based reasoning to aid in, such as prediction, monitoring, diagnosis, and design. here we describe an approach to representing simulation trajectories that results in descriptions of system behavior that contain both qualitative and quantitative information about trajectories closely integrated together. those descriptions are supported by an internal representation methodology that also closely integrates qualitative and quantitative information. the internal representation methodology supports quantitative inferences about the trajectories, and an example trace of such inferencing is provided. daniel berleant matchmaking among minimal agents without a facilitator multi-agent systems are a promising way of dealing with large complex problems. however, it is not yet clear just how much complexity or pre-existing structure individual agents must have to allow them to work together effectively. in this paper, we ask to what extent agents with minimal resources, local communication and without a directory service can solve a consumer-provider matchmaking problem. we are interested in finding a solution that is massively scalable and can be used with resource-poor agents in an open system. we create a model involving random search and a grouping procedure. through simulation of this model, we show that peer-to-peer communication in an environment with multiple copies of randomly distributed like clients and providers is sufficient for most agents to discover the service consumers or providers they need to complete tasks. we simulate systems with between 500 and 32,000 agents, between 10 and 2000 categories of services, and with three to six services required by each agent. we show that, for instance, in a system with 80 service categories and 2000 agents, each requiring three random services, between 93% and 97% of possible matches are discovered. such a system can work with at least 90 different service categories and tens of thousands of agents. elth ogston stamatis vassiliadis two-handed interactive stereoscopic visualization david s. ebert christopher d.
shaw amen zwa cindy starr introduction to simulation this paper offers an introduction to the fundamental concepts of system modeling with emphasis on the application of digital simulation. the topics presented include system modeling, model classification, a discussion of mathematical and simulation models, their distinction and relative advantages, an overview of systems analysis, an example simulation model, and brief discussions of random numbers, random variable generation and simulation languages. the material presented is largely conceptual and requires no prior background in modeling. j. w. schmidt a survey of technical computer users resulting in guidelines for the development of technical computer documentation m. puscas conference review paul mc kevitt conn mulvihill sean o nuallain a cognitively valid knowledge acquisition tool j. s. lancaster c. r. westphal k. l. mcgraw image-based motion blur for stop motion animation stop motion animation is a well-established technique where still pictures of static scenes are taken and then played at film speeds to show motion. a major limitation of this method appears when fast motions are desired; most motion appears to have sharp edges and there is no visible motion blur. appearance of motion blur is a strong perceptual cue, which is automatically present in live-action films, and synthetically generated in animated sequences. in this paper, we present an approach for automatically simulating motion blur. ours is wholly a post-process, and uses image sequences, either stop motion or raw video, as input. first we track the frame-to-frame motion of the objects within the image plane. we then integrate the scene's appearance as it changed over a period of time. this period of time corresponds to shutter speed in live-action filming, and gives us interactive control over the extent of the induced blur. we demonstrate a simple implementation of our approach as it applies to footage of different motions and to scenes of varying complexity. our photorealistic renderings of these input sequences approximate the effect of capturing moving objects on film that is exposed for finite periods of time. gabriel j. brostow irfan essa animation control for real-time virtual humans norman i. badler martha s. palmer rama bindiganavale a bitmap scaling and rotation design for sh1 low power cpu ying-wen bai ching-ho lai toward an architecture for adaptive, rational, mobile agents (abstract) innes a. ferguson image re-composer shoji tanaka jun kurumizawa seiji inokuchi design and implementation of an immersive geoscience toolkit (case study) better ways to represent and to interact with large geological models are topics of high interest in geoscience, and especially for oil and gas companies. we present in this paper the design and implementation of a visualization program that involves two main features. it is based on the central data model, in order to display in real time the modifications caused by the modeler. furthermore, it benefits from the different immersive environments which give the user a much more accurate insight into the model than a regular computer screen. then, we focus on the difficulties that stand in the way of performance.
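returning to the image-based motion blur entry above (brostow and essa): their method tracks per-object motion and integrates appearance over a shutter interval, while a much cruder stand-in that still shows the appearance-integration idea is simply to average each frame with its recent predecessors. the sketch below assumes the frames are equally sized float arrays in [0, 1] and that the shutter window is measured in whole frames.

import numpy as np

def blur_sequence(frames, shutter=3):
    # average each frame with the previous (shutter - 1) frames to fake exposure time
    frames = [np.asarray(f, dtype=float) for f in frames]
    blurred = []
    for i in range(len(frames)):
        window = frames[max(0, i - shutter + 1): i + 1]
        blurred.append(sum(window) / len(window))
    return blurred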
christophe winkler fabien bosquet xavier cavin jean-claude paul a simple method for improved color printing of monitor images to print image data optimized for display on a color monitor, the red, green, and blue values that drive the display must be transformed into data that control the amounts of cyan, magenta, yellow, and black on the print. the differences in the way display and print images are produced have important consequences for the transformation. matching the appearance of the monitor and print images may be impossible, and achieving satisfactory results is complex. a method for obtaining pleasing prints from display image data is presented. this method assumes that good results can be achieved by users who do not have extensive knowledge of color reproduction and who have a minimum of color measuring equipment available. michael g. lamming warren l. rhodes hierarchical b-spline refinement david r. forsey richard h. bartels knowledge elicitation during dynamic scene description j. malec interval volume tetrahedrization gregory m. nielson junwon sung cloning: a novel method for interactive parallel simulation maria hybinette richard fujimoto can legal knowledge be derived from legal texts? knowledge acquisition is undoubtedly one of the major bottle-necks in the development of legal expert systems. usually the knowledge is collected by knowledge engineers who are forced to make their own interpretations of the knowledge in order to map it on a knowledge representation technique, thus resulting in erroneous and legally unacceptable interpretations of the law. the aim of nomos (an ec-supported project under the esprit ii initiative) was to assist the knowledge engineer by providing tools that perform semi-automatic knowledge acquisition from legal texts in italian and french. this paper reports on the results of the first evaluation of the knowledge collected by these tools. the evaluation was performed by complementing the tools with a fully functional expert system that accepted the generated knowledge bases and allowed experts to test the completeness of the knowledge through a series of interactive consultations. the knowledge base used for this evaluation was derived from the text for the italian value added tax law. the text was pre-processed in its ascii form by the nomos tools and the generated knowledge base was filtered through a conventional expert system shell to generate the evaluation expert system. knowledge extracted directly from text was converted into a hybrid of production rules and conceptual graphs. [see sowa 1984] knowledge collected from other sources, such as previously resolved cases, explanations of terms and examples, was linked to the knowledge base using an automated hypertext technique. [see konstantinou & morse 1992] finally, the expert system was tested using real-life cases supplied by the italian ministry of finance. vassilis konstantinou john sykes georgios n. yannopoulos photorealistic rendering of knitwear using the lumislice we present a method for efficient synthesis of photorealistic free-form knitwear. our approach is motivated by the observation that a single cross-section of yarn can serve as the basic primitive for modeling entire articles of knitwear. this primitive, called the _lumislice_, describes radiance from a yarn cross-section based on fine-level interactions --- such as occlusion, shadowing, and multiple scattering --- among yarn fibers.
by representing yarn as a sequence of identical but rotated cross-sections, the lumislice can effectively propagate local microstructure over arbitrary stitch patterns and knitwear shapes. this framework accommodates varying levels of detail and capitalizes on hardware-assisted transparency blending. to further enhance realism, a technique for generating soft shadows from yarn is also introduced. ying-qing xu yanyun chen stephen lin hua zhong enhua wu baining guo heung-yeung shum an efficient antialiasing technique xiaolin wu the warpengine: an architecture for the post-polygonal age we present the warpengine, an architecture designed for real-time image-based rendering of natural scenes from arbitrary viewpoints. the modeling primitives are real-world images with per-pixel depth. currently they are acquired and stored off-line; in the near future real-time depth-image acquisition will be possible, and the warpengine is designed to render in immediate mode from such data sources. the depth-image resolution is locally adapted by interpolation to match the resolution of the output image. 3d warping can occur either before or after the interpolation; the resulting warped/interpolated samples are forward-mapped into a warp buffer, with the precise locations recorded using an offset. warping processors are integrated on-chip with the warp buffer, allowing efficient, scalable implementation of very high performance systems. each chip will be able to process 100 million samples per second and provide 4.8 gigabytes per second of bandwidth to the warp buffer. the warpengine is significantly less complex than our previous efforts, incorporating only a single asic design. small configurations can be packaged as a pc add-in card, while larger deskside configurations will provide hdtv resolutions at 50 hz, enabling radical new applications such as 3d television. warpengine will be highly programmable, facilitating use as a test-bed for experimental ibr algorithms. voicu popescu john eyles anselmo lastra joshua steinhurst nick england lars nyland top ten visualization problems bill hibbard panel session: improving the teaching of simulation the diversity of application areas represented at a typical simulation conference attests to the increased usage of computer simulation; however, increased usage may lead to a decrease in the overall "quality" of simulations. in an effort to improve simulation quality, research continues in methodology areas such as documentation, validity, and statistical techniques. another constant concern is the manner in which computer simulation is being taught by educational institutions and private companies. the development of sophisticated simulation languages has made simulation a practical decision making tool, but often the apparent simplicity of a language leads to deficiencies in the teaching of basic methodology. all practitioners of simulation should feel a responsibility to work toward improving the quality of teaching in the field. carl j. bellas lewis corner a. alan b. pritsker thomas schriber introduction to the artificial life issue bill stevenson design of simulation models the organizer of this panel has chosen to address this difficult subject by experience. the four invited panelists average 20 years of experience in the field of simulation. they represent the users and the builders of simulation models, both private industry and government, and are familiar with a diversity of languages used in modeling.
each panelist has been invited to give a fifteen-minute presentation drawing on their unique perspective on the design of simulation models. following the presentation, a period will be provided for discussion and questions. donald a. heimburger julian reitman henry kleine otis l. newton howard l. swaim a framework for interactive texturing on curved surfaces hans køhling pedersen the giftbringer michael makara kim allen kluge michael sanborn surfels: surface elements as rendering primitives surface elements (surfels) are a powerful paradigm to efficiently render complex geometric objects at interactive frame rates. unlike classical surface discretizations, i.e., triangles or quadrilateral meshes, surfels are point primitives without explicit connectivity. surfel attributes comprise depth, texture color, normal, and others. as a pre-process, an octree-based surfel representation of a geometric object is computed. during sampling, surfel positions and normals are optionally perturbed, and different levels of texture colors are prefiltered and stored per surfel. during rendering, a hierarchical forward warping algorithm projects surfels to a z-buffer. a novel method called visibility splatting determines visible surfels and holes in the z-buffer. visible surfels are shaded using texture filtering, phong illumination, and environment mapping using per-surfel normals. several methods of image reconstruction, including supersampling, offer flexible speed-quality tradeoffs. due to the simplicity of the operations, the surfel rendering pipeline is amenable to hardware implementation. surfel objects offer complex shape, low rendering cost and high image quality, which makes them specifically suited for low-cost, real-time graphics, such as games. hanspeter pfister matthias zwicker jeroen van baar markus gross view-dependent adaptive tessellation of spline surfaces jatin chhugani subodh kumar hollywood and highland damijan saccio peter korian peipei yuan accurate triangulations of deformed, intersecting surfaces brian von herzen alan h. barr using process requirements as the basis for the creation and evaluation of process ontologies for enterprise modeling michael gruninger craig schlenoff amy knutilla steven ray an auto-adaptive dead reckoning algorithm for distributed interactive simulation wentong cai francis b. s. lee l. chen real-time, continuous level of detail rendering of height fields peter lindstrom david koller william ribarsky larry f. hodges nick faust gregory a. turner apt - a productivity tool for supporting expert analysis of time series data computer programs for graphing and analyzing time series data are widely available. for large data analysis applications, however, the analyst may invest a great deal of time navigating an ocean of data in order to find the relevant and interesting pieces. by making this process of discovery easier we can improve the productivity of the analyst. in this paper we describe a data analysis system composed of an eclectic combination of pattern recognition, artificial intelligence, and digital signal processing with the goal of providing some of the right tools. the machine is used to accept abstract descriptions of interesting or anomalous data and then to bring that data quickly into the user interface. the same tools can screen large datasets in the analyst's absence. the human analyst spends less time wading through graphs and numbers and more time answering the question of the day.
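to make the forward-warping step of the surfels entry above (pfister et al.) concrete, a minimal z-buffered point-splatting loop is sketched below; it omits visibility splatting, hole filling, texture filtering and shading, and the point format (pixel-space x, y, depth z, and an rgb color) and image size are assumptions.

import numpy as np

def splat(points, width, height):
    # keep the nearest sample per pixel; unfilled pixels stay black
    zbuffer = np.full((height, width), np.inf)
    image = np.zeros((height, width, 3))
    for x, y, z, color in points:
        px, py = int(round(x)), int(round(y))
        if 0 <= px < width and 0 <= py < height and z < zbuffer[py, px]:
            zbuffer[py, px] = z
            image[py, px] = color
    return image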
our goal is to empower the analyst by providing a higher-level language with which to manipulate, visualize, and restructure the semantic concepts of the domain. jonathan delatizky jeffrey morrill view-dependent refinement of progressive meshes hugues hoppe some characterizations of families of surfaces using functional equations in this article functional equations are used to characterize some families of surfaces. first, the most general surfaces in implicit form f(x, y, z) = 0, such that any arbitrary intersection with the planes z = z0, y = y0, and x = x0 are linear combinations of sets of functions of the other two variables, are characterized. it is shown that only linear combinations of tensor products of univariate functions are possible for f(x, y, z). second, we obtain the most general families of surfaces in explicit form such that their intersections with planes parallel to the planes y = 0 and x = 0 belong to two, not necessarily equal, parametric families of curves. finally, functional equations are used to analyze the uniqueness of representation of gordon-coons surfaces. some practical examples are used to illustrate the theoretical results. enrique castillo andres iglesias 1987 steven a. coons award lecture donald p. greenberg pest management modeller's workbench (abstract only) the pest management modeller's workbench is a program developed at l.s.u. as part of an expert system for soybean crop production. the pest management program is designed to simulate the growth rates of an insect by each development stage of the insect. the attributes of an insect to model are recorded in a data base and maintained through editing features built into the package. the results of a simulation run are a day-by-day account of the insect's population per stage. another feature of the program is a plot graph of the results, allowing selection and/or grouping of insect stages as desired and the optional inclusion of field data, plotted as points. the pest management modeller's workbench program is designed for ease of use by interacting with the system through the program and supplying the user with "help" at any point in the program. the pest management program is currently running on a vax under unix and is also being adapted to run on the ibm pc system. brady r. rimes walter g. rudd an introspective environment for knowledge based simulation an intelligent system is developed to help the user in building models which can disclose their operation, learn, verify and check their own operations. the knowledge necessary for this is represented using the knowledge representation language srl and this allows the user to enter the necessary information at different levels of abstraction. in addition to automatic verification, some important debugging aids like selective tracing of any collection of model elements under different conditions are also developed. venkataseshan baskaran y. v. reddy h-blob: a hierarchical visual clustering method using implicit surfaces t. c. sprenger r. brunella m. h. gross the mutable cursor: using the cursor as a descriptive and directive device in digital interactive stories ella tallyn alan chalmers scott pitkethly sigma tutorial lee schruben vite: a visual interface supporting the direct manipulation of structured data using two-way mappings information processed by computers is frequently stored and organized for the computer's, rather than for the user's, convenience.
for example, information stored in a database is normalized and indexed so computers can efficiently access, process, and retrieve it. however, it is not natural for people to manipulate such formal/prescriptive representations. instead, people frequently sort items by rough notions of association or categorization. one natural organizational process has been found to center around manipulations of objects in spatial arrangements. examples of this range from the organization of documents and other items on a regular office desktop to the use of 3″×5″ cards to organize a conference program. using visual cues and spatial proximity, people change the categorizations of and relationships between objects. without the help of indices or perfect memory people can still interpret, locate, and manipulate the information represented by the items and the higher-level visual structures they form. the vite system presented here is an intuitive interface for people to manipulate information in their own way and at their own pace. vite provides for configurable visualizations of structured data sets so users can design their own "perspectives" and a direct manipulation interface allowing editing of and manipulation of the structured data. hao-wei hsieh frank m. shipman normal forms in function fields we consider function fields of functions of one variable augmented by the binary operation of composition of functions. it is shown that the straightforward axiomatization of this concept allows the introduction of a normal form for expressions denoting elements in such fields. while the description of this normal form seems relatively intuitive, it is surprisingly difficult to prove this fact. we present an algorithm for the normalization of expressions, formulated in the symbolic computer algebra language mathematica. this allows us to effectively decide compositional identities in such fields. examples are given. k. aberer computational strategies for object recognition this article reviews the available methods for automated identification of objects in digital images. the techniques are classified into groups according to the nature of the computational strategy used. four classes are proposed: (1) the simplest strategies, which work on data appropriate for feature vector classification, (2) methods that match models to symbolic data structures for situations involving reliable data and complex models, (3) approaches that fit models to the photometry and are appropriate for noisy data and simple models, and (4) combinations of these strategies, which must be adopted in complex situations. representative examples of various methods are summarized, and the classes of strategies are discussed with respect to their appropriateness for particular applications. paul suetens pascal fua andrew j. hanson reduce package for the indefinite and definite summation this article describes the reduce package zeilberg implemented by gregor stölting and the author which can be obtained from redlib, accessible via anonymous ftp on ftp.zib-berlin.de in the directory pub/redlib/rules. the reduce package zeilberg is a careful implementation of the gosper and zeilberger algorithms for indefinite and definite summation of hypergeometric terms, respectively.
an expression a_k is called a _hypergeometric term_ (or _closed form_) if a_k/a_{k-1} is a rational function with respect to k. typical hypergeometric terms are ratios of products of powers, factorials, gamma function terms, binomial coefficients, and shifted factorials (pochhammer symbols) that are integer-linear in their arguments. the package covers further extensions of both gosper's and zeilberger's algorithms, which in particular are valid for ratios of products of powers, factorials, gamma function terms, binomial coefficients, and shifted factorials that are rational-linear in their arguments. a similar maple package is described elsewhere [2]. (see the sketch below.) wolfram koepf introduction to markup languages issue jason bedunah an integrated system for multi-rover scientific exploration tara estlin alexander gray tobias mann gregg rabideau rebecca castaño steve chien eric mjolsness neural networks and open texture in this paper some experiments designed to explore the suitability of using neural nets to tackle problems of open texture in law are described. three key questions are investigated: can a net classify cases successfully; can an acceptable rationale be uncovered by an examination of the net; and can we derive rules describing the problem from an examination of the net? trevor bench-capon analysis of airport/airline operations using simulation sandra turner gantt an architecture of a knowledge-based simulation engine madhav erraguntla perakath c. benjamin richard j. mayer hardware acceleration for window systems d. rhoden c. wilcox task-level programming of a robot using an intelligent human-robot interface b. sheperd camdroid: a system for implementing intelligent camera control in this paper, a method of encapsulating camera tasks into well-defined units called "camera modules" is described. through this encapsulation, camera modules can be programmed and sequenced, and thus can be used as the underlying framework for controlling the virtual camera in widely disparate types of graphical environments. two examples of the camera framework are shown: an agent which can film a conversation between two virtual actors and a visual programming language for filming a virtual football game. steven m. drucker david zeltzer simple solution formula construction in cylindrical algebraic decomposition based quantifier elimination hoon hong program transformation systems h. partsch r. steinbruggen dynamic stereo displays colin ware an enhanced treatment of hidden lines a new display scheme is presented in which three-dimensional objects are represented with a much higher degree of user control of the line-drawing format than those in conventional schemes. in this scheme, the attribute of a line is determined not at the time of definition but after the viewing transformation has been applied, taking the shielding environment of the line into consideration. the essential difference between our scheme and conventional hidden-line removal/indication lies in the treatment of the shielding environment. in conventional hidden-line removal/indication, only the visible/invisible information is available for the selection of the attribute of a line. in our scheme, the entire set of surfaces that hide a line can be used to determine the attribute of the line. a much higher degree of freedom in generating pictures is achieved owing to the treatment of the shielding environment. we also describe a system called grip, which implements the new scheme. tomihisa kamada satoru kawai
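as a small illustration of the hypergeometric-term test and gosper summation described in the reduce zeilberg entry above, here is a minimal sympy sketch; using sympy instead of reduce, and the particular example terms, are assumptions of this sketch, not part of the paper.

```python
# illustrative sketch only: test whether a_k / a_{k-1} is rational in k,
# and sum a hypergeometric term in closed form with gosper's algorithm.
from sympy import symbols, factorial, binomial, combsimp
from sympy.concrete.gosper import gosper_sum

k, n = symbols('k n', integer=True)

def is_hypergeometric_term(a_k):
    """a_k is a hypergeometric term if a_k / a_{k-1} is a rational function of k."""
    ratio = combsimp(a_k / a_k.subs(k, k - 1))
    return ratio.is_rational_function(k)

print(is_hypergeometric_term(k * factorial(k)))     # True: ratio simplifies to k**2/(k - 1)
print(is_hypergeometric_term(binomial(n, k)))       # True: ratio is (n - k + 1)/k
print(is_hypergeometric_term(factorial(k**2)))      # False: ratio is not a fixed rational function of k

# gosper's algorithm finds the closed form sum_{k=0}^{n} k*k! = (n + 1)! - 1
print(gosper_sum(k * factorial(k), (k, 0, n)))
```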
general principles of learning-based multi-agent systems david h. wolpert kevin r. wheeler kagan tumer knowledge representation in expert vision systems h. ranganath r. greene similarity in harder cases: sentencing for fraud we focus on one of the central concepts of case-based reasoning: similarity. in the field of sentencing, where the really decided cases are often on the harder side, similarity is multidimensional and depends less on formal rules than on various legitimate principles, objectives and factors which relate to the offender, the victim, the act and its social context. the paper presents our data base of empirically analysed cases of fraud and discusses two of the different phases completed to reduce complexity: (1) a decision topology whose categories emerged from in-depth interviews with judges, crown and defense attorneys and which proved to be statistically significant; (2) experimenting with fxs, a program designed to retrieve similar cases based on their degree of salience. the salience coefficient used is a measure of similarity based on relatively high or low frequency of factors in their local context. particularly promising, although not yet formally explained, are fxs' flexible weighting possibilities. ruth murbach Éva nonn generalized bidirectional associative memories for image processing a. d. kulkarni iraj yazdanpanahi convolution surfaces jules bloomenthal ken shoemake a prototype of fault diagnostic system for robots yoshio endo michitaka oshima tomohiro miyazaki shiro haihara finding stable system designs: a reverse simulation technique rosemary h. wild joseph j. pignatiello spaf: sub-texel precision anisotropic filtering texture mapping is a technique which most effectively improves the realism of computer-generated scenes in 3d graphics. tri-linear filtering of the mip-mapped texture has been popular as a texture filtering method, but it blurs images on the surface of objects angled obliquely away from the viewer in a scene. various anisotropic filtering methods like footprint assembly, feline, and fast footprint mip-mapping have been proposed to satisfy the desire for high image quality [7]. despite increases in memory bandwidth, the memory bandwidth limit is still the bottleneck of texture filtering hardware. moreover, it is very important to keep the quality of the rendered image high. in this paper, we propose sub-texel precision anisotropic filtering (spaf), which filters, with weights, the texels in a region that covers a quadrilateral footprint. the weights play a key role in rendering a high-quality image with the restricted number of texels that can be loaded from memory for real-time filtering. first, an area-coverage-based texel filtering scheme is introduced to obtain the footprint's coverage of each texel at sub-texel precision, leading to a small weight table size. second, a gaussian weight is applied to this per-texel coverage to reduce artifacts. as a result, the quality of rendered images is superior to that of other anisotropic filtering methods for the same restricted number of texels. the weight table occupies several hundred kbytes, which is much smaller than the table required by fast footprint mip-mapping. this small rom table size enables spaf to be implemented at feasible hardware cost. hyun-chul shin jin-aeon lee lee-sup kim
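a small numpy sketch in the spirit of the spaf entry above: texels under an approximation of the pixel footprint are combined with gaussian weights. the parallelogram footprint parameterization, the parameter names, and the weight normalization are simplifying assumptions of this sketch, not the paper's coverage scheme or table-driven hardware design.

```python
# illustrative sketch only: gaussian-weighted filtering of texels under an
# elongated (anisotropic) footprint; parameterization is an assumption.
import numpy as np

def gaussian_footprint_filter(tex, center, ax1, ax2, sigma=0.5):
    """filter texels under a parallelogram footprint with gaussian weights.

    tex: (h, w, c) texture, center: (x, y) footprint center in texel coords,
    ax1, ax2: footprint half-axis vectors in texel coords.
    """
    h, w = tex.shape[:2]
    # bounding box of the footprint, clamped to the texture
    ext = np.abs(ax1) + np.abs(ax2)
    x0, x1 = int(np.floor(center[0] - ext[0])), int(np.ceil(center[0] + ext[0]))
    y0, y1 = int(np.floor(center[1] - ext[1])), int(np.ceil(center[1] + ext[1]))
    x0, x1 = max(x0, 0), min(x1, w - 1)
    y0, y1 = max(y0, 0), min(y1, h - 1)

    # map texel centers into footprint coordinates (u, v); |u|, |v| <= 1 means inside
    xs, ys = np.meshgrid(np.arange(x0, x1 + 1), np.arange(y0, y1 + 1))
    d = np.stack([xs + 0.5 - center[0], ys + 0.5 - center[1]], axis=-1)
    M = np.linalg.inv(np.column_stack([ax1, ax2]))
    uv = d @ M.T

    inside = (np.abs(uv[..., 0]) <= 1.0) & (np.abs(uv[..., 1]) <= 1.0)
    r2 = (uv ** 2).sum(axis=-1)
    weights = np.exp(-r2 / (2.0 * sigma ** 2)) * inside
    wsum = weights.sum()
    if wsum == 0.0:
        return tex[int(center[1]) % h, int(center[0]) % w].astype(float)
    patch = tex[y0:y1 + 1, x0:x1 + 1].astype(float)
    return (weights[..., None] * patch).sum(axis=(0, 1)) / wsum

# usage: an elongated footprint, as produced by an obliquely viewed surface
tex = np.random.rand(256, 256, 3)
print(gaussian_footprint_filter(tex, center=(128.0, 64.0),
                                ax1=np.array([6.0, 1.0]),
                                ax2=np.array([-0.5, 1.5])))
```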
application of reduce system for analyzing consistency of systems of p.d.e.'s a consistency analysis of differential equation systems involves a sequence of differential-algebraic operations. at present two methods are known: cartan's method and the riquier-janet-kuranishi (rjk) method, which are equivalent. the implementation of both methods for practical application leads to large symbolic computations which often cannot be performed without a computer. the problem of the consistency investigation of a specific overdetermined system was solved in [1] with the aid of a computer. more recently, a number of computer codes implementing both of the above methods have been developed (for example [2]). in the present paper we propose a realization of the rjk algorithm in the reduce system [3]. one of the rjk algorithm's advantages over cartan's algorithm is that there is no need to pass to exterior differential equations. it is, therefore, more economical, particularly in the use of computer memory, which, in the problem at hand, is very important. the proposed version of the computer code enables us to investigate only the systems of quasilinear first-order differential equations. this limitation is not very restrictive, because the non-linearity is taken into account only in the process of the computation of the ranks of matrices. the reduce program was also used by the present authors for the extraction and construction of exact solutions of equation systems from continuum mechanics. new results were obtained with the aid of a computer. [1] v.s. shurygin and n.n. yanenko: on the computer implementation of algebraic-differential algorithms. problemy kibernetiki, vyp. 6, 1961. [2] v.g. ganzha, s.v. meleshko, f.a. murzin, v.p. shapeev, n.n. yanenko: realization on a computer of an algorithm for studying the consistency of partial differential equations. dokl. akad. nauk sssr, vol. 261, no. 5, 1981. [3] a.c. hearn: reduce user's manual. 1985. v. g. ganzha s. v. meleshko v. p. shelest software bit-slicing: a technique for improving simulation performance peter m. maurer william j. schilp viewing composition tables as axiomatic systems axiomatic systems and composition tables are often seen as alternative ways of specifying the semantic interrelations of relations for qualitative reasoning. axiomatic characterizations usually specify ontological assumptions concerning the domain of the relations and introduce a taxonomic system of relations that, on the one hand, serves to specify the relations and, on the other hand, supports the communication of the intended meaning. in this article, composition tables are seen as a specific form of axiomatic theories that can also be combined with a taxonomic system of relations. on this basis, the content of composition tables can be reformulated in a simplified way. this simplification supports the construction of such tables parallel to the development of the axiomatic specification or on the basis of a given axiomatic characterization. carola eschenbach representation of three-dimensional digital images sargur n. srihari a fast shadow algorithm for area light sources using backprojection the fast identification of shadow regions due to area light sources is necessary for realistic rendering and for discontinuity meshing for global illumination.
a new shadow-determination algorithm is presented that uses a data structure, called a backprojection, to represent the visible portion of a light source from any point in the scene. a complete discontinuity meshing algorithm is described for polyhedral scenes and area light sources, which includes an important class of light/geometry interactions that have not been implemented before. a fast incremental algorithm for computing backprojections is also described. the use of spatial subdivision, and heuristics based on computed statistics of typical scenes, results in efficient mesh and backprojection computation. results of the implementation show that the use of the backprojection and discontinuity meshing permits accelerated high-quality rendering of shadows using both ray-casting and polygon-rendering with interpolants. george drettakis eugene fiume a digital video display system implemented on a kim-1 microcomputer the "microelectronic revolution" and the accompanying decrease in the cost of semiconductor memory have increased the availability of raster-scan graphical displays; yet, as pointed out in a recent survey [bae79], the implementation of graphics software for raster-scan systems has lagged behind that for random-scan ones. the aim of the work described in the present paper has been to apply random-scan techniques to a system employing a relatively inexpensive raster-scan device. the system, incorporating a display-file processor, is implemented on a kim-1 microcomputer. the display device is composed of a micro technology unlimited video board and a standard tv monitor. n. solntseff m. d. drummond an introduction to fault tolerant parallel simulation with eclipse felipe knop edward mascarenhas vernon rego v. s. sunderam a generalized de casteljau approach to 3d free-form deformation this paper briefly presents an efficient and intuitive 3d free-form deformation approach based on iterative affine transformations, a generalized de casteljau algorithm, whereby the object warps along a bezier curve as its skeleton. yu-kuang chang alyn p. rockwood integrating active perception with an autonomous robot architecture glenn wasson david kortenkamp eric huber exploiting emergent behavior in multi-agent systems (abstract) peter wavish learning of depth two neural networks with constant fan-in at the hidden nodes (extended abstract) peter auer stephen kwek wolfgang maass manfred k. warmuth discrete event simulation modeling: directions for the '90s ashvin radiya color gamut mapping and the printing of digital color images principles and techniques useful for calibrated color reproduction are defined. these results are derived from a project to take digital images designed on a variety of different color monitors and accurately reproduce them in a journal using digital offset printing. most of the images printed were reproduced without access to the image as viewed in its original form; the color specification was derived entirely from colorimetric specification. the techniques described here are not specific to offset printing and can be applied equally well to other digital color devices. the reproduction system described is calibrated using cie tristimulus values. an image is represented as a set of three-dimensional points, and the color output device as a three-dimensional solid surrounding the set of all reproducible colors for that device, called its gamut.
the shapes of the monitor and the printer gamuts are very different, so it is necessary to transform the image points to fit into the destination gamut, a process we call gamut mapping. this paper describes the principles that control gamut mapping. included also are some details on monitor and printer calibration, and a brief description of how digital halftone screens for offset printing are prepared. maureen c. stone william b. cowan john c. beatty a comparative study of think-aloud and critical decision knowledge elicitation methods b. w. crandall multiobjective evolutionary algorithm test suites david a. van veldhuizen gary b. lamont ilp versus tlp on smt nicholas mitchell larry carter jeanne ferrante dean tullsen a distributed artificial intelligence view on general purpose vision systems (abstract) olivier boissier yves demazeau an architecture for planning with external information points in a real-time system rhonda eller-meshreki todd saundurs samer meshreki algorithm 647: implementation and relative efficiency of quasirandom sequence generators bennett l. fox guaranteeing the topology of an implicit surface polygonization for interactive modeling barton t. stander john c. hart opengl texture-mapping with very large datasets and multi-resolution tiles paul hansen intelligent simulation environments: identification of the basics a problem exists in efficiently combining a non-deterministic decision capability with a current discrete event simulation language for use by the simulationist (the programmer and, in the future, the user). this paper explores this problem in the context of the discrete event simulation problem domain implemented in siman. the purpose is (1) to provide an ontological definition of abstract ideas from data to wisdom, (2) to identify a taxonomy of simulation and artificial intelligence combination dialects, and (3) to establish the need for and then introduce a "decide node" which will assist the simulationist in incorporating a broader spectrum of the ontology more easily than current dialects allow. jordan snyder gerald t. mackulack correction of geometric perceptual distortions in pictures denis zorin alan h. barr intelligent agents in computer games michael van lent john laird josh buckman joe hartford steve houchard kurt steinkraus russ tedrake agents teaching agents to share meaning the promise of intelligent agents acting on behalf of users' personalized knowledge sharing needs may be hampered by the insistence that these agents begin with a predefined, common ontology instead of personalized, diverse ontologies. only recently have researchers diverged from the last decade's common "ontology paradigm" to a paradigm involving agents that can share knowledge using diverse ontologies. this paper describes how we address this agent knowledge-sharing problem, namely how agents deal with diverse ontologies, by introducing a methodology and algorithms for multi-agent knowledge sharing and learning. we demonstrate how this approach will enable multi-agent systems to assist groups of people in locating, translating, and sharing knowledge using our distributed ontology gathering group integration environment (doggie) and describe our proof-of-concept experiments. doggie synthesizes agent communication, machine learning, and reasoning for information sharing in the web domain. andrew b. williams zijian ren making radiosity usable: automatic preprocessing and meshing techniques for the generation of accurate radiosity solutions daniel r. baum stephen mann kevin p.
smith james m. winget a knowledge-based problem-specific program generator t hrycej texture synthesis for digital painting the problem of digital painting is considered from a signal processing viewpoint, and is reconsidered as a problem of directed texture synthesis. it is an important characteristic of natural texture that detail may be evident at many scales, and the detail at each scale may have distinct characteristics. a "sparse convolution" procedure for generating random textures with arbitrary spectral content is described. the capability of specifying the texture spectrum (and thus the amount of detail at each scale) is an improvement over stochastic texture synthesis processes which are scalebound or which have a prescribed 1/f spectrum. this spectral texture synthesis procedure provides the basis for a digital paint system which rivals the textural sophistication of traditional artistic media. applications in terrain synthesis and texturing computer-rendered objects are also shown. (see the sketch below.) john-peter lewis simulation modeling with event graphs lee schruben identity criteria and sortal concepts in this paper we focus on a specific aspect of the notion of _conceptualisation_, i.e. on the issue of the specification of a certain kind of concept: _sortal concept_. our starting point is the intuitive idea that a sortal concept cannot be specified in isolation from a general notion of entity. we think that this idea has some bearing on the way in which _identity criteria_ should be conceived, since they are usually taken as a fundamental tool for the specification of a sortal concept. the first goal of our paper is to discuss and point out some difficulties concerning the relation between sortals and identity criteria. in general, we think that the specification of a sortal presupposes - in russellian terms - a range of significance on which the concept is defined. it follows that the sortal cannot be stated without a specification of its range of significance. this means that identity criteria for a sortal _k_ providing conditions of identity only for objects falling under _k_ are not enough to specify _k_. independently of this point, we are quite skeptical about the possibility of achieving a formal satisfactory definition of _sortal_. we will try to show that even guarino and welty's last proposal [4] does not succeed. our second goal concerns which concepts are to be taken as sortals. by thinking that identity criteria are necessary and sufficient conditions for identity, guarino and welty cannot accept that, say, _a_ is s1 and _a_ is s2, in the case that s1 and s2 are associated with incompatible criteria. so they are led to postulate, for example, that not _a_ but _b_ is s2 and _a_ is constituted by _b_. we defend the thesis that it is possible to take some concepts - in our case s2 - as endowed only with necessary conditions for identity. we will argue for some specific choices. massimiliano carrara pierdaniele giaretta
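a minimal numpy sketch of the sparse-convolution idea described in the texture synthesis for digital painting entry above: random impulses are scattered over the image and convolved with a kernel, so the kernel's spectrum, and hence the detail at each scale, can be chosen freely. the gaussian kernels, the impulse density, and the two-scale combination below are illustrative assumptions, not the paper's parameters.

```python
# illustrative sketch only: sparse convolution noise via scattered impulses
# convolved (circularly, through the fft) with a chosen kernel.
import numpy as np

def sparse_convolution_noise(shape, kernel, density=0.02, rng=None):
    """scatter random impulses and convolve them with `kernel`."""
    rng = np.random.default_rng(rng)
    impulses = np.zeros(shape)
    mask = rng.random(shape) < density
    impulses[mask] = rng.uniform(-1.0, 1.0, size=mask.sum())

    # zero-centre the kernel in an image-sized array, then convolve in frequency space
    kh, kw = kernel.shape
    padded = np.zeros(shape)
    padded[:kh, :kw] = kernel
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(impulses) * np.fft.fft2(padded)))

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

# sum two scales of detail, each with its own kernel (and hence its own spectrum)
tex = (sparse_convolution_noise((256, 256), gaussian_kernel(31, 6.0), rng=1)
       + 0.4 * sparse_convolution_noise((256, 256), gaussian_kernel(9, 1.5), rng=2))
```

changing the kernel changes the spectral content of the result, which is the point of the technique: detail at each scale is specified by the kernel used at that scale rather than being fixed by a 1/f law.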
graphical tools for interactive image interpretation this paper describes browse, an interactive raster image display facility which is a major component of a larger integrated map-assisted photo-interpretation system (maps), being developed as a prototype interactive aid for photo-interpretation. application areas for this research include image cartography, land use studies and reconnaissance, as well as image database organization, storage, and retrieval. browse is a window-oriented display manager which supports raster image display, overlay of graphical data such as map descriptions and image processing segmentations, and the specification and generation of 3d shaded surface models. digitized imagery from black and white and color aerial mapping photographs is displayed by browse at multiple levels of resolution, and browse allows for dynamic positioning, zooming, expansion or shrinking of the image window. map data represented as vectors and polygons can be superimposed on the imagery through image-to-map registration. access to collateral map databases and terrain models may be accomplished using the browse graphical interface. finally, the window representation gives a convenient communication mechanism for passing image fragments to image interpretation programs, which generally run as separate processes. the results of such processing can be returned to browse for further processing by the user. we will discuss the rationale behind the design of browse as well as its application to domains including aerial photo-interpretation and 3d cartography. david m. mckeown jerry l. denlinger hdtv (hi-vision) computer graphics k. omura r. mochizuki h. tamegaya y. kawajuchi d. miskowich a model of visual masking for computer graphics james a. ferwerda peter shirley sumanta n. pattanaik donald p. greenberg minimal gks minimal gks is a subset of the draft international standard graphical kernel system. minimal gks has been implemented at sandia in the programming language c. this implementation confirms that minimal gks does indeed have the anticipated advantages of easy implementation and small size. experience in using this implementation has also demonstrated that minimal gks is easy to learn and use, yet is powerful enough for non-trivial, real-world applications. randall w. simons a muscle model for animating three-dimensional facial expression keith waters asynchronous control of discrete event simulation traditionally, discrete-event simulations have been controlled in a synchronous manner. the synchronization is achieved through a data-structure, normally referred to as an events-list. manipulations on this list can often be very expensive and can thus detract from the attractiveness of simulation as a problem-solving tool. an alternative to this situation is asynchronous simulation which has found considerable exposure in the literature of distributed simulation. asynchronous simulations do not always process events in their natural order of occurrence. as a result the events-list and the associated costs may be eliminated. this paper explores the possibilities for asynchronous control of traditional simulations. it lays down the requirements for an effective asynchronous algorithm, and proposes a framework for simulations with such an algorithm. (see the sketch below.) jay b. ghosh will parallel simulation come to an end? jason yi-bing lin
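for contrast with the asynchronous scheme advocated in the asynchronous control of discrete event simulation entry above, here is a minimal python sketch of the traditional synchronous events-list that the paper proposes to eliminate, kept as a priority queue; the tiny single-server arrival/departure model is an illustrative assumption.

```python
# illustrative sketch only: a conventional synchronous event-list loop.
import heapq

def run_event_list(arrivals, service_time, horizon):
    """always pop the earliest event from the events-list and process it."""
    events = []                                   # the events-list, kept as a binary heap
    for t in arrivals:
        heapq.heappush(events, (t, 'arrival'))

    clock, busy_until, served = 0.0, 0.0, 0
    while events:
        clock, kind = heapq.heappop(events)       # events processed strictly in time order
        if clock > horizon:
            break
        if kind == 'arrival':
            start = max(clock, busy_until)        # single server, first come first served
            busy_until = start + service_time
            heapq.heappush(events, (busy_until, 'departure'))
        else:
            served += 1
    return served

print(run_event_list(arrivals=[0.5, 1.0, 1.2, 3.0], service_time=0.8, horizon=10.0))
```

every event goes through the central queue, which is exactly the data structure whose manipulation cost the asynchronous approach seeks to avoid.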
and/or graph heuristic search methods two new marking algorithms for and/or graphs called cf and cs are presented. for admissible heuristics cs is not needed, and cf is shown to be preferable to the marking algorithms of martelli and montanari. when the heuristic is not admissible, the analysis is carried out with the help of the notion of the first and second discriminants of an and/or graph. it is proved that in this case cf can be followed by cs to get optimal solutions, provided the sumcost criterion is used and the first discriminant equals the second. estimates of time and storage requirements are given. other cost measures, such as maxcost, are also considered, and a number of interesting open problems are enumerated. a. mahanti a. bagchi examples for the algorithmic calculation of formal puiseux, laurent and power series formal laurent-puiseux series (lps) of the form \sum_{k=k_0}^{\infty} a_k x^{k/n} are important in calculus and complex analysis. in some computer algebra systems (cass) it is possible to define an lps by direct or recursive definition of its coefficients. since some operations cannot be directly supported within the lps domain, some systems generally convert lps to finite truncated lps for operations such as addition, multiplication, division, inversion and formal substitution. this results in a substantial loss of information. since a goal of computer algebra is---in contrast to numerical programming---to work with formal objects and preserve such symbolic information, cas should be able to use lps when possible. there is a one-to-one correspondence between formal power series with positive radius of convergence and corresponding analytic functions. it should be possible to automate conversion between these forms. among cass only macsyma [5] provides a procedure powerseries to calculate lps from analytic expressions in certain special cases, but this is rather limited. in [2]--[4] we gave an algorithmic approach for computing an lps for a very rich family of functions. it covers e.g. a high percentage of the power series that are listed in the special series dictionary [1]. the algorithm has been implemented by the author and a. rennoch in the cas mathematica [7], and by d. gruntz in maple [6]. in this note we present some example results of our mathematica implementation which give insight into the underlying algorithmic procedure. (see the sketch below.) wolfram koepf a new approach for evolving clusters robert e. marmelstein gary b. lamont graphical modeling and animation of brittle fracture james f. o'brien jessica k. hodgins opinion transition model under dynamic environment: experiment in introducing personality to knowledge-based systems masakatsu ohta toshiyuki iida tsukasa kawaoka brushing techniques for exploring volume datasets pak chung wong r. daniel bergeron interactive csg chris butcher ai planning: a prospectus on theory and applications subbarao kambhampati discrete element models and real life duals ross a. gagliano michael r. lauer a reflexive, not impulsive agent the aim of our present research is to build an agent capable of communicative and expressive behavior. the agent should be able to express its emotions but also to refrain from expressing them: a reflexive, not an impulsive agent. a reflexive agent is an agent who thinks it over before displaying its emotions, that is, one who, when feeling an emotion, decides not to display it immediately. in this paper we present our enriched discourse generator and we give a general overview of the factors that we consider to determine the displaying or not displaying of an emotion. catherine pelachaud isabella poggi berardina decarolis fiorella de rosis
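as a small illustration of the formal laurent-puiseux series discussed in the koepf entry above, the following sympy sketch computes truncated expansions with fractional and negative exponents; using sympy rather than the paper's reduce, mathematica, or maple implementations, and the particular example functions, are assumptions of this sketch.

```python
# illustrative sketch only: truncated laurent-puiseux expansions in sympy.
from sympy import symbols, series, sqrt, exp, sin

x = symbols('x', positive=True)

# a puiseux-type expansion: the exponents run over half-integers k/2
print(series(sqrt(x) * exp(x), x, 0, 3))
# sqrt(x) + x**(3/2) + x**(5/2)/2 + O(x**3)

# a laurent-type expansion with a finite principal part
print(series(sin(x) / x**3, x, 0, 4))
# 1/x**2 - 1/6 + x**2/120 + O(x**4)
```

these are the finite truncated forms the entry above warns about: convenient for arithmetic, but they discard the symbolic information that a closed coefficient formula a_k would preserve.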
hardware-accelerated free-form deformation hardware-acceleration for geometric deformation is developed in the framework of an extension to the opengl specification. the method requires an addition to the front-end of the opengl rendering pipeline and an appropriate opengl primitive. our approach is to implement general geometric deformations so the system supports additional layers of abstraction, including physically based simulations. this approach would support a wide range of users with an accelerated implementation of a well-understood deformation method, reducing the need for software deformation engines and the execution time penalty associated with them. clint chua ulrich neumann memory length as a feedback parameter in learning systems in a classic learning experiment, a higher vertebrate is presented with two levers. to start, a reward is given if the left lever is pushed, and no reward is given if the right lever is pushed. after a certain period of time, t, the meaning of the two levers is interchanged. now a reward is given if the right lever is pushed and no reward is given if the left lever is pushed. this happens for the same period of time t. once again there is an interchange in the meaning of the two levers for the period of time t. this alternation is repeated a number of times. higher vertebrates exhibit learning in this situation. with each repeated period of time t, they gradually adapt their response to the correct lever. in other words, at first they are slow to change from one lever to another, but they gradually learn that the reward at each lever is being changed with period t. the above experiment strongly suggests that memory retention is being adjusted to the length of time that a reward is given at each lever. while it is difficult to determine the exact mechanism by which this is done, there is an easy feedback control system which models this behavior. this is shown below. in the system of figure 1, if the error signal is too large, say exceeds a specified critical error, there is a decrease, s, in the memory length. if the error signal is low, say falls below the critical error, there is an increase, p, in the memory length. as described, this is a binary alternative in the change of memory length, as shown in figure 2. it is of course possible to have more than two alternatives in the changes of memory length. in [1] the above system was applied to the tracking of a maneuvering aircraft, where it is assumed that the aircraft follows a periodic linear spline function. that is, the aircraft in the absence of noise follows the path shown in figure 3. there are a certain number of samples of the aircraft position during each time interval t. the noise disturbances at each sample are chosen to be independent samples of normally distributed noise of mean 0 and constant standard deviation σ. to simplify, let the filter model be a straight line, least squares model. other models are possible, for example a straight line, weighted least squares model. let e be the error signal. in [1], p = 0 when |e| does not exceed the critical error, and s = 1 when |e| exceeds it, where the critical error is a specified constant. while recursive filter models are given in [1], the operation may be viewed here as a sequence of nonrecursive linear regressions over f sample points, where f is the adjusting memory length. in figure 4, if the error signal is less than the critical error, the memory length increases from 7 to 8, fitting points 11 to 18. this corresponds to p = 0 because there is no penetration into past history prior to point 11. in figure 4, if the error signal is greater than the critical error, the memory length decreases from 7 to 6, fitting points 13 to 18. this corresponds to s = 1 because there is a shrinkage of memory length by 1. other values for p and s are possible. in the above we have seen a binary alternative where either f increases by 1 (p = 0) or f decreases by 1 (s = 1). many questions remain on the choice of alternatives. (a small illustrative sketch of this adaptive-window scheme is given below.)
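a minimal numpy sketch of the adaptive-window scheme just described, for the p = 0, s = 1 case: a straight line is fitted by least squares over the last f samples, and f grows by one when the prediction error stays within the critical error and shrinks by one when it exceeds it. the noise level, thresholds, and triangle-wave target below are illustrative assumptions, and numpy's polyfit stands in for the paper's recursive filters.

```python
# illustrative sketch only: adjustable memory length f driving a sliding
# least-squares straight-line fit.
import numpy as np

def adaptive_window_filter(signal, critical_error, f_init=7, f_min=2):
    """track `signal` with a straight-line fit over the last f samples,
    growing f by 1 on small errors (p = 0) and shrinking it by 1 on large errors (s = 1)."""
    f = f_init
    estimates, lengths = [], []
    for i in range(len(signal)):
        lo = max(0, i - f + 1)
        window = signal[lo:i + 1]
        t = np.arange(lo, i + 1)
        if len(window) >= 2:
            slope, intercept = np.polyfit(t, window, 1)
            estimate = slope * i + intercept
        else:
            estimate = window[-1]
        error = signal[i] - estimate
        f = f + 1 if abs(error) <= critical_error else max(f_min, f - 1)
        estimates.append(estimate)
        lengths.append(f)
    return np.array(estimates), np.array(lengths)

# a periodic linear-spline target (the "maneuvering aircraft") plus gaussian noise
rng = np.random.default_rng(0)
t = np.arange(200)
target = np.abs((t % 50) - 25.0)          # triangle wave with period 50
observed = target + rng.normal(0.0, 1.0, t.size)
est, mem = adaptive_window_filter(observed, critical_error=2.0)
```

after a knot of the spline the error grows, the window shrinks, and the fit re-acquires the new slope; between knots the window grows again, which is the learning behavior the entry goes on to analyze.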
in what follows, we answer the questions below. is there any improvement in going from 2 alternatives, where f increases or decreases by 1, to 3 alternatives, where f increases by 1, decreases by 1, or remains the same? is there any improvement in using 2 alternatives corresponding to p = 1 (f increases by 2) and s = 2 (f decreases by 2)? is there any improvement in using 2 alternatives corresponding to high values of s and p? first, is there any improvement in allowing f to remain unchanged, as well as increasing or decreasing by one? we have made a number of simulations to test this, and find no discernible improvement over the binary alternative (just increasing or decreasing by one). from the standpoint of models for learning systems, this result is not surprising. second, the system in which p = 1 and s = 2 shows improvement over the system in which p = 0 and s = 1. the latter may be found in [1]. the improved performance of the former is shown in figure 5. apparently in figure 5, the improvement stems from the fact that f increases or decreases by a greater amount. for p = 1, the model penetrates into past history by 1, increasing f by 2 rather than 1. for s = 2, the model shrinks f by 2, rather than 1. note that in figure 5 the learning is rapid, in that the overshoot at each knot of the spline function rapidly decreases, through the sequence at knots b, c, d, e, f. as can be seen in figure 5, the period t is rapidly learned by the filter output. this is apparent by viewing the filter output from knot e to knot f. in fact, the filter value of f in steady state has a root-mean-square error of only 3.5 sample points with essentially no bias error. third, could high increases or decreases in f lead to improved performance? the answer is a definite no. the performance in such cases is very poor. apparently the best performance may be found around p = 0 and s = 1, or p = 1 and s = 2, as noted above. g epstein a reflectance model for computer graphics this paper presents a new reflectance model for rendering computer synthesized images. the model accounts for the relative brightness of different materials and light sources in the same scene. it describes the directional distribution of the reflected light and a color shift that occurs as the reflectance changes with incidence angle. the paper presents a method for obtaining the spectral energy distribution of the light reflected from an object made of a specific real material and discusses a procedure for accurately reproducing the color associated with the spectral energy distribution. the model is applied to the simulation of a metal and a plastic. robert l. cook kenneth e. torrance using new technology for systems documentation: specifications for an on-line documentation system for the paperless office recent conferences on future technology have led to dreaming of the "perfect" on-line documentation system for the paperless office. this panel discussion on using the new technology will present specifications for one such "perfect" system. the audience will be invited to consider what changes and improvements they would make. the object of this exercise will be to encourage someone to build and market such a system. similar systems already exist but they are less than "perfect". diana patterson george olah from the editor richard j. beach inside the computer animation studio: where does inspiration come from?
carl rosendahl when computers speak, hear, and understand jennifer lai a breadth-first approach to efficient mesh traversal tulika mitra tzi-cker chiueh designing government agents for constitutional compliance carey heckman alex roetter logical models of argument logical models of argument formalize commonsense reasoning while taking process and computation seriously. this survey discusses the main ideas that characterize different logical models of argument. it presents the formal features of a few main approaches to the modeling of argumentation. we trace the evolution of argumentation from the mid-1980s, when argument systems emerged as an alternative to nonmonotonic formalisms based on classical logic, to the present, as argument is embedded in different complex systems for real-world applications and allows more formal work to be done in different areas, such as ai and law, case-based reasoning and negotiation among intelligent agents. carlos ivan chesnevar ana gabriela maguitman ronald prescott loui a natural language front-end for knowledge acquisition b. arinze interactive manipulation and display of surfaces in four dimensions david banks fast polygon mesh querying by example james gain james scott knowledge representing schemes for planning an investigation of recent developments of planning systems is performed, with emphasis on issues of knowledge representation, and the possible solutions for representing the effects of actions are pointed out. wan-bih liaw frank m. brown automatic documentation of physical systems a. andersen k. h. munch input modeling lawrence leemis investigations in adaptive distributed simulation a new adaptive protocol for distributed discrete event simulation is proposed. this protocol spans the continuum of protocols from conservative to optimistic, allowing each process in a distributed simulation to adapt to the specific simulation problem at runtime. an actual implementation of the protocol has been tested on a network of workstations for a closed queueing system. the results are very favorable and the algorithm has outperformed a conservative and an optimistic protocol in some cases. donald o. hamnes anand tripathi the artifice of dimension conor patterson farrella dove steven churchill life spacies christa sommerer laurent mignonneau the simulation model development environment: an overview osman balci richard e. nance issues in using a dis facility in analysis jack m. kloeber jack a. jackson decremental delaunay triangulation richard hammersley hong-qian (karen) lu the limits of speech recognition ben shneiderman generalizing lookahead - behavioral prediction in distributed simulation jörn w. janneck a unified modeling methodology for performance evaluation of distributed discrete event simulation mechanisms the main problem associated with comparing distributed discrete event simulation mechanisms is the need to base the comparisons on some common problem specification. this paper presents a specification strategy and language which allows the same simulation problem specification to be used for both distributed discrete event simulation mechanisms and the traditional single-event-list mechanism. this paper includes: a description of the yaddes specification language; a description of the four simulation mechanisms currently supported; the results for three simulation examples; and an estimate of the performance of a communication structure needed to support the various simulation mechanisms.
currently this work has only been done on a uniprocessor emulating a multiprocessor. this has limited some of our results but lays a significant basis for future simulation mechanism comparison. bruno r. preiss wayne m. loucks v. carl hamacher simscript ii.5 tutorial this tutorial will present the highlights of the simscript ii.5 approach to building discrete event simulation models. the approach will be to construct a small example problem, implement the program in simscript ii.5, and then to display the modularity which is possible with simscript ii.5 by adding several "real-world" complexities to the model. edward c. russell computer animation with scripts and actors a technique and philosophy for controlling computer animation is discussed. using the actor/scriptor animation system (asas) a sequence is described by the animator as a formal written script, which is in fact a program in an animation/graphic language. getting the desired animation is then equivalent to "debugging" the script. typical images manipulated with asas are synthetic, 3d perspective, color, shaded images. however, the animation control techniques are independent of the underlying software and hardware of the display system, so apply to other types (still, b&w, 2d, line drawing ...). dynamic (and static) graphics are based on a set of geometric object data types and a set of geometric operators on these types. both sets are extensible. the operators are applied to the objects under the control of modular animated program structures. these structures (called actors) allow parallelism, independence, and optionally, synchronization, so that they can render the full range of the time sequencing of events. actors are the embodiment of imaginary players in a simulated movie. a type of animated number can be used to drive geometric expressions (nested geometrical operators) with dynamic parameters to produce animated objects. ideas from programming styles used in current artificial intelligence research inspired the design of asas, which is in fact an extension to the lisp programming environment. asas was developed in an academic research environment and made the transition to the "real world" of commercial motion graphics production. craig w. reynolds a storage system for scalable knowledge representation twenty years of ai research in knowledge representation has produced frame knowledge representation systems (frss) that incorporate a number of important advances. however, frss lack two important capabilities that prevent them from scaling up to realistic applications: they cannot provide high-speed access to large knowledge bases (kbs), and they do not support shared, concurrent kb access by multiple users. our research investigates the hypothesis that one can employ an existing database management system (dbms) as a storage subsystem for an frs, to provide high-speed access to large, shared kbs. we describe the design and implementation of a general storage system that incrementally loads referenced frames from a dbms, and saves modified frames back to the dbms, for two different frss: loom and theo. we also present experimental results showing that the performance of our prototype storage subsystem exceeds that of flat files for simulated applications that reference or update up to one third of the frames from a large loom kb. peter d. karp suzanne m.
paley ira greenberg generating exception structures for legal information serving more and more legal information is available in electronic form, but traditional retrieval mechanisms are insufficient to answer questions and legal problems of most users. in the esprit project clime we are building a "legal information server" (lis) that not only retrieves all relevant norms for a user's query, but also applies them, giving the normative consequences of the 'situation' presented in the query. typically, these queries represent very general and underspecified cases. underspecification may lead to 'overlooking' of relevant norms, in particular those norms that directly change the legal status of a case: exceptions. most exceptions in legislation, however, are implicit, i.e. will only be detected after trying all norms for a particular case and resolving conflicts between applicable norms. for liss we suggest making the exception relations between norms explicit in off-line mode, so that we can use these exception structures to warn users about potential exceptions to their queries. r. winkels d. j. b. bosscher a. w. f. boer j. a. breuker more efficient pac-learning of dnf with membership queries under the uniform distribution nader h. bshouty jeffrey c. jackson christino tamon introduction to the visual simulation environment osman balci anders i. bertelrud chuck m. esterbrook richard e. nance "put-that-there": voice and gesture at the graphics interface recent technological advances in connected-speech recognition and position sensing in space have encouraged the notion that voice and gesture inputs at the graphics interface can converge to provide a concerted, natural user modality. the work described herein involves the user commanding simple shapes about a large-screen graphics display surface. because voice can be augmented with simultaneous pointing, the free usage of pronouns becomes possible, with a corresponding gain in naturalness and economy of expression. conversely, gesture aided by voice gains precision in its power to reference. richard a. bolt extending graphics hardware for occlusion queries in opengl dirk bartz michael meißner tobias huttner computer graphics research in japanese universities there is considerable research activity related to computer graphics within the japanese academic community. however, little is known about it outside of japan. another problem is that some of the most interesting work being done is not called "computer graphics" and is therefore only reported in other fields such as "precision engineering" or "image processing." interpreting "computer graphics" broadly, this panel will survey the work of three leading japanese researchers who will each briefly describe the history of computer graphics at their institutions, outline the level of staffing and equipment resources in their laboratories and report on their current areas of research activity. these presentations will be preceded by the chairman's overview discussion about japanese university research, in general. the panel will conclude with a question and answer period. professor makoto nagao, department of electrical engineering, kyoto university, sakyo, kyoto, japan - the main thrust of research at kyoto university has been in the field of digital image processing systems. work began in 1965 on the problem of japanese character recognition. one of the earliest digital processing systems in japan was also designed.
working from beliefs about the human pattern recognition process, efforts were directed towards the development of a structural approach rather than mathematical theories which were popular during the early days of character recognition research. current topics of interest include the modeling of more human-like image understanding functions, such as trial and error processes, knowledge-driven analysis processes and declarative representation for image understanding. these can be summarized under the theme "image interpretation by knowledge presentation". laurin herr felix 3d display knut langhans tram: a blackboard architecture for autonomous robots in this paper, we present a blackboard model especially dedicated to autonomous robots. the control architecture exploits the ideas of the bb1 model (hay 85). our blackboard is non-monotonic; this enhances its capabilities. the evolution of the environment is taken into account by our model. we explain the architecture of tram: control and structures. we illustrate it by an application in the field of task planning for a mobile inspection robot. anne koenig elisabeth crochon integrating perception with problem solving paul benjamin alec cameron leo dorst madeleine rosar hsiang-lung wu interactive problem solving j p e hodgson alain fournier, 1943-2000: an appreciation there are thinkers of great repute and intellect who would suggest that any objective measure of humankind is in fact a mismeasure. while i am not of a mood to argue the general point either way, it certainly applies to alain fournier. i write this appreciation of him so that those who do not know him may be inspired to discover him, and so that those who do know him are able to reflect further on his remarkable life. i beg the reader to indulge me in a rather personal reverie, for it is not possible to have known alain without having a deep personal response to him. allow me to recount very briefly alain's accomplishments. his early training was in chemistry. after emigrating from france to canada in the 1970's, he co-wrote a textbook on the topic, and he taught pre-college chemistry in quebec. his career in computer graphics spanned only about 20 years. he received a ph.d. in computer science at the university of texas at dallas, and reported the results of his ph.d. work on stochastic modelling in a seminal paper in 1980 with don fussell and loren carpenter. he then went on to an outstanding academic career first at the university of toronto and subsequently at the university of british columbia. from the outset he played on the international stage, especially in europe and in north america. he has contributed to acm-tog as an author, as co-guest editor of a special issue (1987), and he was an associate editor from 1990-1992. alain's early contributions to computer graphics on the modelling of natural phenomena were brilliant in themselves, but perhaps more importantly they advocated a methodology that required validation against real visual phenomena. this set the bar at the right level scientifically. his approach, which he once called "impressionistic graphics", both revolutionised the field and drove it forward. perhaps the best example of this work is his beautiful paper on the depiction of ocean waves with bill reeves. his subsequent work on illumination models, light transport, rendering, and sampling and filtering is remarkable for its far-sightedness and depth.
his theoretical work in computer graphics and computational geometry made us think about the limits of both fields. alain's approach to solving problems was at once courageous and rational. were he to ask himself (as i'm sure he did) for a response to t.s. eliot's musing, i'm sure his answer would be his inimitable "piece of cake! ok kiddos, here's what we'll do." and he would rush forward, leaving us to linger happily in his slipstream. endlessly resourceful and tirelessly innovative, he would mould ideas of amazing insight into work that also inspired others, often much later, to take up the challenge. if c.p. snow were ever in need of a prototypical person to bridge the "two cultures" of science and art, alain would be it. he was blessed with an irrepressible enthusiasm to communicate his understanding and his curiosity about the universe, and he did so in whatever language was most appropriate. he wrote wonderful mathematics, algorithms, prose and poetry. his vocabulary in english and in french was gently intimidating, for even in intimidation he was benevolent. it seemed that his intellect was able to synthesise everything he ever learned. he would routinely interject a latin "bon mot" into the papers we were writing or practise writing kanji on the napkins on which we were doing research. we rarely did research in an office. how i miss those days. his art served him as innately and intuitively as did his science. he wrote exquisite poetry that was both challenging and tender. one day i hope that his work will be more available to the general public. i also hope that alain's accomplishments will in time be formally recognised by the wider computer graphics community. alain's wit, his innate "jeu d'esprit", was legend. his fondness for good jokes, especially groucho marx gags, allowed some but not all of us to overlook his weakness for jerry lewis. there are few on this earth who have been blessed with a wider array of talents, and fewer still who had more to say and more to contribute than alain. and yet, alain had a sensibility that is common to scientists and artists who have done many great things in their lives: apart from a lovely retrospective paper he wrote for graphics interface in 1994, he did not look back sympathetically on his work to derive satisfaction from his accomplishments. he was rooted in the present, and he suffered from the belief that he was only as good as his last project. in the end he may well have believed in eliot's sentiment: i have heard the mermaids singing, each to each. i do not think that they will sing to me. oh, but they have always sung to you, alain. it was just that the melody was lost amid the clamour of disease. alain did not separate the personal and professional. his were passions that required no qualifying adjective. he loved those close to him with an abandon and devotion that was disarming and humbling. anyone who knew him knew that he was extraordinarily close to his wife and his daughter. they were his greatest joys, his most provocative muses. they were the foundation upon which he built his life. leonard cohen, among others, said that you can't let the facts get in the way of the truth. the facts are that alain fournier, a great innovator in our field, died of lymphoma in the early hours of 14 august, 2000, and is survived by his daughter ariel, his wife adrienne, and a legion of admirers. the truth is that the sun seems to shine less brightly now that he is not among us. the truth is that he has broadened the minds and moved the hearts of many people around the world.
to those who have never known alain, i express particular sympathy, for there are few people we encounter in life with the ability to make us better than we thought we could be. such, in truth and in fact, is the measure of this man. it was difficult not to love alain. with such a beguiling package of brilliance and benevolent eccentricity, it was simply a matter of time and a question of degree. a unique and wonderful person has left us. requiescat in pace, my friend. --eugene fiume, november 2000. (the quotations from t.s. eliot are taken from his poem, "the love song of j. alfred prufrock".) eugene fiume layered memory using backward-chaining this is a progress report on my research project to design a model for _layered memory_ in an intelligent agent. i am using the cognitive model of human memory [4] as the design reference. it defines three layers: sensory information storage (sis); short-term memory (stm); and long-term memory (ltm). sis processes sensory inputs and motor skills. it is fast but `brittle' in being difficult to reprogram. it can be implemented as neural networks. stm is the seat of rationality and goal-setting. it is relatively fast, but its storage is small (perhaps seven semantic `chunks'). it can be implemented as a knowledge-base (kb). ltm stores vivid sensory impressions, heuristic rules, and syntactic knowledge. its storage is large but slow; reconstruction of stm state from ltm may be unreliable. it can be implemented as a database. the focus of my research at this point is the stm layer. a key feature of my stm model is backward-chaining inference. i present here a prototype of my _layered memory_ model and the important points i have learned in implementing the stm layer. joshua gay hierarchical modeling and multiresolution simulation michael kantner data fusion in 3d through surface tracking sensory signals typically suffer from noise, ambiguity, spurious signals and omissions. for a robot to successfully model the environment in which it operates, it must use signals captured from different locations in time and space in an effort to select those signals that appear to be accurate. data fusion, the process of combining signals into a single representation, is an essential component of a mobile robotics system. in this paper, we describe a new method for solving data fusion for a typical mobile robot domain, one in which precise robot location information is not known and where the robot-mounted sensors employed are not calibrated. jonathan shapiro p. h. mowforth feature comparisons of 3-d vector fields using earth mover's distance a method for comparing three-dimensional vector fields constructed from simple critical points is described. this method is a natural extension of the previous work [1] which defined a distance metric for comparing two-dimensional fields. the extension to three dimensions follows the path of our previous work, rethinking the representation of a critical point signature and the distance measure between the points. since the method relies on topologically based information, problems such as grid matching and vector alignment which often complicate other comparison techniques are avoided. in addition, since only feature information is used to represent, and therefore stored for, each field, a significant amount of compression occurs. rajesh batra lambertus hesselink commonsense-based interfaces marvin minsky a perspective from industry michael krogh anders grimsrud t.
todd elvins measuring volumetric coherence yuriko takeshima measuring and modeling anisotropic reflection gregory j. ward tour into the picture: using a spidery mesh interface to make animation from a single image youichi horry ken-ichi anjyo kiyoshi arai iron bowl daiji imai tomoyuki harashima a toolkit for manipulating indefinite summations with application to neural networks dongming wang plug meher gourjian jamie waese a pragmatic principle for agent communication heather holmback mark greaves jeffrey bradshaw advanced output analysis for simulation andrew f. seila silent hill gozo kitao a unification processor based on a uniformly structured cellular hardware in this paper, an implementation of unification using a systolic-like method is presented for a vlsi-oriented prolog machine. not pointers but a line of symbols and the arity of each symbol are used to express the structure of terms on a uniformly structured cellular hardware. this data structure is demanded by the systolic-like method. using the systolic-like method, structure copying and the occurs check are easily executed during the process of unification. moreover, variable search is executed in parallel using a broadcast bus. y. shobatake h. aiso ldi tree: a hierarchical representation for image-based rendering chun-fa chang gary bishop anselmo lastra graphical fisheye views manojit sarkar marc h. brown real-time fur over arbitrary surfaces jerome lengyel emil praun adam finkelstein hugues hoppe discrete groups and visualization of three-dimensional manifolds charlie gunn robust mpeg video watermarking technologies jana dittmann mark stabenau ralf steinmetz tv guides don ritter efficient techniques for interactive texture placement this paper describes efficient algorithms for the placement and distortion of textures. the textures include surface color maps and environment maps. affine transformations of a texture, as well as localized warps, are used to align features in the texture with features of the model. image-space caches are used to enable texture placement in real time. peter litwinowicz gavin miller improving the aircraft design process using web-based modeling and simulation designing and developing new aircraft systems is time-consuming and expensive. computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. this paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using web-based modeling and simulation. john a. reed gregory j. follen abdollah a. afjeh nicholson ny/pequot interactives wells packard dynamic deformation of solid primitives with constraints dimitri metaxas demetri terzopoulos pracniques: coordinated text and transparencies an economical method of producing both hard copy and projection forms of text, without duplication of effort. r. w.
bemer a speech-act-based negotiation protocol: design, implementation, and test use existing negotiation protocols used in distributed artificial intelligence (dai) systems rarely take into account the results from negotiation research. we propose a negotiation protocol, sanp (speech-act-based negotiation protocol), which is based on ballmer and brennenstuhl's speech act classification and on negotiation analysis literature. the protocol is implemented as a domain-independent system using strudel, which is an electronic mail toolkit. a small study tested the potential use of the protocol. although a number of limitations were found in the study, the protocol appears to have potential in domains without these limitations, and it can serve as a building block to design more general negotiation protocols. man kit chang carson c. woo phong shading at gouraud speed greg rivera humpty dumpty setsuro sugiyama a user-programmable vertex engine in this paper we describe the design, programming interface, and implementation of a very efficient user-programmable vertex engine. the vertex engine of nvidia's geforce3 gpu evolved from a highly tuned fixed-function pipeline requiring considerable knowledge to program. programs operate only on a stream of independent vertices traversing the pipe. embedded in the broader fixed function pipeline, our approach preserves parallelism sacrificed by previous approaches. the programmer is presented with a straightforward programming model, which is supported by transparent multi-threading and bypassing to preserve parallelism and performance. in the remainder of the paper we discuss the motivation behind our design and contrast it with previous work. we present the programming model, the instruction set selection process, and details of the hardware implementation. finally, we discuss important api design issues encountered when creating an interface to such a device. we close with thoughts about the future of programmable graphics devices. erik lindholm mark j. kilgard henry moreton parallel simulation of billiard balls using shared variables peter a. mackenzie carl tropper a training algorithm for optimal margin classifiers a training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. the technique is applicable to a wide variety of classification functions, including perceptrons, polynomials, and radial basis functions. the effective number of parameters is adjusted automatically to match the complexity of the problem. the solution is expressed as a linear combination of supporting patterns. these are the subset of training patterns that are closest to the decision boundary. bounds on the generalization performance based on the leave-one-out method and the vc-dimension are given. experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms. bernhard e. boser isabelle m. guyon vladimir n. vapnik visualization of complex models using dynamic texture-based simplification daniel g. aliaga prototype system of mutual telexistence yutaka kunita masahiko inami taro maeda susumu tachi xml: not a silver bullet, but a great pipe wrench tommie usdin tony graham vision, perception and imagery how principles of perception and human interaction can be used to solve problems in machine vision and image generation. andrew j. hanson david m. mckeown susan e. brennan alex p. pentland richard f.
voss impulse-based simulation of rigid bodies we introduce a promising new approach to rigid body dynamic simulation called impulse-based simulation. the method is well suited to modeling physical systems with large numbers of collisions, or with contact modes that change frequently. all types of contact (colliding, rolling, sliding, and resting) are modeled through a series of collision impulses between the objects in contact, hence the method is simpler and faster than constraint-based simulation. we have implemented an impulse-based simulator that can currently achieve interactive simulation times, and real-time simulation seems within reach. in addition, the simulator has produced physically accurate results in several qualitative and quantitative experiments. after giving an overview of impulse-based dynamic simulation, we discuss collision detection and collision response in this context, and present results from several experiments. brian mirtich john canny supercritical speedup david jefferson peter reiher artificial intelligence can improve hypermedia instructional technologies for learning teresa roselli shape transformation using variational implicit functions greg turk james f. o'brien appearance-preserving simplification jonathan cohen marc olano dinesh manocha implementation of ray tracing on the hypercube this preliminary report presents one implementation of a ray tracing system. the ray tracing system was divided and distributed onto the hypercube based on the data to be processed. the implementation, which includes a dynamic load balancing scheme, will be shown to be very efficient for large scenes. d. e. orcutt koktoo gaksi semi ryu wild card van phan implementing telos bryan m. kramer vinay k. chaudhri manolis koubarakis thodoros topaloglou huaiqing wang john mylopoulos frankenskippy linda lum cathy nelson dennis carnahan lars magnus holmgren multiresolution curves we describe a multiresolution curve representation, based on wavelets, that conveniently supports a variety of operations: smoothing a curve; editing the overall form of a curve while preserving its details; and approximating a curve within any given error tolerance for scan conversion. we present methods to support continuous levels of smoothing as well as direct manipulation of an arbitrary portion of the curve; the control points, as well as the discrete nature of the underlying hierarchical representation, can be hidden from the user. the multiresolution representation requires no extra storage beyond that of the original control points, and the algorithms using the representation are both simple and fast. adam finkelstein david h. salesin oddworld: abe's exodus geri wilhelm quality of ocr for degraded text images roger t. hartley kathleen crumpton rotated dispersed dither: a new technique for digital halftoning rotated dispersed-dot dither is proposed as a new dither technique for digital halftoning. it is based on the discrete one-to-one rotation of a bayer dispersed-dot dither array. discrete rotation has the effect of rotating and splitting a significant part of the frequency impulses present in bayer's halftone arrays into many low-amplitude distributed impulses. the halftone patterns produced by the rotated dither method therefore incorporate fewer disturbing artifacts than the horizontal and vertical components present in most of bayer's halftone patterns.
in grayscale wedges produced by rotated dither, texture changes at consecutive gray levels are much smoother than in error diffusion or in bayer's dispersed-dot dither methods, thereby avoiding contouring effects. due to its semi-clustering behavior at mid-tones, rotated dispersed-dot dither exhibits an improved tone reproduction behavior on printers having a significant dot gain, while maintaining the high detail rendition capabilities of dispersed-dot halftoning algorithms. besides their use in black and white printing, rotated dither halftoning techniques have also been successfully applied to in-phase color reproduction on ink-jet printers. victor ostromoukhov roger d. hersch isaac amidror a spectral method for confidence interval generation and run length control in simulations philip heidelberger peter d. welch wriggon yoichiro kawaguchi basic blocks in unconstrained crossword puzzles geoff harris john forster richard rankin intelligent air travel and tourist information systems bing liu filter: an algorithm for reducing cascaded rollbacks in optimistic distributed simulations atul prakash rajalakshmi subramanian using control variates to estimate distribution functions (extended abstract) when estimating a parameter of a problem by the monte carlo method, one can usually improve the statistical efficiency of the estimation procedure by using prior information about the problem. techniques for achieving this improvement are called variance reduction methods and they differ considerably in the way they gain their advantages. for example, a user of the importance sampling technique draws data from a sampling distribution designed, on the basis of prior information, to reduce the variance of each observation while preserving its mean. the user of the stratified sampling technique draws observations from partitions of the sample space and then forms a linear combination of the resulting sample means as the estimator. by using prior information to determine the optimal relative number of observations per partition, one achieves a smaller variance for the estimator than the variance that crude monte carlo allows. the antithetic variate technique derives its benefit by inducing negative correlation between sample outcomes taken in pairs, when it is known a priori that certain monotone relationships hold. by contrast, the control variate technique does not gain its advantage by modifying the sampling procedure. instead, on each trial it collects ancillary sample data on phenomena whose true means are known and then uses a regression method to derive an estimator of the parameter of interest with reduced variance. since the only additional work in using the technique is the collection of the additional data and, at the end of the sample, to derive the control variate estimator, the incremental cost is usually relatively small. originally, fieller and hartley (1954) proposed using the control variate technique in a monte carlo study designed to estimate the relative frequencies in an unknown population. more recently, wilson (1984) has summarized the known theoretical results for the method, noting that for the finite sample size case control variate estimators are generally not unbiased and only in the case of normally distributed observations does an exact distribution theory exist for deriving confidence interval for the parameter of interest. 
the present paper represents a contribution to the theory of control variates in both the finite sample size and asymptotic cases when estimating a proportion, or more generally a distribution function, and when information on stochastic orderings between the phenomenon of interest and ancillary phenomena with known population parameters is available to the experimenter prior to sampling. in particular, the paper derives an unbiased point estimator (section 2) and 100(1 - α) percent confidence interval (section 3), for the parameter, that hold for every sample size k. section 2 also uses this prior information to derive upper bounds on the variance and coefficient of variation of the estimator and a lower bound on the achievable variance reduction. this information is especially valuable before sampling begins. it tells the experimenter what the least possible benefit of the control variate technique is and it enables him to achieve, say, a specified variance or coefficient of variation. the results for a single parameter extend easily to the multiparameter case. in particular, section 4 describes how they apply to the estimation of a distribution function (d.f.). a variance reduction is achieved for all estimated ordinates of the d.f., a notable improvement over most earlier applications of monte carlo variance reducing methods that focused on estimating a single ordinate of the d.f. the proposed technique offers yet an additional benefit. for a discrete d.f., section 5 shows how one can use the variance reduced ordinates of the d.f. to derive an unbiased estimator of the population mean. it also gives the variance of this estimator to order 1/k. to illustrate this technique, section 6 describes the estimation of the complementary distribution function of maximal flow in a flow network of 10 nodes and 25 arcs where the capacities of the arcs are subject to stochastic variation. george s. fishman coordination languages and their significance david gelernter nicholas carriero real-time nonphotorealistic rendering lee markosian michael a. kowalski daniel goldstein samuel j. trychin john f. hughes lubomir d. bourdev flocks, herds and schools: a distributed behavioral model craig w. reynolds visualization of rotation fields mark a. livingston knowledge representation and inference control of speril-ii speril-ii is an expert system for damage assessment of existing structures. fuzzy sets for imprecise data and dempster and shafer's theory for combining fuzzy sets with certainty factors are used in an inexact inference. since the process of the damage assessment is quite complex, metarules are used to control the inference in order to improve the effectiveness and reliability of results. the metarules in speril-ii are represented in logic form with emphases on the explicit representation of the selection of the rule group and the suitable inference method. h. ogawa k. s. fu j. t. p. yao conceptual graphs as a visual language for knowledge acquisition in architectural expert systems l. f. pau s. s. nielsen casa: a computer algebra package for constructive algebraic geometry r. gebauer m. kalkbrener b. wall f. winkler scenario-based management for multiagent systems brian j. garner dan song csim17: a simulation model-building toolkit herb schwetman a language for shading and lighting calculations pat hanrahan jim lawson controlling dynamic simulation with kinematic constraints paul m. isaacs michael f. 
cohen visualizing quaternion rotation quaternions play a vital role in the representation of rotations in computer graphics, primarily for animation and user interfaces. unfortunately, quaternion rotation is often left as an advanced topic in computer graphics education due to difficulties in portraying the four-dimensional space of the quaternions. one tool for overcoming these obstacles is the quaternion demonstrator, a physical visual aid consisting primarily of a belt. every quaternion used to specify a rotation can be represented by fixing one end of the belt and rotating the other. multiplication of quaternions is demonstrated by the composition of rotations, and the resulting twists in the belt depict visually how quaternions interpolate rotation. this article introduces to computer graphics the exponential notation that mathematicians have used to represent unit quaternions. exponential notation combines the angle and axis of the rotation into concise quaternion expression. this notation allows the article to present more clearly a mechanical quaternion demonstrator consisting of a ribbon and a tag, and develop a computer simulation suitable for interactive educational packages. local deformations and the belt trick are used to minimize the ribbon's twisting and simulate a natural-appearing interactive quaternion demonstrator. john c. hart george k. francis louis h. kauffman remotely operated vehicle (rov) dive visualization mike mccann trias: trainable information assistants for cooperative problem solving mathias bauer dietmar dengler advances in computer simulation alan h. rutan communicating structures for modeling large-scale systems vadim e. kotov 3d paint lance williams precision requirements for digital color reproduction an environment was established to perform device-independent color reproduction of full-color pictorial images. in order to determine the required precision for this environment, an experiment was performed to psychophysically measure colorimetric tolerances for six images using paired comparison techniques. these images were manipulated using 10 linear and nonlinear functions in the cielab dimensions of lightness, chroma, and hue angle. perceptibility tolerances were determined using probit analysis. from these results, the necessary precision in number of bits per color channel was determined for both the cielab and the crt rgb device color spaces. for both the cielab color space and the crt rgb device space, approximately eight color bits per channel were required for imperceptible color differences for pictorial images, and 10 bits per channel were required for computational precision. mike stokes mark d. fairchild roy s. berns identification of implicit legal requirements with legal abstract knowledge in order to acquire legal rules from legal texts, legal requirements and legal effects must be identified. however, some of legal requirements are expressed implicitly. such implicit legal requirements can be found by lawyers when they understand legal texts. in this paper, to mechanize legal knowledge acquisition process, a lawyer's understanding process of legal texts is analyzed. the lawyer's understanding process can be viewed as an abductive reasoning process, since the lawyer can introduce implicit legal requirements which have not appeared in legal texts. this paper models such a reasoning process when lawyers understand legal texts. based on the analysis of lawyer's understanding process, a knowledge acquisition support system is proposed. 
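(an illustrative aside to the quaternion visualization abstract above: the exponential, or axis-angle, form of a unit quaternion and the composition of rotations by quaternion multiplication can be sketched in a few lines of python. the function names are mine and the snippet is not the article's belt demonstrator, only a minimal sketch of the underlying algebra.)

    import math

    def quat_from_axis_angle(axis, angle):
        # unit quaternion exp((angle/2) * axis), stored as (w, x, y, z)
        ax, ay, az = axis
        n = math.sqrt(ax*ax + ay*ay + az*az)
        ax, ay, az = ax/n, ay/n, az/n
        s = math.sin(angle / 2.0)
        return (math.cos(angle / 2.0), ax*s, ay*s, az*s)

    def quat_mul(q, r):
        # hamilton product; composing two rotations multiplies their quaternions
        w1, x1, y1, z1 = q
        w2, x2, y2, z2 = r
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    def quat_rotate(q, v):
        # rotate vector v by unit quaternion q: q * (0, v) * conjugate(q)
        w, x, y, z = q
        p = quat_mul(quat_mul(q, (0.0,) + tuple(v)), (w, -x, -y, -z))
        return p[1:]

    # two successive 90-degree turns about z compose into one 180-degree turn
    q90 = quat_from_axis_angle((0.0, 0.0, 1.0), math.pi / 2)
    q180 = quat_mul(q90, q90)
    print(quat_rotate(q180, (1.0, 0.0, 0.0)))   # approximately (-1, 0, 0)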
seiichiro sakurai hajime yoshino nova*gks, a distributed implementation of the graphical kernel system nova*gks is an implementation of the draft international standard graphical kernel system (gks), built using a distributed architecture. the specifications for gks present an implementor with many design tradeoff decisions. the implementors of nova*gks have analyzed those tradeoffs and created a distributed design which allows users of the package to design applications which can perform efficiently on many different graphics hardware configurations. clinton n. waggoner charles tucker christopher j. nelson learning via queries william i. gasarch carl h. smith automatic instance generation using simulation for inductive learning sima parisay behrokh khoshnevis a train operations simulation for miami's sr 836 corridor onala m. atala john s. carson broad agents joseph bates bryan loyall w. scott reilly recovering high dynamic range radiance maps from photographs paul e. debevec jitendra malik exploring the forms of model diagnosis in a simulation support environment our purpose is to explain the categorization of diagnostic assistance, using an example, in order to address the central question for a simulation support environment: in what forms can computer assistance be provided to improve the simulation model development task? this question must be answered in the context of the economic and technical realities of today, for the near future, and the long term. we present several examples of computer assistance which indicate that a significant degree of assistance is possible. richard e. nance c. michael overstreet integration of knowledge in a multiple classifier system yi lu the state of knowledge-based systems frederick hayes-roth neil jacobstein simplified representation of vector fields vector field visualization remains a difficult task. although many local and global visualization methods for vector fields such as flow data exist, they usually require extensive user experience on setting the visualization parameters in order to produce images communicating the desired insight. we present a visualization method that produces simplified but suggestive images of the vector field automatically, based on a hierarchical clustering of the input data. the resulting clusters are then visualized with straight or curved arrow icons. the presented method has a few parameters with which users can produce various simplified vector field visualizations that communicate different insights on the vector data. alexandru telea jarke j. van wijk a cost effective question asking strategy evangelos triantaphyllou jinchang wang anatema: a neural approach to extended markup of sgml documents o. d'antona m. w. lo campo a general rapid transit simulation model with both automatic and manual train control onala m. atala joseph c. brill john s. carson the power and performance of proof animation james o. henriksen learning rewrite rules to improve plan quality muhammad afzal upal boosting as entropy projection jyrki kivinen manfred k. warmuth expert systems in case-based law: the hearsay rule advisor m. t. maccrimmon an approach to document processing robert katz asynchronous, adaptive, rigid body simulation stephen chenney adding immersion to collaborative tools while three dimensional collaborative environments have been used for industrial design or interactive games, workgroup collaboration has largely remained in the two-dimensional realm. 
in this paper, we examine the collaborative capabilities of the collaborative virtual workspace, and how it is used. we then describe our effort to augment this system with an immersive display. by developing an immersive interface to an existing collaboration tool rather than adding collaboration to an immersive world, we hope to discover the advantages and pitfalls that immersive collaboration environments might offer. the first step in this process is to design the immersive environment to provide natural interactions for the activities users normally perform while working within the collaborative environment. edward swing the mvl theorem proving system matthew l. ginsberg a perceptually based adaptive sampling algorithm mark r. bolin gary w. meyer scribe support in gnu emacs think of scribe1 as a compiler for a document description language; scribe is not an editor. scribe is a "high level" document processing system, or a "composition engine" which permits users to deal with documentation at a higher level of abstraction than is possible with "word-processors" or "page processors." one of the methods used to provide this higher level of abstraction is the separation2 of the "form" or "layout" of the document from the "content;" thus avoiding distraction of the document's author with superfluous details of document format. this frees the writer to concentrate on the content of a document, rather than its format. with the increasing popularity of wysiwyg3 style editors, which are more properly described as "page processing" systems; fewer people are willing to insert the type of "mark-up" commands required to properly use a "document processing" system such as scribe. described herein are a set of support functions, written in a dialect of lisp, which provide assistance to the scribe user during the preparation and composition of documents. these support functions provide "short-cuts" for insertion of scribe mark-up, as well as certain features useful during composition and maintenance of large documents. collectively, these support functions are called "scribe mode" and are written to be used with the "gnu emacs" editor.4 gnu emacs is known to run under the unix5 and vax/vms6 operating systems, and various versions have been observed to operate on a wide variety of host computers, and other operating systems. j. e. black deriving expectations to guide knowledge base creation jihie kim yolanda gil game graphics during the 8-bit computer era steven collins approaches to solid modelling (panel session) solid modelling systems in current use are based on approaches that are fundamentally different. the panelists will discuss the advantages and disadvantages of the different approaches, especially from the user's point of view. leon malin frank bliss william carmody robert johnson martin schloessel john swarbrick forward image mapping we present a new forward image mapping algorithm, which speeds up perspective warping --- as in texture mapping. it processes the source image in a special scanline order instead of the normal raster scanline order. this special scanline has the property of preserving parallelism when projecting to the target image. the algorithm reduces the complexity of perspective-correct image warping by eliminating the division per pixel and replacing it with a division per scanline. the method also corrects the perspective distortion in gouraud shading with negligible overhead. 
furthermore, the special scanline order is suitable for antialiasing using a more accurate antialiasing conic filter, with minimum additional cost. the algorithm is highlighted by incremental calculations and optimized memory bandwidth by reading each source pixel only once, suggesting a potential hardware implementation. baoquan chen frank dachille arie kaufman hike (hpkb integrated knowledge environment) - a query interface and integrated knowledge environment for hpkb barbara h. starr vinay k. chaudhri boris katz benjamin good jerome thomere the role of simulation in planning theoretical work in automated planning research has shown that the popular methods used for doing automated planning either cannot be extended into real world domains, or are computationally intractable. this paper discusses the paradigm of simulation-based planning, which attempts to overcome these difficulties. simulation-based planning uses some simulation techniques to gather information for guiding the planner. the resulting system can handle more complex domains with much improved performance over previous systems, while making only a small compromise in completeness. implemented systems are described. daved p. miller online learning about other agents in a dynamic multiagent system junling hu michael p. wellman undebuggability and cognitive science a resource-realistic perspective suggests some indispensable features for a computer program that approximates all human mentality. the mind's program would differ fundamentally from familiar types of software. these features seem to exclude reasonably establishing that a program correctly and completely models the mind. christopher cherniak real-time visualization of scalably large collections of heterogeneous objects (case study) this paper presents results for real-time visualization of out-of-core collections of 3d objects. this is a significant extension of previous methods and shows the generality of hierarchical paging procedures applied both to global terrain and any objects that reside on it. applied to buildings, the procedure shows the effectiveness of using a screen-based paging and display criterion within a hierarchical framework. the results demonstrate that the method is scalable since it is able to handle multiple collections of buildings (e.g., cities) placed around the earth with full interactivity and without extensive memory load. further, the method shows efficient handling of culling and is applicable to larger, extended collections of buildings. finally, the method shows that levels of detail can be incorporated to provide improved detail management. douglas davis william ribarsky t. y. jiang nickolas faust sean ho volume visualization - a sleeping giant about to awaken jim foley efficient algorithms for 3d scan-conversion of parametric curves, surfaces, and volumes arie kaufman visualization environments: long-term goals evidential logic and dempster-shafer theory most artificial intelligence applications require reasoning with uncertainty and incompleteness. such information is not captured in terms of simple true and false values nor in terms of probabilistic estimates when relevant statistical data are lacking. in this paper, we extend nilsson's probabilistic logic, a semantic generalization of logic in which the truth value of a sentence is a probability value between 0 and 1, to evidential logic in the framework of dempster-shafer theory.
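(as a small illustration of the machinery named in the evidential logic abstract above, the sketch below implements dempster's rule of combination for two mass functions over a finite frame of discernment; the frame and the mass values are invented for the example, and the code is not the authors' evidential logic, just the standard combination rule.)

    from itertools import product

    def combine(m1, m2):
        # dempster's rule: each mass function maps a frozenset of hypotheses
        # to a belief mass; masses on disjoint sets become conflict and are
        # renormalized away
        combined = {}
        conflict = 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
        if conflict >= 1.0:
            raise ValueError("totally conflicting evidence")
        k = 1.0 - conflict
        return {s: m / k for s, m in combined.items()}

    # made-up frame {damaged, safe} and two partially informative sources
    frame = frozenset({"damaged", "safe"})
    m1 = {frozenset({"damaged"}): 0.6, frame: 0.4}
    m2 = {frozenset({"damaged"}): 0.3, frozenset({"safe"}): 0.2, frame: 0.5}
    print(combine(m1, m2))   # mass on {damaged} rises to about 0.68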
s s chen rule-based systems rule-based systems automate problem-solving know- how, provide a means for capturing and refining human expertise, and are proving to be commercially viable. frederick hayes-roth a multi-agent planning system (abstract only) this report describes a planning system that generates a sequence of operations (actions) assignments for each agent to achieve a goal statement in a multi-agent problem domain. the system is characterized by a pre-analysis of the goal statement and a representation of the breakable and unbreakable operations. the analysis would yield a sequence of parallel groups of subgoals and the execution sequence of subgoals within a group. (two groups of subgoals are parallel to each other, if they can be achieved separately by different agents). following this analysis result, a plan generator dynamically assigns a group of subgoals to each agent and formulates the detailed plans. in order to increase the utility of agents, the plan generator always tries to assign an idle agent to assist a busy agent if a group of subgoals is not available at that particular instance. this is done by identifying the breakable and the unbreakable subtasks of an agent and assigning one part of the breakable subtask or a sequence of unbreakable subtasks to an idle agent. two major system components are being investigated: the goal statement analysis and the operation representations. the purpose of the goal statement analysis is to use the domain specific knowledge to examine the relationships among the subgoals, to group the correlated subgoals, to find the parallelism among the groups, and to determine the execution sequences of subgoals within a group and the pursuing orders of the groups. the system described here avoids the fruitless search efforts by 1. grouping the correlated subgoals together and assigning each group to an agent so that the complexity of a problem is reduced, 2. ordering the subgoal sequence within a group so that the trial-and-error search can be eliminated. the study of the operation representations has focused on the representations of the breakable and the unbreakable operation sequences. a sequence of operations is called breakable if each component in the sequence can be carried out by different agents. in contrast to the breakable sequence, an unbreakable sequence consists of operations that must be carried out by the same agent. the purpose of this distinction is to specify what portion of a task can be assisted by an idle agent and what portion has to be completed by the same agent if this agent has started it. kai-hsiung chang ray tracing with cones a new approach to ray tracing is introduced. the definition of a "ray" is extended into a cone by including information on the spread angle and the virtual origin. the advantages of this approach, which tries to model light propagation with more fidelity, include a better method of anti-aliasing, a way of calculating fuzzy shadows and dull reflections, a method of calculating the correct level of detail in a procedural model and texture map, and finally, a procedure for faster intersection calculation. john amanatides verification, validation & accreditation (panel): disciplines in dialogue or can we learn from the experiences of others? james d. arthur richard e. nance robert g. sargent dolores r. wallace linda h. rosenberg paul r. 
muessig image precision silhouette edges ramesh raskar michael cohen a stateless client for progressive view-dependent transmission richard southern simon perkins barry steyn alan muller patrick marais edwin blake entropy and self-organization in multi-agent systems emergent self- organization in multi-agent systems appears to contradict the second law of thermodynamics. this paradox has been explained in terms of a coupling between the macro level that hosts self-organization (and an apparent reduction in entropy), and the micro level (where random processes greatly increase entropy). metaphorically, the micro level serves as an entropy "sink", permitting overall system entropy to increase while sequestering this increase from the interactions where selforganization is desired. we make this metaphor precise by constructing a simple example of pheromone-based coordination, defining a way to measure the shannon entropy at the macro (agent) and micro (pheromone) levels, and exhibiting an entropybased view of the coordination. h. van dyke parunak sven brueckner re-tiling polygonal surfaces greg turk a temporal constraint satisfaction technique for nonlinear planning chung-ming cheng cheng-seen ho optimization of mesh locality for transparent vertex caching hugues hoppe sketching in 3d of the numerous changes to the implements for creating 2d images and 3d models, one of the most radical has been the recent adoption of wimp interfaces. ironically, there is good reason to believe that wimp interaction for 3d modeling is actually inferior to the real-world interfaces (pencils, large sheets of paper, clay, paint palettes) that it supplants. in fact, wimp interaction's principal benefit is its straightforward integration with computer 3d model representations which have many advantages including ease of transformation, archival, replication and distribution.instead of interpreting user compliance as an affirmation of wimp interaction, we explore the dichotomy of how easy it is to depict a 3d object with just a pencil and paper, and how hard it is to model the same object using a multithousand dollar workstation. our challenge is to blend the essence of pencil sketching interfaces with the power of computer model representations.this paper presents an overview of ongoing research in "sketch-like" 3d modeling user interfaces. the objective of this research is to design interfaces that match human ergonomics, exploit prelearned skills, foster new skills and support the transition from novice to skilled expert. thus pencil sketching is an interaction ideal, supporting users ranging from children to adults, and from doodlers to artists. perhaps the best testament to the effectiveness of the pencil and paper interface is that very few people even consider it a user interface. robert zeleznik guest editor's introduction henry fuchs recursive estimation of the variance of the sample average gordon m. clark letter from the chair jeffrey m. bradshaw efficient octree conversion by connectivity labeling we present an algorithm for converting from the boundary representation of a solid to the corresponding octree model. the algorithm utilizes an efficient new connected components labeling technique. a novelty of the method is the demonstration that all processing can be performed directly on linear quad and octree encodings. we illustrate the use of the algorithm by an application to geometric mine modeling and verify its performance by analysis and practical experiments. 
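(the octree conversion abstract above takes for granted the usual black/white/grey classification of octree nodes; the sketch below shows only that generic recursive subdivision, with a sphere standing in for a real boundary representation and a crude corner-sampling test in place of the paper's connectivity-labeling machinery.)

    def classify(cx, cy, cz, half, inside):
        # classify a cubic cell by sampling its corners (a deliberately crude
        # test used only for illustration): 'black' = full, 'white' = empty,
        # 'grey' = straddles the boundary
        corners = [(cx + dx*half, cy + dy*half, cz + dz*half)
                   for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]
        flags = [inside(p) for p in corners]
        if all(flags):
            return "black"
        if not any(flags):
            return "white"
        return "grey"

    def build_octree(cx, cy, cz, half, inside, depth):
        # recursively subdivide grey cells into eight children up to a depth limit
        label = classify(cx, cy, cz, half, inside)
        if label != "grey" or depth == 0:
            return label
        h = half / 2.0
        return [build_octree(cx + dx*h, cy + dy*h, cz + dz*h, h, inside, depth - 1)
                for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)]

    # stand-in solid: a unit sphere centred at the origin
    sphere = lambda p: p[0]**2 + p[1]**2 + p[2]**2 <= 1.0
    tree = build_octree(0.0, 0.0, 0.0, 1.0, sphere, depth=3)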
markku tamminen hanan samet ai systems are dumb because ai researchers are too clever jacques pitrat representational effects in a simple classifier system sandip sen parallel volume visualization on a hypercube architecture c. montani r. perego r. scopigno introduction to gpss summary information about key aspects of the simulation programming language gpss is provided. the class of problems to which gpss applies especially well is described; commentary on the semantics and syntax of the language is offered; the learning-oriented literature for gpss is summarized; various gpss implementations are commented on; the time-sharing networks offering gpss are cited; and public courses on the language are listed. finally, the source of a tutorial introduction to the fundamental semantics and syntax of gpss is given. copies of this tutorial material, excluded from reproduction here because of page-count limits, will be distributed at the session and provide the basis for the session itself. thomas j. schriber an integrated architecture for planning and learning charles e. martin r. james firby robot navigation with a polar neural map michail g. lagoudakis anthony s. maida merging and transformation of raster images for cartoon animation the task of assembling drawings and backgrounds together for each frame of an animated sequence has always been a tedious undertaking using conventional animation camera stands and has contributed to the high cost of animation production. in addition, the physical limitations that these camera stands place on the manipulation of the individual artwork levels restrict the total image-making possibilities afforded by traditional cartoon animation. documents containing all frame assembly information must also be maintained. this paper presents several computer methods for assisting in the production of cartoon animation, both to reduce expense and to improve the overall quality. merging is the process of combining levels of artwork into a final composite frame using digital computer graphics. the term "level" refers to a single painted drawing (cel) or background. a method for the simulation of any hypothetical animation camera set-up is introduced. a technique is presented for reducing the total number of merges by retaining merged groups consisting of individual levels which do not change over successive frames. lastly, a sequence-editing system which controls precise definition of an animated sequence is described. also discussed is the actual method for merging any two adjacent levels and several computational and storage optimizations to speed the process. bruce a. wallace model-based integration of planning and learning gregg collins lawrence birnbaum bruce krulwich michael freed feeling and seeing: issues in force display margaret minsky ouh-young ming oliver steele frederick p. brooks max behensky the circle-brush algorithm brushing commonly refers to the drawing of curves with various line widths in bit-mapped graphics systems. it is best done with circles of suitable diameter so that a constant line width, independent of the curve's slope, is obtained. allowing all possible integer diameters corresponding to all possible integer line widths results in every second width having an odd value. thus, the underlying circle algorithm must be able to handle both integer and half-integer radii.
our circle-brush algorithm handles both situations and produces a "best approximation": all grid points produced simultaneously minimize (1) the residual, (2) the euclidean distance to the circle, and (3) the displacement along the grid line from the intersection with the circle. our circle-brush algorithm was developed in careful consideration of its implementation in vlsi. k. c. posch w. d. fellner vision in film and special effects computer vision techniques have become increasingly important in visual effects. when a digital artist inserts an effect into a live-action scene, the more information that they know about the scene, the better. in visual effects, we use vision techniques to calculate the location of the camera, construct a 3d model of the scene and follow objects as they move through the scene.think back to the 1980s, before computer graphics in special effects really took hold. in most of the films of that era, it was very easy to spot the "effects shot" in a film. the give-away was the sudden halting of a previously dynamic, moving camera. this was done because inserting a special effect character or object into a scene with a moving camera was very difficult. the camera move used in filming the live action part of the scene must be exactly reproduced by the camera on the effects set when the model or puppet is filmed. to accomplish this, one of two techniques were employed: 1) mechanical devices were used to encode the motion of the camera and then the encoded information was used to control the motion of the camera on the effects set, or 2) an experienced motion control engineer matched the move by eye. both of these techniques are terribly prone to error and were used only on high budget productions.in the 1990s with the advent of widespread computer generated effects and digital scanning of film, it became possible to apply computer vision techniques to extract information from the scene to aid the digital artist in blending an effect with the reality of the filmed image. doug roble the condiment league david elliott jerry chambless jason alexander silk: a java-based process simulation language kevin j. healy richard a. kilgore a topology modifying progressive decimation algorithm william j. schroeder the integrated performance modeling environment - simulating human-system performance david dahn k. ronald laughery recursive geometric structures in computer graphics marvin kiss integrating reaction and planning in a heterogeneous asynchronous architecture for mobile robot navigation erann gat 3d gait reconstruction using two-camera markerless video suba varadarajan xiaoning fu rick parent patrick j. flynn kathy johnson semantics of interactive rotations we first outline an overall design philosophy for rigid geometric manipulations, then examine a manipulation's characteristics: nesting, scope, pivot constraints, and axis, constraints. we show how a mnemonic notation helps us explain how a simple matrix operation can make manipulations (both rotations and translations) nested within rotations easy to control. finally, we mention some practical considerations to increase calculation speed and control numerical error. an appendix collects formulas useful for working with rotations. michael e. pique integrating solid image capability into a general purpose calligraphic graphics package raster scanned graphics terminals provide several features not found in standard line drawing displays. among them are area fill and an extensive color palette. 
hardware support for such functions is becoming cost effective and available in a variety of forms. what is now needed is high level, device independent software that assists in the generation of and interaction with these terminals. to accomplish this, a project has been undertaken jointly by sandia laboratories, purdue university, and megatek corporation to integrate solid image capabilities into the graphics compatibility system, gcs. gcs is a widely used, general purpose calligraphic graphics package. due to the similarity of the gcs model to that of the gspc core system, gcs provides a timely test bed for solid image extensions to a line drawing package. this paper will summarize the capabilities being implemented, the method and motivation behind each function, and suggestions for further work. the heart of the extensions is a hidden surface algorithm similar to myers' approach which combines the simplicity of a z buffer with the efficiency of a watkin's algorithm. facilities will be provided for hidden line removal as well as hidden surface processing in order to satisfy the basic gcs design goal of device independence. in this way, random vector terminals as well as raster devices will benefit. if a user does not require hidden object processing then very little additional overhead will remain in those routines which have been called. hidden surface processing may also be intermixed with wire frame drawings and text. two remaining issues are the user interface for 3d object definition and the specification of lighting models. since the myers algorithm requires n sided polygons as input, a basic call has been provided for such definitions. higher level constructs, called "shells" and "faces", which facilitate creation of complex 3d bodies, have been implemented. the specification of lighting models is an area of considerable debate. it has been concluded that a choice of a few common models will be made available to the user. these include interpolated and noninterpolated shading (smooth and flat). objects may be color-shaded or black-and-white. a hook is provided for user definition of more exotic lighting models. g. laib r. puk g. stowell cofactor iteration robert m. corless the nature of cognition and action tony clear an evaluation of techniques for grabbing and manipulating remote objects in immersive virtual environments doug a. bowman larry f. hodges automation of simulation model generation from system specifications major advances in simulation techniques have in the past resulted from refinement of our understanding of the modeling process itself. the encapsulation of frequently used functions into standard packages gave rise to the original simulation languages. the recognition of frequently used concepts gave rise to current model-based simulation systems. the separation of the model frame from the experimental frame is just beginning to see implementation in commercially available decision support systems. we believe the next major advance will come from a better understanding of the relationship between the representation (or description) of the system itself and the models/experiments which are performed to satisfy a particular goal. we are currently working towards a modeling paradigm in which a system specification frame and a goal specification frame are formally recognized as distinct from the simulation modeling, experimental, and implementation frames. richard j. mayer robert e. young integrating ilp and ebl raymond j. mooney john m. 
zelle probabilistic robot exploration and navigation using visual landmarks fafa paku alexander vadenberg-rodes gone fishin keith turner sizing and positioning rectangles r. m. baecker online model reconstruction for interactive virtual environments benjamin lok example-based hinting of truetype fonts hinting in truetype is a time-consuming manual process in which a typographer creates a sequence of instructions for better fitting the characters of a font to a grid of pixels. in this paper, we propose a new method for automatically hinting truetype fonts by transferring hints of one font to another. given a hinted source font and a target font without hints, our method matches the outlines of corresponding glyphs in each font, and then translates all of the individual hints for each glyph from the source to the target font. it also translates the control value table (cvt) entries, which are used to unify feature sizes across a font. the resulting hinted font already provides a great improvement over the unhinted version. more importantly, the translated hints, which preserve the sound, hand-designed hinting structure of the original font, provide a very good starting point for a professional typographer to complete and fine-tune, saving time and increasing productivity. we demonstrate our approach with examples of automatically hinted fonts at typical display sizes and screen resolutions. we also provide estimates of the time saved by a professional typographer in hinting new fonts using this semi-automatic approach. douglas e. zongker geraldine wade david h. salesin compositing digital images most computer graphics pictures have been computed all at once, so that the rendering program takes care of all computations relating to the overlap of objects. there are several applications, however, where elements must be rendered separately, relying on compositing techniques for the anti-aliased accumulation of the full image. this paper presents the case for four-channel pictures, demonstrating that a matte component can be computed similarly to the color channels. the paper discusses guidelines for the generation of elements and the arithmetic for their arbitrary compositing. thomas porter tom duff sample: new programming technology and ai language andrew h. gleibman internal representation by the magic light seiki inoue mavric's brain george e. mobus paul s. fisher images as maps / maps from images this talk will illustrate how map database access techniques can be used interactively by pointing at imagery that is in camera correspondence with a geodetic coordinate system; map knowledge is used to extract roads, buildings, and other features, and to generate 3d scenes by fusion of information from traditional geographic information systems, semantic oriented spatial databases, and imagery. david m. mckeown insights into carrier control: a simulation of a power and free conveyor through an automotive paint shop david w. graehl simulation model design paul a. fishwick interactive full spectral rendering mark s. peercy benjamin m. zhu daniel r. baum a prototype system for visualizing time-dependent volume data this video shows a prototype system for the visualization of time-varying volume data of one or several variables as they occur in scientific and engineering applications. it partitions the data dimensions on the client side into a 3d viewer for iso-surfaces and a 2d control plane where each point selects a particular iso-surface in 3d.
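(the compositing abstract above, by porter and duff, describes the four-channel arithmetic behind the familiar "over" operator; with premultiplied rgba values the operator reduces to one blend per channel, sketched below with made-up example colors.)

    def over(fg, bg):
        # composite premultiplied rgba 'fg' over premultiplied rgba 'bg':
        # out = fg + (1 - alpha_fg) * bg, applied to every channel
        r1, g1, b1, a1 = fg
        r2, g2, b2, a2 = bg
        k = 1.0 - a1
        return (r1 + k*r2, g1 + k*g2, b1 + k*b2, a1 + k*a2)

    # a half-transparent red element over an opaque blue background
    red = (0.5, 0.0, 0.0, 0.5)    # premultiplied: pure red at alpha 0.5
    blue = (0.0, 0.0, 1.0, 1.0)
    print(over(red, blue))        # (0.5, 0.0, 0.5, 1.0)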
lutz kettner jack snoeyink interactive three-dimensional holographic displays: seeing the future in depth computer graphics is confined chiefly to flat images. images may look three- dimensional (3d), and sometimes create the illusion of 3d when displayed, for example, on a stereoscopic display [16, 13, 12]. nevertheless, when viewing an image on most display systems, the human visual system (hvs) sees a flat plane of pixels. volumetric displays can create a 3d computer graphics image, but fail to provide many visual depth cues (e.g. shading texture gradients) and cannot provide the powerful depth cue of overlap (occlusion). discrete parallax displays (such as lenticular displays) promise to create 3d images with all of the depth cues, but are limited by achievable resolution. only a real-time electronic holographic ("holovideo") display [11, 6, 8, 7, 9, 21, 22, 20, 2] can create a truly 3d computer graphics image with all of the depth cues (motion parallax, ocular accommodation, occlusion, etc.) and resolution sufficient to provide extreme realism [13]. holovideo displays promise to enhance numerous applications in the creation and manipulation of information, including telepresence, education, medical imaging, interactive design and scientific visualization.the technology of electronic interactive three- dimensional holographic displays is in its first decade. though fancied in popular science fiction, only recently have researchers created the first real holovideo systems by confronting the two basic requirements of electronic holography: computational speed and high-bandwidth modulation of visible light. this article describes the approaches used to address these problems, as well as emerging technologies and techniques that provide firm footing for the development of practical holovideo. mark lucente route66 ronen lasry daniel szecket color image quantization by minimizing the maximum intercluster distance one of the numerical criteria for color image quantization is to minimize the maximum discrepancy between original pixel colors and the corresponding quantized colors. this is typically carried out by first grouping color points into tight clusters and then finding a representative for each cluster. in this article we show that getting the smallest clusters under a formal notion of minimizing the maximum intercluster distance does not guarantee an optimal solution for the quantization criterion. nevertheless our use of an efficient clustering algorithm by teofilo f. gonzalez, which is optimal with respect to the approximation bound of the clustering problem, has resulted in a fast and effective quantizer. this new quantizer is highly competitive and excels when quantization errors need to be well capped and when the performance of other quantizers may be hindered by such factors as low number of quantized colors or unfavorable pixel population distribution. both computer-synthesized and photographic images are used in experimental comparison with several existing quantization methods. zhigang xiang spatial reasoning in an industrial robotic environment michael magee william j. wolfe donald mathis cheryl weber-sklair jeffrey becker web-based simulation in simjava using remote method invocation ernest h. page robert l. moose sean p. griffin an adaptive agent bidding strategy based on stochastic modeling sunju park edmund h. durfee william p. birmingham artificial evolution of implicit surfaces edward j. bedwell david s. 
a knowledge based model of traffic behavior in freeways sanjoy das betty a. bowles chris r. houghland steven j. hunn yunlong zhang polar lust joe fournier a user interface management system the design and construction of the user interface to interactive systems is receiving increased attention. this paper describes a user interface management system that allows a designer/developer to focus on the logical functionality of an application without the usual bookkeeping associated with a conventional programming language. the user interface management system contains two components: a special purpose, application independent dialogue specification language and a run-time interpreter that provides a number of interaction extensions not possible with procedure libraries. david j. kasik rasterizing curves of constant width this paper gives a fast, linear-time algorithm for generating high-quality pixel representations of curved lines. the results are similar to what is achieved by selecting a circle whose diameter is the desired line width, and turning on all pixels covered by the circle as it moves along the desired curve. however, the circle is replaced by a carefully chosen polygon whose deviations from the circle represent subpixel corrections designed to improve the aesthetic qualities of the rasterized curve. for nonsquare pixels, equally good results are obtained when an ellipse is used in place of the circle. the class of polygons involved is introduced, an algorithm for generating them is given, and how to construct the set of pixels covered when such a polygon moves along a curve is shown. the results are analyzed in terms of a mathematical model for the uniformity and accuracy of line width in the rasterized image. john d. hobby a program computing the homology groups of loop spaces julio rubio francis sergeraert integration of simulation with enterprise models k. srinivasan sundaresan jayaraman simulation model verification and validation robert g. sargent introduction to simulation andrew f. seila understanding robot navigation and control ugonna ibeanusi valerie lafond-favieres latresa mclawhorn scott d. anderson provably correct theories of action we investigate logical formalization of the effects of actions in the situation calculus. we propose a formal criterion against which to evaluate theories of deterministic actions. we show how the criterion provides us a formal foundation upon which to tackle the frame problem, as well as its variant in the context of concurrent actions. our main technical contributions are in formulating a wide class of monotonic causal theories that satisfy the criterion, and showing that each such theory can be reformulated succinctly in circumscription. fangzhen lin yoav shoham chess on a hypercube we report our progress on computer chess last described at the second conference on hypercubes. our program follows the strategy of currently successful sequential chess programs: searching of an alpha-beta pruned game tree, iterative deepening, transposition and history tables, specialized endgame evaluators, and so on. the search tree is decomposed onto the hypercube (an ncube) using a recursive version of the principal-variation-splitting algorithm. roughly speaking, subtrees are searched by teams of processors in a self-scheduled manner. a crucial feature of the program is the global hashtable. hashtables are important in the sequential case, but are even more central for a parallel chess algorithm.
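for contrast with hobby's constant-width rasterization abstract above, a brute-force sketch of the reference behavior it improves on: stamp a disc of the desired width at closely spaced samples of a parametric curve and turn on every pixel whose center the disc covers. the sampling density and pixel-center convention are assumptions; the paper replaces the disc with a carefully chosen polygon and runs in linear time.

```python
import numpy as np

def rasterize_constant_width(curve, width, shape, samples=2000):
    """turn on every pixel whose center lies within width/2 of the sampled curve.
    curve: function t -> (x, y) for t in [0, 1];  shape: (rows, cols)."""
    img = np.zeros(shape, dtype=bool)
    r = width / 2.0
    for t in np.linspace(0.0, 1.0, samples):
        x, y = curve(t)
        # bounding box of the disc, clipped to the image
        x0, x1 = max(0, int(x - r)), min(shape[1] - 1, int(x + r) + 1)
        y0, y1 = max(0, int(y - r)), min(shape[0] - 1, int(y + r) + 1)
        ys, xs = np.mgrid[y0:y1 + 1, x0:x1 + 1]
        img[y0:y1 + 1, x0:x1 + 1] |= (xs - x) ** 2 + (ys - y) ** 2 <= r * r
    return img

# example: a quarter circle of radius 40, drawn 5 pixels wide on a 100x100 grid
quarter = lambda t: (50 + 40 * np.cos(t * np.pi / 2), 50 + 40 * np.sin(t * np.pi / 2))
image = rasterize_constant_width(quarter, 5, (100, 100))
```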
the table not only stores knowledge but also makes the decision at each node of the chess tree whether to stay sequential or to split up the work in parallel. in the language of knuth and moore, the transposition table decides whether each node of the chess tree is a type 2 or a type 3 node and acts accordingly. for this data structure the hypercube is used as a shared-memory machine. multiple writes to the same location are resolved using a priority system which decides which entry is of more value to the program. the hashtable is implemented as "smart" shared memory. search times for related subtrees vary widely (up to a factor of 100) so dynamic reconfiguration of processors is necessary to concentrate on such "hot spots" in the tree. a first version of the program with dynamic load balancing has recently been completed and out-performs the non-load-balancing program by a factor of three. the current speedup of the program is 101 out of a possible 256 processors. the program has played in several tournaments, facing both computers and people. most recently it scored 2-2 in the acm north american computer chess championship. e. w. felten s. w. otto a tutorial on verification and validation of simulation models in this tutorial paper we give a general introduction to verification and validation of simulation models, define the various validation techniques, and present a recommended model validation procedure. robert g. sargent peteei: a pet with evolving emotional intelligence magy seif el-nasr thomas r. ioerger john yen towards a standard upper ontology the suggested upper merged ontology (sumo) is an upper level ontology that has been proposed as a starter document for the standard upper ontology working group, an ieee-sanctioned working group of collaborators from the fields of engineering, philosophy, and information science. the sumo provides definitions for general-purpose terms and acts as a foundation for more specific domain ontologies. in this paper we outline the strategy used to create the current version of the sumo, discuss some of the challenges that we faced in constructing the ontology, and describe in detail its most general concepts and the relations between them. ian niles adam pease decomposable negation normal form knowledge compilation has been emerging recently as a new direction of research for dealing with the computational intractability of general propositional reasoning. according to this approach, the reasoning process is split into two phases: an off-line compilation phase and an on-line query-answering phase. in the off-line phase, the propositional theory is compiled into some target language, which is typically a tractable one. in the on-line phase, the compiled target is used to efficiently answer a (potentially) exponential number of queries. the main motivation behind knowledge compilation is to push as much of the computational overhead as possible into the off-line phase, in order to amortize that overhead over all on-line queries. another motivation behind compilation is to produce very simple on-line reasoning systems, which can be embedded cost-effectively into primitive computational platforms, such as those found in consumer electronics. one of the key aspects of any compilation approach is the target language into which the propositional theory is compiled. previous target languages included horn theories, prime implicates/implicants and ordered binary decision diagrams (obdds).
we propose in this paper a new target compilation language, known as decomposable negation normal form (dnnf), and present a number of its properties that make it of interest to the broad community. specifically, we show that dnnf is universal; supports a rich set of polynomial-time logical operations; is more space-efficient than obdds; and is very simple as far as its structure and algorithms are concerned. moreover, we present an algorithm for converting any propositional theory in clausal form into a dnnf and show that if the clausal form has a bounded treewidth, then its dnnf compilation has a linear size and can be computed in linear time (treewidth is a graph-theoretic parameter that measures the connectivity of the clausal form). we also propose two techniques for approximating the dnnf compilation of a theory when the size of such compilation is too large to be practical. one of the techniques generates a sound but incomplete compilation, while the other generates a complete but unsound compilation. together, these approximations bound the exact compilation from below and above in terms of their ability to answer clausal entailment queries. finally, we show that the class of polynomial-time dnnf operations is rich enough to support relatively complex ai applications, by proposing a specific framework for compiling model-based diagnosis systems. adnan darwiche the synthesis and rendering of eroded fractal terrains f. k. musgrave c. e. kolb r. s. mace on site: creating lifelike characters in pixar movies tom porter galyn susman using neural networks in agent teams to speed up solution discovery for hard multi-criteria problems shaun gittens richard goodwin jayant kalagnanam sesh murthy hierarchical agent control: a framework for defining agent behavior the hierarchical agent control architecture (hac) is a general toolkit for specifying an agent's behavior. hac supports action abstraction, resource management, sensor integration, and is well suited to controlling large numbers of agents in dynamic environments. it relies on three hierarchies: action, sensor, and context. the action hierarchy controls the agent's behavior. it is organized around tasks to be accomplished, not the agents themselves. this facilitates the integration of multi-agent actions and planning into the architecture. the sensor hierarchy provides a principled means for structuring the complexity of reading and transforming sensor information. each level of the hierarchy integrates the data coming in from the environment into conceptual chunks appropriate for use by actions at this level. actions and sensors are written using the same formalism. the context hierarchy is a hierarchy of goals. in addition to their primary goals, most actions are operating within a set of implicit assumptions. these assumptions are made explicit through the context hierarchy. we have developed a planner, grasp, implemented within hac, which is capable of resolving multiple goals in real time. hac was intended to have wide applicability. it has been used to control agents in commercial computer games and physical robots. our primary application domain is a simulator of land-based military engagements called "capture the flag." hac's simulation substrate models physics at an abstract level. hac supports any domain in which behaviors can be reduced to a small set of primitive effectors such as move and apply-force.
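returning to darwiche's dnnf abstract above, a minimal sketch (not the paper's implementation) of why the language supports polynomial-time queries: satisfiability of a dnnf circuit is a single bottom-up pass, because decomposability lets an and-node be satisfiable exactly when all of its variable-disjoint children are, and clausal entailment reduces to one conditioning step followed by that satisfiability test. the node encoding below is an assumption.

```python
# nodes: ('lit', v) for a literal (negative int = negated variable),
#        ('and', children) with variable-disjoint children (decomposability),
#        ('or', children), plus constants ('true', None) / ('false', None).
def condition(node, literals):
    """replace literals forced by `literals` (a set of signed ints) with constants."""
    tag, payload = node
    if tag in ('true', 'false'):
        return node
    if tag == 'lit':
        if payload in literals:   return ('true', None)
        if -payload in literals:  return ('false', None)
        return node
    return (tag, [condition(c, literals) for c in payload])

def satisfiable(node):
    tag, payload = node
    if tag == 'true':  return True
    if tag == 'false': return False
    if tag == 'lit':   return True                                   # a free literal
    if tag == 'and':   return all(satisfiable(c) for c in payload)   # uses decomposability
    return any(satisfiable(c) for c in payload)                      # 'or'

def entails_clause(dnnf, clause):
    """dnnf |= clause  iff  dnnf conditioned on ~clause is unsatisfiable."""
    return not satisfiable(condition(dnnf, {-l for l in clause}))

# example: (a and (b or c)) entails the clause (a or d), but not (b)
theory = ('and', [('lit', 1), ('or', [('lit', 2), ('lit', 3)])])
print(entails_clause(theory, [1, 4]))   # True
print(entails_clause(theory, [2]))      # False
```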
at this time defining agent behavior requires lisp programming skills; we are moving towards more graphical programming languages. marc s. atkin gary w. king david l. westbrook brent heeringa paul r. cohen the simkit system: knowledge-based simulation and modeling tools in kee m. stelzner j. dynis f. cummins abstractions in semantic networks: axiom schemata for generalization, aggregation and grouping ulrich schiel an overview of temporal ontologies, structures and reasoning anne beguin jean- marc lermuzeaux guy belhomme the four phase method for modelling complex systems hamad i. odhabi ray j. paul robert d. macredie escher web-based toy sampath jagannathan knowledge and tools in building grandjur 1.1 this paper will describe selected aspects of the first phase of grandjur, an ongoing project to assist ohio prosecutors in preparing cases and selecting charges for presentation to a grand jury. in this project we have tried to identify, categorize, and make available a range of types of information for the user to choose among, compare and contrast, and make better evaluations. in addition, we sought to allow prosecutors to obtain and use the information in making decisions without their abdicating control or responsibility. a few points regarding our experience comparing different implementation tools will also be discussed. for several reasons, we felt it was important to give much responsibility for decision-making to users. many lawyers become impatient with programs that leave no flexibility for the lawyer to direct program flow. in particular, they seem to wish to manipulate depth of detail. typically a lawyer, will skim familiar or less important areas and direct attention to more problematic areas. because of the importance of the decisions being made, we also feel it would be unethical to relieve a prosecutor of much responsibility for deciding appropriate charges. moreover, having the user provide an additional check on knowledge quality is important. errors in coding seem inevitable. the knowledge base continues to grow and evolve; the law changes and we add more expertise over time. requiring the user to participate actively rather than passively helps detect errors. working with prosecutors reduces the likelihood of liability for us as system producers. shifting responsibility for decisions by making an expert system user's role more active seem wise to help minimize potential liability.1 therefore, we designed the program as an aid, not a replacement for user decision-making. at each step the user is provided with information and choices, but he or she, not the program makes the substantive decisions. for example, the program will provide a statutory definition of a crime, and give the user a choice of further types of information available. using the system to analyze a particular case, the prosecutor may search deeper in the knowledge base, obtaining additional assistance in understanding and satisfying elements. the user, however, controls how much assistance to obtain, as well as making the substantive decision that the evidence in the case file sufficiently matches that suggested by the program to satisfy the crime. this reinforces the prosecutor's ethical responsibility to avoid bringing charges without probable cause,2 and, we hope, encourages the user to evaluate the assistance provided more closely. the portion of the program described here provides two approaches. 
using an informational approach, the user can query the system for information about various crimes, their elements, types of proof, possible defense responses, etc.3 diagram 1 provides a schematic representation of sample types of knowledge and their interrelationships. we tried to identify a variety of knowledge types that prosecutors with varying degrees of expertise could use, and to make those types of information available as the user wishes. thus each crime is broken down into elements, and the user may search further depths of the knowledge base for additional help. attached to the various nodes in the network are also comments, expert suggestions, citations, and other information the user might wish. the other phase, allows the user to identify proof and elements satisfied in the case being analyzed, and then compare that proof to crimes in the knowledge base. for example, under the general heading of homicide are such crimes as aggravated murder, aggravated felony murder, murder, and voluntary manslaughter. associated with each are related crimes (lesser-included, lesser non-included, and superior offenses). thus, the user examining aggravated murder may see there are two forms with different elements, and that murder and voluntary manslaughter are, under appropriate circumstances, considered lesser included offenses. the program's main analysis is of crime elements. for example, a primary factor differentiating homicides is the mens rea element. the ohio murder statute prohibits "purposely" causing the death of another.5 "purposely" remains elusive and vague, so the user may obtain additional information, including a statutory definition,6 and expert comment. the statute's language alone could mislead without additional explanation. it seems that the second clause of the definition - that of intending the conduct regardless of the result - probably does not apply in a murder case. therefore additional expert comment clarifies this point. statutes, and appellate opinions, tend to leave intent elements rather abstract. to some extent, this cannot be overcome, as the law itself remains unclear until such time as an authority clarifies it. inconsistency, incompleteness, and lack of clarity, as attributes of the domain, are reflected in the knowledge base as well. we included conflicting interpretations where they existed, with information for the user to make his or her own evaluation and choice between them.7 vagueness of the law, and in some cases the open texture, however, are constrained to some extent in daily practice by practical application in specific cases of the law to facts. lawyers dealing with an abstract concept such as intent, may translate it into prototype fact situations with which they associate the concepts, or interpretations into facts and lay concepts that can be shown and explained to a jury. for prosecutors, stereotypical cases, based on general fact patterns that satisfy requisite elements can form a working definition of the law.8. therefore, we also included, where applicable, reference to typical factual or evidentiary patterns, listing evidence types and samples, and standard jury instructions. for example, one expert suggests most homicides, at least among those that are solved, fall into one of perhaps three or four categories - domestic disputes, bar fights, or neighborhood disagreements are most common. killings associated with another crime, such as a robbery or attempted robbery also account for a significant number of homicides. 
for proof, many prosecutors (and, to some degree, the police who derive evidence the prosecutor uses) tend to rely on certain types of evidence which vary according to the prototype patterns. our primary expert tells prosecutors in training sessions there are basically four ways to prove crime elements - a confession, accomplice or associate testimony, eyewitness testimony, or other circumstantial evidence. with the "purposeful" element in murder, as in many instances, the types of proof within these categories tend to fall into more narrow patterns. accomplice or associate testimony frequently relates statements by the defendant regarding his plans or feelings toward the victim. or, for another example, in the category of other circumstantial evidence to show intent, the program user may examine such types of evidence as motive, type of weapon or instrument used, manner of infliction of the wound, preparation or planning, etc. another step deeper, the user may see, from typical cases, that use of a deadly weapon such as a gun alone may suffice to support an inference of intent, as may a coroner's testimony that the death was caused by especially forceful repeated blows to the head. recognizing that mere evidence of use of a dangerous weapon will suffice to support an inference of intent to kill gives that requirement quite a different meaning.9 linking together elements, one sees that certain evidence may support more than one element, and that patterns of groups of evidence supporting the various elements tend to fall together, forming pattern cases.10 to satisfy user interest in implications of his or her choices, predictions about potential defense tactics are available. expert comment, for example, suggests that reliance on evidence of a particularly violent or forceful injury to show intent probably increases the likelihood the defense will claim self defense. as with other types of information, the source is identified so the user may evaluate the credibility and utility of the information. we are experimenting with two implementations of the programs described here. we built the first prototype using a prolog language, but have now begun to experiment with a hypertext tool. both are high level, powerful languages; one can do nearly whatever one wishes in either. each implementation, for example, allows backward chaining, and modification of the knowledge base during a run. each language allows tracing a user's search path, or the solutions selected, etc. the primary distinctions reduce to convenience in representation and control. the hypertext tool lends itself well to handling long strings, and allows converting strings to lists for processing, and back. it features nice screen handling, and the highlighting of selected text. easy windowing and menu production provide for quick development of friendly user interaction. while prolog also has features that allow manipulation of text, its strength is more in handling symbols. in the hypertext medium we found it easy to represent statutory or case material in text form; with the prolog form, it seemed easier to manipulate the concepts as symbols, then translate them into text for delivery to the screen. the representation in the hypertext medium seemed driven toward using the actual statutory language as a basis for representation, while in prolog, a representation of crimes by elements, and a network of concepts seemed more natural.
this prolog representation required additional coding to make a pleasant presentation, but allowed a quick and easy development of a prototype without fancy screen handling. the use of prolog predicates to identify types of knowledge, with an argument to identify the associated concept seemed rational and natural to us, and facilitates easy search and change. if, for example, a new case is decided which affects the meaning of "purposely", the knowledge base can be searched for case predicates with purposely as a first argument. the new case can be quickly added, and inconsistent cases commented on or removed. the representation selected in the hypertext version relying more on statutory text and format, requires greater effort for modification. one has to examine the code containing the concept which has changed, identify from the code what "topics" are called by the topic containing the concept, and then search those topics. removing a topic also requires removing references to it elsewhere in the code. because of the threading between topics, there can be many such references to be found. prolog has a somewhat richer literature upon which to build; standard ways of representing knowledge and of describing control language have been published in numerous authorities. more writing is becoming available regarding hypertext tools, so this distinction is fading. tools and routines are available to replicate, in prolog, many of the screen handling advantages of hypertext media. the prolog version of the grandjur program occupies much less space in memory, and operates much faster. the hypertext version of the program looked better more quickly, but requires some greater thought to handle more complex control. we await further response from users before we decide what path to follow in future developments. the immediate reaction suggests that the hypertext format seems attractive and intuitively pleasing, but we may choose to replicate some of those features in prolog. whether difficulty in more complex control and matching with hypertext will be overcome as we become more familiar with it remains to be seen. r. d. purdy frame-to-frame coherence and the hidden surface computation: constraints for a convex world frame-to-frame coherence is the highly structured relationship that exists between successive frames of certain animation sequences. from the point of view of the hidden surface computation, this implies that parts of the scene will become visible or invisible in a predictable fashion. in this paper the frame-to-frame coherence constraints are identified and characterized for static scenes restricted to stationary, closed, convex, nonintersecting polyhedra. the animation derives from a continuous movement of the viewer. the mathematical analysis of the constraints is geometric, and leads to a characterization of the self- occlusion relationship over a single polyhedron; and to a characterization of the occlusion or change of occlusion relationship over two polyhedra. based on these constraints, an algorithm is presented which generates successive frames in an animation sequence. harold hubschman steven w. zucker fast perspective volume rendering with splatting by utilizing a ray-driven approach klaus mueller roni yagel a simple, fast, and effective rule learner william w. cohen yoram singer the software process and software environments (panel session) this panel discussion springs from a three-day workshop on the software process and software environments that was held in march 1985. 
our intention is to convey some of what was discussed at that workshop and to precipitate discussion of some of the issues that arose during the workshop deliberations. in this paper we briefly summarize the workshop, then list some of the issues that might become foci of the panel discussion. jack c. wileden mark dowson design and implementation of a supervisory software for intelligent robot a. htay charles f. bryan fast csg voxelization by frame buffer pixel mapping shiaofen fang duoduo liao the use of a template-based methodology in the simulation of a new cargo track from rotterdam harbor to germany alexander j. g. pater maurice j. g. teunisse temporal reasoning in tram a. koenig e. crochon lords of sipan erwin gomez viñales hugo chinga margarita cid good vibrations: model dynamics for graphics and animation a. pentland j. williams watertight tessellation using forward differencing in this paper we describe an algorithm and hardware for the tessellation of polynomial surfaces. while conventional forward difference-based tessellation is subject to round-off error and cracking, our algorithm produces a bit-for-bit consistent triangle mesh across multiple independently tessellated patches. we present tessellation patterns that exploit the efficiency of iterative evaluation techniques while delivering a defect free adaptive tessellation with continuous level-of-detail. we also report the rendering performance of the resulting physical hardware implementation. henry moreton on the correspondence between walsh spectral analysis and 2^k factorial experimental designs this paper discusses the basic concepts of using a spectrally based approach to identify a polynomial response surface model. we show that this approach is closely related to classical experimental design methods. we concentrate on the use of walsh spectra, and show that an experiment which is constructed in this fashion corresponds to a traditional 2^k factorial design. paul j. sanchez error-bounded antialiased rendering of complex environments in previous work, we presented an algorithm to accelerate z-buffer rendering of enormously complex scenes. here, we extend the approach to antialiased rendering with an algorithm that guarantees that each pixel of the output image is within a user-specified error tolerance of the filtered underlying continuous image. as before, we use an object-space octree to cull hidden geometry rapidly. however, instead of using an image-space depth pyramid to test visibility of collections of pixel samples, we use a quadtree data structure to test visibility throughout image-space regions. when regions are too complex, we use quadtree subdivision to simplify the geometry as in warnock's algorithm. subdivision stops when the algorithm can either analytically filter the required region or bound the convolution integral appropriately with interval methods. to the best of our knowledge, this is the first algorithm to antialias with guaranteed accuracy scenes consisting of hundreds of millions of polygons. ned greene michael kass zsweep: an efficient and exact projection algorithm for unstructured volume rendering ricardo farias joseph s. b. mitchell cláudio t. silva experiments in imitation using perceptuo-motor primitives stefan weber maja j. mataric odest chadwicke jenkins mutual benefits for ai & law and knowledge management it is necessary to explore new branches of ai & law. a new promising field is to study the mutual connection between ai & law and knowledge management.
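a small sketch of the iterative evaluation that moreton's watertight-tessellation abstract above builds on: forward differencing steps a cubic across uniform parameter values with three additions per point once the difference table is initialized from the coefficients. accumulated round-off in exactly this loop is what can crack independently tessellated patches; the names below are illustrative.

```python
def forward_difference_cubic(a, b, c, d, steps):
    """evaluate f(t) = a t^3 + b t^2 + c t + d at t = 0, h, 2h, ..., 1
    (h = 1/steps) using only additions in the inner loop."""
    h = 1.0 / steps
    f  = d
    d1 = a * h**3 + b * h**2 + c * h          # first forward difference
    d2 = 6 * a * h**3 + 2 * b * h**2          # second forward difference
    d3 = 6 * a * h**3                         # third forward difference (constant)
    values = [f]
    for _ in range(steps):
        f  += d1
        d1 += d2
        d2 += d3
        values.append(f)
    return values

# example: x(t) = t^3 on [0, 1] in 4 steps -> 0, 1/64, 8/64, 27/64, 1
print(forward_difference_cubic(1.0, 0.0, 0.0, 0.0, 4))
```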
at present, these areas are infrequently investigated together. we argue that addressing these areas together is for their mutual benefit and progress, as well as gainful for legal practice. anja oskamp maaike w. tragter arno r. lodder lapped textures we present a method for creating texture over an arbitrary surface mesh using an example 2d texture. the approach is to identify interesting regions (texture patches) in the 2d example, and to repeatedly paste them onto the surface until it is completely covered. we call such a collection of overlapping patches a lapped texture. it is rendered using compositing operations, either into a traditional global texture map during a preprocess, or directly with the surface at runtime. the runtime compositing approach avoids resampling artifacts and drastically reduces texture memory requirements. through a simple interface, the user specifies a tangential vector field over the surface, providing local control over the texture scale, and for anisotropic textures, the orientation. to paste a texture patch onto the surface, a surface patch is grown and parametrized over texture space. specifically, we optimize the parametrization of each surface patch such that the tangential vector field aligns everywhere with the standard frame of the texture patch. we show that this optimization is solved efficiently as a sparse linear system. emil praun adam finkelstein hugues hoppe deja vu amnon katz ga design of crisp-fuzzy logic controllers kuan-shiu chiu andrew hunter interactive exploration of volume line integral convolution based on 3d-texture mapping line integral convolution (lic) is an effective technique for visualizing vector fields. the application of lic to 3d flow fields has so far been limited by difficulties in efficiently displaying and animating the resulting 3d-images. texture-based volume rendering allows interactive visualization and manipulation of 3d-lic textures. in order to ensure the comprehensive and convenient exploration of flow fields, we suggest interactive functionality including transfer functions and different clipping mechanisms. thereby, we efficiently substitute the calculation of lic based on sparse noise textures and show the convenient visual access of interior structures. further on, we introduce two approaches for animating static 3d-flow fields without the computational expense and the immense memory requirements for pre-computed 3d-textures and without loss of interactivity. this is achieved by using a single 3d-lic texture and a set of time surfaces as clipping geometries. in our first approach we use the clipping geometry to pre-compute a special 3d-lic texture that can be animated by time-dependent color tables. our second approach uses time volumes to actually clip the 3d-lic volume interactively during rasterization. additionally, several examples demonstrate the value of our strategy in practice. c. rezk-salama p. hastreiter c. teitzel t. ertl development and application of an intermodal mass transit simulation with detailed traffic modeling joseph c. brill dudley e. whitney game control design dave thomson the implementation of an algorithm in macsyma: computing the formal solutions of differential systems in the neighborhood of regular singular point we discuss in this paper the problems arising in the implementation in macsyma of a direct algorithm for computing the formal solutions of differential systems in the neighborhood of regular singular point.
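a 2d sketch of the core lic operation behind the volume-lic abstract above (the paper itself works with 3d textures and hardware texture mapping, which this does not attempt): each output pixel averages an input noise texture along a short streamline of the vector field traced through that pixel. step length and integration scheme are assumptions.

```python
import numpy as np

def lic_2d(vx, vy, noise, length=10, step=0.5):
    """basic line integral convolution: vx, vy, noise are (h, w) arrays.
    each pixel averages the noise texture along the local streamline."""
    h, w = noise.shape
    out = np.zeros_like(noise, dtype=float)
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for sign in (+1.0, -1.0):                 # trace both directions
                px, py = float(x), float(y)
                for _ in range(length):
                    i, j = int(round(py)), int(round(px))
                    if not (0 <= i < h and 0 <= j < w):
                        break
                    total += noise[i, j]
                    count += 1
                    norm = np.hypot(vx[i, j], vy[i, j]) + 1e-9
                    px += sign * step * vx[i, j] / norm
                    py += sign * step * vy[i, j] / norm
            out[y, x] = total / max(count, 1)
    return out

# example: circular flow visualized over white noise
ys, xs = np.mgrid[0:64, 0:64].astype(float)
image = lic_2d(-(ys - 32), xs - 32, np.random.rand(64, 64))
```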
the differential system to be considered is of the form x^h dy/dx = a(x) y (1), where a(x) = a0 + a1 x + … is an n by n matrix of formal series. this paper deals with systems of the form (1) with h = 1. another paper [3] has considered the systems with h > 1. so these two papers give complete consideration of systems in the form (1) in a neighborhood of both regular and irregular singular points without reducing the systems to super-irreducible forms, i.e. without knowing (for h > 1) whether a singular point is regular or irregular. the first step in the algorithm is to transform the leading matrix a0 to its jordan form, for which we can use the algorithm in [1][11] that computes at first the frobenius form and then reduces it to its jordan form. the reduction from the frobenius matrix to its jordan matrix is based on a theorem in wilkinson [13] which reduces each companion matrix to its jordan matrix. one of the important aspects is that we don't compute the eigenvalues of the matrix but just keep them as the roots of some corresponding polynomials, i.e. they are treated as algebraic numbers. one simple case in the computations is when the eigenvalues of the leading matrix a0 have no integer differences, which is called the generic case. the formal solutions of the differential system can be obtained by solving a linear system of the form a x - x b = c with a and b in jordan form. the implementation of this resolution is immediate. hence this gives the fundamental system of formal solutions wanted. for the general case, we should determine at first the integer differences of the eigenvalues of the matrix a0. a classical method is applied and implemented in macsyma for our situation. then we search for some transformations to reduce the differential system to the generic case. then an interesting question arose: how to compute the jordan normal form of the new leading matrix whose entries are rational functions of the eigenvalues which are algebraic numbers. we have studied these problems and proposed two algorithms for two different cases. these algorithms are implemented in macsyma. g. chen i. gil a haptic interaction method for volume visualization ricardo s. avila lisa m. sobierajski a montage method: the overlaying of the computer generated images onto a background photograph eihachiro nakamae koichi harada takao ishizaki tomoyuki nishita a perspective on machine translation: theory and practice allen b. tucker luminaries david haxton verification, validation, and accreditation osman balci a comment on "a fast parallel algorithm for thinning digital patterns" a fast parallel thinning algorithm for digital patterns is presented. this algorithm is an improved version of the algorithms introduced by zhang and suen [5] and stefanelli and rosenfeld [3]. an experiment using an apple ii and an epson printer was conducted. the results show that the improved algorithm overcomes some of the disadvantages found in [5] by preserving necessary and essential structures for certain patterns which should not be deleted and maintains very fast speed, from about 1.5 to 2.3 times faster than the four-step and two-step methods described in [3] although the resulting skeletons look basically the same. h. e. lu p. s. p. wang how reovirus kills cancer cells douglas bowman david rittenhouse denis gadbois debra kurtz patrick lee matthew coffey peter forsyth peter strong reconfigurable target recognition system (poster abstract) gabor szedo sandeep neema jason scott ted bapty scene graph apis: wired or tired? wes bethel carl bass sharon rose clay brian hook michael t. jones henry sowizral andries van dam
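the resolution step in the macsyma abstract above amounts to a sylvester-type matrix equation a x - x b = c, uniquely solvable when a and b share no eigenvalues (the generic case). the paper works symbolically with a and b in jordan form; purely as a numeric illustration of the same equation (an assumption, not the authors' code), scipy's standard solver for a x + x b = q can be reused.

```python
import numpy as np
from scipy.linalg import solve_sylvester

a = np.array([[2.0, 1.0], [0.0, 2.0]])     # jordan-like upper-triangular block
b = np.array([[0.5, 0.0], [0.0, -1.0]])
c = np.array([[1.0, 0.0], [2.0, 3.0]])

# a x + x(-b) = c   is the same equation as   a x - x b = c
x = solve_sylvester(a, -b, c)
assert np.allclose(a @ x - x @ b, c)
print(x)
```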
constructing stable force-closure grasps van-duc nguyen on the design and evaluation of a multi-dimensional approach to information retrieval (poster session) we present a method of searching text collections that takes advantage of hierarchical information within documents and integrates searches of structured and unstructured data. we show that multidimensional databases (mdb), designed for accessing data along hierarchical dimensions, are effective for information retrieval. we demonstrate a method of using on-line analytic processing (olap) techniques on a text collection. this combines traditional information retrieval and the slicing, dicing, drill-down, and roll-up of olap. we demonstrate use of a prototype for searching documents from the trec collection. m. catherine mccabe jinho lee abdur chowdhury david grossman ophir frieder digital design of a surgical simulator for interventional mr imaging (case study) we present the design of a simulator for a prototype interventional magnetic resonance imaging scanner. this mri scanner is integrated with an operating theater, enabling new techniques in minimally invasive surgery. the simulator is designed with a threefold purpose: (1) to provide a rehearsal apparatus for practicing and modifying conventional procedures for use in the magnetic environment, (2) to serve as a visualization workstation for procedure planning and previewing as well as a post-operative review, and (3) to form the foundation of a laboratory workbench for the development of new surgical tools and procedures for minimally invasive surgery. the simulator incorporates preoperative data, either mri or ct exams, as well as data from commercial surgical planning systems. dynamic control of the simulation and interactive display of preoperative data in lieu of intra-operative data is handled via an opto-electronic tracking system. the resulting system is contributing insights into how best to perform visualization for this new surgical environment. terry s. yoo penny rheingans spatial frames robert jensen rule-based versus structure-based models for explaining and generating expert behavior flexible representations are required in order to understand and generate expert behavior. although production rules with quantifiers can encode experiential knowledge, they often have assumptions implicit in them, making them brittle in problem scenarios where these assumptions do not hold. qualitative models achieve flexibility by representing the domain entities and their interrelationships explicitly. however, in problem domains where assumptions underlying such models change periodically, it is necessary to be able to synthesize and maintain qualitative models in response to the changing assumptions. in this paper we argue for a representation that contains partial model components that are synthesized into qualitative models containing entities and relationships relevant to the domain. the model components can be replaced and rearranged in response to changes in the task environment. we have found this "model constructor" to be useful in synthesizing models that explain and generate expert behavior, and have explored its ability to support decision making in the problem domain of business resource planning, where reasoning is based on models that evolve in response to changing external conditions or internal policies. vasant dhar harry e.
pople simulation of a signal quality survey douglas j. morrice peter w. mullarkey astrid s. kenyon herb schwetman jingfang zhou energy constraints on parameterized models andrew witkin kurt fleischer alan barr geometric primitives a. p. rockwood balancing fusion, image depth and distortion in stereoscopic head-tracked displays zachary wartell larry f. hodges william ribarsky an axiomatic basis for general discrete-event modeling sanjai narain reflectance and texture of real-world surfaces in this work, we investigate the visual appearance of real-world surfaces and the dependence of appearance on the geometry of imaging conditions. we discuss a new texture representation called the btf (bidirectional texture function) which captures the variation in texture with illumination and viewing direction. we present a btf database with image textures from over 60 different samples, each observed with over 200 different combinations of viewing and illumination directions. we describe the methods involved in collecting the database as well as the importance and uniqueness of this database for computer graphics. a related quantity to the btf is the familiar brdf (bidirectional reflectance distribution function). the measurement methods involved in the btf database are conducive to simultaneous measurement of the brdf. accordingly, we also present a brdf database with reflectance measurements for over 60 different samples, each observed with over 200 different combinations of viewing and illumination directions. both of these unique databases are publicly available and have important implications for computer graphics. kristin j. dana bram van ginneken shree k. nayar jan j. koenderink remindings and their effects in learning a text editor how can learning in text-editing be characterized? much recent work has focused on the use of analogies from prior experience. in this paper, we investigate the retrievals of earlier experiences within the editor and how they might be used by analogy to accomplish the task and learn the editor. an experiment is presented that demonstrates the effects of these "remindings" on performance. in addition, some possible determinants of these remindings are investigated. this experiment points out the need to consider not only the general form of instruction, but also the specifics of the instructional sequence as well. irrelevant aspects of the task may have strong effects on performance. we consider three teaching techniques, designed to take advantage of these effects in different ways. brian h. ross thomas p. moran new techniques for ray tracing procedurally defined objects we present new algorithms for efficient ray tracing of three procedurally defined objects: fractal surfaces, prisms, and surfaces of revolution. the fractal surface algorithm performs recursive subdivision adaptively. subsurfaces which cannot intersect a given ray are culled from further consideration. the prism algorithm transforms the three dimensional ray-surface intersection problem into a two dimensional ray-curve intersection problem, which is solved by the method of strip trees. the surface of revolution algorithm transforms the three dimensional ray-surface intersection problem into a two dimensional curve-curve intersection problem, which again is solved by strip trees. james t. kajiya a lipschitz method for accelerated volume rendering barton t. stander john c.
hart impact of general systems orientation: present and future the present impact of general systems orientation, i.e., theory and approach, on the simulation community is briefly outlined and assessed. prospects for the future are contingent upon the evolution of educational and corporate settings conducive to "modelling in the large" methodologies. bernard p. zeigler integration of knowledge and method in real-world discovery jan m. zytkow the holodeck interactive ray cache gregory ward larson maryann simmons constellation: a wide-range wireless motion-tracking system for augmented reality and virtual set applications eric foxlin michael harrington george pfeifer inductive inference: theory and methods dana angluin carl h. smith automated heuristic analysis of spectroscopic data hank simon autonomous mobile manipulators managing perception and failures we present a control system for autonomous mobile manipulators based on a theory of actions integrated with a theory of perception and failures. the system applies to autonomous manipulators built for simple missions like building towers of blocks, collecting balls of paper and putting them into a bin, or clearing obstacles from a path. for these application domains we have built three autonomous agents: armhand0, armhandone and a.r.r.i.g.o. the core of the system is a high level program controlling the on-line agent behaviour: while a goal is not achieved, it selects a task from a library of possible subplans and executes it. the task, when correctly chosen, must lead to a subgoal position. at the end of each task execution, visual perception is used to monitor the coherence between the configuration of the domain and the configuration entailed as a consequence of the execution of the task. in case of a misalignment between the predicted state and the perceived one, a diagnostic procedure is activated. alberto finzi fiora pirri marco pirrone massimo romano milko vaccaro zippered polygon meshes from range images range imaging offers an inexpensive and accurate means for digitizing the shape of three-dimensional objects. because most objects self occlude, no single range image suffices to describe the entire object. we present a method for combining a collection of range images into a single polygonal mesh that completely describes an object to the extent that it is visible from the outside. the steps in our method are: 1) align the meshes with each other using a modified iterated closest-point algorithm, 2) zipper together adjacent meshes to form a continuous surface that correctly captures the topology of the object, and 3) compute local weighted averages of surface positions on all meshes to form a consensus surface geometry. our system differs from previous approaches in that it is incremental; scans are acquired and combined one at a time. this approach allows us to acquire and combine large numbers of scans with minimal storage overhead. our largest models contain up to 360,000 triangles. all the steps needed to digitize an object that requires up to 10 range scans can be performed using our system with five minutes of user interaction and a few hours of compute time. we show two models created using our method with range data from a commercial rangefinder that employs laser stripe technology. 
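a sketch of one iteration of the alignment step named in the zippered-meshes abstract above: pair each point of one scan with its nearest neighbor on the other, then apply the least-squares rigid motion between the pairs via the svd-based (kabsch) construction. the real pipeline iterates this, rejects poor pairs, and operates on meshes rather than bare points; the helper names are illustrative.

```python
import numpy as np

def best_rigid_motion(p, q):
    """least-squares rotation r and translation t with r @ p_i + t ~ q_i
    for paired (n, 3) point sets, via the svd (kabsch) construction."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    u, _, vt = np.linalg.svd((p - cp).T @ (q - cq))
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cq - r @ cp

def icp_step(src, dst):
    """one iterated-closest-point step: match each src point to its nearest
    dst point (brute force), then move src by the best rigid motion."""
    nearest = dst[np.argmin(((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1), axis=1)]
    r, t = best_rigid_motion(src, nearest)
    return src @ r.T + t
```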
greg turk marc levoy language learning from texts (extended abstract): mind changes, limited memory and monotonicity efim kinber frank stephan plenoptic stitching: a scalable method for reconstructing 3d interactive walk throughs interactive walkthrough applications require detailed 3d models to give users a sense of immersion in an environment. traditionally these models are built using computer-aided design tools to define geometry and material properties. but creating detailed models is time-consuming and it is also difficult to reproduce all geometric and photometric subtleties of real-world scenes. computer vision attempts to alleviate this problem by extracting geometry and photogrammetry from images of the real-world scenes. however, these models are still limited in the amount of detail they recover. image-based rendering generates novel views by resampling a set of images of the environment without relying upon an explicit geometric model. current such techniques limit the size and shape of the environment, and they do not lend themselves to walkthrough applications. in this paper, we define a parameterization of the 4d plenoptic function that is particularly suitable for interactive walkthroughs and define a method for its sampling and reconstructing. our main contributions are: 1) a parameterization of the 4d plenoptic function that supports walkthrough applications in large, arbitrarily shaped environments; 2) a simple and fast capture process for complex environments; and 3) an automatic algorithm for reconstruction of the plenoptic function. daniel g. aliaga ingrid carlbom xavier: experience with a layered robot architecture reid g. simmons richard goodwin karen zita haigh sven koenig joseph o'sullivan manuela m. veloso real-time performance of intelligent autonomous agents (abstract) anne collinot barbara hayes-roth detecting and diagnosing mistake in inexact vision-based navigation (dissertation) elizabeth r. stuck adaptive estimation of optical flow from general object motion zhaoxin pan joseph j. pfeiffer replicated objects in time warp simulations divyakant agrawal jonathan r. agre aggregating criteria with quantifiers we are here concerned with the problem of forming multiple criteria decision functions when the aggregation process is controlled by a linguistic quantifier. r r yager a radiosity method for non-diffuse environments david s. immel michael f. cohen donald p. greenberg gaia santi fort a demonstration of a legal reasoning system based on teleological analogies in this article, we demonstrate our analogical legal reasoning system based on a teleological approach to interpret laws, using an actual example. by this demonstration, we show the validity of our approach. the example is based on a real legal problem and consists of an actual case, the actual decision on the case by the japanese supreme court and two major doctrines on the case in japan. the problem and the doctrines are also analyzed from the viewpoint of gda (goal-dependent abstraction) framework in this article. we further show that our system using gda can provide helpful information to evaluate and revise interpretations of legal rules. tokuyasu kakuta makoto haraguchi a knowledge-based electronic information and documentation system we describe the capabilities of a knowledge-based system to automatically generate a collection of electronic notebooks containing various forms of online documentation and reports. this system is a subsystem of a larger knowledge-based system called scinapse. 
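a small sketch of the quantifier-guided aggregation in yager's abstract above: an ordered weighted averaging (owa) operator whose weights come from a monotone quantifier q via w_i = q(i/n) - q((i-1)/n), applied to the criteria scores sorted by rank. the particular quantifier used below for "most" is an illustrative choice, not taken from the paper.

```python
def owa(scores, quantifier):
    """aggregate criteria satisfaction scores in [0, 1] with an owa operator
    whose weights are derived from a monotone quantifier q: [0,1] -> [0,1]."""
    n = len(scores)
    weights = [quantifier((i + 1) / n) - quantifier(i / n) for i in range(n)]
    ordered = sorted(scores, reverse=True)          # owa weights attach to ranks
    return sum(w * s for w, s in zip(weights, ordered))

most = lambda r: r ** 2           # one common model of the quantifier "most"
print(owa([0.9, 0.4, 0.7], most))          # emphasizes the lower-ranked scores
print(owa([0.9, 0.4, 0.7], lambda r: r))   # q(r) = r reduces to the plain mean
```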
scinapse's raison d'etre is to transform high-level simulation problem specifications into executable numerical programs. the electronic notebooks are generated from the same domain knowledge bases that the system uses to perform its primary tasks. these online notebooks are of two different kinds: reference materials and reports. reference materials are generated from the latest version of the knowledge base, which includes the classes that drive the system, and a network of objects representing meta-information about the system. the reference materials document the system's capabilities and help users understand what the system can do. reports are generated from the instances created by a run of the system. they document the transformations the input specification underwent in becoming code, and are intended to help a user understand what the system has done. we have found that our approach to producing documents has both advantages and disadvantages when compared with more traditional approaches to documentation. the advantages are that we can minimize the manual effort that is involved in writing documentation about the system, while at the same time maximizing the accuracy of the documentation that is produced. the main disadvantage has been the lack of truly appropriate authoring tools built to work in our environment. when we began, we expected the task of creating such authoring tools to be much easier than it has turned out to be. later in this paper, we explore some of the factors that have caused this to be the case. robert l. young elaine kant larry a. akers deadlock detection and resolution in simulation models murali krishnamurthi amar basavatia sanjeev thallikar simulation and control of reactive systems pawel gburzynski jacek maitan reasoning about sensing actions and reactivity son cao tran links amruth kumar "determining operational policies for oil flow and tanker loading through simulation" in this paper the use of simulation in addressing the specific needs of one oil company's tanker scheduling problem is examined. the situation is actually more involved than simply tanker scheduling. the flow of oil from the wells to the storage facility must also be considered. this flow is constrained by governmental regulations and profit considerations. a network-discrete event-continuous simulation model, written in slam, was developed to address certain issues of interest to the oil company. these were issues such as a desirable mean time between tanker arrivals and the adequacy of the current storage facility. the latter was also of interest to the government involved, since there were usually tankers waiting to be loaded. the results show that a tanker arrival schedule can be developed and that the current storage facility is adequate to meet this company's needs. c. p. koelling w. h. remy a formal treatment of distributed matchmaking (poster) somesh jha prasad chalasani onn shehory katia sycara get real!: global illumination for film, broadcast, and game production stuart feldman craig barron scott lelieur george murphy dave walvoord subproblem finder and instance checker, two cooperating modules for theorem provers properties are proved about instance, a theorem prover module that recognizes that a formula is a special case and/or an alphabetic variant of another formula, and about insurer, another theorem prover module that decomposes a problem, represented by a formula, into independent subproblems, using a conjunction.
the main result of instance is soundness; the main result of insurer is a maximum decomposition into subproblems (with some provisos). experimental results show that a connection graph theorem prover extended with these modules is more effective than the resolution-based connection graph theorem prover alone. dennis de champeaux shape from projected light grid (abstract only) an algorithm is proposed to obtain local surface orientation from the distortion of the projected light stripes in the image of the surface. only partial camera calibration is required for the computations of surface normals. only the directions of the optical axis and the projector axis are required from the 3-d scene for this algorithm. a mapping is defined from points in the image to points on the gaussian sphere associated with the object. this mapping is based on the measurements of the distorted "quadrilaterals" in the image. experimental results are obtained both for planar and curved surfaces. there are many "shape from" methods discussed in the literature. surface normals can be obtained by calculating the spatial variation of brightness (shape from shading [4] ); by noting the distortion of texture elements (shape from texture [1]); by using range data directly (shape from stereo [3]); or by using the image of a moving object (shape from motion [2]). our approach is similar to shape from texture, but we use externally imposed light stripes to simulate texture. thus we can guarantee "texture" elements of known uniform sizes. we project a grid of light stripes on the scene. knowledge of optical axis and projector axis is assumed. we further make the assumption of parallel projection. the light grids are observed as distorted quadrilaterals in the image. the amount of distortion is a function of the normal to the surface on which the grid is being projected. lengths of the sides of these quadrilaterals measured in the image are then used to calculate the local surface normal. the algorithm can be used both on planar and curved surfaces. neelima shrikhande a thinning algorithm by contour generation a new contour generating serial algorithm is faster and more efficient than conventional contour tracing and parallel algorithms paul kwok automated team analysis taylor raines milind tambe stacy marsella a comparison of two methods for advancing time in parallel discrete event simulation anthony p. galluscio john t. douglass brian a. malloy a. joe turner dialaw: a dialogical framework for modeling legal reasoning arno r. lodder aimee herczog time management in the dod high level architecture richard m. fujimoto richard m. weatherly feynmann diagrams and spreading illusions "schwinger's quantum electrodynamics and feynmann's may have been mathematically the same, but one was conservative and the other revolutionary. one extended an existing line of thought. the other broke with the past decisively enough to mystify its intended audience. one represented an ending: a mathematical style doomed to be fatally overcomplex. the other, for those willing to follow feynmann into a new style of visualization, served as a beginning. feynmann's style was risky, even megalomaniacal." [1] thomas g. west a smart imager for the vision processing front-end noriaki takeda mituru homma makoto nagata takashi morie atsushi iwata integrating shape and pattern in mammalian models the giraffe and its patches, the leopard and its spots, the tiger and its stripes are spectacular examples of the integration of a pattern and a body shape. 
we present an approach that integrates a biologically-plausible pattern generation model, which can effectively deliver a variety of patterns characteristic of mammalian coats, and a body growth and animation system that uses experimental growth data to produce individual bodies and their associated patterns automatically. we use the example of the giraffe to illustrate how our approach takes us from a canonical embryo to a full adult giraffe in a continuous way, with results that are not only realistic looking, but also objectively validated. the flexibility of the approach is demonstrated by examples of big cat patterns, including an interpolation between patterns. the approach also allows a considerable amount of user control to fine-tune the results and to animate the resulting body with the pattern. marcelo walter alain fournier daniel menevaux summarizing text documents: sentence selection and evaluation metrics jade goldstein mark kantrowitz vibhu mittal jaime carbonell trends in semiconductor hardware for graphics systems (panel) during the next 5 - 10 years, text and graphic systems will tend to merge because of the demand for a more productive man/machine interface, failing memory costs and the availability of higher performance vlsi controllers. this panel will discuss video controllers, memory components and their architectures, graphic systems configurations and the evolution of enhanced system performance versus reduced system cost. henry fuchs illumination networks: fast realistic rendering with general reflectance functions chris buchalew donald fussell speedlines: depicting motion in motionless pictures maic masuch volume tracking deborah silver x. wang graphics growth at clemson university interest in computer graphics has been steadily growing at clemson university and has led to a very rapid expansion of graphics hardware and software. this paper is divided into four parts: • the history of our graphics hardware acquisitions with comments about the advantages and disadvantages of particular devices. • a discussion of software that runs on our system from the perspective of how it shaped our growth. • a discussion of the amount of resources dedicated by academic computing support (acs, our user support group) to graphics. • a discussion of future growth. gabriel acebo object recognition through automated tactile sensing (abstract only) convex polyhedra were modelled as generalized cylinders (gcs) and programs were written to recognize them through single-sensor tactile exploration. the system was designed to recognize objects that vary in size and orientation. the sensor for which the programs were written is the lord corp. lts-200, which has a flat 16 by 10 array of sensors and a flexible "skin". from a sensor impression the software can recognize lines, corners, and edges. these features are used to guide exploration and to construct representations of polyhedron faces. a gc is constructed from descriptions of faces on opposite sides of the object, the gc description includes face descriptions, the axis length, the sweeping rule, and the degree of tilt of the axis relative to the faces. a list of standard gc descriptions is searched to make an identification. the procedures have been tested with simulated tactile impressions to discover the tolerance of the system for noise. tests on real tactile input are in progress. this work is supported by nsf grant ecs-8216572 to case western reserve university. 
william mcmillan summed-area tables for texture mapping texture-map computations can be made tractable through use of precalculated tables which allow computational costs independent of the texture density. the first example of this technique, the "mip" map, uses a set of tables containing successively lower-resolution representations filtered down from the discrete texture function. an alternative method using a single table of values representing the integral over the texture function rather than the function itself may yield superior results at similar cost. the necessary algorithms to support the new technique are explained. finally, the cost and performance of the new technique are compared to previous techniques. franklin c. crow software for simulation jerry banks merging 3-d graphics and imaging: applications and issues william r. pickering topologically reliable display of algebraic curves an algebraic curve is a set of points in the plane satisfying an equation f(x,y) = 0, where f(x,y) is a polynomial in x and y with rational number coefficients. the topological structure of an algebraic curve can be complicated. it may, for example, have multiple components, isolated points, or intricate self-crossings. in the field of computer algebra (symbolic mathematical computation), algorithms for exact computations on polynomials with rational number coefficients have been developed. in particular, the cylindrical algebraic decomposition (cad) algorithm of computer algebra determines the topological structure of an algebraic curve, given f(x,y) as input. we describe methods for algebraic curve display which, by making use of the cad algorithm, correctly portray the topological structure of the curve. the running times of our algorithms consist almost entirely of the time required for the cad algorithm, which varies from seconds to hours depending on the particular f(x,y). dennis s. arnon execution monitoring and diagnosis in multi-agent environments gal a. kaminka learning with malicious membership queries and exceptions (extended abstract) we consider two issues in polynomial-time exact learning of concepts using membership and equivalence queries: (1) malicious errors in the answers to membership queries and (2) learning finite variants of concepts drawn from a learnable class. dana angluin martins krikis from logic to stochastic processes (abstract only) prakash panangaden integrating multiagent coordination with reactive plan execution jeffrey s. cox bradley j. clement pradeep m. pappachan edmund h. durfee requirements for machine learning in expert systems (abstract) as dietterich pointed out at the 1989 machine learning workshop, the field needs analyses of the kinds of tasks that require machine learning (ml), and of the suitability of existing ml techniques for solving real problems. an area often mentioned as one that would benefit from ml is that of expert systems. learning from experience could alleviate the knowledge acquisition "bottleneck". although there have been some impressive successes in learning practical knowledge from examples, existing ml techniques lack the power to learn to conduct different phases of an expert "consultation". there are usually many dozens of variables to consider in an expert task, and their relevance changes as the consultation evolves. ml systems such as id3, aq11, and neural networks bog down quickly as extraneous variables are introduced.
it is not practical to submit all data derived from an example consultation to such programs, expecting them to produce monolithic knowledge structures. based on our work in developing a diagnostic expert system, we offer suggestions for future work in ml. one is that the system be given knowledge that is easily available from a human expert. the ml program should be given a "proto-rule" for a specific decision, indicating the most relevant variables to be used by the rule and its "roughcut" logic. such knowledge can be gotten from an expert quickly. ml routines can then edit the logic and consider other variables as examples are considered. search is reduced drastically. other suggestions include ways to combine ml techniques. christopher j. gardiner william w. mcmillan uflic: a line integral convolution algorithm for visualizing unsteady flows han-wei shen david l. kao graphics goodies #2 - a simple, versatile procedural texture lloyd burchill learning with queries but incomplete information (extended abstract) we investigate learning with membership and equivalence queries assuming that the information provided to the learner is incomplete. by incomplete we mean that some of the membership queries may be answered by "i don't know." this model is a worst-case version of the incomplete membership query model of angluin and slonim. it attempts to model practical learning situations, including an experiment of lang and baum that we describe, where the teacher may be unable to answer reliably some queries that are critical for the learning algorithm. we present algorithms to learn monotone k-term dnf with membership queries only, and to learn monotone dnf with membership and equivalence queries. compared to the complete information case, the query complexity increases by an additive term linear in the number of "i don't know" answers received. we also observe that the blowup in the number of queries can in general be exponential for both our new model and the incomplete membership model. robert h. sloan györgy turán continuous categories for a mobile robot michael t. rosenstein paul r. cohen plaid: proactive legal assistance t. j. m. bench-capon g. staniford geometric compression through topological surgery the abundance and importance of complex 3-d data bases in major industry segments, the affordability of interactive 3-d rendering for office and consumer use, and the exploitation of the internet to distribute and share 3-d data have intensified the need for an effective 3-d geometric compression technique that would significantly reduce the time required to transmit 3-d models over digital communication channels, and the amount of memory or disk space required to store the models. because the prevalent representation of 3-d models for graphics purposes is polyhedral and because polyhedral models are in general triangulated for rendering, this article introduces a new compressed representation for complex triangulated models and simple, yet efficient, compression and decompression algorithms. in this scheme, vertex positions are quantized within the desired accuracy, a vertex spanning tree is used to predict the position of each vertex from 2,3, or 4 of its ancestors in the tree, and the correction vectors are entropy encoded. properties, such as normals, colors, and texture coordinates, are compressed in a similar manner. the connectivity is encoded with no loss of information to an average of less than two bits per triangle. 
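as an aside, a minimal python sketch of the quantize-and-predict step just described for the compressed triangulated models: positions are snapped to an integer grid and each vertex is encoded as a small correction from a prediction based on its spanning-tree ancestors. the uniform quantizer, the simple averaging predictor, and all helper names are illustrative assumptions, not taubin and rossignac's actual encoder.

# illustrative sketch: quantization plus spanning-tree prediction of vertices.
# the averaging predictor and helper names are assumptions for this example.

def quantize(coord, lo, hi, bits=10):
    """map a coordinate into an integer grid of 2**bits cells."""
    scale = (2 ** bits - 1) / (hi - lo)
    return round((coord - lo) * scale)

def corrections(vertices, parent, bounds, bits=10):
    """encode each vertex as a delta from the average of up to 3 tree ancestors."""
    lo, hi = bounds
    q = [[quantize(c, lo, hi, bits) for c in v] for v in vertices]
    deltas = []
    for i, v in enumerate(q):
        ancestors, j = [], i
        while parent[j] is not None and len(ancestors) < 3:
            j = parent[j]
            ancestors.append(q[j])
        if ancestors:
            pred = [sum(a[k] for a in ancestors) // len(ancestors) for k in range(3)]
        else:
            pred = [0, 0, 0]
        deltas.append([v[k] - pred[k] for k in range(3)])
    return deltas   # small integers, suitable for entropy coding

# toy usage: a short chain of four vertices inside the unit cube
verts = [(0.10, 0.10, 0.10), (0.12, 0.10, 0.11), (0.15, 0.11, 0.10), (0.20, 0.12, 0.10)]
parents = [None, 0, 1, 2]
print(corrections(verts, parents, (0.0, 1.0)))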
the vertex spanning tree and a small set of jump edges are used to split the model into a simple polygon. a triangle spanning tree and a sequence of marching bits are used to encode the triangulation of the polygon. our approach improves on michael deering's pioneering results by exploiting the geometric coherence of several ancestors in the vertex spanning tree, preserving the connectivity with no loss of information, avoiding vertex repetitions, and using about three fewer bits for the connectivity. however, since decompression requires random access to all vertices, this method must be modified for hardware rendering with limited onboard memory. finally, we demonstrate implementation results for a variety of vrml models with up to two orders of magnitude compression. gabriel taubin jarek rossignac results of the nineteenth acm north american computer chess championship monty newborn danny kopec atmospheric illumination and shadows nelson l. max marsupial-like mobile robot societies robin r. murphy michelle ausmus magda bugajska tanya ellis tonia johnson nia kelley jodi kiefer lisa pollock fast phong shading gary bishop david m. weimer interactive ray tracing steven parker william martin peter-pike j. sloan peter shirley brian smits charles hansen mesh reduction with error control reinhard klein gunther liebich wolfgang straßer discrete-event simulation and the event horizon the event horizon is a very important concept that is useful for both parallel and sequential discrete-event simulations. by exploiting the event horizon, parallel simulations can process events in a manner that is risk-free (i.e., no antimessages) in adaptable "breathing" time cycles with variable time widths. additionally, exploiting the event horizon can greatly reduce the event list management overhead that is common to virtually all discrete-event simulations. this paper develops an analytic model describing the event horizon from first principles using equilibrium considerations and the hold model (where each event, when consumed, generates a single new event with future-time statistics described by a known probability function). exponential and beta-density functions are used to verify the mathematics presented in this paper. jeff s. steinman modelling hybrid rule/frame-based expert systems using coloured petri nets simon c. k. shiu james n. k. liu daniel s. yeung esplex: a rule and conceptual model for representing statutes the characteristics of the esplex system which may be defined as a "rule and conceptual based model" are illustrated, together with the possibilities for its utilization, its similarities with other existing projects, and the requisites of the knowledge representation language. the methodology and the theoretical propositions that have led to the definition of the representation language are therefore explained. the characteristics of the system which manages the knowledge base are also described and a brief comment is made regarding future development. c. biagioli p. mariani d. tiscornia graphics standards for three-dimensional modelling much effort has been spent in defining a standard call interface for a library of routines with display primitives embedded in a three-dimensional space. this paper argues that this is now an inappropriate target. there is too much diversity in the three-dimensional things we want to draw and in the kinds of picture we want to draw of them for any single standard to be both broadly acceptable and specifically useful. 
if the decision is made to go for a set of specialized standards for wire frame and for rendered graphics for data presentation, for sculptured surfaces and for object models, it becomes apparent that far more can be gained from a higher level standard, at the data presentation or object modelling level. in the object modelling community, at least, there is indeed work toward such a standard based on the iges proposals. this has very few concepts in common with the graphics standard effort, and it is not clear at present what contribution the graphics community has to make in this area. the suggestion made here is that there may well be a number of paragraphs or sections common to more than one of the graphics-related and higher level standards. one such is the coordinate transformation (the dreaded 4 × 4 matrix). concepts are suggested for its definition and its use in both modelling and viewing that resolve some of the black art associated with transformations in the past. these could become the conceptual basis for a short simple standard in this area. malcolm sabin meld orbs scott petill the role of polymorphism in class evolution in the devs-scheme environment tag gon kim extracting legal knowledge by means of a multilayer neural network application to municipal jurisprudence laurent bochereau danièle bourcier paul bourgine three-dimensional beans - creating web content using 3d components in a 3d authoring environment this paper deals with the question of how the component idea can be transferred to the authoring of 3d content for the www. the concept of 3d beans and their corresponding authoring environment is presented. in addition, an implementation of this concept using java3d and java beans is described. advantages of the concept are discussed and illustrated with an application example from the area of computer-based training. major advantages of the 3d beans concept are on the one hand that 3d content can be created in a virtual environment more directly and efficiently using pre-fabricated components that fit together, especially as the author is supported by a bean authoring environment that itself uses information from the 3d beans. on the other hand, a 3d authoring environment offers more degrees of freedom for authoring component-based applications. ralf dörner paul grimm cachesim: a cache simulator for teaching memory hierarchy behaviour m. luisa córdoba cabeza m. isabel garcía clemente m. luz rubio using polymorphism to create complex agents brian guarraci generalized markup for literary analysis easy interchange of texts among humanistic scholars and development of standard analytical tools for these texts require a standard for marking features of documents for literary analysis. the standard generalized markup language has been used as a basis for a standard, and the results are compared against a set of requirements for such a tool. cheryl a. fraser david t. barnard george m. logan international technology transfer (panel) for the past two and one half decades, usa suppliers have generally dominated the computer graphics marketplace, providing an estimated 90 percent of computer graphics shipments worldwide. in the past few years, however, a number of non-usa sources have begun to develop products which not only have captured a growing percentage of their domestic markets, but also have begun to be competitive worldwide. usa companies have increasingly turned to non-usa sources for licenses and products.
this panel brings together executives from european, asian and south african companies to discuss the growing exchange of computer graphics technology among usa and non-usa suppliers. carl machover palka: a system for lexical knowledge acquisition jun-tae kim dan i. moldovan evolving intelligent text-based agents edmund s. yu ping c. koo elizabeth d. liddy undirected behavior without unbounded search ronald j. brachman hector j. levesque interactive technical illustration bruce gooch peter-pike j. sloan amy gooch peter shirley richard riesenfeld expertfit: total support for simulation input modeling averill m. law michael g. mccomas high-quality pre-integrated volume rendering using hardware-accelerated pixel shading we introduce a novel texture-based volume rendering approach that achieves the image quality of the best post-shading approaches with far less slices. it is suitable for new flexible consumer graphics hardware and provides high image quality even for low-resolution volume data and non-linear transfer functions with high frequencies, without the performance overhead caused by rendering additional interpolated slices. this is especially useful for volumetric effects in computer games and professional scientific volume visualization, which heavily depend on memory bandwidth and rasterization power. we present an implementation of the algorithm on current programmable consumer graphics hardware using multi-textures with advanced texture fetch and pixel shading operations. we implemented direct volume rendering, volume shading, arbitrary number of isosurfaces, and mixed mode rendering. the performance does neither depend on the number of isosurfaces nor the definition of the transfer functions, and is therefore suited for interactive high-quality volume graphics. klaus engel martin kraus thomas ertl simulation of facility designs on a micro-computer the design of a high technology production facility quite often hinges on the successful design of a material handling system to integrate the various automation centers. traditionally, management has been reluctant to invest the capital necessary for high technology material handling equipment since they have no guarantee of the performance potential. simulation of a proposed facility design, with a special emphasis on the material handling system is illustrated as a key method for evaluating the proposed design prior to the purchase of production process equipment. gerald t. mackulak neil glenney techniques for conic splines vaughan pratt dynamic view-dependent partitioning for structured grids with complex boundaries for object-order rendering techniques object-order rendering techniques present an attractive approach to run-time visualization of structured grid data, particularly when combined with a parallel rendering paradigm such as image composition. the ability of this combination to exploit hardware exceeds that of parallel image order methods. however, certain configurations of grid boundaries prevent composition from being performed correctly. in particular, when the boundary between two partitions contains concave sections, the partitions may no longer be depth sorted correctly, a requirement for some visualization techniques such as direct volume rendering. this occurs because the concave boundary prevents even the simple ordering of two adjacent partitions. if the data may be repartitioned such that it can be depth sorted correctly, then an image composition approach is a viable option. 
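a minimal python sketch of the image composition step that such depth-sorted partitioning enables: per-partition rgba renderings are merged back to front with the "over" operator. the premultiplied-alpha convention, the nested-list image layout, and the function names are assumptions for illustration, not the authors' implementation.

# hedged sketch: back-to-front "over" compositing of per-partition rgba images,
# assuming premultiplied alpha and an already-correct depth order of partitions.

def over(front, back):
    """composite one premultiplied rgba pixel over another."""
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    k = 1.0 - fa
    return (fr + k * br, fg + k * bg, fb + k * bb, fa + k * ba)

def composite(partition_images):
    """partition_images: list of 2-d pixel grids, ordered back to front."""
    result = [row[:] for row in partition_images[0]]
    for img in partition_images[1:]:
        for y, row in enumerate(img):
            for x, pixel in enumerate(row):
                result[y][x] = over(pixel, result[y][x])
    return result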
to facilitate such an operation, we present an algorithm to analyze the geometric structure of a grid boundary and extract knowledge about how the boundary impacts depth sorting and therefore image composition. we then show through examples how this knowledge may be applied to create a set of partitions that may be properly depth sorted. lance c. burton raghu machiraju donna s. reese applied probabilistic ai for online diagnosis of a safety-critical system based on a quality assurance program johannes lauber christian steger reinhold weiss image-based techniques for object removal rod g. bogart sikt: a structured interactive knowledge transfer program xindong wu "axe": a simulation environment for actor-like computations on ensemble architectures this paper introduces a set of experimental tools which have been designed to study dynamic run-time distribution of work on mesh-connected concurrent processors. the computation is modeled (and simulated) at the "operating system level". this environment is characterized by its fast turn-around time for model specification, simulation, as well as data collection. jerry c. yan stephen f. lundstrom gpss/h in the 1990s daniel t. brunner robert c. crain cognitive architectures and hci the cognitive architectures and human-computer interaction workshop examined computational cognitive modeling approaches to human-computer interaction issues (hci). the five major architectures and variations represented were briefly summarized. participants compared approaches to a set of selected hci problems and alternative solutions, and compared the strengths and weaknesses of the architectures. a list of additional issues was generated and discussed. susan s. kirschenbaum wayne d. gray richard m. young delta: an expert system to troubleshoot diesel electric locomotives in the last few years, expert systems have become the most visible and fastest growing branch of artificial intelligence. their objective is to capture the knowledge of an expert in a particular problem domain, represent it in a modular, expandable structure, and transfer it to other users in the same problem domain. to accomplish this goal, it is necessary to address issues of knowledge acquisition, knowledge representation, inference mechanisms, control strategies, user interface and dealing with uncertainty. piero p. bonissone universal editor unattainable c. s. yovev an analytical comparison of periodic checkpointing and incremental state saving the successful application of optimistic synchronization techniques in parallel simulation requires that rollback overheads be contained. the chief contributions to rollback overhead in a time warp simulation are the time required to save state information and the time required to restore a previous state. two competing techniques for reducing rollback overhead are periodic checkpointing (lin and lazowska, 1989) and incremental state saving (bauer et al., 1991). this paper analytically compares the relative performance of periodic checkpointing to incremental state saving. the analytical model derived for periodic checkpointing is based almost entirely on the previous model developed by lin (lin and lazowska, 1989). the analytical model for incremental state saving has been developed for this study. the comparison assumes an optimal checkpoint interval and shows under what simulation parameters each technique performs best. avinash c. palaniswamy philip a.
wilsey rolie polie olie pam lehn integrated volume compression and visualization tzi-cker chiueh chuan-kai yang taosong he hanspeter pfister arie kaufman a practical model for subsurface light transport this paper introduces a simple model for subsurface light transport in translucent materials. the model enables efficient simulation of effects that brdf models cannot capture, such as color bleeding within materials and diffusion of light across shadow boundaries. the technique is efficient even for anisotropic, highly scattering media that are expensive to simulate using existing methods. the model combines an exact solution for single scattering with a dipole point source diffusion approximation for multiple scattering. we also have designed a new, rapid image-based measurement technique for determining the optical properties of translucent materials. we validate the model by comparing predicted and measured values and show how the technique can be used to recover the optical properties of a variety of materials, including milk, marble, and skin. finally, we describe sampling techniques that allow the model to be used within a conventional ray tracer. henrik wann jensen stephen r. marschner marc levoy pat hanrahan portals and mirrors: simple, fast evaluation of potentially visible sets we describe an approach for determining potentially visible sets in dynamic architectural models. our scheme divides the models into cells and portals, computing a conservative estimate of which cells are visible at render time. the technique is simple to implement and can be easily integrated into existing systems, providing increased interactive performance on large architectural models. david luebke chris georges compiling rules from constraint satisfaction problem solving s. subramanian e. c. freuder hierarchical modular modelling in discrete simulation m. pidd r. bayer castro recognition of semantically incorrect rules: a neural-network approach a novel technique that applies the neural-network learning strategy of back- propagation to recognize semantically incorrect rules is presented. when the rule strengths of most rules are semantically correct, semantically incorrect rules can be recognized if their strengths are weakened or change signs after training with correct samples. in each training cycle, the discrepancies in the belief values of goal hypotheses are propagated backward and the strengths of rules responsible for such discrepancies are modified appropriately. a function called consistent-shift is defined for measuring the shift of a rule strength in the direction consistent with the strength assigned before training and is a critical component of this technique. the viability of this technique has been demonstrated in a practical domain. li-min fu design of the pen video editor display module pen, a new portable video editor, uses a number of simple but effective techniques. most are not new, but are unavailable in the literature. we will describe our goals for pen's display module, discuss implementation alternatives and describe in detail the techniques used in the editor. david r. barach david h. taenzer robert e. wells a concise presentation of itl nicola guarino building parallel time-constrained hla federates: a case study with the parsec parallel simulation language congduc pham rajive l. bagrodia alternative approaches for specifying input distributions and processes (panel session) w. david kelton bennett l. fox mark e. johnson averill m. law bruce w. schmeiser james r. 
wilson john meszaros cynthia l. morey susan e. romens fusion of gray scale and light striping in 2-d feature extraction an approach of extracting 2-d features of geometrical ions (geons) is presented, which uses the fused data obtained from gray scale images and structured light images. gongzhu hu neelima shrikhande adaptive isocurve-based rendering for freeform surfaces gershon elber elaine cohen kppcdl: an internet based shared environment for introductory programming education the karel++ collaborative laboratory is an internet based educational tool, which facilitates the learning of object-oriented programming techniques, by providing a shared development environment for the building of student programs written in the karel++ language. kppcdl offers remote sharing of karel++ program elements, collaborative source code editing, textual and graphical notification of both coarse and finely grained remote updates, remote and local views of developing program elements, updated views for late comers, background source parsing, and real-time memo sending. the system combines features of both centralized and replicated architectures, and provides for both synchronous and asynchronous collaboration. alfred j. rossi performance of temporal reasoning systems ed yampratoom james f. allen vertex-based anisotropic texturing mip mapping is a common method used by graphics hardware to avoid texture aliasing. in many situations, mip mapping over-blurs in one direction to prevent aliasing in another. anisotropic texturing reduces this blurring by allowing differing degrees of filtering in different directions, but is not as common in hardware due to the implementation complexity of current techniques. we present a new algorithm that enables anisotropic texturing on any current mip map graphics hardware supporting mip level biasing, available in opengl 1.2 or through the _gl_ext_texture_lod_bias_ or _gl_sgix_texture_lod_bias_ opengl extensions. the new algorithm computes anisotropic filter footprint parameters per vertex. it constructs the anisotropic filter out of several mip map texturing passes or multi-texture lookups. each lookup uses mip level bias and perturbed texture coordinates to place one probe used to construct the more complex filter profile. marc olano shrijeet mukherjee angus dorbie architectural walkthroughs using portal textures daniel g. aliaga anselmo a. lastra cost of state saving & rollback approaches to state saving and rollback for a shared memory, optimistically synchronized, simulation executive are presented. an analysis of copy state saving and incremental state saving is made and these two schemes are compared. two benchmark programs are then described, one a simple, all overhead, model and one a performance model of a regional canadian public telephone network. the latter is a large ss7 common channel signalling model that represents a very challenging, practical, test application for parallel simulation. experimental results are presented which show the necessity and sufficiency of incremental state saving for this application. john cleary fabian gomes brian unger zhonge xiao raimar thudt some notions on testing generated fault hypotheses yusuf wilajati purna takahira yamaguchi filtered noise and the fourth dimension geoff wyvill kevin novins multiperspective panoramas for cel animation daniel n. wood adam finkelstein john f. hughes craig e. thayer david h. 
salesin documentational analysis: or good common sense those who never documented their systems are suffering the consequences. in response your friendly marketplace has come up with some solutions to be purchased at the same price as a piece of software. it may or may not be partly software. it is a specific guide for generating documentation in as mechanical and predictable a method possible. if it is mechanical, it can be controlled, and that is what the product is: control. control of system analysis and design. the necessary by-product is documentation. the kind of analysis that documentation requires, falls under that mysterious grace called "good common sense." diana patterson tools and methods for group data modeling: a key enabler of enterprise modeling james d. lee douglas l. dean douglas r. vogel computer animation of knowledge-based human grasping hans rijpkema michael girard conflict resolution in inductive learning from examples ray r. ashemi mehdi razzaghi frederick r. jelovsek john r. talburt some experiments in object-oriented simulation discrete event simulation has been a constant source of inspiration for language designers. the present interest in object-oriented programming (o.o.p.) can be traced back to the simula 67 language. in return, simulation languages are currently getting back many advantages from the ongoing research in o.o.p. one of the main present research trends is the possibility of concurrently executing a simulation program composed of a set of interacting and communicating objects. conditions which will make this possible are discussed together with some induced problems. more precisely this paper presents some experiments in object-oriented simulation that reveal the great flexibility of the smalltalk-801 programming system for building evaluation environments. jean bezivin computational geometry column joseph o'rourke numerically stable implicitization of cubic curves john d. hobby lidar: reality capture eric wong dennis martin daniel chudak alan lasky grant mckinney lisa simon-parker benedikt wolff guy cutting wilvia uchida ben kacyra barbara kacyra reactive agents for adaptive image analysis jiming liu feature sensitive surface extraction from volume data the representation of geometric objects based on volumetric data structures has advantages in many geometry processing applications that require, e.g., fast surface interrogation or boolean operations such as intersection and union. however, surface based algorithms like shape optimization (fairing) or freeform modeling often need a topological manifold representation where neighborhood information _within_ the surface is explicitly available. consequently, it is necessary to find effective conversion algorithms to generate explicit surface descriptions for the geometry which is implicitly defined by a volumetric data set. since volume data is usually sampled on a regular grid with a given step width, we often observe severe alias artifacts at sharp features on the extracted surfaces. in this paper we present a new technique for surface extraction that performs feature sensitive sampling and thus reduces these alias effects while keeping the simple algorithmic structure of the standard marching cubes algorithm. we demonstrate the effectiveness of the new technique with a number of application examples ranging from csg modeling and simulation to surface reconstruction and remeshing of polygonal models. leif p. 
kobbelt mario botsch ulrich schwanecke hans-peter seidel validation of simulation models: the weak/missing link the validation of the simulation model is generally acknowledged as an integral part of a simulation project. there is, however, no general agreement on how simulation models should be verified and there is often confusion as to the difference between validation and verification. in this paper we first set forth a framework for verification and validation as well as some of the more commonly suggested methods to be used. we then turn to the literature on the application of simulation to see how models are, in practice, verified and validated. to our surprise we found that in the vast majority of the reported applications of simulation there is no mention of verification or validation of the simulation model, and when it is included, typically a single sentence or two is all that is devoted to how the model's credibility was established. we classified reported applications by type of author, organization type and source and found no significant difference in the frequency of the inclusion of verification or validation as part of the reported application. stewart v. hoover ronald f. perry kat: a knowledge acquisition tool for acquiring functional knowledge based upon the no-causality-in-function principle congxiao lu david j. russomanno cooperative multi-agent intelligent field terminals we have developed a method for improving cooperation in complex systems that uses multi- agent (ma) intelligent field terminals (ifts). the ma function evaluates the control conditions of the overall system and the conditions of the other ifts. to shorten the turn- around time for data transfer among ifts, the conflicts that occur when the data processed by different ifts is inconsistent or irregular must be resolved autonomously. we thus incorporate a predictive agent in each ift, and these agents cooperate to resolve the conflicts. experimental results showed that this method not only provides adequate controls but also reduces the load on the network and the turn- around time when the number of ifts is less than 30. juichi kosakaya katsunori yamaoka some heuristics for playing mastermind k yue irs: a hierarchical knowledge based system for aerial image interpretation a knowledge based architecture for the interpretation of aerial images is presented. the image recognition system (irs) utilises a multiresolution perceptual clustering methodology as a robust alternative to the more traditional edge or region based approaches. initially, data driven feature generation and primary perceptual clustering is performed independently for two or more reduced resolution versions of the image. a rule based frame system (rbfs) is then used to instantiate more complex geometrical structures from symbolic multiresolution feature representations. final interpretation is achieved by using knowledge of contextual relations between objects in the domain. steve cosby ray thomas using simulation to optimize solar greenhouse design this paper discusses the development and use of a thermal greenhouse model which was written in the dynamo computer simulation language. the model is used in designing an energy-efficient passive stand-alone greenhouse for use in the southern california area. the design process and model are readily adaptable for use in other climatic regions and for studying attached greenhouses. robert d. 
engel the role of frame-based representation in reasoning a frame-based representation facility contributes to a knowledge system's ability to reason and can assist the system designer in determining strategies for controlling the system's reasoning. richard fikes tom kehler a comparative analysis of partial order planning and task reduction planning subbarao kambhampati business process modeling with simprocess scott swegles a task-based architecture for application-aware adjuncts users of complex applications need advice, assistance, and feedback while they work. we are experimenting with "adjunct" user agents that are aware of the history of interaction surrounding the accomplishment of a task. this paper describes an architectural framework for constructing these agents. using this framework, we have implemented a critiquing system that can give task-oriented critiques to trainees while they use operating system tools and software applications. our approach is generic, widely applicable, and works directly with off-the-shelf software packages. robert farrell peter fairweather eric breimer live computer animation (panel) tim heidmann ai planning for hazard action response todd mansell elizabeth sonenberg grahame smith the symptom-component approach to knowledge acquisition j. bradley k. harbison-briggs artist discoveries and graphical histories thomas g. west a sequential reversible belief revision method based on polynomials salem benferhat didier dubois odile papini interactive techniques for implicit modeling jules bloomenthal brian wyvill the simulation of natural phenomena (panel session) this panel will discuss the issues and the problems associated with the simulation of natural phenomena. this is a difficult area of research since it generally involves complex data bases and in many instances time variant phenomena. the computational loads can become enormous as one considers the physics or the mathematical modeling of structures. most items in nature, trees, clouds, fire and comets being some examples, have not been displayed realistically in computer graphics. this lack stems from a few different problems, all of which are significant. the first is the fact that realistic portrayals require large amounts of storage and consequently large compute time. nature is able to create diverse detail at the most minute levels within an object of grandiose scale. the second problem is that of diversity of design within a given framework. for example, if a scene requires two dozen poplar trees, how does the designer construct trees that look different but are undeniably poplars? humans typically become tired after the first few iterations of such a design process, with a resulting degradation in the subsequent models. clearly, this problem applies to all of the phenomena mentioned above. finally, there is a lack of models. first, second and third order representations are commonly used in computer graphics to model various kinds of surfaces and their boolean combinations. however, their application to objects which do not lend themselves well to being described as surfaces has not been addressed sufficiently. previous attempts at realism have dealt with the appearances of the surfaces being modeled, in terms of their illumination or relief. more recently, fractal methods have introduced a new degree of realism into terrain modeling systems.
however, it appears that natural phenomena will require more research into the fundamental way things occur in nature, and in terms of computer graphics, their representation will build on previous work, but will still require new modeling techniques. charles a. csuri james blinn julian gomez nelson max william reeves the vissim/discrete event modeling environment herb schwetman arun mulpur modelling reasoning about evidence in legal procedure this article investigates the modelling of reasoning about evidence in legal procedure. to this end, a dialogue game model of the relevant parts of dutch civil procedure is developed with three players: two adversaries and a judge. the model aims to be both legally realistic and technically well-founded. legally, the main achievement is a more realistic account of the judge's role in legal procedures than that provided by current models. technically, the model aims to preserve the features of an earlier-developed framework for two-player argumentative dialogue systems. henry prakken explanation systems for computer simulations explanation systems supply information that clarifies the structure and problem domain of a computer program for the user. we begin our paper by describing the early explanation systems, which were built for expert system programs, and by reviewing some of the subsequent developments in artificial intelligence that relate to this area. the results of our research are consistent with some of the recent developments in artificial intelligence; we have found that there are a variety of kinds of information that are useful to naive users of computer programs. we have been particularly interested in writing programs that can supply such information to naive users of numerical computer simulations. we describe an implemented explanation system, naturalist, which explains the structure and domain of a simulation for inventory control. our experience with the naturalist program suggests that explanation facilities may be valuable additions to numerical computer simulations. david h. helman akash bahuguna image-driven simplification we introduce the notion of image-driven simplification, a framework that uses images to decide which portions of a model to simplify. this is a departure from approaches that make polygonal simplification decisions based on geometry. as with many methods, we use the edge collapse operator to make incremental changes to a model. unique to our approach, however, is the use of comparisons between images of the original model against those of a simplified model to determine the cost of an edge collapse. we use common graphics rendering hardware to accelerate the creation of the required images. as expected, this method produces models that are close to the original model according to image differences. perhaps more surprising, however, is that the method yields models that have high geometric fidelity as well. our approach also solves the quandary of how to weight the geometric distance versus appearance properties such as normals, color, and texture. all of these trade-offs are balanced by the image metric. benefits of this approach include high fidelity silhouettes, extreme simplification of hidden portions of a model, attention to shading interpolation effects, and simplification that is sensitive to the content of a texture. in order to better preserve the appearance of textured models, we introduce a novel technique for assigning texture coordinates to the new vertices of the mesh.
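a hedged python sketch of the image-based cost idea just described for image-driven simplification: render the original and the candidate simplification from a set of viewpoints and take the average root-mean-square pixel difference as the collapse cost. the render callback, the grayscale image layout, and the viewpoint list are placeholders, not the authors' hardware-accelerated system.

# illustrative only: rms image difference as an edge-collapse cost.
# render(model, view) is a placeholder returning a 2-d grid of grayscale values.

import math

def rms_difference(img_a, img_b):
    total, count = 0.0, 0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            total += (a - b) ** 2
            count += 1
    return math.sqrt(total / count)

def collapse_cost(original, simplified, views, render):
    """average rms difference over all viewpoints; lower cost means a better collapse."""
    return sum(rms_difference(render(original, v), render(simplified, v))
               for v in views) / len(views)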
this method is based on a geometric heuristic that can be integrated with any edge collapse algorithm to produce high quality textured surfaces. peter lindstrom greg turk announcements amruth kumar multi-layered activity cycle diagrams and their conversion into activity-based simulation code kurt a. pflughoeft kiran manur types of monotonic language learning and their characterization the present paper deals with strong-monotonic, monotonic and weak-monotonic language learning from positive data as well as from positive and negative examples. the three notions of monotonicity reflect different formalizations of the requirement that the learner has to produce always better and better generalizations when fed more and more data on the concept to be learnt. we characterize strong-monotonic, monotonic, weak-monotonic and finite language learning from positive data in terms of recursively generable finite sets, thereby solving a problem of angluin (1980). moreover, we study monotonic inference with iteratively working learning devices which are of special interest in applications. in particular, it is proved that strong-monotonic inference can be performed with iteratively learning devices without limiting the inference capabilities, while monotonic and weak-monotonic inference cannot. steffen lange thomas zeugmann a lens and aperture camera model for synthetic image generation this paper extends the traditional pin-hole camera projection geometry, used in computer graphics, to a more realistic camera model which approximates the effects of a lens and an aperture function of an actual camera. this model allows the generation of synthetic images which have a depth of field, can be focused on an arbitrary plane, and also permits selective modeling of certain optical characteristics of a lens. the model can be expanded to include motion blur and special effect filters. these capabilities provide additional tools for highlighting important areas of a scene and for portraying certain physical characteristics of an object in an image. michael potmesil indranil chakravarty a shading approach to non-convex clipping the conventional way of clipping a graphic object with a rectangular window or any convex polygon is to cut the object along a line tangent to each side of the polygon. parts of the object on the exterior side of the line are thrown away. after repeating the process for all of the sides, what remains is the clipped object. however, this process fails if the clipping window is non-convex, i.e., has one or more corners that point inward, or if the window consists of more than one polygon. too much then gets clipped away. one way of dealing with this situation is to break up the window into simple convex polygons. the object is clipped with each polygon in turn, and the results are joined together to form the final result. a different approach, presented here, is to use shading logic. the window is seen as an area to be shaded, and the object, as the shading pattern. that is, the window is shaded with the object. the method yields not only the clipped object, but also, as a by-product, the reverse clipping, i.e., that part of the object that lies outside the window. this is an advantage in graphics editing where both clippings are often needed in parallel. thomas l. 
springall gustav tollet organizing synthetic agent behaviors based on a motif architecture jiming liu hong qin the bisector surface of rational space curves given a point and a rational curve in the plane, their bisector curve is rational [farouki and johnstone 1994a]. however, in general, the bisector of two rational curves in the plane is not rational [farouki and johnstone 1994b]. given a point and a rational space curve, this article shows that the bisector surface is a rational ruled surface. moreover, given two rational space curves, we show that the bisector surface is rational (except for the degenerate case in which the two curves are coplanar). gershon elber myung-soo kim finite field manipulations in macsyma k. t. rowney r. d. silverman employing voice back channels to facilitate audio document retrieval human listeners use voice back channels to indicate their comprehension of a talker's remarks. this paper describes an attempt to build a user interface capable of employing these back channel responses for flow control purposes while presenting a variety of audio information to a listener. acoustic evidence based on duration and prosody (rhythm and melody) of listeners' utterances is employed as a means of discriminating responses by discourse function without using word recognition. such an interface has been applied to three tasks: speech synthesis of driving directions, speech synthesis of electronic mail, and retrieval of recorded voice messages. chris schmandt externalizing internal state amol d. mali urban traffic simulation with psycho-physical vehicle-following models thomas schulze thomas fliess a new algorithm for computing asymptotic series dominik gruntz the quality of expertise: implications of expert-novice differences for knowledge acquisition marianne lafrance learning specialist decision lists atsuyoshi nakamura sample bylaws for local sigforths corporate acm headquarters heterogeneous decomposition and inter-level coupling for combined modeling paul a. fishwick design and implementation of hla time management in the rti version f.0 christopher d. carothers richard m. fujimoto richard m. weatherly annette l. wilson high resolution virtual reality michael deering the turing test lynellen d. s. perry on the power of qualitative simulation for estimating diffusion transit times models describing the process of qualitative reasoning have significantly enhanced our insight into the general nature of the challenges involved in modeling people's imaginations when they think about complex processes. the composite picture created by combining models suggested by individual researchers may appear blurred; this is because, in some cases, the range of applicability of each approach is not sharply defined, and the theoretical claims supporting the work are not sufficiently stated. this state of the art clearly suggests that a considerable amount of research is still required in order to reinforce the theoretical foundations for the modeling of qualitative reasoning. in this paper i first describe some of the approaches used for qualitative simulation. second, i present the example of a diffusion process in complicated media and the challenges involved in automating the qualitative estimation of diffusion transit times. s. l.
hardt mastermind by evolutionary algorithms luís bento luísa pereira agostinho rosa realistic, hardware-accelerated shading and lighting wolfgang heidrich hans-peter seidel fiat lux paul debevec tim hawkins westley sarokin haarm-pieter duiker tal garfinkel christine cheng jenny huang does prior knowledge facilitate the development of knowledge-based systems? paul cohen vinay chaudhri adam pease robert schrag procedure models for generating three-dimensional terrain a method for generating arbitrary terrain models, including trees, bushes, mountains, and buildings, is described. procedure models are used to combine fundamental data elements in the creation of unified objects comprising the terrain model. as an example, a procedure model to generate arbitrary trees of various species is implemented; its description covers the generation of the low level data elements, specification of input parameter requirements, and a brief explanation of the algorithmic structure. terrain images rendered by this process are included, as are diagrams and illustrations explaining the procedure model organization. comparisons with previous work are made. robert marshall rodger wilson wayne carlson automated planning thomas dean the computation of optical flow two-dimensional image motion is the projection of the three-dimensional motion of objects, relative to a visual sensor, onto its image plane. sequences of time-ordered images allow the estimation of projected two-dimensional image motion as either instantaneous image velocities or discrete image displacements. these are usually called the optical flow field or the image velocity field. provided that optical flow is a reliable approximation to two-dimensional image motion, it may then be used to recover the three-dimensional motion of the visual sensor (to within a scale factor) and the three-dimensional surface structure (shape or relative depth) through assumptions concerning the structure of the optical flow field, the three-dimensional environment, and the motion of the sensor. optical flow may also be used to perform motion detection, object segmentation, time-to-collision and focus of expansion calculations, motion compensated encoding, and stereo disparity measurement. we investigate the computation of optical flow in this survey: widely known methods for estimating optical flow are classified and examined by scrutinizing the hypotheses and assumptions they use. the survey concludes with a discussion of current research issues. s. s. beauchemin j. l. barron bowling green state university excels rosalee wolfe jodi giroux lynn pocock karen sullivan model-based recognition in robot vision this paper presents a comparative study and survey of model-based object-recognition algorithms for robot vision. the goal of these algorithms is to recognize the identity, position, and orientation of randomly oriented industrial parts. in one form this is commonly referred to as the "bin-picking" problem, in which the parts to be recognized are presented in a jumbled bin. the paper is organized according to 2-d, 2½-d, and 3-d object representations, which are used as the basis for the recognition algorithms. three central issues common to each category, namely, feature extraction, modeling, and matching, are examined in detail. an evaluation and comparison of existing industrial part-recognition systems and algorithms is given, providing insights for progress toward future robot vision systems. roland t. chin charles r.
dyer fuzzy belief networks the most common method for knowledge representation in an expert system is the production rule [waterman 1986]. unfortunately, the modularity inherent in a rule-based system is limiting, especially in an uncertain environment [morawski 1989]. a fuzzy belief network (fbn) provides a more holistic, graphical approach and lends itself well to implementation in expert systems on personal and small computers. david f. clark abraham kandel evolving virtual creatures this paper describes a novel system for creating virtual creatures that move and behave in simulated three-dimensional physical worlds. the morphologies of creatures and the neural systems for controlling their muscle forces are both generated automatically using genetic algorithms. different fitness evaluation functions are used to direct simulated evolutions towards specific behaviors such as swimming, walking, jumping, and following. a genetic language is presented that uses nodes and connections as its primitive elements to represent directed graphs, which are used to describe both the morphology and the neural circuitry of these creatures. this genetic language defines a hyperspace containing an indefinite number of possible creatures with behaviors, and when it is searched using optimization techniques, a variety of successful and interesting locomotion strategies emerge, some of which would be difficult to invent or built by design. karl sims illustrating smooth surfaces we present a new set of algorithms for line-art rendering of smooth surfaces. we introduce an efficient, deterministic algorithm for finding silhouettes based on geometric duality, and an algorithm for segmenting the silhouette curves into smooth parts with constant visibility. these methods can be used to find all silhouettes in real time in software. we present an automatic method for generating hatch marks in order to convey surface shape. we demonstrate these algorithms with a drawing style inspired by a topological picturebook by g. francis. aaron hertzmann denis zorin gaps: general and automatic polygonal simplification carl erikson dinesh manocha triangle scan conversion using 2d homogeneous coordinates marc olano trey greer surveyor's forum: working on interpretations siba mohanty subjective avatars (poster) michael mateas planning and learning together gerhard weiß a note on clustering modules for floorplanning many vlsi floorplanners work by recursively decomposing rectangular modules into lower-level rectangular modules until the leaf-level modules are reached[5]. good layouts require good floorplans. the quality of a floorplan depends (among other things) on how the leaf-level modules are clustered into the various levels of the hierarchy. some of the factors that determine the suitability of a decomposition are the geometry of the modules, the connectivity among modules, and timing constraints. our experience with the mechanization of a vlsi design manager[1] has shown that the initial structural hierarchy that arises during synthesis from behavioral specifications is not always suitable for floorplanning. we describe hierarchical-clustering-based algorithms that lead to a small number of superior candidate hierarchies. j. d. gabbe p. a. 
subrahmanyam an expert system for foreign currency hedging (abstract only) international corporations present their financial statements in terms of a single currency, often based on the currency of the home office or on the dollar, even though a large part of their foreign operations are completed using other currencies. unanticipated moves in the volatile foreign exchange markets can severely distort the year-end results, sometimes causing large losses in an otherwise profitable year. recent innovations in the markets for foreign currency futures and options allow corporations to implement complex hedging strategies to reduce foreign exchange risk. a good hedging strategy requires both a statistical analysis and expert judgment as to how to combine this analysis with specific corporate foreign currency demands and the desire for reduced risk. this paper is a progress report on our attempt to build an expert system to make decisions on hedging strategies that reduce foreign exchange risk. an original aspect of our approach is that we will analyze the reactions of experts to a simulated market to determine the rules in our knowledge base. the expert system's performance will be verified by analyzing its decisions in a simulated market. spencer star an anthropometric face model using variational techniques douglas decarlo dimitris metaxas matthew stone exploiting temporal uncertainty in parallel and distributed simulations richard m. fujimoto real-time procedural textures john rhoades greg turk andrew bell andrei state ulrich neumann amitabh varshney anisotropic diffusion for monte carlo noise reduction monte carlo sampling can be used to estimate solutions to global light transport and other rendering problems. however, a large number of observations may be needed to reduce the variance to acceptable levels. rather than computing more observations within each pixel, if spatial coherence exists in image space it can be used to reduce visual error by averaging estimators in adjacent pixels. anisotropic diffusion is a space-variant noise reduction technique that can selectively preserve texture, edges, and other details using a map of image coherence. the coherence map can be estimated from depth and normal information as well as interpixel color distance. incremental estimation of the reduction in variance, in conjunction with statistical normalization of interpixel color distances, yields an energy-preserving algorithm that converges to a spatially nonconstant steady state. michael d. mccool image-based rendering: a new interface between computer vision and computer graphics leonard mcmillan steven gortler lodestar: an octree-based level of detail generator for vrml dieter schmalstieg intelligent virtual worlds continue to develop glen fraser scott s. fisher kris: knowledge representation and inference system franz baader bernhard hollunder consequences of stratified sampling in graphics don p. mitchell speedup of a sparse system simulation optimistic and conservative simulation algorithms have been effective for speeding up the execution of many simulation programs. event stepped techniques have also been shown to be effective for certain types of problems. this paper presents a conjecture that sparse systems are effectively simulated by conservative and optimistic techniques, where sparseness is characterized by the ratio of output generating states to non-output generating states.
the speedup results for a representative sparse simulation are shown to support this conjecture._ james nutaro hessam sarjoughian accelerating volume rendering with quantized voxels benjamin mora jean-pierre jessel rene caubet what is visual interactive simulation? (and is there a methodology for doing it right?) visual interactive simulation (vis) is the development and application of simulations which produce a dynamic display of the system model, and allow the user to interact with the running simulation. this paper presents the basic ideas behind vis, and the methods whereby it can be achieved. the benefits of using vis, and folklore about the advantages and problems of vis, are presented. it is concluded that amongst those practicing vis a number of competing decision aiding methodologies exist, and that a single unifying methodology is necessary for the advancement of vis. robert m. o'keefe documenting complex processes: educating the user and simplifying the task bob waite infant: a modular approach to natural language processing the infant system is a working natural language processing system that approaches language analysis through the integration of a large number of interdependent procedural modules. by themselves the modules perform specific, predictable, and rather mundane tasks; taken together, they make up an understanding system whose 'beliefs' and responses are varied and unpredictable. thus a type of emergent behavior is demonstrated by the system. this paper describes the overall modular design of infant, and illustrates through sample conversations some of its capabilities and deficiencies. paul buchheit breaking objects james o'brien real-time accelerators for volume rendering (panel) a. kaufman markup meets the mainstream: the future of content-based processing charles hill learning to recommend from positive evidence in recent years, many systems and approaches for recommending information, products or other objects have been developed. these systems often use machine learning methods that need training input to acquire a user interest profile. such methods typically need positive and negative evidence of the user's interests. to obtain both kinds of evidence, many systems make users rate relevant objects explicitly. others merely observe the user's behavior, which fairly obviously yields positive evidence; in order to be able to apply the standard learning methods, these systems mostly use heuristics that attempt to also find negative evidence in observed behavior. in this paper, we present several approaches to learning interest profiles from positive evidence only, as it is contained in observed user behavior. thus, both the problem of interrupting the user for ratings and the problem of somewhat artificially determining negative evidence are avoided. the learning approaches were developed and tested in the context of the web-based elfi information system, which is in real use by more than 1000 people. we give a brief sketch of elfi and describe the experiments we made based on elfi usage logs to evaluate the different proposed methods. ingo schwab wolfgang pohl ivan koychev what is text, really? the way in which text is represented on a computer affects the kinds of uses to which it can be put by its creator and by subsequent users. the electronic document model currently in use is impoverished and restrictive. the authors argue that text is best represented as an ordered hierarchy of content objects (ohco), because that is what text really is.
this model conforms with emerging standards such as sgml and contains within it advantages for the writer, publisher, and researcher. the authors then describe how the hierarchical model can allow future use and reuse of the document as a database, hypertext, or network. steven j. derose david g. durand elli mylonas allen h. renear robotics: introduction randolph chung lynellen d. s. perry star cursors in content space: abstractions of people and places p. j. rankin c. v. heerden j. mama l. nikolovska r. den otter j. rutgers a structure from manipulation for text-graphic objects the general purpose graphics systems of the future will need a simple logic for visual objects---one structure underlying both text and graphics. as an experiment, perhaps the immediate handling of visual objects by the user can provide the starting point for developing that structure. this paper describes the pam graphics system, in which the structure of text-graphic objects arises directly out of manual manipulation. the needs of manual manipulation determine the text-graphic pattern as the simplest organizing structure for images; pam stands for pattern manipulating. the pam system is designed for the agile manipulation of text-graphic patterns---first manually, and then, later, programmatically. starting from this strict 'front-in' viewpoint---where immediate manipulation (hand powered animation) was to be the primary application---a 'manipulative grammar' was evolved to give the user a simple yet powerful handle on text-graphic images. this grammar turned out to be a generalization of lisp syntax from textual symbolic expressions to text-graphic forms, structuring such forms as trees and then offering: spatial grabbing of objects into attention; tree-guided attention shifters like first, rest, next, and up; and spatial & tree manipulations on any object in attention. the resulting structures also offer surprising computational power (in a manner directly analogous to the way the basic list structures and functions of lisp give rise to the flexibility and power of a full blown lisp system, mccarthy and talcott [1]), leading finally to computing with text-graphic forms. consequently, a semantic function is added to supplement the basic manipulative grammar: evaluation of the object in attention, with the result displayed at the cursor. evaluation supports facilities like naming (and thus saving) visual objects, programming, and creation of menus (patterns of evaluatable function objects). an experimental version of the pam system has been implemented in maclisp at the stanford artificial intelligence lab. fred h. lakin a distributed blackboard architecture for interactive data visualization robert van liere jan harkes wim de leeuw discontinuity edge overdraw pedro v. sander hugues hoppe john snyder steven j. gortler reflection vector shading hardware surface reflections of an environment can be rendered in real time if hardware calculates an unnormalized reflection vector at each pixel. conventional perspective-correct texture hardware can then be leveraged to draw high-quality reflections of an environment or specular highlights in real time. this fully accommodates area light sources, allows a local viewer to move interactively, and is especially well suited to the inspection of surface orientation and curvature. by emphasizing the richness of the incoming illumination rather than physical surface properties, it represents a new direction for real-time shading hardware.
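the reflection-vector abstract above hinges on one small computation: forming a reflection direction without normalizing anything per pixel. the following is a minimal python sketch of that idea (the standard unnormalized reflection formula, not the authors' hardware datapath; the vectors and names here are purely illustrative).

    # sketch: unnormalized reflection vector, usable as a direction for an
    # environment lookup because only its direction matters, not its length
    def reflect_unnormalized(incident, normal):
        # r = (n.n)*i - 2*(n.i)*n is parallel to the usual reflection of i about n
        n_dot_n = sum(n * n for n in normal)
        n_dot_i = sum(n * i for n, i in zip(normal, incident))
        return tuple(n_dot_n * i - 2.0 * n_dot_i * n for i, n in zip(incident, normal))

    # example: a view ray along -z hitting a surface tilted 45 degrees toward +x
    r = reflect_unnormalized((0.0, 0.0, -1.0), (0.5, 0.0, 0.5))
    print(r)   # (0.5, 0.0, 0.0): the reflected direction points along +x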
douglas voorhies jim foran digital cloning system barnabas takacs radiance maps: an image-based approach to global illumination philipp slusallek wolfgang heidrich hans-peter seidel distributed ray tracing ray tracing is one of the most elegant techniques in computer graphics. many phenomena that are difficult or impossible with other techniques are simple with ray tracing, including shadows, reflections, and refracted light. ray directions, however, have been determined precisely, and this has limited the capabilities of ray tracing. by distributing the directions of the rays according to the analytic function they sample, ray tracing can incorporate fuzzy phenomena. this provides correct and easy solutions to some previously unsolved or partially solved problems, including motion blur, depth of field, penumbras, translucency, and fuzzy reflections. motion blur and depth of field calculations can be integrated with the visible surface calculations, avoiding the problems found in previous methods. robert l. cook thomas porter loren carpenter on the batch means and area variance estimators halim damerdji an object-oriented, knowledge-based approach to simulation simulation is an integral part of the industry's analysis tools. industry's demands for simulation are growing and simulations are becoming larger as well as more complex. simulation plays such a key role because it can: offer early insight into the behavior of planned systems support the evaluation of alternate system architectures, algorithms, etc provide an inexpensive training vehicle however, the "conventional" (i.e. procedural, code-like) approach to building simulations has three undesirable traits. first, the conventional approach can be unresponsive to user needs because the intended user has little opportunity to provide meaningful guidance during the development process. the user may have one notion of how the simulation should behave and the developer perhaps another. the divergence in their ideas may not become apparent until late in the development process, after much time and money has been spent. one way to improve this situation is to rapidly prototype the simulation and allow the developer and user to mold the simulation into a final product reflecting the intended user's desires. second, conventional simulations can be difficult to modify because the assumptions and knowledge about the simulated objects are dispersed implicitly throughout the simulation's code. thus, modifying a simulation requires the implementor to know exactly what code segments to change and where to find them. we can improve this situation by explicitly representing both the model's assumptions and knowledge about the simulated objects. thirdly, simulations notoriously provide copious output which requires highly trained analysts to interpret properly. this output is used both for debugging as well as analysis. interpreting the simulation output can be time-consuming because the analyst must know what information is needed and where to find it. we can remedy this if the human interface to the simulation results is almost "intuitive" and graphical. the preliminary results of an irad project at general research corporation suggest that object-oriented programming coupled with interactive graphics and knowledge-based techniques form the basis for a greatly improved simulation methodology for a broad class of simulation problems. 
in this paradigm, the simulated entities are represented as "objects" whose behavior is controlled by rules and who communicate with other objects and the user by "message passing". the objects themselves are represented by icons which can be interactively inspected and modified. this type of modularity promotes rapid prototyping and incremental development. object-oriented programming paradigms do not force the designer/developer to design or develop in a strictly top-down or bottom-up fashion. the developer may focus on the objects and their behavior at any level of detail. the simulation can quickly reflect changing and increasing knowledge of the simulated domain without necessarily modifying the entire system and the associated debugging which accompanies system modifications. the developer modifies the object's behavior by modifying the rules which govern that object. other objects need not necessarily interact differently with the modified object. the messages which pass among objects are generic and do not usually require the type of implementation-dependent detail needed in other styles of programming. simulations constructed in this way often have browser and explanation facilities so a user may directly inspect the behavior of a simulated entity and request an explanation of the behavior. this capability also decreases the need for massive amounts of simulation-generated data for analysis. in short, the flexibility of this object-oriented approach promotes the modification of the simulation as well as the analysis of the simulation's results. richard j. greene million dollar logistic decisions using simulation michael carr howard way fbram: a new form of memory optimized for 3d graphics fbram, a new form of dynamic random access memory that greatly accelerates the rendering of z-buffered primitives, is presented. two key concepts make this acceleration possible. the first is to convert the read-modify-write z-buffer compare and rgbα blend into a single write-only operation. the second is to support two levels of rectangularly shaped pixel caches internal to the memory chip. the result is a 10 megabit part that, for 3d graphics, performs read-modify-write cycles ten times faster than conventional 60 ns vrams. a four-way interleaved 100mhz fbram frame buffer can z-buffer up to 400 million pixels per second. working fbram prototypes have been fabricated. michael f. deering stephen a. schlapp michael g. lavelle an expert system for evaluation of sports injuries (abstract only) a knowledge-base system was developed using the insight 2+ rule-based expert system tool. the user is questioned about the area of the injury, such as the foot, ankle, knee, or elbow. the inference engine then selects the appropriate knowledge base. symptoms of the injury are requested from the user, who is also instructed to perform manual tests on the injured part. the input is evaluated through a set of rules, and the exact type of injury is evaluated as well as the severity. a recommended treatment is then suggested. the user is expected to have a basic knowledge of athletic training terms and sports medicine anatomy; however, screen displays can be requested on certain technical details by the user. this system can be used as a training tool for student athletic trainers and can also be used as a substitute in the absence of a more experienced trainer.
our experiences with the design of this system, our work with experts, and the implementation will be discussed along with a report about the performance and use of the system. j. m. gardner j. h. morrel k. e. lagle communityboard 2: mediating between speakers and an audience in computer network discussions shigeo matsubara takeshi ohguro fumio hattori frequencies vs biases (extended abstract): machine learning problems in natural language processing fernando c. n. pereira modeling viewpoints for assessing reliability this paper is the third in a series of papers dealing with model evolution and its importance in problem-solving when employing simulation as an analysis tool. in the paper, reliability assessment is used as a vehicle for presenting modeling viewpoints and constructs. as in the previous papers, the assessment of the worth of a model is not made. the paper describes modeling viewpoints and procedures in a reliability assessment context and describes the need for being able to build more detailed models from simpler models. the basic hypothesis of the paper is that model evolution is a way of life for the simulationist. a. alan b. pritsker stereoscopic projections and 3d scene reconstruction chaman l. sabharwal a conceptual model of raster graphics systems in this paper we present a conceptual model of raster graphics systems which integrates, at a suitable level of abstraction, the major features found in both contemporary and anticipated graphics systems. these features are the refresh buffer; the image creation (scan-conversion) system; the single address-space architecture which integrates the address space of the refresh buffer with those of the image creation system and the associated general- purpose computer; the rasterop or bitblt instructions found in some single address-space architectures; the video look-up table, and refresh buffer to screen transformations. also included are the major components from the conceptual model of vector graphics systems which are embodied in the acm/sigggraph/gspc core system. using the conceptual model as a base, we proceed to sketch out the capabilities we have defined in a substantial addition to the core system. the capabilities are currently being implemented as part of the george washington university core system. james acquah james foley john sibert patricia wenner graphics standards status report (panel session) two status reports will be presented. the first report describes the activities of working group 2 (computer graphics) of the international standards organization subcommittee on programming languages (iso tc97/sc5/wg2). actions taken in recent wg2 meetings in october, 1979, and june, 1980, will be covered. significant proposals being examined by wg2 include gks, a german-designed 2d graphics package derived from the siggraph gspc core system proposals, and videotex, a canadian-designed protocol text communication that includes geometrically coded picture images. the second status report describes the recent activities of the american national standards institute (ansi) technical committee x3h3, computer graphics. this includes the committee status, international activities, key technical issues, and future plans. peter bono p. tenhagen janet chin p. bono j. encarnacao j. 
michener pomegranate: a fully scalable graphics architecture pomegranate is a parallel hardware architecture for polygon rendering that provides scalable input bandwidth, triangle rate, pixel rate, texture memory and display bandwidth while maintaining an immediate-mode interface. the basic unit of scalability is a single graphics pipeline, and up to 64 such units may be combined. pomegranate's scalability is achieved with a novel "sort- everywhere" architecture that distributes work in a balanced fashion at every stage of the pipeline, keeping the amount of work performed by each pipeline uniform as the system scales. because of the balanced distribution, a scalable network based on high-speed point-to-point links can be used for communicating between the pipelines. pomegranate uses the network to load balance triangle and fragment work independently, to provide a shared texture memory and to provide a scalable display system. the architecture provides one interface per pipeline for issuing ordered, immediate-mode rendering commands and supports a parallel api that allows multiprocessor applications to exactly order drawing commands from each interface. a detailed hardware simulation demonstrates performance on next- generation workloads. pomegranate operates at 87-99% parallel efficiency with 64 pipelines, for a simulated performance of up to 1.10 billion triangles per second and 21.8 billion pixels per second. matthew eldridge homan igehy pat hanrahan perception-guided global illumination solution for animation rendering we present a method for efficient global illumination computation in dynamic environments by taking advantage of temporal coherence of lighting distribution. the method is embedded in the framework of stochastic photon tracing and density estimation techniques. a locally operating energy-based error metric is used to prevent photon processing in the temporal domain for the scene regions in which lighting distribution changes rapidly. a perception-based error metric suitable for animation is used to keep noise inherent in stochastic methods below the sensitivity level of the human observer. as a result a perceptually-consistent quality across all animation frames is obtained. furthermore, the computation cost is reduced compared to the traditional approaches operating solely in the spatial domain. karol myszkowski takehiro tawara hiroyuki akamine hans-peter seidel creation machines: stanley kubrick's view of computers in 2001 mark midbon a hybrid visual environment for models and objects paul a. fishwick visad: connecting people to computations and people to people in our last _computer graphics_ issue, greg johnson contributed an interesting survey article on collaborative visualization. another perspective on this hard problem is discussed by bill hibbard in the following article. most of you in the visualization community know bill from his pioneering work on vis5d and more recently visad. as always, i welcome your comments and thoughts on this column. bill hibbard proof post processor animation with siman roger mchaney doug white abstracts of japanese computer algebra meeting in kyoto these are abstracts of talks given at the 14th rims (research institute for mathematical sciences) meeting of computer algebra and its applications in mathematical researches. the meeting took place from 20 to 22 in november 1995. in the meeting, 31 talks were presented. here, short abstracts of almost all talks are shown. 
if readers have an interest in some of the abstracts, please contact the authors directly or send a message to matu-tarow noda at the address above. proceedings of the meeting were already published by rims as rims memoir 941, entitled _theory of computer algebra and its applications._ matu-tarow noda artists augmented by agents (invited speech) computers can be very helpful to us by performing tasks on our behalf. for example, they are very good at performing calculations, storing information and producing visualisations of objects that do not yet exist as a made artifact. increasingly, however, a different role is being found for the computer. it is the role of a catalyst, or a stimulant, to our own creative thinking. in such cases the computer is not primarily performing a task for us and generating an answer within itself, rather it is helping us to generate answers within ourselves. the computer helps us think creatively. this role for the computer can be illustrated in the context of computer support to creative design. in order to design computer systems that support the creative process, it is important to understand that process well enough to predict what might help, rather than hinder. given such research, we may begin to define the characteristics of what the computer must do in order to augment creative thinking. the paper explores a particular application of intelligent user interfaces: the augmentation of creative thought in artists. ernest edmonds modeling water for computer animation nick foster dimitris metaxas workhorse graphics j. martin beam tracing polygonal objects ray tracing has produced some of the most realistic computer generated pictures to date. they contain surface texturing, local shading, shadows, reflections and refractions. the major disadvantage of ray tracing results from its point-sampling approach. because calculation proceeds ab initio at each pixel it is very cpu intensive and may contain noticeable aliasing artifacts. it is difficult to take advantage of spatial coherence because the shapes of reflections and refractions from curved surfaces are so complex. in this paper we describe an algorithm that utilizes the spatial coherence of polygonal environments by combining features of both image and object space hidden surface algorithms. instead of tracing infinitesimally thin rays of light, we sweep areas through a scene to form "beams." this technique works particularly well for polygonal models since for this case the reflections are linear transformations, and refractions are often approximately so. the recursive beam tracer begins by sweeping the projection plane through the scene. beam-surface intersections are computed using two-dimensional polygonal set operations and an occlusion algorithm similar to the weiler-atherton hidden surface algorithm. for each beam-polygon intersection the beam is fragmented and new beams created for the reflected and transmitted swaths of light. these sub-beams are redirected with a 4×4 matrix transformation and recursively traced. this beam tree is an object space representation of the entire picture. since the priority of polygons is pre-determined, the final picture with reflections, refractions, shadows, and hidden surface removal is easily drawn. the coherence information enables very fast scan conversion and high resolution output. image space edge and texture antialiasing methods can be applied. paul s.
heckbert pat hanrahan a simulation study of replication control protocols using volatile witnesses perry k. sloope jehan-francois paris darrell d. e. long integrating word processing with writing instruction: a review of research and practice amy heebner dr. strangeheight keith kramer efficient ray tracing of volume data volume rendering is a technique for visualizing sampled scalar or vector fields of three spatial dimensions without fitting geometric primitives to the data. a subset of these techniques generates images by computing 2-d projections of a colored semitransparent volume, where the color and opacity at each point are derived from the data using local operators. since all voxels participate in the generation of each image, rendering time grows linearly with the size of the dataset. this paper presents a front-to-back image-order volume-rendering algorithm and discusses two techniques for improving its performance. the first technique employs a pyramid of binary volumes to encode spatial coherence present in the data, and the second technique uses an opacity threshold to adaptively terminate ray tracing. although the actual time saved depends on the data, speedups of an order of magnitude have been observed for datasets of useful size and complexity. examples from two applications are given: medical imaging and molecular graphics. marc levoy progressive simplicial complexes jovan popovic hugues hoppe a new formalism for discrete event simulation a new formalism for discrete event simulation is proposed based on the theoretical frameworks used for the formal analysis and specification of the different aspects of computer languages. the major elements of such a formalism are identified, defined, and discussed. this formalism includes the specification and validation of the structural and behavioral aspects of models specified in it. in this formalism, the behavioral specifications can be used to synchronize different components of a model. this enhances the modeling methodology. ashvin radiya robert g. sargent hunting for the holy grail with "emotionally intelligent" virtual actors clark elliott a decomposition algorithm for visualizing irregular grids henry neeman artifical intelligence and document processing geoffrey james to be or not to be peter lee half pint heroes everett downing aaron hartline daniel o'brien mike laubach oblique projector rendering on planar surfaces for a tracked user ramesh raskar co-operating mobile agents for distributed parallel processing r. ghanea- hercock j. c. collis d. t. ndumu using graphics to build simulation models computer graphics has become a very important part of the model development process at eaton-kenway. data entry is aided greatly by user friendly software that takes almost all of the work out of building a data base. model validity is determined in less time and customer validation has become much easier. this has resulted in more customers wanting to have models built and increased material handling and factory automation sales. efforts are underway to incorporate cad hardware being used by engineering into the modeling process. merriel c. dewsnup simulation of a dqdb mac protocol with movable boundary and bandwidth balancing mechanisms s. popovich m. alam s. bandyopadhyay disturbed behavior in co-operating autonomous robot robert ghanea-hercock david p. barnes interactive speedes jeff s. 
steinman object representation by means of nonminimal division quadtrees and octrees quadtree representation of two-dimensional objects is performed with a tree that describes the recursive subdivision of the more complex parts of a picture until the desired resolution is reached. at the end, all the leaves of the tree are square cells that lie completely inside or outside the object. there are two great disadvantages in the use of quadtrees as a representation scheme for objects in geometric modeling systems: the amount of memory required for polygonal objects is too great, and it is difficult to recompute the boundary representation of the object after some boolean operations have been performed. in the present paper a new class of quadtrees, in which nodes may contain zero or one edge, is introduced. by using these quadtrees, storage requirements are reduced and it is possible to obtain the exact backward conversion to boundary representation. algorithms for the generation of the quadtree, boolean operations, and recomputation of the boundary representation are presented, and their complexities in time and space are discussed. three-dimensional algorithms working on octrees are also presented. their use in the geometric modeling of three-dimensional polyhedral objects is discussed. d. ayala p. brunet r. juan i. navazo post-rendering 3d warping william r. mark leonard mcmillan gary bishop rampage newscast jeff bunker alternatives for modeling of preemptive scheduling a system which includes overt instances of preemptive scheduling, or which requires the use of preemptive scheduling to model the system, can pose difficulties to a modeler. by preemptive scheduling we mean having to reschedule or cancel a previously scheduled event (or activity completion). the difficulties in modeling preemptive scheduling stem from the highly stylized constructs which simulation languages provide for such operations. this paper describes these difficulties, reviews an approach which can reduce the difficulties for some applications, and presents an approach which can be used to model preemptive scheduling, using only simple simulation language constructs. james o. henriksen conservative visibility preprocessing using extended projections visualization of very complex scenes can be significantly accelerated using occlusion culling. in this paper we present a visibility preprocessing method which efficiently computes potentially visible geometry for volumetric viewing cells. we introduce novel extended projection operators, which permit efficient and conservative occlusion culling with respect to all viewpoints within a cell, and take into account the combined occlusion effect of multiple occluders. we use extended projection of occluders onto a set of projection planes to create extended occlusion maps; we show how to efficiently test occludees against these occlusion maps to determine occlusion with respect to the entire cell. we also present an improved projection operator for certain specific but important configurations. an important advantage of our approach is that we can re-project extended projections onto a series of projection planes (via an occlusion sweep), and accumulate occlusion information from multiple blockers. this new approach allows the creation of effective occlusion maps for previously hard-to-treat scenes such as leaves of trees in a forest. graphics hardware is used to accelerate both the extended projection and reprojection operations.
we present a complete implementation demonstrating significant speedup with respect to view-frustum culling only, without the computational overhead of on-line occlusion culling. fredo durand george drettakis joëlle thollot claude puech imaging vector fields using line integral convolution brian cabral leith casey leedom direct interaction with a 3d volumetric environment arie kaufman roni yagel reuven bakalash controlling physics in realistic character animation zoran popovic geo-spatial visualization for situational awareness (case study) situational awareness applications require a highly detailed geospatial visualization covering a large geographic area. conventional polygon based terrain modeling would exceed the capacity of current computer rendering. terrain visualization techniques for a situational awareness application are described in this case study. visualizing large amounts of terrain data has been achieved using very large texture maps. sun shading is applied to the terrain texture map to enhance perception of relief features. perception of submarine positions has been enhanced using a translucent, textured water surface. each visualization technique is illustrated in the accompanying video tape. eliot feibush nikhil gagvani daniel williams embodied cultural agents and synthetic sociology (poster) simon penny fast and accurate hierarchical radiosity using global visibility recent hierarchical global illumination algorithms permit the generation of images with a high degree of realism. nonetheless, appropriate refinement of light transfers, high quality meshing, and accurate visibility calculation can be challenging tasks. this is particularly true for scenes containing multiple light sources and scenes lit mainly by indirect light. we present solutions to these problems by extending a global visibility data structure, the visibility skeleton. this extension allows us to calculate exact point-to-polygon form-factors at vertices created by subdivision. the structure also provides visibility information for all light interactions, allowing intelligent refinement strategies. high-quality meshing is effected based on a perceptually based ranking strategy which results in appropriate insertions of discontinuity curves into the meshes representing illumination. we introduce a hierarchy of triangulations that allows the generation of a hierarchical radiosity solution using accurate visibility and meshing. results of our implementation show that our new algorithm produces high quality view-independent lighting solutions for direct illumination, for scenes with multiple lights and also scenes lit mainly by indirect illumination. fredo durand george drettakis claude puech an interactive production planning simulation system (abstract only) the manager of a large production facility is responsible for planning the needs of the factory based on estimates of production levels and production costs. a key towards successful planning is that of obtaining correct information about the factory and its operation. considered here is a modest database system which contains this information, and interactive means for updating this information, and a simulation program that uses only the database to operate (with appropriate user directives). this allows easy alteration of factory data and significantly reduces the complexity of the simulation. advantages of this approach are discussed. alan w. carpenter corey d.
schou the agent service brokering problem as a generalised travelling salesman problem aneurin m. easwaran jeremy pitt stefan poslad blitz: a rule-based system for massively parallel architectures the rule-based system has emerged as an important tool to developers of artificial intelligence programs. because of the computational resources required to realize the match-select-execute cycle of rule-based systems, researchers have been trying to introduce parallelism into these systems for some time. we describe a new approach to parallel rule-based systems which exploits fine- grained hypercube hardware. the new algorithms for parallel rule matching and simultaneous execution of several rules at once are presented. experimental results using a connection machine* implementation of blitz are presented. k. morgan an experimental and theoretical comparison of model selection methods michael kearns yishay mansour andrew y. ng dana ron creating volume models from edge-vertex graphs the design of complex geometric models has been and will continue to be one of the limiting factors in computer graphics. a careful enumeration of the properties of topologically correct models, so that they may be automatically enforced, can greatly speed this process. an example of the problems inherent in these methods is the "wire frame" problem, the automatic generation of a volume model from an edge-vertex graph. the solution to this problem has many useful applications in geometric modelling and scene recognition. this paper shows that the "wire frame" problem is equivalent to finding the embedding of a graph on a closed orientable surface. such an embedding satisfies all the topological properties of physical volumes. unfortunately graphical embeddings are not necessarily unique. but when we restrict the embedding surface so that it is equivalent to a sphere, and require that the input graph be three-connected, the resulting object is unique. given these restrictions there exists a linear time algorithm to automatically convert the "wire frame" to the winged edge representation, a very powerful data structure. applications of this algorithm are discussed and several examples shown. patrick m. hanrahan visualizing high-dimensional predicitive model quality penny rheingans marie desjardins constrained ga applied to production and energy management of a pulp and paper mill amâncio santos antónio dourado a simple expert system b. i. blum a two-pass solution to the rendering equation: a synthesis of ray tracing and radiosity methods john r. wallace michael f. cohen donald p. greenberg automation or interaction (panel session): what's best for big data? david kenwright david banks steve bryson robert haimes robert van liere sam uselton recognition of hurwitz polynomials vilmar trevisan animation of dynamic legged locomotion marc h. raibert jessica k. hodgins fast texture synthesis using tree-structured vector quantization texture synthesis is important for many applications in computer graphics, vision, and image processing. however, it remains difficult to design an algorithm that is both efficient and capable of generating high quality results. in this paper, we present an efficient algorithm for realistic texture synthesis. the algorithm is easy to use and requires only a sample texture as input. it generates textures with perceived quality equal to or better than those produced by previous techniques, but runs two orders of magnitude faster. 
this permits us to apply texture synthesis to problems where it has traditionally been considered impractical. in particular, we have applied it to constrained synthesis for image editing and temporal texture generation. our algorithm is derived from markov random field texture models and generates textures through a deterministic searching process. we accelerate this synthesis process using tree-structured vector quantization. li-yi wei marc levoy knowledge constraints john debenham image processing experiments timothy s. kula raymond konopka john a. cicero modelling database based expert systems at the conceptual level in a conceptual modelling environment a model is given for analysing complex real world problems known as conceptual knowledge model (ckm), represented by a graphical representation and a formal representation. the graphical representation consist of 3 graphs: conceptual requirement graph, conceptual behavior graph, and conceptual structure graph. this graphs are developed by consulting the expert during the design process. the graphs are then transformed into first- order predicate logic to represent the non-logical axioms of a first order theory. the model suggested here is a step towards closing the gap between data base theory and ai databases. ramin yasdi a methodology for cost-risk analysis in the statistical validation of simulation models osman balci robert g. sargent the turing test is not a trick: turing indistinguishability is a scientific criterion stevan harnad the cost of terminating synchronous parallel discrete-event simulations vasant sanjeevan marc abrams head quarters jason donati chris cryan graphic input interaction techniques(panel session) in the last decade most research and development in computer graphics has been in the output side. input devices and techniques for communicating with graphic systems are now also becoming active research topics. to communicate the state-of-the-art and to identify the key research issues, siggraph sponsored a workshop on graphics interaction input techniques in june, 1982. the topics to be discussed include tools for building user interfaces, recent successes using those tools, human factors, generic interaction types, interaction techniques for raster systems and style-independent interaction techniques. the workshop will be summarized with an address on the current state-of-the- art and identification of the major research issues. in addition, selected participants will be asked to address the primary research issues. james j. thomas effective industrial finishing systems design utilizing the use of simulation techniques in designing a fiberglass hood paint system is demonstrated. two alternate designs are described and analyzed as to float requirements. due to schedule considerations in maintaining job sequence, the first design was determined impractical. the results of both analyses will be presented in the conference session. sally j. hill doris klein a coordination language for collective agent based systems: grouplog fernanda barbosa jose c. cunha automating gait generation one of the most routine actions humans perform is walking. to date, however, an automated tool for generating human gait is not available. this paper addresses the gait generation problem through three modular components. 
we present elevwalker, a new low-level gait generator based on _sagittal elevation angles_, which allows curved locomotion - walking along a curved path - to be created easily; elevinterp, which uses a new _inverse motion interpolation_ algorithm to handle uneven terrain locomotion; and metagait, a high-level control module which allows an animator to control a figure's walking simply by specifying a path. the synthesis of these components is an easy-to-use, real-time, fully automated animation tool suitable for off-line animation, virtual environments and simulation. infinitereality: a real-time graphics system john s. montrym daniel r. baum david l. dignam christopher j. migdal sequential allocations that reduce risk for multiple comparisons stephen e. chick koichiro inoue an approach to 3d pose determination the orientation, or pose, of an object is a fundamental property that helps to define the geometrical relationship between the object and its environment. in addition, knowledge of object orientation can also facilitate interpretive and decision-making tasks in a variety of practical domains, including industrial, meteorological, and medical applications. determining object pose, however, remains an open research question in the fields of graphics and visualization. this article describes a novel yet intuitively simple approach, which we call topological goniometry, to directly determine the pose of a three-dimensional object from 3d data. the topology of interest is that of two-sided surfaces in a three-manifold, and includes objects whose shapes are unaffected by elastic transformations. algorithmically, topological goniometry is composed of the following major steps. the first step analyzes the global topology in order to generate a distribution of 3d coordinate triplets in the proximity of the desired pose axis. using this set of 3d points, the second step then invokes a "3d walk" algorithm that considers the local topology to produce a generalized curve representing an estimate of the object's axis of pose. the resultant pose axis is thus not constrained to lie along a straight line but can be a generalized 3d curve. the methods are illustrated with a variety of synthetically created models that exhibit duct-like shapes, and are further tested by introducing noise as well as deformations to these models. the approach is also applied to a number of real discrete data obtained from meteorological and medical domains. the results suggest that the approach is applicable to both real and synthetic datasets and is shown to be robust, computationally efficient, and applicable to a variety of problems. the approach can incorporate context- or application-dependent information about the object of interest by using a set of constraints that guide the process of orientation determination. this article describes the approach, its implementation, and the results obtained with numerous applications. norberto ezquerra rakesh mullick investigating the communication problems encountered in knowledge acquisition v. r. waldron a similarity index for convex polygons (abstract only) a new method for quantifying the shape similarity of two arbitrary convex polygons is presented. the intuitive notion of similarity is captured by computing the maximum area of intersection over every possible superposition of one polygon on the other. polygons are defined as regions bounded by conjunctive linear constraints of the form: a_i1 * x1 + a_i2 * x2 + a_i0 >= 0.0 (for i = 1, 2, …, k).
the problem is then reduced to one of non-differential optimization in three dimensions with an integral objective function. the maximum is taken over every possible rotation and translation of one polygon relative to the other. an extension of the method to arbitrary polygons is given where figures are partitioned into convex polygonal components. the results of classification experiments are presented as a comparison of this procedure with existing techniques. guy bruno walter g. rudd an introduction to slx james o. henriksen view-dependent geometry paul rademacher ray tracing and radiosity (panel): ready for production? jacquelyn ford morie richard hollander grant boucher gonzalo garramuno bob powell c. wedge whole field modelling (case study): effective real-time and post-survey visualization of underwater pipelines the detailed underwater bathymetric data provided by sonar research and development's high speed multi-frequency sonar transducer system provides new challenges in the development of interactive seabed visualization tools. this paper introduces a 'whole field modelling' system developed at sonar research and development ltd and the department of computer science, university of hull. this system provides the viewer with a new 3d underwater visualization environment that allows the user to pilot a virtual underwater vehicle around an accurate seabed model. in this paper we consider two example case studies that use the whole field modelling system for visualizing sonar data. both case studies, visualizing real-time pipeline dredging and pipe restoration visualization, are implemented using real survey data. paul chapman derek wills peter stevens grahams brookes tightrope bob hoffman adaptable and adaptive systems workshop russell borland optimal triangular haar bases for spherical data multiresolution analysis based on fwt (fast wavelet transform) is now widely used in scientific visualization. spherical biorthogonal wavelets for spherical triangular grids were introduced in [5]. in order to improve on the orthogonality of the wavelets, the concept of near orthogonality, and two new piecewise-constant (haar) bases were introduced in [4]. in our paper, we extend the results of [4]. first we give two one-parameter families of triangular haar wavelet bases that are nearly orthogonal in the sense of [4]. then we introduce a measure of orthogonality. this measure vanishes for orthogonal bases. eventually, we show that we can find an optimal parameter of our wavelet families, for which the measure of orthogonality is minimized. several numerical and visual examples for a spherical topographic data set illustrate our results. georges-pierre bonneau symbolic integration of expressions involving unspecified functions graham h. campbell the latent damage system: a jurisprudential analysis this paper brings together the findings of two projects in the field of ai and law. the first of these was a jurisprudential inquiry into expert systems in law sometimes referred to as the oxford project. this work in fact originated in glasgow university in 1981, was developed in the course of doctoral legal research at oxford university from 1983 to 1986, and culminated in the publication of expert systems in law: a jurisprudential inquiry (oxford university press, 1987) by r e susskind. the second project was work on the latent damage system, the uk's first commercially available expert system in law.
the development of that system has been fully documented in a case study by p n capper and r e susskind: latent damage law --- the expert system (butterworths, 1988), and a copy of the system itself is packaged with the book. the purpose of this paper is to examine the latent damage project from the perspective of the oxford project --- the findings and methodology of a jurisprudential inquiry into expert systems in law are used to analyse the development and operation of one of the world's first operational systems. the paper is structured in four parts. the first offers a brief overview of the oxford and latent damage projects. this is not undertaken in detail, as these projects have been documented extensively elsewhere. however, sufficient background information is given so that the paper can be read as a self-contained piece. the central arguments and findings are presented in part two, which lays bare the jurisprudential foundations of the latent damage system. it does so in terms of the theories of jurisprudence that the oxford project established must underlie any expert system in law. part three examines the extent to which the development of the latent damage system was conditioned by the teachings of jurisprudence. finally, in part four, the users, function and scope of the latent damage system are pinpointed. the purpose of this paper is not to demonstrate the power of the latent damage system by placing it within a jurisprudential framework. the power of the system is, in fact, best illustrated by running it and assessing its utility in practice. the overriding aim of the discussion here is to consider the relevance of the findings and recommendations of a highly academic project for those whose orientation is that of building real-life, commercial, practical systems. r. e. susskind on the role of abduction pietro torasso luca console luigi portinale daniele theseider dupre decorating implicit surfaces hans køhling pedersen frontier in the full paper [1], we discuss the functionality and implementation challenges of the _frontier_ geometric constraint engine, designed to address the main reasons for the underutilization of geometric constraints in today's 3d design and assembly systems. here, we motivate the full paper by outlining the advantages of _frontier_. * _frontier_ fully enables both (a) the use of complex, cyclic, spatial constraint structures and (b) feature-based design. to deal with issue (a), _frontier_ relies on the efficient generation of a close-to-optimal _decomposition and recombination (dr) plan_ for completely general variational constraint systems (see figure 1). a serious bottleneck in constraint solving is the exponential time dependence on the size of the largest system that is simultaneously solved by the algebraic-numeric solver. in most naturally occurring cases, _frontier_'s dr-plan is guaranteed to minimize this size (to within a small constant factor). to deal with issue (b), _frontier_'s dr-plan admits the independent and local manipulation of features and sub-assemblies in one or more _underlying feature hierarchies that are input_ (figures 1 and 2). a dr-plan satisfying the above requirements is generated by the new _frontier vertex algorithm (fa)_: the dr problem and its significance as well as fa and its performance with respect to several relevant and newly formalized abstract measures are described in [2, 3].
* _frontier_ employs a crucial representation of the dr-plan's subsystems or clusters, their hierarchy and their interaction. this representation merges network flow information, as well as other geometric and combinatorial information, in a natural manner. some of this information is obtained from an efficient flow-based algorithm for detecting small rigid sub-systems presented in [4]. the clarity of this representation is crucial in the concrete realization of fa's formal performance. more significantly, this representation allows _frontier_ to take advantage of its dr-plan in surprising and unsuspected ways listed below. jianjun oung meera sitharam brandon moro adam arbree backtracking: the nine lives of the ai syed s. ali perspectives on simulation using gpss thomas j. schriber the flexibility of an unsupervised neural network for machine part classification c. k. lee c. h. chung generative modeling: a symbolic system for geometric modeling john m. snyder james t. kajiya the frame-definition language for customizing the raffaello structure-editor in host expert systems the use of the language discussed here is to customize an expert-system (es) construction tool, raffaello, in the application. we are concerned, here, with a version, being an es itself, of the frame-definition and navigation component of raffaello. allowed frame-topology may be deep, i.e. with an unbound depth of the tree. frame schemata of the application es are described by means of a production system (ps), "cupros", with particular conventions for conveying particular features: e.g. slotless repeatable subframes, or local cupros'es in instance's subtrees, n-to-n relation insertion, retrieval-key local definition, ancestry-driven disambiguation of polysemic slots, etc. examples are drawn from the application to onomaturge, an es for word-formation. e nissan polar bear swim suzanne datz polynomial texture maps in this paper we present a new form of texture mapping that produces increased photorealism. coefficients of a biquadratic polynomial are stored per texel, and used to reconstruct the surface color under varying lighting conditions. like bump mapping, this allows the perception of surface deformations. however, our method is image based, and photographs of a surface under varying lighting conditions can be used to construct these maps. unlike bump maps, these polynomial texture maps (ptms) also capture variations due to surface self-shadowing and interreflections, which enhance realism. surface colors can be efficiently reconstructed from polynomial coefficients and light directions with minimal fixed-point hardware. we have also found ptms useful for producing a number of other effects such as anisotropic and fresnel shading models and variable depth of focus. lastly, we present several reflectance function transformations that act as contrast enhancement operators. we have found these particularly useful in the study of ancient archeological clay and stone writings. tom malzbender dan gelb hans wolters predicting the drape of woven cloth using interacting particles we demonstrate a physically-based technique for predicting the drape of a wide variety of woven fabrics. the approach exploits a theoretical model that explicitly represents the microstructure of woven cloth with interacting particles, rather than utilizing a continuum approximation.
by testing a cloth sample in a kawabata fabric testing device, we obtain data that is used to tune the model's energy functions, so that it reproduces the draping behavior of the original material. photographs, comparing the drape of actual cloth with visualizations of simulation results, show that we are able to reliably model the unique large-scale draping characteristics of distinctly different fabric types. david e. breen donald h. house michael j. wozny a method of generating stone wall patterns kazunori miyata the space efficiency of a forest of quadtrees over a quadtree data structure (abstract only) forest of quadtrees is a refinement of a quadtree data structure that is used to represent planar regions. forest of quadtrees provides space savings over regular quadtrees by concentrating vital information. this paper presents the construction of a quadtree for a planar region enclosed in a 2n × 2n image. this quadtree representation is then used to present the construction of a forest of quadtrees for the same region. examples of various regions represented as a quadtree and the space savings provided by a forest of quadtrees will then be given. nancy gautier the vortex steven churchill managing latency in complex augmented reality systems marco c. jacobs mark a. livingston andrei state a framework for the analysis of error in global illumination algorithms in this paper we identify sources of error in global illumination algorithms and derive bounds for each distinct category. errors arise from three sources: inaccuracies in the boundary data, discretization, and computation. boundary data consists of surface geometry, reflectance functions, and emission functions, all of which may be perturbed by errors in measurement or simulation, or by simplifications made for computational efficiency. discretization error is introduced by replacing the continuous radiative transfer equation with a finite-dimensional linear system, usually by means of boundary elements and a corresponding projection method. finally, computational errors perturb the finite-dimensional linear system through imprecise form factors, inner products, visibility, etc., as well as by halting iterative solvers after a finite number of steps. using the error taxonomy introduced in the paper we examine existing global illumination algorithms and suggest new avenues of research. james arvo kenneth torrance brian smits topology preserving compression of 2d vector fields suresh k. lodha jose c. renteria krishna m. roskin determination of the root system of semisimple lie algebras from the dynkin diagram h. schlegel efficient learning of continuous neural networks we describe an efficient algorithm for learning from examples a class of feedforward neural networks with real inputs and outputs in a real-value generalization of the probably approximately correct (pac) model. these networks can approximate an arbitrary function with an arbitrary precision. the learning algorithm can accommodate a fairly general worst-case noise model. the main improvement over previous work is that the running time of the algorithm grows only polynomially as the size of the target network increases (there is still an exponential dependence on the dimension of the input space, however). the main computational tool is an iterative "loading" algorithm which adds new hidden units to the hypothesis network sequentially. this avoids the difficult problem of optimizing the weights of all units simultaneously. 
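the incremental flavor of such a "loading" scheme can be caricatured with a toy greedy procedure in python (a sketch under loose assumptions, not the algorithm analyzed in the abstract above): hidden units with fixed random input weights are appended one at a time, and after each addition only the output layer is refit by least squares, so the weights of all units are never optimized simultaneously.

    # toy greedy "loading"-style sketch (illustrative only; not the abstract's algorithm)
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)      # 1-d inputs
    y = np.sin(3.0 * x).ravel()                          # target function samples

    hidden = np.empty((200, 0))                          # activations of accepted units
    for _ in range(20):                                  # add hidden units one at a time
        w, b = rng.normal(size=(1,)), rng.normal()       # fixed random input weights
        unit = np.tanh(x @ w + b).reshape(-1, 1)         # candidate unit's activations
        hidden = np.hstack([hidden, unit])               # append the new unit
        out, *_ = np.linalg.lstsq(hidden, y, rcond=None) # refit the output layer only
    print("rms error:", np.sqrt(np.mean((hidden @ out - y) ** 2)))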
pascal koiran optimization of simulation responses in a multicomputing environment this paper describes the application of experimental design techniques to computer simulation in a multicomputing environment. three principal areas of experimental design are considered: (1) factor screening experiments; (2) experiments of comparison; and (3) response surface methodology. william e. biles h. tamer ozmen the design of the dipmeter advisor system the dipmeter advisor system [11] attempts to emulate human expert performance in an important and specialized oil well-log interpretation task. the system is currently being used in a small number of schlumberger field log interpretation centers as an aid to human dipmeter interpreters. in this paper, we describe the problem just enough to establish the vocabulary for discussing the program and the characteristics of the domain. we then present the internal structure of the program and attempt to put it in perspective with respect to first-generation expert systems. we discuss ways in which characteristics of the task domain impose constraints on the design or" signal interpretation programs, and attempt to extract knowledge useful to future expert system developers. reid g. smith robert l. young improving static and dynamic registration in an optical see-through hmd in augmented reality, see-through hmds superimpose virtual 3d objects on the real world. this technology has the potential to enhance a user's perception and interaction with the real world. however, many augmented reality applications will not be accepted until we can accurately register virtual objects with their real counterparts. in previous systems, such registration was achieved only from a limited range of viewpoints, when the user kept his head still. this paper offers improved registration in two areas. first, our system demonstrates accurate static registration across a wide variety of viewing angles and positions. an optoelectronic tracker provides the required range and accuracy. three calibration steps determine the viewing parameters. second, dynamic errors that occur when the user moves his head are reduced by predicting future head locations. inertial sensors mounted on the hmd aid head-motion prediction. accurate determination of prediction distances requires low-overhead operating systems and eliminating unpredictable sources of latency. on average, prediction with inertial sensors produces errors 2-3 times lower than prediction without inertial sensors and 5-10 times lower than using no prediction at all. future steps that may further improve registration are outlined. ronald azuma gary bishop notes on conceptual representations underlying concepts of "conceptual modelling" are examined. a systematic layered approach is presented in order to improve techniques recently used. e knuth l hannak a hernadi the a -buffer, an antialiased hidden surface method the a-buffer (anti-aliased, area-averaged, accumulation buffer) is a general hidden surface mechanism suited to medium scale virtual memory computers. it resolves visibility among an arbitrary collection of opaque, transparent, and intersecting objects. using an easy to compute fourier window (box filter), it increases the effective image resolution many times over the z-buffer, with a moderate increase in cost. the a-buffer is incorporated into the reyes 3-d rendering system at lucasfilm and was used successfully in the "genesis demo" sequence in star trek ii. 
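a minimal sketch of the box-filter coverage idea behind the a-buffer entry above, using a hypothetical 4 x 4 subpixel sample grid and opaque fragments only; the actual mechanism also handles transparency and intersecting fragments.

    import numpy as np

    def coverage_mask(edges, n=4):
        # a convex fragment is described by half-plane tests (a, b, c), where
        # a*x + b*y + c >= 0 keeps the sample; samples sit at the centres of an
        # n x n subpixel grid inside the unit pixel.
        coords = (np.arange(n) + 0.5) / n
        sy, sx = np.meshgrid(coords, coords, indexing='ij')
        mask = np.ones((n, n), dtype=bool)
        for a, b, c in edges:
            mask &= (a * sx + b * sy + c) >= 0.0
        return mask

    def box_filtered_color(fragments, background, n=4):
        # fragments: list of (color, mask) in front-to-back order, all opaque;
        # each subpixel sample takes the colour of the first fragment covering it,
        # and the pixel colour is the plain average over samples (box filter).
        taken = np.zeros((n, n), dtype=bool)
        color = np.zeros(3)
        for frag_color, mask in fragments:
            new = mask & ~taken
            color += np.asarray(frag_color, dtype=float) * new.sum() / (n * n)
            taken |= mask
        color += np.asarray(background, dtype=float) * (~taken).sum() / (n * n)
        return color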
loren carpenter a consistent hierarchical representation for vector data randal c. nelson hanan samet depth-order point classification techniques for csg display algorithms constructive solid geometry (csg) defines objects as boolean combinations (csg trees) of primitive solids. to display such objects, one must classify points on the surfaces of the primitive solids with respect to the resulting composite object, to test whether these points lie on the boundary of the composite object or not. although the point classification is trivial compared to the surface classification (i.e., the computation of the composite object), for csg models with a large number of primitive solids (large csg trees), the point classification may still consume a considerable fraction of the total processing time. this paper presents an overview of existing and new efficiency-improving techniques for classifying points in depth order. the different techniques are compared through experiments. frederik w. jansen advanced distributed simulation (keynote speech): what we learned, where to go next w. h. lunceford modeling compressed full-motion video benjamin melamed ray tracing trimmed rational surface patches tomoyuki nishita thomas w. sederberg masanori kakimoto emacs the extensible, customizable self-documenting display editor emacs is a display editor which is implemented in an interpreted high level language. this allows users to extend the editor by replacing parts of it, to experiment with alternative command languages, and to share extensions which are generally useful. the ease of extension has contributed to the growth of a large set of useful features. this paper describes the organization of the emacs system, emphasizing the way in which extensibility is achieved and used. this report describes work done at the artificial intelligence laboratory of the massachusetts institute of technology. support for the laboratory's research is provided in part by the advanced research projects agency of the department of defense under office of naval research contract n00014-80-c-0505. richard m. stallman algorithmic determination of commutation relations for lie symmetry algebras of pdes g. j. reid i. g. lisle a. boulton a. d. wittkopf time critical lumigraph rendering peter-pike sloan michael f. cohen steven j. gortler knowledge-based understanding on a small machine one problem plaguing current knowledge-based systems is the acquisition of the information that is represented in the knowledge base. most knowledge processing systems, such as expert systems and more recent case-based systems, are either based on static data that has already been entered into the system, or use human intervention to enter the information into the system in the correct format. both of these methods greatly reduce the power and flexibility of the system since, presumably, an intelligent knowledge-based system would make better decisions provided with more information. one resource that provides a vast amount of information is written text, such as journals, newspapers and on-line news wires. systems that use information provided by such resources could greatly benefit if they automatically acquired this information. this paper presents research directed at the development of such a system on a small machine. 
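a small sketch of the basic point-classification step that the csg entry above sets out to accelerate: membership of a point in a boolean combination of primitives, evaluated by walking the csg tree (the primitive set and operator names here are chosen for illustration).

    import numpy as np

    def classify(node, p):
        # true if point p lies inside the solid described by the csg tree `node`;
        # node is ('sphere', centre, radius), ('box', lo, hi), or
        # (op, left, right) with op in {'union', 'intersect', 'subtract'}.
        kind = node[0]
        if kind == 'sphere':
            _, c, r = node
            return float(np.linalg.norm(np.asarray(p) - np.asarray(c))) <= r
        if kind == 'box':
            _, lo, hi = node
            q = np.asarray(p)
            return bool(np.all(np.asarray(lo) <= q) and np.all(q <= np.asarray(hi)))
        op, left, right = node
        a, b = classify(left, p), classify(right, p)
        return {'union': a or b, 'intersect': a and b, 'subtract': a and not b}[op]

    # example: a unit sphere with its positive octant cut away
    tree = ('subtract', ('sphere', (0, 0, 0), 1.0), ('box', (0, 0, 0), (2, 2, 2)))
    print(classify(tree, (-0.5, 0.0, 0.0)))   # true: inside the sphere, outside the box

depth-order techniques such as those surveyed in the entry avoid re-evaluating the whole tree for every surface point.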
benjamin moreland a comparison of coordinated planning methods for cooperating rovers steve chien anthony barrett tara estlin gregg rabideau modeling and rendering waves: wave-tracing using beta-splines and reflective and refractive texture mapping. the graphical simulation of a certain subset of hydrodynamics phenomena is examined. new algorithms for both modeling and rendering these complex phenomena are presented. the modeling algorithms deal with wave refraction in an ocean. waves refract in much the same way as light. in both cases, the equation that controls the change in direction is snell's law. ocean waves are continuous but can be discretely decomposed into wave rays or wave orthogonals. these wave orthogonals are wave-traced in a manner similar to the rendering algorithm of ray-tracing. the refracted wave orthogonals are later traversed and their height contributions to the final surface are calculated using a sinusoidal shape approximation and the principle of wave superposition. the surface is then represented by beta-splines, using the tension (or β2) shape parameter to easily add more complexity to the surface. the rendering algorithms are based on the use of texture maps and fresnel's law of reflection. in each algorithm, two texture maps are used to simulate reflection and refraction. based on surface normal orientation and fresnel's law, a weighting is calculated that determines what fractions of reflected color and refracted color are assigned to a point. these algorithms are more efficient, though less accurate, alternatives to standard ray-tracing techniques. pauline y. ts'o brian a. barsky parallel volume ray-casting for unstructured-grid data on distributed-memory architectures kwan-liu ma variable-precision rendering xuejun hao amitabh varshney parallel polygon scan conversion on hypercube multiprocessors john jingfu jenq how to move (physically speaking) in a multi-agent world (abstract) jean- claude latombe system-level simulation of a next generation, multi-sensor concurrent signal processor a new generation of processors is now emerging which addresses the modular processing requirements for a wide range of multispectrum applications envisioned for the late-20th/early-21st century. due to the complexity of interaction among software modules and hardware devices simulation techniques are required to verify that specific configurations support application requirements. this paper demonstrates how simulation results can provide useful information for validation objectives. for this study, a system-level architectural model of one such processor was written in a conventional programming language oriented toward simulation of computer environments. james f. engler system identification using frequency domain methodology arnold buss ai: inventing a new kind of machine william j. clancey a topology simplification method for 2d vector fields xavier tricoche gerik scheuermann hans hagen a practical analytic model for daylight a. j. preetham peter shirley brian smits is this a quadrisected mesh? in this paper we introduce a fast and efficient linear time and space algorithm to detect and reconstruct uniform loop subdivision structure, or triangle quadrisection, in irregular triangular meshes. instead of a naive sequential traversal algorithm, and motivated by the concept of _covering surface_ in algebraic topology, we introduce a new algorithm based on global connectivity properties of the _covering mesh_. we consider two main applications for this algorithm. 
the first one is to enable interactive modelling systems that support loop subdivision surfaces to use popular interchange file formats which do not preserve the subdivision structure, such as vrml, without loss of information. the second application is to improve the compression efficiency of existing lossless connectivity compression schemes, by optimally compressing meshes with loop subdivision connectivity. extensions to other popular uniform subdivision schemes such as catmull-clark and doo-sabin are relatively straightforward but will be studied elsewhere. gabriel taubin an integrated shell and methodology for rapid development of knowledge-based agents gheorghe tecuci mihai boicu kathryn wright seok won lee dorin marcu michael bowman agent development with jackal r. scott cost tim finin yannis labrou xiaocheng luan yun peng ian soboroff james mayfield akram boughannam editorial steven pemberton visualizing large-scale telecommunication networks and services (case study) visual exploration of massive data sets arising from telecommunication networks and services is a challenge. this paper describes swift-3d, an integrated data visualization and exploration system created at at&t labs for large scale network analysis. swift-3d integrates a collection of interactive tools that includes pixel-oriented 2d maps, interactive 3d maps, statistical displays, network topology diagrams and an interactive drill-down query interface. example applications are described, demonstrating a successful application to analyze unexpected network events (high volumes of unanswered calls), and comparison of usage of an internet service with voice network traffic and local access coverage. eleftherios e. koutsofios stephen c. north russell truscott daniel a. keim how to test in subexponential time whether two points can be connected by a curve in a semialgebraic set a subexponential-time algorithm is designed which finds the number of connected components of a semi-algebraic set given by a quantifier-free formula of the first-order theory of real closed fields (for a rather wide class of real closed fields, cf. [gv 88], [gr 88]). moreover, the algorithm allows one to test, for any two points of the semi-algebraic set, whether they belong to the same connected component. decidability of the mentioned problems follows from the quantifier elimination method in the first-order theory of real closed fields, described for the first time by a. tarski ([ta 51]). however, the complexity bound of this method is nonelementary; in particular, one cannot estimate it by any finite iteration of the exponential function. g. collins ([co 75]) has proposed a construction of cylindrical algebraic decomposition, which allows one to solve these problems in exponential time. for an arbitrary ordered field f we denote by f~ its uniquely defined real closure. in the sequel we consider input polynomials over the ordered ring zm = z[δ1, …, δm] ⊂ qm = q(δ1, …, δm), where δ1, …, δm are algebraically independent elements over q and the ordering in the field qm is defined as follows. the element δ1 is infinitesimal with respect to q (i.e. 0 < δ1 < α for any rational number 0 < α ∈ q) and for each 1 ≤ i < m the element δi+1 > 0 is infinitesimal with respect to the field qi (cf. [gv 88], [gr 88]). thus, let an input quantifier-free formula Π of the first-order theory of real closed fields be given, containing atomic subformulae of the form fi ≥ 0, 1 ≤ i ≤ k, where fi ∈ zm[x1, …, xn].
any rational function g ∈ qm(y1, …, y3) can be represented as g = g1/g2 where the polynomials g1, g2 ∈ zm[y1, …, y3] are relatively prime. denote by l(g) the maximum of the bit-lengths of the (integer) coefficients of the polynomials g1, g2 (in the variables y1, …, y3, δ1, …, δm). in the sequel we assume that the following bounds are valid: deg_{x1, …, xn}(fi) < d, deg_{δ1, …, δm}(fi) < d0, l(fi) ≤ m, 1 ≤ i ≤ k (1), where d, d0, m are some integers. then the bit-length of the formula Π can be estimated by the value l = k m d^n d0^m (cf. [cg 83], [gr 86]). note that in the case m = 0, i.e. for polynomials with integer coefficients, the algorithms from [co 75] allow one to produce the connected components (in particular, to solve the problems considered in the present paper) within polynomial in m (kd)^(2^o(n)) time. we use the notation h1 ≤ p(h2, …, ht) for functions h1 > 0, …, ht > 0 if for suitable integers c, γ the inequality h1 ≤ c(h2 · … · ht)^γ is fulfilled. recall that a semialgebraic set (in f^n, where f is a real closed field) is a set {Π} ⊂ f^n of all points satisfying a certain quantifier-free formula Π of the first-order theory of the field f with atomic subformulae of the form (g ≥ 0), where the polynomials g ∈ f[x1, …, xn]. a semialgebraic set {Π} ⊂ (q~m)^n is (uniquely) decomposable into a union of a finite number of connected components {Π} = ∪_{1≤i≤t} {Πi}, each of them in its turn being a semialgebraic set determined by an appropriate quantifier-free formula Πi of the first-order theory of the field qm (see e.g. [co 75] for the field f = r; for an arbitrary real closed field one can invoke tarski ([ta 51])). note that t ≤ (kd)^o(n) (see e.g. [gv 88], [gr 88]). we use the following way of representing the points u = (u1, …, un) ∈ (q~m)^n (cf. [gv 88]). firstly, for the field qm(u1, …, un) a primitive element θ is produced such that qm(u1, …, un) = qm[θ], herewith a minimal polynomial φ(z) ∈ qm[z] for θ is indicated; furthermore θ = ∑_{1≤i≤n} αi ui for some integers 0 ≤ α1, …, αn ≤ deg_z(φ). also the expressions ui = ∑_{0≤j<deg_z(φ)} βi^(j) θ^j are yielded, where βi^(j) ∈ qm. secondly, for specifying the root θ of the polynomial φ, a sequence of the signs of the derivatives of all orders φ′(θ), φ^(2)(θ), …, φ^(deg(φ))(θ) of the polynomial φ at the point θ is given. thom's lemma (see e.g. [fgm 88]) entails that the latter condition uniquely determines the root θ of φ. we say that a point u satisfies the (d, d0, m)-bound if the following inequalities hold: deg_z(φ) < d; deg_{δ1, …, δm}(φ), deg_{δ1, …, δm}(βi^(j)) ≤ d0; l(φ), l(βi^(j)) ≤ m. then the bit-length of the representation of the point u does not exceed p(m, d, d0^m, n) (cf. [gv 88], [gr 88]). the main purpose of the paper is to prove the following theorem (see also [vg 91]). d. y. grigoriev grammex: defining grammars by example henry lieberman bonnie a. nardi david wright representing teleological structure in case-based legal reasoning: the missing link we argue that robust case-based models of legal knowledge that represent the way in which practicing professionals use legal decisions must contain a deeper domain model that represents the purposes behind the rules articulated in the cases.
we propose a model for representing the teleological components of legal decisions, and we suggest a method for utilizing this representation in a hypo-like framework for case-based legal argument. donald h. berman carole d. hafner evaluation of a prototype visualization for distributed simulations james h. graham irfan s. karachiwala adel s. elmaghraby a polygonal approximation to direct scalar volume rendering peter shirley allan tuchman some observations on modelling case based reasoning with formal argument models in this paper i shall explore the modelling of case based reasoning using a formal model of argument, taking the approach of prakken and sartor as my starting point. i first consider their method of representing cases, and describe how --- if we restrict ourselves to independent boolean factors --- we can fruitfully model the domain as a partial order on rules. i then consider the issues relating to quantifiable factors, as used in hypo, and factor hierarchies, as used in cato. the former presents some difficulties for modelling as a partial order, and, coupled with the latter, forces us to recognise two different kinds of reasoning used in concept application which have different implications for representing the domain. i then present some conclusions arising from the discussion. t. j. m. bench-capon nurbs in vrml in the days of vrml 1.0, nurbs seemed to be too complex to be adapted to the specification. current development of hardware compels us to reevaluate this idea: while cpu clocks break the 1-ghz barrier, users still have to cope with 56k modems. nurbs meet exactly these demands. a nurbs description is a compact storage form, but its evaluation requires more computational effort. in addition, nurbs can be utilized for morphing effects and they provide a means for a smooth lod. adopting trimmed nurbs allows visualization of complex cad models in vrml. this paper gives an overview of the proposed nodes and their implementation and applications. we take a closer look at lod, animations and trimmed nurbs. finally, we look ahead and briefly touch on an emerging new representation, subdivision surfaces. holger grahn thomas volk hans j. wolters case studies of sneps stuart c. shapiro fast oriented line integral convolution for vector field visualization via the internet rainer wegenkittl eduard gröller the monte carlo estimation fuction variation george s. fishman multipurpose web publishing using html, xml, and css håkon wium lie janne saarela sentencing and information management: consistency and the particularities of a case sentencing practice is often considered inconsistent. the use of it and ai to support sentencing decisions in order to make them more consistent, deserves a lot of attention nowadays. the movement for more fair and consistent sentencing in the netherlands led to the development of several jdsss (judicial decision support system) that already are or shortly will be used by the public prosecution and the judiciary. criticism towards the use of it and ai for this sentencing purpose is often directed at the resulting standardization that tends to overlook individualized sentencing. in this paper we describe the current situation of computerization in the domain of criminal justice. we address information management within the criminal justice chain (police--- 'public prosecution' \---judge). current practice is evaluated and future developments are addressed. 
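as a brief technical aside to the nurbs-in-vrml entry above, here is a minimal cox-de boor evaluation of a point on a nurbs curve, assuming a clamped knot vector and the half-open parameter range; it is a sketch for illustration, not the proposed vrml node interface.

    def basis(i, p, u, knots):
        # cox-de boor recursion for the b-spline basis function n_{i,p}(u)
        if p == 0:
            return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
        left = right = 0.0
        if knots[i + p] != knots[i]:
            left = (u - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, u, knots)
        if knots[i + p + 1] != knots[i + 1]:
            right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * basis(i + 1, p - 1, u, knots)
        return left + right

    def nurbs_point(u, degree, control, weights, knots):
        # rational combination: sum(w_i n_i(u) p_i) / sum(w_i n_i(u))
        dim = len(control[0])
        num, den = [0.0] * dim, 0.0
        for i, (cp, w) in enumerate(zip(control, weights)):
            b = w * basis(i, degree, u, knots)
            den += b
            num = [a + b * c for a, c in zip(num, cp)]
        return [a / den for a in num]

    # quarter circle as a degree-2 nurbs arc (standard control points and weights)
    import math
    pts = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    wts = [1.0, math.sqrt(2) / 2, 1.0]
    kts = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
    print(nurbs_point(0.5, 2, pts, wts, kts))   # approximately (0.7071, 0.7071)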
we argue that information management and electronic versions of case files could help in providing especially the judge with more relevant information. this could lead to effective consideration of two main aspects of criminal cases: establishing the facts and obeying procedural rules. marius j. a. duker arno r. lodder a shading model for atmospheric scattering considering luminous intensity distribution of light sources tomoyuki nishita yasuhiro miyawaki eihachiro nakamae computational efficiency evaluation in output analysis halim damerdji shane g. henderson peter w. glynn ports: a parallel, optimistic, real-time simulator this paper describes issues concerning the design of an optimistic parallel discrete event simulation system that executes in environments that impose real-time constraints on the simulator's execution. two key problems must be addressed by such a system. first, the timing characteristics of the parallel simulator must be sufficiently predictable to allow one to guarantee that real-time deadlines for completing simulation computations will be met. second, the optimistic computation must be able to interact with its surrounding environment with as little latency as possible, necessitating rapid commitment of i/o operations. to address the first question, we show that optimistic simulators that never send incorrect messages (sometimes called "aggressive-no-risk" simulators) provide sufficient predictability to allow traditional schedulability analysis techniques commonly used in real-time systems to be applied. we show that incremental state saving techniques introduce sufficient unpredictability that they are not well-suited for real-time environments. we observe that the traditional "lowest timestamp first" scheduling policy used in many optimistic parallel simulation systems is an optimal (in the real-time sense) scheduling algorithm when event timestamps and real-time deadlines are the same. finally, to address the question of rapid commitment of i/o operations, we utilize a continuous gvt computation scheme for shared-memory multiprocessors where a new value of gvt is computed after processing each event in the simulation. these ideas are incorporated in a parallel, optimistic, real-time simulation system called ports. initial performance measurements of the shared-memory based ports system executing on a kendall square research multiprocessor are presented. initial performance results are encouraging, demonstrating that ports achieves performance approaching that of a conventional time warp system for the benchmark programs that were tested. kaushik ghosh richard m. fujimoto karsten schwan temporal analysis of activity systems enrique v. kortright a formal foundation for process modeling process modeling is ubiquitous in business and industry. while a great deal of effort has been devoted to the formal and philosophical investigation of processes, surprisingly little research connects this work to real world process modeling. the purpose of this paper is to begin making such a connection. to do so, we first develop a simple mathematical model of activities and their instances based upon the model theory for the nist process specification language (psl), a simple language for describing these entities, and a semantics for the latter in terms of the former, and a set of axioms for the semantics based upon the nist process specification language (psl).
on the basis of this foundation, we then develop a general notion of a process model, and an account of what it is for such a model to be realized by a collection of events. christopher menzel michael gruninger junk food mark knox der eindecker walzer hidetoshi oneda real-time bump map synthesis in this paper we present a method that automatically synthesizes bump maps at arbitrary levels of detail in real-time. the only input data we require is a normal density function; the bump map is generated according to that function. it is also used to shade the generated bump map. the technique allows one to zoom infinitely into the surface, because more (consistent) detail can be created on the fly. the shading of such a surface is consistent when displayed at different distances to the viewer (assuming that the surface structure is self-similar). the bump map generation and the shading algorithm can also be used separately. jan kautz wolfgang heidrich hans-peter seidel panovr sdk - a software development kit for integrating photo-realistic panoramic images and 3-d graphical objects into virtual worlds cheng-chin chiang alex huang tsing-shin wang matthew huang yunn-yen chen jun-wei hsieh ju-wei chen tse cheng the structure of norm conditions and nonmonotonic reasoning in law giovanni sartor the morphological cross-dissolve kevin novins james arvo star wars episode 1: the phantom menace - research and development highlights christian rouet ribena cyberries geoff wyvill knowledge acquisition from multiple experts w. a. wolf realizing opengl: two implementations of one architecture mark j. kilgard first union: launch yves metraux simulation modeling with insight using insight, simulation models can be built and evaluated with less time and effort than previously possible. insight defines a vocabulary for systems analysis and provides a small set of modeling symbols to represent the basic processes found in real systems. by combining the symbols graphically and filling in needed information, immediately useful simulation models can be constructed quickly. simple models can be greatly embellished using the rich variety of insight specifications without resorting to any computer programming. the visual model is simply translated into insight statements for automatic execution. procedures incorporated in insight enable sound statistical analysis to be used routinely. such features reduce the cost and difficulty in obtaining representative results. the net result is that attention is focused on modeling and analysis. stephen d. roberts a simulation for the justification and planning of a continuous slab caster two major simulation models were developed to aid in planning the multi-million dollar expansion of steelmaking facilities at bethlehem steel's sparrows point, maryland, plant. one model looks at the flow of liquid metal through steelmaking facilities while the second model represents the handling of solid slabs produced by the new continuous caster. both simulation models were written using "bethsim", a bethlehem steel-developed simulation language used primarily for discrete event simulation. bethsim uses a network combined with user-written fortran subroutines and package subroutines that aid in the modeling of complex manufacturing elements such as roller lines, crane movements, and storage areas. output from the models consists of standard bethsim reports and customized reports produced by fortran subroutines and a sas postprocessor. beverly j. wolfe mark a.
christobek the challenge of qualitative spatial reasoning a. g. cohn advances in parameter estimation from event count data in modeling a system, identifying the times at which the state of the system changes is important. in particular, the probability distributions of the interevent times (time lengths between consecutive events) of each process (such as interarrival times) are required in a simulation model. often information characterizing the interevent times may be unavailable or difficult to obtain. on the other hand, the number of event per unit time is simple and economical to collect. in the last few years, there has been much research in stochastic modeling using point processes by a varied group of researchers. despite this extensive and diverse research effort, little attention has been given to estimation techniques based on event count data rather than event time data. this paper discusses recent advances (in the last year) in estimation based on event count data. ronald dattero consolidated manipulation of virtual and real objects yoshifumi kitamura fumio kishino gnu/maverik: a micro-kernel for large-scale virtual environments this paper describes a publicly available virtual reality (vr) system, gnu/maverik, which forms one component of a complete 'vr operating system'. we give an overview of the architecture of maverik, and show how it is designed to use application data in an intelligent way, via a simple, yet powerful, callback mechanism which supports an object-oriented framework of classes, objects and methods. examples are given which illustrate different uses of the system, and typical performance levels. roger hubbold jon cook martin keates simon gibson toby howard alan murta adrian west steve pettifer an ease of use evaluation of an integrated document processing system designers of systems intended to be easy to use have many guidelines available to them in the literature. most of these recommendations are based on the intuition and experiences of particular designers with particular systems. very few of them have been evaluated experimentally, so one must be cautious not to attribute more authority to these guidelines than they deserve [6]. this paper summarizes the results of an experimental evaluation of the etude text processing system [8]. section 2 provides a brief overview of etude. section 3 describes the development of suitable ease of use criteria. section 4 presents the experimental protocol. section 5 discusses the results of the evaluation. a complete description of the experiment can be found in [7]. michael good a perceptually based physical error metric for realistic image synthesis mahesh ramasubramanian sumanta n. pattanaik donald p. greenberg advances in new display technology (panel session) sol sherr ifay chang thomas maloney peter pleshko elliot schlam peter seats ethnocritical questions for working with translations, interpretation and their stakeholders michael j. muller exploiting the architecture of dynamic systems xavier boyen daphne koller modeling and simulation in product development as part of the product development cycle, the xerox corporation has evolved a modeling and simulation methodology. this paper describes the approach, its use, and value in product development. to provide a common basis for understanding the modeling activities to be discussed, a brief overview of the xerographic process as used in our current duplicator copier products is described. 
each of the functions is discussed in terms of how they contribute to the overall systems model and how they are linked together in this model. a brief description of the data acquisition system used for model validation is included. a discussion of the fortran-based simulation language, analysis tools, and its flexibility is presented. the application of this modeling approach at xerox is illustrated. the applications are the optimization of performance parameters, subsystem requirements, tolerances, aging, and setpoints, as well as the examination of the performance of a population. these population studies encompass customer utilization and service interactions. the advantages of this process will be discussed, including the design efficiencies enabled, service policy optimization, and field performance projection of the product population. douglas g. boike edward h. ernst extracting feature lines from 3d unstructured grids kwan-liu ma victoria interrante knowledge based simulation of an organization: an agent's behavior in a dynamic environment sigrid unseld neuroanimator: fast neural network emulation and control of physics-based models radek grzeszczuk demetri terzopoulos geoffrey hinton simulation-based inference for plan monitoring neal lesh james allen piks and biif explored this issue's column is an introduction to the image processing and interchange standards (iso/iec 12087) developed by iso/iec jtc i/ sc 24 (computer graphics and image processing.). first, bill pratt of pixelsoft, the editor of the programmer's imaging kernel system (piks) international standard (iso/iec 12087-1:1995), describes piks and how it can be applied. next, i describe the basic image interchange format (biif), iso/iec 12087-1:1998, and its applications. george s. carson william k. pratt creating generative models from range images ravi ramamoorthi james arvo scripting highly autonomous simulation behavior using case-based reasoning niels catsimpoolas jed marti using visual texture for information display results from vision research are applied to the synthesis of visual texture for the purposes of information display. the literature surveyed suggests that the human visual system processes spatial information by means of parallel arrays of neurons that can be modeled by gabor functions. based on the gabor model, it is argued that the fundamental dimensions of texture for human perception are orientation, size (1/frequency), and contrast. it is shown that there are a number of trade-offs in the density with which information can be displayed using texture. two of these are (1) a trade-off between the size of the texture elements and the precision with which the location can be specified, and (2) the precision with which texture orientation can be specified and the precision with which texture size can be specified. two algorithms for generating texture are included. colin ware william knight when is a finitely generated algebra of poincare-birkhoff-witt type? j. l. bueso j. gomez torrecillas f. j. lobillo estimating the cost of throttled execution in time warp samir r. das height distributional distance transform methods for height field ray tracing height distributional distance transform (hddt) methods are introduced as a new class of methods for height field ray tracing. hddt methods utilize results of height field preprocessing. 
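a hedged, brute-force sketch of the kind of height-field preprocessing meant in the hddt entry: for every cell, the widest empty cone balanced on its surface point, expressed as a horizontal-distance-per-unit-height ratio. the details differ from the published transforms, the field is treated as point samples only, and the ray-marching use is only outlined in comments.

    import numpy as np

    def cone_ratios(height):
        # for each cell, the largest ratio r such that the upward-opening cone
        # "x-y distance <= r * (z - height[cell])" above the cell contains no other
        # sample of the surface; brute force over all cell pairs, fine for small fields.
        rows, cols = height.shape
        ys, xs = np.mgrid[0:rows, 0:cols]
        ratio = np.full((rows, cols), np.inf)
        for i in range(rows):
            for j in range(cols):
                dh = height - height[i, j]           # how much higher other cells are
                dist = np.hypot(ys - i, xs - j)      # horizontal cell distance
                higher = dh > 0
                if higher.any():
                    ratio[i, j] = np.min(dist[higher] / dh[higher])
        return ratio

    # while marching a ray above cell (i, j) at height z > height[i, j], the ray
    # cannot hit any sampled surface point while it stays inside this empty cone,
    # so the step can be the distance to the cone boundary rather than one cell.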
the preprocessing involves computing a height field transform representing an array of cone-like volumes of empty space above the height field surface that are as wide as possible. there is one cone-like volume balanced on its apex centered above each height field cell. various height field transforms of this type are developed. each is based on distance transforms of height field horizontal cross-sections. hddt methods trace rays through empty cone-like volumes instead of through successive height field cells. the performance of hddt methods is evaluated experimentally against existing height field ray tracing methods. david w. paglieroni sidney m. petersen latent semantic analysis of textual data preslav nakov efficient clipping of arbitrary polygons clipping 2d polygons is one of the basic routines in computer graphics. in rendering complex 3d images it has to be done several thousand times. efficient algorithms are therefore very important. we present such an efficient algorithm for clipping arbitrary 2d-polygons. the algorithm can handle arbitrary closed polygons, specifically where the clip and subject polygons may self-intersect. the algorithm is simple and faster than vatti's (1992) algorithm, which was designed for the general case as well. simple modifications allow determination of union and set-theoretic differences of two arbitrary polygons. gunther greiner kai hormann fiber measurement using digital image processing leland best robert stobart michael magee el arca/l'arche rodrigo munoz kuri bruno follet minimizing sensors task in robot plan monitoring alfredo milani a user interface to support experimental design and data exploration of complex, deterministic simulations l. tandy herren pamela k. fink christopher j. moehle non-linear approximation of reflectance functions eric p. f. lafortune sing-choong foo kenneth e. torrance donald p. greenberg a grid-based tool for knowledge acquisition: validation with multiple experts m. l. g. shaw theme park computer graphics and interactivity phil hettema a modular approach to simulation of robotic systems this paper presents the results of a simulation effort performed as part of the icam robotics program. the initial q-gert simulation model is briefly described, followed by the presentation of a modular approach to simulation of robotic systems. in this approach, predefined q-gert modules, described by simple block diagrams, are assembled to produce a complete model. the system design logic is discussed and the available modules are briefly described. examples are presented in block diagram format. the advantages and limitations of the approach are discussed. d. j. medeiros r. p. sadowski d. w. starks b. s. smith managing geometric complexity with enhanced procedural models phil amburn eric grant turner whitted autoknag: automatic knowledge acquisition for fault isolation expert systems mark w. wheeler m. schneider practical application of existing hypermedia standards and tools lloyd rutledge jacco van ossenbruggen lynda hardman dick c. a. bulterman gedit: a test bed for editing by contiguous gestures gordon kurtenbach bill buxton autonomous systems intelligence the evolution of unmanned and manned spacecraft indicates that significant degrees of autonomous operation for onboard health, fault, or performance management are becoming important.
in particular, technology and mission studies of a permanently manned, evolutionary space station have surfaced the need for new computer system concepts incorporating autonomy and perhaps artificial (machine) intelligence. an evolving autonomous system with mixed degrees of autonomy, interdependent autonomous functions, and perhaps disparate artificial intelligences will require orchestration of the behavior of the entire system. an advanced concept of decision control, involving an explicit sequence of goal-directed decisions, is proposed as an executive machine-intelligent process by which the multiple, specialized "smart programs" or "machine intelligences" of an autonomous system could be controlled. john l. anderson a reflectance model for computer graphics r. l. cook k. e. torrance simulation of memory chip line using an electronics manufacturing simulator douglas n. estremadoyro phillip a. farrington bernard j. schroer james j. swain a knowledge-based system for automated industrial visual inspection planning mu g. jeong suk i. yoo distributed planning and scheduling for enhancing spacecraft autonomy subrata das raffi krikorian walt truszkowski a generalization of algebraic surface drawing james f. blinn surfaces from contours this paper is concerned with the problem of reconstructing the surfaces of three-dimensional objects, given a collection of planar contours representing cross-sections through the objects. this problem has important aplications in biomedical research and instruction, solid modeling, and industrial inspection. the method we describe produces a triangulated mesh from the data points of the contours which is then used in conjunction with a piecewise parametric surface-fitting algorithm to produce a reconstructed surface. the problem can be broken into four subproblems: the correspondence problem (which contours should be connected by the surface?), the tiling problem (how should the contours be connected?), the branching problem (what do we do when there are branches in the surface?), and the surface- fitting problem (what is the precise geometry of the reconstructed surface?) we describe our system for surface reconstruction from sets of contours with respect to each of these subproblems. special attention is given to the correspondence and branching problems. we present a method that can handle sets of contours in which adjacent contours share a very contorted boundary, and we describe a new approach to solving the correspondence problem using a minimum spanning tree generated from the contours. david meyers shelley skinner kenneth sloan an interface for sketching 3d curves jonathan m. cohen lee markosian robert c. zeleznik john f. hughes ronen barzel a progressive global illumination solution considering perceptual factors vladimir volevich jerzy sas karol myszkowski andrei khodulev edward a. kopylov texture-based visibility for efficient lighting simulation lighting simulations using hierarchical radiosity with clustering can be very slow when the computation of fine and artifact-free shadows is needed. to avoid the high cost of mesh refinement associated with fast variations of visibility across receivers, we propose a new hierarchical algorithm in which partial visibility maps can be computed on the fly, using a convolution technique for emitter- receiver configurations where complex shadows are produced. other configurations still rely on mesh subdivision to reach the desired accuracy in modeling energy transfer. 
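a toy version of the convolution idea in the radiosity entry above: a hard blocker mask sampled over the receiver is averaged with a box window to give a fractional, texture-like visibility map; the kernel width here is a hypothetical constant, whereas the method described derives it from the emitter-blocker-receiver configuration.

    import numpy as np

    def soft_visibility(blocker_mask, kernel_size=7):
        # blocker_mask: 2d array with 1 where the source is occluded and 0 where
        # it is visible; averaging over a box window turns the hard mask into a
        # fractional visibility value per receiver sample.
        k = kernel_size
        pad = k // 2
        padded = np.pad(np.asarray(blocker_mask, dtype=float), pad, mode='edge')
        rows, cols = blocker_mask.shape
        vis = np.empty((rows, cols))
        for i in range(rows):
            for j in range(cols):
                vis[i, j] = 1.0 - padded[i:i + k, j:j + k].mean()
        return vis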
in our system, therefore, radiosity is represented as a combination of textures and piecewise-constant or linear contributions over mesh elements at multiple hierarchical levels. we give a detailed description of the gather, push/pull, and display stages of the hierarchical radiosity algorithm, adapted to seamlessly integrate both representations. a new refinement algorithm is proposed, which chooses the most appropriate technique to compute the energy transfer and resulting radiosity distribution for each receiver/transmitter configuration. comprehensive error control is achieved by subdividing either the source or receiver in a traditional manner, or by using a blocker subdivision scheme that improves the quality of shadow masks without increasing the complexity of the mesh. results show that high-quality images are obtained in a matter of seconds for scenes with tens of thousands of polygons. cyril soler f. x. sillion sample-efficient strategies for learning in the presence of noise in this paper, we prove various results about pac learning in the presence of malicious noise. our main interest is the sample size behavior of learning algorithms. we prove the first nontrivial sample complexity lower bound in this model by showing that order of ε/Δ^2 + d/Δ (up to logarithmic factors) examples are necessary for pac learning any target class of {0,1}-valued functions of vc dimension d, where ε is the desired accuracy and η = ε/(1 + ε) - Δ is the malicious noise rate (it is well known that any nontrivial target class cannot be pac learned with accuracy ε and malicious noise rate ≥ ε/(1 + ε), irrespective of the sample complexity). we also show that this result cannot be significantly improved in general by presenting efficient learning algorithms for the class of all subsets of d elements and the class of unions of at most d intervals on the real line. this is especially interesting as we can also show that the popular minimum disagreement strategy needs samples of size dε/Δ^2, hence is not optimal with respect to sample size. we then discuss the use of randomized hypotheses. for these the bound ε/(1 + ε) on the noise rate is no longer true and is replaced by 2ε/(1 + 2ε). in fact, we present a generic algorithm using randomized hypotheses that can tolerate noise rates slightly larger than ε/(1 + ε) while using samples of size d/ε as in the noise-free case. again one observes a quadratic power law (in this case dε/Δ^2, Δ = 2ε/(1 + 2ε) - η) as Δ goes to zero. we show upper and lower bounds of this order. nicolò cesa-bianchi eli dichterman paul fischer eli shamir hans ulrich simon modeling and prediction of dynamic behavior for model-based diagnosis sabine kockskämper learning what is relevant to the effects of actions for a mobile robot matthew d. schmill michael t. rosenstein paul r. cohen paul utgoff a knowledge representation for natural language understanding w. pharr aesthetics of computer graphics (panel session) mihai nadin charles csuri frank dietrich thomas linehan hiroshi kawano one tooth too far michael sanborn a framework for performance evaluation of real-time rendering algorithms in virtual reality ping yuan mark green rynson w. h. lau the evaluation of legal knowledge based systems evaluation strategies to assess the effectiveness of legal knowledge based systems enable strengths and limitations of systems to be accurately articulated.
this facilitates efforts in the research community to develop systems and also promotes the adoption of research prototypes in the commercial world. however, evaluation strategies for systems that operate in a domain as complex as law are difficult to specify. in this paper, we present an evaluation framework put forward by reich and describe how this motivated the evaluation of our systems in australian family law. strategies surveyed include a comparison of linear regression with neural networks, user acceptance surveys, a comparison of system predictions with those from past cases, and a comparison of system outputs with those proposed by a panel of lawyers. specific criteria for the evaluation of explanation facilities are also described. andrew stranieri john zeleznikow synthesizing bidirectional texture functions for real-world surfaces in this paper, we present a novel approach to synthetically generating bidirectional texture functions (btfs) of real-world surfaces. unlike a conventional two-dimensional texture, a btf is a six-dimensional function that describes the appearance of texture as a function of illumination and viewing directions. the btf captures the appearance change caused by visible small- scale geometric details on surfaces. from a sparse set of images under different viewing/lighting settings, our approach generates btfs in three steps. first, it recovers approximate 3d geometry of surface details using a shape-from-shading method. then, it generates a novel version of the geometric details that has the same statistical properties as the sample surface with a non-parametric sampling method. finally, it employs an appearance preserving procedure to synthesize novel images for the recovered or generated geometric details under various viewing/lighting settings, which then define a btf. our experimental results demonstrate the effectiveness of our approach. xinguo liu yizhou yu heung-yeung shum the zonal method for calculating light intensities in the presence of a participating medium holly e. rushmeier kenneth e. torrance a computer animation representing the molecular events of g protein-coupled receptor activation (case study) the molecular events involved in the activation of g protein-coupled receptors, represent a fundamental biochemical process. these events were selected for animation because the mechanism involves both a ligand-receptor conformational shape change, and an enzyme- substrate conformational shape change. expository animation brought this biochemical process to life. zoya maslak douglas j. steel robert j. mcdermott top ten blunders by visual designers literacy in a medium means being able to use it to communicate to our fellow human beings. literacy means that we can recognize good and bad attempts in the medium and avoid the latter while employing the former. by that standard, many, even in the graphics field, are not visually literate.as a consultant i'm called in to fix what's gone awry. i want to share with you some of the most frequently made --- and easily prevented --- blunders by visual designers, especially as they move to electronic media. william horton detecting vortical phenomena in vector data by medium-scale correlation the detection of vortical phenomena in vector data is one of the key issues in many technical applications, in particular in flow visualization. many existing approaches rely on purely local evaluation of the vector data. 
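a hedged sketch of correlating a pre-defined generic vortex with vector data over a medium-scale window, as in the vortex-detection entry above; the template form and window size are illustrative choices, not those tested in the paper.

    import numpy as np

    def vortex_template(size):
        # ideal counter-clockwise vortex: unit velocity perpendicular to the radius
        r = np.arange(size) - (size - 1) / 2.0
        y, x = np.meshgrid(r, r, indexing='ij')
        u, v = -y, x
        norm = np.hypot(u, v)
        norm[norm == 0] = 1.0
        return u / norm, v / norm

    def vortex_score(u, v, size=9):
        # slide the template over the direction field and average the dot products
        # of unit vectors; scores near +1 or -1 mark vortex-like regions of either sense
        tu, tv = vortex_template(size)
        mag = np.hypot(u, v)
        mag[mag == 0] = 1.0
        nu, nv = u / mag, v / mag
        rows = u.shape[0] - size + 1
        cols = u.shape[1] - size + 1
        score = np.zeros((rows, cols))
        for i in range(rows):
            for j in range(cols):
                score[i, j] = np.mean(nu[i:i + size, j:j + size] * tu
                                      + nv[i:i + size, j:j + size] * tv)
        return score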
in order to overcome the limits of a local approach we choose to combine a local method with a correlation of a pre-defined generic vortex with the data in a medium-scale region. two different concepts of a generic vortex were tested on various sets of flow velocity vector data. the approach is not limited to the two generic patterns suggested here. the method was found to successfully detect vortices in cases where other methods fail. h.-g. pagendarm b. henne m. rutten xml to be, vrml to see b. arun a. d. ganguly codex-dp: co-design of communicating systems using dynamic programming jui-ming chang massoud pedram sea based logistics: distribution problems for future global contingencies keebom kang kevin r. gue modeling and simulation peter g. neumann are life-like characteristics useful for autonomous agents? mathew yap ng wee keong martin eldracher space, time, matter and things i present a logical language for describing spatial, temporal and material properties of the physical world. the formalism is ontologically well-founded in the sense that it is interpreted with respect to model structures that have a specific physical interpretation in terms of the distribution of matter in space and time. brandon bennett towards conceptualisation of physical object propositions (abstract only) the representation and understanding of physical objects unify diverse fields in and around artificial intelligence research. they bring together the heuristic and epistemological categories. although much of the concern in the past has been on the former, the recent flood of literature indicates the growing interest in the latter. the components of a physical object configuration are often characterized by uncertainty. they arise often due to unreliability of the sources of information and incomplete information, as in the case of recognition of partially occluded scenes. zadeh and others have advocated a theory of possibility to consider propositions with uncertainty. it appears promising to extend some of these ideas to propositions describing physical object configurations. further, a scene could be described from multiple perspectives resulting in corresponding descriptions. it is often necessary to aggregate these propositions to higher levels of abstraction. the proposition describing stability must take into account the shape, size, and relative orientation of objects among themselves and with respect to supporting structures. in the case of vertical structures the perception of stability is a function of height and base. often the third dimension is not easily fathomed, though some default assignment could be made. on consulting many observers the authors found that in the case of a vertical rectangular structure of width w and height h, a stability indicator s could be defined as s = 2(2w/h)^2 for 0 < 2w/h < 0.5, s = … for 0.5 < 2w/h < 1, and s = 1 for 1 < 2w/h. though other indicators could be defined, we found this to be useful in the scene of our experiment. we have also considered objects of various shapes and formulated stability indicators empirically. when objects are stacked one on top of another, it becomes necessary to combine the stability indicators to obtain the overall stability indicator for the configuration. this is done in the lines of combining evidences in the expert systems. however the boolean operators and, or and not even with modified semantics do not seem to have an obvious parallel in spatial situations.
in the case of a rectangular beam supported by two pillars the stability indicator s for the composite system could be obtained from s=1-(1-s1)(1-s2) where s1 and s2 are the indicators for the supports. expressions for combining evidences obtained from multiple views of the same configurations are formulated by experiment and trial. using such indicators simple rules on different kinds of configurations are developed. the high level reasoner for handling complex scenes is a lisp based system that operates on propositions based on the indicators with a flexible control strategy. we are conceptualizing the stability of the configurations by appealing to commonsense judgements, instead of appealing to the laws of physics involving the determination of centres of gravity and computation of various forces. such an approach would lack the rigor and reliability of the traditional methods. the advantages arise from the simplicity of computation and approximates the way humans rationalize on physical objects. in a robot task planning it is often necessary to avoid complicated computations and obtain working conclusions based on fewer and simplified parameters. r. sadananda nizam uddin ahmed light reflection functions for simulation of clouds and dusty surfaces the study of the physical process of light interacting with matter is an important part of computer image synthesis since it forms the basis for calculations of intensities in the picture. the simpler models used in the past are being augmented by more complex models gleaned from the physics literature. this paper is another step in the direction of assimilating such knowledge. it concerns the statistical simulation of light passing through and being reflected by clouds of similar small particles. (it does not, however, address the cloud structure modeling problem). by extension it can be applied to surfaces completely covered by dust and is therefore a physical basis for various theories of diffuse reflection. james f. blinn feline: fast elliptical lines for anisotropic texture mapping joel mccormack ronald perry keith i. farkas norman p. jouppi tachyon: a constraint-based temporal reasoning model and its implementation jonathan stillman richard arthur andrew deitsch displaced subdivision surfaces in this paper we introduce a new surface representing, the displaced subdivision surface. it represents a detailed surface model as a scalar- valued displacement over a smooth domain surface. our representation defines both the domain surface and the displacement function using a unified subdivision framework, allowing for simple and efficient evaluation of analytic surface properties. we present a simple, automatic scheme for converting detailed geometric models into such a representation. the challenge in this conversion process is to find a simple subdivision surface that still faithfully expresses the detailed model as its offset. we demonstrate that displaced subdivision surfaces offer a number of benefits, including geometry compression, editing, animation, scalability, and adaptive rendering. in particular, the encoding of fine detail as a scalar function makes the representation extremely compact. aaron lee henry moreton hugues hoppe fast and resolution independent line integral convolution detlev stalling hans-christian hege decision-theoretic troubleshooting you have just finished typing that big report into your word processor. it is formatted correctly and looks beautiful on the screen. 
you hit print, go to the printer---and nothing is there. your try again---still nothing. the report needs to go out today. what do you do? david heckerman john s. breese koos rommelse the clipmap: a virtual mipmap christopher c. tanner christopher j. migdal michael t. jones a comparative study of neural network algorithms applied to optical character recognition three simple general purpose networks are tested for pattern classification on an optical character recognition problem. the feed-forward (multi-layer perceptron) network, the hopfield network and a competitive learning network are compared. the input patterns are obtained by optically scanning images of printed digits and uppercase letters. the resulting data is used as input for the networks with two-state input nodes; for others, features are extracted by template matching and pixel counting. the classification capabilities of the networks are compared with a nearest neighbour algorithm applied to the same feature vectors. the feed-forward network reaches the same recognition rates as the nearest neighbour algorithm, even when only a small percentage of the possible connections is used. the hopfield network performs less well, and overloading of the network remains a problem. recognition rates with the competitive learning network, if input patterns are clustered well, are again as high as the nearest neighbour algorithm. p. patrick van der smagt the relevance factor search engines today are not always correct in their assessment of how well websites correlate with user-defined keywords. through extensive research, an adequate relevance factor formula was found and implemented. chris buckley, gerald salton, and james allan introduced this formula. it is an automated tool for website ranking, which corresponds more closely to how the users would rate the websites' significance. the formula depends partly on the number of times the user-defined keyword(s) appear(s) in the document. kimberley deninger caroline cassigneul model specification with respect to analysis much potential exists for reducing the costs while also improving the quality of simulation models through the use of model development systems (mdss). we describe a specification language for discrete event models to be used in a mds. our approach is to introduce an intermediate form between a conceptual model (the model as it exists in the mind of the modeler) and an implementation of that model in some programming language. the model specification language described here is intended to be used in conjunction with a model specification generator (a program to assist a modeler or modeling team with the construction of model specifications). a simple example is presented using this specification language. the example is used to illustrate some types of model analyses and documentation. c. michael overstreet keynote address-artificial intelligence and simulation artificial intelligence is the latest buzzword and one of the hottest topics in the scientific community today. some experts are proclaiming that artificial intelligence (ai) has already emerged as one of the most significant technologies of this century. proponents are declaring that it will completely revolutionize management and the way we use computers. if these claims are even half true, then ai is bound to have a profound effect upon the art and science of simulation. 
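a minimal sketch of the nearest-neighbour baseline that the character-recognition comparison above measures the networks against, assuming euclidean distance on whatever feature vectors are used; the toy features and labels below are purely illustrative.

    import numpy as np

    def nearest_neighbour(train_x, train_y, x):
        # label of the closest training feature vector under euclidean distance
        d = np.linalg.norm(np.asarray(train_x) - np.asarray(x), axis=1)
        return train_y[int(np.argmin(d))]

    # example with toy 2-d feature vectors
    feats = np.array([[0.0, 0.0], [1.0, 1.0], [0.9, 1.1]])
    labels = ['a', 'b', 'b']
    print(nearest_neighbour(feats, labels, [1.0, 0.9]))   # 'b'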
the purpose of this paper is to provide a current overview of this rapidly evolving field, examine the potential of ai in simulation and the inevitability of it. we propose to explore the probable impact as well as forecast the directions it is likely to take. robert e. shannon font formats (panel session) charles bigelow philippe coueignoux john hobby peter karow vaughn pratt luis trabb-pardo john warnock reasoning with portions of precedents l. karl branting silhouette clipping approximating detailed models with coarse, texture-mapped meshes results in polygonal silhouettes. to eliminate this artifact, we introduce silhouette clipping, a framework for efficiently clipping the rendering of coarse geometry to the exact silhouette of the original model. the coarse mesh is obtained using progressive hulls, a novel representation with the nesting property required for proper clipping. we describe an improved technique for constructing texture and normal maps over this coarse mesh. given a perspective view, silhouettes are efficiently extracted from the original mesh using a precomputed search tree. within the tree, hierarchical culling is achieved using pairs of anchored cones. the extracted silhouette edges are used to set the hardware stencil buffer and alpha buffer, which in turn clip and antialias the rendered coarse geometry. results demonstrate that silhouette clipping can produce renderings of similar quality to high-resolution meshes in less rendering time. pedro v. sander xianfeng gu steven j. gortler hugues hoppe john snyder fast volume rendering using a shear-warp factorization of the viewing transformation several existing volume rendering algorithms operate by factoring the viewing transformation into a 3d shear parallel to the data slices, a projection to form an intermediate but distorted image, and a 2d warp to form an undistorted final image. we extend this class of algorithms in three ways. first, we describe a new object-order rendering algorithm based on the factorization that is significantly faster than published algorithms with minimal loss of image quality. shear-warp factorizations have the property that rows of voxels in the volume are aligned with rows of pixels in the intermediate image. we use this fact to construct a scanline-based algorithm that traverses the volume and the intermediate image in synchrony, taking advantage of the spatial coherence present in both. we use spatial data structures based on run-length encoding for both the volume and the intermediate image. our implementation running on an sgi indigo workstation renders a 256^3 voxel medical data set in one second. our second extension is a shear-warp factorization for perspective viewing transformations, and we show how our rendering algorithm can support this extension. third, we introduce a data structure for encoding spatial coherence in unclassified volumes (i.e. scalar fields with no precomputed opacity). when combined with our shear-warp rendering algorithm this data structure allows us to classify and render a 256^3 voxel volume in three seconds. the method extends to support mixed volumes and geometry and is parallelizable. philippe lacroute marc levoy webtop: 3d interactive optics on the web the optics project (top) is a 3d interactive computer graphics system that visualizes optical phenomena. the primary motivation for creating top was to develop a tool to help undergraduate students learn optics.
webtop is a web- based version of the system that encompasses a physical simulation, an overview of the theory involved, a showcase of examples, and a set of suggested exercises. the actual simulation is implemented using vrml, java, and the external authoring interface, and runs under multiple hardware/os/browser configurations. this work is significant in that it represents, to our knowledge, the first complete 3d interactive optics system on the web. kiril vidimce john t. foley david c. banks yong-tze chi taha mzoughi editorial david nicol handling visual media preparation for ohio state's computer center workshops at the ohio state university, approximately 20 general-purpose workshops are provided each quarter to service the needs of our user community. we have found it useful to organize the teaching materials for these workshops by standardizing the overhead transparencies that are used by our instructors. we have found that as a direct benefit of this team approach, we can maintain consistent organization in teaching our workshops; our instructors are certain to cover the important aspects of each topic; they are better able to assess the time that will be required to cover each topic and they are less likely to be side-tracked from their subject matter. it has become much easier to prepare and rotate the teaching assignments among our instructors by this method, and the instructors find that they can prepare for a workshop on shorter notice (should an emergency arise) if they've taught this workshop in the past. three years ago, while each instructor prepared his or her own temporary overhead transparencies on clear acetate sheets with water soluble pens, we began the process of learning to pool our ideas, determine what concepts were consistent and relevant---and prepare permanent, standardized sets of overhead transparencies for many of our workshops. we have developed a filing system for maintaining our overhead transparency sets, and make periodic modifications to these sets. our transparency supplies consist of such simple materials as transparency film, water-based and permanent transparency pens, chartpak and zip-a-tone---all the way to the more sophisticated kroy lettering machine, the electrostatic versatec plotter and the newest addition--- sas/graph. as the computer literacy rate has risen among our user community, the need for more technical and specialized instruction has also risen. by standardizing our workshop teaching materials, we have been able to free up more of the instructors' time and allow them to apply their research efforts toward new areas of computer literacy, while keeping pace with the needs and special interests of our user community. the paper that i propose will discuss the basic considerations that are necessary to produce clear, visible, and attractive overhead transparencies. basic topics will include: simple dos and don'ts of transparency making, examples of good and poor transparency style; suggestions for redesigning overhead transparencies for better comprehension and visibility, and current visual media preparation at the ohio state university. gail peters acm forum: letters robert l. ashenhurst efficient radiosity rendering using textures and bicubic reconstruction rui bastos michael goslin hansong zhang optimism: not just for event execution anymore christoper h. young radharamanan radhakrishnan philip a. wilsey providing electronic access to group descriptions e. a. 
o'brien computer graphics in scandinavia nowadays, computer graphics is a commonly utilized tool deployed both in universities and industry. a short overview like this can't hope to cover all activities, but i've attempted to give some examples from the field in scandinavia. overviews of computer graphics activity in scandinavia have previously been published in _computer graphics,_ 30(2) may 1996 (research) and 30(3) august 1996 (education). in addition, the journal _computers and graphics_ had a special issue on computer graphics in scandinavia in 1995. in this overview we won't duplicate the material from the previous overviews, but rather attempt to give some additional examples of computer graphics research work. alain chesnais jose encarnacao mesh collapse compression martin isenburg jack snoeyink model development revisited a brief chronology of simulation software helps to understand the current status, which finds simulation modeling in a transitional period. viewed in the context of the model life cycle, the needs for more effective and efficient simulation model development can be identified. some consensus is evident in the definition of tools, but the approaches to improvement are charted quite differently by researchers and practitioners in the simulation community. at this juncture no clear directions have been established. however, the indications of convergence in the software and model development paradigms can only be beneficial for both communities. richard e. nance automatic sample-by-sample model selection between two off-the-shelf classifiers steve p. chadwick parallel processing and simulation john c. comfort david jefferson y. v. reddy paul reynolds sallie sheppard using cautious heuristics to bias generalization and guide example selection diana f. gordon representation of electronic mail filtering profiles: a user study electronic mail offers the promise of rapid communication of essential information. however, electronic mail is also used to send unwanted messages. a variety of approaches can learn a profile of a user's interests for filtering mail. here, we report on a usability study that investigates what types of profiles people would be willing to use to filter mail. michael j. pazzani a simulation based analysis of naming schemes for distributed systems taieb b. znati judith molka collapsing flow topology using area metrics visualization of topological information of a vector field can provide useful information on the structure of the field. however, in turbulent flows standard critical point visualization will result in a cluttered image which is difficult to interpret. this paper presents a technique for collapsing topologies. the governing idea is to classify the importance of the critical points in the topology. by only displaying the more important critical points, a simplified depiction of the topology can be provided. flow consistency is maintained when collapsing the topology, resulting in a visualization which is consistent with the original topology. we apply the collapsing topology technique to a turbulent flow field. wim de leeuw robert van liere rational hypersurface display chanderjit l. bajaj clipping of bezier curves fuhua cheng chi-cheng lin on the issue of neighborhood in self-organizing maps hua yang m. palaniswami the preemption capability in gpss one of the important capabilities offered by the general purpose simulation language (gpss) is the preemption capability.
this capability is of use in modeling many real situations in which it is necessary to interrupt an ongoing activity before it comes to completion on a natural basis, e.g., failure of equipment in a manufacturing context; completion of an input or output activity in a multiprogrammed computing system; routing of roving vehicles, such as police cruisers, to a point of crisis; or simply providing immediate service to a high-priority demand which has occurred. preemption logic is potentially quite complicated, including as it does the need to specify how the interrupted ongoing activity is to be handled, both at the time of preemption and subsequent to that time. this tutorial will explore various aspects of the preemption topic as implemented in gpss. thomas j. schriber generic tasks in knowledge-based reasoning: a level of abstraction that supports knowledge acquisition, system design and explanation b chandrasekaran integrated analytic spatial and temporal anti-aliasing for polyhedra in 4-space charles w. grant real-time slicing of data space roger a. crawfis towards flexible negotiation in teamwork zhun qiu milind tambe hyuckchul jung a bayesian approach to object identification hanna pasula the worst order in not always the lexicographic order alyson a. reeves a new, fast method for 2d polygon clipping: analysis and software implementation this paper presents a new 2d polygon clipping method, based on an extension to the sutherland-cohen 2d line clipping method. after discussing three basic polygon clipping algorithms, a different approach is proposed, explaining the principles of a new algorithm and presenting it step by step. an example implementation of the algorithm is given along with some results. a comparison between the proposed method, the liang and barsky algorithm, and the sutherland-hodgman algorithm is also given, showing performances up to eight times the speed of the sutherland-hodgman algorithm, and up to three times the liang and barsky algorithm. the algorithm proposed here can use floating point or integer operations; this can be useful for fast or simple implementations. patrick-gilles maillot hierarchical abilities of diagrammatic representations of discrete event simulation models vlatko ceric consistency maintenance in multiresolution simulation simulations that run at multiple levels of resolution often encounter consistency problems because of insufficient correlation between the attributes at multiple levels of the same entity. inconsistency may occur despite the existence of valid models at each resolution level. cross- resolution modeling (crm) attempts to build effective multiresolution simulations. the traditional approach to crm \---aggregation-disaggregation--- causes chain disaggregation and puts an unacceptable burden on resources. we present four fundamental observations that would help guide future approaches to crm. these observations form the basis of an approach we propose that involves the design of multiple resolution entities (mres). mres are the foundation of a design that incorporates maintaining internal consistency. we also propose maintenance of core attributes as an approach to maintaining internal consistency within an mre.. paul f. reynolds anand natrajan sudhir srinivasan simpas: a simulation language based on pascal simpas is a portable, strongly-typed, event-oriented, discrete system simulation language based on pascal. it has most of the useful features of simscript ii.5. 
however, because of the type checking inherent in simpas, a simpas program is easier to debug and maintain than the corresponding simscript ii.5 program. this paper briefly describes simpas and discusses the advantages of using simpas instead of simscript ii.5 for the rapid construction of reliable simulation programs. r. m. bryant multiagent model of dynamic design: visualization as an emergent behavior of active design agents suguru ishizaki simtutor: a multimedia intelligent tutoring system for simulation modeling tajudeen a. atolagbe vlatka hlupic incremental feature-based modeling hyun suk kim heedong ko kunwoo lee the interaction technique notebook: bookmarks: an enhanced scroll bar dan r. olsen snack and drink tommy pallotta the fort at mashantucket raymond doherty guido jimenez fred lee jonathan alberts joel goodman alberto forero liju huang peter weishar mayumi sato artificial neural networks: an emerging new technique youngohc yoon lynn l. peterson real time discrete event simulation of a pcb production system for operational support mats jackson christer johansson geometrically deformed models: a method for extracting closed geometric models from volume data james v. miller david e. breen william e. lorensen robert m. o'bara michael j. wozny an asymptotic allocation for simultaneous simulation experiments hsiao-chang chen chun-hung chen jianwu lin enver yucesan evaluating 3d task performance for fish tank virtual worlds kevin w. arthur kellogg s. booth colin ware mtdm - an active model for drawing understanding wei wu wei lu masao sakauchi interior/exterior classification of polygonal models f. s. nooruddin greg turk buzz off naoki fujiwara logic circuit design with the aid of an automated reasoning program r. b. abhyankar b. humpert analysis of future event set algorithms for discrete event simulation new analytical and empirical results for the performance of future event set algorithms in discrete event simulation are presented. these results provide clear insight into the factors affecting algorithm performance, permit evaluation of the hold model, and determine the best algorithm(s) to use. the analytical results include a classification of distributions for efficient insertion scanning of a linear structure. in addition, it is shown that when more than one distribution is present, there is generally an increase in the probability that new insertions will have smaller times than those in the future event set. twelve algorithms, including most of those recently proposed, were empirically evaluated using primarily simulation models. of the twelve tested, four performed well, three performed fairly, and five performed poorly. william m. mccormack robert g. sargent use of structure-based models in the development of expert systems j. g. ramirez illicit expressions in vector algebra in vector geometry there are two distinct types of entities: points p, q, r … and vectors u, v, w … generally, the operations of vector algebra ---addition, subtraction, scalar multiplication, dot product, and cross product---are intrinsically defined only for vectors, not for points. yet illicit expressions containing terms like p + q, cp, p x q, etc. often appear in graphics textbooks, papers, and programs. in this paper we justify the use of such illicit expressions, and we give criteria for recognizing when such an expression is truly legitimate.
in particular we show that an algebraic expression e(p1, …, pn) is legitimate if and only if e(v1 + w, …, vn + w) = e(v1, …, vn) + kw, with k = 0 or 1 (a brief numerical sketch of this test appears below). we also derive many useful examples of such an expression. ronald n. goldman the basis of a computer system for modern algebra so-called general purpose systems for algebraic computation such as altran, macsyma, sac, scratchpad and reduce are almost exclusively concerned with what is usually known as "classical algebra", that is, rings of real or complex polynomials and rings of real or complex functions. these systems have been designed to compute with elements in a fixed algebraic structure (usually the ring of real functions). typical of the facilities provided are: the arithmetic operations of the ring, the calculation of polynomial gcd's, the location of the zeros of a polynomial; and some operations from calculus: differentiation, integration, the calculation of limits, and the analytic solution of certain classes of differential equations. for brevity, we shall refer to these systems as ca systems. john j. cannon making better manufacturing decisions with aim julie n. ehrlich william r. lilegdon integration of knowledge sources for explanation production s. bridges j. d. johannes what dreams may come: the painted world sequence nicholas brooks concurrent constraint programming and linear logic (abstract) françois fages long-lasting transient conditions in simulations with heavy-tailed workloads mark e. crovella lester lipsky computational modeling for the computer animation of legged figures michael girard a. a. maciejewski load balancing for multi-projector rendering systems rudrajit samanta jiannan zheng thomas funkhouser kai li jaswinder pal singh the hearsay-ii speech-understanding system: integrating knowledge to resolve uncertainty lee d. erman frederick hayes-roth victor r. lesser d. raj reddy plenoptic modeling: an image-based rendering system leonard mcmillan gary bishop perceptual color spaces for computer graphics perceptually uniform color spaces can be a useful tool for solving computer graphics color selection problems. however, before they can be used effectively some basic principles of tristimulus colorimetry must be understood and the color reproduction device on which they are to be used must be properly adjusted. the munsell book of color and the optical society of america (osa) uniform color scale are two uniform color spaces which provide a useful way of organizing the colors of a digitally controlled color television monitor. the perceptual uniformity of these color spaces can be used to select color scales to encode the variations of parameters such as temperature or stress. gary w. meyer donald p. greenberg a modeling system based on dynamic constraints ronen barzel alan h. barr a social reinforcement learning agent we report on our reinforcement learning work on cobot, a software agent that resides in the well-known online chat community lambdamoo. our initial work on cobot [cobotaaai] provided him with the ability to collect social statistics and report them to users in a reactive manner. here we describe our application of reinforcement learning to allow cobot to proactively take actions in this complex social environment, and adapt his behavior from multiple sources of human reward. after 5 months of training, cobot received 3171 reward and punishment events from 254 different lambdamoo users, and learned nontrivial preferences for a number of users.
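a brief numerical sketch of the legitimacy test stated in the goldman abstract above; the random-sampling approach, the helper names, and the list-of-coordinates representation of points are illustrative assumptions, not the paper's own method.

import random

def is_legitimate(e, n, dim=3, trials=20, tol=1e-9):
    # numerically test whether e(p1 + w, ..., pn + w) == e(p1, ..., pn) + k*w
    # for a fixed k in {0, 1}, at randomly sampled arguments and translations w
    def rand_vec():
        return [random.uniform(-1.0, 1.0) for _ in range(dim)]
    def add(a, b):
        return [x + y for x, y in zip(a, b)]
    for k in (0, 1):
        holds = True
        for _ in range(trials):
            pts = [rand_vec() for _ in range(n)]
            w = rand_vec()
            lhs = e(*[add(p, w) for p in pts])
            rhs = add(e(*pts), [k * x for x in w])
            if any(abs(a - b) > tol for a, b in zip(lhs, rhs)):
                holds = False
                break
        if holds:
            return True, k  # k = 0 behaves like a vector, k = 1 like a point
    return False, None

# example: the midpoint (p + q) / 2 passes with k = 1,
# while the raw sum p + q fails for both values of k
midpoint = lambda p, q: [(x + y) / 2.0 for x, y in zip(p, q)]
# is_legitimate(midpoint, 2) -> (True, 1)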
cobot modifies his behavior based on his current state in an attempt to maximize reward. here we describe lambdamoo and the state and action spaces of cobot, and report the statistical results of the learning experiment. charles isbell christian r. shelton michael kearns satinder singh peter stone gaming and graphics: computer games, not computer movies richard rouse socratic sequent systems david a. mcallester an algorithm for multidimensional data clustering a new divisive algorithm for multidimensional data clustering is suggested. based on the minimization of the sum-of-squared-errors, the proposed method produces much smaller quantization errors than the median-cut and mean-split algorithms. it is also observed that the solutions obtained from our algorithm are close to the local optimal ones derived by the k-means iterative procedure. s. j. wan s. k. m. wong p. prusinkiewicz on solving systems of algebraic equations via ideal bases and elimination theory the determination of solutions of a system of algebraic equations is still a problem for which an efficient solution does not exist. in the last few years several authors have suggested new or refined methods, but none of them seems to be satisfactory. in this paper we are mainly concerned with exploring the use of buchberger's algorithm for finding groebner ideal bases [2] and combine/compare it with the more familiar methods of polynomial remainder sequences (pseudo-division) and of variable elimination (resultants) [4]. michael e. pohst david y. y. yun standards pipeline: the opengl specification this is the first in a series of standards pipeline articles concerning important "de-facto" or "public" specifications rather than formal standards. paula womack and john schimpf of sgi contributed the material on which this article is based. george s. carson modeling the effect of the atmosphere on light the interaction of light with particles suspended in the air is the cause of some beautiful effects. among these effects are the colors of the sunset, the blue of the sky, and the appearance of a scene in fog. a lighting model that takes into account the effects of scattering by suspended particles is presented. a method of computing the colors of the sun and sky, for any sun position above the horizon, is derived from the lighting model. the model is also suitable for rendering fog under general lighting conditions. as an example of the use of the model for rendering fog, the special case of fog lit by the sun, without shadows, is considered. r. victor klassen texture mapping 3d models of real-world scenes texture mapping has become a popular tool in the computer graphics industry in the last few years because it is an easy way to achieve a high degree of realism in computer- generated imagery with very little effort. over the last decade, texture- mapping techniques have advanced to the point where it is possible to generate real-time perspective simulations of real-world areas by texture mapping every object surface with texture from photographic images of these real-world areas. the techniques for generating such perspective transformations are variations on traditional texture mapping that in some circles have become known as the image perspective transformation or ipt technology. this article first presents a background survey of traditional texture mapping. it then continues with a description of the texture-mapping variations that achieve these perspective transformations of photographic images of real-world scenes. 
the style of the presentation is that of a resource survey rather than an in-depth analysis. frederick m. weinhaus venkat devarajan volume rendering on the maspar mp-1 guy vezina peter a. fletcher philip k. robertson a system for the representation of theorems and proofs m. r. finley e. b. hausen-tropper cube architecture for 3-d computer graphics reuven bakalash the computation of 1-loop contributions in y.m. theories with class iii nonrelativistic gauges and reduce a. burnel h. caprasse corpus-based induction of lexical representation and meaning maria lapata rendering and animation of gaseous phenomena by combining fast volume and scanline a-buffer techniques d. s. ebert richard e. parent continuous system simulation languages (cssl's) this tutorial introduces cssl's (continuous system simulation languages) by using examples from three of the popular commercial languages used in north america at the present time. the languages are cssl-iv (1), dsl/vs (2) and isim (3). continuous system simulation languages are user-oriented software systems. cssl's are designed to assist engineers and scientists to mathematically model, analyze, and evaluate the dynamic behavior of physical phenomena. by providing a set of tools for computer-aided analysis, they make it easy for the user to get his simulation on the computer quickly and to easily conduct experiments, collect data and present that data in useful form with minimal knowledge of the computer system itself. ralph c. huntsinger automatic generation of knowledge structures f. h. merrem applying appearance standards to light reflection models appearance standards for gloss, haze, and goniochromatic color are applied to computer graphic reflection models. correspondences are derived between both the gloss and haze standards and the specular exponent of the phong model, the surface roughness of the ward model, and the surface roughness of the cook-torrance model. metallic and pearlescent colors are rendered using three aspecular measurements defined in a proposed standard for goniochromatic color. the reflection models for gloss and goniochromatic color are combined to synthesize pictures of clear coated automotive paint. advantages of using appearance standards to select reflection model parameters include the small number of required measurements and the inexpensive commercially available instruments necessary to acquire the data. the use of a standard appearance scale also provides a more intuitive way of selecting the reflection model parameters and a reflection model independent method of specifying appearance. harold b. westlund gary w. meyer shading and shadow casting in image-based rendering without geometric models akihiro katayama yukio sakagawa hiroyuki yamamoto hideyuki tamura editorial statement peter wegner confessions of a visualization skeptic bill hibbard ray tracing parametric patches this paper describes an algorithm that uses ray tracing techniques to display bivariate polynomial surface patches. a new intersection algorithm is developed which uses ideas from algebraic geometry to obtain a numerical procedure for finding the intersection of a ray and a patch without subdivision. the algorithm may use complex coordinates for the (u, v)-parameters of the patches. the choice of these coordinates makes the computations more uniform, so that there are fewer special cases to be considered. in particular, the appearance and disappearance of silhouette edges can be handled quite naturally.
the uniformity of these techniques may be suitable for implementation either on a general purpose pipelined machine or on special purpose hardware. james t. kajiya picturebalm: a lisp-based graphics language system with flexible syntax and hierarchical data structure picturebalm is a portable, interactive, lisp-based language system for graphics applications programming. picturebalm's design and initial experimental implementation are described from the point of view of both the user and the language system implementor. the approach of extending a lisp-based language by adding graphical operations was chosen because many of the recognized requirements for graphics programming languages are standard features of lisp-like systems. future work is proposed. gary b. goates martin l. griss gary j. herron perturbation analysis - theoretical, experimental, and implementational aspects perturbation analysis (pa) is a technique for the computation of the gradient of a performance measure (pm) of a discrete event dynamic system with respect to its parameters using only one sample path or monte carlo experiment of the system. it is probably the most developed of the several techniques of single run gradient estimation [1,2,3,4]. there are now close to forty papers on the subject matter. when first presented, one's immediate reaction to pa has often been incredulity stemming from the belief that "one cannot get something for nothing". later this disbelief may be developed into a more sophisticated and technical objection involving the legitimacy of interchanging differentiation and expectation operators or the probabilistic convergence of the pa estimate to its true value. in less technical terms, these translate to "how can you squeeze out information about a trajectory / sample-path operating under one value of the system parameter from that of another operating under a different value? don't the two trajectories behave entirely dissimilarly?" this is primarily a theoretical and conceptual question for which there are now fairly concrete as well as intuitively pleasing answers. references [4,5] can be consulted for details. experimentally, pa is ahead of theory in the sense that there are experimental results which we cannot satisfactorily explain. these algorithms are often arrived at via intuitive and heuristic reasoning. we can safely say that the results they produce are not statistical accidents. yet no rigorous proofs are available. a typical example is the case of pa and aggregation in queueing networks. pa can often be successfully applied to the aggregated version of a network for which pa is known to fail if applied to the original complex network. another experimental observation is the excellent variance reduction associated with pa calculation of a gradient estimate. these experimental findings, in fact, provide challenges and clues to future theoretical developments. this is in contrast with an axiomatic and purely mathematical development of the subject matter. lastly, the implementation of pa in simulation and real-time settings involves another type of effort. there does not exist a general purpose pa algorithm or routine which is totally data driven and transparent to the user. it is not a trivial exercise to develop such software. yet this author is convinced that this must be carried out if pa is to be really effective in everyday applications. the theoretical aspects of the matter have understandably received the most attention so far.
the other two aspects of experimentation and implementation may be more important in the long run, both in terms of utility and the long-term health of this research topic. y. c. ho a set of extensions to the siman/arena simulation environment marcelo j. torres peter w. glynn using simple++ for improved modeling efficiencies and extending model life cycles david r. kalasky gerald a. levasseur automated war gaming for enhancing fleet readiness war gaming at all levels has received increased attention as a result of dod's emphasis on decreasing the use of fuel and travel. the naval warfare gaming system is a sophisticated, flexible tool which provides a realistic decision-making environment for naval officers. the system provides multiple levels of models, automated doctrine control, and user-oriented features to support the four standard gaming phases: design, preparation, play, and analysis. m. leonard birns simscript ii.5 tutorial this tutorial will highlight an area for which simscript is particularly well-suited, a complex network of dissimilar processes - each described at an appropriate level of detail. this model has been used, with appropriate variations, for applications as widespread as aircraft maintenance modelling, combat system architecture modelling, waterways network modelling, and crude-oil transportation studies. this latter application will be used to illustrate the powerful data structuring and model-building techniques of simscript. edward c. russell a note on proximity spaces and connection based mereology representation theorems for systems of regions have been of interest for some time, and various contexts have been used for this purpose: mormann [17] has demonstrated the fruitfulness of the methods of continuous lattices to obtain a topological representation theorem for his formalisation of whiteheadian ontological theory of space; similar results have been obtained by roeper [20]. in this note, we prove a topological representation theorem for a connection based class of systems, using methods and tools from the theory of proximity spaces. the key novelty is a new proximity semantics for connection relations. dimiter vakarelov ivo duntsch brandon bennett stray sheep yoshihisa hirano simultaneous events and distributed simulation bruce a. cota robert g. sargent norms and time in agent-based systems we propose a first-order model as a possible formal basis for normative agent systems (nas). the model allows us to describe the execution of actions in time and the use of dynamic norms. we present its application to the detection of violation cases and to optimal scheduling. tiberiu stratulat francoise clerin-debart patrice enjalbert facial surgery - today and tomorrow yogi parish markus gross daniel von bueren rolf koch modeling priority queues with entity lists: a sigma tutorial lee w. schruben final project assignment eric kunzendorf tatlin's tower takehiko nagakura learning of compositional hierarchies by data-driven chunking karl pfleger eagle view: a simulation tool for wing operations eric a. zahn kerris j. renken hardware-accelerated volume and isosurface rendering based on cell-projection stefan röttger martin kraus thomas ertl simulation and animation of an assembly system a software package was developed for eventual use by manufacturing engineers responsible for the design of an automated assembly loop. the loop consists of a number of cells through which palletized parts flow.
a simulation of this system was developed which includes the effects of blocking, machine down times and pallet transport time. in addition to the statistical information usually obtained with any simulation, an animation of the process was also developed which provides the manufacturing engineer with an interactive- graphic environment to examine the dynamic performance of the system. bahram keramati christine m. kelly-sacks gregory l. tonkay opacity-modulating triangular textures for irregular surfaces penny rheingans principles of modeling (panel) a. alan b. pritsker james o. henriksen paul a. fishwick gordon m. clark methods for knowledge acquisition and refinement in second generation expert systems nada lavrac igor mozetic analytic representation of simulation (panel) bruce w. schmeiser stability of event synchronisation in distributed discrete event simulation this paper is concerned with the behaviour of message queues in distributed discrete event simulators. we view a logical process in a distributed simulation as comprising a message sequencer with associated message queues, followed by an event processor. we show that, with standard stochastic assumptions for message arrival and time-stamp processes, the message queues are unstable for conservative sequencing, and for conservative sequencing with maximum lookahead and hence for optimistic resequencing, and for any resequencing algorithm that does not employ interprocessor "flow control". these results point towards certain fundamental limits on the performance of distributed simulation of open queueing networks. anurag kumar rajeev shorey advances and trends in the design and construction of algebraic manipulation systems we compare and contrast several techniques for the implementation of components of an algebraic manipulation system. on one hand is the mathematical-algebraic approach which characterizes (for example) ibm's scratchpad ii. on the other hand is the more ad hoc approach which characterizes many other popular systems (for example, macsyma, reduce, maple, and mathematica). while the algebraic approach has generally positive results, careful examination suggests that there are significant remaining problems, especially in the representation and manipulation of analytical, as opposed to algebraic mathematics. we describe some of these problems, and some general approaches for solutions. r. j. fateman the impact of graphics standards an american point of view (panel session) this panel session will consist of reports from participants in the effort to standardize computer graphics functions in america. there are two principal goals. the first is to provide status reports for the groups represented by the panelists. the second is to explain relationships between different standardization efforts. david h. straayer tom wright theodore reed david shuey bruce cohen bradford m. smith isosurface extraction techniques for web-based volume visualization the reconstruction of isosurfaces from scalar volume data has positioned itself as a fundamental visualization technique in many different applications. but the dramatically increasing size of volumetric data sets often prohibits the handling of these models on affordable low-end single processor architectures. 
distributed client-server systems integrating high- bandwidth transmission channels and web-based visualization tools are one alternative to attack this particular problem, but therefore new approaches to reduce the load of numerical processing and the number of generated primitives are required. in this paper we outline different scenarios for distributed isosurface reconstruction from large-scale volumetric data sets. we demonstrate how to directly generate stripped surface representations and we introduce adaptive and hierarchical concepts to minimize the number of vertices that have to be reconstructed, transmitted and rendered. furthermore, we propose a novel computation scheme, which allows the user to flexibly exploit locally available resources. the proposed algorithms have been merged together in order to build a platform-independent web-based application. extensive use of vrml and java opengl-bindings allows for the exploration of large-scale volume data quite efficiently. klaus engel rudiger westermann thomas ertl the crozzle - a problem for automation j. j. h. forster g. h. harris p. d. smith letter: a system for personalizing processed text the need to maintain multiple versions of documents arises in many different situations. for example, software producers may desire to maintain multiple copies of manuals, one for beginners unfamiliar with computers, and another for experienced users. scientists may have to produce reports in two forms, one for management and another for technical personnel. elected officials answering constituents' mail on a particular issue like to be able to tailor their response to the perspective of the letter writer. in general, tailoring computer-generated text to different audiences is an important problem, but one which is scarcely addressed by most text-formatting programs. letter attacks this problem by providing a high-level language for creating multiple versions. input to letter is in two parts, a profile and a manuscript. letter uses information from the profile to decide how to print each copy requested in the manuscript. the profile consists of information about the readers of the document. the manuscript consists of the text of the document, along with directive expressions. these include directives which cause information to be copied from the profile into the document, conditional expressions which depend on information in the profile, and looping directives which cause multiple copies of the document to be produced. letter is written in pascal, and is designed to be portable. the initial version has been developed on a pdp-10, but we plan to install it on a small personal computer in the near future. edward f. gehringer steven r. vegdahl parallel independent replicated simulation on a network of workstations parallel independent replicated simulation (pirs) is an effective approach to speed up the simulation processes. in a pirs, a single simulation run is executed by multiple computers in parallel. the statistical properties for a pirs may be affected by the scheduling policies. for an unbiased pirs scheduling policy, a reliable distributed computing environment is required. we consider an unbiased pirs scheduling policy on a distributed platform such as a network of workstations. we observe that including more computing resources may degrade the performance of pirs. simple rules are proposed to select processors for pirs. yi-bing lin chancy mitch butler simulation-based planning for multi-agent environments jin joo lee paul a. 
fishwick awake in the dream kevin mack display of clouds taking into account multiple anisotropic scattering and sky light tomoyuki nishita yoshinori dobashi eihachiro nakamae distributed reinforcement learning for a traffic engineering application mark d. pendrith concepts of the text editor lara lara, a text editor developed for the lilith workstation, exemplifies the principles underlying modern text-editor design: a high degree of interactivity, an internal data structure that mirrors currently displayed text, and extensive use of bitmap controlled displays and facilities. j. gutknecht constructive solid geometry for polyhedral objects david h. laidlaw w. benjamin trumbore john f. hughes my favorite martian lisa cooke an axiomatic treatment of three qualitative decision criteria the need for computationally efficient decision-making techniques together with the desire to simplify the processes of knowledge acquisition and agent specification has led various researchers in artificial intelligence to examine qualitative decision tools. however, the adequacy of such tools is not clear. this paper investigates the foundations of maximin, minmax regret, and competitive ratio, three central qualitative decision criteria, by characterizing those behaviors that could result from their use. this characterization provides two important insights: (1) under what conditions can we employ an agent model based on these basic qualitative decision criteria, and (2) how "rational" are these decision procedures. for the competitive ratio criterion in particular, this latter issue is of central importance to our understanding of current work on on-line algorithms. our main result is a constructive representation theorem that uses two choice axioms to characterize maximin, minmax regret, and competitive ratio. ronen i. brafman moshe tennenholtz don't leave your plan on the shelf austin tate the radiance lighting simulation and rendering system this paper describes a physically-based rendering system tailored to the demands of lighting design and architecture. the simulation uses a light-backwards ray-tracing method with extensions to efficiently solve the rendering equation under most conditions. this includes specular, diffuse and directional-diffuse reflection and transmission in any combination to any level in any environment, including complicated, curved geometries. the simulation blends deterministic and stochastic ray-tracing techniques to achieve the best balance between speed and accuracy in its local and global illumination methods. some of the more interesting techniques are outlined, with references to more detailed descriptions elsewhere. finally, examples are given of successful applications of this free software by others. gregory j. ward a meta simplifier the simplification process is a key point in computer algebra systems. two ideas have shaped our model of the simplifier: homogenizing the computation over numerical (see below) and formal expressions, and building a simplifier completely reachable by the user. in order to evaluate numerical expressions, the simplifier calls functions which compute the result or raise a runtime type error. formal expressions are transformed modulo the properties of the operators. for homogenizing those two processes, three basic mechanisms emerge: simplification by properties, type checking, evaluation. moreover, a fourth mechanism using rewriting rules is necessary to compute non-standard transformations needed by the user.
the structure of this four-component simplifier must make it completely reachable by the user unlike the usual simplifiers that include a kernel that the user can't modify. for example the operator "+" is always considered as associative and commutative and there is no way to suppress those properties because they are implicitly used in the code. the only means of suppressing this kernel is to express explicitly all knowledge needed by the simplifier. the model that comes out consists of two parts : a knowledge- base that contains all information and an engine split into four components that consults this base, we call it a meta simplifier1. each component of the engine takes the needed knowledge in the base to compute transformations on expressions. the algebraic properties allowed are all those used in other systems such as associativity, commutativity with some new ones such as to be an endomorphism, or a ring. the type-checker uses formal types to compute the type of all expressions. in any system, different types of objects are used, for example, integer, sparse-matrix, dense-matrix, list. we call them numerical types because they depend on representation, then a formal type is a set of numerical types which are equivalent modulo representation. for example the formal type matrix would contain sparse matrix and dense matrix. the conversions usually defined between numerical types, are extended to formal types. every operator is given several formal signatures so the simplifier explicitly solves this overloading. thus numerical and formal expressions are considered in the same way, and type errors are raised by the same mechanism. this process type checks the expression, solves the overloading and infers types over symbols. the third mechanism uses functions to convert (numerical conversions) and to evaluate numerical expressions (interpretations). the typing process has introduced formal conversions, and not overloaded operators, then the evaluator converts the numerical objects, and applies evaluating function over numerical arguments. the last component performs rewriting with types modulo associativity and commutativity. the information which are introduced above (operators properties, formal signatures, interpretations, conversions, formal types, rewriting rules) are stored in the knowledge base as objects built by the user through an interface, and are consulted but not modified by the engine. the main object in the base is the operator, it consists of a name, properties, formal signatures and interpretations. all the objects are completely modifiable by the user, for example a "+" can be defined just associative with a semantic of "append" over the lists and associative, commutative with a semantic of "add" over algebraic objects. the knowledge-base can be completely customized by the user to fit his precise needs, so the user has potentially several simplifiers at his disposal. the standard scheme of simplification involves those four components in the precise order they have been described above, but the user can choose an other scheme to change the simplifier behavior. given an interface to build the base, an engine supplied with a mechanism of control, the user can build exactly the simplifier he requires. c. faure fast computation of generalized voronoi diagrams using graphics hardware kenneth e. hoff john keyser ming lin dinesh manocha tim culver model-based matching and hinting of fonts roger d. 
hersch claude betrisey model generation issues in a simulation support environment osman balci richard e. nance e. joseph derrick ernest h. page john l. bishop fast footprint mipmapping tobias huttner wolfgang straßer analysis of a local search heuristic for facility location problems madhukar r. korupolu c. greg plaxton rajmohan rajaraman knowing what we don't know: how to visualize an imperfect world nahum gershon lossless compression of computer generated animation frames this article presents a new lossless compression algorithm for computer animation image sequences. the algorithm uses transformation information available in the animation script and floating point depth and object number information at each pixel to perform highly accurate motion prediction with very low computation. the geometric data (i.e., the depth and object number) can either be computed during the original rendering process and stored with the image or computed on the fly during compression and decompression. in the former case the stored geometric data are very efficiently compressed using motion prediction and a new technique called direction coding, typically to 1 to 2 bits per pixel. the geometric data are also useful in z-buffer image compositing, and this new compression algorithm offers a very low storage overhead method for saving the information needed for this compositing. the overall compression ratio of the new algorithm, including the geometric data overhead, is compared to conventional spatial linear prediction compression and block-matching motion prediction. the algorithm improves on a previous motion prediction algorithm by incorporating block predictor switching and color ratio prediction. the combination of these techniques gives compression ratios 30% better than those reported previously. hee cheol yun brian k. guenter russell m. mersereau a core graphics environment for teletext simulations the development of our graphics environment over the past year has been directed towards demonstrating the concept of broadcast teletext service, where pages of graphics and text are received in the home over existing television channels. we have developed a graphics editor, radix, which uses an underlying core graphics package to create teletext-compatible images. radix is also used to display these images at arbitrary resolutions by scaling the graphics primitives, selecting the corresponding sizes of fonts and bitmaps, and then precisely adjusting the output video signal. the combination of powerful display hardware and a robust set of software tools has allowed us to rapidly prototype potential teletext features, and to provide these accurate simulations of teletext pages at any resolution. while this general graphics environment does impose some inefficiencies in running graphics applications, they are far outweighed by the ability to quickly respond to changing requirements. douglas f. dixon interactive curve design using digital french curves karan singh multivalent documents thomas a. phelps robert wilensky load balancing for a parallel radiosity algorithm w. sturzlinger g. schaufler j. volkert i yield one minute…: an analysis of the final speeches from the house impeachment hearings hearings held in the united states house of representatives in december 1998 on the articles of impeachment of president clinton presented an unusual opportunity to observe real-time argumentation.
in this paper, i survey and discuss the various sorts of arguments made in the short, typically only one minute long, speeches given during the final hour of debate. edwina l. rissland natural language processing in a japanese text-to-speech system a japanese text-to-speech system is developed that provides continuous generation of natural speech from newspaper articles and other japanese texts written in an unrestricted mixture of kanji (ideograms --- 7000 chinese characters), kana (two types of phonograms --- 200 characters), alphanumerics and marks (including , &, $, etc.). in the newly developed system, the japanese texts are analyzed in terms of both grammar and meaning where texts have ambiguities, but in terms of grammar only where there is no ambiguity. the dictionary in this system contains ordinarily segmented words for various open texts and special compound words, most of which are proper nouns that have multiple readings when segmented. this reduces the percentage of erroneous readings from 2.2% for the conventional method to 0.2% and of erroneous accentuation from 20% to 5%. the dictionary containing approximately 430,000 words uses a resident index consisting of the first two characters of the words. this approach for accessing is adopted because many japanese words consist of two basic characters. this improved the system's efficiency in maintaining a higher level of accuracy. these improvements will provide advanced information exchanges to subscriber networks. yoshifumi ooyama masahiro miyazaki satoru ikehara techniques for interactive raster graphics the visual quality of raster images makes them attractive for applications such as business graphics and document illustration. such applications are most fully served using interactive systems to describe curves, areas and text which can be rendered at high resolution for the final copy. however, to present such imagery in an interactive environment for moderate cost is difficult. techniques are presented that provide solutions to the problems of scan conversion, screen update, and hit testing for a class of interactive systems called illustrators. the design rests on the use of software display file encoding techniques. these ideas have been used in the implementation of several illustration programs on a personal minicomputer. patrick baudelaire maureen stone view interpolation for image synthesis shenchang eric chen lance williams recent advances in computer chess monty newborn tony marsland open location ronan kennedy the 1985 fredkin competition hans berliner the mummy yves metraux memory model with unsupervised sequential learning: the effect of threshold self-adjustment anatoli gorchetchnikov comments on price/performance patterns of u. s. computer systems several errors are noted in the formulation of econometric models describing computer price/performance patterns. an alternative model is presented which shows the effects of technological advances and computer size on price reduction. jane fedorowicz model validation: with respect to the user saul i. gass: the utility of a simulation model to a specific user depends on the user's ability to interpret available model documentation into a "measure" of model confidence. we shall review an approach to obtaining such a measure and the distinct, but interrelated, roles of the user and analyst. robert g. sargent: a summary of the use of hypothesis testing and confidence intervals for validating models of observable systems will be presented.
the presentation will include topics such as model accuracies, cost of data collection, model user's risk, model builder's risk, and acceptable validity range. j. william schmidt: in my view validation is the process of determining whether or not the system model describes the behavior of the system modeled with fidelity adequate for effective use of the model for its intended purpose(s). at best, a model is only an approximate representation of the reality described by the model. hence an attempt to describe system behavior through a model carries with it the implicit assumption that at least minimal discrepancy between model and system behavior is tolerable with respect to the intended purpose of the model. the discrepancy between model and system behavior is usually referred to as model error (a more descriptive reference would be model errors since errors may exist in many dimensions). the process of validation may then be viewed as the attempt to determine the magnitude of model error and whether or not the error exhibited by the model falls within limits which will permit effective use of the model. robert g. sargent saul i. gass j. william schmidt plan-then-compile architectures tom m. mitchell bumps: a program for animating projections bumps (brown university multiple projection system) is a program that illustrates the implementation of viewing transformations using animation. the program uses the viewing model defined in the core graphics system. bumps employs interactive computer graphics to demonstrate how planar geometric projections are generated, what the effects of different projections and projection parameters are on the projected object, and how the viewing functions of the core graphics system work. after presenting background material on projections, the features of bumps are described, followed by a pictorial user scenario of bumps in action. the paper concludes with a discussion of the merits of user-controlled animation for teaching and possible improvements to the program. robert f. gurwitz richard w. thorne andries van dam ingrid b. carlbom the power and the frailty of images: is a picture worth a thousand words? nahum gershon bounding simulations by forcing bias steve bravy three-dimensional object recognition a general-purpose computer vision system must be capable of recognizing three-dimensional (3-d) objects. this paper proposes a precise definition of the 3-d object recognition problem, discusses basic concepts associated with this problem, and reviews the relevant literature. because range images (or depth maps) are often used as sensor input instead of intensity images, techniques for obtaining, processing, and characterizing range data are also surveyed. paul j. besl ramesh c. jain efficient perspective-accurate silhouette computation g. barequet c. a. duncan m. t. goodrich s. kumar m. pop comparison of analysis strategies for screening designs in large-scale computer simulation models timothy s. webb kenneth w. bauer texture compression with adaptive block partitions (poster session) we present an image compression method based on block palettizing. the image block is partitioned into four subsets, and each subset is palettized with 2 or 4 colors from the quasi-optimal local palette, constructed for the whole block. the index map for the whole block, being the union of index maps for subsets, is thus only 1 or 2 bits deep, while the local palette may consist of 8 or even 16 colors.
biif, vrml and cgm this edition of the standards pipeline consists of three articles that provide an in-depth look at applications of three important international standards. in the first article, bernadette kuzma of semcor, inc. describes the basic image interchange format (biif), formally known as iso/iec 12087-5:1998. while biif originated in the defense community, it is beginning to find wider applications in areas ranging from electronic libraries to medical imaging. in the second article, richard f. puk of intelligraphics describes the current status and future prospects for the virtual reality modeling language (vrml), formally known as iso/iec 14772-1. in the final article, lofton henderson of inso corporation and john gebhardt of intercap graphics systems, inc. describe the work of the cgm open consortium and in particular the webcgm profile. bernadette kuzma george s. carson richard f. puk john gebhardt visfiles: presentation techniques for time-series data this is the first in a series of columns on the subject of data visualization. i'm excited to have this opportunity because it will give me an excuse to finally learn about some subjects that i've wanted to learn about but never gave myself the time. gordon cameron, _computer graphics_ editor, has given me some latitude on topics, so future visualization columns will deal with color selection, data visualization systems and apis, data visualization research, information visualization, user interface issues, tips and hints, etc. if you have comments on this column or suggestions for future columns, please send me email _(todd@acm.org)._ in this issue i will give an overview of some familiar, and some probably unfamiliar, time-series data presentation techniques. time-series data visualization is becoming an increasingly important topic as data archives grow exponentially and computing becomes ubiquitous. some researchers claim that time-series data can be imaged using techniques similar to those applied to spatial data (since time is most often linear), and some claim that temporal data is a much different beast. for example, temporal data can have multiple strands and events, features not present in spatial data. there are also many interesting unsolved problems and data handling issues associated with temporal data: boolean and other mathematical operations; data storage and temporal databases; data query and retrieval; data generalization (handling data at vastly different resolutions); data models and representation; and data presentation. i have lately been working on designing a user interface for a simple geographic information system capable of manipulating and compositing time-series geo-referenced data. this project started me thinking about the different ways of visually representing temporal data. without going into much detail on any one method, following are all the methods i could find within arm's length in my office.
most of these methods allow the scientist or visualizer to present the time-series in a single image, while the final method uses an animation technique. good techniques for visually presenting time-series data will reveal spatial and temporal relationships, uncover patterns, and show variability. t. todd elvins determination of a dependent variable in the measurement of discrete event computer simulation success roger mchaney reasoning in time and space: issues in interfacing a graphic computer simulation with an expert system for an intelligent simulation training system l. d. interrante j. e. biegel on visible surface generation by a priori tree structures this paper describes a new algorithm for solving the hidden surface (or line) problem, to more rapidly generate realistic images of 3-d scenes composed of polygons, and presents the development of theoretical foundations in the area as well as additional related algorithms. as in many applications the environment to be displayed consists of polygons, many of whose relative geometric relations are static, we attempt to capitalize on this by preprocessing the environment's database so as to decrease the run-time computations required to generate a scene. this preprocessing is based on generating a "binary space partitioning" tree whose in-order traversal of visibility priority at run-time will produce a linear order, dependent upon the viewing position, on (parts of) the polygons, which can then be used to easily solve the hidden surface problem. in the application where the entire environment is static with only the viewing-position changing, as is common in simulation, the results presented will be sufficient to solve completely the hidden surface problem. henry fuchs zvi m. kedem bruce f. naylor
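a minimal sketch of how such a precomputed tree is typically consumed at run time: an in-order traversal keyed on which side of each partitioning plane the viewpoint lies yields a back-to-front (painter's) ordering of the polygons. the node layout and plane test below are assumptions made for illustration, not the paper's data structures:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Vec = Tuple[float, float, float]

@dataclass
class BSPNode:
    plane: Tuple[Vec, float]        # (unit normal n, offset d) for the plane n.x = d
    polygons: List[str]             # polygons lying in this node's partitioning plane
    front: Optional["BSPNode"] = None
    back: Optional["BSPNode"] = None

def side_of(plane: Tuple[Vec, float], p: Vec) -> float:
    n, d = plane
    return n[0] * p[0] + n[1] * p[1] + n[2] * p[2] - d

def back_to_front(node: Optional[BSPNode], eye: Vec, draw) -> None:
    """painter's-algorithm order: draw the subtree far from the eye, then the
    node's own polygons, then the near subtree."""
    if node is None:
        return
    if side_of(node.plane, eye) >= 0:       # eye in front of the partitioning plane
        back_to_front(node.back, eye, draw)
        draw(node.polygons)
        back_to_front(node.front, eye, draw)
    else:                                   # eye behind the plane
        back_to_front(node.front, eye, draw)
        draw(node.polygons)
        back_to_front(node.back, eye, draw)

root = BSPNode(((0.0, 0.0, 1.0), 0.0), ["wall"],
               front=BSPNode(((1.0, 0.0, 0.0), 0.0), ["table"]))
back_to_front(root, (2.0, 0.0, 5.0), print)   # prints farther polygons first
```

swapping the two recursive branches in each case gives a front-to-back ordering instead; either linear order depends only on the viewing position, as the abstract notes.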
the hemi-cube: a radiosity solution for complex environments michael f. cohen donald p. greenberg navigating static environments using image-space simplification and morphing lucia darsa bruno costa silva amitabh varshney kitan kengou miyakuni tomohiro kadokawa hidenori onishi illumination in diverse codimensions this paper considers an idealized subclass of surface reflectivities; namely a simple superposition of ideal diffuse and ideal specular, restricted to point light sources. the paper derives a model of diffuse and specular illumination in arbitrarily large dimensions, based on a few characteristics of material and light in 3-space. it describes how to adjust for the anomaly of excess brightness in large codimensions. if a surface is grooved or furry, it can be illuminated with a hybrid model that incorporates both the 1d geometry (the grooves or fur) and the 2d geometry (the surface). david c. banks towards bounded optimal meta-level control: a case study daishi harada expert systems: perils and promise based on a review of some actual expert-system projects, guidelines are proposed for choosing appropriate applications and managing the development process. d. g. bobrow s. mittal m. j. stefik luna adam byrne woody smith fraud detection and self embedding tse-hua lan ahmed h. tewfik intelligent support for the engineering of software (panel paper) engineers of large systems must be concerned with both design of new systems and maintenance of existing systems. a great deal of effort goes into arranging things so that systems may be maintained and extended when needed, and that new systems fit into the context of existing structures. insight about the support of software engineering can thus be derived by examining how other branches of engineering support their version of the processes of engineering. before embarking on a design project an engineer must first determine the customer's need. this is always a complex process, involving modeling of the customer's situation, to determine exactly what is essential and what is accidental. the specifications derived from this analysis are the input to the design process. the specifications may be expressed in varying degrees of formality. though mathematical formalism is to be desired because it is unambiguous and because it can be manipulated precisely, existing mathematical technique is usually inadequate to precisely specify the requirements. this means that the specification phase must be part of the debugging loop. the design engineer first attempts to meet the specifications with some existing artifact. if this fails, he attempts to synthesize an artifact that meets the specifications by combining the behaviors of several parts, according to some standard plan. for example, in electrical engineering, a complex signal-processing system can often be composed as a cascade of simpler signal-processing components - an engineer knows that the transfer function of a cascade is the product of the transfer functions of the parts cascaded, if the loading is correct. such design depends on the ability to compute the behavior of a combination of parts from their specifications and a description of how they are combined. often such analysis must be approximate, with bugs worked out by simulation and debugging of breadboard prototypes. this design strategy is greatly enhanced by the existence of compatible families of canned parts with agreed-upon interfaces and well-defined specifications. if the family of canned parts is sufficiently universal, the interfaces sufficiently well specified, and if design rules can be formulated to isolate the designer from the details of the implementation of the parts, the family constitutes a design language. for example, ttl is a pretty good design language for relatively slow digital systems. just as the approximations of analysis are often imperfect, the abstraction barriers of the design language are often violated, for good reasons. thus there are the inevitable bugs. these may be found in simulation or prototyping. one key observation is that all phases of engineering involve bugs and debugging. bugs are not evidence of moral turpitude or inadequate preparation on the part of the engineer; they are an essential aspect of effective strategy in the problem-solving process. real-world problems are usually too complex to specify precisely, even assuming that we have adequate formal language to support such specifications. (imagine trying to formalize, in any precise way, what is meant by "my program plays chess at the expert level" or "my music synthesizer circuit makes sounds that are like a fine violin"; yet surely such specifications are the meat of real engineering.) and even where one can write very precise specifications (as in "this circuit makes 1/f noise"), such specifications are often mathematically intractable in current practice. even worse, analysis of systems made up of a few simple, precisely-specifiable components, such as transistors (here a few exponential equations suffice) or floating-point additions, is mathematically intractable. thus engineers, like physicists, make progress by approximate reasoning.
linearizations are made, and deductions that ignore possibly important interactions are used to plan designs, with the explicit intention of finding the relevant ignored effects in the simulation or prototype debugging phase of design. bugs thus arise from the deliberate oversimplification of problems inherent in using perturbational methods to develop answers to hard problems. finally, bugs often arise, in apparently finished products, because of unanticipated changes in the requirements of the customer. although these are technically not errors in the design, the methods we have for patching a design to accommodate a change in requirements amount to debugging the installed system. software engineering needs appropriate tools to support each of the phases of the engineering process. there must be tools to aid with specification and modelling, with synthesis and analysis, with rapid-prototyping and debugging, with documentation, verification, and testing, and with maintenance of finished products. in addition, there must be environmental tools to support the engineering process in the large. our tools must support the natural processes of problem solving. they must provide precise ways to talk about alternate design plans and strategies. there is no doubt that mathematical formalisms, such as logic and abstract algebra, are essential ingredients in such an enterprise, but we must be careful to separate our concerns. mathematical formalisms and technique are rarely strong enough to provide better than very approximate models of interesting physical, economic or engineered systems. sometimes a system that is readily described as a computer program is not easily formalized in more conventional terms. this suggests the exciting possibility that some difficult theoretical constructs can be formally represented as computational algorithms. we can expect to manipulate these as we now manipulate equations of classical analysis. such a paradigm shift is already taking place in control theory. twenty years ago, the dominant method for making control systems was synthesizing them using feedback, and the dominant theory was concerned with the stability of linear feedback systems. in contrast, most modern control systems are microprocessor-based, and the theory is now much more qualitative. moreover, the program for the microprocessor is often a much simpler description of the strategy of control than the classical equations, and one can thus express much more complex strategies than were previously feasible to analyze and synthesize. an even more striking revolution has occurred in the design of signal-processing systems. the nature of the field has changed completely, because digital filters are algorithms. artificial intelligence research often uses programs as theoretical constructs, akin to equations and schematic diagrams, but with the added feature that programs that embody parts of a theory of the design of programs can be used as tools in the process of theory construction (or software development). the language lisp, for example, was initially conceived as a theoretical vehicle for recursion theory and for symbolic algebra. most ai experiments are formulated in lisp. lisp has developed into a uniquely powerful and flexible family of software development tools, providing wrap-around support for the rapid-prototyping of software systems. as with other languages, lisp provides the glue for using a vast library of canned parts, produced by members of the ai community.
in lisp, procedures are first-class data, to be passed as arguments, returned as values, and stored in data structures. this flexibility is valuable, but most importantly, it provides mechanisms for formalizing, naming, and saving the idioms - the common patterns of usage that are essential to engineering design. in addition, lisp programs can easily manipulate the representations of lisp programs - a feature that has encouraged the development of a vast structure of program synthesis and analysis tools, such as cross-referencers. gerald jay sussman visualizing multivalued data from 2d incompressible flows using concepts from painting we present a new visualization method for 2d flows which allows us to combine multiple data values in an image for simultaneous viewing. we utilize concepts from oil painting, art, and design as introduced in [1] to examine problems within fluid mechanics. we use a combination of discrete and continuous visual elements arranged in multiple layers to visually represent the data. the representations are inspired by the brush strokes artists apply in layers to create an oil painting. we display commonly visualized quantities such as velocity and vorticity together with three additional mathematically derived quantities: the rate of strain tensor (defined in section 4), and the turbulent charge and turbulent current (defined in section 5). we describe the motivation for simultaneously examining these quantities and use the motivation to guide our choice of visual representation for each particular quantity. we present visualizations of three flow examples and observations concerning some of the physical relationships made apparent by the simultaneous display technique that we employed. r. m. kirby h. marmanis d. h. laidlaw eliminating the boundary effect of a large-scale personal communication service network simulation eliminating the boundary effects is an important issue for a large-scale personal communication service (pcs) network simulation. a pcs network is often modeled by a network of hexagonal cells. the boundary may significantly bias the output statistics if the number of hexagonal cells is small in a pcs network simulation. on the other hand, if the simulation is to be completed within a reasonable time on the available computing resources, the number of cells in the simulation cannot be too large. to avoid the inaccuracy caused by the boundary effect for a pcs network simulation with limited computing resources, we propose wrapping the hexagonal mesh into a homogeneous graph (i.e., all nodes in the graph are topologically identical). we show that by using the wrapped hexagonal mesh, the inaccuracy of the output measures can be limited even though the number of cells in the simulation is small. we can thus obtain the same statistical accuracy while using significantly less computation power than required for a simulation without cell wrapping. yi-bing lin victor w. mak evaluation of a (r,s,q,c) multi-item inventory replenishment policy through simulation carlos b. cerdaramirez armando j. espinosa de los monteros f. simulation with insight insight (ins) is a computer simulation language for describing systems in a quick, simple, and compact fashion. the description is executed by a computer, and statistics summarizing the simulation are automatically provided. use of the language does not require any special programming or statistical expertise. complex models use the descriptive features of simple ones but incorporate more elaborate specifications.
likewise, sophisticated statistical and simulation procedures available in insight are additions to, rather than revisions of, the model. simulations can be executed on mainframes when execution speed is needed or on a micro where interaction with the simulation and model development is facilitated. the net result is that the process of simulation modeling and the results from the simulations combine to provide "insight" into problem solving. stephen d. roberts animating wrinkles on clothes this paper describes a method to simulate realistic wrinkles on clothes without a fine mesh and large computational overheads. cloth has very little in-plane deformation, as most of the deformation comes from buckling. this can be looked at as an area-conservation property of cloth. the area-conservation formulation of the method modulates the user-defined wrinkle pattern based on the deformation of each individual triangle. the methodology facilitates the use of small in-plane deformation stiffnesses and a coarse mesh for the numerical simulation; this makes cloth simulation fast and robust. moreover, the ability to design wrinkles (even on generalized deformable models) makes this method versatile for synthetic image generation. the method, inspired by the cloth-wrinkling problem and geometric in nature, can be extended to other wrinkling phenomena. sunil hadap endre bangerter pascal volino nadia magnenat-thalmann how to use expert advice we analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. our analysis is for worst-case situations, i.e., we make no assumptions about the way the sequence of bits to be predicted is generated. we measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictions. we show that the minimum achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. our upper and lower bounds have matching leading constants in most cases. we then show how this leads to certain kinds of pattern recognition/learning algorithms with performance bounds that improve on the best results currently known in this context. we also compare our analysis to the case in which log loss is used instead of the expected number of mistakes. nicolò cesa-bianchi yoav freund david haussler david p. helmbold robert e. schapire manfred k. warmuth geometric modelling and display primitives towards specialised hardware work over the last ten years developing a simple geometric modelling scheme has led to the design of a high speed display processor capable of generating real time moving displays directly from a three dimensional model. the geometric model consists of a graph-matrix boundary representation linked to a boolean expression volume overlap representation. the architecture of the display processor is particularly suitable for implementation as a pipeline of vlsi components, and current work is exploring this possibility. a divide and conquer, quad tree algorithm applied to the boolean expression model allows the system to make use of scene coherence, and used with the hardware will make it possible to handle scenes of high complexity. a. l. thomas the power of pluralism for automatic program synthesis carl h.
smith animating prairies in real-time frank perbet marie-paule cani interactive multi-pass programmable shading programmable shading is a common technique for production animation, but interactive programmable shading is not yet widely available. we support interactive programmable shading on virtually any 3d graphics hardware using a scene graph library on top of opengl. we treat the opengl architecture as a general simd computer, and translate the high-level shading description into opengl rendering passes. while our system uses opengl, the techniques described are applicable to any retained mode interface with appropriate extension mechanisms and hardware api with provisions for recirculating data through the graphics pipeline. we present two demonstrations of the method. the first is a constrained shading language that runs on graphics hardware supporting opengl 1.2 with a subset of the arb imaging extensions. we remove the shading language constraints by minimally extending opengl. the key extensions are color range (supporting extended range and precision data types) and pixel texture (using framebuffer values as indices into texture maps). our second demonstration is a renderer supporting the renderman interface and renderman shading language on a software implementation of this extended opengl. for both languages, our compiler technology can take advantage of extensions and performance characteristics unique to any particular graphics hardware. mark s. peercy marc olano john airey p. jeffrey ungar a new approach to knowledge acquisition by repertory grids sanjiv k. bhatia qi yao frankenstein's insects (abstract) jonathan mills representation of the tactile surface texture of an object using a force feedback system teruaki iinuma hideki murota yasuo kubota hiroshi tachihara virtual stage: an interactive 3d karaoke system keechang lee changwhan sui sungjoon hur kwang yun wohn overfitting and undercomputing in machine learning tom dietterich taming recognition errors with a multimodal interface sharon oviatt multiresolution tetrahedral framework for visualizing regular volume data yong zhou baoquan chen arie kaufman bugid (abstract only): a soybean insect pest identifier bugid is a production system for identifying insect pests. it is written in ops5, an expert system language. the features that have been implemented in the system include the ability to start at any level of analysis, to show the insects that are or are not possible at any level of analysis, to trace back to the beginning of the analysis from any stage, and to display results. the system is also designed to handle erroneous inputs. the system can guide the user at any stage, suggest remedial measures for various insects, and give explanations of how it arrived at its conclusions and suggestions. bugid is intended to be a component in a larger expert system for insect pest management, for which a prototype is being developed at lsu. krishna m. uppuluri walter g. rudd unifit ii: total support for simulation input modeling stephen vincent averill m. law recovering photometric properties of architectural scenes from photographs yizhou yu jitendra malik reflection space image based rendering brian cabral marc olano philip nemec taxonomic ambiguities in category variations needed to support machine conceptualization this is a theoretical expositional exploration into the underlying needs of concept formation.
the main purpose is to identify and discuss the differing forms of categorization in the context of possible machine learning and representation of those concepts. conceptualization is the process of developing the abstractions that are needed to support reasoning. the formation of the machine equivalent of human concepts is critical to the development of a general, machine-based reasoning capacity. an aid in understanding conceptual categorization is prototype theory. it helps to identify the building-block tools that are used to construct categories and taxonomies. when developing taxonomic structures, framing conflicts can occur in terms of how things should be clustered together, what should be the relative hierarchical levels, and what should be subordinate to what. l j mazlack multi-modal stereognosis luiz m. g. gonçalves roderic a. grupen antonio a. f. oliveira on the automation of legal reasoning about responsibility jos lehmann abdullatif a. o. elhag saving private ryan yves metraux implementing multi-user virtual worlds (panel session): ideologies and issues chris greenhalgh integration of volume rendering and geometric graphics: work in progress e. ruth johnson charles e. mosher constrained optimal framings of curves and surfaces using quaternion gauss maps andrew j. hanson reactive search, a history-sensitive heuristic for max-sat the reactive search (rs) method proposes the integration of a simple history-sensitive (machine learning) scheme into local search for the on-line determination of free parameters. in this paper a new rs algorithm is proposed for the approximated solution of the maximum satisfiability problem: a component based on local search with temporary prohibitions (tabu search) is complemented with a reactive scheme that determines the appropriate value of the prohibition parameter by monitoring the hamming distance along the search trajectory. the proposed algorithm (h-rts) can therefore be characterized as a dynamic version of tabu search. in addition, the non-oblivious functions recently introduced in the framework of approximation algorithms are used to discover a better local optimum in the initial part of the search. the algorithm is developed in two phases. first, the bias-diversification properties of individual candidate components are analyzed by extensive empirical evaluation; then a reactive scheme is added to the winning component, based on tabu search. the final tests on a benchmark of random max-3-sat and max-4-sat problems demonstrate the superiority of h-rts with respect to alternative heuristics. roberto battiti marco protasi exploring geo-scientific data in virtual environments this paper describes tools and techniques for the exploration of geo-scientific data from the oil and gas domain in stereoscopic virtual environments. the two main sources of data in the exploration task are seismic volumes and multivariate well logs of physical properties down a bore hole. we have developed a props-based interaction device called the cubic mouse to allow more direct and intuitive interaction with a cubic seismic volume. this device effectively places the seismic cube in the user's hand. geologists who have tried this device have been enthusiastic about the ease of use, and were adept only a few moments after picking it up. we have also developed a multi-modal visualisation and sonification technique for the dense, multivariate well log data.
the visualisation can show two well log variables mapped along the well geometry in a bivariate colour scheme, and another variable on a sliding lens. a sonification probe is attached to the lens so that other variables can be heard. the sonification is based on a geiger-counter metaphor that is widely understood and which makes it easy to explain. the data is sonified at higher or lower resolutions depending on the speed of the lens. sweeps can be made at slower rates and over smaller intervals to home in on peaks, boundaries or other features in the full resolution data set. bernd fröhlich stephen barrass björn zehner john plate martin göbel weave: a system for visually linking 3-d and statistical visualizations, applied to cardiac simulation and measurement data d. l. gresh b. e. rogowitz r. l. winslow d. f. scollan c. k. yung hands-free multi-scale navigation in virtual environments joseph j. laviola daniel acevedo feliz daniel f. keefe robert c. zeleznik modsim iii - a tutorial john goble mechanical discovery of classes of problem-solving strategies george w. ernst michael m. goldstein the displacement method for implicit blending surfaces in solid models to date, methods that blend solids, that is, b-rep or csg models, with implicit functions require successive composition of the blending functions to handle an arbitrary solid model. the shape of the resulting surfaces depends upon the algebraic distances defined by these functions. to achieve meaningful shapes, previous methods have relied on blending functions that have a pseudo-euclidean distance measure. these methods are abstracted, resulting in some general observations. unfortunately, the functions used can exhibit unwanted discontinuities. a new method, the displacement form of blending, embeds the zero surface of the blending functions in a form for which algebraic distance is c1 continuous in the entire domain of definition. characteristics of the displacement form are demonstrated using the superelliptic blending functions. intuitive and mathematical underpinnings are provided. a. p. rockwood a belief management architecture for diagnostic problem solving an architecture for diagnosis that uses qualitative endorsements as its principal method of uncertainty abstraction and propagation is presented. the framework performs local belief computations in a hierarchical hypothesis space, in contrast with methods that propagate evidence throughout the whole frame of discernment. in this system, global control of the decision making process is maintained by local evaluations of belief status. these local evaluations determine an active focus in which refinement of belief status is undertaken by gathering additional information. the main goal of the research project is the development of a framework for reasoning with endorsements, and the diagnostic application explicated in the paper is built as a proof-of-principle. serdar uckun benoit m. dawant gautam biswas kazuhiko kawamura geometric reconstruction with anisotropic alpha-shapes michael capps marek teichmann hybrid volume and polygon rendering with cube hardware kevin kreeger arie kaufman design of high level modelling / high performance simulation environments bernard p. zeigler doohwan kim integration issue in knowledge acquisition systems k. morik practical benefits of animated graphics in simulation recent years have seen the emergence of graphics tools for simulation.
these add, to the basic representational facilities of the simulation technique, features enabling the construction and driving of animated color mimic diagrams. the paper considers the contribution of such tools - do they lead to real benefits or are they luxury toys? brian w. hollocks simulation-based dynamic optimization: planning united states coast guard law enforcement patrols michael p. bailey kevin d. glazebrook robert f. dell hidden curve removal for free form surfaces gershon elber elaine cohen a computer simulation of the effects of variations in constitutive rules and individual goals on interpersonal communications this paper is an extension of earlier work concerning the coordinated management of meaning theory of interpersonal communication. prior research has taken several paths. among them have been laboratory experiments utilizing games to simulate conversations, computer simulations, and case studies of actual conversations. the research reported in this paper involves a detailed examination of relatively simple interpersonal communications systems. the results of the simulation will be used to evaluate selected theorems to develop protocols for further laboratory testing and to identify laboratory and field research necessary to develop a more accurate model of conversational behavior. aaron h. brown charles j. campbell eugene e. kaczka a linguistic geometry for multiagent systems boris stilman yesterday's world of tomorrow paul simpson representations for rigid solids: theory, methods, and systems aristides g. requicha on reasoning from data david waltz simon kasif a framework for knowledge representation and use in pattern analysis in this paper a prototype expert system oriented to signal and pattern analysis is described together with a general methodology based on a hypothesize-and-test strategy similar to the one used by a human expert. this paper focuses on the knowledge base architecture and on its use. in order to make it capable of dealing with noisy patterns, the knowledge description is based on fuzzy logic and the inference engine is able to reason about the knowledge it uses. the system is being applied to the phonetic analysis of the voice signal in speech recognition, as a case study. f bergadano a giordana a method to document data entry forms all of us receive data entry forms manually and electronically. these forms have to be filled in and returned. they vary in size, content and clarity. frequently parts of them are duplicates, but the duplication is concealed by the use of different terminology. the amount of duplicate input will increase sharply in the future, as more and more forms are communicated electronically. this paper describes a method to document data entry forms. this method converts a paper form to a computerized form, divides the form into data elements, identifies synonyms and replaces them with one preferred term, standardizes mathematical relationships, and arrives at a smaller subset of elements for which actual values have to be input. once values are input for this limited subset, the method can compute the values for the other derived elements and produce the final filled-in form. all the data elements in the forms are thus documented precisely. six different software products have to be developed and linked together to make this method operational. the work on these is also described. c. s.
sankar theories of discrete event model abstraction suleyman sevinc towards inter-robot communications: using short-term memory for mobile robots robert castillo the control and transformation metric: toward the measurement of simulation model complexity current complexity metrics based upon graphical analysis or static program characteristics are not well suited for discrete event simulation model representations, owing to their inherent dynamics. this paper describes a complexity metric suitable for model representations. a study of existing metrics provides a basis for the development of the desired metric, and a set of characteristics for a simulation model complexity metric is defined. a metric is described based on the two types of complexity that are apparent in model representations. experimental data are presented to verify that this metric possesses the desired characteristics. based on evaluation of this data and the desired characteristics, this metric appears to offer an improvement over existing metrics. jack c. wallace efficient image-based methods for rendering soft shadows we present two efficient image-based approaches for computation and display of high-quality soft shadows from area light sources. our methods are related to shadow maps and provide the associated benefits. the computation time and memory requirements for adding soft shadows to an image depend on image size and the number of lights, not geometric scene complexity. we also show that because area light sources are localized in space, soft shadow computations are particularly well suited to image-based rendering techniques. our first approach---layered attenuation maps---achieves interactive rendering rates, but limits sampling flexibility, while our second method---coherence-based raytracing of depth images---is not interactive, but removes the limitations on sampling and yields high quality images at a fraction of the cost of conventional raytracers. combining the two algorithms allows for rapid previewing followed by efficient high-quality rendering. maneesh agrawala ravi ramamoorthi alan heirich laurent moll vision satoshi kitahara my little alien kelvin lee a frequency based ray tracer mark r. bolin gary w. meyer evolutionary learning in mobile robot navigation cory quammen validation, verification, and testing techniques throughout the life cycle of a simulation study osman balci the vacuum buffer voicu popescu anselmo lastra business process re-engineering jeffry jones is ai abstract and impractical? isn't the answer obvious? musings inspired by sun, lemonade, and lost students lisa c. kaczmarczyk composable controllers for physics-based character animation an ambitious goal in the area of physics-based computer animation is the creation of virtual actors that autonomously synthesize realistic human motions and possess a broad repertoire of lifelike motor skills. to this end, the control of dynamic, anthropomorphic figures subject to gravity and contact forces remains a difficult open problem. we propose a framework for composing controllers in order to enhance the motor abilities of such figures. a key contribution of our composition framework is an explicit model of the "pre-conditions" under which motor controllers are expected to function properly. we demonstrate controller composition with pre-conditions determined not only manually, but also automatically based on support vector machine (svm) learning theory. we evaluate our composition framework using a family of controllers capable of synthesizing basic actions such as balance, protective stepping when balance is disturbed, protective arm reactions when falling, and multiple ways of standing up after a fall. we furthermore demonstrate these basic controllers working in conjunction with more dynamic motor skills within a prototype virtual stunt-person. our composition framework promises to enable the community of physics-based animation practitioners to easily exchange motor controllers and integrate them into dynamic characters. petros faloutsos michiel van de panne demetri terzopoulos
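a minimal sketch of the pre-condition idea in the composable-controllers abstract above: each controller advertises a predicate over the character state, and an arbiter hands control to a controller only when its predicate accepts the current state. the state fields, thresholds and selection rule here are purely illustrative assumptions, not the paper's actual model:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional, Sequence

State = Dict[str, float]

@dataclass
class Controller:
    name: str
    pre_condition: Callable[[State], bool]   # can this controller take over in this state?
    # (real controllers would also carry the torque-computation logic, omitted here)

def arbitrate(controllers: Sequence[Controller], state: State) -> Optional[Controller]:
    """hand control to the first controller whose pre-condition holds."""
    for c in controllers:
        if c.pre_condition(state):
            return c
    return None

# hypothetical state fields and thresholds, purely for illustration
controllers = [
    Controller("balance",          lambda s: abs(s["lean"]) < 0.1 and s["upright"] > 0.9),
    Controller("protect_step",     lambda s: abs(s["lean"]) < 0.4 and s["upright"] > 0.7),
    Controller("fall_and_recover", lambda s: True),   # default fallback
]
print(arbitrate(controllers, {"lean": 0.25, "upright": 0.85}).name)  # protect_step
```

per the abstract, such predicates need not be hand-written: they can also be learned automatically (e.g., via an svm over simulated outcomes); here they are fixed lambdas only.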
v-buffer: visible volume rendering craig upson michael keeler creating digital libraries together - collaboration, multimodality, and plurality many have tried to answer the question of what a digital library is and how such libraries should be built. but, in a sense, the question of how to construct digital libraries as well-defined entities is misguided from the beginning. there are many approaches to building digital libraries [7, 18, 4] and each approach must be understood from within a context. some contexts such as information retrieval and digitizing of existing materials have received much attention [12, 22, 18, 17], while other contexts have been more or less ignored [19]. one such context is that of networking from a higher level of abstraction [8, 11]. since traditional libraries have long since existed in elaborate and large-scale physical networks, it is only natural that we should see such structures mirrored in the world of digital abstract networks. the universal simulator [10] application builds on the idea that research in digital libraries need not necessarily focus on micro level infrastructures, but that we may also find interesting possibilities on the macro level of digital library infrastructures. moreover, at such a macro level we may find important new ways of collaborating and building digital libraries in educational settings. anders hedman scaling, hierarchical modeling, and reuse in an object-oriented modeling and simulation system thorsten daum robert g. sargent v-clip: fast and robust polyhedral collision detection this article presents the voronoi-clip, or v-clip, collision detection algorithm for polyhedral objects specified by a boundary representation. v-clip tracks the closest pair of features between convex polyhedra, using an approach reminiscent of the lin-canny closest features algorithm. v-clip is an improvement over the latter in several respects. coding complexity is reduced, and robustness is significantly improved; the implementation has no numerical tolerances and does not exhibit cycling problems. the algorithm also handles penetrating polyhedra, and can therefore be used to detect collisions between nonconvex polyhedra described as hierarchies of convex pieces. the article presents the theoretical principles of v-clip, and gives a pseudocode description of the algorithm. it also documents various tests that compare v-clip, lin-canny, and the enhanced gjk algorithm, a simplex-based algorithm that is widely used for the same application. the results show v-clip to be a strong contender in this field, comparing favorably with the other algorithms in most of the tests, in terms of both performance and robustness. brian mirtich
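for contrast with the feature-tracking approach in the v-clip abstract above, the naive baseline such algorithms avoid is an exhaustive scan of candidate pairs every frame. the sketch below only compares vertex pairs (a crude bound, since the true closest features may be edges or faces) and is purely illustrative, not v-clip itself:

```python
from itertools import product

def closest_vertex_pair(verts_a, verts_b):
    """o(n*m) brute force over vertex pairs -- only a rough stand-in for the
    incremental closest-feature tracking (vertex/edge/face pairs) that v-clip
    and lin-canny perform."""
    best_pair, best_d2 = None, float("inf")
    for a, b in product(verts_a, verts_b):
        d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        if d2 < best_d2:
            best_d2, best_pair = d2, (a, b)
    return best_pair, best_d2 ** 0.5

cube  = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
other = [(x + 3.0, y, z) for (x, y, z) in cube]
print(closest_vertex_pair(cube, other))   # distance 2.0 between the facing faces
```

v-clip and lin-canny instead cache the closest feature pair from the previous query and walk locally between neighboring features, so the per-query cost stays nearly constant when motion is coherent.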
interfaces for understanding multi-agent behavior synchronized punch-card displays are an interface technique to visualize tens of thousands of variables by encoding their values as color chips in a rectangular array. our technique ties multiple such displays to a timeline of events, enabling the punch-card displays to show animations of the behavior of complex systems. punch-card displays not only make it easy to understand the high-level behavior of systems, but also enable users to quickly focus on individual variables and on fine-grained time intervals. this paper describes synchronized punch-card displays and shows how this technique is extremely powerful for understanding the behavior of complex multi-agent systems. pedro szekely craig milo rogers martin frank hierarchical, modular modelling in devs-scheme b. p. zeigler j. h. hu j. w. rosenblit computer vision and artificial intelligence christopher o. jaynes signal-to-symbol transformation and vice versa: from fundamental processes to representation ruzena bajcsy practical animation of liquids we present a general method for modeling and animating liquids. the system is specifically designed for computer animation and handles viscous liquids as they move in a 3d environment and interact with graphics primitives such as parametric curves and moving polygons. we combine an appropriately modified semi-lagrangian method with a new approach to calculating fluid flow around objects. this allows us to efficiently solve the equations of motion for a liquid while retaining enough detail to obtain realistic-looking behavior. the object interaction mechanism is extended to provide control over the liquid's 3d motion. a high quality surface is obtained from the resulting velocity field using a novel adaptive technique for evolving an implicit surface. enhancing the efficiency and versatility of directly manipulated free-form deformation james gain neil dodgson filter: an algorithm for reducing cascaded rollbacks in optimistic distributed simulations atul prakash rajalakshmi subramanian intelligent form feature interaction management in a cellular modeling scheme rafael bidarra jose carlos teixeira more theory revision with queries (extended abstract) judy goldsmith robert h. sloan computer vision applications astound the imagination karen sullivan the development of a methodology for the use of neural networks and simulation modeling in system design mahdi nasereddin mansooreh mollaghasemi wanted milla moilanen can the regenerative method be applied to discrete-event simulation? shane g. henderson peter w. glynn piccolo's encore sam chen the visual simulation environment technology transfer osman balci anders i. bertelrud chuck m. esterbrook richard e. nance refinement: a tool to deal with inconsistencies in traditional belief revision approaches, new information is accepted unconditionally. in such models the agent either corrects his knowledge by incorporating the new conflicting information or simply rejects it. this paper proposes an alternative way where, in the presence of new conflicting data, the agent refines his knowledge by accepting the new conflicting information under certain conditions. such an operation, called here refinement, provides an interesting tool to avoid contradictions in normative systems and is a first step towards a rigorous treatment of how courts provide fresh solutions to circumstances which were unpredicted by legal statutes.
it is an alternative to defeasible-conditional approaches, much in the spirit of alchourron's defense of revision as the proper account of ampliative reasoning [1]. the paper introduces a constructive model for refinement based on the power of the agm construction [3] and inspired by general ideas of selective revision put forward by ferme and hansson [6]. a refinement operator is defined, and an axiomatic characterization is provided with a representation theorem. juliano maranhão visibility driven hierarchical radiosity fredo durand george drettakis claude puech an analysis of line numbering strategies in text editors many techniques are employed in numbering lines for text editing. the simplest approach uses a single integer that changes each time a line is added or deleted (1). basic, in addition to other systems, uses fixed multi-digit numbers bound to each line. this approach has problems: only a fixed number of lines can be inserted between two consecutive lines of text; otherwise the original text must be renumbered. this negates one advantage of line numbers: the ability to compare different versions of the same document. in order to overcome these disadvantages, two basic schemes have been proposed and implemented. the following grammar defines the two schemes: fractional line numbering (fln) and hierarchical line numbering (hln). m. l. schneider s. nudelman k. hirsh-pasek the design of a multi-microprocessor based simulation computer - i a discrete event simulation computer based on a network of microprocessors is being developed at florida international university. this paper contains a description of the simulation models used thus far in the development process and results obtained from them. a system using a pdp-11 as the principal processor and a motorola m68000 as an event set processor has been implemented. results from the performance of this system are presented, and plans for further development are discussed. john craig comfort computer-generated pen-and-ink illustration of trees we present a method for automatically rendering pen-and-ink illustrations of trees. a given 3-d tree model is illustrated by the tree skeleton and a visual representation of the foliage using abstract drawing primitives. depth discontinuities are used to determine what parts of the primitives are to be drawn; a hybrid pixel-based and analytical algorithm allows us to deal efficiently with the complex geometric data. using the proposed method we are able to generate illustrations with different drawing styles and levels of abstraction. the illustrations generated are spatially coherent, enabling us to create animations of sketched environments. applications of our results are found in architecture, animation and landscaping. oliver deussen thomas strothotte two-stage multiple-comparison procedures for steady-state simulations procedures for multiple comparisons with the best are investigated in the context of steady-state simulation, whereby a number k of different systems (stochastic processes) are compared based upon their (asymptotic) means μi (i = 1, 2, …, k). the variances of these (asymptotically stationary) processes are assumed to be unknown and possibly unequal. we consider the problem of constructing simultaneous confidence intervals for μi - maxj≠i μj, i = 1, 2, …, k, which is known as multiple comparisons with the best (mcb). our intervals are constrained to contain 0, and so are called constrained mcb intervals. in particular, two-stage procedures for the construction of absolute- and relative-width confidence intervals are presented. their validity is addressed by showing that the confidence intervals cover the parameters with probability of at least some user-specified threshold value, as the confidence intervals' width parameter shrinks to 0. the general assumption about the processes is that they satisfy a functional central limit theorem. the simulation output analysis procedures are based on the method of standardized time series (the batch means method is a special case). the techniques developed here extend to other multiple-comparison procedures such as unconstrained mcb, multiple comparisons with a control, and all-pairwise comparisons. although simulation is the context in this paper, the results naturally apply to (asymptotically) stationary time series. halim damerdji marvin k. nakayama
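a small numerical sketch of the quantity the mcb intervals above are built around: the plug-in point estimate of μi - maxj≠i μj computed from batch means. the numbers are made up, and the two-stage interval construction itself (sample-size determination and width control) is not reproduced here:

```python
import numpy as np

# hypothetical batch means for k = 3 simulated systems (rows) over 4 batches (columns)
batch_means = np.array([
    [10.2,  9.8, 10.5, 10.1],
    [11.0, 11.3, 10.7, 11.1],
    [ 9.5,  9.9,  9.7,  9.6],
])

mu_hat = batch_means.mean(axis=1)   # estimated steady-state mean of each system
k = len(mu_hat)
for i in range(k):
    best_other = max(mu_hat[j] for j in range(k) if j != i)
    # positive value: system i is estimated best; negative: its estimated gap to the best
    print(f"system {i}: mu_hat_i - max_(j!=i) mu_hat_j = {mu_hat[i] - best_other:+.3f}")
```

the constrained mcb intervals then widen these point estimates, with the constraint that each interval contains 0.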
predicting a binary sequence almost as well as the optimal biased coin yoav freund square-free decomposition in finite characteristic: an application to jordan form computation the problem of computing the square-free decomposition for univariate polynomials with coefficients in arbitrary fields has been addressed in ([gt]). the complete square-free decomposition can be computed over arbitrary fields of finite characteristic solely assuming that the field satisfies the **condition p** of seidenberg ([se]), which has been proven equivalent to the ability to compute such decompositions (see also [mrr]). if we assume that the field is only an effective field (i.e., a field **k** where there are constructive procedures for performing rational operations in **k** and for deciding whether or not two elements in **k** are equal), it is possible to obtain a weaker decomposition into powers of relatively prime factors, not necessarily square-free, but such that within each factor the roots have constant multiplicity. although this is a partial decomposition, much useful information can be gathered from this result. as an application we present an algorithm to compute the jordan form of a matrix over an arbitrary effective field. in particular we show how to handle problems of inseparability while splitting invariant factors and constructing a symbolic jordan form. the computation of normal forms of a matrix, in particular of the jordan form, is a very important task and has many useful applications, so it has been widely studied for many years and many efficient algorithms, sequential and parallel ([o], [l], [gi1], [gi2], [ol], [kks], [rv]), are already available for its computation. there are already algorithms which compute the jordan form of a matrix over general fields ([gd], [rv]), but they are based on dynamic evaluation ([d5]) and we want to avoid the use of such a scheme, which requires a special computational environment. storjohann ([st]) has given a new algorithm for computing the rational canonical form which has a deterministic complexity of o(n^3), but he does not compute the transition matrix with the same complexity. steel's ([s]) algorithm for computing the generalized jordan form has complexity o(n^4) but requires factoring polynomials into irreducibles. kaltofen et al. ([kks]) give fast parallel algorithms for canonical forms and make the observation that one could compute a symbolic jordan form from a rational canonical form by splitting the invariant factors using gcd's and square-free decompositions.
they require the computation of complete square-free decompositions and thus also require that **k** be a perfect field with the ability to compute _p_th roots. they also don't compute the transition matrix. ozello ([o]) presents an algorithm for computing the rational canonical form which is deterministic with complexity o(n^4), and leaves the question of faster probabilistic approaches for future work. giesbrecht ([gi2]) gives a probabilistic algorithm whose complexity is essentially the same as matrix multiplication but requires choosing _n_ "good" random vectors simultaneously, thus giving only a probability of 1/4 of making a successful choice. our aim is to obtain a general sequential algorithm, of a complexity comparable with most of the existing algorithms, that works in the widest possible setting, without requiring particular computing resources, and hence of easy and straightforward implementation. because of our hypothesis, in general, our algorithm will produce a _symbolic jordan form_ ([k], [rv]), but the main difference with the other available algorithms based on dynamic evaluation is that our algorithm is a rational algorithm, since all the computations take place in the given field, except for the output and possibly the computation of the inverse of the transition matrix. to obtain all the information on the symbolic roots of the characteristic polynomial (multiplicities and recognition) we, at first, transform the given matrix _a_ into a _pseudo-rational_ form, i.e., a block diagonal matrix, similar to _a_, with companion matrices on the diagonal, without requiring any kind of divisibility of the associated polynomials. then we refine the factorization of the characteristic polynomial, given by the polynomials whose companion matrices are on the diagonal of the pseudo-rational form, using partial square-free decomposition and gcd computations, so that we can identify the same roots in different blocks and also reduce, as much as possible without factorization, the degree of the defining polynomials for the eigenvalues. the pseudo-rational form is computed with a probabilistic algorithm of complexity o(n^3) such that each independent random choice is verifiable with probability better than 1 - 1/_n_ of success. we derive this probabilistic algorithm from one for the computation of the rational form, which has a complexity of o(n^4), and is obtained via a straightforward analysis of the properties of the minimal polynomial that leads to a natural way to construct invariant subspaces. elisabetta fortuna patrizia gianni learning apprentice system for turbine modelling a learning apprentice system is presented, which learns from examples extracted from user dialogues. the system provides an interface between the user and the turbine modeller. while a dialogue is carried out between the user and the turbine modelling software, the system observes the dialogues and whenever a new example is observed which performs a task completely, the system tries to learn it. the learning methodology used by the system is described and various drawbacks are pointed out. a new learning methodology is proposed which easily overcomes the problems faced by the earlier methodology. mohammad jamil sawar richard c. thomas the application of non-periodic tiling patterns in the creation of artistic images kenneth a.
huff parallel simulation using conservative time windows rassul ayani hassan rajaei double- and triple-step incremental generation of lines a method of increasing the efficiency of line drawing algorithms by setting additional pixels during loop iterations is presented in this paper. this method adds no additional costs to the loop. it is applied here to the double-step algorithm presented in [15] and later used in [14], resulting in up to a thirty-three percent reduction in the number of iterations and a sixteen percent increase in speed. in addition, the code complexity and initialization costs of the resulting algorithm remain the same. phil graham s. sitharama iyengar scribe: how to use it for document production scribe is a document production system that makes it easier to produce and maintain large, complex technical documentation. although its primary purpose is high-quality text formatting, scribe also performs cross-reference tabulations, indexing, automatic generation of text such as tables of contents, and figure integration. many text formatting programs exist, and many can do some or all of what scribe can do. but scribe has proven through the years to be much easier to learn and use than most other systems, and documents produced in scribe have proven to be much easier to edit and maintain than documents produced other ways. brian reid further experience with controller-based automatic motion synthesis for articulated figures we extend an earlier automatic motion-synthesis algorithm for physically realistic articulated figures in several ways. first, we summarize several incremental improvements to the original algorithm that improve its efficiency significantly and provide the user with some ability to influence what motions are generated. these techniques can be used by an animator to achieve a desired movement style, or they can be used to guarantee variety in the motions synthesized over several runs of the algorithm. second, we report on new mechanisms that support the concatenation of existing, automatically generated motion controllers to produce complex, composite movement. finally, we describe initial work on generalizing the techniques from 2d to 3d articulated figures. taken together, these results illustrate the promise and challenges afforded by the controller-based approach to automatic motion synthesis for computer animation. joel auslander alex fukunaga hadi partovi jon christensen lloyd hsu peter reiss andrew shuman joe marks j. thomas ngo small robot mapping and navigation christopher ireland control flow graphs as a representation language bruce a. cota douglas g. fritz robert g. sargent a generative model of narrative cases effective case-based reasoning in complex domains requires a representation that strikes a balance between expressiveness and tractability. for cases in temporal domains, formalization of event transitions in a narrative grammar can simplify both the user's task of problem formulation and the system's indexing, matching, and adaptation tasks without compromising expressiveness. this paper sets forth a model of temporal cases based on narrative grammars, demonstrates its applicability to several different domains, distinguishes two different similarity metrics---sequence overlap and tree overlap---and shows how the choice between these metrics depends on whether nonterminals in the narrative grammar correspond to abstract domain states or merely represent constraints on event transitions. the paper shows basic-level and legal-event narrative grammars can be used together to model how human lawyers interleave fact elicitation and analysis. l. karl branting
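one plausible instantiation of the sequence-overlap metric mentioned in the narrative-case abstract above is a longest-common-subsequence score over event labels; the labels and the normalization below are assumptions made for illustration, not necessarily the paper's definition:

```python
def lcs_length(a, b):
    """length of the longest common subsequence of two event sequences."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

def sequence_overlap(a, b):
    """overlap score in [0, 1]: shared subsequence length relative to the longer case."""
    return lcs_length(a, b) / max(len(a), len(b), 1)

# two narrative cases as sequences of hypothetical event labels
case1 = ["offer", "acceptance", "delivery", "nonpayment", "suit"]
case2 = ["offer", "acceptance", "nonpayment", "settlement"]
print(sequence_overlap(case1, case2))   # 0.6
```

a tree-overlap metric, by contrast, would compare the parse trees produced by the narrative grammar rather than the flat event sequences.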
the paper shows basic-level and legal-event narrative grammars can be used together to model how human lawyers interleave fact elicitation and analysis. l. karl branting control strategies for two-player games computer games have been around for almost as long as computers. most of these games, however, have been designed in a rather ad hoc manner because many of their basic components have never been adequately defined. in this paper some deficiencies in the standard model of computer games, the minimax model, are pointed out and the issues that a general theory must address are outlined. most of the discussion is done in the context of control strategies, or sets of criteria for move selection. a survey of control strategies brings together results from two fields: implementations of real games and theoretical predictions derived on simplified game-trees. the interplay between these results suggests a series of open problems that have arisen during the course of both analytic experimentation and practical experience as the basis for a formal theory. bruce abramson towards a logical foundation of discrete event modeling and simulation ashvin radiya to dream the possible dream raj reddy virtual environments and interactivity: windows to the future c. conn j. lanier m. minsky s. fisher a. druin computer graphics technology (panel session) john staudhammer dean bailey steven dines louis j. doctor karl guttag jack l. hancock klaus w. lindenberg edmund y. sun the rm-cell: a set of old ideas that produce new results g de la pena casares inductive logic programming and learnability jörg-uwe kietz saso dzeroski an efficient algorithm for managing partial orders in planning maria fox derek long visualizing modeling heuristics: an exploratory study laurie b. waisel william a. wallace thomas r. willemain relief texture mapping we present an extension to texture mapping that supports the representation of 3-d surface details and view motion parallax. the results are correct for viewpoints that are static or moving, far away or nearby. our approach is very simple: a relief texture (texture extended with an orthogonal displacement per texel) is mapped onto a polygon using a two-step process: first, it is converted into an ordinary texture using a surprisingly simple 1-d forward transform. the resulting texture is then mapped onto the polygon using standard texture mapping. the 1-d warping functions work in texture coordinates to handle the parallax and visibility changes that result from the 3-d shape of the displacement surface. the subsequent texture-mapping operation handles the transformation from texture to screen coordinates. manuel m. oliveira gary bishop david mcallister simulating pass transistor circuits using logic simulation machines an algorithm for pass transistor simulation using the yorktown simulation engine (yse) is outlined. implementing this algorithm yields an efficient tool for custom vlsi circuit design verification and fault simulation. modeling of circuits under this environment is defined, including the analysis of the algorithm's performance for some general circuit types. a number of specific examples are discussed in detail. z. barzilai l. huisman g. silberman d. tang l. 
woo shutterbug joseph brumm an updated survey of ga-based multiobjective optimization techniques after using evolutionary techniques for single-objective optimization for more than two decades, the incorporation of more than one objective in the fitness function has finally become a popular area of research. as a consequence, many new evolutionary-based approaches and variations of existing techniques have recently been published in the technical literature. the purpose of this paper is to summarize and organize the information on these current approaches, emphasizing the importance of analyzing the operations research techniques on which most of them are based, in an attempt to motivate researchers to look into these mathematical programming approaches for new ways of exploiting the search capabilities of evolutionary algorithms. furthermore, a summary of the main algorithms behind these approaches is provided, together with a brief criticism that includes their advantages and disadvantages, degree of applicability, and some known applications. finally, further trends in this area and some possible paths for further research are also addressed. carlos a. coello modeling of computer systems (panel session) over ten years have elapsed since the first symposium on the simulation of computer systems in 1973. for several years thereafter there were similar conferences held. sessions at numerous winter simulation conferences have featured papers on simulation of computer systems. yet more and more attention has been paid to analytical models for determination of projected performance of computer systems. is this the result of proven accuracy, ease of use, cost-effectiveness or lack of knowledge of simulation? in what instances is it necessary or desirable to use simulation rather than analytical models? paul f. roth high performance graphics systems (panel session) john tartar kellogg booth louis doctor andries van dam solid modeling (panel session) ronald n. goldman david gossard richard riesenfeld herbert voelcker tony woo document image understanding sargur n. srihari merging text and graphics the capability to merge graphics and text into a consolidated document can greatly enhance communication. even simple graphics such as boxes and arrows can help organize ideas and make information easier to understand. it is still disturbing to see the number of manuals that describe computer graphics systems that do not include even one graphic image! in the phototypesetting world, the capability to merge graphics and text has been available for some time, but only recently have the components for less costly systems become available. this paper will discuss a segmented system in use in a scientific r&d environment for including graphics into documents. judi cleary lod-sprite technique for accelerated terrain rendering we present a new rendering technique, termed lod-sprite rendering, which uses a combination of a level-of-detail (lod) representation of the scene together with reusing image sprites (previously rendered images). our primary application is accelerating terrain rendering. the lod-sprite technique renders an initial frame using a high-resolution model of the scene geometry. it renders subsequent frames with a much lower-resolution model of the scene geometry and texture-maps each polygon with the image sprite from the initial high-resolution frame.
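in outline, the frame loop of such a scheme might be sketched as follows (a hypothetical python sketch with assumed renderer interfaces; the error measure is the view-divergence test described next).

# hypothetical outline of the lod-sprite frame loop: render a keyframe from
# the high-resolution model, then reuse it as a sprite over the
# low-resolution model until the view has diverged too far.

def lod_sprite_loop(views, render_high, render_low_with_sprite,
                    view_error, threshold):
    """render_high, render_low_with_sprite and view_error are assumed
    interfaces supplied by the renderer."""
    frames = []
    key_view, key_sprite = None, None
    for view in views:
        if key_sprite is None or view_error(view, key_view) > threshold:
            key_view = view
            key_sprite = render_high(view)          # expensive keyframe
            frames.append(key_sprite)
        else:
            frames.append(render_low_with_sprite(view, key_sprite))
    return frames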
as it renders these subsequent frames the technique measures the error associated with the divergence of the view position from the position where the initial frame was rendered. once this error exceeds a user-defined threshold, the technique re-renders the scene from the high-resolution model. we have efficiently implemented the lod-sprite technique with texture-mapping graphics hardware. although to date we have only applied lod-sprite to terrain rendering, it could easily be extended to other applications. we feel lod-sprite holds particular promise for real-time rendering systems. baoquan chen j. edward swan eddy kuo arie kaufman consistent updates in dual representation systems srinivas raghothama vadim shapiro genetic algorithms with cluster analysis for production simulation robert entriken siegfried vössner beam breaker scott petill geometric modeling technology the geometric model is an essential element of the design description. it is an abstract representation of the form and size of the object that serves as a medium of communication. the geometric model must hold sufficient information to answer all questions about form and size. the traditional engineering drawing was developed for and has served this function. though highly developed over a long period, the technique suffers from the fact that much of the information is implied and many times is ambiguous. it takes a trained human to interpret a drawing; a computer cannot. various approaches have been taken in design modeling, and their capabilities and limitations should be understood before approaches are selected or rejected. the wire frame, surface, and volume approaches can form a complementary system of modeling capabilities rather than replace each other. the need for multiple representations appears when deeper consideration is given to specific applications. william luts a hypercube ray-tracer we describe a hypercube ray-tracing program for rendering computer graphics. for small models, which fit in the memory of a single processor, the ray-tracer uses a scattered decomposition of pixels to balance the load, and achieves a very high efficiency. the more interesting case of large models, which cannot be stored in a single processor, requires a decomposition of the model data as well as the pixels. we present algorithms for constructing a decomposition based upon information about the frequency with which different elements of the model are accessed. the resulting decomposition is approximately optimized to minimize communication and achieve load balance. j. salmon j. goldsmith gaze-directed volume rendering marc levoy ross whitaker multiresolution mesh morphing aaron w. f. lee david dobkin wim sweldens peter schröder simulation analysis of a collisionless multiple access protocol for a wavelength division multiplexed star-coupled configuration patrick w. dowd kalyani bogineni observations on comparing digital systems synthesis techniques previously, much research effort has been aimed at developing automatic design methods for the higher levels of system design. this work has centered around developing methods of representation, constraint specification, and automatic design. specifically, methods of representing systems at the behavioral and functional block levels were developed and implemented. around these representations, automatic design methods have been developed. this paper describes the previous research efforts, highlights their results, and makes observations on the comparison of these design methods.
donald e. thomas a metric time-point and duration-based temporal model federico a. barber creating reusable visualizations with the relational visualization notation matthew c. humphrey le bestiaire julien delmotte bruno follet should we leverage natural-language knowledge? an analysis of user errors in a natural-language-style programming language amy bruckman elizabeth edwards bidirectional reflection functions from surface bump maps brian cabral nelson max rebecca springmeyer information filtering: the computation of similarities in large corpora of legal texts erich schweighofer werner winiwarter dieter merkl the flesch index: an easily programmable readability analysis algorithm this paper is an exposition of an algorithm for text analysis that can be of value to writers and documentalists. the simplicity of this algorithm allows it to be easily programmed on most computer systems. the author has successfully implemented this test as a function within a text editing system written in rpg ii. included in this paper is a sample program written for the vax 11/780 in pl/i. in 1949 dr. rudolph flesch published a book titled "the art of readable writing." in this book, he described a manual method of reading ease analysis. this method was to analyze text samples of about 100 words. each sample is assigned a readability index based upon the average number of syllables per word and the average number of words per sentence. this flesch index is designed so that most scores range from 0 to 100. only college graduates are supposed to follow prose in the 0 - 30 range. scores of 50 - 60 are high-school level and 90 - 100 should be readable by fourth graders. though crude, since it is designed simply to reward short words and sentences, the index is useful. it gives a basic, objective idea of how hard prose is to wade through. this test has been used by some state insurance commissions to enforce the readability of policies. flesch's algorithm was automated in the early 1970s by the service research group of the general motors corporation. the program, called gm-star (general motors simple test approach for readability), was used so that shop manuals could be made more readable. gm-star was originally written in basic. the key to this program is a very simple algorithm to count the number of syllables in a word. in general the text analysis portion of the program uses the following rules: periods, exclamation points, question marks, colons and semi-colons count as end-of-sentence marks. each group of continuous non-blank characters counts as a word. each vowel (a, e, i, o, u, y) in a word counts as one syllable subject to the following sub-rules: ignore final -es, -ed, and -e (except for -le); words of three letters or less count as one syllable; consecutive vowels count as one syllable. although there are many exceptions to these rules, the method works in a remarkable number of cases. the flesch index (f) for a given text sample is calculated from three statistics: the total number of sentences (n), the total number of words (w), and the total number of syllables (l), according to the following formula: f = 206.835 - 1.015 × (w/n) - 84.6 × (l/w). the grade level equivalent (g) of the flesch index is given by the following table: if -50 ≤ f < 50, then g = (140 - f)/6.66; if 50 ≤ f < 60, then g = (93 - f)/3.33; if 60 ≤ f < 70, then g = (110 - f)/5.0; if 70 ≤ f, then g = (150 - f)/10.0. a pl/i program that implements this algorithm is listed below along with sample output.
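the pl/i listing itself is not reproduced in this abstract; purely as an illustration of the rules and formula stated above, the following python sketch (my own simplification, not the published gm-star or pl/i code) computes the index and the grade level.

VOWELS = set("AEIOUY")
END_MARKS = set(".!?:;")

def count_syllables(word):
    # words of three letters or less count as one syllable
    if len(word) <= 3:
        return 1
    # ignore final -es, -ed and -e (but keep -le)
    if word.endswith(("ES", "ED")):
        word = word[:-2]
    elif word.endswith("E") and not word.endswith("LE"):
        word = word[:-1]
    # each run of consecutive vowels counts as one syllable
    count, in_vowel_run = 0, False
    for ch in word:
        if ch in VOWELS:
            if not in_vowel_run:
                count += 1
            in_vowel_run = True
        else:
            in_vowel_run = False
    return max(count, 1)

def flesch_index(text):
    # assumes upper-case input, as the original program does
    tokens = text.split()
    n = sum(1 for t in tokens if t[-1] in END_MARKS) or 1   # sentences
    words = [t.strip(".,!?:;\"'()") for t in tokens]
    words = [w for w in words if w]
    if not words:
        return 0.0
    w = len(words)                                          # words
    l = sum(count_syllables(wd) for wd in words)            # syllables
    return 206.835 - 1.015 * (w / n) - 84.6 * (l / w)

def grade_level(f):
    # grade-level table from the abstract (lower bound of -50 omitted)
    if f < 50:
        return (140 - f) / 6.66
    if f < 60:
        return (93 - f) / 3.33
    if f < 70:
        return (110 - f) / 5.0
    return (150 - f) / 10.0

for instance, flesch_index("THE CAT SAT ON THE MAT.") evaluates to roughly 116, reflecting one short sentence of one-syllable words.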
for simplicity, this program assumes all letters are in upper case. processing text with lower case letters can be accomplished by either modifying the program to test for lower case as well as upper case, or by preprocessing the text sample to translate all letters to upper case. there are a multitude of other refinements and amenities that can be added to the basic analysis. among these are: noting which characters are considered sentence terminators; ignoring periods that are used for abbreviations rather than sentence terminators; ignoring word-connecting hyphens in compound words; noting which character groups should probably be spelled out, such as numerals and dollar amounts; and sharpening the syllable counting routine to detect exceptional cases. john talburt a tutorial on design and analysis of simulation experiments w. david kelton visual nonlinearity characterized and goal driven image segmentation approach tianxu zhang paint by numbers: abstract image representations paul haeberli applying devs methodology to continuous systems modeling a. m. fayek d. van welden g. vansteenkiste output analysis research (panel discussion): why bother? john m. charnes john s. carson merriel dewsnup andrew f. seila jeffrey d. tew randall p. sadowski igks (abstract only): an integrated image processing and graphics environment this paper describes a part of a large project at the university of lowell to develop an integrated environment for the unified treatment of image processing and graphics with interfaces to pictorial databases. the overall system consists of a knowledge-based user interface containing three orthogonal kernels: an image processing and vision kernel system (iks), the graphical kernel system (gks), and several pictorial databases. one can think of a kernel as a set of procedures, tools and utilities that are coherent, orthogonal and fundamental. in our environment, all kernels are closely linked. there is a very strong link between the image/vision kernel and the graphics one. this is specifically referred to as the image and graphics kernel system (igks) and this is the focus of this paper. this system is interfaced to the user through a knowledge-base and rule-based system (referred to symbolically as ). george grinstein functional geometry a method of describing pictures is introduced. the equations, which describe the appearance of a picture, also form a purely functional program which can be used to compute the set of lines necessary to plot the picture on a graphical device. the method is illustrated by using it to describe the structure of one of the woodcuts of maurits escher. peter henderson quick simulation of atm buffers with on-off multiclass markov fluid sources g. kesidis j. walrand a software mechanism to enhance simulation model validity gaynor legge dana l. wyatt feature-based volume metamorphosis apostolos lerios chase d. garfinkle marc levoy a study of automatic rule generation and adaption h. h. zhou using motives and artificial emotions for long-term activity of an autonomous robot to operate over a long period of time in the real world, autonomous mobile robots must have the capability of recharging themselves whenever necessary. in addition to being able to find and dock into a charging station, robots must be able to decide when and for how long to recharge. this decision is influenced by the energetic capacity of their batteries and the contingencies of their environments.
to deal with this temporality issue, and drawing on research evidence from the field of psychology, this study investigates the use of motives and artificial emotions to regulate the recharging need of autonomous robots. françois michaud jonathan audet bankruptcy case law: a hybrid ir-cbr approach we briefly present a description of on-going work in applying a combined ir-cbr approach to legal information processing in bankruptcy law. our underlying model is based on how lawyers do legal research as part of the legal reasoning process. in particular we suggest an ir technique derived from our assumption and based on actual statute text rather than case documents' text. mohamed t. elhadi tibor vamos voicexml for web-based distributed conversational applications bruce lucas a multi-agent system for meting out influence in an intelligent environment m. v. nagendra prasad joseph f. mccarthy hardware antialiasing of lines and polygons walter gish allen tanner a collaborative legal information retrieval system using dynamic logic programming we propose a framework for a collaborative legal information retrieval system based on dynamic logic programming. in order to be collaborative our system keeps the context of the interaction and tries to infer the user intentions. each event is represented by logic programming facts which are used to dynamically update the previous user model. user intentions are inferred from this new model and are the basis of the interaction between the system and the legal texts knowledge base. as legal texts knowledge base we are using the documents from the portuguese attorney general (procuradoria geral da republica). in this paper we will show some examples of the obtained collaborative behaviour. paulo quaresma irene pimenta rodrigues application of enriched deontic legal relations: federal rules of civil procedure rule 7(a), pleadings deontic legal relations are quantified obligation or permission statements in the logic of legal relations. such legal relations stem from hohfeld's right-duty-privilege-no-right set of fundamental legal conceptions. the four hohfeldian roots are enriched by more detailed specification in terms of quantifier, deontic, action and temporal logics. the 1600 enriched deontic legal relations defined in the logic of legal relations are both more specific and more discriminating than their hohfeldian predecessors. a brief summary is presented of the 800 active enriched deontic legal relations along with other parts of the legal relations language involved in the analysis of the important rule 7(a) of the federal rules of civil procedure dealing with pleadings. layman e. allen charles s. saxon a simple computational model for nonmonotonic and adversarial legal reasoning in many commonsense contexts only incoherent and conflicting information is available. in such contexts reasonable conclusions must be derived from inconsistent sets of premises. this is especially the case in legal reasoning: legal norms can be issued by different authorities, at different times, to reach incompatible socio-political objectives, and the meaning of those norms can be semantically indeterminate. logical deduction alone is insufficient to derive justified conclusions out of inconsistent legal premises, since in the most popular logical systems (such as classical or intuitionistic logic) everything can be deduced from any contradiction. nevertheless, much research now underway shows that formal methods can be developed for reasoning with conflicting information.
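one way to picture the role an ordering over the premises plays in such methods is the following toy python sketch (mine, not the paper's prolog interpreter): two rules are applicable and support opposite conclusions, and the conflict is resolved in favour of the higher-ranked rule.

# a toy prioritized-default reasoner (hypothetical example; the paper itself
# presents a prolog interpreter and a richer notion of argument).

from dataclasses import dataclass

@dataclass
class Rule:
    rank: int            # higher rank = higher priority in the ordering
    premises: frozenset  # facts that must hold for the rule to apply
    conclusion: str      # literal concluded, e.g. "liable" or "not liable"

def conclude(facts, rules):
    applicable = [r for r in rules if r.premises <= facts]
    if not applicable:
        return None
    # the conflict between applicable rules is resolved in favour of the
    # highest-ranked one
    best = max(applicable, key=lambda r: r.rank)
    return best.conclusion

rules = [
    Rule(1, frozenset({"contract", "breach"}), "liable"),
    Rule(2, frozenset({"contract", "breach", "force_majeure"}), "not liable"),
]
print(conclude({"contract", "breach"}, rules))                    # liable
print(conclude({"contract", "breach", "force_majeure"}, rules))   # not liable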
the possibility of obtaining justified conclusions from an inconsistent set of premises increases when an ordering is defined over that set, since the ordering of the premises can be translated into an ordering of the competing arguments. this fact is particularly relevant for legal reasoning, since lawyers effectively solve normative conflicts by using ordering relations. in the following pages, a model for reasoning with ordered defaults, interpreted as unidirectional inference rules, is proposed: a language for representing (possibly) contradictory rules is introduced, a notion of argument is defined, and types of arguments are distinguished. a simple interpreter in prolog able to develop those arguments is also illustrated. finally, the significance of the proposed model (and, more generally, of the acceptance of inconsistency) for the formal analysis of legal systems is discussed. giovanni sartor learning and generalization thomas cover optical printing in computer animation the optical printer can be considered as an optical analog computer, which can perform geometric transformations and simple arithmetic operations on pictures very efficiently. the principles of operation of the printer are explained, and many of its applications to computer animation are listed and discussed briefly. two techniques are discussed in detail: the use of high contrast masks to suppress the bright spots where two lines of different colors cross, and the use of continuous tone masks and multiple exposures to create realistic transparency at low cost. nelson max john blunden the pessimism behind optimistic simulation in this paper we make an analogy between the time that storage must be maintained in an optimistic simulation and the blocking time in a conservative simulation. by exploring this analogy, we design two new global virtual time (gvt) protocols for time warp systems. the first protocol is based on null message clock advancement in conservative approaches. our main contribution is a new protocol inspired by misra's circulating marker scheme for deadlock recovery. it is simple enough to be implemented in hardware, takes no overhead in the normal path, can be made to work over non-fifo links, and its overhead can be dynamically tuned based on computational load. george varghese roger chamberlain william e. weihl creating complex actors with ease paul scerri nancy e. reed perspectives on simulation using gpss thomas j. schriber conference preview: oopsla 99: 14th annual conference on object-oriented programming systems, languages, and applications jennifer bruer a 3-dimensional representation for fast rendering of complex scenes hierarchical representations of 3-dimensional objects are both time and space efficient. they typically consist of trees whose branches represent bounding volumes and whose terminal nodes represent primitive object elements (usually polygons). this paper describes a method whereby the object space is represented entirely by a hierarchical data structure consisting of bounding volumes, with no other form of representation. this homogeneity allows the visible surface rendering to be performed simply and efficiently. the bounding volumes selected for this algorithm are parallelepipeds oriented to minimize their size. with this representation, any surface can be rendered since in the limit the bounding volumes make up a point representation of the object.
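as a purely illustrative sketch of the kind of search such a hierarchy supports (my own simplification in python, using axis-aligned boxes rather than the paper's oriented parallelepipeds):

# hypothetical sketch: visibility as a search through a bounding-volume tree.
# leaves are treated as the hit primitives themselves; inv_dir is the
# componentwise reciprocal of the ray direction (assumed nonzero).

import math

class Node:
    def __init__(self, lo, hi, children=None):
        self.lo, self.hi = lo, hi          # box corners, e.g. (x, y, z) tuples
        self.children = children or []     # empty list => terminal node

def ray_box(origin, inv_dir, lo, hi):
    # slab test: returns entry distance t, or None if the ray misses the box
    tmin, tmax = 0.0, math.inf
    for o, inv, a, b in zip(origin, inv_dir, lo, hi):
        t1, t2 = (a - o) * inv, (b - o) * inv
        tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
        if tmin > tmax:
            return None
    return tmin

def nearest_hit(node, origin, inv_dir):
    t = ray_box(origin, inv_dir, node.lo, node.hi)
    if t is None:
        return None, math.inf
    if not node.children:                  # terminal bounding volume: report it
        return node, t
    best, best_t = None, math.inf
    for child in node.children:            # descend only into boxes the ray enters
        hit, ht = nearest_hit(child, origin, inv_dir)
        if ht < best_t:
            best, best_t = hit, ht
    return best, best_t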
the advantage is that the visibility calculations consist only of a search through the data structure to determine the correspondence between terminal level bounding volumes and the current pixel. for ray tracing algorithms, this means that a simplified operation will produce the point of intersection of each ray with the bounding volumes. memory requirements are minimized by expanding or fetching the lower levels of the hierarchy only when required. because the viewing process has a single operation and primitive type, the software or hardware chosen to implement the search can be highly optimized for very fast execution. steven m. rubin turner whitted image composition methods for sort-last polygon rendering on 2-d mesh architectures tong-yee lee c. s. raghavendra j. n. nicholas super-criticality revisited critical path analysis has been suggested as a technique for establishing a lower bound on the completion times of parallel discrete event simulations. a protocol is super-critical if there is at least one simulation that can complete in less than the critical path time using that protocol. previous studies have shown that several practical protocols are super-critical while others are not. we present a sufficient condition to demonstrate that a protocol is super-critical. it has been claimed that super-criticality requires independence of one or more messages (states) from events in the logical past of those messages (states). we present an example which contradicts this claim and examine the implications of this contradiction for lower bounds. sudhir srinivasan paul f. reynolds real time design and animation of fractal plants and trees peter e. oppenheimer the purdue university network-computing hubs: running unmodified simulation tools via the www this paper describes the web interface management infrastructure of a functioning network-computing system (punch) that allows users to run unmodified simulation packages at geographically dispersed sites. the system currently contains more than fifty university and commercial simulation tools, and has been used to carry out more than two hundred thousand simulations via the world wide web. dynamically-constructed virtual urls allow the web interface management infrastructure to support the semantics associated with an interface to computing services without requiring any changes to web browsers or www protocols. virtual urls also facilitate customizable control of access to networked resources. simulation tools with text-based interfaces are supported via dynamically-generated, virtual interfaces, whereas tools with graphical interfaces are supported by leveraging available remote display-management technologies. virtual interface generation and interactivity emulation are handled by a programmable state machine in conjunction with a mechanism to embed variables and objects within standard html. nirav h. kapadia jose a. b. fortes mark s. lundstrom simulation model evolution: a strategic tool for model planning todd hunter levels of detail & polygonal simplification mike krus patrick bourdot françoise guisnel gullaume thibault graphical description, control logic development, simulation development, and animated display of material handling systems we present a brief rationale for the material handling system design methodology being developed.
during the presentation a film will be shown illustrating the development of a material handling system design using interactive computer graphics and a subsequent animated display of a simulation of the system designed. william l. maxwell eugene e. fellows image rendering by adaptive refinement larry bergman henry fuchs eric grant susan spach xml gets down to business aaron weiss more ia needed in ai: interpretation assistance for coping with the problem of multiple structural interpretations layman e. allen charles s. saxon robust gait generation for hexapodal robot locomotion david p. barnes javan b. wardle agnostic learning of geometric patterns (extended abstract) sally a. goldman stephen s. kwek stephen d. scott modeling and simulating complex spatial dynamic systems: a framework for application in environmental analysis james d. clark recent developments in expert systems the most important development in expert systems is the introduction and proliferation of frameworks, or shells, that allow rapid prototyping, easy expansion, and continued maintenance. several shells are now in commercial use and have been used to build expert systems of recognized value. the state of the art and several new lines of development will be discussed. some of the developments involve fundamental research in ai on knowledge representation and inference; others involve making the shell environments themselves more intelligent. b buchanan volume illustration: non-photorealistic rendering of volume models david ebert penny rheingans bounds and error estimates for radiosity we present a method for determining a posteriori bounds and estimates for local and total errors in radiosity solutions. the ability to obtain bounds and estimates for the total error is crucial for reliably judging the acceptability of a solution. realistic estimates of the local error improve the efficiency of adaptive radiosity algorithms, such as hierarchical radiosity, by indicating where adaptive refinement is necessary. first, we describe a hierarchical radiosity algorithm that computes conservative lower and upper bounds on the exact radiosity function, as well as on the approximate solution. these bounds account for the propagation of errors due to interreflections, and provide a conservative upper bound on the error. we also describe a non-conservative version of the same algorithm that is capable of computing tighter bounds, from which more realistic error estimates can be obtained. finally, we derive an expression for the effect of a particular interaction on the total error. this yields a new error-driven refinement strategy for hierarchical radiosity, which is shown to be superior to brightness-weighted refinement. dani lischinski brian smits donald p. greenberg structured models and dynamic systems analysis: the integration of the idef0/idef3 modeling methods and discrete event simulation larry whitman brian huff adrien presley robotics f. l. lewis m. fitzgerald k. liu acquisition of semantic patterns from a natural corpus of texts p. velardi m. t. pazienza s. magrini knowledge base revision through exception-driven discovery and learning seok won lee gheorghe tecuci an approach to ranking and selection for multiple performance measures douglas j. morrice john butler peter w. mullarkey virtual reality and computer graphics programming bob c.
liang filtering high quality text for display on raster scan devices recently several investigators have studied the problem of displaying text characters on grey level raster scan displays. despite arguments suggesting that grey level displays are equivalent to very high resolution bitmaps, the performance of grey level displays has been disappointing. this paper will show that much of the problem can be traced to inappropriate antialiasing procedures. instead of the classical (sin x)/x filter, the situation calls for a filter with characteristics matched both to the nature of display on crts and to the human visual system. we give examples to illustrate the problems of the existing methods and the advantages of the new methods. although the techniques are described in terms of text, the results have application to the general antialiasing problem---at least in theory if not in practice. j. kajiya m. ullner a double step algorithm for rendering parabolic curves euisuk park larry f. hodges a distributed architecture for document management c. reid elementary shape construction on a microcomputer (abstract only) three dimensional shapes are translated to the two dimensional screen as a series of vectors which are used to construct a surface grid. as a guideline, major sections of the basic code are covered. some program options and alternate coordinate systems are discussed. henry rosche shading models for point and linear sources the degree of realism of the shaded image of a three-dimensional scene depends on the successful simulation of shading effects. the shading model has two main ingredients: properties of the surface and properties of the illumination falling on it. most previous work has concentrated on the former rather than the latter. this paper presents an improved method for generating scenes illuminated by point and linear light sources. the procedure can include intensity distributions for point light sources and output both umbrae and penumbrae for linear light sources, assuming the environment is composed of convex polyhedra. this paper generalizes crow's procedure for computing shadow volumes; applying it to the end points of the linear source results in an easy determination of the regions of penumbrae and umbrae on the face prior to the shading calculation. this paper also discusses a method for displaying illuminance distribution on a shaded image by using colored isolux contours. tomoyuki nishita isao okamura eihachiro nakamae ghostcatching bill t. jones paul kaiser shelley eshkar michael girard susan amkraut interval time clock implementation for qualitative event graphs ricki g. ingalls douglas j. morrice andrew b. whinston controlling physical agents through reactive logic programming daniel shapiro pat langley spot noise texture synthesis for data visualization jarke j. van wijk displacement mapping using flow fields existing displacement mapping techniques operate only in directions normal to the surface, a restriction which limits the richness of the set of representable objects. this work removes that restriction by allowing displacements to be defined along curved trajectories of flow fields. the main contribution of this generalized technique, which will be referred to as flow mapping, is an alternative model of offset surfaces that extends the class of shapes that can be modelled using displacement maps. the paper also discusses methods for synthesizing homogeneous displacement textures.
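the generalization can be pictured with a small python sketch (my own simplification, not the paper's method): instead of stepping along the surface normal, a point is offset by integrating a short distance along a flow field.

# hypothetical sketch: offsetting a surface point along a flow field
# instead of along the surface normal (simple euler integration).

import math

def displace_along_flow(point, flow, amount, steps=16):
    """move `point` a total distance `amount` along the trajectory of `flow`,
    where flow(p) returns a direction at p (an assumed interface)."""
    x, y, z = point
    h = amount / steps
    for _ in range(steps):
        dx, dy, dz = flow((x, y, z))
        x, y, z = x + h * dx, y + h * dy, z + h * dz
    return (x, y, z)

# example flow: a gentle swirl around the z-axis combined with outward motion
def swirl(p):
    x, y, _ = p
    r = math.hypot(x, y) or 1.0
    return (x / r - 0.5 * y / r, y / r + 0.5 * x / r, 0.0)

print(displace_along_flow((1.0, 0.0, 0.0), swirl, amount=0.2))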
finally, it introduces the concept of a texture atlas for efficient sampling and reconstruction of distorted textures. hans køhling pedersen inference of a 3-d object from a random partial 2-d projection pamela hsu evangelos triantaphyllou k-rep system overview eric mays robert dionne robert weida extracting lexical knowledge from dictionary text b. m. slator formation of clusters and resolution of ordinal attributes in id3 classification trees chaman l. sabharwal keith r. hacke daniel c. st. clair a physically-based night sky model this paper presents a physically- based model of the night sky for realistic image synthesis. we model both the direct appearance of the night sky and the illumination coming from the moon, the stars, the zodiacal light, and the atmosphere. to accurately predict the appearance of night scenes we use physically-based astronomical data, both for position and radiometry. the moon is simulated as a geometric model illuminated by the sun, using recently measured elevation and albedo maps, as well as a specialized brdf. for visible stars, we include the position, magnitude, and temperature of the star, while for the milky way and other nebulae we use a processed photograph. zodiacal light due to scattering in the dust covering the solar system, galactic light, and airglow due to light emission of the atmosphere are simulated from measured data. we couple these components with an accurate simulation of the atmosphere. to demonstrate our model, we show a variety of night scenes rendered with a monte carlo ray tracer. henrik wann jensen fredo durand julie dorsey michael m. stark peter shirley simon premoze entertainment driven collaboration michael harris a theory of the learnable humans appear to be able to learn new concepts without needing to be programmed explicitly in any conventional sense. in this paper we regard learning as the phenomenon of knowledge acquisition in the absence of explicit programming. we give a precise methodology for studying this phenomenon from a computational viewpoint. it consists of choosing an appropriate information gathering mechanism, the learning protocol, and exploring the class of concepts that can be learnt using it in a reasonable (polynomial) number of steps. we find that inherent algorithmic complexity appears to set serious limits to the range of concepts that can be so learnt. the methodology and results suggest concrete principles for designing realistic learning systems. l. g. valiant anti-aliasing in topological color spaces kenneth turkowski detecting feature interactions from accuracies of random feature subsets thomas r. ioerger n-group classification using genetic algorithms aaron h. konstam the causal markov condition, fact or artifact? john f. lemmer multimedia output devices and techniques colin ware the impact of transients on simulation variance estimators daniel h. ockerman david goldsman in defense of discrete-event simulation ernest h. page fast visualization methods for comparing dynamics: a case study in combustion kay a. robbins michael gorman animation of output applied to manufacturing capacity analysis a general purpose technique for animating simulation output is described. the technique can be implemented using inexpensive hardware and is applicable to many types of simulation models. an example which uses the procedure for capacity analysis in a job shop is presented. d. j. medeiros john t. 
larkin irex: an expert system for the selection of industrial robots and its implementation in two environments this paper describes the design and implementation of a prototype robot selection expert system based on an outside/in approach. the expert system comprises three basic parts. the first part examines several proposed applications where automation is desired and chooses the one best suited for automation using a robot. the second part uses rules to select the configuration, drive, programming type, and playback type. the third section of the expert system examines a data base of robots and selects the best five robots for the application based on the user's specifications for that job. the paper describes its implementation in kee and the expert system's transfer to the keystone environment on a ps/2 model 80. b. a. gardone r. k. ragade a new radiosity approach by procedural refinements for realistic image synthesis min-zhi shao qun-sheng peng you-dong liang analysis of a heuristic for full trie minimization a trie is a distributed-key search tree in which records from a file correspond to leaves in the tree. retrieval consists of following a path from the root to a leaf, where the choice of edge at each node is determined by attribute values of the key. for full tries, those in which all leaves lie at the same depth, the problem of finding an ordering of attributes which yields a minimum size trie is np-complete. this paper considers a "greedy" heuristic for constructing low-cost tries. it presents simulation experiments which show that the greedy method tends to produce tries with small size, and analysis leading to a worst case bound on approximations produced by the heuristic. it also shows a class of files for which the greedy method may perform badly, producing tries of high cost. douglas comer an inference model for inheritance hierarchies with exceptions we present a definition of property inheritance with exceptions in terms of a formal model of default reasoning called nonconclusive reasoning. our model resolves both types of ambiguity encountered in inheritance with exceptions and avoids the need for explicit exceptions without resorting to extralogical mechanisms to derive them. k whitebread the effects of batching on the power of the test for frequency domain methodology douglas j. morrice artificial intelligence: introduction frank klassner modeling surfaces of arbitrary topology using manifolds cindy m. grimm john f. hughes when the world plague was stopped by a digital artist thomas g. west issues in simulation model integration, reusability, and adaptability (panel session) john f. heafner ron huhn dennis r. mensh richard nance herbert sallin deep canvas in disney's tarzan eric daniels where to look? automating attending behaviors of virtual human characters sonu chopra-khullar norman i. badler introduction to siman/cinema david m. profozich david t. sturrock the phantom spa method: an inventory problem revisited felisa j. vázquez-abad manuel cepeda-juneman mixing translucent polygons with volumes we present an algorithm which renders opaque and/or translucent polygons embedded within volumetric data. the processing occurs such that all objects are composited in the correct order, by rendering thin slabs of the translucent polygons between volume slices using slice-order volume rendering. we implemented our algorithm with opengl on current general-purpose graphics systems.
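in outline, the per-pixel ordering can be sketched as follows (a hypothetical python illustration of back-to-front compositing only, not the authors' opengl implementation):

# hypothetical sketch of slice-order compositing: volume slices and the
# translucent-polygon fragments falling between consecutive slices are
# blended back to front with the usual "over" operator, per pixel.

def over(src, dst):
    # src, dst are (r, g, b, a) with premultiplied colour; src lies in front
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    k = 1.0 - sa
    return (sr + k * dr, sg + k * dg, sb + k * db, sa + k * da)

def composite_pixel(volume_slices, polygon_slabs):
    """volume_slices: rgba samples for one pixel, ordered back to front.
    polygon_slabs[i]: rgba fragments of translucent polygons lying in the
    slab just in front of slice i (same length as volume_slices)."""
    colour = (0.0, 0.0, 0.0, 0.0)
    for i, slice_rgba in enumerate(volume_slices):
        colour = over(slice_rgba, colour)      # nearer slice goes over result
        for frag in polygon_slabs[i]:
            colour = over(frag, colour)        # then the polygons in its slab
    return colour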
we discuss our system implementation, speed and image quality, as well as the renderings of several mixed scenes. kevin kreeger arie kaufman speech recognition as a computer graphics input technique (panel session) richard rabin interactive graphics systems typically require intense "hands busy/eyes busy and brains busy" activity on the part of the system user/operator. voice input by means of automatic speech recognition equipment, offers major potential for improving user/operator productivity. it is the only input technique which does not require the direct use of hands and eyes. voice input can replace or complement keyboards, function keys, tablets and other types of input devices typically employed for entering commands, alpha and numeric data. alan r. strass effective human interfaces are an essential element of plant information systems and computer integrated processes, such as graphics. in the past, data gathering choices in the factory have been generally dominated by clipboards and travelling punched card decks. similarly, complicated keystroke sequences have often been required to evoke appropriate computer controlled functionality. today, a number of advanced human interface techniques can be used to improve both source data capture and the selection of appropriate computer controlled operations. these techniques are becoming an integral part of many emerging on-line/real-time engineering and manufacturing applications. speech recognition, in particular, is emerging as an important interface technology. speech input can reduce the amount of attention the user has to spend on the mechanics of recording information of selecting functions and allows users to concentrate on their primary task. some of the benefits include: (1) reduced user training time, (2) increased worker productivity, (3) reduced secondary key input, and (4) improved timeliness and accuracy of information made available via voice. currently available speech recognition products have already been used to demonstrate these benefits. these advantages will only increase as the next generation of speech products deliver improved recognition performance. mark robillard there is an increasing awareness of the potential for improving operator productivity for graphic systems by the use of voice input. we have analyzed a number of applications in conjunction with potential users to determine how best to use this capability. initial conclusions are that voice input can be effective, providing that the capabilities of the speech recognition equipment utilized to input voices are matched to the requirements of the applications. key factors to be considered are vocabulary size and types, and the use of isolated words versus continuous speech utterances. sue schedler the addition of voice input to calma's gbsll system provides users with a fast, accurate means by which they can execute commands. a customized interstate electronic vrm system has been integrated into the hardware configuration of all microelectronics products, chips, sticks and cardsii. system software recognizes the input from the harmony vrm in the same manner as a keyboard or menu button input. all three input methods may be used separately or in conjunction with each other. all command inputs are buffered, so the user need never wait for a command to complete before entering the next. a voice file contains up to 50 words, and any number of files may be uploaded or downloaded by one user. 
both standard system commands or gplii programs can be executed by voice. on-line design productivity by menu input can be increased by 50% when the operator uses voice. operator training is unaffected by the addition of voice to the system. voice is trained as a separate module in the class. its greatest impact appears once the users know operation syntax and can communicate commands quickly. calma provides standard voice menu to be used by customers or the customer may modify the file as needed. most commonly used commands are put into the voice file leaving less frequently used commands for the onscreen menu. by implementing the combination of voice and menu, we can completely eliminate the need for keyboard input. operators initially are skeptical of the "bell and whistle" feature, but find after using voice input that they do not feel uncomfortable "talking" to the computer, and find it an enjoyable and productive tool. matthew peterson the use of voice recognition in a "mature" graphics application raises a number of issues that must be faced. "mature" refers specifically to the maturity of the user interface in the graphics application. commercial cad/cam systems provide such an application: they have been evolving for over ten years, they possess robust user interface features, and their features are subject to the test of the marketplace. use of voice recognition in such applications raises a number of challenges to the technology: (1) how does it compare to existing features (e.g., tablet menuing) for servicing typical user interface needs? (2) are there atypical or emerging needs for which it is particularly suitable? (3) how attractive is its price/productivity offering compared to alternative user investment strategies for increasing productivity? (4) in what directions should it be driven to better service the needs of commercial graphics applications? material on present and future industry needs, other user interface features, and industry price trends will be presented to help assess these issues. richard rabin alan r. strass mark robillard sue schedler matthew peterson an effective system development environment based on vhdl prototyping serafín olcoz luis entrena luis berrojo under construction wooksang chang introduction: agent technology alessio lomusico estimation of the sample size and coverage for guaranteed-coverage nonnormal tolerance intervals huifen chen tsu-kuang yang a system for algorithm animation a software environment is described which provides facilities at a variety of levels for "animating" algorithms: exposing properties of programs by displaying multiple dynamic views of the program and associated data structures. the system is operational on a network of graphics-based, personal workstations and has been used successfully in several applications for teaching and research in computer science and mathematics. in this paper, we outline the conceptual framework that we have developed for animating algorithms, describe the system that we have implemented, and give several examples drawn from the host of algorithms that we have animated. marc h. brown robert sedgewick image-based rendering of diffuse, specular and glossy surfaces from a single image in this paper, we present a new method to recover an approximation of the bidirectional reflectance distribution function (brdf) of the surfaces present in a real scene. this is done from a single photograph and a 3d geometric model of the scene. 
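in outline, the fitting loop described in the remainder of this abstract might look like the following python sketch (hypothetical function names and a global, per-scene simplification; not the authors' implementation):

# hypothetical sketch of the iterative reflectance fitting: start from a
# lambertian model and escalate to richer models only while the rendered
# image still disagrees with the photograph.

MODELS = ["lambertian", "specular", "anisotropic"]   # increasing complexity

def fit_reflectance(photo, scene, render, difference, threshold):
    """render(scene, model) and difference(a, b) are assumed interfaces."""
    model = MODELS[0]
    while True:
        synthetic = render(scene, model)
        if difference(photo, synthetic) <= threshold:
            return model                       # good enough: keep this model
        i = MODELS.index(model)
        if i + 1 == len(MODELS):
            return model                       # no richer model available
        model = MODELS[i + 1]                  # escalate to a more complex brdf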
the result is a full model of the reflectance properties of all surfaces, which can be rendered under novel illumination conditions with, for example, viewpoint modification and the addition of new synthetic objects. our technique produces a reflectance model using a small number of parameters. these parameters nevertheless approximate the brdf and allow the recovery of the photometric properties of diffuse, specular, isotropic or anisotropic textured objects. the input data are a geometric model of the scene including the light source positions and the camera properties, and a single image captured using this camera. our algorithm generates a new synthetic image using classic rendering techniques, and a lambertian hypothesis about the reflectance model of the surfaces. then, it iteratively compares the original image to the new one, and chooses a more complex reflectance model if the difference between the two images is greater than a user-defined threshold. we present several synthetic images that are compared to the original ones, and some possible applications in augmented reality. links: artificial intelligence and interactive entertainment robert st. amant r. michael young a low-cost memory architecture for pci-based interactive ray casting michael doggett michael meißner urs kanus conversational interfaces jennifer lai pyramid-based texture analysis/synthesis david j. heeger james r. bergen secrets of successful simulation studies averill m. law michael g. mccomas construction of vector field hierarchies we present a method for the hierarchical representation of vector fields. our approach is based on iterative refinement using clustering and principal component analysis. the input to our algorithm is a discrete set of points with associated vectors. the algorithm generates a top-down segmentation of the discrete field by splitting clusters of points. we measure the error of the various approximation levels by measuring the discrepancy between streamlines generated by the original discrete field and its approximations based on much smaller discrete data sets. our method assumes no particular structure of the field, nor does it require any topological connectivity information. it is possible to generate multiresolution representations of vector fields using this approach. bjoern heckel gunther weber bernd hamann kenneth i. joy distributed simulation with locality we describe moss, a small language of mobile distributed objects and system-wide references, uncommitted to any distributed simulation protocol, but which can be executed as a distributed conservative simulation with automatic deduction of lookahead. we show how the moss programmer can control the dynamic distribution and locality of simulation objects by simple means which provide natural modelling functions. preliminary results show how programmed locality can reduce communication costs in simulation. t. d. blanchard t. w. lake the delivery michael brunet christine arboit martin millette jean-philippe lafontaine algorithmic paradigms: examples in computational geometry ii n. adlai a. depano farinaz d. boudreaux philip katner brian li a heuristic cost estimation method for optimizing assignment of tasks to processors larry dunning sub ramakrishnan the prince of egypt: the red sea stacey pauly 3d position, attitude and shape input using video tracking of hands and lips recent developments in video-tracking allow the outlines of moving, natural objects in a video-camera input stream to be tracked live, at full video-rate.
previous systems have been available to do this for specially illuminated objects or for naturally illuminated but polyhedral objects. other systems have been able to track nonpolyhedral objects in motion, in some cases from live video, but following only centroids or key-points rather than tracking whole curves. the system described here can accurately track the curved silhouettes of moving non-polyhedral objects---for example hands, lips, legs, vehicles, and fruit---at frame-rate, without any special hardware beyond a desktop workstation, a video-camera and a framestore. the new algorithms are a synthesis of methods in deformable models, b-spline curve representation and control theory. this paper shows how such a facility can be used to turn parts of the body---for instance, hands and lips---into input devices. rigid motion of a hand can be used as a 3d mouse, with non-rigid gestures signalling a button press or the "lifting" of the mouse. both rigid and non-rigid motions of lips can be tracked independently and used as inputs, for example to animate a computer-generated face. andrew blake michael isard cake: an implemented hybrid knowledge representation and limited reasoning system charles rich characterizing plans as a set of constraints - the model - a framework for comparative analysis austin tate user-guided composition effects for art-based rendering michael a. kowalski john f. hughes cynthia beth rubin jun ohya commodity trading using an agent-based iterated double auction chris preist tweak scott petill free-form shape design using triangulated surfaces we present an approach to modeling with truly mutable yet completely controllable free-form surfaces of arbitrary topology. surfaces may be pinned down at points and along curves, cut up and smoothly welded back together, and faired and reshaped in the large. this style of control is formulated as a constrained shape optimization, with minimization of squared principal curvatures yielding graceful shapes that are free of the parameterization worries accompanying many patch-based approaches. triangulated point sets are used to approximate these smooth variational surfaces, bridging the gap between patch-based and particle-based representations. automatic refinement, mesh smoothing, and re-triangulation maintain a good computational mesh as the surface shape evolves, and give sample points and surface features much of the freedom to slide around in the surface that oriented particles enjoy. the resulting surface triangulations are constructed and maintained in real time. william welch andrew witkin analogical reasoning in planning and decision making analogical reasoning is a significant cognitive process that has heretofore not been modelled by artificial intelligence researchers in a computationally tractable manner. recently, several new approaches have shown significant promise using analogical processes for problem solving and learning. among the first and most comprehensive, the aries project demonstrated that analogical problem solving is a computationally tractable means of exploiting past experience to solve new problems of increasing complexity. two methods were developed: transformational analogy, in which solutions to related problems are incrementally transformed into the solution of a new problem, and derivational analogy, in which the problem solving strategies, rather than the resultant solutions, are transferred across like problems.
however, many interesting questions remain unanswered, including: how closely related should problems be prior to analogical transfer? how does one measure similarity? what is the relation of analogical transfer to human problem solving capabilities? what roles does analogy play in learning? should analogical reasoning be part of a decision analysis engine to bring past experience to bear in evaluating likely consequences of candidate decisions? should analogical reasoning be considered an integral aspect of any unified problem solving architecture striving to model human cognition? after a glimpse into the basic aries model, partial answers to these questions are discussed based on recent advances. some of these new directions are rooted in concrete computational and psychological results; others are of a more speculative nature. j carbonell top-down search for coordinating the hierarchical plans of multiple agents bradley j. clement edmund h. durfee on extending more parallelism to serial simulators david nicol philip heidelberger a review of point five: an analysis and modeling system george a. marcoulides laura d. marcoulides production for the long haul john c. donkin charles gibson ralph guggenheim edward kummer brad lewis jeff thingvold the view from the trenches: issues in the ontology of restricted domains we consider the impact of strict processing limitations on the design of an ontology for information extraction from newswire texts. we conclude that requiring online, real-time processing leads to a particular set of answers to fundamental issues of relation size and the choice of primary categories. we show how to satisfy these requirements by using relational categories directly during analysis and by using a reified lattice of their partial saturations that is annotated with the linguistic constructs used to realize them. david d. mcdonald murmures bruno follet audrey mahaut cyrille roux fairing of non-manifolds for visualization andreas hubeli markus gross interactive rendering with arbitrary brdfs using separable approximations jan kautz michael d. mccool distributed asynchronous winner-take-all structures and conflict detection mingqi deng flash vs. (simulated) flash: closing the simulation loop jeff gibson robert kunz david ofelt mark horowitz john hennessy mark heinrich tensor product decomposition and other algorithms for representations of large simple lie algebras andrej a. zolotykh anti-aliased line drawing using brush extrusion this algorithm draws lines on a gray-scale raster display by dragging a "brush" along the path of the line. the style of the line is determined by the properties of the brush. an anti-aliasing calculation is performed once for the brush itself and thereafter only a trivial additional operation is needed for each pixel through which the brush is dragged to yield an anti-aliased line. there are few constraints on the size, shape, and attributes of the brush. lines can be curved as well as straight, and it is possible to produce lines with a three-dimensional appearance. turner whitted a learning technique for legal document analysis more and more law is available freely on the internet. the growing complexity of legal rules and the necessary adaptation to user needs require better instruments than manual browsing and searching interfaces of the past. information reconnaissance of an unknown text corpus would provide a major help.
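to make the idea of a keyword-labelled document map concrete before the abstract continues, here is a toy python sketch of a self-organising map whose units are labelled with their heaviest terms (my own numpy illustration, not the authors' system):

# hypothetical toy sketch: train a small self-organising map on document
# term-frequency vectors and label each processing element with its
# highest-weighted vocabulary terms.

import numpy as np

def train_som(docs_tf, grid=(4, 4), iters=500, seed=0):
    """docs_tf: (n_docs, n_terms) numpy term-frequency matrix."""
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    w = rng.random((n_units, docs_tf.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for t in range(iters):
        lr = 0.5 * (1 - t / iters)                    # decaying learning rate
        radius = max(grid) / 2 * (1 - t / iters) + 0.5
        x = docs_tf[rng.integers(len(docs_tf))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
        d = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d / (2 * radius ** 2))            # neighbourhood function
        w += lr * h[:, None] * (x - w)
    return w, coords

def label_units(w, vocabulary, top=3):
    # label each processing element with its heaviest terms
    return [[vocabulary[k] for k in np.argsort(unit)[::-1][:top]] for unit in w]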
our research on neural networks concerns adaptive learning techniques for information reconnaissance in legal document archives. besides successful classification, self-organising maps offer a promising tool for this purpose. the neural processing elements can be labeled with the most appropriate keywords to describe the contents of the documents. applying the tools of refinement, our novel approach describes the most interesting features of the document. the user can choose properly between the various units in order to refine the next step of research. an integration of this tool of information reconnaissance into an intelligent agent is straightforward and will bring much benefit in a practical application. erich schweighofer dieter merkl an open architecture for emotion and behavior control of autonomous agents (poster) juan d. velásquez masahiro fujita hiroaki kitano quality assurance in cognizant simulative design tuncer i. ören an anti-aliasing technique for splatting j. edward swan klaus mueller torsten möller naeem shareef roger crawfis roni yagel a visible polygon reconstruction algorithm an algorithm for determining visible lines or visible surfaces in polygonal form, at object resolution, is presented. the original scene must consist of non-intersecting planar polygons. the procedure relies on image coherence, since the sampling is dependent on the complexity of the image. the reconstruction method is based on an elaborate data structure, which allows the polygonal output to be easily obtained. the polygonal output is useful for smooth shaded or textured images, as well as for the creation of shadows. stuart sechrest donald p. greenberg personal navigating agents h. l. wang w. k. shih c. n. hsu y. s. chen y. l. wang w. l. hsu a framework for realistic image synthesis donald p. greenberg kenneth e. torrance peter shirley james arvo eric lafortune james a. ferwerda bruce walter ben trumbore sumanta pattanaik sing-choong foo a tla+ specification for agent communication that enables proofs ioan alfred letia space diffusion: an improved parallel halftoning technique using space-filling curves yuefeng zhang robert e. webber to build a better mousetrap christopher leone artificial intelligence and simulation: an introduction artificial intelligence is one of the most rapidly developing fields in modern applied computer science. it has generated research and applications in several widely varying fields, both within and without the realm of topics considered to be 'core' ai components. at the same time, the science of simulation modelling is also advancing steadily, incorporating new technologies such as graphical output display and more recently, applications of artificial intelligence. this paper provides a brief description of the major component topics of artificial intelligence. several references are included to allow readers to examine topics of interest in more depth. the paper also discusses the application of intelligent systems to simulation modelling and describes, with references, some of the development in this area to date. r. greer lavery the magician and the rabbit s. c. hsu irene lee orkin: spy guy mark voelpel interactive effectivity control: design and applications richard ilson semi-automatic generation of transfer functions for direct volume rendering gordon kindlmann james w. durkin interactive volume rendering lee westover ray tracing deformed surfaces alan h. barr ray tracing complex scenes timothy l. kay james t.
kajiya efficient agnostic pac-learning with simple hypothesis we exhibit efficient algorithms for agnostic pac-learning with rectangles, unions of two rectangles, and unions of k intervals as hypotheses. these hypothesis classes are of some interest from the point of view of applied machine learning, because empirical studies show that hypotheses of this simple type (in just one or two of the attributes) provide good prediction rules for various real-world classification problems. in addition, optimal hypotheses of this type may provide valuable heuristic insight into the structure of a real world classification problem. the algorithms that are introduced in this paper make it feasible to compute optimal hypotheses of this type for a training set of several hundred examples. we also exhibit an approximation algorithm that can compute nearly optimal hypotheses for much larger datasets. wolfgang maass pen: a hierarchical document editor three terms in common usage in computerized text processing are text-editing, word- processing, and computer controlled typesetting. this paper deals with a fourth term, manuscript preparation, that has important intersections with the above three. a computerized manuscript preparation system is one that supports an author in the preparation of a manuscript. in what follows we deal with one such, the pen system, directed towards the preparation of manuscripts containing significant mathematical notation. todd allen robert nix alan perlis rendering curves and surfaces with hybrid subdivision and forward differencing ari rappoport pixel preference gretchen l. van meer john c. hansen harriet wall high speed high quality antialiased vector generation anthony c. barkans image contrast enhancement using fuzzy set theory faisel saeed k. m. george huizhu lu management and editing of distributed modular documentation when computers really come to be useful and self-documenting tools there are new kinds of management problems. they require some innovative solutions. this paper presents some possible methods for dealing with; a)writers in a "distributed" office b)managing the quality of documents that are assembled paragraph by paragraph by a user on-line c)implications of mixed-media computer presentations in real time, on-line. diana patterson virtualized reality: constructing time-varying virtual worlds from real world events peter rander p. j. narayanan takeo kanade easy-to-use simulation "packages" (panel): what can you really model? daniel t. brunner steven d. duket dean b. foussianes peter l. haigh andrew j. junga paul m. mellema software agents and intelligent object fusion techniques for modular software design are presented applying _software agents._ the conceptual designs are domain independent and make use of specific domain aspects applying _multiagent ai._ the stages of conceptualization, design and implementation are defined by new techniques coordinated by objects. software systems are designed by knowledge acquisition, specification, and multiagent implementations. multiagent implementations are defined for the modular designs, applying our recent projects which have lead to fault tolerant ai systems. a novel multi-kernel design technique is presented.communicating pairs of kernels, each defining a part of the system, are specified by object- coobject super-modules. treating objects as abstract data types and a two level programming approach to oop allows us to define pull-up abstractions to treat incompatible objects. 
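returning to the agnostic pac-learning abstract above: its simplest hypothesis class, a single interval over one real-valued attribute, can be fit by brute force over candidate endpoints. the sketch below is a minimal python illustration under that assumption (the quadratic scan and the helper names are mine, not the paper's algorithm, which also handles rectangles and unions of intervals):

```python
from bisect import bisect_left, bisect_right

def best_interval(points):
    """brute-force agnostic learner for a single interval hypothesis:
    predict 1 inside [a, b], 0 outside, minimizing empirical error.
    points: list of (x, label) with label in {0, 1}."""
    xs = sorted({x for x, _ in points})
    pos = sorted(x for x, y in points if y == 1)
    neg = sorted(x for x, y in points if y == 0)

    def errors(a, b):
        # positives falling outside [a, b] ...
        pos_in = bisect_right(pos, b) - bisect_left(pos, a)
        # ... plus negatives falling inside [a, b]
        neg_in = bisect_right(neg, b) - bisect_left(neg, a)
        return (len(pos) - pos_in) + neg_in

    best = min(((a, b) for i, a in enumerate(xs) for b in xs[i:]),
               key=lambda ab: errors(*ab))
    return best, errors(*best)

if __name__ == "__main__":
    data = [(0.5, 0), (1.0, 1), (1.2, 1), (2.0, 1), (2.5, 0), (3.0, 0)]
    print(best_interval(data))   # ((1.0, 2.0), 0): a zero-error interval exists here
```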
**keywords** abstract objects, intelligent syntax, mjoop, multi agent object level programming, multi kernel design with oop, software agents, intelligent objects. academic ucsb when at university cyrus f. nourani crogro: an interactive forest growth simulator crogro is an interactive computer model which simulates crown growth in trees as they compete with each other for available space and light. the model contains growth data for six species of trees. crogro was designed to be used as a teaching aid. it allows forestry students to observe in a few minutes a process which, in the real world, occurs over several decades. the system is programmed in apl and input/output is via a tektronix 4015 graphics display terminal. d. m. fellows g. l. sprague g. l. baskerville art-based rendering of fur, grass, and trees michael a. kowalski lee markosian j. d. northrup lubomir bourdev ronen barzel loring s. holden john f. hughes form classification using dp matching yungcheol byun yillbyung lee unifit ii: total support for simulation input modeling stephen g. vincent averill m. law shopbot economics jeffrey o. kephart amy r. greenwald making representations work morten kyng inside risks: linguistic risks peter g. neumann universal speech interfaces ronald rosenfeld dan olsen alex rudnicky modeling a genetic control program for the tobacco budworm with slam a simulation model is described and used to investigate the potential for controlling the tobacco budworm through hybrid sterilization. the results indicated that high levels of population control appear feasible with relatively large release ratios. however, driving population levels to zero may be ruled out by such factors as different egg laying capabilities between budworm and backcross moths and by in-migration of budworm moths. a general slam insect model is also described and developed which may be of value to other researchers studying population dynamics. the authors were well satisfied with the capabilities of slam. richard a. levins murl wayne parker state of researches on the problem solver argos-ii bernard fade jean-francois picardat what type of model: analytic, simulation or hybrid simulation/analytic the purpose of this panel session is to discuss when to use and the advantages and disadvantages of analytic, simulation, and hybrid simulation/analytic models. four panelists will speak to open the session and then a general discussion will take place among the panelists and the session attendees. robert g. sargent introduction to demos demos [1,2] is yet another discrete event simulation language hosted in simula. it was released in 1979 and is now running on ibm, dec, univac, and cdc hardwares amongst others. the paper contains a short introduction to simula's object and context features; an explanation of the process approach to simulation; a brief comparison of simula and gpss; and finally, the main features of demos are presented via an example. graham birtwistle generating computer animations with frame coherence in a distributed computing environment timothy a. davis synthetic texturing using digital filters aliasing artifacts are eliminated from computer generated images of textured polygons by equivalently filtering both the texture and the edges of the polygons. different filters can be easily compared because the weighting functions that define the shape of the filters are pre-computed and stored in lookup tables. 
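the synthetic-texturing abstract above hinges on filter weighting functions that are precomputed and stored in lookup tables, so that swapping filters costs only a table change. a hedged sketch of such a table in python (the kernel choices, table size, and function names are illustrative assumptions, not the paper's):

```python
import math

TABLE_SIZE = 256          # samples across the kernel's half-width (illustrative)
FILTER_RADIUS = 1.0       # kernel support in pixels

def make_filter_table(kernel):
    """precompute kernel weights indexed by distance in [0, FILTER_RADIUS]."""
    return [kernel(i / (TABLE_SIZE - 1) * FILTER_RADIUS) for i in range(TABLE_SIZE)]

def box(d):
    return 1.0 if d <= FILTER_RADIUS else 0.0

def gaussian(d, sigma=0.5):
    return math.exp(-d * d / (2.0 * sigma * sigma))

TABLES = {"box": make_filter_table(box), "gaussian": make_filter_table(gaussian)}

def weight(name, distance):
    """cheap per-sample lookup: one scale, one clamp, one index."""
    idx = min(int(abs(distance) / FILTER_RADIUS * (TABLE_SIZE - 1)), TABLE_SIZE - 1)
    return TABLES[name][idx]

# comparing filters then costs only a change of table name:
print(weight("box", 0.3), weight("gaussian", 0.3))
```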
a polygon subdivision algorithm removes the hidden surfaces so that the polygons are rendered sequentially to minimize accessing the texture definition files. an implementation of the texture rendering procedure is described. eliot a. feibush marc levoy robert l. cook selecting the checkpoint interval in time warp simulation in time warp parallel simulation, the state of each process must be saved (checkpointed) regularly in case a rollback is necessary. although most existing time warp implementations checkpoint after every state transition, this is not necessary, and the checkpoint interval is in reality a tuning parameter of the simulation. lin and lazowska[6] proposed a model to derive the optimal checkpoint interval by assuming that the rollback behavior of time warp is not affected by the frequency of checkpointing. an experimental study conducted by preiss et al.[10] indicates that the behavior of rollback is affected by the frequency of checkpointing in general, and that the lin-lazowska model may not reflect the real situations in general. this paper extends the lin-lazowska model to include the effect of the checkpoint interval on the rollback behavior. the relationship among the checkpoint interval, the rollback behavior, and the overhead associated with state saving and restoration is described. a checkpoint interval selection algorithm which quickly determines the optimal checkpoint interval during the execution of time warp simulation is proposed. empirical results indicate that the algorithm converges quickly and always selects the optimal checkpoint interval. yi-bing lin bruno r. preiss wayne m. loucks edward d. lazowska coordinating planning, perception, and action for mobile robots reid simmons modeling pigmented materials for realistic image synthesis this article discusses and applies the kubelka-munk theory of pigment mixing to computer graphics in order to facilitate improved image synthesis. the theories of additive and subtractive color mixing are discussed and are shown to be insufficient for pigmented materials. the kubelka--munk theory of pigment mixing is developed and the relevant equations are derived. pigment mixing experiments are performed and the results are displayed on color television monitors. a paint program that uses kubelka--munk theory to mix real pigments is presented. theories of color matching with pigments are extended to determine reflectances for use in realistic image synthesis. chet s. haase gary w. meyer the synthesis of cloth objects jerry weil the condition specification: revisiting its role within a hierarchy of simulation model specifications ernest h. page topology matching for fully automatic similarity estimation of 3d shapes there is a growing need to be able to accurately and efficiently search visual data sets, and in particular, 3d shape data sets. this paper proposes a novel technique, called _topology matching_, in which similarity between polyhedral models is quickly, accurately, and automatically calculated by comparing multiresolutional reeb graphs (mrgs). the mrg thus operates well as a search key for 3d shape data sets. in particular, the mrg represents the skeletal and topological structure of a 3d shape at various levels of resolution. the mrg is constructed using a continuous function on the 3d shape, which may preferably be a function of geodesic distance because this function is invariant to translation and rotation and is also robust against changes in connectivities caused by a mesh simplification or subdivision. 
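the topology matching abstract above constructs its multiresolutional reeb graph from a function of geodesic distance over the surface. one common approximation of such a function is the sum of edge-graph shortest-path distances, sketched below in python; the helper names are mine, and a practical implementation would sample base vertices rather than summing over all of them:

```python
import heapq
from collections import defaultdict

def edge_graph(vertices, triangles):
    """adjacency list weighted by euclidean edge length."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(vertices[a], vertices[b])) ** 0.5
    graph = defaultdict(list)
    for tri in triangles:
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            w = dist(a, b)
            graph[a].append((b, w))
            graph[b].append((a, w))
    return graph

def dijkstra(graph, source, n):
    d = [float("inf")] * n
    d[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        du, u = heapq.heappop(heap)
        if du > d[u]:
            continue
        for v, w in graph[u]:
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(heap, (d[v], v))
    return d

def summed_geodesic(vertices, triangles):
    """mu(v) ~ sum of approximate geodesic distances from v to every other vertex;
    level sets of this translation- and rotation-invariant function can drive a reeb graph."""
    graph = edge_graph(vertices, triangles)
    n = len(vertices)
    return [sum(dijkstra(graph, s, n)) for s in range(n)]

if __name__ == "__main__":
    verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
    tris = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
    print(summed_geodesic(verts, tris))
```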
the similarity calculation between 3d shapes is processed using a coarse-to-fine strategy while preserving the consistency of the graph structures, which results in establishing a correspondence between the parts of objects. the similarity calculation is fast and efficient because it is not necessary to determine the particular pose of a 3d shape, such as a rotation, in advance. topology matching is particularly useful for interactively searching for a 3d object because the results of the search fit human intuition well. masaki hilaga yoshihisa shinagawa taku kohmura tosiyasu l. kunii simulation: pushing a dead mouse through a maze? simulation has been defined in various ways, depending upon the particular biases of the definer. as an example, simulation has been defined, albeit facetiously, as "pushing a dead mouse through a maze". the definition is examined in the paper. special emphasis is given to the recent changes that have occurred in user acceptance and consideration is given to trends in the field. the need for increased accessibility to simulation is the primary focus of the paper. john a. white fast rendering of irregular grids cláudio silva joseph s. b. mitchell arie e. kaufman architecture and performance evaluation of a massive multi-agent system gaku yamamoto yuhichi nakamura gouraud bump mapping i. ernst h. russeler h. schulz o. wittig some applications of maple symbolic computation to scientific and engineering problems t. c. scott g. j. fee time representation based on knowledge partitioning (abstract only) our approach to the representation of time is based on dinsmore's theory of knowledge partitioning which factors any piece of knowledge into two components, a context and a description and accordingly distributes the knowledge over multiple knowledge bases called spaces each characterized by its context. each space is a domain for localized reasoning processes and its context bears strict relationship to other spaces. since a space is characterized by its context, the legal consistency of a space is a property of its context. thus we define legal contexts as follows: if f is a propositional function that maps a proposition p into a proposition f(p), we can use f as a legal context if and only if it is entailment preserving i.e., (for all p, q) f(p & (p---> q)) ---> f(q). if c is the context of a space s1 with respect to another space s2 then a proposition p true in s1 is equivalent to the proposition c(p) true in s2. this process called context climbing helps retrieve knowledge from one space into another. (contexts may be temporal or otherwise e.g. "george believes___", "in 1985___", "if p then___", "during the past hour,___" are some legal contexts.) descriptions having the same context with respect to a space may be stored in the same space. the creation of a unique space with a context c is possible only if the context can be distributed over conjunction, i.e., (for all p, q) (c(p) & c(q) ---> c(p&q;)). if c is a distributive context then propositions p and q are stored as p, q in the unique space referred to by c. the idea of temporal spaces (those defined by temporal contexts) is more general than that of history proposed by hayes. a temporal space contains propositions true at the corresponding time irrespective of their spatial dimensions. temporal contexts are more general than the tense operators invented by prior that map propositions true in the past or future into the present. 
this is because the knowledge partitioning scheme is not restricted to the four tense operators as contexts but allows any entailment preserving propositional function as a context of a temporal space. this scheme is more advantageous than a system which does not make the structural distinction between contexts and descriptions, for instance allen's system, because such a system does not consolidate information into units that model possible situations. the proposed representation makes no logical distinction between contexts referring to time instants or intervals. one can reason about each space in isolation and use context climbing to transfer the results of the reasoning process and other information to other spaces. the set of 13 mutually exclusive and exhaustive temporal relations formulated by allen could serve as some of the temporal contexts, since a temporal context specifies the relation of propositions in a space to those in another space. in this sense, then successive context climbing between spaces is analogous to the transitivity of temporal relations given by allen. contexts generalize the idea of temporal relations, as they map a proposition into a world existing at a particular time. this scheme avoids combinatorial problems by focussing reasoning processes on restricted domains and by allowing complex temporal inferences in relatively few steps. it allows one to reason about temporal and atemporal knowledge in an integrated framework. this mode of representation removes a great deal of redundancy in both storage and processing since the common context of all descriptions in a space is stored only once and does not take part in local reasoning in the space. madhukar n. thakur john dinsmore getting raster ellipses right a concise, incremental algorithm for raster approximations to ellipses in standard position produces approximations that are good to the last pixel even near octant boundaries or the thin ends of highly eccentric ellipses. the resulting approximations commute with reflection about the diagonal and are mathematically specifiable without reference to details of the algorithm. m. douglas mcilroy a little curious (pad & pencil song) steve oakes a particle system for direct synthesis of landscapes and textures argiris a. kranidotis problem of undergeneralization in ebl and a proposed solution r. loganantharaj sai prabhu a genetic algorithm for solving the euclidean distance matrices completion problem wilson rivera-gallego a language-directed distributed discrete simulation system (extended abstract) in recent years, there has been a noticable increase of interest into the feasibility and utility of distributed discrete simulation. this has been prompted by an improvement in distributed algorithms and distributed computer systems, as well as the need to economically manage the increasingly complex simulation models found in today's world. researchers have centered their efforts around two distinctly different approaches to the design and implementation of distributed discrete simulation systems, with the differences based on what types of tasks are distributed to the individual processors. the more common approach researchers have taken is to develop a system in which the model functions (events) are distributed among the processors (1, 3, 7, 8). the alternative approach distributes simulation support functions (e.g. random number generation) to the available processors (4,9). dana l. 
wyatt sallie sheppard area and volume coherence for efficient visualization of 3d scalar functions nelson max pat hanrahan roger crawfis efficient antialiased rendering of 3-d linear fractals john c. hart thomas a. defanti nilaya bruno follet towards a consistent logical framework for ontological analysis in their framework for ontological analysis, guarino and welty provide a number of insights that are useful for guiding the design of taxonomic hierarchies. however, the formal statements of these insights as logical schemata are flawed in a number of ways, including inconsistent notation that makes the intended semantics of the logic unclear, false claims of logical consequence, and definitions that provably result in the triviality of some of their property features. this paper makes a negative contribution, by demonstrating these flaws in a rigorous way, but also makes a positive contribution wherever possible, by identifying the underlying intuitions that the faulty definitions were intended to capture, and attempting to formalize those intuitions in a more accurate way. aaron n. kaplan antialiased parameterized solid texturing simplified for consumer-level hardware implementation john c. hart nate carr masaki kameya stephen a. tibbitts terrance j. coleman representation and processing issues underlying machine learning from examples j. ortega g. lee d. fisher the virtual mesh: a geometric abstraction for efficiently computing radiosity in this article, we introduce a general-purpose method for computing radiosity on scenes made of parametric surfaces with arbitrary trimming curves. in contrast with past approaches that require a tessellation of the input surfaces (be it made up of triangles or patches with simple trimming curves) or some form of geometric approximation, our method takes full advantage of the rich and compact mathematical representation of objects. at its core lies the _virtual mesh_, an abstraction of the input geometry that allows complex shapes to be illuminated as if they were simple primitives. the virtual mesh is a collection of normalized square domains to which the input surfaces are mapped while preserving their energy properties. radiosity values are then computed on these supports before being lifted back to the original surfaces. to demonstrate the power of our method, we describe a high-order wavelet radiosity implementation that uses the virtual mesh. examples of objects and environments, designed for interactive applications or virtual reality, are presented. they prove that, by exactly integrating curved surfaces in the resolution process, the virtual mesh allows complex scenes to be rendered more quickly, more accurately, and much more naturally than with previously known methods. l. alonso f. cuny s. petitjean j.-c. paul s. lazard e. wies graphical interface for logic programming angelo monfroglio probabilistic logic programming with conditional constraints we introduce a new approach to probabilistic logic programming in which probabilities are defined over a set of possible worlds. more precisely, classical program clauses are extended by a subinterval of [0,1] that describes a range for the conditional probability of the head of a clause given its body. we then analyze the complexity of selected probabilistic logic programming tasks.
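the conditional-constraint semantics in the probabilistic logic programming abstract above can be illustrated at a tiny propositional scale: enumerate possible worlds, treat their probabilities as variables, and turn each constraint (h|b)[l,u] into the linear inequalities l*p(b) <= p(h and b) <= u*p(b). the python sketch below (my own toy encoding using scipy's linprog, far from the paper's machinery) only checks satisfiability of such a constraint set:

```python
from itertools import product
from scipy.optimize import linprog

def satisfiable(atoms, constraints):
    """constraints: list of (head, body, low, high); head/body are functions
    mapping a world (dict atom -> bool) to True/False."""
    worlds = [dict(zip(atoms, vals)) for vals in product([False, True], repeat=len(atoms))]
    n = len(worlds)
    a_ub, b_ub = [], []
    for head, body, low, high in constraints:
        hb = [1.0 if head(w) and body(w) else 0.0 for w in worlds]
        b = [1.0 if body(w) else 0.0 for w in worlds]
        # p(h&b) - high * p(b) <= 0   and   low * p(b) - p(h&b) <= 0
        a_ub.append([hb[i] - high * b[i] for i in range(n)])
        b_ub.append(0.0)
        a_ub.append([low * b[i] - hb[i] for i in range(n)])
        b_ub.append(0.0)
    res = linprog(c=[0.0] * n, A_ub=a_ub, b_ub=b_ub,
                  A_eq=[[1.0] * n], b_eq=[1.0], bounds=[(0.0, 1.0)] * n,
                  method="highs")
    return res.success

atoms = ["bird", "flies"]
constraints = [
    (lambda w: w["flies"], lambda w: w["bird"], 0.9, 1.0),   # (flies | bird)[0.9, 1]
    (lambda w: w["bird"], lambda w: True, 0.5, 0.5),         # p(bird) = 0.5
]
print(satisfiable(atoms, constraints))   # True: a consistent world distribution exists
```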
it turns out that probabilistic logic programming is computationally more complex than classical logic programming; more precisely, the tractability of special cases of classical logic programming generally does not carry over to the corresponding special cases of probabilistic logic programming. moreover, we also draw a precise picture of the complexity of deciding and computing tight logical consequences in probabilistic reasoning with conditional constraints in general. we then present linear optimization techniques for deciding satisfiability and computing tight logical consequences of probabilistic logic programs. these techniques are efficient in the special case in which we have little relevant purely probabilistic knowledge. we finally show that probabilistic logic programming under certain syntactic and semantic restrictions is closely related to van emden's quantitative deduction, and thus has computational properties similar to classical logic programming. based on this result, we present an efficient approximation technique for probabilistic logic programming. thomas lukasiewicz productivity tools in simulation: simscript ii.5 and simgraphics o. f. bryan the morph node we discuss potential and limitations of a morph node, inspired by the corresponding construct in java3d. a morph node in java3d interpolates vertex attributes among several homeomorphic geometries. this node is a promising candidate for the delivery of 3d animation in a very compact form. we review the state-of-the-art in web 3d techniques with respect to the possibility of interpolating among several geometries. this review leads to a simple extension for vrml-97 as well as a recommendation for necessary changes in java3d. furthermore, we discuss various optimization issues for morph nodes. marc alexa johannes behr wolfgang muller steps to implement bayesian input distribution selection stephen e. chick what is scalability in multi-agent systems? omer f. rana kate stout automating the technical publications department a day at the office---1995 by the time george denning arrives at his office at seacom, inc., at 10:30 a.m. this morning, he has already done three hours of work. george, a senior technical writer in the software publications department, wanted some isolation this morning to edit a particularly tricky section in a user guide. his home computer, hooked up to seacom's host computer through the local cable network, allowed him to edit the document on-line in the quiet of his den, browse through his company mailbox, and glance at the latest changes to a functional specification fred herbert has been working on. although fred is the analyst in charge of project development, george is responsible for all ergonomic considerations. wishing to avoid the chit-chat of a telephone call, george linked up with fred and the user liaison on the project for a three-way computer conference. in five minutes george had explained the problem he had found, and, with the three looking at the specifications on-line, had changed it to everyone's satisfaction. finished, george logged off of seacom and left for the office. arriving at seacom, george gets a cup of tea and turns on his terminal. the system alerts him that a document has been returned from review. although the committee members are scattered around the country, it has only taken five days for the entire review cycle. all of the comments have been made on-line. george calls up the review copy on his terminal.
the text of the document is formatted on the screen, and blinking red arrows scattered throughout the document show where comments were made. george issues a command, and the document compresses onto the left-hand half of the screen, while the comments appear on the right-hand half. in an hour he has made the changes and sent the file off to the certification board for approval. reviewing the document he edited at home in the morning, george decides an illustration is called for. he queries the corporate graphics library on-line for similar illustrations. the description of one sounds promising. calling it up on the screen, he decides that, with a few changes, it will fill the bill. rapidly he draws a few more lines and connectors, adds a photo of a particular device (copied from the corporate photo archives in digital form) and incorporates it into his document as a figure. the system instantly boxes and labels it, and updates the list of figures in the table of contents. john d. browne a concurrent on-board vision system for a mobile robot robot vision algorithms have been implemented on an 8-node ncube-at hypercube system onboard a mobile robot (hermies) developed at oak ridge national laboratory. images are digitized using a framegrabber mounted in a vme rack. image processing and analysis are performed on the hypercube system. the vision system is integrated with robot navigation and control software, enabling the robot to find the front of a mockup control panel, move up to the panel, and read an analog meter. among the concurrent algorithms used for image analysis are a new component labeling algorithm and a hough transform algorithm with load balancing. j. p. jones validation and verification of simulation models robert g. sargent a multi-agent based evolutionary artificial neural network for general navigation in unknown environments fang wang eric mckenzie book reviews karen t. sutherland drawing antialiased cubic spline curves cubic spline curves have many nice properties that make them desirable for use in computer graphics, and the advantages of antialiasing have been known for some years. yet, only recently has there been any attempt at directly antialiasing spline curves. parametric spline curves have resisted antialiasing in several ways: single segments may cross or become tangent to themselves. cusps and small loops are easily missed entirely. thus, short pieces of the curve cannot necessarily be rendered in isolation. finding the distance from a pixel center to the curve accurately and efficiently---usually an essential part of antialiasing---is an unsolved problem. the method presented by lien, shantz, and pratt [21] is a good start, although it considers pixel-length pieces of the curve in isolation and lacks robustness in the handling of certain curves. this paper provides an improved method that is more robust, and is able to handle intersections and tangency. r. victor klassen a multi-agent reinforcement learning method for a partially-observable competitive game this article proposes a reinforcement learning (rl) method based on an actor-critic architecture, which can be applied to partially-observable multi-agent competitive games. as an example, we deal with a card game "hearts". in our method, the actor plays so as to enlarge the expected temporal-difference error, which is obtained based on the estimation of the state transition. the state transition is estimated by taking the inferred card distribution and the other player's action models into account.
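the hearts abstract above builds on an actor-critic architecture driven by temporal-difference errors. a generic, hedged sketch of that core loop is given below---tabular critic, softmax actor, and a toy two-state environment of my own; the paper's belief-state estimation and its particular use of the td error are not reproduced:

```python
import math, random

# toy mdp: states 0/1, actions 0/1; action 1 taken in state 0 pays off and flips the state
def step(state, action):
    reward = 1.0 if (state == 0 and action == 1) else 0.0
    return (1 - state if action == 1 else state), reward

def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

def train(steps=2000, alpha=0.1, beta=0.1, gamma=0.9):
    value = [0.0, 0.0]                       # critic: state values
    prefs = [[0.0, 0.0], [0.0, 0.0]]         # actor: action preferences per state
    state = 0
    for _ in range(steps):
        probs = softmax(prefs[state])
        action = random.choices([0, 1], weights=probs)[0]
        nxt, reward = step(state, action)
        td_error = reward + gamma * value[nxt] - value[state]   # critic's td(0) error
        value[state] += alpha * td_error                        # critic update
        for a in range(2):                                      # actor update (policy-gradient style)
            grad = (1.0 if a == action else 0.0) - probs[a]
            prefs[state][a] += beta * td_error * grad
        state = nxt
    return value, prefs

print(train())
```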
yoichiro matsuno tatsuya ymazaki shin ishii jun matsuno learning agents for uncertain environments (extended abstract) stuart russell predicting the future: resource requirements and predictive optimism the partitioning of systems for parallel simulation is a complex task, requiring consideration of both computational load requirements and communications activity. typically, this information is not accurately known prior to execution. this paper investigates the use of historical information for the prediction of future requirements, both for computation and communications. in addition, for optimistic simulation algorithms, we present a novel technique (which we call predictive optimism) whereby binary prediction schemes can be used to increase the accuracy of optimistic assumptions, thereby decreasing rollbacks and potentially improving overall simulator performance. bradley l. noble roger d. chamberlain interpreting statutory predicates in this paper we discuss a hybrid approach to the problem of statutory interpretation that involves combining our past approach to case-based reasoning ("cba"), as exemplified in our previous hypo and tax-hypo systems, with traditional rule-based reasoning ("rbr"), as exemplified by expert systems. we do not tackle the fullblown version of statutory interpretation, which would include reasoning with legislative intent or other normative aspects (the "ought"), but confine ourselves to reasoning with explicit cases and rules. we discuss strategies that can be used to guide interpretation, particularly the interleaving of cbr and rbr, and how they are used in an agenda-based architecture, called cabaret, which we are currently developing in a general way and experimenting with in the particular area of section 280a(c)(1) of the u.s. internal revenue code, which deals with the so called "home office deduction". e. l. rissland d. b. skalak avoiding the problems and pitfalls in simulation randall p. sadowski using abductive inferencing to derive complex error classifications for discrete sequential processes sanjeev b. ahuja james a. reggia face fixer: compressing polygon meshes with properties most schemes to compress the topology of a surface mesh have been developed for the lowest common denominator: triangulated meshes. we propose a scheme that handles the topology of arbitrary polygon meshes. it encodes meshes directly in their polygonal representation and extends to capture face groupings in a natural way. avoiding the triangulation step we reduce the storage costs for typical polygon models that have group structures and property data. martin isenburg jack snoeyink fast stereo volume rendering taosong he arie kaufman computer graphics visualization for acoustic simulation a. stettner d. p. greenberg a microfacet-based brdf generator a method is presented that takes as an input a 2d microfacet orientation distribution and produces a 4d bidirectional reflectance distribution function (brdf). this method differs from previous microfacet-based brdf models in that it uses a simple shadowing term which allows it to handle very general microfacet distributions while maintaining reciprocity and energy conservation. the generator is shown on a variety of material types. 
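the microfacet abstract above pairs a facet orientation distribution with a shadowing term inside the usual form f = d * g * fresnel / (4 cos(theta_i) cos(theta_o)). the sketch below plugs a beckmann-style distribution, a crude visibility term, and a schlick fresnel approximation into that form purely for illustration; it is not the paper's distribution-driven generator:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def beckmann_d(cos_h, roughness):
    c2, m2 = cos_h * cos_h, roughness * roughness
    return math.exp((c2 - 1.0) / (c2 * m2)) / (math.pi * m2 * c2 * c2)

def schlick_fresnel(cos_i, f0):
    return f0 + (1.0 - f0) * (1.0 - cos_i) ** 5

def simple_g(cos_i, cos_o, roughness):
    # crude, symmetric visibility term standing in for the paper's shadowing term
    k = roughness * math.sqrt(2.0 / math.pi)
    g1 = lambda c: c / (c * (1.0 - k) + k)
    return g1(cos_i) * g1(cos_o)

def microfacet_brdf(wi, wo, n, roughness=0.3, f0=0.04):
    wi, wo, n = normalize(wi), normalize(wo), normalize(n)
    cos_i, cos_o = dot(n, wi), dot(n, wo)
    if cos_i <= 0.0 or cos_o <= 0.0:
        return 0.0
    h = normalize(tuple(a + b for a, b in zip(wi, wo)))    # half vector
    d = beckmann_d(dot(n, h), roughness)
    g = simple_g(cos_i, cos_o, roughness)
    f = schlick_fresnel(dot(wi, h), f0)
    return d * g * f / (4.0 * cos_i * cos_o)

print(microfacet_brdf((0.3, 0.1, 1.0), (-0.2, 0.0, 1.0), (0.0, 0.0, 1.0)))
```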
michael ashikmin simon premoze peter shirley encoding a problem for genetic algorithm vassil smarkov milena korova julka petkova freeform curve generation by recursive subdivision of polygonal strip complexes ahmad nasri boing david kurry mike hertlein michael farrell dean hovey a new variance-reduction technique for regenerative simulations of markov chains james m. calvin marvin k. nakayama sheeps warren fuller the systematization of legal meta-inference hajime yoshino interactive pen-and-ink illustration we present an interactive system for creating pen-and-ink illustrations. the system uses stroke textures---collections of strokes arranged in different patterns---to generate texture and tone. the user "paints" with a desired stroke texture to achieve a desired tone, and the computer draws all of the individual strokes. the system includes support for using scanned or rendered images for reference to provide the user with guides for outline and tone. by following these guides closely, the illustration system can be used for interactive digital halftoning, in which stroke textures are applied to convey details that would otherwise be lost in this black-and-white medium. by removing the burden of placing individual strokes from the user, the illustration system makes it possible to create fine stroke work with a purely mouse-based interface. thus, this approach holds promise for bringing high-quality black-and-white illustration to the world of personal computing and desktop publishing. michael p. salisbury sean e. anderson ronen barzel david h. salesin multiresolution techniques for interactive texture-based volume visualization we present a multiresolution technique for interactive texture-based volume visualization of very large data sets. this method uses an adaptive scheme that renders the volume in a region-of-interest at a high resolution and the volume away from this region at progressively lower resolutions. the algorithm is based on the segmentation of texture space into an octree, where the leaves of the tree define the original data and the internal nodes define lower-resolution versions. rendering is done adaptively by selecting high-resolution cells close to a center of attention and low-resolution cells away from this area. we limit the artifacts introduced by this method by modifying the transfer functions in the lower-resolution data sets and utilizing spherical shells as a proxy geometry. it is possible to use this technique to produce viewpoint-dependent renderings of very large data sets. eric lamar bernd hamann kenneth i. joy visualization spaces bill hibbard simulation of computer systems and networks with mogul and regal peter l. haigh real-time digital synthesis of virtual acoustic environments elizabeth m. wenzel scott h. foster an improved z-buffer csg rendering algorithm nigel stewart geoff leach sabu john presenting visual information responsibly "seeing is believing" "seeing is not believing" nahum gershon volume visualization arie e. kaufman gpss - finding the appropriate world-view every simulation language embodies a world-view which heavily influences approaches taken in building models in the language. in most applications for which a given language is used, the world-view of the language enforces a discipline of programming which results in models which are time- and space-efficient, reflecting the usefulness of the language and the appropriateness of language choice by the programmer.
for some applications, however, the programming style encouraged by the world-view of a language can lead to programs which are time- and space-inefficient, even though the programs are natural, straightforward solutions to the problem at hand. in such cases, one may be forced to consider alternative languages or to alter one's approach in application of a given language. this paper briefly summarizes the world-view of the gpss language and gives two examples of systems which, when modelled with conventional gpss approaches, result in inefficient programs. for each system, two gpss models are presented: a straightforward model which is inefficient, and a clever model which is efficient. in both cases, the clever models are easily programmed in gpss and require only marginally more skill on the part of the programmer than do the straightforward models. once an appropriate alternative to the obvious gpss world-view is found, the rest is easy. a working knowledge of gpss is required to read this paper. james o. henriksen development of a blackboard system for robot programming this paper describes the development of a blackboard system for robot programming. the blackboard architecture is a system architecture which provides a structured way of coordinating different sub-systems in such a way that they can work together to solve problems. a blackboard system was implemented in prolog and it has been applied successfully for the automatic generation of control code for a robot to perform the task of block assembly in an environment with an obstacle. in this blackboard system, the user- interface to the system is treated as an additional knowledge source of the system. through this knowledge source, the user can find out the status of the operation. he can also modify the goal specifications and the robot will plan according to the new specification. grantham k. h. pang modeling fiber stream of internal wood naoki kawai caricatures from images a caricature is a graphical coding of facial features that seeks, paradoxically, to be more like the face than the face itself. a theory of computation for caricatures of frontal faces has been developed and implemented in computer graphics. the caricature generator starts with a digitized image of a face and distorts a line drawing representation of the face relative to a norm. this transformation emphasizes perceptually significant information, reduces noise, and exploits some of the same mechanisms used in human memory for faces. susan e. brennan results of acm's eighteenth computer chess championship m. newborn d. kopec an overview of construction-integration model: a theory of comprehension as a foundation for a new cognitive architecture cathleen wharton walter kintsch fast volumetric deformation on general purpose hardware high performance deformation of volumetric objects is a common problem in computer graphics that has not yet been handled sufficiently. as a supplement to 3d texture based volume rendering, a novel approach is presented, which adaptively subdivides the volume into piecewise linear patches. an appropriate mathematical model based on tri-linear interpolation and its approximations is proposed. new optimizations are introduced in this paper which are especially tailored to an efficient implementation using general purpose rasterization hardware, including new technologies, such as vertex programs and pixel shaders. 
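the volumetric deformation abstract above rests on tri-linear interpolation within each patch. a small reference implementation of plain trilinear sampling on a regular grid is sketched below (numpy, my own helper names; the paper's hardware-oriented approximations are not shown):

```python
import numpy as np

def trilinear(volume, x, y, z):
    """sample a 3d scalar volume at a fractional position (x, y, z)."""
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    x1 = min(x0 + 1, volume.shape[0] - 1)
    y1 = min(y0 + 1, volume.shape[1] - 1)
    z1 = min(z0 + 1, volume.shape[2] - 1)
    fx, fy, fz = x - x0, y - y0, z - z0

    # interpolate along x, then y, then z
    c00 = volume[x0, y0, z0] * (1 - fx) + volume[x1, y0, z0] * fx
    c10 = volume[x0, y1, z0] * (1 - fx) + volume[x1, y1, z0] * fx
    c01 = volume[x0, y0, z1] * (1 - fx) + volume[x1, y0, z1] * fx
    c11 = volume[x0, y1, z1] * (1 - fx) + volume[x1, y1, z1] * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz

vol = np.arange(27, dtype=float).reshape(3, 3, 3)
print(trilinear(vol, 1.5, 0.25, 2.0))
```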
additionally, a high performance model for local illumination calculation is introduced, which meets the aesthetic requirements of visual arts and entertainment. the results demonstrate the significant performance benefit and allow for time-critical applications, such as computer assisted surgery. c. rezk-salama m. scheuering g. soza g. greiner creating a live broadcast from a virtual environment chris greenhalgh john bowers graham walker john wyver steve benford ian taylor progressive radiance evaluation using directional coherence maps baining guo spacetime constraints andrew witkin michael kass fast multi-layer fog justin legakis automating forms publishing with the intelligent filing manager k. c. mukherjee iris performer: a high performance multiprocessing toolkit for real-time 3d graphics this paper describes the design and implementation of iris performer, a toolkit for visual simulation, virtual reality, and other real- time 3d graphics applications. the principal design goal is to allow application developers to more easily obtain maximal performance from 3d graphics workstations which feature multiple cpus and support an immediate- mode rendering library. to this end, the toolkit combines a low-level library for high-performance rendering with a high-level library that implements pipelined, parallel traversals of a hierarchical scene graph. while discussing the toolkit architecture, the paper illuminates and addresses performance issues fundamental to immediate-mode graphics and coarse-grained, pipelined multiprocessing. graphics optimizations focus on efficient data transfer to the graphics subsystem, reduction of mode settings, and restricting state inheritance. the toolkit's multiprocessing features solve the problems of how to partition work among multiple processes, how to synchronize these processes, and how to manage data in a pipelined, multiprocessing environment. the paper also discusses support for intersection detection, fixed-frame rates, run-time profiling and special effects such as geometric morphing. john rohlf james helman two implementations of a concurrent simulation environment this paper discusses the design of a concurrent simulation environment hosted on the ada programming language at the laboratory for software research at texas a & m university. this environment was first implemented on a vax 11/750 single processor system and then ported to a sequent balance 8000 parallel computer system with ten processors. two run-time ada systems were available on the sequent: one for sequential and one for parallel. this paper reports our experiences in porting the original software to these new environments. carolyn hughes usha chandra sallie v. sheppard rendering complex scenes with memory-coherent ray tracing matt pharr craig kolb reid gershbein pat hanrahan the emotion machine (invited speech): from pain to suffering marvin minsky analysis and verification of multi-agent interaction protocols wu wen fumio mizoguchi computer graphics in singapore this issue, we travel to singapore for an update of this country's computer graphics activities. alain chesnais jose encarnacao h. seah y. t. lee line drawings of octree-represented objects the octree structure represents the space occupied by an object as a juxtaposition of cubes, where the sizes and position coordinates of the cubes are integer powers of 2 and are defined by a recursive decomposition of three- dimensional space. 
this makes the octree structure highly sensitive to object location and orientation, and the three-dimensional shape of the represented object obscure. it is helpful to be able to see the actual object represented by an octree, especially for visual performance evaluation of octree algorithms. presented in this paper is a display algorithm that helps visualize the three- dimensional space represented by the octree. given an octree, the algorithm produces a line drawing of the objects represented by the octree, using parallel projection, from any specified viewpoint with hidden lines removed. the order in which the algorithm traverses the octree has the property that if node x occludes node y, then node x is visited before node y. the algorithm produces a set of long, straight visible edge segments corresponding to the visible surface of the polyhedral object represented by the octree. examples of some line drawing produced by the algorithm are given. the complexity of the algorithm is also discussed. jack veenstra narendra ahuja parallel proximity detection and the distribution list algorithm generalized proximity detection for moving objects in a logically correct parallel discrete-event simulation is an interesting and fundamentally challenging problem. determining who can see whom in a manner that is fully scalable in terms of cpu usage, number of messages, and memory requirements is highly non-trivial. a new scalable approach has been developed to solve this problem. this algorithm, called the distribution list, has been designed and tested using the object-oriented synchronous parallel environment for emulation and discrete-event simulation (speedes) operating system. preliminary results show that the distribution list algorithm achieves excellent parallel performance. jeff s. steinman frederick wieland intensity fluctuations and natural texturing wolfgang krueger symbolic-numeric nonlinear equation solving a numerical equation- solving algorithm employing differentiation and interval arithmetic is presented which finds all solutions of f(z) = 0 on an interval i when f is holomorphic and has simple zeros. a two dimensional generalization of this algorithm is discussed. finally, aspects of a broader symbolic-numeric algorithm which uses the first algorithm as a foundation are considered. kelly roach the acquisition, verification, and explanation of design knowledge c. kellogg r. a. gargan w. mark j. g. mcguire m. pontecorvo j. l. schlossberg j. w. sullivan m. r. genesereth n. singh metastream vadim abadjev miguel del rosario alexei lebedev alexander migdal victor paskhaver compatability and interaction style in computer graphics recent trends in human computer interaction have focused on representations based on physical reality [4, 5, 6, 8]. the idea is to provide richer, more intuitive handles for control and manipulation compared to traditional graphical user interfaces (guis) using a mouse. this trend underscores the need to examine the concept of manipulation and to further understand what we _want_ to manipulate versus what we _can_ easily manipulate. implicit in this is the notion that the bias of the ui is often incompatible with user needs.the main goal of ui design is to reduce complexity while augmenting the ability of users to get their work done. a fundamental belief underlying our research is that complexity lies not only in what is purchased from the software and hardware manufacturers, but also in what the user creates with it. 
it is not just a question of making buttons and menus easier to learn and more efficient to use. it is also a question of "given that i've created this surface in this way, how can it now be modified to achieve my current design objective?" (the observation is that how the user created the surface in the first place will affect the answer to the question.) our thesis is that appropriate design of the system can minimize both kinds of complexity: that inherent in accessing the functionality provided by the vendor, _and_ that created by the user. the literature focuses on the former. in what follows, we investigate some of the issues in achieving the latter. in so doing, we structure our discussion around questions of _compatibility._ george w. fitzmaurice bill buxton induction of models under uncertainty this paper outlines a procedure for performing induction under uncertainty. this procedure uses a probabilistic representation and uses bayes' theorem to decide between alternative hypotheses (theories). this procedure is illustrated by a robot with no prior world experience performing induction on data it has gathered about the world. the particular inductive problem is the formation of class descriptions both for the tutored and untutored cases. the resulting class definitions are inherently probabilistic and so do not have any sharply defined membership criterion. this robot example raises some fundamental problems about induction---particularly it is shown that inductively formed theories are not the best way of making predictions. another difficulty is the need to provide prior probabilities for the set of possible theories. the main criterion for such priors is a pragmatic one aimed at keeping the theory structure as simple as possible, while still reflecting any structure discovered in the data. p cheeseman the ultravis system gunter knittel case-based reasoning: business applications bradley p. allen constraint-based termination analysis of logic programs current norm-based automatic termination analysis techniques for logic programs can be split up into different components: inference of mode or type information, derivation of models, generation of well-founded orders, and verification of the termination conditions themselves. although providing high-precision results, these techniques suffer from an efficiency point of view, as several of these analyses are often performed through abstract interpretation. we present a new termination analysis which integrates the various components and produces a set of constraints that, when solvable, identifies successful termination proofs. the proposed method is both efficient and precise. the use of constraint sets enables the propagation of information over all different phases while the need for multiple analyses is considerably reduced. stefaan decorte danny de schreye henk vandecasteele darwinian evolution as a paradigm for ai research a fenanzo visualizing simulated room fires (case study) recent advances in fire science and computer modeling of fires allow scientists to predict fire growth and spread through structures. in this paper we describe a variety of visualizations of simulated room fires for use by both fire protection engineers and fire suppression personnel. we also introduce the concept of fuzzy visualization, which results from the superposition of data from several separate simulations into a single visualization.
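the fuzzy visualization mentioned at the end of the room-fire abstract above comes from superimposing several simulation runs into one picture. a toy numpy sketch of that superposition step is given below; the per-cell mean/spread statistics and the field name are my assumptions, not the case study's:

```python
import numpy as np

def fuzzy_visualization(runs):
    """runs: array of shape (n_runs, height, width) holding one scalar field
    (e.g., temperature) per simulation. returns per-cell consensus and disagreement."""
    runs = np.asarray(runs, dtype=float)
    mean = runs.mean(axis=0)           # the "crisp" consensus picture
    spread = runs.std(axis=0)          # where the simulations disagree -> fuzziness
    return mean, spread

rng = np.random.default_rng(1)
simulations = 20.0 + 5.0 * rng.random((8, 32, 32))   # eight hypothetical runs
mean, spread = fuzzy_visualization(simulations)
# a renderer could map `mean` to color and `spread` to blur or transparency
print(mean.shape, float(spread.max()))
```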
jayesh govindarajan matthew ward jonathan barnett volume visualization in vrml johannes behr marc alexa unification using a distributed representation a. browne j. pilkington contemplations of a simulation navel or recognizing the seers among peers richard e. nance smoothing of contours by using one adaptive filter ognian jelezov quasi-linear depth buffers with variable resolution in this paper we present a new class of variable-resolution depth buffers, providing a flexible trade-off between depth precision in the distant areas of the view volume and performance. these depth buffers can be implemented using a linear or quasi-linear mapping function of the distance to the camera to the depth in the screen space. in particular, the complementary z buffer algorithm combines simplicity of implementation with significant bandwidth savings. a variable-resolution depth buffer saves bandwidth by changing the size of the per-pixel depth access from 24 bits to 16 bits when the distance to the pixel from the camera becomes larger than a given threshold. this distance is selected in order to keep the resulting resolution equal to or larger than the resolution of the 24-bit screen z buffer. for dynamic ratios of the distances between far and near planes of 500 and above, bandwidth savings may exceed 20%. quasi-linear floating-point depth buffers are best at high dynamic ratios; 3d hardware should support per-frame setting of the optimal depth buffer type and format. per-frame adjustment of the resolution switch distance allows balancing performance with depth precision and should be exposed in the graphics api. eugene lapidous guofang jiao jianbo zhang timothy wilson accommodating memory latency in a low-cost rasterizer bruce anderson andy stewart rob macaulay turner whitted hollow jason shulman checking regulation consistency by using sol-resolution this paper addresses the problem of regulation consistency checking. regulations are sets of rules which express what is obligatory, permitted, forbidden and under which conditions. we first define a first order language to model regulations. then we introduce a definition of regulation consistency. we show that checking the consistency of a regulation comes down to generating some particular consequences of some first order formulas. then, we show that we can apply inoue's inference rule, sol-resolution, which is complete for generating, from some clauses, their consequences which satisfy a given condition. laurence cholvy an ai-based approach to machine translation in indian languages primarily illustrated as an approach to translate the indian languages, a focus on ai techniques for building semantic representational structures of sentences is presented. subramanian raman narayanan alwar simulating the mind: a gauntlet thrown to computer science jiri wiedermann a general construction scheme for unit quaternion curves with simple high order derivatives myoung-jun kim myung-soo kim sung yong shin which way is the flow? david l. kao using transparent props for interaction with the virtual table dieter schmalstieg l. miguel encarnação zsolt szalavári fundamental algorithms (panel session): retrospect and prospect rae a. earnshaw james h. clark a. robin forrest robert d. parslow david f. rogers eco-problem-solving model (abstract): results of the n-puzzle alexis drogoul christophe dubreuil jagged edges: when is filtering needed? depiction of oblique edges by discrete pixels usually results in visible stair steps, often called jaggies.
a variety of filtering approaches exists to minimize this visual artifact, but none has been applied selectively only to those edges that would otherwise appear jagged. a recent series of experiments has led to a model of the visibility of jagged edges. here, we demonstrate how these data can be used efficiently to determine when filtering of edges is needed to eliminate the jaggies and when it is unnecessary. this work also provides a template for how the results of psychophysical experiments can be applied in computer graphics to address image-quality questions. avi c. naiman modern trompe l'oeil pat hanrahan text processing - where do we go from here? most universities have sort of "backed into" the word-processing/text-processing business. as large computers become more sophisticated and small word-processing micros become more readily affordable, large numbers of students, faculty, and staff have discovered the joys of online text editing and formatting and have demanded access to these types of systems. this can, and at times does, lead to uncontrolled growth and even pandemonium as users vie for time on the terminals and machines and demand specialized assistance from user services personnel. since there was no "master plan," the process of inputting, editing, formatting, previewing, and outputting text is currently done very much piecemeal using whatever hardware and software is available at the time. this worked fine with the small numbers of knowledgeable users who first got interested in computerized text processing as an outgrowth of their other computer-related activities. however, with more and more "lay" users appearing every day with new text-processing applications, there has developed a real need to organize and give a direction to this previously uncontrolled field. this paper will discuss what is being done at clemson university to provide these controls and will touch on the concept of a fully integrated text processing system, which is the topic of a white paper recently started by the document composition project of ibm's share user group. sandra piazza implementations of time (panel) douglas w. jones james o. henriksen robert m. o'keefe c. dennis pegden robert g. sargent brian w. unger error modeling in stereo navigation larry matthies steven a. shafer links: what use is knowledge? syed s. ali snap-dragging in three dimensions eric a. bier metafield maze bill keays ron macneil energy preserving non-linear filters monte carlo techniques for image synthesis are simple and powerful, but they are prone to noise from inadequate sampling. this paper describes a class of non-linear filters that remove sampling noise in synthetic images without removing salient features. this is achieved by spreading real input sample values into the output image via variable-width filter kernels, rather than gathering samples into each output pixel via a constant-width kernel. the technique is nonlinear because kernel widths are based on sample magnitudes, and this local redistribution of values cannot generally be mapped to a linear function. nevertheless, the technique preserves energy because the kernels are normalized, and all input samples have the same average influence on the output. to demonstrate its effectiveness, the new filtering method is applied to two rendering techniques.
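the energy-preserving filter abstract above spreads each input sample into the output through a kernel whose width depends on the sample's magnitude, normalizing every kernel so total energy is unchanged. a compact numpy sketch of that idea in 2d is given below; the width rule and the box kernel are placeholders of mine, not the paper's kernels:

```python
import numpy as np

def splat_filter(samples, width_for):
    """samples: 2d array of per-pixel sample values.
    width_for: function mapping a sample value to a kernel half-width in pixels."""
    h, w = samples.shape
    out = np.zeros_like(samples, dtype=float)
    for y in range(h):
        for x in range(w):
            r = int(width_for(samples[y, x]))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            area = (y1 - y0) * (x1 - x0)
            # normalized box kernel: each sample's total contribution equals its value
            out[y0:y1, x0:x1] += samples[y, x] / area
    return out

rng = np.random.default_rng(0)
img = rng.random((64, 64))
img[10, 10] = 50.0                        # a noisy "spike" from undersampling
smoothed = splat_filter(img, lambda v: 3 if v > 5.0 else 0)
print(img.sum(), smoothed.sum())          # sums agree: energy is preserved
```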
the first is a monte carlo path tracing technique with the conflicting goals of keeping pixel variance below a specified limit and finishing in a finite amount of time; this application shows how the filter may be used to "clean up" areas where it is not practical to sample adequately. the second is a hybrid deterministic and monte carlo ray-tracing program; this application shows how the filter can be effective even when the pixel variance is not known. holly e. rushmeier gregory j. ward converting sets of polygons to manifold surfaces by cutting and stitching andre gueziec gabriel taubin francis lazarus william horn pic - a language for typesetting graphics pic is a language for specifying pictures so that they can be typeset as an integral part of a document preparation system. the basic objects in pic are boxes, lines, arrows, circles, ellipses, arcs and splines, which may be placed anywhere and labeled with arbitrary text. most of the effort in designing pic has gone into making it possible to specify the sizes and positions of objects with minimal use of absolute coordinates. this paper describes pic, with primary emphasis on those aspects of the language that make it easy to use. brian w. kernighan simulation of lumber processing for improved raw material utilization timothy stiess texture potential mip mapping, a new high-quality texture antialiasing algorithm a refined version of the texture potential mapping algorithm is introduced in which a one-dimensional mip map is incorporated. this has the effect of controlling the maximum number of texture samples required. the new technique is compared to existing texture antialiasing methods in terms of quality and sample count. the new method is shown to compare favorably with existing techniques for producing high quality antialiased, texture-mapped images. gap - a gpss/fortran package the gap package has been developed to permit the linking of standard gpss models to a series of fortran subprograms. since the gap subprograms are transparent to the user, knowledge of the fortran language is not needed. gap provides an easy mechanism to obtain, among other things, values from a number of standard statistical distributions, permits the loading of msavevalues and other snas using data cards and provides statistical analysis of the gpss model output. robert j. wimmert arena software tutorial david a. takus david m. profozich expert system for ship transportation planning yugo taka satoshi fukumura zhi min xie creating polyhedral stellations a process for creating and displaying stellations of a given polyhedral solid is described. a stellation is one of many star-like polyhedra which can be derived from a single solid by extending its existing faces. a program has been implemented which performs the stellation process on an input object and generates a 3-dimensional image of the stellated object on a computer graphics display screen. pictures of icosahedron and rhombictriacontahedron stellations generated by the program are included in the paper. kathleen r. mckeown norman i. badler the implementation of a model-based belief revision system timothy s. c. chou marianne winslett cooperative acceleration: robust conservative distributed discrete event simulation robustness of the simulation mechanism is a requirement for acceptability of distributed simulation environments. we consider complex and erratic distributed conservative simulations, using colliding pucks as a guiding example. 
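the energy-preserving non-linear filters entry above describes splatting each input sample into the output through a normalized kernel whose width grows with the sample's magnitude, instead of gathering samples with a fixed-width kernel. a one-dimensional python sketch of that idea (the gaussian kernel, the width rule, and all names are illustrative assumptions, not the published method):

```python
import math

def spread_samples(samples, width, base_radius=1.0, gain=2.0):
    """splat (position, value) samples into a 1-d image: brighter samples
    get wider kernels, and each kernel is normalized to sum to one so the
    total energy deposited per sample is unchanged."""
    out = [0.0] * width
    for x, value in samples:
        radius = base_radius + gain * value          # magnitude-dependent width
        lo, hi = int(x - 3 * radius), int(x + 3 * radius)
        weights = [math.exp(-0.5 * ((i - x) / radius) ** 2)
                   for i in range(lo, hi + 1)]
        total = sum(weights)
        for i, w in zip(range(lo, hi + 1), weights):
            if 0 <= i < width:                       # energy can leak at borders
                out[i] += value * w / total          # normalized contribution
    return out
```

because the kernel width depends on the data, the mapping from input samples to output pixels is not a linear filter, yet the normalization keeps every sample's average influence equal, which is the energy-preservation property the entry emphasizes.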
a new mechanism is introduced which allows logical processes to cooperate locally and advance. advance is made through the collective selection of the next event in a group of logical processes. the algorithm demonstrates scalability through the locality of its determination. a description and proof of correctness is given. the effectiveness of the cooperative acceleration mechanism is illustrated with measurements on road traffic and colliding pucks simulations. t. d. blanchard t. w. lake s. j. turner unstructured lumigraph rendering we describe an image based rendering approach that generalizes many current image based rendering algorithms, including light field rendering and view- dependent texture mapping. in particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). in the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. when presented with fewer cameras and good approximate geometry, our algorithm behaves like view- dependent texture mapping. the algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. we demonstrate this flexibility with a variety of examples. chris buehler michael bosse leonard mcmillan steven gortler michael cohen an adaptive version of the boost by majority algorithm yoav freund layered construction for deformable animated characters j. e. chadwick d. r. haumann r. e. parent efficient polygon-filling algorithms for raster displays michael r. dunlavey direct ascent, deep space intercept model a stochastic computer model was developed to analyze a direct ascent, deep space intercept system. there are two error sources within the system: the deep space tracking system and the launch vehicle guidance system. the errors are modeled based on equipment performance data and analytical results. to compensate for the system errors, the requirements for a maneuverable intercept vehicle are developed. the requirements are described in terms of sensor acquisition distance and vehicle velocity change. analyses of error sources and tradeoffs amongst maneuver factors are discussed. mark m. mekaru richard c. barclay modelling styles and their support in the casm environment much effort is currently being expended in the design and implementation of integrated software support environments for discrete event simulation modelling. what are the considerations underlying the development of such environments and how are efforts being directed? this paper discusses aspects of the background to current developments and relates this discussion to the context of one such environment namely that of the computer aided simulation modelling (casm) group at the london school of economics. david w. balmer investigating the structure of a lie algebra d. w. rand p. winternitz evian: babies jean pierre roux mac guff ligne nicolas trout a fast, low-space algorithm for multiplying dense multivariate polynomials this paper presents an improved adaptive hybrid algorithm for multiplying dense multivariate polynomials that is both time and space efficient. the hybrid algorithm makes use of two families of univariate algorithms, one karatsuba based and the other dft based, which are applied recursively to solve the multivariate problem. 
the hybrid algorithm is adaptive in that particular univariate algorithms are selected at run time to minimize the time complexity; an order-of-magnitude speedup with respect to classical multiplication is achieved over the entire practical range except for very small problems. empirical investigation shows that most of the theoretical superiority is maintained in actual implementation. the largest contribution to the space requirements of the total algorithm is determined by the univariate algorithm used for the outermost variable; except for quite small problems, selecting univariate algorithms to minimize run time almost always leads to situations where the space requirements of the total algorithm are extremely close to the space required merely to store the result. vangalur s. alagar david k. probst automated reasoning about enterprise concepts ernesto compatangelo giovanni rumolo the cmunited-97 robotic soccer team: perception and multiagent control manuela veloso peter stone kwun han function evaluation on branch cuts albert d. rich david j. jeffrey a development methodology applied to a radar system simulation as systems to be simulated grow larger and more complex and the cost of software escalates, the need for cost- and time-efficient development methodologies becomes critical. this paper describes such a methodology. the methodology is based on standard modern software practices such as top-down design and structured programming. this development methodology has been applied to a radar system simulation which will eventually grow into an air defense simulation employing a surveillance net, and deployed weapons. the objective of the simulation is to provide a computer design tool to aid in the synthesis and parameterization of modern air defense systems. the simulation embodies computational algorithms as well as the contentions for resources among the various elements of the system. the radar system simulation is implemented in the simscript simulation program language. it is documented in the software design and documentation language (sddl) which is a machine processed and machine reproducible pseudo code. jane harmon jay landreth donald lausch padman nagenthiram henry ramirez evaluation of a metric inventory system using simulation traditional measures obtained from the application of theoretical inventory models do not provide detailed information on the ability of multi-echelon, reparable item inventory systems to maintain their supported machines in an operational status. monte carlo simulation techniques are applied to obtain estimates of the operational status actually attained. an advantage of the simulation techniques is that they do not require all the strict assumptions of the theoretical models. the slam simulation language developed by pritsker and pegden is used for this analysis. james r. coakley charles g. carpenter a multiagent perspective of parallel and distributed machine learning gerhard weiß real-time previewing for volume visualization takafumi saito a synergetic approach to speculative price volatility taisei kaizoji a network interface unit simulation using micro passim this paper describes how micro passim, a gpss based simulation system, was transported from the apple ii to the hp 9836. the problems associated with moving a large program from one ucsd pascal system to another are discussed. micro passim was transported to the hp system so that an ethernet to hpib interface board could be modeled.
the model is described and the results obtained from the simulation are discussed. a discussion of the advantages and disadvantages of using micro passim rather than a standard language, such as gpss, is also included. tom p. vayda larry l. wear efficient identification of important factors in large scale simulations large, complex computer simulation models can require prohibitively costly and time-consuming experimental programs to study their behavior. therefore we may want to concentrate the analysis on the set of "most important" factors (i.e., input variables). factor screening experiments, which attempt to identify the more important variables, can be extremely useful in the study of such models. the number of computer runs available for screening, however, is usually severely limited. in fact, the number of factors often exceeds the number of available runs. in this paper we present a survey of supersaturated designs for use in factor screening experiments. the designs considered are: random balance, systematic supersaturated, group screening, modified group screening, t-optimal, r-optimal, and search designs. we discuss in general terms the basic technique, advantages, and disadvantages of each procedure surveyed. carl a. mauro a distributed simulation model of air traffic in the national airspace system eric l. blair frederick p. wieland anthony e. zukas a decision making method and its application in unconstrained handwritten character recognition p. ahmed c. y. suen the runtime creation of code for printing simulation output john h. reynolds team-partitioned, opaque-transition reinforcement learning peter stone manuela veloso perpetuities reasoning captured and automated in a logic program barnett w. glickfeld perspectives on simulation using gpss thomas j. schriber simulation, technology, and the decision process philip j. kiviat reconstruction filters in computer-graphics don p. mitchell arun n. netravali a skeletal-based solid editor robert blanding cole brooking mark ganter duane storti visorama: a complete virtual panorama system andre matos luiz velho jonas gomes andre parente heloisa siffert can humans think? -- a general review of hi h. s. hartl comparison of simulation results of dissimilar models analytical and simulation models for the same idealised but realistic computer system are considered. the results for cpu utilisation and cpu queue length from the models compare very favourably, suggesting that the technique could be used to confirm the accuracy of a simulation model. increasingly dissimilar models are also systematically investigated. it is found that the degree of dissimilarity of the models can be surprisingly large before the method of comparison can be considered as having broken down. c lazos special purpose automatic programming for hidden surface elimination in many applications of three dimensional computer graphics, the appearance of the same scene must be computed repeatedly for many different positions of the viewer. this paper concerns a scheme for exploiting this property of an application for the purpose of improving the efficiency of the hidden surface computation. the scheme involves a kind of automatic programming: for each scene to be considered, a different special purpose program is automatically constructed. the special purpose program then takes the position of the viewer as input, and generates a suitable description of the scene with hidden surfaces removed as output. 
since the special purpose program has a very limited task to perform - it is adapted to handling just one scene - it can be much faster than any general purpose algorithm would be for the same scene. the paper describes a method by which special purpose programs for hidden surface elimination can be constructed in a fully automatic manner. the method has been implemented, and results of experiments are given. chris goad learning to remove internet advertisements nicholas kushmerick simplifying the modeling of multiple activities, multiple queuing, and interruptions: a new low-level data structure most conventional discrete-event simulation software assumes a simple progression of entities through queues and activities. such software cannot cope easily with modeling systems where entities can be present in more than one queue, can be involved in more than one activity (i.e., scheduled for more than one event), or can be interrupted while queuing or taking part in an activity in order to join another queue or take part in a different activity. low-level data structures to address these problems have been implemented in pascal by extending an existing suite of pascal procedures, called pascal_sim. the problems and their solutions are discussed in the context of machine breakdown in a production system. comparisons between the use of the new structures and the existing ones showed some gain in computational efficiency and considerable improvement in ease of modeling. the generality of the data structure is considered. ruth m. davies robert m. o'keefe huw t. o. davies determinants of immersivity in virtual reality: graphics vs. action alan r. mitchell stuart rosen william bricken ron martinez brenda laurel artificial fishes: physics, locomotion, perception, behavior this paper proposes a framework for animation that can achieve the intricacy of motion evident in certain natural ecosystems with minimal input from the animator. the realistic appearance, movement, and behavior of individual animals, as well as the patterns of behavior evident in groups of animals fall within the scope of the framework. our approach to emulating this level of natural complexity is to model each animal holistically as an autonomous agent situated in its physical world. to demonstrate the approach, we develop a physics-based, virtual marine world. the world is inhabited by artificial fishes that can swim hydrodynamically in simulated water through the motor control of internal muscles that motivate fins. their repertoire of behaviors relies on their perception of the dynamic environment. as in nature, the detailed motions of artificial fishes in their virtual habitat are not entirely predictable because they are not scripted. xiaoyuan tu demetri terzopoulos the ides framework: a case study in development of a parallel discrete-event simulation system david m. nicol michael m. johnson ann s. yoshimura microchoice bounds and self bounding learning algorithms john langford avrim blum broadcast of local eligibility: behavior-based control for strongly cooperative robot teams barry brian werger maja j. mataric initial transient effects in the frequency domain douglas j. morrice lee w. schruben sheldon h. jacobson user-interface devices for rapid and exact number specification ari rappoport maarten van emmerik a speculation-based approach for performance and dependability analysis: a case study yiqing huang zbigniew kalbarczyk ravishankar k.
iyer a unifying framework for distributed simulation a theory of distributed simulation applicable to both discrete-event and continuous simulation is presented. it derives many existing simulation algorithms from the theory and describes an implementation of a new algorithm derived from the theory. a high-level discrete-event simulation language has been implemented, using the new algorithm, on parallel computers; performance results of the implementation are also presented. r. bagrodia k. m. chandy wen toh liao a package for computations in simple lie algebra representations a. a. zolotykh taking advantage of models for legal classification legal reasoning is often couched in terms of legal classification. we examine how three models of classification --- classical, probabilistic and exemplar --- are used to perform legal classification. we argue that all three models of classification are implicitly applied by existing ai methods. the cabaret ("case-based reasoning tool") system is suggested as an architecture that applies all three models. the relative difficulty of revising knowledge in rule form, in hypo-style dimension form, and exemplar form is considered. d. b. skalak a unified distributed simulation system jeff mcaffer lagrangian visualization of natural convection mixing flows luis m. de la cruz victor godoy eduardo ramos reality engine graphics kurt akeley computer-based readability indexes computer-based document preparation systems provide many aids to the production of quality documents. a text editor allows arbitrary text to be entered and modified. a text formatter then imposes defined rules on the form of the text. a spelling checker ensures that each word is a correctly spelled word. none of these aids, however, affect the meaning of the document; the document may be well-formatted and correctly spelled but still incomprehensible. douglas r. mccallum james l. peterson synchronizing simulations in distributed interactive simulation sandra cheung margaret loper news lisa meeden modular simulation environments: an object manager based architecture charles r standridge an update on the story of computer graphics carl machover book reviews karen t. sutherland hardware assisted volume rendering of unstructured grids by incremental slicing david m. reed roni yagel asish law po-wen shin naeem shareef introduction to slam ii and slamsystem j. j. o'reilly k. c. nordlund computers aid index and glossary preparation i have written many books but only in the past two years have i had available my own computer and word processor. now i find the word processor indispensable to my writing endeavors. but my computer can do more than just compose the manuscript. it can help me to compile and organize indexes and glossaries. i have just finished three books and i got assistance from my computer on all of them for the indexes. there are four kinds of things my computer does for me: 1. extracts information directly from the manuscript, providing important words i might want to index; 2. helps me to make an index file; 3. puts this file in alphabetic order; 4. prints out the index in the desired format. some of the help i get is from program packages which i purchase. other help i get from basic programs which i have written to assist me further. i have not seen many articles which describe how to combine purchased software with your programs to facilitate a unified activity. first i will describe the system that i have and how i am using it.
then i will tell you what i wanted done in terms of indexes. finally i will give you the details of the programs i have written and how i use the packages that you can buy. ivan flores intelligent manufacturing-simulation agents tool (imsat) gajanana nadoli john e. biegel a software test-bed for the development of 3-d raster graphics systems we describe a set of utility routines for 3-d shaded display which allow us to create raster scan display systems for various experimental and production applications. the principal feature of this system is a flexible scan conversion processor that can simultaneously manage several different object types. communications between the scan conversion routine and processes which follow it in the display pipeline can be routed through a structure called a "span buffer" which retains some of the high resolution, three dimensional data of the object description and at the same time has the characteristics of a run length encoded image. turner whitted david m. weimer creating natural language output for real-time applications susan w. mcroy songsak channarukul syed s. ali hierarchical multi-agent reinforcement learning in this paper we investigate the use of hierarchical reinforcement learning to speed up the acquisition of cooperative multi-agent tasks. we extend the maxq framework to the multi-agent case. each agent uses the same maxq hierarchy to decompose a task into sub-tasks. learning is decentralized, with each agent learning three interrelated skills: how to perform subtasks, which order to do them in, and how to coordinate with other agents. coordination skills among agents are learned by using joint actions at the highest level(s) of the hierarchy. the q nodes at the highest level(s) of the hierarchy are configured to represent the joint task-action space among multiple agents. in this approach, each agent only knows what other agents are doing at the level of sub-tasks, and is unaware of lower level (primitive) actions. this hierarchical approach allows agents to learn coordination faster by sharing information at the level of sub-tasks, rather than attempting to learn coordination taking into account primitive joint state-action values. we apply this hierarchical multi-agent reinforcement learning algorithm to a complex agv scheduling task and compare its performance and speed with other learning approaches, including flat multi-agent, single agent using maxq, selfish multiple agents using maxq (where each agent acts independently without communicating with the other agents), as well as several well-known agv heuristics like "first come first serve", "highest queue first" and "nearest station first". we also compare the tradeoffs in learning speed vs. performance of modeling joint action values at multiple levels in the maxq hierarchy. rajbala makar sridhar mahadevan mohammad ghavamzadeh fishing david gainey john “jr” robeck cassidy curtis carl rosendahl julie haddon web 3d roundup don brutzman timothy childs interactive modeling and simulation of transaction flow or network models using the ada simulation support environment the ada simulation support environment (asse) is a software system, with the purpose of supporting the development and maintenance of simulation models written in ada throughout their life cycle. we describe here the transaction flow or network part of the asse, which allows one to build models like those in gpss or slam.
our view of such models is slightly different from that of the above mentioned languages, which is demonstrated in detail by the server/resource process. the design stresses modular top-down development using submodels. models can be developed and tested interactively. heimo h. adelsberger new directions for the design of advanced simulation systems this paper will review the philosophy behind advanced simulation methodologies and the advantages these concepts offer to simulation analysts. it also proposes a generic design of an integrated simulation system based on an analysis of theoretical studies, a review of recently developed simulation systems, and current research. michael g. ketcham john w. fowler don t. phillips functionally based virtual computer art alexei sourin a decision matrix approach for constructing multiple knowledge bases system ning shan xiaohua hu from on-line to batch learning nick littlestone getting it off the screen and onto paper (panel session): current accomplishments and future goals gary w. meyer ricardo j. motta joann taylor maureen c. stone techniques for automatically correcting words in text (abstract) karen kukich a more flexible image generation environment a supervisory process is used to distribute picture-generation tasks to heterogeneous subprocesses. significant advantages accrue by tailoring the subprocesses to their tasks. in particular, scan conversion algorithms tailored to different surface types may be used in the same image, a changing mixture of processors is possible, and, by multiprogramming, a single processor may be used more effectively. a two-level shape data structure supports this execution environment, allowing top-level priority decisions which avoid comparisons between surface elements from non-interfering objects during image construction. f. c. crow the forgotten planet marc urlus michael alalouf rachel lamisse serge brackman christian leroy expression cloning we present a novel approach to producing facial expression animations for new models. instead of creating new facial animations from scratch for each new model created, we take advantage of existing animation data in the form of vertex motion vectors. our method allows animations created by any tools or methods to be easily retargeted to new models. we call this process expression cloning and it provides a new alternative for creating facial animations for character models. expression cloning makes it meaningful to compile a high-quality facial animation library since this data can be reused for new models. our method transfers vertex motion vectors from a source face model to a target model having different geometric proportions and mesh structure (vertex number and connectivity). with the aid of an automated heuristic correspondence search, expression cloning typically requires a user to select fewer than ten points in the model. cloned expression animations preserve the relative motions, dynamics, and character of the original facial animations. jun-yong noh ulrich neumann environment matting extensions: towards higher accuracy and real-time capture environment matting is a generalization of traditional bluescreen matting. by photographing an object in front of a sequence of structured light backdrops, a set of approximate light-transport paths through the object can be computed. the original environment matting research chose a middle ground, using a moderate number of photographs to produce results that were reasonably accurate for many objects.
in this work, we extend the technique in two opposite directions: recovering a more accurate model at the expense of using additional structured light backdrops, and obtaining a simplified matte using just a single backdrop. the first extension allows for the capture of complex and subtle interactions of light with objects, while the second allows for video capture of colorless objects in motion. yung-yu chuang douglas e. zongker joel hindorff brian curless david h. salesin richard szeliski prime rule-based methodologies give inadequate control the use of rule-based methodologies in the development of expert systems is widespread. in order to provide good explanations in these systems it is desirable that the rules be prime. the difficulty of expressing control in such rules, and thus arriving at a desirable sequencing of events, has led to pragmatic additions to the basic methodology. recent developments in the theory of decision processes have provided new insight into the form of a desirable sequencing. prime rules, even when augmented by sophisticated control strategies, cannot generate from backward chaining all these desirable sequencings. furthermore, if one of these desirable sequencings happens to be generated from prime rules it may be by luck rather than design. j r b cockett j herrera incremental reinforcement learning for designing multi-agent systems olivier buffet alain dutech françois charpillet empirical analysis of inductive knowledge acquisition methods h. m. chung contribution of a multi-agent cooperation model in a hospital environment samir aknine hamid aknine smooth transitions between bump rendering algorithms barry g. becker nelson l. max simultaneous and efficient simulation of highly dependable systems with different underlying distributions philip heidelberger victor f. nicola perwez shahabuddin brillia yoichiro kawaguchi programs for applying symmetries thomas wolf intelligent scissors for image composition eric n. mortensen william a. barrett a triangulation algorithm from arbitrary shaped multiple planar contours conventional triangulation algorithms from planar contours suffer from some limitations. for instance, incorrect results can be obtained when the contours are not convex, or when the contours in two successive slices are very different. in the same way, the presence of multiple contours in a slice leads to ambiguities in defining the appropriate links. the purpose of this paper is to define a general triangulation procedure that provides a solution to these problems. we first describe a simple heuristic triangulation algorithm which is extended to nonconvex contours. it uses an original decomposition of an arbitrary contour into elementary convex subcontours. then the problem of linking one contour in a slice to several contours in an adjacent slice is examined. to this end, a new and unique interpolated contour is generated between the two slices, and the link is created using the previously defined procedure. next, a solution to the general case of linking multiple contours in each slice is proposed. finally, the algorithm is applied to the reconstitution of the external surface of a complex shaped object: a human vertebra. a. b. ekoule f. c. peyrin c. l. odet applying knowledge-based system design and simulation in information system requirements determination kung-chao liu jerzy w. 
rozenblit letter from the chair jeff bradshaw issue spotting in legal cases for any system that uses previous experience to solve problems in new situations, it is necessary to identify the features in the situation that should match features in the previous cases through some process of situation analysis. in this paper, we examine issue spotting; in particular, we present how issue spotting is implemented in chaser, a legal reasoning system that works in the domain of tort law. the approach presented here is a compromise between generality and efficiency, and is applicable to a range of problems and domains outside of legal reasoning. in particular, it presents a principled way to use multiple cases for a single problem by exploiting the inherent structure present in many domains. barbara cuthill robert mccartney experiments on multistrategy learning by meta-learning philip k. chan salvatore j. stolfo handling communications in concurrent kbs a. bokma m. huiban a. slade s. a. jarvis needed: a new test of intelligence w. lewis johnson radiosity and hybrid methods we examine various solutions to the global illumination problem, based on an exact mathematical analysis of the rendering equation. in addition to introducing efficient radiosity algorithms, we present a uniform approach to reformulate all of the basic radiosity equations used so far. using hybrid methods we are able to analyze possible combinations of the view-dependent ray-tracing method and of the low-resolution radiosity-based method, and to offer new algorithms. laszlo neumann attila neumann adjustable tools: an object-oriented interaction metaphor david salesin ronen barzel representation theory in cayley: tools and algorithms gerhard j. a. schneider effects of waiting overheads on conservative parallel simulation yi-bing lin solid modeling with hardware (panel session) pierre j. malraison gershon kedem greg lee donald meagher animating explosions in this paper, we introduce techniques for animating explosions and their effects. the primary effect of an explosion is a disturbance that causes a shock wave to propagate through the surrounding medium. the disturbance determines the behavior of nearly all other secondary effects seen in an explosion. we simulate the propagation of an explosion through the surrounding air using a computational fluid dynamics model based on the equations for compressible, viscous flow. to model the numerically stable formation of shocks along blast wave fronts, we employ an integration method that can handle steep pressure gradients without introducing inappropriate damping. the system includes two-way coupling between solid objects and surrounding fluid. using this technique, we can generate a variety of effects including shaped explosive charges, a projectile propelled from a chamber by an explosion, and objects damaged by a blast. with appropriate rendering techniques, our explosion model can be used to create such visual effects as fireballs, dust clouds, and the refraction of light caused by a blast wave. gary d. yngve james f. o'brien jessica k. hodgins software integration techniques for modular cognitive modeling systems john drake image-based modeling and photo editing we present an image-based modeling and editing system that takes a single photo as input. we represent a scene as a layered collection of depth images, where each pixel encodes both color and depth.
starting from an input image, we employ a suite of user-assisted techniques, based on a painting metaphor, to assign depths and extract layers. we introduce two specific editing operations. the first, a "clone brushing tool," permits the distortion-free copying of parts of a picture, by using a parameterization optimization technique. the second, a "texture-illuminance decoupling filter," discounts the effect of illumination on uniformly textured areas, by decoupling large- and small-scale features via bilateral filtering. our system enables editing from different viewpoints, extracting and grouping of image-based objects, and modifying the shape, color, and illumination of these objects. byong mok oh max chen julie dorsey fredo durand fast ray tracing by ray classification james arvo david kirk modeling waves and surf darwyn r. peachey maps: multiresolution adaptive parameterization of surfaces aaron w. f. lee wim sweldens peter schröder lawrence cowsar david dobkin everything old is new again: remaking computer games richard rouse multiclass learning, boosting, and error-correcting codes venkatesan guruswami amit sahai two bit/pixel full color encoding graham campbell thomas a. defanti jeff frederiksen stephen a. joyce lawrence a. leske computer-generated pen-and-ink illustration this paper describes the principles of traditional pen-and-ink illustration, and shows how a great number of them can be implemented as part of an automated rendering system. it introduces "stroke textures," which can be used for achieving both texture and tone with line drawing. stroke textures also allow resolution-dependent rendering, in which the choice of strokes used in an illustration is appropriately tied to the resolution of the target medium. we demonstrate these techniques using complex architectural models, including frank lloyd wright's "robie house." georges winkenbach david h. salesin state event in combined simulation within the context of combined (discrete and continuous) simulation, modelers attribute considerable importance to the concept of state event representation. unfortunately, since the methodology introduced in gasp iv, little research effort has been devoted to the development of a more flexible and a more powerful modelling tool for representation of state event within a support system for model development environment. this paper briefly reviews the existing methodology and suggests a new approach useful for representation of a state event triggered by a complex condition involving two or more continuously changing attributes. irma angulo gholamreza torkzadeh knowledge-level analysis of planning systems andre valente dab: interactive haptic painting with 3d virtual brushes we present a novel painting system with an intuitive haptic interface, which serves as an expressive vehicle for interactively creating painterly works. we introduce a deformable, 3d brush model, which gives the user natural control of complex brush strokes. the force feedback enhances the sense of realism and provides tactile cues that enable the user to better manipulate the paint brush. we have also developed a bidirectional, two-layer paint model that, combined with a palette interface, enables easy loading of complex blends onto our 3d virtual brushes to generate interesting paint effects on the canvas. the resulting system, dab, provides the user with an artistic setting, which is conceptually equivalent to a real-world painting environment.
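the image-based modeling and photo editing entry above names a "texture-illuminance decoupling filter" that separates large- and small-scale features via bilateral filtering. a minimal grayscale bilateral filter in python (a generic textbook formulation for illustration, not the paper's implementation; parameter names are hypothetical):

```python
import math

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    """edge-preserving smoothing of a grayscale image given as a list of
    rows of floats in [0, 1]: each output pixel is a weighted average whose
    weights fall off with both spatial distance and intensity difference."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            center = img[y][x]
            acc = norm = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        v = img[yy][xx]
                        weight = math.exp(
                            -(dx * dx + dy * dy) / (2 * sigma_s ** 2)
                            - (v - center) ** 2 / (2 * sigma_r ** 2))
                        acc += weight * v
                        norm += weight
            out[y][x] = acc / norm
    return out
```

in a decoupling of the kind the entry describes, the filtered result plays the role of the slowly varying, illumination-like layer and the residual against the input plays the role of the fine texture layer; the entry's filter works on that principle, though its exact formulation may differ.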
several users have tested dab and were able to start creating original art work within minutes. bill baxter vincent scheib ming c. lin dinesh manocha research frontiers in virtual reality steve bryson steven k. feiner frederick p. brooks philip hubbard randy pausch andries van dam operational knowledge acquisition problems for air traffic expert controller a. j. kornecki the effect of strict firing and real characteristics of multiprocessors on performance - a simulation approach vladimir i. ivanov hamdy s. soliman an approach for federating parallel simulators steve l. ferenci kalyan s. perumalla richard m. fujimoto building dynamic organizations of distributed, heterogeneous agents david v. pynadath milind tambe nicolas chauvat on ray tracing parametric surfaces daniel l. toth verification, validation, and accreditation (vv&a) of models and simulations through reduced order metamodels don caughlin scanline rendering of parametric surfaces a scanline algorithm is described which renders bicubic patches directly from the parametric description without producing a polygonal approximation. the algorithm is partially based on earlier work by whitted. a primitive object, called a "curved-edge polygon", is defined, and an algorithm for breaking down a bicubic patch into the primitive objects is described. a general surface intersection method is employed to provide a robust silhouette edge detector. shades are computed by calculating a cubic approximation to the normal surface and performing either a cubic or a linear interpolation of the bounding edge normals across the scanline. subdivision of parametric surfaces is used to reduce the complexity of the surfaces being rendered, providing dramatic improvement in the results of both the silhouette detector and the shading methods. dino schweitzer elizabeth s. cobb a comparative study of load sharing in heterogeneous multicomputer systems sayed a. banawan nidal m. zeidat t-buffer: fast visualization of relativistic effects in space-time ping-kang hsiung robert h. thibadeau michael wu dynamic texture: physically based 2d animation mikio shinya masakatsu aoki ken tsutsuguchi naoya kotani grap - a language for typesetting graphs the authors describe a system that makes it easy and convenient to describe graphs and to include them as an integral part of the document formatting process. jon l. bentley brian w. kernighan go-go scott petill announcements amruth kumar an implementation of eisner v. macomber l. thorne mccarty occlusion culling with optimized hierarchical buffering ned greene introduction to simfactory 11.5 m. c. rohbough experimenting and theorizing in theory formation the bacon system, developed by langley, simon and bradshaw, has shown the utility of a data driven discovery system. a new system, called fahrenheit, has been built which extends bacon, making it more robust and allowing it to perform a wider range of discovery activity. the new system extends bacon in several ways: 1) it determines the scope of a law by making simulated experiments, and by searching for regularities that describe the scope boundaries. 2) the world model in which the experiments are performed is more sophisticated. 3) the order in which the data are considered is placed under the control of fahrenheit so that the system continues the discovery process even if no regularity is found for a particular variable.
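the fahrenheit entry above extends bacon-style data-driven discovery, in which candidate laws are proposed by noticing that some combination of measured variables stays nearly constant across simulated experiments. a toy python illustration of that kind of regularity test (the candidate combinations and the tolerance are illustrative, not the system's actual search; observations are assumed nonzero):

```python
def find_invariant(xs, ys, rel_tol=0.01):
    """return a label and value for a simple combination of x and y that is
    nearly constant over the observations, or None if no candidate fits."""
    candidates = {
        "x*y": [x * y for x, y in zip(xs, ys)],
        "x/y": [x / y for x, y in zip(xs, ys)],
        "x*y**2": [x * y * y for x, y in zip(xs, ys)],
    }
    for label, values in candidates.items():
        mean = sum(values) / len(values)
        spread = max(abs(v - mean) for v in values)
        if mean != 0 and spread <= rel_tol * abs(mean):
            return label, mean            # e.g. a boyle-like law x*y = const
    return None
```

the entry's scope-finding step can then be read as running such tests over regions of the simulated world and recording where a discovered regularity stops holding.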
b w koehn j m zytkow model-based fault diagnosis: knowledge acquisition and system design within this paper a model-based approach for knowledge-based diagnosis-systems is discussed. the model is derived from the fault recognition and fault searching techniques used by maintenance experts in general. the presented model has implications on the knowledge acquisition method in a way that only such data that are usually known to the maintenance expert have to be gathered and interlinked. this means that knowledge acquisition has only to deal with a knowledge source that can be prestructured. the model for knowledge-based diagnosis-systems and the knowledge acquisition method has been successfully used to develop diagnosis systems for a smd-insertion machine and an extrusion machine. time needed for knowledge acquisition has been reduced. thomas h. schmidt masks piotr karwas hmds, caves & chameleon: a human-centric analysis of interaction in virtual space there are various approaches to implementing virtual reality (vr) systems. the head mounted display (hmd) and cave approaches are two of the best known. in this paper, we discuss such approaches from the perspective of the types of interaction that they afford. our analysis looks at interaction from three perspectives: solo interaction, collaborative interaction in the same physical space and remote collaboration. from this analysis emerges a basic taxonomy that is intended to help systems designers make choices that better match their implementation with the needs of their application and users. bill buxton george w. fitzmaurice editorial philip heidelberger fake fur rendering dan b. goldman preventing existence we discuss the treatment of prevention statements in both natural language semantics and knowledge representation, with particular regard to existence entailments. first order representations with an explicit existence predicate are shown to not adequately capture the entailments of prevention statements. a linguistic analysis is framed in a higher order intensional logic, employing a fregean notion of existence as instantiation of a concept. we discuss how this can be mapped to a cyc style knowledge representation. cleo condoravdi dick crouch john everett valeria paiva reinhard stolle danny bobrow martin van den berg omc: an organisational model for co-operations hans czap joachim reiter automatic image placement to provide a guaranteed frame rate daniel g. aliaga anselmo lastra a probabilistic model for natural language understanding michael atherton debra a. lelewer on the isomorphism problem for finite-dimensional binomial algebras binomial algebras are finitely presented algebras defined by monomials or binomials. given two binomial algebras, one important problem is to decide whether or not they are isomorphic as algebras. we study an algorithm for solving this problem, when both algebras are finite-dimensional over a field. in particular, when they are monomial algebras (i.e. binomial algebras defined by monomials only), the problem has already been completely solved by the presentation uniqueness. in this paper, we provide some necessary conditions in terms of partially ordered sets for two certain binomial algebras to be isomorphic. in other words, invariants of the binomial algebras are presented. these conditions together serve as an effective procedure for solving the isomorphism problem. k.
shirayanagi a family of new algorithms for soft filling soft filling algorithms change the color of an anti-aliased region, maintaining the anti-aliasing of the region. the two published algorithms for soft filling work only if the foreground region is anti-aliased against a black background. this paper presents three new algorithms. the first fills against a region consisting of any two distinct colors, and is faster than the published algorithms on a pixel-by-pixel basis for an rgb frame buffer; the second fills against a region composed of three distinct colors; and the third fills against a region composed of four distinct colors. as the power of the algorithms increases, so do the number of assumptions they make, and the computational cost. kenneth p. fishkin brian a. barsky adaptive precision in texture mapping andrew glassner spatiotemporal sensitivity and visual attention for efficient rendering of dynamic environments we present a method to accelerate global illumination computation in prerendered animations by taking advantage of limitations of the human visual system. a spatiotemporal error tolerance map, constructed from psychophysical data based on velocity dependent contrast sensitivity, is used to accelerate rendering. the error map is augmented by a model of visual attention in order to account for the tracking behavior of the eye. perceptual acceleration combined with good sampling protocols provide a global illumination solution feasible for use in animation. results indicate an order of magnitude improvement in computational speed. hector yee sumanita pattanaik donald p. greenberg arms: arbitrary robot modelling system (abstract only) one of the problems associated with controlling a robot arm is finding the arm's position control equations. the position control equations tell the magnitude and direction of the motion required from each joint in the arm to position the end effector at an arbitrary point in space with some arbitrary orientation. dr. c. y. ho described how these equations can be derived [1]. dr. ho's method relies on being able to describe each link in the arm by a special matrix called an "a matrix". the a matrices are simple to derive from a mathematical model of a robot called a "link parameter table". the link parameter table describes each link in the robot. it is assumed that the robot is in a special position called "home position". home position requirements are given in terms of a real world xyz coordinate system. all joints in the robot must move around or along at most two of the three real world axes. there must be no motion along or around the third axis. the approach vector for the end effector must also lie along one of these same two axes. any fixed distances in the arm must be along the third axis and possibly one of the other two axes. many times a robot won't be in home position. it must be manipulated until these requirements are met. robots can be divided into three classes depending on how easy it is to get them in home position. for the first class it is simple; either they are already in home position or they can be put in home position by turning some of their revolute joints. in the second class, some of the fixed distances must be combined with other fixed distances or prismatic joints. this must be done carefully so that the modified robot has the same functionality as the original robot. in the third class, it is impossible to meet the requirements. these impossible cases are handled separately. 
before the development of the arms system, finding a robot's home position was largely a process of trial and error. this was especially true if some of the fixed distances had to be combined in addition to turning some of the joints. the arms system automates this process. its inputs are the direction of the approach vector for the end effector and a physical description of each link in the robot. from this information the arms system will find a link parameter table (if it's possible) and derive the a matrices for the robot. the a matrices are in a format that can be used for input to a symbolic mathematics package. it is intended that the mathematics package will be used to solve for the position control equations. the differential speed control matrix could also be derived. the differential speed control matrix tells how fast each joint must move so that all the joints start and stop moving at the same time. an expert system approach was used in a prolog prototype of the system. however, developing the prototype resulted in an algorithm for the process. the algorithm is now being implemented in common lisp on a dec micro-vax ii. william r. gerlt 3d facial reconstruction and visualization of ancient egyptian mummies using spiral ct data maurizio forte discrete event simulation methodologies and formalisms mario r. garzia beyond formatting and fonts: changing academic needs for computer-assisted document composition in the last ten years, major trends in academic computing have included a number of particularly interesting changes: 1) increases in cpu capacity, 2) increases in baud rates, 3) the substitution of terminals for card readers, 4) the shift from batch to interactive systems, and 5) the development of computerized text processing. the development of academic text processing has been substantial in several areas of application. this paper will present an overview of the past, present, and possible future of one of these areas: the processing of large and/or complex texts, a task which in this paper will be referred to as "document composition." joel j. mambretti gks for imaging by adopting the graphical kernel system (gks), groups who manipulate pixelated images can take advantage of device independent graphics without giving up the functions which have traditionally been hardware dependent. most of these functions, including image i/o, zoom, pan, lookup table manipulation, and cursor reading, are supported within gks; several other functions, such as the use of multiple image planes and multiple look up tables, are accomodated by the gks escape and the generalized drawing primitive (gdp). because gks has powerful inquire capabilities, it's possible to tightly customize applications code to a particular hardware display device. the inquire functions also can be used by an applications program to determine the hardware display size and thus avoid resampling of a pixelated image. gks thoroughly separates applications programs from device- dependencies; however, device drivers must still be written. the time and effort of writing device drivers can be largely eliminated by using a single programmable device driver, which is tailored to each device by a graphcap configuration file, in much the same way as termcap is used by berkeley unix implementations. cliff stoll image-based modeling and lighting paul e. 
debevec legal knowledge acquisition using case-based reasoning and model inference although case-based reasoning comes out in order to solve the knowledge acquisition bottleneck, a case structure acquisition bottleneck has emerged, superseding it. because we cannot decide an appropriate case structure in advance, a framework for cbr should be able to improve a case structure dynamically, collecting and analyzing cases. here is discussed a new framework for knowledge acquisition using cbr and model inference. model inference tries to obtain new descriptors (predicates) with interaction of a domain expert, regarding the predicate as the slots that compose a case structure, with an eye to the function of theoretical term generation. the framework has two features: (1) cbr obtains a more suitable group of slots (a case structure) incrementally through cooperation with model inference, and (2) model inference with theoretical term capability discovers the rules which deal with a given task better. furthermore, we evaluate the feasibility of the framework by implementing it to deal with law interpretation and certify two features with the framework. takahira yamaguti masaki kurematsu a framework for integrating perception, action, and trial-and-error learning steven d. whitehead effect of sinusoidal function on backpropagation learning bijan karimi kaveh ashenayi taraneh baradaran seyed reconstructing surfaces from sparse depth information jesse s. jin brian g. cox wai k. yeap user studies of an interdependency-based interface for acquiring problem-solving knowledge this paper describes a series of experiments with a range of users to evaluate an intelligent interface for acquiring problem-solving knowledge to describe how to accomplish a task. the tool derives the interdependencies between different pieces of knowledge in the system and uses them to guide the user in completing the acquisition task. the paper describes results obtained when the tool was tested with a wide range of users, including end users. the studies show that our acquisition interface saves users an average of 32% of the time it takes to add new knowledge, and highlight some interesting differences across user groups. the paper also describes the areas that need to be addressed in future research in order to make these tools usable by end users. jihie kim yolanda gil the race ii engine for real-time volume rendering in this paper, we present the race ii engine, which uses a hybrid volume rendering methodology that combines algorithmic and hardware acceleration to maximize ray casting performance relative to the total amount of volume memory throughput contained in the system. the challenge for future volume rendering accelerators will be the ability to process higher resolution datasets at over 10hz without utilizing large-scale, and therefore, expensive designs. the limiting performance factor for large datasets will be the throughput between the volume memory subsystem and computational units. unfortunately, the throughput between memory devices and computational units does not scale with moore's law. as a result, memory efficient solutions are needed that maximize the input-output relationship between volume memory throughput and frame rate. the race ii design utilizes this approach and achieves an input-output relationship up to 4× larger than many solutions proposed in the literature. as a result, this architecture is well suited for meeting the challenges of next generation datasets.
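the race ii entry above is about accelerating volume ray casting in hardware; the per-ray operation being accelerated can be written compactly. a generic front-to-back compositing sketch in python (textbook ray casting with a single gray channel, not the race ii datapath; the transfer function is a hypothetical stand-in):

```python
def cast_ray(samples, transfer, opacity_cutoff=0.99):
    """composite scalar samples taken along one ray, front to back.
    transfer maps a scalar sample to (color, opacity); early ray
    termination stops once accumulated opacity is nearly one."""
    color, alpha = 0.0, 0.0
    for s in samples:
        c, a = transfer(s)
        color += (1.0 - alpha) * a * c        # front-to-back "over" operator
        alpha += (1.0 - alpha) * a
        if alpha >= opacity_cutoff:           # early ray termination
            break
    return color, alpha
```

the architectural question raised in the entry is how many of these sample fetches and blend steps can be sustained for a fixed volume-memory bandwidth, which is why it reports an input-output relationship between memory throughput and frame rate rather than raw compute.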
harvey ray deborah silver integrated simulation support system concepts and examples new simulation software needs to provide a framework for performing entire simulation studies, to recognize that individuals who build models are typically not the individuals who perform the simulation experiments with models, and to support analysts who have only a basic level of computer technical skills. pritsker & associates has developed a fourth generation integrated simulation support system for slam ii which meets these objectives. charles r. standridge a. alan b. pritsker jean o'reilly batch size selection for the batch means method chiahon chien fcp: a summary of performance results flat concurrent prolog is a simple concurrent programming language which has been used for a variety of non-trivial applications. a compiler based parallel implementation has been completed which operates on an intel hypercube. this paper presents a brief summary of performance data from a recent study of the implementation. three categories of program were studied: parallel applications, uniprocessor benchmarks and communication stereotypes. the latter programs are abstractions of common parallel programming techniques and serve to quantify the cost of communication in the language. s. taylor r. shapiro e. shapiro color quantization by dynamic programming and principal analysis color quantization is a process of choosing a set of k representative colors to approximate the n colors of an image, k < n, such that the resulting k-color image looks as much like the original n-color image as possible. this is an optimization problem known to be np-complete in k. however, this paper shows that by ordering the n colors along their principal axis and partitioning the color space with respect to this ordering, the resulting constrained optimization problem can be solved in o(n + km^2) time by dynamic programming (where m is the intensity resolution of the device). traditional color quantization algorithms recursively bipartition the color space. by using the above dynamic-programming algorithm, we can construct a globally optimal k-partition, k>2, of a color space in the principal direction of the input data. this new partitioning strategy leads to smaller quantization error and hence better image quality. other algorithmic issues in color quantization such as efficient statistical computations and nearest-neighbor searching are also studied. the interplay between luminance and chromaticity in color quantization with and without color dithering is investigated. our color quantization method allows the user to choose a balance between the image smoothness and hue accuracy for a given k. xiaolin wu a tutorial on default logics default logic is one of the most prominent approaches to nonmonotonic reasoning, and allows one to make plausible conjectures when faced with incomplete information about the problem at hand. default rules prevail in many application domains such as medical and legal reasoning. several variants have been developed over the past year, either to overcome some perceived deficiencies of the original presentation, or to realize somewhat different intuitions. this paper provides a tutorial-style introduction to some important approaches of default logic. the presentation is based on operational models for these approaches, thus making them easily accessible to a broader audience, and more easily usable in practical applications.
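the color quantization entry above reduces the problem to optimally partitioning the image's colors after ordering them along their principal axis, and solves that one-dimensional partitioning by dynamic programming. a small python sketch of the partitioning step (squared-error cost and a straightforward o(n^2 k) recurrence for clarity; the paper's algorithm achieves a better bound and handles the full color-space details):

```python
def optimal_partition(values, k):
    """split sorted 1-d projections into k contiguous groups minimizing the
    total within-group squared error; returns the interior cut indices."""
    n = len(values)
    s1 = [0.0] * (n + 1)                      # prefix sums of values
    s2 = [0.0] * (n + 1)                      # prefix sums of squares
    for i, v in enumerate(values):
        s1[i + 1] = s1[i] + v
        s2[i + 1] = s2[i] + v * v

    def cost(i, j):                           # squared error of values[i:j]
        m = j - i
        mean = (s1[j] - s1[i]) / m
        return (s2[j] - s2[i]) - m * mean * mean

    inf = float("inf")
    best = [[inf] * (k + 1) for _ in range(n + 1)]
    cut = [[0] * (k + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for j in range(1, n + 1):
        for q in range(1, min(k, j) + 1):
            for i in range(q - 1, j):         # last group is values[i:j]
                c = best[i][q - 1] + cost(i, j)
                if c < best[j][q]:
                    best[j][q], cut[j][q] = c, i
    cuts, j, q = [], n, k
    while q > 0:                              # walk back through stored cuts
        cuts.append(cut[j][q])
        j, q = cut[j][q], q - 1
    return sorted(cuts)[1:]                   # drop the leading cut at 0
```

a representative for each group (for example its mean, mapped back into the color space) then becomes one of the k palette entries.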
grigoris antoniou shadow volume reconstruction from depth maps current graphics hardware can be used to generate shadows using either the shadow volume or shadow map techniques. however, the shadow volume technique requires access to a representation of the scene as a polygonal model, and handling the near plane clip correctly and efficiently is difficult; conversely, accurate shadow maps require high-precision texture map data representations, but these are not widely supported. we present a hybrid of the shadow map and shadow volume approaches which does not have these difficulties and leverages high-performance polygon rendering. the scene is rendered from the point of view of the light source and a sampled depth map is recovered. edge detection and a template-based reconstruction technique are used to generate a global shadow volume boundary surface, after which the pixels in shadow can be marked using only a one-bit stencil buffer and a single-pass rendering of the shadow volume boundary polygons. the simple form of our template-based reconstruction scheme simplifies capping the shadow volume after the near plane clip. michael d. mccool rendering effective route maps: improving usability through generalization route maps, which depict a path from one location to another, have emerged as one of the most popular applications on the web. current computer-generated route maps, however, are often very difficult to use. in this paper we present a set of cartographic generalization techniques specifically designed to improve the usability of route maps. our generalization techniques are based both on cognitive psychology research studying how route maps are used and on an analysis of the generalizations commonly found in hand-drawn route maps. we describe algorithmic implementations of these generalization techniques within linedrive, a real-time system for automatically designing and rendering route maps. feedback from over 2200 users indicates that almost all believe linedrive maps are preferable to using standard computer-generated route maps alone. maneesh agrawala chris stolte towards image realism with interactive update rates in complex virtual building environments john m. airey john h. rohlf frederick p. brooks an effective way to represent quadtrees a quadtree may be represented without pointers by encoding each black node with a quaternary integer whose digits reflect successive quadrant subdivisions. we refer to the sorted array of black nodes as the "linear quadtree" and show that it introduces a saving of at least 66 percent of the computer storage required by regular quadtrees. some algorithms using linear quadtrees are presented, namely, (i) encoding a pixel from a 2^n × 2^n array (or screen) into its quaternary code; (ii) finding adjacent nodes; (iii) determining the color of a node; (iv) superposing two images. it is shown that algorithms (i)-(iii) can be executed in logarithmic time, while superposition can be carried out in linear time with respect to the total number of black nodes. the paper also shows that the dynamic capability of a quadtree can be effectively simulated. irene gargantini picture recognition using arc length and turning angle transformation pictures can be first approximated by a polygonal approximation. then, this polygonal approximation can be transformed into the arc length s and turning angle coordinates as used in two recent papers [1,2]. for simplicity, we call this transformation the s-transformation.
an important property of this transformation is that s- transformation is invariant with respect to rotation. this implies that the orientation of the picture is not important. therefore it is advantageous to perform matching in the s- space. in addition, techniques and guidelines for improving the matching process are presented and illustrated by examples. the results have useful applications in pattern recognition, robotics, visual languages and artificial intelligence. chung mou peng wu edward t. lee computational construction kits for geometric modeling and design (panel abstract) robert aish james l. frankel john h. frazer anthony t. patera knowledge acquisition from repertory grids using a logic of confirmation k. ford f. petry image quilting for texture synthesis and transfer we present a simple image-based method of generating novel visual appearance in which a new image is synthesized by stitching together small patches of existing images. we call this process _image quilting_. first, we use quilting as a fast and very simple texture synthesis algorithm which produces surprisingly good results for a wide range of textures. second, we extend the algorithm to perform texture transfer --- rendering an object with a texture taken from a different object. more generally, we demonstrate how an image can be re- rendered in the style of a different image. the method works directly on the images and does not require 3d information. alexei a. efros william t. freeman an analytic visible surface algorithm for independent pixel processing an algorithm is presented that solves the visible surface problem at each pixel independently. this allows motion blur and depth of field blurring to be integrated into the algorithm. it also allows parallel processing. the algorithm works on large numbers of polygons. an analytic gaussian filter is used. the filter can be elongated or scaled differently for each polygon to adjust for its speed or distance from the focal plane. this is achieved by shrinking or scaling the polygon prior to solving the hidden surface problem so that blurring is correctly presented when objects obscure each other. edwin catmull interpolating nets of curves by smooth subdivision surfaces adi levin the use of distortion of special venue films toshi kato keith goldfarb bob powell future directions in desktop video t. heidmnn m. mackay g. macnicol f. wray evaluation of modeling techniques for agent-based systems to develop agent- based systems, one needs a methodology that supports the development process as common in other disciplines. in recent years, several such methodologies and modeling techniques have been suggested. an important question is, to what extent do the existing methodologies address the developers' needs. in this paper we attempt to answer this question. in particular, we discuss suitability of agent modeling techniques to agent- based systems development. in evaluating existing modeling techniques, we address criteria from software engineering as well as characteristics of agent- based systems. our evaluation shows that some aspects of modeling techniques for agent- based systems may benefit from further enhancements. as we show, these aspects include distribution, concurrency, testing and communication richness. we also find space for (relatively small) improvements in aspects such as the refining of the models throughout the development process and the coverage and consistency checking of the suggested models. 
onn shehory arnon sturm a prototype belief network-based expert systems shell a belief network-based expert system works in a way entirely different from a rule-based expert system. in such systems, a high degree of non-determinism is present in the process of belief propagation. existing belief network-based systems thus provide no control mechanism and let their users make decisions at every stage in the process of evidence gathering and belief propagation. however, for any real-world expert system with a large, complex belief network, it is important to have efficient control of the inference procedure. in this paper we describe a novel expert system shell that incorporates a general inference control mechanism for efficient belief propagation in belief networks. the control knowledge supplied by domain experts is encoded in action rules. with these action rules, the system can effectively direct the inference procedure and help the user gather the most relevant evidence based on the results of previous stages, while the user of the system is still allowed to take the initiative. shijie wang marco valtorta geometrical concept learning and convex polytopes we consider exact identification of geometrical objects over the domain {0, 1, …, n−1}^d, with d ≥ 1 fixed. we give efficient implementations of the general incremental scheme "identify the target concept by constructing its convex hull" for learning convex concepts. this approach is of interest for intersections of half-spaces over the considered domain, as the convex hull of a concept of this type is known to have "few" vertices. in this case we obtain positive results on learning intersections of halfspaces with superset/disjointedness queries, and on learning single halfspaces with membership queries. we believe that the presented paradigm may become important for neural networks with a fixed number of discrete inputs. tibor hegedus wires: a geometric deformation technique karan singh eugene fiume the high level architecture and beyond: technology challenges judith s. dahmann neural networks: a new dimension in expert systems applications mohammed h. a. tafti symbolic simplification of tensor expressions using symmetries, dummy indices and identities v. a. ilyin a. p. kryukov rectangular convolution for fast filtering of characters avi naiman alain fournier b-spline surfaces for ship hull design the use of true sculptured surface descriptions for design applications has been proposed by numerous authors. the actual implementation and use of interactive sculptured surface description techniques for design and production has been limited. the use of such techniques for ship hull design has been even more limited. the present paper describes a preliminary implementation of such a system for the design of ship hulls and for the production of towing tank models using numerical control techniques. the present implementation is based on a cartesian product b-spline surface description. implementation is on an evans and sutherland picture system supported by a pdp-11/45 minicomputer. the b-spline surface is manipulated by its associated polygonal net. both surface and net are three-dimensional. techniques both good and bad for 3-d picking of a polygon point when the net, its associated surface, and the 3-d picking cue independently exist and can be independently manipulated in three space are presented and discussed.
the shape of a b-spline surface of fixed order is controlled by the location of the polygon net points, the number of multiple points at a particular net point, and the knot vector. frequently multiple points imply multiple knot vectors. practical techniques for controlling and shaping the surface with and without this assumption are discussed and the results illustrated. experience attained by interactively fitting a single fourth order b-spline surface patch to the forebody half of an actual ship hull described by three dimensional digitized points is discussed and the results illustrated. david f. rogers steven g. satterfield adaptive radiosity textures for bidirectional ray tracing paul s. heckbert the concept of a generalized assignment statement and its application to commonsense reasoning the concept of a generalized assignment statement provides a basis for a unified approach to knowledge representation and inference. the basic idea underlying this approach is that a proposition, p, in a natural language may be viewed as an elastic constraint on a variable, x. more specifically, the meaning of p may be represented as p -> x isr c, in which -> should be read as "translates into" and the right-hand member is a representation of the meaning of p in the form of a generalized assignment statement. in this statement, x plays the role of a constrained variable; c is the constraint on x; and r in the copula isr is a variable whose values define the role of c in relation to x. the principal values of r are d, standing for disjunctive; c, standing for conjunctive; p, standing for probabilistic; g, standing for granular; and h, standing for hybrid. since in most cases the value of r is d, it is convenient to abbreviate isd to is. with this understanding, x is c should be interpreted as x isd c. in general, x, c, and r are implicit rather than explicit in p. viewed in this perspective, the problem of meaning representation is that of explicitating x, c and r, given p. in general, x and c are defined procedurally. thus the principal steps in representing the meaning of p are: (1) construction of an explanatory database, ed; (2) construction of a procedure which acts on ed and yields x; and (3) construction of a procedure which acts on ed and yields c. if p1, …, pn are propositions in a knowledge base, inference from p1, …, pn starts with representing the meaning of each pi in the form of a generalized assignment statement, xi isri ci, i = 1, …, n. then the constraints are combined through the application of a collection of rules of inference, resulting in a computed constraint, c, on a specified variable y as a function of the ci. in general, the determination of c requires the solution of a non-linear program. although the approach to knowledge representation and inference based on the concept of a generalized assignment statement does not follow the traditional lines, it is in fact quite natural and easy to understand. several illustrative examples show how a proposition can be expressed as a generalized assignment statement, and how an answer to a question may be obtained through the solution of a nonlinear program. a particularly important application of the concept of a generalized assignment statement relates to the representation and inference from commonsense knowledge.
more specifically, a distinguishing characteristic of commonsense knowledge is that the facts and rules which comprise such knowledge, e.g., birds can fly and tomatoes are red when they are ripe, are, for the most part, preponderantly rather than universally true. for this reason, the concepts of typicality, normality, and default play an essential role in most of the existing approaches to commonsense reasoning and knowledge representation. in an alternative approach based on the concept of a generalized assignment statement, a comparable role is played by the concept of usuality. as its name suggests, the concept of usuality relates to what is usual or, more precisely, to events of high probability. a basic concept which derives from usuality is that of a usual value of a variable. thus, in the proposition a cup of coffee usually costs about fifty cents, about fifty cents is a usual value of the variable cost(cup(coffee)). in general, a usual value is imprecise and non-unique. viewed in the perspective of usuality, commonsense knowledge may be regarded as a collection of usuality-qualified generalized assignment statements of the form usually (x is f) and usually (x is f if y is r), in which f is a usual value of the variable x, r is a constraint on the conditioning variable y, and usually is a fuzzy quantifier which is representable as a fuzzy proportion in the interval [0,1]. based on this view of commonsense knowledge, various inference rules for reasoning with usuality-qualified facts and rules are developed. in particular, it is shown that from usually (x is a) and usually (x is b), it follows that (2usually-1) (x is a ∧ b), where a ∧ b is the conjunction of a and b, and (2usually-1) is a fuzzy number which is less specific than usually. rules of this type are of direct relevance to property inheritance in the presence of exceptions, to the combination of uncertain evidence in expert systems and, more generally, to inference under uncertainty. l a zadeh evolutionary group robots for collective world modeling jiming liu jian-bing wu introduction to the computer graphics reference model george s. carson a model for anisotropic reflections in open gl wolfgang heidrich x3h3 standards report richard f. puk george s. carson two complementary techniques for digitized document analysis george nagy junichi kanai mukkai krishnamoorthy mathews thomas mahesh viswanathan bintrees, csg trees, and time hanan samet markku tamminen the creation of digital consciousness j. t. monterege rendering from compressed textures andrew c. beers maneesh agrawala navin chaddha smooth patching of refined triangulations this paper presents a simple algorithm for associating a smooth, low-degree polynomial surface with triangulations whose extraordinary mesh nodes are separated by sufficiently many ordinary, 6-valent mesh nodes. output surfaces are at least tangent continuous and are c2 sufficiently far away from extraordinary mesh nodes; they consist of three-sided bezier patches of degree 4. in particular, the algorithm can be used to skin a mesh generated by a few steps of loop's generalization of three-direction box-spline subdivision.
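the three-sided bezier patches of degree 4 mentioned in the last abstract can be evaluated with de casteljau's algorithm in barycentric coordinates; the sketch below shows only that evaluation step, with a random control net as an illustrative assumption, and is not the paper's patch construction.

```python
# de casteljau evaluation of a triangular (three-sided) bezier patch.
# the degree-4 control net is random and purely illustrative.
import numpy as np

def de_casteljau_triangle(control, u, v, w):
    """control maps (i, j, k) with i + j + k = n to a 3d point;
    (u, v, w) are barycentric coordinates with u + v + w = 1."""
    n = max(i + j + k for (i, j, k) in control)
    pts = {idx: np.asarray(p, dtype=float) for idx, p in control.items()}
    for r in range(n, 0, -1):
        # points of level r - 1 are barycentric combinations of level-r points
        pts = {
            (i, j, r - 1 - i - j):
                u * pts[(i + 1, j, r - 1 - i - j)]
              + v * pts[(i, j + 1, r - 1 - i - j)]
              + w * pts[(i, j, r - i - j)]
            for i in range(r) for j in range(r - i)
        }
    return pts[(0, 0, 0)]

degree = 4
rng = np.random.default_rng(0)
net = {(i, j, degree - i - j): rng.random(3)
       for i in range(degree + 1) for j in range(degree + 1 - i)}
print(de_casteljau_triangle(net, 1/3, 1/3, 1/3))   # point at the patch centroid
```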
jorg peters accelerated volume rendering and tomographic reconstruction using texture mapping hardware brian cabral nancy cam jim foran incremental execution of guarded theories when it comes to building controllers for robots or agents, high-level programming languages like _golog_ and _congolog_ offer a useful compromise between planning-based approaches and low-level robot programming. however, two serious problems typically emerge in practical implementations of these languages: how to evaluate tests in a program efficiently enough in an open-world setting, and how to make appropriate nondeterministic choices while avoiding full lookahead. recent proposals in the literature suggest that one could tackle the first problem by exploiting sensing information, and tackle the second by specifying the amount of lookahead allowed explicitly in the program. in this paper, we combine these two ideas and demonstrate their power by presenting an interpreter, written in prolog, for a variant of _golog_ that is suitable for efficiently operating in open- world setting by exploiting sensing and bounded lookahead. giuseppe de giacomo hector j. levesque sebastian sardiña hierarchical triangulation for multiresolution surface description a new hierarchical triangle-based model for representing surfaces over sampled data is proposed, which is based on the subdivision of the surface domain into nested triangulations, called a hierarchical triangulation (ht). the model allows compression of spatial data and representation of a surface at successively finer degrees of resolution. an ht is a collection of triangulations organized in a tree, where each node, except for the root, is a triangulation refining a face belonging to its parent in the hierarchy. we present a topological model for representing an ht, and algorithms for its construction and for the extraction of a triangulation at a given degree of resolution. the surface model, called a hierarchical triangulated surface (hts) is obtained by associating data values with the vertices of triangles, and by defining suitable functions that describe the surface over each triangular patch. we consider an application of a piecewise-linear version of the hts to interpolate topographical data, and we describe a specialized version of the construction algorithm that builds an hts for a terrain starting from a high- resolution rectangular grid of sampled data. finally, we present an algorithm for extracting representations of terrain at variable resolution over the domain. leila de floriani enrico puppo rating of pattern classifications in multi-layer perceptrons: theoretical background and practical results w. ritschel t. pfeifer r. grob a connectionist approach to rate adaptation mai h. nguyen garrison w. cottrell a simple and efficient error-diffusion algorithm in this contribution, we introduce a new error-diffusion scheme that produces higher quality results. the algorithm is faster than the universally used floyd- steinberg algorithm, while maintaining its original simplicity. the efficiency of our algorithm is based on a deliberately restricted choice of the distribution coefficients. its pleasing nearly artifact-free behavior is due to the off-line minimization process applied to the basic algorithm's parameters (distribution coefficients). this minimization brings the fourier spectra of the selected key intensity levels as close as possible to the corresponding "blue noise" spectra. 
the continuity of the algorithm's behavior across the full range of intensity levels is achieved thanks to smooth interpolation between the distribution coefficients corresponding to key levels. this algorithm is applicable in a wide range of computer graphics applications, where a color quantization algorithm with good visual properties is needed. victor ostromoukhov hierarchical parallel coordinates for exploration of large datasets our ability to accumulate large, complex (multivariate) data sets has far exceeded our ability to effectively process them in search of patterns, anomalies, and other interesting features. conventional multivariate visualization techniques generally do not scale well with respect to the size of the data set. the focus of this paper is on the interactive visualization of large multivariate data sets based on a number of novel extensions to the parallel coordinates display technique. we develop a multiresolutional view of the data via hierarchical clustering, and use a variation on parallel coordinates to convey aggregation information for the resulting clusters. users can then navigate the resulting structure until the desired focus region and level of detail is reached, using our suite of navigational and filtering tools. we describe the design and implementation of our hierarchical parallel coordinates system which is based on extending the xmdvtool system. lastly, we show examples of the tools and techniques applied to large (hundreds of thousands of records) multivariate data sets. ying-huey fua matthew o. ward elke a. rundensteiner interactive two-handed gesture interface in 3d virtual environments hiroaki nishino kouichi utsumiya daisuke kuraoka kenji yoshioka kazuyoshi korida multifaceted, multiparadigm modeling perspectives: tools for the 90's multiplicities of perspective inherent in modelling and simulation methodology are enumerated and rationales given for their existence. characteristics of futuristic simulation environments which support flexible adoption of multiple perspectives are outlined. finally, we discuss the construction of models which simultaneously embody differing perspectives. advances in modelling methodology along these lines will constitute a quantum leap in tool sophistication which can greatly extend the domain of simulation application. bernard p. zeigler tuncer i. ören the distributed mission training integrated threat environment system architecture and design we describe the architecture, design, components, and functionality of the distributed mission training integrated threat environment (dmtite) software. the dmtite architecture and design support the development and run-time operation of computer-generated actors (cgas) in distributed simulations. the architecture and design employ object-oriented techniques, component software, object frameworks, containerization, and rapid prototyping technologies. the dmtite architecture and design consist of highly modular components where interdependencies are well defined and minimized. dmtite is an open architecture and open design, and most component and framework code is open source. the dmtite architecture and design have been implemented (including all system components and frameworks) and currently support a number of types of computer-generated actors. the dmtite architecture, design, and implementation are capable of supporting multiple reasoning, vehicle dynamics, skill level, and migration requirements for any type of cga. martin r. stytz sheila b.
banks effective use of color in computer graphics color is a significant component of computer aided visualization of information, concepts and ideas. the use of color in all applications of computer graphics enhances the image, clarifies the information presented, and helps distinguish features that are obscure in black and white pictures. color is used to differentiate elements in the diagrams so that the comparative information is read and understood rapidly and accurately. color visualization techniques increase the amount of information that can be integrated into the visual message or picture, and thus create layers of information. it is clear that research and discovery can be supported and enhanced by color application, or inhibited by ineffectual use of color. in order to effectively utilize color in the visualization of ideas, information, or concepts, the role of color in various applications must be examined, and the perceptual behavior of color must be delineated. the basic principles of color theory have been discussed to provide a fundamental understanding of the characteristics of color and the manner in which colors interact with one another. based on these principles, the user will be better able to utilize color in an effective manner relative to specific applications in color graphics. joan r. truckenbrod minimum cost adaptive synchronization: experiments with the parasol system edward mascarenhas felipe knop vernon rego kulaquest hiromasa horie gaku tada shuji hiramatsu yasuharu yoshizawa hidehisa onai naomi horikawa artificial intelligence in the factory of the future this paper describes the intelligent management system (ims) project, which is part of the factory of the future project in the robotics institute of carnegie-mellon university. ims is a long term project concerned with applying artificial intelligence techniques in aiding professionals and managers in their day to day tasks. this report discusses both the long term goals of ims, and current research. mark s. fox exploration and virtual camera control in virtual three dimensional environments colin ware steven osborne a parametric version of jackknife-after-bootstrap jin wang parallel discrete-event simulation (pdes): a case study in design, development, and performance using speedes can parallel simulations efficiently exploit a network of workstations? why haven't pdes models followed standard modeling methodologies? will the field of pdes survive, and if so, in what form? researchers in the pdes field have addressed these questions and others in a series of papers published in the last few years [1,2,3,4]. the purpose of this paper is to shed light on these questions, by documenting an actual case study of the development of an optimistically synchronized pdes application on a network of workstations. this paper is unique in that its focus is not necessarily on performance, but on the whole process of developing a model, from the physical system being simulated, through its conceptual design, validation, implementation, and, of course, its performance. this paper also presents the first reported performance results indicating the impact of risk on performance. the results suggest that the optimal value of risk is sensitive to the latency parameters of the communications network. 
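the optimistic synchronization exercised in that case study rests on two mechanisms, state saving before each event and rollback when a straggler arrives. the single-process sketch below illustrates just those two mechanisms; it is not speedes, the state and events are invented, and a real time warp implementation would also send anti-messages and compute global virtual time for fossil collection.

```python
# a minimal single-process sketch of optimistic event processing with state
# saving and rollback. not speedes; state, events, and handler are invented.
import copy
import heapq

class OptimisticLP:
    def __init__(self):
        self.state = {"count": 0}
        self.now = 0.0
        self.future = []      # pending events: (timestamp, payload)
        self.processed = []   # (timestamp, payload, saved_state) for rollback

    def schedule(self, ts, payload):
        if ts < self.now:     # straggler: undo everything at or after ts
            self.rollback(ts)
        heapq.heappush(self.future, (ts, payload))

    def rollback(self, ts):
        while self.processed and self.processed[-1][0] >= ts:
            old_ts, payload, saved = self.processed.pop()
            self.state = saved                              # restore checkpoint
            heapq.heappush(self.future, (old_ts, payload))  # re-execute later
        self.now = self.processed[-1][0] if self.processed else 0.0

    def run(self):
        while self.future:
            ts, payload = heapq.heappop(self.future)
            self.processed.append((ts, payload, copy.deepcopy(self.state)))
            self.now = ts
            self.state["count"] += payload                  # the "event routine"

lp = OptimisticLP()
for ts, inc in [(1.0, 1), (3.0, 10)]:
    lp.schedule(ts, inc)
lp.run()
lp.schedule(2.0, 100)    # straggler: the event at t = 3.0 is rolled back and redone
lp.run()
print(lp.now, lp.state)  # 3.0 {'count': 111}
```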
frederick wieland eric blair tony zukas control of a virtual actor: the roach michael mckenna steve pieper david zeltzer minuprox - an advanced proximity correction technique for the ibm el-2 electron beam tool minuprox - (minimum neighborhood proximity correction) is a novel approach to proximity correction used with the ibm el-2 e-beam direct exposure tool. based on earlier work by m. parikh, it differs from his approach primarily by only correcting those shapes that form the edge definition of the image. reflecting experimental observations that the dose assignment to the edges of the image is much more critical than the assignment made to the interior, minuprox separates the edges from the interior of the image w. j. guillaume a. kurylo simulation in the next millennium sanjay jain introduction to siman this paper discusses the concepts and methods for simulating manufacturing systems using the siman simulation language. siman is a new general purpose simulation language which incorporates special purpose features for modeling manufacturing systems. these special purpose features greatly simplify and enhance the modeling of the material handling component of a manufacturing system. c. dennis pegden computer graphics in television (panel session) our society is increasingly relying on symbolic and imaginal communication to augment written and spoken language (advertising graphics, logos and corporate i.d.s, satellite weather maps, international traffic, dashboard symbols, etc.). nowhere are graphics and sophisticated imagery being more widely and innovatively used than in television production. greater availability of computer graphics and a strong trend in television technology towards digital electronics makes this an application of high synergy and potential. the purpose of this panel is to bring together several top system designers and leading graphic designers in television to exchange points of view about this exciting application area. panelists will show and discuss recent on-air uses of electronic graphics and will exchange ideas about what is needed, possible, important, and useful for the future. richard shoup tom klimek larry evans peter black h. bley d. weise digital halftoning with space filling curves luiz velho jonas de miranda gomes shape by example peter-pike j. sloan charles f. rose michael f. cohen a formal model of open agent societies alexander artikis jeremy pitt an architecture for bounded rationality stuart j. russell accelerated walkthrough of large spline models subodh kumar dinesh manocha hansong zhang kenneth e. hoff video rewrite: driving visual speech with audio christoph bregler michele covell malcolm slaney skip strips: maintaining triangle strips for view-dependent rendering view-dependent simplification has emerged as a powerful tool for graphics acceleration in visualization of complex environments. however, view- dependent simplification techniques have not been able to take full advantage of the underlying graphics hardware. specifically, triangle strips are a widely used hardware-supported mechanism to compactly represent and efficiently render static triangle meshes. however, in a view-dependent framework, the triangle mesh connectivity changes at every frame making it difficult to use triangle strips. in this paper we present a novel data- structure, skip strip, that efficiently maintains triangle strips during such view-dependent changes. a skip strip stores the vertex hierarchy nodes in a skip-list-like manner with path compression. 
we anticipate that skip strips will provide a road-map to combine rendering acceleration techniques for static datasets, typical of retained-mode graphics applications, with those for dynamic datasets found in immediate-mode applications. jihad el- sana elvir azanli amitabh varshney the classic knowledge representation system: guiding principles and implementation rationale peter f. patel-schneider deborah l. mcguinness ronald j. brachman lori alperin resnick supporting personality in personal service assistants from metaphor to implementation yasmine arafa patricia charlton abe mamdani raleigh-benard convection in a closed box gregory foss splatting without the blur splatting is a volume rendering algorithm that combines efficient volume projection with a sparse data representation: only voxels that have values inside the iso-range need to be considered, and these voxels can be projected via efficient rasterization schemes. in splatting, each projected voxel is represented as a radially symmetric interpolation kernel, equivalent to a fuzzy ball. projecting such a basis function leaves a fuzzy impression, called a footprint or splat, on the screen. splatting traditionally classifies and shades the voxels prior to projection, and thus each voxel footprint is weighted by the assigned voxel color and opacity. projecting these fuzzy color balls provides a uniform screen image for homogeneous object regions, but leads to a blurry appearance of object edges. the latter is clearly undesirable, especially when the view is zoomed on the object. in this work, we manipulate the rendering pipeline of splatting by performing the classification and shading process after the voxels have been projected onto the screen. in this way, volume contributions outside the iso-range never affect the image. since shading requires gradients, we not only splat the density volume, using regular splats, but we also project the gradient volume, using gradient splats. however, alternative to gradient splats, we can also compute the gradients on the projection plane, using central differencing. this latter scheme cuts the number of footprint rasterization by a factor of four, since only the voxel densities have to be projected. our new method renders objects with crisp edges and well-preserved surface detail. added overhead is the calculation of the screen gradients and the per-pixel shading. both of these operations, however, may be performed using fast techniques employing lookup tables. klaus mueller torsten möller roger crawfis teaching a robot how to read symbols symbols are used everywhere to help us find our way and to provide useful information about our world. autonomous robots that would operate in real life settings could surely benefit from these indications. to do so, this research project integrates a character recognition technique with methods to position the robot in front of the symbol, to capture the image that will be used in the identification process, and validate the overall system on a robot. françois michaud memories of a vector world owen r. rubin solid texturing of complex surfaces darwyn r. peachey the middgo project the ancient game of go, still the most popular strategy game in the far east, has proven very challenging for artificial intelligence (ai) researchers trying to develop go-playing programs. 
in contrast to the best chess, checkers, backgammon, othello, and scrabble programs, all of which play at or better than the level of the top human players, the best go programs play only at the level of advanced beginners. the middgo project, begun in the fall of 1999, aims to better understand the challenges posed by go and to investigate approaches for improving the performance of go-playing systems. brian mcquade effects of time-varied arrival rates: an investigation in emergency ambulance service systems zhiwei zhu mark a. mcknew jim lee direct volume visualization of three-dimensional vector fields roger crawfis nelson max mapping a complex temporal problem into a combination of static and dynamic neural networks thierry catfolis a framework and testbed for studying manipulation techniques for immersive vr ivan poupyrev suzanne weghorst mark billinghurst tadao ichikawa level-of-detail volume rendering via 3d textures manfred weiler rudiger westermann chuck hansen kurt zimmermann thomas ertl graphic libraries for windows programming m. carmen juan lizandra semiautomatic simplification gong li benjamin watson genroku-ryoran masayoshi obata misako saka determining bidding strategies in sequential auctions: quasi-linear utility and budget constraints in this paper, we develop a new method for finding an optimal bidding strategy in sequential auctions, using a dynamic programming technique. the existing method assumes the utility of a user is represented in an additive form. thus, the remaining endowment of money must be explicitly represented in each state. on the other hand, our method assumes the utility of a user can be represented in a quasi-linear form and represents the payment as a state-transition cost. accordingly, we can obtain more than an m-fold speed-up in the computation time, where m is the initial endowment of money. furthermore, we have developed a method for obtaining a semi-optimal bidding strategy under budget constraints. hiromitsu hattori makoto yokoo yuko sakurai toramatsu shintani obligations directed from bearers to counterparts henning herrestad christen krogh interactive visualization of fluid dynamics simulations in locally refined cartesian grids (case study) this work presents interactive flow visualization techniques specifically adapted for powerflow, a lattice-based cfd code from the exa corporation. their digital physics fluid simulation technique is performed on a hierarchy of locally refined cartesian grids with a fine voxel resolution in areas of interesting flow features. among other applications the powerflow solver is used for aerodynamic simulations in car body development where the advantages of automatic grid generation from cad models are of great interest. in a joint project with bmw and exa we are developing a visualization tool which incorporates virtual reality techniques for the interactive exploration of the large scalar and vector data sets. in this paper we describe the specific data structures and interpolation techniques and we report on fast particle tracing taking into account collisions with the car body geometry. an opengl optimizer-based implementation allows for the inspection of the flow with particle probes and slice probes at interactive frame rates. martin schulz frank reck wolf bartelheimer thomas ertl visual effects technology - do we have any? (panel session) derek spears scott dyer george joblove charlie gibson lincoln hu a sortation system model arun jayaraman ramu narayanaswamy ali k.
gunal representing and reusing explanations of legal precedents precedent-based legal reasoning depends on accurate assessment of relevant similarities between new cases and existing precedents. determining the relevant similarities between a new case and a precedent with respect to a legal category requires knowing the explanation of the precedent's membership in the category. grebe is a system that uses both general legal rules and specific explanations of precedents to evaluate legal predicates in new cases. grebe assesses the similarity of a new case to a precedent of a legal category by attempting to find a pattern of relations in the new case that corresponds to the facts of the precedent responsible for its category membership. missing relations in the new case are inferred by reusing other explanations from past cases. l. k. branting an object-oriented 3d graphics toolkit paul s. strauss rikk carey advanced uses for micro saint simulation software catherine drury barnes k. ronald laughery emerging principles in machine learning machine learning, a field concerned with developing computational theories of learning and constructing learning machines, is now one of the most active research areas in artificial intelligence. an inference-based theory of learning will be presented that unifies basic learning strategies. special attention will be given to comparing and unifying inductive learning and deductive learning strategies. inductive learning strategies include empirical techniques for learning from examples and learning from observation and discovery. deductive learning techniques include analytic learning on the basis of the explanation of a given fact using prior domain knowledge. we will show that "similarity-based learning" (a form of inductive learning) and "explanation-based learning" (a form of deductive learning) are two extremes in the spectrum of techniques representing different relative roles of the learner's prior knowledge and the information supplied to the learner. we will also show how inductive and deductive learning can be integrated within one theoretical framework. some experimental results will be used to illustrate the presented ideas. r michalski the role of logic and ai in the software enterprise (panel session): panel description this session is intended to act as a bridge between the current conference and the next. this conference identifies the need to expand the realm of software engineering beyond the products of development (code and documentation) to include the process by which those products were obtained (the development activity itself). it makes the development process another product of the software enterprise. this is an important first step in treating the software process as a product, but it is clear that we can't progress beyond the current methodology-based "good practices" without formalizing this product. this formalization is required to precisely represent and reason about the process, and to construct tools that automate, or verify the quality of, portions of that process. this formalization and automation of the software process is the theme of the next conference. two technologies, logic and ai, are particularly relevant to this formalization and automation task. professor vlad turski will represent the logic point of view, and professor gerald sussman will represent the ai point of view. they will address the following questions: what are the respective roles of logic and ai in the software enterprise?
what phenomena need to be represented? how will they be reasoned about? how do these technologies differ in what they can represent? how do they differ in the way they are used? beyond these questions of the technical adequacy of these formalisms, there is the issue of how the human programmers, managers, and users will contend with these formalisms. professor alan perlis will represent this human element in the software enterprise. robert balzer the reyes image rendering architecture robert l. cook loren carpenter edwin catmull frames: software tools for modeling, rendering and animation of 3d scenes michael potmesil eric m. hoffert on the power series solution of a system of algebraic equations s. moritsugu a polygon matching problem in computer vision (abstract only) a central problem in computer vision is that of determining whether a given object o appears in a scene s. we outline a general method of solution applicable to r-dimensional location spaces, where r ≥ 2. spaces of dimension r > 3 may occur when it is necessary to identify not only spatial position but also the time and conditions under which observations were made (type of equipment used, etc.). we assume that the scene s is a bounded region in r-dimensional euclidean space (r^r), and we consider the object o as represented by the values of a given feature function f at certain distinguished points p1, p2, …, pn on o. if q is the polyhedron in r^r with vertices p1, p2, …, pn, we refer to q as the query polyhedron. we suppose that a metric d is defined for feature values and that a tolerance t = (t1, t2, …, tn) has been given. for each i, 1 ≤ i ≤ n, let ri be the set of points p in s for which d(f(p), f(pi)) ≤ ti. we refer to the ri as the object polyhedra and say that the object o has been found in s to within the tolerance t iff there exists a translation t of the query polyhedron q with t(pi) ∈ ri for each i. the original problem is now a problem in computational geometry. this approach is due to w. grosky. we consider the restricted problem in which each ri is a closed nonempty convex polyhedron. the ri need not be disjoint, nor is there any restriction (beyond convexity) on their shape. we assume that each ri is given as the solution set of a system li of ki linear inequalities (that is, as an intersection of ki closed halfspaces). the representation is convenient for higher dimensional spaces. we can determine whether the translation t exists in time o(k), where k = k1 + k2 + … + kn. the approach is to translate not q, but the object polyhedra ri. for each i, 2 ≤ i ≤ n, let ti be the unique translation in r^r with ti(pi) = p1, and let ri′ = ti(ri). there is a translation t with t(pi) ∈ ri for each i iff there is a translation t with t(p1) ∈ r = r1 ∩ r2′ ∩ r3′ ∩ … ∩ rn′. thus, the required translation t exists iff r is nonempty. for 2 ≤ i ≤ n, the linear system li′ which defines ri′ can be found from the system li in time o(ki) by a simple change of variable applied to each inequality in li separately. the linear system l consisting of the conjunction l1 ∧ l2′ ∧ l3′ ∧ … ∧ ln′ to be satisfied by the coordinates of p1 can thus be constructed in time o(k). this system consists of k linear constraints and defines the feasible region of an r-variable linear programming problem. the method of n. megiddo [1] will find a point p in the feasible set or characterize the set as empty in time o(k). the translation t is then defined by the requirement that t(p1) = p.
by an optimal translation t of q we mean one which minimizes (or maximizes) some linear function h of r variables at t(p1). if r = 4, for example, we may wish to locate object o when it first appears in s (that is, to minimize h(x, y, z, t) = 0x + 0y + 0z + 1t = t). by the method outlined above, an optimal translation t may be found in time o(k) if the feasible set is nonempty. additional results have been obtained concerning nontranslational motion of the query polyhedron q and more general matching conditions. l. w. brinn efficient reasoning many tasks require "reasoning"---i.e., deriving conclusions from a corpus of explicitly stored information---to solve their range of problems. an ideal reasoning system would produce all-and-only the correct answers to every possible query, produce answers that are as specific as possible, be expressive enough to permit any possible fact to be stored and any possible query to be asked, and be (time) efficient. unfortunately, this is provably impossible: as correct and precise systems become more expressive, they can become increasingly inefficient, or even undecidable. this survey first formalizes these hardness results, in the context of both logic- and probability-based reasoning, then overviews the techniques now used to address, or at least side-step, this dilemma. russell greiner christian darken n. iwan santoso a decomposable algorithm for contour surface display generation we present a study of a highly decomposable algorithm useful for the parallel generation of a contour surface display. the core component of this algorithm is a two-dimensional contouring algorithm that operates on a single 2 × 2 subgrid of a larger grid. an intuitive procedure for the operations used to generate the contour lines for a subgrid is developed. a data structure, the contouring tree, is introduced as the basis of a new algorithm for generating the contour lines for the subgrid. the construction of the contouring tree is detailed. space requirements for the contouring tree algorithm are described for particular implementations. michael j. zyda image-based view synthesis by combining trilinear tensors and learning techniques s. avidan t. evgeniou a. shashua t. poggio introduction to arena michael j. drevna cynthia j. kasales simulation: an overview simulation is one of the most powerful analytical tools available to managers of complex systems. we define what simulation is and what it is not. we will define simulation as the process of designing a model of a real system and conducting experiments with this model for the purpose either of understanding the behavior of the system, or of evaluating various strategies for the operation of the system. thus we understand the process of simulation to include both the construction of the model and the analytical use of the model for studying a problem. the types of systems that can be simulated are those that are either in existence or capable of being brought into existence. therefore, systems in the preliminary or planning stage can be modeled as well as those already in existence. robert e. shannon designing simultaneous simulation experiments paul hyden lee schruben some pac-bayesian theorems david a. mcallester an extended transformation approach to inductive logic programming inductive logic programming (ilp) is concerned with learning relational descriptions that typically have the form of logic programs.
in a transformation approach, an ilp task is transformed into an equivalent learning task in a different representation formalism. propositionalization is a particular transformation method, in which the ilp task is compiled to an attribute-value learning task. the main restriction of propositionalization methods such as linus is that they are unable to deal with nondeterminate local variables in the body of hypothesis clauses. in this paper we show how this limitation can be overcome., by systematic first-order feature construction using a particular individual-centered feature bias. the approach can be applied in any domain where there is a clear notion of individual. we also show how to improve upon exhaustive first-order feature construction by using a relevancy filter. the proposed approach is illustrated on the "trains" and "mutagenesis" ilp domains. from the guest editors steven seitz richard szeliski slam tutorial slam is a simulation language that allows for alternative modeling approaches. it allows systems to be viewed from a process, event, or state variable perspective. these alternate modeling world views are combined in slam to provide a unified systems modeling framework (1,4). in slam, a discrete change system can be modeled within an event orientation, process orientation, or both. continuous change systems can be modeled using either differential or difference equations. combined discrete- continuous change systems can be modeled by combining the event and/or process orientation with the continuous orientation. in addition, slam incorporates a number of features which correspond to the activity scanning orientation. claude dennis pegden a. alan b. pritsker user performance in relation to 3d input device design based mainly on a series of studies the author conducted at the university of toronto, this article reviews the usability of various six degrees of freedom (6 dof) input devices for 3d user interfaces. the following issues are covered in the article: the multiple aspects of input device usability (performance measures), mouse based 6 dof interaction, mouse modifications for 3d interfaces, free-moving isotonic 6 dof devices, desktop isometric and elastic 6 dof devices, armature-based 6 dof devices, position vs. rate control and the form factors of 6 dof control handle. these issues are treated at an introductory and practical level, with pointers to more technical and theoretical references. shumin zhai qualitative simulation: a feedback control system angel syang yichiang syang autostat van b. norman theory for coordinating concurrent hierarchical planning agents using summary information bradley j. clement edmund h. durfee object behavior analysis kenneth s. rubin adele goldberg hierarchical face clustering on polygonal surfaces michael garland andrew willmott paul s. heckbert interactive graphics for enhancement of simulation systems a computer simulation system was greatly enhanced by providing an interactive graphics interface. it is easy for the user to access and use. output is in graphics form that is quick and easy to comprehend. brief instructional messages appear, on the tektronix screen, at each phase of the simulation so that training required to use the system is almost nonexistent. the user may select any one of four simulations, interactively edit input data, and select output in graphics or tabular form. alternatively, the user may access only the input and output of the previous simulation run. the interface can be readily adapted to other systems. 
daniel l. roberts julia m. hodges a progressive refinement approach to fast radiosity image generation michael f. cohen shenchang eric chen john r. wallace donald p. greenberg systems for monte carlo work with the proliferation of computers has come a proliferation of simulation. monte carlo experiments can now be run by a vast range of programs, from simple basic environments to spreadsheets; yet little attention has been paid to the problem of designing a system to do monte carlo problems. the ideas for a system described in this paper not only simplify the problem of programming a monte carlo experiment but also attempt to maintain standards of experimental design and to encourage careful analysis. david alan grier model-based systems analysis using csim18 herb schwetman a knowledge-based system implementing image analysis activity in the context of photo-interpretation j.-c. engel p. bouthemy on a learnability question associated to neural networks with continuous activations (extended abstract) this paper deals with learnability of concept classes defined by neural networks, showing the hardness of pac-learning (in the complexity, not merely information-theoretic sense) for networks with a particular class of activation. the obstruction lies not with the vc dimension, which is known to grow slowly; instead, the result follows from the fact that the loading problem is np-complete. (the complexity scales badly with input dimension; the loading problem is polynomial-time if the input dimension is constant.) similar and well-known theorems had already been proved by megiddo and by blum and rivest, for binary-threshold networks. it turns out the general problem for continuous sigmoidal-type functions, as used in practical applications involving steepest descent, is not np-hard---there are "sigmoidals" for which the problem is in fact trivial---so it is an open question to determine what properties of the activation function cause difficulties. ours is the first result on the hardness of loading networks which do not consist of binary neurons; we employ a piecewise-linear activation function that has been used in the neural network literature. our theoretical results lend further justification to the use of incremental (architecture-changing) techniques for training networks. bhaskar dasgupta hava t. siegelmann eduardo sontag validating simulation models in this paper we give a general introduction to model validation, define the various validation techniques, discuss conceptual and operational validity, and present a recommended model validation procedure. robert g. sargent a centralized methodology for multi-level abstraction in simulation steven walczak paul fishwick on the role of prototypes in appellate legal argument (abstract) l. thorne mccarty automated argument assistance for lawyers solving the first of these two drawbacks has led to a new graphical representation of the arguments, in which argument attacks are shown, and to a change in the argumentation theory, viz. the introduction of a novel notion of an argument, viz. that of a dialectical argument. briefly, a dialectical argument is an argument in which attacks (and counterattacks) are incorporated. solving the second drawback has led to the introduction of step warrants and undercutter warrants into the argumentation theory. the resulting notion of a warranted dialectical argument is the analog for defeasible argumentation of the notion of a (hilbert-style) proof of classical logic.
the present version of the argumed-system is put in context by a brief comparison with selected other systems, viz. loui's room 5, gordon and karacapilidis' zeno, argumed's precursor argue! and the previous version of argumed. bart verheij a declarative formalization of knowledge translation sasa buvac richard fikes applying supervised learning to real-world problems dragos d. margineantu digital disks and a digital compactness measure an o(n2) time algorithm is presented that determines whether or not a given convex digital region is a digital disk. a new compactness measure for digital regions is introduced, and an algorithm to evaluate the compactness measure of convex digital regions is also presented. chul e. kim timothy a. anderson algorithms for solid noise synthesis j. p. lewis the rendering equation james t. kajiya a framework for developing and managing objects in a complex simulation system james d. barrett using photographic quality images in desktop applications (panel session) alain rossmann dan putman michael bourgoin greg millar legal reasoning - a jurisprudential description this paper provides a description of a legal reasoning process. the presentation originates from a research project combining law and artificial intelligence (ai) and contains theoretical results from system-developing activities that have been carried out in cooperation with the swedish court administration and a major swedish employer's association. the research project, and several parallel projects at the swedish law and informatics research institute (iri), is being documented in the series iri-reports. related work, especially focusing on computerized formalization of legal norms and legal decision processes from a jurisprudential perspective is carried out at the norwegian research center for computers and law, but contributions have also been made by many others1 and legal reasoning has been investigated from somewhat different perspectives2. p. wahlgren the siggraph 95 art gallery jacquelyn martino case reconstruction before assessment henk de bruijn radboud winkels editor's introduction r. daniel bergeron fast-software-checkpointing in optimistic simulation: embedding state saving into the event routine instructions francesco quaglia viewfinder: a framework for representing, ascribing and maintaining nested beliefs of interacting agents (dissertation) afzal ballim real-time acoustic modeling for distributed virtual environments thomas funkhouser patrick min ingrid carlbom the role of knowledge management in hierarchical model development y.-m. huang j. w. rozenblit j. hu the di-3000 implementation of the 1979 gspc 'core' graphics standard di-3000 is a machine-independent system of fortran-callable subroutines based on the 1979 'core' report of the graphics standards planning committee (gspc) of acm/siggraph. di-3000 supports output level-3 (dynamic output), input level-2 (synchronous input), and dimension level-2 (3d) of the core. this includes the complete support of 2d and 3d primitives, full 3d viewing definitions, multiple concurrent display surfaces, hershey text fonts, all logical input functions, and image transformations. the raster extensions defined in the 1979 core report are also supported, including polygonal primitives, shaded or patterned polygons, and dynamic definition of color video lookup tables on raster devices. di-3000 uses a network of several interconnected virtual graphics devices which communicate with each other via messages. 
these virtual devices may represent physical graphics devices, graphics metafiles, or device-independent display files for storing retained segments. the di-3000 logical network may be implemented in various ways, depending on the architecture and operating system of the target computer system. the network concept is ideally suited to multi-tasking or distributed processing environments, where either the central memory of the host computer is limited or the target graphics devices have local intelligence. using a logical network, the virtual graphics devices can execute concurrently for increased efficiency. all virtual devices have a distinct layer structure. this allows easy maintenance and extension of the system as the graphics standard evolves, without changes in the basic design. the layer structure also makes it easy to build new graphics device drivers. nikolaus j. kiefhaber james r. warner qualitative reasoning about fit (abstract only) the main goal of this research is to discover the knowledge structures, control strategy, and problem-solving behavior required to determine how two objects best fit together. this sort of reasoning arises in many contexts and can involve any combination of objects, making it difficult to formalize. we therefore restrict the problem space to a particular class of objects. each object is composed of a base, with features such as pegs, blocks, and holes of various shapes and sizes projecting inward and outward from its surfaces. a fit occurs when surfaces from any two objects can be brought close together by inserting the projections on either surface into the holes on the other. the degree of fit is determined by how tightly the projections fit into the holes and by how close the surfaces lie as a result. we recognize four stages in this reasoning process. the grouping and orientation stages select a particular juxtaposition between two objects that could lead to a fit. the grouping stage identifies relevant features on an object's surface and forms a feature group. the orientation stage selects two feature groups that appear to be compatible, based on the relative locations of features within them. if necessary, one object may be rotated to bring the feature group surfaces into opposition, or to line up the features. the matching and confirmation stages test the fit proposed by the two preceding stages. the matching stage examines the surfaces region by region, matching features that are roughly complementary in shape and in corresponding positions. only regions with features are considered. the confirmation stage examines each feature pair in detail, analyzing the size, shape, and orientation of each member to determine how well they fit. if all the feature pairs are confirmed, the fit itself is confirmed. this decomposition into stages reduces the search space of potential fits. the grouping stage narrows the focus from all features to the set of features relevant to a fit. the orientation stage selects a single orientation. the matching stage confines the search for mating features to localized regions on the object's surface. the confirmation stage is thus able to examine features one pair at a time. much of this reasoning is qualitative. qualitative reasoning involves the analysis of how systems make the transition between discrete, qualitatively different states as their parameters reach certain critical values (bobrow, 1985). typically only ordinal relationships among values are considered. 
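as an illustration of the ordinal style of reasoning just described, the following python fragment sketches a confirmation-stage check for feature pairs. it is not the authors' program; the dimension names and verdict labels are assumptions made here, and only ordinal comparisons are used.

```python
# illustrative sketch of the confirmation stage: each proposed
# (projection, hole) pair is tested with ordinal comparisons only.
def confirm_pair(projection, hole):
    shared = set(projection) & set(hole)
    if any(projection[d] > hole[d] for d in shared):
        return "no fit"          # some dimension of the projection is too large
    if all(projection[d] == hole[d] for d in shared):
        return "tight fit"
    return "loose fit"

def confirm_fit(pairs):
    results = {name: confirm_pair(p, h) for name, (p, h) in pairs.items()}
    # the overall fit is confirmed only if no pair is rejected
    return all(r != "no fit" for r in results.values()), results

# e.g. a peg slightly narrower and shorter than its hole gives a loose fit
ok, detail = confirm_fit({"f1": ({"radius": 2, "length": 4},
                                 {"radius": 3, "length": 5})})
print(ok, detail)   # True {'f1': 'loose fit'}
```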
we treat knowledge about fit as a qualitative state. as new knowledge about fit is discovered, different and increasingly specific types of reasoning are applied. eventually the knowledge of fit becomes sufficiently complete to indicate the specific measurements needed to test fit quantitatively. we have developed an implementation that currently covers the matching and confirmation stages. input consists of a pair of objects that have already been grouped and oriented, as shown in the figure. in this example, the matching procedures pair the features from corresponding corners since these features match in location and are of complementary shape. the confirmation procedures examine each of these four pairs in turn. f1a is found to be smaller than f1b, and this is confirmed as a loose fit. in cross-section f2a fits into f2b, but since f2a is too long the fit is not confirmed. the radius of f3a exceeds that of f3b, and again fit is not confirmed. the radius and length of f4a are found to be smaller than those of f4b, resulting in a loose fit. we have identified several types of knowledge and control strategies that are important for reasoning about fit. we are currently working on representations for storing knowledge about the geometric and spatial relationships that accumulates as this reasoning proceeds. we are also interested in incorporating non-geometric properties, such as object functionality, into our model. such extensions provide new avenues for reasoning about fit, and facilitate the exploration of such important areas as inferring function from structure (stanfill, 1983) and the compilation of routine design knowledge (brown, 1985). douglas s. green david c. brown constraints in constructive solid geometry the success of solid modelling in industrial design depends on facilities for specifying and editing parameterized models of solids through user-friendly interaction with a graphical front-end. systems based on a dual representation, which combines constructive solid geometry (csg) and boundary representation (brep), seem most suitable for modelling mechanical parts. typically they accept a csg- compatible input (boolean combinations of solid primitives) and offer facilities for parameterizing and editing part definitions. the user need not specify the topology of the boundary, but often has to solve three-dimensional trigonometric problems to compute the parameters of rigid motions that specify the positions of primitive solids. a front-end that automatically converts graphical input into rigid motions may be easily combined with boundary-oriented input, but its integration in dual systems usually complicates the editing process and limits the possibilities of parameterizing solid definitions. this report proposes a solution based on three main ideas: (1) enhance the semantics of csg representations with rigid motions that operate on arbitrary collections of sub-solids regardless of their position in the csg tree, (2) store rigid motions in terms of unevaluated constraints on graphically selected boundary features, (3) evaluate constraints independently, one at a time, in user-specified order. the third idea offers an alternative to known approaches, which convert all constraints into a large system of simultaneous equations to be solved by iterative numerical methods. 
the resulting front-end is inadequate for solving problems where multiple constraints must be met simultaneously, but provides a powerful tool for specifying and interactively editing parameterized models of mechanical parts and mechanisms. the order in which constraints are evaluated may also be used as a language for specifying the sequence of assembly and set-up operations. an implementation under way is based on the interpreter of a new object oriented programming language, enhanced with geometric classes. constraint evaluation results in the activation of methods which compute rigid motions from surface information. the set of available methods may be extended by the users, and methods may be integrated in higher level functions whose algorithmic nature simplifies the treatment of degenerate cases. graphic interaction is provided through a geometrical engine which lets the user manipulate shaded images produced efficiently from the csg representation of solid models. jaroslaw r. rossignac artificial autonomous agents with artificial emotions poster luís miguel botelho helder coelho comparative game model exploration samuel baskinger scott briening anthony emma graig fisher vincent johnson christopher moyer standardisation - opportunity or constraint? (panel session) david arnold jack bresenham ken brodlie george s. carson jan hardenbergh paul van binst andries van dam automatic alignment of high-resolution multi-projector display using an un- calibrated camera yuqun chen douglas w. clark adam finkelstein timothy c. housel kai li the global simulation clock as the frequency domain experiment index frequency domain simulation experiments involve inducing sinusoidal variations in the input process. system sensitivities of the output can be detected in the frequency domain. the selection of an appropriate index for these oscillations is critical in running such experiments. the index for the sinusoidal variations has typically been a discrete index such as customer or part number in queueing and production systems respectively. in this paper, the use of the global simulation clock as the index is discussed. sheldon h. jacobson doug morrice lee w. schruben simple constrained deformations for geometric modeling and interactive design deformations are a powerful tool for shape modeling and design. we present a new model for producing controlled spatial deformations, which we term simple constrained deformations (scodef). the user defines a set of constraint points, giving a desired displacement and radius of influence for each. each constraint point determines a local b-spline basis function centered at the constraint point, falling to zero for points beyond the radius. the deformed image of any point in space is a blend of these basis functions, using a projection matrix computed to satisfy the constraints. the deformation operates on the whole space regardless of the representation of the objects embedded inside the space. the constraints directly influence the final shape of the deformed objects, and this shape can be fine-tuned by adjusting the radius of influence of each constraint point. the computations required by the technique can be done very efficiently, and real-time interactive deformation editing on current workstations is possible. 
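a minimal numerical sketch of the scodef idea follows. it is not the authors' implementation: the quadratic-falloff bump stands in for the b-spline basis of the paper, and it assumes the small constraint matrix is invertible.

```python
import numpy as np

def bump(r):
    # simple c1 radial falloff: 1 at the constraint point, 0 beyond its radius
    # (a stand-in for the b-spline basis used in the paper).
    return np.where(r < 1.0, (1.0 - r**2) ** 2, 0.0)

def scodef_like_deform(points, centers, displacements, radii):
    """deform arbitrary points in space from constraint points, each with a
    desired displacement and a radius of influence; an illustrative sketch."""
    centers = np.asarray(centers, float)            # (m, 3) constraint points
    displacements = np.asarray(displacements, float)
    radii = np.asarray(radii, float)

    def basis(p):                                   # basis value of each constraint at p
        r = np.linalg.norm(p - centers, axis=1) / radii
        return bump(r)

    # solve b m = d so the blended displacement interpolates every constraint:
    # b[j, i] = value of constraint i's basis at constraint point j.
    B = np.array([basis(c) for c in centers])       # (m, m)
    M = np.linalg.solve(B, displacements)           # (m, 3) blending coefficients

    points = np.asarray(points, float)
    return np.array([p + basis(p) @ M for p in points])
```

with a single constraint point, every point inside its radius is dragged along a scaled copy of the desired displacement and points outside are untouched, which matches the local, radius-controlled behaviour the abstract describes.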
paul borrel ari rappoport a knowledge-based mathematical model formulation system ramayya krishnan xiaoping li david steier neural nets for image restoration no imaging system in practice is perfect, in fact the recorded images are always distorted or of finite resolution. an image recording system can be modeled by a fredholm integral equation of the first kind. an inversion of the kernel representing the system, in the presence of noise, is an ill posed problem. the direct inversion often yields an unacceptable solution. in this paper, we suggest an artificial neural network (ann) architecture to solve ill posed problems in the presence of noise. we use two types of neuron like processing units: the units that use the weighted sum and the units that use the weighted product. the weights in the model are initialized using the eigen vectors of the kernel matrix that characterizes the recording system. we assume the solution to be a sample function of a wide sense stationary process with a known auto- correlation function. as an illustration, we consider two images that are degraded by motion blur. a. d. kulkarni an evaluation of learning performance in backpropagation neural networks and decision-tree classifier systems daniel c. st. clair w. e. bond a. k. rigler steve aylward fractal forgeries many natural objects are well-described by fractals. we discuss recent developments in the synthesis of psychologically convincing natural scenes using fractal techniques. multiresolution of arbitrary triangular meshes wei xu don fussell an analysis of selected computer interchange color spaces important standards for device-independent color allow many different color encodings. this freedom obliges users of these standards to choose the color space in which to represent their data. a device-independent interchange color space must exhibit an exact mapping to a colorimetric color representation, ability to encode all visible colors, compact representation for given accuracy, and low computational cost for transforms to and from device- dependent spaces. the performance of cie 1931 xyz, cieluv, cielab, yes, ccir 601-2 ycbcr, and smpte-c rgb is measured against these requirements. with extensions, all of these spaces can meet the first two requirements. quantizing error dominates the representational errors of the tested color spaces. spaces that offer low quantization error also have low gain for image noise. all linear spaces are less compact than nonlinear alternatives. the choice of nonlinearity is not critical; a wide range of gammas yields acceptable results. the choice of primaries for rgb representations is not critical, except that high-chroma primaries should be avoided. quantizing the components of the candidate spaces with varying precision yields only small improvements. compatibility with common image data compression techniques leads to the requirement for low luminance contamination, a property that compromises several otherwise acceptable spaces. the conversion of a device- independent representation to popular device spaces by means of trilinear interpolation requires substantially fewer lookup table entries with ccir 601-2 ycbcr and cielab. james m. kasson wil plouffe designing with words: a model for a design language in a moo anna cicognani parallel and distributed simulation richard m. 
fujimoto image based rendering with stable frame rates huamin qu ming wan jiafa qin arie kaufman efficient simulation of light transport in scenes with participating media using photon maps henrik wann jensen per h. christensen bayesian analysis for simulation input and output stephen e. chick art or virtual cinema? mario cavalli biologically inspired approaches to robotics: what can we learn from insects? randall d. beer roger d. quinn hillel j. chiel roy e. ritzmann "the extended mind" - extended joseph s. fulda editorial i'm very pleased to have the opportunity to take over as editor-in-chief after holly rushmeier's strong leadership of the past three years. acm has made a big effort to get tog back on schedule after the delays caused by changing to a new publication system. with this issue, we are almost on schedule again and the backlog of accepted papers is significantly reduced. i hope to further shorten the time to return reviews to authors and to get accepted papers to readers. as part of that effort, i have asked steve fortune from bell labs, john hart from washington state, joe marks from merl, and jorg peters from the university of florida to be new members of the editorial board to replace those who retired with the end of holly's term. i expect that we'll see significant changes in the role of journals during my three-year term. right now, journals utilize peer reviews and a revision cycle to provide accreditation to research, but this process is tied to a particular distribution medium and requires financial support in the form of subscription fees. with the increasing use of web-based publishing, i expect that the accreditation and distribution functions will begin to separate. accreditation will continue to play an important role in scientific reputations and academic careers, while the publication media will become far more diverse. this diversification is particularly important for graphics because many of its important results are not represented well on paper. tog has moved in this direction with a web page that is expertly maintained by eric haines and contains occasional supplemental material (http://www.acm.org./tog/). if tog is to represent the true breadth of graphics, however, we will have to make much more of an effort in this direction. during the next three years, we will almost certainly see significant changes in graphics as well. clearly, the field is expanding beyond the creation of beautiful and useful images and image sequences. new emphases include increasingly realistic physical modeling, interactive worlds, and nonvisual modalities. graphics is a fascinating field, in part because researchers are able to adopt and adapt new ideas from so many other fields: art, vision, physics, materials science, control, optimization and biomechanics. i look forward to the innovative and creative work that the community will submit to tog during the next three years. jessica hodgins invisibility coherence for faster scan-line hidden surface algorithms invisibility coherence is a new technique developed to decrease the time necessary to render shaded images by existing scan-line hidden surface algorithms. invisibility coherence is a technique for removing portions of a scene that are not likely to be visible. if a large portion of the scene is invisible, as is often the case in three-dimensional computer graphics, the processing time eliminated may be substantial.
invisibility coherence takes advantage of the observation that a minimal amount of processing needs to be done on objects (polygons, patches, or surfaces) that will be hidden by other objects closer to the viewer. this fact can be used to increase the efficiency of current scan-line algorithms, including both polygon-based and parametrically curved surface-based algorithms. invisibility coherence was implemented and tested with the polygon hidden surface algorithm for constructive solid geometry developed by peter atherton [1]. the use of invisibility coherence substantially increases the efficiency of this scan-line algorithm. invisibility coherence should work as well or even better with other scan-line hidden surface algorithms, such as the lane-carpenter, whitted, and blinn algorithms for parametrically curved surfaces [2], or the watkins, romney, and bouknight algorithms for polygons [3, 4, 5]. gary a. crocker nature, man and man-made: the siggraph 95 technical slide set rosalee wolfe touchable 3d display hideki kakeya deep models, normative reasoning and legal expert systems this paper discusses the role of deep models and deontic logic in legal expert systems. whilst much research work insists on the importance of both these features, legal expert systems are being built using shallow models and no more than propositional logic, and are claimed to be successful in use. there is then a prima facie conflict between findings of research and commercial practice, which this paper explores, and attempts to explain. t. j. m. bench-capon managing robot autonomy and interactivity using motives and visual communication françois michaud minh tuan vu a deductive approach to program synthesis program synthesis is the systematic derivation of a program from a given specification. a deductive approach to program synthesis is presented for the construction of recursive programs. this approach regards program synthesis as a theorem-proving task and relies on a theorem-proving method that combines the features of transformation rules, unification, and mathematical induction within a single framework. zohar manna richard waldinger designing effective pictures: is photographic realism the only answer? (panel session) jim blinn donald p. greenberg margaret a. hagen exception handling in agent systems mark klein chrysanthos dellarocas letters jim thomas texture mapping for cel animation wagner toledo corrêa robert j. jensen craig e. thayer adam finkelstein subdivision surfaces in character animation tony derose michael kass tien truong asymptotic expansions of functional inverses bruno salvy john shackell application of machine learning to the maintenance of knowledge-based performance integration of machine learning methods into knowledge-based systems requires greater control over the application of the learning methods. recent research in machine learning has shown that isolated and unconstrained application of learning methods can eventually degrade performance. this paper presents an approach called performance-driven knowledge transformation for controlling the application of learning methods. the primary guidance for the control is performance of the knowledge base. the approach is implemented in the peak system. two experiments with peak illustrate how the knowledge base is transformed using different learning methods to maintain performance goals.
results demonstrate the ability of performance-driven knowledge transformation to control the application of learning methods and maintain knowledge base performance. lawrence b. holder efficient alias-free rendering using bit-masks and look-up tables greg abram lee westover highly parallel computing in simulation on dynamic bond portfolio management the bond portfolio management problem is formulated as a stochastic program based on interest rate scenarios. it has been proved in dupakova (1998) that small errors in constructing scenarios will not destroy the optimal solution. the aim of the contribution is to quantify, through carefully planned simulation studies, the magnitude of the above-mentioned errors and to give bounds, at a specified confidence level, for the optimal gap between the value related to the optimal first-stage solution of the unperturbed problem and the "true" optimal value. parallel computer numerical results are presented. vittorio moriggia marida bertocchi jitka dupaková using compensating reconfiguration to maintain military distributed simulations donald j. welch james m. purtilo using assortative mating in genetic algorithms for vector quantization problems carlos fernandes rui tavares cristian munteanu agostinho rosa a global illumination solution for general reflectance distributions françis x. sillion james r. arvo stephen h. westin donald p. greenberg estimating fuzzy phoneme similarity relations for continuous speech recognition hajin yu sung-joo kim yung-hwan oh v-collide: accelerated collision detection for vrml thomas c. hudson ming c. lin jonathan cohen stefan gottschalk dinesh manocha technical implications of proposed graphics standards (panel session) this panel session is intended to present recent technical developments in the efforts to standardize computer graphics. several alternative approaches to 3-d standards will be presented and contrasted. the virtual device metafile will be presented, as well as a proposed binding of the gks standard to fortran. presentations will be made by the following individuals: dr. peter bono is the chairman of the ansi x3h3 technical committee developing standards for computer graphics programming languages, and will present the current status of the effort to adopt 2-d gks as an international standard and as an american national standard. tom wright will be available to respond to questions about the programmer's minimal interface to graphics (pmig) proposal, and its current status as a new output level of the draft proposed american national standard (dpans) gks. mark skall will present a proposed binding of gks to the fortran language. mr. skall will also discuss progress in the field of formal specification of graphics standards and developments in the establishment of conformance/certification procedures for implementations of gks. theodore reed will present the technical content of the virtual device metafile (vdm) draft proposed american national standard. discussed will be the relationship of the vdm to gks and to the yet to be proposed virtual device interface. specific functionality of the vdm will be discussed as will specific bindings of that functionality as a character set extension and as a binary format. david shuey will present an overview of the programmer's hierarchical interface to graphics (phigs) proposal. the phigs proposal is intended to support hierarchical structuring of graphics data, in contrast to the core system and gks proposals. 
this type of structure addresses highly interactive graphics applications which need to modify the presentation and the relationships within graphics data. richard ehlers will present the attribute model of the phigs proposal and explore the relationship of attribute model to structured graphics data bases. also, mr. ehlers will discuss the viewing and transformation implications of structured graphics data bases using examples. gunter enderle is a member of the west german delegation to the international standards organization (iso) working group on computer graphics. herr enderle will be discussing iso proposals for the extension of the gks standard from 2-d to 3-d functionality. several such proposals have been made, including one by din (the official standards making body of the german federal republic) and a norwegian proposal called idigs. elaine sonderegger was a member of the acm siggraph graphics standards planning committee, and is the acm siggraph representative to ansi x3h3. ms. sonderegger will contrast the 3-d functionality of the core system, idigs, din proposed 3-d extensions, and phigs. david straayer peter bono richard ehlers gunter enderle theodore reed david shuey mark skall elaine sonderegger tom wright towards a web based simulation environment peter lorenz thomas j. schriber heiko dorwarth klaus-christoph ritter circumscription with homomorphisms: solving the equality and counterexample problems one important facet of common-sense reasoning is the ability to draw default conclusions about the state of the world, so that one can, for example, assume that a given bird flies in the absence of information to the contrary. a deficiency in the circumscriptive approach to common-sense reasoning has been its difficulties in producing certain default conclusions: one cannot conclude by default that tweety and blutto are distinct birds using ordinary circumscription, or conclude by default that a particular bird flies, if some birds are known not to fly. in this paper, we introduce a new form of circumscription, based on homomorphisms between models, that remedies these two problems and still retains the major desirable properties of traditional forms of circumscription. peter k. rathmann marianne winslett mark manasse multiple viewpoint rendering michael halle visual storytelling graham walters sharon calahan bill cone ewan johnson tia kratter glenn mcqueen bob pauley procedural field grasses lee a. butler david s. ebert multiresolution painting and compositing we describe a representation for multiresolution images---images that have different resolutions in different places---and methods for creating such images using painting and compositing operations. these methods are very easy to implement, and they are efficient in both memory and speed. only the detail present at a particular resolution is stored, and the most common painting operations, "over" and "erase", require time proportional only to the number of pixels displayed. we also show how fractional-level zooming can be implemented in order to allow a user to display and edit portions of a multiresolution image at any arbitrary size.
that is, the ability to model dynamic 3-d geographic data that can be distributed over the web and interactively visualized using a standard browser configuration. geovrml includes nodes for vrml97 that perform this task, addressing issues such as coordinate systems, scalability, animation, accuracy, and preservation of the original geographic data. the implementation is released as open source and includes various tools for generating geovrml data. all these facilities provide geoscientists with an excellent medium to present complex 3-d geographic data in a dynamic, interactive, and web-accessible format. we illustrate these capabilities using real-world examples drawn from diverse application areas. martin reddy lee iverson yvan g. leclerc phene-: creating a digital chimera tiffany holmes ray tracing objects defined by sweeping planar cubic splines jarke j. van wijk smooth invariant interpolation of rotations we present an algorithm for generating a twice-differentiable curve on the rotation group so(3) that interpolates a given ordered set of rotation matrices at their specified knot times. in our approach we regard so(3) as a lie group with a bi-invariant riemannian metric, and apply the coordinate-invariant methods of riemannian geometry. the resulting rotation curve is easy to compute, invariant with respect to fixed and moving reference frames, and also approximately minimizes angular acceleration. f. c. park bahram ravani an organizational framework for comparing adaptive artificial intelligence systems teresa a. blaxton brian c. kushner a preliminary excursion into step-logics we have suggested that a new kind of logical study that focuses on individual deductive steps is appropriate to agents that must do commonsense reasoning. in order to adequately study such reasoners, a formal description of such "steps" is necessary. here we carry further this program for the propositional case. in particular we give a result on completeness for reasoning about agents. j drapkin d perlis a space-time tradeoff for memory-based heuristics robert c. holte istvan t. hernadvolgyi the standards pipeline steve carson dick puk object-oriented simulation of paratrooper-vortex interactions t. glenn bailey jose c. belano philip s. beran jack m. kloeber hans j. petry using singular value decomposition to visualise relations within multi-agent systems michael schroeder anatomically based modeling jane wilhelms allen van gelder a rule discovery method based on approximate dependency inference atsuhiro takasu moonis ali ray tracing parametric surface patches utilizing numerical techniques and ray coherence kenneth i. joy murthy n. bhetanabhotla development and integration of ai applications in control centers: the sparse experience zita a. vale a. machado e moura m. fernanda fernandes couto rosado albino marques animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents we describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. conversation is created by a dialogue planner that produces the text as well as the intonation of the utterances. the speaker/listener relationship, the text, and the intonation in turn drive facial expression, lip motion, eye gaze, head motion, and arm gesture generators.
coordinated arm, wrist, and hand motions are invoked to create semantically meaningful gestures. throughout we will use examples from an actual synthesized, fully animated conversation. justine cassell catherine pelachaud norman badler mark steedman brett achorn tripp becket brett douville scott prevost matthew stone configuration management for multi-agent systems as heterogeneous distributed systems, multi-agent systems present some challenging configuration management issues. there are the problems of knowing how to allocate agents to computers, launch them on remote hosts, and once the agents have been launched, how to monitor their runtime status so as to manage computing resources effectively. in this paper, we present the retsina configuration manager, recoma. we describe its architecture and how it uses agent infrastructure, such as service discovery, to assist the multi-agent system administrator in allocating, launching, and monitoring a heterogeneous distributed agent system in a distributed and networked computing environment. (the authors acknowledge the significant contributions of matthew w. easterday in his earlier and exhaustive implementations of configuration management programs and cm design proposals. this research has been sponsored in part by darpa grant f-30602-98-2-0138 and the office of naval research grant n-00014-96-16-1-1222.) joseph a. giampapa octavio h. juarez-espinosa katia p. sycara depth-fair crossover in genetic programming matthew kessler thomas haynes cantata: visual programming environment for the khoros system mark young danielle argiro steven kubica visualising and debugging distributed multi-agent systems divine t. ndumu hyacinth s. nwana lyndon c. lee jaron c. collis the role of representation in interaction (abstract): discovering focal points among alternative solutions sarit kraus jeffrey s. rosenschein research and development for film production christian rouet keith goldfarb ed leonard darwyn peachey ken pearce enrique santos paul yanover robot navigation with range queries dana angluin jeffery westbrook wenhong zhu verification validation and accreditation of simulation models osman balci autonomous discovery in empirical domains gary livingston bruce g. buchanan practitioners' views on simulation the panelists are all actively engaged in applying simulation to a variety of systems. their areas of application include manufacturing, electronics, steel, and health systems. the panel discussion will address issues relating to the practice of simulation and make recommendations for future consideration. the panelists' position statements follow. kenneth j. musselman william a. clark stanley a. hendryx donald segal dynamic simulation of autonomous legged locomotion michael mckenna david zeltzer representation of legal text for conceptual retrieval judith p. dick animated free-form deformation: an interactive animation technique sabine coquillart pierre jancene clustering for glossy global illumination we present a new clustering algorithm for global illumination in complex environments. the new algorithm extends previous work on clustering for radiosity to allow for nondiffuse (glossy) reflectors. we represent clusters as points with directional distributions of outgoing and incoming radiance and importance, and we derive an error bound for transfers between these clusters.
the algorithm groups input surfaces into a hierarchy of clusters, and then permits clusters to interact only if the error bound is below an acceptable tolerance. we show that the algorithm is asymptotically more efficient than previous clustering algorithms even when restricted to ideally diffuse environments. finally, we demonstrate the performance of our method on two complex glossy environments. per h. christensen dani lischinski eric j. stollnitz david h. salesin robust learning aided by context john case sanjay jain matthias ott arun sharma frank stephan rendering antialiased shadows with depth maps william t. reeves david h. salesin robert l. cook edward tufte's visual explanations: a tapestry of images, comparisons, and principles russell k. needham fast and simple 2d geometric proximity queries using graphics hardware kenneth e. hoff andrew zaferakis ming lin dinesh manocha object shape and reflectance modeling from observation yoichi sato mark d. wheeler katsushi ikeuchi an algorithm and data structure for 3d object synthesis using surface patch intersections there are several successful systems that provide algorithms that allow for the intersection of polygonal objects or other primitive shapes to create more complex objects. our intent is to provide similar algorithms for intersecting surface patches. there have been contributions to this concept at the display algorithm level, that is, computing the intersection at the time the frame is generated. in an animation environment, however, it becomes important to incorporate the intersection in the data generation routines, in order that those parts of the intersected object that never contribute to an image are not processed by the display algorithm. this only increases the complexity of the object unnecessarily, and subsequently puts an additional burden on the display algorithms. an algorithm is described which uses a modified catmull recursive subdivision scheme to find the space curve which is the intersection of two bicubic patches. an associated data structure is discussed which incorporates this curve of intersection in the patch description in a way suitable for efficient display of the intersected object. sample output of these intersections are shown which serve to illustrate the capabilities and limitations of the described procedures. wayne e. carlson artistic evolution dong pfeifer todd larson eddie lee desert dreams paul rademacher michael north todd gaul curriculum descant: a new life for ai artifacts deepak kumar book reviews karen t. sutherland the art of survival editing images of text steven c. bagley gary e. kopec synthesizing natural textures michael ashikhmin extending the radiosity method to include specularly reflecting and translucent materials an extension of the radiosity method is presented that rigorously accounts for the presence of a small number of specularly reflecting surfaces in an otherwise diffuse scene, and for the presence of a small number of specular or ideal diffuse transmitters. the relationship between the extended method and earlier radiosity and ray-tracing methods is outlined. it is shown that all three methods are based on the same general equation of radiative transfer. a simple superposition of the earlier radiosity and ray-tracing methods in order to account for specular behavior is shown to be physically inconsistent, as the methods are based on different assumptions. specular behavior is correctly included in the present method. 
the extended radiosity method and example images are presented. holly e. rushmeier kenneth e. torrance editorial pointers diane crawford reconstruction and representation of 3d objects with radial basis functions we use polyharmonic radial basis functions (rbfs) to reconstruct smooth, manifold surfaces from point-cloud data and to repair incomplete meshes. an object's surface is defined implicitly as the zero set of an rbf fitted to the given surface data. fast methods for fitting and evaluating rbfs allow us to model large data sets, consisting of millions of surface points, by a single rbf --- previously an impossible task. a greedy algorithm in the fitting process reduces the number of rbf centers required to represent a surface and results in significant compression and further computational advantages. the energy-minimisation characterisation of polyharmonic splines results in a "smoothest" interpolant. this scale-independent characterisation is well-suited to reconstructing surfaces from non-uniformly sampled data. holes are smoothly filled and surfaces smoothly extrapolated. we use a non-interpolating approximation when the data is noisy. the functional representation is in effect a solid model, which means that gradients and surface normals can be determined analytically. this helps generate uniform meshes and we show that the rbf representation has advantages for mesh simplification and remeshing applications. results are presented for real-world rangefinder data. a rapid hierarchical radiosity algorithm pat hanrahan david salzman larry aupperle an empirical approach to the evaluation of icons j. m. webb p. f. sorenson n. p. lyons abstracts of interest b. shneiderman s. humphrey make all things make themselves thomas g. west quadrature prefiltering for high quality antialiasing this article introduces quadrature prefiltering, an accurate, efficient, and fairly simple algorithm for prefiltering polygons for scanline rendering. it renders very high quality images at reasonable cost, strongly suppressing aliasing artifacts. for equivalent rms error, quadrature prefiltering is significantly faster than either uniform or jittered supersampling. quadrature prefiltering is simple to implement and space-efficient; it needs only a small two-dimensional lookup table, even when computing nonradially symmetric filter kernels. previous algorithms have required either three-dimensional tables or a restriction to radially symmetric filter kernels. though only slightly more complicated to implement than the widely used box prefiltering method, quadrature prefiltering can generate images with much less visible aliasing artifacts. brian guenter jack tumblin certain algorithmic problems for lie algebras this short paper describes work similar to that appearing in buchberger's 1965 thesis inventing gröbner bases, but in the context of lie algebras. preceding buchberger by only three years, this paper, along with the two cited references, are the original papers defining what have become known as gröbner-shirshov bases. a. i. shirshov precedent, deontic logic, and inheritance the purpose of this paper is to establish some connections between precedent-based reasoning as it is studied in the field of artificial intelligence and law, particularly in the work of ashley, and two other fields: deontic logic and nonmonotonic logic. first, a deontic logic is described that allows for sensible reasoning in the presence of conflicting norms.
second, a simplified version of ashley's account of precedent-based reasoning is reformulated within the framework of this deontic logic. finally, some ideas from the theory of nonmonotonic inheritance are employed to show how ashley's account might be elaborated to allow for a richer representation of the process of argumentation. john f. horty the story of computer graphics walt bransford carl machover john hart steve silas joan collins frank foster judson rosebush modeling prosody automatically in concept-to-speech generation shimei pan an attempted dimensional analysis of the law governing government appeals in criminal cases s. mendelson spectrally optimal sampling for distribution ray tracing don p. mitchell ams: a knowledge-based approach to task representation, organization and coordination applying artificial intelligence (ai) to the study of office information systems (ois) remains a challenging area. in this paper, we will present a knowledge based approach to modelling office activities. we will focus both on the declarative representation of the office concepts and their relationships, and on the organization of office knowledge in abstract packets, at different levels of generalization. this organization will allow abstract knowledge to be reusable in different situations, and to process and construct concrete structures at different levels. the control structure that should model the behavior of office workers is also discussed. michel tueni jianzhong li pascal fares lighting controls for synthetic images the traditional point source light used for synthetic image generation provides no adjustments other than position, and this severely limits the effects that can be achieved. it can be difficult to highlight particular features of an object. this paper describes new lighting controls and techniques which can be used to optimize the lighting for synthetic images. these controls are based in part on observations of the lights used in the studio of a professional photographer. controls include a light direction to aim a light at a particular area of an object, light concentration to simulate spotlights or floodlights, and flaps or cones to restrict the path of a light. another control relates to the color of the light source and its effect on the perceived color of specular highlights. an object color can be blended with a color for the highlights to simulate different materials, paints, or lighting conditions. this can be accomplished dynamically while the image is displayed, using the video lookup table to approximate the specular color shift. david r. warn a knowledge-based approach for the validation of simulation models: the foundation a new perspective of the validation problem for simulation models is formulated in this article. the approach is knowledge-based and focuses on behavioral validation. it has the important feature of providing a basis for the development of a software environment that can automate the validation activity. discrete, continuous and combined simulation models can be treated in a uniform manner. the key element of the approach is a validation knowledge base (vkb). this is developed as three disjoint sets of relationships among the input and output variables of the simulation model. these relationships serve to capture all aspects of expected behavior of the simulation model.
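the idea of a vkb holding expected input/output relationships can be sketched in a few lines of python. the fragment below is illustrative only; the three group names and the example predicates for a single-server queue are assumptions made here, not the authors' formulation.

```python
# illustrative sketch of a validation knowledge base (vkb): expected
# input/output relationships kept in three named groups, each expressed as a
# predicate over one simulation run's inputs and outputs.
def vkb_violations(vkb, inputs, outputs):
    """return the (group, name) of every expected-behavior relationship
    that the given run violates."""
    return [(group, name)
            for group, relations in vkb.items()
            for name, predicate in relations.items()
            if not predicate(inputs, outputs)]

# hypothetical example for a single-server queue model
vkb = {
    "range":       {"waits are nonnegative":
                    lambda i, o: o["mean_wait"] >= 0.0},
    "boundary":    {"no arrivals implies no waiting":
                    lambda i, o: i["arrival_rate"] > 0 or o["mean_wait"] == 0.0},
    "consistency": {"utilization stays below one":
                    lambda i, o: o["utilization"] < 1.0},
}
run_inputs = {"arrival_rate": 0.8}
run_outputs = {"mean_wait": 3.2, "utilization": 0.8}
print(vkb_violations(vkb, run_inputs, run_outputs))   # [] -> behavior as expected
```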
a simple characterization of model behavior is presented which provides the basis for specifying the relationships from which the vkb is constructed. the utilization of all the information in the vkb in an efficient way is an important subgoal of our system architecture. this requirement gives rise to an experiment design problem. this problem is carefully formulated and examined within the framework of the behavior characterizations that exist in the vkb. in particular, a basis for its solution is established in a constraint set framework by carrying out a transformation on the relationships within the vkb. the constraint set context for the problem has the advantage of providing an environment which not only facilitates analysis but also enables the application of a variety of solution techniques. louis g. birta f. nur özmizrak validation: expanding the boundaries (panel session) richard c. nance jerry banks jorge haddock kenneth n. mckay nelson pacheco jeff rothenberg a tale of two representations sandip sen image models narendra ahuja b. j. schachter an architectural design for digital objects paul a. fishwick neural network models in simulation: a comparison with traditional modeling approaches p. a. fishwick computer animation nadia magnenat thalmann daniel thalmann the hidden path: indexing in information management chris hallgren comparison of surface and derivative evaluation methods for the rendering of nurb surfaces three methods for evaluating the surface coordinates, first derivatives, and normal vectors of a nurb surface are compared. these methods include forward differencing, knot insertion, and a two-stage cox-de boor technique. the computational complexity of each of these techniques is analyzed and summarized. the use of hermite functions is shown to yield a poor approximation for the shading functions of a nurb surface. an improved method for computing derivatives by knot insertion is presented. an efficient algorithm for computing the forward difference matrix and a method for using forward differencing to compute the first derivatives of a nurb surface are also presented. william l. luken fuhua cheng using mogul 2.0 to produce simulation models and animations of complex computer systems and networks peter l. haigh modeling systems using discrete event simulation in this article, we present an introduction to discrete event modeling and discuss some of the important issues related to model development. we are not talking about simulation codes nor statistical models for the outputs from such programs. rather we will focus on the interface between a real system and a simulation code where we describe the system. lee schruben natural language learning eugene charniak toward a computational theory of arguing with precedents this paper presents a partial theory of arguing with precedents in law and illustrates how that theory supports multiple interpretations of a precedent. the theory provides succinct computational definitions of (1) the most persuasive precedents to cite in the principal argument roles and (2) the most salient aspects of the precedents to emphasize when citing them in those roles. an extended example, drawn from the output of the hypo program, illustrates the range of different descriptions of the same precedent that are supported by the theory. each description focuses on different salient aspects of the case depending on the argument context. k. d.
ashley towards diagram processing: a diagrammatic information system michael anderson the design of an automated assistant for acquiring strategic knowledge t. gruber p. cohen assessing multimedia similarity: a framework for structure and motion vasilis delis dimitris papadias nikos mamoulis a model for belief revision in a multi-agent environment (abstract) aldo franco dragoni research of opportunities for concurrent execution of algorithms for time normalization and comparison of acoustic models in speech recognition plamen petkov clustered time warp and logic simulation we present, in this paper, a hybrid algorithm which makes use of time warp between clusters of lps and a sequential algorithm within the cluster. time warp is, of course, traditionally implemented between individual lps. the algorithm was implemented in a digital logic simulator, and its performance compared to that of time warp. resting upon this platform we develop a family of three checkpointing algorithms, each of which occupies a different point in the spectrum of possible trade-offs between memory usage and execution time. the algorithms were implemented on several digital logic circuits and their speed, number of states saved and maximal memory consumption were compared to those of time warp. one of the algorithms saved between 35 and 50% of the maximal memory consumed by time warp (depending upon the number of processors used), while the other two decreased the maximal usage up to 30%. the latter two algorithms exhibited a speed comparable to time warp, while the first algorithm was 30-60% slower. these algorithms are also simpler to implement than optimal checkpointing algorithms. herve avril carl tropper haar wavelets over triangular domains with applications to multiresolution models for flow over a sphere gregory m. nielson il-hong jung junwon sung legged robots research on legged machines can lead to the construction of useful legged vehicles and help us to understand legged locomotion in animals. marc h. raibert analysis-based program transformations mitchell wand emergent bucket brigading: a simple mechanism for improving performance in multi-robot constrained-space foraging tasks esben h. ostergaard gaurav s. sukhatme maja j. matarić prefiltered antialiased lines using half-plane distance functions we describe a method to compute high-quality antialiased lines by adding a modest amount of hardware to a fragment generator based upon half-plane edge functions. (a fragment contains the information needed to paint one pixel of a line or a polygon.) we surround an antialiased line with four edge functions to create a long, thin rectangle. we scale the edge functions so that they compute signed distances from the four edges. at each fragment within the antialiased line, the four distances to the fragment are combined and the result indexes an intensity table. the table is computed by convolving a filter kernel with a prototypical line at various distances from the line's edge. because the convolutions aren't performed in hardware, we can use wider, more complex filters with better high-frequency rejection than the narrow box filter common to supersampling antialiasing hardware. the result is smoother antialiased lines. our algorithm is parameterized by the line width and filter radius. these parameters do not affect the rendering algorithm, but only the setup of the edge functions. our algorithm antialiases line endpoints without special handling.
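the distance-to-intensity lookup at the heart of this method can be sketched as follows. the fragment is illustrative, not the authors' hardware: it uses a gaussian filter so that the filtered coverage of a half-plane has a closed form, and it combines the four per-edge values with a simple minimum, whereas the paper precomputes the combined result in its table.

```python
import math

def build_table(filter_radius, n=256):
    # filtered coverage of a half-plane as a function of signed distance to its
    # edge, for a gaussian kernel with sigma = filter_radius / 2 (an assumption).
    sigma = filter_radius / 2.0
    dmax = 2.0 * filter_radius
    return [0.5 * (1.0 + math.erf((-dmax + 2.0 * dmax * i / (n - 1))
                                  / (sigma * math.sqrt(2.0))))
            for i in range(n)], dmax

def lookup(table, dmax, d):
    d = max(-dmax, min(dmax, d))                    # clamp to the tabulated range
    return table[round((d + dmax) / (2.0 * dmax) * (len(table) - 1))]

def fragment_intensity(edge_distances, table, dmax):
    # edge_distances: signed distances from the fragment to the four edges of
    # the widened line rectangle (positive = inside). taking the minimum of the
    # per-edge coverages is a simple stand-in for the paper's tabulated blend.
    return min(lookup(table, dmax, d) for d in edge_distances)

table, dmax = build_table(filter_radius=1.0)
print(fragment_intensity([0.5, 0.5, 3.0, 3.0], table, dmax))  # interior fragment, about 0.84
```

note that with a gaussian kernel the half-plane coverage is just the normal cdf of the signed distance, which is why the table can be filled with erf; a different kernel would require the numerical convolution the abstract describes.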
we exploit this endpoint handling to paint small blurry squares as approximations to small antialiased round points. we do not need a different fragment generator for antialiased lines, and so can take advantage of all optimizations introduced in the existing fragment generator. robert mcnamara joel mccormack norman p. jouppi computer graphics in commercial and broadcast production (panel) the goal of this panel is to acquaint the listener with what is perhaps the most visible area of computer graphics - the use of animation in television and motion pictures. creating animation for commercial use is subject to many factors outside of the animator's control. pieces are constrained by deadlines and budgets and yet must be "new, fresh and different" in order to satisfy the client. the speakers on the panel represent four of the leading computer animation production houses and are well acquainted with having to work with these constraints while still delivering work of the highest possible quality. in addition to showing some of their work, the participants will discuss methods for creating animation for commercial use. carl rosendahl the 22nd annual acm international computer chess championship danny kopec monty newborn mike valvo synthesizing realistic facial expressions from photographs frederic pighin jamie hecker dani lischinski richard szeliski david h. salesin data parallel volume rendering as line drawing peter schröder gordon stoll knowledge-based simulation at rand jeff rothenberg spiraling edge: fast surface reconstruction from partially organized sample points many applications produce three-dimensional points that must be further processed to generate a surface. surface reconstruction algorithms that start with a set of unorganized points are extremely time-consuming. sometimes, however, points are generated such that there is additional information available to the reconstruction algorithm. we present spiraling edge, a specialized algorithm for surface reconstruction that is three orders of magnitude faster than algorithms for the general case. in addition to sample point locations, our algorithm starts with normal information and knowledge of each point's neighbors. our algorithm produces a localized approximation to the surface by creating a star-shaped triangulation between a point and a subset of its nearest neighbors. this surface patch is extended by locally triangulating each of the points along the edge of the patch. as each edge point is triangulated, it is removed from the edge and new edge points along the patch's edge are inserted in its place. the updated edge spirals out over the surface until the edge encounters a surface boundary and stops growing in that direction, or until the edge reduces to a small hole that is filled by the final triangle. patricia crossno edward angel surface drawing steven schkolne cici koenig lcis: a boundary hierarchy for detail-preserving contrast reduction jack tumblin greg turk large margin classification using the perceptron algorithm yoav freund robert e. schapire intention reconsideration in complex environments martijn schut michael wooldridge simulation success stories: business process reengineering kathi l. hunt gregory a. hansen edwin f. madigan richard a. phelps menu planning by an expert (abstract only) the development of an expert menu planning system using the database management system ingres and the programming language "c" is presented. there are many factors involved in menu planning that tend to make the dietitian's job tedious and long.
therefore, the overall goal of the system is to generate menus for an individual (taking into account his particular nutrient requirements and preferences) in a more efficient and accurate manner than when done manually by the nutritionist. there are five phases involved in developing an expert system [1] : problem identification, conceptualization, formalization, implementation, and testing. these phases encompass such things as: defining the characteristics of the problem, choosing an appropriate knowledge representation scheme, selecting a knowledge engineering tool, building a prototype system, refining the system, and evaluating the final product. each phase will be addressed to explain the actual implementation of the menu planning system. [1] b.g. buchanan et al. "constructing an expert system." in f. hayes-roth et al. eds., building expert systems. reading, massachusetts: addison-wesley publishing company, inc., (1983) 127 kim a. mcmahon book reviews: game programming lynellen d. s. perry from theory to practice: the utep robot in the aaai 96 and aaai 97 robot contests c. baral l. floriano a. hardesty d. morales m. nogueira t. c. son this is not a pipe rich gold an overview of knowledge representation this is a brief overview of terminology and issues related to knowledge representation (here-after kr) research, intended primarily for researchers working on semantic data models within database management and program specifications within programming languages/software engineering. knowledge representation is a central problem in artificial intelligence (ai) today. its importance stems from the fact that the current design paradigm for "intelligent" systems stresses the need for expert knowledge in the system along with associated knowledge-handling facilities. this paradigm is in sharp contrast to earlier ones which might be termed "power-oriented" [goldstein and papert 77] in that they placed an emphasis on general purpose heuristic search techniques [nilsson 71]. john mylopoulos automatic recognition and mapping of constraints for motion retargetting rama bindiganavale norman i. badler bargaining with deadlines tuomas sandholm nir vulkan book preview steven cherry introduction rick parent bjork: all is full of love james mann pasi johansson herve dhorne a coherent projection approach for direct volume rendering jane wilhelms allen van gelder detecting abnormal situations from real time power plant data using machine learning c. subramanian m. ali knowledge discovery based on neural networks limin fu digital simulation for energy conservation in a large wind tunnel plant system this paper documents a feasibility study of mathematically modeling the wind tunnel complex and associated plant in the von karman gas dynamics facility at the air force's arnold engineering development center in tennessee. the ultimate goal of the modeling effort is to effect energy conservation measures by modifying operational procedures and plant hardware. a general theory is proposed to model the aerodynamics and losses of each plant or tunnel component in terms of a set of 33 equations based on one-dimensional, unsteady, nonisentropic flow and admitting arbitrary interconnections between components. a computer program is currently under development to apply the theoretical model using the ibm continuous system modeling program iii. in conjunction with the modeling effort, an assessment was made of the existing experimental data base which could be used to drive the mathematical model. 
requirements for experimental data are discussed in general terms. frederick l. shope advanced topics in slam in 1979, the state-of-the-art in simulation languages was extended with the introduction of slam, the first language that provided three different modeling viewpoints in a single integrated framework. as experience with the use of slam increased, the need for enhanced capabilities within slam became apparent. for this reason, pritsker & associates, inc. has refined and expanded slam capabilities to produce slam ii. a. alan b. pritsker capturing the motions of actors in movies masanobu yamamoto katsutoshi yagishita hajime yamanaka naoto ohkubo simulation with c the c programming language was developed at bell laboratories in 1972 by dennis ritchie. since that time, c has had major acceptance as a modern programming language suitable for a large variety of applications. those applications include the operating system unix, which was written in c for portability. it requires only a c compiler and the rewriting of some low level routines in machine language for implementation on virtually any computer. the best documentation of c is provided in a book entitled the c programming language by kernighan and ritchie.[1] this paper explores the use of c as the host language for discrete event simulation. one of the primary motivators is the portability of c code. c compilers are now available for a variety of computers from microcomputers to mainframes. hence, a simulation model developed in c could execute on a microcomputer or a mainframe, given a standard c compiler. the standard for c is well defined via compilers which have a full implementation of c. kernighan and ritchie's text provides the documentation for that standard. floyd h. grant douglas g. macfarland knowledge and natural language processing kbnl is a knowledge-based natural language processing system that is novel in several ways, including the clean separation it enforces between linguistic knowledge and world knowledge, and its use of knowledge to aid in lexical acquisition. applications of kbnl include intelligent interfaces, text retrieval, and machine translation. jim barnett kevin knight inderjeet mani elaine rich rendering synthetic objects into real scenes: bridging traditional and image-based graphics with global illumination and high dynamic range photography paul debevec sensitivity analysis in ranking and selection for multiple performance measures douglas j. morrice john butler peter mullarkey srinagesh gavirneni the connectivity shapes video in this video we introduce a 3d shape representation that is based solely on mesh connectivity -- the connectivity shape. given a connectivity, we define its natural geometry as a smooth embedding in space with uniform edge lengths and describe efficient techniques to compute it. furthermore, we show how to generate connectivity shapes that approximate given shapes. the details will soon be published in the form of a full paper. martin isenburg stefan gumhold craig gotsman descriptive sampling: an improvement over latin hypercube sampling eduardo saliby slam tutorial this paper provides an overview of the important features of the slam simulation language. the focus of the paper is the unified system-modeling framework of slam which allows systems to be viewed from process, event, or state variable perspectives. claude dennis pegden a. alan b. pritsker the battle scene seiichi ishii tpass: dynamic, discrete-event simulation and animation of a toll plaza robert t. 
redding andrew j. junga sandland heiko lueg map generation by means of autonomous robots and possibility propagation techniques maite lopez-sanchez ramon lopez de mantaras carles sierra special generators and relations for some orthogonal and symplectic groups over gf(2) gerhard grams what does fuzzy logic bring to ai? didier dubois henri prade a software testbed for the development of 3d raster graphics systems t. whitted d. m. weimer projecting computer graphics on moving surfaces: a simple calibration and tracking method claudio pinhanez frank nielsen kim binsted pattern-based texturing revisited fabrice neyret marie-paule cani a stroke-order free chinese handwriting input system based on relative stroke positions and back-propagation networks wing-nin leung kam-shun cheng methodical design and maintenance of well structured rule base kunihiko higa is designing a neural network application an art or a science? roman erenshteyn richard foulds scott galuska hairy brushes steve strassmann molecular applications of volume rendering and 3-d texture maps david s. goodsell arthur j. olson industrial strength simulation using gpss/h robert c. crain douglas s. smith a frame buffer system with enhanced functionality a video-resolution frame buffer system with 32 bits per pixel is described. the system includes, in addition to standard features for limited zoom and pan, an arithmetic unit at the update port which allows local computation of many frequently-used pixel-level functions combining stored pixel values with incoming pixel values. in addition to the standard arithmetic and logical functions there are functions for sum to maximum pixel value and difference to minimum pixel value. comparisons between incoming and stored data are used to implement conditional writes based on depth values for depth-buffer algorithms. update and refresh ports are designed for a wide range of flexibility allowing simultaneous use by separate tasks and various functional rearrangements of the 32-bit pixel words. the memory architecture, refresh and update ports are described. examples of widely divergent modes of operation are provided. f. c. crow m. w. howard logic programming for large scale applications in law: a formalisation of supplementary benefit legislation t. j. m. bench-capon g. o. robinson t. w. routen m. j. sergot bidirectional heuristic search again dennis de champeaux variance reduction through smoothing and control variates for markov chain simulations sigrun andradottir daniel p. heyman teunis j. ott simulation analysis the use of a computer simulation model to learn about the system(s) under study must involve an analysis of the results from the simulation program itself. a classification of simulation types is given which provides a framework for a treatment of simulation analysis. a more detailed discussion of the most difficult class of simulation analysis is presented. various goals of analyses are mentioned, together with a brief discussion of related topics. w. david kelton curved pn triangles alex vlachos jörg peters chas boyd jason l. mitchell scanning physical interaction behavior of 3d objects we describe a system for constructing computer models of several aspects of physical interaction behavior, by scanning the response of real objects. the behaviors we can successfully scan and model include deformation response, contact textures for interaction with force-feedback, and contact sounds. 
the system we describe uses a highly automated robotic facility that can scan behavior models of whole objects. we provide a comprehensive view of the modeling process, including selection of model structure, measurement, estimation, and rendering at interactive rates. the results are demonstrated with two examples: a soft stuffed toy which has significant deformation behavior, and a hard clay pot which has significant contact textures and sounds. the results described here make it possible to quickly construct physical interaction models of objects for applications in games, animation, and e-commerce. a simulation environment for the coordinated operation of multiple autonomous underwater vehicles joão borges de sousa aleks göllu volume rendering robert a. drebin loren carpenter pat hanrahan algorithmic aspects in speech recognition: an introduction speech recognition is an area with a considerable literature, but there is little discussion of the topic within the computer science algorithms literature. many computer scientists, however, are interested in the computational problems of speech recognition. this paper presents the field of speech recognition and describes some of its major open problems from an algorithmic viewpoint. our goal is to stimulate the interest of algorithm designers and experimenters to investigate the algorithmic problems of effective automatic speech recognition. adam l. buchsbaum raffaele giancarlo fast computation of shadow boundaries using spatial coherence and backprojections this paper describes a fast, practical algorithm to compute the shadow boundaries in a polyhedral scene illuminated by a polygonal light source. the shadow boundaries divide the faces of the scene into regions such that the structure or "aspect" of the visible area of the light source is constant within each region. the paper also describes a fast, practical algorithm to compute the structure of the visible light source in each region. both algorithms exploit spatial coherence and are the most efficient yet developed. given the structure of the visible light source in a region, queries of the form "what specific areas of the light source are visible?" can be answered almost instantly from any point in the region. this speeds up by several orders of magnitude the accurate computation of first level diffuse reflections due to an area light source. furthermore, the shadow boundaries form a good initial decomposition of the scene for global illumination computations. a. james stewart sherif ghali elastic time we introduce a new class of synchronization protocols for parallel discrete event simulation, those based on near- perfect state information (npsi). npsi protocols are adaptive dynamically controlling the rate at which processes constituting a parallel simulation proceed with the goal of completing a simulation efficiently. we show by analysis that a class of adaptive protocols (that includes npsi and several others) can both arbitrarily outperform and be arbitrarily outperformed by the time warp synchronization protocol. this mixed result both substantiates the promising results we and other adaptive protocol designers have observed, and cautions those who might assume that any adaptive protocol will always be better than any nonadaptive one. we establish in an experimental study that a particular npsi protocol, the elastic time algorithm, outperforms time warp, both temporally and spatially on every workload tested. 
although significant options remain with respect to the design of eta, the work presented here establishes the class of npsi protocols as a very promising approach. sudhir srinivasan paul f. reynolds hidden line elimination in projected grid surfaces d. p. anderson visibility-ordering meshed polyhedra a visibility-ordering of a set of objects from some viewpoint is an ordering such that if object a obstructs object b, then b precedes a in the ordering. an algorithm is presented that generates a visibility-ordering of an acyclic convex set of meshed convex polyhedra. this algorithm takes time linear in the size of the mesh. modifications to this algorithm and/or preprocessing techniques are described that permit nonconvex cells, nonconvex meshes (meshes with cavities and/or voids), meshes with cycles, and sets of disconnected meshes to be ordered. visibility-ordering of polyhedra is applicable to scientific visualization, particularly direct volume rendering. it is shown how the ordering algorithms can be used for domain decomposition of finite element meshes for parallel processing, and how the data structures used by these algorithms can be used to solve the spatial point location problem. the effects of cyclically obstructing polyhedra are discussed and methods for their elimination are described, including the use of the delaunay triangulation. methods for converting nonconvex meshes into convex meshes are described. peter l. williams dynamic arguments in a case law domain in this paper we describe an approach to reasoning with cases which takes into account the view that case law evolves through a _series_ of decisions. this is in contrast to approaches which take as a starting point a set of decided cases, with no account taken of the order in which they were decided. the model of legal reasoning we follow is based on levi's account which shows how decided cases often need to be reinterpreted in the light of subsequent decisions, so that features of cases wax and wane in importance. our aim is to reproduce the arguments that could have been used in a given case, rather than to apply a retrospective understanding of the law to them. a second novel feature is that we use a general purpose ontology to describe the cases, rather than one developed specifically to model the pertinent cases. the paper describes a prototype implementation, and uses an example to illustrate how our approach works. after this case by case description we make some remarks on the insights gained, and draw some conclusions. john henderson trevor bench-capon censor-sheep dani rosen don't pull the plug! wobbe koning curve-drawing algorithms for raster displays the midpoint method for deriving efficient scan-conversion algorithms to draw geometric curves on raster displays is described. the method is general and is used to transform the nonparametric equation f(x,y) = 0, which describes the curve, into an algorithm that draws the curve. floating point arithmetic and time-consuming operations such as multiplies are avoided. the maximum error of the digital approximation produced by the algorithm is one-half the distance between two adjacent pixels on the display grid. the midpoint method is compared with the two-point method used by bresenham, and is seen to be more accurate (in terms of the linear error) in the general case, without increasing the amount of computation required. the use of the midpoint method is illustrated with examples of lines, circles, and ellipses. 
the considerations involved in using the method to derive algorithms for drawing more general classes of curves are discussed. (a short illustrative sketch of the midpoint idea appears below.) jerry van aken mark novak field of view control for closed-loop visually-guided motion william s. gribble domain knowledge and the design process during the past 10 or 12 years, artificial intelligence researchers have explored techniques for bringing large amounts of domain knowledge to bear in solving ill-structured problems. several programs that make use of these knowledge-based techniques are currently being developed to assist in various design tasks. this paper introduces one technique---rule-based programming---and illustrates its use with two programs, r1 and xsel, which are used by digital equipment corporation in the design of computer system configurations. john mcdermott r-buffer: a pointerless a-buffer hardware architecture we present a graphics hardware architecture that implements carpenter's a-buffer. the a-buffer is a software renderer that uses pointer based linked lists. our pointerless approach computes order independent transparency for any number of layers with minimal hardware complexity. statistics are shown for a variety of different scenes using a trace based methodology, with an instrumented mesa opengl implementation. the architecture is shown to require from 2.1 to 3.6 times more memory than traditional z-buffering. a detailed hardware design is provided. order independent transparency is computed without application sorting and without artifacts. the architecture can also be used for antialiasing, and an example of carpenter's classical a-buffer antialiasing is shown. craig m. wittenbrink on the parallel risch algorithm (ii) it is proved that, under the usual restrictions, the denominator of the integral of a purely logarithmic function is the expected one, that is, all factors of the denominator of the integrand have their multiplicity decreased by one. furthermore, it is determined which new logarithms may appear in the integration. j. h. davenport b. m. trager painterly rendering for animation barbara j. meier structural approach for the recognition of printed documents stoian donchev georgi gluhcher reinforcement learning and mistake bounded algorithms yishay mansour designing solid objects using interactive sketch interpretation david pugh conflict representation and classification in a domain-independent conflict management framework k. s. barber t. h. liu a. goel c. e. martin linear constraints for deformable non-uniform b-spline surfaces george celniker will welch efficient algorithms for line and curve segment intersection using restricted predicates jean-daniel boissonnat jack snoeyink compositing 3-d rendered images tom duff rendering csg models with a zz-buffer david salesin jorge stolfi advanced use of simula this paper is a tutorial on program development in simula. it assumes a reading knowledge of simula, and sketches the design of a local area network simulator (cambridge ring architecture) in five logical levels: machine interface, queueing, simulation primitives, data collection primitives and finally the network components. besides program development technique, we also emphasize the value of class body actions, inner, the virtual mechanism and data protection. 
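the following is a minimal illustrative sketch, not code from the van aken and novak paper above: it applies the midpoint idea they describe (choosing the next pixel by the sign of the implicit function evaluated at the midpoint between the two candidate pixels) to the familiar circle case f(x, y) = x^2 + y^2 - r^2, using only integer arithmetic.

def midpoint_circle(cx, cy, r):
    """rasterize a circle of integer radius r centered at (cx, cy).

    illustrative sketch of the midpoint test: the sign of the implicit
    function at the midpoint between the two candidate pixels selects
    the next pixel; only integer adds and shifts are needed.
    """
    pixels = []
    x, y = 0, r
    d = 1 - r                      # decision variable: f at the first midpoint, scaled to an integer
    while x <= y:
        # mirror the computed octant into all eight octants
        for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            pixels.append((cx + px, cy + py))
        if d < 0:                  # midpoint inside the circle: keep y
            d += 2 * x + 3
        else:                      # midpoint outside the circle: step y down
            d += 2 * (x - y) + 5
            y -= 1
        x += 1
    return pixels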
graham birtwistle virtual workbench - a non-immersive virtual environment for visualizing and interacting with 3d objects for scientific visualization upul obeysekare chas williams jim durbin larry rosenblum robert rosenberg fernando grinstein ravi ramamurthi alexandra landsberg william sandberg pedagogical agents on the web erin shaw w. lewis johnson rajaram ganeshan a representation of branch-cut information charles m. patton object voxelization by filtering milos sramek arie kaufman why simulation works a. alan b. pritsker venus pie trap daniel lazarow a memetic algorithm to schedule planned maintenance for the national grid e. k. burke a. j. smith solving the problems that count the excitement of doing research in computer graphics has not diminished in its 35 years of life. it remains a quick paced, intellectually challenging field that necessarily bends some traditional scientific methodologies. i think, however, that a recalibration of how we do things is in order. i offer a personal view of the "hows and whys" of that recalibration. eugene fiume non-invasive, interactive, stylized rendering alex mohr michael gleicher new technologies and applications in robotics takeo kanade michael l. reed lee e. weiss cyc: toward programs with common sense cyc is a bold attempt to assemble a massive knowledge base (on the order of 10^8 axioms) spanning human consensus knowledge. this article examines the need for such an undertaking and reviews the authors' efforts over the past five years to begin its construction. the methodology and history of the project are briefly discussed, followed by a more developed treatment of the current state of the representation language used (epistemological level), techniques for efficient inferencing and default reasoning (heuristic level), and the content and organization of the knowledge base. douglas b. lenat r. v. guha karen pittman dexter pratt mary shepherd automating the metamodeling process don caughlin multiple-center-of-projection images paul rademacher gary bishop vizard ii, a pci-card for real-time volume rendering m. meißner u. kanus w. straßner new quadric metric for simplifying meshes with appearance attributes complex triangle meshes arise naturally in many areas of computer graphics and visualization. previous work has shown that a quadric error metric allows fast and accurate geometric simplification of meshes. this quadric approach was recently generalized to handle meshes with appearance attributes. in this paper we present an improved quadric error metric for simplifying meshes with attributes. the new metric, based on geometric correspondence in 3d, requires less storage, evaluates more quickly, and results in more accurate simplified meshes. meshes often have attribute discontinuities, such as surface creases and material boundaries, which require multiple attribute vectors per vertex. we show that a wedge-based mesh data structure captures such discontinuities efficiently and permits simultaneous optimization of these multiple attribute vectors. in addition to the new quadric metric, we experiment with two techniques proposed in geometric simplification, memoryless simplification and volume preservation, and show that both of these are beneficial within the quadric framework. the new scheme is demonstrated on a variety of meshes with colors and normals. 
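to make the quadric error metric in the simplification abstract above concrete, here is a small python/numpy sketch of the basic geometric plane quadric (the garland-heckbert form that the paper builds on); it does not include the appearance-attribute or wedge extensions the paper describes, and the function names are illustrative only.

import numpy as np

def plane_quadric(a, b, c, d):
    """quadric q for the plane ax + by + cz + d = 0, with (a, b, c) a unit normal.

    for a homogeneous point v = (x, y, z, 1), v^t q v is the squared
    distance from the point to the plane, where q = p p^t and p = (a, b, c, d).
    """
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def vertex_error(q, v):
    """sum of squared distances of point v to all planes accumulated in q."""
    vh = np.append(np.asarray(v, dtype=float), 1.0)
    return float(vh @ q @ vh)

# usage sketch: each vertex accumulates the sum of the quadrics of its incident
# triangle planes; an edge collapse (v1, v2) -> v is scored with q1 + q2,
# and the collapse with the smallest error is performed first.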
hugues hoppe abductive coordination for logic agents anna ciampolini evelina lamma paola mello cesare stefanelli interactive deformations from tensor fields ed boring alex pang supercomputer assisted brain visualization with an extended ray tracer don stredney roni yagel stephen f. may michael torello k museum shinsuke baba jennifer meloon stephen duck on being responsible (abstract) n. r. jennings a competitive approach to game learning christopher d. rosin richard k. belew using constraint logic programming for model-based diagnosis: the modic system edward k. yu graphics goodies #1 - a filling algorithm for arbitrary regions bernd lind tomas hrycej issue spotting in a system for searching interpretation spaces a method for spotting issues is described which uses a system we are developing for searching interpretations spaces and constructing legal arguments. the system is compatible with the legal philosophy known as legal positivism, but does not depend on its notion of clear cases. ai methods applied in the system include an atms reason maintenance system, poole's framework for default reasoning, and an interactive natural deduction theorem prover with a programmable control component for including domain-dependent heuristic knowledge. our issue spotting method is compared with gardner's program for identifying the hard and easy issues raised by offer and acceptance law school examination questions. t. f. gordon ant-like missionaries and cannibals: synthetic pheromones for distributed motion control h. van dyke parunak sven brueckner expressive expression mapping with ratio images facial expressions exhibit not only facial feature motions, but also subtle changes in illumination and appearance (e.g., facial creases and wrinkles). these details are important visual cues, but they are difficult to synthesize. traditional expression mapping techniques consider feature motions while the details in illumination changes are ignored. in this paper, we present a novel technique for facial expression mapping. we capture the illumination change of one person's expression in what we call an _expression ratio image_ (eri). together with geometric warping, we map an eri to any other person's face image to generate more expressive facial expressions. zicheng liu ying shan zhengyou zhang zlayer: simulating depth with extended parallax scrolling desmond hii relating sentences and semantic networks with procedural logic robert f. simmons daniel chester a signal-processing framework for inverse rendering realism in computer-generated images requires accurate input models for lighting, textures and brdfs. one of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by _inverse rendering_. however, inverse rendering methods have been largely limited to settings with highly controlled lighting. one of the reasons for this is the lack of a coherent mathematical framework for inverse rendering under general illumination conditions. our main contribution is the introduction of a signal-processing framework which describes the reflected light field as a convolution of the lighting and brdf, and expresses it mathematically as a product of spherical harmonic coefficients of the brdf and the lighting. inverse rendering can then be viewed as deconvolution. we apply this theory to a variety of problems in inverse rendering, explaining a number of previous empirical results. 
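the convolution statement in the inverse-rendering abstract above is often illustrated with the lambertian special case from the authors' related irradiance-environment-map work; the formula below is that commonly cited case, shown only to make the claim concrete and not necessarily in the paper's own notation. the spherical-harmonic coefficients of the reflected (irradiance) signal are the lighting coefficients scaled by a per-order kernel, so inverse rendering amounts to dividing the coefficients back out wherever the kernel is nonzero:

E_{lm} = \hat{A}_l \, L_{lm}, \qquad \hat{A}_0 = \pi, \quad \hat{A}_1 = \tfrac{2\pi}{3}, \quad \hat{A}_2 = \tfrac{\pi}{4}, \quad \hat{A}_l = 0 \ \text{for odd } l > 1,
\qquad L_{lm} = E_{lm} / \hat{A}_l \ \text{wherever } \hat{A}_l \neq 0 .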
we will show why certain problems are ill-posed or numerically ill-conditioned, and why other problems are more amenable to solution. the theory developed here also leads to new practical representations and algorithms. for instance, we present a method to _factor_ the lighting and brdf from a small number of views, i.e. to estimate both simultaneously when neither is known. pandas bob hoffman 3d tracking in fx production: blurring the lines between the virtual and the real richard hollander jacquelyn ford morie thaddeus beier rod g. bogart doug roble arthur zwern introduction to slam ii and slamsystem jean j. o'reilly accessible animation and customizable graphics via simplicial configuration modeling our goal is to embed free-form constraints into a graphical model. with such constraints a graphic can maintain its visual integrity---and break rules tastefully---while being manipulated by a casual user. a typical parameterized graphic does not meet these needs because its configuration space contains nonsense images in much higher proportion than desirable images, and the casual user is apt to ruin the graphic on any attempt to modify or animate it. we therefore model the small subset of a given graphic's configuration space that maps to desirable images. in our solution, the basic building block is a simplicial complex---the most practical data structure able to accommodate the variety of topologies that can arise. the configuration-space model can be built from a cross product of such complexes. we describe how to define the mapping from this space to the image space. we show how to invert that mapping, allowing the user to manipulate the image without understanding the structure of the configuration-space model. we also show how to extend the mapping when the original parameterization contains hierarchy, coordinate transformations, and other nonlinearities. our software implementation applies simplicial configuration modeling to 2d vector graphics. tom ngo doug cutrell jenny dana bruce donald lorie loeb shunhui zhu elytre bruno follet parallel thinning with two-subiteration algorithms two parallel thinning algorithms are presented and evaluated in this article. the two algorithms use two-subiteration approaches: (1) alternately deleting north and east and then south and west boundary pixels and (2) alternately applying a thinning operator to one of two subfields. image connectivities are proven to be preserved and the algorithms' speed and medial curve thinness are compared to other two-subiteration approaches and a fully parallel approach. both approaches produce very thin medial curves and the second achieves the fastest overall parallel thinning. zicheng guo richard w. hall artificial intelligence as the year 2000 approaches maurice v. wilkes a catalog of agent coordination patterns sandra c. hayden christian carrick qiang yang learning performance and attitudes as a function of the reading grade level of a computer-presented tutorial the purpose of this study was to determine the most appropriate level of language sophistication, or "readability" of the text of a computer-presented tutorial. the tutorial teaches first-time users how to operate a display terminal. the effects of readability on performance and attitudes of adults with different levels of reading ability were examined. joan m. 
roemer alphonse chapanis integrated enterprise modeling: method and tool kai mertins roland jochem selecting the best system: a decision-theoretic approach stephen e. chick computational approaches to image understanding michael brady acquiring knowledge about group facilitation: research propositions fred niederman what exactly do you do? frank m. marchak unknown unknowns: modeling unanticipated events lobna a. okashah paul m. goldwater common fallacies about expert systems jay liebowitz conference preview: hci 2000: usability or else! marisa campbell empirical input distributions: an alternative to standard input distribution in simulation modeling aarti shanker w. david kelton efficiently using graphics hardware in volume rendering applications rudiger westermann thomas ertl general system theories (panel session) tuncer i. ören a. wayne wymore j. j. talavage b. p. zeigler use of genetic algorithms in three-dimensional reconstruction in carbon black aggregates robert j. grim j. richard rinewalt l. donnell payne motion path editing michael gleicher incremental execution of guarded theories when it comes to building controllers for robots or agents, high level programming languages like golog and congolog offer a useful compromise between planning-based approaches and low-level robot programming. however, two serious problems typically emerge in practical implementations of these languages: how to evaluate tests in a program efficiently enough in an open-world setting, and how to make appropriate nondeterministic choices while avoiding full lookahead. recent proposals in the literature suggest that one could tackle the first problem by exploiting sensing information, and tackle the second by specifying the amount of lookahead allowed explicitly in the program. in this paper, we combine these two ideas and demonstrate their power by presenting an interpreter, written in prolog, for a variant of golog that is suitable for efficiently operating in an open-world setting by exploiting sensing and bounded lookahead. a simulation-based finite capacity scheduling system alexander j. weintraub andrew zozom thorn j. hodgson denis cormier cmunited-98: a team of robotic soccer agents manuela veloso michael bowling sorin achim kwun han peter stone diagnosing and correcting student's misconceptions in an educational computer algebra system using powerful computer algebra programs for mathematics education has been steadily gaining popularity, regardless of inherent user-unfriendliness exhibited by such systems. algebrator [4, 5] is a cas specifically designed to teach high-school algebra. despite its limited domain (or maybe because of it), it has proven to be useful to students requiring mastery of basic algebra skills. algebrator's ca engine is but a small part of the whole system; a larger portion of it deals with the way in which the system as an electronic tutor engages the student in the learning process. in this paper we will focus on algebrator's error diagnostic capability and teacher-directed generation of appropriate remedial tutoring sessions. neven jurkovic a generalized reliability block diagram (rbd) simulation kerry d. figiel dileep r. sule a performance monitoring application for distributed interactive simulations (dis) david b. cavitt c. michael overstreet kurt j. 
maly accuracy in qualitative descriptions of behaviour tony morgan the intensity of implication, a measurement learning machine laurent fleury yann masson computer games: introduction john cavazos book preview marisa campbell demon seed, 1987 stephen wilson eggy, 1990 yoichiro kawaguchi the last word aaron weiss untitled #11 elizabeth crowson kinsey harris prediction of radiated electromagnetic emissions from pcb traces based on green dyadics e. leroux f. canavero g. vecchi 960810_01 and 970717_30 kenneth a. huff isolation 3.11 lam nguyen touchware joan truckenbrod thirteen sketches for an incompetent user interface george roland genetics and genomics: impact on drug discovery and development the utility of most drugs prescribed today for common, complex diseases is limited by a still rudimentary understanding of the molecular basis of disease as well as of drug action. at the heart of this is our current inability to account for inter-individual differences in disease etiology and drug response. these inter-individual differences are determined, to a large extent, by inherited predispositions and susceptibilities. knowledge of the genetic differences that explain these individual characteristics, and based upon it, the development of specific diagnostics and therapeutics, will therefore be critical for the successful transition to future progress in health care. the impact of genetics and genomics will leave its mark along all steps involved in the creation of a new medicine: * in the discovery of new targets that carry - inherently, because genetic linkage implies causation - a greater likelihood of success; * in the discovery phase of a new drug aimed at an existing target, where the knowledge of molecular variation of this target (snps) may provide clues to achieve higher selectivity; where genetic epidemiology studies will provide added value by validating the target; and where large scale gene expression profiling (gene chips) will help select compounds with a higher likelihood for ultimate success at an early stage; * and in the development phase of a drug undergoing clinical evaluation, where pharmacogenetic studies, and genotype-specific patient selection may allow recognition and definition of drug-responders and non-responders, or help decrease the likelihood of adverse events. although the impact of genetic and genomic investigation will certainly accelerate progress in biomedical research, we believe it will do so in an evolutionary fashion, and as a logical extension of the history of medical progress towards a more detailed understanding of disease and the resultant more refined differential diagnosis as well as more accurate prospective risk assessment. if any, the fundamental change we are going to witness in the years to come is a (paradigmatic) shift from today's largely clinical disease definition and diagnosis to a molecular definition and diagnosis of disease. this shift is likely to greatly increase the importance of in-vitro diagnostics and will mandate, much more than is the case today, an integrated approach of diagnostics and therapeutics. ultimately, we expect to derive the benefit of more successful, and more cost-effective medicines, and of possibly being able to prevent (or delay), rather than treat disease. it is important to realize that genetic research and testing are areas of great public concern, and that a more comprehensive dialogue between scientists and the public is urgently needed to address the societal, ethical, legal issues that are being raised. only then will we be able to truly take advantage of the significant advances in medical knowledge that genetic research will make possible, and fully realize the potential of these approaches towards the ultimate goal of all our striving, improving the human condition. 
klaus lindpaintner dna microarrays - the how and the why (abstract) ed southern an empirical evaluation of design rationale documents laurent karsenty conference report: psb'2000 todd wareham computer graphics as artistic expression h w franke the military impact of information technology jeff johnson ronald l. davis roger w. wester frank exner crispin cowan mayur patel michael lingle barry goldstein james k. yun carey nachenberg acm lectureship program james s. crabbe symbolic execution and nans richard j. fateman business information analysis and integration technique (biait): finding the big payoff areas while growth of the computer industry and the computing profession has been phenomenal, this growth has not been accompanied by improved communication between top executives in user organizations and their dp managers about effective use of the new computer-based information technologies. there remains a critical need for ways to demonstrate the impact of computer applications on management's ability to get the best results out of the resources under their control. one candidate for solving this communication and evaluation problem is the business information analysis and integration technique (biait). it has grown out of some ibm research and is being developed within ibm and by others. the current range of uses covers application development planning and implementation, marketing planning, and organization analysis; other uses are visualized for the future. the application development planning approach receiving major attention is called business information control study (bics). based upon the biait principles, the bics approach produces prompt identification of problem areas having high management visibility and big potential for payoff from use of computers. walter m. carlson efficient ac and noise analysis of two-tone rf circuits ricardo telichevesky ken kundert jacob white computational bioimaging for medical diagnosis and treatment christopher r. johnson stroke monique genton design automation research at the university of wisconsin - madison the university of wisconsin---madison has a large design automation research program with activities in many da and cad areas. the primary thrusts are in testing, high- and low-level synthesis, ai-based cad, rapid implementation of cad tools, design methodologies, and cad system issues. this document describes recent and current da projects. the reader is encouraged to contact the investigators for more information about specific projects. j. beetem d. dietmeyer y. hu r. jain c. kime p. ramanathan k. saluja conference on computing and the social sciences 1993 melanie loots paleolithic postmodern venus, 1987 donna cox regulatory element detection using correlation with expression (abstract only) we present a new computational method for discovering _cis_-regulatory elements which circumvents the need to cluster genes based on their expression profiles. based on a model in which upstream motifs contribute additively to the expression level of a gene, it requires a single genome-wide set of expression ratios and the upstream sequence for each gene, and outputs statistically significant motifs. analysis of publicly available expression data for _saccharomyces cerevisiae_ reveals several new putative regulatory elements, some of which plausibly control the early, transient induction of genes during sporulation. known motifs generally have high statistical significance. harmen j. bussemaker hao li eric d. 
siggia no man no shadow anna ursyn small appliances jennifer mccoy kevin mccoy using atomic-level structure to probe protein function marvin edelman vladimir sobolev artist block 2 and indecision matt cave "ibm perspectives on the electrical design automation industry" (keynote address) this address will highlight the history of design automation at ibm as a developer/user and as a business; the unique marriage of internal software and commercial software within ibm; and the responsibilities of a hardware and software platform supplier to support the design automation task. robert m. williams artists and the art of the luthier bill buxton a structured approach to selecting a cad/cam system r. i. mcnall r. j. d'innocenzo pixel poppin'dot com, 1998 craig hickman spirits of wonder david halperin is desktop publishing really for every desktop? because of the proliferation of new hardware and software products aimed at turning all microcomputer users into publishers, editors of newsletters, user documentation, and training materials need to carefully consider the many options available for producing the highest quality product to meet the stated goals of their publications. while there are several distinct advantages to desktop publishing, such as permitting the quick turnaround of near-typeset quality publications, there are many additional considerations required to evaluate and decide what level of desktop publishing (dtp) best fits your needs and the capabilities of your staff. as users contemplate the move to dtp products, it is critical that their expectations are realistic and acceptable. essential to the success of any dtp implementation is the recognition of the "hidden" responsibilities placed on the user. there must be a realization, for example, that there is a significant amount of individual expertise required to go from word processing to desktop publishing, and the dtp practitioner will be making the same design decisions as those made by highly trained graphic artists and typographers. within most organizations it is difficult to find one person who can write, edit, do page layout, and produce graphics for a publication. each publication, be it system or user documentation, newsletter, system notice, workshop and lecture schedule, brochure, or flyer, has its own set of design and publishing requirements. in spite of what the salesperson may tell you, dtp may not be the greatest invention since the light bulb. you can guarantee, however, that they will keep you in the dark about more appropriate products, simply because they may not be able to demonstrate them effectively. depending on the type of publication you prepare, dtp may not answer all your publishing needs. yet, there are practical steps you can follow to see if dtp is the appropriate choice to enhance and improve your publications. the information services group at the center for computing and information technology at the university of arizona uses both dos-based and macintosh desktop publishing workstations to produce all user documents. our group is responsible for all external publications for both administrative and academic users. certainly the arrival of desktop publishing in our department represented the potential for increased services to our audience, but it also brought the realization that we needed to revisit existing publications. 
just as we analyze a new publication, we examined each document for its stated purpose, audience, content, and design, in order to effectively integrate the new technology into established production methods. over the past eighteen months we have been evaluating and choosing appropriate products suited for the varied types and requirements of the publications we produce. to effectively incorporate desktop publishing we felt we had to address the impact in five critical areas: hardware requirements for both macintosh and dos-based systems, output devices, software options, system compatibility, and system implementation timelines and training requirements. paul christian nude (studying in perception), 1996 ken knowlton development of a surface-and-modeler for raw material die making tatsuhiko suzuki seiji eguchi hideo yoshioka shigeki tanimoto masato honda publishing a university computing service newsletter the university computing services newsletter is the most effective, efficient and consistent way to reach the largest number of computer users. it can also be more than a way to disseminate technical information and news to computer users. the newsletter is tangible evidence of the activities of computing services staff both for end-users and university administrators. it serves as a record of the progress of the computing services, provides incentive for staff to complete projects in a timely manner and signifies a commitment by the computing services to a stable end-user environment. in 1986, tulane computing services (tcs) went from intermittent production of a newsletter which received little attention from users to a six-times yearly, professional quality publication which has attracted the attention of the university computing community, university administrators and computing services and libraries at other universities. there were eight key factors involved in this transition: (1) management commitment (financial, technical and personnel) to a professional quality newsletter; (2) access to state-of- the-art desktop publishing technology; (3) increased staff commitment to the publication; (4) cooperation with the administrative department responsible for all university publications; (5) development of a consistent newsletter format compatible with that used for all tcs publications; (6) dedication to providing a stable end-user environment; (7) increased access to information from computing services administrators, systems programmers and operations staff; and (8) the hiring of an individual with graphics design and computer expertise who was primarily responsible for all desktop publishing done within the computing services. alison hartman j. e. diem scent posts rik sferra the role of computational chemistry in translating genomic information into bioactive small molecules (abstract only) although genomic information provides many potential targets for drug discovery, the challenge is to convert this information into drugs that cure human disease. workers in computer assisted molecular design and chemometrics have developed a number of techniques to aid this process. examples include selecting diverse compounds for high throughput screening, designing universal or targeted combinatorial libraries, and using a variety of computational techniques to forecast binding affinitiy and bioavailability of compounds. this lecture will summarize recent advances in these areas. 
yvonne martin chemical reaction dynamics: integration of coupled sets of ordinary differential equations on the caltech hypercube use of the caltech/jpl hypercube multicomputer to solve problems in chemical dynamics is the subject of this paper. the specific application is quantum mechanical atom diatomic molecule reactive scattering. one methodology for solving this dynamics problem on a sequential computer is based on symmetrized hyperspherical coordinates. we will discuss our strategy for implementing the hyperspherical coordinate methodology on the hypercube. in particular, the performance of a parallel integrator for the special system of ordinary differential equations which arises in this application is discussed. (a generic single-step integrator is sketched below for illustration.) p. g. hipes t. mattson y-s. wu a. kuppermann vectors: textures, 1997 mark wilson information processing by cells and biologists (abstract only) the core agenda of post-wwii molecular biology has been defined as the molecular understanding of how genetic information was transmitted and read out (see for example stent 1968), and, by the 1950's, the analogy between the tape in a turing machine and the linear sequence of nucleotides in dna was apparent to both computer scientists and biologists. in the early 21st century, it may be that molecular biology can fruitfully return to these roots, by recasting part of its agenda in terms of the need to understand how biological information is processed. in a somewhat more modern formulation, cells can be thought of as machines that process and make decisions on three kinds of information: 1) information stored in the genome 2) information about intracellular events (for example from checkpoint mechanisms) and 3) information external to the cell. in many cases the machinery that cells use to make decisions is reasonably well understood at a qualitative level. however, in no case do we possess a corresponding quantitative understanding, and, reflecting this, nor are we very capable of predicting the outcomes of perturbations to the genome, the internal workings of the cell, or its external environment. one path to understanding the behavior of these ensembles of components clearly lies in construction of mechanism-based quantitative models representing cellular processes. building such models requires solution of numerous computational and experimental biological challenges. i will detail some of these. another path may involve computation on the qualitative biological knowledge that now exists. expert biologists reason on this qualitative information to make statements about the consequences of perturbations, but expert systems that do the same in the main do not exist. here, although the need is clear, the relative opacity (to me) of much of the seemingly relevant computer science literature has made it more difficult to figure out first steps. finally, note that information theory (shannon 1948) has its roots in the 20th century need to understand transmission of electrical signals through channels. it is not immediately clear that the representations of biological processes used by biologists map well to concepts that come from this theory. to give only one example, one is hard pressed to define or find, inside a cell that is processing signals from the outside, either the signal or the "bits" (tukey, 1946) that might make it up. there may thus be an opportunity here for new theory to guide thinking and further experiment. 
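as a generic, sequential illustration of integrating a coupled set of ordinary differential equations y' = f(t, y) (not the parallel hyperspherical-coordinate method of the hypercube abstract above, and with a made-up example matrix), a classical fourth-order runge-kutta step in python looks like this:

import numpy as np

def rk4_step(f, t, y, h):
    # one classical fourth-order runge-kutta step for y' = f(t, y)
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# example: a small coupled linear system y' = a @ y; the matrix a is
# purely illustrative and is not taken from the paper.
a = np.array([[0.0, 1.0], [-4.0, -0.1]])
f = lambda t, y: a @ y

y = np.array([1.0, 0.0])
t, h = 0.0, 0.01
for _ in range(1000):
    y = rk4_step(f, t, y, h)
    t += h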
roger brent the clearing, 1988 eudice feder the lacemaker victor acevedo efficient optimal design space characterization methodologies one of the primary advantages of a high-level synthesis system is its ability to explore the design space. this paper presents several methodologies for design space exploration that compute all optimal tradeoff points for the combined problem of scheduling, clock-length determination, and module selection. we discuss how each methodology takes advantage of the structure within the design space itself as well as the structure of, and interactions among, each of the three subproblems. stephen a. blythe robert a. walker dual personality, 1979 laurence m. gartel risks in aviation (part 1 of 2) robert d. dorsett factors affecting the extent of electronic cooperation between firms: economic and sociological perspectives jai-yeol son sridhar narasimhan frederick j. riggins the stregner civilization matthew teichman retrieved icon, 1998 ken knowlton de-emphasizing technology: issues of consciousness, cognition and content in ongoings karen sullivan design automation - lessons of the past, challenges for the future during the past two decades, we have learned that design automation is increasingly the only viable way to deal with the complexity of electronic circuits. and da will continue to thrive in the future because of ever-increasing complexity. following the development of the transistor and the integrated circuit, electronic designs grew almost exponentially in numbers of components and interconnections. unfortunately, so did the design effort and the potential for errors, as unwelcome effects. the automation of electronic designs attacked this complexity to save labor and improve quality. my company, bell labs, designs hundreds of vlsi chips each year and thousands of circuit packs. without design automation there is no way we could do it---not in the required time and not with the available resources. without da, for example, designing and debugging our 32-bit microprocessor would have been virtually impossible. john s. mayo is it really over? jim rose an information system involving competing organizations prabuddha de thomas w. ferratt stalking the wild hypertext: the electronic literature directory robert kendall hunger for new technologies, metrics, and spatiotemporal models in functional genomics (abstract only) functional genomics, as a field, is applying genomic self-improvement protocols (cost-effective, comprehensive, precise, accurate, and useful) to the kinetics of complex cellular systems. radical surgery in functional biology aims to mimic the success of structural biology along all five of those axes. technologies of recombinant dna and automation have brought costs down exponentially (100-fold in ten years) in structural studies. that combined with definitions of completeness push the second axis (to better than 99.99 the third axis by the beautiful, brute force of repetition. to reduce systematic errors requires more finesse. models allow integration of wildly different experimental methods (e.g. models based on the genetic code plus phylogeny provide quite independent checks of models based on dna electrophoretic images). model interchange specifications and metrics for model comparison mutually reinforce one another and provide one path along the fifth axis, that of utility, via killer-applications such as homology searches. 
this combination of modeling and searching provides serendipity and "functional hypothesis generation" in abundance. it instantly connects previously separately studied processes and organisms. statistical assessment of agreement between experiment and calculation can lead to improvement of the types of model parameter as well as parameter values. what are the analogous metrics and models for functional genomics? how can we estimate possible lower limits to costs? how do we define completion and accuracy? finally, how do we create and assess searches (not just on data but on models) and the utility of applications in general? how do these feed back to experimental design and feed forward to bioengineering? the functional genomics measures that are now thought to be prime for automation, miniaturization, and multiplexing include electrophoresis, molecular microarrays, mass-spectrometry, microscopy. microscopy is well suited for non-destructive time series, measures concerning spatial effects and stochastic kinetics of systems of one or a few of any critical molecule. the other methods currently offer richer signatures for multiplex (measure many molecules from the same source at once). such extensive multiplexing can reduce errors due to misalignment of the (unmultiplexed) measures in space and/or time. these misalignments are dramatic, but by no means limited to unplanned (meta) comparisons between literature values. in the spirit of eliminating systematic errors, we see a major role for models as integrating as disparate a set of measures as possible. the dynamic and spatial biomodels of yore, thought doomed by some by lack of data, will soon promote fresh study in the glaring light of overdetermination, i.e. more datapoints than adjustable parameters and feedback to the experiments justification for even more data for even more accuracy. we illustrate the above themes in the context of stress responses in wildtype and mutant human erythrocytes, e.coli and yeast time series. we assess measures of up to 19 metabolites, 400 proteins, and over 7000 rnas. these measures touch most of the critical 34 metabolites in erythrocytes but only a tiny fraction of the over 1200 in e.coli. they so far quantitate fewer than 10 often have unknown covalent structure). for the rnas (assayed with a dense set of oligonucleotides) we see a rich, probably comprehensive set, including many unpredicted transcripts. so what are the next steps? spatial effects seen for dna-motifs at a few bp, hundreds, and thousands of bp (for three separate reasons) can be found by automatable methods. time-series of molecular concentration data can be aligned by discrete and/or interpolative dynamic programming. components of regulatory networks evident in time-series can be assessed by these independent models. the components of decay as well as steady-state levels have been modeled for complete rna sets. these time series benefit from the sharp specific transitions that can be achieved through conditional mutants and drugs (chemical biology in general). overarching questions remain as to how we will systematize (automate) kinetic modeling and applications to a point analogous with structural data modeling, all the while connecting with issues of global quality of life? george church jdbassist: generating javabeans and supporting 3-tier enterprise applications alan audick michael e. locasto david pike untitled penny feuerstein harmonic analysis from the computer representation of a musical score r. e. 
prather computerized medical records, production pressure and compartmentalization in the work activity of health center physicians yrjo engestrom ritva engestrom osmo saarelma computing in liberal arts colleges the panelists will describe their experiences and share their thoughts on the special problems involved in teaching computing in the environment of the liberal arts college. time will be available for attendees to add their own comments. john beidler california landscape jessica yuan wolf marjorie david time in clinical decision support systems: temporal reasoning in oncocin and onyx g michael kahn m lawrence fagan h edward shortliffe comparative analysis of organelle genomes, a biologist's view of computational challenges (abstract only) with genomic data (generated by classical, functional, structural, proteo- and other `omic' approaches) accumulating at a stupendous rate, there is an ever increasing need for the development of new, more efficient and more sensitive computational methods. to highlight aspects of our computational needs, we will present results that emerged from the comparative genome analysis of mitochondria. having originated from an alpha-proteobacterial endosymbiont, these eukaryotic organelles contain small and extremely variable genomes, and are thus perfect model systems for the much more complex eubacterial and archaeal genomes. we are currently investigating mitochondrial dnas (mtdnas) in a lineage of unicellular, primitive protistan eukaryotes, the jakobids, with the aim to understand the evolution of mitochondrial genomes, genes and their regulation. because these organisms are difficult to grow, biochemical approaches aimed at understanding gene regulation are laborious; thus it is possible to capitalize considerably on predictions of genome and gene organization, and regulatory elements. contrary to approaches in which molecular data (gene order, sequence similarities) are used to infer the phylogenetic relationships among a group of organisms, we know their phylogeny and employ this information to identify and model more or less conserved genetic elements and structural rna genes that are difficult to spot by conventional methods, in a phylogenetic-comparative approach. franz lang using visualization to teach parallel algorithms thomas l. naps eric e. chan using our resources to improve publications "we do it that way because we've always done it that way…" due to the rapid advances in computer technology, many of us in academic computing are finding ourselves in the position of looking for ways to use this technology to improve those publications that we produce for general distribution. there are new concepts, new software packages and hardware that can now be utilized. one can't help but feel the excitement in our new-found ability to design, format, and produce a professional-looking publication without depending on a professional printer. through the use of a word processing package, a desktop publisher and a laser printer, a whole new world has opened up to us. academic computing publications at ball state university include a quarterly newsletter and mini-course schedule. for the last several years, there has basically been one person in charge of these publications.
publishing the newsletter consisted of writing articles, collecting articles from other staff members, editing all articles, getting them formatted and printed by the secretary, cutting and pasting into newsletter form, and finally, sending the finished copy to the on-campus print shop. the mini-course schedule was produced in this same manner. we restructured our publication method to increase staff involvement and also, we made what seemed like almost a quantum leap to desktop publishing, computer graphics and staff production, making most of these changes between issues. our original goals were to minimize cutting and pasting, to involve more staff and to produce a professional set of publications which would be more readable and helpful to the individual user. in so doing, we had to learn a lot of new things very quickly: new software, new hardware and---just as important---new ideas. one outcome was the many favorable comments we received from the academic community on the revised format of our mini-course schedule. the aim of our presentation is to show that it is possible to make a dramatic change in production methods, given the right attitudes of acceptance and a willingness to utilize new resources: computer, personal, and personnel. we had to overcome more than just technical changes---the greater challenge was in forming these concepts and new ideas. pam claspell kathy sawyer i saw a tree mackenzie wahl dora: cad interface to automatic diagnostics this paper will discuss a family of cad tools supporting automatic diagnosis and the usage of those tools in western electric company (weco) testing. the cad tools described in this paper are part of a package developed at the engineering research center (erc), princeton, new jersey. the diagnostic organization and retrieval algorithms (dora) system is a complex of programs which provide audited test programs and diagnostic data files from the results of lamp (logic analyzer for maintenance planning) circuit simulation. dora supports manufacturing and repair test facilities in all divisions of western electric that produce circuit packs or digital custom devices. improved engineering productivity in bell laboratories and western electric, efficient use of capital-intensive test hardware, and reduced diagnostic costs are the goals of this package. r. w. allen m. m. ervin-willis r. e. tulloss half planes, 1996 manfred mohr osaka-skyharp, 1986 rob fisher the mayflower voyage natasha spottiswoode the rose's own garden: its view, 1997 colette bangert charles bangert remembering the `a's: report from the ama medical student section a. j. binder long-term variation in user actions there has been little research into the study of long term adaptation by users or the extent to which usage is enhanced over a substantial period [1]. however, there is general agreement that some interfaces, such as unix shells and certain editors, take years to master. in this paper we present evidence that users do indeed change their actions in the long term. some implications of our findings are discussed. richard c. thomas arbor erecta sonya rapoport computer budgeting of mineral holdings for a small scale mining operation this paper describes how a computer program handles the problems encountered in developing a mining budget from a large number of individual and widely scattered ore deposits.
among the difficulties overcome by the use of the program are constantly changing production forecasts requiring reworking of the budget, scheduling of the capacities of the offsite loading facilities, production rates requiring the mining of several properties concurrently while still maintaining quality control, and pressing time limits on several leases. daniel l. mcculley product-market and technology strategies in banking christopher p. holland john b. westwood playing the numbers: m.d. coverley's _fibonacci's daughter_ jane yellowlees douglas torn touch, 1997 joan truckenbrod building and developing new software firms eric ruff the challenge of cad/cam education colleges and universities are not meeting industry needs for graduates trained in the use or implementation of computer-aided design and manufacturing systems since few schools have experience in teaching cad/cam. furthermore, many bachelor's-level graduates are going directly into industry, rather than pursuing graduate degrees, thereby compounding this problem. members of this panel represent both schools and industry and together attempt to outline approaches to developing more extensive cad/cam emphasis in education, including - the role colleges and universities can or should play in addressing industry needs in this area - user- and implementor-oriented cad/cam education - cad/cam education as an integral part of university engineering curricula - problems encountered in organizing and implementing cad/cam curriculum - universities' relationships with industry. it is our hope that panelists' experiences will help guide schools in structuring cad/cam programs to best ensure that the united states remains at the forefront of technology and competes industrially world-wide. frank puhl victor langer donald p. greenberg mark s. shepard herb voelcker universities' relationships with industry frank puhl american industry needs engineers who have been brought up with the realization that computers are intimately involved in all aspects of design, analysis, and manufacturing. industry, of course, needs engineers who are already trained in fundamental cad/cam principles and in solving real world problems using computers, as well as those who have been taught to be productivity- and cost-conscious. to help meet these needs, we must now provide answers to several significant questions: what part should industry play in encouraging and sponsoring research and education in cad/cam? how can we resolve conflicts between industries' "trade secrets" and universities' "open research"? how can industry and educational institutions work together to increase u.s. productivity so as to regain our competitive edge? lockheed corporation and cadam, inc. are addressing these issues by supporting selected schools, installing cadam systems, and providing fellowships for education and research at upperclass and graduate levels. in addition, we are involved in various joint research projects with universities, and we look optimistically toward continuing these projects. cadd cam user education victor langer general electric medical systems division in milwaukee experienced a severe shortage of cadd cam operators and encouraged matc to develop a training program which started in 1980. a three-year nsf-cause grant and a partial donation of a computervision cadds 3 (now cadd 4 designer v six-station) system resulted in a program for upgrading employed designers and for two-year associate degree full-time students.
the program enrolls 200 students per semester, with 70% continuing education students and 30% full-time students. before students can effectively apply cadd cam education, they must have drafting and design experience or at least a year of engineering training, plus ability to use spatial relationships and a course in descriptive geometry. with this background, students learn to create geometry for a 2-d drawing database and gain sufficient experience in the first course to be as productive as employees with six months' full-time experience. in the final semester or in the second advanced course, students create 3-d geometry and apply analytical computer capabilities for design, specializing for uses in mechanical, electrical, structural, architectural, and graphics arts applications. the defined geometry is also used in cam for generating numerical control machining and flame-cutting paths, and for robotic control. each course has been evaluated, follow-up studies have been completed, and an advisory committee has guided development. employees have verified success and ease in transferring geometric skills to many different cadd cam systems in the market. beginning in the fall of 1982, apple microcomputers will be used to teach all 2-d computer graphics skills previously taught on the computervision system, and cadd cam education is now becoming available on an economical basis to all users. some problems in cad education donald greenberg it is imperative that universities educating the next generation of engineers introduce computer-aided design courses into their curricula. there are many obstacles in accomplishing this within a university structure. this presentation describes the facilities and operation at cornell university and discusses the potential benefits and difficulties. cad/cam education in an engineering curriculum mark s. shepard today there is a large industrial demand for engineering graduates that understand cad/cam techniques and computer graphics. therefore, many colleges and universities already have or are planning to introduce computer graphics and cad/cam concepts into their curriculum. the major questions to be addressed in integrating these techniques into the curriculum include type and amount of hardware, development and maintenance of software and method of introduction into the curriculum. in 1977, rpi's school of engineering established the center for interactive computer graphics which is charged with integrating interactive computer graphics into the entire undergraduate engineering curriculum and providing a facility for graduate instruction and research. with heavy industrial support, the center has also developed a research program in computer graphics and cad/cam. this presentation will discuss rpi's overall approach to integrating interactive computer graphics into the engineering curriculum. a postgraduate program in "programmable automation" ari requicha herb voelcker "programmable automation" designates the emerging body of knowledge surrounding cad/cam and industrial robotics. graduate study in the field is aimed at (1) understanding the informational aspects of design and production in the discrete goods industries, and (2) developing new technologies for producing goods automatically with programmable, general- purpose tools. 
some of the knowledge and techniques used in programmable automation are drawn from established fields (computer science, material science, control theory, ...), but the distinctive character of programmable automation is set mainly by the pervasive roles played by geometry and computation. a postgraduate program in programmable automation is being launched at the university of rochester to train ms-level systems engineers for industry, and ph.d.-level researchers and teachers. the program's evolution reflects a "trickle-down" philosophy of education, wherein major new fields enter engineering education through on-going research; research begets seminars, seminars sometimes evolve into graduate courses, and graduate courses sometimes spawn undergraduate courses. (put differently, the process starts with mature minds grappling with poorly understood concepts and ends with immature minds assimilating tightly codified concepts.) the rochester program is sited in electrical engineering and draws heavily on the staff and facilities of the production automation project; it also has strong links with mechanical engineering and computer science. the initial curriculum is based on two core courses in computational geometry, a graphics lab, and a systems seminar; these are supplemented with established courses in computer science, digital systems, finite-element analysis, control theory, and so forth. an nc systems course and lab will be introduced a year hence. plans for linking the program with rochester's vlsi program, and for launching robotics research and teaching, are still in an embryonic stage. michel a. melkanoff frank puhl victor langer donald p. greenberg mark s. shepard herb voelcker computer science in manufacturing michael j. wozny william c. regli inductive reasoning, 1989 craig hickman war, information technologies, and international asymmetries s. e. goodman as worlds collide bonnie mitchell about the art in this issue przemyslaw prusinkiewicz enhancement of the interaction between low-intensity r.f. e.m. fields and ligand binding due to cell basal metabolism power absorption by biological tissues, due to low-intensity electromagnetic exposure at radio frequencies, such as those generated by personal telecommunication systems, is typically negligible. nevertheless, the electromagnetic field is able to affect biological processes, like the binding of a messenger ion to a cell membrane receptor, if some specific conditions occur. the depth of the attracting potential energy well of the binding site must be comparable with the radio frequency photon energy. the dependence of the binding potential energy on the spatial coordinates must be highly non-linear. the ion--receptor system, in the absence of the exogenous electromagnetic exposure, must be biased out of thermodynamic equilibrium by the cell basal metabolism. when the above conditions concur, a low-intensity radio frequency sinusoidal field is able to induce a steady change of the ion binding probability, which overcomes thermal noise. the model incorporates the effects of both thermal noise and basal metabolism, so that it offers a plausible biophysical basis for potential bioeffects of electromagnetic fields, e.g., those generated by mobile communication systems. b. bianco a. chiabrera e. moggia t.
tommasi what happens when a medical office information system fails andrew blyth some computer aided engineering system design principles engineering design is a human activity which is becoming increasingly reliant on computer systems programmed to support design processes. the builders of such computer aided engineering (cae) systems have many problems to solve and this paper looks at some of the principles involved. a model capable of describing the work of the designer is proposed and placed in the context of computer systems. this model is then used to classify basic design activities and leads to a set of computer aided engineering system design principles. an application of these principles is illustrated by a description of some of the methods used by the authors in designing computer aided electronic engineering products. henry l. nattrass glen k. okita the tree gizelle mallillin pera an inquiry about hair anne wilson is anyone there?, 1993 stephen wilson blue pearl, 1998 isaac victor kerlow the real world of design automation - part iii or the user's viewpoint chairman's introduction (panel discussion) for the past two years, this panel examined design automation as professionals evaluating our own profession. with tongue only slightly in cheek, we discussed "a funny thing happened on the way to implementation" in 1978, and "adapting to the joys of madness" in 1979. both sessions were hard-hitting self-appraisals of the way we do business. this year we elected to include a new element in "the real world of design automation": the user. too frequently, we become blinded by the brilliance of our own creation. we fail to recognize the reason for our existence. the user is, after all, the bottom line in design automation. our future depends on his success or failure. p. losleben alluromania toujour byrd family portrait: mother olga tobreluts perspectives on groupware for cross-cultural teams j. c. nordbotten urban diary joseph squier computer graphics world magazine cover, 1981 darcy gerbarg variations sainte-victoire, 1996 vera molnar progress toward the whole-genome shotgun sequencing of drosophila (abstract) gene myers spirit tell me alexandra greene hypercube data analysis in astronomy: optical interferometry and millisecond pulsar searches astronomical data sets are beginning to live up to their name, in both their sizes and the complexity of the analysis required. here we discuss two astronomical data analysis problems which we have begun to implement on a hypercube concurrent processor environment: the intensive image processing required in an optical interferometry project, and the large scale power spectral analysis required by a search for millisecond-period radio pulsars. in both cases the analysis proceeds largely in the fourier domain, and we find that the problems are readily adapted to a concurrent environment. in the following report, we outline briefly the astronomical background for each problem, then discuss the general computational requirements, and finally present possible hypercube algorithms and results achieved to date. p. gorham t. prince s.
anderson outlook: computing in education a single course for preservice teachers robert taylor new digital technologies and theatre michael mackenzie split personality daniel sergile physical vs computer simulation (panel session) paul cohen computers and biology jacques cohen surveyor's forum: interpreting experimental data bruce leverett reuse/web: web-based ada reuse in his article "software reuse myths" from the january 1988 issue of _acm sigsoft software engineering notes,_ will tracz discusses why software reuse has not grown to its full potential. engineers first proposed the concept of a subroutine library at the university of cambridge in 1949. yet the concept has not extended much past this level. at the time tracz wrote this article, interest had begun to reemerge in the concept of software reuse. still, tracz believes that nine myths surround software reuse that have prevented it from becoming more widespread. specifically, three myths relate to my thesis: 1. software reuse is a technical problem. 2. special tools are needed for software reuse. 3. ada has solved the reuse problem. as far as the first myth goes, the only technical problem is the lack of search methods to find the right pieces of code (which my project should help alleviate). beyond that, the problem is more psychological, sociological or economic. for example, there is the "not from here" attitude pervading industry that leads people to believe that software developed outside of their company is unreliable. tracz believes you only need special tools (the second myth) when you become truly serious about reuse. most software libraries, he claims, are only 100-200 components in size and that programmers can easily navigate this many components without any tools. also, most instances of reuse occur when a programmer modifies existing programs that either he or a coworker wrote, so again he does not need a tool to find what he needs. i believe that the reuse mentality is a difficult one to accept and any tool that can make the transition to this attitude easier should be explored and utilized, no matter how small the number of components in the software library. tracz refutes the third myth by stating that "writing a generic package in ada does not necessarily make it reusable any more than writing a fortran subroutine or assembly language macro." simply put, generic packages are not enough to encourage reuse because the packages must still be instantiated with actual data types. the attitudes of the programmers using the language are more important than the language itself in determining whether or not code will be reused. i hope that my project makes ada more accessible to code reuse and can change some programmers' attitudes about reuse. i believe that this attitude can change if programmers have access to a simple tool to browse software libraries that runs on a variety of hardware platforms to maximize usage and potential. this idea gave rise to reuse-www, an ada specification file browser running under netscape. since netscape runs on many platforms (pc, macintosh, unix-based, etc.), an application using netscape as its "operating system" would be ideal. fortunately, recent developments in world wide web programming, such as javascript and perl cgi (common gateway interface) have made this goal extremely attainable. reuse-www may not run under microsoft's internet explorer, though. it also does not provide any editing, compiling, linking, or running capabilities.
i assume that a potential user of the system already has an editor of choice and knows where to get an ada 95 compiler, such as gnat. john petren john beidler atm switch design by high-level modeling, formal verification and high-level synthesis asynchronous transfer mode (atm) has emerged as a backbone for high-speed broadband telecommunication networks. in this paper, we present atm switch design, starting from a parametric high-level model and debugging the model using a combination of formal verification and simulation. the model has been used to synthesize atm switches according to customers' choices, by choosing concrete values for each of the generic parameters. we provide a pragmatic combination of simulation, model checking, and theorem proving to gain confidence in the atm switch design correctness. s. p. rajan m. fujita k. yuan m. t-c. lee an approach to gathering undocumented information purvis m. jackson sara e. moss transjovian pipeline, 1979 david em trip report: computing in support of battle management j. j. horning untitled 67 and untitled 76 jim butkus heading out donna geist token city: subway wall muriel magenta looking at optical disk storage technology t. j. chandler noman'sland rik sferra interactive vs. procedural modelling in computer-aided design christos tountas a virtual test environment for business processes henry m. franken college guidance counselor (abstract only) an expert system for choosing a college has been developed using the insight 2+ rule-based tool on an at-compatible microcomputer. this expert system uses three categories in the selection process: admissions data, including sat score and class rank; the desired major; and personal preferences. the system uses the information to narrow down a list of colleges. at each stage, the user is given the option either to reduce the list through more questions or to choose directly from the list. the user can request various information about the colleges on the final list. design and implementation issues will be discussed including a comparison with a database-oriented design alternative. t. reid l. r. medsker catherine courier anna ullrich finite element solution of thermal convection on a hypercube concurrent computer m. gurnis a. raefsky g. a. lyzenga b. h. hager visions of mind f dietrich graphic design tips for desktop publishers since the introduction of the macintosh, the laserwriter, and pagemaker and the subsequent ability to create camera-ready or near-camera-ready documents on our desktops, document writers have found it necessary to become document designers as well. yet, this entails a number of choices. for example, what style or size of type? how many columns? where should headings be placed? should there be illustrations? if so, how should they be arranged? because most of us haven't been trained to make these choices, too often the result is badly designed and ineffective documents. however, by understanding some of the basic principles of graphic design, even design novices can use a desktop publishing system to improve the way their documents look and, more importantly, make them more readable and more understandable. research has shown that certain design techniques dramatically expand a reader's capacity for information. in this session i will offer a number of simple design guidelines for creating effective desktop-published documents and avoiding the common design pitfalls of overzealous desktop publishers without design training.
using before and after examples, i will discuss the graphic design choices available to desktop publishers and explain the best choices. topics will include choosing and using type styles and sizes, column width, capitals and italics, alignment, leading, word and letter spacing, white space, and balance to create effective memos, brochures, manuals, flyers, and newsletters. good graphic design is more than a simple list of do's and don'ts. by explaining the reasons for certain design choices, i hope that those who attend this session will leave with a basic understanding of graphic design that they can use as a resource when designing their own desktop-published documents and as a basis for further study in graphic design if they are interested. kenneth e. gadomski editorial mary jane irwin a multiscale method for fast capacitance extraction johannes tausch jacob white face #111 from face series 1982 barbara nessim computing in chile: the jaguar of the pacific rim? r. a. baeza-yates d. a. fuller j. a. pino s. e. goodman grand challenges to computational science the grand challenges to computational science conference held last january attracted over 100 scientists from major universities, national laboratories, and industrial research centers. the author reports some of the highlights of the conference. eugene levin language use in context janyce wiebe graeme hirst diane horton energy characterization based on clustering huzefa mehta robert michael owens mary jane irwin the winds that wash the seas chris dodge infoart: the digital frontier from video to virtual reality cynthia goodman how a community health information network is really used fay cobb payton patricia flatley brennan the emerging technology of cad/cam computer-aided design and manufacture (cad/cam) represents a merging of technological advances in computer hardware and software with pressing needs in manufacturing industries. integrated manufacturing systems--from computer graphics-aided design through engineering analysis and automated fabrication--are only now beginning to fulfill nearly twenty-five-year-old promises of increased production efficiency. this paper summarizes cad/cam's evolution and its current state and then describes some areas that will experience significant change in the next decade. larry lichten an object-oriented kernel for an integrated design and process planning system an object-oriented kernel that integrates design and process planning in the domain of mechanical parts is presented. the kernel has identified and separated the integral elements of its domain from the elements that are associated with a particular design/manufacturing environment. consequently, it can be customized to a wide range of environments and be used as a rapid prototyping tool. a prototype has been developed and implemented in smalltalk-80. s. j. feghhi m. marefat r. l. kashyap the morphing of mona, 1986-1997 lillian schwartz digital fukuwarai hiroshi matoba genderbender gregory p. garvey cathedral robert fabricant litt'l havoc keith roberson neurar, 1996 yoichiro kawaguchi life dream joshua hendle inside risks: risks in medical electronics the risks forum has had many accounts of annoying errors, expensive breakdowns, privacy abuses, security breaches, and potential safety hazards. however, postings describing documented serious injuries or deaths that can be unequivocally attributed to deficiencies in the design or implementation of computer-controlled systems are very rare.
a tragic exception was a series of accidents which occurred between 1985 and 1987 involving a computer-controlled radiation therapy machine. jonathan jacky the image management and communication system (imacs) seong k. mun steven horii harold benson larry p. elliott untitled #87 sarah dodson medical informatics: clinical decision making and beyond d. k. duncan computer science student works featured stan chisek world skin maurice benayoun digital self-portrait charlcie legler synthesis tools for mixed-signal ics: progress on frontend and backend strategies l. richard carley georges g. e. gielen rob a. rutenbar willy m. c. sansen problems in understanding the structure and assembly of viruses (abstract) johnathan king large landscape: curved and coiled, 1970 colette bangert charles bangert a framework for user assisted design space exploration x. hu g. w greenwood s. ravichandran g. quan windgrass elaine brechin imposing specificity by regulated localization (abstract only) many biologically important enzymes --- rna polymerases, rna splicing enzymes, ubiquitinating enzymes, certain kinases and proteases --- are "regulated" by being brought to one or another of many potential substrates by auxiliary docking proteins (e.g. transcriptional activators). these regulatory interactions require interactions between simple adhesive (but specific) surfaces. this kind of regulation is highly `evolvable': new and expanded meanings to signals are readily generated by simple changes in protein surfaces. mark ptashne the flux-corrected transport algorithm on the ncube hypercube this work describes the implementation of a finite-difference algorithm, incorporating the flux-corrected transport technique, on the ncube hypercube. the algorithm is used to study two-dimensional, convectively-dominated fluid flows, and as a sample problem the onset and growth of the kelvin-helmholtz instability is investigated. timing results are presented for a number of different-sized problems on hypercubes of dimension up to 9. these results are interpreted by means of a simple performance model. the extension of the algorithm to the three-dimensional case is also discussed. d. w. walker g. c. fox g. r. montry beware02: satellite shin'ichi takemura pedestrian: walking as meditation and the lure of everyday objects annette weintraub lattice gauge theory on the hypercube lattice gauge theory, an extremely computationally intensive problem, has been run successfully on hypercubes for a number of years. herein we give a flavor of this work, discussing both the physics and the computing behind it. j. flower j. apostolakis c. baillie h-q. ding the past, present and future of protein structure prediction (abstract) john moult hypercube performance for 2-d seismic finite-difference modeling wave-equation seismic modeling in two space dimensions is computationally intensive, often requiring hours of supercomputer cpu time to run typical geological models with 500 × 500 grids and 100 sources. this paper analyzes the performance of acous2d, an explicit 4th-order finite-difference program, on intel's 16-processor vector hypercube computer. the conversion of the sequential version of acous2d to run on hypercube was straightforward, but time-consuming. the key consideration for optimal efficiency is load balancing. on a fairly typical geologic model, the 16-processor intel vector hypercube computer ran acous2d at 1/3 the speed of a cray-1s. l. j.
baker toward cam-oriented cad a new solid modeling scheme is proposed and developed as the core of an integrated approach to the computer aided design and manufacture of mechanical parts. the benefits of this methodology, which considers the manufacturing process during the design phase, are discussed in the context of an integrated cad/cam system. farhad arbab larry lichten michel a. melkanoff what does it feel to have freedom? jillian banks rna biology and the genome (abstract only) the sequence of the vertebrate genome is expressed by rna splicing producing mrna. interpreting the genome requires understanding the sequences recognized by the nuclear factors and spliceosomes executing removal of introns. the status of this area of science will be reviewed. conversely, genome sequences can be searched for sequences which specify rna splicing and are common to many genes. the availability of genome sequences from a variety of species will make the latter approach much more powerful. the recently discovered rna interference (rnai) process is evolutionarily old and related processes are important for control of expression of repetitive genes in some organisms. rnai can be initiated by conversion of double-stranded rna into 21-23 nucleotide rnas which can base pair with mrnas and direct their cleavage. these short rnas may also direct silencing of genes by other mechanisms. over half of the genome of many organisms is composed of repetitive sequences and it is possible that rnai may silence these sequences. philip sharp towards predicting coiled-coil protein interactions protein-protein interactions play a central role in many cellular functions, and as whole-genome data accumulates, computational methods for predicting these interactions become increasingly important. computational methods have already proven to be a useful first step for rapid genome-wide identification of putative protein structure and function, but research on the problem of computationally determining biologically relevant partners for given protein sequences is just beginning. in this paper, we approach the problem of predicting protein-protein interactions by focusing on the 2-stranded coiled-coil motif. we introduce a computational method for predicting coiled-coil protein interactions, and give a novel framework that is able to use both genomic sequence data and experimental data in making these predictions. cross-validation tests show that the method is able to predict many aspects of protein-protein interactions mediated by the coiled-coil motif, and suggest that this methodology can be used as the basis for genome-wide prediction of coiled-coil protein interactions. mona singh peter s. kim pages from a diary: leaving lynn pocock pyramidal model on civil information system hristo toudjarov stefan kalchev galia nenova a bear of a man corinne whitaker nibco's "big bang" carol v. brown iris vessey thirdperson, 1994 copper giloth in conversation susan alexis collins a perspective view of the modcon system in this paper, a system of computer programs for the geometric description of hot forgings is outlined. the system is known as the modcon system which stands for modular construction. the modcon system allows quite complex forging shapes to be produced from relatively few input instructions. the forging geometry is defined as a series of volumetric modules which are then merged together during generation of the n.c. tape necessary to machine the finishing-cut edm electrode for the whole forging cavity.
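as a toy illustration of the modular-construction idea in the modcon entry above (a forging built up from a few primitive volume modules that are merged into one solid), the following python sketch unions two invented primitives and interrogates the merged solid by point sampling. the primitives, dimensions and sample count are assumptions for illustration only; nothing here reproduces the system's n.c. tape generation.

```python
# toy "modular construction": a forging-like solid as a union of primitive volume modules.
import numpy as np

class cylinder:
    def __init__(self, cx, cy, radius, z0, z1):
        self.cx, self.cy, self.radius, self.z0, self.z1 = cx, cy, radius, z0, z1
    def contains(self, p):
        x, y, z = p
        return self.z0 <= z <= self.z1 and (x - self.cx) ** 2 + (y - self.cy) ** 2 <= self.radius ** 2

class block:
    def __init__(self, xmin, xmax, ymin, ymax, z0, z1):
        self.bounds = (xmin, xmax, ymin, ymax, z0, z1)
    def contains(self, p):
        x, y, z = p
        xmin, xmax, ymin, ymax, z0, z1 = self.bounds
        return xmin <= x <= xmax and ymin <= y <= ymax and z0 <= z <= z1

def in_union(modules, p):
    # the merged forging is simply the union of its volumetric modules
    return any(m.contains(p) for m in modules)

modules = [block(-30, 30, -20, 20, 0, 10), cylinder(0, 0, 15, 10, 40)]   # invented flange + boss
rng = np.random.default_rng(0)
pts = rng.uniform([-30, -20, 0], [30, 20, 40], size=(20000, 3))          # monte carlo volume estimate
box_volume = 60 * 40 * 40
inside = sum(in_union(modules, p) for p in pts)
print("approximate merged volume:", box_volume * inside / len(pts))
```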
the system can also be used for the roughing cut of the edm electrode before the finishing cut stage as well as for generating cross-sectional data which serves as input to subsequent preform design systems. the system represents a considerable saving in time and cost over current methods involving conventional pattern making for the copy milling of electrodes. y. k. chan interruptions, 1968 vera molnar the evolving art of special effects brian samuels tic tac toe from american favorite series 1996 barbara nessim the past casts shadows, 1997 charles csuri computer graphics in court: the adobe/quantel case richard l. phillips a utilitarian approach to cad the benefits of using and writing software utilities are appreciated by most software engineers. however, many computer-aided design (cad) systems do not take full advantage of this technology. this could be because good utilities do not exist, because cad developers are not aware of existing utilities, or because developers do not know what features to include and leave out when they are writing their own utilities. as with much of computer science, the art of effectively using and writing software utilities has remained just that: an art. this paper discusses the desirable features of good software utilities for cad and describes techniques that encourage effective use of existing utilities as well as the specification and implementation of new ones. throughout the paper, experiences from a four-year development effort in designer's workbench (dwb) [1] are used as examples (both good and bad). t. j. thompson computers as substitute soldiers? technological edge has been sought throughout military history, and today's versions of the longer lance are the computerized, integrated systems that reach pervasively into enemy space. they are intended to provide efficiencies in information gathering, processing, and disseminating so that a minimal number of humans can prevail against an enemy. the 1991 gulf war encouraged belief in the power of electronics to defeat numerically large enemies with small friendly losses. the allies used the magic of complex electronics to pound away at an iraqi army of over a half-million soldiers. a vision was seemingly confirmed: that those with the best computer systems will win by seeing furthest, targeting best, moving quickest, and blasting most precisely. c. c. demchak s. e. goodman specification and design of electronic control units j. bortolazzi t. hirth t. raith da algorithms in non-eda applications (panel): how universal are our techniques alberto sangiovanni-vincentelli ulrich lauther dave hightower erik carlson patrick c. mcgeer women writers project benjamin fan qed on the connection machine physicists believe that the world is described in terms of gauge theories. a popular technique for investigating these theories is to discretize them onto a lattice and simulate numerically by a computer, yielding so-called lattice gauge theory. such computations require at least 10^14 floating-point operations, necessitating the use of advanced architecture supercomputers such as the connection machine made by thinking machines corporation. currently the most important gauge theory to be solved is that describing the sub-nuclear world of high energy physics: quantum chromo-dynamics (qcd). the simplest example of a gauge theory is quantum electro-dynamics (qed), the theory which describes the interaction of electrons and photons.
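the lattice gauge theory entries above describe discretizing a gauge theory onto a lattice and updating it by monte carlo. as a minimal sketch of that idea (not the authors' hypercube or connection machine codes), the toy python program below runs a metropolis update for 2-d compact qed with the wilson action; the lattice size, coupling beta and sweep count are arbitrary assumptions.

```python
# toy 2-d compact u(1) lattice gauge theory with a metropolis link update.
import numpy as np

L, beta = 8, 1.0                       # assumed lattice size and coupling
rng = np.random.default_rng(0)
theta = np.zeros((2, L, L))            # link angles theta[mu, x, y]; link variable u = exp(i*theta)

def plaquettes(th):
    # plaquette angle theta_p for every elementary square of the periodic lattice
    return th[0] + np.roll(th[1], -1, axis=0) - np.roll(th[0], -1, axis=1) - th[1]

def action(th):
    # wilson action s = beta * sum_p (1 - cos theta_p)
    return beta * np.sum(1.0 - np.cos(plaquettes(th)))

def metropolis_sweep(th, step=0.5):
    # propose a random shift of each link angle; accept with probability min(1, exp(-ds))
    for mu in range(2):
        for x in range(L):
            for y in range(L):
                old, s_old = th[mu, x, y], action(th)
                th[mu, x, y] = old + rng.uniform(-step, step)
                if rng.random() >= np.exp(-(action(th) - s_old)):
                    th[mu, x, y] = old          # reject the proposal and restore the old link
    return th

for _ in range(50):
    theta = metropolis_sweep(theta)
print("average plaquette <cos theta_p>:", np.cos(plaquettes(theta)).mean())
```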
simulation of qcd requires computer software very similar to that for the simpler qed problem. our current qed code achieves a computational rate of 1.6 million lattice site updates per second for a monte carlo algorithm, and 7.4 million site updates per second for a microcanonical algorithm. the estimated performance for a monte carlo qcd code is 200,000 site updates per second (or 5.6 gflops/sec). c. f. baillie s. l. johnsson l. ortiz g. s. pawley kage motoshi chikamori kyoko kunoh 98.13 kenneth a. huff the human genetic variation (abstract): oligonucleotide chips and human disease david r. cox precious pink from embrasure series kati toivanen freedom and imprisonment, 1985 isaac victor kerlow the si challenge in health care jane grimson william grimson wilhelm hasselbring b97.9.3 and with left and right boundary hans e. dehlinger why water always scares me marjorie david my apartment angel r. espinoza gss, professional culture, geography, and software engineering brent auernheimer to bury recollection in … and image to touch jun kurumisawa tutorial - group technology the group technology tutorial consists of a tutorial and a discussion. the tutorial objectives are to provide the participants with an overview of group technology - its meaning, ramifications, and opportunities offered. group technology is the realization that "many problems are similar, and that by grouping similar problems, a single solution can be found to a set of problems, thus saving time and effort". in this tutorial i will discuss fundamentals of group technology, its meaning, benefits, application areas, and selected case studies. i will also use one or more videotapes to demonstrate the impact of group technology on engineering/manufacturing operations. hriday r. prasad streaming saoirse higgins ian ginilt using motion planning to study protein folding pathways we present a framework for studying protein folding pathways and potential landscapes which is based on techniques recently developed in the robotics motion planning community. in particular, our work uses probabilistic roadmap (prm) motion planning techniques which have proven to be very successful for problems involving high-dimensional configuration spaces. our results applying prm techniques to several small proteins (60 residues) are very encouraging. the framework enables one to easily and efficiently compute folding pathways from any denatured starting state to the native fold. this aspect makes our approach ideal for studying global properties of the protein's potential landscape. for example, our results show that folding pathways from different starting denatured states sometimes share some common `gullies', mainly when they are close to the native fold. such global issues are difficult to simulate and study with other methods. our focus in this work is to study the protein folding mechanism _assuming_ we know the native fold. therefore, instead of performing fold prediction, we aim to study issues related to the folding process, such as the formation of secondary and tertiary structure, and the dependence on the initial conformation. our results indicate that for some proteins, secondary structure clearly forms first while for others the tertiary structure is obtained more directly, and moreover, these situations seem to be differentiated in the distributions of the conformations sampled by our technique. we also find that the formation order is independent of the starting denatured conformation. 
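the motion-planning entry above builds a probabilistic roadmap (prm): sample many conformations, connect each one to nearby samples whose connecting local path passes a feasibility test, then answer pathway queries by graph search. the sketch below shows only that skeleton; the feasibility test, distance metric and all parameters are invented placeholders rather than the authors' energy model.

```python
# minimal probabilistic-roadmap (prm) skeleton: sample, connect, query.
import heapq
import numpy as np

rng = np.random.default_rng(1)

def feasible(a, b):
    # hypothetical local planner: accept an edge if intermediate points pass a stand-in test
    for t in np.linspace(0.0, 1.0, 5):
        q = (1 - t) * a + t * b
        if np.sum(q ** 2) > 4.0:          # placeholder feasibility criterion
            return False
    return True

def build_roadmap(samples, k=5):
    # connect each sampled configuration to its k nearest feasible neighbours
    n = len(samples)
    dist = np.linalg.norm(samples[:, None] - samples[None, :], axis=-1)
    graph = {i: [] for i in range(n)}
    for i in range(n):
        for j in np.argsort(dist[i])[1:k + 1]:
            if feasible(samples[i], samples[j]):
                graph[i].append((int(j), float(dist[i, j])))
                graph[int(j)].append((i, float(dist[i, j])))
    return graph

def shortest_path(graph, start, goal):
    # dijkstra query over the roadmap; returns the node sequence or None
    best, heap = {start: (0.0, None)}, [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            path = [u]
            while best[path[-1]][1] is not None:
                path.append(best[path[-1]][1])
            return path[::-1]
        for v, w in graph[u]:
            if v not in best or d + w < best[v][0]:
                best[v] = (d + w, u)
                heapq.heappush(heap, (d + w, v))
    return None

samples = rng.uniform(-1.5, 1.5, size=(200, 3))   # 200 random "conformations" in a toy 3-d space
roadmap = build_roadmap(samples)
print("roadmap path from sample 0 to sample 1:", shortest_path(roadmap, 0, 1))
```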
we validate our results by comparing the secondary structure formation order on our paths to known pulse-labeling experimental results. this indicates the promise of our approach for studying proteins for which experimental results are not available. guang song nancy m. amato computational structural genomics: identifying protein targets for structural studies igor v. grigoriev retiming for dsm with area-delay trade-offs and delay constraints abdallah tabbara robert k. brayton a. richard newton juicy details from embrasure series kati toivanen implementation and performance analysis of parallel assignment algorithms on a hypercube computer the process of effectively coordinating and controlling resources during a military engagement is known as battle management/command, control, and communications (bm/c3). one key task of bm/c3 is allocating weapons to destroy targets. the focus of this research is on developing parallel computation methods to achieve fast and cost-effective assignment of weapons to targets. using the sequential hungarian method for solving the assignment problem as a basis, this paper presents the development and the relative performance comparison of four parallel assignment methodologies that have been implemented on the intel ipsc hypercube computer. the first three approaches are approximations to the optimal assignment solution. the advantage to these is that they are computationally fast and have proven to generate assignments that are very close to the optimal assignment in terms of cost. the fourth approach is a parallel implementation of the hungarian algorithm, where certain subtasks are performed in parallel. this approach produces an optimal assignment as compared to the sub-optimal assignments that result from the first three approaches. the relative performance of the four approaches is compared by varying the number of weapons and targets, the number of processors used, and the size of the problem partitions. b. a. carpenter n. j. davis design methodology management - a cad framework initiative perspective the design of today's electronic systems involves the use of a growing number of complex cad tools. invoking and controlling these tools, independently or as part of a captured, multi-operation flow, remains an error-prone and largely unsolved problem. this problem is the focus of the design methodology management technical subcommittee (dmmtsc) of the cad framework initiative. this paper describes the dmm problem, establishes dmm requirements, and presents the state of the dmmtsc effort to define interface standards for proposed solutions. kenneth w. fiduk sally kleinfeldt marta kosarchyn eileen b. perez computer-based system for monitoring drug therapy stanley n. cohen jing-jing l. kondo richard j. mangini terrence n. moore ann barry flood learn navigation: doing without the narrator in artifactual fiction bill bly computing research programs in the u.s. robert geist madhu chetuparambil stephen hedetniemi a. joe turner support to the management of clinical activities in the context of protocol-based care domenico m. pisanelli fabrizio consorti why i want the oed on my computer, and when i'm likely to have it? michael lesk computer-aided industrial design centuries ago, craftsmen made objects to their own design. a single person was generally responsible for both the function and the appearance of a tool or device. with the industrial revolution, design became specialized; design and production were handled by different people.
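the parallel-assignment entry above rests on the classical assignment problem (a minimum-cost one-to-one matching of weapons to targets, solvable by the hungarian method). as a small illustration of that underlying problem, not of the paper's hypercube parallelization, the sketch below uses scipy's hungarian-style solver on a made-up cost matrix.

```python
# classical assignment problem: minimum-cost one-to-one matching of weapons to targets.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(42)
n_weapons, n_targets = 6, 6
cost = rng.integers(1, 100, size=(n_weapons, n_targets))   # cost[i, j]: weapon i engaging target j (made up)

rows, cols = linear_sum_assignment(cost)                    # optimal assignment (hungarian-style solver)
for w, t in zip(rows, cols):
    print(f"weapon {w} -> target {t} (cost {cost[w, t]})")
print("total cost:", cost[rows, cols].sum())
```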
often design was dictated by the needs of efficient manufacturing. as engineering advanced, design became more involved and specialized, straying farther and farther from the intuition and experience that had guided craftsmen for centuries. product appearance was in danger of being dominated by engineering considerations. in the 1930s, the field of industrial design was born, led by raymond loewy, walter dorwin teague and others. these designers sought to specialize design still further by hiring someone to create the visual appearance of a product. the premise was that a company could make its products more sellable (i.e., make more money) by investing in the visual appearance of its products. in automotive design, style was initially the province of a few expensive coachbuilders: the wealthy could have a custom-styled car, much as today they can wear one-of-a-kind clothes from a famous designer. in parallel with the birth of industrial design for other products like radios and railway engines, harley earl started the art and color department at general motors. this group sought to improve the appearance of gm vehicles, as well as to keep the gm brands distinct as they migrated to shared internal body structures; they effectively brought designer style to ready-to-wear clothing. gm's success in the '20s and '30s led other manufacturers to follow suit. computer-aided design has followed a similar path: first it was used to aid manufacturing; later, for functional, engineering design. only in the last decade has it penetrated aesthetic industrial design. computer-aided industrial design (caid) is cad adapted and specialized for aesthetic design. from a designer's point of view, cad is for the pocket-protector brigade, while caid is for the creative. my own exposure to caid has been over the last 13 years from within the design organization of a car manufacturer. this vantage point has advantages and disadvantages: a car-maker's design environment certainly isn't typical of industrial design as a whole, but car-makers have been innovators and drivers of caid technology. stephen h. westin business components: a case study of bankers trust australia limited greg baster prabhudev konana judy e. scott untitled chip collier an automated support environment for cad system development (abstract only) this research is concerned with methods of developing cad tool systems. deficiencies of current definition and construction methods are identified including: ad hoc development, lack of integration, weak user interfaces, and ineffective coupling to a design data base. sara (system architect's apprentice) is a collection of design tools that supports the creation, simulation, and analysis of concurrent, digital system models. the previous implementation of sara tools and subsequent research has identified several common services required by each tool including: interaction loop, design data base, graphics interaction, integral help, and error recovery. one design goal is to define a support environment kernel providing a single implementation of each of these services that is independent of policy, tool, and physical device. a second goal is to provide a multi-phase methodology and specification system that supports integration of new cad tools. the specification system provides compilers that target to the kernel and thus meld the tool-dependent issues to the tool-independent kernel. the sara tool set is a test bed for the method, specification system, and support environment kernel.
duane worley introduction - computing and social responsibilities the three articles in this special section deal with computing applications that affect how people are treated in their social roles: as litigants, as employees, or as participants or bystanders in warfare. we believe that applications like these obligate their builders to consider their duties, not only to their employers or customers, but to all the people who will be affected when the systems are placed into service. these duties are the social responsibilities of computer professionals. authors donald erman and carole hafner address the social responsibility of creating a timely, affordable and just legal system. jack beusmans and karen wieckert address the social responsibilities of educators, researchers and applications programmers to consider whether their contributions to weapons technology are a positive contribution to society. richard ladner addresses responsibilities that society has for people with disabilities. douglas schuler jonathan jacky establishing an academic computing center - a major change this paper describes the experiences at a medium-sized public teaching university in establishing an academic computing center distinct from an existing administrative data processing center. the computer facilities were provided by a multi-institution time-sharing network. the steps involved and the lessons learned are presented as a case study which may be helpful to others considering this change. francis l. edwards carol s. edwards aoxoamoxoa 7, 1997 david em ethical questions and military dominance in next generation computing american response to the fifth generation project has been dominated by darpa's strategic computing initiative, which proposes to develop ambitious software and hardware technology for explicit military applications. the initiative generates two classes of ethical questions: * how well will non-military social goals be furthered by this approach to computing development? * how well will national security goals be furthered by this approach to defense, and what role in social policy-assessment is to be played by computer professionals? paul smolensky the fantastic self-organization in cyberspace yoichiro kawaguchi p-021-band structure, 1969-1970 manfred mohr linking the european monetary union and the information systems strategy of financial institutions: a research approach to understand the changeover manuel joão pereira luís valadares tavares evaluating social interactions on the introduction of a telephone based system for nursing handovers betty hewitt after paul klee, 1963 charles csuri a digital frottage ho assure: automated design for dependability design for dependability has long been an important issue for computer systems. while several dependability analysis tools have been produced, no effort has been made to automate the design for dependability. this paper describes assure, an automated design for dependability advisor, which is part of the micon system for rapid prototyping of small computer systems. a design for dependability methodology and a formal interface between synthesis and dependability analysis are presented. assure's operation includes dependability analysis, evaluation of dependability enhancement techniques using predictive estimation, and selection of a technique. 
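the assure entry above and below turns on estimating a design's dependability and the gain from an enhancement technique. the sketch that follows is only a generic illustration of that kind of calculation: series reliability under an exponential failure model, with triple modular redundancy (tmr) as an assumed example of an enhancement; the abstract does not say which models or techniques assure actually uses.

```python
# generic dependability estimate: series system reliability, before and after an assumed enhancement.
import math

def reliability(failure_rate, hours):
    # reliability of a single component with constant failure rate (exponential model)
    return math.exp(-failure_rate * hours)

def series(reliabilities):
    # a series system works only if every component works
    return math.prod(reliabilities)

def tmr(r):
    # 2-out-of-3 majority voting of three identical modules (perfect voter): 3r^2 - 2r^3
    return 3 * r**2 - 2 * r**3

mission_hours = 1000.0
rates = {"cpu": 2e-5, "memory": 5e-5, "io": 1e-5}          # assumed failure rates per hour
r = {name: reliability(lam, mission_hours) for name, lam in rates.items()}

baseline = series(r.values())
enhanced = series([r["cpu"], tmr(r["memory"]), r["io"]])    # apply tmr to the least reliable module
print(f"baseline reliability:     {baseline:.5f}")
print(f"with tmr on the memory:   {enhanced:.5f}")
```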
different kinds of knowledge used in designing for dependability are identified, including an algorithmic approach for dependability analysis and a knowledge-based approach for suggesting dependability enhancement techniques. examples of designs produced using assure as a dependability advisor are provided and show an order of magnitude dependability improvement. patrick edmond anurag p. gupta daniel p. siewiorek audrey a. brennan moonlit men bettina santo domingo future developments in information technology (abstract) the primary information technologies, microelectronics and photonics, continue to improve their performance at extraordinary rates and are not expected to reach any fundamental theoretical limitations before the turn of the century. in his keynote address, ian ross will review progress in information technologies to date, consider its prospects and assess the emerging technologies and systems that may follow, with an emphasis on the engineering challenges ahead. in 1963, dr. ross received the morris n. liebmann award from the institute of radio engineers, now a part of the ieee. in 1969 and 1975 he received the public service group achievement award from the national aeronautics and space administration. dr. ross is a fellow of the institute of electrical and electronics engineers and was elected to the national academy of engineering in april 1973. in may 1981, he was elected to a fellowship in the american academy of arts and sciences. he was elected to the national academy of sciences in april 1982. ian m. ross from our imagination, 1998 ken o'connell from "landscape in circle" ying tan using tex for electronic publishing at i.p. sharp associates david manson please stay on the line juliet martin engineering workstations: tools or toys? we've all heard the scenario---the complexity of designs is increasing at a staggering rate, design cycles are approaching the life of the products, the demand for design engineers far outstrips the supply, etc., etc. everyone seems to know the solution: - increase the efficiency of existing engineers by providing cae tools - provide tools and methodologies to allow non-ic/systems designers to effectively and efficiently design products. engineering workstations to the rescue? are cae tools the answers to the engineer's prayers? how about the engineering manager's dreams? these and other issues will be addressed by a panel of experts from both the vendor and user communities. steve sapiro the disease progression explorer: risk assessment support in chronic, multifactorial diseases pamela k. fink l. tandy herren a board computer for route transport ceco lukarski nokolay nikolov krasimir gornishki aleksander dimitrov scenarios for bedside medical data communication richard i. cook swimming pool paul brown rm-case: a computer support for designing structured hypermedia (abstract) tomás isakowitz vanesa maiorana alicia díaz gabriel gilabert central da and its role: an executive view the objective of this panel is not to dwell on da technology nor even on theoretical pros and cons of central da. the objective of this panel is to share experiences and ideas on the management of central da operation and how each company represented copes with the problems. one should expect to find much commonality of problems and, equally, that the management solutions are diverse. r. j. camoin workshop: applying cryptography in electric commerce alan t.
sherman iconica troy innocent skippy peanut butter jars, 1980 copper giloth icad/pcb: integrated computer aided design system for printed circuit boards the advanced computer aided design system, icad/pcb, was recently put into operation at fujitsu. the system provides designers with powerful tools to significantly lower the cost and the time required to design and manufacture printed circuit boards (pcbs). interactive and automatic facilities to support the entire pcb design process are integrated in the icad/pcb system. hiroshi shiraishi mitsuo ishii shoichi kurita masaaki nagamine familiar is-ness, 1998 donna cox enablers and inhibitors of business-it alignment jerry luftman raymond papp tom brier challenges in feature-based manufacturing research martti mäntylä dana nau jami shah regrowth from the wreckage leslie nobler farber medea's contribution to the strength of european design and cad anton sauer c5 joel slayton case study 309 tammy knipp oszillogramm, 1961-1962 herbert franke the future of commercial computing united states' business computing power is dedicated to the transaction-based activities of companies, not to the needs of senior executives responsible for critical corporate decisions. software supporting current business systems is obsolete and inflexible. antiquated, inappropriate software undermines businesses' ability to compete. business requires new, better systems that will provide external competitive data, as well as economic and research data vital to strategic decisions concerning products, operations, production, marketing, personnel and investments. there are four approaches to these future systems - redesign, package software, retrofitting, and prototyping. retrofitting provides a unique approach to the immediate problem of prolonging the aging application portfolio, while prototyping will likely be an important future direction in system design and development. robert gilges digital self-portrait eddie kang design of a financial portal kemal saatcioglu jan stallaert andrew b. whinston the bush soul rebecca allen a predictive system shutdown method for energy saving of event-driven computation this paper presents a system-level power management technique for energy savings of event-driven applications. we present a new predictive system-shutdown method to exploit sleep mode operations for energy saving. we use an exponential-average approach to predict the upcoming idle period. we introduce two mechanisms, prediction-miss correction and prewake-up, to improve the hit ratio and to reduce the delay overhead. experiments on four different event-driven applications show that our proposed method achieves high hit ratios over a wide range of delay overheads, which results in a high degree of energy saving with low delay penalties. chi-hong hwang allen c.-h. wu comparing genes and genomes (abstract): from polymorphism to phylogeny peer bork comparison of complete genomes (abstract): organization and evolution piotr slonimsky cad for system design: is it practical? manicômio judiciário carlos muti randolph viki: an accessibility project although computerized assistive devices for most handicaps are becoming more widely available, one major drawback is that users must currently provide customized interfaces for every computer system they use. in addition to specialized input and output devices, such assistive devices also frequently require customized software.
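a minimal sketch of the exponential-average idle-period prediction described in the hwang and wu predictive system shutdown entry above. the smoothing weight, the break-even threshold, and all names below are illustrative assumptions rather than the authors' implementation, and their prediction-miss correction and prewake-up mechanisms are left out.

```python
# illustrative sketch (not the authors' code): exponential-average prediction of
# the next idle period for predictive system shutdown. the constants are assumed.

class IdlePredictor:
    def __init__(self, alpha=0.5, break_even_ms=50.0):
        self.alpha = alpha                  # weight given to the latest observed idle period
        self.break_even_ms = break_even_ms  # minimum idle time worth a shutdown/wake-up cycle
        self.predicted_ms = 0.0

    def observe(self, idle_ms):
        # exponential average: blend the newest idle period with the previous prediction
        self.predicted_ms = self.alpha * idle_ms + (1.0 - self.alpha) * self.predicted_ms

    def should_shut_down(self):
        # sleep only when the predicted idle period amortizes the transition overhead
        return self.predicted_ms > self.break_even_ms


if __name__ == "__main__":
    p = IdlePredictor()
    for idle in [10.0, 120.0, 200.0, 30.0, 180.0]:   # observed idle periods in ms
        p.observe(idle)
        print(f"predicted={p.predicted_ms:.1f} ms, shut down: {p.should_shut_down()}")
```

the exponential average keeps only one running value per device, which is what makes this kind of predictor cheap enough to sit inside an event-driven power manager.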
the viki paper proposes an alternative solution: providing a customized interface (both hardware and controlling software) on a laptop computer that is then used to control standard software on any other host computer. in this way, the user can carry their specialized interface to essentially any general computer, and can interact with any standard software supported on the host. the laptop system becomes a virtual keyboard to the host machine. blaise w. liffick synthesis: a dream thomas esser grammatron 1.0 mark amerika beast jacques servin medical information systems integration (panel discussion) for hospitals as well as for other complex organizations, the growth of integrated computer systems beyond traditional applications areas, such as financial processing, has led to a dilemma. either the new applications are developed (or purchased) with the restrictive requirement of integrability with existing applications, or additional resources need to be devoted to the problem of allowing interchange of data among systems. in most cases, these system-to-system interfaces are developed on an ad hoc basis, tailored closely to the two systems being interfaced, and must therefore be modified whenever either system is changed. although this situation is often tolerable when only two or three systems are involved, it usually consumes resources and limits flexibility to an appreciable extent. the real problems with this approach, however, become apparent when the number of interacting information systems exceeds two or three, or when resources are simply not available to create and maintain ad hoc interfaces. unfortunately, many institutions today are in the process of acquiring or developing their third or fourth distinct application system, and this information systems integration problem is being encountered with increasing frequency. in particular, many users who elect to acquire applications systems from several different systems vendors have found that the functionality and efficiency of each of the systems is seriously impaired by continuing struggles with interfaces, even when the same hardware is in use on all the systems. from the vendor's viewpoint, this situation is equally bad. some vendors respond by proposing a single, comprehensive information system to meet all of the organization's needs. although this approach may seem the desirable one from the developer's standpoint, it has several serious drawbacks in practice. john w. lewis a motif lexicon for the genomic analysis of dna melissa kimball the use of cad frameworks in a cim environment wang tek kee dennis sng jacob gan low kin kiong creating the backbone for the virtual cell (abstract only): cell mapping projects on the run the ability to identify proteins with mass spectrometry has a profound impact on biological research. its long term effects can only be compared with the significance the pcr enzyme had for the advancement of biological science. suddenly proteins can be identified with only minute quantities available. the efforts invested into the technology lead to a constant increase in throughput - comparable to the development of dna sequencing technology. biological mechanisms can be studied directly on the protein level. there are two research directions: protein expression based proteomics and cell mapping or functional proteomics. protein expression based proteomics focuses on understanding cellular states by analyzing the protein expression levels of as many proteins as possible.
in cell mapping experiments the protein-protein interaction network within a cell is analyzed. at the embl a protein complex purification technique, called tandem affinity purification (tap), was developed which makes it possible to rapidly and very specifically purify non-covalently interacting proteins, which can then be analyzed with mass spectrometry (the cell mapping approach). the complexes assemble in vivo, are purified as such and characterized. the technique was developed in yeast, and efforts are under way to establish similar methods in higher eukaryotes. the comparison of this technique with similar experiments based on two-hybrid screens demonstrates that the protein based approach is more specific, and functional data suggest that it is more complete. this may be due to the fact that the complexes assemble first in their native environment and are then pulled out for characterization. the organization of all the proteins into groups of physically interacting proteins is an important step towards understanding their functional role within a cell. to a large degree cellular processes organize themselves based on the non-covalent affinity of proteins. protein based approaches, as exemplified by the tandem affinity purification method in conjunction with mass spectrometry, are very important in analyzing this functional infrastructure. the identification of all the proteins involved in a particular biological mechanism and the characterization of their interaction constants may provide the physical data needed to simulate the process on a computer. matthias wilm pesda and design abstraction (panel): how high is up? geoffrey bunza steve schulz tommy jansson alex silbey steve ma ed frank dna probe arrays - accessing the genome (abstract) robert j. lipshutz intangible assets: how the interaction of computers and organizational structure affects stock market valuations erik brynjolfsson lorin m. hitt shinkyu yang finite difference time domain solution of electromagnetic scattering on the hypercube electromagnetic fields interacting with a dielectric or conducting structure produce scattered electromagnetic fields. to model the fields produced by complicated, volumetric structures, the finite difference time domain (fdtd) method employs an iterative solution to maxwell's time-dependent curl equations. implementations of the fdtd method intensively use memory and perform numerous calculations per time step iteration. we implemented an fdtd code on the california institute of technology/jet propulsion laboratory mark iii hypercube. this code allows us to solve problems requiring as many as 2,048,000 unit cells on a 32-node hypercube. for smaller problems, the code produces solutions in a fraction of the time required to solve the same problems on sequential computers. r. h. calalo j. r. lyons w. a. imbriale an explicit rc-circuit delay approximation based on the first three moments of the impulse response bogdan tutuianu florentin dartu lawrence pileggi development of a prototype iaims at the university of utah h. r. warner visualization in scientific computing b. h. mccormick plant design for efficiency using autocad and factoryflow david p. sly escape velocity hellen sky john mccormick garth paine pyramid of darkness dayana ottenwalder eileen lied juan diaz zulma gomez ip-based design methodology daniel d.
gajski sigkids art heidi dumphy tim comolli scott lang maria roussos chris carey poetry and computers (panel session) hale chatfield carol donley william dickey stephen marcus the graphics cad/cam industry(panel session) richard n. spann companies supplying graphics cad/cam components and systems form an important high technology business segment. panelists representing four financial perspectives will discuss market entry, financing, segment performance and shareholder's expectations. development of the graphics cad/cam industry in the 80's will be the unifying theme. richard n. spann frederick r. adler thomas p. kurlack joseph c. mcnay margaret a. reichenbach design automation at a large architect-engineer gibbs & hill (g & h) has been a proponent, developer, and user of design automation (da) techniques for over fifteen years. progression has been steady and significant, beginning with the use of relatively simple batch computer programs for the solution of specific engineering problems to the current broad application of state-of-the-art hardware and software including interactive graphics and data base management systems. this progress has been matched by acceptance, and at g & h da is considered a normal and viable mode of operation rather than an alternate method subject to doubts by management, clients, and personnel. the design automation objective is an on- line, integrated data base approach to all aspects and phases of our work, ranging from conceptual design to construction management. the cornerstone of the design automation system is cadaesm (computer aided design and engineering) which is an interactive graphics system that is used to produce design drawings, including both dimensional and nondimensional. the g & h design automation system is developed, refined, and maintained by a dedicated staff of da professionals. e. f. chelotti d. p. bossie items 1-2,000 paul vanouse bath scene matthew teichman undecided 2: market research in the narrow way james faure walker lincoln log factory of the future (lifof) (abstract only) the center for productivity enhancement (cpe) has undertaken the development of a bench top factory to design and build lincoln log houses. the goal of this project is to investigate issues related to our holistic approach for manufacturing and to develop a paradigm education and training model of the factory of the future. the llfof includes a unique cad/cae system that allows for input of design specifications and can use natural language input or be menu driven. the output provides structure analysis, specification checking, cost and material estimates, and constructs the manufacturing databases for assembly. a 3-d wire frame or realistic solid view model of the design is also provided. issues of manufacturing efficiency are also investigated including off line robotic simulation. the factory applies just in time (j.i.t.) principles developed by the japanese to increase quality and decrease cost. it also schedules and constructs orders in a multi-robot work cell that is vision controlled. the natural language assembly instructions developed in the cad step are dynamically decoded and assigned to various robots in the work cell according to a rule based system. the work cell, on the fly, schedules and dynamically modifies robotic work plans and uses machine vision to guide, assemble, and inspect. because set up time and labor are driven to near zero, individualistic, batch-size-one manufacturing becomes realistic. 
a complete expert system controlled factory with materials handling and autonomous robotic vehicles is included in the ultimate design. the bench top factory has been chosen to be included in the boston museum of science's age of intelligent machines exhibit, which will go on national tour in 1987. the system design and software will be distributed in conjunction with this event to foster research and training in manufacturing productivity. p. d. krolak tutorial - mechanical workstation software computer aided engineering in the mechanical design process j. scott the transfer of university software for industry use computer-aided engineering software is generated in abundance in educational institutions. as a major source of design automation software, universities have a lot to offer: a progressive research environment; a seemingly inexhaustible supply of software engineers in the form of undergraduates, graduate engineers, and professors; and a no-risk, multi-disciplinary design lab for experimentation. for these reasons, university software is being widely sought for use in production/commercial environments. this paper examines the interface between industry and the universities during the transfer of design automation software. a printed circuit board router, a graphics subroutine library, and a circuit simulation program are used to illustrate issues arising from the technology transfer. guidelines to increase the likelihood of success in such transfers of software are provided. rossane wyleczuk lynn meyer gigi babcock tv needs mtv like mtv needs computers j whitney art teams up with technology through the net marco padula amanda reggiori cad tool interchangeability through net list translation steve meyer liquid views: rigid waves monika fleischmann wolfgang strauss christian-a. bohn enterprise resource planning: introduction kuldeep kumar jos van hillegersberg an iaims for baylor college w. b. panko k. c. aune g. a. gorry computerized electro neuro ophthalmograph (cenog) cenog (computerized electro neuro ophthalmograph) is a new diagnostic instrument developed to assist neurologists and neuro-ophthalmologists in the diagnosis and study of neurologic and ophthalmic disorders. clinicians are becoming increasingly aware that the study of eye movements and visually evoked (cortical) potentials provides a sensitive, noninvasive means of obtaining valuable information in a neurological evaluation. a number of standard tests of oculomotor function and visually evoked response have been devised in order to detect the presence of latent subclinical lesions that are not apparent in a standard examination. r. s. ledley l. s. rotolo m. buas full circle - continuum series, 1998 rob fisher trees laura griffiths moving towards the event horizon phillip george adrift helen thorington jesse gilbert marek walczak may we have your attention, please? thomas h. davenport large scale sequencing by hybridization sequencing by hybridization is a method for reconstructing a dna sequence based on its k-mer content. this content, called the _spectrum_ of the sequence, can be obtained from hybridization with a universal dna chip. however, even with a sequencing chip containing all 4^9 9-mers and assuming no hybridization errors, only sequences about 400 bases long can be reconstructed unambiguously. drmanac et al.
suggested sequencing long dna targets by obtaining spectra of many short overlapping fragments of the target, inferring their relative positions along the target and computing spectra of subfragments that are short enough to be uniquely recoverable. drmanac et al. do not treat the realistic case of errors in the hybridization process. in this paper we study the effect of such errors. we show that the probability of ambiguous reconstruction in the presence of (false negative) errors is close to the probability in the errorless case. more precisely, the ratio between these probabilities is 1 + θ(_p_/(1 - _p_)^4 * 1/_d_), where _d_ is the average distance between neighboring subfragments, and _p_ is the probability of a false negative. we also obtain lower and upper bounds for the probability of unambiguous reconstruction based on an errorless spectrum. for realistic chip sizes, these bounds are tighter than those given by arratia et al. finally, we report results on simulations with real dna sequences, showing that even in the presence of 50% false negative errors, a target of cosmid length can be recovered with less than 0.1% miscalled bases. ron shamir dekel tsur naltrexone blocks rfr-induced dna double strand breaks in rat brain cells previous research in our laboratory has shown that various effects of radiofrequency electromagnetic radiation (rfr) exposure on the nervous system are mediated by endogenous opioids in the brain. we have also found that acute exposure to rfr induced dna strand breaks in brain cells of the rat. the present experiment was carried out to investigate whether endogenous opioids are also involved in rfr-induced dna strand breaks. rats were treated with the opioid antagonist naltrexone (1 mg/kg, ip) immediately before and after exposure to 2450 mhz pulsed (2 μs pulses, 500 pps) rfr at a power density of 2 mw/cm2 (average whole body specific absorption rate of 1.2 w/kg) for 2 hours. dna double strand breaks were assayed in brain cells at 4 hours after exposure using a microgel electrophoresis assay. results showed that the rfr exposure significantly increased dna double strand breaks in brain cells of the rat, and the effect was partially blocked by treatment with naltrexone. thus, these data indicate that endogenous opioids play a mediating role in rfr-induced dna strand breaks in brain cells of the rat. henry lai monserrat carino narendra singh rapid design and prototyping of customized rehabilitation aids vijay kumar ruzena bajcsy william harwin patrick harker situation room analysis in the information technologies market adamantios koumpis cadd for conoco exploration kendall white homage to moholy-nagy, 1979 eudice feder computer aesthetics: new art experience, or the seduction of the masses p d prince the computer-mediated economy hal r. varian revolution igor stromajer using medical objects: their structure, transmission, storage and usage frank m. groom acoustic wavefield propagation using paraxial extrapolators modeling by paraxial extrapolators is applicable to wave propagation problems in which most of the energy is traveling within a restricted angular cone about a principal axis of the problem. frequency domain finite-difference solutions are readily generated by using this technique. input models can be described either by specifying velocities or appropriate media parameters on a two- or three-dimensional grid of points. for heterogeneous models, transmission and reflection coefficients are determined at structural boundaries within the media.
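a minimal sketch of the k-mer spectrum underlying the shamir and tsur sequencing-by-hybridization entry above. the false-negative sampling is an assumed illustration of the error model they analyze, not their reconstruction algorithm; function names and parameters are made up for the example.

```python
# illustrative sketch: the k-mer "spectrum" used in sequencing by hybridization.
# a universal chip with all 4**9 9-mers reports which 9-mers hybridize; here the
# spectrum is simply enumerated from a known string for demonstration.
import random

def spectrum(dna, k=9):
    """set of k-mers occurring in dna (its errorless spectrum)."""
    return {dna[i:i + k] for i in range(len(dna) - k + 1)}

def noisy_spectrum(dna, k=9, p=0.5, seed=0):
    """drop each k-mer independently with probability p (false negatives only)."""
    rng = random.Random(seed)
    return {kmer for kmer in spectrum(dna, k) if rng.random() >= p}

if __name__ == "__main__":
    rng = random.Random(1)
    target = "".join(rng.choice("acgt") for _ in range(400))
    full = spectrum(target)
    noisy = noisy_spectrum(target)
    print(len(full), "k-mers in the errorless spectrum,", len(noisy), "after 50% false negatives")
```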
in this paraxial scheme, the direct forward scattered waves are modeled with a single pass of the extrapolator operator in the paraxial direction for each frequency. the first-order back-scattered energy can then be modeled by extrapolation (in the opposite direction) of the reflected field determined on the first pass. higher order scattering can be included by sweeping through the model with more passes. the chief advantages of the paraxial approach are 1) active storage is reduced by one dimension as compared to solutions which must track both up-going and down-going waves simultaneously, thus even realistic three-dimensional problems can fit on today's computers, 2) the decomposition in frequency allows the technique to be implemented on highly parallel machines such as the hypercube, 3) attenuation can be modeled as an arbitrary function of frequency, and 4) only a small number of frequencies are needed to produce movie-like time slices. by using this method a wide range of seismological problems can be addressed, including strong motion analysis of waves in three-dimensional basins, the modeling of vsp reflection data, and the analysis of whole earth problems such as scattering at the core-mantle boundary or the effect of tectonic boundaries on long-period wave propagation. r. w. clayton r. w. graves carla neddly maxime the doll floated by (quilt for flight 800) leslie nobler farber medical information science in medical education: a transition in transition a. g. swanson technology rules - the other side of technology dependent code the major objectives of a rules-driven, technology independent da system are: faster support for new technologies; reduced development effort; better utilization of skills; and fewer programs to maintain. however, if not designed properly, the result could be an unnecessarily complex, expensive system, which does not fully utilize the technology capability. some of the key decisions are: (a) which technology parameters are to be fixed (hard coded) vs. rule-defined? (b) what is the interface between programs and rules? (c) what is the relative priority of product cost vs. da development cost? (d) should rules be defined in a high-level language and then compiled, or should they be coded in a format directly accessed by the programs? (e) what methods can be used to allow program improvements and additions without obsoleting the existing rule library? melvin f. heilweil the other woman corinne whitaker all publishers are alike, aren't they? (panel discussion) topics: 1. the computer education publishing marketplace: an overview - the biggest and the smallest, the oldest and the newest, introductory books versus advanced books. 2. discussion question: what can go wrong in the author-publisher relationship? 3. discussion question: what advice would you give to a close friend in selecting a publisher? 4. discussion question: how will technology change the publishing process and the author-publisher relationship? 5. discussion question: building better textbooks for computer education: how do we do it? 6. summary and conclusions stephen mitchell charles stewart jon thompson charles murphy barbara friedman sigact news complexity theory column 18 lane a. hemaspaandra a universal concurrent algorithm for plasma particle-in-cell simulation codes we have developed a new algorithm for implementation of plasma particle-in-cell (pic) simulation codes on concurrent processors.
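a minimal sketch of one one-way extrapolation step of the kind the clayton and graves paraxial-extrapolation entry above applies once per frequency. for simplicity a constant-velocity phase-shift operator stands in for their heterogeneous-media finite-difference paraxial operator, and the grid spacing, frequency, and velocity below are assumed example values.

```python
# illustrative sketch: one pass of one-way wavefield extrapolation for a single
# frequency, using a constant-velocity phase-shift operator (a simplification of
# the frequency-domain finite-difference paraxial operator in the abstract).
import numpy as np

def extrapolate_step(u, dz, dx, freq_hz, velocity):
    """advance the monochromatic field u(x) by dz along the paraxial direction."""
    nx = u.size
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)     # horizontal wavenumbers
    w = 2.0 * np.pi * freq_hz
    kz_sq = (w / velocity) ** 2 - kx ** 2
    propagating = kz_sq > 0.0
    kz = np.sqrt(np.where(propagating, kz_sq, 0.0))
    phase = np.where(propagating, np.exp(1j * kz * dz), 0.0)  # drop evanescent energy
    return np.fft.ifft(np.fft.fft(u) * phase)

if __name__ == "__main__":
    x = np.linspace(-500.0, 500.0, 512)
    u0 = np.exp(-(x / 50.0) ** 2).astype(complex)   # a smooth source distribution
    u1 = extrapolate_step(u0, dz=10.0, dx=x[1] - x[0], freq_hz=20.0, velocity=2000.0)
    print(abs(u1).max())
```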
this new pic algorithm, termed the universal concurrent pic algorithm (uc-pic), has been utilized in a one-dimensional electrostatic pic code on the jpl mark iii hypercube parallel computer. to decompose the problem using the uc-pic algorithm, the physical domain of the simulation is divided into sub-domains, equal in number to the number of processors, such that all sub-domains have roughly equal numbers of particles. for problems with non-uniform particle densities, these sub-domains will be of unequal physical size. each processor is assigned a sub-domain, with nearest neighbor sub-domains assigned to nearest neighbor processors. using this algorithm in the mark iii pic code, the increase in speed in going from 1 to 32 processors for the dominant portion of code (push time, defined below) was 29, yielding a parallel efficiency of 90%. although implemented on a hypercube concurrent computer, this algorithm should also be efficient for pic codes on other parallel architectures and on sequential computers where part of the data resides in external memory. p. c. liewer v. k. decyk j. m. dawson g. c. fox replica madge gleeson mems cad beyond multi-million transistors (panel) kris pister albert p. pisano nicholas swart mike horton john rychcik john r. gilbert gerry k. fedder interdisciplinary computer science: introduction michael a. grasso mark r. nelson az300, 1998 lillian schwartz why it isn't art yet k knowlton family portrait: father olga tobreluts challenges facing global supply-chains in the 21st century matt hennessee if these walls could talk: the fiddler's story naomi ribner footnote to the millenium ii thomas esser digital self-portrait a. j. thieblot subject differences in student attitudes to paper-based and web-based resources (poster session) judy sheard margot postema selby markham chips in space, 1984 ken o'connell battered corinne whitaker computers in hollywood s. landa w. dietrick m. jaffe millennium girl, 1997 laurence m. gartel computational biology david t. kingsbury dynamical simulations of granular materials using the caltech hypercube a technique for simulating the motion of granular materials using the caltech hypercube is described. we demonstrate that grain dynamics simulations run efficiently on the hypercube and therefore that they offer an opportunity for greatly expanding the use of parallel simulations in studying granular materials. several examples, which illustrate how the simulations can be used to extract information concerning the behavior of granular materials, are discussed. b. t. werner p. k. haff architecture-level power estimation and design experiments architecture-level power estimation has received more attention recently because of its efficiency. this article presents a technique for power analysis of processors at the architecture level. it provides cycle-by-cycle power consumption data of the architecture on the basis of the instruction/data flow stream. to characterize the power dissipation of control units, a novel hierarchical method has been developed. using this technique, a power estimator is implemented for a commercial processor. the accuracy of the estimator is validated by comparing the power values it produces against measurements made by a gate-level power simulator for the same benchmark set. our estimation approach is shown to provide very efficient and accurate power analysis at the architecture level.
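a minimal sketch of the instruction-stream-driven, cycle-by-cycle energy accounting idea in the chen, irwin, and bajwa architecture-level power estimation entry above. the per-unit energy values and instruction classes are placeholder assumptions, not characterized models of any processor.

```python
# illustrative sketch: cycle-by-cycle energy accounting driven by an instruction
# stream. the per-unit energies (nJ) are placeholders; a real estimator would use
# characterized energy models for the alu, mac unit, register files, control, etc.

UNIT_ENERGY_NJ = {"alu": 0.8, "mac": 1.5, "regfile": 0.4, "control": 0.6}

# which units each instruction class is assumed to activate in a given cycle
ACTIVATED_UNITS = {
    "add": ["alu", "regfile", "control"],
    "mul": ["mac", "regfile", "control"],
    "load": ["regfile", "control"],
    "nop": ["control"],
}

def estimate(trace):
    """return (per-cycle energies in nJ, total energy in nJ) for an instruction trace."""
    per_cycle = [sum(UNIT_ENERGY_NJ[u] for u in ACTIVATED_UNITS[instr]) for instr in trace]
    return per_cycle, sum(per_cycle)

if __name__ == "__main__":
    cycles, total = estimate(["add", "mul", "load", "nop", "add"])
    print("per-cycle nJ:", cycles, "total nJ:", total)
```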
the energy models built for first-pass estimation (such as alu, mac unit, register files) are reusable for future architecture design modification. in this article, we demonstrate the application of the technique. furthermore, this technique can evaluate various kinds of software to achieve hardware/software codesign for low power. rita yu chen mary jane irwin raminder s. bajwa iceclif, 1990 darcy gerbarg an architecture for conceptual mechanical modeling robert young walid keirouz jahir pabon digital self-portrait eunji mah book review: introduction to computational biology randolph chung whole genome association studies in humans (abstract) d. cohen m. blumenfeld i. chumakov n. schork oral history rik sferra taut turnip anna ullrich pacific island vinh nguyen users view the complexity of communications systems and products developed and manufactured by at & t demands a sophisticated set of computer aided engineering tools. these tools are required to manage the complexities ranging from the integrated circuit level, through the circuit pack level, to the levels of shelves and frames of equipment. they are used at the very onset of the design process, starting with initial design intent capture, through fabrication of prototypes and models, and culminating in information transfer and manufacture. to meet these needs within a common design environment, a cae system is required which possesses considerable technical depth, great flexibility, and the capacity to evolve to meet changing needs. the unified cad system is a comprehensive and integrated system of cae tools used at at & t for the design, development, and manufacture of electronic circuit packs and backplanes. this system was conceived and developed at at & t bell laboratories and is now in heavy use by at & t product development and manufacturing organizations. when used in its complete form, the system provides a totally data driven process from design capture to manufactured product. this is achieved with a minimum of manual intervention and with the availability of audits to verify the integrity of data at each step along the way. this paper addresses the users view of the system. at at & t, the users consist of design engineers, specialists in physical design realization, documentation, and information transmittal, and manufacturing engineers. this paper presents a description of how the system is used today, its performance in these environments, and a users view of future directions. we find that the unified cad system improves quality, reduces costs, reduces intervals, and improves human interfaces. use of the system has resulted in fewer design iterations, reduced design activity, and elimination of duplication of effort. the system has had a profound effect in bringing designs into manufacture, and the users are anxious to extend its benefits into even more aspects of the product development and manufacturing process. j. r. colton f. e. swiatek d. h. edwards a nmr-spectra-based scoring function for protein docking a well studied problem in the area of computational molecular biology is the so- called protein-protein docking problem (ppd) that can be formulated as follows: given two proteins _a_ and _b_ that form a protein complex, compute the 3d-structure of the protein complex _ab_. protein docking algorithms can be used to study the driving forces and reaction mechanisms of docking processes. 
they are also able to speed up the lengthy process of experimental structure elucidation of protein complexes by proposing potential structures. in this paper, we discuss a variant of the ppd-problem where the input consists of the tertiary structures of _a_ and _b_ plus an unassigned 1h-nmr spectrum of the complex _ab_. we present a new scoring function for evaluating and ranking potential complex structures produced by a docking algorithm. the scoring function computes a "theoretical" 1h-nmr spectrum for each tentative complex structure and subtracts the calculated spectrum from the experimental spectrum. the absolute areas of the difference spectra are then used to rank the potential complex structures. in contrast to formerly published approaches (_e.g._ morelli _et al._ [38]) we do not use distance constraints (intermolecular noe constraints). we have tested the approach with the bound conformations of four protein complexes whose three-dimensional structures are stored in the pdb data bank [5] and whose 1h-nmr shift assignments are available from the bmrb database (biomagresbank [47]). in all examples, the new scoring function produced very good rankings of the structures. the best result was obtained for an example where all standard scoring functions failed completely. here, our new scoring function achieved an almost perfect separation between good approximations of the true complex structure and false positives. unfortunately, the number of complexes with known structure and available spectra is very small. nevertheless, these experiments indicate that scoring functions based on comparisons of one- or multi-dimensional nmr spectra might be a good instrument to improve the reliability and accuracy of docking predictions and perhaps also of protein structure predictions (threading). o. kohlbacher a. burchardt a. moll a. hildebrandt p. bayer h.-p. lenhof engineering bridges: from concept to reality (abstract) henry petroski vise from vise versa series florence ormezzano digital self-portrait eddie kang telematic vision paul sermon the cochlear implant 1 the sequence of the human genome (abstract only) a consensus sequence of the euchromatic portion of the human genome has been generated by the whole genome shot-gun sequencing method that was developed while sequencing the genomes of haemophilus influenzae and drosophila melanogaster. the 2.9 billion bp sequence was generated over nine months from 27,271,853 high quality sequence reads (approximately 5x coverage of the genome) from both ends of plasmid clones made from the dna of five individuals: three females and two males of african-american, asian-chinese, hispanic and caucasian ethnicity. the coverage of the genome in cloned dna represented by paired end-sequences exceeds 37x. two assembly methods, a whole genome assembly and a regional hybrid assembly, were utilized, combining bac data from genbank with celera data. over 90% of the genome is in assemblies of 500,000 bp or greater, and 25% is in assemblies of 10 million bp or larger. analysis of the genome sequence reveals approximately 26,178 protein-encoding genes for which there is strong corroborating evidence and an additional 12,000 computationally derived genes with mouse homologues or other weak supporting evidence. comparative genomic analysis indicates vertebrate expansions of genes associated with neuronal function, tissue-specific developmental regulation, and the hemostasis and immune systems. dna sequence comparisons among the five individuals provided locations of 2.6 million single nucleotide polymorphisms (snps).
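a minimal sketch of the difference-spectrum ranking step from the kohlbacher et al. nmr-based docking entry above. it assumes the experimental and calculated 1h-nmr spectra are already sampled on a common chemical-shift grid; computing the theoretical spectrum for a candidate complex is outside the sketch, and the toy spectra below are made up.

```python
# illustrative sketch: rank candidate complex structures by the absolute area of
# the difference between an experimental 1h-nmr spectrum and the spectrum
# calculated for each candidate (smaller area = better agreement).
import numpy as np

def difference_area(experimental, calculated, shifts_ppm):
    """absolute area between the two spectra over the chemical-shift axis."""
    return np.trapz(np.abs(experimental - calculated), shifts_ppm)

def rank_candidates(experimental, candidate_spectra, shifts_ppm):
    """return candidate indices sorted best-to-worst plus their scores."""
    scores = [difference_area(experimental, c, shifts_ppm) for c in candidate_spectra]
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    return order, scores

if __name__ == "__main__":
    ppm = np.linspace(0.0, 10.0, 2001)
    exp_spec = np.exp(-((ppm - 7.2) ** 2) / 0.01)             # toy experimental spectrum
    candidates = [np.exp(-((ppm - mu) ** 2) / 0.01) for mu in (7.25, 8.0, 6.0)]
    order, scores = rank_candidates(exp_spec, candidates, ppm)
    print("ranking (best first):", order)
```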
the haploid genomes of a randomly drawn pair of humans differ at a rate of one per 1,250 bp on average but there is marked heterogeneity in the level of polymorphism across the genome. only 0.75% of the snps led to possibly dysfunctional proteins. mark adams digital self-portrait erin mccartney my adventure using computer science on the genome project webb miller the role of engineering in the evolving technology/automation interface in order to be successful in the rapidly expanding field of lsi and vlsi development, there must be a close interrelationship between the software engineers who develop cad tools and the hardware engineers who must use those tools. any new cad tool must encompass three areas: it must be able to handle complex circuits efficiently; it must be a complete package, including simulation, layout, checking and test program generation; and it must be useable. the hardware engineer can provide valuable system integration experience when the time comes to incorporate each new tool into the complete package. if new tools do not properly interface with existing or projected tools, they will induce errors and not be useful. finally, if hardware engineers are involved in each of the phases of cad tool development, they will have a vested interest in ensuring that the package works, and just as importantly, they will use the tools. r. p. lydick i hear the bells claire abramowitz circle of life emily rosenthal integrated computer aided design, documentation and manufacturing system for pcb electronics this paper describes an integrated computer aided design, documentation and manufacturing system, which assures data integrity from physical design to manufacturing with the aid of one phase data input and integrated parts data base. unique features of the system are comprehensive documentation support for pcb electronics, one-phase user friendly data input, intensive input data checking, support for part and document numbering and pcb numbering, user definable document formats and languages and fully automatic generation of n.c. programs for insertion machines and input data for test equipment. although the design support is highly automatic, the system is just computer aided allowing the user to control the work flow and by providing the possibility for the user to overdrive the data base information. the designer experience, intuition and imagination can be fully utilized. mikko tervonen hannu lehikoinen timo mukari desert, 1997 herbert franke cals and the virtual enterprise (panel session) donald j. mccubbrey susan gillies darren meister hiroshi mizuta surveyor's forum: interpreting experimental data david w. embley george nagy topkapi (poster session): a tool for performing knowledge tests over the www guido roessling bernd freisleben scenario based portfolio optimization vladimir nikolov anatolii antonov computer aided turbine blade surface design p. sundar varada raj r. sankar m. mangla rajesh kumar skew e9, 1983 mark wilson computing resources in small colleges the participants will describe experiences at their institutions with different computing environments: mini- computer, ibm pc lab, macintosh lab, sun workstation lab. 
each panelist will address similar issues as they relate to these environments in small colleges: funding; candid evaluation of hardware and software, including networking; course usage in computer science; usage by other disciplines; and management of the facility. time is planned for an open discussion among participants and audience on questions of mutual interest. james r. sidbury nancy baxter richard f. dempsey ralph morelli robert prince science education and research for technological progress education implies systematic instruction, schooling or training of people so that they are able to meet challenges of the environment they live in, and be able to contribute positively in solving problems to build a better society. science education is usually equated with education in the area of natural and physical sciences, but in modern day society it includes important aspects of social sciences. education at academic institutions of higher learning must go hand in hand with research---an activity aimed at discovering new things. while the academic institutions engage in pioneering studies in basic or applied research, industrial research labs are required to shape this pioneering work into products to be used by society. technological progress implies a process of breaking through obstacles, and producing new things and eliminating the old. s. imtiaz ahmad a training model of a microprogramming unit for operation control angel smrikarov stoianka smrikarova wladimir dochev teaching computer ethics: an alternative assessment approach ian k. allison peter halstead the standards process: evolution or revolution? claudia bach the dmca needs fixing reexamining u.s. copyright laws. vir v. phoha achieving computer literacy barbara b. wolfe experiences with ethical issues: part 2 judith l. gersting frank h. young literacy for computer science majors stewart a. denenberg how should data structures and algorithms be taught data structures and algorithms is clearly a very important topic and course in the computer science curriculum. it has been taught at several levels by a number of approaches. should the approach be mathematical, theoretical and abstract or very concrete and "hands on"? whichever method is used, the ultimate goal is the same: enhancing student comprehension. the panelists discuss three distinct and well-defined approaches. danny kopec richard close jim aman the human-centric approach michael l. dertouzos computer science as the focus of a secondary school magnet program the "magnet program" is a concept that has received a great deal of attention, especially in urban school districts. a magnet program is one which is made available to students who live outside of the area which is usually served by the school in which the program is housed. a key requirement of any magnet program is that participation be voluntary. students attend magnet programs to take advantage of what those programs offer. in many cases, students in magnet programs spend up to an hour traveling to school each day. brian d. monahan accommodating diversity of academic preparation in cs1 (panel) martin dickey frank friedman max hailperin bill manaris ursula wolz design and implementation of a programming contest for high school students the computer science department at northwest missouri state university sponsors a computer programming contest each spring for area high school students. the contest draws about 250 people each year.
the olympiad has provided valuable student contact for the computer science faculty. the computer science department views the olympiad as a very powerful recruiting tool for the university. this paper will address the development of the computer science olympiad at northwest missouri state university. doug myers linda null technologies for knowledge-building discourse marlene scardamalia carl bereiter computing curricula 2001 how will it work for you? eric roberts gerald engel c. fay cover andrew mcgettrick carl chang ursula wolz the determinants of job satisfaction for is professionals in technical career paths catherine m. ridings lauren b. eder paradigms and laboratories in the core computer science curriculum: an overview recent issues of the bulletin of the acm sigcse have been scrutinised to find evidence that the use of laboratory sessions and different programming paradigms improve learning difficult concepts and techniques, such as recursion and problem solving.many authors in the surveyed literature believe that laboratories are effective because they offer a mode of learning that complements classroom teaching. several authors believe that different paradigms are effective when used to support teaching mathematics (logic and discrete mathematics) and computer science (programming, comparative programming languages and abstract machines).precious little evidence by way of reported results of surveys, interviews and exams was found in the acm sigcse bulletins to support these beliefs. pieter h. hartel l. o. hertzberger evaluating enterprise resource planning (erp) systems using an interpretive approach enterprise resource planning (erp) systems involve the purchase of pre-written software modules from third party suppliers, rather than bespoke (i.e. specially tailored) production of software requirements, and are often described as a _buy_ rather than _build_ approach to information systems development. current research has shown that there has been a notable decrease in the satisfaction levels of erp implementations over the period 1998-2000. the environment in which such software is selected, implemented and used may be viewed as a social activity system, which consists of a variety of stakeholders e.g. users, developers, managers, suppliers and consultants. in such a context, an _interpretive_ research approach (walsham, 1995) is appropriate in order to understand the influences at work. this paper reports on an _interpretive_ study that attempts to understand the reasons for this apparent lack of success by analyzing issues raised by representatives of key stakeholder groups. resulting critical success factors are then compared with those found in the literature, most notably those of bancroft _et al_ (1998). conclusions are drawn on a wide range of organizational, management and political issues that relate to the multiplicity of stakeholder perceptions. walter skok michael legge what's in store for you? the computer center at the university of minnesota has recently set up a retail store operation for the sale of computer-related documentation and supplies. this paper will discuss the factors which lead up to the decision to begin a retail operation as well as the policy decisions which govern the operation. i will also discuss the rules which affect the operation within a tax-exempt institution, the problems of staffing, stocking, space, and point of sale records. 
new problems which are the direct result of network expansion and microcomputer technology will also be addressed. it is my hope that this discussion will help you to avoid some of the problems that we encountered. during the summer of 1980, there were many things happening at the university of minnesota which eventually resulted in the opening of the university computer center store. inflation and the economy made it obvious that the computer center budget was not going to expand at a rate which would allow us to maintain our existing level of services. this meant that we had to search for new areas of income to offset our increased expenses and permit us to provide what we felt were needed services. all of the "free" services provided by the university computer center (ucc) came under close scrutiny to determine which ones could be eliminated, reduced, or provided on a charge basis. one of these areas was documentation. richard t. franta president's letter: on spending our time wisely peter j. denning software patents and their potential impact on the eda community (panel) william m. van cleemput ewald detjens herman beke george c. chen joseph hustein william lattin dennis fernandez the importance of selected systems analysis and design tools and techniques as determined by industry systems analysts and university educators r. aukerman r. schooley d. nord j. nord a proposed structure for a computer based learning environment - a pragmatic approach (poster) charlie daly resource management in a decentralized system the heterogeneous collection of machines constituting the processor bank at cambridge represents an important set of resources which must be managed. the resource manager incorporates these machines into a wider view of resources in a network. it will accept existing resources and specifications for constructing others from existing ones. it will then allocate these to clients on demand, combining existing resources as necessary. resource management in a decentralized system presents interesting problems: the resources are varied and of a multi-level nature; they are available at different locations from where they are required; the stock of available resources varies dynamically; and the underlying system is in constant flux. daniel h. craft bill o' rights john perry barlow computing technology and the third world the advent of computing technology has made far more impact in the developed world than any other technology in the past. the effect of this technology on third world countries has, so far, not been different from the introduction of other types of technologies. this paper attempts to highlight the negative aspects of the present state of computing in these countries in the hope that lessons can be drawn which will improve and modify future developments and trends. transferring the development pattern of the developed world would be unwise as this has proved inadequate in the industrialization process of third world countries. the paper also argues the role of international agencies and computer suppliers in the dissemination of information on computing technology and suggests a more pragmatic approach to the development of endogenous capacities through direct interaction between developed and less developed countries. examples of development profiles are drawn from the arab region and guidelines for future plans are proposed. a. 
dewachi race differences in job performance and career success although blacks have gained entry to the information systems (is) field and various managerial positions, they continue to experience more restricted career advancement prospects than whites. they have found it difficult to advance professionally and managerially within their organizations. perhaps, as the management literature suggests, this is because minorities may experience considerable discrimination in their jobs that lowers their performance and ultimately impedes their career advancement [10]. magid igbaria wayne m. wormley computer science in the new millennium: convergence of the technical, social and ethical c. dianne martin computers, technology, and society: special projects to enhance the curriculum margaret miller supporting declarative programming through analogy two of the most frequently used programming languages in teaching artificial intelligence (ai) are declarative. many undergraduates have difficulty in making the transition from the procedural programming language learned in their early years in college to the declarative programming paradigm used in the ai course. this paper presents a methodology that supports declarative programming in an ai course by using analogy. several classroom examples are presented along with the analogous out-of-class assignments. antonio m. lopez an evaluation of presentation methods for cal (abstract only) computer-assisted learning (cal) has been used as an educational tool for nearly twenty years. its potential as a valuable supplement or replacement for traditional methods of instruction has been widely acknowledged; the realization of its potential has remained less than complete. cal has been used with varying degrees of success in many areas of education, including public school education, university courses, distance education, vocational training in college environments, and in-house industrial training. the research described in this presentation focuses on the use of cal in industrial training. an initial investigation into the use of cal in industry, where course objectives are usually short term and measurable, shows that many drawbacks exist. it is felt that cal in industry will not reach its full potential until more effective products are available. at the moment, cost is not the limiting factor so much as mere availability. work needs to be done on software portability, innovative methods of presentation, alternative methods of presentation for various learning styles, and for individual companies the ability to customize software for their particular needs. this phase of our cal research concentrates on the effectiveness of various presentation methods. guidelines for effective screen design exist, but empirical work is lacking to verify all of the criteria and in particular to determine the relative importance of the many screen design considerations. some of the areas of concern in presentation design include [1,2,3]: consistent use of functional areas on the screen; use of scrolling; amount of material per screen; use of graphics; use of color; best placement of text, graphics, and questions; use of highlighting; significance of line justification; and entry and exit points from a cal package. it is difficult to set up objective tests for these criteria because of the subjective nature of the type of learner, the age of the learner, and especially the nature of the learning material.
our investigation has started by testing those parameters which we identified as being most likely to cause measurable differences in learner response. the initial broad parameter chosen was the use of scrolling, e.g. scrolling versus paging versus windowing. the challenge to be met to ensure objective evaluation is to keep the educational material consistent from test package to test package and to keep the secondary presentation parameter differences to a minimum. even so, designing a valid evaluation procedure to analyze the results of each test vehicle is not a clear cut process. we are interested in measuring the software's ability to motivate, stimulate, and promote long term retention of material and concepts, whereas the only effect that is straightforward to measure is short term retention. experimental software has been developed to evaluate the relative effectiveness of presenting the same educational material using the three different broad presentation methods; scrolling, paging, and windowing. the software is used by first year computer science students in the first week or two of term. we evaluate the effectiveness of the packages through two feedback mechanisms: first of all by questionnaires, and secondly by the analysis of a usage log that is produced by the software. the effectiveness of each package as far as fulfilling its educational objective is concerned is measured. in this discussion, the presentation methods used in designing the courseware, the evaluation techniques chosen, and some results and observations are examined. preliminary work in designing "dynamic" courseware, i.e. self-monitoring and self- modifying, is discussed. jane m. fritz ethics and the computer world: a new challenge for philosophers john ladd walking on an electric treadmill while performing vdt office work n. edelson j. danoffz computer science in industry (panel discussion) the first three panelists will give a short presentation on the computer science education program at their location covering the purpose of the programs, its goals, the curriculum, the instruction methods, and their experience with the program. the fourth panelist will then discuss industrial education programs from the perspective of having participanted in them as an instructor. a general discussion will follow the formal presentations. ez nahouraii tom bredt charles lobb nell b. dale discovery learning in introductory operating system courses uta ziegler a course in computer law we describe a course for computer science students that covers the legal issues that apply to computing, from intellectual property protection to liability for system failures to computer crime. d. g. kay on integration of learning and technology (poster session) zoran putnik an alternative culminating experience for master's students in computer science james c. mckim timothy o. martyn roger h. brown michael m. danchak kathleen l. farrell c. william higginbotham irina s. ilovic brian j. mccartin j. peter matelski a twenty year retrospective of the nato software engineering conferences (panel session): the nato conferences from the perspective of an active software engineer doug ross word processors: a new group of users and what to do with them carol w. doherty immigration and the global it work force lawrence a. west walter a. bogumil should we get involved in word processing? every computer center needs a strategy for word processing and other micro- based systems. 
this is something we can be sure of in the face of the current popularity of these systems. micro-based systems won't go away because we refuse to think of them and actually, although we have to assimilate a lot of new information and handle more work, these systems can provide cost effective services never before available to us and to our clients. we must get involved in a positive way. we must establish a method of operation for micro-based systems. this methodology will vary from one campus to another, but a sharing of ideas will be helpful to all of us. lois j. secrist computational science, parallel and high performance computing in undergraduate education (abstract) thomas l. marchioro joseph zachary d. e. stevenson ignatios vakalis leon tabak modularized short courses computer short course students are as heterogeneous a group as can be imagined in a college classroom. at northern illinois university we teach novices who are professional students; you can spot them by their sharpened, poised pencils and fresh notebooks. we teach the old computer pros who want information on some new system feature or compiler. we teach graduate students who feel they will face this monster called a "computer" once in their lifetime---they have a limited, specific and worthy goal, but it's not computer knowledge. and we teach administrative personnel whose paychecks are contingent upon the mastery of short course content. mary ann drayton simulation in the undergraduate computer science curriculum david j. thurente on teaching computer literacy to future secondary school teachers this paper discusses our experience in teaching computer literacy classes to non-majors with a future career as secondary school teachers. the backgrounds of the students, our teaching philosophy, course organization and conduction as well as the feedbacks from our students are detailed. from the feedbacks it can be seen that the central goal of computer and application literacy is accomplished. furthermore, students' perceptions of computers has turned from negative to positive. greg c. lee chen-chih wu computer security past and future diana moore michael neuman question time: true leadership john gehl suzanne douglas peter j. denning robin perry new ifip committee formed on information security education and training harold joseph highland what it's like to be a popl referee m n wegman computing in the middle east s. e. goodman j. d. green pope gregory on mars: on february 31, 2000 clement kent does society need computer science research? shrisha rao sim: a utility for detecting similarity in computer programs david gitchell nicholas tran imperial silicon valley paulina borsook the evaluation of computer science education in europe (poster) patricia magee micheel o'heigeartaigh computer science undergraduate capstone course curriculum concerns of the computer science discipline continue to require refinements in this rapidly changing field. we have established curriculum guidelines and we have two years experience in the accrediting process of the csab. remarks such as "i do not consider the topic of … to be in the computer science discipline" and "at least the topic of … is not a capstone course" are being made. the goal of this panel is to provide an open, probing platform for a discussion of the subject of a capstone course. clinton p. fuelling anne-marie lancaster mark c. kertstetter r. waldo roth william a. brown richard k. 
reidenbach ekawan wongsawatgul acadia university's "sandbox" glenn macdougall down the road: is something missing from your campus? thomas c. waszak a predictor for success in an introductory programming class based upon abstract reasoning development the purpose of this study was to create and validate a tool which could be administered to students enrolled in or considering enrollment in an introductory programming course to predict success in the course, or alternatively to segregate enrolled students into fast- and slow-paced sections. previous work which met the criteria of a self-contained predictive tool included the work of barry kurtz [5] of the university of california, irvine, using abstract reasoning development as the predictive measure. the test kurtz developed had been tested only on a small sample (23 students) in a controlled environment (one instructor, the researcher), and the test required up to 80 minutes to complete. this study modified the kurtz test to require 40 minutes and administered it to 353 students learning two different languages from a variety of instructors. this predictor successfully distinguished advanced students from average and below-average students. when used in conjunction with other known factors, e.g., gpa, the authors feel it is a viable tool for advising and placement purposes. ricky j. barker e. a. unger tools and techniques for teaching objects first in a java course michael kölling john rosenberg software change and evolution (sce '99) václav rajlich training adult learners - a new face in end users julie scott sig3c first anniversary robert p. campbell communicative technology and the emerging global curriculum robert p. taylor an industry perspective on computer science accreditation (abstract) john impagliazzo j. dennis bjornson dennis j. frailey jeanette horan gerald h. thomas cost trade-offs in hardware support (panel discussion) a serious problem facing computer science educators is deciding what type of computer resource(s) needs to exist in order to serve computer science students. should these resources include time-sharing, micro-processors, one large dedicated computer, and so forth? what are colleges and universities doing to attack this question? a panel of computer science education faculty from several colleges and universities will present views and insights. there will be time for questions and answers from the audience. william g. bulgren nelle dale victor wallace clair maple larry loos software engineering teaching: a "docware" approach teaching software engineering is a true challenge. indeed, software engineering technology is only justified for large projects or long-term application maintenance, aspects that are impossible to show in an introductory course. in order to circumvent these difficulties, we propose a new approach to teaching software development which we call _"docware"._ it is a documentation-centered process: the software product is no longer regarded as a source file that is documented afterwards, but as a set of documents of which the source file is one product among others. after having specified our teaching objectives, we describe this approach, which uses new tools that we have developed and used for several years. a report on the use of this approach concludes this paper. daniel deveaux regis fleurquin patrice frison a successful computer approach to the computer literacy course barbara a. price assessing "hands on" skills on cs1 computer & network technology units d. veal s. p.
maj rick duley an analysis of advanced c.s. students' experience with software maintenance maintenance accounts for a very large portion of total software cost and effort in industry. yet computer science students are seldom exposed to this in their training. this study investigated advanced cs students' reactions to doing a maintenance project. a survey was used to examine their attitudes towards doing maintenance, the maintenance strategies they used, the value of the experience, and their assessments of the maintainability of the original program. most students found it a valuable learning experience. a wide disparity was found in their perceptions of the maintainability of the original program. donna m. kaminski making all types typedefed ron ben-natan the 1980 computing center newsletter contest as the first place winner in the 1979 computing center newsletter contest, the university of colorado computing center served as the judging site for the 1980 contest. this presentation will summarize colorado's experiences in coordinating the contest. james nichols e-imc (poster session): an authoring tool for humanistic teachers aimed to develop and distribute customized instructional courseware david lanari stefano roccetti the seekers m. o. thirunarayanan using information technology to integrate social and ethical issues into the computer science and information systems curriculum (report of the iticse '97 working group on social and ethical issue in computing curricula) mary j. granger joyce currie little elizabeth s. adams christina björkman don gotterbarn diana d. juettner c. dianne martin frank h. young the first computer - an ethical concern john a. n. lee raising the awareness of ethics in it students: further development of the teaching model the question of ethics of information technology professionals is one that gets considerable attention in both the popular press and some academic literature. there have been occasional calls for undergraduate courses to include the topic in the curriculum. most schools do so. in many cases students, particularly undergraduate students, have only a vague notion of some of the business issues involved and the professional bodies' published codes of ethics make for fairly dry classroom material. in 1989, dan couger discussed teaching ethics in an is environment. this paper takes the approach outlined by couger, essentially personalising the issues, a step further by drawing on input from leading it practitioners. the approach in the school of is at the university of new south wales incorporates the suggestions contained in several recent publications, calling for management to take the lead in setting ethical standards, providing current advice to students and developing the existing ethical awareness of the students. the paper gives a review of current literature in the area and gives details of the teaching methodology adopted. geoffrey dick yesterday, i knew all the answers a long time ago and far away in another world you discovered computing. it was fun and you enjoyed working with it. you felt comfortable using computers and software. maybe you were a student or even a staff person when you decided to find a way to spend more time with computer technology. you applied for a job in user services. you got the job and became part of an organization that dealt with computing all the time. your responsibilities required you to learn more and more about computing and you found you enjoyed doing this. 
then you were assigned the responsibility of helping customers directly. the customers liked you and you responded well to their questions. the customers got good answers and you felt really satisfied with your job. the job was becoming even better! management liked what they saw you doing and expanded your responsibilities. you saw the new responsibilities as encouragement to do even more and better things. you accepted the challenges and took on the new tasks. you were able to handle the new responsibilities well, which led you to even more interaction with customers. you were now getting so good at helping customers that you were asked to put together a training class, which you did. you discovered you liked teaching and helping several customers at one time. you developed another class, and then another, and in doing so learned even more. you were now making a name for yourself and customers were seeking you out. the other user services staff started coming to you for help with their problems. you liked this attention, and to make sure you didn't lose your newfound importance you worked even harder at solving problems and learning new pieces of the technology. you were on a roll! you had found your niche! l. downing s. chambers supply and demand for computer science phds (abstract) ashok chandra a long-term perspective on electronic commerce eric hughes work in the coming age john gray personal computing vs. personal computers much of the discussion about the use of personal computers within organizations arises from viewing personal computers and personal computing as the same thing. the focus is on a peripheral issue: the choice between distributed computer hardware and central computer hardware. it is more useful to focus on a very different issue: the difference between organizational computing and personal computing in organizational information systems. the primary problem is that of control; how can centralized information processing organizations ensure that the correct data is used, that it is protected, that the systems developed are what end users need, and that such development represents productive use of computing resources, without getting in the way of productive use of computing resources to solve individual users' problems? john lehman information science: a computer-based degree that emphasizes data the author has participated in the development of a new baccalaureate degree program entitled information science. this paper reviews the motivation for starting a new branch of undergraduate computing that is distinct from computer science and describes the principles that have been incorporated in the new curriculum. although the program is only two years old, its assessment and accreditation procedures are already under development. william mitchell retraining teachers to teach high school computer science the national trends regarding the retraining of secondary school teachers in a computer science discipline are examined. recommendations and guidelines are offered in this acm/ieee task force report. james l. poirot harriet g. taylor cathleen a. norris kombiz97 - virtual lab experiment with the advent of easily available internet and intranet technologies, it became possible to redesign the concept of building highly valuable managerial skills through solving real business problems using managerial games, and to make it part of a modern, very easy to use, and well-equipped learning environment.
the kombiz97 experiment presented below is an experiment, growing every year, in setting up such an environment and using it throughout more and more of the courses offered by the cracow academy of economics (cue). jerzy skrzypek tadeusz wilusz teaching human computer interaction to programmers many computer science graduates will likely find themselves developing interfaces in a work culture that has only naive notions of usability engineering. this article describes a course i developed that prepares students for this eventuality by providing them with practical and applicable hci skills. all course material is available on the world wide web, and pointers are provided. saul greenberg from structure to context - bridging the gap tony clear the anatomy of a project-oriented second course for computer science majors this paper describes the philosophy and design of a specific course, computer science 236, taught over the past few years at washington university. the philosophy of the course is that the objectives of the course can best be achieved by employing a series of associated projects which are complex enough to require a design and specification effort but are not so large that they cannot be completed in one semester. several other institutions have also found that a project-oriented course is advantageous. the purpose of this paper is to describe the philosophy and methodology of such a course and not to describe the specific course at washington university. however, in describing the generic course, those decisions made for computer science 236 will be presented as examples. will gillett two pieces on vendor relations dave browning first year information systems papers - optimising learning - minimising administration (poster) a. s. richardson intensity of end user computing phillip ein-dor eli segev designmentor (poster session): a pedagogical tool for graphics and computer-aided design ching-kuang shene john lowther soothsayers of the electronic age within our lifetimes, we have seen three generations of computers flourish and fade into the archives of technology. today we are experiencing the fruits of the fourth generation. our expectations of the fifth generation are the focus of this conference and, indeed, of the plans for our immediate future. from these expectations, we can extrapolate to the sixth generation. but what lies beyond? three noted thinkers in the computer industry comprise this panel of soothsayers, examining the omens and auguries of this electronic age. dr. carl hammer investigates the scenario beyond the data processing horizon. he explores processing by systems which are intelligent, expert, and capable of making complex decisions, human decisions. dr. kerry joels predicts the seventh generation system, giving specifications, capabilities, and costs of a likely model. he maps the route for reaching the seventh generation and points out the landmarks on a journey we have already begun. dr. alan kay foretells the future of small computers in an age of democratization of access and universal computer literacy. each speaker will address the interaction between the demands of technology on society and the demands society will make of its technology. and each will forecast a future which could be possible, should we wish it to be. virginia walker carl hammer kerry mark joels alan kay using inconsistent data for software process improvement petri vesterinen discriminatory practices in the pre-employment process: a proposed empirical investigation of academia mark a.
serva patricia h. nunley lorraine c. serva information systems '95 curriculum model: a collaborative effort john t. gorgone j. daniel couger david feinstein george kasper herbert e. longenecker computing laboratories and the small community college: defining the directed computing laboratory in the small college computing environment the small community college faces a unique set of challenges in laboratory implementation. this paper identifies the computing environment at a particular small community college, discusses the instructional content desired by the college, surveys various approaches to laboratories, and offers the approach that the college is using to implement appropriate instructional computing labs. the suggested approach provides for both an "open" and "scheduled" lab, promotes instructor freedom as no one lab approach is dictated by the lab setup, allows directed labs where they are appropriate and documents that class contact hours are set according to standards. robert l. tureman forum diane crawford can online auctions beat online catalogs? yaniv vakrat abraham seidmann risks to the public peter g. neumann command-line usage in a programming exam introductory computer science education has a strong emphasis on the teaching and learning of programming skills. in establishing that a desired level of proficiency in the use of these skills has been attained, many courses implement a practical exam where students must complete a program and get it to run correctly under supervision and unaided. this exam may, as in our case, be presented as a "barrier" exam which must be passed in order to proceed to intermediate computer science enrolment. the importance of such an event is not always matched by our understanding (as educators) of student behaviour under such conditions. the binary (pass/fail) nature of the exam tends to be reflected in student perceptions of the exam, often polarised as being "quite easy" by those who pass or "too difficult" by those who fail. this paper describes an exploration into command-line behaviour during the exam, in an attempt to gain some insight into student behavioural reaction to it. in so doing, the issue is raised as to whether or not certain actions are more likely to serve as indicators of a successful candidate, and what meaning this has for teachers and students. tony greening the role of mathematics in the computer science curriculum there has been much debate in the past few years about the appropriate mathematics requirements for an undergraduate computer science major. the discussion has focused primarily on two issues: (1) the underlying mathematical content of computer science courses and (2) the content of mathematics courses which would serve as appropriate cognate requirements for computer science major programs. while this discussion has been helpful, it has been too narrowly focused--it has not started from an understanding of the relationship between the disciplines of mathematics and computer science, but rather has sought to identify mathematical prerequisites that computer science majors need in order to take existing computer science courses. this paper is a small step in seeking to apply an understanding of the relationship between the disciplines of mathematics and computer science to the undergraduate computer science curriculum. james bradley performance management: the secret weapon in the y2k battle fred engel surveillance and the reengineering of commitment within the virtual organization f. a. wilson n. n. 
mitev survival techniques of a conversion sandy sprafka betsy draper about this issue… adele j. goldberg what the students said about plagiarism janet carter combining management and technology in a master's degree for information system professionals by the early to middle 1970s it was clear that education in computer science and related fields was failing to meet many of the needs of business and industry. in self-defense, many firms adopted the practice of hiring phds to lead programming projects (on the grounds that the phds were bright). other firms chose to hire bright bachelors, most of whom had little or no formal training in programming or software engineering (on the grounds that bright people could learn on the job). a side effect of these practices was that most of the phds were being siphoned off into industry and were thus unavailable to staff the rapidly growing computer science departments in higher education. at the same time, bright computer science bachelors were enticed directly into industry by high salaries, and thus did not pursue advanced degrees. these factors were no doubt part of the reason for the severe shortage of computer scientists available to higher education. thomas e. kurtz a. kent morton using project management concepts and microsoft project software as a tool to develop and manage both on-line and on-campus courses and student team projects debra burton farrior daniel e. hallock conference report: the 1995 acm big event ronald b. krisko another approach to justifying the cost of usability arnold m. lund imbalance between growth and funding in academic computing science: two trends c this report is endorsed by the computer science board and prepared by the board's committee on research funding in computer science. d gries r miller r ritchie p young how to succeed in graduate school: a guide for students and advisors: part i of ii marie desjardins digital video in education todd smith anthony ruocco bernard jansen the free universal encyclopedia and learning resource richard stallman the underpinnings of privacy protection frank m. tuerkheimer computing in the former soviet union and eastern europe shane hart brain tumour development in rats exposed to electromagnetic fields used in wireless cellular communication it has been suggested that electromagnetic fields (emf) act as promoters late in the carcinogenesis process. to date, however, there is no convincing laboratory evidence that emfs cause tumour promotion at non-thermal exposure levels. therefore the effects of exposure to electromagnetic fields were investigated in a rat brain glioma model. some of the exposures correspond to electromagnetic fields used in wireless communication. microwaves at 915 mhz were used both as continuous waves (1 w) and pulse-modulated at 4, 8, 16 and 217 hz in 0.57 ms pulses and 50 hz in 6.67 ms pulses (2 w per pulse). fischer 344 rats of both sexes were used in the experiments. using a stereotaxic technique, rat glioma cells (rg2 and n32) were injected into the head of the right caudate nucleus in 154 pairs of rats, exposed and matched controls. starting on day 5 after inoculation, the animals were exposed for 7 hours a day, 5 days a week, for 2--3 weeks. exposed animals were kept unanaesthetized in well-ventilated tem cells producing 915 mhz continuous or modulated microwaves. their matched controls were kept in identical tem cells without emf exposure. all brains were examined histopathologically and the tumour size was estimated as the volume of an ellipsoid.
our study of 154 matched pairs of rats does not show any significant difference in tumour size between animals exposed to 915 mhz, and those not exposed. thus our results do not support that even an extensive daily exposure to emf promotes tumour growth when given from the fifth day after the start of tumour growth in the rat brain until the sacrifice of the animal after about 16 days. leif g. salford arne brun bertil r. r. persson a practical approach to internationalizing information systems & computer science courses (seminar) janet l. kourik training - the key to successful systems in developing countries computers are being installed at an ever increasing rate throughout the developing world. to succeed they must be backed by skilled local manpower. the training for these skills must be provided locally. unless this training can be provided, there can only be a tragic waste of resources and a slowing down of development. indeed, in these circumstances the best advice that can be given is to avoid computers completely. j. adderley making a careers presentation jack wilson the practical management of information ethics problems: test your appropriate use policy, part ii john w. smith design as a cultural activity steve portigal surveyor's forum: generating solutions douglas j. keenan the art and science of user services we are all familiar with the great advances made over the years in all areas of science and technology resulting from the application of well known laws and principles. among such laws are the laws of conservation of energy and mass, the laws of thermodynamics, and newton's laws of motion. we are frequently overwhelmed by the changes confronting us daily in our user services work. we experience changes in hardware, software, organizational structure, management, personnel, user education, consulting, documentation and physical facilities. the discipline of physics takes the whole universe of ubiquitous changes in stride and integrates them into unifying principles. in the midst of an ever-changing world, physics presupposes a constancy of nature, i.e. that identical conditions give identical results; that cause and effect relationships can be determined and applied to observed phenomena. this paper will be a light-hearted trip in analogous thinking into a different perspective of the many changes we experience in our profession. i shall explore how we might be able to use these classical laws of physics to ease our burden of coping with the changes we encounter daily. if our activities do obey these laws in a predictable and sensible fashion, then we can use these principles to help us. for example, the fundamental law of the conservation of energy states that energy can neither be created nor destroyed. from this law we can deduce that for a given level of user services personnel staffing, a maximum amount of work (energy) can be accomplished over a given period of time. hence, priorities are set, either implicitly or explicitly. by clearly seeing the situation, realistic goals and expectations are possible. perhaps such phenomena as staff personnel burnout and high turnover rates might be diminished by seriously applying such techniques. robert k. shaffer science, computational science, and computer science: at a crossroads d. e. stevenson determinants of program repair maintenance requirements considerable resources are devoted to the maintenance of programs including that required to correct errors not discovered until after the programs are delivered to the user. 
a number of factors are believed to affect the occurrence of these errors, e.g., the complexity of the programs, the intensity with which programs are used, and the programming style. several hundred programs making up a manufacturing support system are analyzed to study the relationships between the number of delivered errors and measures of the programs' size and complexity (particularly as measured by software science metrics), frequency of use, and age. not surprisingly, program size is found to be the best predictor of repair maintenance requirements. repair maintenance is more highly correlated with the number of lines of source code in the program than it is with software science metrics, which is surprising in light of previously reported results. the actual error rate is found to be much higher than that which would be predicted from program characteristics. lee l. gremillion the impact and issues of the fifth generation: ethical issues in new computing technologies the spread of computer applications throughout american society is a form of social action, not simply a "scientific" laboratory exercise. like all social actions which affect the well-being of people, different approaches to computerization entail ethical judgements, however implicit. this session examines some ethical issues which are especially pertinent in the mobilization, development, and diffusion of new generation computing technologies. rob kling terry winograd paul smolensky roland schinzinger the role and responsibility of the consultant this paper examines the interaction that occurs between the computer consultant and his or her client, the factors that affect that interaction, the role of the consultant in the research university, the consultant's personal responsibility in the interaction, and ways to create change. the paper utilizes the method of institutional ethnography suggested by dorothy smith: place the consultant-client interaction within the organization of the research university. the paper describes how the consulting process works, beginning from the personal experiences of consultant and client, and then places the interaction in the larger context of the university, academic training, and research practices. the routinization of research separates it into many functions performed by people who may not understand the intended use of the research. the consultant performs two pieces of the research function --- training clients to use the computer and helping them deal with any problems. in the course of our work, we as consultants encounter contradictions in the university system that create our individual feelings of inadequacy and burnout. we also may find it difficult to accept our personal responsibility for the uses made of the research in which we participate. understanding our work and its place in the university is the first step toward creating change. this understanding can be increased by breaking our isolation and talking to each other about problems we encounter as consultants, by examining the ideology we participate in creating, and by integrating our professional expertise with our desire to assist our clients. we need not blame ourselves for the shortcomings of the system. changing the way we think about our world and our work is a way of changing our world and our work. eileen driscoll the digital millennium copyright act jeanine l. gibbs the validation of a political model of information systems development cost estimating albert l.
lederer jayesh prasad implementing a single classwide project in software engineering using ada tasking for synchronization and communication barry l. kurtz thomas h. puckett inside risks: a tale of two thousands peter g. neumann the senior seminar in computer science richard j. arras lewis motter critical success factors for is executive careers - evidence from case studies this article qualitatively analyzes the critical success factors (csfs) for information systems (is) executive careers based on evidence gathered from five case studies carried out in 1997. typical is executive career paths are presented in a time-series style and the csfs are interpreted within a descriptive framework by synthesising the case data based on social cognitive theory. the descriptive framework suggests that successful is executive careers would most likely be achieved by well-educated and experienced is employees who have the right attitude towards both their career and work, together with good performance. they would also exhibit an ability for self-learning and for anticipating future it uses, as well as having proficient is management knowledge and skills while working within an appropriate organizational environment. moreover, the framework systematically indicates the interactions between the coupling factors in the typical career development processes. this provides a benchmark for employees who are aiming at a senior is executive career against which they can compare their own achievements and aspirations. it also raises propositions for further research on theory building. nansi shi david bennett student preference for multimedia-based lectures (poster session): a preliminary report david a. scanlan progress report: brown university instructional computing laboratory an instructional computing laboratory, consisting of about 60 high-performance, graphics-based personal workstations connected by a high-bandwidth, resource-sharing local area network, has recently become operational at brown university. this hardware, coupled with an innovative courseware/software environment, is being used in the classroom in an attempt to radically improve the state of the art of computer science pedagogy. this paper describes the current state of the project. the hardware and courseware/software environments are described and their use illustrated with detailed descriptions, including sample screen images. some comments are included on our initial reactions to our experience to date with the environment and on our future plans. marc h. brown robert sedgewick abstract solution design by specification refinement j. m. burgos j. galve j. garcía j. j. moreno s. muñoz d. villen moving industry-guided multimedia technology into the classroom p. k. mckinley b. h. c. cheng j. j. weng a content analysis of ten introduction to programming textbooks a content analysis was conducted on ten introduction to programming textbooks to determine if there were any significant differences in their content. the results of the analysis indicated that the content of the older textbooks, pre-1983, was not significantly different from that of the more recent texts, post-1985. h. willis means an is1 workbench for acm information system curriculum '81 this paper describes the system architects' workbench, a personal computer-based teaching environment for courses in computer organization and systems programming.
this tool set provides an integrated learning and teaching environment for the computer systems concepts defined in acm is curriculum '81 is1. the central tool is a computer simulator based on a pedagogical model of computer system resources which allows students to study principles without becoming too involved in the implementation idiosyncrasies usually associated with machine-level programming. programs may be written directly in machine language or in a pascal-like language, tp, which includes features that allow complete access to and control of host-level resources. the tp compiler supports separate compilation, ipl load module generation, and detailed translation output used for machine language modification and debugging. the simulator supports interactive execution, tracing, modification, and debugging. leslie j. waguespack viewpoint: against software patents corporate the league for programming freedom richard stallman simson garfinkle integrating collaborative problem solving throughout the curriculum r. j. daigle michael v. doran j. harold pardue development of the mit microcomputer center zita m. wenzel computer literacy by computer the basic concepts of computer literacy can be taught by the use of computer management and interactive instruction. the approach described here emphasizes measuring student achievement and informing students of their progress. this approach also incorporates a system for the evaluation of alternative instructional experiences. m. i. chas. e. woodson computer science curriculum assessment merry mcdonald gary mcdonald teaching computer architecture with a new superscalar processor emulator current computers use several techniques to improve performance, such as cache memories, pipelining, and multiple instruction issue per cycle. using a real computer to teach these concepts is actually impractical, because these computers are designed to be programmed in high-level languages. in order to solve this problem, we have implemented a superscalar processor emulator, where most of the processor and cache parameters can be defined by the student. its objective is to create a set of laboratory exercises allowing the student to observe how the different components of the computer evolve while executing an assembler program. it allows detection of the different kinds of cache misses and hazards as well as their impact on performance. then, the student can apply some software techniques to reduce cache misses and to avoid hazards. santiago rodríguez de la fuente m. isabel garcía clemente rafael mendez cavanillas technology issues in washington daniel lin one department's evolution judith c. oleks editorial: technology or management peter j. denning proposed a.a.s. degree program in educational technology john m. lawlor values, personal information privacy, and regulatory approaches the relationships among nationality, cultural values, personal information privacy concerns, and information privacy regulation are examined in this article. sandra j. milberg sandra j. burke h. jeff smith ernest a. kallman a toolkit for individualized compiler-writing projects richard j. reid teaching abstraction explicitly (poster session) herman koppelman beyond the code of ethics: the responsibility of professional societies richard s. rosenberg final report: curricula for two-year college task force subgroup: computing for other disciplines richard austing therese jones computerized ambulance dispatching systems j. lin happenings on the hill fred w.
weingarten preparing future teachers to use computers: a course at the college of charleston frances c. welch a social implications of computing course which "teaches" computer ethics computers are integral to today's world, forming our society as well as responding to it. in recognition of this interaction, as well as in response to requirements by the computer science accrediting board (csab), many schools are incorporating computer ethics and values and addressing the social implications of computing within their curriculum. the approach discussed here is through a separate course, rather than relying on the integration of specific topics throughout the curriculum. sylvia clark pulliam computer science for secondary schools: course content computers and computing are topics of discussion in many curriculum areas in secondary school. the four courses recommended by this task group, however, have computing as their primary content. the courses are: 1) introduction to computer science i (a full-year course), 2) introduction to computer science ii (a full-year course), 3) introduction to a high-level computer language (a half-year course), and 4) applications and implications of computers (a half-year course). courses 1 and 2 are designed for students with a serious interest in computer science. course 1 can serve as a single introductory course for some students and also act as a prerequisite for course 2. at the end of two years of study, students should be prepared to be placed in second-level computer science classes in post-secondary educational institutions or to take the advanced placement exam available through the college entrance exam board. jean b. rogers gender issues in computer science: from literature review to local and national projects (panel discussion) gloria childress townsend florence appel debra boezeman freedom smith can web development courses avoid obsolescence? yes. frank klassner computers and society: an integrated course model william j. joel technological frames: making sense of information technology in organizations in this article, we build on and extend research into the cognitions and values of users and designers by proposing a systematic approach for examining the underlying assumptions, expectations, and knowledge that people have about technology. such interpretations of technology (which we call technological frames) are central to understanding technological development, use, and change in organizations. we suggest that where the technological frames of key groups in organizations---such as managers, technologists, and users---are significantly different, difficulties and conflict around the development, use, and change of technology may result. we use the findings of an empirical study to illustrate how the nature, value, and use of a groupware technology were interpreted by various organizational stakeholders, resulting in outcomes that deviated from those expected. we argue that technological frames offer an interesting and useful analytic perspective for explaining and anticipating actions and meanings that are not easily obtained with other theoretical lenses. wanda j. orlikowski debra c. gash session 11b: development models and methods r. bourgonjon traffic light: a pedagogical exploration through a design space viera k. proulx jeff raab richard rasala question time: what should be done about the "digital divide?" john gehl the communicative economy of the workgroup: multi-channel genres of communication stephen reder robert g.
schwab retraining pre-college teachers: a survey of state computing coordinators harriet g. taylor cathleen a. norris research alerts jon hind marsh christian heath mike fraser steve benford chris greenhalgh integrating "depth first" and "breadth first" models of computing curricula traditional undergraduate computer science curricula have been increasingly challenged on a host of grounds: undergraduate computing education is attracting fewer majors, is not producing graduates who satisfy the needs of either graduate programs or business and industry, and is not effectively responding to the increasing needs for computing education among the larger student population. in the face of such challenges, there has been a recent movement to restructure undergraduate computing curricula. at georgia tech we have designed (ay 91-92) and implemented (ay 92-93) a new computing curriculum that features a radical restructuring of subject matter. during the design and implementation process, we paid close and critical attention to the particulars of both the acm recommendations and reports from our colleagues at other institutions who had already gained some experience with "breadth first" approaches. we have concluded that curriculum modernization should integrate key aspects of both "depth first" and "breadth first" approaches. our new curriculum is an example of such integration. we present data (measures of student performance and of student and faculty opinion) that confirm that our approach is viable, and we now believe that it can be a useful model for others. in this paper, we outline the structure of our integrated curriculum and report on key facets of our experience with it. russell l. shackelford richard j. leblanc graphical analysis of computer log files stephen g. eick michael c. nelson jerry d. schmidt from washington: ethics, schmethics neil munro privacy considerations as we enter the "information highway" era steve guynes using bluej to teach java (poster session) dianne hagan facilitating intracorporate cooperation: a university creates the environment stuart a. varden frank j. losacco from washington: a patchwork of legislation and regulation neil munro multiple ways to incorporate oracle's database and tools into the entire curriculum it is not uncommon for students to view the curriculum as pieces instead of an integrated whole. this can hinder them from applying knowledge learned previously in one course to the course they are currently taking. in addition, there are a number of recurring concepts in computer science that students need to recognize. the concepts associated with the management of large amounts of information form one such area. catherine c. bareiss using large projects in a computer science curriculum (panel session) k. todd stevens joel henry pamela b. lawhead john lewis constance bland mary jane peters using metrics to detect plagiarism (student paper) plagiarism in programming courses creates a need for professors to be able to correctly identify occurrences of plagiarism in order to safeguard the educational process. however, detecting plagiarism is a long and tedious ordeal because of the fine line between allowable peer-to-peer collaboration and plagiarism. in this paper we present a metrics-based tool which aids in the process of detecting plagiarizers by monitoring the similarities between programs. shauna d. stephens utilization of the career anchor/career orientation constructs for management of i/s professionals connie w. crook raymond g. crepeau mark e.
mcmurtrey the captain knows best don gotterbarn problems with and proposals for service courses in computer science j. d. parker g. m. schneider training, ability, and the acceptance of information technology: an empirical study of is personnel and end users r. ryan nelson michael w. kattan paul h. cheney professional education for secondary computer science (abstract only) as states are now beginning to certify secondary school teachers in computer science, it is time to consider how many (if any), and what kind of, professional education courses a prospective teacher should take. the courses under consideration here are primarily the ones known as "methods" courses; that is, how to teach the subject. we have insufficient data to make a clear judgement at this time, but there appears to be a dismal record, especially for smaller schools, for these courses involving the sciences and mathematics. such courses as educational psychology are invaluable, but a methods course taught to computer science students by instructors who neither like nor understand computers or computer science cannot be a happy situation. we already have had this happen to the mathematics and science teachers --- more than one has been told that such subjects as algorithms, algebra, or trigonometry are not proper subjects for discussion. can only subjects be considered for discussion which are outside the realm of science and technology? i do not argue that students should not have a single course in the teaching of computer science, but i do strongly argue (unfortunately, having found too many bad examples of what is taking place) that this course should be taught by the department of computer science. i wonder how many prospective mathematics and science teachers we have lost because of a lack of understanding of the subject matter by the professional educators, or because of their inability to relate to these students. do we wish to perpetuate the same problems for computer science? kathleen pearson social analyses of computing: theoretical perspectives in recent empirical research rob kling a chart representation for delineating computer disciplines (abstract) many colleges and universities in the united states teach some form of computer curriculum. from the small community college to the major university, most offer some type of sequence of computer courses. unfortunately, the computer programs offered by two different schools may be vastly different. several "names" for the computer disciplines have emerged: computer science, computer theory, computer engineering, information sciences, information systems, and data processing, to name the more common ones. a faculty member with a doctorate in computer science might feel uncomfortable working in an information systems department. likewise, a student wishing to study in the area of information systems would feel ill-at-ease taking courses in a computer engineering department. many job-seeking faculty and school-seeking students ask of a department: "what kind of courses do you offer?" "where does your emphasis lie?" the answers to questions like these are obtained after lengthy discussions with school advisors, faculty interviewing, or digging through college catalogs. what would be helpful is a system which clearly illustrates the courses offered by a department. if this were in chart form, where the courses progressed from computer theory to data processing, one could see at a glance the emphasis of a particular school's discipline.
by comparing a department's chart with a faculty member's chart, one can easily and quickly see background differences and similarities, saving time and energy. curt m. white information for authors diane crawford addressing the y2k problem in the computing classroom charles p. howerton mary ann robbert carl e. bredlau peter j. knoke reliability and responsibility severo m. ornstein lucy a. suchman risks to the public in computers and related systems peter g. neumann integrating interactive computer-based learning experiences into established curricula: a case study educators who wish to integrate interactive computer-based learning experiences into established courses must contend not only with the difficulty of creating quality digital content but with the often equally difficult challenge of reconfiguring their courses to use such materials. we describe our experiences with the exploratories project at brown university [8] and the use of exploratories in an introductory computer graphics programming course [4]. we offer examples of both success and failure, with the goal of helping other educators avoid both painful mistakes and lost time spent coping with unforeseen logistical and pedagogical concerns. among the lessons we learned: planning can't begin too early for the integration of such materials into an established curriculum, and all possible methods of integration should be considered before committing to any specific approach. anne morgan spalter rosemary michelle simpson viewpoint: the acm declaration in felten v. riaa barbara simons editorial carl cargill inside risks: risks of y2k peter g. neumann declan mccullagh a proposed curriculum in information science (abstract) michael c. mulder gordon davis john gorgone david feinstein doris k. lidtke the university in the inner city the traditional university often has a world-class reputation for teaching and research. equally often it is centrally sited in established cities through which the winds of economic change have blown viciously, leaving ivory towers surrounded by neighbourhoods of underprivileged communities. within these communities, junior and high schools do their best to battle against under-motivation and lack of expectation in the literal shadow of elaborately resourced institutions populated by secure and highly educated staff, teaching students from more affluent backgrounds who look forward on graduation to similar levels of security in employment. for modest investment, we demonstrate that it is possible to bridge the gap between these two communities. an exercise which commenced as a tightly funded service to a small number of inner-city schools has grown to serve a larger number. of more interest, it has demonstrated the potential to be of direct benefit to a range of students of the university in a way that could not otherwise be provided. roger boyle ann roberts narrative vs. logical reasoning in computer ethics john m. artz pedagogical implications of students' attitudes regarding computers a survey was completed by 260 undergraduate students in freshman-level computer-related courses at a university in the midwest. the survey included questions about students' attitudes regarding automobiles, home entertainment systems, and computers. there was evidence of more frustration with computers than with automobiles or home entertainment systems.
students apparently found computers more difficult to operate, but were more likely to think themselves responsible for problems with computers than for problems with home entertainment systems or automobiles. the major unanticipated conclusion is that perhaps, as persons teaching computer literacy, we should be more concerned about how computer frustration affects students' self-perception than about the possibility that frustration will lead to a dislike of computers. by emphasizing the connection between employment and computer literacy, computer literacy teachers may be unintentionally contributing to low self-esteem, anxiety and other psychological problems. suggestions for avoiding these outcomes are included. dwight d. strong bruce j. neubauer representations of work lucy suchman upgrading cs1: an alternative to the proposed cocs survey course terrence w. pratt should undergraduates explore internals of workstation operating systems charles m. shub inside risks: certifying professionals peter g. neumann viewpoint jean-claude laprie bev litlewood discovering a treasure in part-time consultants to keep an instructional program afloat diane jung-gribble carolynne lambert transmission line experiments for computer science students william coey expect the unexpected john gehl cyberspace across the sahara: computing in north africa spanning 7.2 million square kilometers from the atlantic ocean to the red sea, and encompassing the great saharan desert and nile river valley, north africa embraces mauritania, western sahara, morocco, algeria, tunisia, libya, and egypt. charting the development of information technology (it) is as challenging as traversing the souks, the labyrinthine ancient marketplaces. a. k. danowitz y. nassef s. e. goodman how to present a paper in theoretical computer science: a speaker's guide for students ian parberry the laboratory component of a computer organization course patricia wenner distributed algorithms in the discrete mathematics course (poster session) martha j. kosa toward the european information society ari-veikko anttiroiko the it work force in china wang shan software piracy teresa tenisci wendy alexander is m.i.t. giving away the store? robert c. heterick identifying topics for instructional improvement through on-line tracking of programming assignments this paper stresses the need for identifying specific learning objectives for student programming projects and describes the use of an on-line project submission system for assessment of those objectives. specifically, the emphasis of the article is on on-line tracking of student progress in order to identify topics that need particular instructional attention. the examples and data collected are drawn from a junior-level operating system course. dorota m. huizinga about this issue… adele j. goldberg forum diane crawford the grace hopper celebration of women in computing: one student's experience sara m. carlstead educating educators: lessons in adding ada to a small school curriculum susan luks suzanne pawlan levy modeling human-computer decision making with covariance structure analysis m. d. coovert j. lalomia e-commerce and web design dan masterson why are fewer females obtaining bachelor's degrees in computer science? clark thomborson manpower profiling for information technology: a case study h. m. hosny m. s. akabawy t. g. gough privacy in the telecommunications age carol wolinsky james sylvester beyond outlaws, hackers and pirates rob kling hedonic price analysis of workstation attributes h.
raghav rao brian d. lynch crafting an hr strategy to meet the need for it workers ritu agarwal thomas w. ferratt distance education: an oxymoron? carol twigg word bound, the power of seeing thomas g. west actively learning computer ethics jennifer a. polack-wahl accreditation and student assessment in distance education: why we all need to pay attention lisa c. kaczmarczyk systems analysis with attitude! have you ever been overruled by your students in critical decisions relating to their learning? have you ever attended your own classes as a guest consultant with a pre-defined scope of input? have you ever suffered from the fact that each student is different, and you have a standard program for all? have you ever empowered your students and watched them exceed your expectations? the only important question is whether you have the courage to throw out your safety nets and do it. for those who are looking to be involved in an exciting, challenging, stimulating and rewarding teaching exercise, systems analysis with attitude is definitely it. interested? we were too when we attempted this experiment, which we do recommend to colleagues in this always-evolving analysis discipline. maurice abi-raad the changing face of software support: the impact of the internet on customer support and support personnel the site used to collect the data for this study (beta inc.) is a u.s.-based, international computing company with over 3000 support personnel employed world-wide. during the 12-month period from august 1999 to july 2000, the company moved the customer support operation from a telephone-based system to a mix of telephone and internet-based support. the data presented in this paper was collected between january and july 2000 and shows the impact of this change on customer support and support personnel. prior to the implementation of the internet-based inet customer support system, the quantity of customer support, as a measure of operational performance for beta inc., was based on telephone support request (sr) volume. between january and july 2000, there was a 52% decrease in telephone srs across the company. this followed a three-year trend of increasing telephone srs. this study shows that the decrease in telephone srs has been made up through customer self-service using the inet customer support system. as a result of this trend toward customer self-service using the internet, 15% of the beta inc. customer support personnel were rifed in june 2000. the change in the customer support service model at beta inc. impacts the required number of support personnel, the level of expertise required of customer support personnel, and the type of support which customers may expect in the future. several conclusions are presented: * the effort to move beta inc. customers to a self-service mode was very successful. * the average resolution time of internet-based srs exceeded the average resolution time of telephone srs, suggesting that inet provides customers with solutions to simpler problems, while the more difficult support problems are submitted to support personnel. * a measure of the quality of customer support self-service is needed. * as more support questions are answered through customer searches of the inet system, the number of support personnel required to answer support questions will continue to decrease. at the same time, since only the more difficult support questions are left unanswered by the inet system, the level of support personnel expertise required to answer customer questions will increase.
john m. borton factors of success for end-user computing a two-phase study concerned with the factors of success associated with the development of computer applications by end users was conducted in 10 large organizations. during the first phase, in-depth interviews were used to refine a preliminary model of the factors of success for user development of computer applications (uda). in the second phase, a questionnaire was administered to 272 end users experienced in developing applications. statistical tests of the relationships in the model indicated that all but one of the derived hypotheses were substantiated. the result of this study is a field-verified model of the factors of success of uda that provides a basis for implementation of uda practices and policies in organizations, as well as for further research in end-user computing. suzanne rivard sid l. huff the computer isn't the medium, it's the message roger c. schank planning for academic departmental systems polley ann mcclure current issues in graduate student research ann e. kelley sobel mario guimaraes computer ethics - part of computer science! carol j. orwant the new generation of computer literacy a tremendous mismatch is developing between two of the most critical components of any computer literacy course: the textbooks and the students. we are encountering a "new generation" of students (literally as well as figuratively!) who are much better acquainted with computer usage than their earlier counterparts. yet many textbooks with increasing emphasis in those same computer tools continue to appear. there are signs of a coming change in that a few authors and publishers apparently are becoming aware of the need for innovations in texts for non-scientists. these textbooks open the door for a new orientation to principles in the teaching of computer literacy. j. paul myers regulation of electronic funds transfer: impact and legal issues this paper investigates the implications and impact of current legislation on the future of the electronic funds transfer systems (eft). the relevant statutes are introduced and analyzed. problem areas are discussed together with examples of court rulings. the investigation reveals that the regulations do not provide enough safeguards for the consumer and do not clear up the ambiguities from a combination of competing laws, regulations, and conflicting jurisdictions. legislators, on both the national and state level, and federal and state governments need to cooperate more closely to produce uniform legislation that specifically addresses the current problems in an eft environment. courts need to realize the legislature's intent and the benefits that can be gained before ruling on the current issues. ahmed s. zaki truth in advertising in present and future generation computing the capabilities of advanced computing technologies are best understood by computer scientists and substantially less so by non-specialists. there is a strong bias toward oversimplifying the capabilities and limitations of new technologies when they are presented to a larger public: through presentations to policymakers, advertisements for specific products and through media exposure of enthusiasts or demoters of new technologies. overall, the capabilities of new technologies seem often exaggerated, even when the artifacts are usable and useful. in the case of new technologies, reliance on market forces is problematic since the public does not have much information on which to wisely evaluate their possible choices. 
does the computer science community have any role to play in helping assure that policymakers and the public receive relatively accurate representations about the character, values, problems, and costs of new classes of computing technologies? rob kling acm forum robert l. ashenhurst computer abuse and security: update on an empirical pilot study detmar straub industry perceptions of the knowledge, skills, and abilities needed by computer programmers in a day and age where computer information systems permeate virtually every facet of society, organizations find it difficult to hire adequate numbers of computer personnel. judging by the number of "quick-fix" organizations that offer training in "weeks rather than years" and the number of individuals that each claims to train per year, the shortage would seem to come from a lack of adequate skills rather than a lack of applicants. this paper reports the findings of a study designed to identify the knowledge, skills, and abilities (ksas) needed by one group of information technology (it) personnel --- the computer programmer. janet l. bailey greg stefaniak sixth annual ucla survey of business school computer usage providing the most comprehensive picture to date of the business school computing, communication, and information environment, this year's survey extends the focus of the fourth survey (1987) and raises the question: how to most effectively manage these resources. jason l. frand julia a. britt netnews: we were all there dennis fowler in search of academic integrity rebecca mercuri a domain centered curriculum: an alternative approach to computing education this paper presents a new approach to computer science education by proposing a model curriculum that presents computer science fundamentals and software engineering concepts in the context of an application domain. this domain-centered model is discussed in terms of its philosophy and structure, and emphasizes curriculum features that enhance the ability of a graduate to be part of a team that develops software in the application domain. in particular, the curriculum model proposes integration of software engineering education with the application domain. the undergraduate computer science curriculum at embry-riddle aeronautical university is used to illustrate the principal features of the model and to advance arguments about the model's viability. iraj hirmanpour thomas b. hilburn andrew kornecki from washington: the ongoing tug-of-war for e-commerce control neil munro edp auditors' role in evaluating computerized accounting information systems efficiency by queuing, simulation and statistical models this study is a summary of the various relevant aspects related to evaluating system efficiency in computerized accounting information systems. during recent years a vast body of knowledge central to the problem of computer performance evaluation has accumulated. unfortunately, however, the work on the subject demonstrates enormous disparity. on the one hand, one finds numerous reports and documents that present masses of empirical data obtained from measurement or simulation. on the other hand, theoretical papers are, more often than not, couched in advanced mathematics, not easily accessible to engineers and/or electronic data processing (edp) auditors. among a number of mathematical disciplines related to system modeling, "queuing model," "simulation method," and "statistical analysis" are the most important quantitative techniques.
this paper provides a cohesive introduction to the modeling and analysis techniques for evaluating system efficiency. these techniques will certainly be applicable to modeling activities of complex systems in general, and not merely computer systems. sara f. rushinek avi rushinek supporting a diverse group of casual tutors and demonstrators: one size doesn't fit all linda stern computer crime legislation update thomas c. richards stand up for human resources mark a. huselid improving on-line assessment: an investigation of existing marking methodologies we are in the process of developing an on-line marking system for use in our large-scale cs1 and cs2 courses. to better accommodate the needs of our numerous raters, we investigated their current methodologies in marking students' work; this paper presents our findings from recording "think out loud" marking sessions and surveys. a prototype for an on-line assessment software tool is described; this tool allows markers to view students' work at various degrees of detail ranging from complete, low-level to "meta-level." we believe such a system is beneficial for improving the marking process in large-scale and distance classes. jon a. preston russell shackelford an "off by one" error don gotterbarn modules to introduce assertions and loop invariants informally within cs1: experiences and observations henry m. walker self-paced training expanding educational opportunities peter l. peterson the sigchi educational resource development group andrew sears julie a. jacko marilyn mantei it skills in the context of bigco. steve sawyer k. r. eschenfelder andrew diekema charles r. mcclure managing performance requirements for information systems brian a. nixon taking stock for university patents richard c. hsu the use of personal computers in teaching english composition it is clear that computers are essential to a course in programming, are easily used for practice in arithmetic, and are useful for simulations in an economics or sociology course, but it is not clear how, if at all, they can be used to teach the writing of english. to be sure, you could use a computer to administer a spelling drill, or a multiple choice test in punctuation---but none of this is composition. while most proposed applications in the teaching of english run to such computer assisted instruction, my experiences over the last four years have shown me that a personal computer can be a powerful aid in learning to write. jef raskin associative duties, institutional change, and agency: the challenge of the global information society robyn brothers experience hosting a high-school level programming contest george struble the case for desktops samuel l. grier larry w. bryant is computer technology taught upside down? there has been a continuing fragmentation of traditional computer science into other disciplines such as multimedia, e-commerce, software engineering etc. in this context the standard computer technology curriculum designed for computer science students is in danger of becoming perceived as increasingly irrelevant --- both by students and employers. the authors review expectations of both students and employers, as determined by market analysis, and present the results of implementing one possible solution to providing an introductory computer technology curriculum suitable not only for students from other disciplines but also as a basis for computer science majors. s. p. maj d. veal p. 
charlesworth sharing standards: it skills standards roy rada a paged-operating-system project this paper describes a student project which is a major part of a senior level operating systems course at the federal institute of technology in lausanne. the project consists of conceiving and implementing an entire operating system, where user jobs benefit from a simulated paged virtual memory on a dec-lsi/11 based microprocessor. students program in portal, a modular high level language similar to modula. the positive reactions we have obtained from our students center on satisfaction in having participated in defining specifications and having implemented an entire system themselves. andre schiper gerard dalang jorge eggli imad mattini roland simon a modular introductory computer science course the structure of a modular introductory course in computer science is described. two types of modules are offered, lecture and language, over three time periods. students enrolled for the course complete three lecture and three language modules. each student chooses modules which match his or her interests and background. in this way the course provides a useful alternative for all students on campus. herbert l. dershem changing computer science curricula (panel): planning for the future barbara boucher owens shirley booth marian petre anders berglund the demographics of candidates for faculty positions in computer science anthony ralston using a low-cost communications tool in data communications courses (tutorial session) larry hughes strategies found to be effective in the control of computer crime in the forbes 500 corporations joseph o'donoghue a conversation and commentary on from millwrights to shipwrights through their mutual dialog, the authors answer and explore seven questions about the audience, goals, assumptions, impact, and educational value of r. john brockmann's 1998 history of technical communication in the united states. jack jobst changing attitudes for future growth everyone's attitude about computers is changing rapidly. it has to. computers are an everyday part of life that cannot be avoided. telephone bills, electric companies, credit card companies, hospitals, and even grocery store checkouts are now computerized in some way. university students at every level of education are also experiencing computer attitude changes. almost every department in a university system now requires some computer interaction for their students. several questions come to mind when planning how to best introduce students to the computer. are formal computer science classes too technical for non-computer science majors? will specialized short courses be enough? where do students go for personal help outside the classroom? are computer professionals able to assist those with little or no computer knowledge? are departments asking too much of their students? many universities have promoted satellite computer labs staffed with professionals to meet the needs of the students and faculty in a particular department or college. these satellite labs should be able to provide a user-friendly environment and some one-to-one attention to the problem at hand. this paper will discuss some of the problems we, as a satellite computer lab, have encountered during our existence. since we deal primarily with graduate students and faculty in public health working with statistical packages, our problems are not as numerous as they could be.
i plan to present just some of the problems we face and solutions we have tried for changing people's attitudes about the computer. diana l. williams accreditation and the small, private college (panel session): problems and opportunities david mader e. robert anderson robert cupper james leone ralph meeker certification for the computer professional - individual preparation strategies this paper summarizes some historical and philosophical perspectives of professional certification and particularly the professional certificate in data processing (cdp). methods whereby the computer practitioner may become prepared to sit for the cdp examinations are discussed. james s. ketchel a program for reskilling is professionals (abstract) donald r. chand images and reversals: following the gifts - visual talent and trouble with words thomas g. west computing programs in small colleges this summary report of the acm small college task force outlines resources, courses, and problems for small colleges developing degree programs in computing. john beidler richard h. austing lillian n. cassel the xxii self-assessment: the ethics of computing eric a. weiss not-for-profits in the democratic polity john a. taylor eleanor burt status of information systems accreditation john t. gorgone doris k. lidtke david feinstein final thoughts dorothy e. denning learning from users (panel presentation): testing web design the panel will actively model methods of collecting input from users about the effectiveness of web pages. during the fall of 2000, owens library, northwest missouri state university, conducted a web usability study of six of the library's high-use web pages. these pages were created by librarians without the benefit of formal and systematic user input regarding the design and intuitiveness of the library site's architecture. the study consisted of one-on-one observations [1], focus groups [2], and user dialogues [3]. by consulting a representative cross-section of student, staff, and faculty populations, librarians gained a valuable assessment of how the library's web pages are perceived and used. the study was funded by culture of quality monies awarded by northwest missouri state university. panel members and volunteers from the audience will provide mini-demonstrations of one-on-one observations where the "user" will be asked to complete tasks using a specified web page. during the presentation, volunteers will also be recruited for participation in a mini focus group, followed by a demonstration of a user dialogue. in addition to modeling methods of collecting input, the librarians will share the process they developed to undertake the study. a timetable was developed to ensure that tasks were assigned to the appropriate persons; questions were developed for each of the pages to be tested; participants were recruited; and the data was analyzed. the panel members will also share the method used to assess the qualitative data gathered in this type of study [4]. the session will end with concrete examples of what librarians learned during the fall 2000 study and how the data collected was used to improve the intuitive functionality and appeal of owens library web pages. carolyn johnson joyce meldrem wg 13.3 1993 - 1995 activity report. lisbon, oct 1995 julio gonzalez abascal using scientific experiments in early computer science laboratories computer science is an experimental science, in the same sense that biology or physics are experimental sciences.
nonetheless, lab exercises for cs1 and cs2 courses are almost never formal "experiments" as the term would be understood in any other science. this paper describes our experiences using formal experiments in cs1 and 2 laboratories. such exercises are extremely valuable, in part because they help students relate abstract concepts to concrete programs, but more importantly because they lead students into new areas of computing, and even new forms of learning. doug baldwin johannes a. g. m. koomen the analysis and comparison of scheduling controls in concurrent languages through classification tzilla elrad daniel e. nohl computer systems "conference" for teaching communication skills cindy norris james wilkes adapting courses to meet faculty, staff, and student needs gail s. peters frances m. blake educational technology standards roy rada james r. schoening modern software development concepts: a new philosophy for cs2 in this paper we propose a significantly different approach to cs2, the second course in the undergraduate computer science curriculum. rather than a central focus on the design and implementation of data structures, we propose that the central focus be on modern software development concepts such as object-oriented design, exceptions, guis, graphics, collection classes, threads, and networking. we believe that these are the important concepts that students should be exposed to and should use in the second computer science course. paul t. tymann g. michael schneider about this issue… adele goldberg a mathematically focused curriculum for computer science a curriculum that is flexible enough to suit all students studying computer science, and that reintegrates the theoretical aspects of the field along with more practical or vocational aspects, is proposed. alfs berztiss a review and applications of information ecologies this review of information ecologies notes how well nardi and o'day's biological approach fits with articulation theory (in rhetoric), since both encourage spelling out alternatives and consequences so as to avoid oversimplified technology choices. a closer look at the relevance of their case studies to promoting suitable technological improvements in k-12 education, however, finds concerns (about adequate social rewards and sustainability) along with promising parallels. dickie selfe dawn hayden "free" computer too expensive (this paper has been accepted for publication in the proceedings, but the photo-ready form was not delivered in time. copies of the paper should be available upon request at the presentation.) ronald s. lemos a new role for user services terri paul debbie bowman tammy day tahereh jafari ralph stranahan computing the profession peter j. denning exploring the world of software maintenance: what is software maintenance? girish parikh self-assessment as a precursor to professional development this paper discusses the association for computing machinery (acm) self-assessment procedures and their use for professional development. a background and history of the procedures is presented, including a statement of intention and philosophy along with a list of currently available procedures. the use of the self-assessment procedures by individuals is described, along with ideas for use in acm chapter professional development programs.
three group uses of the procedures are presented: the distribution of the procedures in packets (particularly to new members); the formation of self-assessment workshops; and, the use of the procedures in a cdp review course. the authors advocate the use of self-assessment as a precursor to any professional development program. james s. ketchel james r. douglass has our curriculum become math-phobic? (an american perspective) we are concerned about a view in undergraduate computer science education, especially in the early courses, that it's okay to be math-phobic and still prepare oneself to become a computer scientist. our view is the contrary: that any serious study of computer science requires students to achieve mathematical maturity (especially in discrete mathematics) early in their undergraduate studies, thus becoming well-prepared to integrate mathematical ideas, notations, and methodologies throughout their study of computer science. a major curricular implication of this theme is that the prerequisite expectations and conceptual level of the first discrete mathematics course should be the same as it is for the first calculus course --- secondary school pre-calculus and trigonometry. ultimately, calculus, linear algebra, and statistics are also essential for computer science majors, but none should occur earlier than discrete mathematics. this paper explains our concerns and outlines our response as a series of examples and recommendations for future action. charles kelemen allen tucker peter henderson owen astrachan kim bruce the national forum on science and technology goals: building a democratic, post-cold war science and technology policy gary chapman the impact of end-user computing on information systems development mary sumner robert klepper computers, science, and the microsoft case barry fagin what does compatible mean? carol carstens towards systems engineering - a personal view of progress d. e. talbot faculty computer literacy at the university of wisconsin-milwaukee the csd faculty loan program focuses on the computer-phobic, or at least computer-anxious, faculty member, rather than those faculty who have already come to grips with computer technology. we started the program in june, 1985. objectives: to provide faculty with sufficient computing experience to enable them to evaluate the use of computers in their instructional and research activities. to expose participants to several computing tools that could enhance their personal productivity. neil a. trilling group 5 (working group): the on-line computer science teaching centre scott grissom deborah knox matrix - concept animation and algorithm simulation system ari korhonen lauri malmi riku saikkonen privacy and the varieties of moral wrong-doing in an information age m. j. van den hoven increasing the enrollment of women in computer science shari lawrence pfleeger pat teller sheila e. castaneda manda wilson rowan lindley computer science curriculum for high school students this paper describes a current project to design an introductory computer science course for high school students. problems concerning the choice of hardware, the selection of software, programming language(s) and the overall design of the curriculum are discussed. in addition, some previous related research is summarized and a plan for future activities is outlined. r. m. aiken c. e. hughes j. m. moshell public policy committee activities bob ellis preparing precollege teachers for the computer age jean b. rogers david g. moursund gerald l. 
engel message from the chair john lawson event-driven programming is simple enough for cs1 we have recently designed a cs 1 course that integrates event-driven programming from the very start. our experience teaching this course runs counter to the prevailing sense that these techniques would add complexity to the content of cs 1. instead, we found that they were simple to present and that they also simplified the presentation of other material in the course. in this paper, we explain the approach we used to introduce event-driven methods and discuss the factors underlying our success. kim b. bruce andrea p. danyluk thomas p. murtagh acm awards banquet 2000, vicariously lynellen d. s. perry implementing an e-commerce curriculum in a cis program in this paper we describe the development of an e-commerce curriculum in the cis program at southern oregon university. the e-commerce curriculum is incorporated through a set of two courses. the first is a client-server programming course that introduces the structure of the web along with client- and server-side scripting techniques. the second is a corporate web development course, which builds on the earlier course by introducing elements of e-commerce technology such as security and encryption, electronic payment systems, agents and xml. rahul tikekar daniel wilson implementing a university level computer education course for preservice teachers dede heidt james poirot designer - a logic diagram design tool _designer_ is a software tool that helps the program developer lay out structure diagrams. structure diagramming is a structured design technique commonly taught in new zealand polytechnics. diagrams can be written for overall logic flow or detailed design. unlike a flow chart, a structure diagram can always be directly translated to the control structures of a structured programming language. the basic diagram symbols are process boxes, decisions, loops, input-output boxes and procedure calls. chris power on-line finals for cs1 and cs2 m. dee medley putting it together: e-commerce technology after the dotcom meltdown win treese continuing education activities of the acm continuing education is a major concern for most professional societies. this is especially true for ones like acm, whose members are working at the leading edge of technology - both in research and within numerous application areas. acm, through its education board, sponsors several different activities to assist members in their quest to keep abreast of the latest technical developments. this panel has several purposes. on the one hand it will serve as a means for disseminating more widely information on our current projects. in addition it will allow us to receive feedback from the membership with respect to how they perceive these activities, what changes they might like to see, and what new projects we should be considering. among the topics that will be discussed are self-assessment procedures, professional development seminars, tutorial weeks, accreditation efforts, and institute for certification of computer professionals (iccp) activities. after these activities are briefly described, the remainder of the session will be devoted to answering questions and soliciting ideas from the audience. robert m. aiken neal s. coulter julia e. hodges joyce c. little helen c. takacs a. joe turner developing a comprehensive need-based curriculum for non-computer science majors (abstract) lorilee m. sadler why license when you can rent?
risks and rewards of the application service provider model the confluence of several developments in technology and trends in the business environment has led to the emergence of a new outsourcing model, the application service provider (asp). in this paper we draw upon outsourcing literature to understand the factors that are likely to influence the adoption of the asp model. an initial research model is developed based on factors identified from economic and social perspectives used in outsourcing literature. ravi patnayakuni nainika seth computing curriculum 2001: getting down to specifics in the fall of 1998, the acm education board and the educational activities board of the ieee computer society appointed representatives to a joint task force to prepare curriculum 2001, the next installment in a series of reports on the undergraduate computer science curriculum that began in 1968 and was then updated in 1978 and 1991. in this talk, eric roberts --- one of the cochairs of the curriculum 2001 effort --- will review the state of the project and lead a general discussion of the recommendations made by the "steelman" version of the report, which will be released at the end of august. eric roberts written on the body: biometrics and identity irma van der ploeg a laboratory-based microprocessor course todd feil lee larson the proposed iccp certification in systems analysis (csa) systems analysts occupy a unique position within the structure of information processing: they deal not only with the technology and with the creative dimensions of systems development, but also with the resolution of conflict between the objectives of product-oriented users and process-oriented programmers. while this conflict is frequently not open, a form of adversary relationship typically develops between users and software specialists, and the systems analyst becomes the focal point in reducing this alienation. contemporary methodologies which aid in the systems development process have helped to reduce the friction, but the systems analyst still must frequently assume the role of arbitrator and diplomat. coupled with the above role is the emerging position of the systems analyst as a quality assurance specialist. as organizations become increasingly dependent upon computer-based information systems, some of which are life-critical, systems analysts must assume greater responsibility for the results --- good or bad --- of the applications they develop and install. accountability of this type has always been one distinguishing characteristic of the true professional. although information processing is itself an evolving discipline, some computing professionals have responsibilities and requisite knowledge more clearly defined than those of the systems analyst. the bodies of knowledge relevant to the information systems manager and the senior programmer are well represented in the content of the two examinations currently offered by the institute for certification of computer professionals (iccp): the certificate in data processing (cdp) and the certificate in computer programming (ccp), respectively. the systems analyst remains the primary computer professional not covered by a certification program. fred g. harold notes on grading henry m. walker cuba, communism, and computing g. m. mesher r. o. briggs s. e. goodman j. m. snyder l. i. press didactics too, not only technology v. dagdilelis m.
satratzemi use of a soundcard in teaching audio frequency and analog modem communications sound cards have become standard features of personal computers in the home, office and classroom. this paper demonstrates the usefulness of these inexpensive devices in the teaching of some of the basic and not so basic concepts of communications. these devices can be very effective in the explanation of amplitude, frequency and frequency multiplexed circuits, as well as modem handshake standards. martin h. levin information requirements determination: obstacles within, among and between participants information requirements determination (ird) is presented as a process with obstacles within individual "users", among "users", and between "users" and systems developers. a distinction is made between ird and information requirements analysis (ira), two labels which are often used interchangeably. the triple-obstacle view of ird/ira problems and the distinction between ird and ira is suggested as an approach for research into the requirements determination process. a user-oriented ird tool designed to attack the "within" obstacles is identified as the basis for the authors' current research into how to improve the user's lot in the development of information systems. john r valusek dennis g fryback structured chat m. o. thirunarayanan aixa perez-prado site-based management: saving our schools david g. moursund computer literacy objectives for junior high school students while ten years ago computer literacy was an unknown phrase, it now has an accepted place in the vocabulary of both educators and computer scientists. the concept of computer literacy has grown rapidly and many related issues remain unresolved. for instance: what do various types of people need to know about computers? how should they be instructed in computer awareness and computer based problem solving? how should computer education be offered to students and others? the minnesota educational computing consortium (mecc) recently received funding from the national science foundation to design, develop, test and evaluate an integrated set of twenty-five student learning modules for science, mathematics and social studies courses in secondary schools. the central theme of these learning packages will be the computer---how it works, how it is used, its impact on the individual and society, and the implications of its use for the future. the project presumes that abilities, skills, attitudes, and cognitive learning are all aspects of a person's computer literacy; the learning packages will be based on a set of learning objectives that reflect these areas. the materials will be written for students entering junior high school, however, they could be used at other secondary school levels as well. the initial period of the project consists of planning and the specification of objectives. computing center management agelia valleman larry haffner michael danziger training: a view from the new kid on the block janet sedgley hci students and internships katherine isbister the use of temporary managers and employees in a local government information systems services department stuart galup carol saunders robert cerveny origin of the online species: ethical perspectives on copyright and the web the online environment challenges information professionals to stretch the historical boundaries of the understanding of fair use.the authors create and maintain an academic library web site of more than 500 pages. 
recently we have encountered copies of our library instructional materials downloaded from our web site and uploaded on another academic library's server without express permission to use or modify those materials. grappling with these issues regarding internet plagiarism and copyright piqued our interest in this gray area of ethical practice. we continue to struggle with questions such as those posed in the following scenarios: is the design and format of a web page (including tables, color schemes, forms, and organizational style) a form of expression that is protected under copyright, much like essays, drawings, dances, and speeches? does copying the text of an online tutorial that discusses ideas and concepts in the public domain constitute a violation of copyright? of academic integrity? connie ury frank baudino patricia mcfarland nifty assignments panel nick parlante mike clancy stuart reges julie zelenski owen astrachan a sigcaph profile karen anderson a non-linear, criterion-referenced grading scheme for a computer literacy course bill rogers philip treweek sally jo cunningham implementation of a university-wide computer-augmented curriculum this paper discusses the implementation of a project to provide microcomputing resources to all students and faculty in an effort to integrate computer-assisted learning with traditional teaching/learning methods across the curriculum of a comprehensive university. also discussed is the structure and staffing of the project, initial hardware and software selection and the project's impact on a computer science department. hugh garraway workshop discussion edward j. coyne charles e. youman intellectual technology for the new generation john gage the myths and realities of information technology insourcing rudy hirschheim mary lacity reader comments john gehl keeping a consulting system on course all too often, user services staff view the absence of complaints as a primary source of evaluation and feedback of services. as a frontline emergency service, the consulting effort is one of the most visible and critical elements of effective user support. it is difficult to collect data about the effectiveness and quality of frontline consulting services, but many techniques provide enlightening information for user services staff. at brown university collecting data and feedback about the consulting process is a priority for academic and user services. a consulting task group was formed to collect feedback from a variety of sources and to prepare a plan and implement a new structure (where necessary) for our computer consulting system. we quickly identified areas where immediate improvement resulted and committed to the longer-term project of collecting data. by keeping data from "support log forms" at the point of consulting contact we have gained the following insights: the profiles of the computing user community --- who, when, and how questions are being asked; what the typical and atypical questions are and how frequently they are asked; the percentage of questions referred to expert sources; and the management issues of allocation of resources, revision of training curriculum, and restructuring of consulting to meet brown's needs. computer consulting takes on many forms and can be a complex and time-consuming task. knowing the key elements of the consulting environment, such as the volume of questions, frequency of questions, types of questions, and referral path of questions, can have a positive impact on the management and effectiveness of services.
but data collection through log forms is not the only technique we have used for collecting information on the health of the consulting system at brown. additional techniques of gathering information about the consulting effort have also proved to be important, such as conducting "key user information sessions". we have also emphasized taking the time to gather street smarts and conduct more housecalls, an aspect of consultant savvy that tends to be neglected because of the frantic pace of consulting. the results of brown's first phase of support log data and key user information sessions will be reviewed in this paper. the presentation will include statistics, graphs, and anecdotal reviews of the data collection effort. handouts will be available. techniques for gathering information will be offered, as well as how such information can be used to improve the quality of consulting services. stephen c. andrade kathleen s. brown asian women in it education, an australian examination anne greenhill liisa von hellens sue nielsen rosemary pringle the proactive security toolkit and applications existing security mechanisms focus on prevention of penetrations, detection of a penetration, and (manual) recovery tools. indeed, attackers focus their penetration efforts on breaking into critical modules, and on avoiding detection of the attack. as a result, security tools and procedures may cause the attackers to lose control over a specific module (computer, account), since the attacker would rather lose control than risk detection of the attack. while controlling the module, the attacker may learn critical secret information or modify the module in ways that make it much easier for the attacker to regain control over that module later. recent results in cryptography give some hope of improving this situation; they show that many fundamental security tasks can be achieved with proactive security. proactive security does not assume that any module is completely secure against penetration; instead, we assume that at any given time period (day, week, ...), a sufficient number of the modules in the system are secure (not penetrated). the results obtained so far include some of the most important cryptographic primitives such as signatures, secret sharing, and secure communication. however, there was no usable implementation, and several critical issues (for actual use) were not addressed in this work. we report on a practical toolkit implementing the key proactive security mechanisms. the toolkit provides secure interfaces to make it easy for applications to recover from penetrations. the toolkit also addresses other critical implementation issues, such as the initialization of the proactive secure system. we describe the toolkit and discuss some of the potential applications. some applications require minimal enhancements to existing implementations, e.g., secure logging (especially for intrusion detection), secure end-to-end communication, and timestamping. other applications require more significant enhancements, mainly distribution over multiple servers; examples are a certification authority, key recovery, and a secure file system or archive. boaz barak amir herzberg dalit naor eldad shai supporting and evaluating team dynamics in group projects judy brown gillian dobbie recommended curriculum for cs1, 1984 elliot b. koffman philip l. miller caroline e. wardle algorithm design by successive transformation algorithms courses are typically organized either by application area or by design technique.
each of these organizations has its strength, but neither effectively reflects the fact that sophisticated algorithms are not designed in a single pass. this paper describes and gives an example of the strategy of successive transformation, in which a sequence of algorithms is generated to solve a single problem. design of algorithms by successive transformation requires more time per problem, since each problem requires multiple phases of analysis. to the extent that this method is used, the breadth of the course in terms of areas and algorithms is reduced. however, we assert that at least a few such experiences are indispensable, because such painstaking analysis is representative of algorithm design as it really is, and because successive transformation is an excellent way to show students the power of theory to improve applications. norman neff hacktivism and other net crimes dorothy e. denning interpreter based assignments for a standard programming languages course this tutorial will demonstrate how to use mule (multiple language environment) based projects in a programming languages design and implementation course. mule is a software tool (consisting primarily of four languages from different paradigms) developed to support combining the comparative and interpretive approaches to teaching the programming languages course. mule projects vary from small labs to programming a component within a larger existing program to programming in the large. john barr laurie smith king educating computer scientists: linking the social and the technical batya friedman peter h. kahn cyberlibertarian myths and the prospects for community langdon winner the end of career john gray highly structured internship and cooperative education program in computer science william l. ziegler the impact and issues of the fifth generation: social and organization consequences of new generation technology the social and organizational consequences of emerging from the use of computing technology are poorly understood. forecasting the impacts of new computing technology is similarly not well understood. however, it is becoming increasingly clear that a growing array of computing arrangements are emerging that reinforce or stratify existing social arrangements. whether these new arrangements are more or less socially desirable depends upon how one interprets their meaning and value. thus, the purpose of this panel is to bring together leading scholars currently examining the broader consequences of new computing arrangements to discuss what might lie ahead. most computing use centers around the work routines for a growing number of people. one form of consequences emerging from new modes of computing use can be seen in terms of the shift of resources (social, material, and otherwise) from one arrangement to another. when people value and seek to control local resource arrangements, these resource shifts represent social costs and benefits. therefore, examining the consequences of different computing arrangements in a variety of work settings is a common research strategy for the participating panelists. walt scacchi elihu m. gerson rob kling langdon winner legally speaking: should program algorithms be patented in the legally speaking column last may [6], we reported on a survey conducted at last year's acm-sponsored conference on computer-human interaction in austin, tex. 
among the issues about which the survey inquired was whether the respondents thought patent protection should be available for various aspects of computer programs. the 667 respondents overwhelmingly supported copyright protection for source and object code, although they strongly opposed copyright or patent protection for "look and feel" and most other aspects of programs. algorithms were the only aspect of programs for which there was more than a small minority of support for patent protection. nevertheless, more than half of the respondents opposed either copyright or patent protection for algorithms. however, nearly 40 percent of the respondents regarded algorithms as appropriately protected by patents. (another eight percent would have copyright law protect them.) we should not be surprised that these survey findings reflect division within the technical community about patents as a form of protection for this important kind of computer program innovation. a number of prominent computer professionals who have written or spoken about patent protection for algorithms or other innovative aspects of programs have either opposed or expressed reservations about this form of protection for software [2, 4, 5]. this division of opinion, of course, has not stopped many firms and some individuals from seeking patent protection for algorithms or other software innovations [8]. although the refac technology patent infringement lawsuit against lotus and other spreadsheet producers may be in some jeopardy, it and other software patent lawsuits have increased awareness of the new availability of software patents. this situation, in turn, has generated some heated discussion over whether this form of legal protection will be in the industry's (and society's) long-term best interests. the aim of this column is to acquaint readers with the legal debate on patent protection for algorithms and other computer program innovations, an issue which seems to be as divisive among lawyers as among those in the computer field [3, 9]. pamela samuelson software process: a roadmap alfonso fuggetta research in progress: the effects of ethical climate on attitudes and behaviors toward software piracy software piracy, the unauthorized copying of computer software, is widespread in many organizations today. from the perspective of managers in organizations, software piracy means the threat of costly litigation on the one hand, balanced against the reduced expense for additional software if unauthorized copies are used. it should be possible to exercise more effective control over software piracy with a more complete understanding of the factors that lead to the decision to copy software. the goal of this study is to assess the direct and moderating effects of the organization's ethical climate on a variety of attitudes and behaviors regarding software piracy. prior research on software piracy has tended to focus on individual factors that influence piracy. this study is part of a multi-year, multi-university study of software piracy. results from our prior studies suggest that while individual factors such as age and gender influence piracy attitudes and behaviors, these effects appear to be overshadowed by organization effects. in this study, we measure organization ethical climate as perceived by students in three universities and their attitudes and behaviors towards software piracy. the measures of ethical climate used in the literature have been modified to fit the university setting.
our study promises to make several important contributions. from the perspective of theory, our work should demonstrate the importance of including organizational ethical climate in theoretical models of the antecedents of software piracy attitudes and behaviors. prior research on software piracy has tended to focus on the effects of individual differences; however, individuals are embedded in organizational contexts that can influence their attitudes and behaviors. from the perspective of practice, demonstrating that different ethical climates directly impact and moderate software piracy attitudes and behaviors has a number of implications for the management of information systems professionals. it is important to know whether certain ethical climates encourage or discourage software piracy attitudes and behaviors. managers may work to change an ethical climate that fosters software piracy; alternatively, they may need to implement strong measures in the existing ethical climate to discourage software piracy. finally, it is important to study software piracy in a university setting, as this is where future information systems professionals are trained. it is imperative for educators to understand students' ethical attitudes and behaviors concerning software piracy, and how these attitudes and behaviors may be influenced. diane lending sandra a. slaughter should everyone learn anything?: the question of computer literacy in developing a new area of knowledge, one of the most difficult problems is working out a framework in terms of which to define the area. the emerging subject of "computer literacy" is a case in point. what should colleges and universities teach about computers? and to whom? rather than beginning with such "computer literacy" issues themselves, we start with the more basic question of how educators make any decisions about the appropriate content and audience of higher education. the question of teleology in higher education is examined in terms of four conceptual categories: acculturation, economic considerations, social mechanisms, and mental discipline. these four categories offer one plausible framework for crafting rational procedures for deciding what to teach college students about computers. naomi s. baron technology and the organization of schooling jan hawkins computers and the myth of neutrality this paper is a critique of the widely held belief that computers are value-neutral instruments whose uses are determined primarily by individual choice. the critique is based on an analysis of the social relations of technology which reveals the operation of three kinds of constraints on human choice: 1) intrinsic features of technology; 2) accidental features such as organizational structure; and 3) techno-cultural paradigms. these constraints will be explained and illustrated with examples from the world of computing. my aim is to show that some uses of computers are more likely than others to be adopted under particular, concrete historical circumstances. for example, the mere fact that computers can be used to improve the quality of work-life or to increase citizen participation in government does not mean that they will be so used. the assumption of value-neutrality is at variance with this observation; it distorts analysis of the social impact of computers and serves partisan interests. to achieve fairness and equity in the design of public policies, we must disabuse ourselves of the mistaken notion of value-neutrality.
abbe mowshowitz finland: the unknown soldier on the it front kalle lyytinen seymour goodman partnering with eda vendors: tips, techniques, and the role of standards sean murphy the environmental contribution of personal computers: a state of the art report yoni perl edgar whitley teaching introductory computer science as the science of algorithms doug baldwin software compatibility and the law pamela samuelson information technology assessment and adoption: understanding the information centre role this paper reports the results of a study of the impact of end user computing strategies, and in particular organizational information centre activities, upon an organization's information technology assessment and adoption process. over 40 managers in ten organizations were interviewed using a discussion questionnaire. the purpose of the interviews was to probe, in a semi- structured way, various aspects of the roles played by these firms' information centres in identifying, assessing and absorbing new information technology into the firm. it was found that the impact of information centres on technology absorption could be broadly characterized using two constructs: acceleration (the rate at which new technology is introduced) and control (the variety of choices made possible to users of the technology). by considering acceleration and control together, four different states are identified and characterized. typical growth patterns through the different states are described, based on actual histories from the companies studied. also, the major factors that influence end user computing strategies, and the various tactics available for implementing user computing as a function of the various states, are delineated. malcolm c munro sid l huff implementing sap r/3 at the university of nebraska tim sieber keng siau fiona nah michelle sieber systems for multi-level teaching materials jorma sajaniemi marja kuittinen a case study in certification and accreditation bill neugent auto star - a software development system m. y. zhu international efforts in computer science education rocky ross requisite technical skills for technical support analysts: a survey technical skills are critical for computer related professions. specific skills will often vary by job classification and may vary over time as new technology is developed and adopted. an understanding of skills for a given profession can be used to advantage by companies and educators in planning for future requirements and opportunities. the research reported in this paper examines twenty technical skills required of technical support analysts and system analysts. a survey questions a sample in both professions to determine the perceived importance of the different skills. the results confirm previous studies on skills required of system analysts and show that they differ from those required of technical support analysts. both professions perceive hardware, communication, and software skills as being important while their perceptions differ on database and advanced application skills. career training and hiring decisions would perhaps be improved by taking these differences into account. james j. jiang gary klein which way to go? "i've got a quick question…" says user. sure, i say. yes---we all know that over-used phrase. what they think is a quick question may indeed be; it is the answer that is usually not. user and i prepare to consult. actually, the philosophy i use isn't what the term consult constitutes. 
i work more in the counselor-type role. i ask user questions with the ultimate goal in mind that user decides which word processing package to purchase (or microcomputer or printer or whatever) and use---not me deciding for user. important to remember is who, what, where, when, why and how (and how much!). user is usually more receptive, positive, and satisfied with the "suggested" solution. so a typical situation with user might be portrayed like the following: kathy hunter and still more on retraining mathematics faculty to teach undergraduate computer science doris c appleby curriculum '80: recommendations & guidelines for community & junior college curriculum this session will provide a summary of the first acm curriculum recommendations for the community and junior college associate and vocational technical level of education. the publication of this report represents the culmination of five years of work by the curriculum committee on computer education, subcommittee for community and junior colleges. curriculum '80 addresses three areas: computer programming, computer operations, and data entry operations. the report gives, for each of these three, topical outlines of subject matter to be taught, lists of resources needed, and suggestions for articulation and continued relevance. the guidelines should be of benefit to those who wish to begin a new program of study, as well as to those who wish to improve and update an existing program. joyce currie little marjorie leeson iva helen lee john sweeney real-life projects for students daris howard tending our fields: hiring and retaining faculty in small colleges computing programs (panel session) ralph bravaco aaron garth enright frank ford david levine scott mcelfresh mary ann robert linda wilkens justifying the business value of information systems education: a report on multi-cultural field experiments there is little empirical evidence to support the value of information systems education and training. while is training is conducted because of its direct impact on the accretion of skills, the lack of empirical evidence for the value of information systems education limits resource allocation for it. as organizations employ information technology which is more difficult to justify using standard cost/benefit methods, user information satisfaction (uis) is becoming a useful metric to measure the business value of it. in two multi-cultural field experiments it was found that information systems education caused statistically significant effects in uis. in general, after viewing a video on the is development process, end users became more satisfied with the information product and less satisfied with some aspects of is support and staff, especially the usual education and training provided. john t. nosek the standards factor: standards on the horizon pat billingsley a project planning and development process for small teams marc rettig gary simons using the new acm code of ethics in decision making ronald e. anderson deborah g. johnson donald gotterbarn computer security for the people jay bloombecker reader comments: the complete computer scientist kirk templeton arithmetical croquet this paper will describe how the game of arithmetical croquet was used in a second semester programming course.
this game can be used as an assignment in many different ways: students can design it completely from scratch; students can complete classes that are already designed; students can focus only on the strategy module to select the next move when everything else has been supplied to them, etc. in particular, this paper describes experiences with the first assignment type listed above, and provides the details for the program evaluation scheme that was used. john a. trono women in science and academe karan wieckert will outline the process, conclusions and recommendations of a report published in february of 1983 by the female graduate students and technical staff in computer science and artificial intelligence at the massachusetts institute of technology. incidents of subtle discrimination from the laboratory work environment will be described as well as general assumptions which give rise to these problems. also, the effects these incidents have upon women and recommendations for alleviating the problems will be presented. nell dale will present the results of a project of the women in science career facilitation program of the national science foundation. re-entry projects for women with undergraduate degrees in the sciences were funded over a seven year period. as a part of the final survey questionnaire sent to over 140 participants, a section on discrimination was included. the results of this section will be presented. karen wieckert nell dale legally speaking: is information property? this column will discuss why the law has traditionally resisted characterizing information as the sort of thing that can be private property, and will speculate about why judges may be more receptive nowadays to assertions that information should be treated as property. this new attitude is illustrated by a 1987 u.s. supreme court decision which upheld criminal convictions based solely on the misappropriation of information which the court found to be the property of one of the defendants' employers. pamela samuelson nightmare on westwood avenue: product development laboratory: success, failure, both, neither??? mary lou dorf gerald r. heuring a simple straightforward method for software development donald v. steward case study: extreme programming in a university environment _extreme programming (xp) is a new and controversial software process for small teams. a practical training course at the university of karlsruhe led to the following observations about the key practices of xp. first, it is unclear how to reap the potential benefits of pair programming, although pair programming produces high quality code. second, designing in small increments appears problematic but ensures rapid feedback about the code. third, while automated testing is helpful, writing test cases before coding is a challenge. and last, it is difficult to implement xp without coaching. this paper also provides some guidelines for those starting out with xp._ matthias m. muller walter f. tichy an operating system development: windows 3 chip anderson computer professionals whose scientific freedom and human rights have been violated - 1984: a report of the acm committee on scientific freedom and human rights this is the third report prepared by the acm committee on scientific freedom and human rights (csfhr). the first was published in the march 1981 communications and the second in the december 1982 issue. this report is an update. 
since the committee intends to publish future updates, it would appreciate receiving further information about computer scientists whose rights have been violated. such information should be sent to: jack minker, vice-chairman, committee on scientific freedom and human rights, department of computer science, university of maryland, college park, md 20742. because those whose scientific freedom or human rights have been violated derive sustenance and support from contacts with their colleagues, the csfhr has established a program in which acm chapters "adopt" individual scientists and correspond with them. such correspondence should touch on the personal and scientific and not discuss political matters. these letters greatly improve the morale of the recipients and are one of the few ways they can keep current with computer science and technology. this csfhr program is directed by helen takacs (p.o. drawer cs, mississippi state, ms 39762). jack minker what courseware dedicated to computer science? computer science is a field where concept acquisition depends on both knowledge and know-how. aspects such as situated learning, the use of viewpoints, and negotiated tutoring play an important part in learning to program. all these notions require specific tools intended for a teacher who designs computer science courseware. this paper therefore presents an authoring system specialized in student interface generation and tutoring strategy execution. mahmoud boufaida resurrection & penance richard i. anderson taking advantage of national science foundation funding opportunities: part 1: opportunities this session will highlight nsf division of undergraduate education programs of interest to college faculty, discussing the requirements and guidelines. it will include a discussion of the characteristics of a competitive proposal and the proposal process. andrew bernat harriet taylor about this issue… adele j. goldberg risks to the public in computers and related systems peter g. neumann external customers enhance university services the computing services center (csc) of wayne state university began to actively seek business from local government, health care, other educational institutions, and research facilities in 1977. the addition of external customers acted as a catalyst, affecting csc policies and procedures, hardware and software, marketing strategy, and most importantly, attitudes toward customers and ourselves. external customers forced us to examine our service organization from a new perspective, and as a result, helped us find new ways to improve the service we provide to our own university community. patrick j. gossman european standardization policy david c. wood interactive learning with gateway labs mary johansen jason kapusta doug baldwin acm: the past 15 years, 1972-1987 anita cochran adapting not adopting a curriculum this paper presents the background of computer science programs in the dominican republic and shows how the curriculum at universidad catolica madre y maestra was chosen. alonso villegas an intensive instruction format for information systems for over fifteen years southern illinois university at edwardsville has offered management information systems courses using an intensive weekend format at locations around the united states. although a variety of information systems courses in the master of business administration program have been provided, the most frequently taught course has been introduction to information systems.
the introduction course combines elementary computer and data processing concepts, programming, applications, and systems analysis and design. however, the emphasis of the course is analysis and design of systems from a manager, or user frame of reference. because of the nontraditional format of the course, a variety of instructional strategies have evolved to insure successful student achievement of course objectives. frequent comparisons between the test scores of students finishing the off-campus program and students completing the on- campus program indicate that the achievement levels are equivalent. john f. schrage robert a. schultheis teaching very large instruction word architectures the vliw model describes a philosophy whereby the compiler organizes several nondependent machine operations into the same instruction word. some features of this form of architecture are illustrated and certain strategies on presenting this topic to students are shown. john impagliazzo identity theft, social security numbers, and the web hal berghel distance teaching at uppsala nell b. dale council decision of july 26, 1988 concerning the establishment of a plan of action for setting up an information services market (88/524/eec) corporate council of the european communities the crisis in academic hiring in computer science henry m. walker j. paul myers stu zweben allen b. tucker grant braught international perspectives: the globalization of computing: perspectives on a changing world are the computing and telecommunications technologies making the entire world a better place in which to live? s. e. goodman identity-related misuse peter g. neumann "network protocols and services": a non-specialist approach to teaching networking (poster session) david stratton inside risks: inside "inside risks" peter g. neumann the computer in cartoons: a retrospective from the saturday review walter m. mathews kaye reifers the coach supporting students as they learn to program (poster session) pete thomas computational science as an interdisciplinary bridge g. michael schneider in memory of walt kosinski jim adams "how much did you get?" (poster): the influence of algebraic knowledge of computer science students yifat ben-david kolikant faces: a faculty support program at syracuse university. how do we assist faculty in bringing the computer into the classroom? beth d. ruffo the design challenge of pervasive computing john thackara ensuring a local sig's future: fitting into and creating a culture richard anderson information technology, process reengineering, and performance measurement: a balanced scorecard analysis of compaq computer corporation william f. wright rodney smith ryan jesser mark stupeck from research to reward: challenges in technology transfer over a five year period the applied science & technology group of ibm's hursley laboratory in england turned itself from a fully-funded research organisation into an entirely self-funded technology transfer group. much practical experience and insight was gained into the questions of: what are the obstacles to overcome in successful technology transfer? how to find a match between technology and customer? how best to manage risk and expectation? to be successful a technology transfer group needs to be correctly positioned within its sponsoring organisation, use management processes that provide flexibility and control, and develop a sophisticated engagement model for working with its customers. adrian m. 
colyer career strategies, job plateau, career plateau, and job satisfaction among information technology professionals patrick chang boon lee a survey into the relevance and adequacy of training in systems analysis and design a questionnaire was designed to cover the areas of systems analysis and design (sad) which staff in northern ireland currently work on, the extent of training the person has had in those areas, and their assessment of the adequacy of any training given. the questionnaire was distributed to both commercial firms and to the civil service. the total number of replies was 150: civil service (104), non-civil service (46). the survey showed that training in sad was concentrated in certain areas, such as systems and data modeling. other areas, such as interview techniques and sampling of documents, are not equally well covered in training. student assessment of the training provided varied significantly. the ease with which certain aspects of sad training courses can be developed as opposed to others is discussed, and the important imbalance in the coverage of the training provided is highlighted. r. larmour achieving bottom-line improvements with enterprise frameworks david s. hamu mohamed e. fayad new developments in accreditation doris k. lidtke john gorgone della bonnette michael c. mulder gender and programming: what's going on? the learning (and teaching) of programming in higher education is a perennial problem, and is the subject of much attention and innovation. one way in which the problem can be addressed is for instructors to investigate and thus better understand the ways in which students learn to program. we present the results of investigations carried out at the universities of kent and leeds into the ways in which _gender_ influences the learning approach of students in programming. the research shows that gender is a significant factor in determining the way in which students approach learning to program. a better understanding of the issues raised would lead to more effective teaching and thus better learning. janet carter tony jenkins a guideline for data protection legislation in thailand pateep methakunavudhi "what would you do?" an exercise in analysis and planning attendees will be divided into three large groups based upon the size of their installation (size in terms of number of employees). each group will be asked to analyze two situations. the situation may be presented on paper, videotape, or through role playing. the large group will be divided into small discussion groups (7-10 people) to analyze the situation, define the problem(s), and come up with solution(s). questions for directed discussion will be provided, and a facilitator will "drop in" on the groups or provide assistance if requested. spokespeople from the small groups will present their conclusions to the large group. frances m. blake alvin e. stutz question time: organizational shake-ups louis v. gerstner using an effective grading method for preventing plagiarism of programming assignments the two main purposes of this paper are: 1) to discuss four commonly-used grading methods (which we shall call methods a, b, c, and d) employed with programming assignments and 2) to present by way of recommendation two experimental methods (which we shall call methods x and y) which support our thesis that positive prevention of cheating on programming assignments through the use of an appropriately-designed grading method is far more effective than the other approaches in general use. c.
jinshong hwang darryl e. gibson the care and feeding of an instructional program uc davis (enrollment approximately 19,000) is historically an agricultural school which has been somewhat slower to develop a computing intensive academic program than many other comparable universities. it became a full- range university in the uc system in the 1960's. this paper explores the development of a computer instructional program for the faculty, staff, and students at ucd, using an analogy of growing crops. robert s. fisk creating an atmosphere of responsible computing harvey axlerod information systems for organizational productivity gib akin david hopelain who teaches management information systems? george p. schell inferential security in individual computing environments cleopas o. angaye steven c. hansen global surveillance: the evidence for echelon duncan campbell using ada and c++ in computer science education undergraduate students of computer science or software engineering must become familiar with imperative programming languages, due to the extensive use of these languages in industry. perhaps the two most interesting imperative languages, from a technical standpoint, are ada and c++, as these two languages include a number of modern features and enjoy widespread popularity. we argue that a four-year undergraduate curriculum in computer science which emphasizes imperative programming languages benefits from a thorough coverage of ada in the first two years, while deferring the direct use of c++ until the junior year. the argument is supported by an overview of the technical features of ada and c++ which both aid and hinder the learning of software development skills. raymond j. toal teaching virtual reality this paper reports on the design and implementation of a two-semester course on virtual reality (vr). the course is aimed at final year undergraduates on programmes leading to specialist degrees such as information technology, software engineering, computer studies, business information systems.entitled "vr - implementation and applications", the course embraces novel devices, interfacing, software toolkits, graphics algorithms, system evaluation, technology prediction, human computer interaction (hci), social, psychological and philosophical aspects.the course focuses on the evaluation of current and expected technology and on the assessment of current and future applications in its complete variety. it thereby encompasses transferable skills in the mainstream areas of technology evaluation, system evaluation.this paper reviews the aims of the course, the curriculum content, a novel learning approach, assessment and an evaluation of the course. d. h. bell updating systems development courses to incorporate fourth generation tools carol chrisman barbara beccue finding value in cscl trent batson assessment and technology jan hawkins john frederiksen allan collins dorothy bennett elizabeth collins software support for computer science video courses karen van houten information technologies at the primary school elena koleva technology policy: preparing america's high-tech industries for business in the 21st century mark lilback panel on infomediaries and negotiated privacy techniques jason catlett security considerations in system design this session addresses today's design efforts, seeking to identify security issues for fifth generation computing. 
emphasis is placed on disaster recovery, auditing and evaluation of computer systems, and special problems arising from the proliferation of small computers. k. fong, whose abstract follows in these proceedings, describes our changing approach to providing uninterruptable access to automated information resources. these changes can include total redundancy and geographic dispersion, ensuring compatibility and transportability of information and software, as well as automated facilities to recover essential functions after disasters. james j. pottmyer ken fong stelio thompson ralph shattuck computer science - a mathematical science and accreditation f. garnett walters an instructional aid for student programs analyzing and grading programs in an introductory computer science course can require a great deal of time and effort from the course instructor. this paper investigates the development of a system called instructional tool for program advising (itpad) that assumes some of the instructor's duties by keeping student profiles and assignment profiles, by detecting possible plagiarism, and by providing suggestions directly to the students for improving their programs. the design of the itpad system is based mainly on the direct application of code optimization techniques to fortran source programs. several software science measures also provide some of the profile characteristics. the results of test runs show that this system helps the instructor monitor the progress of the students through the term and also helps the instructor determine the individual algorithmic approaches for a particular programming assignment. the system can further benefit the students directly by providing suggestions that emphasize the use of good programming style. sally s. robinson m. l. soffa a twenty year retrospective of the nato software engineering conferences (panel session): remembrances of a graduate student mary shaw zen and the art of teaching systems development joan greenbaum lars mathiassen a set of workshops for high school computer science teachers this paper outlines a set of workshops to provide training for certified high school computer science teachers. upon the completion of the four core workshops, a high school teacher would have an excellent background to teach high school computer science as detailed in the new proposed acm curriculum for high school certification. the workshops should also do a good job upgrading the background of high school computer science and computer math teachers to teach courses currently in the high school curriculum. one workshop, pascal with applications to data structures, was specifically designed to prepare current high school teachers to teach a pascal course whose goal is to prepare students for the advanced placement test. each of the six workshops is a three semester hour course and most carry graduate credit. three of the six courses have already been offered and more should be taught next summer. the reception of the high school teachers to the workshops has been very enthusiastic. george m. whitson organization and management of the information center: case studies this paper provides case studies of thirteen st. louis based information centers. the objectives of the paper are to describe the responsibilities of information center professionals, to identify tools and resources for end user computing, to describe users and user-developed applications, and to identify policies relating to end user application development. 
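the itpad entry above names "software science" measures among the profile characteristics it computes. as a rough sketch only (not itpad's actual implementation, which applied code optimization techniques to fortran; the tokenizer and names here are hypothetical), the classic halstead measures can be derived from a fortran-like token stream as follows:

import math
import re

# a tiny, illustrative operator set; a real tool would use the full language grammar
OPERATORS = {"=", "+", "-", "*", "/", "**", "(", ")", ",", ".gt.", ".lt.", ".eq.", "if", "do", "call"}

def halstead_profile(source: str) -> dict:
    """compute basic halstead 'software science' measures from fortran-like source."""
    tokens = re.findall(r"\.\w+\.|\*\*|[a-z_]\w*|\d+\.?\d*|[^\s\w]", source.lower())
    operators = [t for t in tokens if t in OPERATORS]
    operands = [t for t in tokens if t not in OPERATORS]
    n1, n2 = len(set(operators)), len(set(operands))   # distinct operators / operands
    N1, N2 = len(operators), len(operands)             # total occurrences
    vocabulary, length = n1 + n2, N1 + N2
    volume = length * math.log2(vocabulary) if vocabulary > 1 else 0.0
    return {"n1": n1, "n2": n2, "N1": N1, "N2": N2,
            "vocabulary": vocabulary, "length": length, "volume": volume}

print(halstead_profile("x = a + b * c\nif (x .gt. 10) call report(x)"))

comparing such profiles across submissions is one simple way a grading aid of this kind could flag suspiciously similar programs.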
the findings reflect the growth of information center resources and staff in response to rising demand for support of end user computing . most user-developed applications were queries, reports, and analyses of production data extracts, as well as microcomputer based applications involving personal and departmental data. informal policies and guidelines for user applications have been defined, but as yet most of these applications have not impacted the application development backlog. mary sumner efficiency of algorithms for programming beginners david ginat historical perspectives on the computing curriculum michael goldweber john impagliazzo iouri a. bogoiavlenski a. g. clear gordon davies hans flack j. paul myers richard rasala the pascal trainer adam brooks webber microcomputer software management: a user-oriented approach microcomputers at eastern wyoming college have become an everyday tool for students, faculty, support staff, and administrators. as the impact of the increased usage of microcomputers became apparent to the computer services staff, a software management plan was implemented. the most important consideration in the development of this plan was the user. kathie france curriculum guidelines for teaching the consequences of computing dianne martin chuck huff donald gotterbarn keith miller project - after they are finished judith l. gersting frank h. young undergraduate computational science education (panel session) angela b. shiflet philip holmes chuck niederriter robert m. panoff ernest sibert a user essay: i need help while i am using speech as an information medium for the reading disabled carl friedlander a first year advanced students' project scheme angela carbone design and implementation of an interactive tutorial framework lewis barnett joseph f. kent justin casp david green some thoughts on undergraduate teaching and the ph.d. as the hiring crisis in academic computer science worsens, many people ask whether faculty whose primary job is teaching need doctorates. in the past, the answer would have been "yes;" now people increasingly suggest that it could be "no." i have argued in my own department for hiring only faculty with doctorates, because, while the doctorate does not directly train people to teach, it does seem to correlate with many characteristics of a good educator. this paper explores the thinking underlying my view, in hopes that it may help others clarify the needs and reasoning behind their own faculty searches. doug baldwin topics in professional certification (technical session) no universally accepted set of standards exist for the education or experience requirements for computer programming and data processing personnel to assume leadership positions in their profession. while various forms of certification have been implemented in other academically-oriented professionals for several decades, certification of data processing personnel is a relatively recent activity. however, recognition of the need for minimal qualification levels in data processing among the dp employers is becoming more apparent. one may read employment notices wherein holding a certificate in data processing is a qualification option. data processing personnel also recognize the merit in certification; this is evidenced by the growing numbers of individuals who undertake certification at their own motivation and expect their employers to recognize their evidenced level of competence. 
individuals who take the initiative to become certified demonstrate that they have attained a measurable, significant level of education and experience in their profession and a level of professional competence. james s. ketchel investigating the relationship between the development of abstract reasoning and performance in an introductory programming class a test of formal (or abstract) reasoning abilities was given to students in an introductory programming course. based on these results, students were classified at three intellectual development (id) levels: late concrete, early formal, and late formal. performance in various aspects of the course was analyzed by these three id levels. it was found that: (1) id level did not vary with sex, class level, and previous coursework; (2) the levels of late concrete and late formal are strong predictors of poor and outstanding performance, respectively; and (3) the id level predicts performance on tests better than performance on programs. barry l. kurtz collaboratively teaching a multidisciplinary course suzanne gladfelter conference preview: siggraph 2001 marisa campbell design patterns: an essential component of cs curricula owen astrachan garrett mitchener geoffrey berry landon cox an organizer for project-based learning and instruction in computer science the computer science (cs) educational community has recently realized the potential of project-based learning (pbl) in cs education. the new cs curriculum for high school students in israel consists of 3 to 5 courses --- one of which requires a final project instead of the traditional final examination. pbl demands many changes in students' learning habits and requires new instructional methodology. this paper presents the rationale and objectives of a pbl organizer that was specifically designed for a project-oriented logic programming high school course. the pbl organizer gradually supports both students and teachers in project development processes. zahava scherz sarah polak information systems curriculum (abstract): where we should be going? paul m. leidig mary j. granger asad khailany joan pierson dean sanders computing as a discipline: preliminary report of the acm task force on the core of computer science it is acm's 40th year and an old debate continues. is computer science a science? an engineering discipline? or merely a technology, an inventor and purveyor of computing commodities? what is the intellectual substance of the discipline? is it lasting, or will it fade within a generation? do core curricula in computer science and engineering accurately reflect the field? how can theory and lab work be integrated in a computing curriculum? we project an image of a technology-oriented discipline whose fundamentals are in mathematics and engineering --- for example, we represent algorithms as the most basic objects of concern and programming and hardware design as the primary activities. the view that "computer science equals programming" is especially strong in our curricula: the introductory course is programming, the technology is in our core courses, and the science is in our electives. this view blocks progress in reorganizing the curriculum and turns away the best students, who want a greater challenge. it denies a coherent approach to making experimental and theoretical computer science integral and harmonious parts of a curriculum. those in the discipline know that computer science encompasses far more than programming.
the emphasis on programming arises from our long-standing belief that programming languages are excellent vehicles for gaining access to the rest of the field --- but this belief limits our ability to speak about the discipline in terms that reveal its full breadth and richness. the field has matured enough that it is now possible to describe its intellectual substance in a new and compelling way. in the spring of 1986, acm president adele goldberg and acm education board chairman robert aiken appointed this task force with the enthusiastic cooperation of the ieee computer society. at the same time, the computer society formed a task force on computing laboratories with the enthusiastic cooperation of the acm. the charter of the task force has three components: present a description of computer science that emphasizes fundamental questions and significant accomplishments. propose a new teaching paradigm for computer science that conforms to traditional scientific standards and harmoniously integrates theory and experimentation. give at least one detailed example of a three-semester introductory course sequence in computer science based on the curriculum model and the disciplinary description. we immediately extended our task to encompass computer science and computer engineering, for we came to the conclusion that in the core material there is no fundamental difference between the two fields. we use the phrase "discipline of computing" to embrace all of computer science and engineering. the rest of this paper is a summary of the recommendation. the description of the discipline is presented in a series of passes, starting from a short definition and culminating with a matrix as shown in the figure. the short definition: computer science and engineering is the systematic study of algorithmic processes that describe and transform information: their theory, analysis, design, efficiency, implementation, and application. the fundamental question underlying all of computing is, "what can be (efficiently) automated?" the detailed description of the field fills in each of the 27 cells in the matrix with significant issues and accomplishments. (that description occupies about 16 pages of the report.) for the curriculum model, we recommend that the introductory course consist of regular lectures and a closely coordinated weekly laboratory. the lectures emphasize fundamentals; the laboratories emphasize technology and know-how. the pattern of closely coordinated lectures and labs can be repeated where appropriate in other courses. the recommended model is traditional in the physical sciences and in engineering: lectures emphasize enduring principles and concepts while laboratories emphasize the transient material and skills relating to the current technology. peter denning douglas e. comer david gries michael c. mulder allen b. tucker a. joe turner paul r. young the fire of prometheus: forging a new millennium grace c. hertlein calculating the cost of year-2000 compliance leon a. kappelman darla fent kellie b. keeling victor prybutok kenya: a land of contrasts, growth and development - especially in education b. f. wray are we ready?: the faa's y2k preparedness frank cuccias models for estimation of effectiveness of html-based training courses raitcho iiarionov oleg asenov nsf report - computer and computation research d. t. lee approaching msis 2000: a new-fashioned graduate model john t. gorgone question time: global village or global police station? m.
grundy executive mentoring: what makes it work? shari lawrence pfleeger norma mertz computer literacy objectives for college faculty the personal computer has had an enormous impact on our society during its short life span. microcomputer sales are soaring; costs continue to decrease, putting the computer within the reach of nearly everyone. schools have rapidly begun to acquire hardware and software to aid teachers in the educational process. the infusion of computers into the schools extends upward into higher education. college faculty are becoming increasingly aware that the computer is going to have some impact in every discipline. the fact that computer literacy objectives are being established for students in many institutions makes faculty even more aware that they, too, must become computer literate. in trying to meet the literacy needs of faculty some questions arise: what is computer literacy? what should the content of a faculty computer literacy program be? how much is enough? who will provide for the computer education of faculty? the purpose of this paper is to provide answers to some of these questions. janet d. hartman reusable software components trucy levine open letter to p3p developers & replies jason catlett digital politics hal berghel giving computer science students real-world experience e. e. villarreal dennis butler the computer software products industry in the '80s the computer software products industry (sometimes called the independent packaged software industry) is composed of those companies which sell predefined and prepackaged computer software to the computer user community at large. today software represents a major bottleneck in the further rapid spread of computers. the computer software products industry is poised in a crucial position with the chance of playing a pivotal role in eliminating this bottleneck. the overall purpose of this panel will be to discuss the various ways in which this industry is beginning to meet the challenge. today the industry consists of essentially three groups. the first consists of the mainframe manufacturers. for example, it is estimated that last year ibm sold $1.5 billion of software, or approximately 5% of its gross annual revenue. this figure should be compared with the total sales of the independent companies, which is estimated to be around $1 billion. the second major group consists of those companies which are either totally devoted to software products or else they have substantial divisions devoted to software products. in addition these companies have major marketing forces which are nationally distributed. the larger ones are listed below along with their revenues from software product sales in 1980. ellis horowitz a componentized architecture for dynamic electronic markets the emergence and growing popularity of internet-based electronic marketplaces, in their various forms, has raised the challenge to explore genericity in market design. in this paper we present a domain-specific software architecture that delineates the abstract components of a generic market and specifies control and data-flow constraints between them, and a framework that allows convenient pluggability of components that implement specific market policies. the framework was realized in the gem system. gem provides infrastructure services that allow market designers to focus solely on market issues. in addition, it allows dynamic (re)configuration of components.
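as a minimal sketch of what such pluggable market-policy components might look like (hypothetical names, not gem's actual api), a generic market shell can accept a policy object and swap it at runtime:

from __future__ import annotations
from typing import Protocol

class MatchingPolicy(Protocol):
    """the pluggable part: any object with a match() method can serve as the policy."""
    def match(self, bids: list[float], asks: list[float]) -> list[tuple[float, float]]: ...

class HighestBidFirst:
    """one concrete policy: pair the highest bids with the lowest asks while profitable."""
    def match(self, bids, asks):
        return [(b, a) for b, a in zip(sorted(bids, reverse=True), sorted(asks)) if b >= a]

class Market:
    """generic market shell; policy components plug in and can be replaced while running."""
    def __init__(self, matching: MatchingPolicy):
        self.matching = matching

    def reconfigure(self, matching: MatchingPolicy) -> None:
        # dynamic (re)configuration: swap the policy without rebuilding the market
        self.matching = matching

    def clear(self, bids, asks):
        return self.matching.match(bids, asks)

market = Market(HighestBidFirst())
print(market.clear([10.0, 8.0, 5.0], [6.0, 7.0, 9.0]))  # [(10.0, 6.0), (8.0, 7.0)]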
this functionality can be used to change market-policies as the environment or market trends change, adding another level of flexibility to market designers and administrators. benny reich israel ben-shaul the shifting software development paradigm richard j. welke positive alternatives: a report on an acm panel on hacking a broad cross section of computer security experts, hackers, educators, journalists, and corporate executives examine hacking issues, problems, and possible solutions. john a. lee gerald segal rosalie steier pioneering women in computer science denise w. gurer casablanca: designing social communication devices for the home the casablanca project explored how media space concepts could be incorporated into households and family life. this effort included prototypes built for the researchers' own home use, field studies of households, and consumer testing of design concepts. a number of previously unreported consumer preferences and concerns were uncovered and incorporated into several original prototypes, most notably scanboard and the intentional presence lamp. casablanca also resulted in conclusions about designing household social communication devices. debby hindus scott d. mainwaring nicole leduc anna elizabeth hagström oliver bayley a virtual lab to accompany cs1 and cs2 daniel t. joyce digital media and the law over the past two years, i have written seven "legally speaking" columns and one feature article for communications about legal issues affecting computing professionals. these writings have covered an array of legal topics including: criminal and civil liability for hackers who breach computer security systems; first amendment issues arising in computing or electronic publishing markets; intellectual property issues, such as patent protection for computer program algorithms; copyright protection for look and feel of user interfaces; what the user interface design field thinks about such protection, and various theories by which a firm might claim to own interface specification information for software systems. pamela samuelson teaching computer science in papua new guinea alvin bampton preparing society for the information age (panel) karl j. klee acm fellows diane crawford training professionals in object technology (panel): training professionals in object technology judy cohen mary lynn manns susan lilly richard p. gabriel janet conway desmond d'souza information for authors diane crawford book review: fatal defect: chasing killer computer bug saveen reddy the future belongs to those who teach themselves vance christiaanse the growing debate over science and technology advice for congress scientific and policy leaders interact during a rare but useful workshop on capitol hill. jon m. peha risks to the public in computers and related systems peter g. neumann computer programming in high school vs. college martina schollmeyer managing student teams in information systems courses (panel session) cindy hanchey marguerite k. summers carol chrisman joyce currie little richard a. lejk gender differences in programming? (poster session) janet carter tony jenkins authentic assessment through electronic portfolios portfolios are quickly winning favor as a form of authentic assessment at all levels of education. this paper discusses efforts to develop an electronic portfolio assessment strategy as a graduation requirement for all undergraduate students at northwest missouri state university. 
the author explains early efforts of an electronic portfolio pilot project and presents data analysis in the form of student responses to an opinion survey. the conclusions include projections about full implementation of an electronic portfolio assessment strategy at the university level. gary ury pat mcfarland the role of computer-center committees robin mayne needed: binary bar mitzvahs and computer confirmations? buck bloombecker computer science: a proposed alternative track - applied computing robert d. cupper the end of objects and the last programmer (invited talk) grady booch the politicizing of science policy fred w. weingarten ethical and legal issues in computer science (panel discussion) r. waldo r. roth john carroll susan nycum thomas lutz john e. kastelein web-based laboratories in the introductory curriculum enhance formal methods rhys price jones fritz ruehr richard salter the evolution of a mentoring scheme for first year computer science students damian conway exposure, knowledge or skill the computer literacy dilemma there was a time when computer knowledge was so esoteric that only specialists needed to be educated. only large organizations had computers and skills were needed only by the professional staff. now the scene has changed. computers are being used in the smallest of firms. today there are millions of computers in homes, and there are offices with more computers and/or terminals than employees. there is some possibility that in the future all financial transactions and much shopping will be done by computer. thus it is obvious that computer literacy is needed by all people. the question is not whether but rather what kind of education is required. there are three levels of understanding that may be imparted in education. one can teach a subject to the level of a basic understanding or general knowledge. or one can provide detailed knowledge. finally, one can develop in the student the ability to make skillful use of the subject area. music appreciation, counterpoint and private vocal lessons illustrate the differences in one field of study. in curriculum development in business we tend to solve the question of depth of knowledge this way. all students are required to have a general understanding of the business core and a detailed knowledge of a major field. in addition, we expect that skills be developed in communications and quantitative methods. we carried over this pattern into data processing. however, we didn't know whether dp was a quantitative area requiring skill development or not. thus we have introduction to data processing courses that develop a programming skill in some language and others that just talk about programming but don't do any of it. there is substantial disagreement among educators as to the correct course of action. this lack of agreement, as well as a belated entry into dp education, has resulted in a limited and uneven computer education in colleges of business. our graduates vary from computer illiterate to very skillful user. what to do about this situation is the focus of this study. theodore c. willoughby writing a textbook: walking the gauntlet nell dale rick mercer elliot koffman walter savitch my three computer criminological sins jay j. buck bloombecker coping with rapid change in information technology skip benamati albert l. lederer alternatives to teaching introductory short courses as the number of new computer users grows, it becomes increasingly difficult to give introductory short courses to everyone. 
in the past, we have dealt with this by offering more and more sections each semester. when all the sections fill up, we schedule an additional section (and end up, it seems, teaching all the time). even then, people come to other short courses without the "introduction to timesharing" prerequisite; or they say, "i couldn't fit any of your 5 sections into my schedule." to alleviate the pressures of teaching many sections (and to allow our students even more scheduling freedom), we at suacc have tried several alternatives, such as tutorial programs and more printed material. this paper describes some of the alternatives and combinations of alternatives we have experimented with. some were successful, some not. while we are still experimenting, we have found a combination which seems to please the students, while saving staff time. john thorton fair electronic cash withdrawal and change return for wireless networks we propose a practical mobile electronic cash system that combines macro and micropayment mechanisms and offers very high security and user's privacy protection. notably, we have developed an innovative fair withdrawal and change return protocols, which are efficient and preclude any fraudulent misbehaviors, while user anonymity and transaction unlinkability are preserved. coins are withdrawn if, and only if payer's account is debited. change is returned to an anonymous payer, who gets it always but only if she is entitled to receive it. wiretapping of even all channels does not yield a valid coin, and a payment can not be deposited at the account of another payee. lost or stolen coins are blacklisted immediately and their value is recovered. no cooperation of dishonest participants gives any advantage. a prototype implementation proved that the system is efficient and convenient. robert tracz konrad wrona little engines that could: computing in small energetic countries how do very small countries, here defined as having fewer than 10 million people, find places for themselves in the information technologies (it) arena? does success require accommodation in the global it regime that often seems dominated by the u.s. and japan? do the little countries scurry around, like birds among the lions and other predators looking for scraps? are they relegated to second tier "appropriate technologies," or do they operate in the mainstream? j. l. dedrick s. e. goodman k. l. kraemer bandwagons considered harmful, or the past as prologue in curriculum change the field of computer science changes rapidly, and this change occurs as well in the introductory curriculum. formerly advanced topics filter down to the first year, and even to secondary school; some topics disappear completely. these changes are good---they indicate a dynamic discipline and a still- emerging picture of the field's fundamental principles. but we must not let our revolutionary zeal blind us to the pedagogical need and conceptual value of time-tested material. many topics and approaches that are well understood and now unfashionable should retain their place in the introductory curriculum, where they serve as intellectual ballast, foundation, and motivation for the more current and trendier content. we argue here for balance: that radical change be tempered by an appreciation for the place of long-standing approaches and underlying fundamentals. 
those advocating curricular change must articulate their educational goals fully and consider explicitly what effect on those goals they expect the change to have; they must not throw the baby out with the bathwater. david g. kay electronic discussion boards (poster session): their use in post graduate computing courses donald joyce alison young regulation of technologies to protect copyrighted works pamela samuelson educating a new computer scientist (abstract) peter j. denning do we teach the right algorithm design techniques? anany levitin what would justice brandeis say? robert c. heterick web-based multimedia tools for sharing educational resources many educational resources and objects have been developed as java applets or applications, which can be accessed by simply downloading them from various repositories. it is often necessary to share these resources in real time, for instance when an instructor teaches remote students how to use a certain resource or explains the theory behind it. we have developed some tools for this purpose that emulate a virtual classroom, and are primarily designed for synchronous sharing of resources. they enable participants to share java objects in real time and also allow the instructor to dynamically manage the telelearning session. shervin shirmohammadi abdulmotaleb el saddik nicolas d. georganas ralf steinmetz information ethics in a responsibility vacuum the authors start with the premise that the field of information technology is not a "profession," because it is missing some of the defining characteristics of a profession. in particular, the authors claim the field is impotent in terms of certification and meaningful sanctions for unprofessional behavior. furthermore, unlike "true" professions, leadership (with the frequent exception of it in academia) is more often than not in the hands of someone who has not come "up through the ranks" and therefore does not enjoy peer status. handicapped by lack of credentialing, the inability to deal effectively with unprofessional behavior, and non-peer leadership, the resulting power vacuum has led to a responsibility vacuum, which in turn complicates issues of information ethics. the authors then argue that the it field, whatever it considers itself to be, or is perceived by society to be, has a responsibility to society to disclose its position on ethical issues: to proclaim or disclaim such positions if there is consensus or acknowledge disagreement within the field in the absence of consensus. specifically, there arises a societal obligation to "own up" to the realities of our not being a profession and such implications as our inability to deal with unprofessional behavior. the authors conclude that information ethicists must not only assist the field in identifying ethical issues and formulating appropriate positions, but they must also lobby the field regarding the importance of keeping society informed on the existence, non-existence, or evolution of such positions; and directly alert society, on behalf of the field, to the existence, non-existence, or evolution of such positions. in particular, the authors assert that any progress the field can make in resolving ethical jurisdiction---who gets to have a say in matters of information ethics---will have an immense positive impact on the future of information ethics, contribute significantly to progress in matters of legal jurisdiction, and may even assist the field in resolving its identity. james l. linderman william t.
schiano the eurolemasters project: a european masters degree in information engineering alan f. smeaton edie m. rasmussen introductory computer science courses for in-service elementary and secondary teachers william g. frederick maynard j. mansfield thought and deed: bridging the gap between administration and user carol s. romano introducing knowledge-based projects in a systems development course due to recent advances in knowledge-based systems technology it is suggested that students in a systems development course be given exposure to the concept of designing and implementing knowledge-based systems. the purpose of this paper is to describe the objective, scope, methods and procedures of knowledge-based projects. the first half of the paper introduces the knowledge-based approach and describes the structure and components of such a system. the second half of the paper deals with course design procedures. the scope, topics, project characteristics and experiences pertaining to such a project are covered here. a practical knowledge-based systems development life cycle (ksdlc) is also proposed for use in such a course. it is a suitable methodology as it resembles the traditional systems development life cycle. vijay kanabar viewpoint: free speech rights for programmers david s. touretzky data processing and computer science theory theoretical results have had much greater impact on computing practice than we are normally aware of, and the practical significance of theoretical results can be expected to become more prominent in the future. we discuss the past and present significance for data processing specialists of some results in analysis of algorithms, languages, and program proofs, and argue for a greater emphasis on computer science theory in data processing curricula. a. t. berztiss a new instructional environment for beginning computer science students at the computer science conference in st. louis in february, 1981, there were 34 job offers for each phd and 12 for each bachelors candidate. these figures come as no surprise to those of us who teach undergraduate computer science courses. where we once taught beginning classes of 30, we are performing before lecture sections of 250. this short paper reports on an innovative introductory computer science course which attempts to make more productive use of faculty and teaching assistant time. nell dale david orshalick prepared testimony and statement for the record of marc rotenberg director, washington office, computer professionals for social responsibility (cpsr) on the use of the social security number as a national identifier before the subcommittee on social secu the learning and teaching support network promoting best practice in the information and computer science academic community (poster session) aine mccaughey sylvia alexander interaction processes in network supported collaborative concept mapping this study investigated group interaction processes in network supported collaborative concept mapping, and the influence of these group interaction processes upon the group concept mapping performance. a total of 36 in-service teachers and pre-service student teachers engaged in this study. it was found that group concept mapping performance was significantly correlated to the quantity of group interaction, particularly high-level interaction processes. suggestions for a further improvement in the system design to support collaborative concept mapping are also provided in this paper. 
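the correlation reported in the concept-mapping study above can be illustrated with a small, entirely invented example (the figures and variable names below are hypothetical, not the study's data): compute pearson's r between each group's count of high-level interactions and its concept-map score.

import math

def pearson(xs, ys):
    """pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# invented per-group figures for illustration only
high_level_interactions = [12, 30, 7, 22, 18, 41]
map_scores = [35, 62, 20, 48, 44, 70]

print(f"r = {pearson(high_level_interactions, map_scores):.2f}")

an r close to 1 on such data would correspond to the study's finding that more high-level interaction goes with better group concept maps.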
chiung-hui chiu chun-chieh huang wen-tsung chang tsung-ho liang an educational project in computer science for primary and high school we analyze four different points of view in the teaching of computer science at school, taking into account the aims of institutions, i.e., computer science as a formative, operational, informative and pedagogical tool. we present a particular experience that links the four points of view proposed in a curriculum for primary and high school. this is the result of two years of work developed under a project between our research group, ine, and a private institution of our city. we believe our approach is quite original because these four points of view were traditionally considered as opposites. marcelo zanconi norma moroni perla señas software theft and the problem of intellectual property rights tom forester an experimental system for computer science instruction a number of educational institutions of the world offer academic programs in computer science for undergraduate and graduate students. many of these programs have used a medium-sized to large computer system as a facility for the students. there is a need for a system in which the students have full access to the machine in terms of hardware and software details. most commercial systems do not normally provide this information. in this paper we describe an experimental system designed and built at i.i.t., madras, which is used for illustrating several concepts in machine organization, computer architecture, operating systems, distributed computing, etc. present use of the system for computer science instruction is identified, and proposed applications are indicated. r. kalyana krishnan a. k. rajasekar c. s. moghe management of decentralized computing facilities janet d. hartman microcomputer software distribution: valuable or valueless exercise in resource management? lawrence a. pounds new standards for educational technology relevant to multiple acm sigs roy rada james schoening promoting computer software intellectual property right in computer science education computer software intellectual property right (ipr) protection has become an important issue of universal concern. this paper, based on a survey conducted in five chinese universities, reveals the importance of teaching the subject of computer software ipr to computer science students. the paper discusses the development of educational material that highlights the technical, cultural, legal and regulatory, and business factors related to computer software ipr. it also proposes an educational program for enhancing ipr protection, which can be incorporated into the undergraduate and graduate curricula of computer science. lan yang zhiqiang ding log on education: quick, where do the computers go? history has dealt computer and information science a special role in the inevitable restructuring of the educational system in the united states. in the coming decade computing and information technology will be the backbone of the most significant change in education in over 100 years. rather than being an adjunct to learning and teaching, technology is facilitating a fundamental re-thinking of what should be learned and how. such changes present the communications readership with a unique opportunity and a serious responsibility. toward meeting this challenge, in this column i will address some key issues in education and technology.
for example, this first column examines how our basic notion of what needs to be learned is changing, and how this affects the ways in which technology is used. subsequent columns will explore topics such as "programming's role in learning," "multi-media," and "nationwide, computer-based testing." elliot soloway the relationship between some decision and task environment characteristics and information systems effectiveness: empirical evidence from egypt omar e. m. khalil manal m. elkordy e-commerce: an indispensable technology wael hassan survey of software engineering education l. h. werth two undergraduate courses in the theory of computation charles d. ashbacher systems analysis and design: an orphan course about to find a home bernard john poole what to do when the help desk needs help glenda schester moum sharing printers in a pc lab robert j. mcglinn using learning style data in an introductory computer science course a. t. chamillard dolores karolick women in computing: where are we now? maria klawe nancy leveson introducing the project/process evaluation and development framework (poster session) many software projects fail. the main reason they fail is that people don't understand them well enough to make them succeed. the project/process evaluation and development framework (pedf) allows both customers and developers to understand their project and its process. it allows them to understand what they're doing, why it's supposed to work (or why it won't work), and how to modify it to make it more successful. daniel a. rawsthorne a new view of intellectual property and software randall davis pamela samuelson mitchell kapor jerome reichman a "hands-on" approach to computer literacy computer science departments face an overwhelming demand from the university community for computer literacy courses. in 1982 at rutgers university we began to offer a "hands-on" literacy course for non-computer science majors. the students learn the rudiments of basic, study "how the computer works" by learning a small pseudo-assembly language, and experiment with a variety of applications software packages. applications include text processing, modelling, game playing, cai and spreadsheets. our experiences with this course have been positive, although the logistics of handling 960 students per semester are formidable. barbara g. ryder the successful information center: what does it take? the growth of end-user computing is one of the most important trends in information systems today. organizations are increasingly turning to information centers to facilitate and coordinate these activities. this paper reports the results of a field survey of information center managers. it attempts to identify major successes and problems that are faced by these managers. it also aims to define critical success factors for such centers. the paper concludes with recommendations on how firms can apply the results of the study to their own situations. robert l. leitheiser james c. wetherbe from the president: building big brother barbara simons metacognitive awareness utilized for learning control elements in algorithmic problem solving students who demonstrate high self-explanation ability show advanced metacognitive awareness of their own problem solving process. this awareness can be utilized to reveal and apply control elements they experience during problem solving. in this paper we present a study of capitalizing on student awareness for developing their control competence during algorithmic problem solving.
we describe the rationale for our study, illustrate the learning process through an initial problem solving activity, and show the outcome of this learning. david ginat revitalizing the computer science course for non-majors (abstract) barry burd j. glenn brookshear rick decker frances g. gustavson mildred d. lintner greg w. scragg lessons learned: tips from conducting an nsf faculty enhancement workshop the authors have conducted for each of the past three summers a workshop for faculty who have taught or are scheduled to teach an upper level undergraduate course in artificial intelligence. this paper describes problems the organizers have encountered and ways in which they have attempted to address them. the hope is that others designing and implementing similar workshops can benefit from these "lessons learned". r. aiken g. ingargiola j. wilson d. kumar r. thomas question time: napster john gehl a capstone course for a computer information systems major this paper describes the current form and organization of humboldt state university's cis 492: systems design and implementation, the capstone course for the computer information systems (cis) major. since spring 1998, this course has combined a team programming experience on a large-scale database project with discussions of a software engineering classic, frederick brooks jr.'s "the mythical man month"[1]. students seem to find this combination valuable, and it is hoped that this paper can impart some useful ideas to others in designing a cis/mis capstone course. integrating an intensive experience with communication skills development into a computer science course lori pollock plan of teaching & learning for database software through situated learning (poster session) soo-bum shin in-hwan yoo chul-hyun lee tae-wuk lee introducing computer science in a liberal arts college (abstract only) the introductory computer science course often plays a dual role in the resource limited environment of a small liberal arts college. it must provide a firm foundation for the computer science major while providing a service function to the rest of the college. this paper describes such a course that is being developed at linfield college. by emphasizing personal computers and introducing general purpose tools such as word processors, spreadsheets, database managers, and communications packages, the computer is shown to be a tool that can be used without learning the intricacies of programming. hands-on experience with these packages provides a good basis for discussions of the breadth of computer science, of current trends, and of related social concerns. program design and construction can be introduced with tools such as warnier/orr diagrams and karel the robot. customized software can be constructed with an application generator. brian crissey dean sanders the virtual meiji village in january of 1995 museum staff and the kinjo gakuin university multimedia center agreed to design a multimedia project to boost attendance at museum meiji mura. under the leadership of hitoshi nakata, kinjo gakuin university professors, taisei construction co. ltd. computer graphics technicians, and kinjo media mix students (a university club for computers and media) worked together to create the virtual meiji village project. this type of collaboration, though common in the united states, is rarely done in japan. hitoshi nakata critical success factors in enterprise wide information management systems projects mary sumner an mcc update robert l.
ashenhurst reason, relativity, and responsibility in computer ethics james h. moor the effect of student attributes on success in programming this paper examines the relationship between student results in a first year programming course and predisposition factors of gender, prior computing experience, learning style and academic performance to date. while the study does not suggest that any dominant attributes are related to success in programming, there are some interesting outcomes which will have implications for teaching and learning. pat byrne gerry lyons the information age and the printing press: looking backward to see ahead james a. dewar national science foundation information impact program laurence c rosenberg information technology and people, 1993 (itap'93) moscow, russia ray thomas personal computers and data processing departments: interfaces, impact and implications. satish chandra an innovative design and studio-based cs degree michael docherty peter sutton margot brereton simon kaplan the impact of gender differences on the career experiences of information systems professionals the purpose of this study is to examine the impact of gender differences on the career experiences and job success of information systems professionals. the study analyzes career experiences with respect to a number of variables, including feelings of acceptance, job discretion, met expectations, career support, career satisfaction, and organizational commitment. the study also examines which skills gained from educational preparation and work experience are viewed as most critical to successful performance. the study was accomplished by using in-depth interviews and a questionnaire. the findings showed that mis careers offer challenge based upon technical competence, regardless of gender. the likelihood of reaching a technical plateau and the increasing transition of mis roles into functional business units poses some career uncertainty. mary sumner kay werner cmu's andrew project: a retrospective n. s. borenstein deconstructing the acm code of ethics and professional conduct c. dianne martin software pricing and copyright enforcement: private profit vis-a-vis social welfare yehning chen i.p.l. png laboratory projects for promoting hands-on learning in a computer security course this paper describes two laboratory projects which were developed for use in a computer security course. the first project requires that students examine and analyze an existing computer network environment in order to find security flaws. a sample network environment containing numerous flaws was developed for this purpose. the second project requires students to analyze the security needs of a hypothetical college with respect to its academic computing network and establish a secure computing environment which meets those needs. the environment for both projects consisted of a network of ibm compatible computers running artisoft's lantastic network operating system for dos. janice e. huss examining the relationship between computer cartoons and factors in information systems use, success, and failure: visual evidence of met and unmet expectations computer cartoons express the inevitable discrepancies in expectations among systems analysts, users, and managers regarding the systems development processes and information systems products. computer cartoons also address gaps between ideal and actual information systems, or intended versus actual systems. 
no one sets out to design a system that fails, or a system that is seldom or incorrectly used. it is hypothesized here that the expectations of users, analysts, and managers are apparent in popular computer cartoons that are widely published, circulated, posted, and shared. eight broad critical success factors were identified through the csf literature, and cartoons from the world's largest computerized cartoon database were examined for their existence. factors identified concerning the is process include: 1. management support, 2. capable systems analysts, and 3. proper systems development methods. critical success factors that concern the is product include whether the information system: 4. works, 5. is technically elegant, 6. is easy to use, 7. is a good fit with natural incentives found in the organization, and 8. is a good fit with motivations of users. illustrative evidence of the eight factors was found in the computer cartoons reviewed and examples of each are presented in this paper. since cartoons are emblematic expressions of an idea that can be shared, computer cartoons can help users and analysts express their feelings, attitudes, and opinions about information systems development and implementation. it is hoped that the contribution of this study is to bring together the knowledge of critical success factors with the use of computer cartoons to permit analysts and users access to a particularly revealing source of information which spotlights the gap existing between actual and intended information systems processes and products. many researchers have commented on the desirability of using humor in the workplace; however, this paper is innovative in specifically recommending that using and interpreting computer cartoons can assist in assessing expectations surrounding critical success factors whose attainment shapes the design, implementation, and use of information systems in organizations. julie e. kendall proposed curriculum for programs leading to teacher certification in computer science would a school system allow a person with only an eighth-grade education to become an english teacher? would a self-taught math teacher be given full responsibility for the high school math program? the answers to these questions are obvious to everyone: to become a subject-matter specialist school teacher, one must demonstrate competency by fulfilling state certification requirements. yet these same school systems, which have well-defined qualification standards for english teachers and math teachers, today often assign full responsibility for the computer studies/computer science program to teachers who are self-taught or whose formal training is equivalent to an eighth-grade education in the subject. jim poirot arthur luerhmann cathleen norris harriet taylor robert taylor are we becoming addicted to computers? what do we mean by "addicted"? the oxford english dictionary defines it as: "(1) delivered over, devoted, destined, bound. (2) attached by one's own act; given up, devoted, ... naturally attached bondage, which displaces free will." it is a condition in which really objective self-examination does not and probably cannot guide our behavior. it is generally regarded as a pitiful state, over which one has little control. arthur fink economic mechanism design for securing online auctions wenli wang zoltan hidvegi andrew b.
whinston integrating information requirements along processes: a survey and research directions information requirements have traditionally been collected separately for different business functions and then integrated into an overall specification. the recent orientation to a process perspective in managing business activities has emphasized early integration, by concurrently analyzing business processes and information requirements. accordingly, information requirements analysis methodologies should take into account these new integration needs. in the paper, we discuss these new integration needs. traditional methods for requirements integration from database design are analyzed and unfulfilled integration needs are highlighted. then, other research fields are surveyed that dealt with problems similar to integration and offer interesting results: recent developments in database design, software engineering and requirements reuse. finally, we compare the different contributions and indicate open research directions. c. francalanci a. fuggetta panel: dimensions of mobility in the i.t. profession examining "turnover culture" and "staying behavior" (panel session) this panel is designed to examine various dimensions of mobility among it professionals. turnover rates for it professionals have risen from the traditional 6-10% to approximately 20% industry-wide (maddern, 2000) and 25-35% in fortune 500 companies (agarwal & ferratt, 1999). fortune magazine recently noted that quitting in the technology profession has become literally an annual event, as the average job tenure in it has shrunk to about 13 months (daniels & vinzant, 2000). quite simply, this has to hurt. corporate efforts to utilize technology to strategic advantage are often hampered in firms experiencing such high turnover. when it professionals leave their employers, specialized skills, tacit knowledge, and understanding of specific business operations and systems often depart too. this can be a punch to the gut of an organization \--- sometimes a single powerful punch when a key "franchise player" is lost, or a constant pummeling inflicted by a steady exodus of it professionals. michael j. gallivan ephraim mclean jo ellen moore malu roldan editorial policy adele j. goldberg a multi-api course in computer graphics the choice of what to cover in a computer science major course is governed by both the objectives of the course, and the overall goals of the major. we describe a set of design decisions that have resulted in a project-oriented course in computer graphics that uses four different graphics apis: windows gdi, directdraw, java 2d, and opengl. linda wilkens who is carrying the monkey? the service manager concept with the revolutionary changes in hardware and user expectations, the user services of the 1980's will have to adapt to change if they wish to continue providing high quality service to their clients. the service manager concept is but one of the many organizational changes that we are planning to alter our direction. a. sheth s. s. swaminathan management guidelines for pc security troy e. 
anderson objectives and objective assessment in cs1 raymond lister viewpoint: fixing a flawed domain name system lenny foner strategies for leveraging user services resources bob stoller the uw-whitewater management computer systems program employers of students trained in computer science and data processing fall largely in three categories: manufacturers of computer equipment, software houses, and finally end users of the computers. of these categories, most employment opportunities are in the third category, the end user. for each employer there is a range of positions from systems programmer to applications programmer to business systems analyst. figure 1 shows the organization chart of a medium sized systems and data processing area in a company that manufactures consumer products. of the sixty five positions which would require a degree in computer science or data processing, in at most seven (perhaps only two) of these positions would the traditional computer science graduate be preferred (if the employer had a choice). at the university of wisconsin-whitewater our program is aimed at the large number of positions where a business background is helpful. in addition to introductory programming we require three programming courses, two analysis and design courses, a course in hardware and software selection and a course including data base management. in addition, all students must have at least 15 hours of business courses including two accounting courses and a management course. our graduates have the technical ability to be good programmers and/or systems analysts, and they have the business background so that they can talk to users in the users' own language. in developing the major the faculty consulted outside business computing managers and the acm information systems curriculum [1]. many of the courses in the major are very close to those specified in the acm curriculum. jacob gerlach iza goroff a comparative study of information system curriculum in u.s. and foreign universities martin d. goslar p. candace deans synchronous collaborative systems project there are many multi-user or collaborative computing systems and applications being used today, but little is known in regards to evaluating their usability. as a research assistant, i have focused on extending usability inspection techniques for multi-user systems. to do so, i helped to revise heuristics (rules of thumb) and validate them through use, while searching for and recording collaborative problems that can be analyzed and used in further experimental work. heather craven a model of customer satisfaction with information technology service providers: an empirical study amit das christina wai lin soh patrick chang boon lee book review: bandits on the information superhighway paul rubel on secure and pseudonymous client-relationships with multiple servers this paper introduces a cryptographic engine, janus, which assists clients in establishing and maintaining secure and pseudonymous relationships with multiple servers. the setting is such that clients reside on a particular subnet (e.g., corporate intranet, isp) and the servers reside anywhere on the internet. the janus engine allows each client-server relationship to use either weak or strong authentication on each interaction. at the same time, each interaction preserves privacy by neither revealing a client's true identity (except for the subnet) nor the set of servers with which a particular client interacts.
furthermore, clients do not need any secure long-term memory, enabling scalability and mobility. the interaction model extends to allow servers to send data back to clients via e-mail at a later date. hence, our results complement the functionality of current network anonymity tools and remailers. the paper also describes the design and implementation of the lucent personalized web assistant (lpwa), which is a practical system that provides secure and pseudonymous relations with multiple servers on the internet. lpwa employs the janus function to generate site-specific personae, which consist of alias usernames, passwords, and e-mail addresses. eran gabber phillip b. gibbons david m. kristol yossi matias alain mayer the potential of the computing center library a computing center library can provide traditional library services to staff and users, while playing an integral part in the development and dissemination of system documentation. this paper deals with a discussion of the functions of a technical library. it can be a tool for research, teaching, communication among staff and users, and an archival record of computing center development. by providing a coherent classification scheme for information in computer science and related fields, the library can play a major role in decisions about development and distribution of documentation. we present a classification system, based on acm's, that is tailored to the needs of a computing center library. we discuss the importance of the librarian's professional education, sound budgetary planning, workable library procedures, and promotional efforts. gayle m. lambert william w. mcmillan selling your software the academic computing center (macc) at the university of wisconsin-madison has distributed a variety of software for several years. early on, this distribution was casual, informal, and intended to answer whatever requests for software came our way. more recently, we have become the sperry-univac conversion site for spss, and this has forced us to systematize our methods in distributing spss. with this as an incentive, we have recently developed more formal procedures, and are distributing about twenty-five pieces of software. this paper discusses the methods we have developed, and what our experiences have been in entering the potentially lucrative software distribution market. doris r. waggoner what academic impact are high school computing courses having on the entry-level college computer science curriculum? it appears that the computer revolution is irreversible and almost every american's life will be affected by the use of its technology. more and more jobs will require computer literacy and skills, and as a result more courses will be offered by secondary schools to prepare their students. as this area of computer technology expands, the job market will require individuals that have been trained with various amounts of knowledge and skill. many states are requiring a certain degree of computer literacy for high school graduation. these requirements are resulting in the development of secondary curriculum guides at the state and national levels. roger e. franklin bias and responsibility in 'neutral' social protocols lorrie faith cranor reinventing business processes through automation david paper what do exam results really measure?
kathryn crawford alan fekete distance learning model with local workshop sessions applied to in-service teacher training this paper presents the development, implementation and evaluation of a novel distance learning model integrated with local workshop sessions. the model was developed for a large-scale in-service teacher training, and applied to introduce a new computer science curriculum and its didactic approach. the model evaluation showed a significant success, both in terms of participants' attitudes, and the assimilation of the subject matter and its didactic approach. bruria haberman david ginat not without us joseph weizenbaum computer professionals whose scientific freedom and human rights have been violated - 1982: a report of the acm committee on scientific freedom and human rights jack minker teaching microcomputer concepts through modelling brian lees all your consulting needs under one roof at last year's siguccs xiv conference, i compared the tradeoffs between organizing computing center staff by hardware versus by function [1]. the broadly stated conclusion was that it is not feasible to duplicate every service for each class of system that exists at the center or on the campus. it is advantageous to combine services of similar function, regardless of the system being supported. one example where we are organized along functional lines is the front line consulting staff. we find it advantageous to have all our student and professional consultants report to the same supervisor: mini/mainframe consultants, 2 shifts of students at 2 sites; micro consultants, 1 shift of students at a telephone; micro showcase (for purchases), 1 shift of clerks; micro gurus, 1 shift of 2 full time staff. although these people report to the same supervisor, initially they worked in separate locales. this necessitated that a user with a question bore the responsibility of determining where to go for assistance. this often resulted in users being shuttled from one room to another seeking an answer. another concern was reports of poor service in the micro showcase, where students, faculty and staff received information on microcomputer discounts. when first opened, the micro showcase was staffed by the full time micro staff (the micro gurus). after losing or promoting all the technical staff to other endeavors, the showcase was left with only part-time employees possessing clerical rather than technical skills. a crucial decision was made to not hire a technical manager for the showcase. instead it was believed that micro purchase consulting could be broken into two categories: pricing and configuration. we elected to make the showcase clerks responsible for providing price information. we planned to encourage people with configuration questions to make an appointment to see one of the micro gurus. in practice this division proved to be artificial. micro purchases always require a measure of technical consulting. with no technical consultant readily at hand, the utility of the showcase decreased markedly. long time clients of the showcase, who were accustomed to talking with our technical staff, began to complain that it wasn't worth their trouble to stop in. as a former micro guru, i began to receive many calls concerning specific configurations and pricing. since i no longer worked with this information daily, and did not have the voluminous files of spec sheets in my office, i would refer the callers to the showcase.
too often i would hear that they had already called the showcase and had been referred to me, or were unsatisfied with the answer they had received. not only was the showcase inefficient and unreliable in performing its stated function, but we also began to question whether it was necessary. the showcase was conceived as a place where people buying microcomputers could "test drive" different systems. at first the showcase contained cp/m, apple ] [+, atari and commodore systems, as well as the new ibm pc. over time the showcase evolved into a room filled with ibm pc clones which literally looked alike. (as part of a state university system it was not feasible to negotiate a macintosh discount program.) also, purchasers were no longer spending considerable time weighing the features of different models. the usual buying criteria were price and availability of service. since we could not afford to hire a technical manager for the showcase, we considered alternatives that could make available the expertise of the other micro staff. various proposals to assign the technical staff to work limited hours in the showcase were discarded as too unwieldy. finally we opted to eliminate the redundant demonstration hardware and to consolidate our consulting services (both micro and mini/mainframe) into the showcase. since the showcase was no longer a showcase, a new name was needed. we chose info centre as the least offensive of the dozens of names we concocted. removing the demonstration hardware was a bold move. the showcase had originally been conceived as a showroom for microcomputers. instead of featuring hardware, we chose to focus on information resources such as magazines, datapro reports, ratings newsletter, as well as diskette publications including letus abc and computer price index. another bold move was providing micro consulting on a walk-in basis. prior to the formation of the info centre, the micro consultant was only available by telephone. it was feared that a micro consultant at a public desk would be overwhelmed by the large number of micro users on campus. but the necessity of having the micro consultant's technical expertise available to micro purchasers overcame the pessimistic expectations. (an observation made at last year's siguccs conference was that every campus appears to be struggling to support a plethora of microcomputer users without sufficient staff.) by consolidating both the micro and mini/mainframe consulting services into one locale, we also hoped to better serve those users who had problems involving more than one system; such as using kermit to transfer files between a micro and a mini. for convenience sake we also put a "hands-on" laser printer for the mainframe in the info centre. from a management perspective, the info centre accomplished our goals. before the formation of the info centre: several services were located in isolated locales. space was not allocated efficiently. users were required to self-select the correct consultant. there was no convenient technical support for microcomputer sales. micro consulting was only available by telephone. after the formation of the info centre: all public- contact services are housed in one location. less space is needed to house the same number of services. users can be easily referred to the correct consultant. the micro consultant provides technical support to purchasers. micro consulting is available on a walk-in basis as well as by phone. we have learned several lessons since forming the info centre. 
the number of users to be found in the info centre at any given time is less than we anticipated. this scarcity is attributed to three factors: poor publicity, inadequate signs and inconvenient terminals. the info centre was formed during spring break and announced only once in our newsletter. when the majority of users, especially students, returned from spring break the familiar consulting offices were closed and small hand-lettered signs redirected them to a new room. the mini/mainframe consultant was previously located in a terminal room, which made it convenient for the terminal users to ask a question. now a difficulty has to be worth the effort of walking down the hall to the info centre to see the consultant. the problems with publicity and signs could have been avoided. the inconvenience of the terminals was anticipated, but not avoidable. although no one has complained about the lack of demonstration hardware, people who were familiar with the original showcase have commented upon it. ironically, shortly after the formation of the info centre, several vendors offered to loan us new models which we had to refuse. recently both apple and ibm have released new lines of micros that we may regret not being able to display. a benefit of the consolidation is the scheduling ability we now have during semester breaks. by scheduling the two technical consultants and the purchase consultant for sequential shifts of a couple hours apiece, we can keep the info centre open and provide all the services on a limited basis during the slow weeks. since the majority of the technical consultants, both micro and mini/mainframe, are either engineering or mis students, they can often cover for each other if only one is on duty. to promote this cross-consulting, we have replaced the mini/mainframe consultant's terminal with a micro-computer. david stack curriculum descant: interdisciplinary artificial intelligence deepak kumar ricard wyatt hci/sigchi issues for policy '98 austin henderson the problem of producing teachers with computing expertise within the school system annie g. brooking where to place trust peter g. neumann a semester project for cs1 catherine c. bareiss risks to the public in computers and related systems peter g. neumann computing as a discipline the final report of the task force on the core of computer science presents a new intellectual framework for the discipline of computing and a new basis for computing curricula. this report has been endorsed and approved for release by the acm education board. peter j. denning d. e. comer david gries michael c. mulder allen tucker a. joe turner paul r. young visionaries: an interview with professor michael o'leary d taylor ask jack: summer jobs and job skills jack wilson future software development management system concepts b. livson associate-level programs for preparation of computer support personnel (panel) karl j. klee joyce currie little john lawlor pamela matthiesen t. s. pennington josephine freedman karen richards viewpoint: existential education in the era of personal cybernetics steve mann computer scientists and south africa s. hazelburst exploring recursion in hilbert curves (poster session) this tip will describe the use of a graphical tool to explore the recursive hilbert curves and will explain some of the mathematical information that can be visualized using this tool. 
richard rasala competing in the acm scholastic programming contest (abstract) the scholastic programming contest was conceived as a vehicle to showcase the talents of some of the brightest young computer scientists in the world. its founding precept was to provide a competition that is challenging, exciting, and most of all, enjoyable for the contestants. the recent addition of the contest sponsor, at&t easylink services, has made it possible to significantly improve the quality of the contest for contestants at all levels, including the institution of a major prize structure at the contest finals. many participants would agree that this has had the effect of increasing the level of competitiveness between the teams. this panel session will explore the variety of ways in which teams prepare themselves for the competition while maintaining that all-important sense of enjoyment which is at the heart of the acm scholastic programming contest. some questions to be addressed by the panel are: how are the teams selected for the regional competition? what techniques are used to prepare those teams for the regional contest? is preparation for the contest finals any different from preparing for the regionals? in what ways does teamwork play a part in the contest? what effect has lowering the size of the teams from four to three in 1991-92 had on the role of teamwork? what changes, if any, should be made in the contest competition? for instance, does the contest focus too much on "programming-in-the-small" and "quick-and-dirty programming" over current software engineering techniques? should the contest continue emphasizing the procedure-oriented paradigm, or should other paradigms (especially object-oriented) be included in order to reflect current programming trends? this panel is a continuation of a session held at csc '91 [1]. the panelists for this year's session will be mostly composed of advisors and competitors from the international community, reflecting the increased representation in the contest by non-usa teams (from three in 1988 to eight in 1992). due to the timing of the regional contests, the panel participants were mostly undetermined as of press time. [1] bagert, donald; chavey, darrah; mourey, thomas l.; van brackle, david; and werth, john. preparing a team for the acm scholastic programming contest. proceedings csc '91, 5-7 march 1991, page 701. donald j. bagert computer science at western experience with curriculum '78 in a time-sharing environment the computer science programs and environment at the university of western ontario are described. the courses have recently been revised in the light of curriculum '78. we compare the new offerings with curriculum '78, discussing especially the mathematics requirements and courses we have introduced, and showing how a variety of three- and four-year programs is organized coherently. the department has also moved to a virtually total interactive, time-sharing computer environment, even in introductory mass-enrollment courses. we discuss the impact of this, and of the increasing enrollments, on the education we offer and on our academic standards. d. j.m. davies i. gargantini computer science in correctional education james r. aman an empirical study of computer capacity planning in japan capacity planners in japan rely heavily on intuitive and judgmental approaches in their computer capacity planning functions, even more so than their u.s. counterparts, according to the following survey.
the emphasis in large japanese firms on job rotations, lifetime employment, and extensive social and informal interaction among employees helps contribute to the effectiveness of these judgmental methods. s. f. lam editorial peter j. denning experiences in starting computer engineering programs (panel session) daniel d. mccracken manuel d. perez-quinones robert bryant fred springsteel anne- louise radimsky what's new with abet/csab integration (panel session) the accreditation board for engineering and technology, inc. (abet) and the computing sciences accreditation board (csab) signed a memorandum of agreement in november 1998 to integrate csab's accreditation services with abet, with a transition time of approximately two years. during the interim period, the operations of the computer science accreditation commission (csac) are contracted by csab to abet. a committee with csac, csab, and abet representation is working to set up the new commission for accrediting programs in the computing sciences. this new commission will probably be called the computing accreditation commission (cac). other activities are underway to try to assure that this integration goes as smoothly as possible. the panel members will discuss from various points of view the current status of the integration and plans for the completion of the integration. kenneth martin is a past chair of csac. lee saperstein is past chair of eac. della bonnette is a past csac chair and current team chair. doris lidtke is serving as adjunct accreditation director for computing at abet and a past president of csab. doris k. lidtke lee saperstein kenneth martin della bonnette internet voting for public officials: introduction lance j. hoffman lorrie cranor the introductory computer science course gary h. locklair reflections on teaching computer ethics (this paper has been accepted for publication in the proceedings, but the photo-ready form was not delivered in time. copies of the paper should be available upon request at the presentation.) robert m. aiken embedding technical self-help in licensed software pamela samuelson training student staff: a conference-style approach theresa a. m. noble a view of academic computing in china john e. skelton road crew: students at work john cavazos copyrightable functions and patentable speech a multi-modal chat for coordinated interaction (poster session) ng s. t. chong masao sakauchi email lorrie faith cranor death from above over the last 30 years, the american ceo corps has included an astonishingly large percentage of men who piloted bombers during world war ii. for some reason not so difficult to guess, dropping explosives on people from commanding heights served as a great place to develop a worldview compatible with the management of a large post-war corporation. john perry barlow accreditation in canada: an update suzanne gill computers and elections along the twists and turns of this year's presidential campaign trails, computers act as strategic tools that exert a decisive impact on capturing voters' attention. but a national bureau of standards study claims that system implementations are dangerously inadequate at the finish line, where carefully gleaned votes are less carefully counted. k. a. frenkel students at chi 98 brian d. ehret marilyn c. 
salzman how to succeed in graduate school: a guide for students and advisors: part ii of ii marie desjardins retraining computing faculty: a perspective the author has been actively involved in the retraining of college faculty to teach computing science for six years. he is presently recruiting a fifth class for a two- summer masters degree program which addresses this goal, and is preparing for a seventh offering of a week-long, non-credit summer institute. this paper reflects upon the experience of having worked first-hand with nearly 200 faculty members from a great variety of disciplines. it also incorporates interchanges with directors of other formal retraining efforts around the country, and the comments made by some of the nearly 500 faculty who have participated in formal summer retraining programs. from these sources the author tries to clarify the phenomena of retraining and suggests some areas which merit further study. william mitchell editorial steven pemberton qmesh: an interactive visualization tool for quadrilateral mesh-generation qmesh, my undergraduate research project, involves the design, implementation, and experimental evaluation of algorithmic techniques for quadrilateral mesh- generation. a large percentage of this project is the study of techniques for converting triangulations to quadrangulations. this paper is geared towards the understanding of implementation issues that are involved in computing the triangulation of a simple polygon and its dual tree, which are necessary procedures for computing a quadrangulation. michael orr three-level teaching material and its implementation in a teaching situation (poster) jorma sajaniemi marja kopponen thoughts on an information technology bachelor's degree program peter c. isaacson david auter host/target approach to embedded system development is becoming obsolete j. e. tardy software design documentation approach for a dod-std 2167a ada project dod-std-2167a and its predecessor dod-std-2167 impose significant documentation requirements on software development projects. the 2167 documentation set, particularly for documenting the software design through the life cycle, contained a significant, amount of redundancy. also, for ada development projects, 2167 did not adequately recognize the benefits achievable from using ada as a uniform representation of the design and code products throughout the software life cycle. dod-std-2167a is an improvement over 2167, but a contractor and the customer must still be conscious of the possibility of generating documents with limited utility to document producers and reviewers. this paper describes a software design documentation approach being used on the command center processing and display system replacement (ccpds-r) project that uses heavily tailored 2167 data item descriptions (because 2167a was still in the formulation stage when ccpds-r began) to: (1) provide reviewers with appropriate design information during the software development process; (2) provide the system user with the documentation needed to maintain the delivered software; (3) eliminate redundancy; and (4) streamline the generation of the deliverable documents through reliance on information already contained in the software development files (sdfs). the resulting design document set satisfies dod-std-2167a requirements. m. springman publishing research results nell dale viewpoint: pondering pixelized pixies eric m. 
freedman teaching advanced problem solving: implications for the cs curriculum in this paper, we discuss our experiences with an advanced problem solving seminar and acm programming contests. these are both activities that require teams of students to solve problems that are more challenging than those typically encountered in computer science courses. after presenting some effective and not-so-effective strategies in these environments, we conclude the paper by taking our observations and applying them to the computer science curriculum as a whole. by casting a spotlight on the curriculum from this more advanced setting, we are able to identify some weaknesses that must be addressed. because our curriculum is a fairly typical one, we hope that this analysis might help other universities and colleges reflect upon their curricula as well. john paxton brendan mumey hr project - evaluation of vendor documentation (a form) joan lee teaching computing to thousands diane jung-gribble charles kim acm forum robert l. ashenhurst a partnership approach in undergraduate business education walter skok rachel wardley applying tqm in the computer science classroom linda null teaching abroad - how to get the most out of a sabbatical seven years ago our speaker accepted a one year position teaching with the university of maryland university college european division. he describes his own situation as having forgotten to come back! what's it like teaching in europe? how much of europe does one see? what's it like living in the culture? you can do it too! john g. meinke techniques for mass training in basic computer skills the user service group within a university computing center faces a growing training challenge as computing becomes available to much larger numbers of students. in many institutions, larger class sizes for computer training are becoming a necessity. at washington state university (wsu), our class sizes have grown from a maximum of 12 students to a maximum of 250 students in four years. the larger classes require new and enhanced training techniques. this paper presents some techniques which have been successful at washington state university for classes up to 250 students. also presented are some speculations about the impact on training techniques of even larger classes (750 to 1000 students). jim d. mitchell joshua yeidel computer support for cooperative design (invited paper) computer support for design as cooperative work is the subject of our discussion in the context of our research program on computer support in cooperative design and communication. we outline our theoretical perspective on design as cooperative work, and we exemplify our approach with reflections from a project on computer support for envisionment in design --- the aplex and its use. we see envisionment facilities as support for both experiments with and communication about the future use situation. as a background we sketch the historical roots of our program --- the scandinavian collective resource approach to design and use of computer artifacts, and make some critical reflections on the rationality of computer support for cooperative work. susanne bodker pelle ehn joergen knudsen morten kyng kim madsen a microprogramming project for a course in computer systems j. archer harris about this issue… anthony i. wasserman centralized vs.
diversified user services: a user's perspective as a group of professionals who use computers in our work, we see decentralization of computer user services as a possibly inevitable trend of the 1980s. it may be that it is simply no longer possible for a single, monolithic university or organizational computing center to provide adequately all relevant user services to a highly diversified population of users. it is important, therefore, to address the issue of how best to strike a balance between centralized and decentralized user services. that is, what kinds of services would be optimally provided by centralized user service organizations, and what kinds would best be dealt with on a departmental or college-wide basis? c. purvis computer ethics and social issue: an implementation model margaret anne pierce john w. henry reader comments john gehl australian attitudes toward legal intervention into hacking the australian public seems to have a characteristic response to computer crime. it is worth giving some anecdotal evidence to support this statement. during fieldwork [1-5], respondents were quizzed regarding their reasoning for their responses to the criminality of hacking. some results are given here in response to the social attitudes of australians towards reporting evidence of the enactment of computer crimes (see table 1). this is in support of thompson's [12, 13] and others' claims that relatively few computer crimes are, indeed, reported to the police in australia (see also [9]). r. a. coldwell scientific freedom and human rights of computer professionals - 1989 jack minker textual materials for courses in computers and society (sigcas) courses is computers and society are proliferating as the computer becomes more and more a central force in our society. such courses are given at all levels in the university. this poses problems of obtaining the proper materials for these various courses. these problems and their solutions will be discussed in this session. william atchison gerald engel stephen guty j. mack adams stephen cline inexpensive teaching techniques with rich rewards byron weber becker the japanese approach: a better way to manage programmers? with a turnover rate among computer programmers of 25%-50% per year, it's time somebody came up with a better way to manage computer professionals. one approach that holds promise is to create a japanese-style theory z atmosphere in the firm, stressing lifetime employment, non-specialized career paths, collective decision-making, and other holistic matters. paul s. licker stress management for the computing professional the daily stress of the computing professional's life takes its toll. stress management is learning to identify and control the factors creating the stress, thereby releasing more energy for creative and productive work. emphasis will be on: identifying and understanding situations and symptoms of stress on the job, and effective methods for managing stress. participants will have the opportunity to pinpoint the sources of stress typical in their daily lives and to consider ways to use stress management to feel better, achieve greater concentration, and increase ability to maintain a high performance level. 
sandra hollander retaining is talents in the new millennium: effects of socialization on is professionals' role adjustment and organizational attachment this study examined the effects of six organizational socialization tactics on new information systems (is) professionals' role adjustment and organizational attachment. data were collected from 187 newly hired is professionals. the results showed that the six socialization tactics affected new is professionals differently. investiture tactics affected directly all the variables studied except role ambiguity. serial tactics had a direct and positive effect on continuance commitment but a negative effect on intention to quit. both sequential and collective tactics had direct effects on role ambiguity. overall, the social aspects of the socialization process had the most significant effects on new is professionals' role adjustment and organizational attachment. the results suggest that organizational socialization is an important strategy that needs to be considered in both is research and practice. ruth c. king weidong xia integrating simd into the undergraduate curriculum assembly language instruction today, in our view, should include instruction in the newly important area of single-instruction, multiple-data (simd) instructions. such instructions are available on all major platforms, and they considerably speed up operations on arrays, particularly large arrays. this speedup is more pronounced with assembly language than with algebraic language programming, and thus provides another reason for undergraduate students to learn assembly language. we discuss the differences among approaches to simd on various platforms; then we describe our own experience with teaching this material. w. d. maurer usability planning for end-user training d. leonard a. l. waller evaluating effectiveness in computer science education barry l. kurtz nell dale jerry engel jim miller keith barker harriet taylor about this issue… adele goldberg recommendations for changes in advanced placement computer science (panel session) in 1981 the apcs development committee recommended the use of pascal in an ap course whose first exam was given in 1984. this decision was controversial; basic was in widespread use and serious consideration was given to a language- neutral exam and course. in 1985 an ad-hoc committee made recommendations on changing the exam format, essentially creating two courses that correspond roughly to cs1 and cs2. in 1995 an ad-hoc committee was convened to make recommendations on how best to incorporate c++ into the ap course and exam. the decision to adopt c++, made in 1994, was decidedly controversial. the ad- hoc committee made recommendations on a subset of c++ and on classes similar to those in the standard library, but which were safe for novice programmers to use. owen astrachan robert cartwight rich kick cay horstmann fran trees gail chapman david gries henry walkers ursula wolz assessing innovation in teaching: an example marian petre technology, the future of our jobs and other justified concerns peter van den besselaar computer science curricula for high schools (a panel discussion) this panel consists of discussions of current efforts underway to design and test textbooks, lesson plans and teaching strategies for the teaching of computer science in high schools. most high school curriculum development concerning computing has been directed toward the teaching of traditional subjects (physics, chemistry, math) with computers. 
the panelists will have just finished a site visit with hscs, the national science foundation high school computer science project at the university of tennessee. they will offer a critical look at its promises and problems compared to other approaches to teaching high school computing, such as the logo project, and the use of basic. michael moshell education of wireless and atm networking concepts using hands-on laboratory experience krishna m. sivalingam v. rajaravivarma using microcomputers in computer education t. s. chua j. c. mccallum combining the metaphors of an institute and of networked computers for building collaborative learning environments yongwu miao hans-rudiger pfister martin wessner curriculum and teaching delivery change in an international context poland and its education system have been undergoing radical and fast change since 1989 as the country "westernises". the pressures on he in some ways mirror those of western europe (growing classes, industrial and commercial demands), but the starting point is quite different. attempting to meet these challenges, a tempus phare [2] project has brought together 6 western universities and the technical university of lodz in a 3 year programme to redesign programmes and delivery across the faculty of technical physics, computer science and applied mathematics. a progress report on this attempt at major change management is presented. bogdan zoltowski roger boyle john davy the australian card - postscript roger clark the role of ease of use, usefulness and attitude in the prediction of world wide web usage albert l. lederer donna j. maupin mark p. sena youlong zhuang road crew saveen reddy computer assisted instruction at los angeles city college vin grannell the role of the government in standardization: improved service to the citizenry jerry l. johnson jim culp clyde t. poole margaret theibert ronald e. vidmar identifying the gaps between education and training this paper discusses some of the issues concerning education in the academic environment and training in the industrial work environment. recent college graduates, "new-hires", must realize as they enter the workforce, that even though they have completed four year degree programs, they are beginning at an entry-level position. they will need job specific training to make them productive software engineers from their employer's perspective. the aspects of distinguishing between education and training are discussed along with an understanding of how college prepares graduates for employment in the computer industry; specifically, the field of military software development as developed at texas instruments. freeman l. moore james t. streib heal ourselves: a design and prototype for an educational materials information system david langan software documentation and readability: a proposed process improvement the paper is based on the premise that the productivity and quality of software development and maintenance, particularly in large and long term projects, is related to software readability. software readability depends on such things as coding conventions and system overview documentation. existing mechanisms to ensure readability --- for example, peer reviews --- are not sufficient. the paper proposes that software organizations or projects institute a readability/documentation group, similar to a test or usability group. this group would be composed of programmers and would produce overview documentation and ensure code and documentation readability. 
the features and functions of this group are described. its benefits and possible objections to it are discussed. nuzhat j. haneef best practices in corporate training and the role of aesthetics: interviews with eight experts `stop playing around and get to work' is a common refrain. while the two do not seem to mix, research indicates that play has much to offer corporate training when it comes to learning. this paper presents in narrative form, a summary of the interviews with eight master designers of engaging and immersive learning products and the subsequent aesthetic framework for development of such environments. morgan jennings runaway information systems projects and escalating commitment robert c. mahaney albert l. lederer computer supported success in the modified-accelerated reading classroom lois f. kelso ruth a. ross hybrid learning - a safe route into web-based open and distance learning for the computer science teacher the hybrid learner is located on a continuum between the traditional student attending face to face classes in a university and the distance learner who may never visit the institution, except perhaps to graduate. modern methods of web-based open and distance learning make hybrid learning attractive and accessible to students. computer science students in particular make very good hybrid students because the content of the computer science curriculum has a strong practical element that is conducive to independent learning methods, and because they have a familiarity with the tools used in hybrid learning. suggestions are given on how a teacher may develop web-based open and distance learning (web-odl) for hybrid learners. john rosbottom inside risks: securing the information infrastructure teresa lunt is maintainability: should it reduce the maintenance effort? e. burton swanson ethical considerations in gender-oriented entertainment technology melissa chaika measuring the importance of ethical behavior criteria j. michael pearson leon crosby j. p. shim microcomputers are personal... the effects of microcomputers on our personal lives are pervasive. we wear microcomputers on our wrists as watches. we use dedicated microcomputers in appliances as diverse as microwave ovens, televisions, phone answering machines and sphygmomanometers. we buy ourselves and our children various microcomputer implemented games and educational toys. we actually use microcomputers explicitly in our personal computers: for personal data bases, word processors, tools of education, sources of recreation, communicators, and so on. the character of our technological civilization is changing significantly due to the abilities of these "printed computers" constructed on silicon chips. carl t. helmers computer literacy: people adapted for technology y. magrass r. l. upchurch self-instructional modules: their development and use by user services user services organizations are charged with educating large numbers of users, with staffs that are often too small to meet ever-increasing demands. two extremes in methods of educating users can be observed: 1) traditional short courses which are typically lectures and demonstrations; and 2) sophisticated audio/visual techniques or computer-aided instruction (cai). a middle ground can be found for those user services groups who are frustrated by the time and effort involved in teaching short courses, but who do not have the money, staff, or inclination to develop cai programs or videotapes. 
one such alternative is a series of written self-instructional modules, each of which includes discussion of basic computing concepts and "hands-on" practice using computer equipment. the modules have at least three applications: 1) they can supplement or replace lecture/demonstration-type short courses; 2) they can be distributed to faculty members who in turn can use them to teach computing concepts in academic classes; and 3) subsets of the module series can be packaged as user's guides for particular groups of users (e.g. pss users, fortran programmers), with the hope that such materials would help reduce the load on consultants. dianne m. smock an improved introduction to computing emphasizing the development of algorithms and using the apple macintosh pascal many colleges and universities offer an introductory computer science course based on a specific programming language. the department of computer science at the university of kansas has recently created a new environment in order to better teach such topics as problem solving, algorithmic design, elementary programming techniques, and elementary computer techniques. this paper will discuss the transition from a time-sharing environment to a modern microcomputer laboratory. it will also discuss the pedagogic techniques used in the new environment. it is hoped that others will benefit from our experiences. william g. bulgren earl j. schweppe tim thurman an historical review of iterative methods the preparation of this paper was supported in part by the national science foundation through grant dcr-8518722, by the department of energy through grant a505-81er10954, and the u.s. air force office of scientific research and development through grant af-85-0052. the u.s. government retains a non-exclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so for u.s. government purposes. d. m. young remote teaching (panel discussion): technology and experience the demand for computer science education on the college campus is rapidly increasing. this is due to the ever expanding market for computer expertise in industry, government, and academia. the growth in the need for off-campus computer science instruction is also phenomenal. professionals in areas other than computer science---engineering, business, etc.---need to acquire computing skills. computer scientists need to continually keep pace with the rapidly evolving computer technology. this training must be available at sites remote from the college campus. in the era of overall decreasing college enrollments, computer science educators are being requested to service this off-campus market. traditional forms of providing education to this market include faculty or student travel, remote campuses, live video, and audio teleconferencing systems. each has high cost, ineffectiveness, and/or inconvenience factors. it is the purpose of this panel to explore electronic remote education. the particular systems to be discussed are the "electronic blackboard," controlled scan tv, computer teleconferencing and computer-based color-graphics technologies. the first two systems are in use and the latter two are proposed. ted lewis will describe experience with the electronic blackboard and stuart meyer will describe the use of the controlled scan tv. ron clarke and william hankley will describe the proposed usage of the computer teleconferencing and color graphics, respectively, in the remote classroom.
each of these panelists will briefly describe the particular system and address the areas of teaching technique and effectiveness within the specific technology. following the formal presentations, there will be an open discussion of the issues presented. for those people who cannot attend the panel discussion, a very short bibliography on electronic education and teleconferencing is included. virg wallentine william hankley ted lewis stuart meyer ron clark emerging issues in interpretive organizational learning michael j. hine jean b. gasen michael goul the standards factor p. billingsley events robots and programming using legos in cs1 (poster session) pamela lawhead reading and writing in the 21st century elliot soloway the effect of rapid it change on the demand for training information technology is rapidly changing. this increases the complexity of it management. research has yet to explain the it management problems due to such change. the current study developed a survey instrument about such problems based on experiences described in structured interviews with 16 it professionals. the instrument was mailed to a nationwide sample of 1,000 it professionals. two hundred forty-six provided useable data. the research identified five problems of rapid it change, namely _vendor competitiveness_, _poor quality_, _incompatibility_, _management confusion_, and _training demands_. it confirmed a model suggesting that increased _vendor competitiveness_ leads to _poor quality_, _incompatibility_, and _management confusion_, and that these increase _training demands_. the research also offers suggestions for future investigation of the problems and a potential instrument with which to conduct it. john skip bernamati albert l. lederer a fresh look at is graduate programs is needed john t. gorgone a kernel-based synchronization assignment for the operating systems course mark a. holliday java pitfalls for beginners robert biddle ewan tempero in support of scraggs: the issue of research r. raymond e. jaede s. standiford electronic learning materials: the crisis continues carol b. macknight emphasizing design in cs1 martin l. barrett providing learning alternatives what is the best method for providing training to our users? is it workshops, hands-on, tutorials, handouts, or perhaps computer based training? these are some of the questions we must ask ourselves every time we need to develop training material for a new program, service, or announcement. in many cases, the audience addressed is so varied that it is very difficult to target a document or training to a specific segment of our users. our users range from those who have never used a computer before to those who have used computers extensively, and all want only the relevant information to meet their needs. matters are now even more difficult because individuals may have microcomputer experience, mainframe experience, or both. in addition, a user with experience on one type of computer but not the other, who needs to transfer information, will probably lack important information. another phenomenon that occurs frequently is that the person who comes to ask the questions is not the one who generated the data or program. instead, they have been given the task of making it work without any background knowledge. in actual fact, it seems there is no one "best" way of training.
we need to pool all of our knowledge of training, computers, life experiences, problem solving techniques, and anything else that might be helpful to provide us with alternatives. jerry martin mis as a discipline: a structured definition diane m. miller a two-level investigation of information systems outsourcing kichan nam s. rajagopalan h. raghav rao a. chaudhury computer language usage in cs1: survey results the reid report [reid 94] is a list of more than 400 schools throughout the world and the language they are using in their first computer science course (commonly referred to as acm's cs1). it is a voluntary report updated regularly. its list of schools is not exhaustive or complete. based on this report, 139 colleges and universities within the united states who are not using pascal were surveyed. the intent of the survey was threefold: to find out why these schools are using their choice of language in their cs1 course, how the instructors at these schools feel about their current language as it compares to pascal in the way it aids in the teaching of introductory programming skills, and which language these instructors feel is the best one to use in a cs1 course.the results of this survey, conducted in february 1995, are summarized below. suzanne pawlan levy teaching workshops to college faculty, staff, and administrators helping users become self-sufficient is one of the primary functions of academic computing in a college or university computer center. one-on-one consulting, the teaching of literacy courses and workshops, along with the distribution of documentation and newsletters, are some of the methods employed to help the user attain this self-sufficiency. i have worked in academic computing at temple university in philadelphia, trinity university in san antonio, cuyahoga community college in cleveland, and montgomery college in rockville, maryland. at these institutions, most of the aforementioned methods of helping the user attain self-sufficiency have met with little success. manuals and newsletters are distributed for free, or for a small amount of money, to faculty, staff, and administrators. even though users eagerly accept the written material, they rarely read it as can easily be inferred by the questions they later ask. consulting is made available, but there aren't enough staff members to handle all of the needs for one-on-one consulting. the success of computer literacy courses often depends on work done outside the scheduled class time. at one institution where i taught literacy classes, the courses generated tremendous initial enthusiasm but that enthusiasm waned once the term got under way. of all of the methods i have used to help the users become self- sufficient, workshops have been the most successful. this success can be measured by the types of questions workshop participants later do or do not ask relating to the topic that was taught. in this paper, i will draw upon my experiences in academic computing at the four institutions at which i have worked. at these colleges and universities, workshops have typically been two-hour sessions for groups of ten to twenty people. i will examine some of the reasons for the success of workshops, the evolution of the techniques i have used, and the rules-of-thumb i have developed for giving successful workshops. susan j. 
bahr computer center consulting on personal computers: a changing role for a large computer center a large computer center in an academic setting generally supplies much of its computer services in terms of some large timesharing facility. in recent years the availability, power, and performance of minicomputers and microcomputers have made distributive systems more attractive than a single mainframe facility. as personal computers become more important at rensselaer polytechnic institute (rpi), the consulting role of the center is changing. this paper discusses some of our experiences with this changing role. geof goldbogen jon finke richard a. park the cutting edge suzanne weisband software engineering throughout a traditional computer science curriculum as software engineering (se) is becoming increasingly important as a discipline for computing professionals, so is it becoming an increasing emphasis in undergraduate computing education. the curricular revisions described here represent an attempt to incorporate se principles throughout an undergraduate curriculum in computer science (cs). the emphasis here, however, is not one of wholesale overhaul into a se program, possibly to the detriment of other strengths in the previous curriculum. rather this is a non-radical augmentation and change of focus in certain aspects of a strong, but somewhat traditional cs curriculum. commentaries on the past 15 years a longtime member comments on the trends, happenings, events, and accomplishments of acm in the period 1972-1987, followed by "self-assessment procedure xvii." eric a. weiss trusted recovery sushil jajodia catherine d. mccollum paul ammann bringing technology into pedagogy: a session presented at ccms, march 1989 jim moss larry levine william ayen software process management: lessons learned from history regarding history, george santayana once said, "those who cannot remember the past are condemned to repeat it." i have always been dissatisfied with that statement. it is too negative. history has positive experiences too. they are the ones we would like both to remember and to repeat. the three papers in this session are strong examples of positive early experiences in large-scale software engineering. the papers are: h.d. benington, "production of large computer programs," proceedings, onr symposium, june 1956. w.a. hosier, "pitfalls and safeguards in real-time digital systems with emphasis on programming," ire transactions on engineering management, june, 1961. w.w. royce, "managing the development of large software systems: concepts and techniques," proceedings, wescon, august 1970. given the short lifespan of the software field, they can certainly be called "historic." indeed, since many people date the software engineering field from the nato garmisch conference in 1968, two of them can even be called "prehistoric." they are certainly sufficiently old that most people in the software engineering field have not been aware of them. the intent of this session is to remedy this situation by reprinting them in the conference proceedings, and by having the authors (or, in one case, hosier's colleague j.p. haverty) discuss both the lessons from their papers which are still equally valid today, and the new insights and developments which have happened in the interim. b. w. boehm business process reengineering (abstract): roles for information technologies and information systems professionals barbara j. bashein m. 
lynne markus patricia riley conveying technical content in a curriculum using problem based learning alan fekete tony greening jeffrey kingston workshops offer mentoring opportunities janice e. cuny making the connection: programming with animated small world in learning to program, students must gain an understanding of how their program works. they need to make a connection between what they have written and what the program actually does. otherwise, students have trouble figuring out what went wrong when things do not work. one factor that contributes to making this connection is an ability to visualize a program's state and how it changes when the program is executed. in this paper, we present alice, a 3-d interactive animation environment. alice provides a graphic visualization of a program's state in an animated small world and thereby supports the beginning programmer in learning to construct and debug programs. wanda dann stephen cooper randy pausch teaching on the other side: computer science education in industry vance christiaanse teaching software development in a studio environment james e. tomayko commentators mike godwin william a. bayse marc rotenberg lewis m. branscomb anne m. branscomb ronald l. rivest andrew grosso gary t. marx acm forum: letters robert l. ashenhurst self-perceptions and job preferences of entry-level information systems professionals: implications for career development ephraim r. mclean john r. tanner stanley j. smits quality is free: but how do you implement total quality and restructure information technology at the same time? brenda l. firquin the ntu computer science program sartaj sahni the new challenges of e-learning william h. graves an integrated, breadth-first computer science curriculum based on computing curricula 1991 john paxton rockford j. ross denbigh starkey lower and upper bounds for attacks on authentication protocols scott d. stoller fully automatic assessment of programming exercises automatic assessment of programming exercises has become an important method for grading students' exercises and giving feedback for them in mass courses. we describe a system called scheme-robo, which has been designed for assessing programming exercises written in the functional programming language scheme. the system assesses individual procedures instead of complete programs. in addition to checking the correctness of students' solutions the system provides many different tools for analysing other things in the program like its structure and running time, and possible plagiarism. the system has been in production use on our introductory programming course with some 300 students for two years with good results. riku saikkonen lauri malmi ari korhonen electronic consulting mark d. eggers charles peirce's existential graphs: a logic tool in java a hundred years ago the american philosopher peirce found the linear notation and accompanying rules of traditional logic systems restrictive. he developed what he called existential graphs (eg), which allow the user to express logical statements in a completely elizabeth hatfield debbie kilpatrick lut wong is earth day a computer holiday? buck bloombecker organizational experiences and career success of mis professionals and examination of race differences wayne m. wormley magid igbaria integrating computers into the university curriculum: the experience of the uk computers special initiatives in the field of computing and information technology in the united kingdom have had a mixed history. 
recent appraisals of the much-publicized alvey programme, for example, cast real doubts on the extent to which any centrally directed initiative can effectively mediate change in a field developing as quickly as information technology [1]. where computing and education coincide in a single special initiative, the experience appears particularly unhappy. ndpcal and mep have come and gone, leaving no enduring positive mark on our educational system [2]. indeed, if a recent report in the times is to be believed, four out of five teachers in the uk do not judge that the introduction of computers into schools has enhanced any aspects of teaching methodology [3]. and, to judge from the continuing stream of publications [4] bemoaning the acute shortage of adequately trained personnel to fill it-related positions in industry, commerce, government, and education, the widespread introduction of computers into schools, colleges and universities has yet to even start palliating the problem of a perceived mismatch between the products of our educational system and the needs of employers. on the face of it, therefore, the computers in teaching initiative (cti) appears to be attempting the impossible, or, at best, entering a domain where recent past experience is discouraging. but a longer view of history is replete with examples where technological innovation has been the lynchpin of successful educational and economic change. sir john hale's interpretation of intellectual revival in renaissance florence is an apposite example [5]. the computers in teaching initiative was prompted by the nelson report, published in 1983, which identified the inadequate provision of student workstations in uk universities [6]. the report explored future scenarios, delineating the well-equipped campus of the early 1990's as having one workstation available for every five undergraduates. although adventurous by british standards, the aspirations for the next decade encompassed within the nelson report were very modest by comparison with the levels of workstation provision already being achieved by certain american universities [7]. rapid developments in computer nigel gardner information technology for flexible and learning and training (poster) valery zagursky a collaborative model for training non-it workers jean-pierre kuilboer noushin ashrafi vivion vinson a national perspective on computer security computers and associated resources influence many aspects of modern society. the continuing dependence on information technology and the fifth generation of computers prompts attention to potential abuse and misuse of these resources. recent concerns have focused on the possible harm to the nation, society and individuals from misuse and abuse of computerized resources. the intentional and unintentional threats to computerized resources trigger consideration of appropriate measures to cope with the problems of computer vulnerabilities. the potential and real danger of unauthorized disclosures, unwarranted manipulation of data, degradation of services and capability, and destruction of resources prompts attention to protecting computerized resources in the fifth generation and beyond. this paper examines some of the major computer security problems and remedies within the broad context of defense and national security, critical systems, privacy and confidentiality, and computer crime. highlights of specific problems confronting decisionmakers in selecting the appropriate approach to protecting computerized resources are reviewed.
the paper reviews recent legislative proposals and executive branch actions as well as private sector activities having implications on future developments. this paper explores some of the problems that may require a more comprehensive examination by government and the private sector to assure that computer security requirements are met by the fifth generation. identification of some of the critical issues related to implementing computer security measures will be a major focus. the prospects for new technology to meet the fifth generation requirements will be outlined. louise giovane becker changes in the management of the information systems organization: an exploratory study the present study was conducted as exploratory research to understand the activities and beliefs of is and line managers, with regard to the management of information technology (it). semi-structured interviews were conducted with 25 managers in seven firms to understand their current initiatives, future vision, and the factors driving change. managers from three different positions from each company were interviewed---a senior is manager, an is application development manager, and a line manager. the results showed that there were a variety of different initiatives underway--- with the most common ones being rapid prototyping, an emphasis on purchasing packages, business reengineering, and building it infrastructure. beyond these few commonalities, different firms were adopting a variety of changes to their is organization structure, working relationships with users and outside vendors, system development tools and methodologies, and their training and other human resource policies. similarly, a broad range of factors were cited as driving changes in it management practice---with these clustering into four major sets of drivers: business cost pressures, business service pressures, is service pressures, and technology-push factors. few respondents were able to articulate a vision for the is organization of the future, beyond describing their expectations for the initiatives currently underway. of those respondents who provided such a vision, few described the steps required to achieve the transition. these findings are analyzed in terms of a management framework derived from harold leavitt and discussed in light of other recent research on is management. questions for follow-up research are suggested. michael j. gallivan software engineering and undergraduate computing curricula most graduates of baccalaureate computing programs (at least in the united states) take jobs in the computing profession immediately upon graduation. these jobs generally involve software development or maintenance activities, and therefore demand certain software engineering skills and knowledge. it appears, however, that typical undergraduate computing programs are at present inadequately preparing their students in the area of software engineering. this was in evidence at the 1991 international conference on software engineering, in comments made at plenary sessions and in a special workshop on directions in software engineering education. it is also evidenced by recent attempts to encourage development of software engineering programs at the undergraduate level, separate from those in computer science or computer engineering. 
the purpose of this panel is to explore the following question: "what software engineering knowledge and skills should be required in a typical baccalaureate computing program, assuming the program is preparing its graduates for immediate entry into the computing profession?" in addressing this question, the panel will consider the recent report of the acm/ieee-cs joint curriculum task force, proposed software engineering curricula for undergraduates, and accreditation criteria. stuart h. zweben exploiting the benefits of y2k preparation stewart robertson philip powell the "new" contract between is employees and organizations: workplace and individual factors mary b. burns rosann webb collins labor shortfall in hong kong's it industry kam-fai wong bringing real-world software development into the classroom: a proposed role for public software in computer science education daniel nachbar employment decisions by computer science faculty: a summary of the 1980-81 nsf survey over the past several years a great deal has been written, and even more said, regarding the crisis in employment of faculty in computer science departments (1,2,3,4,5). in order to obtain data regarding the magnitude of the problem, and reasons for it, the national science foundation, in the 1980-81 academic year conducted a survey of ph.d. granting departments of computer science in the united states. this paper will present a summary of the results of the survey. data obtained regarding the departments were consistent with that reported earlier by hamblen (6,7), and taulbee and conte (8). results of the survey regarding motivation for professional mobility were consistent with those reported by eisenberg and galanti (9) regarding the engineering disciplines. gerald l. engel bruce h. barnes network user services and personnel issues in a network environment, users are remote from the computer center. user services personnel, being the initial and possibly the only contact, represent the computer center to the users. conversely, they must represent the view of the users to exert the proper influence on the operation of the computer center. with timesharing, users have close contact and scrutiny of system software. this "user intimate" software should be developed and maintained by user services. personnel issues for this expanded user services are discussed: staffing, organization, recruiting, training, appraisal and work satisfaction. clement luk road crew: students at work lorrie faith cranor technology as enabler: keeping work distinct from home while working at home christine salazar a reverse engineering environment based on spatial and visual software interconnection models reverse engineering is the process of extracting system abstractions and design information out of existing software systems. this information can then be used for subsequent development, maintenance, re-engineering, or reuse purposes. this process involves the identification of software artifacts in a particular subject system, and the aggregation of these artifacts to form more abstract system representations. this paper describes a reverse engineering environment which uses the spatial and visual information inherent in graphical representations of software systems to form the basis of a software interconnection model. this information is displayed and manipulated by the reverse engineer using an interactive graph editor to build subsystem structures out of software building blocks. 
the spatial component constitutes information about where a software structure is presented, and the visual component about how it looks. the coexistence of these two representations is critical to the comprehensive appreciation of the generated data, and greatly benefits subsequent analysis, processing, and decision-making. h. a. muller s. r. tilley m. a. orgun b. d. corrie n. h. madhavji designing the new american schools robert pearlman an international student/faculty collaboration: the runestone project students of today need to be prepared to work in globally distributed organizations. part of that preparation involves teaching students to work effectively in teams to solve problems. students also must be able to work with individuals located at distant sites where there is no or very little face-to-face interaction. the runestone project, an international collaboration between two universities, adds new dimensions to student teamwork, requiring students to handle collaboration that is remote, cross-cultural, and technically challenging. runestone is a three-year project funded by the swedish council for the renewal of undergraduate education. a pilot study in 1998 was followed by a full-scale implementation in 1999 with another implementation ongoing in 2000. each time this global cooperation project is run, both students and faculty learn important lessons in how to work with each other in a virtual environment. this paper discusses both student and faculty learning outcomes for runestone 1999. mary z. last mats daniels vicki l. almstrum carl erickson bruce klein the first 25 years: acm 1947-1962 lee revens the chi conference review process: writing and interpreting paper reviews wendy e. mackay down the road: university of pennsylvania: the dining philosophers sara m. carlstead user's guide for the shadowy crystal ball: practical tips and techniques for planning the future john w. smith educational development opportunities for computer professionals this session surveys the professional and educational opportunities prevalently available to computer and data/information processing professionals. each of the session participants has been invited to submit a formal paper and a discussion on selected topics in this area. professional and educational development is currently promoted by a number of societies, including the association for computing machinery (acm) and the ieee computer society. these societies sponsor educational tutorials and seminars, community action groups, and chapter and committee involvements in the technical communities. one of these, the acm, sponsors a self-assessment program, which provides a set of testing procedures for individuals to personally assess their knowledge of the computer sciences. whereas self-assessment emphasizes personal learning, the certification programs offered by the institute for certification of computer professionals (iccp) provide a means for establishing that individuals have accomplished a certifiable level of professional competence in particular areas. paul p. howley acm general elections diane crawford effective computer education the design of an effective program to provide pc education is an exciting, challenging, complicated, rewarding activity. such a program offers professional growth and development opportunities to university faculty, new corporate relationships for both the professors and the university, and the opportune moment for exciting explorations into the areas of curriculum development, evaluation methodology, and teaching styles.
the design of an effective computer education program can be an important addition to a university's overall educational mandate. key issues for any successful computer education include curriculum development, instructional design, course development, delivery of instruction, evaluation, and overall project management. this paper provides some guidelines for those who desire to design successful pc computer education. under the author's direction, over 1,000 days of education have been provided to approximately 7,500 individuals during the past three years. the impact of such a program on the lives of the professors involved and the corporate clients receiving the education has been enormous. david sachs natural language in computer human-interaction: a chi 99 special interest group fifty-three people from across the world participated in the chi 99 special interest group on natural language in computer human-interaction. the sig's main goal was to provide an opportunity for chi 99 attendees from two research communities, natural language processing (nlp) and human-computer interaction (chi), to discuss issues of mutual interest. the sig embraced natural-language interfaces of all kinds, including text, spoken and multi-modal interaction. this report includes the results of e-mail discussions following up on the sig itself. nancy green david novick the practical management of information ethics problems: test your appropriate use policy, part iv john w. smith a firm-level model of it personnel planning alice andrews fred niederman a view from the entry level: student perceptions of critical information systems job attributes globalization and the expanding digital nature of today's business environment have resulted in skyrocketing demand for qualified it personnel. organizations are increasingly turning to recently graduated is students as a critical source of supply. in light of this, the objective of this paper is to identify the relative importance of job attributes considered by is students when evaluating a job. drawing on content motivation as a general theoretical foundation, this paper surveyed 243 is students at two large southwestern universities to determine what job attributes they find most appealing when assessing potential positions and organizations. prior research was aggregated to compile a comprehensive list of job attributes. two methods were used by respondents to evaluate these attributes: rank ordering and points allocation (weighted measure). the results provide insight into the relative importance that is students attach to specific job attributes. this information may be used by organizations to provide relevant information to potential employees, resulting in improved alignment between employee preferences and organizational reality, with a subsequent effect on job attitude and turnover. tim goles predictions of the skills required by the systems analyst of the future t d crossman international perspectives: software strategies in developing countries richard b. heeks a graduate diploma in computing dermot shinners-kennedy the role of stories in computer ethics john m. artz hail and farewell pat billingsley the martyrdom of user services this paper proposes to demonstrate how consideration of the mind state that spawned the organism, its evolution thus far, and the nature and extent of the forces that impinge upon it might lead one to prognosticate a violent mutation in the user services phenomenon at this juncture.
john simon assessing object-oriented technology skills using an internet-based system in this paper, we describe a web-based system that defines training needs for object-oriented developers by identifying the strong and the weak areas of their knowledge and skills. the system is based on the use of two tools, gaa [8] and ukat [3], developed at the computer research institute of montreal (crim). ukat (user knowledge assessment tool) uses a state-of-the-art knowledge assessment method to create a user profile of proficiency in a subject domain. gaa (intelligent guide) is a web-based training system that uses the ukat to personalize a training course and facilitate self-learning. ahmed seffah moncef bari michel desmarais gender discrimination in the workplace: a literature review ellen isaacs perspectives on social responsibility for the computing field the scale of contemporary social problems in arenas such as health care, education, or economics is so large that the computing community must play a key role in finding solutions. this panel discusses the role of such institutions as acm, cra, cstb, and nsf in initiating these and similar activities. karen a. duncan accountability through transparency: life in 2050 brock n. meeks computer science - too many students, too many majors (panel discussion) during the last decade enrollments in computer science courses have increased dramatically. classes are very large and faculty members are nearly impossible to recruit. the "seller's" job market has contributed to both of these problems and universities must deal with the problems of too many students and too many majors. the panelists have been asked to address the following questions: 1) how large should classes be at the upper and lower levels? 2) does everyone have a birthright to be a computer scientist? 3) are there reasonable ways to limit enrollments? 4) are we pleased with the quality of the average undergraduate computer science graduate? 5) have academic standards declined because of large enrollments? although the panelists do not have solutions to all of these problems, we feel that it is important to identify them and discuss what sorts of alternatives have been tried. it is expected that about one-half of the session's time will be devoted to audience interaction. norman gibbs kenneth l. williams kenneth danhoff robert korfhage jack alanen organization of final year projects this paper details a method for the organization of final year computer science projects which has been found to be extremely beneficial both from the point of view of the students and the supervisor. these projects count for 20% of the final degree result in this department, and are a crucial part of the development of the student. the model proposed for the organization of the projects is one in which the students initially work in a group, co-operatively developing a basic platform on which they can then individually develop their projects. this organization allows the supervision to be achieved through regular group meetings, and provides the students with good experience (and all the benefits) of working in a group, while at the same time fulfilling the objective of getting the student to work on a complex task independently. an example of this project organization in practice, in the field of computer vision, is also detailed. kenneth m. dawson-howe bottom-up implementation of tqm: a new paradigm in bringing excellence to the customer robert h.
august mark eversoll teaching design effectively in the introductory programming courses teaching program design in an introductory programming course is a big challenge for instructors. over the past few years many studies have been performed on how and when to apply design in cs 1. some researchers suggest that design methodologies and problem solving techniques should be introduced before any programming is taught, while others believe in the gradual integration of design into programming courses. we believe that the gradual integration of design into programming courses can be done effectively provided that appropriate measures of implementation are included. in this paper we present an approach to integrate design into the first programming course. the outcome of this integration is also presented. from washington: infowar: ak-47s, lies, and videotape neil munro the classroom as panopticon: protecting your rights in the technology-enhanced workplace marita moll teaching with technology at my fingertips elizabeth s. adams an efficient set of software degree programs for one domain there is increasing urgency to put software engineering (se) programs in place at universities in north america. for years, the computer science and professional engineering communities neglected the area, but both are now paying serious attention. there is creative tension as efforts accelerate to define the field faster than is possible. this paper discusses a set of four software degree programs that have evolved over 14 years at a small university with close ties to one software community. the context is computer engineering in a department of electrical and computer engineering, so the natural domain is software that is close to the hardware. this means an emphasis on real-time, embedded, and, to a lesser extent, safety-critical issues. the newest of the four programs is a ph.d. program. it demonstrates that ph.d. programs can be created with limited resources, given the right circumstances. if similar circumstances exist in other small universities, the rate of ph.d. production in software engineering may be increased, while maintaining quality. this paper describes the four degree programs, how they are related to each other, and how the programs have evolved. it makes limited comparisons to programs at other universities. terry shepard innovative teaching materials and methods for systems analysis and design this article describes materials and methods for a course in systems analysis and design. while the article is prescriptive rather than empirical, it offers new directions for instructors who wish to adopt an action research approach to syllabus development. information systems developers perform two roles in the process of managing a development project: the developer is both analyst/designer and facilitator. traditional textbooks for systems analysis and design courses emphasize the analyst/designer role, but say little about the facilitator role. we have developed course materials and teaching methods to address facilitation skills: outcome thinking, group process, and communications. we also emphasize some analyst/designer skills that are not addressed in the traditional texts, such as creative thinking, socio-technical systems, and case tool concepts. our teaching methods include the personal journal, which allows students to tailor classroom materials to their own needs, as well as other methods for providing experiential learning of course materials. lorne olfman robert p.
bostrom the legal protection of computer software as new as the technology itself, a sometimes confusing array of legal protections is now available to safeguard the huge investment of time and money that goes into the development of sophisticated commercial software. robert l. graham trust mechanisms for hummingbird jason evans deborah frincke computing consequences: a framework for teaching ethical computing how to prepare tomorrow's professionals for questions that can't always be answered with faster, better, or more technology. chuck huff c. dianne martin the use of the internet in the teaching process the aim of this project is to present a teaching method based on the http protocol. the main goal of this method is to aid the self-study process with the use of interactive pages based on the http protocol. such pages are generated for each student individually, depending on his progress in internalising knowledge. dariusz put janusz stal marek zurowicz prediction of student performance using neurocomputing techniques cameron browne james hogan john hynd computer equity for the future yolanda s. george shirley m. malcolm laura jeffers hci in italy: an overview of the current state of research berardina nadja de carolis adapting computing services to the unwelcome realities of tuition hikes and budget cutbacks: a small college story jenny walter software ownership and charging this paper discusses software ownership and its charging. james a. buss forum diane crawford editorial ryuji kohno sergio verdú zoran zvonar evaluation technique of software configuration management (poster session) jin xizhe from washington: budget fy 1991: the numbers tell the r&d story this is the time of year when talk turns to fiscal budgets. in washington, however, such banter typically involves astronomical sums of money. when president bush released his proposed budget for fy 1991 last january, the reaction from the scientific community was mixed. many observed that seldom have research and development (r&d) projects been given as prominent a place in a federal budget. other industry watchers, while less enthusiastic, had to agree that in many respects r&d fared better under this year's budget than last year's. however, understanding the details of the budget is far more important than reviewing its broad outlines. for that reason, the american association for the advancement of science (aaas) calls members of the scientific community to washington each spring to dissect, denounce and defend the government's r&d funding plans for the next fiscal year. the colloquium on science and technology policy conference centered around the analytical findings of the aaas's research and development fy 1991 report. the three-day conference was peppered with high-ranking white house officials who either defined the specific branches of the government's r&d interests or discussed the possible implications the budget poses for future projects. there is an overall 7 percent budget increase for r&d, with a 12 percent increase in nondefense r&d programs and an 8 percent increase in basic research. in the area of computer science and engineering, darpa, nsf, and the onr remain the largest sources of government funds for r&d. federal support in computer science is divided into two basic categories: defense and civilian. more than 60 percent of total federal r&d expenditures in computer science and engineering are supported by the defense sector.
moreover, federal r&d activities are conducted in government and nonuniversity labs as well as in universities. the majority of the funding for computer science research supports activities outside of universities. d. allan bromley, director of the office of science and technology policy (ostp), explains the thinking behind the president's budget: to prioritize funding requests, the office of management and budget follows three basic guidelines. they include (1) programs that address national needs and national security concerns, (2) basic research projects, particularly university-based, individual and small group research, and (3) adequate funding for the nation's scientific infrastructure and facilities. bromley points out that one of the primary avenues ostp will emphasize this year is high-performance computing---a dynamic technology for industrial, research and national security applications. of first concern will be the development of hardware to enhance mainframes and address the parallelism needed to make teraop computers perform trillions of operations per second. the next phase will be software development, followed by the construction of a fiber optic network. bromley, who also serves as assistant to the president for science and technology, calls the fy 1991 budget an excellent one for r&d. however, he is quick to add there are problems with those numbers. one of the most serious involves the funding rate for grants at the nsf and the national institutes of health (nih). despite a decade of funding increases, the money available for new, young investigators is very tight. indeed, the scientific community is partly to blame, he says. "we argued for multiyear grants and contracts to cut down on the amount of paperwork required to do research," recalls the ostp director. "both nsf and nih have responded to those requests, and in the process they built substantial 'outyear mortgages' for themselves." diane crawford "i do and i understand": mastery model learning for a large non-major course mark urban-lurain donald j. weinshank issues and threats affecting timesharing and microcomputer sales robert gillespie patrick gossman participation in computer security grew in '92 lee ohringer multiple micros for education the advent of the microcomputer has caused a profound change in our thinking about the teaching of programming. up to now we have been assuming that a computer is necessarily expensive and must be shared by all students. with the appearance of time-sharing systems, many universities, including the george washington university, purchased large numbers of terminals for student use; but students were still using a single large or mid-size computer, albeit through several terminals. it is now possible, however, to purchase an entire computer for the price of a terminal. such a computer is necessarily limited in scope; but it can serve admirably for the teaching of programming at an elementary level, as well as handling certain more advanced tasks. about a year and a half ago, the computer committee of the department of electrical engineering and computer science, of which i was the chairman, made a decision to purchase sixteen microcomputer systems for student use, primarily for the teaching of basic, and secondarily for assembly language and pascal. w. d.
maurer assembly programming on a virtual computer this paper introduces the vpc assembler, a windows 95/98 assembly programming environment that targets the virtual pc, a simulator of a small computer system based on the intel 8086 architecture [1]. the assembler provides an editor, a debugger, and views of the assembly program's variables, the cpu's registers, and the virtual pc's output. the vpc assembler was designed as a learning tool for courses that introduce assembly programming or for courses, such as computer architecture or organization, that briefly cover the fundamentals of assembly. pierre a. von kaenel the payoff on the information technology investment stephen jonas towards an error free plagiarism detection process for decades many computing departments have deployed systems for the detection of plagiarised student source code submissions. automated systems to detect free-text student plagiarism are just becoming available and the experience of computing educators is valuable for their successful deployment. this paper describes a four-stage plagiarism detection process that attempts to ensure no suspicious similarity is missed and that no student is unfairly accused of plagiarism. required characteristics of an effective similarity detection engine are proposed and an investigation of a simple engine is described. an innovative prototype tool designed to decrease the workload of tutors investigating undue similarity is also presented. thomas lancaster fintan culwin question time?: global village or global police station? m. grundy a design automation roadmap for europe (panel discussion) joseph borel jean-jacques bronner frank ghenassia wolfgang rosenstiel irmtraud rugen-herzig anton sauer using subprograms as the main primitive in teaching ada (abstract only) the use of ada packages allows us to teach programming using a complete top-down approach. students begin by calling subprograms whose specifications are visible, but whose bodies are hidden. using this method, and a rigorous set of naming conventions, the student writes programs which are fully self-documenting. claims about the soundness of this teaching method are supported using (1) experiential evidence, and (2) formal arguments concerning the closure under syntactic and semantic necessity of certain selections of features in ada. barry burd technology transfer from university to industry jim foley about this issue… adele j. goldberg project management expert system (abstract only) effective management and technical support are required for the success of large-scale projects. pert packaged programs [1], [2], [3] provide scheduling capability. however, many tasks must still be performed by a human being, including activity plan generation, construction of activity networks, modification of a schedule produced by a pert program, and project monitoring. to support these tasks, we believe that the application of artificial intelligence techniques to this area has great potential. accordingly, we are developing an experimental project management expert system named promx. the main tasks that promx supports are activity plan generation, activity scheduling, and project monitoring. the ordinary flow of these tasks is shown in fig. 1. activity plan generation is supported using a knowledge base consisting of the activities and the constituent relationships between them in a given project domain. a user first selects the template of the activity necessary to attain the project goal and assigns values to the attributes.
the decomposition of an activity into its constituent parts is recursively performed by promx in cooperation with the user. the activity plan generation is followed by the activity scheduling. during the first phase of the activity scheduling, an activity network is constructed using the knowledge of precedence constraints between the activities. then, time and resources are assigned to each activity in the activity network. this activity scheduling is controlled by the heuristic search method to avoid combinatorial explosions. during project operation, the project is monitored to control its progress. the user supplies progress data to promx periodically. promx diagnoses whether the progress of the project is problematic (e.g., behind schedule). if some problems are found, promx makes a suggestion to deal with them (e.g., rescheduling). these project monitoring capabilities are realized using diagnostic and dealing heuristics. promx is implemented in esp [4], which is a prolog-based object-oriented programming language. various kinds of knowledge in the project domain are represented using the object-oriented feature and the logic programming feature of esp. to model activities in the given project domain, the knowledge of activities is represented as objects. the knowledge of the constituent relationships between the activities and the precedence constraints between the activities is represented in the form of a prolog horn clause. the knowledge of diagnostic heuristics and dealing heuristics is also represented as a horn clause. this work is part of the activities undertaken in the fifth generation computer systems (fgcs) project of japan. hideki sato hitoshi matsumoto hiroki iciki the computer science major within a liberal arts environment (abstract) henry m. walker nancy baxter robert cupper g. michael schneider testers and visualizers for teaching data structures ryan s. baker michael boilen michael t. goodrich roberto tamassia b. aaron stibel some thoughts on it employment in new zealand philip j. sallis from washington: the unhappy but beneficial coexistence of the fbi and the tech elite neil munro flexible delivery of information systems as a core mba subject in terms of prior education, culture and life experience, a diverse student profile is evident in the intake into the master of business administration (mba) degree. students may be experiencing tertiary education for the first time (industry experience entry) or adapting to a different education process (international students). in redeveloping the core mba subject, information systems, materials were constructed to support student- driven "just in time" learning. this argues for an information age pedagogical model in which learning can occur with efficiency, at the student's own pace, anytime and at a location of their choosing. the paper outlines the teaching and learning context, delivery infrastructure and activities developed in response to this model. rod learmonth would you buy a shrink wrapped automobile? john weld the quest for excellence in designing cs1/cs2 assignments todd j. feldman julie d. zelenski balancing the forest and the trees in courses henry m. walker the eight-minute halting problem j. paul myers designing an effective macintosh training program kenneth e. gadomski a major in computer applications for small liberal arts colleges a major in computer applications for small, liberal arts colleges is proposed in this article. 
the proposed program has characteristics to allow students to engage in a breadth of study, to integrate knowledge from a variety of fields, and to apply what is studied to their career lives. by emphasizing an interdisciplinary approach to higher education, small, liberal arts colleges are able to interweave general education courses into the computer applications major without making large demands for additional staff. furthermore, students who earn their computer majors in such an interdisciplinary context can be expected to furnish employers with diversity and flexibility in problem solving -- just the sort of new blood many companies and entire industries are crying for. j. wey when gordon r. jones ask jack: careerline q & a jack wilson using meta-level compilation to check flash protocol code andy chou benjamin chelf dawson engler mark heinrich a case-study based approach to assessment of systems design and implementation christabel rodrigues martin atchison handling the incoming freshman and transfer students in computer science kenneth magel social interaction on the net: virtual community or participatory genre? thomas erickson a cs/se approach to a real-time embedded systems software development course joseph m. clifton quality in distance education gordon davies wendy doube wendy lawrence-fowler dale shaffer on the outer: women in computer science courses claire toynbee after-hours assistance jeanne l. lee socially responsible computing ii: first steps on the path to positive contributions ben shneiderman beneath the surface of organizational processes: a social representation framework for business process redesign this paper raises the question, "what is an effective representation framework for organizational process design?" by combining our knowledge of existing process models with data from a field study, the paper develops criteria for an effective process representation. using these criteria and the case study, the paper integrates the process redesign and information system literatures to develop a representation framework that captures a process' social context. the paper argues that this social context framework, which represents people's motivations, social relationships, and social constraints, gives redesigners a richer sense of the process and allows process redesigners to simultaneously change social and logistic systems. the paper demonstrates the framework and some of its benefits and limitations. gary katzenstein f. javier lerch capitalizing on faculty as lobbyists d. bowman federal funding in computer science: a preliminary report b. simmons j. yudken cyberethics bibliography 2001: a select list of recent works included in the 2001 annual bibliography update is a select list of recent books and articles, each with a publisher's date of either 2000 or 2001. for an annotated list of selected books and articles published between 1998 and 2000, see the june 1999 and june 2000 issues of computers and society; and for a comprehensive list of books and articles published before 1997, see my computing, ethics, and social responsibility: a bibliography (available at http://cyberethics.cbi.msstate.edu/biblio). herman t. tavani productivity trends in certain office-intensive sectors of the u.s. federal government it is often said that office productivity is virtually stagnant, increasing only about 4 percent every 10 years. the methodology used to estimate this 4 percent figure is examined and found to be inaccurate!
there is no known way to estimate overall national office productivity trends. productivity trends in a single part of the economy, however, can be examined, namely, office-intensive sectors of the u.s. federal government. productivity in these sectors is found to be anything but stagnant, having increased 1.7 percent annually from 1967 to 1981 and 3.0 percent annually from 1977 through 1981. raymond r. panko putting the u.s. standardization system into perspective: new insights the odds are very high that an american attending an international standardization meeting or consulting in a foreign country will be asked about the u.s. standardization system. how is it organized? who is responsible for developing standards? how many standards? who sees to their implementation? what is the government's role? why is there more than one standard for many commodities? foreign engineers who are used to dealing with their national standards institute can be very critical of the decentralized u.s. system. many are dissuaded from applying u.s. standards because of real or anticipated problems in choosing, obtaining, and applying them. many americans raise similar questions. some think that all standards come from the government. while most u.s. standardization specialists are familiar with the standards in their fields, few have an overview of the u.s. standardization system. bob toth recommendations for master's level programs in computer science: a report of the acm curriculum committee on computer science the acm committee on curriculum in computer science has spent two years investigating master's degree programs in computer science. this report contains the conclusions of that effort. recommendations are made concerning the form, entrance requirements, possible courses, staffing levels, intent, library resources, and computing resources required for an academic, professional, or specialized master's degree. these recommendations specify minimum requirements which should be met by any master's program. the committee believes that the details of a particular master's program should be determined and continually updated by the faculty involved. a single or a small number of model programs are not as appropriate at the graduate level as at the bachelor's level. kenneth i. magel richard h. austing alfs berztiss gerald l. engel john w. hamblen a. a.j. hoffmann robert mathis computing at work: empowering action by "low-level users" andrew clement introductory computer science: the case for a unified view j. stanley warford teaching new technologies (panel discussion) one of the more difficult tasks in this era of adopting curriculums is to keep a program current with technology. there is a growing number of new hardware, software, and concepts that emerge each year. the speakers will share their experiences in bringing to the classroom the new technology that the students will face or should be facing as professionals in the marketplace. the discussions will focus on implementation of courses in computer graphics, decision support systems, and artificial intelligence. these courses have been taught at all levels (undergraduate, graduate, executive mba) in computer science and business environments. the speakers will also hold a discussion on questions from the audience. richard bialac ronald frank allan waren emulators of "historic machines" john a. n.
lee self-plagiarism or fair use pamela samuelson panel discussion (panel session) stan kurzban jim schweitzer don holden paul comba george hertzberg bill miller paperless submission and grading of student assignments john a. gross james l. wolfe facilitating the transition from high school programming to college computer science (abstract only) this paper will report on the development and pilot testing of an entry level course in computer applications (cap 100) designed by an interdisciplinary team of faculty members at state university college of new york at cortland. procedures for development included the writing of a proposal by an interdisciplinary team of faculty members, a faculty forum attended by at least 60 faculty members, a year's developmental work by a computer studies committee, and a summer's intensive development by an interdisciplinary team of faculty. the course was designed to introduce students in all disciplines to both the mainframes and microcomputers on campus. it included introductions to typical applications software on the microcomputer, familiarization of students with various disk operating systems, and an entry level experience with a programming language. henry m. walker using cs2 projects to introduce computer science concepts roberta e. sabin from the president: outlawing technology barbara simons an authentic task based computer literacy course tim bell computing trends in small liberal arts colleges this paper summarizes the information gathered by the author during visits to 40 small liberal arts colleges in the east and midwest during the winter of 1987. it focuses on the following questions: what is happening with computer science programs as colleges are coping with declining computer science enrollments? what trends are noticeable in staffing levels of computing faculty and of administrative and academic computing center support staff? how should colleges balance mainframe and micro computing and how many public access microcomputers are enough? should students be required or strongly urged to buy a microcomputer? should colleges provide faculty and administrators with microcomputers? what about networking? the paper provides tables and graphs to help small colleges answer these questions. peter d. smith centralized versus decentralized computing: organizational considerations and management options john leslie king computer literacy on a university campus. at the state university of new york at albany desktop computers are being loaned to faculty members as part of an overall effort to advance "computer literacy" on campus. this paper describes this year-long experimental project: its goals, its implementation, and its support by computing center personnel. this paper is written by the members of the computing center staff who bore primary responsibility for developing and supporting the project. isabel lida nirenberg kathleen a. turek information for authors diane crawford an investigation of computer literacy as a function of attitude a survey of first and second year university students reveals the acceptance of a number of misconceptions about computers and computer applications, some of which indicate the presence of negative attitudes. a statistical analysis of the survey supports the proposition that previous computer experience is not always a corrective for unreasonable or even hostile attitudes.
it is claimed that the achievement of computer literacy (in the sense of technical expertise) is possible for some populations only after attitudinal corrections, and that, in general, the strategy for achieving such corrections is dependent upon population characteristics. j. d. wilson r. g. trenary the prehistory and ancient history of computation at the u.s. national bureau of standards j. todd the electronic cottage: old wine in new bottles elia t. zureik teaching ethics, teaching ethically thomas j. bergin the role of the user services representative in an academic computing environment many frustrations confront users in an academic computing environment. one of the strongest is felt when users try to communicate effectively with the computing center's staff. at the kansas university academic computer center, we have created the position of user services representative (usr) to ameliorate this problem. among other responsibilities, this person functions as an ombudsman, and provides users with an "open door" to the services and facilities of the computing center. although this person is not actually classified as a consultant, s/he coordinates this activity, and tries to match up special consulting needs with the most appropriate staff member. additionally, the user services representative handles telephone consulting and acts as a distribution center for questions and problems of great diversity. this helps facilitate greater intra-staff communication too, which can be a real problem with a staff of 50-75 people. in the 15 months since this position was initiated many improvements have been noticed in user assistance services, including shorter waiting lines for consulting and more effective dedicated consulting sessions. this paper will detail the activities of the user services representative and show how the position helps to improve communication channels in an academic computing environment. john e. bucher satisfaction, technology adoption, and performance in project teams rita vick's paper focuses on technology adoption and performance in teamwork contexts. one way to increase performance, she argues, is to increase information sharing. this commentary argues that adoption, performance, and information sharing may depend on satisfaction with current project work. the paper notes that there is evidence that satisfaction with project work is very high. if this is the case, then the adoption of teamwork technologies should continue to be slow. furthermore, high satisfaction can have a negative impact on group performance, because it may cause team members not to stress challenging information that could disrupt team harmony. we suggest that the experiments that vick proposes consider satisfaction and consider not only general information sharing but also classify information shared in terms of potential divisiveness. raymond r. panko susan t. kinney the first novell education academic partner (neap) in australia - the university of ballarat g. r. stevens d. l. smith d. h. stratton p. d. kelly undergraduates in business computing and computer science (poster session) annegret goold russell rimmer beyond internet business-as-usual patrick steiger markus stolze michael good the 1988 snowbird report: a discipline matures with the severe crisis in computing behind us, the field is beginning to shed its preoccupation with its internal affairs. the representation of computing research to the public and to policy makers, was a major issue at snowbird 88. 
since no single entity fills this role, the computing research board plans to fill it. david gries terry m. walker paul young "last-mile" bandwidth recap and committee survey activity for people interested in graphics and graphical user interfaces, the greatest shortcoming of the internet is bandwidth, particularly lines to consumer premises. although most computing professionals have access to high bandwidth connections, most consumers do not. consumer access to computing and the internet is probably the most significant development in computing since 1980. this market now drives computing, and hence computer graphics, so it behooves all of us to understand the "last-mile" issues, both technical and political.i have enlisted the assistance of siggraph public policy committee member myles losch to co-author this report with me, i'd also like to give advance notice of the forthcoming public policy bof, which will be held at the siggraph 99 conference in los angeles. this will feature myles, as well as perhaps an outside speaker and myself speaking on this topic. at time of writing, the bof has yet to be scheduled, but traditionally it has been held early afternoon on the thursday of the conference.following up on our announcement of the first on-line public policy survey in the last issue, committee members laurie reinhart and david nelson present the initial results. don't forget that sometime in april the second survey replaced the first one. the second survey looks at computer graphics developments that will be important in research and commercial areas and is our contribution to the forward-looking events at the siggraph 30th year celebration at siggraph 99. bob ellis myles losch david nelson laurie reinhart the road less traveled: it shouldn't be allowed! lorrie faith cranor acm guidelines for associate and certificate level programs in computer information technology karl j. klee nancy burns fay cover judith porter a form for referees in theoretical computer science ian parberry object oriented programming in the computer science curriculum (panel session) julie zweigoron john bierbauer scott knaster tomasz pietrzykowski john pugh taking responsibility for our risks robert n. charette operating systems and cost management dinesh c. kulkarni arindam banerji david l. cohn intellectual property for an information age: introduction pamela samuelson biological versus computer viruses d. guinier a core based curriculum for a master's degree the recently proposed curricula for a master's degree present problems for a department with limited resources. this paper discusses the proposed curricula, their goals and problems, and then presents a new curriculum based upon a set of core courses. the new curriculum simultaneously satisfies two of the proposed curricula, yet imposes lower demands upon the department's resources. l. e. winslow l. a. jehn growth scenario of it industries in india phalguni gupta history in the computer science curriculum: part ii john a. n. lee improving word processing - data processing relationships word processing and data processing have developed separately over the past two decades with different equipment, different types of personnel and services performed. the latest technological advances have produced systems which have an overlap of functions. as a result, the well - established data processing and word processing groups within many organizations are involved in "turf protection" and confused roles. 
heavy-handed management efforts to resolve the issues often create more problems than they solve. this panel seeks to open channels of communication between data processing and word processing management by identifying word processing tasks and services and their relevance to data processing services. a. a. j. hoffman a skeptical view of computer literacy text of a presentation to be made at the computer science conference of the association for computing machinery (acm) and the acm special interest group on computer science education (sigcse) technical symposium, february 16, 1984, at philadelphia. a lot of people seem to believe that "everybody" needs "computer literacy." the proponents seldom offer a definition of what computer literacy is, but what comes out of a hand-waving argument generally turns out to be a programming course, most often in basic. the reasons given for this sudden need for universal computer literacy are similarly vague. sometimes it's vocational: there will be lots of jobs working with computers, and every other field will be impacted by them, so "everybody" needs to know... well, something. other times the reason is that computers will be so pervasive that it is important the man in the street not be afraid of them. and sometimes the argument seems to be that "everybody" needs to know about computers because so many policy decisions (privacy, employment, transborder data flow, etc.) will involve computers. daniel d. mccracken requirements for a computer science curriculum emphasizing information technology: subject area curriculum issues charles reynolds christopher fox information centric curriculum (isc'98) (panel) doris k. lidtke michael c. mulder editorial policy adele goldberg design of an authoring system for microcomputers n. lasudry-warzee system development woes peter g. neumann congress tackles computer abuse congress has been taking an active interest in protecting information stored in computers. congressional investigations and media reports have highlighted the vulnerability of computer systems to abuse and mis-use. recent accounts of "system hackers" describe young students and others who gain illegal access to systems, thereby obtaining information and services, as well as disrupting systems. rosalie steier the computer professional as an expert witness along with increasing numbers of computers and increasing utilization of computers in industry, government, and education, more hardware and software contracts are being written. as a result, there is an increase in computer-related litigation and a need for computer professionals to serve as expert witnesses. the purpose of this panel is to present issues of importance to the computer professional who is asked to serve as an expert witness. the panel will consist of computer professionals who have served as expert witnesses and attorneys (who may also be computer professionals). a. a. j. hoffman including the social and ethical implications of computing in the computer science curriculum florence appel ask jack: skill development jack wilson the national science foundation and support for research and education in computing (panel) bruce h. barnes john r. lehmann computer science degree programs: what do they look like? a report on the annual survey of accredited programs renee mccauley bill manaris information systems curriculum development (panel): skills and knowledge for today and the future g. daryl nord tom hilton wallace a. wood jeretta h.
nord implementing a campus-wide computer-based curriculum wayne walters cooperative learning and computers david a. dockterman inside risks: risks of insiders peter g. neumann computers in schools: past, present, and how we can change the future doris k. lidtke david moursund information retrieval activity in pre-university education in the united kingdom steven pollitt you and whose army?: or requirements, rhetoric and resistance in cooperative prototyping john bowers james pycock predicting success of a beginning computer course using logistic regression (abstract only) enrollment in computer science classes has grown at an alarming rate despite often inadequate university resources and a shortage of trained computer science faculty. while the number of computer science majors may now be leveling off or even declining, many other disciplines are requiring students to possess computer skills. an all too high percentage of students do not possess a sufficient academic background, which causes a high attrition rate in computing courses. these failures represent a loss of both university and student resources. whether choosing a selected group of students from a large number trying to enroll in a specific course or program or attempting to offer different prerequisite computer classes based on student need and aptitude, some method for predicting performance is required. this study was conducted to see what variables might be predictors of success in a first computer science course. in this study, success in the beginning course for computer science majors and minors was defined as earning a grade of c or better, while failure was considered as receiving a d, an f, or withdrawing from the course. the authors believe, from their experience, that what one instructor would evaluate as a-caliber work another instructor might rate only as b-caliber performance. however, instructor agreement improves when performance is rated as acceptable (c or above) or not acceptable (d, f or withdrawal). the predictors used included success in a beginning calculus course, sat quantitative and verbal scores, high school class percentile rank, sex of the student, quarter in which the course was taken, class status of the student, and the instructor who taught the course. the sample was 262 students enrolled in different sections of the beginning course during three different quarters in 1984-85. the stepwise logistic regression model was chosen because the dependent variable was dichotomized as described above and the predictor variables were either interval or categorical. high school class percentile rank was computed by subtracting the rank of the student in his or her graduating class from the size of the graduating class, and dividing this difference by the size of the graduating class. this variable, then, is interval; sat verbal and sat quantitative scores are also interval variables. success in the beginning calculus course was defined the same way as success in the beginning computer science course was defined, with a third category added for students who had not attempted calculus. thus, success in the beginning calculus course, sex of the student, the quarter in which the course was taken, and the instructor are categorical variables. sat scores, the mathematics prerequisite, and sex of the student have been tested by others as predictors for success in computer science [1, 2, 3, 4, 5].
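a minimal sketch of the kind of model the abstract above describes is shown below; it is not the authors' code. the data file and column names are hypothetical placeholders, and a true stepwise procedure would add and drop predictors iteratively rather than fitting once.

```python
# sketch of a logistic regression for predicting success in a first cs course
# (hypothetical file and column names; not the study's actual data or code)
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cs1_students.csv")  # hypothetical data file

# outcome: 1 = grade of c or better, 0 = d, f, or withdrawal
df["success"] = df["grade"].isin(["A", "B", "C"]).astype(int)

# high school percentile rank = (class size - class rank) / class size
df["hs_pct_rank"] = (df["class_size"] - df["class_rank"]) / df["class_size"]

# dummy-code the categorical predictors; sat scores and percentile rank stay interval
X = pd.get_dummies(
    df[["sat_verbal", "sat_quant", "hs_pct_rank",
        "calculus_outcome", "sex", "quarter", "class_status", "instructor"]],
    columns=["calculus_outcome", "sex", "quarter", "class_status", "instructor"],
    drop_first=True,
).astype(float)
X = sm.add_constant(X)

# a single logistic fit; a stepwise version would iterate over candidate predictors
result = sm.Logit(df["success"], X).fit()
print(result.summary())
```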
the other studies noted above assume that course grades are on an equal interval scale which allowed the use of an ordinary regression model. the predictor variables in an ordinary regression model must either be continuous or have only two categories. the model used in the present study allows the stepwise technique utilized in the studies cited above for ordering predictor variables into the equation, but also allows categorical predictors to have more than two states if they are not independent. the results of this study indicate that the best predictor of success in the computer science course is success in a calculus course. this result supports the findings in other studies [2, 5]. the second best predictor found was high school class percentile rank. the only other variables that could account for variation in the dependent variable were sex of the student and class status of the student. a prior study also suggests that male students performed better in computer science courses. [2] the authors have concluded from the results that the reasoning skills necessary for success in calculus are also important for success in computer science. another trait for success is the competitiveness of the student. this trait is measured in some degree by the high school percentile rank. ashraful a. chowdhury c. van nelson clinton p. fuelling roy l. mccormick a look at nsf's college science instrumentation program john d. mcgregor john rudzki risks to the public and related systems peter g. neumann konrad zuse: reflections on the 80th birthday of the german computing pioneer wolfgang k. giloi late news - christmas gremlin at ibm scalable multicast security with dynamic recipient groups in this article we propose a new framework for multicast security based on distributed computation of security transforms by intermediate nodes. the involvement of intermediate nodes in the security process causes a new type of dependency between group membership and the topology of the multicast network. thanks to this dependency, the containment of security exposures in large multicast groups is ensured. the framework also ensures both the scalability for large dynamic groups and the security of individual members. two different key distribution protocols complying with the framework are introduced: an extension of the eigamal encryption scheme, and one based on a multiexponent version of rsa. refik molva alain pannetrat information systems outsourcing h. raghav rao kichan nam a. chaudhury cognitive style, personality type, and learning ability as factors in predicting the success of the beginning programming student larry s corman the burks project john english development of a systems analysis and design course cleveland state university is an urban university, with the department of computer and information science residing in the college of business administration. the major objective of the department's curriculum is to educate students for productive roles in industry, primarily in the development and implementation of business information systems. several years ago, critical comments from both the business community and the students themselves gave strong indication that the courses in systems analysis and design were not fulfilling this objective. not only were the courses not teaching state of the art techniques, they tended to vary in content considerably from quarter to quarter, depending on the instructor. 
the subject "systems analysis and design" covers a wide variety of material, far too much to be dealt with comprehensively in any reasonable time span. it was felt that the objectives for the courses could best be met by concentrating on structured analysis and design methodologies (particularly as they applied to the development of business information systems for computers), and by establishing well-defined syllabi for the courses. the material was divided into two courses, the first covering analysis and the second covering design. donald g. golden using a familiar package to demonstrate a difficult concept: using an excel spreadsheet model to explain the concepts of neural networks to undergraduates a course introducing neural networks to second year undergraduates with mixed disciplinary backgrounds needed a tool to reduce the overheads of simplifying the complex mathematical and programming skills normally associated with the subject. an excel model was produced that had the added benefit of reducing anxiety, as all students taking the course are competent with excel spreadsheets. william fone the rise and fall of corporate electronic mail sylvia lanz risks to the public p. g. neumann learning to program and learning to think: what's the connection? focusing on thinking skills that are cognitive components of programming--- rather than on intellectual ability---can illuminate the relationship between learning a programming language and learning more about thinking processes. richard e. mayer jennifer l. dyck william vilberg learning from litigation: trade secret misappropriation robert d. rachlin we need to help unmotivated students kim packa a proposed computer education curriculum for secondary school teachers a 1983 study investigated the certification of high school computer science teachers. a major portion of the study was devoted to the identification of those computer science courses most appropriate for such teachers and, therefore, for certification programs. this paper presents the results of the study and proposes a computer education core curriculum based on those results. harriet g. taylor james l. poirot hard questions have no answers alan dix research methods in computer science education vicki l. almstrum debra burton cheng-chih wu a survey of methods used to evaluate computer science teaching angela carbone jens j. kaasbøll some thoughts on retraining and the lack thereof a mathematics educator teaching computer science rosemary schmalz colorful illustrations of algorithmic design techniques and problem solving david ginat dan garcia owen astrachan joseph bergin who made israel's computer models for the 1967 war? mark midbon professional certification through iccp joyce currie little viewpoint: why we must fight ucita richard stallman to dream the invisible dream john seely brown the effect of a preliminary programming and problem solving course on performance in a traditional programming course for computer science majors a preliminary pascal course which emphasized problem solving was designed for incoming computer science majors who were identified as being at risk. in addition, students in the required pascal course could transfer back to the preliminary course prior to the administration of the first examination. students in the preliminary course were paired with comparable freshman majors from the preceeding academic year. 
paired t-tests revealed significantly higher grades in the required course for those students who had previously completed the preliminary course. the preliminary course also served as a filter; approximately 43% of the students did not subsequently attempt the required course. sex differences in persistence were also noted. patricia f. campbell the quality of managerial training in telecommunications: a comparison of the marketing and information systems managers' viewpoints this paper compares the results of two studies - the first study targeted the marketing manager while the information systems (is) managers were the focus of the second study. both studies questioned the respondents about the quality of their training in thirty telecommunications issues. what was most surprising was the total dissatisfaction with the training in telecommunications voiced by both the marketing and is managers. the marketing managers did not rate their training on even one issue as average or above. only one issue, local area networks (lan), was rated as average or above average in the quality of their training by the is managers. training in the currently used applications was rated the highest while training in the advanced or emerging applications was rated the lowest. when these applications are used innovatively, an organization can capture a strategic advantage. self-study, organizational programs, educational institutions and professional agencies were the primary sources of the training. training programs provided by professional agencies were rated the highest in quality while the organizational in-house programs were rated the weakest. karen ketler john r. willems departmental differences can point the way to improving female retention in computer science j. mcgrath cohoon upsilon pi epsilon (upe): the role of the computing science honor society in computer science programs robert f. roggio toward the perfect workplace? the experience of home-based systems developers is compared with their office- based counterparts in a uk computer firm. the analysis produced two major patterns: the home-based workers find intrinsic value in the job, whereas office-based employees view it more instrumentally and find it interferes with satisfaction on a personal level. l. bailyn the role of conceptual models in formal software training conrad shayo lorne olfman consulting methods used by the text consultants at a large research laboratory this paper will cover the methods used for consulting and teaching by the text consultants at a large research laboratory. text consulting, at this laboratory, has existed for more then ten years and many changes have occurred during this time. some of the changes that will be discussed are supported software, number of users and their changing needs. we will also cover the day to day activities of the consultants, including the methods that have been used in past years, and discoveries and conclusions that affect the way we consult today. when and why we use formal teaching methods will be discussed, including our special secretarial training courses. the secretarial training course is, on the whole, one-on-one instruction. why this course came into existence and the methods used (past and present) will also be mentioned. elizabeth a. 
mcauliffe ask jack: making a careers presentation jack wilson distance learning in the new millennium lisa neal using visualization tools to teach compiler design a project-based compiler course presents several challenges to the student-implementor. in addition to the "book learning" about various compiler topics, a student must assimilate a large amount of information about the compiler's implementation. furthermore, he or she must be able to understand each source-program construct at a number of different representation levels. finally, the student must apply that knowledge during implementation and debugging of a compiler. this paper describes a pair of packages that employ java's graphical capabilities so that a program may be visualized at various stages of the compilation process. we argue that these tools are effective in helping students understand the transformation process from source program to machine code. we summarize our experience in using these tools in the context of a project-based compiler course. we also discuss other features of java that make it well-suited for a student compiler project. steven r. vegdahl determinants of mis employees' turnover intentions: a structural equation model magid igbaria jeffrey h. greenhaus what a task - establishing user requirements! mike norman computer science in an undergraduate liberal arts and sciences setting ronald e. prather privacy: from abstraction to applications l. j. camp teaching using off-the-shelf on-line materials carl alphonce debra burhans helene kershner barbara sherman deborah walters erica eddy gloria melara pete depasquale j. philip east fred springsteel kurt f. lauckner new standards for educational technology relevant to multiple acm sigs roy rada james schoening writing within the computer science curriculum henry m. walker science, computational science, and computer science: at a crossroads we describe computational science as an interdisciplinary approach to doing science on computers. our purpose is to introduce computational science as a legitimate interest of computer scientists. we present a possible foundation for computational science. these foundations show that there is a need to consider computational aspects of science at the scientific level. we next present some obstacles to computer scientists' participation in computational science. we see a cultural bias in computer science that inhibits participation. finally, we indicate areas of mutual interests between computational science and computer science. d. e. stevenson computer organization in the breadth-first course the topic of computer organization in the breadth-first cs course is addressed. a set of materials and a particular ordering of sub-topics is suggested to maximize the "cognitive hooks" for students. a new entry point to computer organization based on a natural physical metaphor of knobs and switches is introduced. the k&s model 1 machine simulator is developed based on the knob and switch metaphor. the k&s simulator starts as a simple datapath and is built up in several carefully planned increments until a complete stored program machine is constructed. the use of the k&s machine simulator is placed in context by a short discussion of how it is used to motivate the investigation of binary representation, arithmetic, logic gates and combinational logic circuits. an intuitive model for cmos technology is introduced and used to demonstrate the tie between logic gates and transistors in the computer.
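as a rough companion to the transistors-as-switches point above, the following is a tiny generic sketch in python, hypothetical and not the k&s model 1 simulator itself: a cmos-style nand gate modeled as a pull-up and a pull-down network of switches, with an and gate then built from the primitive.

```python
# generic illustration of the "logic gates from switches" idea (not the k&s simulator)

def nand(a: int, b: int) -> int:
    # pull-down network: two n-type switches in series, conducting when both inputs are 1
    pull_down = (a == 1) and (b == 1)
    # for valid 0/1 inputs, exactly one network conducts; output follows the conducting one
    return 0 if pull_down else 1

def and_gate(a: int, b: int) -> int:
    # combinational logic from the primitive: and = nand followed by nand used as an inverter
    n = nand(a, b)
    return nand(n, n)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"a={a} b={b}  nand={nand(a, b)}  and={and_gate(a, b)}")
```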
finally, some qualitative observations are offered and improvements are suggested. grant w. braught motivation = value x expectancy (poster session) tony jenkins a challenge for computer science educators mike murphy richard halstead- nussloch introduction to computer science: an interactive approach using isetl nancy baxter david hastings jane hill peter martin robert paul spreading the word: funding faculty to train faculty in the california state university susan archer james fleming gary jones if aristotle were a computing professional james h. moor about the authors… anthony i. wasserman computer science curriculum recommendations for small colleges(panel session) panelists will report on the work of the acm education board's ad hoc committee to revise and update the small college curriculum recommendations published by acm in 1973. the revised report's completion is expected by the end of 1983. a preliminary report will be given which addresses topics including suggested curriculum content, resources needed, implementation problems, and other matters of concern to small colleges attempting to develop and/or maintain a viable computer science program. audience response will be welcome to provide input to the committee. john beidler lillian cassel richard austing the second isew cleanroom workshop graeme smith teaching ethical issues in computer science: what worked and what didn't kay g. schulze frances s. grodzinsky ethical and professional issues in computing (abstract) mary dee medley kay g. schulze bob riser rebekah l. tidwell radical and conservative top-down development g. hill math link: linking curriculum, instructional strategies, and technology to enhance teaching and learning math link was a professional development project involving louisiana state university and the west feliciana parish school district in the united states. math link focused on teachers in grades three through six in the entire district and included over ninety percent of the teachers in the district. the teachers participated in a three week summer workshop directed by a professional staff that included two elementary mathematics specialists, a technology specialist, a mathematics specialist/site coordinator from the school district, a mathematics education professor, and an educational technology professor.during the summer, the educators worked on four major components: technology tools to support learning: mathematics standards, instructional methodology, and assessment; team building and collaboration; and instructional models that they would implement. during the academic year, a site coordinator provided instructional and resource support for all the teachers. the site coordinator also monitored progress and worked directly with the university staff directing the project. four day-long workshops were also held during the school year. during the following summer, the educators worked at the university for two more weeks to formalize their instructional models and curriculum and plan for the future. harriet g. taylor the student consultant...supporting the microcomputer labs efficient and effective support of the microcomputer labs consists of one major component, the student consultant. student employees must be familiar with the wide base of software and hardware supported by the computer center. the student staff is also responsible for maintaining the physical security of the software and hardware within the labs. 
motivating the student employee and providing a state-of-the-art computing environment requires creativity to accommodate the stagnant budget. this paper outlines the importance of scheduling, training staff and maintaining the microcomputer labs at indiana state university. nancy j. bauer acm international collegiate programming contest 2000 lynellen d. s. perry down the road: visit clarkson and the university of idaho sara carlstead user services as information center joel cohen mark h. castner coordination of systems development courses dale k. hockensmith the business of software: the laws of software process phillip g. armour i lift the lamp beside the golden door from applications programming and systems design, from teaching computer science and latin, from business and government, they're coming to user services. "keep ancient lands, your storied pomp!" cries she with silent lips. "give me your tired, your poor, your huddled masses yearning to breathe free, the wretched refuse of your teeming shore. send these, the homeless, tempest-tost to me, i lift the lamp beside the golden door!" the familiar passage above, inscribed on the pedestal of the statue of liberty, can very easily be applied to user services: "well-established professions, keep your famous sons and daughters of status. give me your over-burdened teachers, your programmers yearning to breathe free, the wretched refuse of your teeming shore. send these, who do not fit, to me, i light the way to golden opportunities!" immigrants we have left the ancient lands and arrived in user services. where have we come from? where are we now? where do we go from here? john major source code security: a checklist for managers jon corelis computer information systems and organization structure a study of computer information systems and management (cism) is described and selected results relating to changes in organizational structure in eight organizations are presented. in five of the organizations no changes in formal structure accompanied the introduction of cis. where organizational changes did occur, the existing structure of the organization was usually reinforced. these findings suggest that cis is a flexible tool that is compatible with a variety of organizational design options and not a cause of design per se. daniel robey integrating business and is concepts through action research within a new is curriculum janice m. burn louis c. k. ma an empirically-grounded framework for the information systems development process brian fitzgerald on teaching computer science using the three basic processes from the denning report d. j. bagert human values and the future of technology ben shneiderman the validation of a political model of information systems development cost estimating albert l. lederer jayesh prasad accreditation in canada suzanne gill a new breed of computer professional the information specialist organizational concern with providing information for effective decision making has brought about new systems concepts, i.e., decision support systems, and information centers. this paper describes a new breed of computer professional, the information specialist (isp). the role of the isp is developing in response to the requirements of the new concepts, in particular those of the decision support system. the roles performed by an isp along with the skills and knowledge required are also discussed. donald l.
davis guidelines for using case scenarios to teach computer ethics raquel benbunan-fich a cognitive framework for knowledge in informatics: the case of object-orientation knowledge is a function of both skills and understanding. the interaction between instrumental and relational understanding is necessary for construction of further knowledge. constructivist theory is used to analyse the learning process in informatics. a framework for describing different types/levels of knowledge is used to describe students' and professors' statements about the concept of object-orientation. the findings of this study may have implications for teaching object-orientation, especially in introductory courses. christian holmboe on the reliability of the cdp and ccp examinations are the cdp and ccp examinations reliable? this paper examines the nature of test reliability and methods of obtaining data to calculate the reliability coefficient. data for recent years for both examinations are given. james s. ketchel inside risks: be seeing you! lauren weinstein closing the fluency gap mitchel resnick an upper level computer science curriculum in response to national curricular trends, the computer science curriculum at the university of north florida has undergone three iterations since its inception in 1972. experiences with the development of the north florida curriculum coupled with recent exposure to the current thinking of the ieee-cs curriculum committee motivate this paper. the curriculum as outlined in this paper owes its origins to the earlier acm and ieee-cs model curricula. y. s. chua c. n. winton a course in computer systems planning those aspects of computer science dealing with the selection of computer hardware, the selection of computer software, the tradeoffs between in-house development and purchase, the transition to a new system, computer performance evaluation, and computer center management are not covered or are covered very lightly in an undergraduate curriculum. this paper presents the evolution and content of a senior-level course on these subjects taught at indiana-purdue at fort wayne. karl rehmer a supplementary package for distance education students studying introductory programming tanya mcgill valerie hobbs articulation reflections richard h. austing summer high school computer workshop although the use of computers in secondary schools is rapidly increasing, there still remain many schools (particularly the smaller rural schools) which have no computer access suitable for classroom instruction. providing educational opportunities in computers for students from these schools is a need which often can be easily filled by the university. the computer science department at bowling green state university has offered a week-long computer summer workshop for the past five summers aimed at filling this need. the workshops have been very successful, introducing the world of computers to many talented area high schoolers who would not otherwise have had the opportunity. as the use of computers in the secondary schools increases and changes, the role of this workshop will also change, but there will always be a need for special learning opportunities, such as this workshop, which the university can provide to supplement the computer education in the secondary schools. gerald a.
jones computers and education sue hamby a gap in the path to professional competence (panel session) as our reliance on computers increases, so does the number and complexity of the applications and the number and quality of trained professionals that will be needed to support further progress. in the face of this evident and increasing need, we must think clearly about our expectations for computing professionals of all sorts and the way they are trained. a gulf is growing between the skills that students build at a university and the needs of industry. this panel examines the nature and extent of the problem and ways to address it. ralph morelli moshe cohen james chiarella in good company: how social capital makes organizations work donald j. cohen laurence prusak protection of intellectual property on the national information infrastructure nancy j. wahl president's letter adele goldberg changing is curriculum and methods of instruction the paper presents problems of teaching information systems at a school of management. the authors draw on observations of polish and western schools of management and on the literature. suggestions with respect to rules for new curriculum design are given in the paper, and the results of the investigations are presented in the conclusions, with recommendations. marian kuras mariusz grabowski agnieszka zajac should acm or sigsmall/pc publish courseware? larry press on site: computing in lebanon ghinwa jalloul the first-course conundrum recently, the college entrance examination board (ceeb) has decided to redesign the advanced placement (ap) examination in computer science (cs) so that ap courses will be forced to switch from pascal to c++ starting around 1998. hal abelson kim bruce andy van dam brian harvey allen tucker peter wegner process improvement competencies for is professionals: a survey of perceived needs eugene g. mcguire kim a. randall computer skills: they're not just for techies anymore kathleen janik training pre-assessment: is it feasible? conrad shayo lorne olfman ricardo teitelroit claes nordahl matthew rodriguez the relationship between business and higher education: a perspective on the 21st century apple's chairman of the board discusses the future of three core technologies---hypermedia, simulation and artificial intelligence---and the role each will play in education. the following speech was presented to an audience of teachers almost two years ago. its message, however, is as timely and inspirational today as we prepare for a new era. john sculley teaching writing and research skills in the computer science curriculum paul m. jackowitz richard m. plishka james r. sidbury question time: online privacy leon rogson mary forsht-tucker gene sheppard an introduction to java3d (tutorial presentation) java3d is a top-down approach for building 3d interactive programs which run in a web browser or as stand-alone applications. built on top of the java programming language, it uses a scene graph hierarchy as the basic structure. various geometrical representations, animation/interaction, lighting, texture, collision detection, and sound are supported. java3d offers an alternative approach to teaching the traditional first semester computer graphics course. typically such courses cover basic low level algorithms such as line drawing and clipping which can take a significant amount of time to learn, implement, and integrate into a full rendering system. as a result, students gain little or no experience actually building real 3d applications.
java3d can change this. students who are already well versed in java can quickly begin writing fun interactive 3d applications, thus providing context and motivation for later understanding the low level algorithms. java3d is also a natural choice since java itself is being adopted as the first computer language in more and more colleges. it has the added advantage that it is freely available and can run on a standard pc. this workshop will present an overview of the java3d scene-graph hierarchy. classes for building shapes, textures, and lighting, as well as for creating behaviors, will be described. many sample programs will also be presented. genevieve orr project management in an academic computing center: installing a laser printer dick hoskins thomas ormand equipment maintenance as a user service equipment maintenance services are not usually considered a user service. these services are, however, important for user satisfaction as equipment malfunctions cause frustration and can result in loss of information. during the past two years, the academic computer center at washburn university has developed maintenance laboratory facilities and a maintenance staff of one full-time person. the impetus to do this came from the realization that acquiring off-campus maintenance was becoming increasingly costly with little or no control over the quality and timeliness of the maintenance services procured off-campus. also, there was no program of preventative maintenance, which led to equipment failure during classes with the associated frustration of both faculty and students. the growth in the population of microcomputers and terminals connected to a minicomputer required that some program be developed to maintain the computing equipment and build and maintain data communications networks. this paper will point out that maintenance, while often not considered a user service, is in fact an increasingly important user service as the user community expands and users come to look upon access to a computer as a basic service. the paper will discuss what maintenance services are provided at washburn and the policies which govern the access to these services. it will also cover the benefits and problems associated with doing your own maintenance and some strategies for getting maximum benefit from the resources which are allocated for maintenance activities. david oliver david bainum alice in computerland ronald l. danilowicz legally speaking: libel and slander on the internet anthony m. townsend robert j. aalberts steven a. gibson who owns my soul? the paradox of pursuing organizational knowledge in a work culture of individualism eileen m. trauth when opportunity knocks: leveraging reengineering for computer training jeff pankin mary ellen bushnell inside risks: risks in computerized elections background: errors and alleged fraud in computer-based elections have been recurring risks forum themes. the state of the computing art continues to be primitive. punch-card systems are seriously flawed and easily tampered with, and still in widespread use. direct recording equipment is also suspect, with no ballots, no guaranteed audit trails, and no real assurances that votes cast are properly recorded and processed. peter g. neumann a student-directed computing community adam bonner a modern curriculum for an ancient culture this paper reports on one such effort.
the senior author was invited by the first ministry of machine building to review a computer science program which had been established along the lines of the ieee model curriculum [4]. the work of the two other authors and their colleagues forms the bulk of the remainder of the presentation. this discussion of a particular program presents one of the ways in which the prc is "catching up". a more complete review is provided in another document [5]. robert m. aiken chien f. chao yi fen zhu academic careers for experimental computer scientists and engineers corporate computer science and telecommunications board a plagiarism detection system the problem of plagiarism in programming assignments by students in computer science courses has caused considerable concern among both faculty and students. there are a number of methods which instructors use in an effort to control the plagiarism problem. in this paper we describe a plagiarism detection system which was recently implemented in our department. this system is being used to detect similarities in student programs. john l. donaldson ann-marie lancaster paula h. sposato insights and analyses of online auctions principles for computer crime legislation from awareness to action: integrating ethics and social responsibility into the computer science curriculum c. dianne martin elaine yale weltz e-commerce: a market analysis and prognostication sherrie bolin integrating ethical topics in a traditional computer science course lonny b. winrich computer science through the eyes of dead monkeys: learning styles and interaction in cs i gary lewandowski amy morehead java: hercules or godzilla this workshop will provide an overview of java sufficient to cover the first half semester of a beginning course. we will focus on using and defining java objects, java's control constructs, using its graphics capabilities, and responding to events. each topic will incorporate a hands-on component using graphics to reinforce ideas. it will be assumed that all participants have experience with a programming language. the principles of object-oriented programming will be introduced; thus, knowledge of an object-oriented language is not required. however, the pace may seem quick for those with little object experience. ed epp in school or out: technology, equity, and the future of our kids in january, newt gingrich proposed a tax credit for the poorest americans to purchase laptop computers, forgetting for the moment that the poorest americans do not pay taxes and thus a credit does them no good. in this case, his heart may be in the right place, even if he later called the idea "nutty." saul rockman the education of the new information specialist (abstract) michael mulder the cornell commission: on morris and the worm after careful examination of the evidence, the cornell commission publishes its findings in a detailed report that sheds new light and dispels some myths about robert t. morris and the internet worm. t. eisenberg d. gries j. hartmanis d. holcomb m. s. lynn t. santoro how do computer science lecturers create modules? (poster) john traxler the departmental user services program - a mixed model for distributed user support gregory t. kesner introducing the office systems research association's organizational and end-user information systems model curriculum bridget n. o'connor m.
judith caouette teaching oral communication in computer science vianney cote distributed training: the msu computer laboratory departmental trainer program's second year with the advent of microcomputers, distributed computing is here to stay. with distributed computing comes distributed expertise; people must learn to run their own computing centers, no matter what the size of the computers. in trying to deal with increased demands for training in a limited-resource environment, we have taken the natural step of distributing the responsibility for the training. with appropriate training materials, a person who knows how to use a program can lead someone else through the process of learning. the training materials must be sufficiently detailed to include training on all major features of the software that people regularly use and include conceptual material that takes the place of lectures. the commitment to distributed training must be real. sufficient staff must be devoted to developing training materials because the materials must continue to evolve with the software they teach and new materials must be created to meet the demands for training on new software. bill brown marilyn everingham end users bob marcus the case for integrating ethical and social impact into the computer science curriculum c. dianne martin micro labs and laser printing: scenarios and solutions with the advent of laser printers like the hp laserjet and the apple laserwriter, very high quality output using laser technology is readily accessible to microcomputer users. people trying this method of printing quickly decide that it is the method of choice for most applications. however, the popularity of laser printers in public sites can, if unregulated, quickly lead to overuse of what is a relatively high-cost method of printing. this results in excess strain on both the equipment and the operating budget. panel members from various universities and colleges will relate their experience with laser printing in academic microcomputing facilities. discussion questions will include: do they offer free laser printing? if so, do they restrict use to final copies and how do they enforce this restriction? are laser printers attached to stand-alone print stations or are they accessible from a network? are there other letter quality printers in the same lab? how much do they spend on maintenance and operations of the laser printer? if they charge for the use of microcomputing facilities, do they assess a lab-usage fee, or charge for printing only? do they charge for all letter-quality printers or only for laser printing? do they charge by means of a vend-a-card box attached to the printer, or do they accept cash per page? how much do they charge? what problems have they had, either despite charging, because of it, or because they don't charge? the discussion of and answers to these questions will be invaluable to those developing new microcomputing facilities and should be of interest to those running existing labs as well. anne webster donna tatro cindy sanford linda downing lisa covi computer-related crime: ethical considerations richard parker editorial: introduction carl f. cargill online education - but is it education? tony clear the changing role of the user in application systems development user participation in developing application systems has evolved rapidly over the past ten years.
the user's role has changed from a passive one, with little or no involvement in data processing and computer technology, to that of an active partner in designing and successfully implementing a system. in this paper, the authors examine the changing extent of user involvement, the effect of this involvement on user and data processing organizational structures and systems development techniques, and the outlook for the future. by exploring the pros and cons of this expanded interaction between user and data processing personnel, the authors show that, for the most part, this symbiotic relationship benefits both the institution and the quality and success of the application systems. the authors cite past and present experiences of users and data processing professionals in application systems development. the authors themselves have recently worked on a joint application system development project. ilene v. kanoff keith e. ickes acm forum: letters robert l. ashenhurst lab interns: a new level of service faced with ever increasing financial constraints imposed on universities and increasing demands for service from academic and administrative areas, university computing facilities are being forced to either increase computing charges or limit services. northern has developed a program designed to increase service while providing better trained personnel at a relatively modest cost to the academic computing center. before establishing an internship program, northern provided two levels of service at two different labs, using undergraduate students, who were paid minimum wage on an hourly basis, graduate students on contract, and full-time consultants. the problem users faced lay in the fact that the consultants were on duty only during the day, the graduate students worked a staggered schedule, some days, some weekends, and the center could not afford to give undergraduates, who made up the bulk of the work force, the extended training on the different packages and systems that would enable them to help with consulting. the primary responsibilities of the undergrads were maintaining the equipment, filling out reports, and keeping the labs running as smoothly as possible. consulting provided by undergraduate employees consisted of only helping with syntax errors in job control language and programming languages they were already familiar with, such as cobol, assembler, and pl/1. the establishment of an internship program provided the center with an opportunity to provide undergraduate students with training that goes beyond maintaining equipment. bradley e. coxhead exploratories: an educational strategy for the 21st century rosemary michelle simpson how the nintendo generation learns recently, i was chatting with my son, daniel (age 7) about nintendo. i asked him if he could bring it in to school for show and tell and he was horrified! "no," he exclaimed, "that's not for learning." well, what do you, dear reader, think about that? is nintendo for learning? no one is watching/listening … go ahead … admit it … you don't really like nintendo. (of course, when you play tetris, that is exercising problem-solving skills. right.) and, afterall, you didn't use all that technology in school, and you learned just fine. so, why do "kids these days" need all this new multimedia technology, anyhow? that technology just makes it fun and easy to learn---it's just glitz. and as for calculators..... elliot soloway be careful... 
terris wolff the it staffing situation in belarus uladzimir anishchanko electronic commerce: tutorial as we embark on the information age, the use of electronic information is spreading through all sectors of society, both nationally and internationally. as a result, commercial organizations, educational institutions and government agencies are finding it essential to be linked by world wide networks, and commercial internet usage is growing at an accelerating pace. nabil r. adam yelena yesha describing the cs forest to undergraduates (abstract) henry m. walker kim bruce james bradley tom whaley a survey course in computer science using hypercard richard w. decker stuart h. hirshfield a fundamentals-based curriculum for first year computer science at wits we are concerned about offering a good computer science degree but at the same time making our degree programme accessible to all students who have the potential or ability to cope with the material. this paper discusses a new first year curriculum which has been developed to address some of the problems which the course that we offered from 1990 to 1998, with minimal changes, has begun to encounter. the most important of these problems is that of student perceptions of our old course. the new course stresses fundamentals of computer science and is structured around teaching basic principles and competencies. ian sanders conrad mueller good news and bad news on the intellectual property front pamela samuelson israel: of swords and software plowshares g. ariav s. e. goodman consideration on risks in using computing technology peter g. neumann delivering the frice of is management: flexibility, reliability, infrastructure, collaboration, and efficiency phillip d. farr leon kappelman log on education: handheld devices are ready-at-hand how acceptable are computers to professional persons? although our lives are all touched by computers these days, a great many people seem to be ambivalent about them, either fearing them or exhibiting reluctance about interacting with them. the most relevant study about attitudes towards computers was conducted by lee [1] in 1963. he found two major orthogonal factors: the computer viewed as a beneficial tool of man, and as a superhuman thinking machine that downgrades man's previously unique significance in the order of things. not only have computers changed quite dramatically since 1963, they have also become increasingly common. it would seem likely then that attitudes towards computers have also changed and hence need to be re-evaluated in the present decade. in this study, the attitudes of certified public accountants (cpas), lawyers, pharmacists, and physicians were investigated. professionals were studied because many marketing and electronics analysts have commented that industry is currently designing computers for professional persons [2]. computer availability, however, does not necessarily lead to computer acceptability. therefore, the primary question this study sought to answer was - how acceptable are computers to professional persons? elizabeth zoltan preserving due process in standards work e. m. gray dennis bodson burn out in user services the term "burn out" is frequently used to describe a person's fatigue with or tiring of a particular job or task. more often than not, it is associated with the helping professions and other jobs where people help other people. it is common to hear someone say that they are "burned out on the job," or "burned out on people," or just plain "burned out."
one of the themes of the 8th user services conference held in 1980 in morgantown, west virginia, was the topic of the effects and treatment of burn out in user services personnel. papers presented by dr. james f. carruth of west virginia university and janice a. bartleson of southern illinois university were helpful in defining and coping with the problems of job fatigue in user services. in 1984, we organized this panel discussion to reinspect the topic, especially with regard to the numerous changes that have taken place in our computing environments and the ways in which we help our users. to anyone working in user services, it should be obvious that the spreading out of computing services, in both quality and quantity, will continue to have dramatic impacts on the people who are designated to "help the users help themselves." john e. bucher orchestrating project organization and management larry l. constantine lucy a. d. lockwood response to the proposal for a "c-virus" database morton g. swimmer between immunity and impunity: respect for authors and copyright holders at public institutions h. j. woltring presidential politics and internet issues in the 2000 election doug isenberg global cooperation project in computer programming course tomas macek bozena mannova josef kolar barbara williams the magnetic poles of data collection neil munro can we improve teaching in computer science by looking at how english is taught? richard buckland wearables in 2048 steven j. schwartz history in the computing curriculum (poster) gordon davies john impagliazzo you teach what is global and leave the specifics to me: a computer coordinator's view of the essential preservice curriculum in computer education sheila cory curriculum 2001: interim report from the acm/ieee-cs task force eric roberts russ shackelford rich leblanc peter j. denning the dynamics of e-commerce john stuckey perspectives on assessment through teaching portfolios in computer science james d. kiper valerie cross diane delisio ann sobel douglas troy implementing computer literacy: the year after m. gene bailey rebekah l. tidwell on the problems of information technology management in developing nations many developing nations treat information technology as a high priority item in their economic planning. the pace of computer introduction and the span of computer-based systems are expanding rapidly. this emanates from the realization that information technology has a great potential for the economic development of third world countries. however, it is our view that while information technology products open new opportunities for developing countries they also pose a new set of challenges. the often poorly managed computer resources tend to complicate the decision-making process due to the introduction of new uncertainties. the problems of the industrial and communications infrastructure, personnel issues, political and social factors are important elements hampering the sound management of information technology resources in many developing nations. after giving some background material on characteristics of the computerization process in developing countries, this paper discusses the manifestations of information technology mismanagement, the factors hampering the proper management of computer resources in a developing nation context and points to some of the potential solutions to these problems based on the experience of several nations in this field. adnan h.
yahya commentary: the new wave of computing in colleges and universities: a social analysis rob kling interaction factors in software development performance in distributed student teams in computer science this research-in-progress paper compares the characteristics of high- and low-performance distributed student teams doing software development in computer science. the distributed student teams were involved in a software development project that was part of a computer science course at two universities located in different countries. we developed a set of categories to examine the email communication of distributed student teams. this paper tracks the progression and changes in the categories coded for each team's communication throughout the project's timeline, particularly during key decision periods in the software development cycle. martha l. hause vicki l. almstrum mary z. last m. r. woodroffe the personal touch dennis fowler why "the social aspects of science and technology" is not just an optional extra donald mackenzie motivation in the high technology industry motivation of employees in the high technology environment needs some very special considerations. this seminar will address some ways of motivating employees in this environment. each of the speakers/panelists is noted for their technical achievement as well as their ability to stimulate a healthy environment and generate enthusiasm among their employees. the agenda will be a 15-20 minute statement of position from each participant and then questions and answers. the participants are: marshall bynum warren moseley making decisions in uncertain times john gehl computers, ethics, and values comdex in las vegas is the largest north american trade show for personal computer users. the show, which is vast, provided me with many impressions. i jotted many of these in my notebook and will share some of them with you here. peter g. neumann top management's view of computer related fraud thomas c. richards rose knotts the profession of it: the core of the third-wave professional peter j. denning robert dunham the standards factor jon meads seed corn session there is a consensus in academia and industry that the shortage of computer science faculty is at crisis proportions. the solutions to this crisis involve actions both independent and concurrent. some of the actions suggested for the academic arena include offering salaries competitive with industry wages, marketing the perceived advantages of university appointments as opposed to industrial positions, recruiting from industry, providing stronger career counseling to avoid a surplus in esoteric specialties, reaffirming teaching as the primary responsibility, actively developing industrial contracts, and acknowledging the validity of industrial research and selection of directions for computer sciences. industry could help alleviate the problem by rebuilding computer science laboratories with state-of-the-art hardware, providing summer jobs and encouraging graduate students to complete degree work before seeking employment. both groups need to work together to improve channels of communication, share personnel, seek better job matches and open research centers on both sides. artie briggs roger elliott orrin taulbee ray miller fred maryanski association for computing machinery special interest group on security, audit & control annual report dahl a. gerberick women and computers: what we can learn from science janet t.
kerner kathy vargas an international standard for privacy protection: objections to the objections colin j. bennett cs265 web network and web site management development of a core course in the internet technology minor curriculum the internet is rapidly expanding into the world of commerce and communication media. the students in many different college majors, including cs and cis, have an increasing desire to learn more about the internet and what, where, when, why and how it is used. the question that they have is "how can they get involved and learn about the internet as it relates to their major field of interest?" our current college students are witnessing a phenomenal effort by organizations of all types (e.g., government agencies, educational institutions, corporations, and small businesses) to exploit the web to deliver information to prospective customers and clients. developing web applications requires people skilled in the use of state-of-the-art technologies such as html, dhtml, cgi, java or javascript, javabeans, microsoft's activex and xml (extensible markup language). in addition, there is a need for an understanding of internet information architecture, distributed systems, client-server capabilities, user-interface usability, web-based databases, maintaining security, designing "effective" web pages, and maintaining a web site/server. this paper describes the development of cs265 "web network and web site management," a core course in an innovative curriculum for a computer sciences minor in internet technology. cs265 has been developed within a minor that includes several aspects of internet-related computing science that are interesting to both the cs/cis major and non-major alike. this paper presents the development of a successful course curriculum including lectures, lab exercises and projects. it is the intention that the variety of lectures, lab exercises and projects for cs265, "web network and web site management," presented here may easily be adapted into any computer science curriculum wishing to include a focus on programming for the internet and the information systems involved. kevin d. hufford the challenges and directions of mis curriculum development in respect of transformation of business requirements effective use of information technology (it) is becoming one of the prime determinants of success of a business organization. however, rapid advances in it mean that the knowledge and skills required by is professionals have to change continuously and profoundly. in this environment, is/it skills can become obsolete quite easily. the is/it jobs of tomorrow will require the flexible updating of knowledge, skills and abilities for effective performance of changing tasks and duties. stanislaw wrycza thaddeus w. usowicz andras gabor borut verber ec commission communication on establishing an information services market the following is an excerpt from the communication sent by the european commission to the council of ministers explaining the rationale for an internal information services market. it is followed by the council's formal decision and budget allocation totalling $36 million. corporate european commission a pragmatic approach to systems analysis and design the university of new brunswick offers a wide selection of upper year computer science electives.
systems analysis and design, consisting of the study and use of the system life cycle for developing business information systems together with associated techniques and issues, has been offered for about 14 years. considered to be a "soft" subject in a sea of hard-core technical courses, for many years this course was elected by a manageable maximum of 15-25 students per year. in recent years, particularly due to reinforcement by an active group of employers in our expanding co-op program, the value of such a course has been more widely recognized by our students. this paper examines methods used to handle substantial enrollment increases (40-60 students) in a subject which is especially sensitive to class size. efforts made to improve the effectiveness of teaching this non-technical but vital material to technically oriented students are discussed. as well, the differences between software engineering and systems analysis and design are outlined. a case is made for a change in the acm curriculum '78 software design course cs14 to reflect the very important role of the analysis phase in software development. jane m. fritz ada sources for computer science educators (panel session) michael b. feldman mary armstrong richard conn philip wilsey design of charging mechanisms according to the interaction between information technology type and diffusion lifecycle phase nava pliskin tsilia romm widening international boundaries stuart zweben computer matching and personal privacy: can they be compatible? herman t. tavani use of collaborative multimedia in computer science classes while there is a lot of speculation about the benefits of multimedia exploration, research on learning and technology suggests that the _creation_ of media by students has even greater benefit for learning. students learn through articulating their knowledge in their multimedia documents, reviewing their own work, and receiving comments and critiques on their work. in the research of the collaborative software lab (http://coweb.cc.gatech.edu/csl), we are particularly interested in exploring the creation of media through collaborative technology. by having students work together in creating diverse media, we encourage review and critique, and create opportunities for joint learning. we have been using an environment for collaborative multimedia in several computer science classes, and in this paper, we describe some of the activities that teachers have invented for using the coweb. mark guzdial a psychometric study of the job characteristics scale of the job diagnostic survey in an mis setting maung k. sein robert p. bostrom workshop on academic careers for women in computer science janice e. cuny is software warfare "unthinkable"? or is there a rational basis for its adoption?: a proposal for ethical reflection and action eric v. leighninger planning student laboratories - what isn't new? planning a computer lab is a complex project on any campus. most planners start from scratch because either they aren't aware that others have done it already, they believe that their situation is unique, or they just don't have time to investigate efforts elsewhere. even when planning a second lab, technical, political and environmental constraints change so much that the methods used for the first lab do not apply to the second. columbia university center for computing activities (cucca) is in the process of planning its second computer lab.
the lessons we learned in designing and operating our first lab did not necessarily help us in the second. however, in searching for issues that are common to planning any computer lab, we have found ways to plan for the unexpected: changing technology, new faculty requests, even a change in location. instead of basing our planning on new considerations that differed from our first lab, we ended up defining issues that are common to any lab and creating a framework to accommodate the myriad of changing constraints that make every lab unique. our framework has helped us answer some of the questions that many lab planners face. in designing the space, what layout will make it usable for both a classroom and a laboratory? in the budgeting process, how do the capital costs of installing a lab affect the continuing costs of operating it? what are the operational problems in a large lab? for upgrading facilities and growth, how does the choice of hardware and software fit into the plans for future services? how can the work be scheduled to least interfere with the present level of services? although planning is still a complex process, the guidelines presented in this paper will provide useful information for lab planners at every level. in fact at columbia, where microcomputing has spurred the development of departmental labs, lab planning referrals have become one of the services provided by our group. once the issues are outlined and the possibilities are presented, informed choices can be made to create a facility to meet the needs of the chosen user population. lisa covi managing people who used to be your peers: how things change when you become a boss to your former peers, many things change, because you have more influence, power, and authority than before. some changes are obvious (new office, new phone number, secretary, different meetings, to name some) and others are more subtle. for instance, you have to "think bigger" than before and on behalf of more people. if you have never been a boss before, you will probably find yourself wondering about the role of the boss. is the boss the ultimate facilitator? does the boss work to improve the working lives of his or her staff so that work gets done effectively and efficiently? is the boss the ultimate dictator? does the boss wield power, doling it out in dribs and drabs to chosen staff members? can a boss always stay at any place along the continuum between these two positions? you probably have some pretty solid opinions about the role of the boss, gained from dealing with previous bosses and noting their strengths and weaknesses. now you have a chance to test your beliefs, to try to avoid the bad habits and unwholesome personality traits you have suffered from, to try to be a good boss to people who used to be your peers. in this presentation, i'd like to talk about some of the more subtle changes a person experiences when he or she stops being a peer and becomes a boss. s. webster teaching computational science in a liberal arts environment g. michael schneider daniel schwalbe thomas m. halverson parallel and distributed algorithms: laboratory assignments in joyce/linda bruce s. elenborgen implementing an it concentration in a cs department: content, rationale, and initial impact the increasing use of commercial off-the-shelf (cots) software has created a demand for it professionals---people who build and manage systems assembled from cots components. in fall 1999, the etsu dept.
of cis started a program of study for training it professionals. this it concentration differs from existing concentrations in four key ways: the it concentration emphasizes vb instead of c++; it emphasizes web, database, and networking applications instead of systems software; it puts more emphasis on human issues in computing: ethics, computer-assisted instruction, and systems analysis and specification; it deemphasizes science and math, giving students more opportunity to complete a minor of their choosing. key design criteria for the concentration included making the content practical and attractive; teaching short-term and long-term skills; and minimizing the need for additional faculty. this final concern was addressed by reworking selected courses in computer organization, databases, networking, and software engineering for the concentration. the new concentration should meet the needs of students and employers while improving retention and increasing enrollment. preliminary indications suggest that the it concentration will become the department's most popular concentration. terry countermine phil pfeiffer pattern computer (abstract only) problems of designing, creating, developing, producing and disseminating fundamentally new, so-called pattern computers and the corresponding intelligent information technologies and systems are discussed. research strategy and technical policy for ukraine over the 2000-2010 period, as well as over the nearer-term 2000-2003 period, are also considered. for example, ways to create a dictation and spoken-translation machine are debated. taras vintsiuk estimating and improving the quality of information in a mis most discussions of mis's assume that the information in the records is error-free although it is recognized that errors exist. these errors occur because of delays in processing times, lengthy correction times, and overly or insufficiently stringent data edits. in order to enable the user to implement data edits and correction procedures tailored to the degree of accuracy needed, this paper presents functional relationships between three common measures of data quality. the mis addressed is one where records in a mis are updated as changes occur to the record, e.g., a manpower planning mis where the changes may relate to a serviceman's rank or skills. since each of the updating transactions may contain an error, the transactions are subjected to various screens before the stored records are changed. some of the transactions, including some that are correct, are rejected; these are reviewed manually and corrected as necessary. in the meantime, the record is out of date and in error. some of the transactions that were not rejected also lead to errors. the result is that at any given time the mis record may contain errors. for each of several error control mechanisms, we show how to forecast the level of improvement in the accuracy of the mis record if these options are implemented. richard c. morey computing education for secondary school teachers: a cooperative effort between computer scientists and educators the proposed program will establish a computer education institute for computing, mathematics, and science teachers and supervisors in grades 7 to 12.
the goals of the program are to: (1) provide teachers with a knowledge of programming in basic, and conceptual foundations of computer programming, (2) inform teachers of the variety of uses of computers in teaching science and mathematics, (3) provide an opportunity to observe and interact with youngsters as they learn to program, (4) establish a focus for teachers' future needs in computer education through contact with qualified scientists, and (5) develop and update teachers' knowledge about computers in society. the objectives will be accomplished through enrollment during a six-week summer session in a structured programming seminar and a computer education seminar. intensive practice in a computer lab will develop programming skills. during the fall semester, four guest speaker seminars will be held to provide nationally recognized experts as a resource. these meetings will also provide the program participants an opportunity to discuss their own implementation progress with the institute staff. c. jinshong hwang gerald kulm grayson h. wheatley finding a place for computer-equipped lectures in a lab-rich environment many computer science courses are taught via a combination of lectures and laboratories; many of the lectures use a room equipped with a computer and data projector. but such lecture halls create a problem for instructors: how to use the lecture-hall computer for more than poor approximations to experiences students could have working one-on-one with a computer in the laboratory. one solution is to use computers in lecture halls as tools for collecting and archiving information, much of it generated by students (via methods such as questions, discussions, and group exercises). my initial experience using a lecture-hall computer in this manner was a great success. doug baldwin what information systems graduates are really doing: an update catherine m. beise thomas c. padgett fred j. ganoe being creative in a user services environment or you have to be pretty crazy to work here - a workshop on synectics in this workshop, participants will look at a particular situation, define the problem, study alternatives, consider metaphorical definitions and arrive at solutions. by familiarizing themselves with the concepts of synectics, participants will be able to take internal private connections and define them in a metaphorical sense, thus creating new ways of looking at the world. phillip contic kathy smith pettit computer programs and copyright's fair use doctrine pamela samuelson will i still be writing code when i'm forty? vance christiaanse information systems curriculum - a status report j. t. gorgone j. d. couger g. davis d. feinstein g. kasper j. little h. e. longenecker we won't be getting a christmas card from them this year - kenyon college's client support transition mike ossing christy rigg issues in the law of electronic commerce richard l. field preparation for research: instruction in interpreting and evaluating research alan fekete who wants to learn online? a study of how different demographic groups find the experience of flexible online learning. a first year course in internet & web design is offered to a diverse range of students in traditional and online formats. we hope to draw conclusions about which groups find online learning works for them. 
stuart young ross dewstow mae mcsporran computer science approval/accreditation (panel discussion) a formal proposal for the establishment of an approval mechanism is being developed for presentation to the acm executive committee in february 1982. some preliminary components of this proposal are: ---a year-long study during which materials will be developed, trial visits conducted, and approval parameters identified; ---a volunteer-based structure within acm which will grow into the final body responsible for approving programs; ---initial funding to be sought from outside agencies to initiate the approval mechanism, which will eventually be self-supporting. these and related items will be discussed by the panel. john f. dalphin terry j. frederick william j. macleod david r. kniefel gordon e. stokes semi-formal process model for technology transfer g. glynn bits and bytes karl j. klee organizing a college microcomputer show going to a microcomputer fair or show is fun, exhilarating, and exhausting. organizing a microcomputer fair or show is also fun, exhilarating, and exhausting! despite the work that goes into a show, it is worthwhile if things go well. many arrangements for a microcomputer show are similar to the arrangements for any kind of show, fair, or exhibit. but there are some differences as well. for example, the organizers of a college microcomputer show may be less practiced in anticipating all the decisions that must be made. this is the report of one experience at organizing a microcomputer show and a descriptive checklist of things to consider. james w. cerny intelligent agent for knowledge tree administration joze rugelj recruiting, retaining, and developing it professionals: an empirically derived taxonomy of human resource practices ritu agarwal thomas w. ferratt updating the is curriculum: faculty perceptions of industry needs charles h. mawhinney joseph s. morrell gerard j. morris stuart r. monroe building a foundation gary beach assessing computer technology in europe bruce h. barnes a foundation course in computer science the discipline of computer science has matured to the extent that now it has become necessary to define a foundation course primarily designed for majors in computer science. such a course will include an introduction to the basic areas to which these students are later exposed in their junior and senior years. in particular, the syllabus may consist of five core areas: problem solving including algorithm design, development and testing; data structures including representation and implementation of arrays, stacks, queues, trees, lists and files; computer systems including traditional hardware and software concepts; program design and development including modern programming methodology, debugging and documentation; and finally the syntax and semantics of one or two programming languages. the duration of the course will be one full academic year for a total of 12 quarter or 8 semester credits of work. the course will assume an entry level equivalent to college algebra, computer literacy and collegiate maturity measured by completion of a total of about 32 quarter or an equivalent amount of semester credits of college level work. ali behforooz the privacy act and the australian federal privacy commissioner's functions malcolm crompton assessing the ripple effect of cs1 language choice java has reinvigorated the debate on the `best' language to use for cs1.
much of the controversy centers on the goals of cs1, specific language constructs that either hinder or support the first formal introduction to programming, and even `real-world' relevance. missing from typical discussions is the effect of the language choice in cs1 on cs2 and subsequent courses in the cs curriculum. in all such dialogues, it is important to note the characteristics of the department at hand. while many programs can afford to choose a language (such as scheme) purely for pedagogical reasons, others (if not most), due to pressure from students, industry, and advisory boards, select a language with some market appeal. small departments that serve students who expect an immediate transition to a professional job typically choose a traditional procedural language like c, pascal, modula-2, or a popular oo language like c++, java, ada95, or visual basic. hence, we focus here on the choice of one of these languages, and the resulting effects on students' progression in the cs curriculum. designing courseware on algorithms for active learning with virtual board games we present a method for designing courseware on algorithms for active learning with virtual board games. our goal is to build algorithm courseware that integrates explanation with animation and makes the student an active participant. we give hints for structuring the material into sections and mixing presentation with exercises. we present our ideas for a new form of visual interactive exercise and a cardboard game prototype with which we tested our ideas. nils faltin scaling up computer science with efficient learning (abstract) sandra j. deloatch ernest c. ackermann john urquhart lynn ziegler corporate systems management: an overview and research perspective thomas d. clark art199/cs199 the electronic medium (poster) l. e. moses ifip b. c. glasson a computer literacy mini-course the computer consciousness minicourse being taught at notre dame's computing center began as a response to a growing need noticed by several staff members. computers today affect everybody's life, and people want to know about them. many courses in computer science and in programming are offered at notre dame, both for credit and through the credit-free mini courses sponsored by the computing center. these courses contribute to computer literacy on campus, but they cannot meet the needs of everyone. there exists a group of people who don't necessarily want to learn programming or even to learn how to use computers. they simply want to know what they are all about. this group may include students in non-technical disciplines who cannot schedule a programming class into their packed curricula. possibly among these people are faculty or staff members who completed their educations before computers were common. also included are secretaries who are being asked to give up their typewriters in favor of computers dedicated to word processing. they want to understand the machines. teenagers who know they will have a head start in college if they understand computing are part of this group. our computer consciousness minicourse is for these people. victoria green microcomputers in educational and research environments: their management, acquisition, upgrade, and maintenance jean f. coppola francis t. marchese an international model for curriculum adaptation and implementation (poster session) comfort fay cover robert d. campbell karl j. klee international perspectives: wiring the wilderness in alaska and the yukon seymour e. goodman james b.
gottstein diane s. goodman finding out what people really think about computer use at a university thomas h. bennett protecting an invaluable and ever-widening infrastructure stephen j. lukasik lawrence t. greenberg seymour e. goodman focus of the iccp the iccp is sharpening its focus on the goal of professionalism through certification. in may, the directors set several activities in motion that are based upon agreement of principle. each activity encourages the cooperation of all the professional associations and moves the information systems industry toward professionalism. current board members will be joined by members elected by and from current certificate holders. this action is based on the principle that there is now an adequate number of certificate holders to accept the responsibility to at least share in their professional certification program. roland d. spaniol toward an ideal competency-based computer science teacher certification program: the delphi approach the downward migration of computer science courses from university to secondary and even junior high school level is accelerated by the increasing computer usage in schools and the increasing demands of both parents and students for quality computer education. teacher training is a major vehicle to the success of this migration. however, at this time, there is no consensus concerning how the secondary school computer science teacher should be certified and what should be included in the study of a computer science teacher certification program. this paper collects data from various computer expert groups through the use of delphi technique to provide valuable guidelines for establishing a computer science teacher certification program as well as a model curriculum based on the minimum competency required of a successful secondary school computer science teacher. j. wey chen a semester-long project in cs 2 two years ago, like many others, our computer science program made the switch from pascal to c++. as we had in the past, we continued the practice of teaching some principles of software engineering in the second course. because students were working primarily on small projects, they were not seeing the need for the design techniques i was presenting. i decided to devise a semester-long project through which i could illustrate not only the use of standard data structures but also the need for employing principles of software engineering. christine shannon european experiences with software process improvement assessment models used include spice (iso/iec tr 15504) [1] and software engineering institute's cmm1 [2] (one organisation also achieved iso9001 certification). fran o'hara instructional development of computer workshops at indiana university diane jung-gribble math proficiency: a key to success for computer science students a computer science aptitude predictor was administered to students enrolled in a first technical course in computer science to determine potential for success. the study revealed significant differences in the scoring between students who withdrew from the course and those students who did not. the causes for the differences all related to the students' mathematical background: high school performance, previous computer science education, and the number of college mathematics courses taken. john konvalina stanley a. wileman larry j. stephens gauging the risks of internet elections deborah m. phillips hans a. 
von spakovsky from the president: melissa's message barbara simons the feasibility of personal computers as an only computer resource for a computer science program this paper reports on the experiences at union university in using a mini-computer based time-sharing facility versus a loosely-coupled micro-computer based facility. these two facilities are the only computer resources used by two disjoint academic populations. comparisons are drawn on acquisition costs, staffing, and operating costs. the micro-computer system is an inexpensive, viable alternative. geof goldbogen g. h. williams continuing education services of the ieee computer society recognizing the need of its members to keep themselves up-to-date in the rapidly advancing field of computer science and engineering, the computer society has for years devoted substantial effort to providing continuing education service. one can identify four major components in this service. they are: 1. the chapter tutorial program, 2. the professional development seminars, 3. the distinguished visitors program and 4. the tutorial publications. each program provides a distinct type of service; together they meet the demand of the membership in large measure. willis k. king paradigm shift: distributed computing support - a restructuring of academic computer support l. dean conrad information systems: a disciplined approach to design this paper discusses an approach to information system design. it discusses specific factors which should be considered and the steps which should be followed during the phases dealing with study of the situation and analysis of requirements, and external design. for illustration of this disciplined approach, the paper provides examples of organizations such as a medical/dental office, a pharmacy, a retail store, and a repair shop. details of typical requirements are then derived for these example organizations, followed by a proposal for an external design configuration. s. imtiaz ahmad a microcomputer hardware course jayne peaslee survive and thrive at a job fair jessica ledbetter software development methodologies and traditional and modern information systems danny t. connors introducing java ed epp a report on industry-university roundtable discussions on recruitment and retention of high-tech professionals the purpose of this paper is to synthesize the proceedings of the 1998-1999 and 1999-2000 technology and commerce roundtables on the topic of the "recruitment and retention of high-tech professionals" and to raise issues and challenges which have been generated from these discussions. the paper will address the responses of corporate partners, students, and the university faculty with respect to the recruitment and retention of it professionals, skill and knowledge requirements for it personnel, the factors important to mis graduates in selecting career opportunities, the impact of commercial off-the-shelf software on it skill requirements, and it compensation strategies. mary sumner road crew: students at work lorrie faith cranor concrete teaching: hooks and props as instructional technology owen astrachan the myth of the awesome thinking machine c. dianne martin some suggestions for implementing closed laboratories in cs1 ken r. little a tale of two high school computer science programs and how the acm model high school computer science curriculum may shape their future (abstract) david w. brown michael a. sheets randy l. myers jeremy a.
freed allan cameron patricia amavisca theresa cuprak brian pollack chris stephenson an experience is worth 1k words an introductory computer science course is presented which uses new techniques appropriate for a liberal arts college. students learn standard topics by means of a series of guided labs in which they are active participants. the students learn to question, analyse, and construct examples, thereby acquiring the means for further inquiry and understanding. irrelevant stumbling blocks are minimized in the hope that the positive learning process will be something they continue on their own. marjory baruch contextual programming (doctoral symposium - extended abstract) robert j. walker a senior project course in a computer and information science department the faculty and student viewpoints of the senior project course in the core curriculum of the computer and information science department at njit are presented. each viewpoint is examined along with its impact on the mechanics of the course. the changes in course mechanics are related to the growing student population in the department. michael a. baltrush computer crime donn b. parker susan h. nycum graduate information systems curriculum for the 21st century john t. gorgone cs 1 closed laboratories = multimedia materials + human interaction david mutchler claude anderson andrew kinley cary laxer dale oexmann rimli sengupta frank h. young teaching computer science: a problem solving approach that works v. h. allan m. v. kolesar a course in software reuse with ada 95 courses in software reuse based on ada have been taught for several years in the department of computer science at north carolina a&t state university [2][5][6]. the course materials have been updated and adopted for ada 95 under a curriculum development grant from the defense information systems agency. the new course forms a semester-long undergraduate course titled "software reuse with ada 95". in this course, software reuse levels, general criteria, and software development with reuse are overviewed. software development for reuse and object-oriented design techniques that support reuse are taught. ada 95's object-oriented features and how to integrate these features to develop reusable artifacts are examined. reuse library and management techniques are discussed. students are trained as designers, implementers and clients. they practice the object-oriented software development process by implementing a team project. huiming yu changing user populations computing services in the form of assistance to users were first established in colleges and universities to assist professors and graduate students in writing computer programs usually in fortran and usually to solve scientific problems. these services evolved to include non-programmers---those who were using a higher level, non-procedural language usually to solve engineering problems and social science statistical problems. as software has improved to serve a wider audience of computer users, the typical user services have expanded. today user services addresses the needs of many people using computers to manipulate words. services have changed to include micro or personal computers as well as mainframes. many customers are not doing research but are merely trying to solve problems of office or personal drudgery. administrators who are querying and preparing reports are now among our customers.
many of our customers of ten and twenty years ago are now very self-sufficient users who have incorporated computing into the curriculum and have produced computer-savvy younger faculty who are performing very sophisticated research using computers extensively themselves in the laboratory for data acquisition. these faculty are now gathering data faster than the central mainframe can process and reduce that data. thus the character of research has changed in one generation. these changes in user populations have an enormous impact on people working in computing centers providing user services. the technical level of our staff must be greater than it was ten years ago. the sophisticated researcher needs much more than help debugging a computational fortran program. his problems involve data acquisition in real-time, storing and sending that data, and processing it in larger and larger arrays. all users from the researcher to the novice with a simple personal computer have data communication needs and problems which are often not trivial to resolve. administrators have problems which are often very complex and require a combination of knowledge of data base, communications, query languages, report writers, and programming not usually found in the traditional user services staffs. graphical output is becoming a common requirement, and as the experience of our users increases, the simple is not sufficient. word applications are becoming more common than numerical applications, requiring a different kind of staff expertise. as the computer becomes a common tool for all students, faculty, and administrators, the skills and tasks required for user services are changing. unless we are ready to provide the requisite services, our value to our institutions will diminish and pass away. barbara b. wolfe road crew: students at work lorrie faith cranor acm panel on hacking offers new directions jim adams using software to solve problems in large computing courses mark j. canup russell l. shackelford understanding values and biases in i.t. frans a. j. birrer a discussion of cybersickness in virtual environments an important and troublesome problem with current virtual environment (ve) technology is the tendency for some users to exhibit symptoms that parallel symptoms of classical motion sickness both during and after the ve experience. this type of sickness, cybersickness, is distinct from motion sickness in that the user is often stationary but has a compelling sense of self-motion through moving visual imagery. unfortunately, there are many factors that can cause cybersickness and there is no foolproof method for eliminating the problem. in this paper, i discuss a number of the primary factors that contribute to the cause of cybersickness, describe three conflicting cybersickness theories that have been postulated, and discuss some possible methods for reducing cybersickness in ves. joseph j. laviola buying computers for your school: a guide for the perplexed elliot soloway surveyor's forum: related information oris d. friesen the first programming paradigm and language dilemma susan s. brilliant timothy r. wiseman information systems and organizational change peter g. w. keen professional literacy: labs for advanced programming courses our contention is that there now exists a considerable body of lab exercises that may be used in conjunction with introductory courses.
there are fewer models available for instructors of more advanced programming courses, especially those courses which attempt to introduce students to current practices in software engineering. in this paper, we report on our experiences in building a second-year programming course that includes a significant lab and project component. these labs and projects are the vehicle we use to introduce students to the world of professional practice in software development. henry a. etlinger michael j. lutz articulation reflections richard h. austing the social implications of computerization: making the technology humane m. c. mcfarland inside risks: risks of panic lauren weinstein peter g. neumann the em88110: emulating a superscalar processor assembly programming is a very important topic in teaching computer architecture. current computers include special techniques to improve performance, such as pipelining and issuing multiple instructions per cycle. but these kinds of computers are difficult to use in laboratory work because of the great amount of detail in the target computer architecture that is not relevant to beginners. hence, we decided to build a configurable emulator of a superscalar processor to create a wide set of laboratory exercises, from the simplest, which uses the computer as a serial processor, to the most complex, which uses the full set of performance improvements of a superscalar computer. most of the computer parameters can be set by the student or the teacher, providing a virtual machine that is easier to use. students can do their laboratory work without taking into account the additional problems generated by a real computer. jose m. perez villadeamigo santiago rodríguez de la fuente rafael mendez cavanillas m. isabel garcía clemente accreditation: does it enhance quality? accreditation, considered to be the one formal mechanism for assessing quality in the postsecondary environment, focuses on determining and encouraging acceptable levels of educational quality. in particular, specialized program accreditation is purported to enhance program quality. this exploratory study used a nationwide mail questionnaire to a stratified random sampling of 100 department heads of the units administering baccalaureate computer science programs. the purpose was to gain an understanding of how computer science programs and departments were related to selected indicators of faculty and program quality. several differences and some similarities exist between the accredited and non-accredited groups. the median of the data for each indicator suggested a quality breakpoint to be used in defining two indices. it was found that for each of the two indices, the accredited group outperformed the non-accredited group by thirty percent. the implication is that computer science programs that follow accreditation guidelines have the potential for increasing their quality indicators. evelyn p. rozanski is your degree quality endorsed? defining what we mean by a quality degree is a difficult enough undertaking. having to demonstrate that a degree program is adhering to this definition is at least as challenging. academics already "know" that they are doing a good job with the teaching and learning aspects of their courses and so any suggestion that the processes involved need monitoring and auditing commonly meets with a combination of disdain, calls of "time-wasting" and "meddling" and general derision.
at the same time, there is increasing pressure on universities around the world to not only be accountable for the quality of their products but to also demonstrate that processes that reflect the efforts to achieve quality be visible and in place. in this paper we describe a new web-based database system which has been developed to monitor and track aspects of the quality of our degree offerings. particular attention has been paid to ensuring that the important facets which define or demonstrate quality are both highlighted and managed easily. isaac balbin through a glass, darkly: methodologies: bend to fit? robert l. glass a profile of today's computer literacy students: an update jean buddington martin kenneth e. martin personal data privacy in the asia pacific: a real possibility jim c. tam the development of an instrument to assess the impact of media use and active learning on faculty performance mark a. serva mark a. fuller computers and society -- another look at that general purpose course david feinstein david langan cefriel: an innovative approach to university-industry cooperation in information technologies m. decina conan and the jargonauts bill neugent current issues in undergraduate student research ann e. kelley sobel mario guimaraes student created user manuals for a course on programming languages kenneth g. messersmith hr strategies for is professionals in the 21st century eric richens risks to the public in computers and related systems peter g. neumann societal and technological problems of computer use in industry qi jiang danny kopec a new wave in software certification virginia a. reilly commitment development in software process improvement: critical misconceptions _it has been well established in the software process improvement (spi) literature and practice that without commitment from all organizational levels to spi the initiative will most likely fail or the results are not far-reaching. the commitment construct is explored and three forms of commitment are introduced: affective, continuance and normative commitment. analysis shows that current models of commitment development lack scientific validity and are based on four misconceptions: (1) the assumption of linearity of the human cognitive processes (i.e., commitment in this case), (2) the controllability of this process, (3) the notion of a singular commitment construct, and (4) the sole utility perspective on the commitment phenomenon. implications of these findings for spi research and practice are discussed._ pekka abrahamsson helping computer scientists in romania as a professor in the computer science department of the polytechnical institute of bucharest, romania, i would like to bring to your attention the effort of a group of professionals, researchers, students and professors around the world. this effort, called "free unix for romania", has been initiated and coordinated by marius hancu, an advisor at the parallel architectures group, centre de recherche informatique de montreal, montreal, canada. irina athanasiu forum: only the names have changed diane crawford central government computing agency in less developed countries this paper is a result of research conducted in the last two years to improve government computing systems in developing countries. various methods were used to introduce and establish computing systems in developing countries without any systematic approach.
however, many of the governments in these countries soon realized the need for a central agency to regulate and monitor computing systems and their usage. many factors contributed to the decisions made in creating a central government computing agency. asad khailany competing in computing (poster session) fredrik manne culture shock: transferring into the professional work force connie e. denton retraining: is it the answer to the computer faculty shortage? this paper reports on the experiences acquired in initiating a summer retraining program to prepare college faculty to teach undergraduate computing. the distinction between formal and informal retraining, the benefits of formal retraining, and the justification for credentializing such programs with a master's degree are also discussed. william mitchell about this issue… adele j. goldberg tete: an intelligent knowledge testing engine (poster) adam wojciechowski jerzy r. nawrocki karolina kups michal kosiedowski fifth annual ucla survey of business school computer usage this year's survey takes a look at where business schools are in terms of computerization, providing useful information for deans and others involved in making computer-related allocation decisions and program plans. jason l. frand julia a. britt question time: true leadership craig e. ward alan lawson distance learning lisa neal computer science education in a saudi arabian university: a comparative study of its b.sc. program the computer science curriculum at a university in the kingdom of saudi arabia is described and then compared with the csac/abet accreditation criteria. the comparison is needed to determine the relevance of the curriculum in view of the dynamism and perturbations arising from the reality of the real world and csac/abet criteria. the curriculum emphasizes breadth and depth in the main areas of computer science education and makes systems and systems development its main subject area of expertise. the policy to adopt breadth and depth was based on the fact that saudi arabia is a young and rapidly developing country and computer science education in the country is in its infancy. the pre-college curriculum in the kingdom is lacking in computer science. in addition, computer science is a rapidly developing field. the graduates from this program were expected to be pioneering professionals in the emerging market of computer employment in the kingdom. the curriculum attempts to serve as a catalyst, providing a platform for discussion, which hopefully will result in feedback to us. we also hope that the curriculum will serve as guidance to third world countries which are in similar circumstances, with limited capabilities and resources, and which may want to address the critical issues involved in computer science education. abdulmalik s. al-salman jacob adeniyi an industry perspective on computer science accreditation (abstract) john impagliazzo standards and you (abstract) anthony gargaro history in computer science education: across the curriculum initiatives john a. n. lee inside risks: insecurity about security? background a few highly visible cases of computer system exploitations have raised general awareness of existing vulnerabilities and the considerable risks they entail. peter g.
neumann ask jack: career line q&a jack wilson facilitating localized exploitation and enterprise-wide integration in the use of it infrastructures: the role of pc/lan infrastructure standards two often-contradictory dilemmas confront firms in their efforts at promoting it-based innovation: facilitate localized exploitation of the it infrastructure within individual business units, but also ensure enterprise-wide integration of it innovation initiatives to facilitate business applications that exploit inter-unit synergies. therefore, developing an it infrastructure that is conducive to both localized exploitation and enterprise-wide integration is important. in this context, it infrastructure standards are an important element of the it infrastructures that shape managers' perceptions about the responsiveness of their infrastructures to their firms' it innovation needs. this study examines the role of pc/lan infrastructure standards in facilitating attention to localized exploitation and enterprise-wide integration. it identifies three attributes of pc/lan infrastructure standards: comprehensiveness, flexibility, and level of enforcement. through rich case studies in four bureaucratic organizations, the study develops insights about how pc/lan infrastructure standards influence perceptions about the responsiveness of the pc/lan infrastructures for localized exploitation and enterprise-wide integration. the emergent findings point to the influence not only of the standards, but also of the organizational context within which these standards are evolved, implemented, and used. the study offers valuable insights for both is practitioners and researchers about the use of pc/lan infrastructure standards in organizations. timothy r. kayworth v. sambamurthy viewpoint: stop that divorce! amr el-kadi words from the chairman george shaw attracting and retaining females in information technology courses debbie clayton mary cranston teresa lynch acm forum diane crawford the students' problems in courses with team projects hassan pournaghshband applying the personal software process in cs1: an experiment lily hou james tomayko small is beautiful: the next ten years in university computing e. b. james cooperative learning in simulation harriet black nembhard system construction with object-oriented pictures george w. cherry finding harmony in systems development and user services janis rogainis look at the underside first: brasilia and petersburg bruce sterling conference report: conferences offer insights into how computers may affect our future lorrie faith cranor adam lake motivation and performance in the information systems field: a survey of related studies martha e. myers a practical one-semester "vlsi design" course for computer science (and other) majors robert a. walker in search of design principles for tools and practices to support communication within a learning community stephanie houde rachel bellamy laureen leahy design: the what of xfr: experiments in the future of reading steve harrison scott minneman maribeth back anne balsamo mark chow rich gold matt gorbet dale mac donald kate ehrlich austin henderson is personnel: do they form an occupational community? katherine a. duliba jack baroudi requirements development: stages of opportunity for collaborative needs discovery john m.
carroll mary beth rosson george chin jurgen koenemann using recursion as a tool to reinforce functional abstraction (poster session) raja sooriamurthi electronic mail as a coalition-building information technology one of the most intriguing lines of research within the literature on diffusion of information technologies (it) is the study of the power and politics of this process. the major objective of this article is to build on the work of kling and markus on power and it, by extending their perspective to email. to demonstrate how email can be used for political purposes within an organizational context, a case study is presented. the case study describes a series of events which took place in a university. in the case, email was used by a group of employees to stage a rebellion against the university president. the discussion demonstrates that email features make it amenable to a range of political uses. the article concludes with a discussion of the implications of this case for email research and practice. celia t. romm nava pliskin is technology neutral?: space, time and the biases of communication leslie regan shade practical programmer: academics, and the scarlet letter "a" robert l. glass systems analysis: a systemic analysis of a conceptual model adopting an appropriate model for systems analysis, by avoiding a narrow focus on the requirements specification and increasing the use of the systems analyst's knowledge base, may lead to better software development and improved system life-cycle management. itzhak shemer teaching experimental design in an operating systems class allen b. downey the battle over the institutional ecosystem in the digital environment yochai benkler section 8. another major document maryann lawler balancing cooperation and risk in intrusion detection early systems for networked intrusion detection (or, more generally, intrusion or misuse management) required either a centralized architecture or a centralized decision-making point, even when the data gathering was distributed. more recently, researchers have developed far more decentralized intrusion detection systems using a variety of techniques. such systems often rely upon data sharing between sites which do not have a common administrator, and therefore cooperation will be required in order to detect and respond to security incidents. it has therefore become important to address cooperation and data sharing in a formal manner. in this paper, we discuss the detection of distributed attacks across cooperating enterprises. we begin by defining relationships between cooperative hosts, then use the take-grant model to identify both when a host could identify a widespread attack and when that host is at increased risk due to data sharing. we further refine our definition of potential identification using access, integrity, and cooperation policies which limit sharing. finally, we include a brief description of both a simple prolog model incorporating data sharing policies and a prototype cooperative intrusion detection system. deborah frincke the semiconductor market in the european community the authors discuss the impact of three ec rules of origin on printed circuit boards and on the united states semiconductor industry, which has been the largest exporter to europe. roger a. chiarodo judee m. mussehl the report of the acm long-range planning committee: a summary the committee evaluates acm as it exists today, and makes recommendations for the future.
dorothy deringer david brandin dexter fletcher neil jacobstein dual careers and employment decisions in computer science (panel session) bill marion sue molnar marilyn mays jack mosley a call for early intervention: interview with bill joy john gehl smart instructional component based course content organization and delivery (poster session) hongchi shi yi shang su-shing chen the role of mathematics in computer science education john werth mary shaw abraham kandel bringing vendors and users together: site licensing dilemmas allan r. jones industrial software training opportunities for computer professionals industrial software training is becoming big business. the magnitude of the problem of producing a sufficient number of well-trained software engineers has led to a proliferation of internal corporate training divisions and programs in software education. such programs are becoming an added incentive in the hiring and retention of software professionals. industry is no longer limiting its efforts to developing and teaching specific skill-related courses. individual corporations are offering a software education equivalent to a bachelor's and/or master's degree, sometimes with university credit. while many of these programs involve cooperation between industry and university, others are obvious attempts to provide education felt to be lacking in the university setting. each decade finds new contributions to the computer science education literature on the gap between university education and industrial needs for trained software producers [1, 4, 5, 7, 9, 10]. a capitulation statement on the part of one educator recently appeared in the open channel forum of the ieee computer: "universities should expose students to modern technologies but leave training in the use of these technologies to industry, which should expect to provide months of training to new employees."[12] not all corporations can afford such intensive training programs. in view of our collective software personnel needs, the lack of personnel qualified to develop, package and lead training programs, and the economic investment such programs require, a collective effort should be established. minimally, we need a vehicle for inter-corporate sharing of training products and direct incentives for universities and colleges to educate productive software engineers. nancy martin contributions of the working student judith l. gersting frank h. young message from the chairperson ron anderson from washington: public policy issues ripen with age the computerists of 1947, as creative and foresighted as they were, could hardly have imagined that the industry they helped to mold would one day be forcing technological issues of worldwide consequence. back then the idea was to figure out how to build computing systems; today the idea is to figure out how to live (peacefully) with what we've built. as computing systems grew in popularity and use, so did the public's concerns over the impact these new machines would have on their lives and livelihoods. it would soon become the role of computing organizations like acm to alleviate those fears by educating the public, and later the u.s. government, as to the true power and potential of its technological efforts. however, many computer people today contend that sharing information is not enough and are urging the scientific industry as a whole, and acm in particular, to take a more active role in government and legislative activities.
the technological issues at hand---sdi, trade sanctions, public security, international competition, among so many others---are too sophisticated, complex, and volatile to handle from the sidelines. political issues in the earliest days of the computing era most often involved business maneuvers. edmund c. berkeley, a founding member and first acm secretary (1947-1953), remembers that, even in 1947, there was the hint of "corporate takeover" in the air as he wrote the original acm bylaws with colleagues james russell of columbia university and e. g. andrews of bell telephone. "the first bylaw we wrote stated that dues would be $2 per year," recalls berkeley during a recent interview for communications. "the second bylaw we wrote stated that acm would never become an ibm organization. it was a possibility we all feared at that time because [ibm] was making use of 50 to 75 percent of the computer field. so we wrote up a rule to protect ourselves." public concerns over the social and political implications of this new technology did not surface until computers went commercial and data became public property, remembers mina rees, a member of the first acm executive council in 1947. before then, computer experts were too busy building systems to worry about their long-term effects. "we didn't have any of these bright youngsters around to tell us," she muses. "the [industry] people at that time didn't known about political or public concerns." rees's earliest memory of growing outside awareness of computerization backlash was in 1951 when the u.s. census bureau began automating its operation with eckert-mauchly's univac (see p.832). "that was the first time people became aware that other people might be able to get hold of private information," she says. one way to allay those concerns was through education, and rees was dedicated to spreading the word. "i made a point of accepting [speaking] invitations because, as an educator, i strongly believed the only way we would make progress would be to spread as much information as possible," she says. "there were deep sources of information and sharing that knowledge saved us from having to discover it a second time." diane crawford news track robert fox log on education: teachers and technology: easing the way elliot soloway henry j. becker cathleen norris neal topp netnews dennis fowler running a micro lab: one year later kelly havens searching for software it happens every so often - a faculty member or graduate student turns up at the computing center with a question like this: "do you know of any software which analyzes heat loss in buildings?" or, "have you heard of a program called slam? it's mentioned in an engineering book i'm reading, but there isn't any reference to the source." or, "do you have eispack? gearpack?" the problem you face is not the problem of finding whether programs of the given name or function exist on your system, or of locating the documentation. you already know that you don't have a program analyzing heat loss, or a mysterious entity called slam, or anything with a name like eispack or gearpack. eispack, for example, is available at other universities, but you have never heard of it at this particular moment. what do you do next? you need a 'documentation system' for outside software, to help you answer the questions \\- does any software with the given name or description exist? \\- who has it, who wrote it, who distributes it? \\- can it be acquired for free or purchased at reasonable cost? 
and lastly, if the answers to the above are favorable, \\- should you acquire it? is there sufficient interest among users to justify the project of implementing it on your system? rebecca r. deboer peer review lawrence w. westermeyer a system for program visualization and problem-solving path assessment of novice programmers this paper describes an educational programming environment, called animpascal. animpascal is a program animator that incorporates the ability to record problem-solving paths followed by students. the aim of animpascal is to help students understand the phases of developing, verifying, debugging, and executing a program. also, by recording the different versions of student programs, it can help teachers discover student conceptions about programming. in this paper we describe how our system works and present some empirical results concerning student conceptions when trying to solve a problem of algorithmic or programming nature. finally, we present our plans for further extensions to our software. maria satratzemi vassilios dagdilelis georgios evagelidis privacy protection in germany: regulations and methods andreas böhm a student's perspective of a capstone course the portrait of team dynamics in a capstone course is an intriguing saga rarely witnessed in full by the course's instructor. this student's perspective into one such course attempts to provide insights into the composition of the group and its effects on the course's outcome. matthew c. davis teaching computer science: experience from four continents mats daniels judith gal-ezer ian sanders g. joy teague on approaches to the study of social issues in computing this paper identifies and analyzes technical and nontechnical biases in research on social issues in computing. five positions---technicism, progressive individualism, elitism, pluralism, and radical criticism\\---which reflect major streams of contemporary social thought are examined. the analysis of positions documents the close relationship between research and policy formation and reveals the misleading and dangerous character of the presumption of scholarly objectivity in research on social issues. abbe mowshowitz a bachelor of science in information technology: an interdisciplinary approach rensselaer has launched a new bachelor of science degree program in information technology [4,5]. this degree is an alternative to the more traditional computer science or computer systems degrees that rensselaer continues to offer. it focuses on the application of computing and communications technologies in a student-chosen application area called a second discipline. the expectation is that a company doing business in the second discipline or closely related area will employ a student completing this degree. this paper describes the motivation behind the new degree program and its interdisciplinary approach. it also presents the organization of the curriculum and its requirements. david l. spooner acm forum diane crawford quality circles in three data processing organizations the japanese participative management philosophy of quality circles (qcs) recognizes that the people doing the work know more about how to improve the way work is done. quality circles provides a structure in which workers, after being trained, identify, analyze and solve work related problems, implementing solutions where possible. united states manufacturing companies caught on to qcs in the 1970s. 
by the early 1980s quality circles were being implemented in school systems, governments, hospitals, banks and other white-collar or service areas. one of the later areas to use quality circles was data processing. this paper first presents an overview of the philosophy and structure of quality circles; it then features three viewpoints from individual organizations that pioneered quality circles in the data processing field in the united states. these organizations are the us army darcom automated logistics management systems activity, the federal systems division of ibm and mcdonnell douglas automation. each organization gives a synopsis of its experience with implementing qcs, its degree of achievement with the process, and recommendations on pitfalls to avoid when establishing quality circles. when these organizations started quality circles, there were no known role models in a data processing environment to learn from. this paper attempts to provide specific data processing application examples so that companies contemplating quality circles will have appropriate information to assist them in making implementation decisions. elizabeth godair louis sportelli mahlon mccracken doris eschbach acm forum robert l. ashenhurst theory-w software project management: a case study the search for a single unifying principle to guide software project management has been relatively unrewarding to date. most candidate principles are either insufficiently general to apply in many situations, or so general that they provide no useful specific guidance. this paper presents a candidate unifying principle which appears to do somewhat better. reflecting various alphabetical management theories (x, y, z), it is called the theory w approach to software project management. theory w: make everyone a winner. the paper explains the theory w principle and its two subsidiary principles: plan the flight and fly the plan; and, identify and manage your risks. to test the practicability of theory w, a case study is presented and analyzed: the attempt to introduce new information systems to a large industrial corporation in an emerging nation. the case may seem unique, yet it is typical. the analysis shows that theory w and its subsidiary principles do an effective job both in explaining why the project encountered problems, and in prescribing ways in which the problems could have been avoided. b. boehm r. ross developing efficient relationships with i/s vendors: a contingency framework this paper proposes a contingency framework for answering the questions: what is the range of options available in outsourcing relationships? and, how can the best, most appropriate relationship be identified? answers to these questions are important because of the considerable resources that will be devoted to outsourcing in the future. a case study analysis is proposed for a test of the framework. robert klepper reexamining organizational memory mark s. ackerman christine a. hadverson an overview of the wpi benchmark suite david finkel robert e. kinicki jonas a. lehmann cache conscious programming in undergraduate computer science alvin r. lebeck closed-laboratory courses as an introduction to the information science curriculum the information science department at the university of arkansas at little rock (ualr) is just two years old. the challenges and choices of implementing a new curriculum are many. the department has chosen closed-lab courses to introduce its curriculum.
early indications are that this approach will be a successful one for both students and the university. nicholas karlson turning liabilities into assets in a general education course gloria childress townsend software maintenance notes g parikh accreditation in the computing sciences (panel session) a joint task force of the acm and the ieee computer society is meeting regularly to discuss issues relating to accreditation or approval in the computing sciences. in addition to considering various mechanisms to implement the important qualitative review and certification, the joint committee is developing a preliminary set of computer science program requirements. increasing requests are being made to the professional societies to provide guidance in computer science programs. while certain guidance and evaluation mechanisms exist, along with agencies to administer them, they tend to be directed to specialized programs, and the field is so broad that a wider view must be taken. it is estimated that as many as 500 programs not presently served by existing mechanisms and agencies would benefit from such guidance. this panel will discuss some of the issues relating to implementation of accreditation/approval as well as quantitative criteria for computer science programs that provide competency in the profession. audience participation and discussion will be encouraged. john f. dalphin michael c. mulder tom cain george davida gerald l. engel terry j. frederick norman e. gibbs the role of inducements in the recruitment program of u.s. computer companies la verne hairston higgins anatomy of an introductory computer science course an introductory computer science course is frequently the most difficult course in the curriculum to teach. computer science educators must stay abreast of rapidly changing trends, textbooks, technology and teaching techniques. this paper provides an overview and perspective of introductory computer science courses, surveys some trends, and presents new alternative approaches regarding organization, foundations and material. it is based on the premise that the introductory course should create strong foundations upon which students can build, and that the curriculum should teach students to build software systems which people use and maintain, not just toy computer programs. the paper presents personal views and insights, motivates underlying concepts, and provides many useful suggestions which have been successfully employed in such introductory courses. peter b. henderson employment trends in computer occupations computer programmers and systems analysts are expected to be among the fastest growing occupations in the economy through the 1980s, according to the latest projections of employment published by the bureau of labor statistics. this growth is the result of a rapid increase in the number and types of computers in use and the rapid development of new computer applications. over the next decade, more systems programmers will be needed to develop the complex operating programs made necessary by higher level languages and complicated computer configurations, as well as to link the output of different programs from different systems. demand for applications programmers also will rise, despite increasing efficiency, as more computers are used. continued strong demand for systems analysts will be assured by the need to reduce computer systems problems and to develop more sophisticated and complex computer operations.
although employment of computer professionals is expected to rise throughout the economy, demand will be strongest in the services sector, primarily in health care, education, and data processing, as well as in banking. l. r. cottrell a software management system this paper discusses the software management system (sms). sms was designed, developed, and implemented at gsu user services to address the challenges facing our computer system and user community. sms is organized into a group of "protocols", as they will be called in this paper. a "protocol" is a standard formal set of actions to be taken which may include executive command language (ecl) procedures, file editor procedures, compilations, linkages, tape handling, documentation instructions and communications. darrell w. preble computing with geometry as an undergraduate course (poster session) ching-kuang shene john lowther the simple ways we deal with complexity the cs 2 course is a turning point in the undergraduate cs curriculum. it is often the first point at which the student encounters formal analysis and techniques, proofs of correctness, abstract data types, and a host of other new ideas (as well as data structures!). but behind the myriad complexities are a few simple ideas. i believe far too few students see the simplicity, because they become too mired in the complexities. in this talk i will try to describe an overarching framework in which we can discuss the few simple ideas that are behind much of the content in the cs 2 course. tim budd designing information system task teams a theoretical model is suggested for designing information systems task teams based on a contingency approach using characteristics of the task, the environment and types of people as contingency variables. a fourth variable, systems design phase, is also considered for suggesting differences in the staffing and operations of task teams. the basic assumption is that staffing, ways of operating, ways of reaching decisions, and measures of task team effectiveness will vary during the systems development process. richard leifer kathy b. white practitioner education - degrees of difference? tony clear characteristics of programming exercises that lead to poor learning tendencies: part ii in most introductory programming courses tasks are given to students to complete as a crucial part of their study. the tasks are considered important because they require students to apply their knowledge to new situations. however, the tasks have often not been considered as a vehicle that can direct learning behaviours in students. in this paper attention is paid to features of programming tasks that led to the following three poor learning behaviours: non-retrieval, lack of internal reflective thinking and lack of external reflective thinking. the data gathered for this study was provided by students and tutors, and describes the students' engagement in the tasks. the paper concludes with a list of generic improvements that should be considered when formulating programming exercises to minimise poor learning behaviours in students. angela carbone john hurst ian mitchell dick gunstone a computer science syllabus for gifted pre-college students a computer science syllabus was designed for and taught to a group of gifted and talented high school students. a core course included segments on programming in lisp, software systems, digital hardware, theoretical computer science, and artificial intelligence.
in addition, some students elected an independent programming project course. it was found that gifted pre-college students can be taught computer science, as opposed to merely computer programming. richard e. korf computers and science education: issues and progress (presentation abstract) unless a systematic plan for the use of computers in education is jointly developed by educators, business and the government, our technological leadership will be seriously affected. such a loss of leadership would harm our economic strength, our defense capability and the quality of life in general. for example, new job opportunities will increasingly fall in the field of knowledge production. unless we teach the knowledge and skills for using computers to the poor and disadvantaged, those jobs will go to the already affluent, and the promise of equality of educational opportunity will have little substance. improved educational attainment, enhanced intellectual power, increased productivity, and support of the u.s. computer industry are all likely outcomes of reasonable federal initiatives. a new electronic educational publishing industry, using computer-related technologies, could be stimulated. the skills connected with the computer will become a basic skill ("the 4th 'r'") that will give people vastly amplified intellectual resources. simulation of complex events is but one of the ways that new mental adventures and tools for problem solving will be put at our disposal. some of the educational alternatives will be discussed. a set of specific recommendations for federal action and a report on progress will be presented. joseph lipson licensing software professionals: where are we? laurie honour werth standards and standardization on the eve of a new century enrico zaninotto copyright's fair use doctrine and digital data pamela samuelson changing support services in the university system of georgia computer network i am ray argo and i represent the university system of georgia computer network (uscn). in my official capacity i am the manager of network services technical support - better known to you as user services. today i am going to talk with you about the uscn and "changing support services" that have been required to keep pace with our user demands. i am not here to enlighten you or necessarily provide words of wisdom on how user support should be handled. i am here to share our experiences in supporting users in a large distributed network, to tell you how we support them today, and how we think they will need to be supported in the future. more specifically i will ... • give you a brief historical perspective of the uscn • describe our user support philosophy • describe our current support activities • describe where we think we are headed with support services • describe what we think it will take to get there • summarize comments ray argo redesign of an a.s. degree in computer science to meet emerging national standards william c. harris leon p. johnson what's an mis paper worth?: (an exploratory analysis) the article performs an economic analysis of mis salary data and survey data from a variety of sources in order to estimate the marginal value of a published mis refereed journal article to its author, a faculty member. the three main conclusions it reaches are: • a published mis refereed journal article can be worth approximately $20,000 in incremental pay, over an assumed five-year lifetime, to a faculty member.
that value is derived from two sources: (1) the ability such a paper gives the author to move to higher-paying institutions, and (2) the incremental impact of the paper on an individual's compensation across institutions having the same teaching load.• what constitutes a valid "article" is institution-dependent, so the faculty member must ensure that the publication outlets targeted for such an article are consistent with the institution's objectives and quality criteria.• in order to realize the actual marginal value of a publication computed in this paper, a faculty member must be willing to relocate, perhaps on a reasonably regular basis, to ensure that he or she does not fall victim to the salary inversion phenomenon that is currently widespread within the mis academic community.the paper does not attempt to quantify the impact of a publication on a number of other potential contributors to its author's income, including the long-term possibility of becoming an eminent scholar and the possibility that publications will lead to outside income opportunities. as a consequence, it is quite possible that the actual value of an article may exceed the paper's estimate for some individuals. t. grandon gill computers in education at the stevens institute of technology edward a. friedman afips secondary education curriculum in information technology this session includes a report on work in progress by a committee developing an interdisciplinary computers and information-based course/curriculum intended for all students at the secondary level. content, objectives, and a topical outline will be discussed. audience reaction and input are requested. in the fall of 1983, the afips (american federation of information processing societies) education committee funded a project to develop a technologically oriented, interdisciplinary course /curriculum for all students at the secondary level. a steering committee met in september 1983 and recommended that a working committee be formed to produce recommendations on course/curriculum. richard h. austing leep3 - distance education tips marsha woodbury surveyor's forum: augmentation or dehumanization? christiane floyd reinhard keil erhard nullmeier revolutionizing the traditional classroom course is the computer the device that can change the campus as we know it? roger c. schank microcomputer software piracy and the law t. richards r. chan response to the federal trade commission's proposed ruling on standards and certification in december 1978, the federal trade commission issued a notice of intention of rulemaking in regard to the matter of standards and certification. in cooperation with the american national standards institute, of which acm is a member, the acm standards committee prepared a response to that notice and submitted it to the commission in april 1979. the response gives a summary of the acm standards committee position on a standards regulation and affords insights into the process by which procedures evolve in this area. for this reason, the response is reproduced here as a report. john a. n. lee a twenty year retrospective of the nato software engineering conferences (panel session): my thoughts on software engineering in the late 1960s david gries survey in software development p. huang a trip report on creativity & cognition 1999: an insider's report on a far out conference ben shneiderman tom hewett section 11. international standards at the crossroads george t. willingmyre where are we? 
the year 2000 and computer science ken robinson a survey of first computing course students (poster): new findings and their implications for the curriculum jeanine meyer stuart a. varden computerization of the campus coping with small computer systems (introductory remarks) although the cover topic is computerization of the campus, the specific topic really is coping with small computers. i am not certain that coping with small computers really is a subset of computerization of the campus. unquestionably, we have all been computerizing our campuses, willy nilly and often without an overall plan, for the past five years. many of us have used mainframes, mini- mainframes, off-campus services, microcomputers and combinations and permutations of all of these. drexel university, in addition to all of the above approaches has undertaken a massive computerization of its campus by mandating the acquisition of microcomputers by all freshmen in the 1983-84 academic year. the problems encountered are the same as with any large-scale academic undertaking: faculty involvement, money, logistics. at drexel, we have been fortunate in having an enthusiastic faculty, generous support from the pew memorial trust for faculty development, and a group of healthy, young administrators and staff members who have done yeoman service in getting the program moving physically. but of all of these, i would stress faculty support, faculty development, faculty participation. from the moment the decision was made to mandate microcomputers, the critical ingredient in the program's success was and is faculty involvement. the faculty defined academic needs, recommended the selection of the hardware, and has reviewed every proposal for curriculum development and integration of the microcomputer into existing courses. bernard p. sagik sneaking in extra material (panel session) daniel joyce the european community and information technology the world has watched eastern europe erupt into such political turmoil that historians are expected to call this period the revolutions of 1989. economic evolution was also underway as the continent progressed toward a single european market. the goal---a market without national borders or barriers to the movement of goods, services, capital and people---was first outlined over 30 years ago by the 12 countries which became members of the common market. in the mid 1980s, the effort was renewed when these same countries approved an ambitious plan outlining hundreds of legislative directives and policies that would harmonize and re-regulate those of the member states. the measures are drafted by the european commission, voted on by the council of ministers, amended if necessary, and then assigned budgets by the parliament. they include competition law, labor law, product regulation and standardization, taxation and subsidies, and quota and tariff guidelines. in 1987, the single european act created a timetable for the passage of legislation with a formal deadline for the removal of barriers by december 31, 1992, hence the term europe '92 (ec '92). but many have described ec '92 as a process that will continue throughout the 1990s. the ouster of communist leaderships throughout eastern europe, however, has raised unexpected questions about the participation of the eastern countries, and this could alter or delay the process. nevertheless, the changes have begun and are taking place during the information revolution. 
it is therefore natural to ask what impact ec '92 will have on the computer industry. inevitably, several of the directives and policies relate primarily, and many secondarily, to information technology. table 2 lists the policies in effect and those being proposed. in the following pages, communications presents several points of view regarding the impact of ec '92 on the information technology market in europe. as of july 1988, the european information systems market was estimated at $90 billion by datamation magazine and is expected by many to be the fastest growing market this decade. but during the last ten years, european-based computer companies have had difficulty keeping pace with american and japanese firms. in 1988, european companies managed only a 20 percent market share on their own turf, according to market researcher international data corporation. not much had changed since 1982, when their market share was 21 percent. as reported in the wall street journal last january, european computer companies have been hindered by lack of economies of scale, narrow focus on national markets, and difficulty in keeping pace with japanese and u.s. product innovations. but the occasion for the journal article was the news that germany's siemens ag was merging with the ailing nixdorf computer ag. the result would possibly be the largest computer company based in europe, and the sixth or seventh largest in the world. and in october of 1989, france's groupe bull announced the purchase of zenith electronics corporation's personal computer unit. bull claimed that it would become the sixth largest information service company in the world. such restructurings have been predicted with the approach of ec '92, as corporate strategies would begin to take into account directives and trade rules regarding the computer and telecommunications industries. smaller european and american computer companies are anticipating battle with giants like ibm and dec, which have long-established european divisions or subsidiaries. ibm has been the leader in mainframes, minicomputers, and personal computers, but it is expected that all computer companies, european-based or not, will face greater competition in europe. the netherlands' nv philips, the largest european semiconductor and consumer electronics company, says it has been preparing for ec '92 since the 1970s. and north american philips chairman gerrit jeelof has claimed company credit for initiating the 1987 european act. in a speech delivered at a business week and foreign policy association seminar last may, jeelof said that while american companies had forsaken consumer electronics, philips and france's thompson have held their own against the japanese. but he indicated that american dominance of the european semiconductor market was a major impetus for ec '92. jeelof said: . . . because of the lack of european strength in the field of computers, the integrated circuits business in europe is dominated by americans. europe consumes about 34 percent of all ics in the world and only 18 percent are made in europe by european companies. the rest are made by american companies or are imported. it is not a surprise then that in 1984 we at philips took the initiative to stimulate a more unified european market. at the time, we called it europe 1990. brussels thought that 1990 was a bit too early and made it 1992. but it has been the electronics industry in europe, together with other major companies, that has been pushing for europe 1992. why did we want it?
we wanted a more homogeneous total market in europe and, based on that, we wanted to become more competitive. the process is on its way and obviously we see some reactions. if you take action, you get reaction. one reaction has been concern on the part of non- european companies and their governments that the ec is creating a protectionist environment, a "fortress europe." as walls between nations are coming down, some fear that other more impenetrable ones are going up on the continent's edges. jeelof argues against this perception in another speech, "europe 1992---fraternity or fortress," reprinted in this issue in its entirety. communications also presents an analysis of several trade rules relating to semi-conductors in "the semiconductor market in the european community: implications of recent rules and regulations," by roger chiarodo and judee mussehl, both analysts in the department of commerce office of microelectronics and instruments. the authors outline the consequences of europe's rules of origin, anti-dumping measures that are supposed to prevent companies from using assembly operations in an importing country to circumvent duty on imported products. in the united states, if the difference between the value of parts or components from the dumping country and the value of the final product is small, then duty will be placed on those parts or components used in u.s. assembly operations. by contrast, the ec rule says that if the value of parts or components exceeds 60 percent of the value of all parts and materials, then duty will be placed on those parts and materials upon assembly in europe. since 1968, origin was also determined according to "the last substantial process or operation" resulting in the manufacture of a new product. in the case of printed circuit boards, some countries interpreted this as assembly and testing, while others thought it meant diffusion. in 1982, the ec began harmonizing these interpretations, and as of 1989, the last substantial operation was considered diffusion: the selective introduction of chemical dopants on a semiconductor substrate. as a result, american and japanese semi-conductor manufacturers have spent millions building foundries on european soil. to reveal the japanese interpretation of such changes, japanese commerce minister eiichi ono, with the japanese embassy in washington, dc, expresses his country's impressions of ec '92 in this issue. in his speech, "japan's view of ec '92," delivered at an armed forces communications and electronics association (afcea) conference on europe '92, ono states that while the ec's intentions might not be protectionist, they could become so upon implementation. his discussion focuses on semi-conductors and technology transfer issues. although not a formal directive, in july 1988, the european council decided to promote an internal information services market (the last "l" document in table 2). to present the reasoning and objectives behind this initiative, we reprint the communication from the commission to the council of ministers, "the establishment at community level of a policy and a plan of priority actions for the development of an information services market," and the resulting july 1988 "council decision" itself. funds allocated for 1989 and 1990 are approximately $36 million, $23 million of which was slated for a pilot/demonstration program called impact, for information market policy actions. 
this may seem a pittance in comparison to the programs of other governments, but this decision and other ec legislation are the first steps toward an ec industrial policy. recognizing that europe's non-profit organizations and the public sector play a very important role in providing database services, in contrast to the u.s. where the private sector is now seeding the production of such database services, impact has prepared guidelines to help the public sector cooperate with the private sector in marketing information. these guidelines would also allow private information providers to use public data and add value to it to create commercial products. impact is providing incentives to accelerate innovative services for users by paying 25 percent of a project's cost. after the first call for proposals, 16 of 167 projects proposed by teams composed of 300 organizations were funded. american-based companies can apply for funds if they are registered in europe. unlike the u.s., the ec allows registration regardless of who owns a company's capital. projects funded are to develop databases that would be accessible to all members of the community either on cd-rom or eventually on a digital network, an isdn for all europe, as planned by the fifth recommendation listed in table 2. one project in the works is a library of pharmaceutical patents on cd-rom that will enable users to locate digitized documents. users will also have direct access to on-line hosts for all kinds of patents. a tourism information database and a multi-media image bank of atlases are other pilot projects chosen, and another project will provide information on standards. eventually, audiotext might be used to retrieve data by telephone instead of a computer terminal. when the initial projects have been completed, the commission will inform the market place about the results of the implementation. plans for a five-year follow-up program, impact-2 are also under discussion. these projects depend to some extent on the implementation and passage of directives or the success of larger and better funded projects. on-line access to databases depends on the recommendation for an isdn as well as on the standardization directive for information technology and telecommunications. the certification, quality assurance, and conformity assessment issues involved in that directive are too numerous and important to just touch on here and will be covered in a later issue of communications. to make these databases accessible not only technically, but also linguistically, the ec has funded two automatic language translation projects called systran and eurotra. systran is also the name of the american company in la jolla, ca, known for its pioneering work in translation. in conjunction with the ec, systran translation systems, inc., has completed a translation system for 24 language pairs (english---french, french---english, for example, are two language pairs) for the translation of impact- funded databases. the system resides on an ec mainframe; there will be on-line access by subscription; and it will also be available on ibm ps/2s modified to run vms dos. it is already on france's widespread minitel videotext network. as this practical, market-oriented approach to technology implementation is beginning, europe's cooperative research effort, esprit, is also starting to transfer its results. last year, the second phase, esprit ii, set up a special office for technology transfer. 
its mission is to ensure the exploitation, for the benefit of european industry, of the fruits of the $1.5 billion esprit i program that began in 1984, as well as the current $3.2 billion program (funding through 1992). the ec contributes half of the total cost, which is matched by consortia composed of university and industry researchers from more than one country. about 40 percent of esprit ii's funds will be devoted to computer-related technologies. every november, esprit holds a week-long conference. last year, for the first time, it devoted a day to technology transfer. several successful technology transfers have occurred either from one member of the program to another or out of the program to a member of industry that had not participated in the research. an electronic scanner that detects and eradicates faults on chips, for example, was developed by a consortium and the patents licensed by a small company. this automatic design validation scanner was co-developed by cselt, italy, british telecom, cnet, another telecom company in france, imag, france, and trinity college, dublin. the company that will bring it to market is ict gmbh, a relatively small german company. it seems that in europe, as in the united states, small companies and spin-offs like those found in the silicon valley here are better at running quickly with innovative ideas, says an ec administrator. another technology transfer success is the supernode computer. this hardware and software parallel processing project resulted in an unexpected product from transputer research. the royal signal radar establishment, inmos, telmat, and thorn emi, all of the uk, aptor of france, and southampton university and the university of grenoble all participated in the research, and now inmos has put the product on the market. three companies and two universities participated in developing the dragon project (for distribution and reusability of ada real-time applications through graceful on-line operations). this was an effort to provide effective support for software reuse in real-time for distributed and dynamically reconfigurable systems. the researchers say they have resolved the problems of distribution in real-time performance and are developing a library and classification scheme now. one of the companies, txt, in milan, will bring it to market. several other software projects are also ready for market. one is meteor, which is aimed at integrating a formal approach to industrial software development, particularly in telecommunications. the participants have defined several languages, called asf, cold, erae, pluss, and psf, for requirements engineering and algebraic methods. another project is quick, the design and experimentation of a knowledge-based system development tool kit for real-time process control applications. the tool kit consists of a general system architecture, a set of building modules, support tools for construction, and knowledge-based system analysis of design methodology. the tool kit will also contain a rule-based component based on fuzzy logic. during the next two years, more attention and funds will be indirectly devoted to technology transfer, and the intention to transfer is also likely to be one of the guides in evaluating project proposals. some industry experts maintain that high technology and the flow of information made the upheaval in eastern europe inevitable. leonard r.
sussman, author of power, the press, and the technology of freedom: the coming age of isdn (freedom house, 1990), predicted that technology and globally linked networks would result in the breakdown of censorious and suppressive political systems. he says the massive underground information flow due to books, copiers, software, hardware, and fax machines, in poland for example, indicates that technology can mobilize society. knowing that computers are essential to an industrial society, he says, gorbachev faced a dilemma as decentralized computers loosened the government's control over the people running them. glasnost evolved out of that dilemma, says sussman. last fall, a general draft trade and economic cooperation accord was signed by the european commission and the soviet union. and both american and western european business interests are calling for the coordinating committee on multilateral export controls (cocom) to relax high technology export rules to the eastern bloc and the soviet union. the passage of that proposal could allow huge computer and telecommunications markets to open up. and perhaps the revolutions of 1989 will reveal themselves to have been revolutions in communication and the flow of information due in part to high technology and the hunger for it. karen a. frenkel digital village: digital politics 2000 hal berghel risks in our information infrastructures peter g. neumann a proposed secondary education computer science curriculum stephen w. thorpe paul d. amer operation-based merging existing approaches for merging the results of parallel development activities are limited. these approaches can be characterised as state-based: only the initial and final states are considered. this paper introduces operation-based merging, which uses the operations that were performed during development. in many cases operation-based merging has advantages over state-based merging, because it automatically respects the data-type invariants of the objects, is extensible for arbitrary object types, provides better conflict detection and allows for better support for solving these conflicts. several algorithms for conflict detection are described and compared. ernst lippe norbert van oosterom revisiting the industry/academe communication chasm robert l. glass the dark side of computer operations: implementing user policies computer centers today aren't like the computer centers in the good ol' days. back then, users came to the window and handed us their keypunched cards. the computer operator stacked them on top of a batch of other cards in the card stacker with a divider card, and we all patiently waited. half a day later the user would come back to see if his cards had been batch processed yet. he asked the status of his job, the operator told him, and the user accepted his word for it. when the user finally did get his cards and printout, he was grateful and amazed at the all-powerful computer and at the cooperation of the computer operator. life was so simple. the computer center of 1987 is altogether different. the computer is merely taken for granted, and the distressed operator takes abuse that defies description. what went wrong? when did the operator lose control of his little kingdom? why is it becoming increasingly difficult to implement policies? in the old days, users accepted the operator's word for everything, mainly because they had no choice. today, nearly every user has his own terminal or microcomputer right on top of his desk.
the user can easily monitor the operator's every move. the user is able to determine which jobs are waiting in which queue and how long they've been waiting. if the user is really alert, he can tell if a job has been waiting an unusually long time, and possibly if the operator is asleep at the wheel. and to add to the operator's grief, the user can take it one step further by "notifying" the delinquent operator via console messages that his work could stand improvement. in the time it takes the operator to respond to the irate user, the operator could have put the printer on line or repaired whatever damage was occurring without any prompting from the user. in an environment of 2000 plus users, responding to console remarks can be a full-time occupation. and responses to users must be handled diplomatically. after all, the comments that are made are all right there in black and white---there is no denying the unintentional sarcastic overtones in the response. when life was more tranquil, it was simply our word against theirs. phyllis griggs a paradigm shift to oop has occurred…implementation to follow this paper reflects on the discussions concerning the first year programming sequence that have transpired at the last few national symposiums in computer science education and at other computer education conferences in the past two years. both paper and birds-of-a-feather sessions suggest that computer science educators are deep into the transition from the procedural to object approach as the preferred basis for the design and implementation of software. this transition has become evident in the introductory textbooks, but it is more evident in the struggles of a faculty that has grown up with structured programming and now has to overcome that deeply entrenched mind- set. the author reflects on the characteristics of this struggle by comparing it to academic worldview struggles that have been termed "paradigm shifts." such struggles have been studied in other scientific disciplines and the pattern observed there is emphasized in the author's interpretation of recent events in computer science education. william mitchell risks of passwords peter g. neumann latest developments in the "killer robot" computer ethics scenario richard gary epstein intellectual property rights and the global information economy pamela samuelson a methodology for active, student-controlled learning: motivating our weakest students curtis a. carver richard a. howard william d. lane risks to the public in computers and related systems peter g. neumann computer usage patterns on a computing intensive campus linda brown philip heeler jon rickman sean sheil no one is making money in educational software elliot soloway using the cloze procedure with computer programs: a deeper look this paper extends research on the use of the cloze test in the domain of computer software. in this study the cloze blanks were divided for the purposes of analysis into five structural subcategories. the relationships of the total cloze score and the subcategory scores to two criterion test measures were found to be positive, with the strongest and most consistent relationships being found for the variable subcategory. use of the cloze test for both instructional and assessment purposes was discussed. eileen b. 
entin road crew: students at work john cavazos workplace impacts - ai and automation: privacy and accountability of large-scale high-capacity personal data systems this session will focus on the problems for information policy in a democracy raised by new fifth generation computer technology. panelists will discuss the theoretical role of information technology in a democracy and the special problems of accountability and privacy. the possibility of new regulation and legislation will be discussed, as well as the development of new techniques by the executive branch such as matching and data bases. leslie gasser kenneth laudon ways of seeing elliot soloway denial of service: an example roger m. needham current trends in computer science curriculum: a survey of four-year programs sukhen dey lawrence r. mand shaping the focus of the undergraduate curriculum this paper outlines an approach to reshape the existing undergraduate cs curriculum. based on software engineering and parallel computing concepts, the details of the new curriculum are presented in terms of educational objectives, core courses, innovations in the teaching method, and early experiences. marcin paprzycki janusz zalewski user services in a semi-autonomous, decentralized computing environment this paper presents some observations and impressions gained through attempting to define the needs of, and provide assistance to, microcomputer users at msu over the past two years. consideration is given to technical and non-technical factors that result in similarities, but that also raise unique problems and opportunities, when compared to traditional, centralized computing. the author concludes that the need for user services will continue, if not become more acute, in the foreseeable future, in an increasingly autonomous and decentralized computing environment; the most effective way to meet these needs is with an educational program aimed at fostering self-sufficiency and cooperation among the users themselves, and with temporary, on-site assistance that involves users in developing perspective on both their computing needs and the responsibility they must assume if they are to supply their own computing power. bill brown algorithm visualization using quicktime movies for student interaction (poster session): algorithms from computational geometry jay martin anderson computer-based management systems much has been written regarding the need for students majoring in information systems to take a sequence of courses in communications, both oral and written. most curricula relegate this important area to electives. this curriculum in computer-based management systems (cms) addresses this issue in a straightforward manner by incorporating courses in humanities and technical communications as part of the major requirements. the curriculum and facilities supporting the curriculum are discussed. bill mein software piracy: stopping it before it stops you in today's academic environment, as computing resources become more and more prevalent, computer software becomes easier and easier to access, and as such, easier and easier to copy. this is one of the contributing factors to the reputation academic institutions have gained in the software industry for their blatant software piracy---the unethical and illegal copying of copyrighted software.
academics everywhere clamor about protecting the rights to their works, but many view computer software in a different light; they often do not see or fully comprehend the negative implications of illegally copying software which they bring upon themselves and their institutions. yet what can one do to curb this blatant, yet sometimes elusive, practice? can one protect oneself and one's institution from the repercussions of piracy? why should one even try? after all, if everyone else is doing it, it must be okay … "respect for intellectual labor and creativity is vital to academic discourse and enterprise. this principle applies to work of all authors and publishers in all media. it encompasses respect for the right of acknowledgement, right to privacy, and right to determine the form, manner, and terms of publication and distribution. because electronic information is volatile and easily reproduced, respect for the work and personal expression of others is especially critical in computer environments. violations of authorial integrity, including plagiarism, invasion of privacy, unauthorized access, and trade secret and copyright violations, may be grounds for sanctions against members of the academic community." ---educom software initiative statement of principle concerning software and intellectual rights. this statement from the educom software initiative is only that---a statement, but it does serve as a solid first step in better understanding the problem of software piracy in today's university environment. hypocrisy runs deep in the halls of many educational institutions. from primary schools through the most advanced graduate institutions, people profess to teach students the difference between right and wrong, or to at least guide them in making their own decisions. students are taught moral, ethical, and legal points of view, and they are then expected to go out into the world and live their lives according to these teachings. from the earliest days of preschool, children learn that stealing is wrong; they learn that different things belong to different people and that taking something which belongs to someone else is a serious and punishable act. later, in secondary school, students are taught that copying someone else's test or assignment is also wrong, and that doing so could result in a failing grade. finally, in institutions of higher education, young adults study and refine the origins and developments of their own moral, ethical, and legal points of view. they learn about the importance of intellectual property, and they discover the seriousness with which breaches in the area of plagiarism are handled. in theory, students learn to respect intellectual labor and creativity, and they learn that this respect is a necessary base for the free flow of information in an academic environment. they also learn that respect for and protection of private property are cornerstones upon which our society is founded. in practice, however, students often learn the hypocrisy with which these lessons are taught; they learn how to bypass the educational system and how to use this intellectual property for their own benefit. much as physical property was to the industrial age, intellectual property---software and the data it manages---is to the information age, and without the same, if not greater, respect for intellectual property, today's society will not survive the transition. software and manuals, like novels and other literary works, are protected under united states copyright law.
in simple terms, this law guarantees the copyright owner, the author in most cases, the exclusive rights to the reproduction and distribution of his intellectual property. thus, copyright law guarantees the owner of the intellectual property the same types of rights that patent law guarantees the owner of an invention or other piece of seemingly more tangible physical property. computer software and data are intellectual property, and as such are covered by united states copyright law---this realization is critical. the problems start when people cannot, or will not, make the mental transition from physical to intellectual property. whereas most people would not steal books from a bookstore or a software package from a dealer's showroom, even if they knew they would not be caught, many of the same people would not hesitate to copy a computer program from a dealer's demonstration model or from their friends and colleagues. the truth, however, is that there is no difference beyond the chosen method of theft. where many people either do not know the laws governing intellectual property or do not consider computer software as intellectual property, many others choose to disregard the laws altogether. many people---and educators are often guilty of this---rationalize their actions using such arguments as the "fair use doctrine" or that an educational purpose is above the law. most of the arguments used for arriving at this rationale are poor at best. "i only pirate software i wouldn't otherwise buy. developers make a killing with the prices they charge, so copying a program here or there only balances it out for the little guy. i don't want to spend that kind of money without trying it first, so i'll copy it to try it before buying it. i only pirate the esoteric stuff, not the things i use everyday." many rational and ethical people use these arguments to deal with intellectual property without any shame or guilt. use the same arguments, however, in a situation involving physical property and see how attitudes change. "i only steal textbooks i wouldn't otherwise buy. publishers make a killing with the prices they charge, so stealing a book here or there only balances it out for the little guy. i don't want to spend that kind of money before reading it first, so i'll steal it and read it before buying it. i only steal the esoteric stuff, not the books i use everyday." as for the "fair use doctrine," it, in principle, grants the right to reproduce a small portion of a copyrighted work for educational purposes. this is meant to apply to such things as quotations and as such does not encompass the reproduction of software programs any more than it covers the complete reproduction of novels or other literary works. putting aside the moral and ethical issues of software piracy, one would expect a university to adhere to even stricter guidelines concerning software since it is intellectual property---something in which these institutions specialize. unfortunately, the everyday evidence seems to demonstrate that the reputation of educational institutions as the worst violators of copyright law---when dealing with computer software---is well deserved. in almost any industry magazine or journal or on almost any public information service where the topic of software piracy is broached, one can find fervent academics on both sides of the argument. these discussions usually find themselves out of control with both sides deadlocked at the point where piracy is compared to common theft or shoplifting.
those who defend software piracy claim the two have nothing in common; those who defend copyright laws claim the results are the same and the act is just as wrong. if there is any real difference between physical and intellectual property, it could be that the latter has a much greater societal value. this seems to be borne out by studying the history of civilization and its use, or misuse, of information. many of the problems concerning software piracy arise out of innocent ignorance of the laws governing copyrights and license agreements. in these cases, simple education about what can and cannot be done within these guidelines can suffice. clearly point out that unless software has been placed in the public domain, it is protected by copyright law and one must abide by the license agreements which usually come with a program. in general, the user owns only a right to use the software and not the software itself. this type of usage license places restrictions upon reproducing and distributing the software, and this can include such things as loaning the software to a friend or colleague and making duplicates for classroom or network use. some licenses even go so far as to restrict use to a specific computer. more and more of today's software is easily copied as it is no longer common practice in the industry to use "copy protection" schemes. this lack of copy protection is a trust the developer is demonstrating in the integrity of the user, and not a permit to duplicate the software beyond what is allowed in the license agreement. in most cases, however, the user does have the right to make a backup copy of the software for archival purposes, and in an increasing number of cases dealing with university-produced software, copyrighted packages even encourage a free duplication and distribution system. in theory, any use of a software package which falls outside of the limits of the license agreement renders the user, and quite often the user's company or institution, liable to prosecution, so the simple rule is: read and understand the license agreement before using a new software package. where simply educating the users is not enough to slow software theft, proceed to more formal steps such as producing a clear written policy concerning software use. go so far as to have users sign a one-time agreement to abide by the policy and severely reprimand those who do not. rules concerning software piracy often exist in universities, but more often than not enforcement is lacking. do these rules hold the same weight as those concerning plagiarism? do the same rules cover faculty and staff as well as the students, or are there double standards? in most educational communities a member is disciplined---sometimes to the point of expulsion---for plagiarism as well as property theft. this similarity between the two types of theft demonstrates the obvious link between physical and intellectual property. is the university, therefore, willing to consider and enforce the same discipline for proven software piracy? if it is not, then it is not going far enough towards reducing the institutional liability for copyright infringement. with the recent advances in integrated computer networks, software is much more accessible. but many of these networks now also include "execute-only" features which prevent the average user from copying the software for personal use.
if these features are not included, they can often be added by the university staff with a minimal programming effort. the idea behind education, formal policies, and simple local hardware or software protection schemes such as these is to stop the novice or casual user from copying application software. software piracy impacts both the university's future acquisitions and, if it exists, campus software development. anyone involved with computer use or the formation of university policy must encourage the legitimate use of software both on and off campus. follow university licensing agreements religiously---not just in spirit. if the licenses are not flexible enough, expend the effort needed to negotiate with the developer for a special agreement which will work. if the developer does not offer a deal which suits the institution's needs, embrace another who does, and let the reasoning behind the choice be known. if the institution is not large enough or the demand for software is not great, work with other educational institutions within the region or state to gain a louder voice. let the software budget do what money does best---talk. developers will listen, but an institution must also give them reason to trust. university microcomputer software budgets are too often underestimated. but considering the ethical and legal ramifications of software piracy within an educational institution, administrators must realize, or be made to understand, that the expense is a necessary one. when viewed next to mainframe software expenditures or even student textbook costs, microcomputer software expenses suddenly appear quite reasonable. when the time spent with a software package is compared to its price, one should find that the cost for using such a valuable tool is mere pennies per hour. so when budgeting for software acquisitions or thinking about what software to provide to faculty, staff, and students, remember: saving money is one thing, stealing is another. university administrators and staff must take the steps necessary to reduce the university's potential liability for copyright infringement. they must champion the fact that unauthorized reproduction or use of computer software is both a violation of federal law and an unethical, unacceptable act in an institution of higher education. the practical implications of software piracy are simple: the users pay the price in the end. the relationship between software customer and developer is based upon a mutual trust. the customer trusts the developer to produce high quality software and to provide updates and support, all at a reasonable cost. the developer trusts the customer to purchase the software and use it within the limits of the license agreement. if the customer keeps its part of the deal, it is worth the research and development time of the developer to invest in new improvements and products as well as to provide excellent documentation and customer support for current products. if the developer has done its homework, the customer will continue to support and use the product and possibly recommend it to friends and colleagues. in addition, the customer is likely to purchase future updates and even new products from the developer. if both the developer and the customer can develop and maintain this rapport, then both will benefit; if one or the other does not, then everyone involved stands to lose. software piracy by individuals or departments on a university campus can bring about repercussions for the entire institution.
if the illegal duplication and use of software is not condemned by the university administration at the highest levels---as is plagiarism, the illegal duplication and use of other intellectual property---the institution may find it impossible to negotiate reasonable site or network licenses and other agreements with developers which could be advantageous to the whole university community. most of the software sold today, whether for mainframes or microcomputers, requires many man-years of development time at a cost often exceeding several hundred thousand dollars. because of this, the real cost of software piracy for both the user---legitimate or not---and the developer is far greater than that of the immediate dollar losses associated with most types of theft. the cost is the loss of the developer's long-term ability to bring new and better packages to the market. only when it is no longer profitable to invest the money necessary to produce high quality products, and only when these products stop arriving on dealers' shelves will the true cost of today's rampant software piracy be felt. but by that time, it will be too late to stop. both users and developers must act now, and act together, to stem the spread of software piracy if the software industry is to survive and grow and if the users are to ever see the more powerful, more "intelligent" software which the future promises just beyond the horizon. in its simplest expression, software piracy is theft of intellectual property, and just as there is no world in which theft of any type can be completely stopped, there is no way to completely stop piracy. but educational institutions have a greater responsibility than the world at large, and they must take every avenue available to them to lead the way in protecting the constitutionally conceived notion of copyrights for intellectual works, and in doing so, set an example for other public and private institutions. for without this guarantee, without the right to create and disseminate original works of authorship, the free flow of ideas in an academic institution, which is critical to learning of any kind, would soon disappear, taking with it a fundamental basis of today's society. mark b. johnson rhythms of collaboration helga wild chris darrouzet ted kahn susan u. stucky a new emphasis & pedagogy for a cs1 course a cs1 course introduces students to fundamental aspects of computing science. invariably, these aspects are ones of content (subject matter). there is an alternative, and arguably more beneficial, role for a cs1 course---it could introduce the fundamental _processes_ and _concepts_ which pervade _all_ computing science content domains, and which have but different instantiations in the different domains. this article considers the identification of these aspects, and suggests a pedagogy suitable for their emphasis. this pedagogy is applied to a traditional cs1 programming-content domain, resulting in a proposal for a new cs1 curriculum. m. d. evans the practical management of computer ethics problems john w. smith the integration of graphics, video, science, and communication technologies robert wickman thomas d. cauffield glenn dame kevin j. meehan impact of organizational maturity on user satisfaction with information systems user satisfaction has increasingly played an important role in information systems (is) organizations' effectiveness. does an is organization's ability to design and develop increasingly effective systems depend on its maturity?
does maturity depend on bringing in state-of-the-art technology? is maturity related to the organizational structure of the data processing/management information systems (dp/mis) unit? is maturity related to spending increasing amounts of money on is organizations? the importance of these questions and their answers to dp/mis managers and users cannot be overemphasized. the research uses a field study of is users and managers to answer these questions and other related issues. the study shows a weak but significant overall relationship between user satisfaction variables and maturity criteria defined by nolan [25]. further research based on the findings of the study is suggested as a means of improving the degree of relationship between maturity criteria and user satisfaction variables. mo a mahmood jack d becker the cutting edge helen nissenbaum the hacker ethic sarah granger sources of government funding for colleges and universities the panel is composed of representatives of a number of federal agencies that provide research support for colleges and universities. each panelist will present a short overview of the research support provided by their agencies. this will be followed by a general discussion of funding opportunities for computer science research in colleges and universities. the participants are donald m. austin, department of energy; bruce h. barnes, national science foundation; timothy d. evans, army research office; robert b. grafton, office of naval research; ronald l. larson, national aeronautics and space administration; and john p. thomas, air force office of scientific research. computers in schools grades k-12 (panel discussion) during this session, the participants will present different facets of computer use in education from kindergarten through secondary school. the unifying consideration throughout this session will be the options available for computer use in education, and the priorities that are being established for these various uses. jean b. rogers sharing standards: standardizing the european information society roy rada john ketchell comments on postwar development of computational mathematics in some countries of eastern europe at the request of the organizing committee, i would like to share some of my observations and remembrances about the development of computational mathematics in czechoslovakia and the ussr. my observations will be very subjective and broad in scope. i. babuska end-user participation and the evolution of organizational information systems: an empirical assessment of nolan's stage model nolan's stage hypothesis on the growth of data processing provides a popular framework for describing the typical development patterns of organizational information systems. up until now, only a few limited tests have been published to formally verify the reliability of the stage model. we report on an empirical study of 130 major finnish edp user organizations where the aim was to test the power of nolan's stage model. several of nolan's benchmarks were used as measures of the stage of development but special attention was given to the role of end-users in systems development. not only were the actual stages of development measured but also estimates of the future changes in the major benchmarks were compared for typical groups of organizations. our tests show that there are significant differences both in the stage and speed of development between organizations even with the same experience in computing.
we have been able to confirm that nolan's model is a good descriptor of the average patterns of changes in data processing but it is not able to predict the development. the nolan model seems to be confined within the limits of its own historical background. markku saaksjarvi is north american is research different from european is research? diversity, whether in terms of research methodologies or reference disciplines, enriches and benefits a field of research. if, on the other hand, this diversity inhibits the sharing of research and knowledge between communities with different intellectual heritages, loss of synergy and research opportunities for both communities may result. this study identifies similarities and differences between north american and european research in terms of theoretical bases and research methodologies by analyzing doctoral dissertation research. results show that european mis research is predominantly qualitative and conceptual (non-empirical), whereas north american mis research is predominantly quantitative and empirical. furthermore, research in europe is influenced considerably by computer science and artificial intelligence whereas research in north america has more behavioral and managerial roots. a comparison across a five-year period shows that this gap is narrowing. j. roberto evaristo elena karahanna computer centres - the next decade j. l. alty specialization is harmful to computer education dennis j. frailey cgtdemo - educational software for the central limit theorem gordana jovanovic-dolecek victor h. champac job turnover among mis professionals: an exploratory study of employee turnover the paper addresses the issue of it worker turnover. it reports on a survey of it workers who graduated over the past decade (1990-1999) from saint louis university. research questions target it worker demographics, such as age and gender, as well as job satisfaction, salary, job tasks, and opportunity factors for both prior and current employment. results included ranking of different factors comprising job satisfaction (satisfaction with financial compensation was high and with fringe benefits was low), observation of significant differences between several satisfaction measures but no task measures between prior and current job, and observation of few correlations between specific tasks performed and salary levels. fred niederman mary r. sumner forums for improving hci education andrew sears development and use of an assembler in computer systems courses assembly language programming provides students with the ability to manipulate bits and bytes inside a computer. but how do they know that the effect of their assembly language program is the same as what they expected? if there is a way to `see' the bits and bytes after execution of their programs, they will understand more about that computer and its instructions. therefore a simulator and an assembler for a basic computer, designed in computer system architecture (3rd edition) by m. morris mano [1], were developed and used in our computer systems course. the background information about the basic computer, development of the simulator and the assembler, our experience in using them in our course, and benefits gained from using them are discussed. soe than test construction and administration strategies for large introductory courses large introductory courses in computer science make test construction and administration a particularly difficult problem for the instructor.
multiple-choice computer-graded tests can be used to alleviate this problem by testing not only in the knowledge areas of history, hardware, software and applications but in the area of programming skills as well. several illustrations of the type of multiple-choice question that can be used to test the skills of 1) reading a flowchart, 2) reading a program, 3) converting a flowchart to a program, and 4) writing a program are given and are discussed in terms of their effectiveness. practical issues of test administration such as the pre-plan, open vs closed-book, cheating and grading are also discussed. the paper attempts to mildly formalize a shadowy area of computer science that has heretofore existed primarily as folklore and hearsay. stewart a. denenberg assistive technology computers and persons with disabilities carl brown re-conceptualizing it training for the workforce of the future maung k. sein robert p. bostrom lorne olfman the effect of high school computer science, gender, and work on success in college computer science researchers have often linked factors such as mathematics ability and overall academic achievement to success in computer science. in this study, a group of students with common mathematics backgrounds was examined to determine whether other factors were also involved in success in computer science. in particular, the roles of prior computing experience, work, and sex are discussed. a composite picture of the typical successful student is drawn and the implications for computer science departments are identified. harriet g. taylor luegina c. mounfield accommodating disparities in secondary school backgrounds in the university environment exposure to computers in high school has become a widespread phenomenon, with several schools actually offering an integrated computer curriculum throughout grades k-12. because of the urgency of needs and the quickness with which these programs are implemented, as well as the lack of a well-tried standard to follow, the students coming out of these programs have every manner of experience. the resulting differences in the background of entering university students have created a need for an enriched introductory course. however, assessing the preparedness of students for such a course presents its own difficulties. this paper describes a placement test being used at the university of new brunswick to evaluate the knowledge of incoming students to determine those who have been adequately prepared for an enriched course. the placement test, including an initial analysis of its value as a predictor, is discussed. jane m. fritz introducing a legal strand in the computer science curriculum cristina cifuentes anne fitzgerald software engineering education in the associate-degree-level vocational/technical computer science program e. r. richmond what's happening steven pemberton managing information systems professionals: implementing a skill assessment process m. gordon hunter organizational change enabled by the mandated implementation of new information systems technology: a modified technology acceptance model lynda hodgson peter aiken the clinton/gore technology policies ralph d. nurnberger legally speaking: the promise and problems of the no electronic theft act andrew grosso lmds/lmcs hub interconnection alternatives and multiple access issues lmds/lmcs is a broadband wireless local loop, millimeter-wave alternative to emerging integrated multiservice access networks.
large amounts of bandwidth---on the order of one ghz of spectrum---are made available to residential subscribers or supported business users that employ highly directional antennas and signal polarization to establish communication with a central hub. besides the requirement for dynamic bandwidth allocation capabilities, these networks should be able to guarantee negotiated quality of service (qos) levels to a number of constant-length atm---and possibly variable-length tcp/ip---packet streams. in this context, we analyze the performance of contention, polling/probing and piggybacking mechanisms that will be used by the lmds mac protocol for the dynamic support of both real-time and non-real-time traffic streams. more specifically, we focus on the end-to-end performance of a real-time variable bit rate connection for which the lmds link is only the access component of a multi-link path through an atm network. results are presented on maximum end-to-end cell delays under a weighted round robin service discipline and buffer requirements are calculated for no-loss conditions. in parallel, we also consider the case in which variable-length ip packet traffic is supported as such by the same wireless access network. backbone interconnection alternatives of lmds hubs, multiple access proposals and scheduling algorithms are addressed in this framework. g. m. stamatelos v. n. koukoulidis evaluating software engineering methods and tools part 5: the influence of human factors in previous articles, we have considered a number of general issues concerned with evaluating methods and tools. in the next few articles we describe how to perform a specific type of evaluation exercise called feature analysis. this article discusses the principles of feature analysis and describes how to identify features and establish a means of scoring products against those features. later articles consider how to plan a feature-based evaluation and how to analyze the results of an evaluation. barbara ann kitchenham lindsay jones teaching recursion in a procedural environment - how much should we emphasize the computing model? david ginat eyal shifroni reflections on past research: part ii nell dale contextual techniques starter kit hugh r. beyer karen holtzblatt is big brother hanging by his bootstraps? l. t. greenberg s. e. goodman a model for a cai learning system over the past ten years, computer-assisted instruction (cai) has had an impact on the educational system. in this paper, we discuss our view of a model for developing an integrated set of cai modules for any given subject area. the model has been implemented and tested, with very favorable results, for the subject area of metrication. donald l. kalmey marino j. niccolai the 15-pegs puzzle and the conceptual tree jerome l. lewis mit's software copyright awareness campaign mary ellen bushnell a model high school computer lab (special session) the george washington high school has received many awards for its ecumenical approach to affording students a computer experience. the computer curriculum and laboratory treat the whole spectrum of student abilities and subject areas, including not only mathematics and business but also other areas such as english, art and music, as well as the educationally underprivileged. the curriculum includes 16 formal classes, and the laboratory currently contains 36 computers. the development and organization of the computer laboratory and curriculum will be presented.
a brief demonstration will also be given of software to teach english to students speaking spanish, laotian, hmong, and vietnamese. irwin j. hoffman recruiting and retaining is personnel: factors influencing employee turnover jonathan palmer cheri speier michael buckley jo ellen moore evaluating academic computing on campus and developing a 5-year plan a comprehensive study of academic computing at baruch college was recently completed. the purpose of the study was to draw a baseline picture of current computing resources and needs, and to propose a five-year development plan in order to improve academic computing for students, faculty, and staff. the impetus for this study was preparation for the middle states self-study accreditation to take place in fall 1989. one of the requirements of the accreditation is a critical self-evaluation of the state of academic computing on campus. the focus of this paper is the process by which the study was conducted. part 1 describes the study process. part 2 outlines the major impediments to computing and recommendations for the future of academic computing at baruch, and part 3 discusses the impact of the study on the college. s. l. prupis the university computer science curriculum: education versus training j. glenn brookshear "so how are your hands?": thoughts from a cs student with rsi rob jackson a quantitative evaluation of graduate programs in computer science in the united states and canada g. m. schneider information resource management (irm): theory applied to three environments the information resource management (irm) concept will be defined with reference to available resources written on the topic. the major business-related information systems curricula projects by the association for computing machinery (acm) and the data processing management association (dpma) will be shown with their respective philosophy around the resource management area. the professional society which has emerged from the resource management area will also be described. discussion will be initiated on the relevance of irm to the business curriculum, with details on how the concepts have been implemented at both the graduate and undergraduate levels. a specific example of irm training will be presented with an overview of how the material is applied in other educational settings. john f schrage john m blair forest woody horton boulton b miller debating: its use in teaching social aspects of computing lorraine staehr a day in the life of… hal berghel to tap or not to tap dorothy e. denning issues in the computing sciences at two-year colleges (panel session) john impagliazzo helene chlopan ronald m. davis david m. hata karl klee recommended curriculum for cs2, 1984: a report of the acm curriculum task force for cs2 elliot b. koffman david stemple caroline e. wardle rating the impact of new technologies on democracy ted becker building trust in the electronic market through an economic incentive mechanism sulin ba andrew b. whinston hang zhang the nii intellectual property report pamela samuelson a proposal for integrated software engineering education this paper describes a proposal to strengthen the coverage of software engineering within the undergraduate cs curriculum by integrating coverage of software tools, methodologies, and practices into core courses and by providing appropriate resources for instructors.
the goal of the proposal is to improve technical readiness of cs graduates for the 21st-century software development workforce by providing an exposure throughout the cs curriculum to modern project management practices and tools, and by developing in students skills necessary to participate in software development projects. ralph f. grove software piracy: a view from hong kong trevor moores gurpreet dhillon a data processing communication skills course to be a successful business data processing professional, one should possess effective written and oral communication skills; therefore, any program which prepares computing students for the business world should effectively train them in this area. of the programs that attempt to handle this situation, most merely require their students to take several english/communication courses. from experience, this method is less effective than additionally reinforcing the students' communication skills within the entire range of the computing curriculum. this paper presents a methodology for accomplishing the task of implementing writing into an introductory data processing course. alka rani harriger thomas i. ho forum diane crawford need "therapy" for your "information pain"? john gehl suzanne douglas peter j. denning robin perry network-user policy bret krebeck facilitating career changes into it jo ellen moore susan e. yager mary sumner galen b. crow email groups for learning and assessment mike hobbs fortresses built upon sand dixie b. baker non-traditional ways for preparing computer science faculty (panel session) angela shiflet jim balch georgianna tonne klein jim cameron ken whipkey reader comments: semantics aside, "knowledge" can be managed john gehl wealth and jobs in the fifth new world automation in the 1950's sparked predictions of massive unemployment, and similar gloomy forecasts have been voiced many times since then. although "detroit automation" (transfer lines) did increase productivity and contribute to a decline in manufacturing employment, expansion of job opportunities in the service and information sectors more than compensated for that loss. moreover, the rapid diffusion of computer applications has been accompanied by the creation of new kinds of jobs. with the advent of microelectronics and, more recently, intelligent systems in the marketplace, the specter of unemployment looms large once again. however, many observers are reluctant to predict widespread unemployment for fear of emulating the little boy who cried wolf. abbe mowshowitz the standards factor: catching up with committees pat billingsley using transaction cost economics to underpin business process redesign (abstract) brent work an accelerated program in computer science this paper is in the nature of a preliminary report on a pilot project aimed at developing quality instructional materials in three basic areas of computer science (problem solving and programming, hardware and software) for presentation in a concentrated manner. a total of 162 90-minute (54 periods per area) class periods in a classroom environment and an equal amount of supervised workshop study are contemplated. it is expected that the project implementation would extend the capability of the computer science departments across the nation to offer a unique opportunity to students to earn a special minor in computer science and/or to prepare for entering a computer science graduate program - all in the shortest possible time. 
the courses were offered for the first time during the summer of 1981. the course outlines, the background of the participants and other details of the project are included in the paper. onkar p. sharma ali behforooz on students presenting technical material to non-technical audiences in a computer science curriculum much attention has been given to the lack of writing ability of our graduating students. many of these same students cannot make reasonable oral presentations of predominantly technical material to peers, graduate students, or faculty, much less to an audience consisting of mostly non-technical (lay) persons. this paper examines one such attempt to do so for twenty undergraduate students in a computer science curriculum. the reactions of the presenters as well as those of the audiences are given. wm j. mein the profession of it: the it schools movement peter j. denning acm forum diane crawford teaching computer ethics delmar e. searls agents, profiles, learning styles and tutors (poster session) constance g. bland pamela b. lawhead employment contracts and the mobile employee (panel discussion) alex a. j. hoffman robert l. graham arthur e. parry national computer policies: whither the united states? ben g matley making software development investment decisions john favaro shari lawrence pfleeger radical innovation: how mature companies can outsmart upstarts james f. doyle the at&t teaching theater at the university of maryland at college park walter gilbert component-based software development with javabeans juan wang from the editors: crossroads finishes first year ronald b. krisko strategic planning in computer services: is the tail wagging the dog? robert h. august practical teaching tips from around the world scott grissom tom naps nick parlante pamela lawhead the virus is worse than the cure don gotterbarn system administration: mark's mega multi-boot computer mark talks about his crazy multi-boot computer mark nielsen home-study software: flexible, interactive, and distributed software for independent study christopher connelly alan w. biermann david pennock peter wu mainstreaming the computer technology needs of disabled persons in higher education g. e. moum articulation: who needs it? your students do! (abstract) karl klee richard austing robert campbell joyce currie little program in computer science and engineering: ongoing education for computer system designers d. grimes-farrow approaches to using the year 2000 problem in information systems courses (panel session) michael vitale ben light gerald knolmayer john mooney ethics and computer use over the past 50 years, computers have undergone transformation from monolithic number crunchers, to centralized repositories of management information systems, to distributed, networked, cyberspace support systems. during the same period, uses of computers have moved from computational problems to life support, from machine language to guis, from abstractions of work to virtual reality on the world-wide web. these transformations have brought with them situations that have ethical implications. sue conger karen d. loch designing for a dollar a day this paper is about the kind of tools and techniques that are accessible to resource weak groups for use in design and evaluation of computer support. "resource weak" means in this connection that the economic power and the ability to control the "local environment" of the group is limited.
the human resources of such groups are often (potentially) strong, but restrained by the organization of work and society; and although the tools are cheap the activities are demanding in terms of human resources. this kind of work should be seen as a supplement to participation in design processes controlled by others. when end users participate in projects set up by management, these "lay" designers often lack familiarity with the tools and techniques, they lack the power and resources to influence the choice of questions to be considered, and they are not the ones deciding how to utilize the results of a design project when actually changing the workplace. to give the context of the work on which the paper is based, i first describe the scandinavian tradition of trade union based end user participation in systems development. then i discuss some of the issues involved in improving the conditions for independent end user design activities. i go on to present a set of "cheap tools" and techniques, including the use of mockups. this set covers the issues of establishing the possibility of alternatives, of creating visions of new and different uses of technology, and of designing computer support. a central question in relation to the tools and techniques is their accessibility to end users, and i discuss this based on the notions of family resemblance and "hands-on" experience. morten kyng computing in the home: shifts in the time allocation patterns of households an empirical study of 282 users of home computers was conducted to explore the relationship between computer use and shifts in time allocation patterns in the household. major changes in time allocated to various activities were detected. prior experience with computers (i.e., prior to purchase of the home computer) was found to have a significant impact on the time allocation patterns in the household. the study provides evidence that significant behavior changes can occur when people adopt personal computers in their homes. nicholas p. vitalari alladi venkatesh kjell gronhaug careers in computer misuse - not so appealing after all stanley a kurzban progressive project assignments in computer courses this paper presents a method of design for projects in computer courses that tends to enable all students in the class to achieve their maximum potential. each project is structured at three progressive levels of difficulty corresponding to three prospective grades a, b, and c. the b-level is an extension of the c-level and the a-level is an extension of the b-level. each student starts at the c-level and progresses as far as possible and is scored accordingly. robert leeper ifip gordon b. davis defining computer science james w. mcguffee self-assessment procedure x: a self-assessment procedure dealing with software project management roger s. gourd a special group of users: instructional faculty a university user services staff serves a wide variety of users: students, staff, administrators, research faculty and instructional faculty. this paper reports on methods used to provide support to instructional faculty. short-term, concentrated support offered to instructional faculty includes introductory short courses directed to special groups of students, advanced seminars tailored to individual courses, machine room tours, equipment usage demonstrations, specialized consulting services and documentation.
assisting in setting up common files for classroom use as well as reviewing course assignments, job control language, and logon instructions are frequently sought services. with proper planning and coordination, the jobs of both user services and faculty are simplified to the great benefit of the student and the educational process. examples of services will be reviewed. geraldine macdonald sigchi international advisory task force guy a. boy the computational view of nature: a liberal arts course in computer science nicholas ourusoff cross-cultural comparison of the concordance of is education/training and is careers in the united states and latin america (panel) james b. pick guillermo mallen carlos navarrette ruth guthrie lorenzo valle taking stock of the tech industry: talking with denise caruso, industry analyst and founder of hybrid vigor institute john gehl the parnas papers david l. parnas felder's learning styles, bloom's taxonomy, and the kolb learning cycle: tying it all together in the cs2 course richard a. howard curtis a. carver william d. lane technical opinion: information system security management in the new millennium gurpreet dhillon james backhouse conceptual models and cognitive learning styles in teaching recursion cheng-chih wu nell b. dale lowell j. bethel reader comments: working knowledge john gehl growing systems in emergent organizations duane p. truex richard baskerville heinz klein structured systems analysis and the problem statement language (psl) as a combined methodology in the teaching of system analysis and design the techniques of structured systems analysis are compared with the problem statement language (psl) and a method of teaching which involves the best features of both is proposed and discussed. the development work being carried out at bradford in developing a version of psl suitable for use in the teaching environment is described. r. j. thomas j. a. kirkham an open discussion on public policy as usual, the column is multi-topic. it starts with a brief review of the siggraph public policy program. this is followed by some comments (and my responses) on the program raised by an interested siggraph member. once again, myles losch presents recent information on digital copy protection, including its potential application to free terrestrial broadcast digital tv. we provide a review of the 2000 edition of one of the most important conferences at the intersection of computing and public policy, the acm-sponsored computers, freedom and privacy conference (cfp). and finally, we describe the availability of our third on-line survey. bob ellis cracking the software paradox (panel session) john daniels jim amsden larry constantine david e. delano martin griss ivar jacobson else-marie östling rebecca wirfs-brock teacher preparation, school renewal, and computer technology beverly hunter a survey of attitudes toward computers what do people really think about computers and their impact? in 1970, a study of people's attitudes in north america showed computers to be regarded as either "beneficial tools of mankind" or as "awesome thinking machines." a recent survey taken in australia and reported in this article, though, suggests there may have been a change in attitudes over the past decade. the australians expressed much concern over the computer's possible disemploying and dehumanizing effects---as well as disquiet over the control computers could exercise over their lives.
if these attitudes are typical beyond the shores of australia, they could create a barrier to the widespread acceptance and application of computers around the world. perry r. morrison collaborations within higher education jennifer fajman guest editorial peter marwedel the computer graphics course in a small-college computer science program computer graphics has moved from the laboratory to everyday use in the last few years. new graphics programming systems and low-cost, high-powered computers have made computer graphics both much easier and much less expensive to use---and to both teach and learn. these now allow a beginning computer graphics programming course to be accessible to any student with reasonable programming skills, and consequently to be a service to a number of disciplines. this allows the course to take on a unique position on a small campus where cross-disciplinary interaction is valued. the author is developing such a course with a focus on graphics in the sciences, and will describe how the projected course topics and projects can involve students in scientific issues and support the development of skills in scientific visualization. steve cunningham ethical concepts and information technology the fundamental aspects of classical and contemporary ethics, particularly as they apply to the use of it, offer valuable lessons of professional conduct. kenneth c. laudon training software (ts) case study: feature selection system (fss) for computer personnel a. rushinek s. rushinek about this issue… anthony i. wasserman securing cyberspace ravi ganesan ravi sandhu computer graphics pioneers when editor gordon cameron asked me if i thought a regular column in _computer graphics_ authored by computer graphics "pioneers" would be a good idea, i responded with an enthusiastic "yes." then he asked me if i wanted to write it, and i became somewhat less enthusiastic. however, i've been able to convince sherry keowen, executive director of the computer graphics pioneers, a society comprised of long-time computer graphics professionals, to co-write the column with me. between the two of us, we should be able to develop a regular feature. of course, there is one question we might ask: "is anyone really interested in a column about the computer graphics pioneers?" at the august 1996 board meeting of the editorial board of _ieee computer graphics and applications (cg&a),_ i suggested that _cg&a_ do something in conjunction with the 25th anniversary of acm siggraph conferences. one of our board members looked me in the eye and said, "is anybody interested in that old stuff?" that's a valid question. however, since we attracted about 2,000 to the "into the fifth decade of computer graphics" panel on the last day of siggraph 96, i suppose i can respond with a tentative "yes." in any case, we plan to share with you some stories of the people and events that have shaped computer graphics; some quite significant, some modest recollections. carl machover inside risks: risks with risk analysis robert n. charette it skills portfolio research in sigcpr proceedings: analysis, synthesis and proposals this paper provides an analysis of information technology (it) skills portfolio issues from the perspective of firms. the motivation behind this study was our observation that firms are having increasing difficulty in determining their optimal skills portfolio and in acquiring and managing that portfolio in turbulent times.
because a shortage of skilled it professionals has been a common theme from the early days of computing, we believe that an analysis of past work using the skills portfolio perspective gives us insights into the gaps where future is research can illuminate this area. first, the paper reviews 102 it skills portfolio-related papers that appeared in the sigcpr proceedings between 1985 and 2000. then the theses of these papers are summarized and classified according to what skills portfolio firms look for and how they acquire and maintain skills. particular attention is paid to the gaps in the research to date. finally, a set of research issues and propositions is given based on the analysis. makoto nakayama norma g. sutcliffe computer whatcha-maycallit: insights into universal computer education a year's work on the nsf/university of tennessee high school computer science project (hscs) has indicated that it may indeed be possible to dissociate computer skills from the scary, elitist traditions of science and math curricula in high schools. teachers and students remote from the traditional science/math constituency are learning to play/work with the computer. the development of hscs is chronicled and some likely scenarios for its arrival on the high school scene are presented. the essential context is that of a race between declining computer hardware costs and declining support for public education. hscs is succeeding because it exploits computing's unique ability to bridge between the worlds of play, study and employment. j. m. moshell c. e. hughes c. r. gregory r. m. aiken our experiences teaching functional programming at university of rio cuarto (argentina) ariel ferreira szpiniak carlos d. luna ricardo h. medel effective group interactions: some aspects of group projects in computer science courses g. l. van meer c. d. sigwart client view first: an exodus from implementation-biased teaching timothy long bruce weide paolo bucci murali sitaraman person-job matching in the eighties one of the major tasks of a personnel psychologist is to match persons with jobs. this industrial match-making is a dynamic enterprise which continuously strives for improvement in its psychometric methods, as well as its responsiveness to the needs of society at large. from a psychometric point of view, several recent methodological advances have greatly improved the science of test development and the utility of test-based selection decisions or person-job matching. one such methodological advance is latent trait theory---a precise mathematical statement about the relationship between test performance and the underlying abilities or aptitudes of the individual. although this particular mathematical theory has been known to measurement specialists for some time, its application to test-related problems, however, is a more recent development. other recent methodological advances include improved procedures for assessing the validity and utility of selection decisions in a person-job matching enterprise. nambury s. raju the soviet computer industry: a tale of two sectors for the soviet economy in general, and the soviet computing community in particular, the last few years have been a period of unprecedented troubles and changes. the old, stable, centrally planned economic system has proven to be far more brittle than almost anyone expected; but attempts to build a market economy have fallen far short of many hopes and expectations. the net result to date is an economy in confusion and shambles (e.g., see [3, 6]). s. e. goodman w. k.
the soviet computer industry: a tale of two sectors for the soviet economy in general, and the soviet computing community in particular, the last few years have been a period of unprecedented troubles and changes. the old, stable, centrally planned economic system has proven to be far more brittle than almost anyone expected; but attempts to build a market economy have fallen far short of many hopes and expectations. the net result to date is an economy in confusion and shambles (e.g., see [3, 6]). s. e. goodman w. k. mchenry a crisis in computer science education at liberal arts colleges greg w scrugg harnesses and muzzles: greed as engine and threat in the standards process joseph farrell software survivor nicholas zvegintzov derivation of recursive algorithms for cs2 richard t. denman computers and society: a liberal arts perspective there is ambivalence among computer science educators regarding the degree to which ethical and value questions should be incorporated in the computer science curriculum. this paper states a philosophical case for substantive treatment of these topics in colleges committed to the liberal arts, and goes on to consider some of the practical difficulties involved. ellen cunningham road crew: students at work lorrie faith cranor legally speaking: first amendment rights for information providers? applying the first amendment of the u.s. constitution to computerized communication of information is raising many interesting questions. while the general principle of this amendment can be simply stated---it forbids the government from interfering with freedom of speech---the specifics of its application over two centuries of american history have yielded a complex matrix of principles whose application depends on a variety of factors. where computerized communication of information fits into this schema has yet to be definitively determined. the last "legally speaking" column (mar. 1991) discussed some first amendment issues raised by treating computerized information as private property, theft of which might be criminally prosecutable. this column will discuss quite a different first amendment issue. but these two columns can only begin to introduce a few of the challenging first amendment issues presented by the "electronic frontier." pamela samuelson methods of integrating the study of ethics into the computer science curriculum (panel session) donald gotterbarn deborah johnson keith miller gene spafford a web of fuzzy problems: confronting the ethical issues ina wagner tractor factories and research in software design roger c. schank comparison of student success in pascal and c-language curriculums richard f. gilberg behrouz a. forouzan computer support technology: training for the real world pam matthiesen t. s. pennington karen richards from student model to teacher model: enriching our view of the impact of computers on society z. chen general interest resources of use to computer science educators renee mccauley misconceptions in cis education prior knowledge can interfere with new learning. this is due to different contextual uses for the same symbols. some preconceptions that can lead to misconceptions are the equal symbol (assign value vs compare) and the addition symbol (addition vs concatenation). computer mathematics has differences from algebraic mathematics. a solution to these misconceptions is to develop mental models for these new conceptions. a stress on the new context and computer architecture will help show the differences between computer programming concepts and mathematical concepts. garry white
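the two symbol clashes named in the preceding abstract can be made concrete with a short fragment; the c++ below is an illustrative sketch added here, not code from the paper.

// illustrative sketch: the same symbols mean different things in
// programming than in algebra.
#include <iostream>
#include <string>

int main() {
    int x = 3;                   // '=' assigns a value; it does not assert equality
    bool equal = (x == 3);       // '==' is the comparison the algebraic '=' suggests
    std::cout << equal << '\n';  // prints 1 (true)

    std::cout << 1 + 2 << '\n';                   // numeric addition: 3
    std::cout << std::string("1") + "2" << '\n';  // '+' on strings concatenates: 12
    return 0;
}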
a built-in educational platform for integrating computer engineering technologies joan batlle joan mart llu how to create a successful failure many people enjoy working on software projects that wing their way towards failure. after all, troubled projects offer so much more in the way of excitement and advantages than those that boringly plod their way to success. for instance, projects in turmoil provide numerous ways to look good. you can make both employer and customer deliriously happy, become a hero, and gain a promotion merely for getting something, anything, delivered on time---while on a smoothly running project, it's expected! even if you don't make the delivery, you can still appear heroic and get promoted. but if you screw things up, who knows? then there is the increased job security. projects in turmoil usually go on far past their original deadlines, as customers are loath to cancel and lose their investment. your company benefits too, as it gets paid much more than if the project was merely successful. robert n. charette forum diane crawford the role of stakeholders' expectations in predicting information systems implementation outcomes an integrative model of is implementation success is presented. the model focuses on stakeholders' expectations as predictors of is implementation outcomes, and on the organization's competence in designing and implementing the system. the proposed framework is derived from the literature on is implementation, especially those studies that explain and/or predict the outcomes of such a process. the framework attempts to integrate the existing research streams: factor studies, process studies and expectancy studies. the model uses a multiperspective approach by taking into consideration stakeholders' different views of these processes. aline f. de abreu david w. conrath conjugacy and gradients in variational theory and analysis m. r. hestenes electronic management: exploring its impact on small business edward d. bewayo changes in acm's revised copyright policy david s. wise setting up a tutor training programme in computer science paul bakker andrew goodchild paul strooper david carrington ian maccoll peter creasy helen purchase experience with an automatically assessed course this paper describes our experiences of developing and running an introductory module for first year computing undergraduates. the 'supporting technologies' module is intended to equip students with basic computing skills that they will need for the rest of their course. a novel feature of the work discussed here is that several different automated assessment tools and techniques are integrated into a common framework sharing a common results database. this allows a wide range of different assessment formats within the same module framework. john english phil siviter editorial diane crawford the last word: disappearing boundaries aaron weiss teacher training curriculum project editorial overview robert p. taylor computer science in secondary schools: curriculum and teacher certification computer science in secondary schools is an area of increasing interest and concern to educators as well as to computer science professionals. each of the next two reports addresses an issue of major importance regarding computer science in secondary schools. the first report recommends computer science courses for the secondary school curriculum, and the second report recommends requirements for teacher certification in computer science. in 1983 the acm education board initiated efforts to formulate recommendations for secondary school computer science. two task forces, one for curriculum recommendations and the other for teacher certification recommendations, were established under the education board's elementary and secondary schools subcommittee.
the work of the two task forces was also supported by the ieee computer society educational activities board, and the final reports from the task forces were jointly approved by the acm and ieee-cs boards in july 1984. thus the reports are significant not only for the important issues that they address, but also because they represent a joint activity between acm and the ieee computer society. the work of the two task forces is summarized in the next two reports. the full reports are available as the publication computer science in secondary schools: curriculum and teacher certification, order number 201850, from the acm order department, p.o. box 64145, baltimore, md 21264. a. j. turner touring the c++ classroom project this workshop will lead participants through the "c++ classroom project", a project designed to help faculty bring exciting applications into the introductory classroom. this project has created, class-tested, and documented a number of c++ classes that enable students to: draw 2d graphics, populate excel worksheets and charts, control powerpoint presentations, and interact with talking/animated ms-agent characters. the classes are packaged with visual-c++ console applications in order to maintain a focus on learning standard c++ from a standard textbook. all system-dependent apis and nuances are "hidden" in the class internals for stepping over, or tracing into, as preference would have it! following a brief overview, participants will download the project files, step through the starter modules, and have the opportunity to write a few short exercises. a familiarity with c++, especially visual c++, is helpful but not imperative. feel free to join for the overview and/or the entire workshop. mike otten the central role of mathematical logic in computer science j. paul myers acm forum diane crawford evaluating the output of recursive routines using successive queues this paper shows a method of evaluating the results of recursive subprograms without using stacks, but using successive queues. in the sense used here, we start with a queue including just one item, namely the call to the recursive routine, using a suitable notation (usually compacted). the queue is then expanded one level into a new queue, using left-to-right evaluation, including a notation which shows which items are fully evaluated and which items yet need to be evaluated via calls back to the subprogram. the development of new queues stops when there are no more calls in the queue; in this case, what remains is the output desired, in left-to-right order. (sometimes the last queue contains operations which must be completed before the problem is solved.) one reviewer of this paper suggested an alternate title for this paper, which i'll reword as: "evaluating the output of a recursive routine using left-to-right expansion rules based on the recursive routine." but i prefer my title, as it is simpler, though slightly restrictive for the general cases.
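the successive-queue scheme described in the preceding abstract can be pictured with a small sketch; the toy recursion and the c++ below are supplied here for illustration and are not taken from the paper.

// illustrative sketch: expand a queue one level at a time, left to right,
// until no pending calls remain; what is left is the output in order.
// toy recursion assumed for the example: f(0) -> "a", f(n) -> f(n-1) "b" f(n-1).
#include <iostream>
#include <string>
#include <vector>

struct Token {
    bool isCall;      // true: a pending recursive call; false: finished output
    int arg;          // argument of the pending call (ignored for output tokens)
    std::string text; // output text (ignored for pending calls)
};

int main() {
    std::vector<Token> queue{{true, 3, ""}};   // start with the single call f(3)

    bool hasCalls = true;
    while (hasCalls) {                         // one expansion level per pass
        hasCalls = false;
        std::vector<Token> next;
        for (const Token& t : queue) {
            if (!t.isCall) {
                next.push_back(t);                     // already evaluated: keep
            } else if (t.arg == 0) {
                next.push_back({false, 0, "a"});       // base case yields output
            } else {
                next.push_back({true, t.arg - 1, ""}); // left call
                next.push_back({false, 0, "b"});       // literal between the calls
                next.push_back({true, t.arg - 1, ""}); // right call
                hasCalls = true;
            }
        }
        queue = std::move(next);
    }

    for (const Token& t : queue) std::cout << t.text;  // left-to-right output
    std::cout << '\n';                                 // f(3) prints abababababababa
    return 0;
}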
toward improving female retention in the computer science major factors affecting professional competence of information technology professionals design and development of effective information technology (it) based systems depends upon a staff of competent information technology professionals (itps). due to the rapid pace of technological innovation, diverging application of it, and changing role responsibilities of itps, it is becoming increasingly difficult for itps to maintain up-to-date professional competency. although not extensively examined in it research, professional obsolescence threats have been acknowledged and evaluated in referent research. psychologists studying the engineering discipline have suggested individual characteristics, nature of work, and organizational climate as being important determinants of obsolescence. the purpose of this study was to evaluate the relationships between individual personality differences and manageable work context factors and the degree of professional competency, or conversely obsolescence of itps. structural equation modeling was employed in evaluating the direct-effects model of professional competency. the study used questionnaires to obtain 161 usable self-report responses from systems analysts. the results suggest that individual personality differences and factors of the work environment do affect professional competency levels. overall, the research model accounted for 44% of the variance in itp competency. j. ellis blanton thomas schambach kenneth j. trimmer computer management studies for developing countries this paper describes the postgraduate course in computer management studies given to students from developing countries at university college london. a brief historical introduction is followed by a discussion of the objectives and philosophy of the course; our experiences of the problems encountered by the students on the course, both before they arrive in london and while resident in london, are described. our proposals for an improved, but probably shorter, course are followed by syllabuses and statistics relating to participants. gordon davies charles d. easteal using animation of state space algorithms to overcome student learning difficulties we describe an algorithm animation system for artificial intelligence search algorithms. we have identified a number of learning difficulties experienced by students studying search algorithms and designed the animation system to help students to overcome these difficulties. as well as the usual single-step mode for assistance in learning the individual steps of an algorithm, the system supports an innovative burst mode for visualising qualitative behaviour and facilitating comparisons between different algorithms and heuristic functions. the system has successfully been used in the classroom for 4 years and survey results indicate use of the system improves understanding. an empirical study comparing a group of 15 students using the animation system and 15 students who wrote programs for the algorithms revealed a generally similar level of understanding; however, the animation group was better at dealing with questions about qualitative behaviour. vic ciesielski peter mcdonald computer science for secondary schools (panel session): course content jean rogers michael r. haney john d. lawson software engineering for business applications: twenty years of software engineering (panel session): looking forward, looking back tom demarco brazil's pioneer undergraduate program in information systems in the early 70's, the number of computers installed in brazil was growing at a rate of 30% per year, which was higher than in european countries, the u. s.
and japan, placing brazil among the eight top world users of computers [1]. by that time, only a few brazilian universities were offering courses in data processing. the lack of a formal education structure resulted in data processing positions being mainly filled with people trained by manufacturers. the high demand for data processing professionals, mainly in the top levels of the career, became one of the brazilian government's concerns. since brazil is a developing nation, resources had to be efficiently used and foreign reserves could not be spent on equipment which was not used to its full potential. therefore, highly qualified professionals were needed. in view of these facts, the federal government decided to support and encourage the creation of professional data processing programs, mainly at the universities. the role of the university in education is extremely important, since it makes it possible to improve the qualification of the labor force and also to improve the education professionals themselves. in 1973 and 1974, with the support of the federal government, about 15 programs for graduation of technicians in systems programming, analysis and design were created. those programs, with a duration of two years, were intended to be a short-term solution, providing a highly qualified labor force. besides the programs created directly by the government, several courses were implemented at universities that assumed the task of preparing qualified data processing manpower, accomplishing their social role. with this purpose, the pontifical catholic university of rio grande do sul, in 1974, implemented an undergraduate program in information systems analysis. the pioneering aspect arises from the area involved, that of information systems. maria lúcia blanck lisbôa results of sigcse membership survey nell dale computer education in the 1980s, a somber view the discipline of computer science is a child of the 1970s. its growth in infancy has been impressive, statistically, but so it is with infants. as we enter the decade of the 1980s, the discipline and all of us engaged in computer science education face some difficult choices. it is becoming abundantly clear that in the 1980s computer education cannot be provided for our students in the variety and quality which they demand. it will fall to us, personally, to decide what kind of computer education will be made available. in this next decade we will suffer a national deficiency of computer expertise equivalent to our national deficiency in oil. the cost of this expertise is already inflating at an alarming rate, and we have yet to begin to mobilize programs which in the long term will stabilize the market. it is therefore inevitable that the 1980s will witness a frantic shift to alternative sources of expertise and a consequent dilution in the quality of computer professionals and computer products. the academic profession must make program decisions now which will serve to minimize the cost which our society will pay as it struggles to fully enter the computer age. william mitchell computer laboratories for the theory of computing course we describe a set of structured laboratories designed for the theory of computing course. these laboratories include various types of exercises, including the writing of programs which perform tasks related to the concepts studied in class, as well as more traditional theory problems done with the aid of interactive computer programs. laura a.
sanchis international workshop on software transformation systems (sts '99) marcelo sant'anna julio leite ira baxter dave wile ted biggerstaff don batory prem devanbu liz burd e-commerce and computer science education electronic commerce is gradually changing the way commerce is conducted. computer science graduates will need to be prepared for the challenge posed by the increasing demand for professionals who can develop and maintain electronic commerce systems. by examining the standard computer science curriculum, some suggestions are proposed. yuzhen ge jiangeng sun corporate contributions: is the tail wagging the dog? m. draper r. w. lutz j. nicholson the information center as a middle ground in offsetting personnel obsolescence the primary purpose here is to present the profound impact of the ic on organizations and individuals; the concept's congealing force which is now patterning organizations forged around user interfaces and environments; and the dramatic effect of evolving an organization essentially with an innovative and effective mechanism for merging organizational and functional islands of information systems and services. herman hoplin the management of information systems occupations: a research agenda jon a turner jack j baroudi human rights: some perspectives jack minker computer systems research (invited talk) (summary only): past and future butler lampson ethical knowledge for an electronic era (poster session) gladys garcía delgado tablet: personal computer of the year 2000 a design represents a compromise between conflicting goals, and the design of the personal computer of the year 2000 is no exception. we seek something that will fit comfortably into people's lives while dramatically changing them. this may appear to be a contradiction that cannot be reconciled. but if the technology does not fit easily into the habits and lifestyles of its human users, it will be discarded by those it was meant to help. and if this new tool does not change the life of its owner, it is only because we have been too shortsighted to imagine the possibilities. our way out of this dilemma is to base the design upon something which is already integrated into everyone's life, to take a vital tool and give it more life. we have chosen to improve something that most people use every day, the humble paper notebook. we have all heard the computer revolution was supposed to eliminate paper from the workplace. instead, it has led to desktop publishing; now we can not only write papers but typeset them ourselves. paper notebooks have many properties that make them particularly friendly. they are light and portable. no one thinks twice about taking a pad anywhere. they are easy and natural to use, as accessible to the toddler as to the octogenarian and as relevant to the artist as the engineer. they can be used to communicate with other people. they are the ideal medium for integrating text and graphics, and perfect for creative doodling. moreover, notebooks are forgiving of mistakes: simply peel off the page and start anew. it is natural to revise and edit written documents. there is something satisfying about crossing out an offending sentence from a written draft, a feeling that word processors have not captured. we aim for a computer that will provide all of these benefits and more. thus, the personal computer of the year 2000 will be a portable machine the size of a notebook. we will write and draw with a stylus on a screen which mimics a physical writing surface.
enhancing this with the powers of computation and communication, we create a tool that will improve the way we live and work. this report provides a more concrete depiction of the machine we have in mind, namely tablet. b. w. mel s. m. omohundro a. d. robison s. s. skiena k. h. thearling improved user service - a 24 hour helpdesk john biddle john tyler the concerns of new trainers doris carey regan f. carey teaching software engineering through real-life projects to bridge school and industry to educate graduates to succeed in industries which demand high quality software engineers is not easy due to rapidly changing organization styles and working environments. the major limitation of university education may be the lack of opportunity to expose students to real field problems. in this article, we present our experience of exposing graduate students to a real-time plant monitoring and control software development project and show how the software engineering process has been customized to educate them and satisfy the user requirements at the same time. ki-sang song social implications of computers: ethical and equity issues judith e. jacobs the computing skills workshop at carnegie-mellon university john stuckey designing a campus-wide workstation environment for science and engineering instruction sharon roy integrating vrml, java, and html in a web-based tool for computer literacy virtual reality visualization tools are gaining importance in many fields including medical training, elementary and secondary school education, and ecology, among others. since humans live in a three-dimensional world, there is an increasing interest in developing educational tools that are three-dimensional in nature and that allow the user to interact with objects in the world. virtual reality visualization tools can be used for this purpose. the rapid growth and popularity of the internet has caused the emergence of many technologies that can be used by instructors to develop two- and three-dimensional visualization tools to enhance the educational experiences of students. the work discussed describes the integration of html, java, java swing and vrml to produce a web-based crt virtual reality visualization tool that can be used in computer literacy when students are introduced to hardware components. the tool that was developed will be described. also, lessons learned and problems encountered during the integration of various technologies will be discussed. some experiences with cai and natal richard gee rob mcarthur a central ohio consortium for retraining in computer science a consortium of eight central ohio colleges and universities is described. the purpose of the consortium is to provide opportunities for faculty at the participating institutions to be retrained in the field of computer science. these faculty will then be able to return to their home institutions to develop and teach computer science curricula. the program provides flexibility of scheduling the retraining, in terms of the time of year and nature of the retraining undertaken by the individual participants. zaven karian stuart h. zweben planning and implementing an internship program for undergraduate computer science students this paper describes successful efforts to implement an internship program in computer science. details for planning, implementing, maintaining, and evaluating the program are presented.
ted mims raymond folse andrea martin congressman stark and professor berghel were given the opportunity to respond to each other's viewpoints fortney h. stark hal berghel software process maturity: is level five enough? roger g. fordham yaatce - yet another approach to teaching computer ethics nancy j. wahl web-based student feedback to improve learning a web-based anonymous feedback facility has been used for three years to give students a voice and some ownership of the subjects they are studying, and to allow staff to make adjustments to the teaching program in time to help the current student group. improvements have been made to the system, e.g. the ability for students to identify themselves, extensions to staff e-mail copies, discussion threads to delineate topic areas, and automatic generation of statistics on usage. these statistics show how the need for anonymity decreases as the level of maturity of the students increases. other planned or recently added improvements are described. jason lowder dianne hagan the myth of the electronic cottage t. forester reorganization for the 90's theresa a. pardo jami l. bult the great divide bob o'connor tracking the competition carol a. twigg war stories from andrew j. h. morris component-based e-commerce: assessment of current practices and future directions component-based e-commerce technology is a recent trend towards resolving the e-commerce challenge at both system and application levels. instead of delivering a system as a prepackaged monolithic system containing any conceivable feature, component-based systems consist of a lightweight kernel to which new features can be added in the form of components. in order to identify the central problems in component-based e-commerce and ways to deal with them, we investigate prototypes, technologies, and frameworks that will transcend the current state of the practice in internet commerce. in this paper, we first discuss the current practices and trends in component-based electronic commerce based on the international workshop on component-based electronic commerce. then, we investigate a number of research issues and future directions in component-based development for electronic commerce. martin bichler arie segev j. leon zhao underneath the arch: a personal report of arg meeting john barnes interactive learning in a beginning cs course ashraful a. chowdhury twelve ways to improve cooperation with the computer center cooperation between the computer center and faculty is necessary for the continuance of programs in computer science. enumerated here are twelve ways that cooperation between the computer center and faculty can contribute to a computer science education from the perspective of computer center staff. ardoth a. hassler the external auditor's review of computer controls the foreign corrupt practices act of 1977, coupled with growing demands for corporate accountability, has forced both auditors and computer administrators to evaluate computer-based controls. computer administrators can benefit from both a knowledge of an auditor's approaches to evaluating controls and his/her recommendations for control improvements. here, a survey of the control evaluation practices and desirable control features identified by computer auditors is presented, along with recommendations to ease the burden of the auditor's review. the authors' suggestions should ease the tasks of internal control analysis and of preparation for possible public reports on an organization's system of internal control.
charles r. litecky larry e. rittenberg the computer bowl karen a. frenkel inservice education of high school computer science teachers this paper describes an inservice retraining program for high school computer science teachers. since computer science teacher certification is a recent development, most of these teachers were trained in another field. this project consisted of a sequence of courses which taught the core principles of computer science to these teachers. j. kiper b. rouse d. troy preliminary evidence for the effect of automatic responses to authority on information technology diffusion successful diffusion of it within an organization can depend on it use decisions by many individuals. current rational models focusing on cognitive determinants of such it use decisions explain 30 to 40 percent of organizational members' use behaviors. this article demonstrates, via a laboratory study, that automatic unthinking responses may be significant in explaining user behavior. we find that automatic responses to authority, which are cued by incentives and controls, may explain more than 20 percent of it use variance, over and above that accounted for by rational decision making. this suggests interesting opportunities for it diffusion research, exploring automatic it use responses resulting from authority, as well as other automatic response generators such as reciprocation and consistency. randolph b. cooper anol bhattacherjee life-long learning carol edwards news track robert fox activities to attract high school girls to computer science susan h. rodger ellen l. walker progress towards a world-wide code of conduct in this paper the work of the international federation for information processing (ifip) task group on ethics is described and the recommendations presented to the general assembly are reviewed. while a common code of ethics or conduct has not been recommended for consideration by the member societies of ifip, a set of guidelines for the establishment and evaluation of codes has been produced and procedures for the assistance of code development have been established within ifip. this paper proposes that the data collected by the task group and the proposed guidelines can be used as a tool for the study of codes of practice, providing a teachable, learnable educational module in courses related to the ethics of computing and computation, and looks at the next steps in bringing ethical awareness to the it community. john a. n. lee jacques berleur history in the computer science curriculum ifip working group 9.7 (history of computing) is charged not only with encouraging the preservation of computer artifacts, the recording of the memoirs of pioneers, and the analysis of the downstream impact of computer innovations, but also with the development of educational modules on the history of computing. this paper presents an initial report on the study of the history of computing and informatics and preliminary proposals for the inclusion of aspects of the history of computing and informatics in the curriculum of university and college students. john a. n. lee a software infrastructure to support introductory computer science courses kenneth a. reek computer support for teaching large-enrollment courses computing systems are particularly useful for teaching support of large-enrollment courses where essentially the same material is covered during successive course offerings.
described herein are the computer capabilities developed and used for teaching introductory computer science courses at west virginia university. capabilities include examination question data base creation and maintenance, automated examination preparation and grading, and student records handling. william h. dodrill reconstructing the acm code of ethics and teaching computer ethics don gotterbarn a proposal for valuing information and instrumental goods marshall v. van alstyne does world wide web provide better resources than library for learning - a case study (poster) raymond m. w. leung eugenia m. w. ng working the net: it workers, educate thyselves jill h. ellsworth road crew: students at work john cavazos computers and the future of work (panel session) jeanne adams jerry wagener walt brainerd the need for standards in multistate transaction taxes kaye caldwell tom's tool tara elmer a study of the relationship between job requirements and academic requirements in computer science in scrutinizing the 1981 occupational outlook handbook, one cannot fail to recognize a strong positive connection between mathematical requirements in educational pre-requisites for occupational entry and the concomitant demand and remuneration of such employment. very little research has been completed that is characterized by an intensive analysis of the mathematical competencies requisite for effective job functioning in specific occupations. laws (1968) and miller (1970) interviewed occupational representatives in technical areas and 44 occupational specializations in science. both studies challenged the mathematical collegiate pre-requisites expected of the practitioner; laws and miller recommended a re-evaluation of pure mathematics requirements for employment. saunders (1978) interviewed a single representative from each of 100 occupations and found facility with whole numbers, decimals, use of calculators, and percentages to be essential. saunders' sample needs to be increased in order to obtain more reliable results. john f. loase brian d. monahan responsible computing at the university of delaware richard gordan on distance education courseware multimedia, www and advanced technologies are bringing changes in the field of education. additional ways of interaction between students, study material, and teachers are introduced. developments in technology and their introduction into education have changed its character and methods. a need exists for distance education and additional education at home. with the cost of multimedia computers becoming lower and the availability of educational cd-roms higher, proper inclusion of these in curricula and distance education is needed. here, some strategies for distance education software are proposed. marijana lomic zoran putnik on the functional relation between security and dependability impairments erland jonsson lars strömberg stefan lindskog harnessing technology for effective inter/intra-institutional collaboration (panel) marian petre douglas siviter project leap: lightweight, empirical, anti-measurement dysfunction, and portable software developer improvement philip m. johnson designing closed laboratories for a computer science course alan fekete antony greening some ideas on the educational use of computers in this paper we will develop a new point of view on the educational use of computers, stressing its potential for interactive use and the intellectual implications of the programming activity.
after giving a rather detailed analysis of programming viewed as a cognitive process, we propose a dual model centered on the notions of piloting and programming, followed by some snapshots of the interaction of a child with a computer. we will end with a proposition for the construction of an intelligent and convivial user environment. harald wertz proposed information systems accreditation criteria (panel session) this panel will discuss the background leading to the decision to develop draft criteria for accreditation of information systems programs, the current status of the draft criteria, feedback received from presentations at a number of conferences and on a web-based survey, and a brief description of future plans for the project. time will be allotted for questions from the audience. doris k. lidtke willis king john gorgone gayle yaverbaum computer science learning at pre-college ages this paper has been accepted for publication in the proceedings, but the photo-ready form was not received in time. copies of the paper should be available upon request at the presentation. it may appear in a later issue of the sigcse bulletin. mark a. rosso kevin w. bowyer inferring cognitive focus from students' programs programs written by students in an introductory computer science course were analyzed and patterns abstracted from them. these patterns include style of modularization, choice of constructs, choice of vocabulary, and style of communication through user interaction and documentation. individual characteristics of the students, such as their focus on detail or on aggregate conceptual units, their manner of organizing knowledge, and their perception of the purpose of computer programs were compared with the patterns in the students' programs, with tentative relationships being identified. jean b. rogers the limits of correctness brian cantwell smith a conversation about ensuring computer literacy of first-year students this paper explores the purpose and usefulness of a course in computer literacy. it introduces a method that allows students to make choices on the order and timing of covering material within the course and uses a skills assessment software package to ensure learning and equivalency across and within sections. it provides some methods for dealing with the problems inherent in a self-paced course. finally, the future of a course in computer literacy is discussed. i'm all for a paradigm shift in the way we teach computer literacy to our first-year students. for instance, we could follow in the footsteps of our colleagues over in the english department and establish a "computers across the curriculum" program. or we could provide on-line computer tutorials to all of our entering students and let them teach themselves. or we could simply assume/require that computer literacy is provided in the k-12 years. that's got to happen at some point, doesn't it? our course is offered within the school of business and our employers tell us that computer application skills (or the lack thereof) are the primary contributor to the first impression of competency of a new intern or entry-level employee. our goal is to ensure that students are computer literate when they go on internships and when they graduate. thus, we have the computer literacy course, sometimes known as a course in computer applications or teaching microsoft office/wordperfect suite.
elizabeth edmiston marilyn mcclelland an evaluation scheme for a comparison of computer science curricula with acm's guidelines during the past ten years, several model curricula for technical academic disciplines have been developed and published by professional societies. among the recommendations for computer science are those of '68 and '78 by acm. this paper presents a quantitative scheme for evaluating a computer science curriculum as compared to acm '68 and '78 guidelines. to demonstrate the evaluation scheme, curricula from three (3) universities are compared to the acm guidelines. the results of those numeric comparisons are tabulated and discussed. other areas that affect a computer science program (textbooks, computer facilities, and faculty) are also discussed. nancy e. miller charles g. petersen project leap: personal process improvement for the differently disciplined carleton a. moore impact of the technological environment on programmer/analyst job outcomes recent research has shown that key dp/is personnel job outcomes (e.g., turnover, organizational commitment, job satisfaction) are affected by job design, leadership characteristics, and role variables. this study investigates another class of variables, the technological environment faced by dp/is personnel, that might impact these job outcomes. the technological environment includes (1) development methodologies employed, (2) project teams and reporting relationships, and (3) work characteristics. variables from all classes were found to impact dp/is job outcomes. over 11 percent of the variance in dp/is job satisfaction is explained by these variables. jack j. baroudi michael j. ginzberg information technology and national development: enlarging the opportunity by enlarging the workspace paul s. licker competition versus cooperation: models for computer education? tony clear teaching theory of computation using pen-based computers and an electronic whiteboard this paper describes a _theory of computation_ course that was taught in an electronic classroom equipped with a network of pen-based computers, a touch-sensitive electronic whiteboard, and locally written groupware that was designed to enhance the ability of teachers and students to share written information during class. we first describe the technology that was used to support the course, and then provide an overview of the instructor's use of this technology to engage students during class. finally, we present the students' reaction to the approach. dave berque david k. johnson larry jovanovic hci education - people and stories, diversity and intolerance alan dix contextualism as a world view for the reformation of meetings the foundations for research and action in the area of group work are examined. four alternative "world views" are presented. one of these, contextualism, is discussed in depth. its methodological consequences for research and implications for reform of group meetings are explored. john whiteside dennis wixon building self-contained websites on cd-rom the burks project has for the past three years produced non-profit cd-roms of resources for students of computer science. now in its third edition, burks is a self-contained website which incorporates a pre-installed web browser and which now spans a set of two cd-roms. this paper describes the techniques used to implement this product. john english systems analysis and design in an uncontrolled management environment john c. stoob user services - whom do you work for?
every university computer center has to spend money on machine resources, and every university computer center has to spend money on administrative programmers and machine operators. but to what extent does a university feel that it should spend its resources in support of user services? c. j. duckenfield it staffing and retention: a success story the current high demand for information technology (it) work to be done and the comparative shortage of well-trained staff have forced many firms to seriously consider unique methods to recruit and keep their it personnel assets. we provide an industry case study of a firm with a normally high it workload, overloaded in 1998 with three additional large-scale it projects. simultaneously, the firm experienced its highest it staff turnover in recent years. the firm undertook a four-part research study and analysis of its problems. a balanced program was implemented based on the results of the study, focusing on interesting and challenging work, working environment and compensation. it retention increased dramatically from a low of 76% in september 1998 to 84% in late 1998 to 98% in most of 1999; key business imperatives are proceeding on schedule, new work is being managed, and it's contribution has become a recognized success factor for the company as a whole. brian gill anne banks pidduck asc x3-a: strategic directions rhett dawson the changing of the guard: a case study of change in computing change, in the context of computing environments, can assume a number of forms, varying widely in nature and degree. at tulane university, tulane computing services initiated a significant change in the office of the president of the university: the existing system of stand-alone ms-dos microcomputers was replaced by a macintosh local area network complete with laser printing, mainframe communications, and shared file access. in this instance, the director of the department (the president of the university) directed that the change take place, and the central computing organization and the president's office staff worked together to accomplish the transition to the new computing environment. the change was effected with minimal disruption to the department's operations. this paper describes the successful implementation of change from one computing environment to another. we discuss the planning and preparation involved in facilitating change, the training and education required for those involved in the change, and the attitudes of the individuals affected by the change. an important component of the process was interviews conducted with those involved. these interviews were conducted in order to determine the factors which influence an individual's attitudes toward change, specifically in the context of computing environments. alison hartman thomas gerace cliff woodruff j. e. diem managing an oft-overlooked resource: student employees with the ever-present need for quality user services support, a computing center can ill afford to overlook any resource which might aid in accomplishing it. to arrive at this end, a staff must remain abreast of the constant change in computer technology, as well as the changing needs and wants of its users. at the university of notre dame, we have found that student employees have the potential to excel in the area of user services support, and many times they can do so in a more cost-effective manner than a full-time staff member.
with careful planning and consideration on the part of those supervising the students, their employ can grow to benefit themselves, the full-time staff, and most importantly, the users. if the process of successfully utilizing students were as easy as it sounds, this paper would have no raison d'etre, but there are serious questions to be posed when considering student employees: who to hire, and why; how they should be trained; how to motivate them to reach their fullest potential; and how to most effectively manage them. mark b. johnson beyond the limits: flight enters the computer age g. pascal zachary the evaluation of it ethical scenarios using a multidimensional scale concerns about the increased use and abuse of information technology have evolved into more formalized evaluations of computer ethics in many organizations. while ethical situations regarding computer usage, privacy, and ownership have been previously researched by focusing on ethics as a holistic construct, recent ethics research has produced an ethics scale which measures not one but three dimensions of ethics: moral equity, relativism, and contractualism. the purpose of this research is to extend previous research in which computer users evaluated ethics using a single item. the authors utilize a multidimensional scale that identifies the ethical rule or principle that was violated. they then compare the unidimensional scale to the multidimensional scale and discuss the tradeoffs involved in adding scale items. t. selwyn ellis david griffith the anomaly of other-directedness: when normally ethical i.s. personnel are unethical despite the existence of laws and much publicity surrounding illegal software copying, it is widely believed that software copying is commonplace. yet reasons why such illegal behavior continues to occur are lacking. this study used a model of ethical decision making as a guide for research and found the individual factor of other-directedness helped explain is personnel's intentions toward illegal software copying. no such individual factor was related to judgments concerning right and wrong. these findings suggest that highly other-directed is personnel may behave against their better judgment, especially in cases where they perceive unethical behavior is commonplace. implications for management and ethics education are discussed. susan j. harrington inside risks: risks of pki: e-commerce carl ellison bruce schneier acm forum diane crawford minimum requirements for effective distance teaching systems the first practical video conferencing systems were introduced at the fourth world telecommunications forum held in geneva in 1983 [eva95], but their use is still rare. by comparison, the personal computer, introduced about the same time, is now a commodity sold by the millions. that is surprising, considering the cost and time savings associated with teleconferencing and the economy of scale achievable by teleteaching. this paper presents the position that lack of quantity and quality in the transmission process is responsible for the low success. geraldo lino de campos an elementary school scenario david moursund karen billings hiring interns at kent state univ.: a winning manpower strategy in a market shortage nancy bogucki duncan bruce m. petryshak who should teach computer ethics and computers & society? deborah johnson modifying freshman perception of the cis graduate's workstyle student interest in computer-related careers has declined dramatically in recent years.
one possible explanation for this decline is incorrect perceptions of the workstyle associated with the positions held by cis graduates. a study of freshman business majors was conducted which: (1) examined whether an introductory computing course changed those perceptions, and (2) compared those perceptions to their own expected starting positions. the study showed that: (1) the introductory computing course had a negligible effect on changing student perceptions of the nature of the cis graduate's initial job, and (2) compared to their perceptions of cis jobs, they expected their own jobs to involve substantially more human interaction and less direct involvement in the implementation of computer technology. the results suggest a need for: (1) a more proactive strategy to market the mis career both inside and outside the classroom, and (2) some creative approaches for the placement and content of programming activities in both the major and the career. charles h. mawhinney david r. callaghan edward g. cale some thoughts on revising a computer science program micheal k. mahoney reexamining the introductory computer science course in liberal arts institutions j. thomas allen hayden porter t. ray nanney ken abernethy beyond the data processing horizon the transition to the post-industrial society is characterized by the introduction of electronic computer-communication systems whose mind-amplifying powers have made data and information our most precious resource. realtime services provide knowledge workers with access to powerful information systems, in offices, factories, laboratories, classrooms and even the home. spectacular hardware progress has been attributed to the "miracle of the chip". we find such chips at the heart of computers, communication devices, and input/output peripherals with whose help we perform such functions as packet-switched communications, distributed data bases, and computer networking. powerful software engineering tools brought into existence new and improved programming languages, powerful computer operating systems, and a plethora of applications programs. these advances have also created problems. as the data trail grew, citizens voiced their concern about privacy protection of these data. computer security and the design of trusted computer bases are expected to stem the rising tide of reported "computer crimes". electronic funds transfer systems are used to move astronomical sums of "virtual money" over worldwide networks; "virtual books" and other electronic communications are being read widely and instantaneously without the need for hard copy; mechanical slide rules and mechanical watches have gone the way of the dodo bird after being replaced by their electronic offspring. there is even talk of impending changes in the structures of our institutions as a consequence of the pervasive transfer of this new technology into the hands of everyone. carl hammer acm president's letter: h.r. 109 peter j. denning electronic frontier: blanking on rebellion: where the future is "nabster" brock n. meeks being blank in bright waters brock n. meeks thinking about writing a textbook? paul morneau goal-oriented laboratory development in cs/ee the ieee computer society's educational activities board, with the strong support of the acm, is tackling the problem of laboratory development, maintenance and support.
the laboratory monograph series is intended to provide help to those setting up laboratory programs and to serve as an outlet for those who wish to publish in a practice-oriented educational area. keith barker a. wayne bennett gordon e. stokes mike lucas maarten van swaay the motivation of students of programming students approach the study of computing in higher education in increasing numbers from an increasingly wide variety of backgrounds. in most degree-level courses one of the first modules students will encounter is intended to teach them to program. as the students become more diverse, so do their motivations for taking their degree. anecdotal evidence from many institutions is that students are becoming more tactical, and will engage only in those activities that they see as contributing to an eventual highly paid job. this paper describes an investigation into the motivations of students for taking a degree in computing, and for studying programming in particular. the results raise a number of issues for the teaching of programming. tony jenkins model curricula for it schools: report of a curriculum committee peter j. denning wayne dyksen richard leblanc edward robertson a hierarchical spiral model for the software process j iivari multi-team development project experience in a core computer science curriculum this paper discusses our introducing a multi-team development project into the third semester of an undergraduate computer science core curriculum. one of the goals of the third semester course has been to provide students with practical experience in system design and development. the multi-team development project exposes students to the rigors of working together to complete a specific component of a larger system and to ensure that all components integrate properly. it also introduces them to project management concepts such as project scheduling, maintaining team journals, delivering written and oral team status reports, and participating in project meetings. norbert j. kubilus the next computer t. dietrich integrating technology into computer science examinations david v. mason denise m. woit viewpoint: technological access control interferes with noninfringing scholarship andrew w. appel edward w. felten is the new economy socially sustainable? (invited presentation) (abstract only) at the turn of the millennium, the revolution in information technology has ushered in a new economy. this economy, which originated in the united states, and more specifically on the american west coast, is spreading throughout the world, in an uneven, yet dynamic pattern. it is essentially characterized by the key role of knowledge and information in spurring productivity and enhancing competitiveness; by its global reach; and by its networked form of business organization. well managed, this new economy may yield an extraordinary harvest of human creativity and social well-being. however, several major contradictions threaten the stability of this new economy: the volatility of global financial markets; the institutional rigidity of business, legislation, and governments in many countries; increasing social inequality and social exclusion throughout the world, limiting market expansion and triggering social tensions; and the growing opposition to globalization without representation on behalf of alternative values, and legitimate concerns on the environmental and social costs of this model of growth.
information technology offers great potential in helping to supersede these contradictions at the dawn of an emerging socio-economic system. but the speed of technological innovation requires the parallel development of institutional and cultural innovation, away from bureaucracy but closer to people, to ensure the sustainability of the new economy, and to spur the new wave of technological creativity. manuel castells combining cooperative learning and peer instruction in introductory computer science cpsc 120, principles of computer science i, is a first semester freshmen level course for computer science majors. over a three semester comparison period, this course had an average wdf rate of 56% (i.e. percentage of students receiving a grade of "d" or "f", or withdrawing from the course). in two sections of this course, two strategies, peer instruction and cooperative learning, were combined to lower the wdf rate for both sections to an average of 32.5%. the improvement was even more dramatic for the female students in the classes, who improved from a 53% wdf rate to a wdf rate of only 15%. j. d. chase edward g. okie viewing video-taped role models improves female attitudes toward computer science gloria childress townsend improving the performance of technologists and users on interdisciplinary teams: an analysis of information systems project teams jonathan k. trower detmar w. straub the education and licensing of software professionals: the myth of "a perfected science" considered harmful don gotterbarn the senior information systems design project seminar one of the challenges of teaching mis is preparing students to apply the knowledge they have gained in a real project. although they have developed proficiency in problem-solving, structured design techniques, and programming during course work, many mis students have never interviewed a user or been asked to make design changes at a user's request. in addition, many students have never had to work on a project team, manage schedules, and meet project deadlines. the senior information systems design project course provides students with an opportunity to apply systems concepts and techniques in the design of an information system. students identify "live" projects and work on project teams. in the past, many of these projects have been programming design and implementation projects provided by local industry. however, in large-scale projects, students could not start with problem definition, proceed to requirements specification, and complete detailed design, because all of this could not be accomplished in a single term. as a result, they would do segments of larger projects. with the advent of the microcomputer, however, many smaller scale projects became available in offices seeking to automate records management, routine accounting, and other office automation systems. these projects created an opportunity for students to do an entire project, from systems analysis to detailed design. the purpose of this paper is to describe the objectives and procedures of the information systems design project seminar and to discuss the nature and scope of design projects conducted in university offices during the fall, 1985 quarter. the paper will describe the respective roles of student systems analysts and users, the systems development practices followed, and some of the successes and pitfalls of the experience. 
mary sumner organizational issues for user services agelia velleman the dimensions of healthy maintenance what characterizes "healthy" or "satisfactory" software maintenance? how can we know it when we see it? this paper gives initial answers to these questions. we first argue the need for objectively measurable maintenance performance criteria in judging the "adequacy" of maintenance and present a set of criteria for judging maintenance performance in a particular software environment. we then subject the criteria to a practical test by applying them in this environment. we show how applying the criteria enables an informed overall maintenance performance appraisal, locates general maintenance problems, stimulates suggestions for improving maintenance on individual projects, allows these projects' maintenance to be compared and the projects ordered for improvement, and assesses the potential effectiveness of the suggestions in new project maintenance. we also sketch how criteria application can be generalized to software development monitoring and design methodology evaluation. robert s. arnold donald a. parker experiences with a continuing education seminar: "computers for small business" an ongoing, evening seminar concerning the selection and use of computers in small business is described. the factors affecting the attendee response to each of the four offerings and the course format as currently evolved are discussed. william e. leigh the secretariat in formal standards development kate mcmillan teaching computer science without a computer "the computer in action" is a role-playing activity developed by the computer science outreach program at purdue university, in which students process instructions and data in the same way that a real computer does. students play the roles of keyboarder, bus driver, cpu (the brains), math wizard, memory manager, print manager, screen writer and user, carrying cards with instructions, values and messages to and from the input, processing and output stations. the activity is designed for students in kindergarten through grade 3. paul addison the group rep: effective decentralization of applications support at the center for naval analyses (cna), we have succeeded in providing computation application support to all parts of this decentralized research organization by decentralizing our staff. cna is divided into four functionally separate departments, each overseen by a vice president and broken functionally into smaller programs or divisions. three departments do studies of day-to-day navy operations, long-term navy planning, and the marine corps; the fourth is concerned with cna's administration. applications support operates in an environment much like that of any university user services group serving users with a wide variety of needs, skills, and applications. the application consultants are call group representatives. the group reps, serving as the first line of consulting support, share office space with the study teams they support. they speed communications throughout cna and are in a position to gain detailed knowledge about applications peculiar to their projects. they also tend to build close working relationships with their "customers." finally, group reps are often major participants in research for cna's clients in the department of the navy and elsewhere. research results are reported at the highest levels of government and often affect important decisions. there are four useful results of such participation in research. 
first, the group rep gains greater respect from the line organization. second, being a "user" endows the group rep with a little more understanding and even a touch of humility. third, the group rep develops experience with new tools. finally, the group rep who wants a career path into the line organization is thus provided with one. the remainder of this paper covers issues that may prove useful to anyone who is considering the advantages of setting up a similarly oriented user services organization. the appendix tries to define the group rep position in more detail. steven e. naron educational research: a new arena for computer science education janet hartman system reconnaissance for system development this paper examines methodologies for large scale system development. by "large scale" we mean system projects carried out by teams rather than by a single person. system development is taken to mean the whole range of activities from (someone, somehow) deciding a system is needed to eventually (someone, somehow) deciding to scrap the system. the typical organizational setting we are concerned with would be a large corporation or government agency with a systems staff numbering in the hundreds or thousands. existing life cycle methodologies have a number of commonalities. the rather poor results from use of these methods are traced to a basic flaw in all of the methods. a number of common results stemming from this flaw are discussed. the solution to the problem is conceptually simple but somewhat more difficult to implement. implementation schemes for the solution are discussed and our particular incarnation of a solution is briefly presented. barry clemson hasan sayani syril svoboda emphasizing the process in delivering cs-1 v. arnie dyck why implementing information technology is more than a technicality (abstract): a theoretical and empirical analysis of implementation strategies w. m. de jong j. l. simons how to be a terrible thesis advisor nigel ward the impact of the americans with disabilities act of 1990 gretchen l. van meer charles d. sigwart the role of consensus roy rada george s. carson christopher haynes architecture and design of alphaserver gs320 kourosh gharachorloo madhu sharma simon steely stephen van doren viewpoint: developing it skills internationally: who's developing whom? scott hazelhurst on computing and the curriculum robert mcclintock programming pedagogy - a psychological overview can we turn novices into experts in a four-year undergraduate program? if so, how? if not, what is the best we can do? while every teacher has his/her own opinion on these questions, psychological studies over the last twenty years have started to furnish scientific answers. unfortunately, few of these results have been incorporated into curricula or textbooks. this report is a brief overview of some of the more important results concerning computer programming and how they can affect course design. leon e. winslow editorial carl cargill giancarlo succi design of c++ classes and functions to model non-game interfacing applications of the inputs of a joy port (tutorial presentation) a summary and demonstration of the synthesis of c++ classes and functions which would model non-game applications of a traditional joy port of the ibm-compatible personal computer. a classic security alarm system with multiple zone circuits hardwired to the port will be used as an application example.
the generic physical/electrical provisions and characteristics of the port for digital and analog inputs are reviewed with schematic diagrams. applicable functions from the c++ libraries, conio.h and math.h, are identified and explained. these functions provide access and bit-level logic to isolate logic states at individual joy port pins. the hardware port supports four (4) sink mode digital switch inputs which are used as the basis for security zones. the declarations and definitions of member functions for a security zone class which can monitor and control alarms and screen reports for each zone are presented. additionally, one of the joy port analog inputs will be interfaced to an led-photo-transistor pair setting up a disable function by breaking a light beam. a 2 ft. × 4 ft. display/demo board, interfaced to a laptop computer joy port and accompanied by prints of source code and diagrams, will clarify understanding. lee r. clendenning generic programming using stl the course content and methodology of a senior-level, computer science software engineering course is described and analyzed with respect to general education goals. the general education model in use is briefly described and the potential application of software engineering in that model is explored in detail. with only minor revisions, the course may be used simultaneously as a writing-intensive, oral-communication-intensive, and critical-thinking-intensive course. lawrence d'antonio international perspectives: y2k international status and opportunities william k. mchenry using the lessons of y2k to improve information systems architecture garland brown marshall fisher ned stoll dave beeksma mark black ron taylor choe seok yon aaron j. williams william bryant bernard j. jansen supply/demand of is doctorates in the 1990s the field of information systems (is) has experienced a severe shortage of faculty throughout its 20-year history. this shortage now appears to be lessening. a survey of the supply of is doctorates finds a steady stream of graduates from is doctoral programs. in 1989, 61 universities in the u.s. offered ph.d. or doctor of business administration (d.b.a.) concentrations in information systems. a survey of these programs resulted in 51 responses, including all the programs producing significant numbers of graduates. the following are highlights from the survey: recent increase in the number of is doctoral students: in 1988-89 there were 807 doctoral students enrolled in 51 doctoral programs in information systems. the programs admitted 217 new students for 1989-90. in the 1988-89 time period, 36 programs produced 120 doctorates---a 24 percent increase in graduates from the previous year. in the 1989-90 time period, 41 programs expect to graduate a total of 140 students---a 2-year cumulative increase of 44 percent. downsizing by some programs, but others---including new programs---adding to capacity: in 1988-89, there were 13 programs that produced three or more doctorates. those 13 programs accounted for 70 percent of all the graduates in 1988-89. in 1989-90, those same 13 programs expect to account for only 39 percent of the total number of graduates. of the 13 programs, 5 expect to have a decrease in the number of students over the next five years. of 51 schools, 9 offering doctorates in is have yet to graduate a student. another 9 schools had their first graduate in 1985 or later. several additional doctoral programs are in the planning stages.
twenty-three programs expect a growth in the number of students over the next five years. in this article, we examine the supply and demand gap. sirkka l. jarvenpaa b. ives gordon b. davis integrating a realistic information technology project into an introductory computer science course at west point teaching information technology (it) is an important first and sometimes only opportunity to expose undergraduate students to technology and concepts that will be a part of their daily lives and future careers. this paper describes and compares the integration and use of a real-world information technology project in an introductory-level computer science course at west point over 3 semesters. the real-world problem posed to the students was to set up a deployable network using commercial off-the-shelf (cots) equipment and a limited budget. specific requirements for the system to be developed were given, such as the number of users, system functionality, and connectivity. the project was to address the following components of an information system. during the first semester this project was used, it was handed out during lesson 8 of 40 lessons. during the second semester, when this project was given to freshmen for the first time, the project was assigned later, during lesson 26. the motivation for assigning the project later was to reinforce material covered in the course. during the third semester, a hands-on introduction of new equipment such as personal digital assistants (pdas) as well as network components was added before the project was assigned. one of the objectives was to provide a better understanding of the technology used in creating the system for the project as well as a better understanding of the parts of an information system. the project was successful in all three iterations. the cadets who were given the project later in the semester, when it was used to reinforce previous classroom lessons, developed the more detailed and potentially successful systems. overall, the it project was successful in the goals of introducing and reinforcing basic it principles and applying this knowledge to a real-world problem. this type of it project can be used in all introductory, undergraduate computer science courses to reinforce the material taught as well as generate excellent discussion and questions on important it concepts. jerry schumacher margaret sosinski managing and evaluating students in a directed project course evaluating individual students is especially difficult in a directed project course because the content is dictated by the projects rather than by a fixed syllabus. by merging the evaluation process with the project management tasks, and by using prepared checklists for peer, task, and meeting evaluations, students working in a group may be evaluated as individuals and the same grading criteria may be applied to all students even though they are working on different projects. dean sanders objects at their best: introductory applet programming and the java awt this workshop will introduce the java abstract window toolkit. we will provide a simple framework for applet development that encompasses placement of components, display of graphical elements, and event handling. we will also introduce the model-view-controller paradigm and apply it to the applet context, thereby bringing an important tool from object-oriented methodology to this area and laying the foundation for a study of the newer java swing package. participants should be familiar with the java programming language, though presumably not the awt. in addition, some familiarity with object-oriented language concepts including classes, methods, messages, inheritance, interfaces, and polymorphism (method overriding) is required. david arnow gerald weiss
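as an illustrative aside, not taken from the workshop materials above, the following minimal sketch shows the kind of awt applet such a framework covers: a component placed by a layout manager, a user action handled through the awt event model, and a paint method acting as the view over a simple model. the class and member names are invented for this example, and the code assumes the classic java 1.1-era applet/awt api, which current jdks have deprecated or removed.

import java.applet.Applet;
import java.awt.Button;
import java.awt.FlowLayout;
import java.awt.Graphics;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

// illustrative sketch only; class and member names are invented for the example
public class CounterApplet extends Applet implements ActionListener {
    private int clicks = 0;     // the "model": the data being displayed
    private Button button;      // an awt component placed in the applet

    public void init() {
        setLayout(new FlowLayout());      // component placement via a layout manager
        button = new Button("click me");
        button.addActionListener(this);   // register for action events (event handling)
        add(button);
    }

    public void actionPerformed(ActionEvent e) {
        clicks++;                         // the "controller": update the model on user input
        repaint();                        // ask awt to schedule a call to paint()
    }

    public void paint(Graphics g) {
        // the "view": draw the current state of the model
        g.drawString("button clicked " + clicks + " time(s)", 20, 60);
    }
}

an html page with an applet tag naming the compiled class is enough to run the sketch in an era-appropriate appletviewer or browser; the same division of model, view, and event handling carries over to the swing package mentioned at the end of the abstract.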
is the global information infrastructure a democratic technology? deborah g. johnson hardware and software choices for student computer initiatives richard j. leblanc steven l. teal is computer science obsolete? (panel) jim browne bill wulf law in cyberspace (tutorial session) (abstract only) goals and content: the tutorial will explore the following issues regarding law on the global network: copyright and trademark law, privacy, free speech and "obscenity," defamation, protection of proprietary interests in factual data, and computer contracts. david post the pieces of a policy: categories for creation of a computer ethics policy c. augustine early lisp history (1956 - 1959) this paper describes the development of lisp from mccarthy's first research on the topic of programming languages for ai until the stage when the lisp1 implementation had developed into a serious program (may 1959). we show the steps that led to lisp and the various proposals for lisp interpreters (between november 1958 and may 1959). the paper contains some details correcting our book (32). herbert stoyan surveyor's forum: augmentation or dehumanization? thomas p. moran about this issue… adele j. goldberg the 21st century it workforce: addressing the market imbalance between supply and demand k. d. schenk k. shannon davis a "frequently asked questions" management system that supports voting, built for student evaluation and optimization purposes (poster session) huu le van andrea trentini risks to the public in computers and related systems peter g. neumann the practical management of computer ethics problems john w. smith multiple ways to incorporate oracle's database and tools into the entire curriculum it is not uncommon for students to view the curriculum as pieces instead of an integrated whole. this can hinder them from applying knowledge learned previously in one course to the course they are currently taking. in addition, there are a number of recurring concepts in computer science that students need to recognize. the concepts associated with the management of large amounts of information form one such area. catherine c. bareiss economics of intellectual property protection for software: the proper role for copyright frederick r. warren-boulton kenneth c. baseman glenn a. woroch sigact salary survey maria klawe updating systems specialists the proposition to update systems specialists begs two questions: who will do it and what will they teach? the area of computer studies is technology driven. changes in technology echo through systems and methodologies. hardware and languages come with detailed manuals. it is possible to incorporate them rapidly in our curricula. but a very essential area of systems analysis and design (sad) relates to the application of technology to user functional systems. new areas of usefulness and new techniques are discovered and tested by best-practice installations. only gradually do they spread in industry. it may take years before the practice is universally accepted or rejected. for a long period of time the documentation of these new developments is sketchy. ultimately textbooks appear.
we can start teaching these topics with a delay of five to ten years, probably at a time when another approach is already in use in industry (figure 1). creating an information systems research centre is a step in the right direction. it opens a few direct and shorter communication channels between industry and academia. but an advanced sad course, offered as a co-operative effort between industry and universities, with a high proportion of lectures given by leading-edge industry lecturers, would be the ultimate weapon in fighting the obsolescence of both systems specialists and mis teachers. jacek kryt technology transfer as collaboration: the receptor group p. j. fowler a case study of the static analysis of the quality of novice student programs susan a. mengel vinay yerramilli how does ada affect the way we teach software engineering? how does software engineering affect the way we teach ada? ada is more than just a programming language. it is a tool that affects every stage of the software life cycle. ada provides mechanisms for (among other things) formal specification of software interfaces, encapsulation of abstract data types, production of general-purpose, reusable software components, enforced consistency between separate compilation units, and forced recompilation of one compilation unit when a change to another compilation unit renders the previous compilation obsolete. the advent of ada blurs the distinction between the teaching of programming languages and the teaching of software engineering. to truly teach ada, must one teach not only the language rules for each feature, but also the software engineering methodology relevant to its use? does ada provide an attractive vehicle for teaching software engineering? the panel discussion will deal with these and other issues. norman cohen how to choose the right ict policy: report on the cec standardization policy workshop david arnold creative pedagogy for computer learning: eight effective tactics do your students seem uninterested in learning about computing? do they complain that the subject matter has no relevant application to the "real world"? do they appear baffled, bored, and inattentive? your mission as a creative facilitator is not to assign a grade; your mission is to educate students to think, learn, and make new connections they never thought possible. a teacher's guidance, constructive feedback, and facilitated instruction should pave the way for students to meaningfully bridge prior knowledge with new knowledge. in this article, the authors suggest eight essential tactics on how teachers might teach creatively, particularly with respect to computing curricula, while they enjoy the teaching and learning processes and reap the pleasures of getting students to think creatively and productively in a complex information world. m. khalid hamza bassem alhalabi david m. marcovitz computers across campus jennifer j. burg stan j. thomas assessing innovation in teaching: an example - part 2 marian petre on julius caesar, queen eanfleda, and the lessons of time past brian l. meek should computer scientists worry about ethics?: don gotterbarn says, "yes!" saveen reddy a new role for government in standard setting? linda garcia empirical investigation throughout the cs curriculum empirical skills are playing an increasingly important role in the computing profession and our society.
but while traditional computer science curricula are effective in teaching software design skills, little attention has been paid to developing empirical investigative skills such as forming testable hypotheses, designing experiments, critiquing their validity, collecting data, explaining results, and drawing conclusions. in this paper, we describe an initiative at dickinson college that integrates the development of empirical skills throughout the computer science curriculum. at the introductory level, students perform experiments, analyze the results, and discuss their conclusions. in subsequent courses, they develop their skills at designing, conducting and critiquing experiments through incrementally more open-ended assignments. by their senior year, they are capable of forming hypotheses, designing and conducting experiments, and presenting conclusions based on the results. david reed craig miller grant braught working with the borg: trust, systems development and dispersed work groups diane bandow growing pains - and successes - in transforming the information systems organization for client/server development information systems (is) groups are under increasing pressure to contribute to organizational performance and to support, or even drive, broad organizational transformation efforts through the successful exploitation of information technology (it). using a "sociocentric" model of organizational work, this paper analyzes the experiences of one company's is group that recently embarked on a long-term, enterprise-wide client/server system development initiative designed to transform organizational decision support processes. even though the client/server initiative is still in its infancy and has not yet delivered high-impact applications, it has brought about substantial changes in the nature of work in the is group. these changes range from new philosophies, methodologies, and technologies to shifts in the skills, communication patterns, control structures and management styles required to develop and manage information systems. william d. nance an inexpert system for system development e.d. ought are the university computer sciences satisfying industry (panel discussion) john w. hamblen barry b. flachsbart leslie d. gilliam bernie c. patton daniel c. clair competency testing in introductory computer science: the mastery examination at carnegie-mellon university jacabo carrasquel nimbus logo: an innovative british logo edinburgh university has been using logo for about twelve years. this institute recently designed a highly innovative logo for use on the nimbus computer in british schools. this paper describes the nimbus briefly and sets out the features of the new version, drawing particular attention to parallel processing, data files array operations, the ease with which you can add machine code extensions and a number of novel infix operators. ken johnson information systems graduates: what are they really doing? catherine m. beise thomas c. padgett fred j. ganoe diversity recruiting tracy camp business: the 8th layer: shoring up security - an imperfect art kate gerwig notes for the workshop on freedom and privacy by design roger clarke practical experiences for undergraduate computer networking students merry mcdonald jon rickman gary mcdonald phillip heeler doug hawley visions of breadth in introductory computing curricula (abstract) doug baldwin jerry mead keith barker allen tucker lynn r. ziegler what are those students really doing? 
richard gordon toward managing information systems professionals better thomas w. ferratt ritu agarwal the future of computers in education (abstract only): learning 10 lessons from the past elliot soloway cathleen norris self-assessment procedure xvii a self-assessment procedure dealing with acm eric a. weiss consulting for microcomputers at the university of north carolina at chapel hill forty-two university of north carolina at chapel hill faculty, administrators, and students who either had access to or owned microcomputers were surveyed with respect to their interests in the area of microcomputer usage. almost all of those surveyed were apple microcomputer users. they spanned a wide variety of departmental affiliations. needs for additional information were categorized into four areas: 1) applications, 2) mainframe interfacing, 3) hardware, and 4) purchasing. the survey results are also discussed with respect to providing aid to these microcomputer users. margret hazen open location ronan kennedy acm forum robert l. ashenhurst considerations in beginning a program for computer-based educational software development (abstract only) lynn begley a new is graduate curriculum model - after eighteen years john t. gorgone ensuring success with usability engineering arlene f. aucella how long will they stay? predicting an it professional's preferred employment duration agarwal and ferratt [3] describe a theory of the staying behavior of it professionals, which combines individual- and organizational-centric views. their theory, drawing upon rousseau's [24] conception of psychological contracts, proposes that a critical antecedent of staying behavior is an it professional's preferred employment duration. the purpose of the research-in-progress described in this paper is to build upon their work by further examining the antecedents of this construct. we argue that preferred employment duration is jointly determined by career anchor, life stage, and competencies of the it professional, with the type of the employing organization serving as a potentially moderating factor. the paper presents the model and describes the conceptual underpinnings for each of its various constructs and relationships. we also briefly discuss a two-phased empirical study to investigate the model. ritu agarwal prabuddha de thomas w. ferratt computer uses in education (track introduction only) malini krishnamurthi professional chapters scott lang colleen cleary plumbing the soul of is mitch betts do users get what they want? three cases peter thomas specialized certification programs in computer science robert montante zahira khan it degree studies and skills development for learning organisations s. h. nielsen l. a. von hellens a. greenhill p. halloran r. pringle plant design management system (pdms) in action dancing with dynalab: endearing the science of computing to students christopher m. boroni torlief j. eneboe frances w. goosey jason a. ross rockford j. ross women and computing there is mounting evidence that many women opting for careers in computing either drop out of the academic pipeline or choose not to get advanced degrees and enter industry instead. consequently, there are disproportionately low numbers of women in academic computer science and the computer industry. the situation may be perpetuated for several generations since studies show that girls from grade school to high school are losing interest in computing.
statistics, descriptions offered by women in academic and industrial computing, and the research findings reported later in this article indicate that much is amiss. but the point of what follows is not to place blame---rather it is to foster serious reflection and possibly instigate action. it behooves the computer community to consider whether the experiences of women in training are unique to computer science. we must ask why the computer science laboratory or classroom is "chilly" for women and girls. if it is demonstrated that the problems are particular to the field, it is crucial to understand their origins. the field is young and flexible enough to modify itself. these women are, of course, open to the charge that they describe the problems of professional women everywhere. but even if the juggling acts of female computer scientists in both academia and industry are not particular to computing, american society cannot afford to ignore or dismiss their experiences; there is an indisputable brain drain from this leading-edge discipline. a look at statistics reveals a disquieting situation. according to betty m. vetter, executive director of the commission on professionals in science and technology in washington, dc, while the numbers of bachelor's and master's degrees in computer science are dropping steadily for both men and women, degrees awarded to women are dropping faster, so they are becoming a smaller proportion of the total. bachelor's degrees peaked at 35.7% in 1986, master's degrees also peaked that year at 29.9%, and both are expected to continue to decline. "we have expected the numbers to drop for both, due to demographics such as fewer college students," says vetter, "but degrees awarded women are declining long before reaching parity." (see table i.) vetter also would have expected computer science to be "a great field for women," as undergraduate mathematics has been; female math majors have earned 45% of bachelor's degrees during the 1980s. on the other hand, math ph.d.'s awarded to women have gone from only 15.5% to 18.1% in this decade, which is more in line with computer science ph.d.'s earned by women. in 1987, 14.4% of all computer science ph.d.'s went to women; this number declined to 10.9% the following year. although the number almost doubled between 1988 and 1989 with women receiving 17.5% of ph.d.'s, vetter points out that the number remains very small, at 107. since these figures include foreign students who are principally male, women constitute a smaller percentage of that total than they do of ph.d.'s awarded to americans. but while american women received 21.4% of ph.d.'s awarded to americans, that is not encouraging either, says vetter. again, the number of american women awarded computer science ph.d.'s was minuscule, at 72. and taking a longer view, the awarding of significantly fewer bachelor's and master's degrees to women in the late 1980s will be felt in seven to eight years, when they would be expected to receive their ph.d.'s. how do these figures compare with those of other sciences and engineering? in her 1989 report to the national science foundation, "women and computer science," nancy leveson, associate professor of information and computer science at the university of california at irvine, reports that in 1986, women earned only 12% of computer science doctorates compared to 30% of all doctorates awarded to women in the sciences.
leveson notes, however, that this includes the social sciences and psychology, which have percentages as high as 32 to 50. but the breakout for other fields is as follows: physical sciences (16.4%), math (16.6%), electrical engineering (4.9%), and other engineering ranges from 0.8% for aeronautical to 13.9% for industrial. those women who do get computer science degrees are not pursuing careers in academic computer science. leveson says women are either not being offered or are not accepting faculty positions, or are dropping out of the faculty ranks. looking at data taken from the 1988-89 taulbee survey, which appeared in communications in september, leveson points out that of the 158 computer science and computer engineering departments in that survey, 6.5 percent of the faculty are female. one third of the departments have no female faculty at all. (see tables iii and iv.) regarding women in computing in the labor force, vetter comments that the statistics are very soft. the bureau of labor statistics asks companies for information on their workforce, and the nsf asks individuals for their professional identification; therefore estimates vary. table ii shows that this year, women comprise about 35% of computer scientists in industry. and according to a 1988 nsf report on women and minorities, although women represent 49% of all professionals, they make up only 30% of employed computer scientists. "there is no reason why women should not make up half the labor force in computing," betty vetter says, "it's not as if computing involves lifting 125-pound weights." the sense of isolation and need for a community was so keen among women in computing that in 1987 several specialists in operating systems created their own private forum and electronic mailing list called "systers." founded and operated by anita borg, a member of the research staff at dec's western research lab, systers consists of over 350 women representing many fields within computing. they represent 43 companies and 55 universities primarily in the united states, but with a few in canada, the united kingdom, and france. industry members are senior level and come from every major research lab. university members range from computer science undergraduates to department chairs. says borg, "systers' purpose is to be a forum for discussion of both the problems and joys of women in our field and to provide a medium for networking and mentoring." the network prevents these women, who are few and dispersed, from feeling that they alone experience certain problems. says borg, "you can spit out what you want with this group and get women's perspectives back. you get a sense of community." is it sexist to have an all-women's forum? "absolutely not," says borg, "it's absolutely necessary. we didn't want to include men because there is a different way that women talk when they're talking with other women, whether it be in person or over the net. knowing that we are all women is very important." (professional women in computer science who are interested in the systers mailing list may send email to systers-request@decwrl.dec.com) the burden on women in computing seems to be very heavy indeed. investigators in gender-related research, and women themselves, say females experience cumulative disadvantages from grade school through graduate school and beyond. because statistical studies frequently come under fire and do not always explain the entire picture, it is important to listen to how women themselves tell their story.
in the sidebar entitled "graduate school in the early 80s," women describe experiences of invisibility, patronizing behavior, doubted qualifications, and so on. given these experiences, it is not surprising that many women find the academic climate inclement. but while more women may choose to contribute to research in industry, is the computer business really a haven for women, or just the only alternative? in the sidebar entitled "the workplace in the late '80s," women in industry also tell their story and describe dilemmas in a dialogue on academia versus industry; this discussion erupted freely last spring on systers. in addition, findings of scholars conducting gender-related research are presented in a report of a workshop on women and computing. finally, communications presents "becoming a computer scientist: a report by the acm committee on the status of women in computer science." a draft was presented at the workshop and the report appears in its entirety in this issue. karen a. frenkel computer science research and instuction at institutions with large minority enrollments (panel session) william l. lupton mary ellis andrew bernat benjamin martin surrendar pulusani leroy roquemore summary of a model preservice technology course roundtable on structure editing (panel session): teachers' experiences using carnegie mellon's genie programming environments dennis r. goldenson michael brown jane bruemmer nathan hull roy jones bruce mcclellan joseph kmoch phillip miller mark stehlik laurie werth breaking the ada privacy act jeffrey r. carter the distributed course - a curriculum design paradigm bill toll open season dennis fowler a course on professionalism in the undergraduate cs curriculum richard l. weis judith l. gersting distributed vs. centralized computing a regional college center at the crossroads in this paper we will discuss some of the preliminaries that led up to the final drafting of a request for proposal (rfp) for a new system and a few of the pros and cons of distributed vs. centralized computing as they appeared to us. the creation of the rfp is nearly complete and we expect to be well into the decision-making process by november, hence this document must perforce be somewhat incomplete at this time. ronald blum computer network management: theory and practice bruce s. elenbogen a framework for a student-centered approach to learning in an information system curriculum g. joyce dunston julio gallardo computer assistance for managing educational resources and managing collaborative educational processes d. siviter using while moving: hci issues in fieldwork environments reminiscences on influential papers richard snodgrass observations from "the front": it executives on practices to recruit and retain information technology professionals thomas w. ferratt ritu agarwal jo ellen moore carol v. brown 1992 - a critical year in hci education gary perlman towards a guide to social action for computer professionals jeff johnson evelyn pine an adult education course in personal computing the paper describes a non-credit course being offered through the school of adult education at mcmaster university. the aim of the course is to familiarize members of the general public with what home computers can do for them and to provide the knowledge needed for the selection and purchase of a personal computer. n. solntseff notes from comdex finally, new and interesting ideas about documentation. it is kind of funny, really. 
most documentation is written by technicians---not professional writers. and most technicians would include documentation among their top ten complaints regarding the software they use. physician, heal thyself. this column describes ideas and suggestions from current literature on software documentation. i hope they will change the way you think about documentation. if you are in the software field, it is almost certain that you will have to write documentation, for either your peers or your users. if you are designing software, you owe it to those you serve to gain an enlightened attitude toward documentation, recognizing the interconnectedness of the software, its documentation, and the help system. otherwise, you are not a "practical programmer." larry press another way to teach computer science through writing dona lee flaningam sandra warriner president's letter paul abrahams micro computers - the procurement process (panel discussion) the rapid proliferation of microcomputers in higher education for uses which vary from process and instrumentation control to computer aided instruction has caused many universities and state coordinating agencies to reevaluate their master plans for computing. this panel will discuss these issues and offer insight into possible solutions to some of the most common problems facing present and potential users of this technology. the use of microcomputers in the classroom and methods of acquisition utilized by a private university will be the concern of dr. pitts. comparisons of the acquisition process between a large state institution with which he was recently affiliated and the private university will be emphasized. a different approach to the acquisition process and microcomputer utilization will be presented by dr. newman. barry l. bateman gerald n. pitts james s. harvison j. richard newman e-commerce and security whitfield diffie a foundations course for a developing computer science program this paper discusses a course, referred to as foundations, which has been used to partially satisfy the need for a broad program in computer science in a situation where staffing is limited. this course was introduced at tulane university in 1974 and was taught until recently when a full-fledged major program was established. mark benard the information system - pivotal to business process redesign? (abstract) lesley a. beddie scott raeburn entry-level position of computer programmer survey the mission of a community college is to provide educational services to the community. these services are dictated by the priorities of the specific community. the first priority for lexington, kentucky, is employment. the two factors of employment are an available position and a qualified applicant. for local business, the majority of computer-related technical positions are in data entry, operations, and programming. the community college must be able to prepare individuals to be qualified for some of these careers. lexington technical institute (lti) is one of thirteen institutions under the university of kentucky's community college system. lti offers programs in many technical areas. among them is a business data processing curriculum leading to an associate degree in applied science. the intent of the curriculum is to prepare individuals for careers in computer programming for business applications. other state-supported institutions cover data entry and other types of programming. formal training for data entry is given through local vocational schools.
formal training for research and advanced programming positions is given through the computer science department at the university of kentucky. there is no institution providing adequate formal training for computer operations. the curriculum at lti is oriented toward preparing individuals for entry-level positions as computer programmers within the lexington area. to ensure that an institution continues to provide a relevant curriculum, the needs of local business must be periodically reviewed (little, 1977). a mailed questionnaire was returned by 142 companies in the lexington area in 1975 (hager, 1975). the important facts identified were: 1. the predominant computer languages (in decreasing order of use) were cobol, rpg, and assembly language; 2. a trend toward increased use of basic timesharing systems; 3. a preference for an education stressing business systems as well as programming; and 4. job opportunities enhanced by practical experience. a follow-up survey was undertaken through the support of the university of kentucky's community college system during the summer of 1979. selected employers of computer programmers within the lexington area were interviewed. james w. phillips a cooperative learning approach to teaching social issues of computing tom jewett making computing history for 40 years linda feczko reflections on teaching computer ethics robert m. aiken infrastructure risk reduction harold w. lawson what's next for apple? jef raskin perspectives: computers, tools, and people mankind has used tools since prehistory. although the technology of tools has long been of interest and much has been written on the subject, the development and effect of tools on people and society is not as widely studied. this talk examines, using historic analogy, what effects tools can have on people, their work, and on society. various tools and technologies from the stone age through modern time are examined. some parallels with the computer age are drawn. david n. smith the effective use of undergraduates to staff large introductory cs courses in the past few years many schools have tried to simultaneously achieve the following goals in their introductory cs courses: allow more students to enroll, improve the quality of education, and keep spending at current levels. everyone has discovered that the first two goals are difficult to achieve in the presence of the third. this paper presents a model that has evolved over the last five years at stanford university where all three goals have been accomplished by replacing graduate student tas with undergraduate section leaders. stuart reges john mcgrory jeff smith triple boot machines for cash-strapped small college labs this paper describes a small college computer lab consisting of machines bootable in windows nt, linux, and minix. i argue that such an arrangement allows small departments to use the same machine for projects in operating systems and data communications that would normally require dedicated labs. i offer technical advice on configuring such a lab and suggest some possible extensions to be implemented in the future. paul de palma e-commerce database issues and experience: (talk-slides available at the conference) anand rajaraman pros and cons of various user education modes the panel will discuss various methods for providing assistance and education to users.
results from the university of minnesota experience will be presented to help compare and evaluate the relative advantages and disadvantages of the following modes: short courses; multi-level documentation; on-line documentation, including help files; telephone consulting; face-to-face consulting; audio-visual materials (video tapes); and cai for use of computers. mary c. boyd lincoln fetcher sara k. graffunder thea d. hodge a survey of system administrator mental models and situation awareness little empirical research has been conducted on the mental models and situation awareness of system administrators. to begin addressing this deficiency, a short survey was prepared and broadcast to system administrators via internet newsgroups. fifty-four sysadmins responded. these respondents indicated that there is much about the systems they oversee that they don't understand, and the more they do not understand about their systems, the more likely they are to attribute this ignorance to hardware, and not software, unknowns. the respondents attribute little of the expertise they do possess to formal education or training. further, when faced with a novel situation, the respondents were more likely to rely on themselves and their personal contacts than on the system's manufacturers, or on third-party support. however, the more the sysadmins attributed their ignorance of their systems to hardware unknowns, the more likely they were to rely on manufacturer and third-party support. compared to microsoft-oriented sysadmins, unix-oriented sysadmins were more likely to attribute their expertise to working with others, and were more likely to attribute their ignorance to hardware unknowns. finally, respondents who felt that their organizational superiors understood what is involved in system administration were more likely to perceive these superiors as providing the sysadmins with adequate support. dennis g. hrebec michael stiber quo vadis, professor during the past 5 years academic computing services (acs) at the university of colorado boulder (ucb) has progressed through evolving support structures for faculty microcomputing. each step was unique and timely. each led to the next. each became more specialized yet met the needs of an expanding audience. chronologically, the programs started with a broad scope and a narrow audience, then progressed to narrower scopes with broader audiences: 1983, the faculty pc program; 1984, microcomputer applications classes; 1984, the pc maintenance center; 1986, the instructional grant program; 1987, the disc center; 1988, quo vadis? paula rosen teaching client/server programming in the context of computing curricula 2001 this paper discusses a client/server programming course and its relationship to the knowledge units suggested by the draft proposal _computing curricula 2001_ [ieee00]. first, the progression of concepts covered by this course is discussed. the course is then discussed in the context of the overall motivation for curricular revision. finally, the concepts of the course are related to the knowledge areas of _computing curricula 2001_. this client/server programming course also addresses societal needs. since modern applications require computer professionals to conceptualize solutions built on multi-threaded and network-based programming constructs, educators have a duty to teach students material similar to that presented in this client/server programming course. evelyn stiller cathie leblanc
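purely as an illustration, not drawn from the course described above, the sketch below shows the kind of minimal multi-threaded, network-based construct such a client/server course typically starts from: a server that accepts connections and hands each client to its own thread, which echoes lines back until the client disconnects. the class name and port number are arbitrary choices for this example.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// illustrative sketch only; the class name and port number are arbitrary
public class EchoServer {
    public static void main(String[] args) throws IOException {
        ServerSocket listener = new ServerSocket(5000);    // listen on an arbitrary port
        System.out.println("echo server listening on port 5000");
        while (true) {
            Socket client = listener.accept();             // block until a client connects
            new Thread(() -> handle(client)).start();      // one thread per client
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {       // read one line from the client
                out.println("echo: " + line);              // write it back
            }
        } catch (IOException e) {
            System.err.println("client connection error: " + e.getMessage());
        }
    }
}

a matching client needs only to open a socket to the server's host and port, write a line, and read the echoed reply; the one-thread-per-client structure shown here is the simplest starting point before a course moves on to thread pools or non-blocking i/o.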
rehabilitation technology at the vienna university of technology franz peter seiler dealing with independent studies courses - an effective methodology gordon bailes jerry sayer ten years of computers, freedom and privacy: a personal retrospective lorrie faith cranor intraed - an intranet solution for education the intraed concept was developed in 1996 as a solution to the shortage of financial resources for setting up the environment necessary to introduce intranet technology into the instructional process at the cracow university of economics. the system has been intensively tested since then and supports more courses every year. we would therefore like to share the intraed system concept, as well as the experience gained over two academic years of tests, with everybody interested in setting up a low-cost intranet environment for educational purposes. jan madej tadeusz wilusz value conflicts in the deployment of computing applications computing technologies importantly affect the social climate of the organizations into which they are introduced. widespread computing developments are rarely socially neutral, since they absorb scarce resources, and re-allocate them by limiting access to the data, equipment, expertise, and other resources they utilize. this paper examines the value conflicts engendered by computing developments in two different institutional settings: electronic funds transfer systems and instructional computing in primary and secondary schools. the paper identifies five value positions which are central to debates about computerization in each of the two settings. the value positions identified differ somewhat in each area. in addition, the policies which advance more egalitarian values also differ in each setting. in the case of electronic funds transfer, policies aimed at correcting imbalances can be focused on services closely linked to computer-based technologies. in the case of schooling, difficulties of computerization are more far-reaching institutionally, and are bound up with national and regional policies of school funding. while specific values depend upon culture and upon the character of the particular institutional setting studied, these two cases can serve as instructive points of departure for examining the value conflicts which generally accompany different modes of computerization. rob kling a logical foundation course for cs majors juris reinfelds poster session. internet curriculum. two courses: introductory and advanced we describe two courses that deal with the internet. one is an introductory course with no prerequisites and the other is more advanced and focuses on providing web services. ernest ackermann an alternative computer architecture course aaron garth enright linda m. wilkens james t. canning 1981 a.c.m. annual conference position paper computer crime laws and the computing professional: what you don't know may hurt you this position paper succinctly describes several key issues arising in the area of computer crime laws and their connections to the computing profession. through this statement, and the technical session presentation, the computing professional will be introduced to: (1) meanings of computer criminal law; (2) debates on legislative control; (3) debates on current enactments; (4) interpretations of enacted and proposed laws; (5) comments and suggestions offered to professionals; and (6) summarized views for further consideration. lance b.
eliot john sanchez toolbook multimedia demonstrations for java programming this paper describes a set of multimedia demonstrations built to facilitate the learning of introductory java programming. they provide demonstrations of complex processes and concepts that are difficult, if not impossible, to present using the more traditional media of lectures. ainslie ellis performance variation across benchmark suites carl ponder at the heart of information ecologies: invisibility and technical communication the ecological metaphor for technological systems provides a useful supplement to others dealing with the question of human control over technologies. however, it fails to develop adequately its own reliance on communication as the means whereby human values may be embedded in technologies, or to recognize the role of professional communicators in that process. frances j. ranney national science foundation course, curriculum and laboratory improvement program: adaptation and implementation track c. dianne martin margaret m. reek roots of the job turbulence in information systems (is) development trevor crossman user services for the novice on the northwest missouri state university electronic campus phillip j. heeler measuring participants' perceptions of facilitation in group support systems meetings facilitation is often considered to be one of the key factors in the successful application of group support systems. research on gss facilitation has revealed insight into the types of tasks performed by facilitators and the potential positive effects of facilitation on group consensus and satisfaction. however, earlier research has hardly approached gss facilitation from the participants' point of view. this paper presents a study in which a questionnaire was developed and distributed to 182 participants of facilitated gss meetings in order to measure their perceptions of various facilitation tasks. the results suggested three categories of facilitation tasks that are perceived as important by participants. each of these categories strongly correlated with participants' meeting satisfaction. further research is needed to refine these categories so that the instrument may be used to evaluate a facilitator's performance. gert-jan de vreede fred niederman ilse paarlberg learning from the euro robert o'connor education column launches new format jacquelyn ford morie from washington congress is paying careful attention to threats to u.s. high technology from two areas: trademark and patent infringement and proposed foreign violation of u.s. copyright laws. computer technology is becoming a major victim of the nearly $8 billion counterfeit business threatening the american economy. some recent product examples include fake apple computers seized by u.s. customs and reverse-engineered apples and other microcomputers on sale in hong kong for $300, about a third of the price of a genuine product. u.s. firms fear losses large enough to hinder research and development efforts. this would result in erosion of their technological edge. rosalie steier do we teach computer science as religion or as history? michael r. williams large introductory computer science classes: strategies for effective course management david g. kay the united states: a standardized vision of international relations? the u.s.
system for standardization and certification remains complex and misunderstood by europeans; however, on the eve of the transatlantic dialogue, insight into this system is necessary since it influences the position of u.s. trade diplomacy. florence nicolas structural example-based adaptive tutoring system (poster session) (seats) alex davidovic james warren elena tricina site licensed software: marketing & distribution rosa gilman sandra j. li the future for ansi sergio mazza should commercial software packages be used in the classroom? giora baram frank f. friedman real-world program design in cs2: the roles of a large-scale, multi-group class project recent curricular recommendations (e.g., [7,9]) encourage the early and regular use of significant group projects in the introductory computer science sequence. in this paper, we report on a group project that we used in two courses during the second half of the semester. rather than having each group work on the same project (or even individual projects), the groups build parts of a larger project: a distributed auction system to be used by art shows at conventions. students reacted quite positively to the experience, in spite of reporting that they spent upwards of twenty hours on the project in many weeks. students also learned important software design principles from experience. samuel a. rebelsky clif flynt maintaining high living standards through innovation, strong patents richard c. hsu an on-line system for controlling and monitoring software usage in a microcomputer laboratory andrew n. kreutzer teaching e-commerce to a multidisciplinary class rachna dhamija rachelle heller lance j. hoffman a hard disk organization scheme for delivering cai on a novell-based pc/ms-dos local area network networked pc laboratories are constantly installing and removing various cai packages and data sets for individual users as well as courses. a disk organization scheme was developed to help in installation, removal, data collection, and easy student access. a security scheme is included to allow students to collect handouts and submit homework without unauthorized file access or tampering. this organization is combined with a user-friendly menu system to allow novice users access to computers as an educational medium (cai, cmi, cbt) without prior computing skills. david k. boeshaar keith e. gatling prelude to the java event model (poster session) raja sooriamurthi competitive testing issues and methodology kristyn greenwood suzy czarkowski the ui design process the root cause of many user interface (ui) design deficiencies is not a lack of knowledge about human-computer interaction principles nor a lack of information on user needs. rather, many ui deficiencies arise because the ui design process is ad hoc and the design is not communicated successfully to the programmers who will implement it. many ui designers are seeking and discovering ways to plan, manage, and document ui design work more effectively. this workshop provided an opportunity for participants to share lessons learned and obtain advice from other participants. in the weeks leading up to the workshop, participants selected the specific topics that were of prime concern to them.
as a result, we narrowed the focus of the workshop to the following topics: division of ui design activities into stages; division of labor and interdisciplinary collaboration; collaborating in geographically-dispersed projects; writing the ui specification; and defining the maturity of the ui design process. the following sections summarize the results of the workshop activities for each of these topics. paul mcinerney rick sobiesiak should we teach students to program? elliot soloway business: learning the ropes of conference and meeting organization donald l. day susan dray recruiting more computer science students - what to do after the "glamour" has gone away? the moderator will first focus on the question: "are enrollments actually declining in computer science programs across this country?" he will report the results of his surveys, which indicate there has been a definite downturn in enrollment over the past two years. the panelists will then discuss what is being done at their schools to recruit students and to meet this new challenge of declining enrollments. william e. mcbride james calhoun james l. richards harriet g. taylor f. garnet walters group 4 (working group): the impact of campus-wide portable computing on computer science education stan thomas modern introductory computer science there have been numerous testimonies to the inadequacies of our educational system [83]. for undergraduate computer science educators, major concerns regarding student preparation include poor problem solving and critical thinking skills, weak mathematics background, an inability to convey thoughts and concepts, and a lack of motivation. these problems can be addressed in the introductory computer science course by developing an integrated approach to effectively teaching discrete mathematical foundations, fundamental computer science concepts, and problem-solving skills. this paper is conceptual in nature and introduces some specific examples of possible approaches to overcoming these deficiencies and problems. peter henderson issues in conducting a field study of computerized work monitoring rebecca a. grant is downsizing survivors' career management attitudes is downsizing is a significant tool of management in the 1990s. a key element in downsizing success is how it is perceived by employees. still, no widely published study has examined the attitudes of is survivors to downsizing. moreover, in the age of a new psychological contract between organizations and employees, employees have assumed responsibility for managing their careers. however, no studies of the career management strategies of is survivors in the downsized workplace are found. in addition, studies of downsizing in the general workforce have shown that demographic differences may affect employees' responses to downsizing. again, no study appears to have focused on information system personnel. for all of these reasons, this study examines the attitude of is employees toward downsizing, the career management approach of survivors, and whether demographic characteristics of is survivors of organizational downsizing are related to their attitudes and strategies. the results of a survey indicate that is survivors, regardless of their demographic characteristics, tend to be neutral toward downsizing as a means of improving the organization. demographic characteristics were also unrelated to internally-oriented career management strategies but were related to externally-oriented strategies. james j. jiang stephen t.
margulis gary klein how does a centralized computer center with a large mainframe adapt to the introduction of lots of micros? over the past ten years or so many computer centers have made the transition from large, centralized batch machines to large, centralized timesharing machines. this was a carefully planned move controlled largely by the central site. today an even more revolutionary transition is underway: the introduction of microcomputers. this new change is often not being led by the central site and is rarely under the control of any central authority. how can the central site manage the influx of micros under these circumstances? can micros be supported by the central site and should they be? how does one manage the "threats" to the central site by the introduction of micros? what can be done to ensure that productivity goes up with the introduction of micros? what role should the central site play, if any, in this new revolution? how are user services organizations altered by the introduction of micros? howard strauss helping the rest of them: the unique consulting needs of the macintosh user the apple macintosh burst onto the microcomputing scene in early 1984, amid much media hype, sporting the claim that it was to be "the computer for the rest of us." the unique macintosh user interface makes the mac easy to learn and use, and has made it quite popular, particularly among those with fear of computers or little time to spend learning to use them. after a slow start due to hardware limitations and lack of software, the macintosh has grown into an important presence in academic computing, with mac labs appearing in colleges and universities everywhere in increasing numbers. in the summer of 1987 the computer center at the university of kansas opened a macintosh lab for students, faculty and staff. from the beginning the lab has been in almost constant demand, often with waiting lines day and night. our consultants find that they are now faced with a new breed of users whose only computing experience is with the macintosh. many of the mac users filling our labs are people with little or no background in computing, who understand little of the traditional computing jargon, and who may have difficulty explaining to a consultant exactly what went wrong (or understanding the consultant's answer). this growing group of users represents a special class of consulting needs, and we must be prepared to meet them. this paper will draw on our experience with the macintosh and its users at the university of kansas, and will discuss (1) the basis for this problem, (2) the specific nature of the problem, and (3) what consultants and others can do to help. doug heacock using erp systems in education edward e. watson helmut schneider distance learning (tutorial) lisa neal some (provocative) thoughts on "teaching computers and society" david bellin sara baase chuck huff practical advice for creating an e-commerce curriculum susy chan the positives and negatives of managerial telecommunications training programs karen ketler john willems yunus kathawala the changing curriculum of computing and information technology in australia binh pham management of dormant files rajeev aggarwal a structured review of reasons for the underrepresentation of women in computing joy teague security news a retrospective on an early software projects course henry a.
etlinger a developmental computing course for computer technology majors robert a. barrett bruce c. davis robert leeper educational computing in england, wales and northern ireland j. foster reviewing the risks archives ok, you expect your shrink-wrapped software to work properly, without annoying reliability bugs and security holes. or maybe you would like the systems you develop to work properly, without serious risks to their users. but those systems don't quite work the way they are supposed to. so, what's new? perusal of the risks archives [1] suggests that startlingly few real success stories are recorded, and that perhaps we might as well just learn to live with almost every system having bugs---some even colossal in scope [2]. peter g. neumann the master's degree program in information systems (invited session) gordon b. davis david feinstein ted stohr joe valacich rolf wigand an innovative, multidisciplinary educational program in interactive information storage and retrieval (abstract only) a description of the development of a set of transportable, college-level courses in the use of interactive, online is&r systems, in particular the nasa/recon system, is presented. the purpose of these courses is to educate science and engineering students in the effective use of automated scientific and technical is&r systems. the presentation includes an overview of project objectives, management phases, and accomplishments to date. the methodology used for the course development is described and future plans, both long-term and short-term, are discussed. suzy gallagher computers then and now - part 2 maurice v. wilkes integration of elementary patterns into the first-year cs curriculum this poster will present ongoing research into documenting elementary patterns and using them as a structuring mechanism for lectures, in-class laboratory assignments, and programming projects in cs1 and cs2. the poster will focus on a subset of labs and projects that best exemplify the methodology that we employ. autumn c. spaulding a constructivist approach to object-oriented design and programming computer science education is greatly affected by the object-oriented approach. this can be seen in the numerous new teachers being attracted to programming languages such as java. learning the object-oriented approach is, however, difficult for novice students, mostly because it requires a new way of thinking about computing and greater depth of understanding. thus, to promote the object-oriented approach at the introductory level, a re-examination of the teaching method is recommended. this article describes a pedagogical framework rooted in the constructivist epistemology for teaching object-oriented design and programming. said hadjerrouit the diverse career orientations of mis personnel past studies have found different dominant career anchors present among information system (is) personnel. this study provides additional evidence that is personnel have a diversity of career orientations and that the orientations can vary according to external conditions. a survey of is personnel supports the premise that each professional situation and each individual is unique. organizations are better off creating flexible career structures rather than rigid career paths and must be prepared to adjust to changes in the environment. james j. jiang gary klein joseph balloun the pna project the personal nutrition assistant (pna) project is a joint effort involving faculty and students from two universities.
the project was initially funded with a seed grant that supported cooperative efforts between these institutions. the goal of the pna project was the creation of a working prototype of a web-based system that assisted individuals in performing a nutritional analysis of their daily diet and made the results available via the web to their health care professionals. further, we wanted the data entry to be easy and quick. our goal was to provide a system that helped clients perform daily diet data entry and analysis in about 15 to 20 minutes. as stated above, the system produced through the initial grant was a working prototype, a system that demonstrated the feasibility of the approach. further, since the daily diets are stored on the web, the results could be viewed by the clients' dietician or health care provider, providing much greater feedback from the clients to the health care professional than is normally achieved. the initial system was so successful that nine local medical centers started using the working prototype with their patients. the system will form the basis of senior projects starting in the 2000-2001 academic year, and is expected to support additional projects in the years to come. this paper reports on the first projects, which were designed to upgrade this system from working prototype status to production level. john beidler albert insogna nicholas cappobianco yaodong bi marianne borja speed is life, life is speed scott ramsey macdonald human nature and the glass ceiling in industry kathleen hemenway workforce retention: what do it employees really want? the purpose of this study is to present and test an integrated model of turnover intentions that addresses the unique nature of the it profession. we identified a multidimensional set of hr practices likely to increase retention among it employees and considered citizenship behaviors as well as two distinct types of organizational commitment as key antecedents of turnover intentions. a questionnaire was developed and sent to the quebec members of the canadian information processing society. data from 394 respondents were used to validate the measures and test the research hypotheses. we present and discuss the results and make a series of recommendations for it and hr executives. guy pare michel tremblay patrick lalonde a career component to the computer science curriculum cooperative education melvin w simms invitation to a public debate on ethical computer use to obtain ethically defensible behavior in a particular situation, or context, one must first define what is considered ethically defensible in that context. without contextual norms, people assume no norms and are then (mis)guided solely by their own experience. according to integrated social contracts theory, typical contexts include individual-only situations and situations in which the individual is a member of professional, business and social organizations. norm development is a complex process. it requires discussion among many individuals to identify the ethical issues relevant to each context, to define ethically defensible behaviors for each, and then to decide which norms take priority in the event of a conflict. this paper argues that the importance of context identifies fundamental flaws in the notion that professional associations can address all ethical behaviors through generic codes of ethics.
the argument goes further to assert that integrated social contracts theory, if applied to universal codes of ethics, can facilitate their redefinition into a useful set of guidelines for ethical professional behavior. sue conger karen d. loch legal liability for malpractice of computer specialists the proliferation of computer technology has focused attention on the regulation of computer specialists and the liability standards applicable to them for the work they perform. horror stories involving computer-related crimes, fraud and invasions of personal privacy, coupled with catastrophes resulting from computer systems that failed to work or functioned improperly, have been used as evidence to support a need for regulatory protection through a state occupational licensing scheme or a mandatory certification program for computer specialists. j. t. westmeier software piracy and software security in business schools: an ethical perspective jin h. im pamela d. van epps outlook: computing in education: a single course for preservice teachers robert p. taylor advanced placement-plus in computer science: a summer institute at the university of tulsa in this paper we discuss an in-service course designed to give secondary school teachers the background needed to teach an advanced placement (ap) course in computer science. in order to do this effectively, we argue that additional computer literacy and computer system concepts must be developed as well (plus). we present the (ap) course outline and objectives as well as the outline and objectives for some additional computing skills. roger l. wainwright dale a. schoenefeld the reason god made oklahoma? anne m. parker recruiting and retention of information systems professionals in nebraska: issues and challenges mike echols uma g. gupta experiences with introductory computer science courses survey results jane m. fritz computing in higher education: the athena experience project athena at mit is an experiment to explore the potential uses of advanced computer technology in the university curriculum. about 60 different educational development projects, spanning virtually all of mit's academic departments, are already in progress. edward balkovich steven lerman richard p. parmelee combating the code warrior: a different sort of programming instruction many cs101 courses purport to teach object-oriented programming, but many seem to be directly translated from traditional structured programming courses. lynn andrea stein's "rethinking cs101" program at mit offers a radically different approach to teaching oo programming by concentrating on the interactive aspects of object-oriented systems. this approach has the added advantage that students who have previously learned "programming" must also relearn how to approach the problems involved in programming interactive systems. this paper reports on the author's use of this concept outside of mit, with encouraging results. debora weber-wulff evolution of a program in computing for a latin american graduate college this paper describes the design process for the program in computing of the colegio de postgraduados located in chapingo, mexico. the program was designed to fit the research and academic requirements of the agronomical community in mexico. first, a brief description of the organization of the colegio de postgraduados and of its broad objectives will be presented. then the setting in which the program was developed and the design process itself will be described. yolanda f.
villasenor job and health implications of vdt use: initial results of the wisconsin-niosh study magnitudes and correlates of stress were investigated among 248 office workplace vdt users and 85 nonuser counterparts using field survey and objective physical measurement techniques. other than a tenuous indication of increased eyestrain and reduced psychological disturbances among users, the two groups were largely undifferentiated on job-attitudinal, affective, and somatic manifestations of stress. however, aspects of working conditions were judged less favorably by vdt users. stress mechanisms were much the same for both groups, involving psychosocial as well as physical environmental job attributes. for vdt users, the chair and workstation configuration were particularly important predictors of musculo-skeletal disturbances, as were corrective eyewear use and ambient lighting for visuo-ocular disturbances. steven l. sauter mark s. gottlieb karen c. jones vernon n. dodson kathryn m. rohrer risks to the public in computers and related systems peter g. neumann analysis of strategies used in teaching an online course in a predominantly hispanic university roberto vinaja mahesh s. raisinghani an overview course in academic computer science: a new approach for teaching nonmajors alan w. biermann "women in programming" is not an oxymoron! in teaching the comparative programming languages course, one quickly discovers that there are few references in most of the texts to women who have made significant contributions to the field. as we become more aware of the emphasis on recruiting and retaining young women in the computing field, we as educators must take an active part in providing role models---both to the young women and to the men who will become the professionals in the field. this paper addresses one mechanism that has been classroom tested. cindy meyer hanchey education is the key to future dreams john glenn acm president's letter: eating our seed corn on july 12 and 13, 1980, the biennial meeting of computer science department chairmen was held at snowbird, utah. this meeting, which is organized by the computer science board (csb), is a forum for the heads of the 83 departments in the united states and canada that grant ph.d.s in computer science. the meeting was attended by 56 department heads or their representatives, and by six observers from industry and government. this report was developed during the meeting as a result of intensive discussions about the crisis in computer science. this report was endorsed by the entire assembly. peter j. denning an information resource for computer science educators renee mccauley bill manaris infomediaries and negotiated privacy techniques alexander dix the internet-based lecture: converging teaching and technology network-based distributed education is a reality today. at george mason university, we have been pursuing a capability beyond the widespread practice of supporting courses with webpages: delivering lectures and seminars in real time, over the internet. this paper describes the range of distributed education technologies available today, focusing on issues of instructor presentation, student participation, and temporal qualities of response to student questions. the analysis supports our selection of desktop audiographics for synchronous internet-based course delivery. courses that have been presented in this mode are described, along with factors influencing their success and factors in student participation. j.
mark pullen road crew: students at work john cavazos physics in computer games (title only) chris hecker "a study of personal computing in education" this paper summarizes a study made by the authors of the various roles of personal computing in early education, college education and continued education of the individual. the role of personal computing in continued education is decomposed into its specific roles in the re-education of business persons (especially small businesses), of computer professionals, and of educators and other users of personal computing. it is pointed out that among many professional societies today, as well as within the total education of the individual, personal computing is an essential topic of national and international concern. john walstrom david rine a new bachelor's degree program in software engineering we describe a new program of studies in software engineering (se). the program regards se as an engineering discipline and focuses on the training of the student as an engineer involved in all aspects of a software product. ad soloman real-time lab exercises: a teacher's dilemma erik herzog peter loborg simin nadjm-tehrani who owns your course's intellectual property rights? today, many instructors prepare cds, videos and web sites either to complement their teaching or to serve as the primary method of instruction in their course. these high-technology course components often have considerable value and could be used in similar courses at many universities. university administrators have been quick to recognize the value of these high-technology components and are re-evaluating whether or not the instructor should be the sole owner of a high-technology course component. this paper highlights some points of contention today, discusses solutions being used by various universities and considers where the current trends appear to be leading. george whitson perfect choice richard t. watson high school participation in the association for computing machinery (abstract) david w. brown why a college course in computer literacy? harold l. sellars evega: an educational visualization environment for graph algorithms this paper describes the package evega (educational visualization environment for graph algorithms) and possible ways of incorporating it into the teaching of algorithms. the tool is freely available, platform- and network-independent, and highly interactive. the tool is designed for three different groups of users: students, instructors, and developers. interaction with evega can be achieved through the exploration of existing default visualizations, through the direct manipulation of graphical objects, or through the implementation and visualization of new algorithms using existing classes. sami khuri klaus holzapfel inside risks: the trojan horse race bruce schneier methods & tools: how to xfr: "experiments in the future of reading" steve harrison scott minneman anne balsamo reality check (poster session): an informal feedback tool scott grissom from the president: to dvd or not to dvd barbara simons careers for computing professionals p. mckelly k. farrell j. hamilton m. mcmillan s. regans j. singer a racquetball or volleyball simulation henry m.
walker micros for students-a computer-intensive environment in order to integrate effective computer utilization into undergraduate engineering, science, computer science, and management curricula, stevens institute of technology has added to its central computer capabilities, personal ownership of microcomputers by students. in 1983, and again in 1984, all entering freshman at stevens were required to purchase a digital equipment corporation professional 350 computer with 10 megabyte hard disk, 512k of random access memory and dual floppy disk drives with 800k bytes of storage. with this system, students received prose editing software for word processing and two programming languages, basic and fortran. dec pro 350s have also been distributed in various campus locations and a subsidized program has resulted in widespread faculty ownership of these systems. external grants and institutional funds have supported faculty efforts to develop computer-related curriculum materials in more than 50 courses. curriculum applications are extremely varied and include simulations, design projects, numerical methods, and tutorials. currently, there are 1,400 dec 350s being utilized at stevens, a pilot networking effort is being implemented, and a fully networked campus is planned for implementation during the 1986/87 academic year. curriculum examples, initial networking experience, and planning objectives will be discussed. roger s. pinkham edward a. friedman a new vision for sigchi marilyn mantei tremaine guidelines for a minimum program for colleges and universities offering bachelors degrees in information systems the tremendous demand for education in the use and application of computers and computer based systems in business, commerce and government has led to the establishment of information systems departments and to the option of an information systems concentration in established computer science departments. in fact, the information systems degree is now becoming one of the fastest growing and most popular in the area of computer education. this report is presented to the computer science and information systems education community as a preliminary proposal of ideas on which to base an accreditation standard. it was developed with the background that many information systems departments are incorporated in schools and colleges of business, and an attempt was made to have the standard consistent with the american association of colleges and schools of business accreditation guidelines. the successful future of information systems depends on a firm foundation for graduating students; this work is directed towards offering a minimal or floor program for the information systems bachelors degree. john t. gorgone norman e. sondak benn konsynski the impact of computer viruses on society j. lin c.-h. chang an open source laboratory for operating systems projects typical undergraduate operating systems projects use services provided by an operating system via system calls or develop code in a simulated operating system. with the increasing popularity of operating systems with open source code such as linux, there are untapped possibilities for operating systems projects to modify real operating system code. we present the hardware and software configuration of an open source laboratory that promises to provide students that use it with a better understanding of operating system internals than is typically gained in a traditional operating systems course. 
our preliminary projects and evaluation suggest that thus far the lab has achieved its primary goal in that students who used the lab feel more knowledgeable about operating systems and more confident in their ability to write and modify operating system code. mark claypool david finkel craig wills technology transfer aspects of environment construction k. kishida socially responsible computing i: a call to action following the l.a. riots ben shneiderman a framework for developing pre-college computer science retraining programs james l. poirot harriet g. taylor cathleen a. norris working group on hci education (identifying & disseminating resources) andrew sears exploring internet e-business programming technologies e-businesses are being created at incredible rates every year. more and more companies are realizing the importance of having their company exposed on the web. effectiveness and high efficiency are the two most essential aspects when developing a web site for a potential client. having an e-business that is user friendly is key. it is necessary to react as quickly as possible to clicks issued by clients, take inputs from a form, store data into databases, and dynamically respond to user requests. this research explores various tools to implement these tasks. minta royster software process improvement education (poster session): a european experiment rory o'connor gerry coleman maurizio morisio computer accessibility for federal workers with disabilities: it's the law in 1986, congress passed public law 99-506, the "rehabilitation act amendments of 1986." this law, amending the famous rehabilitation act of 1973, contains a small section, titled "electronic equipment accessibility," section 508, which may have significant impact on the design of computer systems and their accessibility by workers with disabilities. the bill became law when it was signed by former president reagan on october 21, 1986. the purpose of this article is to inform concerned computer professionals of section 508, outline the guidelines and regulations pursuant to the law, describe some of the reaction to the guidelines and regulations, and describe some of the challenges for the future in meeting the computer accessibility needs of users with disabilities. section 508 was developed because it was realized that government offices were rapidly changing into electronic offices with microcomputers on every desk. in order for persons with disabilities to keep their jobs or gain new employment in the government, congress decided it was necessary to make provisions to guarantee accessibility to microcomputers and other electronic office equipment. the driving principle behind section 508 can be found in section 504 of the rehabilitation act of 1973 which states: no otherwise qualified handicapped individual in the united states . . . shall, solely by reason of his handicap, be excluded from the participation in, be denied the benefits of, or be subjected to discrimination under any program or activity receiving federal financial assistance. it should be stated off the top that the scope of section 508 is not as broad as section 504. in particular, section 508 only applies to direct purchases by the federal government and not to purchases made by all programs receiving government funding. section 508 does not specify what the guidelines should be nor does it delineate a philosophy on which to base the guidelines.
a committee established by the national institute on disability and rehabilitation research (nidrr) and the general services administration (gsa), in consultation with the electronics industry, rehabilitation engineers, and disabled computer professionals worked for a year developing the philosophy and guidelines which will significantly affect the purchase of electronic office equipment, including computers and software, by the federal government, the largest computer customer in the world. richard e. ladner the technology infrastructure of u.s. schools ronald e. anderson viewpoint: the internet patent land grab tim o'reilly quo vadimus: computer science in a decade a panel discussion was held during the third biennial meeting of chairmen of ph.d.-granting computer science departments in june, 1978 at snowbird, utah, a meeting sponsored by the computer science board. invitees from industry and government were also present. a report was prepared from tapes made of the discussion (department of computer science, carnegie-mellon university: report #cmu-cs-80-127, june 1980). it contained all the prepared statements of the panelists, lightly edited, and the panelists' discussion in its entirety. a selection of the audience discussion was also included, rather heavily edited. the following presentation is derived from that report. j. f. traub a successful support model for student consultants at rutgers university in january 1986, a program which provided non-curriculum, general academic microcomputing for students began at rutgers university. this consisted of 25 apple macintoshes, 24 at&t 6300s, laser printers, software, and consultants at four libraries on the new brunswick campus. a support plan consisting of two full-time staff and approximately 40 students developed from the philosophy of a) you need technically qualified, personable consultants and staff to serve the users and associated departments, b) satisfied, challenged, and rewarded consultants have a higher probability of desiring continued employment, and c) if the consultants aren't happy, neither are your users. student positions included clerical assistant, consultant, programmer, site manager, software preparer and technical writer. four steps were used in managing the student staff: obtaining students with the necessary technical and people skills, training and maintaining the skills of the workers, retaining student employees, and evaluation. obtaining student personnel involved determining job requirements, soliciting the appropriate potential applicants, interviewing and selective hiring. interpersonal and communications skills were more important than technical skill for prospective student consultants. humanities and liberal arts majors were encouraged to apply. all potential employees were subjected to a standardized interview for the job applied for. all exceptional students were hired. additional staff were hired to compensate for attrition and for substitution. extensive training of student consultants was done. this included initial workshops, a consultant's handbook, monthly meetings, and training exercises. the ability to retain student employees depended on hiring the correct candidate, attractiveness of the job, and fate. prioritizing work before social life, reporting to work faithfully, and carrying a reasonable course load were characteristics found in returning students. blair c.
brenner a spiral approach to software engineering project management education this paper describes the experiences of an instructor, experienced in management, teaching a software engineering project management course at a school that specializes in software education. a "spiral approach" was used to provide for the parallel acquisition of management knowledge and experience, while building on recently acquired skills in software technology and developing confidence through the successful development of a software product. this paper describes the reason for selecting the approach, the course content, the observations of the students and the instructor, the advantages and disadvantages of the approach as applied, and conclusions. course objectives were met and course evaluations by students indicated a much higher level of acceptance than previous traditional approaches. the observations indicated that the approach may extend beyond the classroom to the industrial setting where on-the-job training, career path planning, and management development are of concern. joseph c. spicer the ethical computer practitioner - licensing the moral community: a proactive approach don gotterbarn faster, fairer, and more consistent grading, using techniques from the advanced placement reading (panel session) charles m. shub owen astrachan david levine stuart reges henry walker the terminal master's degree (panel): does it need to be cured? don goelman roberta evans sabin marty j. wolf pete knoke mike murphy elementary computer literacy (sigcas) computer science departments have spent a large percentage of their efforts on educating majors. this is as it should have been since there was a need to establish an identity for computer science. this task is by no means complete, but it is far enough along that more attention can be directed to the large numbers of students who do not want to be majors. departments are beginning to respond to students who want a minor - particularly those who are majoring in the sciences. but, other than providing what might be labeled "computer appreciation" courses or introductory programming courses, little has been done for the large body of non-science majors. the challenge is: what should be the content of a course in which the assumption must be made that this is the only opportunity for the student to study computer science as an undergraduate? my remarks will be directed toward answering this question. gerald l. engel orrin taulbee philip l. miller robert hare real and virtual computing museums (poster session) john impagliazzo reengineering the university dennis tsichritzis computer backup pools, disaster recovery, and default risk there is a growing popularity of computer backup pools, where a few members share the ownership, or right for service, of a computer center. such a center stands by to provide for the lost computing capacity of a member suffering a computer breakdown and disaster recovery. the efficiency of such a solution may be examined from various points of view, such as costs, response time, reliability, etc. we focus on the reliability of such an arrangement. two types of default risks are discussed: the probability that the center itself will break down when needed, so that it would be unable to provide service (this is similar to the traditional measure of a "probability of ruin") and a "perceived probability of ruin" (the probability that a member will be affected by the failure of the center).
we borrow the concepts of probability of ruin from the risk management and insurance literature, in order to reach explicit relationships between these probabilities and the pricing of a mutual computer pool. it is shown that the membership fee for each participant must be a function of both the payments of all members and their loss (call for service) distributions, thereby reflecting the simultaneity and mutual interdependence of members. yehuda kahane seev neumann charles s. tapiero from the president: the heroes among us barbara simons versatile concept map viewing on the web we present an applet-based system for viewing concept maps on the web. the input consists of a concept map written in a description language with optional style and layout definitions. the system has numerous applications, because many kinds of graphs, trees, and flowcharts written by humans or generated by other software can be shown in addition to traditional concept maps. antti karvonen erkki rautama jorma tarhio jari turkia insurance and the computer industry bruce schneier no-nonsense guide to csab/csac accreditation csab/csac provides professional accreditation of computer science bachelor's degree programs in the united states. as of october 2000, 159 institutions held this accreditation. by our count, over 80% of the accredited programs were offered by departments which also offer graduate programs in computer science. this means that few small colleges are represented. our intent in this work is to give the small college audience an up-to-date guide to the recently-revised csab/csac accreditation standards. the guide is not comprehensive; we emphasize those issues we believe to be of greatest interest to small colleges and address them from the perspective we have gained from our own recent accreditation evaluation. pete sanderson time for industry to support academic infosec m. e. kabay a major in information systems an information systems major is presented as a second offering by a computer science department. the rationale for the degree program is based upon the demands of industry for graduates skilled in database and information systems. courses in cobol, business data processing, database management, and systems analysis are included in the information systems curriculum to prepare the student for employment in the business systems area. the contents of the courses in the information systems core curriculum at kansas state university are presented as an example of an implemented degree program. in addition, the information systems curriculum is analyzed in terms of its differences from the curriculum 78 computer science curriculum and of the resources required for its implementation. fred j. maryanski elizabeth a. unger selecting a computer ethics coursebook: a comparative study of five recent works herman t. tavani the chi97 chikids program: a partnership between kids, adults and technology allison druin methods and approaches for teaching systems analysis (panel session) effective teaching of systems analysis, information systems, or management information systems requires innovative approaches that go beyond the traditional classroom approaches. the panel will present ideas that have worked successfully and can possibly be utilized by other departments. the approaches center on the application of learned material into a setting that will allow the student to experience the "work place" environment.
actual projects done in a team setting is the real key to improving the students learning of the subject material in systems analysis and design courses. this approach involves the skills of working in a team, writing, and presentation. the graduate gains experience and therefore preparation for the eventual job they will be doing from their class projects. requirements of the faculty increase somewhat in this type of approach and the panel will discuss these issues also. robert a. barrett, moderator in the area of systems analysis, information systems, and management information systems, we have in past symposiums presented papers that outline the courses and the course contents. we have not dealt with some of the approaches or methodologies of student assignments and work. our department has an advisory committee that provides input/guidance to the needs that business and industry have in regards to the individual who is working as a programmer, programmer/analyst, or systems analyst. one of the major issues that is being discussed (and has been over the past two years) is the writing, speaking and team concept abilities of the working professional. we have incorporated many of the needs in the individual classes to enhance and reinforce the learning of the student in these three areas. we put the student into teams as much as possible and require as many written reports and presentations as possible. ernest a. kallman at bentley the systems design course is a capstone course for seniors only. its objective, beyond the obvious purpose of covering the systems design function, is to help the student make the transition from textbook understanding of information systems to actual real world experience. to that end some part of the course is given over to topics such as installation organization and management. to add further realism a team project is assigned which requires the observation of an actual computer installation in some organization other than bentley college. john f. schrage the curriculum follows concepts noted in the major computer curricula studies from acm on both undergraduate and graduate levels. the dpma model curriculum also influenced the program in information systems. the programs provide training and education in both programming and systems with specializations somewhat determined by each student. the number of systems courses has expanded in the last three years. the systems concepts are presented, reinforced in intermediate courses, and culminated in a real-world project for both levels of students. the capstone situation for all students in the computer area is a real-life problem and solution. students form into teams of three or four and find an area company which has a systems-oriented problem applicable for solution within the ten-week term. the team approach is used in most of the courses, but the independence of students in this course shows more on adapting for the job market. team work is done in all courses after the introduction course in concepts and programming. robert a. barrett ernest a. kallman john f. schrage visualizing algorithms and processes with the aid of a computer communicating algorithms and processes is an integral part of computer science education yet in many instances is difficult to carry out effectively using traditional techniques. using the computer as an aid in visualizing and understanding an algorithm is one way to improve this communication process. 
with the computer technology available to us today, it would be unfortunate if we did not make effective use of it in computer science education. (we don't want to be like the shoemaker's children.) the prototype systems described in this paper exemplify how a computer might be used as an instructional aid; the observations resulting from their application suggest further experimentation and use. jeffrey w. mincy alan l. tharp kuo-chung tai students kate ehrlich the second information revolution thomas h. davenport computers, society, and nicaragua david bellin cra status of women column francine berman assessing a computer science curriculum based on internship performance norbert j. kubilus implementing a software management discipline the paper did not arrive prior to press time. r. loesh b. larman p. molko d. reifer risks to the public in computers and related systems peter g. neumann international master's program in information processing and telecommunications (poster session) jan voracek nina kontro-vesivalo learner interactivity and production complexity in computer-based instructional materials robert s. tannenbaum international dimensions of the productivity paradox sanjeen dewan kenneth l. kraemer rejuvenating technical consortia ray alderman computing: taking the first byte last year, the computing planning task force of the university of delaware recommended that the university adopt a goal for its 18,000 students. they proposed that "60 percent of students should have some familiarity with computers, 20 percent should make significant use of them, [and] 20 percent should be fully conversant with all aspects of computing."1 at about the same time the provost and vice president for academic affairs directed academic computing services to develop a program to promote computing awareness among the faculty. in addition, an anonymous grant to the humanities faculty for the purchase of microcomputers sparked new interest in computing among this group. this paper describes the on-going development of the "computing: taking the first byte" program, a series of seminars and workshops for faculty in fields that are not historically associated with computing. anne webster leila lyons the achievement of blacks in introductory computer science at a predominantly white public university mohsen chitsaz karen holbrook who do i work for anyway? don gotterbarn technical opinion: the ultimately punishable computer science paper for the latter '90s: a tip for authors sherman r. alpert richard b. lam who's to say?: essential elements of hci education jean gasen advisory committees: one approach to user input jennifer fajman susan clabaugh warren phillips sigcas' crisis and its role in advancing social analysis in computer science rob kling micros and minis - conflict resolution micro computers situated in micro computer labs and the desire of faculty to switch to using micro computers and micro-based packages as a tool for teaching is growing rapidly on campuses. this can have its advantages but also, from an economic as well as a practical point of view, is often loaded with disadvantages which can far outweigh the advantages of using a mini or mainframe computer. this paper will focus on some of the advantages and disadvantages, both the obvious and the sometimes obscure, which surround the micro vs. mini or mainframe tool for teaching. 
from an economic standpoint the discussion will point out the costs of maintaining a micro lab on a cost per student basis taking into account not only the initial hardware and software costs, but also the cost of system administration and hardware maintenance. from a practical standpoint the discussion will point out what little differences there are between packages which can be used on a micro and similar packages that can be used on a mainframe and how easily a student can adapt to using a slightly different but similar-in-purpose package. while it is important that a student be taught how to use tools such as a fourth generation language and a spreadsheet package, that fourth generation language or spreadsheet package does not have to be the most popular package being used on micros. there are very good fourth generation packages and spreadsheet packages available for minis and mainframes. allan haverkamp acom ("computing for all"): an integrated approach to the teaching and learning of information technology this poster describes the acom modules at leeds university, teaching it skills to a very large and diverse student body. hannah dee peter reffell acm model high school computer science curriculum corporate pre-college task force comm. of the educ. board of the acm using responsibility modeling to match organizational change to changes in the systems development process changes to an established software system development process are made for many reasons. we describe the use of responsibility modeling as an aid to identifying the organizational changes needed if changes to the software systems development process are to be implemented successfully within the organizational context. m. r. strens j. s. chudge user's groups - a source of information this paper looks at the ways one of the user's groups on campus provides information and sources of information to the campus community. it describes the activities and instruments the group uses to gather and disseminate this information to its members. it also discusses methods being used to increase user participation in the user's group. the user's group uses a variety of techniques to accomplish its goal of being a source of information to the campus community, including: a newsletter covering general information of interest to the campus community and software and hardware reviews by members of the user's group; hardware and software presentations by vendors and user group members during meetings; maintenance of a public domain software library at a central site for use by members; and the use of a database containing the software and hardware currently being used by individual members and their expertise with each product. alan albertus predicting student performance in a beginning computer science class this study investigated the relationship between the student's grade in a beginning computer science course and their sex, age, high school and college academic performance, number of mathematics courses, and work experience. standard measures of cognitive development, cognitive style, and personality factors were also given to 58 students in three sections of the beginning pascal programming class. significant relationships were found between the letter grade and the students' college grades, the number of hours worked and the number of high school mathematics classes. 
both the group embedded figures test (geft) and the measure of piagetian intellectual development stages were also significantly correlated with grade in the course. there was no relationship between grade and the personality type, as measured by the myers-briggs type indicator (mbti); however, an interesting and distinctive personality profile was evident. laurie honour werth critical issues in abandoned information systems development projects kweku ewusi-mensah computers in education in south africa - the state of the art - c. julie o. van den berg test review: a new method of computer-assisted learning to promote careful reading and logical skills dennis rothermel gregory tropea our computer science classrooms: are they "friendly" to female students? l. e. moses long-range planning in computer chess a serious deficiency in game-playing, tree-searching programs is their short-sightedness. program strategy revolves around the maximization/minimization of what is attainable within the scope of the search tree with little consideration of what lies beyond. this paper describes planner, a long-range chess strategist that assesses positions, determines a long-range strategy, and makes moves in the short-term (within the scope of the search tree) consistent with a long-term objective (beyond the depth of the search tree). in effect then, planner allows the program to "see" beyond the limitations of the search tree. jonathan schaeffer an inexpert system for system development e. d. ought a revised model curriculum for a liberal arts degree in computer science henry m. walker g. michael schneider retooling the user support staff of the computer center towards a micro information center facility bruce l. rose claire k. rossbach connected, smart devices - computing beyond the desktop i. bolsens jerry fiddler wim roelandts what does business and industry expect from computer science graduates today? in developing a curriculum that produces graduates who are readily accepted in today's business and industry, the question arises, "what courses in a student's background are most vital?" to address this question a questionnaire was constructed and sent to 500 businesses and industries in south carolina, north carolina, tennessee, georgia, virginia, and florida. this short paper reports the rather startling findings of this survey. clark b. archer on being a ucitarian: winning the race to the bottom don gotterbarn down the road: meet our first european chapter sara m. carlstead editorial policy adele j. goldberg an international common project: implementation phase to better prepare students to work in globally distributed organizations, to develop effective communication skills to deal with the communication barriers that are inherent in such settings and to provide students with the opportunity to be involved in a complete software development cycle of a "real-world" project, from design to integration and testing, we have developed a course based on an "international common project" (icp) model [3] of the us-ec (european community) consortium "towards a common computer science curriculum and mutual degree recognition" [1]. the course is scheduled for the spring semester, 2001, and towson university, maryland, usa and evry university, france, will participate in this project. shiva azadegan chao lu a realistic approach to teaching systems analysis at the small or medium-sized college clark b. archer the standards development process: a view from political theory martin b. h.
weiss "real world" skills vs. "school taught" skills for the undergraduate computer major janet hartman curt m. white screening freshmen computer science majors emery gathers editorial: changing times ken korman guidelines for interpreting and following the association for computing machinery (acm) code of ethics and professional conduct incorporating the client's role in a software engineering course jennifer a. polack-wahl from wealth to wisdom: a change in the social paradigm shumpei kumon integrating professional skills into the curriculum john lamp chris keen cathy urquhart prepared testimony and statement for the record on computer virus legislation marc rotenberg work at home for computer professionals: current attitudes and future prospects the subject of this paper is work performed in the home with computer and communications technology, also known as telecommuting. the article reports on two studies of work at home: a quasi-experimental field study of organizational telecommuting pilot programs, and an attitude survey comparing computer professionals who work at home to employees doing similar jobs in traditional office settings. the results of the field study demonstrated that working in the home had little impact on employee performance; however, supervisors were not comfortable with remote workers and preferred their employees to be on site. in the survey, work in the home was related to lower job satisfaction, lower organizational commitment, and higher role conflict. the survey also included computer professionals who worked at home in addition to the regular work day. the author suggests that performing additional unpaid work in the home after regular work hours may be an important trend that merits further investigation. the studies demonstrate that while computer and communications technology have the potential to relax constraints on information work in terms of space and time, in today's traditional work environments, corporate culture and management style limit acceptance of telecommuting as a substitute for office work. margrethe h. olson oo design in compiling an oo language norman neff can information systems concepts be assimilated into non-is courses? the american assembly of collegiate schools of business (aacsb) is encouraging schools to teach core topics across a spectrum of courses. subjects would not be artificially forced into a certain course because of departmental boundaries but would be introduced according to their relevance to the topic being presented. information systems (is) topics have received attention possibly because the wording of the current aacsb curriculum standards no longer implies that a separate course is required for information systems.many discussions about curriculum changes affecting is have been made without quantitative information. evidence has not been gathered about the qualifications of non-is faculty to teach is topics or even faculty preferences for delivery of is topics. the questionnaire used to gather data for this article revealed that non-is faculty strongly prefer a separate, required is course over integrating is topics into non-is courses. the introductory is course provides the bulk of knowledge that most business students receive about information systems. if students are not well taught by qualified faculty then their ability to use and assimilate is and is technology is diminished. 
this article brings some quantitative evidence to the debate and argues that is topics beyond computer literacy are best taught by is faculty. george p. schell student well-being in a computing department we describe a project exploring the relationships between factors in the learning environment, student well- being and learning outcomes, in the context of a computing department. a range of established psychometric tests identified areas of unhelpful stress in the working environment and measures were implemented to rectify these. a significant improvement in measured student well-being followed. j. r. davy k. audin m. barkham c. joyner treating information as an asset the fifth generation of computing presents a unique position for the value, protection, and support of information. this session addresses the needs of corporations and organizations by presenting and discussing information being treated as an asset. robert abbott, in his keynote address, will relate the seriousness and effort that corporations and government agencies expend to protect their information. k. lucey and t. brannan's paper addresses the corporate power plays that can occur by identifying the checks and balances necessary to assure protection and privacy. l. becker's paper correlates recent legislative proposals with the impacts and problems of computer security issues. david persin issues in software maintenance bennet p. lientz computers, crime and privacy - a national dilemma: congressional testimony from the industry peter j. denning a distributed computer approach to i/o rooms the louisiana state university system is composed of ten campuses, located statewide. each of the ten campuses has some locally installed computer power. on this campus, approximately 25,000 of the students attend and this is also the location of the system network computer center. the system network computer center installed the first ibm 3033 processor complex that was installed in an academic environment. this processor complex is used for research, instruction, and administrative work. the primary use of this is made by the main campus. however, all other campuses use this for large-scale computational support, some administrative supplement, and some instructional support. on campus, there are many tso terminals for graduate students and faculty and in 1978 there was a single student i/o room which had card input and printer output. there was another i/o room in the main center which was used solely for the faculty and graduate students. this i/o room had a single card reader and printer. the student i/o room had two documentation 600 cpm card readers, two data products printers at 600 lpm, and 24 card punches. john m. tyler the impact of developer responsiveness on perceptions of usefulness and ease of use: an extension of the technology acceptance model the technology acceptance model (tam) suggests that the perceived usefulness (pu) and the perceived ease of use (peou) of an information system (is) are major determinants of its use. previous research has demonstrated the validity of this model across a wide variety of is types. however, prior research has not identified antecedents of pu and there has been only limited research on the antecedents of peou. 
consequently, research has provided little guidance to is managers on methods to increase use by augmenting pu and peou. viewing is development as an instance of social exchange theory (set), this study proposes that is managers can influence both the pu and the peou of an is through a constructive social exchange with the user. one means of building and maintaining a constructive social exchange is through developer responsiveness. the results of this study, examining the adoption of an expert system, indeed support this notion. specifically, developer responsiveness strongly influenced both pu and peou, but only indirectly affected actual behavior --- is use --- in accordance with the predictions of set. an extension of tam based on set is presented and the implications of this extended model are discussed from both a managerial and theoretical perspective. david gefen mark keil mixed signals about social responsibility what is the appropriate role of computer professionals in determining how their work is used? should they consider the societal implications of their work? ronni rosenberg experiences with ethical issues judith l. gersting frank h. young universities in transition to virtual universities -facts, challenges, visions- claus unger experiences with tutored video instruction for introductory programming courses richard anderson martin dickey hal perkins information resource management: theory applied information resource management (irm) grew out of the efforts of the federal paperwork commission and the leadership of dr. forrest woody horton, during the 1970s. the diebold group claims that it first introduced the irm concept to the business world in one of its proprietary client reports in 1977. whatever the origin, the concept of irm is well established within the federal government through the enactment of the paperwork reduction act of 1980. this legislation was signed by the president on december 11, 1980 and became pl 96-511. during the past year federal agencies have been required to develop and implement the irm concept in their organizations. in the private sector many large firms and other organizations have implemented the irm concept or are making plans to do so. it was the anticipation of the enactment of the legislation and the appointment of corporate vice presidents for irm management, such as at the mcdonnell douglas corporation, which encouraged us to offer a graduate course in irm last fall. it was well received and is being offered again this fall. boulton b. miller a new approach to security system development the development of a security system is generally performed through a multiphase methodology, starting from the initial preliminary analysis of the application environment, up to the physical implementation of the security mechanisms. in this framework, we propose a new approach for the development of security systems based on the reuse of existing security specifications. in the paper we illustrate how reusable specifications can be built by analyzing existing security systems, and how they can be used to develop new security systems without starting from scratch. silvana castano giancarlo martella pierangela samarati resources, tools, and techniques for problem based learning in computing ainslie ellis linda carswell andrew bernat daniel deveaux patrice frison veijo meisalo jeanine meyer urban nulden joze rugelj jorma tarhio international standards: practical or just theoretical?
hugo rehesaar a comparative investigation of ethical decision making: information systems professionals versus students as information technology evolves, it continues to raise new ethical challenges. in recognition of this, the business and academic communities have focused increased attention on ethics. professional codes of ethics have been enacted by the acm, aitp, and other computing organizations to provide guidance to information systems (is) professionals in resolving ethical dilemmas. in addition, the is'95 model curriculum and the american assembly of collegiate schools of business (aacsb) guidelines for business education both recognize the importance of ethics in business educational programs. this paper explores an important question that has been neglected by previous research: do is professionals differ significantly from students in terms of their perceptions about ethical issues? two studies were conducted and they revealed a number of ethical decision-making differences between professionals and students. this result, along with an additional finding that participants showed little consensus about most ethical scenarios, suggests that ethical decision making is often difficult and that both students and professionals can benefit from ethical training and education. the findings also have important implications for is research. james j. cappel john c. windsor cluster computing: development of a small scale cluster and learning modules for undergraduates dana drag lorraine juzwick program paper-slide-show (poster session) minoru terada news track robert fox more on the "dark side" of computing c. dianne martin teaching real time oss with doritos jae c. oh daniel mosse hci solutions for managing the it infra-structure thomas m. graefe dennis wixon computer science education links - what next? renee mccauley discrete mathematics as a precursor to programming peter b. henderson internet virus protection bill hanson the anthropology of semaphores this paper describes research into the conceptions of students studying concurrency, using qualitative methods that originated in anthropological field work. we were able to obtain a deep understanding of students' mental models of semaphores: they construct consistent, though non-viable, models of semaphores, and they use them in patterns without understanding the synchronization context. we used the results to improve our teaching of concurrency, for example, by carefully defining the semaphore model and exercising the model outside of a problem-solving context. yifat ben-david kolikant mordechai ben-ari sarah pollack
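as a minimal illustration of the semaphore model referred to in the entry above (this sketch is not taken from the paper; the class name, thread count, and loop bound are invented for illustration), the standard acquire/release pattern can be written with java.util.concurrent.Semaphore as follows:

    import java.util.concurrent.Semaphore;

    // illustrative only: a binary semaphore (one permit) protecting a shared counter.
    public class SemaphoreSketch {
        private static final Semaphore mutex = new Semaphore(1); // one permit = mutual exclusion
        private static int shared = 0;

        public static void main(String[] args) throws InterruptedException {
            Thread[] workers = new Thread[4];
            for (int i = 0; i < workers.length; i++) {
                workers[i] = new Thread(() -> {
                    try {
                        for (int j = 0; j < 1000; j++) {
                            mutex.acquire();      // P / wait: block until a permit is available
                            try {
                                shared++;         // critical section
                            } finally {
                                mutex.release();  // V / signal: hand the permit back
                            }
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                workers[i].start();
            }
            for (Thread t : workers) {
                t.join();
            }
            System.out.println("shared = " + shared); // 4000 when the synchronization is correct
        }
    }

the point of the sketch is only to fix the terminology: acquire and release correspond to the classical p and v operations, and the permit count defines the synchronization context that the cited study found students often ignore.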
comparison of student success in pascal and c-language curriculums richard f. gilberg behrouz a. forouzan giving engineers a positive view of social responsibility leon tabak opportunities for watermarking standards fred mintzer gordon w. braudaway alan e. bell songs and the analysis of algorithms darrah chavey how to put "fire" back into the burnt-out employee s. j. li service learning via the computer science club judith l. gersting frank h. young proposal for an on-line computer science courseware review michael goldweber power, politics, and mis implementation theories of resistance to management information systems (mis) are important because they guide the implementation strategies and tactics chosen by implementors. three basic theories of the causes of resistance underlie many prescriptions and rules for mis implementation. simply stated, people resist mis because of their own internal factors, because of poor system design, and because of the interaction of specific system design features with aspects of the organizational context of system use. these theories differ in their basic assumptions about systems, organizations, and resistance; they also differ in predictions that can be derived from them and in their implications for the implementation process. these differences are described and the task of evaluating the theories on the bases of the differences is begun. data from a case study are used to illustrate the theories and to demonstrate the superiority, for implementors, of the interaction theory. m. lynne markus the computer science fair: an alternative to the computer programming contest sue fitzgerald mary lou hines using software development teams in a classroom environment sallie henry nancy miller wei li joseph chase todd stevens dynamic patterns as an alternative to conventional text for the partially sighted when the partially sighted can no longer read even enlarged conventional text then possible aids include touch braille and speech synthesis. touch braille has proved to be very useful for those with no vision and also for the partially sighted who can only read with difficulty and hence are liable to suffer from early fatigue. however, both of these aids lack any visual feedback useful for those with some residual sight. visual braille may also be of use but is limited to a given set of standard patterns. it should be noted that many partially sighted people may still be able to see a few areas of different color which may be sufficient to allow different combinations of colored elements to represent a useful character set. the authors propose a possible alternative reading aid called dynamic pattern system (dps) where each pattern or series of patterns can represent an ascii character. each pattern consists of a number of distinct areas, each of which can assume one of a set number of colors. a particular set of patterns representing a character set is known as a patternset. there are many conditions that can lead to partial sight and hence these patternsets may be individualised to each user thereby making full use of their limited vision. this can be regarded as an interface matching solution. the input or output of a dps can be read as standard text by conventional text-based word processors, web browsers and other standard it equipment. using a dps it may be possible for some partially sighted users to communicate with each other each using their own individualised patternset. furthermore a fully sighted person may be able to both send and receive conventional text when communicating with a person using a dps. an extensive literature search has failed to find any comparable work. hence the advantages of such a system include: standard pcs (minimum 386, windows 3.x, 4m ram) with no special equipment may be used to encourage the full integration of some partially sighted individuals, who are unable to read even enlarged text, into a standard office environment. a standard pc running dps may be used by the fully sighted. dps is a very small (one floppy disk) and easy to use software driven user interface requiring no special hardware and hence offers a low cost solution. individualized patternsets may allow the most effective use of a user's limited vision. david veal
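as a purely hypothetical sketch of the character-to-pattern mapping the dps entry above describes (the abstract gives no code; the color set, class names, and sample mappings below are invented for illustration), a patternset can be modelled as a table from ascii characters to small arrays of colored areas:

    import java.util.HashMap;
    import java.util.Map;

    // hypothetical sketch: each character of ordinary text is replaced by a small
    // pattern of colored areas chosen to suit one reader's residual vision.
    public class PatternSetSketch {
        enum AreaColor { RED, GREEN, BLUE, YELLOW }

        private final Map<Character, AreaColor[]> patterns = new HashMap<>();

        // register the pattern that stands for one character in this reader's patternset
        void define(char c, AreaColor... areas) {
            patterns.put(c, areas.clone());
        }

        // translate conventional text into the sequence of patterns to display
        AreaColor[][] encode(String text) {
            AreaColor[][] out = new AreaColor[text.length()][];
            for (int i = 0; i < text.length(); i++) {
                AreaColor[] p = patterns.get(text.charAt(i));
                out[i] = (p == null) ? new AreaColor[0] : p; // unknown characters map to an empty pattern
            }
            return out;
        }

        public static void main(String[] args) {
            PatternSetSketch set = new PatternSetSketch();
            set.define('a', AreaColor.RED, AreaColor.RED, AreaColor.BLUE);
            set.define('t', AreaColor.GREEN, AreaColor.YELLOW, AreaColor.GREEN);
            System.out.println(set.encode("at").length + " patterns produced"); // prints: 2 patterns produced
        }
    }

because the mapping is an ordinary lookup table, individualising a patternset to one reader amounts to changing the define calls, while the surrounding text-processing software is unchanged.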
a methodology for preparing the computer professional for certification this paper describes a methodology by which higher education institutions, business and government training facilities and professional societies can set up and administer a review course for the certificate in data processing examination. during the past seventeen years, one or more of the authors have conducted cdp review programs for the benefit of members of professional societies, students, and company employees. these years of experience have provided the basis for establishing a rigorous methodology for preparing candidates for the cdp examination with excellent results. the certificate in data processing examination program is administered by the institute for certification of computer professionals. more than 25,000 persons have taken the examinations over the past ten years. this large response to voluntary certification suggests the need for an orderly program to prepare the computer professional for certification. paul p. howley james s. ketchel michael g. rowloff arguments for weaker intellectual property protection in network industries joseph farrell what we have learned from a decade of research (1983 - 1993) on "the psychological impact of technology" larry d. rosen michelle m. weil computers and cultural change l k englehart design: expressing experiences in design bill moggridge global security edward s. h. bulman computer science education in the people's republic of china in the late 1980s last year a delegation of international computer professionals with interests in computer science education participated in an information exchange with colleagues in the people's republic of china. the delegation's experiences suggest that the chinese have made substantial progress in some aspects of computer science education since late 1982, but that difficult problems remain to be solved. j. d. wilson e. s. adams h. p. baouendi w. a. marion g. j. yaverbaum the digital millennium copyright act: an international assault on fair use? jill gerhardt-powals matthew h. powals women into computing janet stack ask jack: career q & a jack wilson can computers cope with human races? in trying to apply a computer to a task that humans do, we often discover that it doesn't work. one common problem is that humans are able to deal with fuzzy concepts, but computers are not---they need precise representations and it is hard to represent a fuzzy concept in a precise way. however, if we look closer at such tasks, we often discover that the weakness actually lies not in the computer but in ourselves---that we didn't understand what we were doing in the first place. when faced with a problem of this sort, some people refuse to recognize the conceptual failure. instead of seeking a better representation for the task, they thrash away at making the fuzzy scheme work, insisting that there is nothing wrong with the conceptual base. i will illustrate one such problem with a true story. the central theme is the fuzzy concept of racial and ethnic classification, as used by the u.s. government and a horde of other bureaucracies. these organizations have been carrying out elaborate statistical computations and making major policy decisions based on this concept for many years with problematical results. i begin with my first encounter with this scheme, some 25 years ago. l.
earnest a first course in computer science for small four year cs programs robert bryant paul de palma distance teaching workloads in this paper, we describe a formula for calculating the teaching workload for students who are studying off campus both on and off-line. initially the faculty of information technology developed a proposal for calculating academic workloads. this proposal reflected the rigid teacher centred learning structures of traditional on-campus delivery and made no allowance for the services required by off-campus students. in response, teachers of off-campus students developed a complementary proposal, based on actual time logs, which reflected their student centred approach to learning. contrary to popular wisdom, off-campus teaching was found to be more time-consuming than on-campus. wendy doube training aspects of microcomputer usage in the information center environment george n. arnovick orlando s. madrigal adapting curriculum 78 to a small university environment curriculum 78 was developed to present an undergraduate degree program in computer science at any university, with appropriate adaptation to each specific environment; in general the degree program suggested applies most naturally to relatively large universities. because small universities have limited resources---students, faculty, computing facilities---the implementation of curriculum 78 requires careful modification to fit the particular environment. by organizing the topics covered by the suggested courses in curriculum 78 in different combinations and emphasizing microcomputers, the department of mathematical sciences at loyola university can offer a complete degree program in computer science in spite of its limited resources. mary dee harris fosberg a few ideas from stan m. davis stan m. davis future ethics john karat clare-marie karat from silicon valley to silicon prairie: a long distance telecommuting case study the issues faced by firms in today's telecommunications environment are outlined and are discussed in juxtaposition with an actual telecommuting case study of trade reporting and data exchange, inc. (t.r.a.d.e.), a software engineering company located in san mateo, ca. telecommuting was successful for t.r.a.d.e. in the short term because a) required technology was widely available, b) the candidate initiated the idea and had the necessary industry and company experience, c) the organization could provide the flexible work arrangement while retaining a valuable employee, d) the employee was able to live in a geographic area of their choice, e) overall costs could be shared by the company and the employee, f) the job category was an ideal fit, and g) existing procedures were in place for communicating and managing the geographically detached worker. in the long run the employee left the company to take a job with a local software company because he missed the everyday interaction with co-workers. anthony r. hendrickson troy j. strader letter from the chair jane shearin caviness sam spam the flimflam man shannon jacobs y2k compliance and the distributed enterprise j. arthur gowan chris jesse richard g.
mathieu life and death in silicon valley memoirs of an adventure capitalist russell fish experiences in the establishment of a microcomputer support laboratory the explosive growth and general availability of microcomputers are creating a desperate need for the training of computer programmers in how to approach, assess and, if applicable, use microcomputers as an addition to their traditional hardware/software tool-kit in solving applications problems. until recently, our students (and, we suspect, cs majors and software engineers in general) had little or no formal training or experience in this area. one cannot appreciate the full extent of the applicability and limitations of micros without reasonable hands-on experience. yet the obvious approach - providing a number of stand-alone micros as the sole teaching vehicle - of itself creates a misleading context: restrictions of every kind force the student to spend most of his time circumventing limited resources and far too little time in actually learning about the useful aspects of the machines per se and as components of a more complex whole. in this paper, we define the educational goals that we are trying to achieve. then we identify five possible solutions to the problem. we then analyze each and present reasons why we adopted the approach we did. our microcomputer support laboratory [msl] consists of a pdp-11/34, under unix, connected to 8 motorola 6809's and 8 mc68000's. we also address the reasons which led to this choice, the problems in building the msl, practical experiences derived from using it in an educational situation, and conclude by discussing the feasibility of providing this type of environment as a university-wide resource. we also indicate possible improvements and future directions in which we hope to grow. j. michael bennett e. elcock lessons learned from modeling the dynamics of software development software systems development has been plagued by cost overruns, late deliveries, poor reliability, and user dissatisfaction. this article presents a paradigm for the study of software project management that is grounded in the feedback systems principles of system dynamics. tarek k. abdel-hamid stuart e. madnick is it okay to cheat? - the views of postgraduate students this paper examines the attitudes of students in the masters of information technology, honours degree in the bachelor of computing and graduate diploma of computing at monash university. students were surveyed on the acceptability of a variety of scenarios involving cheating and on their knowledge of the occurrence of these scenarios. the survey found a strong consensus amongst the students as to what was acceptable or unacceptable practice. the paper then examines the significance of these results for educators aiming to prevent cheating amongst their students. the study reported is part of a larger study currently being undertaken in the school of computer science and software engineering (csse) at monash university. martin dick judy sheard selby markham the future of our profession bo dahlbom lars mathiassen using personality inventories to help form teams for software engineering class projects as faculty create their teams for software engineering class projects, various techniques may be used to create these teams. random selection as well as structured assignments all have varied strengths and weaknesses. one method for selecting students involves using personality inventories to assess the various personality types of the students.
this paper will discuss how the author used the keirsey temperament sorter to select teams for a software engineering class and some of the results of this experiment. rebecca h. rutherfoord providing the right computing services in times of financial crisis: a case study at the university of wisconsin-madison this paper describes the changes required by an academic computing center when faced with a dramatic shift in computer usage, compounded by a university budget shortfall. it discusses implications of a technology which is changing quickly in an environment where, historically, funding models do not. it describes one possible outcome with regard to academic computing, which is the solution that we at the university of wisconsin-madison are working on, that of becoming a smaller but better-focused facility. it discusses how we decided which services to discontinue and which to keep. finally, it touches briefly on modifying the computing center's internal structure to support these changes. kathi dwelle a versatile assignment in cs 2 (poster session): a file compression utility based on the huffman code joão paulo barros rui pais misconceptions of designing: a descriptive study our experience in designing and teaching a cross-disciplinary freshman design class has led us to believe that students entering design fields (e.g., computer science or engineering) are saddled with naïve or (mis)conceptions about design and design activity. it is our belief that for students to become effective designers, they must be helped to recognize and overcome these misconceptions through appropriate educational interventions. to better understand the nature and substance of these misconceptions, we conducted a descriptive survey study of 290 freshmen in a technological institute. our findings begin to suggest a consistent profile of misconceptions across declared majors that start to explain observations we have made of naïve designers in our freshman design class. this paper reports on those findings. michael mccracken wendy newstetter jeff chastine a comparative analysis of mis project selection mechanisms james d. mckeen tor guimaraes james c. wetherbe down the road: visit the university of illinois at urbana-champaign sara m. carlstead a proposed redesign of the introductory service course in computer science g michael schneider information systems for management in the eighties a course in management information systems must prepare future users and information systems professionals for their roles in analyzing application requirements and designing information systems to serve business and individual needs. the objectives, organization, content, and methods used to teach this course to both mis and non-mis majors within the school of business at southern illinois university are described in detail. the systems development project, which involves students in learning tools and techniques for structured systems analysis and design, as well as in applying these methods to an actual design project, is one of the most important activities of the course. students have an opportunity to work together in their respective roles as users and systems analysts and to use project management and control techniques to assure effective results. mary r. sumner put ethics and fun into your computer course this article describes a fun yet informative approach to covering ethical issues in a systems analysis and design course.
by assigning a series of readings, collectively known as "the case of the killer robot," students were exposed to a number of realistic ethical situations as well as a great deal of knowledge that is part of the computer science and information systems curriculums. based on the results from two questionnaires, by the end of the course the students had an increased knowledge of ethical issues, a stronger concern for ethical issues, and a greater awareness of the importance of ethics in the workplace. jill gerhardt surfing the net for software engineering notes mark doernhoefer an honors computer science seminar for undergraduate non-majors david g. kay perspectives on privacy in campus it inside risks: cryptography, security, and the future bruce schneier factors affecting high school students' choice of computer science as a major richard o'lander walking the tightrope: balancing digital and traditional skills in undergraduate education john mcintosh jeremy butler kate francek kathy griswold jeffrey lerer joel sevilla a systems analysis & design course sequence the university of wisconsin-whitewater's management computer systems (mcs) major includes a two course sequence, systems analysis and design 1 and systems analysis and design 2. in the acm information systems curriculum there is also a two course sequence in systems analysis and design. the managers of information systems departments who hire our graduates frequently express the opinion that this sequence is especially important. the progression of the courses begins with three weeks of 100% "theory" with artificial homework assignments before the students are assigned their projects. from the fourth week of the first course to almost the end of the second course the projects are carried through the successive phases of the systems development life cycle. the overall balance between theory and project is even, with the theory coming earlier so as to illuminate the practice which follows. iza goroff letter from the chair jane shearin caviness private life in cyberspace i have lived most of my life in a small wyoming town, where there is little of the privacy which both insulates and isolates suburbanites. anyone in pinedale who is interested in me or my doings can get most of that information in the wrangler cafe. between them, any five customers could probably produce all that is known locally about me---including a number of items that are well known but not true. john perry barlow the code red worm malicious software knows no bounds. hal berghel so how are your hands? thoughts from a cs student with rsi rob jackson computing curricula 2001 corporate the joint task force on computing curricula innovation in post graduate computer science education this presentation will outline the progress to date of the development of the master of computing degree program, the courses developed, the philosophy behind the development, the structure including the exit qualification and the proposed delivery. it will also outline the quality assurance process and the very stringent accreditation process required before it is approved for delivery. alison young donald joyce testing student micro computer skills through direct computer use this paper introduces the concept of testing students' microcomputer skills through direct computer use. techniques are discussed which make it feasible for the instructor to grade the disk and printout that are produced by each student.
the process can be generally applied to testing many different skill areas, and has been effectively used for tests on dos and utilities, wordprocessing, spreadsheet work, and data base. practical examples of test creation and grading of spreadsheet tests are presented. further developments of the technique are suggested. michael m. delaney distributed development and teaching of algorithmic concepts seth teller brandon w. porter nicholas j. tornow nathan d. t. boyd teaching cscw in developing countries through collaboration teaching technical subjects in developing countries has proven to be a challenge. this is due to deficiencies in technology and knowledge resources, such as a lack of expertise in a field and a lack of up-to-date literature. these deficiencies have a major impact on the teaching of subjects such as hci and cscw, and have caused universities in developing countries to expand the scope of their teaching resources to form collaborations with other universities through distance learning technology. this report will focus on experiences of a collaborative project between the university of the witwatersrand (wits) in south africa and staffordshire university (staffs) in the uk. the aim of the project was to provide experiential knowledge of cscw issues through establishing communication channels between the uk and south africa. andrew thatcher lesley-anne katz david trepess computing in the brazilian amazon renata l. la rovere keynote address - remembrance of things past h. h. goldstine certitude and rectitude peter g. neumann teaching fundamentals for web programming and e-commerce in a liberal arts computer science curriculum web programming and e-commerce are popular topics, but it is not clear how they should be supported in an undergraduate computer science curriculum. this paper presents experience with a course that used java and perl/cgi to provide some fundamentals for web programming, and plans for a related upcoming course that should cover material more efficiently and provide fundamentals for e-commerce. topics covered in the course are compared to skills required for e-commerce as described by ge and sun [ge and sun 2000]. goals for computer science education in the 1980s the nature of computing, and hence of computer science, is changing rapidly. many topics that now seem interesting will be obsolete or irrelevant within ten years, and our perspective on other topics will change. if a curriculum designed now is to remain effective through 1990 or beyond, we must try to understand the forces that are shaping the field and to anticipate the roles that computing and computer science will play in the future. at carnegie-mellon, a group of eight faculty and graduate students is designing a new undergraduate computer science curriculum. we began by examining the trends that will affect the field over the next decade and the new phenomena and issues that may arise. from this basis we are developing a new curriculum without prior assumptions drawn from existing curricula. in this talk i will discuss our view of current trends in computer science and the roles that colleges and universities must play over the next decade. mary shaw ethics activities in computer science courses: goals and issues don gotterbarn robert riser the animal algorithm animation tool in this paper, we present animal, a new tool for developing animations to be used in lectures. animal offers a small but powerful set of graphical operators.
animations are generated using a visual editor, by scripting or via api calls. all animations can be edited visually. animal supports source and pseudo code inclusion and highlighting as well as precise user-defined delays between actions. the paper evaluates the functionality of animal in comparison to other animation tools. guido rößling markus schüler bernd freisleben guidelines for teaching object orientation with java how to best teach object orientation to first year students is currently a topic of much debate. one of the tools suggested to aid in this task is bluej, an integrated development environment specifically designed for teaching. bluej supports a unique style of introduction of oo concepts. in this paper we discuss a set of problems with oo teaching, present some guidelines for better course design and show how bluej can be used to make significant improvements to introductory oo courses. we end by presenting a description of a possible project sequence using this teaching approach. michael kölling john rosenberg the pass project (poster session): group research into parameters affecting student success donald joyce alison young 1990 ec directive may become driving force pat billingsley educating a new engineer peter j. denning the practitioner from within: revisiting the virtues frances grodzinsky social behavior in professional meetings: a video analysis of a panel discussion anat hovav munir mandviwalla the third millennium digital commerce act margot saunders developing teamwork through experiential learning the purpose of this study was to investigate the effectiveness of various experiential learning activities in the development of teamwork for students studying computer information systems (cis) who work in teams to complete cis projects or case studies. empirical evidence was obtained from students after participation in ropes courses. perceptions of satisfaction of the ropes courses were obtained from a survey on the extent participation helped build self-esteem, confidence, character, trust in others; and stimulated teamwork and cooperation with others. this was an exploratory study that presented descriptive statistics that showed a high extent of benefit from ropes courses. in conclusion, this study is a springboard for further research to establish a conceptual model on cis teamwork for students in a cis curriculum who have participated in ropes courses versus those who have not had similar team building experiences. the findings can also help firms who want to accelerate their teamwork training to enable their professionals to perform better and faster in teams by giving them a ropes course experience. security management--"hey! how do you steer this thing?" bill neugent the skills needed to teach computer-science courses gary m abshire effects of a political forum on the world wide web (abstract) david marston java as first programming language: a critical evaluation said hadjerrouit artificial intelligence (panel): finally in the mainstream? bill manaris robert aiken cris koutsougeras rasiah loganantharaj marco valtorta science and engineering for software development: a recognition of harlan d. mills' legacy victor basili tom demarco ali mili experience using the asa algorithm teaching system mario andre mayerhofer guimaraes carlos jose pereira de lucena mauricio roma cavalcanti litp: laboratoire d'informatique theorique et programmation paris presentation of scientific activity i. guessarian d.
perrin ap cs goes oo david gries kathleen larson susan h. rodger mark a. weiss ursula wolz system designers' user models: a comparative study and methodological critique ron dagwell ron weber why? when an otherwise successful intervention fails problem-based learning (pbl) has been an effective technique in developing self-directed learning and problem-solving skills in students --- especially in the medical school environment. this paper looks at some preliminary results of an ethnographic study of students in a software development environment trying to use pbl. our findings indicate that students need explicit training in group dynamics, students tend to rely excessively on existing knowledge, and they focus almost solely on product-related issues versus process-related ones. we then present some suggested improvements and future planned research. michael mccracken robert waters how shall we manage cyberspace and its growing population? the growing network of computer systems and its users has become a new society; it cannot exist in the absence of trust and promise, if for no other reason than because its members must be expected to abide by shared rules of behaviour. how shall we build trust without visibility? how can we reduce temptation, and how much are we willing to tolerate? how shall we set rules of behavior? can we make promises about the behavior of the machines we rely on? how shall we share and apportion responsibility for mishaps and failures? how shall we recognize abuse and what shall we do with abusers? we cannot hope to answer these questions by decree, but must attempt to build consensus in vigorous and sustained discussion. maarten van swaay special message: intersociety cooperation walter m. carlson acm task force report on k-12 education and technology (abstract) teri perl dennis bybee carol e. edwards coco conn marketing-driven standards: virtual standardization ray d. brownrigg academic support vs. information support: bringing information management to the community college traditional user support at institutions of higher education has revolved around the idea that the staff of the computer center should provide resources and consultation primarily for applications considered more traditional in nature: registration, financial aid, data processing/computer science applications such as languages (cobol, fortran, basic, jcl, etc.), computer assisted instruction (cai), and research projects which might utilize a statistical package like sas or spss. this has most certainly been true at four-year institutions and to only a lesser degree at community colleges. however, as the number and types of user groups within the institution have become more diverse, the computer center has shifted its support away from more traditional classroom applications toward becoming an information service consultant for the institutional workforce. this has been done to acknowledge the fact that the computer center's user groups have become more diverse and many times more dependent on its support functions. this paper will discuss some of the major issues which have accompanied this shift from academic instructional support to information services support as it has been addressed by one major community college.
most notably, it will discuss: the steps taken to implement a comprehensive management information system; planning mechanisms required to implement the necessary training support for new applications, including education programs, presentational and supplemental materials, along with media and hardware support; and the use of feedback mechanisms to aid the definition and development of training programs required by this transition from traditional academic consulting to information management user services. larry lambert jane-ellen miller a practical approach to integrating active and collaborative learning into the introductory computer science curriculum scott grissom mark j. van gorp children and computers in this panel presentation, we explore three related issues: the competencies that children bring to computer learning, the effects of computer learning, and the more general effects of growing up in a computer-rich environment. the presentation is sponsored by the society for cognition and brain theory, an interdisciplinary study group in philosophy, psychology, linguistics, artificial intelligence, and neuroscience. john m. morris the computer as microworld and microworld maker: a rationale and plan for the inclusion of logo in an introductory, preservice course on educational computing daniel h. watt brit bits: computer science in british further education the recent introduction of the accreditation scheme for undergraduate computer science programs has prompted increasing discussion of the curriculum within the academic computing community. various curriculum models have been proposed. this paper describes one of the national, standardized computer science curriculum syllabi used in further education colleges in britain at "a-level". this 2-year program is equivalent to 16 semester hours in an american college and covers a computer science core similar to that proposed in the acm "curriculum '78". it provides an interesting standardized model which is actually implemented in schools across britain. donna m. kaminski commentary: icann and internet regulation milton mueller viruses and worms: more than a technical problem m. e. kabay risks to the public p. neumann other contributors a collaborative learning trial between new zealand and sweden - using lotus notes domino in teaching the concepts of human computer interaction this paper reports the results of a collaborative learning exercise between students at auckland institute of technology and uppsala university. the exercise was conducted using both a lotus notes domino collaborative database and electronic mail to support students working in remote groups to perform a common task. issues concerning the logistics of such an exercise, student participation and evaluations of the process, ethical considerations and the quality of the learning process are discussed. some conclusions are drawn concerning the value of groupware technology to support this form of collaborative learning, and suggestions are made for future developments. a. g. (tony) clear professional codes of conduct and computer ethics education c. dianne martin david h.
martin readers comments: knowledge management accelerates learning john gehl computer center stress test sue stager supporting 50 classrooms full of whiz-bang technology walter gilbert duke university computer kamp 1982 kevin bowyer mel ray cary laxer computers and the law (an experimental undergraduate course) at the challenge of some good students, the information sciences department at taylor university decided to offer an overview course on computers and their legal implications for computer science majors and others interested in such a perspective. a two hour selected topic course was offered in the spring of 1980 to computer science majors, but open to any students who have taken at least one computer science course and had performed well in it. this paper reviews the content, successes and failures of this experimental course, with a view to assisting others who may wish to consider such an offering in the future. r. waldo roth the difficult client: consulting techniques from the human services e.-l. bogue risks to the public in computers and related systems peter g. neumann personal computers are coming to campus i have been asked to present a supplier's view of the campus computer revolution. computers are, of course, hardly new to the campus. before any bank or business ever ordered a computer, universities were already making use of them in engineering, mathematics, and sciences. university researchers were among the founders of computer technology and have contributed to most subsequent advances as well. most of you in the audience were probably introduced to computers at a college or university. in 1960, my high school did not have a computer, so i managed to get access to 1/2 hour a week of time on an ibm 650 at columbia university. today, i still meet high school students who obtain access to computers at the local college, if their high school has inadequate facilities, and if they do not have a personal computer at home. lawrence g. tesler outside the box: the changing shape of the computing world steve cunningham collaborative environmental education using distributed virtual environment accessible from real and virtual worlds we have designed and implemented a support system for collaborative environmental education, digitalee, which realizes distributed virtual environment accessible from real and virtual worlds. this system introduces the following diverse features into environmental education: global arguments supported by the internet, giving learners pseudo experiences by virtual reality, supplementing real natural experiences by augmented reality, and giving learners experts' valuable knowledge by distance education. shared virtual space in the distributed virtual environment is "3d virtual nature", which is a vrml world representing the real nature. learners learning through direct experiences can enter the 3d virtual nature from the real world with mobile computers, whereas experts and other participants can enter the 3d virtual nature from distant locations with their personal computers. people throughout the world can communicate with each other while sharing the same place virtually between real and virtual worlds. learners' observation, experts' knowledge, and other participants' information are continuously accumulated in the shared 3d virtual nature as vrml objects and web pages, and the world is being updated dynamically in the learning process. 
with these ideas, digitalee realizes a new style of environmental education such as collaborative outdoor learning supported by knowledgeable experts throughout the world and interactive virtual tours to inaccessible natural environment. masaya okada hiroyuki tarumi tetsuhiko yoshimura kazuyuki moriya web labs for the standard template library and the java generic library in a cs2 course william j. collins yi sun on computing and the curriculum robert mcclintock becoming a computer scientist it is well known that women are significantly underrepresented in scientific fields in the united states, and computer science is no exception. as of 1987-1988, women constituted slightly more than half of the u.s. population and 45% of employed workers in the u.s., but they made up only 30% of employed computer scientists. moreover, they constituted only 10% of employed doctoral-level computer scientists. during the same time period, women made up 20% of physicians and, at the doctoral level, 35% of psychologists, 22% of life scientists, and 10% of mathematicians employed in the u.s. on the other hand, there are some disciplines in which women represent an even smaller proportion at the doctoral level: in 1987-88, 8% of physical scientists, and only 2.5% of engineers were women [21]. the underrepresentation of women in computer science is alarming for at least two reasons. first, it raises the disturbing possibility that the field of computer science functions in ways that prevent or hinder women from becoming part of it. if this is so, those in the discipline need to evaluate their practices to ensure that fair and equal treatment is being provided to all potential and current computer scientists. practices that exclude women are not only unethical, but they are likely to thwart the discipline's progress, as potential contributors to the field are discouraged from participation. the second reason for concern about the underrepresentation of women in computer science relates to demographic trends in the u.s., which suggest a significant decrease in the number of white males entering college during the next decade. at the same time, the number of jobs requiring scientific or engineering training will continue to increase. because white males have traditionally constituted the vast majority of trained scientists and engineers in this country, experts have predicted that a critical labor shortage is likely early in the next century [4, 25]. to confront this possibility, the federal government has begun to expend resources to study the problem further. a notable example is the establishment of a national task force on women, minorities, and the handicapped in science and technology. their final report, issued in december of 1989, lists a number of government and industrial programs aimed at preventing a labor shortage by increasing the number of women and minorities trained as scientists and engineers [5]. in light of these facts, the committee on the status of women in computer science, a subcommittee of the acm's committee on scientific freedom and human rights, was established with the goal of studying the causes of women's continued underrepresentation in the field, and developing proposed solutions to problems found. it is the committee's belief that the low number of women working as computer scientists is inextricably tied up with the particular difficulties that women face in becoming computer scientists. studies show that women in computer science programs in u.s.
universities terminate their training earlier than men do. between 1983 and 1986 (the latest year for which we have such figures) the percentage of bachelor's degrees in computer science awarded to women was in the range of 36-37%, while the percentage of master's degrees was in the range of 28-30%. during the same time span, the percentage of doctoral degrees awarded to women has only been in the range of 10-12%, and it has remained at that level, with the exception of a slight increase in 1989 [16, 21]. moreover, the discrepancy between the numbers of men and women continues to increase when we look at the people who are training the future computer scientists: women currently hold only 6.5% of the faculty positions in the computer science and computer engineering departments in the 158 ph.d.-granting institutions included in the 1988-1989 taulbee survey (see communications september 1990). in fact, a third of these departments have no female faculty members at all [16]. this pattern of decreasing representation is generally consistent with that of other scientific and engineering fields [4, 25]. it is often described as "pipeline shrinkage": as women move along the academic pipeline, their percentages continue to shrink. the focus of this report is pipeline shrinkage for women in computer science. we describe the situation for women at all stages of training in computer science, from the precollege level through graduate school. because many of the problems discussed are related to the lack of role models for women who are in the process of becoming computer scientists, we also concern ourselves with the status of women faculty members. we not only describe the problems, but also make specific recommendations for change and encourage further study of those problems whose solutions are not yet well understood. of course, our focus on computer science in the university by no means exhausts the set of issues that are relevant to an investigation of women in computer science. most notably, we do not directly address issues that are of concern exclusively or primarily to women in industry. although some of the problems we discuss are common to all women computer scientists, there are, without doubt, other problems that are unique to one group or the other. nonetheless, the committee felt that an examination of the process of becoming a computer scientist provided a good starting point for a wider investigation of women in the field. clearly, to increase the number of women in industrial computer science, one must first increase the number of women trained in the discipline. thus, we need to consider why women stop their training earlier than men: too few women with bachelor's degrees in computer science translates into too few women in both industry and academia. moreover, because of the documented positive effects of same-sex role models [12], it is also important to consider why women drop out in higher numbers than do men even later in their academic training: too few women with doctorate degrees results in too few women faculty members. this in turn means inadequate numbers of role models for younger women in the process of becoming computer scientists. amy pearl martha e. pollack eve riskin elizabeth wolf becky thomas alice wu a cognitive model of learning from examples i am working on a project involving cognitive modeling. i am currently examining some of the types of study habits that cause students to learn most effectively.
for this project we are focusing on students learning introductory physics. my current goal is to investigate an existing computer program called cascade (vanlehn, jones, & chi, 1992), and evaluate its coverage of some recent psychological data on human learning. in past research cascade was able to model psychological data on the "self-explanation effect" (chi, bassok, lewis, reimann, & glaser, 1989). i intend to use cascade to explain new results (renkl, atkinson, & maier, 2000) showing how student learning can depend on the particular presentation of examples and study problems. to summarize, renkl et al. found that it is possible to increase the effectiveness of studying worked out examples, if the examples are gradually "faded" from fully worked out examples to increasingly more incomplete problems. eric s. fleischman privacy panel questions and answers judith perrolle the design and implementation of a heterogeneous computer networking laboratory this paper presents the experiences gained by the authors in the design and implementation of a heterogeneous computer networking laboratory. hopefully, this laboratory model would better prepare any computer science department's curriculum to meet the challenges presented by the rapid advancement of telecommunication technology. guillermo a. francia randy k. smith computer education technology (cet) computer education and computer based education are becoming more and more complex. the theory part is well taken care of by acm curriculum. however, the practical laboratory aspect of the acm curriculum is left wide open for individuals to decide. this leads to the widespread variations and limitations in the practicum indicating that computer science is not treated as a science, rather as an art. this problem is related to another basic problem regarding what aspect of computer science should be treated as a science. to teach or get trained in computer science, we need to provide 'paradigms' (and not mere syntax and semantics of programming languages). the construction of paradigms is fluid at the present. there is a need to investigate the theory of 'paradigms', their construction and use in computer science. the purpose of the panel is to identify some key aspects of 'paradigms', their nature and their application and the type of ongoing research in this direction. n. ramasubramanian post-mortem dump on changes, 1981-1983 (a panel discussion) during the past two years, every university computing center has been experiencing and striving to respond to a broad spectrum of changes in their own local environments. the panelists will share their own recent experiences in the following areas: an expanding user audience, evolving computer systems, and changing organization structure. the discussion will focus on viewing the stimulus of change as a positive force, providing opportunities for creative and innovative solutions, for re-thinking of roles and responsibilities, and for re-evaluating institutional procedures and structures. alice howard surveyor's forum: related information won kim risks to the public in computers and related systems peter g. neumann preparing future teachers to use computers: a course at the college of charleston frances c. welch the impending mcmorphosis of the global professor m. o. thirunarayanan formal techniques for object oriented software development dennis de champeaux fifteen years acm the development years of acm, as recounted in 1962 by founding member and former president franz l.
alt, depicts the players and progress of an organization committed to sharing computing knowledge and skills. franz l. alt course content for computer science at the secondary school level a review of a preliminary report by the task group on secondary school computer science, working under the acm elementary and secondary schools subcommittee. jean b. rogers the paradox of place thomas a. horan planning for student computing sue stager leota boesen helping users help themselves this paper discusses how user's can help themselves. in order to help users help themselves, consultants must first examine and redefine their primary responsibilities. mildred s. joseph secretary's letter: the role of acm in standards david c. wood education (abstract): past, present, future norman gibbs the management of end user computing end users can be classified into six distinct types. each of them needs differentiated education, support, and control from the information systems function. to support a large number of their applications a new computing environment, "the third environment" must be developed by information systems (i/s) management. close attention must also be paid by i/s management to the need to involve "functional support personnel" (end users in each functional area who spend most of their time programming and aiding other end users) in the i/s end user management process. john f. rockart lauren s. flannery toward a new politics of intellectual property pamela samuelson improving gender equity in computing programmes: some suggestions for increasing female participation and retention rates joy teague val clarke a special learning environment for repeat students in 1997, surveys of a group of introductory programming students, in a class with a predominance of repeat students (i.e., students who had previously failed the subject), provided an opportunity to establish a profile of the weakest students. these students were the "alienated" ones who had not wanted to do this course in the first place, had little motivation to learn programming, and were characterised by poor class attendance and low work output. in 1998 a new learning environment, tailored to these students' special needs, was implemented to encourage them to achieve success. improvement was observed in many aspects of the learning behaviour of these students. judy sheard dianne hagan europe 1992: fraternity or fortress? the following 1989 william k. mcinally memorial lecture was presented at the university of michigan by gerrit jeelof, a champion of the movement toward european enconomic unity. he discusses the upcoming european integration and its implications for world trade. gerrit jeelof experiences with an innovative approach for improving information systems students' teamwork and project management capabilities william d. nance technology transfer macro-process: a practical guide for the effective introduction of technology in our efforts to increase software development productivity, we have worked to introduce numerous software development techniques and technologies into various target organizations. through these efforts, we have come to understand the difficulties involved in technical transfer. some of the major hurdles that these organizations face during technical transfers are tight schedules and budgets. 
we have made efforts to lighten this load by using various customization techniques and have defined an overall process called the technology transfer macro-process that we can use to introduce a wide variety of software development techniques and technologies into a target organization. this paper introduces this simple and practical process along with important methods and concepts such as the process plug-in method and the process warehouse, for the introduction of new tools, technologies, and processes within an organization. the issue of initial productivity loss will also be discussed and a suggestion on how to avoid this will be made. these methods have been successfully used to introduce object-oriented technology (oot) into actual development projects and have helped to increase overall productivity within the target development organizations. tetsuto nishiyama kunihiko ikeda toru niwa the role of certification in fostering professional development in the field of computing the institute for the certification of computer professionals (iccp) currently offers programs for two certificates, the certificate in data processing (cdp) and the certificate in computer programming (ccp). other programs are under development by the institute. all these programs are designed to foster development of a computing profession within which high standards of knowledge, performance and conduct are maintained. testing and certification provide an effective means of recognizing those individuals who have attained a level of excellence in their knowledge and experience in a field. achievement of a current certificate provides these individuals with an incentive for the continuing education and professional growth necessary in an area experiencing the rapid development of the field of computing. preparation for certification may be more flexibly organized than the preparation required for traditional academic degrees. these factors combine to make the programs of the iccp a potentially important influence in the continuing professional education of many working in the applied areas of the field of computing. j. r. sopka programming as writing: using portfolios christopher j. van wyk inside risks: using names as identifiers donald a. norman lobbying for computer legislation jay bloombecker the present state of historical content in computer science texts: a concern kaila katz organizational power and the information services department a theory of intraorganizational power is discussed and applied to the information services department. the results of a study of the power of five departments in 40 manufacturing plants is presented. hypotheses about the levels of power of information processing are not supported by the findings; however, the power theory in general does receive support.the information- services department is perceived as having low levels of power and influence in the organization: reasons for this unexpected finding are discussed. the paper suggests several explanations for the results and possible problems in the organization. recommendations to senior management and the information- services department are offered. henry c. lucas toward an ethics of persuasive technology daniel berdichevsky erik neuenschwander on-campus cooperative education in the past, on-site cooperative education has been the primary means of providing practical experience for computer science students. on-campus cooperative education is proposed as a viable alternative. 
this paper describes on-campus cooperative education as practiced at byu. advantages and disadvantages for the sponsoring company, university, students, faculty, and company personnel involved are also presented. the last part of the paper describes several guidelines which, if followed, should improve the educational experience. scott n. woodfield gordon e. stokes vern j. crandall news track robert fox evolution of an organizational interface: the new business department at a largeinsurance firm this paper describes how the work organization and computer system of the new business department at a large life insurance firm have interacted and evolved over time. the dynamics of interaction are explained largely in terms of the economic incentive to reduce the length of transaction-processing chains and the more political goal of extending managerial control. it is argued that examining the interaction of organizations and computer systems can contribute to a better theoretical understanding of the development of large computer systems and offer guidance to designers of user-computer interfaces. a graphical technique for depicting organizational interfaces is presented. andrew clement c. c. gotlieb the university of south carolina computer science institute the continuing deficit of computer related specialties is a cause for concern in the state of south carolina. this deficit could be reduced if the two and four year colleges in the state offered more computer related courses to their students. most of these colleges lack the appropriately trained faculty. in an effort to retrain existing faculty, a university of south carolina computer science institute was established in the summer, 1979. the primary goals of the institute are to upgrade the computer science competency of existing faculty, to utilize these newly trained faculty members to meet local demand for undergraduate instruction in computer related courses, and to conduct the institute in a manner that allows statewide cooperation. thus, participating colleges, knowing their own needs, will be able to integrate computer related courses into their programs of study. the larger colleges will be able to offer baccalaureate programs with a major in computer science, and the smaller ones can offer two year certificate or training programs. d. j. codespoti j. c. bays a new approach to computer science in the community college: negotiated teaching and learning dominic magno provision of market services for eco compliant electronic marketplaces sena arpinar asuman dogac cross-training of student consultants this paper describes the training sessions developed at syracuse university academic computing services to train the student consultants who work at one of five remote sites around campus. students must consult on any number of questions involving both mainframes and microcomputers. there are several different systems and many software packages which the consultants must also be familiar with. most student consultants will choose to work at either a microcomputer or a mainframe site, though some will chose hours at both types of sites. for these students cross training is essential. since there are often times when students must find someone to replace them when they are unable to work, it is desirable to have a large group of students who know enough about each site to work there effectively. agnes a. 
hoey unintentional power in the design of computer systems chuck huff educating computer people about people & computers: report on conference panel, hci'95, huddersfield this report describes a panel session on hci education held at the hci'95 conference. during the debate, expert views were expressed about many of the key issues affecting hci teachers today, including: how can educators cope with the flood of new hci ideas? should hci be fully integrated into software engineering courses or taught in specialist modules? what are the core elements of hci that all students should learn? there are perceived cultural differences in hci teaching in the uk, usa and scandinavia --- are these differences real and are they important? should we be sharing educational materials? can hci design be taught or are creative designers born not bred? philip j. a. scown barbara mcmanus towards benchmarks for maintainability, availability and growth/evolution(mage) aaron brown ada and the acm randy dipner intellectual property rights david l. toppen barbara morgan don mcisaac martin ringle richard giardina working knowledge: how organizations manage what they know thomas h. davenport lawrence prusak advising incoming cs students for what lies ahead in the mid 1980's freshmen at two universities were surveyed and their responses about computer science courses were compared with their responses about other freshmen courses…students were more likely to report reality shock, confusion, control attempts, anger, and withdrawal in their computing courses than in other courses [1]. today computer science departments still experience some of the highest dropout rates of all disciplines with a range from 25% to over 50% attrition by the end of the first year. this panel was formed to discuss the various advising methods used by schools to attempt to prepare the students for the rigors of what their first year experience in computer science courses will involve. helping the student understand what will be required of them goes a long way toward assuring the students success in the computer science major. can more be done? are any schools doing innovative approaches to this problem? robert bryant george hauser terry scott sherri shulman programmed computing at the universities of cambridge and illinois in the early fifties the development of methods of using computers for calculations in the early fifties at cambridge and illinois universities. they are the recollections of a participant. d. j. wheeler standardization, innovation and microsoft barry fagin virtual teams + virtual projects = real learning (abstract only) adele goldberg the special training needs of the first-time microcomputer user lisa hines user privacy in a networked environment (workshop): legal, policy, and ethical considerations when responding to complaints of misuse rodney petersen william kupersanin introduction to computer use: a course non-computer science majors at a large joan m. cherry rtp: a transport layer implementation project this paper describes a project for use in computer networks courses, implementation of the reliable transport protocol (rtp), that gives students hands-on experience with network protocol concepts and construction. lecture topics such as the protocol layering model, sliding window protocols, packet formats and headers, techniques for establishing and closing connections, and udp sockets programming are all driven home via first-hand experience. 
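to make the flavor of such a project concrete, here is a minimal sketch, assuming python and a purely hypothetical header layout (sequence number, acknowledgment number, flags, and a toy checksum); it is not the packet format specified in the course, only an illustration of packing a header and sending a segment over a udp socket, the unreliable substrate on which students build reliability.

    import socket
    import struct

    HEADER_FMT = "!IIHH"   # hypothetical layout: seq (4 bytes), ack (4 bytes), flags (2), checksum (2)
    HEADER_LEN = struct.calcsize(HEADER_FMT)

    def make_segment(seq, ack, flags, payload):
        # pack the header with the checksum field zeroed, then fill in a toy checksum
        header = struct.pack(HEADER_FMT, seq, ack, flags, 0)
        csum = sum(header + payload) % 65536          # toy checksum, not the real internet checksum
        return struct.pack(HEADER_FMT, seq, ack, flags, csum) + payload

    def parse_segment(segment):
        # split a received datagram back into header fields and payload, and verify the toy checksum
        seq, ack, flags, csum = struct.unpack(HEADER_FMT, segment[:HEADER_LEN])
        payload = segment[HEADER_LEN:]
        ok = csum == sum(struct.pack(HEADER_FMT, seq, ack, flags, 0) + payload) % 65536
        return seq, ack, flags, payload, ok

    if __name__ == "__main__":
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # udp: delivery is not guaranteed
        segment = make_segment(seq=1, ack=0, flags=0, payload=b"hello")
        sock.sendto(segment, ("127.0.0.1", 9999))                 # fire and forget; reliability lives above this
        print(parse_segment(segment))

the reliability machinery the abstract mentions, such as sliding windows, timeouts, and retransmission, would be layered on top of sends and receives like these.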
students gain general programming and debugging experience on a realistic, event-driven, asynchronous application as well, and necessarily exercise their knowledge of algorithms and data structures. brad richards accreditation in canada suzanne gill an evaluation of the paired comparisons method for software sizing this paper evaluates the accuracy, precision and robustness of the paired comparisons method for software sizing and concludes that the results produced by it are superior to the so called "expert" approaches. eduardo miranda the new computer science: it meets many needs requests from local industries led to the need for new curricula in computer science at the university of south carolina at spartanburg. the faculty discovered that the body of courses outlined in curriculum 78 could be manipulated into alternative curricula, while leaving the standard acm major intact. indeed, the acm curriculum includes exactly half of the "technical" courses required in the dpma model curriculum. thus, by using curriculum 78 as a base, it is possible to implement a number of specialized degree programs without creating new courses and adding the new faculty required to teach them. a. crosland d. codespoti successful associate degree programs in the computing sciences suzanne e. gladfelter william c. harris karl j. klee the civilization of illiteracy mihai nadin chi 99 sig: sigchi's role in influencing technology policy jeff johnson a model for incorporating learning theories into preservice computer training carl f. berger elizabeth a. carlson an acm response: the scope and directions of computer science the national research council's computer science and telecommunications board (cstb) chartered a two-year study on the scope and directions of computer science. as part of this study, acm was asked to provide input on three important questions, the answers to which could have significant impact on the future direction of our discipline and profession. barbara simons dennis j. frailey a. joe turner stuart h. zweben peter j. denning from system design to democracy steven e. miller computer chess panel - 1983 this year's acm annual conference will host a unique event in the united states, the fourth world computer chess championship. acm began sponsoring computer chess tournaments in 1970. the first world championship was held in stockholm in 1974. subsequent world championships were held in toronto in 1977 and in linz, austria in 1980. one popular activity at acm tournaments has been a panel discussion among the computer chess authors. this year, since the acm is hosting the world championship, the panel will include participants from the u.s. and europe. we hope to hear from the authors of the three former world championship chess programs, kaissa from the institute for systems science in moscow (1974), chess 4.6 from northwestern university (1977), and belle from bell labs (1980). in addition, this year's tournament and panel will feature authors of microcomputer chess programs as well. some of the questions to be discussed will be: has the chess playing strength of the programs reached a plateau? will research into "expert systems" migrate into the development of better chess programs? what is being done to use the 32-bit micros in chess programming? what about the "super" computers? we urge the audience to come prepared with questions, since these panels have elicited lively discussions in the past. this year should be no exception. 
benjamin mittman vertical integration in group learning this paper is mainly concerned with the teaching of computer science to first year (freshman) students. the method outlined is an attempt to change their generally 'convergent' attitudes into a more 'divergent' way of tackling problems. one of the most wasteful features of modern education is the vertical separation of students, so that the collective wisdom acquired by one generation is unavailable to the next. merely talking to those who have successfully overcome their problems is a great encouragement, and the presence of a senior acts as a catalyst in a group. one special feature of the brunel university position is that third and second year students have experienced work periods as "students." few of the lectures can comment on this aspect of the course from personal experience, so if real guidance is to be given it must be from "older" students. these students will have assimilated the group methods used in industry, commerce and research and will be able to organize their group to pass on their experience by example. our attempt at vertical integration involves treating all the students in a less paternalistic fashion and one feature of this, is to make the participation of senior students voluntary, with no "credits" for work contributed. we hope that as well as enjoying the experience, they will respond and gain from the reflection on other subjects. r. d. parslow shortages of qualified software engineering faculty and practitioners (panel session): challenges in breaking the cycle one of the most serious issues facing the software engineering education community is the lack of qualified tenure-track (full-time) faculty to teach software engineering courses, particularly at the undergraduate level. similarly, one of the most serious issues facing the software industry is the lack of qualified junior and senior software engineers. this shortage cycle has existed for some time, and if it is not addressed properly will only worsen, thereby affecting the software engineering field in a more general way than it has already. the objective of this panel is to put a number of suggestions for improvement into discussion and debate in order to evaluate their potential and viability. nancy r. mead hossein saiedian gunther ruhe donald j. bagert helen edwards michael ryan a computer science freshman orientation course this paper describes an orientation course for beginning computer science majors. the course is different from the cs 1 or computer literacy courses, but similar in intent and content to orientation courses in engineering, business, and other fields. its purpose is to give students an overview of computer science and an idea of what computer professionals do so that students can make an informed career decision. other emphases for the course are practice in problem solving, experience working in groups, teaching basic technical (non-programming) skills, social and ethical issues, and making students aware of the resources and opportunities available to them such as internship programs. influences and constraints on the design of the course and suggested changes the next time the course is taught are also described. curtis r. cook what about training: rinnnnnnnng, rinnnnnnnng, rinnnnnnnn consultant: hello, sdsc consulting. can i help you? user: hi! yes, please. i'll be getting an account on your system and am interested in any kind of training you do for users. 
first of all, do you have any courses for the novice user and/or the user who's been using the system a while? c: yes, for the new users we have two days of introductory workshops, four in all. for those who already know the basics and want to get beyond logging in, we offer an additional day of material, divided into two workshops. u: that sounds good! how often do you hold your classes? once a month? c: well, for the first two years, we did teach the courses once a month, often twice a month. however, we've found that our average user has become more knowledgeable about the system, and the requests for the intro-courses have therefore been declining, so we're teaching once a quarter now. basically the number of courses we teach is determined by you, the users; we're flexible to teach as many or as few as you request. u: it sounds like we users can help determine what services we get! does that mean that if enough of us were to want you to teach courses on any other topics in the future you might? c: certainly!! in fact, the advanced workshops started just last fall as a result of requests from users. we're also planning to develop courses to cover graphics and two other programming languages: c and pascal. the plans for these courses have been developed based on both direct requests and a need by users that we've noted from the number of questions that our consultants get in these areas. u: great, i'll look forward to hearing more about these! let me push my luck: in addition to your flexibility with what is taught and how often it's taught, how about where it's taught? would you consider teaching at our university, for example, if we had 15 or 20 interested people? c: absolutely! in our first year alone, we traveled to five different sites and in the second year we went to six additional sites. we've gone from maryland to hawaii for our users! (are you by chance calling from hawaii?) u: well no, we're in plainsville, so how about if we come to sdsc? can you describe your training facilities there and accommodations for out-of-towners? c: sure. we've got a nice training room with 20 mac pluses for the students, plus one hooked to a projector for the instructor, all of which are hooked directly to our cray supercomputer. in addition, there is a white board and an overhead projector, in case the instructor wants to use them. as for accommodations, users can stay anywhere they like, at their own expense. there are many nearby hotels which we can recommend. we only provide accommodations during our two-week, in-depth summer institutes. u: that sounds reasonable, as long as they're near the beach! what about handouts or documents. do you provide any of these? c: of course! with each of the four introductory and two advanced courses we give each user a handout that the instructor goes over. like all our documents, each of these is online so any user can print out copies of them at any time. in fact, while students are here they can print out and take with them any documents they think they might need in addition to those covered during the course. we've got almost 14,000 pages! u: great! sounds like i can get all the reading i could ever want! you mentioned a bunch of terminals in the training room. does that mean that we also get a chance to work on the cray while we're there, or are the classes all lecture? c: oh, no! you definitely get plenty of time to use the cray during class, in some classes more than in others. 
there's a mix in each class of lecture, lecture while the teacher demonstrates online (sometimes with students also online) and pure practice time when the instructor moves around helping everyone. the exact format of a particular class depends on the topics covered. u: ok, i like the sound of that. one more thing: could you please tell me a bit more about your current courses? that is, tell me what is covered in each? c: sure! on the morning of the first day we cover the introwork1 document, which discusses how to access the computers here from different sites; ctss, the cray operating system, and its routines; our long- term storage system, known as cfs, and how to access it using mass; as well as file transfers over various networks. during the afternoon on the first day we go over the introwork2 document, which covers different information sources available to you and two editors that we have available: tedi and sed. the information sources include our online documentation system, the electronic bulletin boards, and sending and receiving electronic mail. on the second day we start with ezcft, which is the handout describing the cray fortran compiler. on that morning we go over converting programs from other systems to run on our cray, compiling and loading the programs, then figuring out what some of the run-time errors you can get might mean. we also touch on vectorization, which is covered more in- depth in an advanced workshop. in the afternoon of the second day we discuss the introwork4 document. this includes using the dynamic debugging tool, ddt, using pascal and c, and the math and graphics libraries available to you, as well as a utility we have for displaying and transporting graphics files. u: wow! i guess i'd better stay awake for those sessions! what's left to teach in the advanced workshops? c: well, in the morning we cover ezjobcontrol, which is the common command language. in the afternoon we go over optimization, which includes monitoring optimization and requirements for, impediments to, and techniques for vectorization. u: i'm impressed, but i'm afraid to ask one last thing. i suppose all this must cost a bundle. right? what do you charge? c: why nothing of course! we even throw in coffee and pastries for morning breaks each day, and sodas and cookies in the afternoons! u: what a deal! i want to go to san diego!! where do i sign up? christine martin microcomputer users groups tamera fine-trail evolution and evaluation of spec benchmarks jozo j. dujmovic ivo dujmovic establishing a new curriculum in computer technology: the process continues josephine alessi freedman bibliography update '99: recent books and articles of interest herman t. tavani a middle-out concept of hierarchy (or the problem of feeding the animals) developers use class hierarchies to model "real world" objects and as metaphors to organize their programs. we (the authors) believe that developers intuitively understand the variety of everyday objects: this cup, that water, those pencils, or in computer terms, this number, that window, those records. we also believe that developers approve of only one concept of class hierarchy, one out of many possible concepts.many hidden assumptions lurk behind object-oriented inheritance. traditional object-oriented concepts of hierarchy are obsessively top-down and ignore many obvious bottom-up relationships. 
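as a concrete, purely illustrative picture of the top-down style being criticized, consider a minimal sketch, assuming python and hypothetical classes that loosely echo the paper's later "feeding the animals" example (the code is not taken from the paper); every relationship here is declared downward from an abstract root:

    from abc import ABC, abstractmethod

    class Animal(ABC):                      # abstract root declared first, top-down
        @abstractmethod
        def feed(self):
            ...

    class Dog(Animal):                      # a plausible 'basic level' category
        def feed(self):
            return "kibble"

    class Beagle(Dog):                      # more specific than the basic level, little new behavior
        pass

    def feed_all(animals):
        # client code written against the general class relies only on runtime dispatch
        return [a.feed() for a in animals]

    print(feed_all([Dog(), Beagle()]))      # ['kibble', 'kibble']

nothing in this style expresses the bottom-up relationships the authors have in mind, such as a category growing out of familiar concrete instances, which is the gap the rest of the abstract addresses.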
authors who write about object-oriented programming usually define class hierarchies in traditional or archaic terms, referring to the theory of realism and ancient greek philosophy. but, just because somebody sees a similarity between concepts in smalltalk or c++ and concepts described by plato and other philosophers, does not mean that the perception is appropriate. while the correspondences between classes and concepts from philosophy are interesting, they are not the only possible or useful correspondences. in _women, fire, and dangerous things_ on page 6, lakoff writes, "in fact, until very recently, the classical theory of categories was not even thought of as a _theory._ it was taught in most disciplines not as an empirical hypothesis but as an unquestionable, definitional truth."we want to change that assumption and expand the possibilities of what class hierarchies can be. specifically, we introduce a cognitive science perspective of objects and the _basic level_ of a class hierarchy. we believe that developers use software classes in a middle-out manner, just like people use linguistic categories. we believe that this middle-out interpretation explains some of the limits to modeling with traditional object-oriented inheritance. we use the basic level to distinguish between the type relationships within a class hierarchy: above the basic level, developers use runtime type relationships, and below the basic, level developers use compile-time type relationships. we also show that developers do program above the basic level.this paper is organized as follows. in the first section, we discuss _class_ and _hierarchy_ from a cognitive science point of view. in the second section, we define the _basic level_ and describe a middle-out concept of _hierarchy._ in the third section, we explain the developer's version of basic level through the example of "feeding the animals" and show how it relates to type checking. we also describe some of the techniques that developers use to program above the basic level. and, in the fourth section, we argue that many traditional concepts of inheritance fail above the basic level. we argue that the subset and prototype concepts of hierarchy do not work in general, and we argue that liskov's substitution principle and meyer's open-closed principle only work below the basic level. l. s. b. raccoon puppydog p. o. p. the collision of trademarks, domain names, and due process in cyberspace a. michael froomkin team projects in systems analysis and design sandra brown patricia nettnin why do they fail?: how do they succeed? the success of some students in cs2 this paper is intended to stimulate discussion about the progress of students through the typical cs2 course. it is based on test results from only a single class, and is thus extremely limited in terms of applicability or extensibility. however, the problems encountered by the class on this test seem to be of a nature that may be quite general. the paper focuses on the problems the students had writing the test and suggests reasons why they may have encountered these problems. it represents a first attempt to see what the students in this course are lacking for eventual success. m. 
dee medley survival: a tale of a senior project elaine anderson picture program execution (poster session) janet linington mark dixon innovative teaching practices in computing education (poster session): the tla project jan holden alison young the cognitive flexibility theory0: an approach for teaching hypermedia engineering hypermedia engineering constitutes the employment of an engineering approach to the development of hypermedia applications. its main teaching objectives are for students to learn what an engineering approach means and how measurement can be applied. this paper presents the application of the cognitive flexibility theory as an instructional theory to teach hypermedia engineering principles. early results have shown that students presented a greater learning variability (suggested by their exam marks) when exposed to the cft as a teaching practice, compared to conventional methods. emilia mendes nile mosley steve counsell the first-course conundrum, why change? susan horwitz kenneth appel theresa cuprak david kay christopher nevison leon schram mark stehlik owen astrachan research-led innovation in teaching and learning programming we describe an attempt to bridge the gap between educational research and practical innovation by making a package of changes to an introductory programming module based on the insights of existing theoretical work. theoretical principles are described, used to evaluate previous practices and then employed to guide systematic changes. preliminary evaluation indicates substantial improvements in student performance and enjoyment while indicating areas in need of further work. john davy tony jenkins the open information society niv ahituv self-evaluation system for digital systems subject (poster) marta prim jordi roig reading summaries (poster session): relating class to student's problems with the current reading assignment lillian n. cassel metamodels for system development p. kokol extending the technology acceptance model: the influence of perceived user resources there has been considerable research on the factors that predict whether individuals will accept and voluntarily use information systems. the technology acceptance model (tam) has a base in psychological research, is parsimonious, explains usage behavior quite well, and can be operationalized with valid and reliable instruments. a limitation of tam is that it assumes usage is volitional, that is, there are no barriers that would prevent an individual from using an is if he or she chose to do so. this research extends tam by adding perceived user resources to the model, with careful attention to placing the construct in tam's existing nomological structure. in contrast to measures of self-efficacy and perceived behavioral control that concentrate on how well individuals perceive they can execute specific courses of action, this paper examines perceptions of adequate resources that can facilitate or inhibit such behaviors. the inclusion of both a formative and reflective set of measures provides the opportunity for the researcher and manager to decide whether to evaluate only the overall perceptions of adequate resources or also the specific underlying causes. the extended model incorporating these measures was then tested in the field. the results confirmed that perceived user resources is a valuable addition to the model. kieran mathieson eileen peacock wynne w. chin creating self-paced courses for cs majors and non-majors gerald l. 
boerner carol backer stoker ethics, information systems, and the information highway richard a. huff carl stephen guynes robert m. golladay surveyor's forum: related information e. b. fernandez ethics and computer security: cause and effect newspapers around the world carried this report of the west german hackers' infiltration of the nasa computer network. the hackers were members of a computer club in hamburg, germany, who called themselves chaos. the group, founded in 1985, were mostly hobbyists, but many were good programmers. each member had to qualify for acceptance in the group, and two of the members were systems managers [4]. during the summer of 1987, garry trudeau and his widely read "doonesbury" carried a series of strips addressing the universal and unashamed practices of unethical conduct in high places. one of those strips went like this: … and you still feel no remorse, phil? remorse? what for? we're talking about a very grey area, padre! 75% of the financial instruments available today didn't even exist when i was growing up! we've had to make up the rules as we go along! the insider trading rules in particular are very complex and ambiguous! hell, i didn't even know i was breaking the law. but you were being paid in an alley. so i had a hunch. to help alleviate phil's problem and others of this nature, senators donald riegle, a democrat from michigan, and alfonse d'amato, a republican from new york, introduced legislation to spell out for wall street speculators precisely what constitutes insider trading in stocks. in an associated press release, riegle was quoted as follows: for those that want to obey the law, who are the overwhelming majority of people in the securities business, this puts in their hands the kind of knowledge and a body of understanding in a coherent way where they can carry out better policing operations within their own firms. it is time for data processing managers throughout the industry and academicians around the world to make this kind of definitive statement for all of those people in their realms of influence. marlene campbell standardizing at the leading edge charles r. symons a model curriculum for a liberal arts degree in computer science this report proposes developing a rigorous undergraduate curriculum for a b.a.-degree program in computer science. the curriculum is intended as a model not only for high-quality undergraduate colleges and universities, but also for larger universities with strong computer science programs in a liberal arts setting. norman e. gibbs allen b. tucker providing mark-up and feedback to students with online marking david v. mason denise m. woit the logic tutor (poster session) david abraham liz crawford leanna lesta agathe merceron kalina yacef teaching practices c. dianne martin a limitation of the generalized vickrey auction in electronic commerce: robustness against false-name bids yuko sakurai makoto yokoo shigeo matsubara legally speaking: the economic espionage act: touring the minefields andrew grosso team work help to survive the user services crunch kay colbert kathleen s. finder student paper competition (abstract only) the top three papers from the cfp 2000 student paper competition have been printed in this proceedings. they are: internet filter effectiveness: testing over- and underinclusive blocking decisions of four popular filters, christopher d. hunter, pages 287-294; when social meets technical: ethics and the design of "social" technologies, patrick feng, pages 295-301; quantum "encryption", mark v.
hurwitz, pages 303-313 sara basse jean camp dan gillmor wiley hodges bruce umbaugh danny yee creating high computer impact in a small liberal arts college during the period from 1977 to 1979, computer usage at goucher college went from an involvement of 10% to 60% of the student body; a goal of 90% appears to be within reach by 1981. during the past two years, approximately 40% of the faculty have begun to use the computer in some way. this rapid rise in computer usage was accomplished without the introduction of a required course in computing, without acquisition of an on-campus macrocomputer or minicomputer, and without creation of a computer center staff devoted to academic computing. b. l. houseman collaboration vs plagiarism in computer science programming courses carolee stewart-gardiner david g. kay joyce currie little joseph d. chase john fendrich laurie a. williams ursula wolz proposed evaluation criteria for information systems programs accreditation john t. gorgone thomas i. m. ho john d. mcgregor websites ken korman computer science accreditation (panel session): guideline application to some existing curricula john f. dalphin bruce mccormick gordon stokes integrating case studies and projects in is management education r. j. whiddett j. a. handy j. l. pastor viewpoint: why women avoid computer science computer literacy at paine college alice m. simpkins decentralizing university computer facilities: some uk experience p. brady r. startup profiles in computing karen a. frenkel a first problem for the algorithms course (poster session) i present a problem to be used in the first class of the algorithms course as an introduction to the topic. two algorithms are given, simple but rich enough to illustrate several issues. j. Á. velasquez-iturbide goals for and lessons from a computer literacy course the primary component of most computer literacy courses has been learning to use a computer. however, a detailed treatment of societal issues, including the view of humans as machines, is equally important. some difficulties of implementing a literacy course are also discussed. john t. peterson where have all the faculty gone? ubiquity plan for the development and implementation of the acm computer science curriculum for high schools (panel) susan s. merritt charles bruen j. phillip ease darlene grantham viera k. prouix charles rice gerry segal carol e. wolf information resources management (irm): where did it come from and where is it going most people would agree that the term refers to all of the specialists, facilities, supplies, hardware, and software which an organization employs to collect, handle, store, disseminate, and use its data and information. information flows into and out of organizations in enormous quantities. some information stays around for very long periods of time---in filing cabinets, microforms, magnetic tape capsules, and other media. thus its uses can be broadly divided into two major categories: transaction support and decision support. transaction support information means data that is used to sustain an organization's routinized operations. as an example, companies which manufacture a product have to pay employees, keep track of inventories, account for funds in payables and receivables, and so on. these are transaction support operations, and usually involve some kind of informal or formal information system. in larger companies, often a computer supports the information system---hence the term "computerized" or "automated" information system.
with this support, oftentimes an information flow is not computerized. forest woody horton taking "computer literacy" literally computer literacy implies an analogy between computer-related skills and linguistic literacy that has not been seriously explored. recent studies of linguistic literacy can illuminate the definition of computer literacy and can suggest new ways of teaching it. carolynn van dyke the planning and procedures associated with the western new england college winter invitational high school programming contest james r. carabetta a computer science/mathematics major for liberal arts colleges the concepts of the model curriculum for computer science in a liberal arts college [3] and a traditional mathematics major are combined to form a computer science/mathematics major. this major is particularly suited to the mathematics faculty retrained in computer science, and it provides the students with strong preparation for graduate study or employment in computer science. the major's requirements include six computer science courses (the model's introductory and core courses), six mathematics courses, one advanced computer science elective and a year of introductory physics. nancy l. hagelhans our "miracle": self-supporting student microcomputer labs during the 1987-88 school year, western illinois university began a successful experiment to create a self-supporting microcomputer lab. from conversations we've had at recent acm conferences, we feel this experience is rare enough that reading about it will help other schools. naturally, we credit our success to the dedication and excellence of the people involved, but beyond that, the success of the lab was also due to several key management decisions: obtaining support from top administrators before starting; planning and operating in cooperation with several departments; keeping lab funds in a separate account, used only by the lab; knowing and carefully following the rules of the university; isolating and meeting student needs: providing ready help to lab users; staying open throughout student study hours; thinking of ourselves as sales clerks, not bureaucrats; starting small and seeking maximum benefits from each dollar; using inexpensive computers and sharing printers; and using donated older versions of popular software. the results of these decisions speak for themselves: the lab is self-supporting; its financial records are in good order. karen stallworth jim strasma ethics: the appearance of impropriety joseph s. fulda student employees supporting campus computing: benefits for everyone scott w. siler nardi and o'day's information ecologies: using technology with heart robert r. johnson university computing services in 1995 m. d. glick rank and tenure concerns of computer science educators this forum is concerned with the problems faced by educators from various computing disciplines in the process by which promotion and tenure are granted. candidates for promotion and tenure often encounter difficulties in documenting their work and achievements in a manner acceptable to a review committee. the panelists, consisting of faculty who have successfully completed the rank or tenure review process, will summarize the process used at their respective institutions and share with conferees elements of their experience that were significant obstacles to their success and describe approaches they took which proved successful. laura baker stewart carpenter johnny carroll h.
paul haiduk imagination, truth and possible consequentialism promoting the possible john gehl baychi richard anderson successful corporate communities: all communities are not the same mark a. jones national educational computing policy alternatives during the past two years a number of bills have been introduced in the u. s. congress to improve the quantity and quality of computer instruction in our nation's schools and colleges. while none of these bills are likely to pass before the presidential election, they raise major policy questions regarding what is needed (and should be implemented) in 1985 and beyond. ronald e. anderson robert m. aiken richard close karen duncan marc tucker implementing a set of guidelines for cs majors in the production of program code bernard john poole timothy s. meyer laying the foundations for computer science this paper has three primary goals: stimulate the discussion of possible skills which might be incorporated into the k-12 curriculum in order to provide students with a foundation for the study of computer science. stimulate the discussion of strategies for incorporating into the k-12 curriculum the fundamental skills needed by students pursuing topics in the computer science discipline. present a possible set of fundamental skills. l. a. larsen information systems strategy and implementation: a case study of a building society the formation and implementation of strategy with respect to computer-based information systems (is) are important issues in many contemporary organizations, including those in the financial services sector. this paper describes and analyzes an in-depth case study of the strategy formation and implementation process in one such organization, a medium-sized uk building society, and relates the process to its organizational and broader contexts; the organization is examined over a period of several years and under the contrasting leadership of two different chief executives. the case study is used to develop some general implications on is strategy and implementation, which can be taken as themes for debate in any new situation. the paper provides an example of a more detailed perspective on processes in is strategy and implementation than typically available in the literature. in addition, a new framework for further research in this area is developed, which directs the researcher toward exploring the dynamic interplay of strategic content, multilevel contexts, and cultural and political perspectives on the process of change. g. walsham t. waema advanced placement in computer science: a summer workshop a discussion of an in-service course designed to give high school teachers the background needed to teach an advanced placement course in computer science is presented. the discussion outlines the decisions made regarding equipment and other facilities, support personnel, and textbooks. course outlines are presented, along with an evaluation of the course. james r. comer kurt a. schember session 3a: debate - software development paradigms g. fischer r. balzer f. l. bauer f. p. brooks t. e. cheatham h. mills a lab approach for introductory programming courses the lab portion of the introductory computer science course at wartburg college is described. these two-hour time blocks are designed to facilitate a high degree of student involvement through intensive practice in the development of algorithms and the application of key programming concepts. 
several positive outcomes of the laboratory approach are delineated and examples of specific lab sessions are provided. lynn j. olson reader comments john gehl report on the 7th annual maple workshop bruce w. char computer ethics and academic honesty: developing a survey instrument james h. adair james l. linderman an elementary school scenario david moursund managing the user services function over the years, a great deal has been written about performing the various functions of user services. last year's conference in montreal focused on a variety of topics, ranging from microcomputer support to supercomputer support and from online documentation to desktop publishing. as we---two "managers" of user services---watched, listened, read, and discussed the content and progress of the conference, it seemed to us that one key topic was missing: a discussion of the problems, concerns, and approaches to actually managing the multi- faceted user services function. since many of the conference's attendees were themselves managers, we thought that a session on "managing user services" could be truly helpful. at that time, we decided to put together our collective views on the user services management process---not because our views are particularly unique or because we think we have all of the answers, but because we have thought about and lived the process (which may help others) and because we would like to hear other views on the process (which may help us). the paper has been divided into six major topics of discussion: overview, internal organization, mandatory internal functions, classical functions, other ways to serve users, and concerns. each of us, user services managers at two distinctly different universities in two different parts of the united states, has addressed each topic from our organization's perspective. james m. pruett paul j. setze from needs assessment to outcomes: managing the training of information systems professionals ritu agarwal jayesh prasad michael c. zanino faculty (panel discussion): recruiting, retraining and retention why is it so difficult to attract computer science and information systems professionals to academic departments? why is it so difficult to retain faculty? what does it take to recruit qualified faculty? can faculty from other areas be retrained and utilized? the purpose of this panel is to discuss these issues and to suggest some possible solutions to these problems. john t. gorgone john beidler ask jack jack wilson electronic signature legislation ephraim l. michael guided self-development: an innovative approach to management education and development applied to information systems supervisors thomas w. ferratt ritu agarwal personal and professional growth in a user services career the commitment one makes to become part of a user services staff tends to be long term. this paper will examine what avenues are available for both career advancement and personal growth. a number of user related jobs in private industry have been created in the past years. a portion of this paper will look at parallel and new industry jobs and the differences in salaries and responsibilities those jobs have compared to university positions. also, personal growth can be a very attractive "perk" from a job. which environment is more likely to provide and foster that personal growth will also be discussed. 
jeb lawson a unifying project for csi daniel joyce digital: the y2k e-commerce tumble hal berghel a grand role for it howie jacobson a curriculum based visual interface for course authoring and learning marc kaltenbach rubiao guo national science foundation funding opportunities for undergraduate computer science faculty harriet g. taylor teaching computer communication skills using case study method modern management activities involve extensive usage of computer communication services. the poster summarizes experience in teaching these skills on the basis of case study method. two scenarios of computer game involving extensive usage of communication software and implementation of group decision making model are presented. przemyslaw polak the cai has invaded georgia state university the 1970's brought a dramatic increase in the use of computers in university communities. the impact was especially felt in the increase of novice student users. to meet the demand for training on the introductory level, user services groups plunged head first into a new field, teaching. at georgia state university for instance, to meet the ever increasing student influx into the use of computers, we developed twelve workshops each covering a specific area of study-a very traditional technique. we broke new ground by not only requiring that all students receive our training, but by also going into the individual classes at the professor convenience to present our material. once the professors saw that our training helped both them and the student, we were eagerly accepted. cai has been around for many years and there is an abundance of literature on the topic. jane t. anderson computer science for secondary schools: course content recommendations of the acm education board elementary and secondary schools subcommittee computers and computing are topics of discussion in many curriculum areas in secondary school. the four courses recommended by this task group, however, have computing as their primary content. the courses are: introduction to computer science i (a full year course) introduction to computer science ii (a full year course) introduction to a high- level computer language (a half-year course) applications and implications of computers (a half-year course) courses 1 and 2 are designed for students with a serious interest in computer science. course 1 can serve as a single introductory course for some students and also act as a prerequisite for course 2. at the end of two years of study, students should be prepared to be placed in second level computer science classes in post- secondary educational institutions or to take the advanced placement exam available through the college entrance exam board. courses 1 and 2 will require a significant amount of equipment for each student enrolled. computers, either microcomputers or a timesharing system with convenient storage for files (disks) and high-speed printers are necessary. courses 3 and 4 are designed to be of general interest to any student at the secondary level. course 3 is a course about programming. it is a general introduction to the process of writing programs in a high-level computer language. it requires extensive hardware facilities, as students will need to practice using whichever computer language is chosen for the instruction. course 4 includes information about the ways computers are used and about the impact of computer use on people's lives. 
because students in this course will be using a computer to learn about applications, teaching this course requires extensive software in addition to hardware. courses 1, 3 and 4 have been designed assuming no prerequisite courses and are not interdependent. teachers of all four of these courses should be qualified in the content area and preferably certified to teach about computing and computer science. the discipline of computer science has been evolving over several years and will continue to change. courses in computer science will similarly continue to change. well trained teachers are the cornerstone of effective instruction in such a fluid environment. jean b. rogers fred archberger robert m. aiken john c. arch michael r. haney john d. lawson cheryl lemke thomas a. swanson samuel f. tumolo a day in the life of...laura majerus laura majerus ask jack: researching companies jack wilson human centered systems in the perspective of organizational and social informatics rob kling susan leigh star bucking the tide: a transition from industry to academe how do colleges and universities deal with the increasing student demand for more computer information systems education, while qualified faculty (in short supply anyway) continue to leave for industry's greener pastures? the paper presents the author's personal perspective on recently adjusting to a faculty role after more than twenty years in industrial and research environments. many issues are dealt with, including: economics, institutional support, qualifications, lifestyle, work-load, and keeping up with technology. alternative solutions to the manpower problem of computing education are presented and examples are given of some university-industry technology transfer programs. specific examples of local academic support from the boston chapter of the society for information management will also be noted. robert l. chew y2k and euro project management: lessons learned janis gogan janice sipior can wiretaps remain cost-effective? robin hanson pc notes gene styer what is computer ethics? c. dianne martin defending against viruses and worms s. a. kurzban addressing student problems in learning computer graphics our goal is to improve the performance of computer graphics students by including problem- solving activities and situating their knowledge of computer graphics in authentic problems. the framework for achieving this goal is based on the apprenticeship model of training and integrates four modules: an interactive notes module which includes note pages, in-depth pages and a variety of activities such as interactive visualizations and algorithm animations, aimed at enforcing the acquisition of conceptual and factual knowledge. a case library which consists of stories of projects in computer graphics and maps them to common problems encountered in these projects. a problem-solving module which presents problems to the learner in a real-world context and provides him/her with support to solve the problems. a collaborative tool to support the undertaking of complex projects by teams of students. amnon shabo mark guzdial john stasko annual u.s. summaries of ph.d. production and employment in computer science orrin e. taulbee dot com versus bricks and mortar - the impact of portal technology (invited presentation) (abstract only) the "new economy is rapidly being adopted on a global scale as corporations vie for new competitive positions and defensive responses. 
incumbents, the so-called "bricks'n'mortar corporations", are generally finding it challenging, but usually rewarding, to extend their business practices to the internet. new entrants, the so-called "dot com companies", are unfettered from institutional rigidity and thus have an enormous opportunity to gain market share, but at the same time are frequently challenged to provide the same levels of brand awareness, product and service as at least some of the incumbents. in this presentation we consider how internet infrastructure software is evolving, and its implications for both brick'n'mortar and dot com organisations. chris horn the computer as a problem solving tool: a unifying view for a non-majors course daniel joyce open letter to a young master's degree computer scientist david l. travis teaching an ethics component to computer science majors (abstract) thomas j. scott richard b. voss cherri m. pancake the chi student volunteer program pia j. nielsen g. bowden wise karen j. horwitz a comparative analysis of design principles for project-based it courses john grundy computer security news no author five dimensions of information security awareness until the era of the information society, information was a concern mainly for organizations whose line of business demanded a high degree of security. however, the growing use of information technology is affecting the status of information security so that it is gradually becoming an area that plays an important role in our everyday lives. as a result, information security issues should now be regarded on a par with other security issues. using this assertion as the point of departure, this paper outlines the dimensions of information security awareness, namely its organizational, general public, socio-political, computer ethical and institutional education dimensions, along with the categories (or target groups) within each dimension. acm sigcse nsf ccli project showcase jane prey implementing a university level computer education course for preservice teachers dede heidt james poirot on-line dynamic interviews (odin) (poster session): a means of overcoming distance in student-teacher relations laura wilson jon preston russell shackelford adventures in selecting course materials donald voegele from washington: ceos unite to influence u.s. technology policy verbal sparring over u.s. technology policy has persisted unabated for over one decade and two presidential administrations. as politicians, academicians, and industry representatives continue to volley policy virtues, the nation has watched its strong lead in the world's technology tournament slip another few notches. industry observers lament that setting priorities, particularly in terms of r&d spending, has reached a turning point. diane crawford teaching hci with scenario-based design: the constructivist's synthesis this paper describes the application of scenario-based design in the teaching of human-computer interaction (hci), in an undergraduate software engineering program. specifically, we describe how the ideas of constructivism can be synthesized with the team-based efforts in managing software requirements. the paper serves as an experience report of ongoing action research the author has been conducting to revise the curriculum and pedagogy of a junior core course entitled _software psychology_. in particular, we depict some problem scenarios, helping the evolution of the course content, and developing our students as self-directed work teams of software professionals. 
the paper concludes with the author's lessons learned with this course enactment plus the necessary reflective evaluations therein. kam hou vat asden: a comprehensive design framework vision for automotive electronic control systems the automotive electronics industry is experiencing an era of unprecedented growth. driven by emissions and safety legislation, fuel economy constraints, cost constraints, and customer demand for convenience features and enhanced performance, electronic controls are steadily replacing their mechanical and hydraulic predecessors. as the sophistication of these systems grows, their complexity has grown dramatically as well, creating difficulties in the application of traditional engineering methods to modern systems. new design paradigms, such as model-based control, have begun to emerge. these factors have created a need for more sophisticated, integrated tool sets to help support the systems engineering process and manage the designs of the new systems. the automotive systems design environment (asden) project has been undertaken by motorola to address this need for a sophisticated, capable framework of interoperable tools. this project paves the way for a future where the "virtual automobile" becomes a reality: a car designed, simulated, and "driven" before the first physical prototype is even built. deborah wilson daniel dayton r. todd hansell campus-based industrial software projects: risks and rewards the involvement of commercial companies as clients in software engineering project work adds a new dimension to our students' education, developing the communication, team- working and managerial skills demanded by employers, [1]. commercial collaboration introduces risks into course work which, like the risks in any commercial software project, must be controlled if the outcomes for all parties are to count as a success. helen parker mike holcombe software maintenance costs: a quantitative evaluation william r. herrin structured systems analysis (part ii) v. a. owles m. j. powers staying connected: body of technology meg mcginity a generic model for on-line learning we describe a generic model for on-line learning which has been used to _develop_ a course unit in computer science, and to _evaluate_ a course unit in economics.the model may be used to produce a _template_ for on-line learning resources. alternatively a template developed intuitively by an experienced teacher may be _evaluated_ using the generic model. using these approaches both the model and the template may be refined.we also study the use of the model and templates as ways of _disseminating_ web-based on-line learning among colleagues in economics and computer science departments. john rosbottom jonathan crellin dave fysh cimel (poster session): constructive, collaborative inquiry-based multimedia e-learning glenn david blank william m. pottenger g. drew kessler martin herr harriet jaffe soma roy laboratories and other educational experiences based on curricula '91 angela goh peng-chor leong national high performance computer technology act: siggraph and nationl high- tech public policy issues d. j. cox mechanisms for coping with unfair ratings and discriminatory behavior in online reputation reporting systems chrysanthos dellarocas interview tricks from a professional recruiter lynellen d. s. 
perry identity authentication based on keystroke latencies the variables that help make a handwritten signature a unique human identifier also provide a unique digital signature in the form of a stream of latency periods between keystrokes. this article describes a method of verifying the identity of a user based on such a digital signature, and reports results from trial usage of the system. rick joyce gopal gupta a strategic plan for ubiquitous laptop computing david g. brown jennifer j. burg jay l. dominick electronic frontier: coming into the country imagine discovering a continent so vast that it may have no end to its dimensions. imagine a new world with more resources than all our future greed might exhaust, more opportunities than there will ever be entrepreneurs to exploit, and a peculiar kind of real estate that expands with development. john p. barlow perspectives on innovations in the computing curriculum (panel) john impagliazzo michael goldweber children, creativity and computers allison druin utilization of the career anchor/career orientation constructs for management of i/s professionals connie w. crook raymond g. crepeau mark e. mcmurtrey computer science program accreditation: the first-year activities of the computing sciences accreditation board this report summarizes the activities of the computing sciences accreditation board from its inception in 1984 through its first accreditation cycle completed in june 1986. the major activities during this period were directed at developing the csab structure necessary to carry out the accreditation process, and at conducting the first round of accreditation visits and actions. taylor booth raymond e. miller computer security and impact on computer science education the integration of computer security into existing computer science undergraduate education is an urgent and complicated task. with the increasing risk of computer intrusion, computer crimes and information wars, computer science educators bear the responsibility of cultivating a new generation of graduates who are aware of computer security related issues and are equipped with proper knowledge and skills to solve the problems. the task of integrating computer security into existing computer science programs, however, is complicated due to the fact that most faculty members lack the specialty knowledge in this field. this paper begins with a survey of the computer security field by examining the sequence of actions that the us government has taken since 1987 to counter the computer security issues, followed by an assessment of needs for practitioners in the field. a comprehensive approach of integrating computer security into an existing degree program is then proposed. the paper concludes with observations upon what should be taught and how computer security could be integrated into undergraduate education. t. andrew yang standards: when is it too much of a good thing? robert j. aiken john s. cavallini individual privacy in an information dependent society brian patrick clifford fortune cookie management for information technology support professionals john e. bucher a hierarchy for classifying ai implications artificial intelligence is rapidly emerging from the laboratory into the market-place. industrial robots are cost effective in a wide range of manufacturing tasks. expert systems are commercially available and scientifically useful. sophisticated chess machines are routinely sold in retail outlets. 
assaulted by the unprecedented pace of these developments, society is confronted with assimilating these "apparently-intelligent" artifacts. this paper will view these developments in the context of a hierarchy for classifying social impact. we examine to what extent the appearance of apparently-intelligent machines produces a paradigmatic shift in how society defines itself and its social relations to these machines. our analysis is performed in the framework of a taxonomy for segregating the continuum of effects that occur when advanced computer technology impacts society. ira pohl a self-paced first course in computer science as demand for a first course in computer science increases, more efficient and effective approaches to such a course become increasingly desirable. this paper describes the development and use of a completely self-paced cai course at the evergreen state college. use of behavioral objectives in designing the course is explained, the content of the course is outlined, the process used to develop the course is described, experiences with 256 students are reported, and some general observations on implementing cai courses are offered. john o. aikin the global it work force: introduction david arnold fred niederman contemporary trends in computing many computer science curricula use special topics courses as a vehicle to introduce students to new concepts and technologies. although the same policy is practiced at our institution, one course required of our associate degree students provides a forum for surveying contemporary trends in computing. such a course is essential for providing "a foundation of knowledge and skills sufficient to serve as a base for continued learning." [1] the purpose and content of this course is the topic that follows. richard m. plishka an introduction to tcl/tk: the best language for introduction to computer science courses this tutorial will present an introduction to the tcl/tk programming language which the instructor believes is the best language for introduction to computer science courses. this hands-on experience will be organized around an introduction to tcl/tk, sample tcl/tk programs, some of which were used in our introduction to computer science course, and tcl/tk resources. peter c. isaacson a multifunctional computer system for the physically handicapped h. hoffmann mapping information-sector work to the work force eileen m. trauth cautionary tales and the impact of computers on society mary b. williams david ermann glaudio gutierrez computing and accountability helen nissenbaum improving the performance of technologists and users on interdisciplinary teams: an analysis of information systems project teams jonathan k. trower detmar w. straub annual report of the association for computing machinery special interest group on management information systems (formerly the special interest group on business information technology) ephraim r. mclean presenting a united front syracuse university academic computing services employs approximately 70 students as aides to work as consultants in five remote sites. there are two full-time and two part-time managers to see that each site is running smoothly. what are the problems associated with the management of a remote site and the students who work there? how does the su system of remote consulting work? finally, how do the managers and the students keep their sanity? 
this paper proposes to answer all these, and other questions based on how remote computer consulting is handled by academic computing services at syracuse university. agnes a. hoey keith e. gatling computing science: achievements and challenges edsger w. dijkstra about this issue… adele j. goldberg if they build it, they will come (panel): creating a virtual academic department in cyberspace - a presentation by the e-works collective of the university of illinois at chicago niki aguirre sajjad lateef keith dorwick ken mcallister jim fletcher james j. sosnoski bandwidth isn't a problem kate gerwig the licensing of computer professionals (abstract) the 1992 csc annual debate produced a lively discussion focused on a resolution calling upon the acm to support efforts in state legislatures to regulate and license computer professionals. this followed proposals in several states for just such licensing. meanwhile, the ieee computer society committee on public policy has drafted a position statement endorsing "a national policy on the certification of safety-critical software by licensed, professional software engineers." do acm members support the licensing of computer professionals? to what degree will such licensing ensure quality software or better testing? and can the abilities and responsibility of software engineers or programmers really be objectively measured? the distinguished panelists will address these issues in what is certain to be a frank and free-wheeling exchange with each other and with the audience. david bellin paul davis george eggert don gotterbarn eric roberts in search of cooperation: an historical analysis of work organization and management strategies during the last decade, literature about work has increasingly focused on the importance of collective communication, tacit knowledge, and group activities. the idea of designing computer support for group-based work activities, which we loosely call 'cooperative work', is a useful and challenging one, for it represents a break from design approaches that focused on centralized and bureaucratic systems of communication and control. to get a clearer idea of the meaning of cooperative work, this article will look at historical patterns of work organization and management strategies. it will contrast user-centered concepts of cooperative work, with the idea of seeing cooperative work in the context of democracy in the workplace. the focus on workplace democracy has been a main theme in the scandinavian systems tradition. the article uses the scandinavian tradition, with its roots in a labor process approach as a way to analyze the meaning of cooperation for workplace democracy and its implication for the design of computer support. joan greenbaum risks to the public in computers and related systems peter g. neumann the visible web browser as an aid to the study of the world-wide web, we have developed a software application that allows a user to observe the messages passed between a web browser and a web server. the application is based on the mozilla web browser, and displays the http headers sent and received by the browser. the program could be used by students in courses studying the web, by researchers interested in the behavior of web servers, and by developers to debug web-based applications. atticus gifford benjamin j. 
menasha david finkel the globalized growth of the acm scholastic programming contest (abstract) although it originated at texas a&m university in the united states, the scholastic programming contest has quickly evolved into an annual event attracting worldwide interest and participation. for the past several years, the contest steering committee has been working to expand the contest's sphere of opportunity within the acm community. the ultimate goal is to provide students of the computing sciences at every university in the world with a chance to participate in the contest. to accomplish this very ambitious task, a coordinated effort among acm volunteers on a global scale must be established as part of the contest infrastructure. this panel session will trace the globalized growth of the contest and discuss many of the issues which must be addressed in order to make the acm scholastic programming contest sponsored by at&t easylink services the premier event of its kind in the world. brian rudolph william poucher nikolay ivanov sven neirynck raewyn boersen c. j. hwang computer science laboratory projects: breadth through depth we present a java toolkit platform (java power tools) to support guis and io in a safe way that takes advantage of the java swing capabilities and provides a framework for gui design. java power tools provides support for input and display of all types of data through safe conversion to and from a string (by implementing the stringable interface). recursively defined displays allow for combining simple display panels into complex user interfaces. the design is based on the model-view-controller paradigm. the design of all components is clean and extensible and provides a model for building user interfaces that students can first use with ease, and later study to learn the proper design techniques in java. jpt will be available for testing and a suite of projects and laboratory assignments will be available by october 2000. this tutorial will focus on the pedagogical aspects of the toolkits and present ideas for use in student projects. richard rasala viera k. proulx jeff raab the plight of a minority in computer science: an educational manifesto amos o. olagunju obstacles to freedom and privacy by design rebecca n. wright a secure unrestricted advanced systems laboratory jean mayo phil kearns increasing computer literacy & employability of the blind: a talking microcomputer a talking microcomputer recently developed at indiana university-purdue university at fort wayne enables blind and visually impaired students to complete the 2- or 4-year degree program in computer technology without need for sighted readers. the talking microcomputer, and the talking typewriter which was developed as a prelude to it, are discussed not only in the educational context, but also in terms of present and future working environments. coupling unmodified, commercially available hardware with customized software, the talking microcomputer should be well within the reach of individuals and employers. future plans include interfacing it with a mainframe, and thus expanding language capabilities. william teoh harry w. 
gates computing the future: committee to assess the scope and direction of computer science and technology for the national research council juris hartmanis departmentalization in software development and maintenance exploring the strengths and weaknesses of three alternative bases for systems staff departmentalization suggests the benefits of an organizational form in which maintenance is separate from new system development. e. burton swanson cynthia mathis beath the consortium in the evolving information industry geoffrey e. morris a flexible approach to decentralized software evolution peyman oreizy the excon project: advocating continuous examination urban nulden information resource management theory training as with most topics in the information systems area, there is generally a problem with acceptance in educating people about new areas. one of the newest areas for information systems is that of irm, information resource(s) management. the term has been noted with the "r" being resource or resources thus resulting in a splitting of fine points to create, for some, two distinct areas. discussion has resulted in government, business, and education as to what is irm with the term being analyzed during the last two years in two major information systems curriculum studies. john f. schrage javascript for educators and students participants will learn html/javascripting components that work well teaching programming to non- computing majors by designing interactive web pages. topics include: clickable images, forms, and dialogue boxes, a discussion of similarities and differences between java and javascript; sample programming exercises utilizing functions, event handling, object creation. supplied materials will include assignments, programming examples, teaching units and web resources. barbara boucher owens laboratory-style teaching of computer science j. p. penny p. j. ashton the case of the killer robot (part 1) richard g. epstein xml and browser development activities in cs2 (poster session) martha j. kosa mark a. boshart information for competitive advantage: implications for computer science education the primary focus in computer science education has been on computers as programmable devices rather than on computers as information handling devices which can add value to business. the skills for identifying opportunities to use information for competitive advantage are becoming increasingly important to computer science students. this paper presents examples of the use of information for competitive advantage, defines skill requirements, and discusses implications for computer science curricula. randall e. kobetich computer scientists whose scientific freedom and human rights have been violated: a report of the acm committee on scientific freedom and human rights this report had its genesis before the establishment in february 1980 of the committee on scientific freedom and human rights (csfhr). in 1978 paul armer, chairman of the committee on computers and public policy (ccpp), asked jack minker of the university of maryland to chair a subcommittee on human rights and to prepare a report on the human rights of computer scientists. when csfhr was formed it was natural to transfer this activity to it since one of the activities of csfhr, as specified in its charter, is: gathering data on systematic violations of scientific freedom and human rights and fully publicizing such data.… careful attention will be given to assuring the validity of all data. 
jack minker hyperminor: hypertext on juvenile criminal justice (abstract) isadora bombonati the hong kong personal data (privacy) ordinance stephen lau summary of the sigois general membership survey october 1991 nora comstock cissy yoes human interface design and the handicapped user w. buxton r. foulds m. rosen l. scadden f. shein information systems: educational offerings vs. industry needs - how well do they match? this study describes relationships between educational institution and industry practices in the information systems area. it is based on the results of two surveys conducted in the chicago metropolitan area. each of the surveys was designed to answer specific questions related to curriculum issues and industry needs. in addition to the curriculum of choice, other specific areas of comparison are degrees, software, and hardware. industry hiring practices and educational institutions' trends are also discussed. industry and educational institutions appear to have similar expectations and should be encouraged to combine forces to achieve a common goal. the information systems students, who eventually become employees in industry, will benefit from each group's coordinated efforts. judith a. knapp the standards factor: two for the price of one pat billingsley the 1988 - 89 taulbee survey report this report describes the results of a survey of the forsythe list of computing departments1, completed in december, 1989. the survey concerns the production and employment of ph.d.s that graduated in 1988-892 and the faculty of ph.d.-granting computing departments during the academic year 1989-90. all 129 computer science (cs) departments (117 u.s. and 12 canadian) participated. in addition, 29 of 32 departments offering the ph.d. in computer engineering (ce) were included3. throughout this report, ce statistics are reported separately so that comparisons with previous years can be made for cs, but the intention is to merge all statistics for cs and ce in a few more years. some highlights from the survey are: the 129 cs departments produced 625 ph.d.s, an increase of 8 percent over the previous year; 336 were americans, 35 canadians, and 248 (40 percent) foreign (6 were unknown). of the 625, 309 (49 percent) stayed in academia, 181 (29 percent) went to industry, 24 (4 percent) to government, and 56 (9 percent) overseas; 7 were self- employed; and 9 were unemployed (39 were unknown). a total of 1,215 students passed their ph.d. qualifying exam in cs departments, an increase of 9 percent over 1987-88. no afro-americans, 6 hispanics, and 87 women (14 percent) received ph.d.s this year. the 129 cs departments have 2,550 faculty members, an increase of 123, or almost 1 per department. there are 938 assistant, 718 associate, and 894 full professors. the increase came entirely in the associate professor range. the 129 cs departments reported hiring 204 faculty and losing 161 (to retirement, death, other universities, graduate school, and non-academic positions). only 9 assistant professors in the 158 cs and ce departments are afro- american, 24 hispanic, and 103 (9 percent) female. only 2 associate professors are afro-american, 8 hispanic, and 74 (8 percent) are female. only 5 full professors are afro- american, 8 hispanic, and 33 (3 percent) female. the growth in ph.d. production to 625 is less than what was expected (650-700). still, a growth of almost 50 ph.d.s is substantial, and it will mean an easier time for departments that are trying to hire and a harder time for the new ph.d.s. 
there is still a large market. the new ph.d.s, however, cannot all expect to be placed in the older, established departments, and more will take positions in the newer departments and in the non-ph.d.-granting departments. growth of ph.d. production seems to have slowed enough so that overproduction does not seem to be a problem in the near future. there will not be enough retirements, however, to offset new ph.d. production for ten years. (in the 158 departments, 22 faculty members died or retired last year.) we believe that many of the new ph.d.s would benefit from a year or two as a postdoc, and perhaps it is time for the nsf to institute such a program in computer science and engineering. the percentage of cs ph.d.s given to foreign students remained about the same at 40 percent. in ce, the percentage was much higher, at 65 percent. the field continues to be far too young, a problem that only time is solving. cs continues to have more assistant professors than full professors, which puts an added burden on the older people, but there was substantial growth this year in the number of associate professors (as assistant professors were promoted). but the ratio of assistant to full professors in cs has not changed appreciably in four years. as we have mentioned in previous taulbee reports, no other field, as far as we know, has this problem. in fact, most scientific fields are 80 to 90 percent tenured in many universities. in cs, this problem is more severe in the newer and lower-ranked departments. in fact, the top 24 departments now have 223 assistant, 176 associate, and 290 full professors. the ce departments have far more full professors than assistant professors, mainly because many are older ee departments offering ce degrees. as we have indicated, afro-americans and hispanics simply are not entering computer science and engineering. it is high time that we did something about it, and we hope the crb will take the lead in introducing programs to encourage more participation from these minorities. there was a slight growth in the percentage of female ph.d.s in cs, from 10 to 14 percent. still, there are far too few women in our field, and our record of retention of women in the faculty is abysmal. there are only 33 female full professors in the 158 cs and ce ph.d.-granting departments! again, we hope the crb will help introduce programs to encourage more women to enter computing and to remain in academia over the years. the signs are that the nsf is interested in this problem as well. david gries dorothy marsh designing new principles to sustain research in our universities peter j. denning a marketing framework for user services management dramatic new developments in technology plus continually changing needs of users present great challenges for those of us involved with computing in higher education. while it is quite evident that a tremendous amount of change is occurring around us, it is often less noticeable that the rate of change is accelerating. one of the major factors contributing to this acceleration is the rapid growth and development of technology. alvin toffler, in future shock, called the "technological engine" the driving force of change, and he predicted that as man's search for new knowledge expands, the rate of technological development will increase spectacularly. probably the most significant recent technological development is the growth in the use of computers. the way computers are used has also changed during this technological explosion. 
the computer, previously only a calculating machine, has become a major information processing instrument. the role of computing on college campuses has changed accordingly. it is not uncommon for graduate students to spend as much time at a terminal preparing thesis text as they do executing data analysis programs. the quickening pace of change, increasing demands on personnel, and tightening financial constraints mean that computing centers can no longer merely react to current circumstances, but must anticipate changing user needs. therefore, it is important that a management framework be available to provide direction and guidance for all planning and decision making. marketing provides a good framework because it contains an overall philosophy which stresses both a consumer orientation and the identification of specific consumer needs. the emphasis on customer satisfaction implies that all computing center activities should be focused upon this fundamental objective. while this sounds straightforward, some computing centers may not have a single objective, but instead have subunits, each striving for what it thinks is important. a consumer orientation also keeps the computing center alert to changing needs in the campus marketplace. it is important that we develop a "user-friendly" atmosphere or service, but it is more important to be "user-conscious" so we provide what is needed (2). elliott j. haugen degree program planned in computer user support fred beisse an eighty-year perspective on automation outsourcing (abstract): its evolution from cost savings method to a key strategy of the virtual organization curt hellenbrand csab authorizes visits to test is/it proposed accreditation criteria john t. gorgone using computers to enhance the mathematics classroom sharon whitton ayers gary g. bitter the atg learning communities laboratory: an overview n. rao machiraju extended analogy: an alternative lecture method jeff matocha tracy camp ralph hooper computer graphics: the introductory course grows up lew hitchner steve cunningham scott grissom rosalee wolfe some problems about english-spanish translations in computer science literature carlos ivan chesnevar literacy and computer illiteracy computers are marvelous devices that have simplified data processing and stimulated new applications in many fields. as tools, the machines have become so useful and attractive that authorities are calling for computer literacy and ordering large numbers of microcomputers and full-screen terminals. the computer is becoming a dominant, all-purpose tool. but, like any tool, it is not a panacea for the difficulties of modern civilization. it may not even be a critical part of literacy. in fact, the computer may have distinct educational and biological disadvantages for the human species. this article places computers in a broad educational perspective. alan hagen-wittbecker advanced placement program in computer science (panel session) the advanced placement program in computer science will be discussed by members of the development group. the presentation will be geared to both high school and college level educators. steven j. garland, chairman the content of the advanced placement course and the information that was used to put the course together will be presented. alternative outlines will be discussed as well as long-range plans for the future. david c. 
rine, chief reader suggestions on facilities needed to support the advanced placement in computer science course, and preparations for teachers of the course, will be presented. standards, teacher training, and the advanced placement examination will be discussed. j.r. jefferson wadkins the role played by the college board and educational testing service in the development of advanced placement courses and examinations, as well as ways in which the college board and educational testing service assist high schools and colleges with courses and examinations, will be presented. available materials and information from the college board and educational testing service will be discussed. steven j. garland david c. rine j. r. jefferson wadkins enhancing the computer networking curriculum an increasing number of students in computer science are requesting advanced study and active learning experiences in computer networking. employers need graduates who not only understand the fundamentals of networking but those who can quickly be involved in network administration. meeting these demands in the curriculum suggests that new and well-planned laboratory and internship experiences should be incorporated into the computer science curriculum. however, there are some major challenges in providing these experiences; it is much more complex than just adding another compiler or server to a laboratory. this paper describes several efforts the authors are making to meet these challenges. the environment in which these efforts have been studied is a small state- supported university, northwest missouri state university, in rural missouri. northwest has over 6,200 students pursuing baccalaureate, masters and specialist degrees. the networking environment at northwest is more robust than one might expect. in 1987, the university became the first public institution in the united states to develop an "electronic campus" featuring university-provided, networked computing stations located in every residence hall room and faculty office. then in 1999, each faculty member was issued a personal notebook computer and the residence halls were upgraded to windows- based, networked desktop computers. [9] jon rickman merry mcdonald gary mcdonald phillip heeler an empirical study of multiple-view software development scott meyers steven p. reiss "schmoozing at the home office": reflections on telecommuting and flexible work arrangements for it professionals nancy a. flynn providing inexpensive software for campuses john f. schar frances m. blake kids are not "adults-in-waiting" allison druin active learning approaches to training: encouraging the self-taught user e.-l. bogue setting up a classroom lab the computer sciences accreditation board (csab) initially accredited the undergraduate computer science program at the university in june 1991. we were re-accredited in 1994 and 1997. several universities accredited by csab have implemented closed labs for their cs1 and cs2 classes. we have set up a classroom lab for our majors which we use _during_ regularly scheduled lectures. this lab is used not only for cs1 and cs2 but also for several other courses. our experiences in setting up the lab at a minimal cost were sometimes pleasant and often frustrating. we would like to share these experiences with those in other small colleges and universities so that they can learn from us when setting up a classroom or closed lab for the first time and not repeat the mistakes that we made. krishna k. 
agarwal adrienne critcher dave foley reza sanati john sigle acm's visit to the people's republic of china thomas a. d'auria franklin f. kuo dora kuo curriculum 2001 (panel session): evaluating the strawman report representatives of the acm/ieee-cs task force in the fall of 1998, the acm education board and the educational activities board of the ieee computer society appointed representatives to a joint task force to prepare curriculum 2001, the next installment in a series of reports on the undergraduate computer science curriculum that began in 1968 and was then updated in 1978 and 1991. interim reports on the initial planning of the curriculum were presented at the sigcse symposium in march 1999 and at the ieee frontiers in education conference in november 1999. in february 2000, the curriculum 2001 task force will release a preliminary version of its report, in the hope of gaining feedback from a wider audience. the purpose of this panel is to give attendees at the sigcse conference an opportunity to review the current state of the preliminary draft and offer their comments to the members of the curriculum 2001 steering committee on the panel. eric roberts c. fay cover gerald engel carl chang james h. cross russ shackelford computers and employment inna shostak verbal skills in computer science education joseph s. fulda inside risks: pki: a question of trust and value richard forno william feinbloom using it to integrate societal and ethical issues in the cs/is curriculum (panel) mary j. granger joyce currie little thinking objectively: software process improvement in the small robert p. ward mohamed e. fayad mauri laitinen the work of george forsythe and his students j. varah experiences in teaching an advanced computer graphics course g. scott owen student consultants: a key to efficient utilization of computing resources i need three consultants for every one i have! we've all uttered something like this at some time. why not use students? they represent a largely untapped resource on most campuses. but not just any students --- use students who are trained, confident, well-managed, and backed by a professional staff. students, consulting effectively with faculty, staff, and other students, can extend your scarce resources while increasing the computing competence of your users. this paper reports how one computer center is increasing its effectiveness in consulting while freeing up permanent staff to deal with other matters, which include converting to a new system, developing an education curriculum, continuing system upgrades, establishing microcomputer labs, and implementing a campus-wide network. this paper covers not only the training methods, the three levels of support, and the management tools used in this program but also the type of response this program has received on campus. roi f. prueher crisis in computer education (this paper has been accepted for publication in the proceedings, but the photo-ready form was not delivered in time. copies of the paper should be available upon request at the presentation.) donald chand towards benchmarks for maintainability, availability and growth/evolution aaron brown design strategies for a computer-based instruction system in february, 1984, the computer science department at brigham young university began working on a project that would automate the delivery of a beginning programming class. 
this project known as the elrond project, was funded by the university with the expectation that the instructional delivery costs and the need for additional faculty for this course could be reduced. this paper describes the system design and principles that were used and presents strategies for creating computer automated courseware. larry c. christensen gordon stokes bill hays integrity in software development peter g. neumann multimedia and gender robert tannenbaum joanna m. badagliacco acm forum robert l. ashenhurst maintaining user service and direction in a complex environment bill l. ashmore f. russell helm sally d. haerer acm forum: letters robert l. ashenhurst infrared communication to control software applications computers are becoming more and more easy to use. from a sequence of keys pressed to a click of a mouse button users can make choices or events happen. however, people are required to sit at a computer. if they are away from the computer they cannot use the computer. there are, however, many ways to solve this problem. for instance, by voice, people can speak what they want accomplished and the computer can make it happen. another way is by a remote controller. this research proposes a way to use a universal remote controller and an infrared receiver, to control a computer application. walter v. romero viewpoint: exploring the telecommuting paradox mohamed khalifa robert davison do we really teach abstraction? paolo bucci timothy j. long bruce w. weide microsoft certification pak l. kwan acm code of ethics and professional conduct a comparison of recent textbooks for teaching computer ethics to undergraduates r. waldo roth judgement w. p. gray ask jack: skill development jack wilson retraining for a graduate program in computer science onkar p. sharma ali behforooz modeling nii services: future needs for standards and interoperability christopher dabrowski william majurski wayne mccoy shukri wakid robust training programs: new techniques in applying quality assurance to the development process robert h. august ask jack: negotiating with companies jack wilson reaching out and being reached richard anderson introductory course in human computer interaction g. w. strong iso 9000 reflects the best in standards roy rada accreditation in computer science gerald l. engel tom cain john f. dalphin george davida terry j. frederick norman e. gibbs doris k. lidtke michael c. mulder the piper cub offense jef raskin maintaining user service and direction in a complex environment bill l. ashmore f. russell helm sally d. haerer sigcse endorses a new journal on educational resources in computing deborah l. knox before the altair: the history of personal computing larry press overcoming computer phobia jean e. thompson the sigchi international issues committee: taking action david g. novick a perspective on the it industry in south africa linda marshall predicting performance in an introductory computer science course a group of 269 first-semester freshmen was used to predict both performance in an introductory computer science course and first-semester college grade point average by using information regarding the students' programs and performance in high school along with american college testing program (act) test scores. d. f. butcher w. a. muth assessing the need for training in it professionals: a research model training is critical to hiring and retaining information technology professionals. 
in order to improve the efficiency of the training of these professionals, a research model is proposed as a basis for future research regarding the methods that exist for assessing training needs in information technology professionals. creggan gjestland j. ellis blanton richard will rosann collins thetenthstrand == 3 * ethicaldebates + solution gloria childress townsend chi meets plop: an interaction patterns workshop this report summarizes the results of a workshop on pattern languages for human-computer interaction which took place at the chiliplop'99 conference on pattern languages of programming. it suggests a definition and taxonomy for interaction patterns, explains how writers' workshops are used to improve patterns, and points out some surprising issues about pattern languages as they are understood by key players in that field. it shows how important it is for user interface and software engineering researchers to exchange their thoughts on this hot topic. jan o. borchers a conversation about computer science education daniel d. mccracken dennis j. frailey personal data privacy in the pacific rim kate lundy deja vu or returning to graduate school at age 52 paul haiduk universal literacy - a challenge for computing in the 21st century anita borg exploring the inputs of is service the importance of an is unit's level of service to its internal customers (i.e., end-users) is well recognized (kettinger and lee, 1994; pitt, watson and kavan, 1995; kang and bradley, 1999). in fact, the level of service provided by the is unit to its internal customers can ultimately impact the organization's level of service to its external customers, as the is unit contributes to the overall organizational climate for service (schneider, white, and paul, 1998). much of the research on is service quality has focused on the gap that exists between end-user expectations for the service and perceptions of the actual service received. therefore, while these studies have contributed to the measurement of the is unit's service (i.e., outputs), they have not looked closely at the service providers themselves (i.e., inputs). this research project focuses on the inputs. the research project will test the model depicted in figure i to investigate the antecedents of, and influences on, the service quality of an is service encounter (i.e. the service outcome). using ajzen and fishbein's (1977) theory of reasoned action, this model posits that the service orientation (so) of is personnel will have a significant impact on end-user ratings of is service quality (i.e., service outcomes). in addition, the model indicates that the service outcomes may also be influenced by the so modeled by is managers and team leaders, and by how (or if) service behavior is rewarded in the is unit. therefore, the model looks at the attitudes brought to the job by is personnel, two facets of the work environment encountered by is personnel while on the job, and the resulting service encounter as perceived by the end-user. janette moody computer literacy and the older learner: a computer department's response jerry maren computer crimes and the current legislation syed shahabuddin system growth and capacity planning a tutorial as a computer system continues to operate, it is almost inevitable that the demands for service increase, both in number and in amount of service per demand. at some point, the service level degrades to an unacceptable level, and some action must be taken: either reduce the demands or improve the system. 
it is the hope of system managers that these episodes of increasing demand, unacceptable service and implementation of a remedy could be anticipated, so that an orderly plan of operation could be initiated. herb schwetman a computer science curriculum for liberal arts colleges (panel session) norman e. gibbs kim bruce robert cupper stuart hirshfield ted sjoerdsma allen tucker the last word: women in it: looking beyond the "girl geek" stereotype maura johnston evaluating existing information systems from a business process perspective the evaluation of existing information systems has gained importance in information systems management. a research project involving ten major swiss companies and the university of st. gallen has developed a method for assessing existing systems from a business process perspective. systems are evaluated at two levels: the business process supported, and the technical and functional quality of the system. users and business managers participate actively in the evaluation project. evaluation criteria are derived using a critical success factor approach and a set of generally applicable factors. measurement is considered central to the evaluation, and is supported by a set of generic measures and a catalogue of further measures. the evaluation aims at initiating improvement actions. the method has been tested successfully in the participating companies. martin w. mende leo brecht hubert österle growing pains in information systems: transforming the is organization for client/server development information systems (is) groups are under increasing pressure to contribute to organizational performance and to support, or even drive, broad organizational transformation efforts through the successful exploitation of information technology (it). using a "sociocentric" model of organizational work, this paper analyzes the experiences of one company's is group that recently embarked on a long-term, enterprise-wide client/server system development initiative designed to transform organizational decision support processes. even though the client/server initiative is still in its infancy and has not yet delivered high-impact applications, it has brought about substantial changes in the nature of work in the is group. these changes range from new philosophies, methodologies, and technologies to shifts in the skills, communication patterns, and control structures required to develop and manage information systems. william d. nance report of acm's technical standards committee jim moore influencing users to obtain cooperation and compliance in the context of systems design and implementation kailash joshi using patterns in the classroom joseph bergin alyce brady robert duvall viera proulx richard rasala ethical conflicts in the computing field - an undergraduate course marcia ascher learning operating systems structure and implementation through the mps computer system simulator mauro morsiani renzo davoli m-powering personnel for e-business change the paper examines a model that proposes various antecedents to successful e-business change management in erp environments. a case study of an m-commerce project for a personnel management system within a large traditional engineering company is described in the context of this model. the specific goal of the research is to determine the facilitators that lead to e-business project success in these change efforts. 
the results show that performance gains from the intranet-erp project were accompanied by the presence of facilitators in all dimensions of the framework. of particular importance were those components related to employee empowerment --- knowledge management, relationship building, and learning capacity. this has implications for both it and user personnel and the future governance of the organisation. colin ash janice burn information systems curricula and accreditation john t. gorgone computer technology and jobs: an impact assessment model a model is proposed that associates the impact of computer technology on a job with the set of underlying characteristics that describe the activities performed on the job. an empirical test of the model has been undertaken. one thousand and thirty-five experts assessed the impact of computer technology that they believed would occur on 306 jobs over a three-year period. job characteristics data was obtained from prior analyses of the jobs, using the position analysis questionnaire (paq). six job dimensions derived from analysis of the paq data were significant predictors of the technological impact ratings provided by the experts: engaging in physical activities; being aware of the work environment; performing clerical-related functions; working in an unpleasant or hazardous environment; performing service-related activities; and performing supervising, directing, and estimating functions. ron weber meeting user needs through in-service student projects lynn r. heinrichs we have the information you want, but getting it will cost you!: held hostage by information overload. mark r. nelson principles for privacy protection software harry hochheiser conference preview: iui '98: 1998 international conference on intelligent user interfaces loren g. terveen peter johnson an assessment of stress factors among information systems professionals in manitoba eugene kaluzniacky personal computing: an adventure of the mind. "a national educational tv series at pre-college level for personal computing and computer literacy", david c. rine, western illinois university. under grants from the ieee computer society, the johns hopkins university, radio shack and other agencies, the international instructional tv cooperative, source of instructional tv materials to all educational tv networks nation-wide and internationally, has finished and is marketing the implementation of a six-course national educational tv series aimed at the pre-college level in the area of personal computing and computer literacy. the name of the project is "personal computing: an adventure of the mind". the objectives of this new series are to illustrate the uses of personal computing, to demonstrate the interface of humans and machines, to identify the fundamentals of communication in personal computing, and to motivate students to be innovative in their own applications of personal computing. since the personal computer is viewed by many as a mind multiplier, a further objective of this educational tv series is to greatly increase the number of minds that can be multiplied, by taking personal computing to millions of children in classrooms across the country. education and informational programs are closely allied in that both attempt to communicate facts, concepts, and ideas. both need to be designed with specific objectives in mind. some of the objectives to be discussed are both attitudinal and informational in nature; that is, they deal with feelings as well as facts. 
the underlying thrust throughout is that . . . learning can be fun! david c. rine trends in end-user training: a research agenda fred niederman jane webster women in computing: what does the data show? linda selby ken ryba alison young acm model high school computer science curriculum (abstract) susan m. merritt charles j. bruen j. philip east darlene grantham charles rice viera k. proulx gerry segal carol e. wolf computing in a less-developed country almost all of the nations of latin america are so-called less-developed countries (ldcs). but unlike many such countries elsewhere, quite a few have recently attempted to install more democratic, or at least less authoritarian, governments that are permitting greater freedom of expression and information and encouraging market-oriented economic developments. the latter include decreasing protectionism and moves toward the privatization of state enterprises such as telecommunications companies. given the land area, natural resources, and populations involved, these developments are potentially of enormous international importance. the information technologies (it) could be used to accelerate and reinforce these political and economic changes. seymour e. goodman technics and higher education janet f. asteroff computing the future: whither computer science and engineering? the computer science and telecommunications board of the national research council recently released a report entitled computing the future: a broader agenda for computer science and engineering. the report is intended as a first step in developing a vision of computer science and engineering that will carry the discipline into the 21st century. acknowledging the fraying of the social compact that has supported scientific research since world war ii, pressures on academic science, and major changes in the computer industry, the report puts forward a view that the intellectual boundaries of the cs&e discipline should expand. the discipline should continue to support what has traditionally been considered "core" computer science and engineering, but it should also learn to embrace rather than eschew computing problems that arise in domains outside the traditional core. such problem domains include other sciences and areas with commercial or economic significance. this broader agenda will have many benefits for cs&e, including a rich set of intellectually challenging problems, greater social impact, and a wider variety of funding opportunities. not surprisingly, this vision of the field in the 21st century has generated considerable controversy and debate, as is appropriate for matters as important as these. a panel will briefly review the major judgments, priorities, and recommendations of the report, and differing perspectives on the report will be aired. herbert s. lin juris hartmanis john rice morton lowengrub new directions in the introductory computer science curriculum allen b. tucker peter wegner political determinants of system design and content computerized information systems seemingly offer technical solutions to corporate organization and adjustment problems. this idea of the system as the solution has certainly become the focus of information industry rhetoric and sales propaganda. industry advertising promotes ideas such as (1) the advantages of instant communication in the relative success of any business, or (2) the threat posed by the volume of information produced by a company and unorganized by a particular micro-system. 
the questions missing from the rhetoric and propaganda are what problems require solutions and what do those problems have to do with the capabilities of computerized information systems. the information industry has transformed these questions in a rather interesting way: they have been handed over to their sales and marketing divisions. ronald webb self esteem: moderator between role stress fit and satisfaction and commitment? many companies are having a difficult time satisfying and gaining commitment from it workers. this study determines whether satisfying preferences for role ambiguity and conflict will increase satisfaction and commitment levels. furthermore, this research seeks to determine if self-esteem moderates the relationship between satisfying employee preferences and levels of satisfaction and commitment. preliminary findings supported these propositions. anthony c. nelson cindy lerouge large introductory courses in research computer science departments (panel) david g. kay jacobo carrasquel michael j. clancy eric roberts joseph zachary opinion: why are people afraid of computers? scott ramsey macdonald proposed curriculum for programs leading to teacher certification in computer science (panel session) james l. poirot arthur luehrmann cathleen norris harriet taylor robert taylor metrics-based plagiarism monitoring plagiarism in programming courses is a pervasive and frustrating problem that undermines the educational process. defining plagiarism is difficult because of the fuzzy boundary between allowable peer-peer collaboration and plagiarism. pursuing suspected plagiarism has attendant emotional and legal risks to the student and teacher, with the teacher bearing the burden of proof. in this paper we present a metrics-based system for monitoring similarities between programs and for gathering the "preponderance" of evidence needed to pursue suspected plagiarism. anonymous results from monitoring are posted to create a climate in which the issue of plagiarism is discussed openly. edward l. jones pre-college computer use: u.s. versus japan raymond o folse is online democracy in the eu for professionals only? per-olof ågren implementing a regionally unique master's curriculum in computer science (abstract only) a master's degree program in computer science has been implemented which includes a curricular option for educators. it is designed to be responsive to the needs of prospective computer science educators in the region, determined by a survey of school systems conducted recently. in addition, analysis of responses and requests for professional development workshops and inservice courses indicated high interest and need for such a curriculum. a review of the literature further indicated a dearth of programs in existence which would fill this need. further review of curricula at colleges and universities around the nation which provided such programs revealed a curriculum base from which the faculty as a group developed a progression of courses leading to a master's degree in computer science for teachers at all levels of education --- elementary through post secondary. implementation of the program required faculty and student recruitment, advisement of students and establishment of research and development facilities and an ongoing system of program evaluation. patricia faser multiple paradigms in cs i chuck leska john barr l. a. 
smith king the power of play michele tepper debunking the software patent myths paul heckel shaping the roles to be played during the 21st century by local chapters of acm sigchi richard anderson acm curriculum committee report computing programs in small colleges the curriculum committee of the education board of acm has established as an ongoing committee a small college group. this committee will make a presentation of its report at this symposium and will provide an opportunity for attendees to comment. various aspects of computing programs will be considered, including obtaining qualified faculty, providing appropriate equipment and the selection of a suitable and manageable set of courses. john beidler richard h. austing lillian n. cassel question time: online privacy john gehl suzanne douglas peter j. denning robin perry acm doctoral dissertation award: acm international scholastic programming contest awards dr. bell is director of the computer museum in marlboro, mass., a member of the charles babbage institute program committee and an editorial board member for the annals of the history of computing. as director of the computer museum since 1980, she has interpreted computer history via exhibitions, programs and public speeches. gwen bell a moderate approach to computer literacy diane m. spresser a master's degree in school computer studies several papers have been written about the shortage of knowledgeable computer science teachers at the secondary and junior college level. additional reports have been written describing workshops, courses, and various other methods to help reduce this shortage. see for example papers by moursund, dennis, poirot and others in various publications by acm and the proceedings of recent national educational computing conferences.(1,2) very few papers have reported on the existence of degree programs for teachers who are interested in developing the necessary background to teach computer science. even fewer master's degree programs exist to help train teachers in computer science education. moursund has described the master's degree at oregon(3), lykos has established a degree at illinois institute of technology, and the university of illinois has a degree for teachers.(2) this paper describes a unique master's degree program at northwest missouri state university. details are given on the development and present status of the program. also, several suggestions are indicated for possible future directions for such a program. phillip j. heeler how we know what technology can do mark burgin profiles of mis doctoral candidates kai r. larsen m. pamela neely on the care and feeding of superusers don m. wee the profitability of career planning understanding your skills and priorities is essential to the career decision-making process. because your career decisions will probably determine the quality of your life for years to come, preparing and planning for your career is one of the most important moves you will ever make. one exciting feature of the computer field is the remarkable variety of choices in careers. well trained computer people are in high demand in this rapidly expanding field. you will ask the question, "is there a place in computing for me?". how can you tell if you might do well in a computer career? can you think of a business or profession which does not involve computers? today there are few. tomorrow there may be none. that fact alone should challenge you to pursue a career in computing. this industry is dynamic and growing. 
career opportunities abound. herbert b. safford university research in a squeeze fred w. weingarten ask jack: negotiating jack wilson general education in the computer science curriculum: software engineering this tutorial gives an overview of the c++ standard template library (stl), including its overall structure, functionality, and capabilities. particular emphasis is placed on the ways in which stl may be used in the curriculum. paul m. mullins the software sampler workshop: intermediate training for university microcomputer users the acquisition of microcomputers has changed the nature of user support at the ohio state university. the pioneer micro users of a few years ago primarily desired assistance in these areas: • introductory terminology, "computerese" vocabulary; • setting up their new systems; • communicating with the mainframes on campus. as users gained experience on their systems, training demands evolved. now micro users want training in the "big three," word processing, data base management, and electronic spreadsheets. in this paper, i shall examine this evolution in training support. what is taught, including a basic outline of objectives, and specific examples of materials, will be discussed. finally, a plan for administration will be given. susan jenkins embrace your limitations - cut-scenes in computer games richard rouse biz on the net: it is not all flash, jiggle, and beep jill h. ellsworth the high assurance brake job - a cautionary tale in five scenes kenneth g. olthoff using actors in an interactive animation in a graduate course on distributed systems we describe and evaluate an experiment where actors were used to simulate the behaviour of processes in a distributed system in order to explain the concept of _self-stabilisation_ in a graduate course on distributed systems. a self-stabilising system is one that ensures that the system's behaviour eventually stabilises to a safe subset of states regardless of the initial state. protocols satisfying this elegant property, which enables a system to recover from transient failures that can alter the state of the system, are often hard to understand, especially for students that have not studied distributed computing and systems before. the experiment was part of an introductory course on distributed computing and systems for graduates in october 2000. the purpose of this interactive animation was to introduce to the students the basic concepts behind self-stabilisation (eligible states, transient faults, execution convergence) before their formal introduction. all of the students had a degree either in mathematics or computing science and had taken a course on algorithms before. however, most of the students did not have a background in distributed systems or distributed algorithms. the latter was not only the motivation for preparing this method of presentation but also what made this a challenging effort. the feedback from the class was that the concept and this teaching method were very well received. we could observe that their understanding evolved to the point that they were able to successfully come up with ideas for solutions and argue for/prove their correctness. as suggested in [1], dramatisation of executions can help the students to understand new issues and complications. this work shows that this is true even for graduate level courses. 
in our experiment we could conclude that dramatisation can be almost as powerful as a programming exercise in the teaching process; sometimes even more efficient, especially when we need to teach new concepts to an audience with diverse educational backgrounds. in analysing the results of our method we make a combination of the _qualitative_ and _quantitative_ approaches [4]. boris koldehofe philippas tsigas diversity in computing john gehl organizational prototyping: embarking on organizational transformation stan hume tom devane jill smith slater altitude vs. airspeed jamie myers the use of student workers in computer lab management at the university of scranton the computing systems department of the university of scranton has been very successful in implementing a program utilizing student workers to staff its extensive lab facilities. this program was initiated in the early 1980's with a handful of students staffing one lab, and has grown to the point where we now employ over 60 students to staff eleven labs with more coming on line every year. the fact of the matter is that student employees have become the backbone of our lab support systems. in the early days the department was small and the student workers were under the supervision of the director. as the department grew, and more students were employed, they were eventually assigned to the assistant director and later to other professional support staff. by 1985, running the labs had become involved enough to require establishment of the position of computer lab coordinator (the current title for this position is information support analyst - labs) to oversee lab management and the growing number of student employees. t. j. hughes debunking the puppy baron culture c. dianne martin it programs and cs departments (keynote session) elliot b. koffman formal semantics and interpreters in a principles of programming languages course kim b. bruce the nuts and bolts of academic careers: a primer for students and beginning faculty dan curtin gary lewandowski carla purdy dennis gibson lisa meeden transnational information systems - a challenge for technical and general managers angèle l. m. cavaye property and speech: who owns what you say in cyberspace? john perry barlow teaching breadth-first depth-first this paper argues that current approaches to teaching the introductory course for the cs major fail to provide students with an accurate sense of the nature of our field. we propose that an introductory course focused on a single sub-field of our discipline could better prepare potential majors by using that sub-field as a vehicle to present an overview of the techniques and principles fundamental to computer science. we discuss our experience with such a course based on the field of computer networks. thomas p. murtagh the quest for quality control in computer training c. r. rauschert software engineering and history rachid anane conducting a survey of computer technology graduates (abstract only) the curriculum committee of the computer technology department at ipfw requested a survey of our graduates with degrees in computer technology. the following are reasons cited: to identify more closely the strengths and weaknesses of the computer technology program from the standpoint of our graduates. to permit the department to more closely tailor our curriculum to reflect the changing needs of our graduates. to identify topics of specific courses that need to be added to our curriculum. 
to determine the need to offer a graduate program in mis. a committee consisting of one faculty member and a representative from the ipfw alumni association conducted the survey: designing the survey instrument and the cover letter, supervising the distribution of the survey, collecting the replies and recording them in a statistical database. the data have been tabulated and are now being analysed. the survey was designed for the ease of completion by our graduates and for the ease of tabulation of the responses. questions in the survey covered the following areas: general background of graduates, including starting and current salaries; location, size and primary business of employer; and primary job responsibilities. assessment of computer technology curriculum in terms of: theory/application balance, quality of courses offered, and knowledge of faculty. recommendations concerning subjects that need to be added to the curriculum and courses that may be outdated or are perceived to be of little value. survey forms were mailed to approximately 650 graduates during the winter of 1985-86. we have received 135 replies, giving us a 20% response. dale k. hockensmith early numerical analysis in the united kingdom l. fox constructivism in computer science education mordechai ben-ari the discrete structures course (abstract only): making its purpose visible discrete structures texts are notorious for their lack of examples from computer science. one response to this problem is to teach the discrete structures course as a course in pure mathematics. however the end result is computer science students completing the course without understanding the reason it is required. a second response to the problem is to give the course a computer science flavor by requiring the writing of programs illustrating the mathematical ideas introduced. this approach still does not address the real issue of why the course is required. a third response, the one advocated by this paper, is to supplement the text with examples and exercises showing how the mathematical concepts are actually applied in various areas of computer science. this paper examines some appropriate computer science examples and exercises and discusses the author's experiences when incorporating them into the discrete structures course at s.u.n.y. at plattsburgh. ann fleury the certificate in computer programming: five years of data the certification effort in the computer field has been in existence since its initiation by the data processing management association, with the 1962 certificate in data processing (cdp) examination. dpma then started an examination for business computer programmers, called the registered business programmer (rbp) examination, giving it first in 1970. with the formation of the institute for the certification of computer professionals (iccp) in 1973, these examinations were placed under the supervision of the board of directors of iccp, composed of members of the sponsoring societies. acm was one of the founding sponsors, and actively supports it. although the rbp examination was temporarily withdrawn during the early days of iccp, it was restored as the business specialty of the certificate in computer programming (ccp), first offered in 1977. two other specialty examinations were also established, one in scientific programming, and one in systems programming. the three examinations have a section, called the general, which is in common, with questions mixed and given without candidates knowing which ones they are. 
each examination specialty has different questions mixed with those from the general section. joyce currie little self-perceptions and job preferences of entry-level information systems professionals: implications for career development ephraim r. mclean john r. tanner stanley j. smits selecting a consultant: a maxi problem in the mini-micro world in the last decade there has been a significant increase in the number of computer users. much of the increase can be directly attributed to the development of mini-micro computer systems. vendors of these systems have recognized the tremendous potential market in both education and small businesses and their marketing campaigns have placed professional educators and small businessmen in a position to be challenged by peers as well as competitors. however, many of the educators and businessmen do not have the talent to select and integrate a computing system into their current environment. countless stories illustrate the frustrations those people have experienced as they attempted to make the transition into the world of computing. many of the stories also contain statements of dissatisfaction with outside consulting services. this paper will focus on the latter situation. it will contain a survey of the consulting practices associated with both business and education, a summary of efforts to establish a certification program for computer professionals, and offer suggestions on how to select a computer consultant for mini-micro computer systems. allen k. henry integration of computer ethics into the cs curriculum: attachment or synthesis don gotterbarn editorial pointers diane crawford weighting biodata to predict success of undergraduate business administration students in introductory data processing: item analysis and cross-validation of net weights (this paper has been accepted for publication in the proceedings, but the photo-ready form was not delivered in time. copies of the paper should be available upon request at the presentation.) warren s. blumenfeld models in teaching programming languages (abstract only) computer generated models to teach programming languages are defined and examined. the memory modeling, the program generation and the flowchart creation are presented. generation and transformation of pictorial structures for each model is analyzed. the application of these models to computer assisted instruction is shown. three modes of interaction between the student and the computer: information, simulation and testing are discussed using the above models bogdan czejdo ludwik kolkowski editorial pointers diane crawford is information systems spending productive? lee ridgeway plagiarism monitoring and detection - towards an open discussion plagiarism in programming courses is a pervasive and frustrating problem that undermines the educational process. often plagiarism falls in the gray area separating profitable peer-peer collaboration, excessive dependence on others, and outright cheating. unless the evidence is compelling, pursuing suspected plagiarism is generally not worth the emotional and legal risks to student and teacher alike. this paper proposes a metrics-based approach to monitoring patterns of similarities among student programs that may signal the onset of excessive collaboration or plagiarism. publishing anonymous results from monitoring creates a climate in which plagiarism is discussed openly. edward l. 
jones employment and salaries of recent cs doctorates herbert maisel catherine gaddy computer virus myths rob rosenberger ross greenberg down the road: the first high school chapter: the indiana academy for science, mathematics, and humanities sara m. carlstead realistic student projects when a student performs a project, under the supervision of a faculty member, it is important that the student feel that the project is of merit and reflects both the student's capabilities as well as the student's interests. all too often the project is fine unto itself but has no connection to the student's other studies or background except that the project falls in the same major field. therefore it is important to create a project, in cooperation with a student, that is not only challenging in itself but also relies heavily upon the student's previous coursework. it should mimic project work as performed outside the academic sphere in that it yields a useable result. through the use of techniques such as a contract for project grade, outlining attainable goals agreed upon by both the student and the instructor, the student gains an understanding of the project in relation to the student's entire course of studies, as well as future endeavors. william j. joel an overview of computer science in china: research interests and educational directions in september, 1979, an international delegation of computer scientists visited the people's republic of china. this paper reviews the substance of discussions between this group and their chinese counterparts and gives the impressions of the visiting scientists based on this interaction. n. b. dale the 1984 snowbird report: future issues in computer science this workshop reports that no core description of computer science is universally accepted. a model of the discipline must emphasize global issues over isolated objectives, cooperation over competition. corporate asl committee on logic education programming contests james r. comer robert r. wier j. richard rinewalt computer language usage in cs1: survey results suzanne pawlan levy academic and administrative computing in small and medium sized colleges the problems of computer usage at small and medium sized colleges range from inadequate staffing for computer science course offerings and programs to having to share the computer resource with administrative staff. of course, the sharing of the computer does in fact put a strain on administrative staff as well. in addition, the high salaries paid by industry make it difficult to recruit and retain qualified people. representatives from administration, computer science faculty, and computer science faculty who are now part of business and industry will meet and discuss these and other related problems as they apply to their particular school organization. hopefully a set of partial solutions and/or a call for action will result from this panel discussion. francis l. schneider book review: inventing software by kenneth nichols lynellen d. s. perry the professional growth of ict experts through progressive sandwich training (poster session) jorma kajaval rauno varonen statement of observation: itsec comments to v1.2 andreas pfitzmann reference interviews and the computing center help desk how often does the help desk forward a call to you, the spreadsheet expert, when they should have sent it to the database expert? has anyone ever replied to one of your brilliant problem resolutions with "but that isn't what i wanted!"? 
all consultants know the frustration of talking to someone at length, discovering only after several minutes that they've been answering the wrong question. still, much of the training for computing center user support focuses on technical knowledge. consultants often view the interchange between user and consultant as more of an art than a science. consultants devote their limited training time to learning new systems; they count users to measure their success. however, computer consultants can train themselves to interact more efficiently with users and can find new ways to evaluate their work. users will benefit from better-focused consulting. consultants will spend less time thrashing through misunderstandings, enjoying more time to work on actual problems. we can begin by using reference librarians as a model. you might wonder what computing consultants can learn from librarians. after all, the library is a long-established institution, familiar and friendly. the computing center, now---there's the cutting edge, unknown and frightening. however, novice users need expert guides to the information resources in both environments. both computing center and library users become anxious when they must search out specific information. they don't know how to ask for help; they don't know what information they should provide in order to get the best assistance. computing center users seem to sidle up to their main question. for example, a secretary asks about features of the laserjet ii when she wants to know how to get landscape printing on their old laserjet. similarly, library users approach the reference librarian with questions only obliquely related to their real aim. an anxious student asks for gardening books when his report on pesticides is due tomorrow. neither reference librarians nor computing center consultants can train users to ask better questions. however, reference librarians have developed a set of techniques which they use to discover their users' actual information needs. librarians use the term "reference interview" to refer to the transaction between the person seeking information and the person providing information. after the initial contact, the librarian controls the interview, asking questions designed to discover the user's real question. the length of the reference interview depends on the complexity of the problem and the ability of the user to define the question. computing center consultants can easily apply reference interview techniques to their interactions with users. listen to this quote from a reference service textbook: "many people are not aware that information of interest or of use to them is generally available in libraries." "as a reference librarian, one has accepted the responsibility of answering questions pleasantly, speedily, accurately, and, as nearly as possible, to the satisfaction of the inquirer." (thomas, p.95) change "library" to "computing center" and change "reference librarian" to "user consultant" and you have a goal statement suitable to any computing center help desk. a computing center help desk is very like a library reference desk. both computing consultants and reference librarians answer users' questions. both may use encounters with users to teach them how to use library or computing center resources. both struggle with similar problems of inarticulate users, inadequate resources, and burned-out staff. several factors exacerbate the communication difficulties. novice computer users don't know the jargon. 
they are scared of computers, or angry because they don't like feeling ignorant. they don't want to become computer experts; they just want to do their work. often they feel pressured by deadlines. they don't have time to become acquainted with their system; they need to produce work now. librarians face similar difficulties. library users don't understand the classification systems. they don't want to become librarians. they just want information and resent having to ask for help. just as novice computer users don't want to admit ignorance, library users don't want to admit that they don't know how to use the library. often they suffer the same time pressures; their paper is due tomorrow and they need information right now. computing center consultants can adopt most reference interview techniques unchanged, using reference librarians' work as a model to train and evaluate help desk staff. however, there are some obstacles to effortless use of the reference interview. some consultants may feel that the differences between reference librarians and computer user consultants outweigh the similarities. others would argue that we cannot define the reference interview clearly enough to apply these techniques in any structured way. finally, reference interview training and evaluation will demand time. these objections are valid. computing centers are different from libraries; computing center consultants will not use reference interview techniques in exactly the same way as reference librarians. the reference interview does sometimes seem to be used as no more than a catch phrase for a grab bag of communication tips. we cannot effectively teach reference interview techniques if we have no more than a list of tips. even if we can define the reference interview well enough to teach its use, we must add this training to consultants' already overbooked schedules. evaluating consultants' use of reference interview techniques will require close observation of consultants' work and new methods of evaluation. however, none of these objections outweigh the benefits of the reference interview. people do come to the library help desk for different reasons than they come to the computer center help desk. library users usually have "what" questions; they want specific pieces of information. computer users usually have "how" questions; they want instructions on how to accomplish some task. library users want facts: journal citations, articles, and books on a particular subject. computer users want instruction, problem fixes, disaster recovery, and help with diagnosing problems. however, even when "typical" computing center users ask different questions than "typical" library users, consultants can use the same reference interview techniques that librarians use. reference interview techniques help the consultant control the interview in order to more easily discover the specific nature of each user's question. these techniques apply equally well whether users want quantitative or procedural information. the amorphous nature of the reference interview is a more serious obstacle to training and evaluation. the "reference interview" can degenerate to nothing more than a label for a bunch of communication buzzwords. computing center staff cannot apply reference interview techniques with measurable success unless they work from a clear definition. the essence of the reference interview is that the information provider controls the interaction in order to best meet the needs of the person who needs the information. 
i divide the reference interview into three parts: approach, dialogue, and closure. during approach, users decide who to approach and how to state their question. consultants can control the environment to make it easy for the user to find help, again taking suggestions from library research. signs should identify the help desk. both student assistants and full-time consultants may wear name tags or badges. help desk staff should appear "interruptable" --- neither completely idle nor completely immersed in some project. reference interview articles often focus on the many communication techniques used in the second phase of dialogue between the user and the information provider. at the university of iowa personal computing support center, we emphasize three techniques with our student workers: attending to the user, prompting for more information, and checking for mutual understanding. during the dialogue, consultants depend on their general knowledge to recognize common problems and make connections which the novice user cannot. if unchecked by careful use of the reference interview, this ability to jump to the correct conclusion betrays both the consultant and the users. dazzled users think that consultants magically solve their problems with almost no information. consultants may jump too fast, giving users the right answers to the wrong questions. careful attention to the user and checking for mutual understanding minimizes these problems. the consultant must close the interview so that both the consultant and the user have the same understanding. during closure, the consultant makes sure that the user understands the answer and that both have the same expectations of any follow-up. in an ideal world, every reference interview ends with a satisfied user. in our imperfect reality, some users will not accept our answers. consultants can borrow techniques from reference librarians to work with problem users. the last and most serious obstacle to using reference interview techniques is the need for new methods of training and evaluation. somehow consultants must find time to develop training, schedule training sessions, and evaluate their use of these techniques. with new software and hardware, consultants can accomplish much on their own if they are just given machine, manual, and disks. reference interview techniques require more structured training. again, we can draw upon the experience of reference librarians and use a variety of methods to train consultants to use reference interview techniques. combinations of case studies and role-playing work well for many, especially if combined with videotaping. concentrating on one technique at a time seems to work better than trying to teach every possible skill; this also allows shorter sessions which you can more easily fit into busy schedules. applying reference interview techniques will help consultants improve their work with users. consultants must also devise new methods of evaluating their work in order to prove to themselves and to their management that the time spent on reference interview techniques is worthwhile. in most computing centers, consultants religiously record each contact with a user. their managers want to see numbers --- preferably increasing numbers. bigger numbers indicate that the consultants have accomplished more and thus deserve a bigger share of the budget. 
the consultants know that other departments within the computing center supply statistics to the administration, and that the consultants had better supply statistics that are just as good. thus consultants keep logs to mark each time they talk to a user. they break down contacts into telephone and walk-in; they code each for the subject of the question and categorize each by status of user. consultants may decide to record not just each person, but each question asked by each person. after all, just one person may ask about using footnotes in word, transferring macintosh files to a dos diskette, and recovering erased files. shouldn't that count as three? they also note the start and stop time of each contact. each quarter the consultants transform their tickmarks into glowing reports of the multitudes who have benefited from the computing center help desk. however, these reports to the managers don't do that much for the consultants. the quantity of work may not impress the managers. some managers who find the statistics impressive may keep their impression to themselves, giving no reward to the consultants. those managers who do reward the consultants often give them rewards that increase the workload. they find money in the budget for more student workers --- who must be trained. they provide a local area network --- which must be installed and maintained. they give everyone new software --- which the consultants must somehow find time to learn and then use their learning as the basis for more consulting and classes. even when these quarterly stacks of statistics do bring rewards, they do not help the consultants or the managers truly evaluate their work. numbers of users seen does not invariably correlate with numbers of problems solved. measuring numbers alone tells the computing center much about increasing demand and peak usage times, but it tells nothing about the quality of computing center services. consultants must devise new methods to evaluate their work, educate their managers to use these new evaluation methods, and generate their own rewards for effective work. in order to develop adequate measures, consultants must decide why they consult and exactly what they do when they consult. they should know how their private purposes fit the computing center's official statement of purpose. many computing centers set forth the goal of greater productivity through computing and their consultants spread the gospel across campus. many consultants encourage users to become self-sufficient --- either to see the fledglings fly away from the nest or to shoo away the pests. some encourage dependency, protecting each tender novice from the awful complexities of computing. some consult only because they happened to find this job and they don't want to search for another right now. as they consider their reasons for consulting, the consultants should observe their daily work. consulting services vary widely; within one computing center the individual consultants provide different services, which should be evaluated differently. one consultant may teach no classes but spend twenty hours per week visiting departments to provide on-site assistance. each consultant should list his or her activities --- writing articles, teaching seminars, help desk services, etc. once they have developed a statement of purpose and a list of activities, the consultants can devise methods to evaluate their work. each activity calls for a different tool of evaluation. 
each tool should be chosen to answer two questions: "does x (a particular technique) accomplish y (my reason for consulting)? how well does it accomplish it?" for example, consultants may use feedback questionnaires and tests to answer these questions about their seminars. obviously, the simple recording of numbers takes less time than this meditation on purpose, listing of activities, and then evaluation of activities according to how well they accomplish the purpose. consultants often feel they have barely enough time to note one user contact before the next approaches. few managers encourage overworked consultants to take time for detailed evaluation. however, consultants can develop meaningful evaluations of their work a little at a time. while they continue to supply their managers with numbers, they can also select one activity to evaluate for a set time. the results of these evaluations, added to the usual statistics, will show their concern with quality as well as quantity. the consultants should demonstrate that these evaluations help the computing center accomplish its formally stated goals. this will encourage their managers both to see the value of evaluation and to appropriately reward the consultants. the evaluations point out where consultants need to improve their work and where they should receive rewards. part of the consultants' reward will lie in knowing that they have reached users. few consultants derive much pleasure from knowing that they talked to thirty users per day this quarter compared to twenty-two per day last quarter. most want to know they make a difference to at least some of those users. merely recording the number of users who came to the help desk does not tell the consultants whether any of those users left the help desk happier than they came to it. evaluation provides proof that the consultants actually do help users --- proof that will provide much-needed satisfaction to the consultants. beyond the intangible reward of personal satisfaction, those consultants who can demonstrate progress toward stated goals are best equipped to suggest suitable rewards to the management. if the consultants can link these rewards to future progress --- the three-day seminar in boston that will even further improve their consulting, for example --- so much the better. as we hustle to keep track of continuous updates and the constant traffic of users, we can lose sight of our goals. finding new ways to work with users and new ways to evaluate our work will help us find new enthusiasm for our work. kristin m. evenson acm computing week '95 stuart zweben the pioneer days of scientific computing in switzerland scientific computing was established in switzerland by e. stiefel, assisted by h. rutishauser, a.p. speiser, and others. we cover the years from the foundation of the institute for applied mathematics at the eth in 1948 to the completion of the ermeth, the electronic computer built in this institute, in 1956/57. in this period, stiefel's team also solved a large number of real-world computational problems on another computer, zuse's z4, rented by the institute. along with this work went major contributions to numerical analysis by rutishauser and stiefel, and rutishauser's seminal work on compiling programs, which was later followed by his strong commitment to algol. we have tried to include some background information and to complement h.r. schwarz's article [scw81] on the same subject. m. 
gutknecht final report (abstract): curricula for two-year degree programs in computer sciences, and computing and engineering technology karl j. klee john impagliazzo using qualitative research software for cs education research research in computer science education has become more and more important in recent years. both quantitative and qualitative research methods yield interesting results, but most researchers in our field rely on software for only the quantitative methods. this paper describes one of several packages on the market that support qualitative research methods. these packages make qualitative research less unwieldy and provide the researcher with excellent tools for doing far more detailed analysis of the data than is possible by hand. the data for such analysis may come from a variety of sources including on-line or written tests, programming assignments, and exit interviews for assessment purposes. the results of qualitative research can produce a better understanding of the larger picture in the environment under study. m. dee medley the 1987 - 1988 taulbee survey the computing research board's 1987-1988 taulbee survey includes the latest statistics on production and employment of ph.d.'s and faculty in computer science and engineering. included also are departments offering ph.d.'s in computer engineering. david gries dorothy marsh self-assessment procedure xiv: a self-assessment procedure dealing with the legal issues of computing a self-assessment procedure dealing with the legal issues of computing jane p. devlin william a. lowell anne e. alger the costly implications of consulting in a virus-infected computer environment k. nunez t. gerace a. hartman consulting is not a matter of facts we can look at consulting as a function rather than as a personification. more and more, user services consist of a group of people each having their own different area of expertise. they may each have primary responsibilities in areas quite separate from user consulting. the assembling of a consulting staff can be seen as a three-part ongoing project. first, obtain the people who can bring the necessary variety of skills to the group. second, make plans to spread the knowledge each brings among the others in the group. third, the challenge that may bring success or failure to the entire effort, turn the "visiting experts" into consultants. we have recently had to face the situation of initiating a diverse group of people into the world of consulting. nancy o. broadhead risks to the public in computers and related systems peter g. neumann a discipline in crisis peter j. denning edward feigenbaum paul gilmore anthony hearn robert w. ritchie joseph traub information technology and mathematical modelling, the software crisis, risk and educational consequences bernhelm booss-bavnbek glen pate "i've got a quick question..." or, a dozen years of network user services the lead-in never changes, but the "quick" questions do. in 12 years at the merit computer network, the quick questions have tended to come in waves whose nature has changed dramatically as the nature of network usage and network services has changed. 
christine wendt a critical examination of separable user interface management systems: constructs for individualization carson reynolds about this issue… adele goldberg a capstone course for a computer information systems major this paper describes the current form and organization of humboldt state university's cis 492: systems design and implementation, the capstone course for the computer information systems (cis) major. since spring 1998, this course has combined a team programming experience on a large-scale database project with discussions of a software engineering classic, frederick brooks jr.'s "the mythical man month"[1]. students seem to find this combination valuable, and it is hoped that this paper can impart some useful ideas to others in designing a cis/mis capstone course. sharon m. tuttle acm - sigcse award speech karl v. karlstrom work-unit environments of information systems and non-information systems people: implications for end-user computing and distributed processing this study investigates whether work-unit environments differ for information systems (is) and non-information systems (non-is) employees. within the same occupational level, i.e., within clerical, technical-professional, and managerial level personnel, no differences are discovered in the overall work-unit environments of is and non-is employees. speculation is presented suggesting that (1) the management of end-user personnel need not change as they become more involved in performing information systems tasks and (2) is and non-is personnel in distributed processing environments should be managed similarly. thomas w ferratt larry e short an updated information systems curriculum osvaldo laurido-santos towards a definition of computing literacy for the liberal arts environment lynda sloan antony halaris gender differences in is career choice: examine the role of attitudes and social norms in selecting is profession the proposed research will systematically assess the causes of gender-related differences in attraction to information systems (is) as a career. we propose a theory of reasoned action (tra)-based model of intention to pursue an is career which incorporates work value congruence, attitudes, norms, and self-efficacy. undergraduate students in an introductory is course, which for many will be their first introduction to the field, will be surveyed to test this model and also the extent to which their perceptions change with exposure to is through the course. k. d. joshi kristine kuhn the payoff on the information technology investment stephen jonas telemachus an effective electronic marker of students' programming assignments (poster session) m. satrazemi v. dajdiielis adoption of is development methods across cultural boundaries gezinus j. hidding applying ethics to information technology issues the articles in this special section express a common theme: the use of information technology in society is creating a rather unique set of ethical issues that requires the making of new moral choices on the part of society and has spawned special implications for its members. technology itself is not the only, nor necessarily the most responsible, cause of these issues. all ethical questions arise initially out of human agency. technology, due to its capability to augment mental and physical powers of human beings, does stand in the role of a coconspirator. 
the lure of power-enhancing capabilities makes technology an inducer of sorts, a necessary but not sufficient underpinning to many of the ethical issues we face today. richard o. mason beyond traditional computer literacy a new approach to computer literacy is emerging, an approach that de-emphasizes the traditional overview of hardware and software and minimizes the teaching of traditional programming methodology. this paper describes the design and implementation of a literacy course intended to develop effective users of common applications software, including word processing, spreadsheets, graphics and database management. the paper continues by demonstrating how many academic computer science concepts can be effectively introduced using this approach. v. arnie dyck james p. black shirley l. fenton beginners program web page builders and verifiers martha j. kosa implementing computer-based systems there is an underlying philosophy behind much of the published literature that implementation is a rational, deterministic process and a belief that the people affected by the change will behave in a manner consistent with the implementor's view of the world. this results in simplistic analyses of change management such as alter's (1980) four suggestions: divide the project into manageable pieces, keep the solution simple, develop a support base and meet user needs. gordon, lewis and young (1977) note: "policy making may be seen as an inescapably political activity into which the perceptions of individual actors enter at all stages. in this case, implementation becomes a problematic activity rather than something that can be taken for granted as in the rational process model; policy is seen as a bargained outcome, the environment as conflictual and the process is characterized by diversity and constraint." gerard mccartney forum diane crawford computer user support degree program: an alternative to computer programming fred beisse supply, demand, and piracy eric zabinski guidelines for collaborative learning in computer science kathie a. yerion jane a. rinehart from zero to sixty in seven months: managing the change to a new computer system a new system?! the very words strike fear into even the most hardened of user consultants. such is the nature of system implementation and conversion that even grown analysts have been known to prefer the job of ibm jcl debugger to that of managing the change to a new (and heaven forbid), another manufacturer's mainframe. this presentation will outline some of the steps taken at the university of northern iowa to make the transition from one computer system (hp-2000) to another (harris 800). before i begin, i must clear up some common misconceptions: we are the university of northern iowa. mail addressed to ohio or idaho is not likely to reach us; and yes, there are three state universities in iowa! terry a. ward a semi-automated approach to online assessment desirable though fully automated assessment of student programming assignments is, it is an area that is beset by difficulties. while it is not contested that some aspects of assessment can be performed much more efficiently and accurately by computer, there are many others that still require human involvement. we have therefore designed a system that combines the strengths of the two approaches, the assessment software calling upon the skills of the human tutor where necessary to make sensible judgements. 
the technique has been used successfully on a systems programming course for several years, and student feedback has been supportive. david jackson the effects of organisational restructuring on "survivor" stress in information systems professionals derek smith alan wenger yokow quansah assessing risks in two projects: a strategic opportunity and a necessary evil janis l. gogan jane fedorowicz ashok rao deja vu or returning to graduate school at age 52 paul haiduk hardware protection against software piracy a system that prevents illicit duplication of proprietary software is suggested. it entails the customization of the programs for each computer by encryption. the use of a public key cryptogram for this purpose means that anyone can customize programs, but neither other programmers nor the people having complete access to the target computer can obtain copies that will run on other machines. a possible implementation of the system is considered in some detail. it is based on a hardware security unit that is attached to the computer and that decrypts and obeys some parts of the program. tim maude derwent maude the retention of women in the computing sciences sharon n. vest janet j. kemp the new star in organizations: the chief knowledge officer and the knowledge audit function jay liebowitz resources, tools, and techniques for problem based learning in computing ainslie ellis linda carswell andrew bernat daniel deveaux patrice frison veijo meisalo jeanine meyer urban nulden joze rugelj jorma tarhio the future of programming - are fundamental changes in computer science programs coming? (panel) hal hart jim caristi robert dewar mark gerhardt drew hamilton christopher haynes sam rebelsky the chi 98 doctoral consortium deborah a. boehm-davis an unlevel playing field: women in the introductory computer science courses marian gunsher sackrowitz ann parker parelius ntu computer engineering program frederic j. mowie david g. meyer philip h. swain satisfaction of it professionals with employment arrangements in traditional and virtual contexts retention of it personnel is a difficulty encountered by many organizations today, particularly for those who work in virtual organization contexts. however, the fit between it employees' preferred work arrangements and their current work arrangements and this fit's impact on it employees' intention to stay has been an under-researched phenomenon. this paper develops a conceptual framework, based on rousseau's [38] psychological contract theory, to address the retention concern. additional literatures that enhance the conceptual framework suggest that the antecedents of fit, besides preferred employment arrangement, include it employee stages and anchors, it competencies, and organizational factors, including it human resource practices. it human resource strategy, in turn, impacts the organizational factors. the conceptual framework incorporates additional antecedents of intention to stay, including other employment opportunities, virtual team factors, and individual factors. the paper also suggests an initial empirical study to explore the concept of fit and its antecedents as an initial point of departure for the examination of the conceptual model. thomas w. ferratt harvey g. 
enns jayesh prasad distance learning: a chi 97 special interest group lisa neal judith ramsay jenny preece the ethics of machines which mimic people in presenting machines as "intelligent" we produce an illusion which may be beneficial, may lead to breakdown in the interaction, or may be used by parties to deceive and exploit others. the following quote (from a researcher at a major computer firm) is a bit extreme, but makes the concern clear. "from my point of view, natural language processing is unethical for one main reason: it plays on the central position which language holds in human behavior. i suggest that the deep involvement weizenbaum found some people to have with eliza, is due to the intensity with which most people react to language in any form. when a person sees a linguistic utterance in any form, the person reacts, much as a dog reacts to an odor. we are creatures of language .... since this is so, it is my feeling that baiting people with strings of characters (tokens) clearly intended by someone to be interpreted as symbols, is as much misrepresentation as would be your attempt to sell me property for which you had a false deed. in both cases, an attempt is made to encourage someone to believe that something is a thing other than what it is, and, only one party in the interaction is aware of the deception." this talk will examine the ethical and practical choices in developing machines which mimic human behavior. terry winograd the consultant as collaborator: the process facilitator model susan stager road crew: students at work john cavazos most computer organization courses are built upside down greg w. scragg students serving students: an effective lab management model brian johnston brian bourgon personality characteristics of information systems professionals jo ellen moore a proposed secondary education computer science curriculum (this paper has been accepted for publication in the proceedings, but the photo-ready form was not delivered in time. copies of the paper should be available upon request at the presentation.) stephen w. thorpe paul d. amer what it labor shortage?: redefining the it in "it professional" cathy beise martha myers getting started with parallel programming dean sanders janet hartman down the road: getting women involved sara carlstead jumping on the nii bandwagon many requests for proposals have been issued since the last issue of this column appeared six months ago. we first briefly touch upon some recent developments along the policy/legislation front concerning nsf, arpa, and hpcc. we then recap the recent requests for proposals from arpa, nsf, air force, nasa, and army. xiaolei qian the computer scientist as toolsmith ii frederick p. brooks standard setting and consortium structures andrew updegrove integration of usability issues within initial software development education. (it's all about the user dummy!) xris faulkner fintan culwin do it themselves robert boyle you have to run faster just to stay in the same place jerry martin predictions of the skills required by the systems analyst of the future t. d. crossman information technology support services: crisis or opportunity? j. michael yohe computer science, home computing and distance learning - the largest computer science course in the world? gordon davies jenny preece should a computer literacy class be the first required course for data processing majors barbara doyle who or what is leibniz? andreas goppold job hunting in the modern market h. k. 
hodge the future of software pearl brereton david budgen keith bennett malcolm munro paul layzell linda macaulay david griffiths charles stannett concepts in the classroom, programming in the lab computing curricula 1991 calls for breadth in the undergraduate computer science curriculum. many authors have recommended structured laboratories for computer science. this is a report on a project to combine these goals in an introductory sequence of courses. we present two courses in which all programming is done in a laboratory environment, leaving the lectures for more conceptual material that ranges over a broad selection of topics. student reactions to this project have been very positive---we have increased the number of students continuing with the major, as well as student satisfaction with the courses themselves. robert geitz a popular new course scott badman ask jack: career q&a jack wilson towards integrated safety analysis and design there are currently many problems with the development and assessment of software-intensive safety-critical systems. in this paper we describe the problems, and introduce a novel approach to their solution, based around goal-structuring concepts, which we believe will ameliorate some of the difficulties. we discuss the use of modified and new forms of safety assessment notations to provide evidence of safety, and the use of data derived from such notations as a means of providing quantified input into the design assessment process. we then show how the design assessment can be partially automated, and from this develop some ideas on how we might move from analytical to synthetic approaches, using safety criteria and evidence as a fitness function for comparing alternative automatically-generated designs. p. fenelon j. a. mcdermid m. nicolson d. j. pumfrey proposed criteria for accreditation of computer information systems programs a working group, representing acm, dpma, and ieee-cs, was formed to draft a set of guidelines, including criteria, for the accreditation of computer information systems undergraduate programs. the guidelines and criteria are summarized below. faculty: typically a minimum of 4 faculty, with 3 full time, are needed. normally, 25% of a faculty member's time should be available for scholarly activity and development. teaching loads should not exceed 12 hours and should not exceed 4 courses with 2 preparations. curriculum: curricula assume a 120-semester-hour, four-year baccalaureate program. the program should consist of approximately 30% computer information systems, 20% business, at least 40% in general education and up to 10% other. forty to sixty percent of the cis portion should cover a broad core that includes a) computer concepts and software systems, b) program, data, and file structures, c) data management, d) data and computer communications, and e) systems analysis and design. students should be exposed to a variety of programming languages and be proficient in one structured language. the remaining courses should cover breadth and depth. resources: appropriate computing facilities must exist for students and faculty. adequate software and documentation must be available. students: established standards and procedures must ensure that graduates have the requisite qualifications to function as cis professionals. institutional support: adequate support must be provided for the faculty, department office administration, and library.
faculty support includes leave programs, reasonable teaching loads, competitive salaries, and travel support. robert cannon john gorgone tom ho john d. mcgregor awareness of the social impact of computers in the ussr c. dianne martin integrating recent research results into undergraduate curricula (panel): initial steps bill marion keith vander linden roberta sabin judy cushing penny anderson life expectancy of standards (panel) stephen r. pollock the problem of producing teachers with computing expertise within the school system (this paper has been accepted for publication in the proceedings, but the photo-ready form was not delivered in time. copies of the paper should be available upon request at the presentation.) annie g. brooking critique and evaluation of the cal poly/dpma model curriculum for computer information systems the authors have been intimate observers of a significant movement within computer education. this paper presents a history of this curriculum project and an assessment of its future influence. the forces which mandate the focus of attention in the area of data processing education are identified and the nature of the response evoked from this project is analyzed. the paper reveals the need for a better understanding of curriculum development enterprises, and the necessity to promote greater cooperation both within the academic community and within the computer industry to ensure that useful curriculum materials will emerge. william mitchell james westfall turnover among dp personnel: a causal analysis four variables suggested by previous research were included in a proposed causal model of turnover among computer specialists. when tested using correlational analysis, three of the variables were significant inverse predictors of turnover. a fourth variable was also inversely related to turnover, but not significantly so. path analysis supports the proposed causal model and suggests that this fourth variable affects turnover indirectly through its influence on the other model variables. kathryn m. bartol the information age and history: looking backward to see us richard j. cox the landmark microcode legal decision g. r. johnson personal computers and data processing departments: interfaces, impact and implications satish chandra intellectual property rights and computer software william y. arms the archimedes project: providing leverage for individuals with disabilities through information technology betsy macken principles of computer charging in a university-type organization jack p. c. kleijnen anton j. van reeken power: a tool for quantitative evaluation of software project effectiveness in this paper a tool for the quantitative evaluation of software projects is presented. this method constitutes the project observation workbench and evaluation reporter (power). a representative project evaluation with power is presented to demonstrate its application. michael w. evans louis m. picinich unclear on concept: anarchy and the internet m. e. kabay against software patents corporate the league for programming freedom an automated management system for applications software the applications group at the academic computing center maintains approximately 100 applications software products on our cdc 170/750 cyber computer. we also maintain some 30 products on each of five instructional digital equipment vaxes. most of these we receive from vendors, some have been developed locally, and some of these we distribute to other sites.
in order to manage the workload, we have developed over a period of time an automated product management system, which we call hellas. the basic philosophy behind hellas is that a software maintenance system will be successful if it is simple to use, has a straight-forward methodology, and fits our computing system management requirements. for the most part, this paper will describe the hellas system as it is implemented on the cyber computer. the features, however, have their analogs on the vax computers as well. john jacobsen computer science education for industry software personnel (panel session) this panel discussion was convened to share experiences and ideas on an issue which is critical to our profession. advances in computer science hold forth a promise of increased quality of software development in industry. yet the industry is composed of a large number of very experienced personnel lacking this computer science knowledge. thus the issue is how these experienced members of our profession can be taught computer science topics in a way which promotes the usage of the technology in industrial environments. thomas m. kraly the emerging role of standards bodies in the formation of public policy timothy schoechle teaching ethics in is courses (abstract): everything you always wanted to know but were afraid to ask ernest a. kallman john p. grillo computer science in the high school (abstract): what computer professionals need to know and do susan m. merritt computer fluency and the national academies' report (panel discussion) kurt f. lauckner j. philip east kenneth l. modesitt carol w. wilson is computer science education in crisis? hyacinth s. nwana images and reversals: missed opportunities and "dispensable erudition" thomas g. west new metaphors for understanding the new machines though frequently misunderstood and dismissed as irrelevant ornamentation, metaphors are useful tools for writers in the computer industry. metaphors are especially useful for presenting information about new technologies because they help readers grasp whole concepts immediately and because they illuminate concepts that are difficult to communicate otherwise. it is important to distinguish this use of metaphors from their use in literature and advertisement. participants in the workshop will follow procedures for investigating the suitability of several metaphors. they will analyze the power and appropriateness of metaphors currently used in the computer industry by applying seven criteria: is the metaphor useful? understood? close? illuminating? acceptable? economical? memorable? richard m chisholm designing software to be used up and protecting it from pirates byron k. ehlmann sivil: a true visual programming language for students this paper discusses the advantages and disadvantages of using sivil (simple visual language), a new visual programming language in development at canisius college, to teach novice programmers to think more deeply about programming. in consideration of how sivil meets its goal of making programming easier for beginners, the paper will look at bloom's taxonomy, specifically at bloom's levels of learning and how a visual language might aid or speed up the learning curve for students endeavoring to join the world of programming. timothy f. materson r. mark meyer computing in south africa: an end to "apartness"? s. e. 
goodman proposed joint acm/dpma/ais undergraduate information systems degree curriculum model (abstract) john werth john gorgone gordon davis david feinstein bart longenecker george kasper integrating multimedia and telecommunications laboratory experiences into teacher education harriet g. taylor teaching with and about computers in secondary schools henry jay becker free-ridership in the standards-setting process: the case of 10baset in many cases, standards have public goods attributes. as a result it is important to consider how the development costs are covered. it is well known that public goods, due to their nonexclusionary nature, are subject to free riders. we consider free-ridership in standardization in general, and examine the case of one standard, ieee 802.3i (10baset) in particular. we show that free-ridership existed in the development of the 10baset standard, and in the subsequent product market, by specifying the criteria for the existence of free-ridership and by providing the necessary data to show that such an issue actually exists. we discuss the consequences of free-ridership and consider the implications for the standards development process in general. martin b. h. weiss ronald t. toyokuku enforcement strategies for computer use violations at a public university jacqueline craig protecting privacy marc rotenberg teams need a process! this paper begins with a discussion of the importance of software development and the problems encountered by those trying to work effectively on software project teams. it is argued that for students to be effective in working on teams they need the discipline and organization offered by a rigorous team software process. the author describes his experiences in using the team software process (tsp) to teach an introductory course in software engineering. the structure and key elements of the process are presented, along with techniques used in selecting and forming teams. the paper examines the tsp quality assurance features and finishes with a discussion of the techniques used to acquire feedback and to evaluate the effect of the tsp on student learning. thomas b. hilburn the lameduck sig chairman's message: a parting shot at accreditation norman e. gibbs an intellectual property course for cs majors john p. kozma thomas dion india: is it the future? r. raghuraman advantages and disadvantages of decentralized computing services as computers expand further and further into all academic communities, so must the user services group. however, this expansion places a burden on a centralized computer center, and therefore, it is not surprising to see a trend developing in many large universities where the primary user services function is transferred from the main computer center to colleges and departments within the university. this results in the main computer center providing secondary user services and the college or department providing primary user services. such is the situation at the university of south carolina. the university is the largest in the state of south carolina with approximately 35,000 students enrolled. for many years certain computing services functions have been the responsibility of the various colleges within the university, although the main computing center provides some short courses on programming languages, programming packages, and utilities, plus a newsletter, limited consulting, and technical support.
the bulk of the teaching, consulting, and dissemination of information is the responsibility of the individual colleges within the university. this paper will discuss the problems encountered in establishing an individual college computing center as well as its advantages and disadvantages based on our experiences at the college or department level. deborah a. truesdale business planning in technical documentation organizations business planning is a process of deciding what an organization will do to be successful, and how it will do it. business planning can benefit any organization that wants to control its future and to succeed. however, a literature review and some practical experience at at&t bell laboratories suggest that technical documentation organizations have virtually ignored the application of business planning, both as a means of creating their futures and as a means of advancing their profession. the business planning process involves creating an organizational mission; diagnosing the organization's strengths, weaknesses, opportunities, and risks; setting goals; developing objectives and action plans; developing a financial plan; and writing, sharing, and implementing the plan. following these steps in the bell labs publication center, we have seen our budget and staff grow and our client base diversify. d. mongeau peace, love, and the rise of ibm mark midbon discussion topic: what is the old security paradigm? steven j. greenwald technology literacy for undergraduate liberal arts students: a course in communications systems and technology a. michael noll a case for data-driven testing this paper describes a novel approach to the on-line assessment of large groups of students, in which it may be desirable to maintain common questions between the groups. it is clear from the literature that computer-based assessment has the potential to dramatically reduce the effort involved in testing and marking, but problems arise where the cohort of students is larger than the number of available computers. however, the opposite situation is often true in practice, due to the perceived need to design multiple tests. the solution described here uses a small computer laboratory (20 machines) to administer a test to a series of groups of students in existing lab sessions. each group receives the same set of questions but the data to which the questions apply, and hence the test answers, vary from group to group. the data from tests that have been applied to students is analysed to determine whether discussions with early candidates have influenced the performance of students in later testing sessions. tony greening glenn stevens david stratton a scandinavian view on the acm's code of ethics bo dahlbom lars mathiassen when an advantage is not an advantage discussion of the merits and shortcomings of affirmative action (aa) has raged at all levels and in many forums and has been the concern of many policymakers, including president clinton. notably absent from the discussion is the perception of aa, and the effect of the angry backlash on women and minorities. recently, the question of whether women enjoy an "unfair advantage" was posted electronically on systers, a private organization of over 1,800 professional women in the male-dominated field of computing. the 42 responses contained strong personal stories from women at every level of professional growth who had all dealt with the issues of gender bias and aa at each step of the way. lisa m.
zurk barbara simons rebecca parsons dawn cohen a tale of two shortages: an analysis of the it professional and mis faculty shortages there are hundreds of articles, numerous governmental research papers and congressional acts that are focused on attending to a shortage of information technology (it) labor. many predict a crippling effect from such a shortage, and a minority opinion suggests that there is no shortage. in part, this difference may reflect a lack of conceptual clarity of what constitutes a shortage. this paper, then, examines the nature of what shortages are, how shortages might look in the context of it professionals and mis faculty, and how we might be able to ascertain the nature of the it labor and faculty shortages. the outcome is that by understanding the specific nature of the shortage, we can then hope to provide a remedy. timothy babbitt seminar in computer ethics: a one semester hour course james t. streib a study of organizational structures in academic computing centers this paper reports the findings of an informal survey of organizational structures in several academic computing centers. small, medium, and large centers were studied and an attempt was made to identify the advantages and disadvantages of the various structures. not unexpectedly, the most serious criticisms of their own structures came from small computing centers, where severe shortage of staff was listed as the biggest disadvantage. all sizes of computing centers claimed that dual responsibilities for both academic and administrative computing frequently caused severe organizational problems. successful organizational structures are most often due to a cooperative staff and good lines of communication. john e. bucher can kiwis fly?: computing in new zealand michael d. myers computer education for pre-college teachers (panel session) over the past few years we educators in computer science have become increasingly aware of the need for computer science education of pre-college teachers. the es3 committee is working on an organized approach to this need; however, as more and more high and elementary schools offer instruction in computer programming and related cs subjects, an immediate ad hoc approach is frequently necessary. each of the members of this panel has had direct experience with teaching computer science to pre-college teachers. they will share some of these experiences with us and in so doing answer some questions and raise others for those faced with implementing programs for pre-college educators. spotswood d. stoddard agent technology in computer science and engineering curriculum in recent years, agent technology has been used increasingly in information management and distributed computing. a cse curriculum that cultivates the knowledge of agent technology will increase the likelihood that the next generation of it professionals has the background needed to design and develop software systems that are scalable, reliable, adaptable, and secure. in this paper, we present the rationale and our practice in incorporating agent technology into the cse curriculum. we develop agent-based teaching materials and software modules and apply them to existing cse courses including artificial intelligence, parallel and distributed processing, networking, and software engineering. promising results have been obtained in teaching two graduate level courses using agent components.
yi shang hongchi shi su-shing chen support services for a heterogeneous environment james calhoun student internship program: capitalizing on our natural resources susan hazard david solomon the management of information systems occupations: a research agenda jon a. turner jack j. baroudi design issues in computer science education richard rasala does it help to have some programming experience before beginning a computing degree program? there is an intuitive perception that students with prior programming experience have an initial advantage in an introductory programming course, but that this advantage may decrease over the duration of the course if the style of programming is different from what the student has learnt previously. this paper reports on a study that indicates that students who have experience in at least one programming language at the beginning of an introductory programming course perform significantly better in the assessment than those with none, and that the more languages with which a student has experience, the better the performance tends to be. dianne hagan selby markham ucita: coming to a statehouse near you daniel uhlfelder in-room computing in akers residence hall at michigan state university diana e. d'angela creating an environment for student oriented computing (panel session) paul j. plourde james adair dennis m. anderson the chronology of y2k problems clement kent the problems of rapid information technology change john benamati albert l. lederer meenu singh factors affecting performance in first-year computing annagret goold russell rimmer cost effective training options jeanne cavanaugh an analysis of operating system behavior on a simultaneous multithreaded architecture joshua a. redstone susan j. eggers henry m. levy lydian (poster session): an extensible educational animation environment for distributed algorithms boris koldehofe marina papatriantafilou philippas tsigas the changing classroom - icts in 21st century education (poster) kate o'dubhchair sarah quilty perspectives: the future of computing: after the pc economist staff recommendations for the first course in computer science working under the curriculum committee of the education board of the acm, the committee has developed a detailed analysis of the requirements of the first course in computer science as described in the current acm curriculum guidelines. the report includes the material which should be included in such a course and also recommendations for effective presentation of the material. discussion of the computer laboratory and implications of increased exposure to programming experiences by incoming students is included. elliot koffman philip muller caroline wardle results of a pbl trial in first-year computer science tony greening judy kay jeffrey h. kingston kathryn crawford question time: organizational shake-ups john gehl suzanne douglas peter j. denning robin perry a successful five-year experiment with a breadth-first introductory course this paper discusses the implementation and evolution over a five-year period of a breadth-first introductory computer science course which has both lectures and structured, closed laboratory sessions. this course significantly increased both the retention and passing rates for the next computer course (which emphasizes programming), and computer science graduation rates. donald bagert william m. marcy ben a. calloni understanding the stabilizing effects of it on organizations (abstract) martin boogaard marleen h.
huysman what makes things fun to learn? heuristics for designing instructional computer games in this paper, i will describe my intuitions about what makes computer games fun. more detailed descriptions of the experiments and the theory on which this paper is based are given by malone (1980a, 1980b). my primary goal here is to provide a set of heuristics or guidelines for designers of instructional computer games. i have articulated and organized common sense principles to spark the creativity of instructional designers (see banet, 1979, for an unstructured list of similar principles). to demonstrate the usefulness of these principles, i have included several applications to actual or proposed instructional games. throughout the paper i emphasize games with educational uses, but i focus on what makes the games fun, not on what makes them educational. though i will not emphasize the point in this paper, these same ideas can be applied to other educational environments and life situations. in a sense, the categories i will describe constitute a general taxonomy of intrinsic motivation---of what makes an activity fun or rewarding for its own sake rather than for the sake of some external reward (see lepper and greene, 1979). i think the essential characteristics of good computer games and other intrinsically enjoyable situations can be organized into three categories: challenge, fantasy, and curiosity. thomas w. malone a forum on the "computing curriculum 2001" report ursula wolz richard eckhouse gerald l. engel russell shackelford robert h. sloan business: the 8th layer: will the digital signature transform e-commerce? kate gerwig hackers: computer heroes or electronic highwaymen? richard c. hollinger a student system development diagrammer new approaches to system development depend heavily on graphical methods for representation of system requirements and the depiction of final design configuration. among various diagramming methods, data flow diagrams, entity-relationship diagrams and structure charts have received the most attention. an automated diagramming tool is created and is made available to students. this tool requires minimum hardware and does not require the use of licensed software to operate. it therefore can be made accessible to many students at different schools. iraj hirmanpour renewing the commitment to a public interest telecommunications policy corporate telecommunications policy roundtable motivation and performance in the information systems field: a survey of related studies martha e. myers the social aspects neglected in e-commerce samuel chong kecheng liu a contingency model for mapping chargeback systems to information system environments malini sridhar g. lawrence sanders case history: effects of operator corruption on system data integrity bradley j. brown programming for learning in mathematics and science this paper presents a learning-research based argument for the integration of computer programming into the science and mathematics curricula in pre-college education as well as college. students who generate solutions to science and mathematics problems develop a procedural understanding of the fundamental theories of these disciplines. students should be taught to use programming languages for these solutions for the same reasons they are taught the universal tools of arithmetic and algebra, and because only a computer provides the means to describe solutions in explicit, correct, and executable form.
programming should be integrated into all mathematics and science teaching from the earliest years. in precollege education, programming should be taught over a period of eight to ten years, rather than as a 6-12 week separate topic, and should be matched to the level of complexity of the science and mathematics content. sylvia a. shafto australia: services dominated; strong advanced applications and software industry philip k. robertson women in management - entering and growing (acm '81 panel session abstract) linda taylor will discuss how to get your first management position, the requirements, and the transition from using your technical skills to gain success to using management skills, emphasizing delegation, motivation, and training of others, to gain your success. nell cox's discussion focuses on breaking through the middle management plateau into top management. techniques for gaining "election" to corporate officer status, and the differences between middle and top management responsibilities and activities will be highlighted. peggy mckelly will provide perspective on the attitude and skills she has observed to be requirements or prerequisites for entrepreneurial success. starting your own business and the transition from employee/worker to owner/boss/worker will be featured, as well as capitalization and governmental reporting requirements. linda taylor commentary: the new wave of computing in colleges and universities: a social analysis rob kling introduction lynellen d. s. perry scholarly publishing in the age of information technology c. k. yuen session 2a: presentation - software development paradigm r. balzer f. l. bauer t. e. cheatham visions of education and the implementation of technology robert taylor heavy machinery for deforestation: establishing a dialogue between ghc and hugs for program transformation development functional programmers pride themselves on writing elegant and modular programs for the ease with which such programs can be read, updated, and extended. in the modular style of programming, larger programs are written as compositions of smaller mix-and-match components that communicate via intermediate data structures. unfortunately, however, intermediate data structures are a major source of inefficiency in functional programs, consuming both unnecessary time and unnecessary space. as a result, the development of techniques to remove intermediate data structures from --- or deforest --- functional programs is important in overcoming the inefficiencies of functional languages while reaping their benefits. aaron j. wheeler
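as a minimal illustration of the intermediate-structure problem this abstract describes (a sketch only, assuming nothing beyond the standard prelude; it is not the ghc/hugs transformation framework the author discusses, and the function names are invented), the haskell fragment below shows a modular pipeline that allocates an intermediate list, next to an equivalent deforested version that does not:

-- a sketch of deforestation: removing an intermediate list by hand
module Main where

-- modular version: map builds an intermediate list that sum then consumes
sumSquares :: [Int] -> Int
sumSquares = sum . map (\x -> x * x)

-- deforested version: a single fold, so no intermediate list is ever built
sumSquares' :: [Int] -> Int
sumSquares' = foldr (\x acc -> x * x + acc) 0

main :: IO ()
main = do
  let xs = [1 .. 10]
  print (sumSquares xs)   -- 385, via the intermediate list
  print (sumSquares' xs)  -- 385, with the intermediate structure fused away

compilers such as ghc attempt this kind of fusion automatically; the sketch is only meant to show what a deforestation transformation removes.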
the use of computers in belgian schools h. christian demonstration of a videodisc system for the classroom r. c. brandt changing expectations of our users with a changing user audience, how will expectations change? • users will know what computers can do and will indeed expect service, not just "hope" for service. • users will expect all services 24 hours each day, seven days a week. • users will expect us to come to them, any time of day or night. • users will expect personalized, prescriptive service, tailored to their individual needs. • users will expect infinite variety. • users will expect access to services anywhere, from anywhere. • users will expect near-total transparency. • users will expect user control of all aspects of service---or, at the very least, the illusion of user control. • users will expect service to be provided in modular, interchangeable units. • users will expect reduced costs. • users will expect bigger and better from us, and smaller and better. • users will expect service that is user-friendly according to the user's personal definition. • users will expect on-line, off-line, hands-on, and human service on a demand basis, no waiting. • users will expect a "matchmaking" service that puts them in touch with others of similar interests. in short, users in the coming decade will continue to expect the impossible. and user services personnel will continue to labor to give them the impossible. rita seplowitz saltz learning process maturity (poster session) errol thompson development of an entry level computer applications course in a small liberal arts college (abstract only) students entering colleges typically have a very wide range of problem-solving abilities. these students also have a broad range of experiences in programming, although this second range may be somewhat more narrow. some students have weak problem-solving skills and have done virtually no programming, while other students with better problem-solving skills may have programmed in one of several languages. such diversity has considerable impact upon first courses in computer science in colleges. this talk will discuss several combinations of credit and non-credit courses and workshops which can be helpful in meeting the needs of incoming students. for each approach, various advantages and disadvantages will be identified. m. ware h. doerr r. pierce m. bielby a. zipp a. tiburzi hardware component of an upper level computer science curriculum this report elaborates on the hardware requirement recommended in north florida. brief course contents, minimal laboratory facilities, key experiments and laboratory management are described. y. s. chua c. n. winton a programming project: trimming the spring algorithm for drawing hypergraphs graph drawing problems provide excellent material for programming projects. as an example, this paper describes the results of an undergraduate project which dealt with hypergraph drawing. we introduce a practical method for drawing hypergraphs. the method is based on the spring algorithm, a well-known method for drawing normal graphs. harri klemetti ismo lapinleimu erkki mäkinen mika sieranta entity-relationship diagrams and english: an analysis of some problems encountered in a database design course the simplicity and clarity of the entity-relationship approach recommends its use as a tool for teaching database design. nonetheless, the approach does not appear to be problem-free. analysis of student entity-relationship diagrams for two database design projects reveals a tendency students have to model english sentences and to use english syntax to guide the modeling process. the paper discusses why this may be the case, and how it may be avoided. judith d. wilson organizing web site information: principles and practical experience as web sites continue to grow in their complexity, one of the most important usability design decisions is how to structure the web site topic hierarchy. this decision lays the groundwork for designing other aspects of the site, e.g., the home page table of contents, and for categorizing new documents in the topic structure over the life of the site. organizing web sites is a timely topic as evidenced by the recent spate of publications on this topic [1] [2] [3] and by nist's recent release of a tool, webcat, to help users participate in organizing their web site.
more significantly, the organization is probably the limiting factor on success for web sites that provide useful information. if anything is clear about the organization of web sites, it is that there is no single best way to go about it. different types of web sites seem to demand different approaches to organization. for example, a site geared toward product support might use a task-oriented scheme that steps users through a process of problem-solving. other sites, such as an online shopping site, might be organized according to categories of products, in order to allow efficient browsing. other sites might be organized based on an analysis of user roles. designers who must choose among these schemes must also choose appropriate analyses to inform the design of the web site structure. these include item clustering techniques, user performance at finding documents using various prototype web site organizations, studies of user roles and tasks, and analyses of the material to be included in the site. kate dobroth paul mcinerney sharon smith technetronic education: answers on the cultural horizon larry press from under the rubble: computing and the resuscitation of romania how does an entire country return to life? the dictatorship of nicolae and elena ceaucescu turned romania into a political, social and economic wasteland that was vividly pictured in the world media after the overthrow of the stalinist couple in december 1989. their regime left the country in shambles: with an old or mangled physical plant, a currency worthless abroad and nearly worthless at home, a legacy of secret police abuses, terrible health and sanitary conditions, and a highly centralized, warped, and inefficient scientific and industrial infrastructure. seymour e. goodman building and testing a causal model of an information technology's impact r. a. grant mass market computer for software development the mass market or appliance computer has become a plausible alternative to workstations for software development. while it is easy to use for software development, a different style of development emerges. by being truly personal, the mass market computer provides a focus for all the professional activities of a developer. the primary form of software used on such computers is shrink-wrapped software; this has profound implications for how the software is used, and the make-or-buy trade-off takes on an entirely new meaning. this paper describes the advantages of mass market computers. as an example, the paper presents how an environment, consisting of a network of mass market computers, has influenced the development of a novel source management scheme. w. morven gentleman marceli wein it programs and cs departments (panel session) computer science departments are experiencing increases in enrollments that rival the expansion in the early '80s. at the same time, many of these students do not seem interested in or equipped to handle the rigor of a traditional computer science program. they are coming into computer science with expectations about computer science education that are significantly different from what they are finding on campus. instead of courses on data structures and algorithms, automata, and operating systems, they want to learn visual basic and linux, and to obtain microsoft certification. cs departments' responses to these pressures differ widely. some take the approach that this is a temporary aberration and should have no effect whatsoever on degree programs in computer science.
some provide one-credit courses or seminars to discuss practical aspects of it not covered in the curriculum. others have started information technology programs to provide these students with an alternative program. in some cases, outside pressures (e.g., the university administration or external funding) have mandated that cs departments provide such programs. this panel will discuss these issues from varying perspectives. it will also provide some examples of it programs in cs departments to give us some idea of what is currently being done at other institutions. elliot koffman dorothy deremer chris mcdonald loren rhodes rebecca thomas a. joe turner curt white a new perspective on teaching computer literacy the first step in designing a college level computer literacy course is to define what is meant by computer literacy. unfortunately no consensus exists as to what the label "computer literate" should imply. the difficulty in both defining computer literacy and designing a satisfactory computer literacy course is evident in both the frequency of change and experimentation occurring at many institutions and in the forests of textbooks that exist for such a course. in this paper we present a definition of computer literacy that is independent of any specific application or application genre and introduce the notion of application literacy as distinct from that of computer literacy. finally we describe a course implementation strategy commensurate with our philosophy. michael goldweber john barr chuck leska simulation of the computer science curriculum at southeastern louisiana university (abstract only) to obtain a better estimate of course offerings required to run a four year curriculum in computer science, ten years of student flow was simulated. the simulation was based on the computer science curriculum at southeastern and its students' progression as of fall 1984. topics mentioned will include (1) model construction, (2) model validation, (3) balancing of course loads, (4) impact of curriculum changes, and (5) recommendations. david haas acm task force on k-12 education and technology teri perl "software piracy and protection" protection of the intellect was difficult before the advent of computer technology, and controlling and protecting software is even more complicated. this panel is made up of attorneys whose practices are concentrated in computers and related technologies, and will discuss the relation of the legal system to software protection from misappropriation - "piracy". peter s. vogel subsumption ethics david h. gleason don't shoot me, i'm famous! brock n. meeks programming for everyone: a rationale and some teaching strategies the importance of offering non-technical instruction about computers is discussed, and the importance of introducing such students to programming is argued. two teaching experiences of the author are described in detail: a high school programming class with no math pre-requisite, and a community college short course about computers. william j. wagner on the retention of female computer science students mei-ling liu lori blanc national science foundation funding bruce h. barnes doris k. lidtke history in the computer science curriculum john a. n. lee visions of education and the implementation of technology robert p. taylor charles babbage - the great uncle of computing? maurice v.
wilkes a peer review experience the following statements are taken from a brochure prepared by the acm sigucc peer review committee to acquaint university computer centers with peer review. peer review is a review of policies, procedures, and services provided in a specific college or university computing center. the goal of a peer review is to determine the strengths and weaknesses of an installation. the sigucc peer review committee maintains a list of valid peer reviewers, people actively involved in the delivery of computer services within an academic environment. working in cooperation with the host computing center (the center requesting the review), peer reviewers, acting as a group, study documents prepared by the host center, interview appropriate individuals at the center, and observe all facets of the operation. based on the information provided, the interviews, and the observations, the group prepares a report and gives it to the director. m. lloyd edwards viewpoint barbara r. bergmann mary w. gray teaching operating systems john o'gorman nerd work: attractors and barriers perceived by students entering the it field the purpose of this study is to investigate the factors that attract as well as discourage students who display an initial interest in it careers. factors influencing students one way or the other may include media images of it, role models, gender, and age. the study proposes to investigate these factors by focusing on the first programming course that is standard in many it curricula. demographic factors of students who enroll and either withdraw, pass, or fail the course will be analyzed, and then the same students will be surveyed to examine in greater depth their perceptions, both negative and positive, about it careers and it professionals. the results will provide educators and practitioners with information about myths that students hold that need to be dispelled, as well as the possible need to convey more realistic perspectives on the breadth and variety of it jobs. identification of these factors can help in recruiting as well as retaining a greater number of qualified students and ultimately increasing the supply of qualified it workers. martha e. myers catherine m. beise a faculty development program the computer and information science (cis) department at brooklyn college is unable to use full-time cis faculty for most sections of the introductory programming course. instead of using adjuncts, the administration of the college would like to use full-time faculty from overstaffed departments to teach these courses. in an attempt to certify these people, the cis department launched a two-part faculty development program in the summer of 1981. program i was an intensive introduction to computing for those with no previous experience. program ii was designed to enable those with some previous computing experience to teach computer programming. both programs were successful. the participants in program i gained quite a bit of programming experience. almost half of those in program ii have taught in the cis department in the fall of 1981, with others planning to do so next year. based in part on the information gathered from this model, the entire city university is offering a faculty development program in computer science as well. keith harrow how often should a firm buy new pcs? gerald v. post now hiring: sciengineer r.
raghuraman curriculum '68 revisited - an historical perspective (panel session) joyce currie little the career dynamics of information systems professionals: a longitudinal study a concern of many information systems (i/s) managers is the ability to attract, retain, and motivate their i/s professional staff, particularly those who have the potential to be high performers. however, many of the attitudes and attributes of these newly-hired employees are formed prior to entering the workplace; they are shaped by the students' college studies and by their personal backgrounds and characteristics. this study investigates the career progression of nearly a thousand i/s majors from 38 colleges and universities who have been tracked over several years as they complete their college studies and move into their first i/s jobs. it examines three aspects of this progression: career preparation and entry; work adjustment; and career outlook, both at present and long-term. a number of key variables are identified, derived both from theory and from the empirical results of the study. in particular, the balance between developing technical skills and "people" skills as individuals move through the early stages of their careers is given special attention. ephraim r. mclean stanley j. smits john r. tanner risks to the public in computers and related systems peter g. neumann acm code of ethics and professional conduct ronald e. anderson educational computing in new zealand r. polson educating the working computer scientist (a survey and analysis) due to the strong job market for recipients of bachelor of computer science degrees, fewer graduates are immediately continuing with post-graduate education. these individuals will experience a need for graduate education later, at a time when they are less able to attend school on a full-time basis. this will lead to an increasing demand for part-time graduate programs, especially near centers of computer technology. this paper reports the results of an informal survey of existing part-time degree-granting graduate programs in computer science. topics discussed include program format, student enrollment, source of faculty, and characteristics of students. ronald l. danielson could it have been predicted? as everyone knows, the computer industry is passing through a period of great change. i was speaking recently to a senior executive in one of the large companies vigorously working to meet the developing situation. the question he posed was: "could it have been predicted?" maurice v. wilkes digital copy protection and siggraph public policy www pages this column covers a policy issue i haven't discussed before: digital copy protection. i also have an update on plans for siggraph's public policy web pages and a procedural item. bob ellis the reviewer's view of your proposal janet hartman information for authors diane crawford soft system methodology and is research: development of a new is design paradigm evaluation approach peter kokol bruno stiglic viljem zumer backtracking christopher welty louis j. hoebel a comprehensive curriculum for it education and workforce development: an engineering approach f. golshani s. panchanathan o. friesen y. c. park j. j. song activating "black boxes" instead of opening "zipper" - a method of teaching novices basic cs concepts in this paper we implement and evaluate a unique instructional method for teaching basic concepts in computer science.
this method is based on introducing a new concept through activating "black boxes" that demonstrate the properties of the concept and its role in the computing process. we used the "black box"-based instructional method to teach basic concepts of computation to novice high-school students. later we conducted research aimed at assessing the effectiveness of this method on novice students' perceptions of basic concepts in computation. research results indicated that students who learned according to the "black box"-based approach gained a better understanding of the basic computational model, compared to students who learned according to the traditional "zipper" approach. bruria haberman yifat ben-david kolikant plagiarism in computer sciences courses (panel discussion) what constitutes cheating on programming assignments? what methods can be used to detect cheating? what should be done with offenders? how can cheating be eliminated in programming courses? these are all pertinent questions, but they are directed more towards treating symptoms rather than towards correcting some very fundamental problems. how can student interest in computer programming be stimulated? what can be done to reduce the frustrations inherent in writing and debugging code? what should be expected (and what should not be expected) of students taking introductory programming courses? how can individual performance and achievement be measured effectively for grading purposes? with critical problems of computer fraud and software theft increasing all the time, making computer science students aware of the ethics of the computer industry seems not only appropriate but necessary. philip l. miller william dodrill doris k. lidtke cynthia brown michael shamos mary dee harris fosberg software protection of microcomputer software (abstract only) the software pirate is ubiquitous. there is no way of estimating the quantity of stolen software. the suggestion has been made that part of the solution to this problem be through the teaching of ethics, and of the law. this reminds the writer of the same approach which was taken during prohibition, and is now being taken to suppress drug traffic, which cured neither problem. this author believes that human nature is reasonably constant, and that this "new" problem isn't new but can be considered a combination of human greed and the near certainty of a lack of punishment, either legal or in the form of community disapproval. the combination of greed + no punishment can only be stopped by the removal of one or both of the reasons for software theft. the chance of actual punishment is very low --- much less than that of getting a parking ticket. there is only one solution the author can see --- remove the profit factor. bootlegging wasn't stopped by the revenuers --- the author lived in north georgia for a time, where corn was measured in gallons per acre. it wasn't stopped by a lack of greed. it simply became unprofitable. the argument the author presents is that the only way the problem can be attacked successfully is for the software vendors to charge a low enough price that theft isn't worth the trouble --- at least a few software houses are taking this approach, with some success. t. f. higginbotham from the editor frank melanson introduction to computing via psi an introductory computer concepts course has been implemented using a personalized system of instruction. this experimental course is now in its fourth year of operation and is being taught on a regular basis.
it makes use of textual materials, audio tapes and a detailed study guide as well as a novel telephone communication system. thomas c. richards computer ethics and tertiary level education in hong kong this paper seeks to highlight some ethical issues relating to the increasing proliferation of information technology into our everyday lives. the authors explain their understanding of computer ethics, and give some reasons why the study of computer ethics is becoming increasingly pertinent. the paper looks at some of the problems that arise in attempting to develop appropriate ethical concepts in a constantly changing environment, and explores some of the ethical dilemmas arising from the increasing use of computers. some initial research undertaken to explore the ideas and understanding of tertiary level students in hong kong on a number of ethical issues of interest is described, and our findings discussed. we hope that presenting this paper and eliciting subsequent discussion will enable us to draw up more comprehensive guidelines for the teaching of computer related ethics to tertiary level students, as well as reveal some directions for future research. eva y. w. wong robert m. davison patricia w. wade corrupted polling rebecca mercuri relationships between selected organizational factors and systems development three organizational variables influence the quality of the system development process: available resources (both human and financial), external influences on the development process, and the project team's exposure to information systems. public information and interviews with systems managers from 28 large private firms yielded data about the organizational variables. systems project group members completed questionnaires concerning the system development process. the results indicate that human resources affect the development process positively, but increased financial resources are related to team disagreement. the degree of external influence on the system development effort needs to be carefully monitored and controlled. systems exposure in the firm allows an increase in the degree of awareness among project group members about the different problems encountered by users and systems staff. ananth srinivasan kate m. kaiser exploring the pipeline: towards an understanding of the male dominated computing culture and its influence on women christina bjorkman ivan christoff fredrik palm anna vallin a framework for understanding the workspace activity of design teams small group design sessions were empirically studied to understand better collaborative workspace activity. a conventional view of workspace activity may be characterized as concerned only with storing information and conveying ideas through text and graphics. empirical evidence shows that this view is deficient in not accounting for how the workspace is used: a) in a group setting, rather than by an individual, and b) as part of a process of constructing artifacts, rather than just a medium for the resulting artifacts themselves. an understanding of workspace activity needs to include the role of gestural activity, and the use of the workspace to develop ideas and mediate interaction. a framework that helps illustrate an expanded view of workspace activity is proposed and supported with empirical data. john c. tang larry j. leifer organizing a live call-in tv show about computers a live, call-in television show is an environment conducive for answering quick questions. 
it also provides user services staff with an opportunity for distributing information quickly to a large audience without relying on the more traditional mechanisms of documentation or online help. this paper describes the steps that were taken to develop and produce a live, call-in television show. it also provides some background about how local community resources were used for producing the show (the show was produced with no budget), how the television show was broadcast, and what the response was to the show. some recommendations are made for user services personnel who may be interested in replicating the approach described in this paper. the television show described in this paper was made when the author was working for a state-run university in northwest indiana. william e. mihalo viruses and worms - what can they do? s. kurzban the impact of scanners on employment in supermarkets a brief review is given of the early estimates of the rate at which scanners would be installed in supermarkets and the resulting labor and consumer responses. the actual situation in 1979 is then discussed and detailed labor savings achieved by one supermarket chain are given. a fully scanner-equipped supermarket is estimated to have a 5 percent lower labor requirement than an unautomated store with the same volume. it is projected that 50 percent of the 23,000 large supermarkets will install scanners by 1984 with the remainder doing so by 1988. at full penetration, scanners will reduce total industry employment by approximately 50,000. few actual layoffs will occur because of the high turnover in the industry. furthermore, the stores that install scanners may attract customers from nonautomated stores, leaving those stores to handle the job losses. bruce gilchrist arlaana shenkin an experiment in senior staff working alongside student consultants this past spring, syracuse university academic computing services (suacs) had all staff members spend time working with our users at the aid desk. the staff has progressed from an initial reluctance to an enthusiastic environment with enhanced communications, new ideas, and new learning experiences. the paper discusses the beginnings of our experiment, the reactions of the participants, and some of the resulting ideas. john thornton agnes hoey an interactive learning system visualizing computer graphics algorithms achim w. janser experiences teaching software engineering for the first time this paper presents an approach to teaching a software engineering course, as well as significant feedback from the students who were enrolled in the first offering of the course using this approach. the course provided students with conceptual material as well as experience with a large project. just teaching concepts or major topics, while important, is not sufficient; students need hands-on exposure to doing a large project in order to comprehend the complexity of building real systems. on the other hand a course cannot "teach" only a project because students need a conceptual framework, approaches, and techniques upon which to base the complexities of software engineering. the feedback from the students who took the first offering of the course provides useful information to anyone who teaches software engineering, in addition to instructors preparing to teach the subject for the first time. k.
todd stevens networked pbl teaching the teacher on flexible learning (poster) rolf carlsson göran karlsson bengt olsen on changing from written to on-line tests in computer science i: an assessment how do on-line tests compare with written tests in _computer science i_? do students who do well in written tests also do well in an online test? is an online test better or worse than a written test at assessing the problem- solving skills of a student? this paper summarizes the answers to these questions that we found during our switch from written to on-line testing. amruth n. kumar electronic surveillance and civil liberties: testimony of fred w. weingarten before the house judiciary subcommittee on courts, civil liberties and administration of justice fred weingarten microsoft site server (commerce ed.): talk-slides available at the conference bassel ojjeh unionism for the computer professional harvey axlerod are computing educators and researchers different from the rest? (poster session) tony clear alison young a twenty year retrospective of the nato software engineering conferences (panel session): thoughts on software engineering bernard a. galler a twenty year retrospective of the nato software engineering conferences (panel session): twenty-year retrospective: the nato software engineering conferences james e. tomayko simplified task analysis and design for end-user interface design computing: implications for human/computer interface design (poster session) v. j. dubrovsky the iwar range: a laboratory for undergraduate information assurance education this paper describes a unique resource at west point, the information analysis and research laboratory, referred to as the iwar range. the iwar range is an isolated laboratory used by undergraduate students and faculty researchers. the iwar is a production-system-like, heterogeneous environment. the iwar has become a vital part of the information assurance curriculum at west point. we use the military range analogy to teach the students in the class that the exploits and other tools used in the laboratory are weapons and should be treated with the same care as rifles and grenades. this paper describes the structure of the laboratory and how it is used in classroom instruction. it also describes the process used to create the iwar and how an iwar might be built using limited resources. finally, this paper describes the future directions of the iwar project. joseph schafer daniel j. ragsdale john r. surdu curtis a. carver the multiplier effect of computing literacy on user services computing may be used in any or all of the following areas within an educational institution: instruction, research, and educational administration. the first two areas involve faculty, students, and researchers. the latter involves the educational administrator. all three user groups are, or will be, involved in integrating computing into their daily professional lives. the degree to which this will happen will depend on the ability of the academic community to understand where and how to apply computing in their respective areas. the ability to understand and use computers as a problem-solving tool has come to be known as "computing literacy." within the last five years, the development of computing literacy has been one of the more urgent concerns within educational institutions. the nontechnologist is not oriented towards the use of the computer as a problem-solving tool. 
helping the individual to achieve computing literacy is an essential activity for bringing about behavioral change and reaching this goal. some of the issues that must be addressed as part of implementing a computing literacy program are to: • understand the environment for which it must be developed • develop a working definition of computing literacy that is suited for the environment • identify the target audience(s) to be involved in a computing literacy program • develop a computing literacy curriculum • develop a delivery mechanism to provide the education • evaluate the program's effectiveness. this paper will discuss the comprehensive computing literacy program currently in place at iona college. it will discuss iona's approach to resolving the above issues and the implementation of a comprehensive computing literacy program for students, faculty, administrators, and staff. it will also discuss the relationship between interest in computing and demand for user services. it would appear that the introduction of computing literacy programs can be correlated to an increased request for user services and a need for an ever-widening scope of user services. lynda sloan antony halaris computer science (1979) each year computer science departments generally find that enrollment, at least in associate and bachelor's level programs, is increasing. at the same time, most computer science departments find it difficult to recruit faculty with ph.d.'s in computer science. in fact, computerworld (6) recently reported on an nsf study which identified approximately 600 vacant faculty positions in computer science in this country. primarily, the purpose of this article is to report the results of the study without attempting extensive analysis or interpretation. certain inconsistencies in the data have been noted by the authors---primarily due to inaccurate or incomplete answers to various questions. it was not feasible to disregard incomplete or inconsistent questionnaires since the number of complete, consistent responses was small. in a few cases, the authors have attempted to correct obvious inconsistencies---for example, if an institution reported that it recruited to fill 3 vacancies on the faculty, that no one was hired, and that 4 of those hired had a ph.d. in computer science, we assumed that the 4 should be a 0. in most other cases, the data is reported as given in the responses and the authors believe that it is reasonably accurate and representative. gordon bailes terry a. countermine ais gordon b. davis comparing undergraduate courses in systems analysis and design raymond mcleod jr. collective education george dvorak technology as a cultural system: the impacts of ict upon the primary and secondary theories of the world matti kamppinen design process analysis: a measurement and analysis technique the range of services provided by design automation computing centers covers a broad spectrum. at one end of this spectrum is the center which provides pure computation power only. it is the user's responsibility to select, install, and tune the applications as well as to debug the resultant output. at the other end, it is the turnkey center which provides a near total service, relegating the user to job selection from a fixed set of installed applications with which to accomplish the design task. 
the most effective design automation center combines the healthy attributes of these two extremes: buffering the user from i/s considerations, freeing him to concentrate on the design task; and providing the i/s center with application understanding and involvement to allow for proper system tuning. the management information technique to achieve the desired objectives of improved user productivity and cycle time is referred to as design process analysis (dpa). detailed definitions of the data flow nodes as well as samples of the type of output obtainable from such a dpa system are covered in the presentation. management of an effective design automation center is a demanding and ongoing task, combining the skill levels of people, the capabilities of the da applications, and the processing power of the computer center. tuning this combination in a disciplined and orderly manner to provide the most timely and cost-effective solutions to design problems requires an automatic data collection system driving a dynamic model of the design process. utilizing the techniques of design process analysis, such an objective can be met. kenneth d. yates computer science education in china robert aiken elizabeth adams susan foster richard little william marion judith wilson gayle yaverbaum software system to learn objects (poster session) evgeny eremin computer ethics: a model of the influences on the individual's ethical decision making john w. henry margaret anne pierce faculty development centers: assessing needs (and wants) robert l. ferrett a tool for teaching advanced data structures to computer science students: an overview of the bdp system in order to design and write effective, robust code using advanced data structures, it is crucial to achieve a thorough understanding of the algorithms used to manipulate these structures. one means of accomplishing the task is to provide students with a graphical, animated system that allows users to observe changes that the structure undergoes while it is being used. one such system has been developed which demonstrates b-trees. some preliminary testing is complete and some initial reactions of the students who have tried the system are outlined. risks considered global(ly) peter g. neumann security auditing - fear of detection -vs- fear of detecting b. neugent an experiment with design formalism in introductory courses (this paper has been accepted for publication in the proceedings, but the photo-ready form was not delivered in time. copies of the paper should be available upon request at the presentation.) gary a. ford investment by the letters marcia kadanoff operating systems from assembler to c john l. donaldson tension and synergism between standards and intellectual property oliver r. smoot microcomputer training: do trained users differ from non-trained users? joey f. george james w. theis a framework for understanding the computer applications system development process mike bozonie news track robert fox presenting computer algorithm knowledge units in computer science curriculum a challenging aspect of computer science curriculum is to present the ever-growing and ever-changing topics of computer science within the time constraints imposed by the program of study. in this paper we present some of these challenges and outline strategies to meet them with regard to the computer science subject area of algorithms and complexity in the context of evolving needs of cs1 and cs2 courses. 
specifically, we consider those knowledge units recommended for this area by the ieee/acm joint task force on computing curricula in their cc2001 report [cc00]. tomorrow's campus larry press summary: empirical studies of software development and evolution rachel harrison from the editor's desk doris carey regan carey the chi tutorial program: building on common ground marian g. williams mark w. altom ethics, programming, and virtual environments michael e. houle self-assessment procedure xix a self-assessment procedure on the application of copyright law to computer programs r. w. bickel a preliminary comparison of body-wearable computers to modern audio equipment in a microgravity environment the capabilities of body-wearable computers (bwc) and modern audio equipment were compared in a micro-gravity environment. in the experiment carried out, the speed of performance on timed tasks was compared for bwc and the audio playback devices currently used by astronauts. the bwc provided faster performance times than audio equipment when used in a micro-gravity environment. in addition, the bwc was found to provide faster performance times in normal gravity than in zero gravity. matthew dooris michael moorman bryan gregory marilyn brown heather wright strategic sourcing for information processing functions louis a. le blanc global software piracy: you can't get blood out of a turnip ram d. gopal g. lawrence sanders strategy game programming projects in this paper, we show how programming projects centered around the design and construction of computer players for strategy games can play a meaningful role in the educational process, both in and out of the classroom. we describe several game-related projects undertaken by the author in a variety of pedagogical situations, including introductory and advanced courses as well as independent and collaborative research projects. these projects help students to understand and develop advanced data structures and algorithms, to manage and contribute to a large body of existing code, to learn how to work within a team, and to explore areas at the frontier of computer science research. timothy huang risks to the public p. g. neumann legally speaking: why the anticircumvention regulations need revision pamela samuelson a systems approach to the introductory course in information systems introductory courses in information systems are typically taught as computer "literacy" courses; in computer science they are oriented to "algorithm development". the course described in this paper is concerned with providing the student with facility in the top-down development of hierarchically related systems of programs to be used in a business context. it is considered critical to orient students to this conceptual approach early on in their professional education. the course has been offered for four terms and has been well received by students and valuable for continued educational development in later courses in the curriculum. david r. adams william leigh use of computing curricula 1991 for transition from "mathematics" to "applied mathematics and cs" baccalaureate programme (poster) iouri a. bogoiavlenski andrew a. pechnikiov gennady s. sigovtsev anatoly v. voronin japan's view of ec '92 delivered at the armed forces communication and electronics association's ec '92 symposium in december 1989, this speech focuses on the european semiconductor industry-where japan ranks as the second largest exporter-and technology transfer of dual-use products from japan. 
eiichi ono meeting the it-skill shortage in europe head-on: approaching in unison from practice and academia the it personnel crisis is global, afflicting many areas, including the us, australia and europe. in this paper, we report and evaluate on-going approaches in europe from both practice and academia. we focus especially on the cepis projects that aim at addressing the it skills shortage. we then examine how research projects based on theoretical and conceptual premises in norway can enhance these practical approaches. we propose ways of integrating the two approaches, and incorporate human resource processes to successfully meet the it needs of organisations. carl erik moe maung kyaw sein situations and advancement measures in germany veronika oechtering roswitha behnke the experience of women in undergraduate computer science: what does the research say? kathy howell position paper on technology transfer r. alonzo systems development philosophy bo dahlbom lars mathiassen an updated information systems curriculum: first revision our original proposal for "an updated information systems curriculum" was presented on march 15, 1985 at the acm sigcse symposium (2). this paper presents our first revision of the original proposal. it includes two new courses and a redistribution of some of the topics covered in some courses. osvaldo laurido-santos the courseware development group at dartmouth college steve maker are we ready?: risk, reality, and readiness leon a. kappelman the practical management of information ethics problems: test your appropriate use policy john w. smith the politics of standards and the ec european legislation and power struggles in the standards arena are sparking fear of technical barriers to trade and prompting the american standards community to reevaluate its infrastructure. the national institute of standards and technology may step up its role in order to negotiate at a governmental level with the ec. karen a. frenkel editorial ravi sandhu a comparison of operating systems courseware michael goldweber john barr tracy camp john grahm stephen hartley job attitudes and perceptions of exhausted is/it professionals: are we burning out valuable human resources? jo ellen moore the is undergraduate curriculum: predicting student outcomes in the is'97.5 course (programming, data, file and object structures) charles h. mawhinney joseph s. morrell choosing a ph.d. program in computer science rachel pottinger great expectations and the reality of university computing resources "my boss wants this budget set up on the computer by friday and i don't know anything about budgets, or lotus, or computers!" a little over a year ago, these hysterics coming from a user were a common scene at the information resource center (irc), the end-user service branch of computer and information resources and technology (cirt) at the university of new mexico. users would come in with unrealistic deadlines assigned by their bosses, who, in turn, had unrealistic expectations of both their employees' skills and computer technology in general. often, we had the added problem of dealing with a user who was using unsupported hardware and/or software. an irc consultant, feeling sorry for the frantic employee, would then spend hours bailing out the employee by actually (in this case) designing the spreadsheet and meeting the employee's deadline. 
the end result of the consultant's effort was usually mediocre because of the time constraints and because the employee did not have the necessary skills, such as accounting in this case, to work with the consultant to make full use of the technology. in the aftermath, everyone lost. the consultant wasted valuable time, the employee didn't learn anything and probably was alienated by the technology, and the boss most likely was not happy with the end product or the employee, and he/she blamed the irc consultant for the poor results. william padilla barbara rigg-healy the role of a corporate object technology center timothy korson patterns in the organization of transnational information systems william r. king vikram sethi road crew: students at work john cavazos statement of ethics in the use of computers corporate the catholic univ. of america's a summer course for gifted high school students this paper describes a project conducted during the summer of 1981 at the university of central florida. through the governor's office of the state of florida, funds were identified for several state universities to support gifted high school students in summer programs. such governor's programs for gifted students have been done in a number of other states before, most notably virginia (see [1] and [2]), but this was only the second such program in the state of florida, and the first at the university of central florida. high school students were selected from the surrounding geographic area on the basis of high school grades, preliminary sat scores, and teacher recommendations. the thirty or so students chosen all had excellent credentials. students were rising seniors or juniors in high school, and the program lasted six weeks with the students housed in dormitories on campus. judith l. gersting future faculty development seminar in ethics, social impact and alternative teaching strategies (seminar session) this seminar/workshop on ethics and the social impact in computer science, supported by studies of the applicability of alternative teaching and learning strategies, is targeted towards doctoral candidates in computer science whose life-goal is to teach in a university or college setting. based on the concept of "ethics across the curriculum" the seminar/workshop will prepare future faculty to incorporate ethical and social impact concerns in their technical courses. at the same time they will be exposed to modern teaching and learning techniques that will assist them in making a good start in their teaching careers. john a. n. lee kevin bowyer retraining of college faculty for computer science (panel session) william mitchell, moderator this panel is convened so that the issues inherent in retraining strategies may be debated by representatives of the formal faculty retraining programs. the speakers will address the master's-level retraining of college faculty from other disciplines via summer coursework, an approach which is markedly different from the traditional pattern of formal re-education because it assumes no discontinuity in a faculty member's service to his college. this approach is obviously most advantageous for both the college and the participating faculty member, and it also permits the design of special programs to serve this unique audience. given the popularity of this format, it is a matter of great concern to the discipline that these special programs be credible. 
carter bays the computer science summer institute at the university of south carolina was conceived in 1979 and has attempted to offer, over a period of 3 summers, the majority of the coursework required for the m.s. degree. the program has been successful in that approximately 20 faculty from 2 and 4 year schools in south carolina have completed, or nearly completed their m.s. in computer science. unfortunately in many cases the retrained faculty have left their schools and acquired better positions elsewhere. stephen mitchell a combination of several factors has resulted in the now well-publicized teacher shortage in computer education. the factors include expanding student enrollments, industry demand for trained personnel, and the related "brain-drain" of teachers to industry. innovative and flexible programs are needed for the necessary retraining of teachers. in considering resources for retraining, key issues are: program quality, objectivity, and visibility. stanley franklin our program is intended as a stopgap measure. the junior colleges and four-year colleges in our system cannot hire traditionally trained computer scientists. yet they face increasing demand from students for computer science courses. we intend to retrain faculty from other disciplines to teach the beginning computer science courses. we'll use a two-summer format and an existing degree program originally designed for high school teachers. no education courses are included; our students will all have successful college teaching experience. we think of this program as serving an interim need for the next few years. as better trained computer scientists become more plentiful, demand for this kind of training will diminish, and the program can be discontinued. ed dubinsky in summer 1983 an institute for retraining mathematicians to teach computer science will be established at clarkson college under the auspices of the joint acm/maa committee on retraining for computer science. this is the initial implementation in a project, which has been in development over the past two and one-half years, to deal with the shortage of college teachers of computer science. the panel presentation will discuss some of the history, present goals and future plans along with some of the features of the present implementation. richard austing i do feel that retraining is important to small colleges. in fact it will be a necessity if the colleges are to maintain computer science programs. they will not be able to compete for people who have phd's in computer science. colleges will have to find phd's in other disciplines who have (or who are willing to acquire) backgrounds in computer science. of course, these faculty members will need continued training. colleges should encourage retraining of faculty from a number of departments, including non-science ones. a good mix of interest can produce a fruitful environment for a computer science department which will service the entire campus and the surrounding community. william mitchell carter bays stephen mitchell stanley franklin ed dubinsky richard austing on the use of naming and binding in early courses in most computer science curricula, the concepts of naming and binding are explicitly treated only in a small number of the later courses, such as operating systems and programming language foundations. however, these concepts are fundamental and underlie the whole of computer science. 
in this paper, a proposal is made to explicitly introduce these concepts in the second or third course so that they may be used in the analysis of ideas encountered throughout a student's program of study. the benefit of this earlier introduction is demonstrated by detailing how a computer organization course can explicitly incorporate these concepts. these concepts can also be used to advantage in other early courses, such as data structures. mark smotherman an integrated framework for security and dependability erland jonsson summary of tpc results (as of march 15, 1993) corporate tpc computer ethics: philosophical enquiry herman t. tavani lucas d. introna testing intrusion detection systems: a critique of the 1998 and 1999 darpa intrusion detection system evaluations as performed by lincoln laboratory in 1998 and again in 1999, the lincoln laboratory of mit conducted a comparative evaluation of intrusion detection systems (idss) developed under darpa funding. while this evaluation represents a significant and monumental undertaking, there are a number of issues associated with its design and execution that remain unsettled. some methodologies used in the evaluation are questionable and may have biased its results. one problem is that the evaluators have published relatively little concerning some of the more critical aspects of their work, such as validation of their test data. the appropriateness of the evaluation techniques used needs further investigation. the purpose of this article is to attempt to identify the shortcomings of the lincoln lab effort in the hope that future efforts of this kind will be placed on a sounder footing. some of the problems that the article points out might well be resolved if the evaluators were to publish a detailed description of their procedures and the rationale that led to their adoption, but other problems would clearly remain. a collaborative strategy for developing shared java teaching resources to support first year programming this paper discusses a strategy for developing shared teaching resources to support java programming subjects taught using a variety of educational approaches (lectures and tutorials, problem-based learning, distance education) with differing computing foci (computer science, commercial computing, network computing). the strategy is a group process involving six distinct stages: selecting the topic areas considered integral to all subjects for which the resources will be used; defining the details and identifying areas/concepts of a topic; determining basic, intermediate and advanced levels of information; determining appropriate educational techniques that support the desired learning objectives for the concept; investigating existing resources and building new resources, both with and without the use of computer technology. ainslie ellis dianne hagan judy sheard jason lowder wendy doube angela carbone john robinson sylvia tucker teaching load and the quality of education s. l. sanders updating the copyright look and feel lawsuits pamela samuelson the never-ending struggle for balance pamela samuelson integrating professionalism into undergraduate degree courses in computing (panel) l. r. neal a. d. irons the system of checking students' knowledge with the use of wide area networks in the situation when computers, and primarily wide area computer networks, are easily accessible, it is possible to conduct examinations with no need for the examinee and the examiner to be in the same place at the same time. 
this paper suggests how this postulate can be put into practice. dariusz put janusz stal marek zurowicz audit commission fifth triennial survey allan mills tom richards leon kappelman references charles e. youman ravi s. sandhu edward j. coyne risks to the public in computers and related systems peter g. neumann the case for a precurricular computer course (abstract only) several factors indicate the need for a programming course to be offered for selected students prior to their entering the beginning course in the cs curriculum (assuming the first course is the equivalent of cs 1 of acm's curriculum '78). there is evidence that attrition in the first course at many colleges is high. the precurricular course tends to affect students in one of two ways: some students' knowledge of computing is increased so they may better comprehend the material presented in cs 1; hence, they are more likely to succeed; other students switch out of the cs curriculum due to low aptitude or lack of motivation. techniques for selecting students for the precurricular course by using sat scores, high school courses and math placement tests are discussed and a description and outline for the precurricular course are presented. robert a. barret robert r. leeper a time for celebration dan lynch bert herzog why do fools fall into infinite loops: singing to your computer science class one effective way to introduce a dose of humanity, acknowledge the needs and struggles of cs1 students, and appeal to a broader range of learning styles is to present a computer science topic in an entertaining manner, e.g., with some form of artistic performance. in this paper, i describe three songs for cs1 that are designed to help students surmount three of the most difficult hurdles of that course. empirical and anecdotal results demonstrate that the songs help students learn, and help them enjoy learning. these songs are a case study in entertainment; all instructors can find some way to entertain their class, and recordings of the songs themselves are available on the web for any instructor or student. eric v. siegel funding the computing revolution's third wave john backus the software police vs. the cd lawyers dan bricklin toycom - a tool for teaching elementary computer concepts many of those who teach introductory computing courses have recognized the pedagogic value of a very simple computer model. a large number of introductory textbooks on computing contain a section explaining the logical components of such a system (1, 2, 3, 4, 5, 6, 7, 8). these usually include an introduction to assembler language and machine level programming of a machine which is sometimes called the minimum configuration computer model. all of the previously implemented models of which we are aware have been constructed to operate in batch mode. after having used such systems for some time, we felt that they too soon introduced students to the aggravation of mispunched cards and long waits for runs, only to discover minor syntactic errors. we felt that the simple computer model's pedagogic value would be significantly increased by implementing it in the interactive mode. in order to test this thesis, we have designed and implemented such a system. our interactive computer model is named toycom, an obvious acronym for toy computer. toycom is a submonitor-assembler-interpreter which can be collectively called a simulator. 
it presently runs as a submonitor under basic-plus, which runs under the rsts-e operating system of the medium to large-scale dec pdp-11 minicomputers. it is also written in basic-plus. robert w. sebesta james m. kraushaar a top-down, collaborative teaching approach of introductory courses in computer sciences (poster) doris k. lidtke harry h. zhou message from the chair of the tyc education committee karl klee a diagnostic view on information technology motamarri saradhi multiple ways to incorporate oracle's database and tools into the entire curriculum (tutorial presentation) it is not uncommon for students to view the curriculum as pieces instead of an integrated whole. this can hinder them from applying knowledge learned previously in one course to the course they are currently taking. in addition there are a number of recurring concepts in computer science that students need to recognize. the concepts associated with the management of large amounts of information is one such area. catherine c. bareiss fraud by computer peter g. neumann cs1 using java language features gently teaching a new programming language in cs1 requires the instructor to make several important decisions regarding sequencing of topics. in teaching java, the basic decisions center around how to perform input and output, when to teach the awt (abstract window toolkit) and threads, whether to begin with applets or applications, and how much detail about object-oriented programming and java language features is required in the beginning. this paper describes a "language features gently" approach to teaching cs1 in java. elliot koffman ursula wolz review: minerva's machine sara m. carlstead authentication strategies for online assessments peter summons simon risks to the public in computers and related systems peter g. neumann the dimensions and correlates of systems development quality the need to improve systems development quality is increasingly felt by information systems departments in organizations. a clear definition of systems development quality is required to focus quality management efforts. we develop a broader definition of quality by identifying product and process dimensions of quality that systems development should ensure, and present a framework to classify systems development quality metrics. we then go on to develop a theoretical model to explain the correlates of systems development quality by integrating related research in the areas of total quality management, systems development, management of the is function, and is implementation. five factors (top management support, development methodology characteristics, project management characteristics, task structure of is teams and control structure of is teams) were identified. implications for theory, future research and practice are discussed. t. ravichandran arun rai hci education: where is it headed? andrew sears acknowledgment: the practical management of computer ethics problems john w. smith why co-op in computer science? (panel discussion) to encourage computer science programs not in the co-op tradition (most are probably in arts and science colleges) to once again consider the benefits of establishing such a program. it gives specific corporations more than an abstract reason for assisting computer science programs in any of the various ways which have been suggested (financial contributions, sharing staff, faculty interns, etc.). 
cooperative education for computer science majors is beneficial for the students, the employers, and the schools. william mitchell h. r. halladay rich hendin roberta weller t. c. cunningham degrees in human-computer interaction: a common name is emerging and opportunities are expanding andrew sears modernity and betterment of life ivan da costa marques ask jack: confused about careers? jack wilson certification comes of age: end users are now eligible fred g harold reader comments: putting pretentious pontificators on notice john gehl universal access to net: requirements and social impact jeff johnson empirical exploration in undergraduate operating systems steven robbins kay a. robbins problem-based learning of first year computer science tony greening judy kay jeffrey h. kingston kathryn crawford marketing the information systems profession: a preliminary report on the boston - sim careers videotape charles h. mawhinney gerald miller computers and people with disabilities ephraim p. glinert bryant w. york cpsr's approach to advising policymakers jeff johnson inside risks: information is a double-edged sword peter g. neumann post implementation evaluation of computer-based information systems: current practices contrary to the widely-held view that post implementation evaluations are performed to assess and improve the information system and the system development practices, this study suggests that in practice, the primary reason for such evaluations seems to be project closure and not project improvement. kuldeep kumar trends in service courses (panel session) john beidler lillian cassel doris lidtke barbara owens the state of educational computing in singapore p. k. h. phua if you build it, they will come deborah l. knox icts, bureaucracies, and the future of democracy ignace snellen computer science program requirements and accreditation michael c. mulder john dalphin analyzing student programs (poster session) elizabeth odekirk converting from a dec system-10 to vax may 15th 1986, we unplugged our decsystem-10 after 17 years of service, and replaced it with new vaxes. we had to not only learn a new system, but prepare our users for the changeover, and help them learn the vax. we looked for ways to make sure dec-10 users would know their computer was leaving. we surveyed users looking for potential user conversion problems. we helped users get ready for and eventually get used to the vaxes. this article will discuss the preparation we (and our users) made for the upcoming changeover, a description of our conversion efforts, and a discussion of some problems we ran into. john thornton surveyor's forum: generating solutions george s. lueker a system-based sequence of closed labs for computer systems organization brenda c. parker peter g. drexel dialing for delcat: the end users justify the means s. glogoff r. gordon a suggested term project for the first course in computer science ronald j. leach group learning techniques (tutorial sessions) this tutorial is concerned with a method of organizing undergraduate computer science courses, in which the students collaborate in small groups. effectively this breaks up a large class into a number of independent small groups and changes the role of the teacher from a director to a 'consultant.' the teacher has to provide a series of discussion papers for the groups, each including a problem to be solved. 
the group is expected to investigate the topic, produce an exact specification of the problem, provide an algorithm to solve the problem, an implementation of the algorithm, and documentation including a discussion of implications and generalizations. this form of peer instruction has improved performance of both the better and the poorer students, and plagiarism is no longer a problem. (students who do not do their share of the work are liable to be excluded by their group.) they also investigate topics with much greater thoroughness and appear to obtain a deeper understanding. the group experience is valuable training for working on projects in industry. the tutorial will outline how to organize such a course and will discuss case studies. r. d. parslow how to attract women into the field of computing farideh a. samadzadeh usage of inter-organization computer networks by research and development laboratories deborah estrin the effects of partially-individualized assignments on subsequent student performance brian toothman russell shackelford what i didn't learn in the classroom some things can't be learned in a classroom. you learn theories in a class, but the applications of the theories are often very different. usually, students must wait until they have graduated and landed that first job before they can apply what they have learned. there are, however, opportunities for students to work with professionals while in college to gain experience and ease the transition from college to career. i am a student at the university of wisconsin-eau claire and work with professionals, like you, at academic computing services. in addition to your normal responsibilities, you provide professional, on-the-job training for students. as a part of this training, students learn to adapt the theories learned in the classroom to real-life situations. this paper focuses on my experience and the benefits you, as professionals, have provided me. i would like to explain how you found me, what responsibilities you entrusted to me, what you do to keep me working for you, and how you have prepared me for the future. beth satter is this the same soviet union we've seen before? some rapid changes in computers in russian society shelly heller architectural support for copy and tamper resistant software david lie chandramohan thekkath mark mitchell patrick lincoln dan boneh john mitchell mark horowitz a pilot study for developing a quantitative model for outcomes assessment in the computer science program at a small university this paper examines a prototype model for outcomes assessment of a computer science program. the model explores correlating students' scores on the major field test (mft) in computer science with academic performance. the model's goal is identifying key factors (course offerings) and their contribution to mft performance and student success. robert o. jarman sankara n. sethuraman the publication process for computer science textbooks angela b. shiflet computing at carnegie-mellon university carnegie-mellon university (cmu) and international business machines (ibm) have recently decided to collaborate on an innovative project: to implement a distributed computing system at cmu. a distributed computing system integrates personal computers, midsized machines known as file servers, and mainframes through a network. this system will provide cmu's students, faculty and staff with state-of-the-art equipment and software, and will be designed to facilitate communication on campus. 
the authors describe the reasons that motivated the cmu-ibm project, the technology of this distributed system, and the impact they expect this system to have. keith slack jim morris douglas van houweling nina wishbow a total quality management-based systems development process meaningful user involvement in systems development and an overall user orientation is critical to the success of any development project. although traditional systems development methodologies recognize the importance of the user, they provide no formalized methods to translate user quality requirements into system design specifications. in addition, these methodologies are not rigorous in maintaining a focus on user quality requirements during the later phases of development. manufacturing organizations with established total quality management (tqm) programs successfully use a technique called quality function deployment (qfd) to carry the customers' quality requirements through the design, production, and delivery processes. qfd has also been used to determine software requirements in systems development. this paper describes a systems development process that integrates qfd more completely into the traditional methodologies. the result is a practical process that focuses the development effort on system quality as defined by the various customer groups. an example case study illustrates the methodology and its benefits. antonis c. stylianou ram l. kumar moutaz j. khouja a program to reduce the number of open i/t jobs at purdue university: an industry case study _the number of unfilled information technology (it) positions in demand in the united states is estimated at 1.6 million this year. the demand for skilled it people far exceeds the supply, and it is estimated that half of the 1.6 million positions will likely remain vacant. due to the shortage of skilled i/t professionals, companies and universities have to be creative and develop in-house programs that will allow them to produce their own it professionals from among a pool of employees working in non-technical areas or possessing non-technical degrees. purdue university has developed such a program that will help fill open computing positions by providing people with a solid educational foundation as well as current technical skills. this paper discusses the program purdue has put into place to help overcome its shortage of qualified i/t professionals._ julie r. mariga fork diagrams for teaching selection in cs i amruth n. kumar policy, expectations, problems. the depauw experience with allocating workstations to every faculty member this paper features an in-progress policy and progression of a three-year program designed to put a computer station on the desk of any faculty member who needs one. it features the definitions of both "workstation" and "faculty member", along with standardization revelations and policymaking factors. covering the special case members and problems involved, it deals with the university as one large entity composed of small factions with individual needs. it exemplifies the solutions we arrived at and exposes areas that any center should approach with caution. the degree of enthusiasm, progression of the participants, and results of the program are analyzed. jennis pruett simulation of process and resource management in a multiprogramming operating system james h. hays leland miller bobbie othmer mohammad saeed providing intellectual focus to cs1/cs2 timothy j. long bruce w. weide paolo bucci david s. 
gibson joe hollingsworth murali sitaraman steve edwards personal computing: personal computers and the world software market it may be trite to say that "the world is shrinking," but it is true nonetheless. political and technological changes are edging us in the direction of the global village. we have the economic unification of western europe, the transition of eastern europe to market economies, and free trade agreements and negotiations in the western hemisphere. blue jeans and rock and roll music are found throughout the world, we can direct dial to iceland, and your grandmother may have a fax machine. larry press possible futures for cs2 (panel) daniel d. mccracken michael berman ursula wolz owen astrachan nell dale ubiquitous computing (abstract) mark weiser implementing a tenth strand in the cs curriculum c. dianne martin chuck huff donald gotterbarn keith miller internet-based distance learning: a multi-continental perspective dale shaffer computer personnel recruitment sigbdp and sigcpr - sponsored panel of acm 80 the first panel member will present "employment trends in computer occupations". this is a preview of a forthcoming bureau of labor statistics report on the same subject. it is the result of several years of research on employment issues in the field of data processing given by the head of the research team, mr. patrick wash, supervisory labor economist. the second paper will be presented by dr. dambury s. raju, a professor at the illinois institute of technology. his presentation, entitled "person-job matching in the eighties", reflects almost 20 years of experience as a personnel psychologist working for a large testing firm (sra) and now teaching. he will cover testing as a tool for person-job matching, considering the eeo requirement. i will wrap up the panel presentation with a case history of the structured recruiting program at missouri pacific for computer personnel. l. r. cottrell an innovative two-week computer science program for employed professionals william r. goodin walter j. karplus improving the learning environment in cs i: experiences with communication strategies dorothy deremer micro manager heal thyself: a medical school micro manager's prescription for curing those firefighting blues d. bayomi grouplens: applying collaborative filtering to usenet news joseph a. konstan bradley n. miller david maltz jonathan l. herlocker lee r. gordon john riedl a study of the outsourcing decision: preliminary results karen ketler john r. willems government agencies' activities in education the panelists describe the organization and current activities of the agencies with which they are associated. brief descriptions of some past and currently funded activities in computer science and computers-and-education are presented. the areas in which funding is currently available are highlighted. doris k. lidtke computer science homework and grading practices: an alternative to the popular model sidney l. sanders to serve or not to serve as microcomputers proliferate on campuses, changes will have to be made in user services to accommodate the "apple in every dorm room" and "pc in every office." brown university is being forced to examine these issues now, due to several important events: •brown's broadband network that links most office buildings is being extended to dormitories. •special agreements allow members of the brown community to buy microcomputer hardware at discounts of up to 45 percent; software site licenses are currently under negotiation. 
•a $50 million research project includes a gift to brown of 150 micros. traditional services cannot be mapped directly into microcomputer support for the following reasons: •the variety of hardware and software is so large that staff cannot maintain expertise in all these areas. •staff size has not usually grown in proportion to usage, so there is no extra staff to move into these areas. •centralized support services will become less effective as users become more distributed. other factors affect the extension of service to micros: •an increasing number of incoming freshmen have some computer "literacy." •file transfer standards are only beginning to evolve. •increasing numbers of administrative users are requesting these services. •the role of mainframe computing will change toward file and print servers. •microcomputer vendors are supplying better user-oriented documentation, including on-line tutorials. brown has recognized a number of directions that these services are likely to take: •with distributed users, there will be an increase in phone and interactive consulting. •more of the responsibility for traditional user services will be assumed by the users (human network). •solutions for user problems will include hardware as well as software. •a limited number of packages that are implemented on several common types of machines will be identified for support. •staff will develop and support interconnections between existing software and the local environment. anne oribello a profile of today's computer literacy student what are the opinions and biases of students entering this course today? what do these students think they know, and what do they think they should be learning? have their opinions been altered by the technological and software trends? do younger and older students have similar or markedly differing views and computer experiences? can the needs of the students be met by such courses, or are the students actually more technologically literate than we believe? noting the changes that have taken place over the last three or four years in the literacy course, the authors prepared a survey that was completed by all the students in two universities (317 students) during the first class of the fall 1985 semester. the results of that survey are the basis of this paper. jean buddington martin kenneth e. martin help for all the students ludwig braun security engineering in an evolutionary acquisition environment marshall d. abrams privacy policies and practices: inside the organizational maze h. jeff smith a training program for the computer integrated factory this paper describes a training program to prepare individuals for careers in the management of computer integrated factories. training is fourteen (14) months in length and has been funded through grants from the cambridge private industry council. the digital equipment corporation donated a pdp 11/34 minicomputer and rsts instructors for an interactive workshop. the boston chapter of the american production and inventory control society provided curriculum quality control and instructors. courses were taught by professors of mis/management sciences from boston area universities. dean j. saluti a student project in software evaluation properly educating computer scientists involves teaching effective means to properly engineer a system. an important part of such engineering work is ensuring that the computing system is both useful and usable. 
while many systems out there today are difficult to use, performing usability engineering on a system during its development has been shown to be an effective way to make a system more usable. the problem is fitting practical experience into the curriculum. this paper discusses a case example of how a team of undergraduate students learned to take a software system during its development and perform effective usability engineering following the "thinking out loud" methodology. michael f. czajkowski cheryl v. foster thomas t. hewett joseph a. casacio william c. regli heike a. sperber distribution chain security glenn durfee matt franklin present status and direction of information curriculum of korea in-hwan yoo soo-bum shin chul-hyun lee tae-wuk lee a new standard for appropriation, with some remarks on aggregation joseph s. fulda distributed training: the msu computer laboratory departmental trainer program with the advent of microcomputers, distributed computing is here to stay. with distributed computing comes distributed expertise; people must learn to run their own computing centers, no matter what the size of the computers. in trying to deal with increased demands for training in a limited-resource environment, we have taken the natural step of distributing the responsibility for the training. with appropriate training materials, a person who knows how to use a program can lead someone else through the process of learning. the training materials must be sufficiently detailed to include training on all major features of the software that people regularly use and include conceptual material that takes the place of lectures. the commitment to distributed training must be real. sufficient staff must be devoted to developing training materials because the materials must continue to evolve with the software they teach, and new materials must be created to meet the demands for training on new software. bill brown marilyn everlingham the implementation of a data communications laboratory in small to medium-sized universities this poster describes techniques for establishing a data communications laboratory in the small to mid-sized university. the experience for the poster was gained in the building of a lab funded by an nsf-ili grant. we provide a tactical approach to the lab and a discussion of the physical environment and basic equipment requirements. special interest is devoted to sources of assistance for equipment procurement, as well as how to lower both hardware and software costs. included are the capabilities of the suggested lab for teaching the major network operating systems. the security aspects required by the laboratory are also covered. the type of data communications laboratory described in the poster can be established with a minimum of funds and using excess or donated equipment. it will provide students some hands-on experience with data communications and establish the lab as a reality from which to expand to ever-changing technologies. one secret of success in this effort is to make the establishment of a lab a rallying point between the faculty, students, and the university's instructional technology department. students should be involved in all steps of the planning and installation. the students involved with the project will be happy to be taking part in an effort that will add an effective hands-on approach to their data communications courses. martin h. 
levin use petri nets to improve your concurrent programming course (poster session) petri nets are recommended as a learning aid in a concurrent programming course covering modelling and verification techniques, based on state space analysis, and translation of formal models to java programs. joão paulo barros the importance of a proactive approach to education at an educational/academic computer center a growing responsibility of academic computer centers is to provide seminars on computer-related topics to the faculty, staff and students they serve. the curricular aim is to instruct users on the effective use of the computer as a tool which is specific to users' needs. to successfully achieve this aim, i recommend that computer centers adopt a proactive instructional approach. adopting a proactive instructional approach requires that computer centers acknowledge their educational responsibility and take action for managing instructional time. it used to be that computer users were almost exclusively math, science or computer majors. at that time computer centers were called upon only occasionally to give a seminar, with the advent of new hardware or software. the user population of the eighties, however, has dramatically changed; computer users are increasingly from business, the social sciences, and humanities. these so-called "end users" are more naive about computers and depend heavily on the computer center staff for basic and applied computer information and operations. initially, the educational response from computer centers was to provide 1) "user-english" documentation and 2) "user-english" consultants in order to more quickly orient students to computers and to their specific applications. as the number of users rapidly increased, the interaction time between students and consultants became more constrained and pressured, and the priority soon became how quickly users could be moved in and out of the consultant's office. certain dilemmas became endemic. for example, a student's offhand query, "i have a quick question…," often required a more extensive answer than the student expected. the consultant was then faced with deciding whether to provide the apparently "immediate" answer to the question, if that was possible, or whether, and to what extent, to provide a more in-depth answer, if time allowed. also, as students' needs diversified, consultants had to be proficient in more applications. it became clear that a more efficient way to handle user needs was to work with groups rather than individuals. thus, instructional computer seminars were a practical response to the growing demands made on computer center staff. while seminars were in fact accommodating, their curriculum development tended to be somewhat frenzied and quickly assembled to respond to more urgent needs. rather than providing structured curricula in computer skills, seminars were designed to deflect "quick questions." part of the problem was that the seminars were put together by individuals who were unable to take into account educational factors necessary in building a sound curriculum accompanied by appropriate teaching methods. addressing problems as they arise is a direct response, but a reactive educational approach. a proactive approach tries to address problems in the context of providing users with 1) a conceptual framework and 2) training in specific applications. the conceptual and functional aspects are complementary components and can be weighted according to the educational objectives of the computer center. 
for example, seminars teaching computer literacy take, by definition, a more functional approach. however, making a person computer literate does not mean that the person is computer knowledgeable. developing an overall approach for a computer center is difficult. computer centers exist in tandem with academic and service departments and may need to be careful not to step on toes. therefore, taking a proactive approach requires computer centers to make decisions regarding responsibilities in stating and realizing educational goals. it is my view that user services must accept those responsibilities. sheri prupis are mis and cs really different? (panel) dennis a. adams gordon b. davis david l. feinstein george m. kasper jeffery zadeh on criteria for grading student programs james w. howatt log on education: the three ts of elementary education elliot soloway cathleen norris phyllis blumenfeld ron marx joe krajcik barry fishman pattern and toolkits in introductory cs courses this workshop will focus on the pedagogy for teaching introductory programming courses and the materials that support this approach. in our model, students learn programming basics by using elementary programming and design patterns. students then apply their skills in larger laboratory projects that present interesting applications of computer science. a collection of pattern tutorials and related laboratory problems will be presented and made available to the participants. all programming work is supported by the extensive use of core toolkits that serve as design models. the pattern tutorials, laboratories, and core toolkits are currently in use in c++ and are being developed in java. richard rasala viera k. proulx implementing national educational technology standards for students in the united states a consortium of leading educational professional organizations in the united states, led by the international society for technology in education, has developed a set of national educational technology standards (nets) for students. the nets standards describe what all students should know about and be able to do with technology. they are divided into six categories [1]: basic operations and concepts; social, ethical, and human issues; technology productivity tools; technology communication tools; technology research tools; and technology problem-solving and decision-making tools. the nets standards also include performance indicators at each of four major target grade ranges: pk-2, 3-5, 6-8, and 9-12. nets standards describe fundamental technology competencies to be developed through meaningful learning experiences integrated throughout the prek-12 curriculum. they provide realistic benchmarks for achievement. they also provide a continuum of technology skills that can be achieved through a curriculum that reinforces high level learning and problem solving strategies in which technology tools play a major role. harriet g. taylor a content analysis of six introduction to computer science textbooks a content analysis was conducted on three pairs of introduction to computer science textbooks to determine if there were any significant differences in their content. the results of the analysis seem to indicate that the content of major topics in earlier computer science texts is not significantly different from that of more recent textbooks. h.
willis means the case for internet voting joe mohen julia glidden a course in software portability this paper describes an experimental course on the topic of software portability, and initial experience in teaching this course. with the continuing proliferation of both applications and computing environments, the need for portability is being increasingly recognized. a large proportion of the software now being developed will eventually need to be ported to new environments. yet this topic is missing from most computer science and software engineering curricula. the course described here was designed to explore practical issues in the development of portable software. lectures and discussions on portability topics are combined with the ongoing development of a simple software project designed to expose a variety of portability problems. during the course the project is ported to several environments and redesigned to improve its portability. this course has been taught experimentally with encouraging results. student assignments have used novel and effective methods to overcome portability barriers. feedback from students indicates that they have become more aware of portability issues to be considered in software development, and have gained experience with system interface issues in several programming environments. james d. mooney social dimensions of reliability of complex systems the purpose of this panel is to explore the limits of our ability to build fully reliable systems and the consequences thereof. as we come to depend more and more on sophisticated computer-based systems, certain of society's functions are placed in jeopardy. in general our dependence grows gradually as we adjust our degree of trust based on incremental experience. but there is some tendency to get into trouble by trusting too much too soon. this panel investigates these issues and raises questions about what society might do to deal with them more thoughtfully. severo m. ornstein siggraph public policy committee activity detailed we start this column with the results of our third on-line opinion survey on public policy issues affecting computer graphics. next we provide an introduction to and a copy of the definition paper for a prospective study of computer graphics research to be conducted by the national research council with partial funding from siggraph. this is followed by an update on our activities in proposing a course on public policy and a panel on digital rights management of intellectual property for siggraph 2001. myles losch provides an update on the issues related to the slow pace of adoption of digital tv. finally, we close with another set of comments from a reader. david nelson bob ellis laurie reinhart building firm trust online a prescription for computer narcotics robert papsdorf profiles of mis doctoral candidates: ideals and reality little is known about which qualities mis department search committees are looking for in candidates when conducting a faculty search at the assistant professor level. this paper examines the qualities being sought by institutions focusing on research, teaching, or both. the actual teaching and research performance of students in the field is then used to find clusters of students and to examine the performance of different clusters. finally, lessons for students and academic programs are discussed in terms of the fit between the faculty candidate and the academic department. kai r. t. larsen m.
pamela neely crisis in computer science education at the precollege level this paper attempts to focus attention on the problem of providing meaningful and effective educational programs for precollege teachers. computer science departments caught in their own staffing problems have not given much attention to precollege teacher training in computer science. elementary and secondary schools are experiencing very little turnover in staff. even when these schools have an open position, individuals entering the teaching field have little or no training in computer science. yet the need for precollege teachers with a computer science background exists and is growing larger each year. this paper addresses this crisis in computer science education at the precollege level and proposes an approach which can be implemented easily and effectively. larry w. cornwell using jflap to interact with theorems in automata theory eric gramond susan h. rodger computer professionals and the next culture of democracy doug schuler ensuring ethical behavior in organizations richard g. milter information technology curriculum development bei-tseng (bill) chu venu dasigi john gorgone david spooner your place or mine?: privacy concerns and solutions for server and client-side storage of personal information deirdre mulligan ari schwartz the impact of new accreditation and certification standards for secondary computer science teachers on university computer science departments the establishment of accreditation and certification standards for secondary computer science teachers has been taking place over the past 5 years. the acm has taken the lead in developing certification standards for adoption by states, and the international society for technology in education (iste) has taken the lead in developing national accreditation standards for teacher preparation programs through affiliation with ncate, the accrediting body for professional education units in the usa. the impact of institutionalizing these new standards is discussed, and the role that university computer science departments should now take in the teacher preparation process is described. harriet g. taylor c. dianne martin starting a community college acm student chapter: a matter of motivation anne g. applin the 1986-1987 taulbee survey the computing research board's 1986-87 taulbee survey includes the latest statistics on production and employment of ph.d.'s and faculty in computer science and engineering. for the first time, departments offering ph.d.'s in computer engineering are also included. d. gries d. marsh computing education and the information technology workforce eric roberts guilds or virtual countries? the future of software professionals william g. griswold bit - a child of the computer the background of the scandinavian computer journal b i t will be outlined, in particular with respect to computational demands in science, technology, industry and defense. the history of b i t will be described and related to the evolution of computers, numerical mathematics and computer science. some contributed papers which have had an impact on the general development will be discussed briefly. the 19th century could perhaps be characterized as a period of preparation for the advent of the computer. it so happened that quite a few swedish inventors played a role in this development.
scheutz, father and son, as well as wiberg constructed mechanical devices for a somewhat automatized calculation for solving simple arithmetic problems by a series of pre-determined operations. in fact, wiberg was able to compute a logarithm table which even appeared in print. later, odhner built a robust mechanical, hand-driven calculator which around 1930 was followed by electromechanical calculators. all lengthy calculations had to be performed manually by this time. let me mention a few examples from sweden. one such problem was to find periods of so-called internal waves in the sea. these waves are huge, up to 20-30 meters in size, but nevertheless invisible. they are generated by the moon and observed as sharp changes in the salinity. the method used was numerical autoanalysis, that is, a kind of fourier analysis of the function by itself. during the war there was a great need for ballistic tables, and i belonged to a group involved in computing bomb tables for the swedish air force. we used the classical runge-kutta method with air resistance represented graphically, and we had only electro-mechanical calculators at our disposal. after having computed a basic set of orbits we could produce the wanted tables by a suitable interpolation procedure. it is a sad fact that all our tables could probably have been computed in a couple of minutes on a fast modern computer. after the war i was involved in some rather lengthy computations on the deuteron concerning energy levels and quadrupole moment and also in problems on scattering. however, in 1946 some people in the swedish navy and in the academy for engineering sciences got interested in the progress in the united states and after having visited the key projects they reported back with great enthusiasm. it was quickly decided to offer scholarships to four young students; they were selected in the spring of 1947. they arrived already in august or september; two of them went to harvard and mit while two, including myself, went to princeton. as far as i am concerned i enjoyed fantastic hospitality, particularly from herman goldstine with whom i established a lifelong friendship. back home in 1948 some of us got involved in the construction of a relay computer (bark), completed in 1950. however, it was soon understood that there was a need for more computer facilities, and the construction of besk under erik stemme was initiated. it was completed in 1953, and during a short period of time it was considered one of the most powerful computers in the world. clearly its structure was very much the same as that of the princeton computer. in 1956 a simplified copy of besk called smil was completed at lund university, built with a minimum budget of some $20,000. this computer was used for a large variety of problems in nuclear physics (particularly eigenvalue problems), spectroscopy, mathematics (number theory, construction of tables), and also social sciences (geographical simulations). several problems coming from industry and different research organisations were also treated. the interest in and use of computers created a very natural demand for conferences since the literature on the subject for obvious reasons was very scarce by this time. the first scandinavian conference on computers was held in may 1959 in karlskrona, later known as the place where a soviet submarine ran aground in 1981.
one reason for this choice of site was the fact that the swedish navy played an important role in initiating the computer development, another that, especially in spring, it was a lovely place, situated on the baltic. preliminary discussions were held informally on the need for a nordic journal on computer-related problems, and at the next conference in copenhagen in 1960 a more formal meeting was arranged. niels ivar bech acted as chairman, and further peter naur of denmark, jan garwick of norway, olli lokki of finland, and myself from sweden were present. it was unanimously decided to start a nordic journal within the computer area, to appear quarterly. the journal was intended to be international with papers published in english, german, or the scandinavian languages. as it turned out, only about 4-5 papers have been published in german, and very soon papers in the scandinavian languages gradually disappeared. nowadays it is required that all papers be written in english. the name of the journal was a long one: nordisk tidskrift for informationsbehandling, but playing around with the initials in a clever way we were able to form the name b i t. in fact, this name is most convenient because of its shortness which makes it very easy to quote papers printed in the journal. as is well known it is somewhat dangerous to suggest an activity including work since there is a great risk that the proposer is elected to carry through the project. this is exactly what happened in this case, and from the very beginning up to this time i have served as editor of b i t. naturally, we also have an editorial board with members from the nordic countries. peter naur of copenhagen has been a member right from the beginning in 1961 and germund dahlquist from 1962. we got financial support from the danish company regnecentralen under niels ivar bech and from several official sources including the nordic research organisations for natural sciences. finally, just a few years ago we managed to become self-supporting, perhaps mostly through favorable exchange rates. during the first decade b i t tried to help the public get acquainted with new developments within the computer area. it is natural that the growing crowds of people working with computer applications of different kinds felt an increasing difficulty in keeping up with the fast progress, both in hardware and in software. that left a gap which b i t tried to fill. simultaneously we also tried to accommodate scientific papers, particularly in numerical analysis and in computer languages. very early we opened a special column for algorithms written in algol 60. as a consequence of this policy our subscribers to a large extent were private scandinavians during the first decade. then the situation changed slowly. the need for general information decreased because this was treated in special new publications of type datamation and also in ordinary and weekly newspapers. simultaneously the number of scientific contributions to b i t increased strongly, first in numerical mathematics, later also in computer science. as a result of this development the number of scandinavian subscribers decreased while the number of non-scandinavian subscribers, mostly libraries of research organisations and universities, increased, the net result being slightly positive. from 1980 it was clearly indicated that b i t was divided into two sections, one for computer science, and one for numerical mathematics.
in spite of obvious difficulties we have been able to strike a reasonable balance between these two. the first volume (1961) had 290 pages and was typewritten and photographed. already volume 2 was printed in an ordinary way. b i t had obviously been observed also abroad since two contributions, one from the us (louis fein) and one from the netherlands (peter wynn) appeared already in the first volume, while several "foreign" papers (among them one by gene golub) were presented in volume 2. from the beginning there was a certain ambivalence with respect to papers on hardware: during the first 10 years we published a few of that kind, but finally they disappeared. turning to computer science there is an important subject which attracted considerable attention during the first 10-15 years, namely computer languages and compiler construction. the main interest was centered on algol 60 since by that time fortran was only available for users while the corresponding compilers were held secret. however, different aspects of other programming languages, e.g. cobol, algol 68, pascal and simula, have also been treated. it is of course hard to tell which papers have had an impact on the general development, but i think that papers by dahlquist and others on stability problems, enright-hull-lindberg on numerical methods for stiff systems and a whole bunch on runge-kutta methods have had a considerable influence. finally i think it is fair to mention that we offered a special issue dedicated to germund dahlquist on his 60th birthday, followed by one dedicated to b i t on its 25th birthday, both with about 300 pages. concerning the geographical distribution of authors and subscribers we can say roughly that the nordic countries, the rest of europe, and usa plus canada account for about 1/3 each in both respects. the most striking feature is the steep increase in offered contributions from taiwan, and we have also had quite a few from mainland china. in both cases the quality has been rather good. also some exotic countries are represented by authors: nigeria, singapore, ecuador, sudan and the fiji islands, just to mention a few. even if some papers must be rejected we try to encourage the authors, and in many cases the papers can be published after a more or less thorough revision. as a mean value, the time between reception of a paper and publication is nine months. c.-e. froberg computers and the quality of life john m. artz thematic analysis of the new acm code of ethics and professional conduct c. dianne martin david h. martin inside risks: the perils of port 80 stephan somogyi bruce schneier time flies you cannot they fly too fast jean gasen an introductory computer science course for non-majors this paper describes an approach to an introductory computer science course designed especially for students who are not specifically required to take a computer course and thus ordinarily receive no appreciation for computers or computing. this is the third semester this course has been offered. student enrollment has been 31, 46, and 41, respectively. we anticipate higher enrollment figures next semester as more advisors are becoming aware of the course. in a typical semester students majoring in such disciplines as english, advertising, nursing, psychology, sports administration, sociology, broadcasting and communication, music, elementary education, art and anthropology have enrolled.
this course is ideal for those majoring in mathematics education as one day they may be teaching such a course to high school students. to encourage this group of students to enroll in the course, we restricted students from the engineering, physical sciences, and business disciplines from attending. they are required to take a different computer course. we have observed that most students not required to take a computer course desire to learn something about computers, and because of the above restriction are less hesitant to enroll. roger l. wainwright treating computer science as science: an experiment with sorting (poster session) when i teach sorting algorithms in our introductory computer science class, i always wonder how i can convince the students of the efficiency of o(n log n) sorts, despite their more complex code, vs. the ease of writing o(n2) sorts. with today's personal computers, even bubble sorting an array of a few thousand items appears to occur instantaneously. in addition, most textbooks provide the program code for implementing most of the standard sorting algorithms, such as bubble sort, selection sort, and quick sort. since our introductory course has a closed lab period each week, i looked for something to do with my students when it came to sorting. making them type in the code that was in the book seemed a waste of time. cary laxer using standards as a security policy tool j. m. ferris computing in the spanish educational system c. gomez bueno j. c. de pablos ramirez microcomputers - acquisition and use m. lloyd edwards roboprof and an introductory computer programming course roboprof is an online teaching system. it is based on www technology and can easily incorporate www-compatible media such as graphics, audio and video. it is structured as a self-paced course book: roboprof presents the student with information on a closely-defined topic and then marks a set of exercises covering that material. when the student's results are satisfactory, a new topic is introduced. the idea behind roboprof is to increase motivation by borrowing ideas from certain games. these ideas include providing a challenge, giving quick feedback, making progress visible and encouraging experimentation. roboprof was used to teach an introductory computer programming course. an introductory computer programming course must cover two main areas: the computer model (syntax and semantics of a programming language) and program design. in this paper i show how roboprof can be effectively used to help teach the syntax and semantics of a programming language. charlie daly progress report on the study of program reading we present some ideas here about prose reading comprehension tests, with analogies to program reading exercises, and suggest the potential usefulness of a standardized, nationwide program reading comprehension test as a means to assess on a comparative basis individual and department-wide progress through the computer science curriculum. we conclude with a research agenda on program reading and encourage contributions to the work from interested colleagues. philip koltun lionel e. deimel jo perry a smorgasbord of pedagogical dishes anders berglund mats daniels vicki l.
almstrum information systems skills: achieving alignment between the curriculum and the needs of the is professionals in the future the objectives of this study are to find out the perceived information systems (is) skills which are important at present and in five years' time, and the perceived emphasis of is curricula on these skills. it is found that the scope of skills required of is professionals will broaden towards the end of the decade. interpersonal skills, business skills, analysis and design skills, and programming skills are the most critical for career development. results also show that there is a match between the skills and knowledge possessed by is graduates of the hong kong polytechnic university and industry requirements for the current is environment. however, the curriculum is not aligned with future industry needs. this indicates that the education the respondents received does not prepare them for progression up the career ladder. implications of the findings are discussed so that a new curriculum may be designed to provide the preferred is graduates to industry. eugenia m. w. ng tye ray s. k. poon janice m. burn the politics of advanced information technologies (panel) michael l. gargano frank losacco crisis and aftermath last november the internet was infected with a worm program that eventually spread to thousands of machines, disrupting normal activities and internet connectivity for many days. the following article examines just how this worm operated. e. h. spafford teaching computer science through writing william j. taffe updating your users' skills this paper discusses how to bring users up to date on new or more advanced features. mary e. sherer signature schemes based on the strong rsa assumption we describe and analyze a new digital signature scheme. the new scheme is quite efficient, does not require the signer to maintain any state, and can be proven secure against adaptive chosen message attack under a reasonable intractability assumption, the so-called strong rsa assumption. moreover, a hash function can be incorporated into the scheme in such a way that it is also secure in the random oracle model under the standard rsa assumption. ronald cramer victor shoup about this issue… adele goldberg a partnership - school and computer science work experiences: a career component to the curriculum melvin simms privacy commissioners: powermongers, pragmatists or patsies? ann cavoukian reorganizing to serve users better this presentation will outline the emerging computing environment on four campuses of the university of colorado and describe the facilities and services of the university academic computing system. in particular, the organization and activities of user services personnel will be discussed in some detail. james nichols session m4: women in the workplace this session, organized by the association for women in computing, seeks to provide a forum for discussion of the ever-changing environment facing today's woman in the computer industry. each panelist, an experienced computer professional, will discuss private industry, academia, or government, in terms of the decision points in her own career. she will discuss topics such as initial educational requirements, work experience, career development activities, other issues relating to advancement, and the more general demands of the workplace as they have changed since she entered the workforce.
she will then relate these factors to the entry-level computer professional, noting where shifts in emphasis have occurred. after the presentations, the panelists will discuss the similarities and differences in their various spheres. virginia c. walker mary charles blakebrough caroline m. eastman the 1985-1986 taulbee survey the computing research board's latest survey on the production and employment of ph.d.'s and faculty in computer science and engineering depicts a young, optimistic discipline where supply may not always meet demand. david gries models for the organization of academic computing the organization of academic computing is complex and requires many different dimensions to be described properly. each dimension is a continuum which extends between two models of academic computing. the models represent opposing views of how academic computing ought to be arranged, and they define the dimensions which they anchor. one dimension, for example, extends from totally centralized computing at one end to totally distributed computing at the other. another dimension is characterized by having the library model of computing at one end, where services are free to all patrons, and a fee-for-service model at the other end, where each computing service is charged back to customers. this paper proposes yet another dimension, one which encompasses the purposes of the enterprise, how it is organized, and how the staff do their work. it is a dimension defined by the contrast between the university faculty department and the commercial data processing shop as models of academic computing. the distinction between a faculty department and a dp shop is an artificial one, of course. i believe, however, that it is an illuminating comparison, one that forces us to consider just what it is an academic computing organization is supposed to do and how it should do it. this paper will compare and contrast these two models in terms of how they define the goals of academic computing, how the department should be organized, and how the staff should conduct themselves within that organization. consider for a moment a teaching department in a university or college. the purpose of the department is to teach an area of study---its facts, theories and methodologies---to students. the theories, which explain how the field is organized, and the methodologies, which prescribe how the subject is to be studied, are more important than the facts, which are subject to interpretation and may be superseded by new knowledge at any moment. the students progress from general, introductory courses to more advanced courses. what would an academic computing department which followed this model be like? first, it would have as its goal the training of faculty, students and staff in the use of computers. giving users a good understanding of computer basics, a theory, and some general problem solving tools, a methodology, the department would concentrate on teaching computer users to help themselves, not on solving their problems for them. just as faculty would rarely teach the same subject to the same students twice, so the academic computing staff would rarely see the same users for the same problem twice. a user who comes back is returning with a different, more advanced problem, just as a senior may take a difficult seminar in the same department in which he took a survey course as a freshman.
an academic computing department which is successful under this model will create computer users who can help themselves just as a college graduates students who can go on to do independent research. a teaching department has a nearly flat organizational structure, usually consisting of a number of faculty and a department head or chairperson. on entry to the department, each faculty member has basically the same credentials and qualifications: a terminal degree in the discipline. faculty are distinguished from one another by their areas of expertise, and their rank is based primarily on seniority. faculty of higher rank do not supervise those of lower rank. in an academic computing department which uses this model, the structure may be nearly flat, with all or most of the staff reporting to a single manager or director. as faculty have their areas of expertise, so the staff of this department have their specialties. promotion within the department is based mainly on seniority. the staff will have similar backgrounds and qualifications, possibly with most holding master's degrees. the work of college faculty is characterized by autonomy. they are free to choose their research projects within their fields, and may pick the courses they teach, once the basic needs of the department are fulfilled. they can conduct research or teach using any methodology that seems appropriate, and are not required to account for their time on a regular basis. large projects are handled by committees, many of which are ad hoc. an academic computing department which was like this would allow its staff members to choose their own areas of concentration or projects when the core functions of the department are met. they would choose the software and hardware for their projects based on their own independent judgment. staff may work flexible hours, and the committee structure might be employed to deal with big undertakings. following this model, academic computing staff would be largely responsible for their own time, and act with a minimum of supervision. the faculty department model for academic computing has some clear strengths. as the cliche advises, it's always better to teach people to fish than to take them to long john silver's seafood shop. users who come in with new and challenging problems keep the staff on their toes. promotions based mainly on seniority give the staff assurance of progress in their careers. personnel who function mainly on their own authority have the assurance and satisfaction of doing jobs the way they think they ought to be done. this approach is not without its problems, however. as computer users bring more complex problems in, the likelihood of quick solutions decreases, adding to user frustration. staff who perceive themselves as more technically skilled than their colleagues may be frustrated if this counts for little in their advancement. a flat organizational structure may offer too large a span of control for one manager to supervise adequately. allowing staff to select their own projects may mean that some projects never get addressed. some people don't work well with little supervision. they get lazy or confused and require frequent prodding or direction. the committee approach to problem solving is notoriously inefficient. these drawbacks encourage us to examine the model at the opposite end of the spectrum, the one based on the commercial data processing shop.
the purpose of a commercial data processing shop is to provide data processing services to customers. the customers are interested in getting results, and are not generally concerned with how the results are achieved. good service will generate repeat customers who will get the same services over and over. customers are differentiated from each other by the kinds of problems they have and services they require, not by the level of sophistication of their demands. an academic computing department that followed this model would be primarily concerned with providing computing results to users. the training function would be secondary, or even omitted. a successful academic computing department under this model would be one with lots of satisfied repeat customers. developing an efficient routine to process their requests would be critical to the functioning of the department. dp shops are typically organized hierarchically, with teams of programmers reporting to programmer/analyst team leaders, who report to managers reporting to a senior manager or director. team leaders or managers have often been around longer than the personnel they supervise, but they have achieved their positions mainly through their expertise, not their seniority. programmers are distinguished from each other by the tasks on which they're working. entry level employees may have fewer years of education and lack some of the credentials of more senior personnel. academic computing departments in state universities may be organized along these lines because state merit systems based on commercial models encourage it. departments which work well following this model will have a staff of versatile programmers who are switch hitters and have experience in diverse applications. staff will see clear paths of advancement through the organization. advanced degrees and certificates may be required of higher level employees. in the commercial environment, programmers' work is highly directed. they are assigned projects with strict objectives and goals, and adhering to timetables and getting products and services out by the deadline is critical. programmers apply a stable set of tools to problems within strict methodological limits set by management. documentation and programming standards exist and are followed. an academic computing department which follows this model will deliver its services on time and reliably. it will develop and observe standards in its own work, and may seek to provide standards for the rest of the college or university. its work will be distinguished by the efficient use of subroutine libraries, fourth generation languages and other tools. project assignments will flow down the organization from the top. the dp shop model is, of course, not without its own set of problems. although the successful academic computing unit which follows this model may have a large core of satisfied customers, there may be another large group of potential users whose needs are not addressed. the department may grow to be excessively reliant on established routines and procedures, and have difficulty addressing new or advanced problems. although many employees find a hierarchical arrangement appealing because of the clear paths of advancement, entrenched senior managers may force promising subordinates out of the organization because they are blocking the upward route. in a highly demanding environment, the list of potential projects demanding academic computing quickly escalates. 
those for which no team is available sit in the queue, causing customer frustration. this frustration may, in turn, lead the customers to seek other solutions to their problems. likewise, the problems which fall outside the normal range of experience may not be addressed. many academic computing centers were slow to join the personal computer revolution because they were not flexible enough to see the advantages of these tools. many college and university computer users chose personal computers because academic computing centers were unable to address their needs. obviously, neither of these models is the total solution to the problems of academic computing. no university or college, to my knowledge, follows either of these models, strictly speaking. on the other hand, i believe that almost all academic computing departments can be located somewhere on the continuum that runs between these two models. in the fall of 1987, we were given the task of combining the academic computing center, which provided central vax computing, and microcomputer services, which supported academic and administrative microcomputing, into a new department, academic computing services (acs). our new organization leaned heavily toward the faculty department model. we placed a great deal of emphasis on training, offering over 60 different workshops to faculty, staff and students, and purchasing video and computer based training packages. everyone on the staff was responsible for teaching or developing at least one workshop. we developed a very flat organizational structure, with over half the staff reporting straight to the director. acs staff worked autonomously and chose their own areas of concentration. large projects, such as the evaluation of categories of software, were accomplished through ad hoc committees. a little over a year later, we reorganized again, this time in response to a campus-wide restructuring of data processing and communications functions. the new organizational structure, implemented in january, is much more hierarchical. most of the staff report to the director through two senior managers. this change was undertaken mainly in response to demands by senior staff for more supervisory responsibility. we thus moved the department along the continuum toward the dp shop model. in other respects, however, we have stuck with the faculty department model. the location of any particular academic computing unit between the faculty department and dp shop models will be due to many factors. large departments may require more hierarchical structure. departments with both academic and administrative responsibilities may prefer the project team approach to administrative systems development and maintenance, or the customer service orientation of the dp shop model for generating routine administrative services. departments in research institutions may find that faculty and researchers demand little beyond consulting and training, providing most other academic computing functions on their own using workstations or departmental systems. i contend, however, that using this dimension to describe academic computing helps to focus our attention on what we are doing, how we are organized to do it, and what approach we take to getting the job done. paying attention to these questions is important, and is something we need to make time for on a regular basis. r. g brookshire teaching inter-institutional courses (panel session): sharing challenges and resources bruce j. 
klein mats daniels dianne hagan anders berglund annegret goold mary last tony clear erkki sutinen under the stress of reform: high-performance computing in the former soviet union p. wolcott s. e. goodman introduction to the workshop on freedom and privacy by design lenny foner the csedres toolbox: a resource to support research in computer science education vicki l. almstrum cheng-chih wu grow fast, grow global: how the irish software industry evolved to this business model barry murphy ai watch: can one really reason about laws? joseph fulda a conceptual framework for predicting the career success of is professionals denise potosky hindupur ramakrishna road crew: students at work lorrie faith cranor computer science accreditation (panel): current status and future directions lawrence g. jones keith barker susan conry doris lidtke a methodology for conducting advanced undergraduate computer science courses george j. pothering reflections: photocopy this article! steven pemberton using current literature in two courses robert a. hovis a case of academic plagiarism ned kock caught in the middle cartoonist sandra boynton has captured in a single cartoon the dilemma faced by middle managers at all levels. the cartoon shows a bewildered, hapless cat standing between a large, pointing, barking dog on one side and a small, distraught yapping dog on the other. when you were promoted or promised promotion, you were probably not warned about all the ways in which you would be "caught in the middle". in this presentation, i want to talk about some of the ways in which you can be in the middle. s. webster women in computing the world is changing and the demographics of the workforce are changing with it. at one time there was hope that our exciting new field of computer science would not only revolutionize the technical world in which we live, but break new ground in professional access and equity for women. this optimism has given way to pragmatism. if we want the best and the brightest women to participate fully in our profession, we must take a long hard look at our professional practices and conventions. the goal of this special section is to provide computer professionals with the information to understand the issues that women confront in their workplace and with the resources to successfully diversify their workplace. i hope you will save this issue of communications as a reference for many years to come. i also hope that, as opportunities for women in computer science expand, the need for the specialized resources and programs we described here diminishes. amy pearl webware: a course about the web sophisticated applications and software development on the web demand an extensive and thorough understanding of a variety of computer science disciplines, as well as providing their own set of issues. therefore, we have created an advanced undergraduate computer science course called _webware: computational technology for network information systems_ that builds upon and extends knowledge previously gathered by the students. we describe its contents, our teaching experience, and address the challenges of teaching both the foundations and current technological issues of web programming. david finkel isabel f. cruz netnews dennis fowler computer science accreditation (panel session): an introduction and status of the national program john f. dalphin taylor booth raymond e. miller john r. white robert aiken j. t. cain edward w. ernst michael c. 
mulder kathleen hennessey forum diane crawford product review: lifebook 420d notebook computer michael scott shappe report on cs2 from acm cs2 committee (panel session) elliot b. koffman david stemple caroline e. wardle standards: the language for success roy rada the digital dilemma randall davis computer science laboratories r. ross a staged progression for integrating ethics and social impact across the computer science curriculum elaine yale weltz the costs of personal computing in a complex organization: a comparative study the widespread adoption of personal computers (pcs) may be attributable to their apparent low purchase and operational costs. however, significant procedural costs arise in fitting a pc application into a work setting. our investigation of the adoption and use of pcs in several departments of a complex organization reveals a large number of unanticipated costs. these indirect, deferred, and governance costs are chiefly borne by users not responsible for acquiring pcs. these costs represent additional demands for users' time, skill, expertise, and attention as well as money. we find that the distribution of deferred costs determines the viability of pc systems. we also find that the integration of pcs can alter the way people do their jobs. these changes in turn give rise to additional social and political costs within the organization. subsequently, we find that the true costs of personal computing are typically underestimated and unaccounted for. sonia nayle walt scacchi functional computer literacy many volumes have been written about computer literacy; unfortunately, most of these have not defined it, and those that do tend to disagree. with so little agreement, it is not surprising that computer literacy has become one of our most frequently abused buzz words. playing games, teaching physical education, accounting for one of the big 8 firms, and programming on a supercomputer may all involve computers and, if so, require a certain level of computer literacy. but not all to the same extent. many universities have begun talking about a computer literacy requirement. indeed, several have already adopted one. those that have adopted such a requirement differ in how it is to be met. this paper discusses how these differences appear to have arisen and proposes a comprehensive plan to achieve functional computer literacy, the initial task being to define computer literacy. john h. major roi f. prueher growth stages of end user computing the stages of growth and interconnectedness of the applications of end user computing are described in a model that is directed toward management and planning. sid l. huff malcolm c. munro barbara h. martin a cooperative approach to software development by application engineers and software engineers we have tried a new project management approach for the development of engineering application software systems. the key factor of this new approach is the introduction of the role of "interpreter", who sits in between application engineers and programmers and handles all of the communication problems among them. our major objectives were: (1) to release application engineers from unfamiliar programming tasks as much as possible, and (2) to increase the productivity of a small group of technical staff. experiments upon real projects resulted in much greater improvement of productivity and quality than we had expected, and the morale of team members was also improved.
this paper describes the results of these experiments and the characteristics of this new development approach in comparison with the traditional one. keiji uemura miki ohori and nothing to watch: the divide-by-zero future: the arrival of the free computer clay shirky algorithms and software: integrating theory and practice in the undergraduate computer science curriculum a theoretical trend in the development of undergraduate computer science curricula is described. while this curriculum trend can be seen as a natural evolution of a discipline, there are other reasons for it. an opposite trend can be observed that seeks to integrate theory and practice in the undergraduate curriculum. we offer general guidelines based on this second curriculum philosophy. judith wilson newcomb greenleaf robert trenary organizing a class in microcomputer to mainframe communication northwestern university has a large number of users with their own microcomputers. many of these microcomputer owners are interested in terminal emulation, and communication at baud rates approaching 9600. this paper describes the problems encountered in trying to meet the needs of users who attempt to communicate between their microcomputers and the mainframes available at northwestern. william e. mihalo netnews: ups and downs dennis fowler signing electronic contracts david molnar responses to "the consequences of computing" margaret anne pierce jacques berleur joseph r. herkert dorothy e. denning charles dunlop chuck huff factors influencing the formation of a user's perceptions and use of a dss software innovation understanding how users form perceptions of a software innovation would help software designers, implementers and users in their evaluation, selection, implementation and on-going use of software. however, with the exception of some recent work, there is little research examining how a user forms his or her perceptions of an innovation over time. to address this research need, we report on the experiences of a health planner using a dss software tool for health planning over a 12-month period. using diffusion theory as outlined by rogers, we interpret the user's perceptions of the software following rogers' perceived characteristics of the innovation. furthermore, we show how our user justifies her attitudes toward the technology using 5 important factors during 3-, 6- and 12-month interviews: stage of adoption, implementation processes, organizational factors, subjective norms, and user competence. results are compared with key is research in these areas, and the implications of these findings for the diffusion of decision support systems are discussed. mike w. chiasson chris y. lovato about this issue… anthony i. wasserman reducing risk for central site equipment moves equipment changes - moving, adding or removing units in a central computer facility - can have a devastating effect on a computer center. equipment failures and cable problems can result in prolonged service outages, user disservice and management frustration. risks from these changes can be effectively controlled by means of sound implementation planning tools. an effective methodology based on a graphics technique that accurately represents equipment location in a computer room on standard-size paper provides the planning tools that minimize the risk of outages.
roger kleffman rick glesener jerry brzeczek technology in education (introduction) elliot soloway when social meets technical (student paper panel): ethics and the design of "social" technologies patrick feng separation thresholds, retention frontiers, and intervention assessment: human capital in the information technology workforce robert a. josefek robert j. kauffman current and future direction of the advanced placement exam mark stehlik susan h. rodger kathy larson alyce brady chris nevison integrating advanced workstations into a college curriculum: a panel presentation anne judd two weeks in the life of a technology teacher angela boltman an hci continuing education curriculum for industry m. m. mantei multimedia curricula, courses, and knowledge modules edward a. fox linda m. kieffer an evaluation of strategies for teaching technical computing topics to students at different levels (poster) c. king the increasing role of computer theory in undergraduate curricula donald j. bagert using ada-based robotics to teach computer science barry fagin ifip b. c. glasson separation of basic competency acquisition from advanced material in teaching and assessment in an undergraduate computing subject t. d. gedeon g. a. mann w. h. wilson r. a. bustos organizational expectations: what the top expects of those in the middle as a manager in user services there are constant demands on your talents and time. believe it or not, managers higher up in the organization know that you cannot meet all of them. they may not tell you, but they know --- they know because they're faced with the same dilemma. several years ago there was a "tongue-in-cheek" book on management that proposed that a manager was similar to a monkey attempting to shinny up a flag pole. the higher up either one goes, the more exposed their rear end is. your boss' future and success are dependent on all those who work for her/him. the conflicting demands of being a manager in computer services are a cause for stress and anxiety. if you can't do everything that is expected of you, what should you do? there are three simple rules that can make your job easier. the rules are: thou shalt not surprise thy boss thou shalt not do anything your subordinates can do thou shalt not subvert thy boss's priorities f. w. connolly transition from two year to four year programs (panel discussion) john f. dalphin donald e. burlingame wiley mckinzie joyce little spotswood stoddard thinking parallel: the process of learning concurrency this paper describes a course in concurrent and distributed computing for high school students and empirical research that was done to study students' conceptions and attitudes. we found that both their conceptions and their work methods evolved during the course to the point that they were able to successfully develop algorithms and to prove their correctness. students initially found the course extremely challenging but eventually came to appreciate its relevance and its contribution to improving their cognitive skills. mordechai ben-ari yifat ben-david kolikant information and communication technologies: what's in it for computer science education rachelle heller the use of minicomputers in a first computer systems course this paper describes some experiences in the use of a set of small minicomputers in an elementary computer systems course.
michael levison books michele tepper standards and innovation in technological dynamics dominique foray inside risks: web cookies: not just a privacy risk emil sit kevin fu editorial: prologue and introduction carl cargill training adult learners - a new face in end users julie a. scott why are the results of team projects so different? david ballew object-oriented inspection in the face of delocalisation software inspection is now widely accepted as an effective technique for defect detection. this acceptance is largely based on studies using procedural program code. this paper presents empirical evidence that raises significant questions about the application of inspection to object-oriented code. a detailed analysis of the 'hard to find' defects during an inspection experiment shows that many of them can be characterised as 'delocalised' --- the information needed to recognise the defect is distributed throughout the software. the paper shows that key features of object-oriented technology are likely to exaggerate delocalisation. as a result, it is argued that new methods of inspection for object-oriented code are required. these must address: partitioning code for inspection ("what to read"), reading strategies ("how to read"), and support for understanding what isn't read --- "localising the delocalisation". alastair dunsmore marc roper murray wood detecting memory errors via static pointer analysis (preliminary experience) nurit dor michael rodeh mooly sagiv framekit, an ada framework for a fast implementation of case environments software engineering methodologies rely on various and complex graphical representations and are more useful when associated with case (computer aided software engineering) tools designed to take care of constraints that have to be respected. now, case tools have given way to case environments (a set of tools that have a strong coherence in their use). this concept provides enhanced solutions for software reusability while the environment may be adapted to a specific understanding of a design methodology. this paper describes framekit, an ada-based framework dedicated to the quick implementation of case environments. we summarize first the concepts implemented in framekit and illustrate them using a detailed example of a simple tool implementation and integration. fabrice kordon jean-luc mounier debugging heterogeneous applications with pangaea leesa hicks francine berman a complete binary tree based system for activation of concurrent processes (abstract only) the main purpose of this paper is to create a complete binary tree (cbt) system to activate processes concurrently. users are not provided any commands by the 4.2 bsd unix system to activate processes simultaneously. however, by using the result of this project, users are able to activate several processes at the same time by issuing the cbt command. the cbt is an executable program which contains two routines. one of the routines is called nrouter.c, which is a recursive program used to create a complete binary tree structure to activate user processes simultaneously. the advantages of using a complete binary tree structure to activate user processes are that the user processes can be activated almost simultaneously and redundant leaves will not be created. weishing chen s. sitharama iyengar
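the cbt entry above describes its mechanism only in outline, and the original nrouter.c is not reproduced here. the following is a minimal hypothetical sketch, in portable c using posix fork()/wait() rather than 4.2 bsd specifics, of the general idea: each node of a complete binary tree forks its (at most two) children, so n processes come up in roughly log2(n) waves of forks instead of one sequential loop. all names (cbt_sketch.c, spawn_subtree, worker) are illustrative and are not the authors' code.

/* cbt_sketch.c -- hypothetical illustration of complete-binary-tree process
 * activation; not the authors' nrouter.c.  node i forks children 2i+1 and
 * 2i+2, so n worker processes start in about log2(n) fork "waves".
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void worker(int id)
{
    /* stand-in for the user process; a real package would exec() a command here */
    printf("worker %d running as pid %ld\n", id, (long)getpid());
}

static void spawn_subtree(int id, int n)
{
    for (int child = 2 * id + 1; child <= 2 * id + 2 && child < n; child++) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        }
        if (pid == 0) {              /* child: build its own subtree, then work */
            spawn_subtree(child, n);
            worker(child);
            while (wait(NULL) > 0)   /* reap this node's own children */
                ;
            exit(EXIT_SUCCESS);
        }
        /* parent: continue the loop to fork the second child, if any */
    }
}

int main(int argc, char **argv)
{
    int n = (argc > 1) ? atoi(argv[1]) : 7;   /* total number of processes */
    spawn_subtree(0, n);
    worker(0);                                /* the root is process 0 */
    while (wait(NULL) > 0)                    /* reap the root's direct children */
        ;
    return 0;
}

the point of the tree shape is only that forks proceed in parallel down independent subtrees, so no redundant leaves are created; the actual package would additionally pass each node the user command it is supposed to run.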
safkasi: a security mechanism for language-based systems in order to run untrusted code in the same process as trusted code, there must be a mechanism to allow dangerous calls to determine if their caller is authorized to exercise the privilege of using the dangerous routine. java systems have adopted a technique called stack inspection to address this concern. but its original definition, in terms of searching stack frames, had an unclear relationship to the actual achievement of security, overconstrained the implementation of a java system, limited many desirable optimizations such as method inlining and tail recursion, and generally interfered with interprocedural optimization. we present a new semantics for stack inspection based on a belief logic and its implementation using the calculus of security-passing style which addresses the concerns of traditional stack inspection. with security-passing style, we can efficiently represent the security context for any method activation, and we can build a new implementation strictly by rewriting the java bytecodes before they are loaded by the system. no changes to the jvm or bytecode semantics are necessary. with a combination of static analysis and runtime optimizations, our prototype implementation shows reasonable performance (although traditional stack inspection is still faster), and is easier to consider for languages beyond java. we call our system safkasi (the security architecture formerly known as stack inspection). dan s. wallach andrew w. appel edward w. felten software pipelining: an effective scheduling technique for vliw machines this paper shows that software pipelining is an effective and viable scheduling technique for vliw processors. in software pipelining, iterations of a loop in the source program are continuously initiated at constant intervals, before the preceding iterations complete. the advantage of software pipelining is that optimal performance can be achieved with compact object code. this paper extends previous results of software pipelining in two ways: first, this paper shows that by using an improved algorithm, near-optimal performance can be obtained without specialized hardware. second, we propose a hierarchical reduction scheme whereby entire control constructs are reduced to an object similar to an operation in a basic block. with this scheme, all innermost loops, including those containing conditional statements, can be software pipelined. it also diminishes the start-up cost of loops with a small number of iterations. hierarchical reduction complements the software pipelining technique, permitting a consistent performance improvement to be obtained. the techniques proposed have been validated by an implementation of a compiler for warp, a systolic array consisting of 10 vliw processors. this compiler has been used for developing a large number of applications in the areas of image, signal and scientific processing. m. lam generalization from domain experience: the superior paradigm for software architecture research? richard n.
taylor building consensus for ada 9x erhard ploedereder linux apprentice filters: this article is about filtering, a very powerful facility available to every linux user, but one which migrants from other operating systems may find new and unusual paul dunne everything i need to know i learned from the chrysler payroll project richard garzaniti jim haungs chet hendrickson forum diane crawford at the forge: configuring, tuning, and debugging apache reuven writes about what tools and techniques webmasters can use when trying to configure, tune, and debug their apache configurations. reuven m. lerner an alternate design for fortran 8x j. fullerton a portable syntactic error recovery scheme for lr(1) parsers a 4-level language independent error recovery scheme for table driven lr(1) parsers is presented. the first two levels are intended for recovery with appropriate corrections and the next two for simple recovery without corrections. the objective is to do the recovery without affecting the semantics or data structure of the compiler while at the same time producing necessary diagnostics and terminating gracefully. the scheme is a significant improvement and a step forward in the direction of language independent error recovery and is made portable when combined with any of the existing lr(1) parser generators. it is currently implemented as an integral part of the yacc parser generator in an ongoing project at advanced micro devices. pyda srisuresh michael j. eager petshop: a tool for the formal specification of corba systems petshop is a case tool dedicated to the formal behavioral specification of corba systems. the tool uses high-level petri nets as its specification language, and integrates seamlessly in a corba distributed environment, allowing for direct interpretation of specification models. remi bastide technical opinion: reuse: been there, done that jeffrey s. poulin formal refinement patterns for goal-driven requirements elaboration requirements engineering is concerned with the identification of high-level goals to be achieved by the system envisioned, the refinement of such goals, the operationalization of goals into services and constraints, and the assignment of responsibilities for the resulting requirements to agents such as humans, devices and programs. goal refinement and operationalization is a complex process which is not well supported by current requirements engineering technology. ideally some form of formal support should be provided, but formal methods are difficult and costly to apply at this stage. this paper presents an approach to goal refinement and operationalization which is aimed at providing constructive formal support while hiding the underlying mathematics. the principle is to reuse generic refinement patterns from a library structured according to strengthening/weakening relationships among patterns. the patterns are proved correct and complete once and for all. they can be used for guiding the refinement process or for pointing out missing elements in a refinement. the cost inherent in the use of a formal method is thus reduced significantly.
tactics are proposed to the requirements engineer for grounding pattern selection on semantic criteria. the approach is discussed in the context of the multi-paradigm language used in the kaos method; this language has an external semantic net layer for capturing goals, constraints, agents, objects and actions together with their links, and an inner formal assertion layer that includes a real-time temporal logic for the specification of goals and constraints. some frequent refinement patterns are highlighted and illustrated through a variety of examples. the general principle is somewhat similar in spirit to the increasingly popular idea of design patterns, although it is grounded in a formal framework here. robert darimont axel van lamsweerde the scheme of things jonathan rees about conversations for concurrent oo languages a. romanovsky a monotonic superclass linearization for dylan kim barrett bob cassels paul haahr david a. moon keith playford p. tucker withington an editor for variables f. h. d. van batenburg bulk file i/o extensions to java dan bonachea climbing the smalltalk mountain mary beth rosson john m. carroll the survival of lisp: either we share, or it dies richard c. waters ada programming techniques, research, and experiences on a fast control loop system this paper discusses real time ada (60 hz control loop) programming techniques developed during research to support a conversion of software written in pl/m-86 to ada for a dod missile system employing fiber optic technology. two of the major requirements of the steelman document were that any dod common language must support embedded computer system development and must not impose undue execution or efficiency costs because of unused or unneeded generalities. this paper examines several constraints and requirements frequently encountered in distributed real time applications. such real world environmental factors provide challenges for any high order language attempting to meet these ideals. ada's ability to support these requirements is analyzed in light of an ongoing research effort to support the ada conversion of a time critical embedded application. ada programming techniques developed to achieve execution performance requirements of this system are presented and analyzed. the techniques presented include both those deemed successful at supporting performance requirements and those which failed to support such requirements. eric n. schacht designing computer systems with mems-based storage steven w. schlosser john linwood griffin david f. nagle gregory r. ganger sodos - a software documentation support environment: its use this paper describes a computerized environment, sodos (software documentation support), which supports the definition and manipulation of documents used in developing software. an object oriented environment is used as a basis for the sodos interface. sodos is built around a software life cycle (slc) model that structures all access to the documents stored in the environment. one advantage of this model is that it supports software documentation independent of any fixed methodology that the developers may be using. the main advantage of the system is that it permits traceability through each phase of the life cycle, thus facilitating the testing and maintenance phases. finally the effort involved in learning and using sodos is simplified due to a sophisticated "user-friendly" interface. ellis horowitz ronald williamson the /proc file system and procmeter andrew m.
bishop 2nd international workshop on living with inconsistency _in software engineering, there has long been a recognition that inconsistency is a fact of life. evolving descriptions of software artefacts are frequently inconsistent, and tolerating this inconsistency is important if flexible collaborative working is to be supported. this workshop will focus on reasoning in the presence of inconsistency, for a wide range of software engineering activities, such as building and exploring requirements models, validating specifications, verifying correctness of implementations, monitoring runtime behaviour, and analyzing development processes. a particular interest is on how existing automated approaches such as model checking, theorem proving, logic programming, and model-based reasoning can still be applied in the presence of inconsistency._ steve easterbrook marsha chechik scheme: the new generation john d. ramsdell efficient method dispatch in pcl gregor kiczales luis rodriguez benchmark semantics carl g. ponder applying text and graphic tools in parallel (abstract): exploring multimedia approaches for accelerating "object think" peter coad jill nicola take command: hfs utilities data on macintosh disks can be marjorie richardson fast, effective code generation in a just-in-time java compiler ali-reza adl-tabatabai michal cierniak guei-yuan lueh vishesh m. parikh james m. stichnoth two years of experience with a μ-kernel based os jochen liedtke ulrich bartling uwe beyer dietmar heinrichs rudolf ruland gyula szalay user interface design from a real time perspective using a data flow diagram (dfd) to represent the functional requirements of a system to be developed, an analysis of a real-time perspective is augmented to generate user interface specifications. by applying a set of heuristics, these specifications facilitate the design of three user interface styles: question/answer, menu/form, and command language. feng-yang kuo jahangir karimi pointer functionality sans data-type l. schonfelder s. morgan from region inference to von neumann machines via region representation inference lars birkedal mads tofte magnus vejlstrup an "open" oriented file system bill mahoney advantages of a component-based approach to defining complicated objects we describe the construction of a browser in auditon, a language that incorporates and extends the performer and stage paradigm first introduced in rehearsal world. performers provide the instance specialisation of both behavior and structure; stages manage performers and provide a local namespace. this approach improves the modularity and eases debugging of interconnected objects. andrew m. drinnan david m. morton derive: a tool that automatically reverse-engineers instruction encodings many binary tools, such as disassemblers, dynamic code generation systems, and executable code rewriters, need to understand how machine instructions are encoded. unfortunately, specifying such encodings is tedious and error-prone. users must typically specify thousands of details of instruction layout, such as opcode and field locations and values, legal operands, and jump offset encodings. we have built a tool called derive that extracts these details from existing software: the system assembler. users need only provide the assembly syntax for the instructions for which they want encodings. derive automatically reverse-engineers instruction encoding knowledge from the assembler by feeding it permutations of instructions and doing equation solving on the output.
derive is robust and general. it derives instruction encodings for sparc, mips, alpha, powerpc, arm, and x86. in the last case, it handles variable-sized instructions, large instructions, instruction encodings determined by operand size, and other cisc features. derive is also remarkably simple: it is a factor of 6 smaller than equivalent, more traditional systems. finally, its declarative specifications eliminate the mis-specification errors that plague previous approaches, such as illegal registers used as operands or incorrect field offsets and sizes. this paper discusses our current derive prototype, explains how it computes instruction encodings, and also discusses the more general implications of the ability to extract functionality from installed software. dawson r. engler wilson c. hsieh corrigenda thomas reps tim teitelbaum alan demers issues in multiparadigm viewpoint specification eerke boiten howard bowman john derrick maarten steen mid-line assignment adam kertesz book review: software engineering with java adrian p. o'riordan conceptual simplicity meets organizational complexity: case study of a corporate metrics program james d. herbsleb rebecca e. grinter a generic model for reflective design rapid technological change has had an impact on the nature of software. this has led to new exigencies and to demands for software engineering paradigms that pay particular attention to meeting them. we advocate that such demands can be met, at least in large parts, through the adoption of software engineering processes that are founded on a reflective stance. to this end, we turn our attention to the field of design rationale. we analyze and characterize design rationale approaches and show that despite surface differences between different approaches, they all tend to be variants of a relatively small set of static and dynamic affinities. we use the synthesis of static and dynamic affinities to develop a generic model for reflective design. the model is nonprescriptive and affects minimally the design process. it is context-independent and is intended to be used as a facilitator in participative design, supporting group communication and deliberation. the potential utility of the model is demonstrated through two examples, one from the world of business design and the other from programming language design. frequency interleaving as a codesign scheduling paradigm frequency interleaving is introduced as a means of conceptualizing and co-scheduling hardware and software behaviors so that software models with conceptually unbounded state and execution time are resolved with hardware resources. the novel mechanisms that result in frequency interleaving are a shared memory foundation for all system modeling (from gates to software-intensive subsystems) and de-coupled, but interrelated time- and state-interleaved scheduling domains. the result for system modeling is greater accommodation of software as a configuration paradigm that loads system resources, a greater accommodation of shared memory modeling, and a greater representation of software schedulers as a system architectural abstraction. the results for system co-simulation are a lessening of the dependence on discrete event simulation as a means of merging physical and non-physical models of computation, and a lessening of the need to partition a system as computation and communication too early in the design. we include an example demonstrating its implementation. joann m. paul simon n. peffers donald e.
thomas object-oriented programming stuart hirshfield raimund k. ege separating key management from file system security david mazières michael kaminsky m. frans kaashoek emmett witchel the alonzo functional programming language j. d. ramsdell report on the programming language haskell: a non-strict, purely functional language version 1.2 paul hudak simon peyton jones philip wadler brian boutel jon fairbairn joseph fasel maría m. guzmán kevin hammond john hughes thomas johnsson dick kieburtz rishiyur nikhil will partain john peterson book review: linux universe jan rooijackers a high performance multi-structured file system design keith muller joseph pasquale the enable construct for exception handling in fortran 90 corporate ifip working group 2.5 session 4b: programming environments i a. wasserman behavioral analysis of software architectures using ltsa jeff magee reliability mechanisms for adams the goal of checkpointing in database management systems is to save database states on a separate secure device so that the database can be recovered when errors and failures occur. this paper presents a non-interfering checkpointing mechanism being developed for adams. instead of waiting for a consistent state to occur, our checkpointing approach constructs a state that would result by completing the transactions that are in progress when the global checkpoint begins. the checkpointing algorithm is executed concurrently with transaction activity while constructing a transaction-consistent checkpoint on disk, without requiring the database to quiesce. this property of non-interference is highly desirable for real-time applications, where restricting transaction activity during the checkpointing operation is in many cases not feasible. two main properties of this checkpointing algorithm are global consistency and reduced interference, both of which are crucial for achieving high availability. s. h. son j. l. pfaltz asynchronism in ada 9x alan burns william eventoff distributed transactions for reliable systems alfred z. spector dean daniels daniel duchamp jeffrey l. eppinger randy pausch towards better software projects and contracts: commitment specifications in software development projects jyrki kontio olli pitkänen reijo sulonen some notes on program design a. j. van reeken kernel korner: block device drivers michael k. johnson sorting out signature schemes digital signature schemes are a fundamental tool for secure distributed systems. it is important to have a formal notion of what a secure digital signature scheme is, so that there is a clear interface between designers and users of such schemes. a definition that seemed final was given by goldwasser, micali, and rivest in 1988, and although most signature schemes used in practice cannot be proved secure with respect to it, they are all built so that they hopefully fulfil it, e.g., by the inclusion of hash functions or redundancy to counter active attacks. recently, however, several signature schemes with new security properties have been presented. most of them exist in several variants, and some of them pay for the new properties with restrictions in other respects, whose relation is not always clear. obviously, these new properties need definitions and some classification. unfortunately, however, none of the new schemes is covered by the definition mentioned above. hence the new properties cannot be defined as additions, but each new type of scheme needs a new definition from scratch, although there are similarities between the definitions.
this is unsatisfactory. this paper presents (an overview of) a general definition of digital signature schemes that covers all known schemes, and hopefully all that might be invented in future. additional properties of special types of schemes are then presented in an orthogonal way, so that existing schemes can be classified systematically. it turns out that signature schemes are best defined by a separation of service, structure, and degree of security, with a service specification in temporal logic. several parts of such a definition can easily be reused for general definitions of other classes of cryptologic schemes. relations to secure multi-party protocols and logics of authentication are discussed. birgit pfitzmann the v project manager tools shaun marsh specifying the structure of large, layered, object-oriented programs (abstract only) harold ossher putting oo distributed programming to work pascal felber rachid guerraoui mohamed e. fayad perspectives on program analysis flemming nielson letters corporate linux journal staff linear scan register allocation we describe a new algorithm for fast global register allocation called linear scan. this algorithm is not based on graph coloring, but allocates registers to variables in a single linear-time scan of the variables' live ranges. the linear scan algorithm is considerably faster than algorithms based on graph coloring, is simple to implement, and results in code that is almost as efficient as that obtained using more complex and time-consuming register allocators based on graph coloring. the algorithm is of interest in applications where compile time is a concern, such as dynamic compilation systems, "just-in-time" compilers, and interactive development environments. massimiliano poletto vivek sarkar
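as a rough illustration of the linear scan idea summarized above, the short python sketch below walks the sorted live intervals once while keeping an active list ordered by end point; the interval representation, the spill heuristic, and all names here are simplifying assumptions of this sketch, not the poletto/sarkar implementation.

import bisect

def linear_scan(intervals, nregs):
    # intervals: dict var -> (start, end) live interval; nregs: number of registers.
    # returns a dict var -> register index or "spill".
    alloc = {}
    active = []                                  # (end, var) pairs kept sorted by end point
    free = list(range(nregs))
    for var, (start, end) in sorted(intervals.items(), key=lambda kv: kv[1][0]):
        while active and active[0][0] < start:   # expire intervals that have ended
            _, old = active.pop(0)
            free.append(alloc[old])
        if free:
            alloc[var] = free.pop()
            bisect.insort(active, (end, var))
        else:                                    # no register free: spill the longest-living interval
            last_end, last_var = active[-1]
            if last_end > end:
                alloc[var] = alloc[last_var]     # steal the register of the furthest-ending interval
                alloc[last_var] = "spill"
                active.pop()
                bisect.insort(active, (end, var))
            else:
                alloc[var] = "spill"
    return alloc

print(linear_scan({"a": (1, 10), "b": (2, 4), "c": (3, 12), "d": (5, 6)}, 2))

on this toy input two variables end up sharing a register because their intervals do not overlap, and the interval that lives longest once the registers run out is the one left in memory, which is the single-pass, non-graph-coloring behavior the abstract describes.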
an object-oriented approach to software process modeling and definition this paper describes an approach to software process modeling that uses dragoon, an object-oriented programming language with ada-like syntax, to capture software process models. these models share the advantages of object-oriented software. they can be easily modified or extended. they allow the modeled process to be viewed at appropriate levels of abstraction. the use of a compilable programming language allows automated consistency checking and can help support automated enactment. dragoon is a particularly appropriate language for this task. it supports both full object-oriented programming and concurrency. john d. riley ease; the model and its implementation steven ericsson zenith linux means business: remote data gathering with linux grant edwards static header as sentinel massimo ancona walter cazzola a task- and data-parallel programming language based on shared objects many programming languages support either task parallelism or data parallelism, but few languages provide a uniform framework for writing applications that need both types of parallelism. we present a programming language and system that integrates task and data parallelism using shared objects. shared objects may be stored on one processor or may be replicated. objects may also be partitioned and distributed on several processors. task parallelism is achieved by forking processes remotely and having them communicate and synchronize through objects. data parallelism is achieved by executing operations on partitioned objects in parallel. writing task- and data-parallel applications with shared objects has several advantages. programmers use the objects as if they were stored in a memory common to all processors. on distributed-memory machines, if objects are remote, replicated, or partitioned, the system takes care of many low-level details such as data transfers and consistency semantics. in this article, we show how to write task- and data-parallel programs with our shared object model. we also describe a portable implementation of the model. to assess the performance of the system, we wrote several applications that use task and data parallelism and executed them on a collection of pentium pros connected by myrinet. the performance of these applications is also discussed in this article. saniya ben hassen henri e. bal ceriel j. h. jacobs static analysis of ada programs daniel h. ehrenfried an operational requirement description model for open systems requirement engineering has been successfully applied to many superficial problems, but there has been little evidence of transfer to complex system construction. in this paper we present a new conceptual model for incomplete requirement descriptions. the model is especially designed for supporting the requirement specification and the analysis of open systems. an analysis of existing models and languages shows the main problem in requirement engineering: the harmony between a well-defined basic model and a convenient language. the new model remos combines the complex requirements of open systems with the basic characteristics of transaction-oriented systems. transaction-oriented systems are fault-tolerant and offer security and privacy mechanisms. such systems provide such excellent properties - why don't we already profit from them during requirement specification? the model remos and the applicative language relos take advantage of transaction properties. the idea remos is based on is the definition of scenarios and communicating subsystems. remos guides the users in making their requirements clearer, and relos offers a medium for requirement definition. m. hallman comparison of the functional power of apl2 and fortran 90 apl and fortran, although very different, share the challenge of remaining "competitive" in the light of an onslaught of "modern" computer languages. to meet this challenge, both have attempted to enhance their position by adding significant new features to their language. for example, apl2 is an extension of apl. fortran has also attempted to meet the challenges of modern programming by developing a new fortran standard called fortran 90. this standard revises many areas of fortran, but this paper will only concentrate on those that affect the computational power of fortran. many of the changes were motivated by the increased use of vector and array "supercomputers." therefore, array features, the ability to act on entire arrays instead of individual elements, are an important part of this new standard. in doing this work, the fortran community looked to apl as an example of a powerful array language. this paper will answer several questions regarding this new standard. first, from a computational or functional point of view, what are the major features of fortran 90? next, how do these features compare with apl2? and finally, what can apl2 learn from the fortran 90 work? robert g. willhoft multi-process structuring of user interface software k a lantz design and evaluation of a linear algebra package for java g. almasi f. g. gustavson j. e.
moreira forthought chuck moore graphically-oriented reverse engineering tools for ada software (abstract) computer professionals have long promoted the idea that graphical representations of software are extremely useful as comprehension aids when used to supplement textual descriptions and specifications of software, especially for large complex systems. the general goal of this research is the study and formulation of graphical representations of algorithms, structures, and processes for ada (grasp/ada). the research is presently focused on the extraction and generation of graphical representations from ada source code to directly support the process of reverse engineering. our primary motivation for reverse engineering is increased support for software reusability and software maintenance. while applications written in ada may seem somewhat young to benefit from reverse engineering, nasa and others are quickly amassing libraries of ada packages. both reuse and maintenance should be greatly facilitated by automatically generating a set of "formalized diagrams" to supplement the source code and other forms of existing documentation. the first tool implemented was the control structure diagram (csd) generator. the csd is representative of a new group of graphical representations for algorithms that can co-exist with pdl or source code. the csd generator has the potential to increase the comprehensibility of ada pdl and/or source code, which may have wide-ranging implications for the design, implementation, and maintenance of software written in ada. in particular, the tool can provide graphical representations which are more easily understood than their textual equivalents. graphical aids which can increase the efficiency of understanding will ultimately reduce the overall cost of software. james h. cross change analysis in an architectural model: a design rationale based approach prasanta bose a procedure for evaluating human-computer interface development tools d. hix dependency analysis of ada programs janusz laski william stanley jim hurst hooking into object-oriented application frameworks gary froehlich h. james hoover ling liu paul sorenson the ramp-up problem in software projects: a case study of how software immigrants naturalize susan elliott sim richard c. holt implementation of a parallel subsumption algorithm (abstract only) many current automated theorem provers use a refutation procedure based on some version of the principle of resolution. these methods normally lead to the generation of large numbers of new clauses. subsumption is a process that eliminates the superfluous clauses from the clause space, thus speeding up the proof. the research presented here is concerned with the design and implementation of a subsumption algorithm which exploits the parallelism provided by a multiprocessor. all coding is being done in the programming language c, for portability. monitors [1] are used as the synchronization mechanism. correct performance in both a multiprocessor and uniprocessor mode has been stressed. the parallel tests are run on a denelcor hep located at argonne national laboratory. ralph m. butler arlan r. dekock acm csc '92 case panel session as case continues to broaden its scope from software engineering to systems engineering, more data designer aids will be needed that spread further "upstream": to the critical problem definition and project-planning support areas.
case should help provide a complete systems developer's environment, for the full sdlc; it then would include data re-engineering and enable reuse of existing data files. frederick springsteel distributed computing column cynthia dwork best of technical support corporate linux journal staff application-based search and retrieval a software science analysis of programming size programming development at ibm's santa teresa laboratory has been investigating "the elements of software science" as defined by maurice h. halstead [1,2]. this report summarizes the findings after several large ibm products have been counted and analyzed. program vocabulary, length, volume and lines of code are discussed and compared. it is shown that the size of a program can be estimated with reasonable accuracy once the vocabulary is known. within the programming community the traditional measures of quality and productivity have been based upon the number of lines of source statements which have been coded for a given product. estimating the costs, time duration, number of programmers required, and quality levels also hinges upon "lines-of-code" predictions for the product. maurice h. halstead [1,2] has suggested a comprehensive set of software metrics which we are using to examine the characteristics of existing program products. these software metrics do not hinge upon lines-of-code counts but rather upon the counting of operators and operands. this report explains how the "elements of software science" as defined by professor halstead relate to ibm data from large programming products. the software science formulas are neither derived nor altered in this paper but are treated as theorems which we have set out to prove or to disprove by measuring santa teresa laboratory program products. we were motivated to look for measures other than lines-of-code because of our historic inability to estimate the size of a project with any consistent degree of accuracy. in performing this study, we have counted and analyzed large volumes of existing program product code. this report focuses upon the software science size metrics (length and volume) in comparison to the traditional lines-of-code measure. charles p. smith a conceptual model of software maintenance mira kajko-mattsson chairman's corner: documentationalizationism diana patterson remarks on a methodology for implementing highly concurrent data joseph p. skudlarek software counting rules: will history repeat itself? counting rules in the software metrics field have been developed for counting such software measurables as the occurrence of operators, operands and the number of lines of code. a variety of software metrics, such as those developed by halstead and others, are computed from these numbers. published material in the software metrics field has concentrated on relationships between various metrics, comparisons of values obtained for different languages, etc. yet little, if anything, has been published on assumptions, experimental designs, or the nature of the counting tools (or programs) themselves used to obtain the basic measurements from which these metrics are calculated. mitchell g. spiegel chime: a metadata-based distributed software development environment we introduce chime, the columbia hypermedia immersion environment, a metadata-based information environment, and describe its potential applications for internet and intranet-based distributed software development.
chime derives many of its concepts from multi-user domains (muds), placing users in a semi-automatically generated 3d virtual world representing the software system. users interact with project artifacts by "walking around" the virtual world, where they potentially encounter and collaborate with other users' avatars. chime aims to support large software development projects, in which team members are often geographically and temporally dispersed, through novel use of virtual environment technology. we describe the mechanisms through which chime worlds are populated with project artifacts, as well as our initial experiments with chime and our future goals for the system. stephen e. dossick gail e. kaiser object-orientation in operating systems (workshop session) vince russo marc shapiro an integrated prolog programming environment u. schreiweis a. keune h. langendörfer a model and compilation strategy for out-of-core data parallel programs it is widely acknowledged in high-performance computing circles that parallel input/output needs substantial improvement in order to make scalable computers truly usable. we present a data storage model that allows processors independent access to their own data and a corresponding compilation strategy that integrates data-parallel computation with data distribution for out-of-core problems. our results compare several communication methods and i/o optimizations using two out-of-core problems, jacobi iteration and lu factorization. rajesh bordawekar alok choudhary ken kennedy charles koelbel michael paleczny multilevel atomicity - a new correctness criterion for database concurrency control multilevel atomicity, a new correctness criterion for database concurrency control, is defined. it weakens the usual notion of serializability by permitting controlled interleaving among transactions. it appears to be especially suitable for applications in which the set of transactions has a natural hierarchical structure based on the hierarchical structure of an organization. a characterization for multilevel atomicity, in terms of the absence of cycles in a dependency relation among transaction steps, is given. some remarks are made concerning implementation. nancy a. lynch a proposal for improving optimizer quality via dynamic analysis keith h. bierman using testability measures for dependability assessment antonia bertolino lorenzo strigini 'duplicates': a convention for defining configurations in pcte-based environments ian simmonds a novel program representation for interprocedural analysis gagan agrawal shyamala murthy chandrashekhar garud assembler utility functions for apl2/pc some general guidelines are discussed on why and how to develop utility functions in assembler for the apl2/pc environment. assembler utility functions may replace some of the most heavily used idiomatic apl auxiliary functions and boost the efficiency of apl applications. about twenty such functions are presented, some of them perhaps useful as temporary extensions of the apl2/pc interpreter. the collection of assembler utility functions is available in the public domain, and should be extended by the joint efforts of the international apl community. tauno ylinen an ada® package for automatic footnote generation in unix preet j. nedginn trebor l. bworn replica management in real-time ada 95 applications in this paper, we present some of the fault tolerance management mechanisms being implemented in the multi-μ architecture, namely its support for replica non-determinism.
in this architecture, fault tolerance is achieved by node active replication, with software based replica management and fault tolerance transparent algorithms. a software layer implemented between the application and the real-time kernel, the fault tolerance manager (ftmanager), is responsible for the transparent incorporation of the fault tolerance mechanisms. the active replication model can be implemented either imposing replica determinism or keeping replica consistency at critical points, by means of interactive agreement mechanisms. one of the multi-μ architecture goals is to identify such critical points, relieving the underlying system from performing the interactive agreement at every ada dispatching point. luís miguel pinho francisco vasques at the forge: embperl and databases reuven m. lerner upcoming events corporate linux journal staff analysis of ada for a crucial distributed application john c. knight marc e. rouleau open systems world/fedunix 1995 corporate linux journal staff what does modula-2 need to fully support object oriented programming? joseph bergin stuart greenfield an intelligent dynamic load balancer for workstation clusters a key issue of dynamic load balancing in a loosely coupled distributed system is selecting appropriate jobs to transfer. in this paper, a job selection policy based on on-line prediction of job behaviors is proposed. tracing is used at the beginning of execution of a job to predict the approximate execution time and resource requirements of the job to make a correct decision about whether transferring the job is worthwhile. a dynamic load balancer using the job selection policy has been implemented. experimental measurement results show that it is able to improve mean response time of jobs and resource utilization of systems substantially compared with one without a job selection policy. jiubin ju gaochao xu kun yang back to the future larry l. constantine concurrent programming in smalltalk-80 r. steigerwald m. nelson object technology, architectures and domain analysis sholom cohen a dynamic storage allocation technique based on memory residence time leland l. beck an evaluation of a compiler optimization for improving the performance of a coherence directory both hardware-controlled and compiler-directed mechanisms have been proposed for maintaining cache coherence in large-scale shared-memory multiprocessors, but both of these approaches have significant limitations. we examine the potential performance improvement of a new software-hardware controlled cache coherence mechanism. this approach augments the run-time information available to a directory-based coherence mechanism with compile-time analysis that statically identifies write references that cannot cause coherence problems and writes that should be written through to memory. these references are marked as not needing to send invalidation messages, thereby reducing the network traffic produced by the directory while maintaining cache consistency. for those memory references that are ambiguous, due to conditional branches, or due to the need for complex data flow analysis, for instance, the compiler conservatively marks the references and relies on the hardware directory to ensure coherence. trace-driven simulations are used to emulate the compile-time analysis on memory traces and to estimate potential performance improvement that could be expected from a compiler performing this optimization on the perfect club benchmark programs.
by reducing the number of invalidations, this optimized directory scheme is capable of reducing the processor-memory network traffic by up to 54 percent compared to an unoptimized directory mechanism. in addition, the overall miss ratio can be reduced up to 42 percent due to a corresponding reduction in the number of write misses. farnaz mounes-toussi david j. lilja zhiyuan li the fujaba environment however, a single collaboration diagram is usually not expressive enough to model complex operations performing several modifications at different parts of the overall object structure. such a series of modifications needs several collaboration diagrams to be modeled. in addition, there may be different situations where certain collaboration diagrams should be executed and others not. thus, we need additional control structures to control the execution of collaboration diagrams. in our approach we combine collaboration diagrams with statecharts and activity diagrams for this purpose. this means, instead of just pseudo code, any state or activity may contain a collaboration diagram modeling the do-action of this step. figure 1 illustrates the main concepts of fujaba. fujaba uses a combination of statecharts and collaboration diagrams to model the behavior of active classes. a combination of activity diagrams and collaboration diagrams models the bodies of complex methods. this integration of class diagrams and uml behavior diagrams enables fujaba to perform a lot of static analysis work facilitating the creation of a consistent overall specification. in addition, it turns these uml diagrams into a powerful visual programming language and allows the generation of complete application code. during testing and maintenance the code of an application may be changed on the fly, e.g. to fix small problems. some application parts like the graphical user interface or complex mathematical computations may be developed with other tools. in cooperative (distributed) software development projects some developers may want to use fujaba, others may not. code of different developers may be merged by a version management tool. there might already exist a large application and one wants to use fujaba only for new parts. one may want to do a global search-and-replace to change some text phrases. one may temporarily violate syntactic code structures while she or he restructures some code. for all these reasons, fujaba aims to provide not just code generation but also the recovery of uml diagrams from java code. one may analyse (parts of) the application code, recover the corresponding uml diagram (parts), modify these diagram (parts), and generate new code (into the remaining application code). so far, this works reasonably well for class diagrams and to some extent for the combination of activity and collaboration diagrams. for statecharts this is under development. the next chapters outline the (forward engineering) capabilities of fujaba with the help of an example session. ulrich nickel jörg niere albert zundorf exploiting domain architectures in software reuse this paper provides motivation towards the use of domain specific repositories and dssa's. it shows many of the positive side-effects this usage brings about. an extension to the faceted approach to components classification [prieto-diaz and freeman 1987] is introduced. our extension suggests a natural way of further benefiting from the use of domain specific repositories.
cristina gacek toward tools to support the gries/dijkstra design process robert b. terwilliger adding real-time capabilities to java kelvin nilsen book review: running linux grant johnson an implementation supporting distributed execution of partitioned ada programs r. jha g. eisenhauer j. m. kamrad d. cornhill extending device management in minix minix is a unix clone operating system, designed by tanenbaum ([2],[3]) to allow beginners to do practical training in the operating systems area. in this context the present paper describes the work done by a group of undergraduates implementing extensions in device management. problems in the original code, detected during the analysis and development stages, are also reported. c. kavka m. printista r. gallard debugging agent interactions: a case study david flater selective specialization for object-oriented languages dynamic dispatching is a major source of run-time overhead in object-oriented languages, due both to the direct cost of method lookup and to the indirect effect of preventing other optimizations. to reduce this overhead, optimizing compilers for object-oriented languages analyze the classes of objects stored in program variables, with the goal of bounding the possible classes of message receivers enough so that the compiler can uniquely determine the target of a message send at compile time and replace the message send with a direct procedure call. specialization is one important technique for improving the precision of this static class information: by compiling multiple versions of a method, each applicable to a subset of the possible argument classes of the method, more precise static information about the classes of the method's arguments is obtained. previous specialization strategies have not been selective about where this technique is applied, and therefore tended to significantly increase compile time and code space usage, particularly for large applications. in this paper, we present a more general framework for specialization in object-oriented languages and describe a goal directed specialization algorithm that makes selective decisions to apply specialization to those cases where it provides the highest benefit. our results show that our algorithm improves the performance of a group of sizeable programs by 65% to 275% while increasing compiled code space requirements by only 4% to 10%. moreover, when compared to the previous state-of-the-art specialization scheme, our algorithm improves performance by 11% to 67% while simultaneously reducing code space requirements by 65% to 73%. jeffrey dean craig chambers david grove slicing concurrent programs slicing is a well-known program analysis technique for analyzing sequential programs and is found useful in debugging, testing and reverse engineering. this paper extends the notion of slicing to concurrent programs with shared memory, interleaving semantics and mutual exclusion. interference among concurrent threads or processes complicates the computation of slices of concurrent programs. further, unlike slicing of sequential programs, a slicing algorithm for concurrent programs needs to differentiate between loop-independent data dependence and certain loop-carried data dependences. we show why previous methods do not give precise solutions in the presence of nested threads and loops and describe our solution that correctly and efficiently computes precise slices. though the complexity of this algorithm is exponential in the number of threads, a number of optimizations are suggested. using these optimizations, we are able to get near linear behavior for many practical concurrent programs. mangala gowri nanda s. ramesh
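to recall what a backward slice is in the simpler sequential setting that the work above generalizes, here is a tiny python sketch that closes a slicing criterion over an explicit dependence graph; the graph encoding and the toy statements are assumptions made purely for illustration, and none of the interference or loop-carried machinery of the paper appears here.

def backward_slice(deps, criterion):
    # deps: dict stmt_id -> set of stmt_ids it is data- or control-dependent on.
    # returns every statement the criterion transitively depends on.
    seen, work = set(), [criterion]
    while work:
        s = work.pop()
        if s not in seen:
            seen.add(s)
            work.extend(deps.get(s, ()))
    return seen

# toy program:  1: x = input()   2: y = x + 1   3: if y > 0:   4: z = y * 2   5: print(z)
deps = {2: {1}, 3: {2}, 4: {2, 3}, 5: {4}}
print(sorted(backward_slice(deps, 5)))        # -> [1, 2, 3, 4, 5]

for concurrent programs the hard part, as the abstract notes, is building a dependence relation that accounts for interference between threads and for the loop-carried/loop-independent distinction; the closure step itself stays this simple.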
ada's abort statement: license to kill j. goldenberg g. levine modeling the software process: what we really need are process model generators (panel session) barry boehm integrated projection illustrating tore smestad kristian andersen design principles behind chiron: a uims for software environments user interface facilities are a crucial part of the infrastructure of a software environment. we discuss the particular demands and constraints on a user interface management system (uims) for a software environment, and the relation between the architecture of the environment and the uims. a model for designing user interface management systems for large, extensible environments is presented. this model synthesizes several recent advances in user interfaces and specializes them to the domain of software environments. the model can be applied to a wide variety of environment contexts. a prototype implementation is described. m. young r. n. taylor d. b. troup c. d. kelly computer security - an end state? steven m. bellovin take command kill: the command to end all commands: need to get rid of a job that's gotten into a loop and refuses to end? here's a command that will take care of the problem dean provins a fortran 77 interpreter for mutation analysis a. j. offutt k. n. king a practical example of multiple inheritance in c++ r. s. wiener l. j. pinson high-level language facilities for low-level services ez is a language-based programming environment that offers the services provided separately by programming languages and operating systems in traditional environments. these services are provided as facilities of a high-level string processing language with a 'persistent' memory in which values exist indefinitely or until changed. in ez, strings and associative tables provide traditional file and directory services. this paper concentrates on the use of ez procedures and their activations, which, like other values, have indefinite lifetimes. in ez, the low-level aspects of procedure execution, such as activation record creation, references to local variables, and access to state information, are accessible via high-level language constructs. as a result, traditionally distinct services can be provided by a single service in the ez environment. furthermore, such services can be written in ez itself. an editor/debugger that illustrates the details of this approach is described. christopher w. fraser david r. hanson the role of failure in successful design david b. benson uniprep - preparing a c/c++ compiler for unicode martin j. durst software metrics and measurement principles john m. roche towards a better visual programming language: critiquing prograph's control structures r. mark meyer tim masterson an object-based infrastructure for program monitoring and steering greg eisenhauer karsten schwan functional programming is not self-modifying code r. p. mody building a foundation for the future of software engineering peter a. freeman marie-claude gaudel coordination languages and their significance david gelernter nicholas carriero scheduling problems for parallel and distributed systems a two-pass scheduling algorithm for parallel and distributed computer systems is presented in this paper. we consider this algorithm as comprising two stages: process queue formation and an assignment procedure. a new approach to realizing both stages is proposed. our algorithm can be used to increase the efficiency of static and dynamic scheduling. olga rusanova alexandr korochkin
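the abstract above gives little detail beyond its two stages, so the python sketch below shows one conventional way such a two-stage list scheduler can look: critical-path priorities form the queue, and each ready task then goes to the processor that finishes it earliest; this is a generic illustration under stated assumptions, not the algorithm of rusanova and korochkin.

def schedule(tasks, deps, cost, nprocs):
    # tasks: iterable of task ids; deps: dict task -> set of predecessors;
    # cost: dict task -> duration; returns a list of (task, processor, start time).
    succ = {t: [] for t in tasks}
    for t in tasks:
        for p in deps.get(t, ()):
            succ[p].append(t)
    prio = {}
    def level(t):                      # critical-path length from t to an exit task
        if t not in prio:
            prio[t] = cost[t] + max((level(s) for s in succ[t]), default=0)
        return prio[t]
    queue = sorted(tasks, key=level, reverse=True)           # stage 1: queue formation
    proc_free = [0.0] * nprocs
    finish, placement = {}, []
    while len(finish) < len(queue):
        for t in queue:                                      # stage 2: assignment
            if t in finish or any(p not in finish for p in deps.get(t, ())):
                continue                                     # skip scheduled or not-yet-ready tasks
            ready = max((finish[p] for p in deps.get(t, ())), default=0.0)
            proc = min(range(nprocs), key=lambda i: max(proc_free[i], ready))
            start = max(proc_free[proc], ready)
            finish[t] = start + cost[t]
            proc_free[proc] = finish[t]
            placement.append((t, proc, start))
            break
    return placement

print(schedule("abcd", {"b": {"a"}, "c": {"a"}, "d": {"b", "c"}},
               {"a": 2, "b": 3, "c": 1, "d": 2}, 2))

on the toy graph the fork tasks b and c run on different processors after a, and the join task d starts only when both are finished, which is the behavior a static list scheduler of this kind is expected to produce.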
linux as a case study: its extracted software architecture ivan t. bowman richard c. holt neil v. brewster letters to the editor corporate linux journal staff automated testing from object models robert m. poston locally least-cost error recovery in earley's algorithm s. o. anderson r. c. backhouse on random and partition testing simeon ntafos porting dos applications to linux: lots of practical tips for porting your dos applications alan cox a matlab to fortran 90 translator and its effectiveness luiz de rose david padua benefits of a data flow-aware programming environment many programmers write their programs with a primitive text editor that has no knowledge about the edited text. on the other hand, they use ingenious compilers that collect control flow and data flow information to perform optimizations and generate optimized code. we argue that program editors should have the same knowledge about the control flow and data flow of a program. such editors could help programmers to better understand programs and to be more productive. we propose a data flow-aware programming environment that makes the information that the compiler has already computed visible. the bidirectional feedback from the compiler to the programmer and back from the programmer to the compiler enables productive programming and faster debugging. christoph steindl compiler technology evaluation the ada compiler evaluation capability (acec) is a u.s. government sponsored evaluation suite that was first released in the fall of 1988. it consists of over 1000 runtime performance tests and is under continuing development to address additional evaluation concerns. currently, the government is developing procedures and guidelines to make the collection and reporting of evaluation data a more systematic and formal process. the compiler technology evaluation panel will address the current status of evaluation technology, and the prospects for improving both compiler technology and the compiler selection process through formal evaluation procedures. the panel will be composed of four panelists and an "acec procedures/guidelines" presenter. the four panelists will consist of two ada compiler developers, a defense contractor, and a government program representative selected for their knowledge, experience, and different perspectives of ada compiler evaluation. the presenter will be an ajpo representative who will be providing the current status on the intended use and procedures of the acec. this presenter will also serve as a panelist during the question and answer period. the panel is meant to promote discussion and provide insight into embedded systems ada compiler quality/usability especially as it relates to current and proposed evaluation tests and procedures. to facilitate this discussion, in addition to the acec presentation, each of the four panelists has been requested to respond to the following three questions: what is the current overall quality of embedded system ada compilers targeted for "bare" processors?
answers should consider the following: code generation - correctness, time and space efficiency; runtime system - correctness, time and space efficiency, configurability, and interfacing capability; implementation dependent features for efficiency and their impact on portability; compiler support for good symbolic information; ability to interface with other software integration tools; robustness (freedom from errors) and user interface. what is the current overall quality of the acec for evaluating ada compilers for "bare" processors? answers should consider the following: coverage of major evaluation areas; size - number of tests; ease of use and degree of automation; analysis tools and the ability to synthesize results; soundness of the methodology. remembering that language conformity is a critical quality parameter, what has been the impact of the acvc (test and procedures) on ada compiler quality? what is and/or what will be the impact of the acec (tests and procedures) on ada compiler quality? d. smith n. weiderman best of technical support corporate linux journal staff working in the garden system (abstract only) steven reiss an approach to modeling and measuring the design complexity of abstract data types joseph y. kuo derek s. morris the ic* system for debugging parallel programs via interactive monitoring and control e. j. cameron d. cohen introspection: a register transfer level technique for concurrent error detection and diagnosis in data dominated designs we report a register transfer level technique for concurrent error detection and diagnosis in data dominated designs called _introspection_. introspection uses idle computation cycles in the data path and idle data transfer cycles in the interconnection network in a synergistic fashion for concurrent error detection and diagnosis (cedd). the resulting on-chip fault latencies are one ten-thousandth (10^-4) of previously reported system level concurrent error detection and diagnosis latencies. the associated area overhead and performance penalty are negligible. we derive a cost function that considers introspection constraints such as (i) executing an operation on three disjoint function units for diagnosis and (ii) promoting function units to participate in at least one cedd operation. we formulate integration of introspection constraints into the operation-to-operator binding phase of high-level synthesis as a weighted bipartite matching problem. the effectiveness of introspection and its implementation are illustrated on numerous industrial strength benchmarks. ramesh karri balakrishnan iyer demise of the metacompiler in cmforth j. melvin what price reusability?: a case study john favaro opm: an object process modeling environment this document is an attempt to give an overview of opm, a process modeling environment. by an environment we mean a user interface through which process model templates are designed, instantiated, and tracked as they execute. the two novel features of opm that we want to stress here are: (i) the way opm uses conventional case tool graphical notations for describing process templates, and (ii) the way opm supports model instantiation, resource management, and cooperative work among a team of developers. we begin with some basic definitions. a process is a collection of activities that will be carried out over time. an activity is a basic unit of work and it minimally contains a name and some attributes such as a duration or some resources.
often the activities are ordered in some way, but not necessarily. if an ordering is available, then we can talk about an activity's predecessors and successors. when a process is being carried out (instantiated), typically more than one activity is going on at the same time. the result of an activity may be a product or simply a confirmation that it has concluded. an activity can invoke other processes. in fact, there is no logical reason to distinguish between an activity and a process, so in the future we will use these terms interchangeably. there are several basic features of processes that we assume. in particular: there are several ways a process may be started, e.g. by human intervention, by the arrival of a message, or by the completion of some product; an action by a computer may end a process and start a new one; a process may require access to resources before it can begin; sometimes the time for a process can be reliably estimated and sometimes it cannot; processes must have the ability to look at (access) directories, files, test file attributes, express conditions, and invoke programs. figure 1 contains a description of how debugging takes place within some hypothetical organization. a database of bug reports is collected and there is an available pool of programmers to work on them. each instantiation of debugging process will assign a bug report to a programmer for fixing. when the programmer is done, he submits his fix to the qa (quality assurance) group who confirm that his fix is correct. if not, it is returned for further work. if so, then a report is written and submitted to the manager. from this example we can draw conclusions that extend our earlier notions about processes and their descriptions. the use of dataflow diagram notation is suitable for describing this process. rectangles are used to represent resources and ovals are used to represent processes. arrows indicate process flow and they may carry data. other examples we have done indicate that a wide variety of processes can be adequately described by this natural notation. to refine further the debugging example, we assume that when the programmer submits the corrected code to the quality assurance group, a new process is started. that process is shown in figure 2. from this elaboration we conclude that processes may have lower levels which contain process descriptions. therefore we see that a process description should be thought of as a hierarchical object. another observation from this example is that there may be several people all following this process description at the same time. therefore we see the need to view the process description as a template and talk about its instantiations. when the template of figure 1 is instantiated we interpret this to mean that a new bug has been assigned to a programmer for repair. observe that when a single programmer is selected to fix a single bug, that instantiation of the debugging process may give rise to multiple instantiations of the working on bug subprocess. therefore we see that when a process is executing, subprocess templates may be instantiated multiple times. yasuhiro sugiyama ellis horowitz overcoming the nah syndrome for inspection deployment pankaj jalote m. haragopal commercial realtime software needs different configuation management arguments are presented as to why integrated, monolithic configuration management is not well suited to commercial realtime systems. 
an alternative approach to configuration management that over several years we have found to be effective and widely usable is described. this approach, database and selectors cel (dasc), separates treatment of versions that exist simultaneously from the evolution of those versions over time. versions that exist simultaneously are represented by selectors from a common database. evolution is represented by layers, as in the film animator's cel. w. m. gentleman a. mackay d. a. stewart controlling deadlock in ada gertrude levine the run-time structure of uims-supported applications j r dance t e granor r d hill s e hudson j meads b a myers a schulert foundation: a model for concurrent project development karen hope peter symon convolutional bound hierarchies the time required to find the exact solution of a product-form queueing network model of a computer system can be high. faster and cheaper methods of solution, such as approximations, are natural alternatives. however, the errors incurred when using an approximation technique should be bounded. several recent techniques have been developed which provide solution bounds. these bounding techniques have the added benefit that the bounds can be made tighter if extra computational effort is expended. thus, a smooth tradeoff of cost and accuracy is available. these techniques are based upon mean value analysis. in this paper a new bounding technique based upon the convolution algorithm is presented. it provides a continuous range of cost versus accuracy tradeoffs for both upper and lower bounds. the bounds produced by the technique converge to the exact solution as the computational effort approaches that of convolution. also, the technique may be used to improve any existing set of bounds. lindsey e. stephens lawrence w. dowdy ada conformity assessments: a model for other programming languages? this paper presents the current status of ada conformity assessments after the transition of ada conformity assessments from the ada joint program office to iso. the principles of ada conformity assessments according to the iso/iec final committee draft 18009 are summarized and the commonalities and differences from the previous practices are discussed. in the main part of the work conformity assessments for ada, c, c++, and java are compared. it is shown that the process as practiced with ada is unique compared to other programming languages. this can be understood by looking at the special culture of the language ada and its validation system. both were sponsored by a single party (the us dod) and not by the it industry. the paper concludes with an assessment and outlook on the future development of compiler conformity assessments in general. michael tonndorf implementation of a software quality improvement program nelly maneva nyi award: visual programming languages margaret m. burnett from the editor: connectivity marjorie richardson a brief introduction to smalltalk tim budd the last word stan kelly-bootle proposal for adding discriminants for ada task types donald r. clarson the gnome project: what is gnome and where is it heading? miguel tells us all miguel de icaza 1/k phase stamping for continuous shared data (extended abstract) interactive distributed applications are a relatively new class of applications that are enabled by sharing continuously evolving data across distributed sites (and users).
the characteristics of application data include very fine-grained updates that can atomically access a subset of the shared data, masking of update effects, and irregular locality and contention for access. existing programming approaches are not appropriate for programming such continuous shared data in a wide-area environment. we define an object-set abstraction, where a set is replicated at sites interested in the objects, and is modified using add, delete and update operations. the key features of this abstraction are issue-time access information for update operations, and the potential for replicating the computation associated with updates. ordering of operations is an important problem, and we present a fast, scalable ordering algorithm, 1/k phase stamping, that uses distributed tokens that correspond to objects in the set. this algorithm provides substantially better performance than alternative algorithms, is deadlock and abort free, and requires no queuing at the tokens. therefore, tokens can be located inside a programmable 'active' network. the precise stamps generated with 1/k phase stamping enable a dynamic communication optimization technique, effect and stamp merging, which can reduce communication and allow utilization of best-effort message delivery. these algorithms have been implemented in the raga system which can tolerate crash failures. sumeer bhola mustaque ahamad stage scheduling: a technique to reduce the register requirements of a modulo schedule alexandre e. eichenberger edward s. davidson plugging the holes in the sieve of eratosthenes k. dritc an overview of c++ (abstract only) bjarne stroustrup two models of object-oriented programming and the common lisp object system norman young implementation of resilient, atomic data types a major issue in many applications is how to preserve the consistency of data in the presence of concurrency and hardware failures. we suggest addressing this problem by implementing applications in terms of abstract data types with two properties: their objects are atomic (they provide serializability and recoverability for activities using them) and resilient (they survive hardware failures with acceptably high probability). we define what it means for abstract data types to be atomic and resilient. we also discuss issues that arise in implementing such types, and describe a particular linguistic mechanism provided in the argus programming language. william weihl barbara liskov on-the-fly detection of access anomalies access anomalies are a common class of bugs in shared-memory parallel programs. an access anomaly occurs when two concurrent execution threads both write (or one thread reads and the other writes) the same shared memory location without coordination. approaches to the detection of access anomalies include static analysis, post-mortem trace analysis, and on-the-fly monitoring. a general on-the-fly algorithm for access anomaly detection is presented, which can be applied to programs with both nested fork-join and synchronization operations. the advantage of on-the-fly detection over post-mortem analysis is that the amount of storage used can be greatly reduced by data compression techniques and by discarding information as soon as it becomes obsolete. in the algorithm presented, the amount of storage required at any time depends only on the number v of shared variables being monitored and the number n of threads, not on the number of synchronizations.
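a minimal java sketch of the per-variable bookkeeping such an on-the-fly monitor might perform follows; the class and method names are hypothetical, and the sketch ignores the fork-join and synchronization ordering information the algorithm above relies on, so it illustrates the bookkeeping idea rather than schonberg's algorithm.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// hypothetical monitor: one record per monitored shared variable, so storage
// grows with the number of variables and threads, not with the number of accesses.
public class AccessMonitor {
    private static final class Record {
        long lastWriter = -1;                       // thread id of the last write, -1 if none
        final Set<Long> readersSinceWrite = new HashSet<>();
    }

    private final Map<String, Record> records = new HashMap<>();

    // called (conceptually by instrumentation) before each read of 'var' by 'threadId'
    public synchronized void onRead(String var, long threadId) {
        Record r = records.computeIfAbsent(var, k -> new Record());
        if (r.lastWriter != -1 && r.lastWriter != threadId) {
            report("read/write", var, threadId, r.lastWriter);
        }
        r.readersSinceWrite.add(threadId);
    }

    // called before each write of 'var' by 'threadId'
    public synchronized void onWrite(String var, long threadId) {
        Record r = records.computeIfAbsent(var, k -> new Record());
        if (r.lastWriter != -1 && r.lastWriter != threadId) {
            report("write/write", var, threadId, r.lastWriter);
        }
        for (long reader : r.readersSinceWrite) {
            if (reader != threadId) {
                report("read/write", var, threadId, reader);
            }
        }
        r.lastWriter = threadId;
        r.readersSinceWrite.clear();                // older read information becomes obsolete
    }

    private void report(String kind, String var, long a, long b) {
        System.out.println("possible " + kind + " anomaly on " + var
                + " between threads " + a + " and " + b);
    }

    public static void main(String[] args) {
        AccessMonitor m = new AccessMonitor();
        m.onWrite("x", 1);
        m.onRead("x", 2);   // flagged: read by thread 2 after a write by thread 1
        m.onWrite("x", 2);  // flagged: write by thread 2 conflicting with thread 1's write
    }
}
```

because ordering information is ignored, this sketch over-reports: accesses by different threads are flagged even when they are properly synchronized.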
data compression is achieved by the use of two techniques called merging and subtraction. upper bounds on storage are shown to be v × n^2 for merging and v × n for subtraction. d. schonberg japanese language handling in apl environments while japan was amongst the first countries to use apl, back in the early 1970's, and at one time enjoyed as many as 40% of ibm main-frame customers using apl, the lack of japanese language support for apl in popular computing environments today such as unix/aix, os/2 and windows seems to be driving apl out of the scene. this paper tries to bring up this subject more officially than i have done in the past, for more public discussion not only for japan but also for those countries where the characters required by national languages cannot be contained in the ascii table of 256 codes. kyosuke saigusa in black and white: an integrated approach to class-level testing of object-oriented programs because of the growing importance of object-oriented programming, a number of testing strategies have been proposed. they are based either on pure black-box or white-box techniques. we propose in this article a methodology to integrate the black- and white-box techniques. the black-box technique is used to select test cases. the white-box technique is mainly applied to determine whether two objects resulting from the program execution of a test case are observationally equivalent. it is also used to select test cases in some situations. we define the concept of a fundamental pair as a pair of equivalent terms that are formed by replacing all the variables on both sides of an axiom by normal forms. we prove that an implementation is consistent with respect to all equivalent terms if and only if it is consistent with respect to all fundamental pairs. in other words, the testing coverage of fundamental pairs is as good as that of all possible term rewritings, and hence we need only concentrate on the testing of fundamental pairs. our strategy is based on mathematical theorems. according to the strategy, we propose an algorithm for selecting a finite set of fundamental pairs as test cases. given a pair of equivalent terms as a test case, we should then determine whether the objects that result from executing the implemented program are observationally equivalent. we prove, however, that the observational equivalence of objects cannot be determined using a finite set of observable contexts (which are operation sequences ending with an observer function) derived from any black-box technique. hence we supplement our approach with a "relevant observable context" technique, which is a heuristic white-box technique to select a relevant finite subset of the set of observable contexts for determining the observational equivalence. the relevant observable contexts are constructed from a data member relevance graph (drg), which is an abstraction of the given implementation for a given specification. a semiautomatic tool has been developed to support this technique. huo yan chen t. h. tse f. t. chan t. y. chen from high-level specifications down to software implementations of parallel embedded real-time systems c. rust f. stappert p. altenbernd j. tacken vista: vitro software test application program michael b. paulkovich bill r. brykczynski integrating destructive assignment and lazy evaluation in the multiparadigm language g-2 john placer smalltalk standardization efforts peter deutsch fortran 90/95/hpf michael metcalf automated testing of posix standards j. f. leathrum k. a.
liburdy ada performance issues for real-time systems jeffrey becker robert goettge an approach to conceptual feedback in multiple viewed software requirements modeling harry s. delugach stop the presses: reader survey response phil hughes processing multiple files j. rueda type-safe linkage for variables and functions diomidis spinellis efficient and timely mutual authentication dave otway owen rees safely running programs as root phil hughes making active case tools - toward the next generation case tools in the case field there is a long-standing question: why do case tools seem to be dearly bought but sparsely used? based on our practical experience of making and using case tools, we point out that the reason is that today's case tools are actually not very user-oriented. the factors and requirements of case users are not given enough consideration in the production of case tools. most of today's case tools are motivated by new paradigms and techniques in the area of software engineering. therefore, the making of case tools is often driven by techniques, rather than by real users' needs and expectations. in this paper, we present our viewpoints on the reasons why case tools are used below expectations. we suggest some important features that are worthy of careful consideration while producing more people-oriented and active case tools. we also describe some strategies for building such case tools. riri huang extending attribute grammars to support programming-in-the-large attribute grammars add specification of static semantic properties to context-free grammars, which, in turn, describe the syntactic structure of program units. however, context-free grammars cannot express programming-in-the-large features common in modern programming languages, including unordered collections of units, included units, and sharing of included units. we present extensions to context-free grammars, and corresponding extensions to attribute grammars, suitable for defining such features. we explain how batch and incremental attribute-evaluation algorithms can be adapted to support these extensions, resulting in a uniform approach to intraunit and interunit static semantic analysis and translation of multiunit programs. josephine micallef gail e. kaiser a detailed view of dark roger van scoy judy bamberger robert firth industrial strength compiler construction with equations lutz h. hamel stop the presses: the software world--it's a changin' phil hughes marvel strategy language example the docprep environment was developed by two students, laura johnson and victor kan, as their term project in the spring 1989 e6123y programming environments and software tools course. neither student had any prior knowledge of or experience with marvel, and neither was familiar with its internal implementation. they successfully used docprep to produce their final project report. the twelve envelopes, bind, delete-sect-order, format, printdoc, printsec, review-format-err, review-spell-err, runeditor, specify-header, specify-printer, specify-sect-order and spell-check, are omitted. gail e. kaiser using peephole optimization on intermediate code andrew s. tanenbaum hans van staveren johan w. stevenson kernel korner: the linux keyboard andries e. brouwer procedure parameters can imitate sequence concatenation k. mcc.
clarke a superimposition control construct for distributed systems shmuel katz designing an application development model for a large banking organization jörg noack bruno schienmann a programming language and its efficiency r. r. loka the ergo attribute system the ergo attribute system was designed to satisfy the requirements for attributes in a language-generic program derivation environment. it consists of three components: (1) an abstract data type of attributes that guarantees attribute consistency, (2) a common lisp implementation which combines demand-driven and incremental attribute evaluation in a novel way while allowing for attribute persistence over many generations of a program, and (3) an attribute-grammar compiler producing code based on this abstract data type from a high-level specification. our experience with three major applications (one being the attribute-grammar compiler itself) confirms that the overhead in storing and accessing attributes incurred by our implementation scheme is more than offset by the gains from the demand-driven, incremental, and persistent nature of attribution. robert l. nord frank pfenning multiparadigm research: a new direction of language design john placer object fault handling for persistent programming languages: a performance evaluation antony l. hosking j. eliot b. moss lesstif and the hungry viewkit the efforts of the hungry programmers are making the motif widget set available to users malcolm murphy linux for suits: linux for suits doc searls a relations-based approach for simplifying metrics extraction the field of software metrics is constantly changing and metrics extraction tools have to be updated frequently to handle these changes. this paper details the use of an intermediate relation set to decouple code parsing from metrics analysis. parsers simply generate a set of intuitive relations, which a separate analyzer uses as input to compute arbitrary metrics. then, new metrics simply have to be specified in terms of these relations. a c++ metrics extractor tool for oo design metrics was built as a proof of concept. giancarlo succi eric liu automatic verification of hardware and software systems edmund m. clarke metrics in software quality assurance the nature of "software quality" and some software metrics are defined and their relationship to traditional software indicators such as "maintainability" and "reliability" is suggested. recent work in the field is summarized and an outlook for software metrics in quality assurance is provided. the material was originally presented as a tutorial at the "acm sigmetrics workshop/symposium on measurement and evaluation of software quality" on march 25, 1981. j. e. gaffney the form of reusable ada components for concurrent use j. carter map: a tool for understanding software maintenance of software is a major problem that the data processing industry faces today. this paper describes map, a tool that addresses the problems of software maintenance by helping programmers to understand their programs. sally warren making inodes behave claiborne describes the difficulties he encountered while building linux systems for general dynamics. clay claiborne exokernel: an operating system architecture for application-level resource management d. r. engler m. f. kaashoek j. o'toole a tool for designing java programs with uml this project intends to develop a simple uml (unified modeling language) tool in java to be used by students learning object-oriented design and java programming.
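a small java sketch of the demand-driven, cached attribute evaluation described in the ergo attribute system abstract above; the node type and the 'size' attribute are hypothetical, and the incremental re-evaluation and persistence that the real system provides are omitted.

```java
import java.util.ArrayList;
import java.util.List;

// hypothetical demand-driven attribution: the 'size' attribute of a tree node is
// computed only when first requested and the result is cached on the node.
public class AttrNode {
    private final List<AttrNode> children = new ArrayList<>();
    private Integer size;                 // cached synthesized attribute, null = not yet computed

    public void addChild(AttrNode child) {
        children.add(child);
        size = null;                      // local invalidation only; a real incremental
                                          // evaluator would also invalidate ancestors
    }

    // demand-driven evaluation of the 'size' attribute
    public int size() {
        if (size == null) {
            int s = 1;
            for (AttrNode c : children) {
                s += c.size();            // each child attribute is itself computed on demand
            }
            size = s;
        }
        return size;
    }

    public static void main(String[] args) {
        AttrNode root = new AttrNode();
        AttrNode left = new AttrNode();
        left.addChild(new AttrNode());
        root.addChild(left);
        root.addChild(new AttrNode());
        System.out.println(root.size()); // 4, computed once on demand
        System.out.println(root.size()); // served from the cache
    }
}
```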
each student will be able to design a java program by filling in forms (use cases, crc cards) and making diagrams (use case diagrams, class diagrams and sequence diagrams) in a number of views. when the student enters information for any one of the views the tool will update the equivalent information for the other views. the tool will generate java skeleton code from information supplied by the student. anna armentrout kernel korner: system calls michael k. johnson stop the presses: open source summit eric raymond recent work on the toronto toolkit richard m. levine an improved inspection technique john c. knight e. ann myers visual support for version management m. wein wm cowan w. m. gentleman a constructive alternative to axiomatic data type definitions many computer scientists advocate using axiomatic methods (such as algebraic specification) to specify a program data domain---the universe of abstract data objects and operations manipulated by a program. unfortunately, correct axiomatizations are difficult to write and to understand. furthermore, their non-constructive nature precludes automatic implementation by a language processor. in this paper, we present a more disciplined, purely constructive alternative to axiomatic data domain specification. instead of axiomatizing the program data domain, the programmer explicitly constructs it by using four type construction mechanisms: constructor generation, union generation, subset generation, and quotient generation. these mechanisms are rich enough to define all of the abstract data objects that programmers commonly use: integers, sequences, trees, sets, arrays, functions, etc. in contrast to axiomatic definitions, constructive definitions are easy to write and to understand. an unexpected advantage of the constructive approach is a limited capacity to support non-deterministic operations. as an illustration, we define a non-deterministic "choose" operation on sets. robert cartwright interfacing prolog to pascal kenneth magel clang - a simple teaching language p. d. terry high-level language debugging for concurrent programs an integrated system design for debugging distributed programs written in concurrent high- level languages is described. a variety of user-interface, monitoring, and analysis tools integrated around a uniform process model are provided. because the tools are language-based, the user does not have to deal with low-level implementation details of distribution and concurrency, and instead can focus on the logic of the program in terms of language-level objects and constructs. the tools provide facilities for experimentation with process scheduling, environment simulation, and nondeterministic selections. presentation and analysis of the program's behavior are supported by history replay, state queries, and assertion checking. assertions are formulated in linear time temporal logic, which is a logic particularly well suited to specify the behavior of distributed programs. the tools are separated into two sets. the language-specific tools are those that directly interact with programs for monitoring of and on-line experimenting with distributed programs. the language-independent tools are those that support off-line presentation and analysis of the monitored information. this separation makes the system applicable to a wide range of programming languages. in addition, the separation of interactive experimentation from off-line analysis provides for efficient exploitation of both user time and machine resources. 
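a rough java analogue of the constructive style in the cartwright abstract above: a hypothetical type of sets of naturals, carved out of the integers by an explicit subset invariant, with a union operation and a non-deterministic choose; the names are illustrative only and do not come from the paper.

```java
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

// hypothetical constructive-style type: sets of naturals, i.e. a subset of the
// integers carved out by an explicit invariant, with a union operation and a
// non-deterministic 'choose' in the spirit of the abstract above.
public final class NatSet {
    private final Set<Integer> elems = new HashSet<>();

    public NatSet(int... xs) {
        for (int x : xs) {
            if (x < 0) {
                throw new IllegalArgumentException("subset invariant violated: " + x);
            }
            elems.add(x);
        }
    }

    // union generation analogue: combine two constructed values into a new one
    public NatSet union(NatSet other) {
        NatSet r = new NatSet();
        r.elems.addAll(this.elems);
        r.elems.addAll(other.elems);
        return r;
    }

    // non-deterministic choose: some element is returned, and callers must not
    // depend on which one (here, whatever the iterator happens to yield first)
    public int choose() {
        Iterator<Integer> it = elems.iterator();
        if (!it.hasNext()) {
            throw new IllegalStateException("choose on empty set");
        }
        return it.next();
    }

    public static void main(String[] args) {
        NatSet s = new NatSet(1, 2, 3).union(new NatSet(5));
        System.out.println(s.choose());  // prints some element of {1, 2, 3, 5}
    }
}
```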
the implementation of a debugging facility for occam is described. german s. goldszmidt shaula yemini creating the apl extended standard eke van batenburg kernel korner miscellaneous character drivers: mr. rubini tells us how to register a small device needing a single entry point with the misc driver alessandro rubini why smalltalk? smalltalk is a single paradigm language with very simple semantics and syntax for specifying elements of a system and for describing system dynamics [1]. when the language is used to describe an application system, the developer extends smalltalk, creating a domain-specific language by adding a new vocabulary of language elements while maintaining the same semantics and syntax. using most smalltalk systems, it is easy to invent one's own development environment through inclusion of new system parts in the library and extension of the development tools. moreover, it is possible to make changes to the environment, and to applications written using the environment, while the system is executing. these system characteristics create a flexible and enjoyable software development experience. adele goldberg task termination and ada 95 ada 83 removed from the programmer the burden of coding potentially complex termination conditions between clients and servers by introducing an 'or terminate' option to the select statement. with the introduction of indirect communication (emphasised by the provision of protected objects in ada 95), it is no longer straightforward to obtain program termination. this paper illustrates the problem and suggests that adding a terminate alternative to an entry call might solve the problem. the advantages and disadvantages of the approach are discussed. a. j. wellings a. burns o. pazy a general economics model of software reuse j. e. gaffney r. d. cruickshank paging on an object-oriented personal computer a high-performance personal computing environment must avoid perceptible pauses resulting from many page faults within a short period of time. our performance goals for a paged virtual memory system for the smalltalk-80 programming environment are both to decrease the average page fault rate and to minimize the pauses caused by clusters of page faults. we have applied program restructuring techniques to the smalltalk-80 object memory in order to improve the locality of reference. the analysis in this paper considers the clustering of page faults over time and distinguishes between steady-state behavior and phase transitions. we compare the effectiveness of different restructuring strategies in reducing the amount of main memory needed to obtain desired levels of performance. ricki blau re-engineering legacy cobol programs j. k. joiner w. t. tsai an object-oriented programming language for developing distributed software shang lujun sun zhongxiu expressing design inheritance relationships in ada 95 brad balfour session 4a: management issues j. h. frame introduction to ada for programmers dean gonzalez dave cook classification of research efforts in requirements engineering pamela zave statement separators and conditional expressions zdenek v. jizba checking timing constraints in distributed object-oriented programs m. gergeleit j. kaiser h. streich a perfect square root routine to characterize a much larger effort, the design and implementation of a square root routine for ipsa apl is described. the routine is perfect in the sense that, if the result can be represented exactly, the exact result is given.
if the result cannot be represented exactly, it is rounded to the nearest representable floating point number. the use of apl in designing and testing this routine is emphasized. e. e. mcdonnell kernel scheduling in reconfigurable computing r. maestre f. j. kurdahi n. bagherzadeh h. singh r. hermida m. fernandez c: a language for high-level, efficient, and machine-independent dynamic code generation dawson r. engler wilson c. hsieh m. frans kaashoek an integrated approach to software reuse practice since 1993, sodalia's software engineers have been studying a reuse program whose goal is making software reuse a significant and systematic part of the software process. sodalia's corporate reuse program is intended to develop a software reuse process that incorporates reuse-specific activities along the object-oriented software development process, and a reuse library to support the classification and management of reusable components. this paper focuses on the on-going experience of sodalia in the gradual introduction of reuse practice in the organization, illustrates the evolutionary stages, and reports the results achieved. e. mambella r. ferrari f. d. carli a. l. surdo incremental partial evaluation: the key to high performance, modularity and portability in operating systems charles consel calton pu jonathan walpole take command: klogd: the kernel logging daemon michael a. schwarz unix for the hyper-impatient daniel lazenby infinite loops and how to create them infinite loops are the worst enemy of any programmer in any language. they waste potentially huge amounts of time, system resources and/or money, and can destroy the credibility of the programmer amongst colleagues or customers. in apl, infinite loops can potentially occur wherever a loop is present. however, they can also occur wherever any branching statement is used, and can even occur where no branching instructions have been used. in order to avoid infinite loops, the programmer is best equipped if he/she is aware of all of the possible ways in which they may be created. an apl function is also presented which will give programmers new to the language a means whereby most infinite loops can be avoided in their training phase. john r. searle an observation on the c library procedure random() thomas a. anastasio william w. carlson an improved mixture rule for pattern matching j. ophel portable and efficient dynamic storage management in ada ron kownacki s. tucker taft establishing ada repositories for reuse ideally, a software reuse repository is a robust, dynamic interchange among software designers. this constantly growing and improving resource assists with---and influences---the creation of quality, cost-efficient software systems. setting this process in motion and sustaining its momentum calls for many of the approaches used in system design and development. its stable yet flexible structure anticipates the technical requirements of present and future users. but the successful repository reflects a broader perspective, a long-term commitment, and attention to a number of other technical and nontechnical issues. it must maintain components beyond the development cycle of supported projects. it must have a capability for continuity. and most important, it must be a value-added, self-sustaining resource.
saic's experience with three significant ada reuse repositories has given us insights into a number of relevant issues and challenges, and has shown us the intricate relationships that link the technical, economic, human and other contributing factors affecting repository development and operation. these three repositories---stars, aflc and saic corporate---support software reuse and a number of additional specialized requirements. the software technology for adaptable, reliable systems (stars) repository serves as a testbed for repository and reuse technology. additional requirements include a means for sharing program-related information such as meeting minutes, presentations, contract data requirements lists (cdrl) items and peer reviews. the air force logistics command (aflc) ada software repository has been established to provide a means of sharing domain-specific information among software development and maintenance personnel at the air logistics commands. with the wide variety of domain-specific software developed at saic, the company has decided to institute a reuse program to enhance the quality of this domain software and to encourage cross-domain reuse. common objectives for these projects include long-term continuity and self-sustaining capabilities. our experience to date has shown that these objectives impact four principal areas of the repository system: content acquisition/update, technical information services, operational support facilities. b. kitaoka lazy array data-flow dependence analysis automatic parallelization of real fortran programs does not live up to users' expectations yet, and dependence analysis algorithms that either produce too many false dependences or are too slow contribute significantly to this. in this paper we introduce a data-flow dependence analysis algorithm which exactly computes value-based dependence relations for program fragments in which all subscripts, loop bounds and if conditions are affine. our algorithm also computes good affine approximations of dependence relations for non-affine program fragments. we do not know of any other algorithm which can compute better approximations. our algorithm is efficient too, because it is lazy. when searching for write statements that supply values used by a given read statement, it starts with statements which are lexicographically close to the read statement in iteration space. then, if some of the read statement instances are not "satisfied" with these close writes, the algorithm broadens its search scope by looking into more distant writes. the search scope keeps broadening until all read instances are satisfied or no write candidates are left. we timed our algorithm on several benchmark programs and the timing results suggest that our algorithm is fast enough to be used in commercial compilers---it usually takes 5 to 15 percent of f77 -O2 compilation time to analyze a program. most programs in the 100-line range take less than 1 second to analyze on a sun sparcstation ipx. vadim maslov the sombrero distributed single address space operating system donald miller alan skousen linux command line parameters passing command line parameters to the kernel during system startup solves some programmers' testing problems. jeff tranter object-oriented formal specifications to support ada 95 reuse huiming yu albert esterline joseph monroe a review month 1990 robert g. brown issues in ada's future sponsored by acm/adatc (panel discussion) the ada programming language has gone public.
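a brute-force java illustration of the value-based dependences that the maslov abstract above computes symbolically: for each array read in a small hypothetical affine loop, the supplying write is found by searching the closest earlier iterations first and widening only as needed; a real analyzer reasons about the affine subscripts directly instead of enumerating iterations.

```java
// hypothetical loop being analyzed (fortran-like pseudocode):
//   do i = 0, n-1
// s1:   a(2*i) = ...
// s2:   ...    = a(i)
// for each read a(i) executed by s2 in iteration i, the value-based source is the
// most recent earlier (or same-iteration, since s1 precedes s2) execution of s1
// that wrote the same element.
public class ValueBasedDeps {
    static int writeSubscript(int j) { return 2 * j; }   // subscript written by s1 in iteration j
    static int readSubscript(int i)  { return i; }       // subscript read by s2 in iteration i

    public static void main(String[] args) {
        int n = 10;
        for (int i = 0; i < n; i++) {
            int src = -1;
            // lazy-style search: start at the closest candidate iteration and move
            // backward only until a supplying write is found
            for (int j = i; j >= 0; j--) {
                if (writeSubscript(j) == readSubscript(i)) {
                    src = j;
                    break;
                }
            }
            if (src >= 0) {
                System.out.println("read in iteration " + i
                        + " takes its value from the write in iteration " + src);
            } else {
                System.out.println("read in iteration " + i
                        + " uses a value defined before the loop");
            }
        }
    }
}
```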
the department of defense originally sponsored the effort in order to develop a high level language which could be used in real time computer systems. such systems are written in assembly language or in obscure languages such as cms (navy), jovial (air force), and tacpol (army). each of the high level languages in current use by the services has deficiencies which require the use of assembly language in almost all applications. in fact, most of the real time applications which are supposedly written in a high level language are actually written in assembly language because the compilers accept assembly language statements. this sad state of affairs has resulted because none of the current languages can handle all of the requirements of real time applications. for example, there exist at least four dialects of jovial (j2, j3, j3b, and j73), none of which has a basic input or output capability. some of the problems which result from the use of assembly language and obscure languages are logistical rather than technical: a small number of people know these languages; there is a dearth of training material and courses; there is no portability of programs or people between systems which use these languages; the programming environments for each of the languages are poor; the languages are not available on many computer systems. sabina h. saib report on a workshop on software change and evolution this paper provides a brief overview and report on the main outcomes of the software change and evolution (sce99) workshop held in los angeles on may 17, 1999, as part of the international conference on software engineering 1999. the purpose of the workshop was to gather the most active researchers and practitioners in the field of software evolution and change. the overall conclusion of the workshop was that this is a topic of enormous importance to industry, and there is a growing community of both practitioners and researchers who are working in the field. it would therefore make sense to arrange further workshops to support this expanding community. v. t. rajlich s. rank n. wilde k. h. bennett practical techniques for the design, specification, verification, and implementation of concurrent systems rance cleaveland philip m. lewis scott a. smolka objects to the rescue! or httpd: the next generation operating system this position paper suggests that object-oriented operating systems may provide the means to meet the ever-growing demands of applications. as an example of a successful ooos, we cite the http daemon. to support the contention that _httpd_ is in fact an operating system, we observe that it implements uniform naming, persistent objects and an invocation meta-protocol, specifies and implements some useful objects, and provides a framework for extensibility. we also believe that the modularity that is characteristic of oo systems should provide a performance benefit rather than a penalty. our ongoing work in the synthetix project at ogi is exploring the possibilities for advanced optimizations in such systems. andrew p. black jonathan walpole foundations of software testing: dependability theory dick hamlet software engineering (part ii) m. r. berkowitz g. davis k. t. orr j. a. senn d.
ward direct programming using a unified object model (abstract) bruce schwartz mark lentczner process assurance audits: lessons learned alain april alain abran ettore merlo a caching model of operating system kernel functionality operating system design has had limited success in providing adequate application functionality and a poor record in avoiding excessive growth in size and complexity, especially with protected operating systems. applications require far greater control over memory, i/o and processing resources to meet their requirements. for example, database transaction processing systems include their own "kernel" which can much better manage resources for the application than can the application-ignorant general-purpose conventional operating system mechanisms. large-scale parallel applications have similar requirements. the same requirements arise with servers implemented outside the operating system kernel. in our research, we have been exploring the approach of making the operating system kernel a cache for active operating system objects such as processes, address spaces and communication channels, rather than a complete manager of these objects. the resulting system is smaller than recent so-called micro-kernels, and also provides greater flexibility for applications, including real-time applications, database management systems and large-scale simulations. as part of this research, we have developed what we call a cache kernel, a new type of micro-kernel that supports operating system configurations across these dimensions. the cache kernel can also be regarded as providing a hardware adaptation layer (hal) to operating system services rather than trying to just provide a key subset of os services, as has been the common approach in previous micro-kernel work. however, in contrast to conventional hals, the cache kernel is fault-tolerant because it is protected from the rest of the operating system (and applications), it is replicated in large-scale configurations and it includes audit and recovery mechanisms. a cache kernel has been implemented on scalable shared-memory and networked multi-computer [2] hardware which provides architectural support for the cache kernel approach. figure 1 illustrates a typical target configuration. there is an instance of the cache kernel per multi-processor module (mpm), each managing the processors, second-level cache and network interface of that mpm. the cache kernel executes out of prom and local memory of the mpm, making it hardware-independent of the rest of the system except for power. that is, the separate cache kernels and mpms fail independently. operating system services are provided by application kernels, server kernels and conventional operating system emulation kernels in conjunction with privileged mpm resource managers (mrm) that execute on top of the cache kernel. these kernels may be in separate protected address spaces or a shared library within a sophisticated application address space. a system bus connects the mpms to each other and the memory modules. a high-speed network interface per mpm connects this node to file servers and other similarly configured processing nodes. this overall design can be simplified for real-time applications and similar restricted scenarios.
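a very loose java sketch of the caching idea in the cheriton and duda abstract above: the kernel keeps only a bounded cache of descriptors for active objects, reloading them on demand and evicting the least recently used; the descriptor type, capacity, and reload path are invented for illustration and say nothing about the real cache kernel interfaces.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// hypothetical illustration: the kernel holds only a bounded, lru-managed cache of
// descriptors for active objects; a descriptor that is not cached is reloaded on
// demand from whatever higher-level kernel really manages it.
public class DescriptorCache {
    private static final int CAPACITY = 3;

    private final Map<Integer, String> cache =
        new LinkedHashMap<Integer, String>(16, 0.75f, true) {  // access-ordered for lru eviction
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest) {
                boolean evict = size() > CAPACITY;
                if (evict) {
                    System.out.println("evicting descriptor for process " + eldest.getKey());
                }
                return evict;
            }
        };

    public String lookup(int pid) {
        String d = cache.get(pid);                 // a hit also refreshes the lru order
        if (d == null) {
            System.out.println("cache miss, reloading descriptor for process " + pid);
            d = "descriptor-for-process-" + pid;   // stand-in for the real reload path
            cache.put(pid, d);                     // may trigger eviction of the eldest entry
        }
        return d;
    }

    public static void main(String[] args) {
        DescriptorCache kernelCache = new DescriptorCache();
        for (int pid : new int[]{1, 2, 3, 1, 4, 2}) {
            System.out.println(kernelCache.lookup(pid));
        }
    }
}
```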
for example, with relatively static partitioning of resources, an embedded real-time application could be structured as one or more application spaces incorporating application kernels as shared libraries executing directly on top of the cache kernel. david r. cheriton kenneth j. duda high performance fortran language specification corporate rice university a brief introduction to concurrent pascal charles hayden multi-dispatch in the java virtual machine (poster session): design and implementation mainstream object- oriented languages, such as c++ and java, provide only a restricted form of polymorphic methods, namely single-receiver dispatch. in common programming situations, programmers must work-around this limitation. we detail how to extend the java virtual machine to support multiple-dispatch and examine the complications that java imposes on multiple-dispatch in practice. our technique avoids changes to the java programming language itself, maintains source-code and library compatibility, and isolates the performance penalty and semantic changes of multiple-dispatch to the program sections which use it. we have micro-benchmark and application-level performance results for a dynamic _most specific applicable_ (msa) dispatcher, two table-based dispatchers (_multiple row displacement_ (mrd) and _single receiver projections_ (srp)), and a tuned srp dispatcher. our general-purpose technique provides smaller dispatch latency than equivalent programmer-written double-dispatch code. christopher dutchyn paul lu duane szafron steve bromling wade holst an exact array reference analysis for data flow testing istvan forgacs extracting concepts from file names: a new file clustering criterion nicolas anquetil timothy lethbridge communication scheduling peter mattson william j. dally scott rixner ujval j. kapasi john d. owens on understanding type declarations in c r. p. mody stop the presses: linux grows up phil hughes secondary storage and filesystems marshall kirk mckusick comments, assertions and pragmas p. grogono usability measurement - its practical value to the computer industry m. maguire a. dillon reasoning about team tracking one subject of much ongoing research is team tracking --- tracking a team's joint goals and intentions in dynamic, real- time domains. in this paper, a theory of team tracking and an architecture for team-oriented communication-based agents are presented. the problems faced in tracking ill-structured teams are also discussed. xiaocong fan dianxiang xu jianmin hou guoliang zheng designing concurrent, distributed, and real-time applications with uml object-oriented concepts are crucial in software design because they address fundamental issues of adaptation and evolution. with the proliferation of object-oriented notations and methods, the unified modeling language (uml) has emerged to provide a standardized notation for describing object-oriented models. however, for the uml notation to be effectively applied, it needs to be used with an object-oriented analysis and design method. this tutorial describes the comet method for designing real-time and distributed applications, which integrates object-oriented and concurrency concepts and uses the uml notation. hassan gomaa transvirtual adopts microsoft java extensions mr. 
knudsen tells us why this company chose to add ms extensions to kaffe, the open source java implementation craig knudsen mxec: parallel processing with an advanced macro facility mxec is a sophisticated computing environment (executive system) which extends and magnifies users' interactions with a computer. principally, mxec provides for parallel processing and also assumes many of the mundane, clerical tasks with which users of most systems find themselves burdened. some of the characteristics of the mxec system are given, and its motivation and implementation are discussed. william l. ash advertisers index corporate linux journal staff refactoring tool challenges in a strongly typed language (poster session) this poster examines the challenges of developing a refactoring tool for a weakly typed language such as smalltalk as opposed to a strongly typed language such as java. to explore this, we will compare the push up field refactoring in each language. this refactoring was selected because it is relatively simple conceptually, but difficult to implement for java. in a weakly typed language such as smalltalk, push up field is simple. the user simply determines that the parent class needs the variable. the refactoring tool moves the field to the parent class. then the tool searches all the subclasses of the parent class, if the classes have a variable with the same name, the refactoring tool removes the variable from the subclass. the task is complete. for java, a description of classes and types is necessary. let's start with a base class a. a has a number of child classes, b, c, d, e, f, and g. each of b-g has a single instance variable named var. the only difference between the classes is the type of var. b and c both have a variable named var with type x. d has a variable named var with a type y. e has a variable named var with type w. f has a variable named var with a type of z. and g has a variable named var with the type int. w, x, y, and z are classes. w is the base class, x is a subclass of w, and y is a subclass of x. z is unrelated to all of the other classes. since all subclasses of a have a variable named var, a programmer might suspect that they could reduce the amount of code by moving the variable into the parent class a. let's move the field named var from class b into class a. like in smalltalk, the java refactoring tool can remove the variable var from b and c, since the var variable in both classes have the same type. the source file for a would get the declaration of type x named var. the source file for a might also gain an import for the type x. let's postpone the discussion of the scope of the variable var. the refactoring tool can remove var from d since the storage in a is sufficient. however, the impact on the source code depends on whether there is a difference between the interface of x and y. if there are methods or fields added in y that are not in x, then by removing the variable var from d, the refactoring tool must be sure to cast var back to a y in code that refers to var in a way that accesses the differences. for instance, if x and y have a method m, then in d the invocation var.m() remains unchanged. if y has a method n and no similar method is present in x or a parent class of x, then d must change the code that looks like var.n(); to ((y) var).n(); the refactoring tool cannot remove var from e, since a variable of type w can not be stored in a class x. a refactoring tool might change the type of the object in class a to w. 
the effect of casting var to an x or a y type would be more widespread. to remove var from f, the refactoring tool would need to change the variable var's type to object, since object is the base class for all classes and thus would be a common storage type for ws and zs. everywhere that var is used in any source files, it must be cast to the appropriate type, eliminating the benefit of compile time type checking. removing var from g is even more difficult: since a primitive type such as int does not derive from object, the refactoring tool must find a way to store var for g. java provides a series of immutable types that store the primitive types, e.g. java.lang.integer. to access the variable's value, "var" would be replaced with "var.intvalue()". to set var, the tool would create a new instance of the integer class, and assign it to var. since multithreaded programming is encouraged, any operation that replaces an atomic action with a series of steps must be considered very carefully. in this instance, the assignment of a value to var is atomic, but now includes the creation of an integer object followed by an assignment. object instantiation is not atomic, therefore it might be necessary to wrap the assignment of var in an appropriate synchronized block. synchronized blocks are computationally very expensive, and should not be used indiscriminately. therefore, an automated tool should not remove the variable var from g as a result of pushing var from b to a. instead, a refactoring tool might recommend renaming the variable var in g. christopher seguin region-based compilation: an introduction and motivation richard e. hank wen-mei w. hwu b. ramakrishna rau generating a requirements specifications knowledge-base the requirements specifications phase of the software life cycle plays a vital role within the overall development process. existing methodologies provide a formalized specification document for use by the design phase. however, such methods generally rely on heuristic or ad-hoc methods for their initial conversion from the english requirements document to the formalized system specification. a methodology is under development by which the english requirements document is converted into a reliable set of knowledge for use within the specification process. such a method would be utilized to help improve the overall quality and reliability of the resulting specification document. this method provides a three-phase conversion from the initial english document into a knowledge-based tool for use by the system specification group. the initial phase in the methodology uses as input a standard english requirements document and produces a structured parse of this text. this step utilizes the concept of augmented transition networks [1] for the conversion. the grammar used in constructing the atn is based on chomsky's transformational english grammar [2]. inherent ambiguities within the document are resolved interactively with the user/analyst. the second phase converts this structured parse into a set of prolog-based facts and rules that encapsulate the knowledge within the initial document. these facts can be divided into three major classifications: hierarchies, properties and events. hierarchy facts detail the relationships that exist between the entities themselves, as described within the requirements document. property facts specify the specific characteristics (physical and logical) of an individual entity.
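a compact before-and-after java sketch of the 'd' case discussed in the seguin poster above; the classes a, d, x, and y, the field var, and the methods m and n follow the poster's hypothetical example, with capitalization adjusted to java conventions.

```java
// before the refactoring: d declares its own field with the more specific type y
class X            { void m() { System.out.println("m"); } }
class Y extends X  { void n() { System.out.println("n"); } }

class A            { /* no field yet */ }
class D extends A  {
    Y var = new Y();
    void work() {
        var.m();          // part of the interface inherited from x
        var.n();          // y-specific method
    }
}

// after pushing the field up: a2 holds var with the more general type x,
// so d2 must cast where it needs the y-specific interface
class A2             { X var = new Y(); }
class D2 extends A2  {
    void work() {
        var.m();          // still fine through the declared type x
        ((Y) var).n();    // cast required, as described in the poster
    }
}

public class PushUpFieldDemo {
    public static void main(String[] args) {
        new D().work();
        new D2().work();
    }
}
```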
finally, event facts provide information regarding the actual actions that take place within the proposed system. the specific action being performed, the entity invoking this action, and all other entities in this event are detailed. in addition to these facts, other relevant information is stored within the knowledge base. rules exist for the definition of the specific actions and entities referenced within the requirements specification document. the quantification of all abstract modifiers used within the document is performed interactively with the user/analyst. in addition, the method links the origin of all knowledge-base information back to the original requirements specification document itself. a final set of rules permits restrictions to be placed on specific entity characteristics (properties) or upon events themselves. the final phase checks the accumulated knowledge for reliability, understandability and traceability [3]. reliability checking includes examining the aspects of correctness, completeness, consistency and necessity. testability relates to the overall quantification, structure, conciseness and self-description within the document. traceable information augments the user's ability to track the correspondence of the entities and actions within the generated knowledge base back to the initial requirements document. d. w. cordes d. l. carver a conservative alternative to pancode t. kovats workshop on compiling and optimizing ralph johnson abstraction, data types, and models for software in the area of software development and maintenance, a major issue is managing the complexity of the systems. programming methodologies and languages to support them have grown in response to new ideas about how to cope with this complexity. a dominant theme in the growth of methodologies and languages is the development of tools dealing with abstractions. an abstraction is a simplified description, or specification, of a system that emphasizes some of its details or properties while suppressing others. a good abstraction is one in which information that is significant to the reader (i.e., the user) is emphasized while details that are immaterial, at least for the moment, are suppressed. what we call "abstraction" in programming systems corresponds closely to what is called "modelling" in many other fields. it shares many of the same problems deciding which characteristics of the system are important, what variability (i.e., parameters) should be included, which descriptive formalism to use, how the model can be validated, and so on mary shaw from the publisher: linux in the mainstream? phil hughes cheap eagerness karl-filip faxen software metrics: using measurement theory to describe the properties and scales of static software complexity metrics h. zuse p. bollmann a single-pass syntax-directed front end for ada this paper describes the front-end processor of an ada compiler that is under development at florida state university. the compiler is coded in pascal, to execute on a cdc cyber system, and is presently targeted to the z8000 microprocessor architecture. owing at least in part to the peculiar origins and changing goals of this project, the front end processor is rather unlike those of the other ada compilers of which we know. perhaps its most distinctive feature is that it operates in one pass. t. p. 
baker a report on object-oriented extensions to pascal joseph bergin linux apprentice: paths lynda williams an object-oriented file system - an example of using the class hierarchy framework concept this paper presents the design of an object-oriented file system which was developed as a part of the "objix object-oriented operating system" project. the file system is a self-contained program system which is decomposed using a standard object-oriented framework concept. a novel approach to object-oriented frameworks, the class hierarchy framework concept recapitulated in this paper, is employed in structuring components of the file system. further, this paper illustrates on an example how the file system pursues a typical system call. tomas smolik comments on "the law of demeter" and c++ m. sakkinen asynchronous and stand-alone entries this report summarizes the discussions held on asynchronous and stand-alone entries. richard powers stop the presses: decus and osw gary moore phil hughes interfaces, protocols, and the semi-automatic construction of software adaptors in this paper we show how to augment object-oriented application interfaces with enhanced specifications that include sequencing constraints called protocols. protocols make explicit the relationship between messages (methods) supported by the application. these relationships are usually only given implicitly, either in the code or in textual comments. we define notions of interface compatibility based upon protocols and show how compatibility can be checked, discovering a class of errors that cannot be discovered via the type system alone. we then define software adaptors that can be used to bridge the difference between object-oriented applications that have functionally compatible but type incompatible interfaces. we discuss what it means for an adaptor to be well-formed. leveraging the information provided by protocols, we show how adaptors can be automatically generated from a high-level description, called an interface mapping. daniel m. yellin robert e. strom going threadbare (panel session): sense or sedition? a debate on the threads abstraction mary baker a generalized iterative construct and its semantics a new programming language construct, called doupon, subsumes dijkstra's selective (if) and iterative (do) constructs. doupon has a predicate transformer approximately equivalent in complexity to that for do. in addition, it simplifies a wide variety of algorithms, in form as well as in discovery and proof. several theorems are demonstrated that are useful for correctness proofs and for optimization and that are not applicable to do or if. the general usefulness of doupon derives from a separation of the concerns of invariance, through iteration, from those of termination. ed anson writing multi-user applications in apl2 this paper presents a general scheme for quickly implementing multi-user applications. several facilities of apl2 are discussed which make apl2 a particularly suitable solution for such problems. the resulting implementation is used to demonstrate how the advanced facilities of apl2 affect the writing of programs. the simplicity of the programs indicates how productive apl2 can be. the paper does not discuss the necessary user-friendly front end that any application should have because that is no different from a single user application. james a. brown apl in the new millennium kenneth e. 
iverson building interpreters by composing monads we exhibit a set of functions coded in haskell that can be used as building blocks to construct a variety of interpreters for lisp-like languages. the building blocks are joined merely through functional composition. each building block contributes code to support a specific feature, such as numbers, continuations, function calls, or nondeterminism. the result of composing some number of building blocks is a parser, an interpreter, and a printer that support exactly the expression forms and data types needed for the combined set of features, and no more. the data structures are organized as pseudomonads, a generalization of monads that allows composition. functional composition of the building blocks implies type composition of the relevant pseudomonads. our intent was that the haskell type resolution system ought to be able to deduce the appropriate data types automatically. unfortunately there is a deficiency in current haskell implementations related to recursive data types: circularity must be reflected statically in the type definitions. we circumvent this restriction by applying a purpose-built program simplifier that performs partial evaluation and a certain amount of program algebra. we construct a wide variety of interpreters in the style of wadler by starting with the building blocks and a page of boiler-plate code, writing three lines of code (one to specify the building blocks and two to (redundantly) specify type compositions), and then applying the simplifier. the resulting code is acceptable haskell code. we have tested a dozen different interpreters with various combinations of features. in this paper we discuss the overall code structuring strategy, exhibit several building blocks, briefly describe the partial evaluator, and present a number of automatically generated interpreters. guy l. steele tcl/tk the swiss army knife of web applications: tcl/tk offers many uses to the web programmer. mr. schongar describes a few of them bill schongar the design of a real-time multi-tasking operating system kernel for the harris rtx 2000 h. glass m. mellen t. hand unfair process scheduling in modula-2 david hemmendinger the design, definition and implementation of programming languages john c. reynolds knowledge-based optimization in prolog compiler naoyuki tamura a lightweight architecture for program execution monitoring clinton jeffery wenyi zhou kevin templer michael brazell the bits between the lambdas: binary data in a lazy functional language malcolm wallace colin runciman back to the future: the story of squeak, a practical smalltalk written in itself dan ingalls ted kaehler john maloney scott wallace alan kay structured analysis and object oriented analysis (panel session) dennis dechampeaux larry constantine ivar jacobson stephen mellor paul ward edward yourdon process locking: a protocol based on ordered shared locks for the execution of transactional processes in this paper, we propose _process locking_, a dynamic scheduling protocol based on ideas of ordered shared locks, that allows for the correct concurrent and fault-tolerant execution of transactional processes. transactional processes are well defined, complex structured collections of transactional services. the process structure comprises flow of control between single process steps and also considers alternatives for failure handling purposes.
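as a rough illustration of the ordered-shared-lock idea behind process locking, the following python sketch (all names hypothetical; compensation handling, termination constraints and cost considerations are omitted) grants conflicting operations in acquisition order, delays commit until every earlier holder has finished, and cascades aborts to later holders:

    from collections import defaultdict

    class ProcessLockManager:
        """toy ordered-shared-lock table: later acquirers may share a lock,
        but they may only commit after every earlier holder has finished."""

        def __init__(self):
            self.queues = defaultdict(list)   # resource -> ordered list of process ids
            self.finished = set()             # processes that committed or aborted

        def acquire(self, pid, resource):
            # grant immediately, but record the order of acquisition
            self.queues[resource].append(pid)

        def can_commit(self, pid):
            # a process may commit only if no earlier holder on any of its
            # resources is still active (this is the "ordered" part)
            for queue in self.queues.values():
                if pid in queue:
                    earlier = queue[:queue.index(pid)]
                    if any(p not in self.finished for p in earlier):
                        return False
            return True

        def abort(self, pid):
            # aborting cascades to every later holder that shared a lock with pid
            victims = {pid}
            for queue in self.queues.values():
                if pid in queue:
                    victims.update(queue[queue.index(pid) + 1:])
            self.finished.update(victims)
            return victims

        def commit(self, pid):
            assert self.can_commit(pid)
            self.finished.add(pid)

    mgr = ProcessLockManager()
    mgr.acquire("P1", "account"); mgr.acquire("P2", "account")
    print(mgr.can_commit("P2"))   # False: P1 acquired first and has not finished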
moreover, the individual steps of a process may have different termination characteristics, i.e., they cannot be compensated once they have committed. all these constraints have to be taken into consideration when deciding how to interleave processes. however, due to the higher level semantics of processes, standard locking techniques based on shared and exclusive locks on data objects cannot be applied. yet, process locking addresses both atomicity and isolation simultaneously at the appropriate level, the scheduling of processes, and accounts for the various constraints imposed by processes. in addition, process locking aims at providing a high degree of concurrency while, at the same time, minimizing execution costs. this is done by allowing cascading aborts for rather simple processes while this is prevented for complex, long-running processes within the same framework. heiko schuldt more on modules joerg stiller ted stern finding objects: practical approaches peter coad putting more meaning in expressions boyko b. bantchev oo distributed programming is not distributed oo programming rachid guerraoui mohamed e. fayad formalized three-layer system-level reuse model and methodology for embedded data-dominated applications f. vermeulen f. catthor d. verkest h. deman an extension to the power operator e. e. mcdonell a jini monitoring tool marcia zangrilli cache performance of garbage-collected programs as processor speeds continue to improve relative to main-memory access times, cache performance is becoming an increasingly important component of program performance. prior work on the cache performance of garbage-collected programs either argues or assumes that conventional garbage-collection methods will yield poor performance, and has therefore concentrated on new collection algorithms designed specifically to improve cache-level reference locality. this paper argues to the contrary: many programs written in garbage-collected languages are naturally well-suited to the direct-mapped caches typically found in modern computer systems. garbage-collected programs written in a mostly-functional style should perform well when simple linear storage allocation and an infrequently-run generational compacting collector are employed; sophisticated collectors intended to improve cache performance are unlikely to be necessary. as locality becomes ever more important to program performance, programs of this kind may turn out to have a significant performance advantage over programs written in traditional languages. mark b. reinhold the f3 reuse environment for requirements engineering silvana castano valeria de antonellis kernel korner implementing linux system calls: how to create and install a system call in linux and install an interrupt for controlling the serial port jorge manjarrez-sanchez reducing maintenance costs through the application of modern software architecture principles large software programs are usually long lived and continually evolve. substantial maintenance effort is often expended by engineers trying to understand the software prior to making changes. to successfully evolve the software, a thorough understanding of the architect's intentions about software organization is required. software maintenance costs can be reduced significantly if the software architecture is well defined, clearly documented, and creates an environment that promotes design consistency through the use of guidelines and design patterns.
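the cache-performance abstract above attributes good locality to simple linear (bump-pointer) allocation plus an occasional generational compacting collection; a toy python sketch of that allocation discipline (hypothetical names, not the paper's system) is:

    class Nursery:
        """toy bump-pointer allocator: consecutive allocations land in adjacent
        slots, which is what gives mostly-functional programs their spatial locality."""

        def __init__(self, size):
            self.heap = [None] * size
            self.top = 0                      # bump pointer

        def alloc(self, value):
            if self.top == len(self.heap):
                raise MemoryError("nursery full; run a minor collection")
            addr = self.top
            self.heap[addr] = value
            self.top += 1                     # linear allocation: just bump
            return addr

        def minor_collect(self, live_addrs):
            # copy live objects to the front of a fresh nursery (compaction),
            # leaving survivors contiguous and the rest of the space reusable;
            # a real collector would also forward the survivors' addresses
            survivors = [self.heap[a] for a in sorted(live_addrs)]
            self.heap = survivors + [None] * (len(self.heap) - len(survivors))
            self.top = len(survivors)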
building a maintainable system depends upon the consistent application of these architectural practices. this paper describes the application of modern software architecture methods to achieve a maintainable implementation of a large, distributed, real-time, embedded software system. christine hulse scott edgerton michael ubnoske louis vazquez developing correct and efficient multithreaded programs with thread-specific data and a partial evaluator yasushi shinjo more letters corporate linux journal staff on-the-fly topological sort - a basis for interactive debugging and live visualization of parallel programs doug kimelman dror zernik modeling multiple domains in software reuse stan jarzabek practcl programming tips: it's all a matter of timing stephen uhler session 10a: formal specifications m. nivat next: the elimination of goto-patches? d. jonsson we're using the wrong name g w macpherson designing documentation to compensate for delocalized plans conceptual representation methods play a significant role in facilitating the software process. recent studies explore and clarify the use of these representations and their impact on progress. elliot soloway robin lampert stan letovsky david littman jeannine pinto an executable language definition w. m. waite cool: kernel support for object-oriented environments the chorus object-oriented layer (cool) is an extension of the facilities provided by the chorus distributed operating system with additional functionality for the support of object-oriented environments. this functionality is realized by a layer built on top of the chorus v3 nucleus, which extends the chorus interface with generic functions for object management: creation, deletion, storage, remote invocation and migration. one major goal of this approach was to explore the feasibility of general object management at the kernel level, with support of multiple object models at a higher level. we present the implementation of cool and a first evaluation of this approach with a c++ environment using the cool mechanisms. sabine habert laurence mosseri blurfit: an application of functional programming to scientific analysis d. mcclain monotonic conflict resolution mechanisms for inheritance r. ducournau m. habib m. huchard m. l. mugnier a new model of program dependences for reverse engineering daniel jackson eugene j. rollins an object-oriented approach to automated generation of challenge examinations using ada 95 the primary objective of this paper is to analyze and evaluate the usefulness of object-oriented development and the ada 95 programming language as applied to a specific software development project. a secondary objective is to show that structured development is still useful while applying object-oriented development and that the two methods can be integrated. the project is to develop an automated tool for generation of challenge examinations to test the knowledge of students in a given subject. the concepts of object-oriented and structured development are discussed and compared as they pertain to this problem domain. the methods are then applied and evaluated as the project is taken from analysis of the problem, through requirements specification, design, and implementation. during the requirements analysis and design phases, comparisons are made between object composition and functional decomposition, and rationale is discussed for the best places to use each. 
during the implementation phase, object-oriented features of the ada 95 programming language are used as well as traditional structured programming techniques. conclusions are summarized regarding the usefulness of the object-oriented paradigm and ada 95 for this project, and the benefits of incorporating traditional structured development where it fits best. arthur irving littlefield a general model for concurrent and distributed object-oriented programming this summary presents a general model supporting object-oriented programming in concurrent as well as distributed environments. the model combines the advantages of remote procedure calls with those of message passing. it relies on the following concepts: all objects are not active but the active entities are objects. asynchronous message passing with data-driven synchronization. service mechanism allowing an explicit thread of control. d. caromel letters to the editor corporate linux journal staff object-oriented concurrent programming in cst cst is a programming language based on smalltalk-80 that supports concurrency using locks, asynchronous messages, and distributed objects. distributed objects have their state distributed across many nodes of a machine, but are referred to by a single name. distributed objects are capable of processing many messages simultaneously and can be used to efficiently connect together large collections of objects. they can be used to construct a number of useful abstractions for concurrency. this paper describes the cst language, gives examples of its use, and discusses an initial implementation. w. j. dally a. a. chien a methodology for implementing highly concurrent data objects a concurrent object is a data structure shared by concurrent processes. conventional techniques for implementing concurrent objects typically rely on critical sections, ensuring that only one process at a time can operate on the object. nevertheless, critical sections are poorly suited for asynchronous systems: if one process is halted or delayed in a critical section, other, nonfaulty processes will be unable to progress. by contrast, a concurrent object implementation is lock free if it always guarantees that some process will complete an operation in a finite number of steps, and it is wait free if it guarantees that each process will complete an operation in a finite number of steps. this paper proposes a new methodology for constructing lock-free and wait-free implementations of concurrent objects. the object's representation and operations are written as stylized sequential programs, with no explicit synchronization. each sequential operation is automatically transformed into a lock-free or wait-free operation using novel synchronization and memory management algorithms. these algorithms are presented for a multiple instruction/multiple data (mimd) architecture in which n processes communicate by applying atomic read, write, load_linked, and store_conditional operations to a shared memory. maurice herlihy introduction: format c kevin fu kaleidoscope: mixing objects, constraints, and imperative programming kaleidoscope is an object-oriented language being designed to integrate the traditional imperative object-oriented paradigm with the less traditional declarative constraint paradigm. imperative state changes provide sequencing while declarative constraints provide object relations. a variables-as-streams semantics enables the declarative-imperative integration.
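the lock-free methodology summarized above turns a stylized sequential operation into a retry loop around an atomic update; a minimal python sketch of that structure (hypothetical names, with a lock-simulated compare-and-swap standing in for the load_linked/store_conditional pair, so it is illustrative only) is:

    import threading

    class LockFreeCounter:
        """retry-loop structure of a lock-free operation: read the current
        value, compute a new one privately, then install it only if nobody
        else got there first."""

        def __init__(self):
            self._value = 0
            self._guard = threading.Lock()    # only used to simulate an atomic CAS

        def _cas(self, expected, new):
            with self._guard:                 # pretend this is a single atomic instruction
                if self._value == expected:
                    self._value = new
                    return True
                return False

        def increment(self):
            while True:                       # some process always makes progress
                old = self._value             # "load_linked"
                if self._cas(old, old + 1):   # "store_conditional"
                    return old + 1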
a running example is used to illustrate the language concepts---a reimplementation of the macdraw ii dashed-lines dialog box. the example is in three parts: the input channel, using imperative code to sequence through modes; the output channel, using constraints to update the display; and the internal relations, using constraints to maintain the data objects' consistency requirements. the last sections of the paper discuss views as a natural result of combining objects with constraints, as well as related and future work. bjorn n. freeman-benson an approach to reducing delays in recognizing distributed event occurrences madalene spezialetti letters to the editor corporate linux journal staff multiple views based on unparsing canonical representations - the multiview architecture chris marlin easy character-stream filters with apl2 defined operators michael kent defining and exploring an efficient distributed process for the reuse of ada software components and tools in a global theater: the public ada library richard conn a fully capable bidirectional debugger the goal of this research project is to develop a bidirectional program debugger with which one can move as easily backwards as current debuggers move forward. we believe this will be a vastly more useful debugger. a programmer will be able to start at the manifestation of a bug and proceed backwards investigating how the program arrived at the incorrect state, rather than the current and often tedious practice of the user stepping and breakpointing monotonically forward and then being forced to start over from the beginning if they skip past a point of interest. our experimental debugger has been implemented to work with c and c++ programs on digital/compaq alpha based unix workstations. bob boothe an event trace monitor for the vax 11/780 this paper describes an event trace monitor implemented on version 1.6 of the vms operating system at purdue university. some necessary vms terminology is covered first. the operation of the data gathering mechanism is then explained, and the events currently being gathered are listed. a second program, which reduces the data gathered by the monitor to usable form, is next examined, and some examples depicting its operation are given. the paper concludes with a brief discussion of some of the monitor's uses. stephen tolopka inline function expansion for compiling c programs inline function expansion replaces a function call with the function body. with automatic inline function expansion, programs can be constructed with many small functions to handle complexity and then rely on the compilation to eliminate most of the function calls. therefore, inline expansion serves as a tool for satisfying two conflicting goals: minimizing the complexity of the program development and minimizing the function call overhead of program execution. a simple inline expansion procedure is presented which uses profile information to address three critical issues: code expansion, stack expansion, and unavailable function bodies. experiments show that a large percentage of function calls/returns (about 59%) can be eliminated with a modest code expansion cost (about 17%) for twelve unix* programs. p. p. chang w.-w. hwu exploratory language design edward a. ipser setl-a very high level language oriented to software systems prototyping setl, like apl, is a language which facilitates manipulation of large relatively abstract data objects, namely sets, vectors, and maps, whose elements may in turn be sets, vectors, or maps.
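as a loose analogy (python, not setl syntax), the style of computation over nested sets and maps that setl encourages looks like this:

    # a setl-style computation rendered in python (an analogy only): sets,
    # tuples and maps are the primitive aggregates, and most of the work is
    # done by comprehensions ("set formers") over them
    edges = {(1, 2), (2, 3), (3, 1), (3, 4)}          # a set of pairs, i.e. a map in the setl sense
    nodes = {a for (a, b) in edges} | {b for (a, b) in edges}

    # successors: a map from each node to the set of nodes it points to
    succ = {n: {b for (a, b) in edges if a == n} for n in nodes}

    # one step of transitive closure, expressed as a set former
    closure = edges | {(a, c) for (a, b) in edges for c in succ[b]}
    print(sorted(closure))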
this language has been used to develop executable prototypes of several large software packages, including a full compiler for the new dod ada language. this talk will describe the main semantic features of setl, and assess its use as a software prototyping tool. jack schwartz task coupling and cohesion in ada k w nielsen new challenges for fortran in the next millennium this paper discusses the economics of program optimisation and some of the challenges facing the parallel fortran community. industry takes a much broader view of cost optimisation than just speeding up a fortran application code. two examples of industrial "fortran" projects undertaken by the parallel applications centre (pac) at southampton are used to illustrate these industrial concerns. the promenvir project demonstrated cost-effective use of a europe-wide metacomputer, composed of idle workstations and parallel systems connected by a wan, to explore the design space of an industrial application. by contrast, the toolshed project was concerned with integration of the design and grid generation phase with the simulation phase of the industrial design cycle. the paper concludes with a discussion of three important challenges for the continued health of the parallel fortran community. these are: the multiplicity of versions of parallel fortran; competition from high-level scientific packages such as axiom, matlab and mathematica; and finally, the inexorable rise in popularity of java-based, network-centric computing. tony hey how to write an apl utility function in today's business climate, reusable code is essential. but many programmers often don't use existing utility functions because they find them difficult to use or not general enough. also, they may not know that such functions exist. instead, programmers often clone lines of code from other functions. this results in sloppy, undocumented code which is full of errors. in order to avoid this, the author of a utility function must make an extra effort to ensure that his function is designed properly. apl is easy to learn because its primitives behave consistently, work on arrays as well as scalars, can handle edge conditions, often use default values and are totally encapsulated from the user. we can learn from this by designing utility functions in the same way, allowing them to become an extension of apl and its set of primitives. this paper will show some examples and design techniques. stephen m. mansour a protocol for wait-free, atomic, multi-reader shared variables richard newman-wolfe kernel korner linux 2.2 and the frame-buffer console: wondering about the new frame-buffer features in the kernel? mr. pranevich gives us the scoop joseph pranevich technical correspondence: on steensgaard-madsen's ``a statement-oriented approach to data abstraction'' john r. ellis best of technical support corporate linux journal staff relocating machine instructions by currying norman ramsey a fast method dispatcher for compiled languages with multiple inheritance this paper addresses the problem of an efficient dispatch mechanism in an object-oriented system with multiple inheritance. the solution suggested is a direct table indexed branch such as is used in c++. the table slot assignments are made using a coloring algorithm. the method is applicable to strongly typed languages such as c++ (with multiple inheritance added) and eiffel, and in a slightly slower form to less strongly typed languages like objective c. r. dixon t. mckee m. vaughan p.
schweizer an efficient relevant slicing method for debugging dynamic program slicing methods are widely used for debugging, because many statements can be ignored in the process of localizing a bug. a dynamic program slice with respect to a variable contains only those statements that actually had an influence on this variable. however, during debugging we also need to identify those statements that actually did not affect the variable but could have affected it had they been evaluated differently. a relevant slice includes these potentially affecting statements as well, therefore it is appropriate for debugging. in this paper a forward algorithm is introduced for the computation of relevant slices of programs. the space requirement of this method does not depend on the number of different dynamic slices nor on the size of the execution history, hence it can be applied for real size applications. tibor gyimothy arpad beszedes istvan forgacs airdisks and airraid (expanded extract): modeling and scheduling periodic wireless data broadcast a new generation of low-cost, low-power, and portable personal computer systems is emerging; sometimes these are referred to as palmtops or personal digital assistants (pdas). one of their key features is that they utilize wireless communication media, thus freeing the user from the constraints of wired or tethered communication. in fact, the wireless medium becomes a critical component of the i/o subsystem, allowing communication with fixed servers and other users. in particular, the broadcast nature of the wireless medium can be exploited to efficiently transmit information required by a large number of pda users (e.g. stock quotes, sports updates, etc.), with software on the pda being used to filter the information and present only the information of interest to the pda user. we introduce a simple model, called the airdisk, for modeling the access of data transmitted periodically over wireless media as being analogous to the access of data from a standard magnetic disk. we consider several issues related to airdisks, such as their mean rotational latency under certain assumptions. the problem of scheduling the order in which data items are broadcast is analogous to that of determining how data should be laid out on the disk. two problems of laying out data so as to minimize read time, given information about which data items are of most interest to the clients, are defined; both are shown to be np-complete. we discuss ways in which the information about which items are of interest to clients can be obtained. finally we consider how to increase the performance and storage capacity of airdisks, using the magnetic disk analogy as a guide. we suggest using multiple-track airdisks or borrowing the idea of redundant arrays of inexpensive disks (raid) which is used for magnetic disks; for the wireless data broadcast environment we call the latter approach airraid. ravi jain john werth using the logos programming environment - a case history logos is a unique programming environment for the apl language. it provides a uniform framework within which apl systems can be created and maintained. this paper describes how logos is used to manage and maintain a large, sharp apl-based information management system, viewpoint. a brief overview of both logos and viewpoint is presented, and the steps taken to retrofit viewpoint into the logos environment are traced.
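returning to the airdisk abstract above, a small python sketch (hypothetical function names; clients are assumed to start listening at a uniformly random slot, and every requested item is assumed to appear in the schedule) shows how a periodic broadcast schedule determines the expected wait, the analogue of mean rotational latency:

    def expected_wait(schedule, popularity):
        """expected wait in slots for a client that starts listening at a
        uniformly random point in a periodic broadcast schedule, weighted by
        how often each item is requested (popularity values sum to 1)."""
        n = len(schedule)
        total = 0.0
        for item, p in popularity.items():
            positions = [i for i, x in enumerate(schedule) if x == item]
            waits = []
            for t in range(n):
                # from slot t, the wait is the distance to the next occurrence of item
                waits.append(min((pos - t) % n for pos in positions))
            total += p * (sum(waits) / n)
        return total

    # broadcasting the popular item twice per cycle roughly halves its wait
    flat   = ["a", "b", "c", "d"]
    skewed = ["a", "b", "a", "c"]            # "d" dropped, "a" repeated
    pop = {"a": 0.7, "b": 0.2, "c": 0.1}
    print(expected_wait(flat, pop), expected_wait(skewed, pop))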
emphasis is placed on the practical considerations and consequences of the conversion, and how the facilities of logos are used in practice. examples are given to demonstrate the effect logos had in changing the way problems relating to the organization and management of viewpoint were approached and solved. finally, the paper describes areas in which logos provided techniques to solve problems that might not otherwise have been addressed. steve chapman the treadmill: real-time garbage collection without motion sickness henry g. baker apl: an educator's personal view james e. mckenna supporting ada in a distributed environment this paper describes a set of tools which support a pre-partitioning approach to programming distributed systems in ada. it also considers how these tools can be modified to support fault-tolerant applications. a. d. hutcheon a. j. wellings analyzing systems for object oriented design gregg p. reed donald e. bynum threading lisp d. j. nordstrom interfacing ada 95 to microsoft com and dcom technologies com, the component object model, and dcom, distributed com, were introduced as the underlying technologies of microsoft's second version of ole, but have since proven their utility as a powerful model for distributed systems. this paper intends to introduce com and dcom to the ada community and demonstrate how to create and use objects that conform to these models. david botton pointer analysis for multithreaded programs radu rugina martin rinard analysis of local enumeration and storage schemes in hpf henk j. sips kees van reeuwijk will denissen data locality enhancement by memory reduction in this paper, we propose _memory reduction_ as a new approach to data locality enhancement. under this approach, we use the compiler to reduce the size of the data repeatedly referenced in a collection of nested loops. between their reuses, the data will more likely remain in higher-speed memory devices, such as the cache. specifically, we present an optimal algorithm to combine loop shifting, loop fusion and array contraction to reduce the temporary array storage required to execute a collection of loops. when applied to 20 benchmark programs, our technique reduces the memory requirement, counting both the data and the code, by 51% on average. the transformed programs gain a speedup of 1.40 on average, due to the reduced footprint and, consequently, the improved data locality. yonghong song rong xu cheng wang zhiyuan li implementing software configuration control in the structured programming environment the fundamental problems in the control of software are explored. the elements of control as they relate to communications are defined, and the implementation of these elements in solving the fundamental problems and achieving optimal control during a software development life cycle is explained. control is defined as a vehicle for communicating changes to established, agreed-upon baseline points, made up of documents and subsequent computer programs. by communicating change to those involved or affected, and obtaining agreement of the change, one achieves a degree of control that does not inhibit software engineering innovation or progress, but helps maintain the project's prime objectives to deliver maintainable, error-free software to the ultimate user. h.
ronald berlack a second look at overloading martin odersky philip wadler martin wehr increasing tlb reach using superpages backed by shadow memory the amount of memory that can be accessed without causing a tlb fault, the reach of a tlb, is failing to keep pace with the increasingly large working sets of applications. we propose to extend tlb reach via a novel memory controller tlb (mtlb) that lets us aggressively create superpages from non-contiguous, unaligned regions of physical memory. this flexibility increases the os's ability to use superpages on arbitrary application data. the mtlb supports shadow pages, regions of physical address space for which the mtlb remaps accesses to "real" physical pages. the mtlb preserves per-base-page referenced and dirty bits, which enables the os to swap shadow-backed superpages a page at a time, unlike conventional superpages. simulation of five applications, including two specint95 benchmarks, demonstrated that a modest-sized mtlb improves performance of applications with moderate-to-high tlb miss rates by 5-20%. simulation also showed that this mechanism can more than double the effective reach of a processor tlb with no modification to the processor mmu. mark swanson leigh stoller john carter static slicing of threaded programs jens krinke evolving toward ada in real time systems the ada view of multitasking represents a radical departure from the traditional "cyclic executive" approach to real time operating systems. since system designers must by necessity be conservative, it would be unrealistic to expect an abrupt change of this magnitude in engineering practice. instead, this paper outlines a sequence of intermediate steps designed so that the advantages and familiar structures of cyclic systems may be retained, while the capabilities of ada multitasking are gradually incorporated. a scale of increasing scheduling complexity provides the justification for this sequence. the discussion of each step then briefly mentions some of the related benefits and costs. the paper draws some conclusions about the use of ada in real time systems. lee maclaren adls and dynamic architecture changes nenad medvidovic take command ghostscript: need to preview and print postscript files? here's a utility that will do just that robert a. kiesling report on future software engineering standards direction leonard l. tripp polymorphism considered harmful carl ponder bill bush letters to the editor corporate linux journal staff an effective programmable prefetch engine for on-chip caches tien-fu chen the ml approach to the readable all-purpose language the ideal computer language is seen as one that would be as readable as natural language, and so adaptable that it could serve as the only language a user need ever know. an approach to language design has emerged that shows promise of allowing one to come much closer to that ideal than might reasonably have been expected. using this approach, a language referred to as ml has been developed, and has been implemented as a language-creation system in which user-defined procedures invoked at translation time translate the source to some object code. in this way the user can define both the syntax and the semantics of the source language. both language and implementation are capable of further development. this paper describes the approach, the language, and the implementation and recommends areas for further work. c. r. 
spooner stop the presses: uniforum '97 marjorie richardson scil-vp: a multi-purpose visual programming environment dennis koelma richard van balen arnold smeulders market making for the bazaar bernie thompson component migration with enterprise javabeans (poster session) michael richmond on relative completeness of programming logics in this paper a generalization of a certain lipton's theorem (see lipton [5]) is presented. namely, we show that for a wide class of programming languages the following holds: the set of all partial correctness assertions true in an expressive interpretation i is uniformly decidable (in i) in the theory of i iff the halting problem is decidable for finite interpretations. in effect we show that such limitations as effectiveness or herbrand definability of interpretation (they are relevant in the previous proofs) can be removed in the case of partial correctness. michal grabowski a variable-arity procedural interface this paper presents a procedural interface that handles optional arguments and indefinite numbers of arguments in a convenient and efficient manner without resorting to storing the arguments in a language-dependent data structure. this interface solves many of the problems inherent in the use of lists to store indefinite numbers of arguments. simple recursion can be used to consume such arguments without the complexity problems caused by the use of the lisp procedure apply on argument lists. an extension that supports multiple return values is also presented. r. kent dybvig robert hieb data interpretation and experiment planning in performance tools allen d. malony high level description and implementation of resource schedulers resource sharing problems can be described in three basically independent components. • the constraints the resource places upon sharing because of physical limitations and consistency requirements. • the desired ordering of resource requests to achieve efficiency---either efficiency of resource utilization or efficiency for processes making the requests. • modifications to the ordering, to prevent starvation of processes waiting for requests which might otherwise never receive service. a high level description language to specify these components of resource sharing problems is introduced. an implementation that lends itself to mechanical synthesis is described. synthesis of the scheduler code by-passes the long and error-prone process of someone doing the coding themselves. proof techniques at the high level description level are introduced to show how to prove schedulers, synthesized from their description, are or are not deadlock and starvation free. solutions to the classical resource sharing problems of producer/consumer, reader/writer, and disk scheduler (to the sector level) are shown to illustrate the expressiveness of this description language. dennis w. leinbaugh component-based software using resolve murali sitaraman bruce weide optimizing ada on the fly one of the features that makes ada such a reliable programming language is its use of run-time constraint checks. however, such checks slow program execution and increase program size. the adamagic compiler uses one-pass, on-the-fly flow analysis during intermediate-language generation specifically to eliminate unnecessary constraint checks and to warn about potential error situations such as the use of uninitialized variables. the one-pass nature of our method allows for faster compile times than more elaborate multi-pass flow analysis.
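the adamagic abstract above describes eliminating run-time constraint checks during a single pass; a toy python sketch of the underlying idea (interval tracking over straight-line code, with hypothetical names and nothing like the real intermediate language) is:

    # walk straight-line assignments once, keep a conservative (lo, hi) interval
    # per variable, and keep a run-time range check only when the interval might
    # violate the constraint
    def needs_check(interval, lo, hi):
        known_lo, known_hi = interval
        return not (lo <= known_lo and known_hi <= hi)

    def analyze(assignments, constraint):
        lo, hi = constraint
        intervals, checks = {}, []
        for var, (expr_lo, expr_hi) in assignments:
            intervals[var] = (expr_lo, expr_hi)       # interval of the right-hand side
            if needs_check(intervals[var], lo, hi):
                checks.append(var)                    # this assignment keeps its check
        return checks

    # subtype range 1 .. 10: the literal 7 needs no check, the wider input does
    print(analyze([("x", (7, 7)), ("y", (0, 100))], (1, 10)))   # ['y']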
thus, we are able to generate efficient code without the impact on compile-time, and without the added implementation complexity, associated with a traditional separate optimization phase. sheri j. bernstein robert s. duff understanding concurrent programming through program animation program animation has mainly been developed for sequential programs. animation of concurrent programs is much more uncommon, mainly because of the important technical problems. this paper presents a project whose objective is to animate any concurrent program written in the language portal, a real time language close to modula. the usefulness of animation in the context of teaching is shown by a few examples. m. zimmermann f. perrenoud a. schiper translation of nested apl to c++: problems and approaches this paper discusses the task of developing a nested apl compiler and puts forward the project of such a compiler. the proposed approach consists of simulating the semantics of apl by using the object-oriented c++ programming language. different kinds of problems ranging from apl syntax ambiguity to the implementation of nested arrays and the robustness of compiled code are considered and different solutions are proposed. we conclude that it is feasible to develop a robust nested apl compiler, although the problem of performance demands further research. dmitri gusev igor pospelov design and implementation of a parallel pipe the parallel pipe par_pipe is a communication mechanism. data which are written to the par_pipe by a single process can be read by multiple processes concurrently in a first-in-first-out manner, where all reading processes read the entire data stream. we use a dynamic partitioned buffer limited in size only by available hardware resources to ensure that the writing process cannot be blocked because of slow reading processes. in a further step we examine how far caching with an optimized saving policy on top of the file based buffer can improve the performance of the par_pipe. design and implementation of two versions of the par_pipe (one without caching, one with caching) are described. we outline the general advantages of the par_pipe over conventional unix shell script solutions and evaluate the results of performance measurements done for both par_pipe versions. jonathan karges otto ritter sándor suhai new features in fortran 95 (draft of rationales appendix) craig t. dedo some comments on coding practice w l johnson new products corporate linux journal staff heisenberg uncertainty p. laplante tabled logic programming for verification and program analysis c. r. ramakrishnan cooking with linux: rapid program-delivery morsels, rpm installing and upgrading software need not be difficult--monsieur gagne tempts us with delectable rpm. marcel gagne mechanisms (session summary) takuya katayama led the discussion on mechanisms for software process description. he classified software process views as reflected in the position papers into functional, behavioral and enactional, described some generally desirable properties of software process mechanisms, and listed those projects that seemed to have obtained non-toy experience. he made a plea for each of these projects to answer a questionnaire that, among other things, requested information regarding how the formalism handled ill-behaved processes. one generic example of an ill-behaved process was given: repeat function_1, …, function_n until predicate_1, …, predicate_m are satisfied.
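going back to the par_pipe abstract above, a minimal python sketch of a single-writer, multi-reader fifo in which every reader sees the whole stream (hypothetical names; the real par_pipe uses a file-backed partitioned buffer and an optional cache) is:

    import threading

    class ParPipe:
        """single writer, many readers; every reader consumes the entire stream.
        the buffer grows without bound, so a slow reader never blocks the writer
        (the real par_pipe pages the buffer out to files instead)."""

        def __init__(self):
            self.buf = []                       # grows as the writer appends
            self.positions = {}                 # reader id -> next index to read
            self.closed = False
            self.cond = threading.Condition()

        def write(self, item):
            with self.cond:
                self.buf.append(item)
                self.cond.notify_all()

        def close(self):
            with self.cond:
                self.closed = True
                self.cond.notify_all()

        def read(self, reader_id):
            with self.cond:
                pos = self.positions.get(reader_id, 0)
                while pos == len(self.buf) and not self.closed:
                    self.cond.wait()            # only readers ever wait
                if pos == len(self.buf):
                    return None                 # end of stream
                self.positions[reader_id] = pos + 1
                return self.buf[pos]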
it is not really clear to me that this process is ill-defined, since it seems to be computable using normal means; my opinion is that an ill-defined process is one that is made up and/or arbitrarily changed while it is in progress, in the sense of self-modifying code (e.g., lee osterweil's universal process program --- an unfair characterization since lee didn't show up to defend himself). in any case, katayama's questionnaire will hopefully be addressed in the position papers for 6ispw. the discussion focused on several interesting issues. the most controversial was instigated by mark dowson, when he asked what is the definition of "activity". there were a number of different responses, but most fell into one of three camps. the first, which i'll call notation-oriented, defined an activity as the primitive (according to watts humphrey, the instruction set) of the process modeling notation. in general, the notation might lack facilities to express decomposition of an activity, or at least --- as colin tully states --- we choose to regard such units as decomposable. the second, people-oriented, defined an activity as a primitive in the real-world process being modeled. bill curtis said this might be a job that one person takes responsibility for, without interaction with other people; the process would be defined in terms of such interactions. the third, which i'll call process-oriented, was exemplified by dewayne perry's definition of an activity in terms of causes (preconditions?) and goals (postconditions?) with respect to the process. there was some argument over whether or not such an activity necessarily resulted in an observable change of state. bob balzer pointed out it is an open question as to whether syntax (notation) can realistically be separated from semantics (people and process). there was some argument in all three camps as to whether activities were really primitive, as assumed in the definitions above, or could be decomposable at a lower level of abstraction. one issue is whether or not a particular activity can be decomposed with respect to the question at hand (david carr), for example, the encapsulation might be due to ignorance (sam redwine); alternatively, there may be no need for decomposition (frank belz). one response by mark dowson seems to cut across all three camps: tasks are defined as managed units (i.e., the units are scheduled and allocated resources), while activities are not managed separately from their enclosing tasks. but this leads to the distinction pointed out by peter feiler of machine-level scheduling and resource allocation versus process-level, since everything executed on a computer is scheduled and allocated resources at a fine granularity by the operating system. taly minsky expressed a concern that this entire discussion was at too high a level of abstraction, and we should return to more operational definitions; however, i think that discussion of scheduling and resource allocation is quite concrete, since we can define (computer or bureaucratic) algorithms to carry out these operations. at this juncture, anthony finkelstein pointed out the need for folk classification: what do the people involved in a process think are the meaningful units? this led into a new controversy over the notion of "commitments", which seemed to have at least as many definitions as "activities".
here the division seemed to be over whether the most important concern was why things are being done versus what is being done; there was also an issue over what it meant for commitments to be retracted. (the conversation was rather random, so i can relay only a small subset of the interesting comments made by particular individuals.) bill curtis pointed out that computerized formalization and monitoring of commitments may appear to project personnel as naziware; further, if there is a shared information base for rationale capture, who monitors and cleans it up? sam redwine gave the example of burying a project to track the fulfillment or lack thereof of presidential campaign promises. walt scacchi made a distinction between strong commitments, with real consequences if broken, and weak commitments, which are no big deal; bill curtis mentioned false commitments, based on what is acceptable to management or customers rather than how things are really done. according to anthony finkelstein, there are four kinds of commitment: commitment to a statement, no commitment to a statement, commitment to the negation of a statement, no commitment at all. taly minsky mentioned the commitment (or obligation) to restore inconsistent states to consistency; ian thomas pointed out such a commitment might be from an individual to himself, and not necessarily public. there was a digression and some brief confusion over the use of the term commitment as it relates to transactions, but that was determined to be irrelevant to the discussion at hand. watts humphrey defined a commitment as between people, freely entered into, based on intelligent knowledge, public, with a clear intention of performing, and prior notification and renegotiation if the commitment will not be met. in this sense, project personnel themselves act as two agents with respect to commitments, one that makes commitments while the other monitors whether the commitments are being fulfilled (mark dowson). after the break, two specific projects were briefly discussed. karen huff described her work on rule-based process modeling and planning. she made three main points: (1) components should be defined to permit dynamic construction of processes, in order to accommodate failure and iteration, particularly for ill-defined processes; (2) there is an important distinction between doing versus deciding what to do, i.e., it should be possible to reason about a course of action without committing to start its execution (bob balzer describes this as envisionment versus enaction); and (3) the process should be viewed in terms of its state of completion with respect to its goals rather than as an instruction pointer and a snapshot. she noted that the ability to decide was built-in, as opposed to a separate process. this led quite predictably to a discussion about planning (bob balzer) versus doing (pamela zave) versus planning in response to feedback from doing and measurement of progress (sam redwine). there were a number of questions, mostly from dennis heimbigner and bob balzer, regarding the power of huff's notation (apparently turing-complete in theory, but not fully implemented), its application to representing real processes (primarily unix-based coding and testing), and the difficulties encountered (there is a problem with suspending and then resuming a plan after the world has changed). 
i mentioned the concern about side-effects from what might otherwise be considered an atomic activity, and how such side-effects could be detected (the best we can do is operating system support --- which of course doesn't work for off-line activities) and modeled. wilhelm schaefer delineated the problem of combining process models for individuals, who can change their processes at will. going on to the next brief project description, walt scacchi described his work on a hypertext environment, where an empirical evaluation was made with respect to student groups. the process was defined at the intergroup but not intragroup level, and there was no control over the work division among participants. not surprisingly, unexpected circumstances arose at the intragroup level, which turned out to be decomposable after all and led to the greatest variation in productivity and quality. individuals may introduce new tools spontaneously, and there's no general way to describe this. there is thus a need to produce new processes and apply them on the fly, as well as a need to model interactions among multiple agents whose purpose may be to decide what to do next; since people meander, processes need to be able to meander. people operate in an open world situation, but the process is defined by a closed world formulation. watts humphrey pointed out that processes don't transplant well; outside processes are not accepted and a sense of ownership (achieved through improving the process) is needed. marc kellner brought up the issue of where do processes come from, e.g., dod guidelines. this led to a discussion of descriptive versus prescriptive processes (marc kellner) and abstraction versus instantiation of processes (bob balzer). anthony finkelstein questioned the idea of an emergent process. walt scacchi responded that modeling technology is too heavy to support modeling on the fly, and the effort of changing the model must be less than just doing the work correctly. at the end of the session, colin tully suggested holding a process description contest, with a benchmark process that all participants would define in their own process modeling notations. anthony finkelstein pointed out that this is problematical, however, since different notations address widely differing aspects of the process. it might be most reasonable to have at least two benchmarks, one involving upstream and the other downstream activities, with no expectation that every process modeling notation addresses both areas. hopefully this will be addressed in the call for 6ispw. gail e. kaiser linearizer: a heuristic algorithm for queueing network models of computing systems k. mani chandy doug neuse java will be faster than c++ kirk reinholtz aspectj: the language and support tools complex systems usually contain design units that are logically related to several objects in the system. some examples include: tracing, propagation of interrupts, multi-object protocols, security enforcement, etc. this crosscutting between those design units and the objects is a natural phenomenon. but, using traditional implementation techniques, the source code --- i.e. the classes --- becomes tangled with the implementation of the crosscutting concerns. aspectj is an aspect-oriented extension to the java programming language that enables the clean modularization of crosscutting concerns. using aspectj we can encapsulate in program modules (aspects) the implementation of those design units that would otherwise be spread across the classes.
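as a loose analogy to the aspectj idea of keeping a crosscutting concern out of the classes it affects (python decorators here, not aspectj pointcuts and advice; all names hypothetical):

    import functools

    def traced(cls):
        """crosscutting concern (call tracing) kept in one place and applied to
        whole classes, instead of being tangled into every method body."""
        for name, attr in list(vars(cls).items()):
            if callable(attr) and not name.startswith("_"):
                @functools.wraps(attr)
                def wrapper(self, *args, __orig=attr, __name=name, **kwargs):
                    print(f"enter {cls.__name__}.{__name}")
                    return __orig(self, *args, **kwargs)
                setattr(cls, name, wrapper)
        return cls

    @traced
    class Account:
        def deposit(self, amount):
            return amount

    Account().deposit(10)     # prints "enter Account.deposit"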
this demo illustrates what the aspectj language can do and it shows the tools that support developing programs with this language. we present an example program, and demonstrate the edit-compile-debug cycle in an ide that supports aspectj. erik hilsdale jim hugunin mik kersten gregor kiczales cristina lopes jeffrey palm perspectives on programming environments many people have realized that programming needs more support than a compiler, linker, debugger, and a few other tools. more comprehensive systems have been proposed, built, and (sometimes) used. although the perfect programming environment is yet to be found, people have widely divergent views of the ideal. this paper neither surveys the field nor describes any single environment, but instead offers some attributes by which programming environments can be classified. it frequently uses the unix system to illustrate these attributes, and comments on the implications of the unix system's success for other programming environments. john r. mashey opium: a debugging environment for prolog development and debugging research mireille ducasse anna-maria emde interactive control restructuring jeanette m. bruno daniel j. rosenkrantz the common lisp object system metaobject kernel: a status report the metaobject kernel of the common lisp object system (clos) comprises the classes and methods that define and implement the behavior of the system. since clos is an object-oriented program itself, exposing this kernel allows people to make useful integrated extensions to clos without changing the behavior of the system for ordinary programs, and without unwarranted loss of efficiency. this paper is organized around the classes used to implement clos, describing reasons for the class organization chosen, and how these classes participate in some protocols of the system. daniel g. bobrow gregor kiczales studying the process of software design teams in the 1990s we will begin to see software development environments that embed models of the software process in the control of their tools. these process models will too often be based on traditional life cycle models, models of individual activity, or idealized models of team processes. if these environments are to benefit software development they must support team activity rather than hinder it. the crucial issue in developing technology to aid teams is how well the model implemented by the software coincides with a team's most effective behaviors in performing a task. in order to get a broad perspective on how software design teams solved problems, herb krasner and jeff conklin videotaped 37 team meetings over five months of an mcc team designing an object-oriented database. in a doctoral dissertation using these data, diane walz (1988) devised a method for scoring the verbal protocols of design teams into categories tailored to the design process. her analysis emphasized the team's information requirements and their effect on the group process, especially information sharing. she began by assuming that the conflicts within a software design team were not solely the result of incompatible goals and/or opinions, but also represented the natural dialectic through which knowledge was exchanged and terminology was clarified (walz, elam, krasner, & curtis, 1988). analyses of the protocols and follow-up questions to team members indicated that design meetings were generally dominated by a few individuals to whom fellow participants attributed the greatest breadth of expertise. 
these individuals appeared to form a coalition that controlled the direction of the team. a fundamental problem in building large systems is the development of a common understanding of the requirements and design across the project team. of the few studies of multiple agents performing complex design activities, none have investigated the integration of knowledge across experts in different domains or the communication processes required to develop a common model of the application and the system design across the entire design team. as a result, design in team situations is typically treated as an outgrowth of individual design activities, avoiding the multi-agent problem-solving and communication issues. the transcripts of team meetings reveal the large amounts of time designers spend trying to develop a shared model of the design. for instance, many design meetings may be devoted to filling knowledge gaps, such as lectures about the application (e.g., talks by outside experts on object-oriented programming or the requirements for object repositories). next, the team must come to a common understanding of the semantics of the symbol or diagrammatic system they are using to represent design information. the longer they go without establishing this consensus, the more communication breakdowns will occur. next they must try to comprehend the differences in their initial model of the application and the design solution. without understanding these individualized starting points, the team is unable to detect when a breakdown in establishing consensus is likely to have occurred. finally, they must come to negotiate a common understanding of the architecture. this common model allows them to work on different components of the system without violating interfaces or architectural constraints. problems of this nature usually do not show up until integration test, and are much more expensive to remove than they would have been in the design phase. the existing research on team problem-solving led us to expect monotonically increasing consensus among design team members on design issues. a simplistic model assuming that cooperative design activity requires agreement among team members led us to expect this monotonic increase. however, an interesting pattern in the verbal acts representing agreement within the team was observed across the 17 meetings that constituted the design phase of this project. as the data in figure 1 demonstrate, walz observed a surprising inverted u-shaped curve (verified through logistic regression) that characterized verbal acts of agreement. the level of agreement increased until meetings 7-10, when the design team released a document presenting its functional specification in response to customer requirements. in subsequent meetings the level of agreement began to decrease. there are several possible explanations for the observed pattern. first, there may be an inflection point in the middle of a group process where the team is forced to come together and agree on their technical plan and operating procedures. gersick (1988) observed such a point in a study of project teams. rather than the standard group process of form-storm-norm-perform, gersick suggested there came a point halfway through a group project where the team faced its lack of progress during its early stage, and came to a consensus about how it would attack the objective. often this critical point involved an insight into the problem's structure.
group process was relatively stable and productive until the delivery of the final product. although this model suggests that significant changes occur in a group's process midway through its history, it does not explain the downturn (the drop in consensus) of the inverted u-shaped curve. gersick's model may be more descriptive of temporary teams that are asked to perform tasks out of their area of expertise. a second hypothesis is that this curve results from the integration of two processes occurring in teams. there is an intellectual process of integrating technical ideas and resolving inconsistencies. overlaid on this process is a project process of meeting scheduled milestones. meeting the milestone forced a contrived consensus that did not resolve underlying technical disagreements, but allowed production of a document. however, the existence of this document disguised the lack of intellectual integration that remained in the design. these disagreements began to dominate design meetings immediately after document delivery. having completed their obligations, the team was free to reopen the conflicts that were temporarily suspended to meet the milestone. thus, we would expect this inverted-u phenomenon of agreement to recur whenever the team must achieve a shared (rather than individual) milestone. since we only looked at the design phase, walz could only observe one such curve. had we studied behavior across the entire development phase walz might have uncovered an oscillating curve representing the level of agreement with the upper inflections occurring at deadlines for milestones. however, the magnitude of these oscillations should decrease over time as the team resolved more of their underlying technical disagreements. a third explanation is not unlike the second, but emerges more from modeling the stepwise refinement (decomposition) of the artifact. in this model the team struggles to resolve technical conflicts at the initial level of refinement required of them (e.g., functional specification, system architecture, detailed design, etc.). after the artifact at this level is produced, the next level of refinement presents many new technical issues over which the team must struggle toward consensus. thus, we would again expect to see an oscillating curve of agreement as succeeding stages of refinement present new sets of problems for the team to resolve. these continuing issues would not be surprising since the development team shifts its attention from the early concern with application structure to a later concern with how to optimize the software on the available hardware architecture. this model might differ from the second by not requiring a decreasing magnitude for the oscillations, since each oscillation represents a new wave of problems, rather than the continuing struggle to resolve disagreements that have existed from the project's start. of these three explanations we prefer the second because we believe that on many, perhaps most, real projects there are people present who recognize early some of the fundamental problems that must be resolved. their understanding of these problems cuts across several levels of abstraction or refinement and they are able to articulate the implications of these issues for several levels of refinement early in the design process. the levels of refinement argument may be relevant in that the problems attended to early are those that must be resolved to produce the artifact.
thus, levels of refinement provides a model for selecting among problems in order to make progress toward a milestone. much research lies ahead before we can feel comfortable selecting among these speculations or other explanations of these data. we have concluded that the team design process should be modeled as a multi-agent cognitive process, on which the social processes of role development, coalition formation, communication, etc. are superimposed. in order to explain the team design process, we model group dynamics by their effect on team cognitive processes. bill curtis diane walz joyce elam the integration of application and system based metrics in a parallel program performance tool jeffrey k. hollingsworth r. bruce irvin barton p. miller software reengineering - hype and reality (abstract) stephen sherman configuration management with logical structures yi-jing lin steven p. reiss software re-engineering position statement e. bush algorithm engineering giuseppe cattaneo giuseppe italiano compas: compatible pascal w braunschober algorithm 568. pds - a portable directory system david r. hanson literate programming christopher j. van wyk reducing the cost of regression testing david binkley using abc to prototype vdm specifications aaron kans clive hayton status of object-oriented cobol (panel) joel van stee megan adams dmitry lenkov raymond obin henry saade the role of fourth generation tools in systems development (abstract only) recently many fourth generation tools have become available for commercial use. many companies are using these new tools which automate portions of the application development process. companies are experimenting with new approaches to systems development. information centers and prototyping are two of the new approaches which incorporate fourth generation tools. the potential impact of fourth generation tools on the training of systems analysts will be discussed. attempts to organize material about these tools for effective presentation will be described. carol chrisman barbara beccue toward a comprehensive, knowledge-based software design course many colleges and universities teach courses in the design and implementation of so-called expert systems. such a course might include discussions of topics from traditional (symbolic) artificial intelligence, and culminate in the construction of a prototypical rule-based system. in general, the emphasis is on the use of knowledge compiled and encoded by the designer. here, an alternative approach is suggested in which such a course is broadened to include fundamental topics from the study of artificial neural nets. in this way the student is exposed to the notion that not only can program knowledge be contributed directly by the designers, but might also be extracted by the software from the data itself. david thomas carl steidley ada applications on embedded targets the ada language system/navy (als/n) is a retarget of the ada language system (als) to navy standard 32 bit and 16 bit target computers (an/uyk-43, an/uyk-44, and an/ayk-14). we have developed a run time model which allows large ada applications to be fielded and run on limited address space machines. the run time model can be naturally extended to describe a fully distributed execution environment. michael r. middlemas closeness metrics for system-level functional partitioning frank vahid daniel d. gajski reuse-oriented requirements engineering in nature n. a. m. maiden a pattern oriented technique for software design d. janaki ram k. n. 
anantha raman k. n. guruprasad using partial order techniques to improve performance of data flow analysis based verification partial order optimization techniques for distributed systems improve the performance of finite state verification approaches by avoiding redundant exploration of some portions of the state space. previously, such techniques have been applied in the context of model checking approaches. in this paper we propose a partial order optimization of the program model used by flavers, a data flow based finite state verification approach for checking user-specified properties of distributed software. we demonstrate experimentally that this optimization often leads to significant reductions in the run time of the analysis algorithm of flavers. on average, for those cases where this optimization could be applied, we observed a speedup of 21%. for one of the cases, the optimization resulted in an analysis speedup of 91%. gleb naumovich lori a. clarke jamieson m. cobleigh design of the s system for data analysis s is a language and system for interactive data analysis and graphics. it emphasizes interactive analysis and graphics, ease of use, flexibility, and extensibility. while sharing many characteristics with other statistical systems, s differs significantly in its design goals, its implementation, and the way it is used. this paper presents some of the design concepts and implementation techniques in s and relates these general ideas in computing to the specific design goals for s and to other statistical systems. richard a. becker john m. chambers formal specification and design of a message router formal derivation refers to a family of design techniques that entail the development of programs which are guaranteed to be correct by construction. only limited industrial use of such techniques (e.g., unity-style specification refinement) has been reported in the literature, and there is a great need for methodological developments aimed at facilitating their application to complex problems. this article examines the formal specification and design of a message router in an attempt to identify those methodological elements that are likely to contribute to successful industrial uses of program derivation. although the message router cannot be characterized as being industrial grade, it is a sophisticated problem that poses significant specification and design challenges---its apparent simplicity is rather deceiving. the main body of the article consists of a complete formal specification of the router and a series of successive refinements that eventually lead to an immediate construction of a correct unity program. each refinement is accompanied by its design rationale and is explained in a manner accessible to a broad audience. we use this example to make the case that program derivation provides a good basis for introducing rigor in the design strategy, regardless of the degrees of formality one is willing to consider. christian creveuil gruia- catalin roman the boyer benchmark meets linear logic henry g. baker a program integration algorithm that accommodates semantics-preserving transformations given a program base and two variants, a and b, each created by modifying separate copies of base, the goal of program integration is to determine whether the modifications interfere, and if they do not, to create an integrated program that includes both sets of changes as well as the portions of base preserved in both variants. 
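to make the setting concrete, here is a deliberately naive, purely line-based three-way merge in python; it is a hypothetical sketch (the merge3 function and the sample variants are invented, not taken from the paper), and the sentences that follow explain why such text-based integration is unsatisfactory.

    # naive line-based three-way merge (hypothetical sketch).
    # assumes the three versions have the same number of lines, so it
    # sidesteps the alignment problem that a real diff-based merge solves.
    def merge3(base, a, b):
        merged, conflicts = [], []
        for i, (old, va, vb) in enumerate(zip(base, a, b)):
            if va == vb:                 # both variants agree
                merged.append(va)
            elif va == old:              # only b changed this line
                merged.append(vb)
            elif vb == old:              # only a changed this line
                merged.append(va)
            else:                        # both changed it differently
                conflicts.append(i)
                merged.append(old)
        return merged, conflicts

    base = ["x = f(1)", "print(x)"]
    a    = ["x = g(1)", "print(x)"]      # variant a changes how x is computed
    b    = ["x = f(1)", "print(x + 1)"]  # variant b changes a use of x
    print(merge3(base, a, b))            # textually clean, semantically unchecked

the merge reports no conflict even though the two edits may interact semantically; closing exactly that gap is the point of the dependence-based integration algorithms discussed next.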
text-based integration techniques, such as the one used by the unix diff3 utility, are obviously unsatisfactory because one has no guarantees about how the execution behavior of the integrated program relates to the behaviors of base, a, and b. the first program-integration algorithm to provide such guarantees was developed by horwitz et al.[13]. however, a limitation of that algorithm is that it incorporates no notion of semantics-preserving transformations. this limitation causes the algorithm to be overly conservative in its definition of interference. for example, if one variant changes the way a computation is performed (without changing the values computed) while the other variant adds code that uses the result of the computation, the algorithm would classify those changes as interfering. this paper describes a new integration algorithm that is able to accommodate semantics-preserving transformations. wuu yang susan horwitz thomas reps concurrent programming in orient84/k: an object-oriented knowledge representation language mario tokoro yutaka ishikawa clock resolution and the piwg benchmark suite robert h. pollack david j. campbell experiences with cluster and class testing gail c. murphy paul townsend pok sze wong the recursive application of the object-oriented design methodology (abstract only) the object-oriented design (ood) methodology has generated much interest in the ada* community as a way of exploiting the features of ada to achieve the goals of software engineering. when used in the design of a large system, it must be recursively applied at successive levels of abstraction. there has been, however, little actual experience with this. the author has applied ood recursively to the design of a large system. the major issues involved in this will be presented, including package creation, type decomposition, operation absorption, and bottom-up use of resources. mark temte part i: the resolve framework and discipline: a research synopsis william f. ogden murali sitaraman bruce w. weide stuart h. zweben learning forth with modular forth paul frenger apl event / response programming a proposed mechanism is described by which responses can be programmed to react to apl events. events can be trapped, and a response is executed either where the trap is laid, or where the event occurs. david a. rabenhorst new standard for stored procedures in sql andrew eisenberg concurrency annotations klaus-peter löhr better c: an object-oriented c language with automatic memory manager suitable for interactive applications thomas wang operating systems in a changing world maurice wilkes evaluation of software safety n. b. leveson objective viewpoint: java awt layout management 101 george crawford when goto goes to, how does it get there? g a creak protocol-based data-race detection brad richards james r. larus engineering a simple, efficient code-generator generator many code-generator generators use tree pattern matching and dynamic programming. this paper describes a simple program that generates matchers that are fast, compact, and easy to understand. it is simpler than common alternatives: 200--700 lines of icon or 950 lines of c versus 3000 lines of c for twig and 5000 for burg. its matchers run up to 25 times faster than twig's. they are necessarily slower than burg's burs (bottom-up rewrite system) matchers, but they are more flexible and still practical. christopher w. fraser david r. hanson todd a. 
proebsting poor man's templates in fortran 90 van snyder compiler construction using attribute grammars the adequacy of attribute grammars as a compiler writing tool is studied on the basis of the experiences on attribute grammars for pascal and a subset of euclid. a qualitative assessment of the grammars shows that the compiler oriented view in the design of an attribute grammar tends to make the grammar hard to understand. a design discipline is proposed to improve the comprehensibility of the grammar. quantitative measurements of the automatically generated compilers suggest that an efficient compiler can be produced from an attribute grammar. to achieve this, a carefully optimized implementation of the compiler-compiler is required. kai koskimies kari-jouko räihä matti sarjakoski the sei environment peter h. feller ada-based abstract machine specification of cais to generate validation tests timothy e. lindquist jeffrey l. facemire getting the most out of x resources always wanted to change the look of x windows? here are the tools to do it easily preston brown ansi basic - the proposed standard the proposed standard for the programming language basic, although not yet approved, is sufficiently stable to permit examination and discussion. it contains, for example the usual structural constructs, multicharacter variable names, external subroutines, two-dimensional plotting, and several simple file types. optional extensions include additional file types, real-time, and a fixed-decimal module for business applications. the panel will present the main features of the proposed standard, will briefly discuss these features, and will accept questions about the standard. (two of the panelists are members of ansi committee x3j2, charged with developing the standard.) ronald e. anderson foundations for the study of software architecture dewayne e. perry alexander l. wolf how much non-strictness do lenient programs require? klaus e. schauser seth c. goldstein scheduling garbage collector for embedded real-time systems taehyoun kim naehyuck chang namyun kim heonshik shin accessibility rules ok! this paper shows that the intuitive notion of accessibility is correctly met by the rules of ada 9x. in particular, it shows that despite the notion of accessibility being intrinsically dynamic, nevertheless the rules can in the majority of situations be checked statically without any loss of capability. it also explores the impact of access parameters and shows that the mechanism suggested in the annotated reference manual [aarm] does indeed exactly map onto the requirements of the rules. john barnes combined path and server selection in dynamic multimedia environments zhenghua fu nalini venkatasubramanian designing in the dark: logics that compete with the user skills developed by software user interface designers to solve problems in communication, management, implementation, and other areas may influence design decisions in the absence of sufficient knowledge of user populations. given today's rapid change in both "faces" to the software interface --- user populations and software functionality --- the first pass at a design may be made without sufficient understanding of the relevant goals and behaviors of the eventual users. without this information, designers are less able to grasp "user logic", and may rely on more familiar "logics" that are useful in other problem-solving arenas. 
understanding how these approaches can affect a design may help us recognize them across a wide range of contexts and enable us to focus the human factors contribution to the design evolution process. j. grudin composing crosscutting concerns using composition filters lodewijk bergmans mehmet aksits resource abstract data type + synchronization - a methodology for message oriented programming - we present in this paper a methodology for the development (and analysis) of programs designed specifically for distributed environments where synchronization is achieved through message passing. the methodology is based on techniques and concepts which have been found to be useful for the development of sequential programs---namely, stepwise refinement and abstract data types. the methodology is based on the concept of resource, generalizing the concepts of monitors, managers, proprietors, etc. we put forward the proposition that a resource is an abstract data type together with mechanisms for synchronization: firstly, for the operations of the type with each other (to gain parallelism) and, secondly, to enable the user environment to perform operation invocation. a methodology is then presented for the design of resources and their implementation. p. r.f. cunha t. s.e. maibaum transition to object-oriented software development mohamed e. fayad wei-tek tsai milton l. fulghum tina ace: an environment for specifying, developing and generating tina services piergiorgio bosco giovanni martini corrado moiso using a global name space for parallel execution of unix tools a parallel make utility that executes on multiple workstations and achieves a significant real-time speedup makes work with distributed computing easy to conduct. charles j. fleckenstein david hemmendinger implementation of a "lisp comprehension" macro guy lapalme a performance comparison between an apl interpreter and compiler an analysis of the execution of several simple apl statements illustrates that interpretive overhead in the form of setup time is awesome, and is on the order of 100 times as much as the per-element time. comparison with an experimental compiler for apl illustrates that the setup time can often be eliminated or reduced greatly, producing code fragments that are more than 300 times as fast in favorable cases. further work on the compiler should make some compiled code fragments about 500 times as fast as interpreted programs in favorable cases. in less favorable cases, performance improvements of a factor of two are commonplace. many opportunities remain to improve the performance in such cases. a production compiler based on this work might allow use of apl to solve problems that in the past have been prohibitively expensive for apl solution. clark wiedmann catastrophe theory: its application in operating system design s j pratt technical correspondence the column, "benchmarks for lan performance evaluation," by larry press (aug. 1988, pp. 1014-1017) presented a technique for quickly benchmarking the performance of lans in an office environment. this piqued our interest since office automation is growing in importance. as a result, an empirical analysis of the press benchmark programs was conducted. the results indicated that these benchmarking programs were appropriate for the benchmarking of lans in an office environment. diane crawford smart recompilation with current compiler technology, changing a single line in a large software system may trigger massive recompilations. 
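before the abstract continues, a hypothetical python sketch (with made-up unit names) of the conventional dependency-closure scheme it improves on: every unit that transitively depends on a changed file is recompiled, whether or not the change could actually affect it.

    # conventional dependency-closure recompilation (hypothetical sketch).
    # deps maps each compilation unit to the units it depends on.
    deps = {
        "main": ["shared_decls", "util"],
        "util": ["shared_decls"],
        "shared_decls": [],
    }

    def units_to_recompile(changed, deps):
        """return every unit that transitively depends on a changed unit."""
        dirty = set(changed)
        added = True
        while added:                 # iterate to a fixed point
            added = False
            for unit, uses in deps.items():
                if unit not in dirty and any(u in dirty for u in uses):
                    dirty.add(unit)
                    added = True
        return dirty

    # a one-line edit to shared_decls forces recompiling everything,
    # even if the edited declaration is referenced nowhere else.
    print(units_to_recompile({"shared_decls"}, deps))

smart recompilation, as the following abstract explains, narrows this set by comparing what actually changed against what each dependent unit actually uses.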
if the change occurs in a file with shared declarations, all compilation units depending upon that file must be recompiled to assure consistency. however, many of those recompilations may be redundant, because the change may affect only a small fraction of the overall system. smart recompilation is a method for reducing the set of modules that must be recompiled after a change. the method determines whether recompilation is necessary by isolating the differences among program modules and analyzing the effect of changes. the method is applicable to languages with and without overloading. a prototype demonstrates that the method is efficient and can be added with modest effort to existing compilers. walter f. tichy implications of automated restructuring of cobol j c miller b m strauss experience and evolution of concurrent smalltalk concurrentsmalltalk is an object-oriented concurrent programming language/system which has been running since late 1985. concurrentsmalltalk has the following features: upper-compatibility with smalltalk-80. asynchronous method calls and cbox objects yield concurrency. atomic objects have the property of running one at a time so that each can serialize the many requests sent to it. through experience in writing programs, some disadvantages have become apparent related to concurrency control and the behavior of a block context. in this paper, these issues are re-examined in detail, and then the evolution of the solutions for overcoming these shortcomings is described along with the model of computation in concurrentsmalltalk. new features are explained with an example program. the implementation of the concurrentsmalltalk virtual machine is also presented along with its evaluation. yasuhiko yokote mario tokoro adaptive storage control for page frame supply in large scale computer systems a real storage management algorithm called adaptive control of page-frame supply (acps) is described. acps employs three strategies: prediction of the demand for real page frames, page replacement based on the prediction, and working set control. together, these strategies constitute the real page frame allocation method, and contribute to short and stable response times in conversational processing environments. acps is experimentally applied to the vos3 operating system. evaluation of acps on a real machine shows that tss response times are not affected too strongly by king-size jobs and acps is successful in avoiding paging delay and thrashing. acps prevents extreme shortages of real storage in almost all cases. y. yoshizawa t. arai the atmada environment: an enhanced apse e r matthews w lively beyond structured programming s. pan r. g. dromey jakarta tool suite (jts) don batory declarative information in software manuals: what's the use? nicole ummelen type resolution in ada: an implementation report various features of ada [ichbiah 79, honeywell 80] make type resolution an interesting and difficult problem; this report describes type resolution as implemented in a semantic analyzer for ada built in 1979-80. first, a straightforward algorithm, similar to that of ganzinger and ripken [ganzinger 80], is discussed. next an optimized version of this algorithm is presented. 
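as a reference point for that discussion, here is a minimal python sketch of a straightforward two-pass scheme of this general kind (bottom-up collection of candidate types followed by top-down filtering against the expected type); the toy expression form and signature table are invented, and this is not the published algorithm itself.

    # toy two-pass overload resolution (hypothetical sketch).
    # an expression is a tuple: ("lit",) or ("+", arg1, arg2).
    SIGS = {   # overloaded signatures: name -> list of (arg_types, result)
        "+": [(("int", "int"), "int"), (("real", "real"), "real")],
        "lit": [((), "int"), ((), "real")],   # a literal could be int or real
    }

    def collect(expr):
        """bottom-up: the set of result types each subexpression could have."""
        name, args = expr[0], expr[1:]
        arg_sets = [collect(a) for a in args]
        results = set()
        for arg_types, res in SIGS[name]:
            if len(arg_types) == len(args) and all(
                    t in s for t, s in zip(arg_types, arg_sets)):
                results.add(res)
        return results

    def resolve(expr, expected):
        """top-down: pick the unique signature whose result matches expected."""
        name, args = expr[0], expr[1:]
        matches = [sig for sig in SIGS[name]
                   if sig[1] == expected
                   and all(sig[0][i] in collect(a) for i, a in enumerate(args))]
        if len(matches) != 1:
            raise TypeError("ambiguous or unresolvable: %r" % (expr,))
        arg_types, _ = matches[0]
        return (name, [resolve(a, t) for a, t in zip(args, arg_types)])

    e = ("+", ("lit",), ("lit",))
    print(resolve(e, "int"))   # forces the int interpretation of both literals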
the optimization is based on the idea that, in a tree-walking analysis in which the information developed on one branch can affect the analysis elsewhere, and where the difficulty of analysis is not uniform, analysis should be performed a little at a time wherever it is most likely to be useful rather than according to any data-independent, pre-planned method such as bottom-up or top-down. this raises the likelihood that the analysis of "simple" branches will occur early and so ease the computational cost of analyzing the more difficult branches. peter a. belmont issues and challenges in developing embedded software for information appliances and telecommunication terminals in ryu from the editor: the trouble with the bastard operator from hell don marti interactive verification of communication software on the basis of cil the cil-approach for the development of communication services is based on the programming language cil (communication service implementation language) and a cil-compatible theory of program execution. the theory contains a first-order predicate calculus and an event-oriented model of program execution. the verification of programs written in cil is supported by the automated generation of program axioms and by an interactive theorem prover tailored to the predicate calculus. interactive verification during the design phase leads to early detection and localization of design errors and helps to reduce the efforts for debugging and testing. the paper describes the principles of the language, the theory, and the interactive verification tool. the design of a program realizing a transport service exemplifies the cil- approach. h. krumm o. drobnik apl86 progress report dick bowman synchronization of the producer/consumer problem using semaphores, monitors, and the ada rendezvous ralph c. hilzer components, frameworks, patterns ralph e. johnson the aida experiment f. mazzanti on built-in tests and reuse in object-oriented programming yingxu wang graham king dilip patel ian court geoff staples margaret ross shushma patel reuse: where to begin and why the software engineering institute (sei) is interested in identifying the costs and benefits of software reuse to the mission critical computer resource (mccr) community. in fulfilling this role, we were faced with the need to investigate reuse without making a large investment. this paper examines where to start a reuse activity by describing our initial view of reuse, our decisions on where to begin, what lessons we learned, and finally, our current view. the reuse life cycle described in our final view gives more insight on how and where to implement reuse. it defines the phases of a reuse life cycle that begin with business planning and shows their relationships. for each phase of the reuse life cycle, we give the goals, the outputs, and an approach for achieving the goals. each organization that is interested in obtaining the benefits of reuse must evaluate reuse in terms of business goals and objectives. once this evaluation has been made, the life cycle described in the paper provides one approach to achieving the benefits of reuse. r. holibaugh s. cohen k. kang s. peterson surfing the net for software engineering notes mark doernhoefer what's gnu: bash-the gnu shell chet ramey semaphores for fair scheduling monitor conditions neil dunstan computer-aided software engineering: present status and future directions m. chen j. f. nunamaker e. s. 
weber building generic user interface tools: an experience with multiple inheritance nuno guimarães data transfer between java applets and legacy apl systems the rise of internet technologies (particularly java) provides many benefits for the development and deployment of user interfaces. in many cases, however, the back end system is behind the times: internet hostile, no object orientation, etc. how can data be transferred between the new generation front end and the old generation back end without compromising the strengths or integrity of either? this paper discusses the use of customised java data serialisation to achieve this goal against a large ibm mvs sharp apl system. b. amos g. disney d. sorrey disaster recovery something is wrong, now what: this article will help you figure out what went wrong, how to get started on fixing it, or how to prepare for possible crashes. mark f. komarinski heterogeneous configuration management with dsee david lubkin efficient data breakpoints robert wahbe writing applications for uniform operation on a mainframe or pc: a metric conversion program the metric system of measurement is the primary standard in all countries except the usa and two others. use of the metric system is becoming more important to the usa for trade and commerce in the world economy. a metric conversion program was developed to convert 350 measurement units between inch-pound (or usa customary) and metric systems for engineering design and documentation. the program follows the primary national metric standard with its conversion factors and special rules for arithmetic, rounding, accuracy, formatting, and terminology. it was first developed in apl on a vm cms mainframe system, but subsequent demand warranted a pc version. the program has been presented at national metric meetings and is briefly described here. the portability of apl2 under vm cms and pc dos was exploited in moving the mainframe program to the pc, maintaining much of the code in identical form, thus giving advantages in testing, verification, and user experience. differences between the environments fell mainly in the user and file interface areas, which led to a matched set of cover functions for system dependent operations. these functions and differences between the two systems are described for file operations, pop-up windows, printing, stack and host system commands. the capability and portability of apl enabled this application to become the first internally written example at this company of ibm's new user interface guidelines working on both a central mainframe and a pc. charles a. schulz linux gazette an intranet filing system: mr. seiferth offers us a solution for keeping track of shared files over an intranet that utilizes several operating systems justin seiferth an object-oriented tower of babel michael l. nelson panel on design methodology reid smith an overview of the state of the art in software architecture dewayne e. perry fortran 90 performance problems timothy c. prince first-class schedules and virtual maps rajiv mirani paul hudak automated interface code generation from ada specifications mapping data structures between incompatible representations remains an error-prone and tedious part of program development. this letter outlines a method for automating interface code generation based on ada package specifications and the use of ada keywords as interface mapping directives. 
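to give a flavor of such spec-driven generation, here is a hypothetical python sketch with an invented field table and conversion map; the actual scheme works from ada package specifications and keywords, which this toy does not attempt to parse.

    # hypothetical sketch: emit conversion code from a small interface spec.
    # each field is (name, source_type, target_type); the tables are invented.
    SPEC = [
        ("altitude", "feet_int16", "metres_float"),
        ("callsign", "fixed_string8", "utf8_string"),
    ]

    CONVERTERS = {                       # invented conversion templates
        ("feet_int16", "metres_float"): "float(raw) * 0.3048",
        ("fixed_string8", "utf8_string"): "raw.rstrip().decode('utf-8')",
    }

    def generate(spec):
        lines = ["def convert_record(rec):", "    out = {}"]
        for name, src, dst in spec:
            expr = CONVERTERS[(src, dst)].replace("raw", "rec[%r]" % name)
            lines.append("    out[%r] = %s  # %s -> %s" % (name, expr, src, dst))
        lines.append("    return out")
        return "\n".join(lines)

    print(generate(SPEC))   # callers see one entry point; the details stay generated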
this concept supports information hiding by placing the details of the generated interface code in the package body. paul r. pukite policies (session summary) the session on policies was led by mark dowson as keynoter. a more detailed description of this session was phrased as "discussion of experience with domains in actual models --- the semantic concerns of process models. what are lessons to be learned about model specific semantics? model independent semantics?". in his presentation mark dowson focused on the term policies. policies were described as constraints that facilitate coordinated performance of process steps by multiple agents. different kinds of policies exist, and there are different forms of policies. issues regarding the relationship between policies and processes were raised, and ways of applying policies were discussed. formal and informal as well as automated and manual policies and processes involving humans both at the organizational level and at the level of individuals were considered. the discussion generated by the presentation was lively. examples of processes and policies in a variety of domains including non-software engineering domains were presented. a spectrum of terms were used for the notion of policy ranging from laws and standards to procedures and methods. in the following the reader will find a capsule summary of the findings. this summary does not reflect the flow of the discussions, nor does it include all the examples mentioned. instead, the summary attempts to present the essence of the messages communicated by the participants by abstracting out some of the characteristics of policies. policies can be described as constraints with respect to certain processes. they are statements either in terms of the notation describing the process, or in terms of a notation whose interpretation establishes a mapping to the process. there are different degrees of compliance to these constraints and there are a number of ways this compliance can be achieved. in different domains people have identified constraints with certain characteristics and given them special labels. this was evident in the discussion by the usage of terms such as advice, culture, guideline, goal, law, method, objective, order, policy, practice, preference, procedure, rule, standard, strategy, etc. some of these terms imply particular degrees of compliance and forms of enforcement, while others imply that the constraints apply to certain types of processes and that the constraints may be in terms of the process, in terms of an abstraction of the process, or in terms of the process of managing the execution of a process --- the latter two requiring interpretation to establish a mapping between the constraint and the process. in the remainder of this discussion we will use the term policy to mean a constraint. processes and policies can be characterized according to whether they have an explicit or implicit representation, whether their representation is formal or informal, whether the process and the policies are described in the same or different representations, and whether they are interpreted manually or automatically. processes may not have an explicit representation. this is the case with processes that are performed by humans, have evolved and are part of their culture, but have not been documented. similarly, policies may not have an explicit representation. they may be part of undocumented cultural guidelines. 
they may not be explicitly represented themselves, but may be embedded in a process that has an explicit representation. the representation used to describe processes and policies may have different degrees of formality ranging from natural language and stylized natural language to formal notations with well-defined semantics based on a formal theory. informal representations require interpretation by humans, while formal descriptions can be interpreted both by humans and by automation tools. the interpretation of a formal representation by an automation tool can be for the purpose of validating the static description, or for the enactment of a process program. processes and policies can be combined in several ways. the first way is process construction through a human. policies, described informally or formally, are examined by the human and reflected in the process definition. in the second way, process definitions and policy definitions exist as formal but separate notations. they are supplied to a process-driven environment. this environment enacts the process and interprets the policies to check for their compliance. in the third arrangement, both process and policy definitions are described using the same notation and interpreted by a process-driven environment. one way of visualizing the enactment in such an environment is that both processes and policies are being executed as cooperating processes. certain execution events are passed to the policy process. synchronous verification of execution events corresponds to enforcement of policies, while asynchronous monitoring of events corresponds to checking for compliance. in the fourth way, process descriptions are refined from process templates. policies are examined to make sure that the refinements are not in violation. in this case checking of policy compliance is attempted statically. the final way is a process construction process similar to the first way. the difference is that policies are formally expressed and the generation of process definitions is performed automatically. several of the above methods embed policies in the process and by enforcing the enactment of the process enforce the compliance of the policy. such an approach allows for the certification of a process to satisfy certain policies. the certification is dependent on the compliant enactment of the process. other methods, especially ones involving interpretation of either policies or processes by humans, have a scale of compliance. different degrees of compliance effectively provide different degrees of flexibility. flexibility is necessary to handle exceptions, especially in processes involving humans. compliance of a policy is always relative to the process it is applied to. for example, there may be a policy for always having a testing step done. this policy may be fully complied to. however, this policy does not specify anything regarding the quality of the tests to be applied. compliance to policies is only meaningful if there is accountability and a penalty for non-compliance. if there is no penalty for non-compliance, then there is no forcing function for satisfying a policy, and no purpose for the policy. in general, policies can be interpreted two ways. policies can either be viewed as restrictions, a mechanism for controlling the process, or they can be viewed as a facility for specifying the scope of autonomy --- allowing for authority and freedom by specifying the policies at the appropriate level of abstraction and by separating concerns. 
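a small python sketch of the third arrangement, with invented event and policy names: the enacted process emits events, and policy predicates are checked against the event log, synchronously for enforcement and after the fact for compliance checking.

    # hypothetical sketch: policies as predicates over process execution events.
    events = []                      # the enacted process appends events here

    def emit(kind, **data):
        events.append((kind, data))
        for name, check in POLICIES:             # synchronous check = enforcement
            if not check(events):
                raise RuntimeError("policy violated: " + name)

    # an invented policy: a release may only occur after a test step has run.
    def tested_before_release(log):
        released = any(k == "release" for k, _ in log)
        tested = any(k == "test" for k, _ in log)
        return tested or not released

    POLICIES = [("tested_before_release", tested_before_release)]

    emit("design", doc="functional_spec")
    emit("test", suite="regression")
    emit("release", version="1.0")               # fine: a test step precedes it

    def audit(log, policies):                     # asynchronous check = compliance
        return [name for name, check in policies if not check(log)]

    print(audit(events, POLICIES))                # [] means no violations recorded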
processes may be constrained by a number of policies. policies may be in conflict with each other. such conflicts must be recognizable. in informal policies it is often left to the person interpreting the policies to determine how to resolve the conflict if and when it is detected. in many systems being modelled by processes and policies, priorities are assigned to policies specifying a precedence ordering regarding their compliance. basically, the penalties for non-compliance of different policies are weighed against each other. some systems allow for policies and processes to be changed. in such circumstances, processes and policies can be adapted to avoid conflicts between policies or between policies and processes. for example, in business organizations there is a hierarchy of policies. at the top level there are corporate policies. at the divisional level there are policies referred to as practices. finally, these get refined into operational policies called procedures. a change to a corporate policy can cause conflict with other corporate policies, which can be resolved before it goes into effect. the new corporate policy also affects practices. possible conflicts can be resolved by adapting the practices to the new policy, and by recommendation for adjustment of the corporate policy for otherwise unresolvable conflicts. the application of a policy to a process is a process itself. as discussed above this process can take on many forms. note, however, that being a process, it can be governed by policies. the result is that we have policies on (the application of) policies. to be more exact these policies divide into policies on the creation of policies, policies on the evolution, i.e., change, of policies, and policies on applying and verifying compliance of policies. this leads to a model of an organization as a growing organism. legislation is a basis for evolution of an organization. this is considered a deep issue. in an organization there are processes for producing products. these processes are managed. management is a collection of processes itself. some processes are concerned with monitoring and improving the production processes. other processes are concerned with resource allocation across parts of the organization. the management processes are under scrutiny of other management processes. those are the processes that have delegated responsibility for the execution of the particular process, and processes responsible for monitoring and improving management processes. in effect an organization can be viewed as the bootstrapping and on-the-fly evolution of a system of processes and policies. during the discussions a number of properties for policies and for commitment to policies were collected. properties of policies included genesis, scope, binding, change, responsibility, interpretation, enaction, consistency analysis, validation. properties of commitment included commitment to whom, commitment by whom, commitment known by whom, conditions of commitment, monitoring and reporting of compliance to commitment, and credibility of commitment. in summary, a number of other disciplines have dealt with the modelling of processes and their constraints. relevant disciplines include cybernetics, social science, cognitive science, behavioral science, public policy, sociology, office automation, economics, etc. 
it was remarked that the participants of this workshop should investigate the work of those disciplines in order to benefit from their generalizations of process modelling and avoid reinventing the proverbial wheel. peter h. feiler knowledge depot richard taylor david redmiles shuffle languages, petri nets, and context-sensitive grammars flow expressions have been proposed as an extension of the regular expressions designed to model concurrency. we examine a simplification of these flow expressions which we call shuffle expressions. we introduce two types of machines to aid in recognizing shuffle languages and show that one such machine may be equivalent to a petri net. in addition, closure and containment properties of the related language classes are investigated, and we show that one machine type recognizes at least a restricted class of shuffle languages. finally, grammars for all shuffle languages are generated, and the shuffle languages are shown to be context-sensitive. jay gischer making slicing practical: the final mile much progress has been made in the precision and performance of program slicers, but many challenges remain, such as cost-effective implementation and finding a role for slicing in software development. william g. griswold technical correspondence: on francez's "distributed termination" c. mohan lessons of current environments this panel will examine and evaluate some notable current and recent software environment efforts from the perspective of this conference---namely that environments should be viewed as vehicles for fostering the automation of software processes. at this conference we are exploring the premise that software is a product whose creation and evolution can and should be carried out according to the dictates of systematic and orderly processes. investigations such as software lifecycle modeling efforts have attempted to help us understand the nature of these processes by identifying major subactivities, subproducts and information flows. work on software environments has, in the past, emphasized the integration of tools, but has all too often not addressed the issue of how these integrated tools support software processes. fortunately there is an increasing realization that such process support is the goal of software environments, and more recent environments are addressing that goal more sharply. in this panel we shall attempt to review some notable and influential tool integration and environment development efforts from this more contemporary perspective in an attempt to perceive current trends and directions more clearly and to more sharply identify the key contributions which this work has made. as an aid to setting this tone and perspective for the panel, we shall begin with a presentation by nelson weiderman, for the software engineering institute, carnegie-mellon university, pittsburgh, pa, usa. dr. weiderman will summarize a paper which he coauthored with a. n. habermann, m.w. borger and m.h. klein in which they suggest that environments should be evaluated in precisely this way. their paper details a flow of methodological steps for evaluation of toolsets and environments which stresses that such systems should be measured against a clear enunciation of the software activities which the tools and environments are purported to support. the paper goes on to describe how dynamic testing of subject toolsets should be organized to gain quantitative insights into the effectiveness of the subject systems in supporting the identified processes. 
the significance of this evaluative approach is at least twofold. first it forces environment developers to recognize that the key result of their work is support of processes, and second it forces all of us as a community to realize that we must increasingly work to understand the set of activities which comprise software processes and the software objects which comprise our products. these realizations will help lead us to the more effective environments of the coming decade. after dr. weiderman's opening presentation there will be six subsequent presentations of current and recent work. these presentations have been carefully selected to represent a variety of approaches to providing support for a variety of client communities in a variety of locations. each presenter has been asked to consider the way in which his system supports a particular process or subactivities of a process; any architectural features which distinguish or characterize his system; how effective he believes his system has been in conveying support for the portion of the process which has been identified; and whether the architectural approach taken has or has not been particularly useful in helping to achieve the process support goals for the system. l. osterweil a cost-comparison approach for adaptive distributed shared memory jai-hoon kim nitin h. vaidya object-oriented programming in c++ - a case study r s wiener guarded page tables on mips r4600 or an exercise in architecture-dependent micro optimization guarded page tables implement huge sparsely occupied address spaces efficiently and have the advantages of multi-level tables (tree structure, hierarchy, sharing). we present an implementation of guarded page tables on the r4600 processor. the paper describes both the architecture-dependent design process of the algorithms and the resulting tool box. jochen liedtke kevin elphinstone where does architecture research meet practice? jim q. ning distributed and fault tolerant systems (session summary) the workshop's objectives for the sessions were: (1) discuss experiences and issues implementing annex e and providing tool support for developing distributed programs; (2) examine issues implied by allowing the distributed replication of programs. anthony gargaro douglass locke richard volz waiting time analysis and performance visualization in carnival wagner meira thomas j. leblanc alexandros poulos corba program development, part 2 this month, the more advanced techniques of naming and event services are discussed. j. mark shacklette jeff illian interception in the aroma system n. narasimhan l. e. moser p. m. melliar-smith programming pearls jon bentley david gries the acoustic orientation instrument: real-time digital audio control with forth john l. orr brian c. mikiten the evolution of a language standard: mumps in the 1980s in 1977 the mumps language standard was approved as an american national standard (x11.1-1977) by ansi. the specification of the mumps standard was carried out by the mumps development committee, which continues to direct the evolution of the language. this paper traces the standardization process from its inception in 1973 to the present, stressing the enhancements made to mumps since 1977. features which will be included in the proposed 1982 mumps standard are discussed, as well as other extensions currently being considered. a number of areas in which future changes may be made are also mentioned. 
these include not only simple additional functional capabilities, but also possible fundamental revisions needed to keep mumps in step with the technology of the 1980s. finally, some of the lessons learned about the process of language standardization are described. david d. sherertz a common-lisp implementation of an extended prolog system giuseppe cattaneo vincenzo loia dynamic variables most programming languages use static scope rules for associating uses of identifiers with their declarations. static scope helps catch errors at compile time, and it can be implemented efficiently. some popular languages---perl, tcl, tex, and postscript---offer dynamic scope, because dynamic scope works well for variables that "customize" the execution environment, for example. programmers must simulate dynamic scope to implement this kind of usage in statically scoped languages. this paper describes the design and implementation of imperative language constructs for introducing and referencing dynamically scoped variables---dynamic variables for short. the design is a minimalist one, because dynamic variables are best used sparingly, much like exceptions. the facility does, however, cater to the typical uses for dynamic scope, and it provides a cleaner mechanism for so-called thread-local variables. a particularly simple implementation suffices for languages without exception handling. for languages with exception handling, a more efficient implementation builds on existing compiler infrastructure. exception handling can be viewed as a control construct with dynamic scope. likewise, dynamic variables are a data construct with dynamic scope. david r. hanson todd a. proebsting simple but effective techniques for numa memory management multiprocessors with non-uniform memory access times introduce the problem of placing data near the processes that use them, in order to improve performance. we have implemented an automatic page placement strategy in the mach operating system on the ibm ace multiprocessor workstation. our experience indicates that even very simple automatic strategies can produce nearly optimal page placement. it also suggests that the greatest leverage for further performance improvement lies in reducing false sharing, which occurs when the same page contains objects that would best be placed in different memories. w. bolosky r. fitzgerald m. scott a high resolution event timer ada package for dos environments michael j. paul john e. gochenouer documentation: effective and literate the purpose of this paper is to show how documentation can be literate, in a stylistic sense, and still be effective. literate prose is a powerful tool that, when properly used in computer documentation, can take advantage of the full power of the english language. this does not mean that all computer documentation must or can read like a nobel prize novel, but neither does it have to read like a military cryptogram. a happy medium --- founded on healthy grammar and syntax, and following the logic of the software being documented --- is a good and obtainable goal. the roots of the problem of literate documentation lie in a common complaint from software users: "the program looked great in the store, but the documentation is so awful that i can't get it to work!" this is particularly acute in the field of educational software --- but applies across the board --- because the product cannot fulfill its purpose nor its potential unless it can be effectively employed. 
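returning to the dynamic-variables abstract above: statically scoped languages force programmers to simulate dynamic scope, and one common simulation is an explicit per-name stack of bindings, shown here as a hypothetical python sketch (python itself is statically scoped).

    # hypothetical sketch: simulating dynamically scoped variables with a
    # per-name stack of bindings, restored on exit much like exception handling.
    from contextlib import contextmanager

    _bindings = {}                      # name -> stack of values

    @contextmanager
    def dynamic_let(name, value):
        _bindings.setdefault(name, []).append(value)
        try:
            yield
        finally:
            _bindings[name].pop()       # restore the outer binding on exit

    def dynamic_ref(name):
        stack = _bindings.get(name)
        if not stack:
            raise NameError("unbound dynamic variable: " + name)
        return stack[-1]                # the most recent (innermost) binding wins

    def report():                       # the callee sees the caller's binding
        print("output radix is", dynamic_ref("radix"))

    with dynamic_let("radix", 10):
        report()                        # output radix is 10
        with dynamic_let("radix", 16):  # temporarily customize the environment
            report()                    # output radix is 16
        report()                        # output radix is 10 again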
the problem is usually caused by an excessively technical focus, and/or poor composition (that obscures the information). a person should not have to be a computer scientist or a programmer to use software; that would defeat the purpose of the product. on the other hand, the user should not need to be a philologist to discover the author's intent. in short, documentation we compose needs to be truly "user-friendly." the major obstacle to producing effective documentation is selecting the appropriate conceptual approach. the two main tendencies when tackling the problem are both extreme: often, there is either over-emphasis on academic modes of expression, or on the technical aspects. these are often inappropriate as people want beginning words and concepts when they are "beginners." they will enjoy high-tech once they grasp the basics; they want the how-to, not the why. this "how-to" label is a critical issue in developing a good conceptual approach to documentation. in the field of software documentation, the "technical writing" mindset can often derail what would otherwise have been an effective product. far too many people are seriously misled by the term "technical" writing and it is unfortunate that the word has been so broadly applied. the use of "technical" conjures a dominant notion that, in our case here, the writer must tell the reader all there is to know about hardware and programming. if the purpose of the document is to explain such things, then that is certainly fine and can legitimately be called "technical writing." however, the technical details are often precisely what the non-technical user does not need to know; most really do not care about the electronic complexities, they just want their program to work. hence, they need "practical" documentation: that is, documentation that tells them how to perform a function or set of functions so they can do whatever it is that they want to do (one might also wish to define it as being task-oriented or task-specific rather than technology-oriented). so, the rule of thumb should be to tell them no more than what is necessary to use the program. there is also an economic bonus attached to practicality. supporting documentation that is unnecessarily abstruse will deter program use, and it is altogether likely that further offerings would consequently not sell. keeping it "sweet and simple," as they say, also keeps it saleable. the term "practical writing" is preferable to "technical writing" because the term conceptually moves away from a nuts-and-bolts orientation toward a "how-to" approach. despite the fact that computers have been with us for some time and that personal computers are now the rage, the average user still feels somewhat intimidated by the machine and its accoutrements. therefore, they certainly do not need to be further put off by the software they wish to use. any computer magazine or journal you can select will show the current over-emphasis on hardware, diskettes, and their capabilities. however, a corresponding under-emphasis on the accompanying documentation exists. for example, in five recent issues of byte magazine, there were 134 feature articles of which only two would be of any use to documentation writers. that is rather sad. 
as the educational software market grows by leaps and bounds,1 and more and more companies become involved in similar enterprises for internal training purposes, it is imperative that not only the programs themselves be impressive and intellectually stimulating, but the accompanying documentation must really support the program. steve halpern, vice-president of classroom consortia media, inc. (staten island, ny), addressed the problem --- for both programs and documentation --- thus: "the computer should be a non-threatening tool to help the teacher. so software needs to be free of problems, of any kind of sophisticated codes that have to be put in to make it work. it should have all kinds of error-trapping that lets it do what it's supposed to do, which is reinforce learning and teach concepts."2 the computer should not be designed as a teacher replacement but, rather, to be a challenging teaching aid and learning tool. consequently, the accompanying documentation needs to be much more than merely a textbook supplement or user's manual. rather, it needs to be a bit of both, which is a difficult marriage to be sure, but necessary. we who produce documentation of this sort (and others) must be sensitive to the genuine needs of those who will be using the programs and their supporting material. this is to say that we must seriously consider our potential audience and resist the temptation to just "crank out" a document. the maxim "haste makes waste" certainly applies here. kathy yakal, editorial assistant for compute!'s pc & pcjr magazine, has said, "courseware should be enjoyable, open-ended, and exciting. and humorous. intriguing. authentically pleasing. it needs to be easy to use. it must present concrete demonstrations of abstract ideas whenever possible. provide constructive feedback for errors. be accompanied with clear and complete manuals."3 equally important, it must also teach by its own example: which means that it should not include the kind of grammatical errors that are displayed in the foregoing quotation. the writer must be dedicated to producing documentation that is just as grammatically sound as it is accurate because poorly expressed information is worth little or nothing. consequently, to produce a document that will function as an effective information base, the writer must walk a tough but necessary tightrope between two extremes. information must be clearly expressed, and there is just no excuse for ignoring the rules of good english composition. to produce good documentation, we need to be very sensitive to writing as a mechanical process and the strategies employed by writers to present the information necessary to fulfill the required task. one might have all the best intentions of blending the academic and technical approaches, but you know the destination of the road that is paved with good intentions and those intentions go nowhere unless the writer produces really balanced copy. even talented writers need training to produce consistently effective documentation. the preferred method is formal classroom instruction where the individual can learn various strategies and tactics and gain some intensive practice in the writing and production craft. however, as anyone in business knows, the time necessary for this process is often a luxury. short company-sponsored workshops can be of assistance, but time is money and money is a precious commodity. with what, then, are we left? 
with the right tools, a lot can be done on one's own to become a more effective communicator. there are several sources in the marketplace that can be valuable as self-study materials as well as important job aids.4 they will certainly not replace a formal course, but they can at least serve as useful on-the-job training materials. let us consider a pedagogical tool i devised while teaching tech writing at the college of staten island-cuny that graphically displays the relationships between the five principles of practical writing: organization, clarity, exposition, accuracy and validity. i rather egotistically call it "burdett's pentagon." the arrows are all double-headed simply because the mutual interdependencies are critical to the strength of the whole structure. organization: organization is the basis of a document. without proper logical structure, a document will fall flat on its face. consequently, when planning a document, organize it according to the canons of logic as displayed in a normal syllogism (a word equation): premise/purpose/problem; conditions/evidence/information; conclusion/result/solution. this is an oversimplification, but pursuing this tack will keep a writer properly focused on the necessary steps. it is amazing how much documentation falters because the information is not presented in a logical sequence, and there is nothing more frustrating to a reader than to have to keep questioning the author's meaning. exposition: exposition is text production and all its appurtenant features: proper grammar and syntax, effective vocabulary, appropriate reading level, and so on. this is where the writer's style will have the most direct impact and a talented writer should have little trouble. for those with average skills, the task will be more onerous. but, if they follow the syllogistic format, they will at least be able to produce satisfactory copy that will be logically consistent and do the job. clarity: clarity is the connection between organization, exposition and accuracy. if the organization is not clear, the exposition will be muddied. if the data are not clear, then the points one is trying to make will be correspondingly obtuse. for our purposes, clarity is more a psychic requirement: that is, always keep the principle in the front of one's mind as a constant check on the work being performed. accuracy: accuracy applies to the data or other information used in one's document. quite simply, if the information is not accurate, the whole thing falls apart regardless of how well organized or well expressed it might be. validity: statisticians will be familiar with the concept of validity5: that is, accuracy itself is not all that is required for good documentation. in addition, we must ensure that the information used pertains to the points we are trying to make. it may well be accurate, but if it is not germane to our subject, the exposition will then be meaningless. it is fairly clear that if any one element of the pentagon is weak, the whole document will then be unsound. devices such as this simple tool are often helpful to talented writers who may produce poor documentation because they are harried and harassed. english is a very rich tongue that, when properly employed, has tremendous potential for accurately and effectively transmitting instructions and information. 
documentation writers should be particularly sensitive to the necessity of employing proper grammar and should utilize appropriate vocabulary because these documents (especially educational software) also teach by example. for instance, i cannot count the number of arguments i have had about the spelling of "through." i consistently heard lamentations like: "but the road signs say 'thru' traffic, so…it must be ok." another example is the roaring battle i once had with another writer who actually said, "what difference does it make if the grammar is bad? it's a science program!" regardless of the subject, there is absolutely no excuse, especially in education programs but also in general, for sloppy exposition. therefore, when composing documentation, let's be tough on ourselves and do it right, or don't do it at all. one's exposition is often sloppy because one has developed sloppy habits. therefore, develop good habits, do things the proper way, use the full power of the language, and one's documentation will be exemplary. if we recall the pentagon, poor exposition can adversely affect the clarity of the document and obscure the information that one is trying to convey. let's not take shortcuts just because it is easy. let's not invent words (such as "prioritize") when the language already has a perfectly good word that sounds much less jarring (such as "rank" or "order").6 however, at present, there is a lot of documentation on the market that has such faulty punctuation, infantile vocabulary, excessive jargon, and poor expression that it would seem that few writers care whether they set a good example or not. therefore, to be literate in one's composition is also to be effective: instructions and information will be easy to understand, the documentation will thus mesh well with the program and will succeed in molding a comfortable computer-software-human unit. that's being truly "user-friendly," literate and effective. paul s. burdett apl to ada translator jack g. rudd eric m. klementis acm forum robert l. ashenhurst on subtyping and matching a relation between recursive object types, called matching, has been proposed as a generalization of subtyping. unlike subtyping, matching does not support subsumption, but it does support inheritance of binary methods. we argue that matching is a good idea, but that it should not be regarded as a form of f-bounded subtyping (as was originally intended). we show that a new interpretation of matching as higher-order subtyping has better properties. matching turns out to be a third-order construction, possibly the only one to have been proposed for general use in programming. martín abadi luca cardelli dplab: an environment for distributed programming this paper describes the software package dplab which implements an integrated gui environment for developing distributed programs. the environment includes a text editor, a compiler, and a runtime system that establishes communications between networked computers and provides primitives for message passing between the computers. the source language is pascal extended with these primitives and with constructs for concurrent programming. the package is implemented in java and swing for portability.
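as a rough illustration of the kind of send/receive primitives the dplab entry above layers over pascal, here is a minimal sketch in python using multiprocessing queues; the names (worker, inbox, outbox) and the protocol are invented stand-ins, not dplab's actual api or syntax.

# illustrative only: generic blocking send/receive between two processes,
# in the spirit of the message-passing primitives described above.
from multiprocessing import Process, Queue

def worker(inbox: Queue, outbox: Queue) -> None:
    # receive a number, send back its square, stop on None
    while True:
        msg = inbox.get()          # blocking receive
        if msg is None:
            break
        outbox.put(msg * msg)      # send

if __name__ == "__main__":
    to_worker, from_worker = Queue(), Queue()
    p = Process(target=worker, args=(to_worker, from_worker))
    p.start()
    for n in (2, 3, 4):
        to_worker.put(n)           # send
    for _ in range(3):
        print(from_worker.get())   # receive -> 4, 9, 16
    to_worker.put(None)
    p.join()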
mordechai ben-ari shawn silverman linux means business: grundig tv-communications ted kenney representation and evaluation of security policies (poster session) tatyana ryutov clifford neuman implementing efficient fault containment for multiprocessors: confining faults in a shared-memory multiprocessor environment mendel rosenblum john chapin dan teodosiu scott devine tirthankar lahiri anoop gupta turtle walk through functional language putnik zoran budimac zoran ivanovic mirjana tool support for architecture analysis and design rick kazman implementing design patterns as language constructs yan-david erlich ephemerons: a new finalization mechanism barry hayes register allocation for banked register file a banked register file is a register file partitioned into banks. a register in a banked register file is addressed with the register number in conjunction with the active bank number. a banked register file may be employed to reduce the number of bits for register operands in the instruction encoding at the cost of bank changes and inter-bank data transfers. although a banked register file is introduced to provide sufficient registers and reduce memory traffic, it may on the other hand inflate code by unwanted bank changes and excessive inter-bank data movements. in this context, code quality heavily depends on the register allocator that decides the location of each variable. this paper addresses a heuristic approach to register allocation for exploiting two register banks. it performs global register allocation with the primary bank registers, while reducing the register pressure by doing local register allocation with the secondary bank registers. experimental results show that the proposed register allocator eliminates a significant amount of memory traffic while achieving smaller code size compared to an allocator that utilizes the primary bank only. jinpyo park je-hyung lee soo-mook moon on the question of fill roland h. pesch selecting a linux distribution having trouble deciding which distribution to go for? here's help phil hughes synthesis of resource invariants for concurrent programs owicki and gries have developed a proof system for conditional critical regions. in their system, logically related variables accessed by more than one process are grouped together as resources, and processes are allowed access to a resource only in a critical region for that resource. proofs of synchronization properties are constructed by devising predicates called resource invariants which describe relationships among the variables of a resource when no process is in a critical region for the resource. in constructing proofs using the system of owicki and gries, the programmer is required to supply the resource invariants. methods are developed in this paper for automatically synthesizing resource invariants. specifically, the resource invariants of a concurrent program are characterized as least fixpoints of a functional which can be obtained from the text of the program. by the use of this fixpoint characterization and a widening operator based on convex closure, good approximations may be obtained for the resource invariants of many concurrent programs. edmund clarke x forms review and tutorial: karel kubat explores xforms, a graphical user interface toolkit for x. karel kubat generation of compiler symbol processing mechanisms from specifications stephen p.
reiss technical contributions automated fortran conversion gregory aharonian python and tkinter programming phil hughes experience with topaz telebugging the topaz teledebug (ttd) facility provides a remote debugging capability supporting software development in the topaz environment. topaz is a software environment providing rich support for programming in modula-2+, an extended version of modula-2. ttd allows uniform use of the same high level source language debugger for all debugging (both local and remote) of software at all levels of the system. special care has been taken to maximize ttd's reliability and robustness. our experience suggests that such a facility can be extremely useful, very dependable and quite inexpensive. it also suggests that the implementation issues can be somewhat subtle, and that correct choices in this area are vital if the promise of such a scheme is to be fully realized. this paper focuses on those implementation issues in a fair amount of detail, analyzing the limitations of our first ttd implementation and the redesign that produced a second, improved version. our main conclusion is that the target (debuggee) end of such a facility should be implemented at the lowest possible level in the operating system, and that a uniformly applicable remote invocation (e.g. remote procedure call) facility is key to enabling this approach. we also conclude that strict adherence to layered information hiding can lead to serious difficulties in the implementation of a remote debugging protocol. the discussion should be of particular interest to other designers of remote debugging facilities. while the paper does not devote much attention to higher level issues (e.g. novel user interfaces for debugging concurrent or distributed programs), most of the ideas discussed should be applicable to a wide class of debuggers, including those with higher aspirations for debugging such programs. david d. redell on the design of the amoeba configuration manager the program amoeba make, or amake, is being designed to fulfil the need for a make-like configuration manager capable of exploiting the potentials of the amoeba distributed operating system. the major design goal is to create a software configuration manager that is both easy to use and efficient. the specification and maintenance of a large configuration should be easy, and should be automated as much as possible. furthermore, the build process should exploit amoeba's capabilities and resources when creating or updating a target. in this paper we show how a smart file server can contribute to amake's efficiency. we also show how a declarative configuration description allows amake to take full advantage of parallelism and to determine the commands needed for building and maintaining targets. e. h. baalbergen k. verstoep a. s. tanenbaum best of technical support corporate linux journal staff controlling dynamic objects in large ada systems steven m. rosen computer aided software engineering (case) based on transformation with object-oriented logic (tool) traditional software engineering separates conceptual software development from design, assuming that concentration on the conceptual level yields a consistent software system---an assumption that may, due to weak communication links between conceptual and actual design, be incorrect. in this paper, a new paradigm---"transformation with object-oriented logic" (tool)---is proposed. jin w.
chang plus/reducing arrays jan karman ccal: an interpreted language for experimentation in concurrent control p. kearns m. freeman towards the usage of dynamic object aggregations as a foundation for composition gustaf neumann uwe zdun an epistemology of apl epistemology is the study of what we know and how we know it. in everyday life this usually means trying to understand how we build a mental model which seems to correspond to the reality of the world around us. from the dot matrix of our retinas, with their million or so pixels, our brains recognize as units such objects as straight lines, curves, and human faces. from vibrations of nearby air molecules we hear conversations and c major triads. ordinary people simply consider what we know about our surroundings, as interpreted by our senses, to be what we can know about the world. humans do not have ways of knowing possessed by other creatures in the world. dogs hear sound frequencies beyond the range of human ears. some insects can directly sense ultra-violet light. these and other animals with various modalities of perception experience worlds that may be quite different from the one that humans know. uv-sensitive insects use their eyes to find the flowers they need from which to grab pollen. conversely, the flowers are pollinated. one can wonder who is using whom. philosophers like to muse about such things. what do we really know? how do we create symbolic constructs in our brains? iris murdoch [ref 1] puts it this way: the idea of a self-contained unity or limited whole is a fundamental instinctive concept. we see parts of things, we intuit whole things. we seem to know a great deal on the basis of very little. oblivious of philosophical problems and paucity of evidence we grasp ourselves as unities, continuous bodies and continuous minds. we assume the continuity of space and time. this intuitive extension of our claim to knowledge has inspired the reflections of many philosophers … the urge to prove that where we intuit unity there really is unity is a deep emotional motive to philosophy, to art, to thinking itself. _intellect naturally one-making._ [emphasis added] murdoch has distinguished two levels of understanding, calling one of them intuitive. the other one we shall call logical. within the ellipsis in the quotation above, murdoch suggests hume and kant as spokesmen respectively for two views: [hume said that] if a fiction is necessary enough, it is not a lie. [kant said that] we could not infer reality from experience when the possibility of experience itself needed to be explained. the division is between laws and customs, between logic and intuition, between analysis and synthesis, between rules and judgment, between the engineer and the poet. one might describe philosophy as the art of logical analysis of the intuitive. epistemology is not static. the ways in which we view the world change as our perceptions change. for hundreds of thousands of years our five senses were the only windows of perception. gradually, mental models became more important. who has ever seen, smelled, or touched a positron or a black hole? these and even more mysterious phenomena --- from the cosmological construct on the large scale to quarks on the small --- are not only the primary studies of physics: they also appear regularly in the _new york times science times._ the physical universe became a different place with the copernican revolution. it wasn't much of a revolution scientifically; it was little more than a change of origin.
it did, however, shake up the religious establishment a bit. as the heliocentric view took hold it diminished the long-assumed centrality of humanity, but, curiously enough, it also made possible not only seeing the earth as a minor bit of the solar system but also seeing the solar system as a trivial item in an ordinary galaxy, and the galaxy as part of a galactic group, and so on to higher levels of cosmic organization. the sky today is generally believed to be full of neutrinos, entangled quantum states, and dark matter. this newly cluttered cosmos has led to wonderful new structures in our brains, particularly in the brains of cosmologists, of course, but also --- and more important --- to readers of the daily news. perception is translation from sensory inputs to what pinker [ref 2] calls mentalese. apl epistemology has its own history. since its beginnings apl has extended the perceptual field which characterized many early linear languages. in succeeding sections several stages of this development will be discussed. j. philip benkard the zebra striped network file system john h. hartman john k. ousterhout forum: cobol in question diane crawford supporting viewpoints in metaview paul g. sorenson piotr s. findeisen j. paul tremblay on the performance of balanced hashing functions when the keys are not equiprobable the cost (expected number of accesses per retrieval) of hashing functions is examined without the assumption that it is equally probable for all keys to be present in the table. it is shown that the obvious strategy---trying to balance the sums of probabilities of the keys mapped to any given address---may be suboptimal; however, the difference from the exactly optimal distribution cannot be large. christos h. papadimitriou philip a. bernstein the trellis programming environment the trellis programming environment supports programming in trellis/owl, an object-based language with multiple inheritance and compile-time type-checking. trellis is composed of a number of integrated tools that share a common programming environment database. it is a highly interactive, easy-to-use programming environment, providing various programming aids, incremental compilation, and good debugging support. trellis is both integrated and open-ended. trellis was specifically designed to support the object-oriented programming methodology. thus it provides tools to manage the use of types and inheritance. trellis takes advantage of the strong-typing features of the trellis/owl language to provide more support for the programmer by keeping track of cross-references and inconsistencies in code. patrick d. o'brien daniel c. halbert michael f. kilian more than just coco david epstein single versus multiple inheritance in object oriented programming ghan bir singh object-oriented programming languages: the next generation fred a. cummins towards target-level testing and debugging tools for embedded software harry koehnemann timothy lindquist an equational language for data-parallelism data-parallelism provides a clean conceptual framework for parallel programming. we are developing two programming languages: a high level equational language, called el*, and a low-level implementation language. both languages exploit data-parallelism instead of control-parallelism. el* is a declarative data-parallel language. el* programs are high-level equational specifications that use extensive pattern-matching and recursion. the language's syntax and semantics are intended to be clear and simple.
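stepping back to the papadimitriou/bernstein entry above on hashing with non-equiprobable keys, here is a toy sketch of the cost comparison it describes; the key probabilities, bucket assignments, and the within-bucket cost model (sequential probing in decreasing-probability order) are all invented for illustration and are not necessarily the paper's exact model.

# toy expected-cost calculation for two key-to-bucket assignments
def expected_cost(assignment, probs):
    """assignment maps key -> bucket; within a bucket, keys are assumed to
    be probed in decreasing order of probability (the best static order)."""
    buckets = {}
    for key, b in assignment.items():
        buckets.setdefault(b, []).append(key)
    cost = 0.0
    for keys in buckets.values():
        keys.sort(key=lambda k: probs[k], reverse=True)
        for position, k in enumerate(keys, start=1):
            cost += probs[k] * position
    return cost

probs = {"a": 0.5, "b": 0.2, "c": 0.2, "d": 0.1}
balanced_sums = {"a": 0, "b": 1, "c": 1, "d": 1}   # bucket prob sums 0.5 / 0.5
skewed_sums   = {"a": 0, "d": 0, "b": 1, "c": 1}   # bucket prob sums 0.6 / 0.4
print(expected_cost(balanced_sums, probs))  # 1.4
print(expected_cost(skewed_sums, probs))    # 1.3: less balanced, yet cheaper,
                                            # echoing the entry's observation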
recursive forms are restricted to enable translation to efficient data-parallel operations. el* programs are compiled into fp*, a variant of backus's fp, where parallel operations are more explicit and low-level. the target language has a rich set of functions for performing communication and computation. it also has a powerful set of combining forms that generate large highly-parallel functions from smaller program units. prototype compilers have been implemented for both languages, and they demonstrate good performance. several linear algebra and non-numeric problems have been programmed with relative ease using el*. we are currently developing compilation techniques for a wider range of scientific problems that have more complex parallel solutions, and are continuing to expand the language's scope. pushpa rao clifford walinsky a brief introduction to clu john guttag on defusing a small landmine in the type casting of pointers in the "c" language edward w. czeck james m. feldman specification-based prototyping for embedded systems specification of software for safety-critical embedded computer systems has been widely addressed in the literature. to achieve the high level of confidence in a specification's correctness necessary in many applications, manual inspections, formal verification, and simulation must be used in concert. researchers have successfully addressed issues in inspection and verification; however, results in the areas of execution and simulation of specifications have not made as large an impact as desired. in this paper we present an approach to specification-based prototyping which addresses this issue. it combines the advantages of rigorous formal specifications and rapid systems prototyping. the approach lets us refine a formal executable model of the system requirements to a detailed model of the software requirements. throughout this refinement process, the specification is used as a prototype of the proposed software. thus, we guarantee that the formal specification of the system is always consistent with the observed behavior of the prototype. the approach is supported with the nimbus environment, a framework that allows the formal specification to execute while interacting with software models of its embedding environment or even the physical environment itself (hardware-in-the-loop simulation). jeffrey m. thompson mats p. e. heimdahl steven p. miller implementation strategies for diana attributes d. a. lamb software measure specification david a. gustafson joo t. tan perla weaver ada overhead reconsidered michael paulkovich assessment of system evolution through characterization f. fioravanti p. nesi s. perlini centaur: the system this paper describes the organization of the centaur system and its main components. the system is a generic interactive environment. when given the formal specification of a particular programming language---including syntax and semantics---it produces a language-specific environment. this resulting environment includes a structure editor, an interpreter/debugger and other tools, all of which have graphic interfaces. centaur is made of three parts: a database component that provides standardized representation and access to formal objects and their persistent storage; a logical engine that is used to execute formal specifications; an object-oriented man-machine interface that gives easy access to the system's functions. centaur is essentially written in lisp (le_lisp). the logical engine is prolog (mu-prolog).
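to give a flavour of the "generic system instantiated by a language specification" idea in the centaur entry above, here is a minimal python sketch of a generic evaluator parameterized by a declarative operator table; the spec format and names are invented and say nothing about centaur's actual formalisms.

# a tiny generic evaluator driven by a declarative "language specification"
import operator

def make_evaluator(spec):
    """spec maps operator names to python functions; returns an evaluator
    for nested tuples such as ("add", ("mul", 2, 3), 4)."""
    def eval_term(term):
        if isinstance(term, tuple):
            op, *args = term
            return spec[op](*(eval_term(a) for a in args))
        return term                      # literals evaluate to themselves
    return eval_term

arith_spec = {"add": operator.add, "mul": operator.mul, "neg": operator.neg}
evaluate = make_evaluator(arith_spec)
print(evaluate(("add", ("mul", 2, 3), ("neg", 4))))   # prints 2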
the man-machine interface is built on top of the virtual graphics facility of le_lisp, itself primarily implemented on top of x-windows. p. borras d. clement th. despeyroux j. incerpi g. kahn b. lang v. pascual the novice user enters the discourse community: implications for technical writers karla saari kitalong instruction reference patterns in data flow programs instruction reference patterns in data flow environments differ from those in conventional systems. because execution is data driven in data flow environments, patterns of instruction references depend on data references. also, instruction reference patterns are two-dimensional because execution is parallel. in this paper, models of instruction reference patterns are presented to illustrate program behavior and to provide insight into the potential usefulness of an instruction cache or virtual memory in data flow environments. the results establish that locality exists and is exploitable in some data flow environments. s. a. thoreson a. e. oldehoeft development routes for message passing parallelism in java j. a. mathew h. a. james k. a. hawick flick: a flexible, optimizing idl compiler eric eide kevin frei bryan ford jay lepreau gary lindstrom protected records, time management and distribution this is a summary of the discussions on three requirement areas: * protected records; * time management; * distribution. ted baker datamesh, house-building, and distributed systems technology john wilkes experiences with process programming our research has investigated the feasibility and ramifications of process programming. we claim that it is highly advantageous to treat software processes, such as development and maintenance, as actual software which can be specified, designed, coded, executed, tested, and maintained. if software processes are to be coded and executed, it is necessary for them to be coded in a language which can then be compiled, loaded, and executed. likewise, if software processes are to be specified and designed, it is necessary to have process specification and design formalisms. we claim that process specification, design, and coding languages and formalisms will strongly resemble languages and formalisms currently used to specify software products, but that there will also be significant differences. we also claim that a properly conceived and developed software support environment can support the development of both product software and process software. we have engaged in considerable research to evaluate these claims. our progress has been hampered by "research deadlock." languages and formalisms are required to effectively represent such objects as process designs and process code. that suggests that such languages and formalisms should be developed first. on the other hand, the creation of new programming languages, design formalisms, and specification technologies should begin only after significant experience in creating these software objects. that suggests that significant process software development should come first. we broke this "research deadlock" by developing a variety of prototypes---e.g., prototype process programs, prototype process coding languages, and a prototype process modelling formalism---and by evaluating each in the context of the others.
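to make the notion of a "process program" in the entry above a little more concrete, here is a toy sketch in which a review-then-test-then-integrate development process is written as executable code; the step names, policy, and stub tools are invented for illustration and are not osterweil's notation.

# a toy "process program": a change moves through review and testing
def process_change(change, run_tests, reviewers_approve):
    """drive a change through code review and testing before integration."""
    state = {"change": change, "status": "submitted"}
    if not reviewers_approve(change):
        state["status"] = "rejected-in-review"
        return state
    if not run_tests(change):
        state["status"] = "returned-for-rework"
        return state
    state["status"] = "integrated"
    return state

# stub tools standing in for real review and test systems
result = process_change(
    {"id": 42},
    run_tests=lambda c: True,
    reviewers_approve=lambda c: True,
)
print(result["status"])   # integrated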
this work has led to significant insights into the nature of process programming, the nature of various specific software processes, the difference between process programming and process modelling, the requirements for an effective process coding language, and the requirements for an effective process interpretation support system. leon j. osterweil linux gazette: clueless at the prompt mike list book review: metamorphosis: a programmer looks at the software crisis harvey friedman majordomo create your own internet mailing lists with the popular majordomo software. piers cawley book review: perl5 michael hines a notation for manipulating arrays of operations much has been written on the possibility of incorporating arrays of functions and arrays of operators into apl. various methods have been proposed for creating and restructuring such arrays. this paper proposes the introduction of arrays of operations in general into apl2. vector notation is extended, as suggested by brown and benkard, to allow easy creation of vectors of operations. a new object, called an elevator, is introduced in order to create operations for manipulating arrays of operations. the semantics of the jot is extended, as per benkard, to provide a means of extracting structural information from operation arrays. an analysis of the meaning behind the new notation is undertaken. definitions are given which not only satisfy the notational requirements but also provide insights into the nature of structural operations on function arrays. the information retained in an empty function array is explained, and the relationship between the primitive enclose function and the each operator becomes clear. david j. landaeta detecting relational global predicates in distributed systems alexander i. tomlinson vijay k. garg introducing reuse in companies: a survey of european experiences maurizio morisio michel ezran colin tully probabilistic data flow system with two-edge profiling traditionally, optimization is done statically, independent of actual execution environments. for generating highly optimized code, however, runtime information can be used to adapt a program to different environments. in probabilistic data flow systems runtime information on representative input data is exploited to compute the probability with which data flow facts may hold. probabilistic data flow analysis can guide sophisticated optimizing transformations resulting in better performance. in comparison, classical data flow analysis does not take runtime information into account. all paths are equally weighted irrespective of whether they are never, heavily, or rarely executed. in this paper we present the best solution that we can theoretically obtain for probabilistic data flow problems and compare it with the state-of-the-art one-edge approach. we show that the differences can be considerable and improvements are crucial. however, the theoretically best solution is too expensive in general and feasible approaches are required. in the sequel we develop an efficient approach which employs two-edge profiling and classical data flow analysis. we show that the results of the two-edge approach are significantly better than the state-of-the-art one-edge approach. eduard mehofer bernhard scholz pc-xinu features and installation timothy v. fossum a proposal for implementing the concurrent mechanisms of ada x zang template instantiation for c++ glen mccluskey robert b.
murray cola: a coordination language for massive parallelism beat hirsbrunner marc aguilar oliver krone memory occupancy patterns in garbage collection systems some programming languages and computer systems use dynamic memory allocation with garbage collection. it would be useful to understand how the utilization of memory depends on the stochastic parameters describing the size and life distributions of the cells. we consider a class of dynamic storage allocation systems which use a first-fit strategy to allocate cells and perform noncompacting garbage collections to recover free memory space when memory becomes fully occupied. a formula is derived for the expected number of holes (available cells) in memory immediately following a garbage collection which specializes to an analogue of knuth's 'fifty percent' rule for non-garbage-collection systems. simulations confirm the rule for exponentially distributed cell lifetimes. other lifetime distributions are discussed. the memory-size requirements for noncompacting garbage collection are also analyzed. d. julia m. davies anonymous routine-texts: an orthogonal approach to block objects otto stolz book review: samba: integrating unix and windows dan wilder aim project introduction stewart french automated test data generation for programs with procedures test data generation in program testing is the process of identifying a set of test data that satisfies a selected testing criterion, such as statement coverage or branch coverage. the existing methods of test data generation are limited to unit testing and may not efficiently generate test data for programs with procedures. in this paper we present an approach for automated test data generation for programs with procedures. this approach builds on the current theory of execution-oriented test data generation. in this approach, test data are derived based on the actual execution of the program under test. for many programs, the execution of the selected statement may require prior execution of some other statements that may be part of some procedures. the existing methods use only control flow information of a program during the search process and may not efficiently generate test data for these types of programs because they are not able to identify statements that affect execution of the selected statement. our approach uses data dependence analysis to guide the process of test data generation. data dependence analysis automatically identifies statements (or procedures) that affect the execution of the selected statement and this information is used to guide the search process. the initial experiments have shown that this approach may improve the chances of finding test data. bogdan korel formalizing design patterns tommi mikkonen ftp access as a user-defined file system michael k. gschwind object orientation and fortran 2002: part ii malcolm cohen pipeline behavior prediction for superscalar processors by abstract interpretation jörn schneider christian ferdinand from the editor phil hughes a new approach to software tool interoperability yimin bao ellis horowitz pool: design and experience pierre america an implementation of express in smalltalk stephen chan fortran 90 pointers vs. "cray" pointers the fortran 77 standard does not contain pointer facilities, but because of heavy user demand, many fortran 77 compilers have been extended with "cray" pointers.
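before continuing, here is a minimal sketch of the execution-oriented search idea behind the korel entry above on automated test data generation: inputs are repeatedly perturbed to minimize a branch-distance measured on actual executions until a chosen branch is reached. the program under test, the distance function, and the simple local search are all invented; a real system would also use the data-dependence guidance the entry describes.

# execution-oriented test data generation, greatly simplified
def program_under_test(x, y):
    # target: reach the 'return "target"' statement
    if x * 2 + y == 10 and x > y:
        return "target"
    return "other"

def branch_distance(x, y):
    # 0 exactly when the target branch condition holds; positive otherwise
    return abs(x * 2 + y - 10) + max(0, y - x + 1)

def search(start=(0, 0), steps=2000):
    best, best_d = start, branch_distance(*start)
    for _ in range(steps):
        if best_d == 0:
            break
        x, y = best
        # try neighbouring inputs, keep any that reduces the distance
        for cand in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            d = branch_distance(*cand)
            if d < best_d:
                best, best_d = cand, d
    return best, best_d

inputs, dist = search()
print(inputs, dist, program_under_test(*inputs))   # e.g. (5, 0) 0 target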
the demand for pointers in fortran was heard by the standards committee, x3j3, and a pointer facility was added to the follow-on fortran standard, fortran 90. x3j3, for reasons that may soon become apparent, chose not to follow existing practice and specify "cray" pointers, but to standardize a somewhat different pointer facility. fortran 90 pointers complement the fortran 90 language; they fit well with the new fortran 90 array processing and data facilities. the popularity of "cray" pointers indicates that they fit well with fortran 77, and since fortran 90 contains all of fortran 77, it would be possible to extend fortran 90 processors to accept "cray" pointers as well. this would make it easier for existing codes that use pointers to migrate to new fortran 90 processors. but is it a good idea for a processor to provide two pointer facilities? how difficult is it to convert from "cray" pointers to fortran 90 pointers? this paper provides some information that may be helpful in answering these questions. jeanne martin dynamic systems simulation using apl2 in this paper, we describe methods, models and software applied for dynamic system simulation using apl2. robertas alzbutas vytautas janilionis a linguistic contribution to goto-less programming lawrence clark comparison checking: an approach to avoid debugging of optimized code we present a novel approach to avoid the debugging of optimized code through comparison checking. in the technique presented, both the unoptimized and optimized versions of an application program are executed, and computed values are compared to ensure the behaviors of the two versions are the same under the given input. if the values are different, the comparison checker displays where in the application program the differences occurred and what optimizations were involved. the user can utilize this information and a conventional debugger to determine if an error is in the unoptimized code. if the error is in the optimized code, the user can turn off those offending optimizations and leave the other optimizations in place. we implemented our comparison checking scheme, which executes the unoptimized and optimized versions of c programs, and ran experiments that demonstrate the approach is effective and practical. clara jaramillo rajiv gupta mary lou soffa affording higher reliability through software reusability mitchell d. lubars software pipelining showdown: optimal vs. heuristic methods in a production compiler john ruttenberg g. r. gao a. stoutchinin w. lichtenstein optimization and relaxation in constraint logic languages kannan govindarajan bharat jayaraman surya mantha debugging heterogeneous distributed systems using event-based models of behavior event-based behavioral abstraction (ebba) is a high-level debugging approach which treats debugging as a process of creating models of actual behavior from the activity of the system and comparing these to models of expected system behavior. the differences between the actual and expected models are used to characterize erroneous system behavior and direct further investigation. a set of ebba-based tools has been implemented that users can employ to construct libraries of behavior models and investigate the behavior of an errorful system through these models.
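as a crude illustration of the event-model idea in the ebba entry above (which continues below), here is a sketch that matches an expected behavior model against an observed event trace and reports where actual behavior diverges; the model format and event names are invented and are not ebba's notation.

# match an ordered event pattern against an observed trace
def check_behavior(expected, trace):
    """expected: event names that must occur in this order (other events
    may be interleaved). returns (ok, detail)."""
    it = iter(trace)
    for want in expected:
        for event in it:
            if event["name"] == want:
                break
        else:
            return False, f"expected event {want!r} never observed"
    return True, "trace matches expected model"

expected_model = ["open", "handshake", "transfer", "close"]
observed_trace = [
    {"name": "open", "pid": 7},
    {"name": "handshake", "pid": 7},
    {"name": "close", "pid": 7},          # 'transfer' is missing
]
print(check_behavior(expected_model, observed_trace))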
ebba evolves naturally as a cooperative distributed program that can take better advantage of computational power available in a network computer system to enhance debugging tool transparency, reduce latency and uncertainty for fundamental debugging activities and accommodate diverse, heterogeneous architectures. peter bates operator strength reduction operator strength reduction is a technique that improves compiler-generated code by reformulating certain costly computations in terms of less expensive ones. a common case arises in array addressing expressions used in loops. the compiler can replace the sequence of multiplies generated by a direct translation of the address expression with an equivalent sequence of additions. when combined with linear function test replacement, strength reduction can speed up the execution of loops containing array references. the improvement comes from two sources: a reduction in the number of operations needed to implement the loop and the use of less costly operations. this paper presents a new algorithm for operator strength reduction, called osr. osr improves upon an earlier algorithm of allen, cocke, and kennedy [allen et al. 1981]. osr operates on the static single assignment (ssa) form of a procedure [cytron et al. 1991]. by taking advantage of the properties of ssa form, we have derived an algorithm that is simple to understand, quick to implement, and, in practice, fast to run. its asymptotic complexity is, in the worst case, the same as the allen, cocke, and kennedy algorithm (ack). osr achieves optimization results that are equivalent to those obtained with the ack algorithm. osr has been implemented in several research and production compilers. keith d. cooper l. taylor simpson christopher a. vick support for distributed systems in ada 9x a. j. wellings the wonders of java object serialization brian t. kurotsuchi system administration: getting the nt out--and the linux in an overview of configuring linux using samba to replace the services provided from windows nt servers. david c. smith the consistent comparison problem in n-version software s s brilliant j c knight n g leveson a file system supporting cooperation between programs file systems coordinate simultaneous access of files by concurrent client processes. although several processes may read a file simultaneously, the file system must grant exclusive access to one process wanting to write the file. most file systems consider processes to be antagonistic: they prevent one process from taking actions on a file that have any chance of harming another process already using the file. if several processes need to cooperate in a more sophisticated manner in their use of a file, they must communicate explicitly among themselves. the next three sections describe the file system procedures used by clients for using and sharing files. section two discusses how a client gains access to a file and how a client can respond if the file system asks it to give up a file it is using. section three discusses the mechanism by which a client might ask to be notified that a file is available for some access. section four discusses a controlled type of file access that lets clients read and write the same file at the same time. section five comprises three examples of the cooperative features of the file system taken from the xerox development environment. section six discusses the subtleties of writing the call-back procedures that clients provide to the file system to implement interprocess cooperation.
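a toy arbiter loosely following the cooperation scheme sketched in the file-system entry above: when a second client wants a file, the current holder is asked through a call-back whether it will give the file up. every name here is invented; this is not the xerox environment's interface.

# cooperative file access with a "please yield" call-back
class FileArbiter:
    def __init__(self):
        self.holders = {}   # path -> (client name, willing_to_yield callback)

    def acquire(self, path, client, willing_to_yield):
        holder = self.holders.get(path)
        if holder is not None:
            _other, ask = holder
            if not ask(path):               # call back the current holder
                return False                # holder keeps the file
        self.holders[path] = (client, willing_to_yield)
        return True

    def release(self, path, client):
        if self.holders.get(path, (None,))[0] == client:
            del self.holders[path]

arbiter = FileArbiter()
arbiter.acquire("/tmp/data", "editor", willing_to_yield=lambda p: True)
print(arbiter.acquire("/tmp/data", "compiler", willing_to_yield=lambda p: False))  # True: editor yields
print(arbiter.acquire("/tmp/data", "editor", willing_to_yield=lambda p: True))     # False: compiler refuses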
section seven discusses the implementation of this file system. loretta guarino reid philip l. karlton book review: linux kernel internals, second edition karl majer ensuring semantic integrity of reusable objects (panel) webb stacy richard helm gail e. kaiser bertrand meyer take command the awk utility: this column presents an introduction to the linux data manipulation tool called awk louis j. iacona l-one-two-three (l1:..l2:..l3:) considered harmful it is said by non-apl-programmers that apl code is hard to read and that it is unstructured. here we argue that apl-programmers may refute this assertion by pointing out some misunderstandings, but admittedly a final analysis will show a deeper truth in these criticisms. we will show that apl gives ample opportunity for unstructured code. two proposals are presented to address this problem. the first rejects the developed convention for labelling and suggests the adoption of a proper style of programming enforced by a new standard of labelling. this standard will abolish unstructured code. both negative and positive aspects of this proposal are discussed. the second proposal revives an old idea of introducing one single proper control structure into the language. this would make the current jump ( ) superfluous and enforce structured code. f. h. d. van batenburg the last word stan kelly-bootle reengineering a complex application using a scalable data structure compiler don batory jeff thomas marty sirkin atom: a system for building customized program analysis tools atom (analysis tools with om) is a single framework for building a wide range of customized program analysis tools. it provides the common infrastructure present in all code-instrumenting tools; this is the difficult and time-consuming part. the user simply defines the tool-specific details in instrumentation and analysis routines. building a basic block counting tool like pixie with atom requires only a page of code. atom, using om link-time technology, organizes the final executable such that the application program and user's analysis routines run in the same address space. information is directly passed from the application program to the analysis routines through simple procedure calls instead of inter-process communication or files on disk. atom takes care that analysis routines do not interfere with the program's execution, and precise information about the program is presented to the analysis routines at all times. atom uses no simulation or interpretation. atom has been implemented on the alpha axp under osf/1. it is efficient and has been used to build a diverse set of tools for basic block counting, profiling, dynamic memory recording, instruction and data cache simulation, pipeline simulation, evaluating branch prediction, and instruction scheduling. amitabh srivastava alan eustace storage management in a prolog compiler with the current intense interest in new computing paradigms based on logic, the efficient implementation of prolog has become an issue of prime importance. compiling prolog involves some unique and difficult problems relating to storage management: in particular, the somewhat conflicting requirements of backtracking, and cut and tail recursion processing. the usual solution is to use garbage collection, an expensive process in small computers with limited storage. we describe a prolog compiler which maintains the heap as lists of free records, to which records are released at the time they are deallocated, thus avoiding garbage collection.
in this compiler, variable bindings are recorded in such a way that the speed of unification does not depend on the length of chains of bound variables. also, tail recursion optimisation is more thorough than in other implementations. y-l. chang p. t. cox efficient formal methods for the synthesis of concurrent programs paul c. attie balloting experiment with software engineering standards fletcher j. buckley object-based visual programming languages sergiu s. simmel debugging of heterogeneous parallel systems the agora system supports the development of heterogeneous parallel programs, e.g. programs written in multiple languages and running on heterogeneous machines. agora has been used since september 1986 in a large distributed system [1]: two versions of the application have been demonstrated in one year, contrary to the expectation of two years per one version. the simplicity in debugging is one of the reasons for the productivity speedup gained. this simplicity is due both to the deeper understanding that the debugger has of parallel systems, and to a novel feature: the ability to replay the execution of parallel systems built with agora. a user is able to repeat a failed execution exactly, any number of times and at a slower pace. this makes it easy to identify time-dependent errors, which are peculiar to parallel and distributed systems. the debugger can also be customized to support user-defined synchronization primitives, which are built on top of the system-provided ones. the agora debugger tackles three sets of problems that no parallel debugger in the past has simultaneously addressed: dealing with programming-in-the-large, multiple processes in different languages, and multiple machine architectures. alessandro forin improving the cache locality of memory allocation the allocation and disposal of memory is a ubiquitous operation in most programs. rarely do programmers concern themselves with details of memory allocators; most assume that memory allocators provided by the system perform well. this paper presents a performance evaluation of the reference locality of dynamic storage allocation algorithms based on trace-driven simulation of five large allocation-intensive c programs. in this paper, we show how the design of a memory allocator can significantly affect the reference locality for various applications. our measurements show that poor locality in sequential-fit allocation algorithms reduces program performance, both by increasing paging and cache miss rates. while increased paging can be debilitating on any architecture, cache miss rates are also important for modern computer architectures. we show that algorithms attempting to be space-efficient by coalescing adjacent free objects show poor reference locality, possibly negating the benefits of space efficiency. at the other extreme, algorithms can expend considerable effort to increase reference locality yet gain little in total execution performance. our measurements suggest an allocator design that is both very fast and has good locality of reference. dirk grunwald benjamin zorn robert henderson optimized code generation of multiplication-free linear transforms mahesh mehendale g. venkatesh s. d. sherlekar object technologies and real-time scheduling harold forbes karsten schwan from the editor corporate linux journal staff creating efficient systems for object-oriented languages increasingly computer science research has been done using workstations with high-resolution bitmap display systems.
smalltalk-80 is a very attractive programming language for such computation environments, since it has very sophisticated graphical systems and programming environments. unfortunately there are still very few computer systems on which smalltalk-80 can run with satisfactory speed, and furthermore they are quite expensive. in order to make smalltalk-80 accessible to a large group of people at low cost, we have developed compiler techniques useful for generating efficient code for standard register machines such as the mc68000. we have also extended smalltalk-80 to include type expressions, which allow compilers to generate efficient code. norihisa suzuki minoru terada fig-forth for the signetics 80c522 microcontroller alberto pasquale on executable models for rule-based prototyping this paper proposes a particular style of executable specifications as a method for rapid prototyping. using a general state-transition framework, system behavior is specified by pattern-oriented rules containing pre- and post-conditions for each transition. the specification method is introduced by two small examples in which a finite-state machine and database are modeled. the main example is an executable model of a backtracking prolog interpreter, which is specified using five transition rules adapted from the formal-semantic literature. all models in the paper are executable and written in prolog; minimal familiarity with prolog is assumed. stanley lee inside risks: learning from experience jim horning the nyu ada translator and interpreter the nyu-ada project is engaged in the design and implementation of a translator-interpreter for the ada language. the objectives of this project are twofold: a) to provide an executable semantic model for the full ada language, that can be used for teaching, and serve as a starting point for the design of an efficient ada compiler; b) to serve as a testing ground for the software methodology that has emerged from our experience with the very-high-level language setl. in accordance with these objectives, the nyu-ada system is written in a particularly high-level, abstract setl style that emphasizes clarity of design and user interface over speed and efficiency. a number of unusual design features of the translator and interpreter follow from this emphasis. some of these features are described below. we also discuss the question of semantic specification of programming languages, and the general methodology of "software prototyping" of which the nyu-ada system is a sizeable example. robert b. k. dewar gerald a. fisher edmond schonberg robert froehlich stephen bryant clinton f. goss michael burke how to relieve a programmer from synchronization details an alternative method of specifying concurrent systems is presented. the method consists in starting with a sequential program and next determining an independency relation that allows relaxation of the sequential structure of the program. a programming language and a theoretical background for the method are discussed.
ryszard janicki run-time enhancements for object-oriented programming jim hendler performance analysis of embedded software using implicit path enumeration yau-tsun steven li sharad malik eti resource distributor: guaranteed resource allocation and scheduling in multimedia systems miche baker-harvey controlling garbage collection and heap growth to reduce the execution time of java applications in systems that support garbage collection, a tension exists between collecting garbage too frequently and not collecting garbage frequently enough. garbage collection that occurs too frequently may introduce unnecessary overheads at the risk of not collecting much garbage during each cycle. on the other hand, collecting garbage too infrequently can result in applications that execute with a large amount of virtual memory (i.e., with a large footprint) and suffer from increased execution times due to paging. in this paper, we use a large collection of java applications and the highly tuned and widely used boehm-demers-weiser (bdw) conservative mark-and-sweep garbage collector to experimentally examine the extent to which the frequency of garbage collection impacts an application's execution time, footprint, and pause times. we use these results to devise some guidelines for controlling garbage collection and heap growth in a conservative collector in order to minimize application execution times. then we describe new strategies for controlling heap growth in order to minimize application execution times. tim brecht eshrat arjomandi chang li hang pham a unified code generator for multiple architectures sarah rymal mckie an event-driven debugger for ada claude mauger kevin pammett feedback control real-time scheduling: support for performance guarantees in unpredictable environments (poster session) chenyang lu john a. stankovic tarek abdelzaher sang h. son gang tao using apl as a preprocessing selector from large vsam files michael simpson book review: seamless object-oriented software architecture corporate linux journal staff object-oriented and concurrent program design issues in ada 95 stephen h. kaisler michael b. feldman precise executable interprocedural slices the notion of a program slice, originally introduced by mark weiser, is useful in program debugging, automatic parallelization, program integration, and software maintenance. a slice of a program is taken with respect to a program point p and a variable x; the slice consists of all statements of the program that might affect the value of x at point p. an interprocedural slice is a slice of an entire program, where the slice crosses the boundaries of procedure calls. weiser's original interprocedural-slicing algorithm produces imprecise slices that are executable programs. a recent algorithm developed by horwitz, reps, and binkley produces more precise (smaller) slices by more accurately identifying those statements that might affect the values of x at point p. these slices, however, are not executable. an extension to their algorithm that produces more precise executable interprocedural slices is described together with a proof of correctness for the new algorithm. david binkley a formal description of the unix operating system in this paper we discuss our approach to a formal description of the unix operating system [rit78a] [rit78b] [tho78], using milner's calculus of communicating systems (ccs) [mil80]. the paper focuses on the problems one encounters and the decisions one has to make when describing a system such as unix.
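returning briefly to the brecht/arjomandi/li/pham entry above, here is a simulated heap illustrating the trade-off it describes: collection is triggered when allocation since the last collection exceeds a fraction of the heap, and the heap grows when too little is reclaimed. the constants and the policy itself are invented for illustration and are not the paper's strategies.

# toy collection-trigger and heap-growth policy
class SimulatedHeap:
    def __init__(self, size, trigger_ratio=0.5, grow_factor=1.5):
        self.size = size
        self.live = 0
        self.allocated_since_gc = 0
        self.collections = 0
        self.trigger = trigger_ratio
        self.grow = grow_factor

    def allocate(self, nbytes, still_live_fraction=0.2):
        self.live += nbytes
        self.allocated_since_gc += nbytes
        if self.allocated_since_gc > self.trigger * self.size:
            self.collect(still_live_fraction)

    def collect(self, still_live_fraction):
        self.collections += 1
        self.live = int(self.live * still_live_fraction)   # most objects die
        self.allocated_since_gc = 0
        if self.live > 0.8 * self.size:                     # reclaimed too little:
            self.size = int(self.size * self.grow)          # grow to collect less often

heap = SimulatedHeap(size=1_000_000)
for _ in range(1000):
    heap.allocate(10_000)
print(heap.collections, heap.size)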
we believe that the problems that arise in defining such a system are much less well understood than those, for example, related to the formalization of programming languages. in particular, this work is intended to serve several different purposes. one is an extensive test of the capabilities of ccs. we are applying ccs to the description of a moderately large system. this exercise has uncovered many shortcomings of the formalism; some of these we have overcome, while others are the subject of continuing research. for example, an important and difficult problem is the lack of any direct means of communicating ports as values. thomas w. doeppner alessandro giacalone fair concurrency in actors (abstract only): eager evaluation produces strong convergence gul agha execute and its use this paper examines the use of execute in a large collection of functions found in 468 workspaces in 80 libraries of a large apl system complex. the relationship of execute to coding style and to execution efficiency, such as in compiling, is overviewed. a summary of previous results in static and dynamic frequency counts of the execute token is given and the methodology and results of the study are discussed. will j. roden garth h. foster creating a practical object-oriented methodology judith n. cohen distributed application development with inferno ravi sharma a note introducing syntax through semantic routines j. a. ruf disciplined c yves l. noyelle a practical approach to software quality assurance nikolay s. bukovsky reuse_system software repository tool concepts greg gicca a taxonomy-based comparison of several distributed shared memory systems ming-chit tam jonathan m. smith david j. farber improve bash shell scripts using dialog the dialog command enables the use of window boxes in shell scripts to make their use more interactive mihai bisca using content-derived names for configuration management jeffrey k. hollingsworth ethan l. miller re-targetability in software tools software tool construction is a risky business, with uncertain rewards. many tools never get used. this is a truism: software tools, however brilliantly conceived, well-designed, and meticulously constructed, have little impact unless they are actually adopted by real programmers. while there are no sure-fire ways of ensuring that a tool will be used, experience indicates that _retargetability_ is an important enabler for wide adoption. in this paper, we elaborate on the need for retargetability in software tools, describe some mechanisms that have proven useful in our experience, and outline our future research in the broader area of interoperability and retargetability. premkumar t. devanbu towards partially evaluating reflection in java reflection plays a major role in the programming of generic applications. however, it introduces an interpretation layer which is detrimental to performance. a solution consists of relying on partial evaluation to remove this interpretation layer. this paper deals with improving a standard partial evaluator in order to handle the java reflection api. the improvements basically consist of taking type information into account when distinguishing between static and dynamic data, as well as introducing two new specialization actions: _reflection actions_. benchmarks using the serialization framework show the benefits of the approach.
mathias braux jacques noye pads: a working architecture for a distributed apse dinesh gambhir rinaldo digiorgio roy freedman porting from irix to linux coding for portability to linux: examples from the acrt land vehicle port. george koharchik brian roberts performance bound hierarchies for queueing networks derek l. eager kenneth c. sevcik hci and requirements engineering: exploring human-computer interaction and software engineering methodologies for the creation of interactive software judy brown evaluation criteria for functional specifications s. cardenas m. v. zelkowitz using object-oriented techniques to develop reusable components huiming yu alternative approaches to standardization and portability for forth michael l. gassanenko on loops, dominators, and dominance frontier g. ramalingam the linked class of modula-3 e. levy annealing and data decomposition in vdm s. j. goldsack k. lano the issue of mutual control: synchronization and decision making control for ada tzilla elrad specification versus implementation based on estelle l kovacs a ercsenyi integrating task and data parallelism using shared objects saniya ben hassen henri bal an ada package for dimensional analysis this paper illustrates the use of ada's abstraction facilities---notably, operator overloading and type parameterization---to define an oft-requested feature: a way to attribute units of measure to variables and values. the definition given allows the programmer to specify units of measure for variables, constants, and parameters; checks uses of these entities for dimensional consistency; allows arithmetic between them, where legal; and provides scale conversions between commensurate units. it is not constrained to a particular system of measurement (such as the metric or english systems). although the definition is in standard ada and requires nothing special of the compiler, certain reasonable design choices in the compiler, discussed here at some length, can make its implementation particularly efficient. paul n. hilfinger lightweight extraction of object models from bytecode daniel jackson allison waingold deriving imperative code from functional programs patrice quinton sanjay rajopadhye doran wilde raidframe: rapid prototyping for disk arrays william v. courtright garth gibson mark holland jim zelenka program design by informal english descriptions russell j. abbott run-time design for object-oriented extensions to pascal joseph bergin on the correctness of semantic-syntax-directed translations the correctness of semantic-syntax-directed translators (ssdts) is examined. ssdts are a generalization of syntax-directed translators in which semantic information is employed to partially direct the translator. sufficient conditions for an ssdt to be "semantic-preserving," or "correct," are presented. a further result shows that unless certain conditions are met, it is undecidable, in general, whether an ssdt is semantic-preserving. ramachandran krishnaswamy arthur b. pyster customization: optimizing compiler technology for self, a dynamically-typed object-oriented programming language dynamically-typed object- oriented languages please programmers, but their lack of static type information penalizes performance. our new implementation techniques extract static type information from declaration-free programs. our system compiles several copies of a given procedure, each customized for one receiver type, so that the type of the receiver is bound at compile time. 
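stepping back to the hilfinger entry above on dimensional analysis in ada, here is a python analogue of the same idea built with operator overloading: units are attached to values so that dimensionally inconsistent arithmetic is rejected. the ada package obtains these checks at compile time; this sketch only checks at run time, and the class and unit names are invented.

# run-time dimensional analysis via operator overloading
class Quantity:
    def __init__(self, value, dims):
        self.value = value
        self.dims = dims                  # e.g. {"m": 1, "s": -1}

    def __add__(self, other):
        if self.dims != other.dims:
            raise TypeError(f"cannot add {self.dims} to {other.dims}")
        return Quantity(self.value + other.value, dict(self.dims))

    def __mul__(self, other):
        dims = dict(self.dims)
        for d, p in other.dims.items():
            dims[d] = dims.get(d, 0) + p
            if dims[d] == 0:
                del dims[d]
        return Quantity(self.value * other.value, dims)

    def __repr__(self):
        return f"Quantity({self.value}, {self.dims})"

metre  = Quantity(1.0, {"m": 1})
second = Quantity(1.0, {"s": 1})
speed  = Quantity(3.0, {"m": 1, "s": -1})
print(speed * second)                     # Quantity(3.0, {'m': 1})
print(speed * second + metre)             # ok: both operands are lengths
# speed + metre                           # would raise TypeError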
the compiler predicts types that are statically unknown but likely, and inserts run-time type tests to verify its predictions. it splits calls, compiling a copy on each control path, optimized to the specific types on that path. coupling these new techniques with compile-time message lookup, aggressive procedure inlining, and traditional optimizations has doubled the performance of dynamically-typed object-oriented languages. c. chambers d. ungar about the semantic nested monitor calls l kotulski models and languages for component description and reuse ben whittle the need for closure in large distributed systems b. c. neuman studies of windows nt performance using dynamic execution traces sharon e. perl richard l. sites object-oriented reflection and metalevel architectures {fourth annual} brian foote developing a stage lighting system from scratch lula is a system for computer- assisted stage lighting design and control. whereas other systems for the same purpose are usually the results of long chains of incremental improvements of historic concepts, lula represents a complete redesign. whereas other systems focus on control aspects of lighting, lula focuses on design and generates control information from it. this approach gives significantly more flexibility to the lighting designer and shortens the design process itself. lula's design and implementation draw from a number of disciplines in advanced programming. it is written in scheme and runs atop plt scheme, and benefits from its high-level gui library. lula uses an algebraic model for lighting looks based on just three combinators. it employs functional reactive programming for all dynamic aspects of lighting, and is programmable via a functional reactive domain-specific language. lula is an actual product and has users who have neither interest in nor knowledge of functional programming. michael sperber forward computation of dynamic program slices a dynamic program slice is an executable part of the program whose behavior is identical, for the same program input, to that of the original program with respect to a variable(s) of interest at some execution position. it has been shown that dynamic slicing is useful for the purpose of debugging, testing and software maintenance. the existing methods of dynamic slice computation are based on "backward" analysis, i.e., after the execution trace of the program is first recorded, the dynamic slice algorithm traces backwards the execution trace to derive dynamic dependence relations that are then used to compute dynamic slices. for many programs, during their execution extremely high volume of information may be recorded that may prevent accurate dynamic slice computation. in this paper we present a novel approach of dynamic slice computation, referred to as forward approach of dynamic slice computation. in this method, dynamic slices are computed during program execution without major recording of the execution trace. the major advantage of the forward approach is that space complexity is bounded as opposed to the backward methods of slice computation. bogdan korel satish yalamanchili middleware for building adaptive systems via configuration sanjai narain ravichander vaidyanathan stanley moyer william stephens kirthika parmeswaran abdul rahim shareef vars: increasing margins through free software the internet has shifted the power of presence, acquisition and is in the beginnings of shifting the power of commerce. 
many value added resellers (vars) have recognized this shift and have been able to change with the new economy. dean taylor an object-oriented framework for graphical programming (summary paper) steven p. reiss surveyor's forum: idiomatic programming alan r. feuer narain h. gehani using smalltalk for wait-free implementation of highly-concurrent objects w. craig scratchley compiler-directed page coloring for multiprocessors this paper presents a new technique, _compiler-directed page coloring,_ that eliminates conflict misses in multiprocessor applications. it enables applications to make better use of the increased aggregate cache size available in a multiprocessor. this technique uses the compiler's knowledge of the access patterns of the parallelized applications to direct the operating system's virtual memory page mapping strategy. we demonstrate that this technique can lead to significant performance improvements over two commonly used page mapping strategies for machines with either direct-mapped or two-way set- associative caches. we also show that it is complementary to latency-hiding techniques such as prefetching.we implemented compiler-directed page coloring in the suif parallelizing compiler and on two commercial operating systems. we applied the technique to the spec95fp benchmark suite, a representative set of numeric programs. we used the simos machine simulator to analyze the applications and isolate their performance bottlenecks. we also validated these results on a real machine, an eight-processor 350mhz digital alphaserver. compiler-directed page coloring leads to significant performance improvements for several applications. overall, our technique improves the spec95fp rating for eight processors by 8% over digital unix's page mapping policy and by 20% over a page coloring, a standard page mapping policy. the suif compiler achieves a spec95fp ratio of 57.4, the highest ratio to date. edouard bugnion jennifer m. anderson todd c. mowry mendel rosenblum monica s. lam structuring operating system aspects: using aop to improve os structure modularity yvonne coady gregor kiczales mike feeley norm hutchinson joon suan ong software visualization in the desert environment steven p. reiss a new technique for induction variable removal haigeng wang alexandru nicolau roni potasman multiple view analysis of designs boumediene belkhouche cuauhtemoc lemus olalde dynamic binding for an extensible system przemyslaw pardyak brian n. bershad compile time syntax analysis of apl programs we present a technique for the compile-time determination of the syntactic attribute of names essential for the development of a compile-time parser for apl. the method is applicable to a large class of apl programs namely, programs which do not utilize certain features of the language allowing dynamic changes in the syntactic meaning of program statements. empirical evidence supports the hypothesis that the restricted apl language encompasses almost all existing apl code. zvi weiss harry j. saal the time is ripe for a dyadic execute a dyadic execute primitive will greatly simplify the building of artificial intelligence applications using apl. this function can also be used to remove much of the need for labels within defined functions. a suggested name for this new apl primitive is "rule". zdenek v. jizba compositional references for stateful functional programming koji kagawa a library of generic algorithms in ada it is well-known that data abstractions are crucial to good software engineering practice. 
we argue that algorithmic abstractions, or generic algorithms, are perhaps even more important for software reusability. generic algorithms are parameterized procedural schemata that are completely independent of the underlying data representation and are derived from concrete, efficient algorithms. we discuss this notion with illustrations from the structure of an ada library of reusable software components we are presently developing. david r. musser alexander a. stepanov a taxonomy of domain-specific reuse problems and their resolutions - version 1.0 wing lam ben whittle ...lest we forget the ways of old... gregg w. taylor counterpoint: do programmers need seatbelts? richard gabriel a layered approach to building open aspect-oriented systems: a framework for the design of on-demand system demodularization paniti netinant tzilla elrad mohamed e. fayad a portable user interface for a scientific programming environment the subject of integrated programming environments for scientific computing has become very popular over the last few years. environments such as rn [cchk87] are being constructed to help coordinate the disjoint activities of editing, debugging, and performance tuning typically seen in the program development cycle. one key aspect of an integrated development setting is the library of user interface tools which are available to the environment builders. projects such as andrew [msch86] have begun to construct reusable user interface libraries for client applications. this paper describes the interface tool kit for the faust project being conducted at the university of illinois. faust is targeted at building a coherent development environment for scientific applications through the use of a library of portable user interface utilities. vincent a. guarna yogesh gaur lisp systems in the 1990s d. kevin layer chris richardson breadth-first search in the eight queens problem c. k. yuen m. d. feng formal methods and standards haim kilov thoughts on large scale programming projects robert mclaughlin the long view on linux doc searls external representations of objects of user-defined type the portable programming language (ppl) is one of a number of recently designed programming languages that enable the user to define new types by giving their representations and operations in terms of those of previously available types. such provisions for the construction of objects of user- defined type have been discussed elsewhere; this work concerns the related problem of the external representations of such objects, both on input-output media and as written constants within the program text. we introduce an enhancement to the ppl design allowing specification of the external representations of objects of user-defined type. this extension to the ppl design means that objects of user-defined type can be read, written, and used as constants exactly as if their representations had been selected by the writer of the ppl compiler. the implementation and use of the added facilities are also discussed. peter l. wallis the blender book clifford anderson selecting locking primitives for parallel programming paul e. 
mckenney installing linux via nfs greg hankins motivation for and current work on copaging cache (abstract only) tools currently employed to aid in fully utilizing the processor capability of computer systems burdened with inherently slower memories include the use of virtual memory, instruction prefetching, and the implementation of cache memories as highspeed buffers between the cpu and primary memory. copaging cache is a cache which allows the overlapping of cache page-in or page-out operations with instruction execution(1). the motivation for a copaging cache comes partly from the successes and shortcomings of the 3 tools mentioned above. the presentation will be concerned with these motivating factors and briefly survey current research in the design of a viable copaging cache architecture. linda turpin designing for reuse: a case study david m. wade a formula for the re-inspection decision various recommendations concerning the re-inspection decision can be found in the literature. some are based on general assumptions or estimates concerning the downstream cost of defects and the proportion of defects that get past the inspection "filter". none of these recommendations allow you to work with your own data or assumptions concerning specific types of documents. this article provides a way to make the re- inspection decision using your own data. tom adams eros: a fast capability system eros is a capability-based operating system for commodity processors which uses a single level storage model. the single level store's persistence is transparent to applications. the performance consequences of support for transparent persistence and capability-based architectures are generally believed to be negative. surprisingly, the basic operations of eros (such as ipc) are generally comparable in cost to similar operations in conventional systems. this is demonstrated with a set of microbenchmark measurements of semantically similar operations in linux.the eros system achieves its performance by coupling well-chosen abstract objects with caching techniques for those objects. the objects (processes, nodes, and pages) are well-supported by conventional hardware, reducing the overhead of capabilities. software-managed caching techniques for these objects reduce the cost of persistence. the resulting performance suggests that composing protected subsystems may be less costly than commonly believed. jonathan s. shapiro jonathan m. smith david j. farber the application of forth engines as coprocessors for the macintosh computer albert pierce elizabeth pierce the performance of an object-oriented threads package presto is an object-oriented threads package for writing parallel programs on a shared- memory multiprocessor. the system adds thread objects and synchronization objects to c++ to allow programmers to create and control parallelism. presto's object-oriented structure, along with its user-level thread implementation, simplifies customization of thread management primitives to meet application-specific needs. the performance of thread primitives is crucial for parallel programs with fine-grained structure; therefore, the principal objective of this effort was to substantially improve presto's performance under heavy loads without sacrificing the benefits of its object-oriented interface. we discuss design and implementation issues for shared-memory multiprocessors, and the performance impact of various designs is shown through measurements on a 20-processor sequent symmetry multiprocessor. john e. 
faust henry m. levy design and static semantics of algorithm languagel zhang jiazhong wang yanbing zheng mingchun prototype-based languages: from a new taxonomy to constructive proposals and their validation christophe dony jacques malenfant pierre cointe the o-o-o methodology for the object-oriented life cycle b. henderson-sellers j. m. edwards book review: prime time freeware corporate linux journal staff running 7th edition unix programs on a vax in compatibility mode g r guenther reducing inter-vector-conflicts in complex memory systems a. m. del corral j. m. llabería minimal multitasking operating systems for real-time controllers in many dedicated microprocessor applications, a full-feature operating system is not required. a methodology is presented for the design of extremely minimal operating systems for such applications. the emphasis is on providing the most basic facilities quickly, without precluding later improvements and additions. geoffrey h. kuenning adding inheritance to ada jurgen f. h. winkler critical research directions in programming languages parenthetically speaking: more than just words: lambda, the ultimate political party kent m. pitman block structure and object oriented languages ole lehrmann madsen the talking toaster paul frenger aliasing, syntax, and string handling in fortran and in c james giles optimizing direct threaded code by selective inlining ian piumarta fabio riccardi a parallel programming environment supporting multiple data-parallel modules bradley k. seevers michael j. quinn philip j. hatcher omen: a strategy for testing object-oriented software this paper presents a strategy for structural testing of object-oriented software systems with possibly unknown clients and unknown information about invoked methods. by exploiting the combined points-to and escape analysis developed for compiler optimization, our testing paradigm does not require a whole program representation to be in memory simultaneously for testing analysis. potential effects from outside the component under test are easily identified and reported to the tester. as client and server methods become known, the graph representation of object relationships is easily extended, allowing the computation of test tuples to be performed in a demand-driven manner, without requiring unnecessary computation of test tuples based on predictions of potential clients. amie l. souter lori l. pollock memory management units for microcomputer operating systems i rattan bottom in the imperative world apostolos syropoulos alexandros karakos annotating objects for transport to other worlds david ungar the pdbg process-level debugger for parallel and distributed programs joão lourenço jose c. cunha casting in c++: bringing safety and smartness to your programs g. bowden wise linux in an embedded communications gateway this article describes a communications gateway system, why linux was chosen for the implementation and why linux is an excellent choice for similar gateways greg herlein oracles for checking temporal properties of concurrent systems laura k. dillon qing yu letters to the editor corporate linux journal staff configuring stand-alone smalltalk-80 applications the smalltalk-80 programming environment, though powerful for prototyping applications, does not have any mechanisms for constructing a stand-alone version of an application. traditionally, the application is bundled with an image including the entire development environment. 
production applications frequently require that the only interface visible to the end user be that of the application. a common misperception among smalltalk-80 application developers is that it is impossible to: develop and deliver applications containing proprietary algorithms, prevent inspection and modification of the application, separate the development environment from the delivered application, provide annotation of the application classes and methods without actually revealing the source code to the end user. in this paper, we introduce various techniques and mechanisms for meeting these requirements. s. sridhar comparative models of the file assignment problem lawrence w. dowdy derrell v. foster unidraw: a framework for building domain-specific graphical editors unidraw is a framework for creating object-oriented graphical editors in domains such as technical and artistic drawing, music composition, and cad. the unidraw architecture simplifies the construction of these editors by providing programming abstractions that are common across domains. unidraw defines four basic abstractions: components encapsulate the appearance and behavior of objects, tools support direct manipulation of components, commands define operations on components, and external representations define the mapping between components and a file or database. unidraw also supports multiple views, graphical connectivity, and dataflow between components. this paper presents unidraw and three prototype domain-specific editors we have developed with it: a schematic capture system, a user interface builder, and a drawing editor. experience indicates a substantial reduction in implementation time and effort compared with existing tools. j. m. vlissides m. a. linton an efficient approach to computing fixpoints for complex program analysis a chief source of inefficiency in program analysis using abstract interpretation comes from the fact that a large context (i.e., problem state) is propagated from node to node during the course of an analysis. this problem can be addressed and largely alleviated by a technique we call context projection, which projects an input context for a node to the portion that is actually relevant and determines whether the node should be reevaluated based on the projected context. this technique reduces the cost of an evaluation and eliminates unnecessary evaluations. therefore, the efficiency of computing fixpoints over general lattices is greatly improved. a specific method, reachability, is presented as an example to accomplish context projection. experimental results using reachability show very convincing speedups (more than eight for larger programs) that demonstrate the practical significance of context projection. li-ling chen williams ludwell harrison
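a minimal, generic sketch of the context-projection idea from the fixpoint entry above, not the authors' system: contexts are modelled as sets of variable ids, each node declares which part of the context it actually reads, and a node is re-evaluated only when its projected input changes. the node structure, the transfer functions, and the deliberately simplified solver (merging is plain set union) are assumptions made for the illustration.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.UnaryOperator;

// toy worklist fixpoint solver illustrating context projection
public class ContextProjectionSketch {

    static final class Node {
        final String name;
        final Set<Integer> relevant;                   // variables this node actually reads
        final UnaryOperator<Set<Integer>> transfer;    // node's transfer function
        final List<Node> successors = new ArrayList<>();
        Set<Integer> lastProjectedInput;               // memo of the last projected input
        Set<Integer> pendingInput = new HashSet<>();   // accumulated incoming context

        Node(String name, Set<Integer> relevant, UnaryOperator<Set<Integer>> transfer) {
            this.name = name;
            this.relevant = relevant;
            this.transfer = transfer;
        }
    }

    // project an incoming context down to the portion the node cares about
    static Set<Integer> project(Node n, Set<Integer> context) {
        Set<Integer> p = new HashSet<>(context);
        p.retainAll(n.relevant);
        return p;
    }

    static void solve(Node entry, Set<Integer> initialContext) {
        entry.pendingInput = new HashSet<>(initialContext);
        Deque<Node> worklist = new ArrayDeque<>();
        worklist.push(entry);
        while (!worklist.isEmpty()) {
            Node n = worklist.pop();
            Set<Integer> projected = project(n, n.pendingInput);
            // re-evaluate only if the projected input changed since last time
            if (projected.equals(n.lastProjectedInput)) continue;
            n.lastProjectedInput = projected;
            Set<Integer> out = n.transfer.apply(projected);
            System.out.println(n.name + ": " + projected + " -> " + out);
            for (Node s : n.successors) {
                s.pendingInput.addAll(out);            // union-merge into successor input
                worklist.push(s);
            }
        }
    }

    public static void main(String[] args) {
        Node b = new Node("b", Set.of(2), s -> s);
        Node a = new Node("a", Set.of(1, 2), s -> {
            Set<Integer> r = new HashSet<>(s);
            r.add(2);
            return r;
        });
        a.successors.add(b);
        solve(a, Set.of(1, 2, 3));   // node a sees only {1,2}; node b only {2}
    }
}
```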
distributed and multiprocessor scheduling steve j. chapin attached processors in apl an extension to the semantics of apl is proposed. it allows one apl processor (aplp) to attach another processor, which may be an (aplp) with its own workspace, or some other processor. a special case is signon from a terminal, where the attaching processor becomes a human user acting through a workstation. attached processors communicate through shared variables, and the impact on the apl syntax is minimal. the mechanism has the potential to allow, for example, session managers, editors, and event-handling support to be implemented in a natural way through conventional apl programming. such facilities will therefore enjoy an entirely new flexibility and open-endedness, not seen in comparative proposals, which usually depend for their realisation on new apl language constructs. karl soop an object-based model for prototyping user interfaces of cooperative systems mao bing xie li an experimental study of several cooperative register allocation and instruction scheduling strategies cindy norris lori l. pollock model checking distributed objects design nima kaveh apl tools and techniques and their effect on good programming style what is style? we know that it certainly is not confined to any specific field. instead, it applies to all creative endeavors, programming being one of those endeavors. however, before i describe style relative to writing programs, i want to discuss something with which we are all more familiar - style as it relates to writing prose since almost all of us have, at one time or another, studied literature and writing styles. what makes one author's works great and another's mediocre or poor? some may say good advertising or public whims - and sometimes this may seem true; however, good writing style is the real reason. and, you may ask, 'what factors make up writing style?' thomas w. cook "who cares about elegance?" the role of aesthetics in programming language design bruce j. maclennan genie forth roundtable jim callahan linux for suits: now what: are we going to let aol turn the net into tv 2.0... doc searls contemporary software development environments there are a wide variety of software development tools and methods currently available or which could be built using current research and technology. these tools and methods can be organized into four software development environments, ranging in complexity from a simple environment containing few automated tools or expensive methods to a complete one including many automated tools and built around a software engineering database. the environments were designed by considering the life-cycle products generated during two classes of software development projects. relative cost figures for the environments are offered and related issues, such as standardization, effectiveness, and impact, then addressed. william e. howden space-efficient closure representations many modern compilers implement function calls (or returns) in two steps: first, a closure environment is properly installed to provide access for free variables in the target program fragment; second, the control is transferred to the target by a "jump with arguments (or results)". closure conversion, which decides where and how to represent closures at runtime, is a crucial step in compilation of functional languages. we have a new algorithm that exploits the use of compile-time control and data flow information to optimize closure representations. by extensive closure sharing and allocating as many closures in registers as possible, our new closure conversion algorithm reduces heap allocation by 36% and memory fetches for local/global variables by 43%; and improves the already-efficient code generated by the standard ml of new jersey compiler by about 17% on a decstation 5000. moreover, unlike most other approaches, our new closure allocation scheme satisfies the strong "safe for space complexity" rule, thus achieving good asymptotic space usage. zhong shao andrew w. appel adding type parameterization to the java language ole agesen stephen n. freund john c. mitchell
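a small illustrative aside on the preceding entry (adding type parameterization to the java language), not code from the proposal itself: a container parameterized by its element type lets the compiler check uses that an unparameterized version would only catch at run time through casts; the cell class is invented for the sketch.

```java
import java.util.ArrayList;
import java.util.List;

// a tiny parameterized container; the class name is invented for this sketch
class Cell<T> {
    private T value;
    void set(T v) { value = v; }
    T get() { return value; }
}

public class TypeParameterizationSketch {
    public static void main(String[] args) {
        // with parameterization, the element type is checked statically
        Cell<String> c = new Cell<>();
        c.set("hello");
        String s = c.get();            // no cast needed

        List<Integer> xs = new ArrayList<>();
        xs.add(1);
        // xs.add("two");              // would be rejected at compile time

        System.out.println(s + " " + xs);
    }
}
```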
the interaction of social issues and software architecture alistair cockburn distributed algorithms visualisation for educational purposes we present our work on building interactive continuous visualisations of distributed algorithms for educational purposes. the animations are comprised by a set of visualisation windows. the visualisation windows are designed so that they demonstrate i) the different behaviours of the algorithms while running in different systems, ii) the different behaviours that the algorithms exhibit under different timing and workload of the system iii) the time and space complexities of the algorithms and iv) the "key ideas" of the functionality of the algorithms. visualisations have been written for a set of 10 algorithms that are taught in a distributed algorithms advanced undergraduate course. boris koldehofe marina papatriantafilou philippas tsigas programming languages should not have comment statements m. j. kaelbling distributed system evolution-some observations june power responsive sequential processes neelam soundararajan roger l. costello a focus on j: past, present and future ray polivka fon mcgrew reusable ada libraries supporting infinite data structures arthur g. duncan protecting java code via code obfuscation douglas low replication and fault-tolerance in the isis system kenneth p. birman software specification and design with ada: a disciplined approach ken shumate spades - a specification and design system and its graphical interface spades is a specification system consisting of a method, a language, and a set of tools. these components are based on a set of concepts, which forms its abstract kernel. spades supports the specification of software systems, in particular of real time software. the system to be developed is modelled using the entity-relationship-concept. while this seems to be the best way for storing specifications in a computer, it does not automatically lead to representations equally comfortable for humans. this is why spades, which has been available for some time, has recently been extended by a graphical interface. this paper gives a brief survey of the system, in particular of its new component. j. ludewig m. glinz h. huser g. matheis h. matheis m. f. schmidt long-lived and adaptive atomic snapshot and immediate snapshot (extended abstract) long-lived and adaptive to point contention implementations of snapshot and immediate snapshot objects in the read/write shared-memory model are presented. in [2] we presented adaptive algorithms for mutual exclusion, collect and snapshot. however, the collect and snapshot algorithms were adaptive only when the number of local primitive operations that a process performs are ignored, i.e., not counted. the number of primitive local steps (operations that do not access the shared memory) in the collect and snapshot operations presented in [2] is o(nk^3) and o(nk^4) respectively where n is the total number of processes in the system and k is the encountered contention. here we developed new techniques that enabled us to achieve fully adaptive implementations in which the step complexity (combined local and shared) of any operation is bounded by a function of the number of processes that are concurrent with the operation, in particular, o(k^4) for the snapshot implementation. yehuda afek gideon stupp dan touitou programming pearls jon l.
bentley polylith: an environment to support management of tool interfaces polylith is the name of a set of enhanced execution time system services along with development tools and an interfacing methodology.1 as a system, polylith supports the reliable union of many component tools, addressing the problems of data interchange and synchronization between these tools. it facilitates reuse of code, and promotes the notion that construction of large programs should be viewed instead as orchestration of services. the polylith is visible as a grammar in which instances of environments2 are precisely and rapidly specified; it is, through compilation and execution of assertions in that language, a medium through which many programs and tools can be united with impunity. this paper presents an overview of the polylith architecture, along with some brief remarks on the requirements analysis leading to project polylith at the university of illinois. section 2 presents this architecture, summarizing language and data transformation issues. simple examples are included. section 3 introduces one particular instance of an environment specified within polylith called minion. it is presented as an extended example, showing how the polylith is utilized to construct an enthusiastic assistant for mathematical problem solving. the closing section contains some evaluation of how polylith affects the task of environment development. james purtilo managing interference anthony finkelstein george spanoudakis david till tool interfaces e. ploederederm s. boyd i. campbell r. taylor r. thall further comments on implementation of general semaphores c. samuel hsieh corrigenda: an empirical validation of software cost estimation models chris f. kemerer the point of view notion for multiple inheritance we examine several problems related to the preservation of the independence principle inheritance. this principle states that all the characteristics of independent superclasses must be inherited by subclasses, even if there are name conflicts. in this context, a conventional approach is to use explicit class selection. we show that this mechanism suffers from serious limitations, and leads to inhibition of refinement and genericity. our experimental object- oriented language rome introduces the "point of view" notion (using an "as- expressions" mechanism) which solves these problems. bernard carre jean-marc geib a simple taxonomy for distributed mutual exclusion algorithms michel raynal a practical application of the ceiling protocol in a real-time system c. d. locke j. b. goodenough generic data structures in ucsd pascal david v. moffat experience report: using resolve/c++ for commercial software academic research sometimes suffers from the "ivory tower" problem: ideas that sound good in theory do not necessarily work well in practice. an example of research that potentially could impact practice over the next few years is a novel set of component-based software engineering design principles, known as the _resolve discipline_. this discipline has been taught to students for several years [23], and previous papers (e.g., [24]) have reported on student- sized software projects constructed using it. here, we report on a substantial commercial product family that was engineered using the same principles --- an application that we designed, built, and continue to maintain for profit, not as part of a research project. 
we discuss the impact of adhering to a very prescriptive set of design principles and explain our experience with the resulting applications. lessons learned should benefit others who might be considering adopting such a component-based software engineering discipline in the future. joseph e. hollingsworth lori blankenship bruce w. weide software interoperability: principles and practice jack c. wileden alan kaplan psd - a portable scheme debugger pertti kellomäki what are race conditions?: some issues and formalizations in shared- memory parallel programs that use explicit synchronization, race conditions result when accesses to shared memory are not properly synchronized. race conditions are often considered to be manifestations of bugs, since their presence can cause the program to behave unexpectedly. unfortunately, there has been little agreement in the literature as to precisely what constitutes a race condition. two different notions have been implicitly considered: one pertaining to programs intended to be deterministic (which we call general races) and the other to nondeterministic programs containing critical sections (which we call data races). however, the differences between general races and data races have not yet been recognized. this paper examines these differences by characterizing races using a formal model and exploring their properties. we show that two variations of each type of race exist: feasible general races and data races capture the intuitive notions desired for debugging and apparent races capture less accurate notions implicitly assumed by most dynamic race detection methods. we also show that locating feasible races is an np-hard problem, implying that only the apparent races, which are approximations to feasible races, can be detected in practice. the complexity of dynamically locating apparent races depends on the type of synchronization used by the program. apparent races can be exhaustively located efficiently only for weak types of synchronization that are incapable of implementing mutual exclusion. this result has important implications since we argue that debugging general races requires exhaustive race detection and is inherently harder than debugging data races (which requires only partial race detection). programs containing data races can therefore be efficiently debugged by locating certain easily identifiable races. in contrast, programs containing general races require more complex debugging techniques. robert h. b. netzer barton p. miller control flow and data structure documentation: two experiments two experiments were carried out to assess the utility of external documentation aids such as macro flowcharts, pseudocode, data structure diagrams, and data structure descriptions. a 223 line pascal program which manipulates four arrays was used. the program interactively handles commands that allow the user to manage five lists of items. a comprehension test was given to participants along with varying kinds of external documentation. the results indicate that for this program the data structure information was more helpful than the control flow information, independently of whether textual or graphic formats were used. ben shneiderman stop the presses corporate linux journal staff session 11a: verification c. ghezzi logc: a language and environment for embedded rule based systems logc incorporates the rule oriented programming to the procedure oriented programming in c. 
it is designed to support knowledge inference for intelligent problem solving. this paper outlines the design features and demonstrates some results about inference efficiency by comparison with other ai languages. feng yulin huang tao li jing maude jose meseguer carolyn talcott using yacc and lex with c++ bruce hahne hiroyuki sato on the asymptotic behavior of time-sharing systems lester lipsky chee-min henry lieu abolfazl tehranipour appie van de liefvoort a comparison of techniques for the specification of external system behavior the elimination of ambiguity, inconsistency, and incompleteness in a software requirements specification (srs) document is inherently difficult, due to the use of natural language. the focus here is a survey of available techniques designed to reduce these negatives in the documentation of a software product's external behavior. alan m. davis message passing and administrators in ada f. hosch progress measures and stack assertions for fair termination nils klarlund system validation via constraint modeling richard c. waters building analytical models into an interactive performance prediction tool in this paper we describe an interactive tool designed for performance prediction of parallel programs. static performance prediction, in general, is a very difficult task. in order to avoid some inherent problems, we concentrate on reasonably structured scientific programs. our prediction system, which is built as a sub-system of a larger interactive environment, uses a parser, dependence analyzer, database and an x-window based front end in analyzing programs. the system provides the user with execution times of different sections of programs. when there are unknowns involved, such as number of processors or unknown loop bounds, the output is an algebraic expression in terms of these variables. we propose a simple analytical model as an attempt to predict performance degradation due to data references in hierarchical memory systems. the predicted execution times of some lawrence livermore loop kernels are given together with the experimental values obtained by executing the loops on alliant fx/8. d. arapattu d. gannon portable software in modular pascal c. neusius an incremental approach to structural testing of concurrent software structural testing of a concurrent program p involves the selection of paths of p according to a structure-based criterion. a common approach is to derive the reachability graph (rg) of p, select a set of paths of p, derive one or more inputs for each selected path, and force deterministic executions of p according to the selected paths and their inputs. the use of rg(p) for test path selection has the state explosion problem, since the number of states of rg(p) is an exponential function of the number of processes in p. in this paper, we present a new incremental approach to structural testing of p. based on the hierarchy of processes in p, our incremental testing approach is to integrate processes in p in a bottom-to-top manner. when a set s of processes in p at the same level are integrated, we construct a reduced rg for s such that the reduced rg contains all synchronizations involving the processes in s and some of the synchronizations involving processes at lower levels in order to connect synchronizations involving processes in s.
after the selection of paths, rg(s) is further reduced in order to retain only some of the synchronizations involving processes in s that are needed in order to connect synchronizations between s and other processes in p. our incremental approach alleviates the state explosion problem and offers other advantages. pramod v. koppol kuo-chung tai some experiments in global microcode compaction global microcode compaction is an open problem in firmware engineering. although fisher's trace scheduling method may produce significant reductions in the execution time of compacted microcode, it has some drawbacks. there have been four methods. tree, srdag, itsc , and gddg, presented recently to mitigate those drawbacks in different ways. the purpose of the research reported in this paper is to evaluate these new methods. in order to do this, we have tested the published algorithms on several unified microcode sequences of two real machines and compared them on the basis of the results of experiments using three criteria: time efficiency, space efficiency, and complexity. b. su s. ding using corba and jdbc to produce three tier systems barry cornelius upfront corporate linux journal staff recovery management in quicksilver one price of extensibility and distribution, as implemented in quicksilver, is a more complicated set of failure modes, and the consequent necessity of dealing with them. in traditional operating systems, services (e.g., file, display) are intrinsic pieces of the kernel. process state is maintained in kernel tables, and the kernel contains explicit cleanup code (e.g., to close files, reclaim memory, and get rid of process images after hardware or software failures). quicksilver, however, is structured according to the client-server model, and as in many systems of its type, system services are implemented by user-level processes that maintain a substantial amount of client process state. examples of this state are the open files, screen windows, address space, etc., belonging to a process. failure resilience in such an environment requires that clients and servers be aware of problems involving each other. examples of the way one would like the system to behave include having files closed and windows removed from the screen when a client terminates, and having clients see bad return codes (rather than hanging) when a file server crashes. this motivates a number of design goals: properly written programs (especially servers) should be resilient to external process and machine failures, and should be able to recover all resources associated with failed entities. server processes should contain their own recovery code. the kernel should not make any distinction between system service processes and normal application processes. to avoid the proliferation of ad-hoc recovery mechanisms, there should be a uniform system- wide architecture for recovery management. a client may invoke several independent servers to perform a set of logically related activitites (a unit of work) that must execute atomically in the presence of failures, that is, either all the related activities should occur or none of them should. the recovery mechanism should support this. in quicksilver, recovery is based on the database notion of atomic transactions, which are made available as a system service to be used by other, higher-level servers. this allows meeting all the above design goals. 
software portability is important in the quicksilver environment, dictating that transaction-based recovery be accessible to conventional programming languages rather than a special-purpose one such as argus [liskov84]. to accommodate servers with diverse recovery demands, the low-level primitives of commit coordination and log recovery are exposed directly rather than building recovery on top of a stable-storage mechanism such as in cpr [attanasio87] or recoverable objects such as those in camelot [spector87] or clouds [allchin & mckendry 83]. r. haskin y. malachi w. sawdon g. chan software environments workshop report a recent workshop identified a variety of issues fundamental to advancing the state-of-the-art in software environments. in addition, activities were specified to address these issues and provide incremental improvement in the near and medium term. even though the sets of issues and activities are incomplete, they are reported here to seed the community's thinking about what is needed to advance the state-of-the-art for software environments and assist in establishing long-range goals, identifying and defining specific projects, and identifying the coordination needed among the projects. william e. riddle lloyd g. williams product review: igel etherminal 3x michael k. johnson principles of functional reactive programming paul hudak a note on ben-ari's concurrent programming system anthony j. dos reis apl thinking: examples in an effort to understand "apl thinking", we examine a few selected examples of using apl to solve specific problems, namely: compute the median of a numerical vector; simulate the replicate function; string search; carry forward work-to-be-done in excess of capacity; rotate concentric rectangular rings in a matrix; find column indices of pivots in an echelon matrix. these examples are drawn from our teaching experience as well as from apl literature. we are particularly interested in studying thinking processes underlying alternative solutions to such problems --- i.e., our goal is to "get inside the head" of the apl programmer. analyses include reconstructing thoughts, comparing alternative approaches, and, in general, scrutinizing supposed characteristics of apl thinking. murray eisenberg howard a. peelle coreldraw for linux: f/x and design clifford anderson executable object modeling with statecharts david harel eran gery a retargetable instruction reorganizer extant peephole optimizers can perform many optimizations that are handled by higher-level optimizers. this paper describes a retargetable instruction reorganizer that performs targeting and evaluation order determination by applying a well known algorithm for optimal code generation for expressions to object code. by applying the algorithm to object code after code generation and optimization, a phase ordering problem often encountered by higher-level optimizers is avoided. it minimizes the number of registers and temporaries required to compile expressions by rearranging machine instructions. for some machines, this can result in smaller and faster programs. by generalizing its operation, the reorganizer can also be used to reorder instructions to avoid delays in pipelined machines. for one pipelined machine, it has provided a 5 to 10 percent improvement in the execution speed of benchmark programs. jack w. davidson
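as a hedged aside on the instruction reorganizer entry above: the "well known algorithm for optimal code generation for expressions" it refers to is essentially the classical labelling scheme for expression trees, and the fragment below sketches that register-need labelling in a generic form (the reorganizer itself works on object code, which this toy does not attempt; all names are invented for the sketch).

```java
// a minimal sketch of Sethi-Ullman style register-need labelling for an
// expression tree; evaluating the subtree with the larger label first
// minimizes the number of registers needed for the whole expression.
public class EvalOrderSketch {

    static final class Expr {
        final Expr left, right;          // null children mean a leaf operand
        Expr(Expr left, Expr right) { this.left = left; this.right = right; }
        static Expr leaf() { return new Expr(null, null); }
    }

    // number of registers needed to evaluate e without spilling
    static int need(Expr e) {
        if (e.left == null && e.right == null) return 1;  // a leaf loads into one register
        int l = need(e.left);
        int r = need(e.right);
        // equal needs cost one extra register; otherwise evaluate the bigger side first
        return (l == r) ? l + 1 : Math.max(l, r);
    }

    public static void main(String[] args) {
        // (a + b) * (c + d) needs 3 registers; a + (b + (c + d)) needs only 2
        Expr balanced = new Expr(new Expr(Expr.leaf(), Expr.leaf()),
                                 new Expr(Expr.leaf(), Expr.leaf()));
        Expr rightChain = new Expr(Expr.leaf(),
                                   new Expr(Expr.leaf(),
                                            new Expr(Expr.leaf(), Expr.leaf())));
        System.out.println(need(balanced));   // 3
        System.out.println(need(rightChain)); // 2
    }
}
```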
programming distributed fault tolerant systems pedro de las heras-quirós jesus m. gonzalez-barahona jose centeno-gonzalez the quintessential linux benchmark: all about the "bogomips" number displayed when linux boots william van dorst compiling c for vectorization, parallelization, and inline expansion practical implementations of real languages are often an excellent way of testing the applicability of theoretical principles. many stresses and strains arise from fitting practicalities, such as performance and standard compatibility, to theoretical models and methods. these stresses and strains are valuable sources of new research and insight, as well as an oft-needed check on the egos of theoreticians. two fertile areas that are often explored by implementations are places where tractable models fail to match practice. this can lead to new models, and may also affect practice (e.g., the average programming language has become more context free over the last several decades). places where existing algorithms fail to deal with practical problems effectively, frequently because the problems are large in some dimension that has not been much explored. the present paper discusses the application of a much studied body of algorithms and techniques [alle 83, kklw 80, bane 76, wolf 78, wolf 82, kenn 80, lamp 74, huso 82] for vectorizing and optimizing fortran to the problem of vectorizing and optimizing c. in the course of this work some algorithms were discarded, others invented, and many were tuned and modified. the experience gave us insight into the strengths and weaknesses of the current theory, as well as into the strong and weak points of c on vector/parallel machines. this paper attempts to communicate some of those insights. r. allen s. johnson type-based hot swapping of running modules (extended abstract) while dynamic linking has become an integral part of the run-time execution of modern programming languages, there is increasing recognition of the need for support for hot swapping of running modules, particularly in long-lived server applications. the interesting challenge for such a facility is to allow the new module to change the types exported by the original module, while preserving type safety. this paper describes a type-based approach to hot swapping running modules. the approach is based on a reflective mechanism for dynamically adding type sharing constraints to the type system, realized by programmer-defined version adapters in the run-time. dominic duggan unicstep - a visual stepper for common lisp: portability and language aspects ivo haulsen angela sodan area-efficient buffer binding based on a novel two-port fifo structure in this paper, we address the problem of minimizing buffer storage requirement in buffer binding for sdf (synchronous dataflow) graphs. first, we propose a new two-port fifo buffer structure that can be efficiently shared by two producer/consumer pairs. then we propose a buffer binding algorithm based on this two-port buffer structure for minimizing the buffer size requirement. experimental results demonstrate 9.8%~37.8% improvement in buffer requirement compared to the conventional approaches. kyoungseok rha kiyoung choi issues in object data management (panel session) jacob stein tim andrews bill kent mary lumas dan weinreb early storage reclamation in a tracing garbage collector timothy harris inside the apl2 workspace with the advent of ibm's apl2 program product (5668-899) which includes nested arrays and new internal types of data, the physical layout of the workspace has been redesigned.
this paper presents the general workspace layout, gives details on the more important entries in the workspace, and discusses the philosophy behind changes made from vs apl. james a. brown a generic list implementation peter mcgavin roger young best of technical support corporate linux journal staff a position paper on compile-time program analysis barbara g. ryder towards a framework for managing inconsistency between multiple views bashar nuseibeh pclos: stress testing clos experiencing the metaobject protocol this paper demonstrates that the clos metaobject protocol approach to defining and implementing an object model is very powerful. clos is an object-oriented language that is based on common lisp and is in the process of being standardized. implementations of clos are themselves object-oriented with all major building blocks of the language being instances of system classes. a metaobject protocol provides a framework for clos implementations by specifying the hierarchy of these classes and the order and contents of the communication among their instances. this design has made clos both flexible and portable, two design goals that traditionally conflict. in support of this suggestion we present a detailed account of how we added object persistence to clos without modifying any of the language's implementation code. andreas paepcke product assurance program analyzer (p.a.p.a.) a tool for program complexity evaluation this tool has been developed to assist in the software validation process. p.a.p.a. will measure the complexity of programs and detect several program anomalies. the resulting list of analyzed programs is sorted in order of descending complexity. since high complexity and error-proneness are strongly related, the "critical" programs will be found earlier within the development cycle. p.a.p.a. provides syntax analyzers for rpg (ii/iii), pseudocode (design and documentation language) and pl/siii (without macro language). it may be applied during the design-, coding- and testphase of software development (e.g. for design- and code inspections). karl ernst schnurer tripping up intruders with tripwire you can ensure the security of your linux machine with this program kevin fenzi scale and performance in a distributed file system andrew is a distributed computing environment being developed in a joint project by carnegie mellon university and ibm. one of the major components of andrew is a distributed file system which constitutes the underlying mechanism for sharing information. the goals of the andrew file system are to support growth up to at least 7000 workstations (one for each student, faculty member, and staff at carnegie mellon) while providing users, application programs, and system administrators with the amenities of a shared file system. a fundamental result of our concern with scale is the design decision to transfer whole files between servers and workstations rather than some smaller unit such as records or blocks, as almost all other distributed file systems do. this paper examines the consequences of this and other design decisions and features that bear on the scalability of andrew. large scale affects a distributed system in two ways: it degrades performance and it complicates administration and day-to-day operation. this paper addresses both concerns and shows that the mechanisms we have incorporated cope with them successfully. we start by describing the initial prototype of the system, what we learned from it, and how we changed the system to improve performance. we compare its performance with that of a block-oriented file system, sun microsystems' nfs, in order to evaluate the whole file transfer strategy. we then turn to operability, and finish with issues related peripherally to scale and with the ways the present design could be enhanced. j. howard m. kazar s. menees d. nichols m. west
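purely as an illustrative aside on the andrew entry above, and not the andrew implementation: the toy fragment below sketches the whole-file transfer strategy, where a client fetches and caches an entire file on open and writes the whole file back on close; the fileserver interface and all names are invented for the sketch.

```java
import java.util.HashMap;
import java.util.Map;

// stand-in server interface invented for this illustration
interface FileServer {
    byte[] fetchWholeFile(String path);              // ship the entire file to the client
    void storeWholeFile(String path, byte[] data);   // write the entire file back
}

class WholeFileCacheClient {
    private final FileServer server;
    private final Map<String, byte[]> cache = new HashMap<>();

    WholeFileCacheClient(FileServer server) { this.server = server; }

    // open: transfer the whole file once; later reads are purely local
    byte[] open(String path) {
        return cache.computeIfAbsent(path, server::fetchWholeFile);
    }

    // close: push the complete (possibly modified) file back to the server
    void close(String path, byte[] contents) {
        cache.put(path, contents);
        server.storeWholeFile(path, contents);
    }

    public static void main(String[] args) {
        Map<String, byte[]> disk = new HashMap<>();   // in-memory "server storage"
        FileServer server = new FileServer() {
            public byte[] fetchWholeFile(String path) { return disk.getOrDefault(path, new byte[0]); }
            public void storeWholeFile(String path, byte[] data) { disk.put(path, data); }
        };
        WholeFileCacheClient client = new WholeFileCacheClient(server);
        client.close("/tmp/demo", "hello".getBytes());
        System.out.println(new String(client.open("/tmp/demo")));   // hello, served from the cache
    }
}
```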
abstract data types with shared operations kasper osterbye an alternative to the use of patterns in string processing snobol4 is best known for its string processing facilities, which are based on patterns as data objects. despite the demonstrated success of patterns, there are many shortcomings associated with their use. the concept of patterns in snobol4 is examined and problem areas are discussed. an alternative method for high-level string processing is described. this method, implemented in the programming language icon, employs generators, which are capable of producing alternative values. generators, coupled with a goal-driven method of expression evaluation, provide the string processing facilities of snobol4 without the disadvantages associated with patterns. comparisons between snobol4 and icon are included and the broader implications of the new approach are discussed. ralph e. griswold david r. hanson x window system programming with tcl and tk: unlock the power of x matt welsh tailoring testing to a specific compiler - experiences the testing of the univac ucs-pascal compiler is described. tests were acquired from various sources, converted from existing tests, and developed in house. test development and execution using the univac test controller system is illustrated with examples. the experiences gained from this and other compiler testing efforts are described. harlan k. seyfer practical programmer: testing made palatable how one team made software testing an integral part of the development process. marc rettig implementation of physical units geoff baldwin object linkage mechanism for threaded interpretive languages yong m. lee donald j. alameda extended pascal - numerical features d. a. joslin adequacy of checksum algorithms for computer virus detection doug varney object-oriented programming versus object-oriented design (keynote address): what's the connection? terry winograd a comparison between watfor-77 and fortran 90 k. a. redish language constructs and support systems for distributed computing this paper describes programming constructs and system support functions that are intended to facilitate the programming of reliable distributed systems. the systems considered include very different kinds of computers communicating through a network. such a heterogeneous network offers a number of advantages to designers of applications software. different machines emphasize different capabilities and many problems naturally break down into subproblems that are best solved with specialized resources. there is clearly a need for programming tools that will allow users to exploit this kind of environment. c. s. ellis j. a. feldman j. e. heliotis simplifying the evolution of java programs (tutorial) linda m. seiter karl j. lieberherr doug orleans extended use of null productions in lr(1) parser applications applications programmed using lr(1) parsers should be designed so that as many functions as possible are controlled by the driving parsing machine through conveniently staged reductions and associated primitive actions; in this paper this is achieved by making extensive use of null productions and nullable nonterminal symbols. gerard d.
finn the variabilities are variable arthur salwin whizbang! roy a. sykes fifty years of progress in software engineering in this paper, i describe a new outlook on the history of software engineering. i portray large-scale structures within software engineering to give a better understanding of the flow of history. i use these large-scale structures to reveal the steady, ongoing evolution of concepts, and show how they relate to the myriad whorls and eddies of change. i also have four smaller, more specific purposes in writing this paper.first, i want to point out that old ideas do not die. in the mythical man-month after 20 years, brooks claims "the waterfall model is wrong." but if the waterfall model were wrong, we would stop arguing over it. though the waterfall model may not describe the whole truth, it describes an interesting structure that occurs in many well-defined projects and it will continue to describe this truth for a long time to come. i expect the waterfall model will live on for the next one hundred years and more.second, i want to show that the chaos model, chaos life cycle, complexity gap, and chaos strategy are part of the natural evolution of software engineering. the chaos model and strategy supersede, but do not contradict, the waterfall and spiral models, and the stepwise refinement strategy. they are more up to date because they express contemporary issues more effectively, and fit our contemporary situations better. the chaos model, life cycle, and strategy are equally as important, but not better than, other concepts.third, i compare the chaos model, life cycle, and strategy to other models, life cycles, and strategies. this paper can be considered a comparison of the ideas presented in my papers about chaos with other ideas in the field. i avoided comparisons in my other papers because i wanted to define those ideas in their own terms and the comparisons did not further the new ideas.fourth, i make a few predictions about the next ten years of software engineering. the large-scale structures described in this history provide a stronger base for understanding how software engineering will evolve in the future.this paper is laid out as follows. in the first section, i use the flow of water as a metaphor to describe the flow of progress in software engineering. i use the water metaphor to show some of the structures within software engineering. the current work builds on top of the historical work, and future work will build on top of current work. in the remaining sections, i describe the waves, streams, and tides that portray the evolution of concepts and technologies in software engineering. l. b. s. raccoon improved lineariser methods for queueing networks with queue dependent centres the lineariser is an mva-based technique developed for the approximate solution of large multiclass product form queueing networks. the lineariser is capable of computing accurate solutions for networks of fixed rate centres. however, problems arise when the lineariser is applied to networks containing centres with queue dependent service rates. thus networks exist which seem well suited (a large number of lightly loaded centres, large numbers of customers in each closed chain) for lineariser solution but whose queue dependent centres cannot be solved accurately by the lineariser method. 
examples have also been found where the lineariser computes accurate values for the queue lengths, waiting times and throughputs, though the values computed for the queue length distributions are totally in error. this paper presents an improved lineariser which computes accurate approximate solutions for multiclass networks containing an arbitrary number of queue dependent centres. the improved lineariser is based on mva results and is therefore simple to implement and numerically well behaved. the improved lineariser has storage and computation requirements of order (mn) locations and (mnj^2) arithmetic operations, where m is the number of centres, n the total number of customers and j the number of closed chains. results from 130 randomly generated test networks are used to compare the accuracy of the standard and improved linearisers. the improved lineariser is consistently more accurate (tolerance errors on all performance measures less than 2 per cent) than the standard lineariser and its accuracy is insensitive to the size of the network model. in addition, the improved lineariser computes accurate solutions for networks which cause the standard lineariser to fail. a. krzesinski j. greyling linux gazette: the dotfile generator jesper k. pedersen shell functions and path variables, part 3 a continuation of our introduction to path variables and elements. stephen collyer transparent replication for fault tolerance in distributed ada 95 in this paper we present the foundations of rapids ("replicated ada partitions in distributed systems"), an implementation of the pcs supporting the transparent replication of partitions in distributed ada 95 using semi-active replication. the inherently non-deterministic executions of multi-tasked partitions are modeled as piecewise deterministic histories. we discuss the validity and correctness of this model of computation and show how it can be used for efficient semi-active replication. the rapids prototype ensures that replicas of a partition all go through the same history and are hence consistent. thomas wolf obnet: an object-oriented approach for supporting large, long-lived, highly configurable systems t. gallo g. serrano f. tisato computational reflection in class based object-oriented languages this paper describes various models of computational reflection in class-based object-oriented languages. two different approaches are covered: the meta-object approach which supposes that every object can have a meta-object describing and monitoring its behavior, and the message reification approach which describes a message as an object. the meta-object approach is discussed more fully showing that it is important to differentiate between structural reflection and computational reflection. we will see that, whereas classes and metaclasses are very important for the former, they cannot cope adequately with the latter. therefore we introduce a model of computational reflection where meta-objects are instances of a class meta-object or of one of its subclasses. j. ferber perl embedding an overview of what is needed to embed your favorite perl application and help avoid some obstacles along the way john quillan continuous execution: the visiprog environment to date, program development environments have been static rather than dynamic. even emerging interactive, integrated program development environments, like the cornell program synthesizer, view program editing and execution as essentially independent activities.
we envision an even more dynamic environment in which the functionality (input/output relationship) of a network of programs, an individual program, or a program segment can be viewed "continuously" with editing changes to either the program input or program body. this is the visicalc concept extended to program development environments (visiprog). in this paper, this "dynamic" approach to program development, testing and debugging is addressed, and considerations for the user interface are discussed. the latter includes a workstation with a flexible windowing system, three-dimensional views of programs, insertion of program control and observation points, and dynamic program slicing for "viewing" program execution. an existing prototype and current development activities are also discussed. peter henderson mark weiser object serialization for marshalling data in a java interface to mpi bryan carpenter geoffrey fox sung hoon ko sang lim assessing failure probabilities in safety-critical systems containing software m. thomas developing object-oriented user interfaces in ada with the x window system gary w. klabunde mark a. roth effective unit testing tim burns acm sigdoc presentation edward yourdon dod-std-2167 default ada design and coding standard s roski software lending library cornell university has a rather unique approach in microcomputer user support. in the spring of 1983, we ordered several software packages for our office ibm personal computers. at the time we also purchased all the microcomputers for university departments. because of this situation, we were in the public eye of campus microcomputer users. people would inquire about our software as they saw it in our office. how did we like it? could they try it? remember, software is usually purchased for a specific machine. how did we provide support to our campus users and remain legal? since we were such a public area we thought that something like a lending library might be a feasible option. we drafted policies on how we would check out software packages. a deposit, either a check or a departmental account number, would be required. people could check out a package for a week at a time, and they would have to sign a form stating that they had no intention of copying either the software or the documentation. we then drafted a letter to the dozen or so companies whose software we had and asked their permission to let users try the software. enclosed with each letter was a copy of the form users would sign indicating the restrictions and limitations of use. much to our delight, most companies responded very favorably to our request. after this initial success, i decided to carry the library one step further. we supported apple, ibm, and dec microcomputers for the campus and subscribed to a number of computer magazines. i selected about 100 packages that had received good reviews from campus users and/or magazines. these companies were contacted and asked if they would participate in our library by supplying us with a package. if this was not possible, did they have a demonstration package, or could they give us some kind of a discount? the result is a three- page list of software housed in the lending library. the list is updated monthly (some packages arrive for a 30-60 day trial), because of new additions. most companies are more than happy to participate and approve of our attempts to keep materials secure. many of the software companies were unsure how to respond to this novel idea. 
since the inception of the lending library, several universities and the cornell medical college in new york city have contacted us for guidance and advice on how "to get started." it is certainly worth the time and effort to begin such a library and support the users in this way. karen w. allen array oriented exception handling: a proposal for dealing with "invalid" data apl still deals poorly with some aspects of real world problems. for instance, frequently to mark entries in an otherwise numeric database as "not applicable" or "not entered," one picks arbitrarily one or more numbers to represent the attributes. but this and other approaches suffer difficulties ranging from ambiguity to inefficiencies in both programming and execution. further, apl errors which might logically be associated with individual elements of data arrays are not reported by existing apl systems in any detail. this paper proposes an enhancement to apl which would overcome both these difficulties, discussing considerations of both use and implementation. jim lucas designing an aspect-oriented framework in an object-oriented environment constantinos a. constantinides atef bader tzilla h. elrad p. netinant mohamed e. fayad combining objects and relations gottfried razek effective synchronization removal for java we present a new technique for removing unnecessary synchronization operations from statically compiled java programs. our approach improves upon current efforts based on escape analysis, as it can eliminate synchronization operations even on objects that escape their allocating threads. it makes use of a compact, equivalence-class-based representation that eliminates the need for fixed point operations during the analysis. we describe and evaluate the performance of an implementation in the marmot native java compiler. for the benchmark programs examined, the optimization removes 100% of the dynamic synchronization operations in single-threaded programs, and 0-99% in multi-threaded programs, at a low cost in additional compilation time and code growth. erik ruf components and generative programming (invited paper) this paper is about a paradigm shift from the current practice of manually searching for and adapting components and their manual assembly to generative programming, which is the automatic selection and assembly of components on demand. first, we argue that the current oo technology does not support reuse and configurability in an effective way. then we show how a system family approach can aid in defining reusable components. finally, we describe how to automate the assembly of components based on configuration knowledge. we compare this paradigm shift to the introduction of interchangeable parts and automated assembly lines in the automobile industry. we also illustrate the steps necessary to develop a product line using a simple example of a car product line. we present the feature model of the product line, develop a layered architecture for it, and automate the assembly of the components using a generator. we also discuss some design issues, applicability of the approach, and future development. krzysztof czarnecki ulrich w. eisenecker software engineering: quality assurance raghavendra rao loka ext2tools for linux how to use windows and linux on the same pc while losing neither your mind nor your files robert a.
dalrymple a "write on" file glenna james a comparative feature-analysis of microcomputer prolog implementations j weeks h berghel ndhorm: an oo approach to requirements modeling jiazhong zhang zhijian wang compiling lisp procedures bruce a pumplin verified program support environments william d. young a scalable, automated process for year 2000 system correction johnson m. hart antonio pizzarello a variation of knoop, rüthing, and steffen's lazy code motion karl-heinz drechsler manfred p. stadel linux apprentice: bourne shell scripts scripting with ex and here files. randy parker an adaptable generation approach to agenda management eric k. mccall lori a. clarke leon j. osterweil workshop on tools peter deutsch how to program in ada 9x, using ada 83 erhard ploedereder user-defined local variable syntax with ans forth john r. hayes combined use of languages in object-oriented software construction v. karakostas l. pourkashani workshop on multi-dimensional separation of concerns in software engineering (workshop session) separation of concerns has been central to software engineering for decades, yet its many advantages are still not fully realized. a key reason is that traditional modularization mechanisms do not allow simultaneous decomposition according to multiple kinds of (overlapping and interacting) concerns. this workshop was intended to bring together researchers working on more advanced modularization mechanisms, and practitioners who have experienced the need for them, as a step towards a common understanding of the issues, problems and research challenges. peri tarr william harrison harold ossher anthony finkelstein bashar nuseibeh dewayne perry recursion is more efficient than iteration emmanuel saint-james trace-driven studies of vliw video signal processors zhao wu wayne wolf efficient support for complex numbers in java peng wu sam midkiff jose moreira manish gupta expanding the scope of prototyping tools (abstract) kenneth f. chin acm president's letter: computer-based predictive writing peter j. denning extracting library-based object-oriented applications in an increasingly popular model of software distribution, software is developed in one computing environment and deployed in other environments by transfer over the internet. extraction tools perform a static whole-program analysis to determine unused functionality in applications in order to reduce the time required to download applications. we have identified a number of scenarios where extraction tools require information beyond what can be inferred through static analysis: software distributions other than complete applications, the use of reflection, and situations where an application uses separately developed class libraries. this paper explores these issues, and introduces a modular specification language for expressing the information required for extraction. we implemented this language in the context of _jax_, an industrial-strength application extractor for java, and present a small case study in which different extraction scenarios are applied to a commercially available library-based application. peter f. sweeney frank tip covariant deep subtyping reconsidered david l. shang dependability of embedded systems john knight fortran 95 iso/iec 1539 is a multipart international standard; the parts are published separately. this publication, 1539-1, which is the first part, specifies the form and establishes the interpretation of programs expressed in the fortran language.
the purpose of this part is to promote portability, reliability, maintainability, and efficient execution of fortran programs for use on a variety of computing systems. the second part, 1539-2, defines additional facilities for the manipulation of character strings of variable length. a processor conforming to 1539-1 need not conform to 1539-2; however, conformance to 1539-2 assumes conformance to this part. throughout this publication, the term "this standard" refers to 1539-1. corporate ansi accredited technical subcommittee x3j3 upfront corporate linux journal staff map: a functional analysis and design method dan russell transformations on higher-order functions hanne riis nielson flemming nielson linux vs. windows nt and os/2 we continue to see media blurbs and ads for both microsoft's windows nt and ibm's os/2. both promise to be the operating system that we need and to take advantage of the intel 386 and beyond. bernie thompson module types, module variables, and their use as a universal encapsulation mechanism atanas radenski c compiler design for an industrial network processor one important problem in code generation for embedded processors is the design of efficient compilers for asips with application specific architectures. this paper outlines the design of a c compiler for an industrial asip for telecom applications. the target asip is a network processor with special instructions for bit-level access to data registers, which is required for packet-oriented communication protocol processing. from a practical viewpoint, we describe the main challenges in exploiting these application specific features in a c compiler, and we show how a compiler backend has been designed that accommodates these features by means of compiler intrinsics and a dedicated register allocator. the compiler is fully operational, and first experimental results indicate that c-level programming of the asip leads to good code quality without the need for time-consuming assembly programming. jens wagner rainer leupers incorporating usability into requirements engineering tools the development of a computer system requires the definition of a precise set of properties or constraints that the system must satisfy with maximum economy and efficiency. this definition process requires a significant amount of communication between the requestor and the developer of the system. in recent years, several methodologies and tools have been proposed to improve this communication process. this paper establishes a framework for examining the methodologies and techniques, charting the progress made, and identifying opportunities to improve the communication capabilities of a requirements engineering tool. gregory l. smith sharon a. stephens leonard l. tripp wayne l. warren an assessment of the overhead associated with tasking facilities and task paradigms in ada t m burger k w nielsen brouhaha - a portable smalltalk interpreter brouhaha is a portable implementation of the smalltalk-80 virtual machine interpreter. it is a more efficient redesign of the standard smalltalk specification, and is tailored to suit conventional 32 bit microprocessors. this paper presents the major design changes and optimization techniques used in the brouhaha interpreter. the interpreter runs at 30% of the speed of the dorado on a sun 3/160 workstation. the implementation is portable because it is written in c.
eliot miranda sufficient test sets for path analysis testing strategies many testing methods require the selection of a set of paths over which testing is to be conducted. this paper presents an analysis of the effectiveness of individual paths for testing predicates in linearly domained programs. a measure is derived for the marginal advantage of testing another path after several paths have already been tested. this measure is used to show that any predicate in such programs may be sufficiently tested using at most m+n+1 paths, where m is the number of input values and n is the number of program variables. steven j. zeil lee j. white programmer performance and the effects of the workplace wide variation in programmer performance has been frequently reported in the literature [1, 2, 3]. in the absence of other explanation, most managers have come to accept that the variation is due to individual characteristics. the presumption that there are order-of-magnitude differences in individual performance makes accurate cost projection seem nearly impossible. in an extensive study, 166 programmers from 35 different organizations, participated in a one-day implementation benchmarking exercise. while there were wide variations across the sample, we found evidence that characteristics of the workplace and of the organization seemed to explain a significant part of the difference. tom demarco tim lister interprocedural modification side effect analysis with pointer aliasing we present a new interprocedural modification side effects algorithm for c programs, that can discern side effects through general-purpose pointer usage. ours is the first complete design and implementation of such an algorithm. preliminary performance findings support the practicality of the technique, which is based on our previous approximation algorithm for pointer aliases [lr92]. each indirect store through a pointer variable is found, on average, to correspond to a store into 1.2 locations. this indicates that our program- point-specific pointer aliasing information is quite precise when used to determine the effects of these stores. william landi barbara g. ryder sean zhang an ada-like separate compilation style in c the ada programming language's modularity allows for stepwise refinement in software engineering. effective separate compilation is crucial in most large systems, including those written in c. an approach to implementing effective modularity in c, modeled after ada, is explored. the concept of language-independent modularity, and the possibility of implementing it in a given "entry level" language is introduced. an example, a simple sentence parser, is developed in c using this concept. the technique, called a/sc2, may easily be extended to other programming systems and languages. modeling modularity is a useful design tool. alexy v. khrabrov a study of record packing methods j r parker the response times of priority classes under preemptive resume in m/g/m queues approximations are given for the mean response times of each priority level in a multiple-class multiserver m/g/m queue operating under preemptive resume scheduling. the results have been tested against simulations of systems with two and three priority classes and different numbers of servers. andre b. bondi jeffrey p. buzen software support for heterogeneous computing howard jay siegel henry g. dietz john k. antonio an overview of the standard template library g. 
bowden wise surveying current research in object-oriented design the state of object-oriented design is evolving rapidly. this survey describes what are currently thought to be the key ideas. although it is necessarily incomplete, it contains both academic and industrial efforts and describes work in both the united states and europe. it ignores well-known ideas, like that of coad and meyer [34], in favor of less widely known projects. research in object-oriented design can be divided many ways. some research is focused on describing a design process. some is focused on finding rules for good designs. a third approach is to build tools to support design. most of the research described in this article does all three. we first present work from alan snyder at hewlett-packard on developing a common framework for object-oriented terminology. the goal of this effort is to develop and communicate a corporate-wide common language for specifying and communicating about objects. we next look into the research activity at hewlett-packard, led by dennis de champeaux. de champeaux is developing a model for object-based analysis. his current research focuses on the use of a trigger-based model for inter-object communications and development of a top-down approach to analysis using ensembles. we then survey two research activities that prescribe the design process. rebecca wirfs-brock from tektronix has been developing an object-oriented design method that focuses on object responsibilities and collaborations. the method includes graphical tools for improving encapsulation and understanding patterns of object communication. trygve reenskaug at the center for industriforskning in oslo, norway has been developing an object-oriented design method that focuses on roles, synthesis, and structuring. the method, called object-oriented role analysis, synthesis and structuring, is based on first modeling small sub-problems, and then combining small models into larger ones in a controlled manner using both inheritance (synthesis) and run-time binding (structuring). we then present investigations by ralph johnson at the university of illinois at urbana-champaign into object-oriented frameworks and the reuse of large-scale designs. a framework is a high-level design or application architecture and consists of a suite of classes that are specifically designed to be refined and used as a group. past work has focused on describing frameworks and how they are developed. current work includes the design of tools to make it easier to design frameworks. finally, we present some results from the research group in object-oriented software engineering at northeastern university, led by karl lieberherr. they have been working on object-oriented computer assisted software engineering (case) technology, called the demeter system, which generates language-specific class definitions from language-independent class dictionaries. the demeter system includes tools for checking design rules and for implementing a design. rebecca j. wirfs-brock ralph e. johnson letter to bob: configuring an intel linux system jon "maddog" hall definition of the disc concurrent language g. iannello a. mazzeo g. ventre selective and lightweight closure conversion we consider the problem of selective and lightweight closure conversion, in which multiple procedure-calling protocols may coexist in the same code. flow analysis is used to match the protocol expected by each procedure and the protocol used at each of its possible call sites.
we formulate the flow analysis as the solution of a set of constraints, and show that any solution to the constraints justifies the resulting transformation. some of the techniques used are suggested by those of abstract interpretation, but others arise out of alternative approaches. mitchell wand paul steckler a comparative study of two simple network file access models h. ghosh s. sreedhar document structure and modularity in mentor mentor is a structured document manipulation system. it has been used for several years as a program development and maintenance environment. its main characteristics are: it is both interactive and programmable, it is parameterized by the language to be manipulated, it can manipulate several languages at the same time, as well as multi-lingual documents. it is open to and from the outer system, it is extensible. the current development of mentor reflects our belief that a major component of programming is the maintenance of large documents of a varied nature: specifications, programs, manuals, progress reports, documentation, etc... in addition, information of various kinds, and in different languages, is often mixed in a single document, and one may have to extract this information selectively upon request (e.g. text, examples and formal specification in a manual, or instructions, comments and assertions in a program). v. donzeau-gouge g. kahn b. lang b. melese automated installation of large-scale linux networks need to load linux on 100 workstations? learn some tricks and techniques that could save you days of tedious work. ali raza butt jahangir hasan software reuse economics model: version 1.0 george e. raymond david m. hollis an apl-to-c compiler for the ibm risc system/6000: compilation, performance and limitations wai-mee ching dz-ching ju embedded systems news briefs rick lehrbaum a method using procedural parameters and callback functions to create a generic exchange-sorting module a generic exchange-sorting module obviates the need for duplicating sort code in a program, thus promoting module reuse. this paper presents a method for using procedural parameters and callback functions to create a generic exchange-sorting module. based on techniques with roots in functional programming using higher order functions, the method is fairly easy to understand. more importantly it is easy to use, even for beginning programmers. using the rapid application development system delphi and its object pascal programming language, the method is presented in a way that highlights some useful features of object pascal that might not be well known to readers more familiar with the now-outdated standard pascal or the currently popular c++. robin m. snyder demonstrating a view matcher for reusing smalltalk classes mary beth rosson john m. carroll christine sweeney from the editor: how many distributions? marjorie richardson a strongly typed, interactive object-oriented database programming language programming languages with data types have been used successfully to model databases with the abstraction mechanisms of a relational or semantic data model. the benefits of data types for modeling databases with an object-oriented database language have also been considered, but more research is required to isolate the basic features that the type system of the language should have, and to integrate the representation of abstract knowledge with the representation of concrete and procedural knowledge.
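the closure-conversion problem just introduced can be made concrete with a small before-and-after sketch. this is a generic python illustration of the textbook transformation, with a hypothetical "lightweight" variant that passes known free variables as extra arguments; it is not the formulation or analysis of wand and steckler.

```python
# before conversion: a nested function closing over the free variables a and b.
def make_adder(a, b):
    def add(x):
        return x + a + b
    return add

# after heavyweight closure conversion: the code pointer and an explicit
# environment record travel together, and the body takes the environment
# as a parameter instead of capturing anything.
def add_code(env, x):
    return x + env["a"] + env["b"]

def make_adder_converted(a, b):
    return (add_code, {"a": a, "b": b})     # closure = (code, environment)

# a lightweight calling protocol for call sites where the free variables are
# known: pass them as plain extra arguments and skip the environment record.
def add_light(a, b, x):
    return x + a + b

code, env = make_adder_converted(2, 3)
assert make_adder(2, 3)(10) == code(env, 10) == add_light(2, 3, 10) == 15
```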
the point of view is presented that, for a strongly typed programming language, the following features are relevant: a) a type system with concrete types, abstract data types with assertions and inheritance of operators from the representation type; b) the notion of type hierarchies; c) an object-oriented view of databases, where objects are the only values that can be created, destroyed, and updated. examples will be given to show how the conceptual language galileo might be modified to become a strongly typed, object-oriented database language. a. albano g. ghelli m. e. occhiuto r. orsini polling in concurrent programming hoare introduced the concept of polling in his communicating sequential processes (csp) to handle nondeterministic message communication in distributed and concurrent programming. in order to introduce the polling concept effectively in a programming language, the problems of simultaneous polling, effective termination, busy waiting, and expressive power in one-to- many or many-to-one communication must be solved. this paper discusses the concept of polling, the details about the problems, and how a new concurrent programming language, copl, solves the problems. copl introduces a general algorithm to establish a hierarchical relationship between two communicating processes, an efficient mechanism to handle polling termination, a flexible polling to avoid busy waiting, and implicit polling to add asymmetry in the language for more expressive power. [key words: concurrent programming, programming languages, polling, nondeterminism, and distributed systems] c.-d. jung e. siberrt structuring interfaces alexander ran jianli xu has the king returned? conrad weisert an input/output subsystem for the hawk operating system kernel david l. harris security in a secure capability-based system charles r. landau a practical application of parnas modular approach in the development of a system, it is often desirable to break it up into pieces so that different people can work on several parts simultaneously. this paper discusses the criteria used to decompose an actual system into modules and the effect this decomposition had on the implementation of that system. joyce r. calabro maria m. pozzo towards more informative fortran compilers the paper discusses a very simple but effective way to detect the spelling mistakes in the variables names, which are very common in the development of fortran programs and go undetected by the compiler. ajit kumar verma book review: the unix web server book gerald luther graef divergent views in goal-driven requirements engineering axel van lamsweerde appropriate interfaces between design tools, languages, compilers and runtimes in real-time systems (panel) rich gerber steve tjiang david whalley david wilner mike wolfe ada '82 - status, use and application techniques the ada language was designed as a language to address embedded systems within the department of defense. the interest in this language is evident from the numerous articles, texts and conferences which have addressed this language and its application within the real time embedded systems environment. there have been research efforts directed to the use of ada. these efforts have been conducted by a variety of individuals and organizations and have resulted in contributions to requirements and design methodologies as well as application areas. thus, we are seeing several prototype implementations which illustrate the use of ada in the software lifecycle. 
john foreman nil: an integrated language and system for distributed programming this paper presents features of the nil programming language which support the construction of distributed software systems: (1) a process model in which no pointers or shared data are visible, (2) interprocess communication via synchronous and asynchronous message passing, (3) compile- time typestate checking, guaranteeing module isolation and correct finalization of data, (4) dynamic binding of statically typed ports under the control of capabilities. we discuss how these features are defined in nil, illustrate the consequences of these decisions for the design of distributed systems, and compare these decisions with those made in other programming languages with similar objectives. robert e. strom shaula yemini object oriented programming b. p. pokkunuri using object modeling to transform structured analysis into object oriented design g. k. khalsa apl '85 kevin r. weaver wsclock - a simple and effective algorithm for virtual memory management a new virtual memory management algorithm wsclock has been synthesized from the local working set (ws) algorithm, the global clock algorithm, and a new load control mechanism for auxiliary memory access. the new algorithm combines the most useful feature of ws---a natural and effective load control that prevents thrashing---with the simplicity and efficiency of clock. studies are presented to show that the performance of ws and wsclock are equivalent, even if the savings in overhead are ignored. richard w. carr john l. hennessy adaptive object-oriented programming using graph-based customization karl j. lieberherr ignacio silva-lepe cun xiao software reuse based on a large object-oriented library hsian-chou liao feng- jian wang a documentation scheme for object-oriented software systems j. sametinger a. stritzinger endian-independent record representation clauses norman h. cohen apl complaints (out) of control f. h. d. van batenburg on do while vs. do with exit john reid integrating architecture description languages with a standard design method jason e. robbins nenad medvidovic david f. redmiles david s. rosenblum how to efficiently build vhdl testbenches markus schutz acm forum robert l. ashenhurst an ida algorithm for optimal spare allocation michail g. lagoudakis visual hipe: a prototype for the graphical visualization of functional expressions (demonstration) ricardo jimenez-peris marta patiño-martinez j. a. velazquez-iturbide c. pareja-flores some factors affecting program repair maintenance: an empirical study an empirical study of 447 operational commercial and clerical cobol programs in one australian organization and two u.s. organizations was carried out to determine whether program complexity, programming style, programmer quality, and the number of times a program was released affected program repair maintenance. in the australian organization only program complexity and programming style were statistically significant. in the two u.s. organizations only the number of times a program was released was statistically significant. for all organizations repair maintenance constituted a minor problem: over 90 percent of the programs studied had undergone less than three repair maintenance activities during their lifetime. iris vessey ron weber wish-4: a specification for a case-tool to wish for lars olenfeldt approved uniformity issues p. d. kenward b. a. 
wichmann technical correspondence: on apt, francez, and de roever's "a proof system for communicating sequential processes" abha moitra aspect-oriented programming using reflection and metaobject protocols gregory t. sullivan prioritized asynchronism in ada 9x this paper discusses the semantics for asynchronous message passing between prioritized tasks including the case where the messages may themselves be given explicit priority. a highly flexible scheme results that is applicable to a wide range of scheduling algorithms. the impact in the case of a simple priority-inheritance scheduler is explored. k. c. elsom kernel korner: the elf object file format: introduction eric youngdale a regular architecture for operating system vadim g. antonov the evolution of an object oriented development method brad balfour linux programming hints michael k. johnson software review: my humble opinions on windows nt vs. os/2 ronald b. krisko organization domain modeling (odm): formalizing the core domain modeling life cycle researchers and practitioners are looking for systematic ways of comparing domain analysis (da) methods. comparisons have often focused on linkage between da methods and related technologies such as systems modeling. less attention has been paid to comparing da methods in terms of certain core methodological issues, including problems of scoping, contextualizing, descriptive vs. prescriptive modeling, and formalized models of variability. this paper presents key aspects of organization domain modeling (odm), a systematic domain analysis method structured in terms of a core domain modeling life cycle directly addressing these methodological concerns. mark a. simos an abstract machine design for lexically scoped parallel lisp with speculative processing an abstract machine is designed to support the data environment requirements of balinda lisp, a parallel lisp dialect which permits speculative processing of conditional modules. logically, the machine provides multiple stacks connected into an environment tree, with lexically visible sections pointed to by display registers. physically, stack sections are stored as separate objects and access is established by the use of a dynamic stack recording the chain of function calls, and a set of lexical display registers pointing at visible objects. this arrangement allows parts of the environment of a function to be retained or garbage-collected as appropriate after exit. by making copies of visible ancestral stack sections, side effects of speculative parallel tasks are handled in accordance with language semantics. the architecture is generic and may be realized in a variety of forms, depending on whether balinda lisp is implemented on a conventional machine, stack machine, or dataflow machine. c. k. yuen position paper on microprocessor assembly language draft standard ieee task p694/d11 among the early purposes of assembly languages must be included: 1) the use in the programming process of mnemonic operation codes and symbolic variable names instead of the bit patterns and relative addresses of machine code as an aid to the human programmer. machine code programming has largely disappeared except as a temporary bootstrapping process in producing an early version of an assembler, a compiler, or an interpreter. we shall ignore it in this short note.
other reasons often cited as justification for employing an assembly language at the present time are: 2) the availability of all the machine resources and facilities, 3) the necessity of producing a program that executes as fast as possible, 4) the necessity of producing a program that is as parsimonious of memory as possible, and 5) as the original bootstrap processor in producing compilers or interpreters for higher level systems and applications languages. we wish to emphasize that these reasons were often valid in the past; that these reasons are sometimes valid at present; and that these reasons will remain valid only under very special circumstances in the future. g. w. gorsline concurrent object-oriented programming in classic-ada michael l. nelson gilberto f. mota vassilios theologitis always one more bug: applying adawise to improve ada code adawise, a set of tools currently under development at ora, performs automatic checks to verify the absence of common run-time errors affecting the correctness or portability of ada programs. the tools can be applied to programs of arbitrary size, and they are conservative---that is, the absence of a warning guarantees the absence of a problem. if adawise issues a warning, there is a potential error that should be investigated by the programmer. adawise checks at compile-time for such potential errors as incorrect order dependence and erroneous execution due to improper aliasing. these errors are not detected by typical compilers. we ran two of the tools on several publicly available ada software products to determine if the tools issue useful warnings without bombarding the user with "false positives." we found that adawise generated a small number of total warnings, and that false positives usually indicated areas of weakness in the products tested. this paper describes our preliminary tests using the adawise toolset, and analyzes the warnings that were issued. cheryl barbasch dan egnor the caede performance analysis tool c. m. woodside e. m. hagos e. neron r. j. a. buhr an introduction to ada (part ii) s. h. saib efficient replicated method invocation in java jason maassen thilo kielmann henri e. bal file server for apl at waterloo l. j. dickey the isis project: real experience with a fault tolerant programming system kenneth birman robert cooper parameter passing and control stack management in prolog implementation revisited parameter passing and control stack management are two of the crucial issues in prolog implementation. in the warren abstract machine (wam), the most widely used abstract machine for prolog implementation, arguments are passed through argument registers, and the information associated with procedure calls is stored in possibly two frames. although accessing registers is faster than accessing memory, this scheme requires the argument registers to be saved and restored for backtracking and makes it difficult to implement full tail recursion elimination. these disadvantages may far outweigh the advantage in emulator-based implementations because registers are actually simulated by using memory. in this article, we reconsider the two crucial issues and describe a new abstract machine called atoam (yet another tree-oriented abstract machine).
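before the abstract's detailed comparison resumes, the trade-off just described can be caricatured in a few lines: with register-passed arguments a choice point must snapshot, and backtracking must restore, the argument registers, whereas with arguments stored directly in per-call frames backtracking only discards the frames pushed since the choice point. this is a deliberately simplified sketch, not the wam or atoam instruction sets.

```python
# toy contrast of the two argument-passing disciplines discussed above.
class RegisterMachine:
    def __init__(self):
        self.arg_regs = {}          # shared argument registers
        self.choice_points = []
    def call(self, args):
        self.arg_regs = dict(enumerate(args))            # caller loads the registers
    def push_choice_point(self):
        self.choice_points.append(dict(self.arg_regs))   # must save registers
    def backtrack(self):
        self.arg_regs = self.choice_points.pop()         # must restore registers

class FrameMachine:
    def __init__(self):
        self.frames = []            # arguments live inside per-call frames
        self.choice_points = []
    def call(self, args):
        self.frames.append(list(args))
    def push_choice_point(self):
        self.choice_points.append(len(self.frames))      # remember stack height only
    def backtrack(self):
        del self.frames[self.choice_points.pop():]       # discard newer frames
```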
the atoam differs from the wam mainly in that (1) arguments are passed directly into stack frames, (2) only one frame is used for each procedure call, and (3) procedures are translated into matching trees if possible, and clauses in each procedure are indexed on all input arguments. the above-mentioned inefficiencies of the wam do not exist in the atoam because backtracking requires fewer bookkeeping operations, and tail recursion can be handled in most cases like a loop statement in procedural languages. an atoam-emulator-based prolog system called b-prolog has been implemented, which is available through anonymous ftp from ftp.kyutech.ac.jp (131.206.1.101) in the directory pub/language/prolog. b-prolog is comparable in performance with and can sometimes be significantly faster than emulated sicstus-prolog. by measuring the numbers of memory and register references made in both systems, we found that passing arguments in the stack is no worse than passing arguments in registers even if accessing memory is four times as expensive as accessing registers. neng-fa zhou debugging race conditions in message-passing programs robert h. b. netzer timothy w. brennan suresh k. damodaran-kamal replicated procedure call eric c cooper supporting awareness and interaction through collaborative virtual interfaces this paper explores interfaces to virtual environments supporting multiple users. an interface to an environment allowing interaction with virtual artefacts is constructed, drawing on previous proposals for 'desktop' virtual environments. these include the use of peripheral lenses to support peripheral awareness in collaboration; and extending the ways in which users' actions are represented for each other. through a qualitative analysis of a design task, the effect of the proposals is outlined. observations indicate that, whilst these designs go some way to re-constructing physical co-presence in terms of awareness and interaction through the environment, some issues remain. notably, peripheral distortion in supporting awareness may cause problematic interactions with and through the virtual world; and extended representations of actions may still allow problems in re-assembling the composition of others' actions. we discuss the potential for: designing representations for distorted peripheral perception; and explicitly displaying the course of action in object-focused interaction. mike fraser steve benford jon hindmarsh christian heath applying domain analysis and modeling: an industrial experience in this paper we describe our experience in applying domain analysis within a company that develops personal electronic devices. we describe how we tailored the dssa method to suit our needs and then present the process and representations that we found most useful for this situation. the conclusions and lessons learned are useful because few studies published at this time provide details about applications of domain engineering in commercial development environments. robert b. france thomas b. horton agentsheets: a tool for building domain-oriented visual programming environments alex repenning some observations concerning reuse gary n. falacara tempo, a program specializer for c (panel session) ron cytron renaud marlet linux events corporate linux journal staff modern languages and microsoft's component object model david n.
gray john hotchkiss andrew shalit toby weinberg synchronization in portable device drivers we present an overview of the synchronization mechanisms offered to device drivers by different operating systems and develop a foundation for writing portable device drivers by unifying these mechanisms. our foundation has been used to implement an efficient portable cluster adapter driver for three different operating systems as part of the runtime system for a heterogeneous pc cluster. we show how our portable synchronization mechanisms map to the native synchronization mechanisms of these three operating systems. stein j. ryan functionality in the reusability of software reusable mathematical and statistical software has been available for some time providing substantial programming productivity to scientists and engineers. with the development of ada and extensions of its package concept to more general categories of information systems, libraries of reusable code and cataloging schemes are anticipated from regular practices to yield the same kinds of benefits to these systems. one obstacle has been the lack of abstractions for identifying and developing candidate reusable software, promoting its later classification, recovery and modification. this paper advances a method to discover common functions across different systems to enhance reusability. ross a. galgiano martin d. fraser mark e. schaefer g. scott owen user format control in a lisp prettyprinter richard c. waters soft timers: efficient microsecond software timer support for network processing this paper proposes and evaluates soft timers, a new operating system facility that allows the efficient scheduling of software events at a granularity down to tens of microseconds. soft timers can be used to avoid interrupts and reduce context switches associated with network processing without sacrificing low communication delays. more specifically, soft timers enable transport protocols like tcp to efficiently perform rate-based clocking of packet transmissions. experiments show that rate-based clocking can improve http response time over connections with high bandwidth-delay products by up to 89% and that soft timers allow a server to employ rate-based clocking with little cpu overhead (2-6%) at high aggregate bandwidths. soft timers can also be used to perform network polling, which eliminates network interrupts and increases the memory access locality of the network subsystem without sacrificing delay. experiments show that this technique can improve the throughput of a web server by up to 25%. mohit aron peter druschel an introduction to rlab a computational tool for scientific and engineering applications ian searle reactive dynamic architectures jesper andersson position paper on "fairness" edsger w. dijkstra embedded systems news rick lehrbaum software deviation analysis jon damon reese nancy g. leveson on synchronization in hard-real-time systems the design of software for hard-real-time systems is usually difficult to change because of the constraints imposed by the need to meet absolute real-time deadlines on processors with limited capacity. nevertheless, a new approach involving a trio of ideas appears to be helpful for those who build software for such complex applications. stuart r. faulk david l. parnas persistent messages in local transactions david e. lowell peter m.
chen software development process from natural language specification motoshi saeki hisayuki horai hajime enomoto fortran 8x discussion kent paul dolan linux gazette: tips from the answer guy james t. dennis another defence of enumerated types markku sakkinen how "outsiders" see ada and its future mark gerhardt conjunction as composition partial specifications written in many different specification languages can be composed if they are all given semantics in the same domain, or alternatively, all translated into a common style of predicate logic. the common semantic domain must be very general, the particular semantics assigned to each specification language must be conducive to composition, and there must be some means of communication that enables specifications to build on one another. the criteria for success are that a wide variety of specification languages should be accommodated, there should be no restrictions on where boundaries between languages can be placed, and intuitive expectations of the specifier should be met. pamela zave michael jackson requirements on ada reengineering technology from past, present and future systems evan lock noah prywes automatic tuning of multi-task programs for real-time embedded systems this paper describes a programming tool for real-time embedded systems. real- time embedded systems are usually implemented by multiple tasks. it is important to allocate tasks to the system appropriately. our tool, called true, performs the allocation of tasks by transformation of the program the user has written. true reads the program and the requirements about its response time. true transforms the program applying rules based on the requirements. then the program is translated into a code for operating system i-tron. i-tron is our original real-time operating system for 32-bit microprocessors. this paper also describes the implementation of true and an example of its use. toru shimizu ken sakamura commutativity analysis: a new analysis technique for parallelizing compilers this article presents a new analysis technique, commutativity analysis, for automatically parallelizing computations that manipulate dynamic, pointer- based data structures. commutativity analysis views the computation as composed of operations on objects. it then analyzes the program at this granularity to discover when operations commute (i.e., generate the same final result regardless of the order in which they execute). if all of the operations required to perform a given computation commute, the compiler can automatically generate parallel code. we have implemented a prototype compilation system that uses commutativity analysis as its primary analysis technique. we have used this system to automatically parallelize three complete scientific computations: the barnes-hut n-body solver, the water liquid simulation code, and the string seismic simulation code. this article presents performance results for the generated parallel code running on the stanford dash machine. these results provide encouraging evidence that commutativity analysis can serve as the basis for a successful parallelizing compiler. martin c. rinard pedro c. diniz maintaining consistency in a database with changing types stanley b. zdonik yale: the design of yet another language-based editor r j zavodnik m d middleton formal methods for multimethod software components gary t. leavens experiences of use cases and similar concepts fredrik lindström is universal document exchange in our future? louis m. gomez donald f. 
pratt mark r. buckley retargetable compiler code generation mahadevan ganapathi charles n. fischer john l. hennessy a short note a small virtually-addressed control blocks being able to access task and thread control blocks in the virtual address space has some structuring and performance advantages, but using one page per control block is too expensive in some cases. this paper describes how to emulate smaller kernel pages though still using the hardware mmu. jochen liedtke calculating in an object-oriented iterator-view-generator framework this paper briefly describes research on generic object-oriented programming where we establish a set of design patterns (or idioms) which makes algorithms independent of data structures. in the framework that implements these design patterns, an algorithm is not defined directly on a data structure class, but more abstractly on a set of structural properties shared by multiple data structures. johan larsson a definition of an iswim-like language via scheme mirjana ivanovic zoran budimac pacceptor and sconnector frameworks: combining concurrency and communication raman kannan file-system development with stackable layers filing services have experienced a number of innovations in recent years, but many of these promising ideas have failed to enter into broad use. one reason is that current filing environments present several barriers to new development. for example, file systems today typically stand alone instead of building on the work of others, and support of new filing services often requires changes that invalidate existing work. stackable file-system design addresses these issues in several ways. complex filing services are constructed from layer "building blocks," each of which may be provided by independent parties. there are no syntactic constraints to layer order, and layers can occupy different address spaces, allowing very flexible layer configuration. independent layer evolution and development are supported by an extensible interface bounding each layer. this paper discusses stackable layering in detail and presents design techniques it enables. we describe an implementation providing these facilities that exhibits very high performance. by lowering barriers to new filing design, stackable layering offers the potential of broad third- party file-system development not feasible today. john s. heidemann gerald j. popek object-oriented software development with the demeter method (abstract) karl lieberherr using operational analysis assumption errors assumptions that are the basis for operational analysis models of devices have the characteristic that they can be proved to hold by observing the data. error measures are defined for the main operational analysis assumptions. a method for deriving correction terms is described. these terms are functions of the error measures and can be used to get exact results of behavior sequence performance measures of interest, such as, mean number of jobs at a device. these performance measures will be exact no matter how badly the operational assumptions are met by the data. formulas for performance measures that were developed assuming homogeneous arrivals and services were found to give exact results under less restrictive conditions. since the performance measure correction terms can only be calculated exactly with an amount of data that would be required to obtain direct performance measure results, ways to estimate the correction terms with reduced data collection are suggested. neal m. 
bengtson idiomatic design andrew koenig active badges--the next generation implementing a software location system as a linux embedded application results in a robust, efficient and inexpensive system igor bokun krzysztof zielinski literate programming and the "spaniel" method nick hatzigeorgiu apostolos syropoulos a smalltalk window system based on constraints we describe the design of a constraint-based window system for smalltalk. this window system uses constraints to specify attributes of windows and relationships between them. three classes of constraints are supported, one of which is implicit and not available for general use. the system extends the current smalltalk system, providing support for both fixed-size and fixed- scale windows. it also provides the capability to dynamically reorganize the layout of a window. a goal of the design is to produce a system with real-time response that is fast enough to be substituted for the existing system. a prototype with response times of approximately 1/4 second has been implemented to demonstrate the feasibility of the design as well as to point out several important optimizations. danny epstein wilf r. lalonde how does 3-d visualization work in software engineering?: empirical study of a 3-d version/module visualization system hideki koike hui-chu chu a comprehensive study of the complexity of multiparty interaction we present a taxonomy of languages for multiparty interaction, which covers all proposals of which we are aware. based on this taxonomy, we present a comprehensive analysis of the computational complexity of the multiparty interaction implementation problem, the problem of scheduling multiparty interactions in a given execution environment. yuh-jzer joung scott a. smolka linux expo 1999: linux journal attends linux expo marjorie richardson components and connectors are the assembly language of architectural description: compositions deserve first-class status (and better abstraction mechanisms) larry howard writing efficient c++ programs (abstract) scott meyers book review: active java and exploring java danny yee fast dispatch mechanisms for stock hardware john r. rose a modified form of software science measures k. k. aggarwal yogesh singh balancing runtime and replay costs in a trace-and-replay system jong-deok choi janice m. stone the string-to-string correction problem with block moves walter f. tichy an implementation of asynchronous i/o for ada thomas j. fleck paul kohlbrenner set of tools for native code generation for the java virtual machines oscar azanon esteire juan manuel cueva lovelle ada+sql - an overview ada+sql is a programming environment for ada 95 extended with basic sql single user capabilities. it incorporates a very fast compiler and interpreter, with debugging options, library generator and browser, syntax template editors, programmer wizard, two-dimensional graphics, sql interactive interface and hypertext documentation on the environment, ada 95 and sql. several implementation aspects are discussed. arthur v. lopes keyword input for c j thornburg mathematical roots of j roger k. w. hui kenneth e. iverson minnowbrook apl workshop r h pesch e e mcdonnell k e iverson b bernecky d b allen integration mechanisms in cedar the importance of integration in programming environments is well known. perhaps the easiest way to build an integrated system is to build a closed system; the designers of the system can use whatever ad hoc techniques are available to make the pieces they provide hang together nicely. 
many of the integrated editor/compiler/execution environments (like the cornell program synthesizer) fall into this category [teitelbaum81]. when building an open system, the problem for the designers of the system is not to integrate a fixed collection of tools, but to provide general mechanisms for tool integration. for instance, unix "pipes" provide an elegant means of integrating new tools by making it easy to make the output of one tool the input of another; also, the simplicity of the unix file system makes it easy to integrate new devices into the system - the file system conceals the physical peculiarities of the devices instead of making them visible [kernighan81]. james donahue system administration managing your logs with chklogs: an introduction to a program written by mr. grimaldo to manage system logs emilio grimaldo perl annotated archives paul dunne software cancer: the seven early warning signs david boundy data groups: specifying the modification of extended state k. rustan m. leino source code secrets: the basic kernel phil hughes powerlist: a structure for parallel recursion many data-parallel algorithms---fast fourier transform, batcher's sorting schemes, and the prefix-sum---exhibit recursive structure. we propose a data structure called powerlist that permits succinct descriptions of such algorithms, highlighting the roles of both parallelism and recursion. simple algebraic properties of this data structure can be exploited to derive properties of these algorithms and to establish equivalence of different algorithms that solve the same problem. jayadev misra a balanced code placement framework give-n-take is a code placement framework which uses a generic producer-consumer mechanism. an instance of this could be a communication step between a processor that computes (produces) some data, and other processors that subsequently reference (consume) these data in an expression. an advantage of give-n-take over traditional partial redundancy elimination techniques is its concept of production regions, instead of single locations, which can be beneficial for general latency hiding. give-n-take also guarantees balanced production, i.e., each production will be started and stopped exactly once. the framework can also take advantage of production coming "for free," as induced by side effects, without disturbing balance. give-n-take can place production either before or after consumption, and it also provides the option to speculatively hoist code out of potentially zero-trip loop (nest) constructs. give-n-take uses a fast elimination method based on tarjan intervals, with a complexity linear in the program size in most cases. we have implemented give-n-take as part of a fortran d compiler prototype, where it solves various communication generation problems associated with compiling data-parallel languages onto distributed-memory architectures. reinhard von hanxleden ken kennedy samba's encrypted password support how smb-encrypted passwords actually work and a walk through the steps required to enable encrypted passwords in samba john blair event and state-based debugging in tau: a prototype sameer shende janice cuny lars hansen joydip kundu stephen mclaughry odile wolf attacking the process migration bottleneck moving the contents of a large virtual address space stands out as the bottleneck in process migration, dominating all other costs and growing with the size of the program.
copy-on-reference shipment is shown to successfully attack this problem in the accent distributed computing environment. logical memory transfers at migration time with individual on-demand page fetches during remote execution allow relocations to occur up to one thousand times faster than with standard techniques. while the amount of allocated memory varies by four orders of magnitude across the processes studied, their transfer times are practically constant. the number of bytes exchanged between machines as a result of migration and remote execution drops by an average of 58% in the representative processes studied, and message-handling costs are cut by over 47% on average. the assumption that processes touch a relatively small part of their memory while executing is shown to be correct, helping to account for these figures. accent's copy-on-reference facility can be used by any application wishing to take advantage of lazy shipment of data. e. zayas an exemplar based smalltalk two varieties of object-oriented systems exist: one based on classes as in smalltalk and another based on exemplars (or prototypical objects) as in act/1. by converting smalltalk from a class based orientation to an exemplar base, independent instance hierarchies and class hierarchies can be provided. decoupling the two hierarchies in this way enables the user's (logical) view of a data type to be separated from the implementer's (physical) view. it permits the instances of a class to have a representation totally different from the instances of a superclass. additionally, it permits the notion of multiple representations to be provided without the need to introduce specialized classes for each representation. in the context of multiple inheritance, it leads to a novel view of inheritance (or-inheritance) that differentiates it from the more traditional multiple inheritance notions (and-inheritance). in general, we show that exemplar based systems are more powerful than class based systems. we also describe how an existing class based smalltalk can be transformed into an exemplar-based smalltalk and discuss possible approaches for the implementation of both and-inheritance and or-inheritance. wilf r. lalonde dave a. thomas john r. pugh on the optimality of change propagation for incremental evaluation of hierarchical attribute grammars several new attribute grammar dialects have recently been developed, all with the common goal of allowing large, complex language translators to be specified through a modular composition of smaller attribute grammars. we refer to the class of dialects as hierarchical attribute grammars. in this short article, we present a characterization of optimal incremental evaluation that indicates the unsuitability of change propagation as the basis of an optimal incremental evaluator for hierarchical attribute grammars. this result lends strong support to the use of incremental evaluators based on more applicative approaches to attribute evaluation, such as carle and pollock's evaluator based on caching of partially attributed subtrees, pugh's evaluator based on function caching of semantic functions, and swierstra and vogt's evaluator based on function caching of visit sequences. alan carle lori pollock register allocation for predicated code alexandre e. eichenberger edward s.
davidson the code analyser lclint debugging code is never fun, but this tool makes it a bit easier. david santo orcero whither j: observations at the j user conference 2000 cliff reiter evolving object oriented design, a case study handling the complexity of hierarchical object oriented design (ood) decomposition and dealing with the subjective nature of the supporting pictorial representations have historically been two relatively undeveloped aspects of the design methodology. by defining decomposition rules, a requirements allocation scheme, and an organized method of uniquely cataloging objects, the overall complexity of the hierarchical design decomposition process can be reduced. the addition of documentation standards, which are used to formally record and track the assignment of requirements, further reduces the complexity. the documentation standards also require that all relevant information concerning objects and operations, as well as their interactions, be formally recorded. the subjective nature of ood pictorial representations can be eliminated by incorporating data couples and control arrows and using support documentation to record that information which cannot be represented pictorially. the intent of this paper is to illustrate, by case study, method improvements to further evolve ood. by incorporating these improvements into the ood process, the portability, reusability and maintainability of the resulting software can be increased. m. p. schuler a large-scale study of file-system contents john r. douceur william j. bolosky off-line variable substitution for scaling points-to analysis most compiler optimizations and software productivity tools rely on information about the effects of pointer dereferences in a program. the purpose of points-to analysis is to compute this information safely, and as accurately as is practical. unfortunately, accurate points-to information is difficult to obtain for large programs, because the time and space requirements of the analysis become prohibitive. we consider the problem of scaling flow- and context-insensitive points-to analysis to large programs, perhaps containing hundreds of thousands of lines of code. our approach is based on a variable substitution transformation, which is performed off-line, i.e., before a standard points-to analysis is performed. the general idea of variable substitution is that a set of variables in a program can be replaced by a single representative variable, thereby reducing the input size of the problem. our main contribution is a linear-time algorithm which finds a particular variable substitution that maintains the precision of the standard analysis, and is also very effective in reducing the size of the problem. we report our experience in performing points-to analysis on large c programs, including some industrial-sized ones. experiments show that our algorithm can reduce the cost of andersen's points-to analysis substantially: on average, it reduced the running time by 53% and the memory cost by 59%, relative to an efficient baseline implementation of the analysis. atanas rountev satish chandra an integrated prolog programming environment u. schreiweis a. keune h. langendörfer interactive multilevel definition of apl functions an attempt is made to establish a convention that will make possible an interactive multilevel definition of apl functions.
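the off-line variable substitution abstract above lends itself to a small illustration. the sketch below (python written for this survey, not from the paper) collapses copy-related variables into one representative with a union-find structure and then runs a crude andersen-style fixpoint on the reduced problem; only address-of and copy constraints are modeled. unlike the paper's linear-time algorithm, this naive merging criterion does not preserve precision in general; it only shows the mechanics of substituting representatives before the analysis runs.

```python
# illustrative sketch of off-line variable substitution before a tiny
# andersen-style points-to propagation; not the algorithm from the paper.

class DSU:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def solve(addr_of, copies, collapse=True):
    """addr_of: list of (p, obj) for p = &obj; copies: list of (dst, src) for dst = src."""
    dsu = DSU()
    if collapse:
        # off-line step: merge copy-related variables into one representative.
        # this crude criterion can lose precision; the paper's algorithm does not.
        for dst, src in copies:
            dsu.union(dst, src)
    rep = dsu.find
    pts = {}
    for p, obj in addr_of:
        pts.setdefault(rep(p), set()).add(obj)
    changed = True
    while changed:                      # worklist-free fixpoint, for brevity
        changed = False
        for dst, src in copies:
            d, s = rep(dst), rep(src)
            new = pts.get(s, set()) - pts.get(d, set())
            if new:
                pts.setdefault(d, set()).update(new)
                changed = True
    every_var = {x for c in copies for x in c} | {p for p, _ in addr_of}
    return {v: pts.get(rep(v), set()) for v in every_var}

print(solve([("p", "a"), ("q", "b")], [("r", "p"), ("s", "r"), ("s", "q")]))
```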
in accordance with this convention, functions can be defined at two levels: first an outline (flow-chart) is specified, this being followed by a detailed specification of the outline, prompted by the apl system. the convention is based on iverson's direct definition form and enables the user to define long functions in an interactive manner. preliminary experience with this convention indicates that it can improve the readability of apl functions. moshe sniedovich an office specification language based on path expressions s. ronzani f. tisato r. zicari a hybrid approach to software reuse we describe a hybrid approach to software reuse in an ongoing project that addresses a challenging software engineering task. the approach is driven by an architectural design and makes use of both code components and program synthesis technology. we describe criteria that were used in choosing the reuse strategy for different parts of the application and argue that to be successful a reuse strategy must be driven by the needs of an application program instead of adapting a software development strategy around a reuse program. sanjay bhansali metrics based asset assessment the re-use of software components during software development is considered to be an important factor to improve the quality and productivity and thus to reduce the time to market of the final product. in this paper we will present a proposal for a description model for re-usable components. we will also present the results of case studies concerned with both telecom-specific and "generic" it-components. these components have been examined using the description model and a further set of (empirical) criteria. based on the results a model concept for the empirical assessment of javabeans, which is currently under development, is presented. andreas schmietendorf reiner dumke erik foltin design and evaluation of a compiler algorithm for prefetching todd c. mowry monica s. lam anoop gupta linux for suits: patent absurdities doc searls integrating analysis and design methods (abstract) derek coleman paul jeremaes thinking objectively: object-oriented abstractions for distributed programming rachid guerraoui mohamed e. fayad termination detection of diffusing computations in communicating sequential processes jayadev misra k. m. chandy whole program compilation for embedded software: the adsl experiment the increasing complexity and decreasing time-to-market of embedded software force designers to write more modular and reusable code, using for example object-oriented techniques and languages such as c++. the resulting memory and runtime overhead cannot be removed by traditional optimizing compilers; a global, whole program analysis is required. to evaluate the potential of whole program optimization techniques, we have manually optimized the embedded software of a commercial adsl modem. using only techniques that can be automated, a memory footprint reduction of nearly 60% has been achieved. we conclude that a consistent and aggressive use of whole system optimization techniques is feasible and worthwhile, and that the implementation of such techniques in a compiler for embedded software will allow software designers to write more modular and reusable code without suffering the associated implementation overhead. a.
johan cockx efficient mutation analysis: a new approach in previously reported research we designed and analyzed algorithms that improved upon the run time complexity of all known weak and strong mutation analysis methods at the expense of increased space complexity. here we describe a new serial strong mutation algorithm whose running time is on the average much faster than the previous ones and that uses significantly less space than them also. its space requirement is approximately the same as that of mothra, a well-known and readily available implemented system. moreover, while this algorithm can serve as basis for a new mutation system, it is designed to be consistent with the mothra architecture, in the sense that, by replacing certain modules of that system with new ones, a much faster system will result. such a mothra-based implementation of the new work is in progress. like the previous algorithms, this one, which we call lazy mutant analysis or lma, tries to determine whether a mutant is strongly killed by a given test only if it is already known that it is weakly killed by that test. unlike those algorithms, lma avoids executing many mutants by dynamically discovering classes of mutants that have the "same" behavior, and executing representatives of those classes. the overhead it incurs is small in proportion to the time saved, and the algorithm has a very natural parallel implementation. in comparison to the fastest known algorithms for strong mutation analysis, in the best case, lma can improve the speed by a factor proportional to the average number of mutants per program statement. in the worst case, there is no improvement in the running time, but such a case is hard to construct. this work enables us to apply mutation analysis to significantly larger programs than is currently possible. vladimir n. fleyshgakker stewart n. weiss egregion: a branch coverage tool for apl this article describes our experience with test suites and automated branch coverage tools for apl software maintenance, based on our use of them to verify y2k compliance of an apl-based database system. we introduce _egregion,_ a simple, easy-to- use tool that assesses branch coverage in apl functions. the tool comprises a pair of apl functions that report detailed and summary function-level information about code coverage of test suites. the _egregion_ tool provides a line-by-line analysis of statement coverage, labels not branched to, branches never taken, branches always taken, transfer of control via non-branches, and branches to non-labeled lines. although we do not consider this groundbreaking work, we do believe that the coverage tool will be valuable to apl programmers who are engaged in the creation of large, reliable applications. robert bernecky overcoming the challenges to feedback-directed optimization (keynote talk) _feedback-directed optimization (fdo) is a general term used to describe any technique that alters a program's execution based on tendencies observed in its present or past runs. this paper reviews the current state of affairs in fdo and discusses the challenges inhibiting further acceptance of these techniques. it also argues that current trends in hardware and software technology have resulted in an execution environment where immutable executables and traditional static optimizations are no longer sufficient. 
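as a toy instance of the feedback-directed idea described in this keynote abstract, the sketch below (an invented python example, not anything from the talk) gathers call-frequency feedback in a training run and then rebuilds the dispatcher with a guarded fast path for the hottest case, keeping a cold fallback for full generality.

```python
# toy illustration of feedback-directed optimization: observe behavior in a
# profiling run, then rebuild a dispatcher that fast-paths the common case.
# all names are invented for the example; this is not tied to any real compiler.

from collections import Counter

def make_profiled(funcs):
    counts = Counter()
    def dispatch(tag, x):
        counts[tag] += 1            # feedback gathered during the training run
        return funcs[tag](x)
    return dispatch, counts

def specialize(funcs, counts):
    hot = counts.most_common(1)[0][0]
    hot_fn = funcs[hot]
    def dispatch(tag, x):
        if tag == hot:              # guarded fast path chosen from the profile
            return hot_fn(x)
        return funcs[tag](x)        # cold fallback keeps full generality
    return dispatch

funcs = {"square": lambda x: x * x, "negate": lambda x: -x}
profiled, counts = make_profiled(funcs)
for i in range(1000):
    profiled("square" if i % 10 else "negate", i)
fast = specialize(funcs, counts)
print(fast("square", 7), fast("negate", 7))
```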
it explains how we can improve the effectiveness of our optimizers by increasing our understanding of program behavior, and it provides examples of temporal behavior that we can (or could in the future) exploit during optimization._ michael d. smith a better way to combine efficient string length encoding and zero-termination c. bron e. j. dijkstra coloring register pairs many architectures require that a program use pairs of adjacent registers to hold double-precision floating-point values. register allocators based on chaitin's graph-coloring technique have trouble with programs that contain both single-register values and values that require adjacent pairs of registers. in particular, chaitin's algorithm often produces excessive spilling on such programs. this results in underuse of the register set; the extra loads and stores inserted into the program for spilling also slow execution. an allocator based on an optimistic coloring scheme naturally avoids this problem. such allocators delay the decision to spill a value until late in the allocation process. this eliminates the over-spilling provoked by adjacent register pairs in chaitin's scheme. this paper discusses the representation of register pairs in a graph coloring allocator. it explains the problems that arise with chaitin's allocator and shows how the optimistic allocator avoids them. it provides a rationale for determining how to add larger aggregates to the interference graph. preston briggs keith d. cooper linda torczon technical correspondence: on landwehr's "an abstract type for statistics collection" jakob nielsen carl e. landwehr apl extensions - a users view there has been much heated argument about extensions in apl. this paper reflects 5 years' experience with one brand of extensions (stsc's nested array system). useful and irritating features are discussed. facilities available are compared with other implementations - apl2, dyalog, and ipsa. topics covered include event handling, file systems, strand notation, indexing, the each dual and rank operators, and interfaces to other languages. the paper is illustrated with examples drawn from code produced internally, and from vector competitions. maurice h. jordan organizing software in a distributed environment the system modeller provides automatic support for several different kinds of program development cycle in the cedar programming system. it handles the daily evolution of a single module or a small group of modules modified by a single person, the assembly of numerous modules into a large system with complex interconnections, and the formal release of a system. the modeller can also efficiently locate a large number of modules in a big distributed file system, and move them from one machine to another to meet operational requirements or improve performance. butler w. lampson eric e. schmidt application development through reuse: the ithaca tools environment m. g. fugini o. nierstrasz b. pernici an interactive debugger for a concurrent language this work deals with issues of interactive debugging for the concurrent language ecsp. the debugger matches a formal specification of the expected behavior of a program against its actual behaviour. this specification can be given at different levels of abstraction. control is returned to the user when an error is detected. the user can then modify the flow of the computation and/or dynamically change the specification of the expected behavior. the debugger implementation is based on program transformation techniques. n. 
de francesco d. latella g. vaglini the renewed life of parsing tools giancarlo succi interacting with an active, integrated environment software engineering environments are intended to provide a cohesive and integrated set of tools to support the process of software engineering with much current research into environment design focussed on maximising the degree to which these tools can be integrated. this paper describes the architecture of a prototype environment which attempts to achieve a high degree of integration using techniques drawn from artificial intelligence, office automation and object-oriented programming. this environment is implemented as a federation of intelligent, co-operating agents which communicate, with each other and with users, by message passing. this paper is particularly concerned with user interface integration including the mechanisms employed to permit inter-agent and agent-user communications. thomas rodden pete sawyer ian sommerville a model for procedures passed as parameters graham birtwistle ken loose the cold, thin edge taking the shell paradigm to its brutal limits. whether you use tcl, shells, perl, or c, there is usually an option whereby tools from one programming environment can be imported into another. here's how to "push the envelope". todd graham lewis the evolution of an integrated testing environment by the domain testing strategy for the past several years a research approach has been developed in the area of program testing; this automated testing approach is called the domain testing strategy. this paper examines broader implications of that research, together with several new testing research approaches which have been motivated by this work. for example, recently some new results which characterize a set of paths which are sufficient for path-oriented testing have been obtained, motivated to a great extent by domain testing. this approach, in turn, has led to some positive and exciting results in the area of reliable module integration testing. a number of researchers are examining the issue of specification testing, combining information from the program specification with a structural testing approach which utilizes only program information; they have found domain testing to be helpful in this regard. combining all these aspects of the testing problem indicates the evolution of an integrated testing environment. lee j. white design requirements of a single-user operating system this paper describes the design requirements of a single-user operating system which could be easily be implemented and used on various personal computers. the main emphasis of the design is its robustness and its 'openess' in the sense that no sharp boundary exist between the operating system and the user's program. k. ranai a reuse triplet for systematic software reuse ezra k. mugisa small-x - a language and interpreter for building expert systems this paper describes a programming language and its interpreter that have been developed for the construction of expert systems on microcomputers. the language has been developed with attention to simplicity and functionality. its interpreter, now running on ms-dos machines executes expert system programs written in small-x and provides a simple environment for expert system development. randy m. kaplan an apl batch scheduler improves service and system management an apl.ms2 system is described which enhances the behavior of the batch task scheduler for the univac 90 series system: vs/9. 
in addition to permitting customized task control on a per-shift basis, the system manager can interactively tune the system's batch load. the workspace communicates through operator commands and implements a simple algorithm which spreads the batch load so that both large and small tasks can execute. this work is an instance of the application of apl to systems programming in order to provide elegant interactive operating system tools. jeff shrager lyle hartman referencing and retention in block-structured coroutines gary lindstrom mary lou soffa agora: message passing as a foundation for exploring oo language concepts wim codenie koen de hondt theo d'hondt patrick steyaert a preliminary study of ada expansion ratios since the mandate (dodd 5001.31) requiring the use of the ada language in all mission critical software, a need for revised expansion ratios for estimating ada software has emerged. this paper describes a study to determine a set of preliminary ada expansion ratios. three classes of ada programming features were identified, each with a corresponding expansion ratio. these ratios are used to determine memory requirements which are input to cost estimation models such as cocomo and slim [1]. the results of this research will have direct bearing on future ada projects by helping to develop improved software costing and sizing estimates. this will lead to improved reliability of design requirements for program memory and execution timing specifications. a special note on the effect of timing and compiler optimization is also included in this paper. james v. chelini edmund b. hudson stephen m. reidy finite state verification (abstract only): an emerging technology for validating software systems ever since formal verification was first proposed in the late sixties, the idea of being able to definitively determine if a program meets its specifications has been an appealing, but elusive, goal. although verification systems based on theorem proving have improved considerably over the years, they are still inherently undecidable and require significant guidance from mathematically astute users. the human effort required for formal verification is so significant that it is usually only applied to the most critical software components. alternative approaches to theorem proving based verification have also been under development for some time. these approaches usually restrict the problem domain in some way, such as focusing on hardware descriptions, communication protocols, or a limited specification language. these restrictions allow the problem to be solved by using reasoning algorithms that are guaranteed to terminate and by representing the problem with a finite state model, and thus these approaches have been called finite state verification. systems based on these approaches are starting to be effectively applied to interesting software systems and there is increasing optimism that such approaches will become widely applicable. in this presentation, i will overview some of the different approaches to finite state verification. in particular i will describe symbolic model checking, integer necessary constraints, and incremental data flow analysis approaches. the strengths and weaknesses of these approaches will be described. in addition, i will outline the major challenges that must be addressed before finite state verification will become a common tool for the typical well-trained software engineer. lori a.
clarke scanning polyhedra with do loops corinne ancourt françois irigoin blocks and procedures heiko kießling uwe kruger kernel korner: contributing to the linux kernel - the linux configuration system joseph pranevich an ada solution to the general mutual exclusion problem kwok-bun yue (isef): an integrated industrial-strength software engineering framework isef is an environment for programming-in-the-large that integrates disparate software engineering principles, methods and tools into an industrial- strength, automated software development framework. projects using isef have reported increased software quality, improved software manageability and decreased software production costs. this paper presents the basic principles and mechanisms that enable isef to achieve environment/process integration as well as integration within the environment itself, describes the isef approach to software development, provides comparisons with related work, and reports on experience in using isef. shaye koenig prioritizing remote procedure calls in ada distributed systems in this paper we discuss the assignment of priorities to the execution of remote procedure calls in distributed real-time systems that are programmed using the distributed systems annex (dsa) of ada 95. we first discuss the current priority model used in the glade implementation of the dsa. we then present some theoretical results that show that a more flexible priority assignment methodology can provide much better schedulable utilization levels. based upon these results we propose an implementation that would allow using this priority assignment methodology in glade. finally, we propose new features for a future dsa that would allow prioritizing remote procedure calls in a flexible manner. j. j. gutierrez garcía m. gonzalez harbour data abstraction, implementation, specification, and testing john gannon paul mcmullin richard hamlet an optimal algorithm for mutual exclusion in computer networks glenn ricart ashok k. agrawala take command: the system logging daemons, syslogd and klog take command of your log fileslearning to handle those pesky logging daemons. michael a. schwarz a proposal for object oriented modula-2 richard thomas forth in postscript mitch bradley testgen - testing tool for ada designs and ada code thomas s. radi stop the presses: 1998 atlanta linux showcase norman m. jacobowitz the design and implementation of an intentional naming system this paper presents the design and implementation of the intentional naming system (ins), a resource discovery and service location system for dynamic and mobile networks of devices and computers. such environments require a naming system that is (i) expressive, to describe and make requests based on specific properties of services, (ii) responsive, to track changes due to mobility and performance, (iii) robust, to handle failures, and (iv) easily configurable. ins uses a simple language based on attributes and values for its names. applications use the language to describe what they are looking for (i.e., their _intent_), not where to find things (i.e., not hostnames). ins implements a _late binding_ mechanism that integrates name resolution and message routing, enabling clients to continue communicating with end-nodes even if the name-to-address mappings change while a session is in progress. 
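a minimal sketch of the attribute-value matching that the intentional naming abstract describes, assuming a flat dictionary of attributes per advertisement (the real ins name language is tree structured and its resolvers form an overlay network, neither of which is modeled here; all addresses and attribute names are made up):

```python
# toy attribute-value resolver in the spirit of ins: clients state what they
# want (attributes), not where it is (hostnames); the address may change.

class TinyResolver:
    def __init__(self):
        self.entries = []                       # (attributes, current address)

    def advertise(self, attrs, addr):
        self.entries.append((dict(attrs), addr))

    def update(self, attrs, addr):
        # late binding: the name stays stable while the address may change
        attrs = dict(attrs)
        self.entries = [(a, ad) for a, ad in self.entries if a != attrs]
        self.entries.append((attrs, addr))

    def resolve(self, query):
        # a query matches any advertisement containing all requested pairs
        return [addr for attrs, addr in self.entries
                if all(attrs.get(k) == v for k, v in query.items())]

r = TinyResolver()
r.advertise({"service": "camera", "building": "ne43", "floor": "5"}, "10.0.0.7:5000")
r.update({"service": "camera", "building": "ne43", "floor": "5"}, "10.0.0.9:5000")
print(r.resolve({"service": "camera", "building": "ne43"}))   # ['10.0.0.9:5000']
```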
ins resolvers self-configure to form an application- level overlay network, which they use to discover new services, perform late binding, and maintain weak consistency of names using soft-state name exchanges and updates. we analyze the performance of the ins algorithms and protocols, present measurements of a java-based implementation, and describe three applications we have implemented that demonstrate the feasibility and utility of ins. william adjie-winoto elliot schwartz hari balakrishnan jeremy lilley coping with deeply nested control structures g r perkins r w norman s danicic data structures in ada (panel session) s. ron oliver frank gonzales ron lee robert monical epep: an operating system designed for experiment-control j. h. emck j. h. voskamp a. j. van der wal architectural issues in software reuse: it's not just the functionality, it's the packaging effective reuse depends not only on finding and reusing components, but also on the ways those components are combined. the informal folklore of software engineering provides a number of diverse styles for organizing software systems. these styles, or architectures, show how to compose systems from components; different styles expect different kinds of component packaging and different kinds of interactions between the components. unfortunately, these styles and packaging distinctions are often implicit; as a consequence, components with appropriate functionality may fail to work together. this talk surveys common architectural styles, including important packaging and interaction distinctions, and proposes an approach to the problem of reconciling architectural mismatches. mary shaw libraries as programs preserved within compiler continuations an approach to library organization for block-structured languages is given that avoids corrupting such structuring with the introduction of "modules", "units", or "packages". the scheme involves retentive block structuring and the preservation of partially compiled programs as compiler continuations. its chief advantage is language simplicity. its chief disadvantage is, at the present, its proclivity towards redundant file storage. this can possibly be eliminated by adopting a computational model using a one-level store. mark b. wells margaret a. hug rollo silver definitions of dependence distance data dependence distance is widely used to characterize data dependences in advance optimizing compilers. the standard definition of dependence distance assumes that loops are normalized (have constant lower bounds and a step of 1); there is not a commonly accepted definition for unnormalized loops. we have identified several potential definitions, all of which give the same answer for normalized loops. there are a number of subtleties involved in choosing between these definitions, and no one definition is suitable for all applications. william pugh from a functional point of view: a framework for extensions to apl a preliminary framework for investigating certain extensions to apl is presented. this framework allows classification of some past and future language extensions according to their desirability from a functional point of view. object classes and syntax are the main themes. m. gfeller programming with xforms, part 3: the library thor sigvaldason correctness is not congruent with quality bradley j. brown the finalization operation for abstract types in this paper we argue the importance of a finalization capability in a programming language abstract type facility. 
finalization, the dual of initialization, is crucial for applications involving the allocation of, and access to, abstract resources. a semantic model for finalization is given, defining both statically and dynamically allocated abstract objects, in the presence of exception handling. for illustration, we incorporate finalization in an abstract data type facility designed as an extension of ada. richard l. schwartz p. m. melliar-smith experiences developing a bank loan system gordon sheppard combining generational and conservative garbage collection: framework and implementations two key ideas in garbage collection are generational collection and conservative pointer-finding. generational collection and conservative pointer-finding are hard to use together, because generational collection is usually expressed in terms of copying objects, while conservative pointer- finding precludes copying. we present a new framework for defining garbage collectors. when applied to generational collection, it generalizes the notion of younger/older to a partial order. it can describe traditional generational and conservative techniques, and lends itself to combining different techniques in novel ways. we study in particular two new garbage collectors inspired by this framework. both these collectors use conservative pointer- finding. the first one is based on a rewrite of an existing trace-and-sweep collector to use one level of generation. the second one has a single parameter, which controls how objects are partitioned into generations: the value of this parameter can be changed dynamically with no overhead. we have implemented both collectors and present measurements of their performance in practice. alan demmers mark weiser barry hayes hans boehm daniel bobrow scott shenker functional compiler techniques for an imperative language mitsuru ikei michael wolfe creating structure from linearity in non-ada interfaces john a. campbell incremental constraint engine steve vestal a formal model and specification language for procedure calling conventions procedure calling conventions are used to provide uniform procedure- call interfaces. applications, such as compilers and debuggers, which generate, or process procedures at the machine-language abstraction level require knowledge of the calling convention. in this paper, we develop a formal model for procedure calling conventions called p-fsa's. using this model, we are able to ensure several completeness and consistency properties of calling conventions. currently, applications that manipulate procedures implement conventions in an ad-hoc manner. the resulting code is complicated with details, difficult to maintain, and often riddled with errors. to alleviate the situation, we introduce a calling convention specification language, called ccl. the combination of ccl and p-fsa's facilitates the accurate specification of conventions that can be shown to be both consistent and complete. mark w. bailey jack w. davidson bottom-up tree rewriting tool mburg k. john gough a trace transformation technique for communication refinement models of computation like kahn and dataflow process networks provide convenient means for modeling signal processing applications. this is partly due to the abstract primitives that these models offer for communication between concurrent processes. however, when mapping an application model onto an architecture, these primitives need to be mapped onto architecture level communication primitives. 
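the communication primitives mentioned in the trace-transformation abstract can be pictured with a tiny kahn-style process network: processes that interact only through blocking fifo channels. the python sketch below illustrates that model of computation, not the spade tool itself; the end-of-stream marker is an assumption of the sketch.

```python
# a small kahn-style process network: three processes communicate only
# through blocking fifo channels (queue.Queue stands in for the channels).

import threading, queue

def source(out, n):
    for i in range(n):
        out.put(i)
    out.put(None)                      # end-of-stream marker (sketch convention)

def scale(inp, out, k):
    while True:
        x = inp.get()                  # blocking read: kahn-style communication
        if x is None:
            out.put(None)
            return
        out.put(k * x)

def sink(inp, result):
    while True:
        x = inp.get()
        if x is None:
            return
        result.append(x)

a, b = queue.Queue(), queue.Queue()
result = []
threads = [threading.Thread(target=source, args=(a, 5)),
           threading.Thread(target=scale, args=(a, b, 10)),
           threading.Thread(target=sink, args=(b, result))]
for t in threads: t.start()
for t in threads: t.join()
print(result)                          # [0, 10, 20, 30, 40]
```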
we present a trace transformation technique that supports a system architect in performing this communication refinement. we discuss the implementation of this technique in a tool for architecture exploration named spade and present examples. paul lieverse pieter van der wolf ed deprettere operating system design: towards a holistic approach? john r nicol gordon s blair jonathan walpole a mechanism for specifying the structure of large, layered, object-oriented programs harold l. ossher extensible security architectures for java dan s. wallach dirk balfanz drew dean edward w. felten vaxclusters (extended abstract): a closely-coupled distributed system nancy p. kronenberg henry m. levy william d. strecker principles of package design subprogram packages are groups of related subroutines used to extend the available facilities in a programming system. the results of developing several such packages for various applications are presented, with a distinction made between external and internal design criteria--- what properties packages should offer to their users and the guidelines designers should follow in order to provide them. an important issue, the design of reusable software, is thus addressed, and the concept of abstract data types proposed as a desirable solution. bertrand meyer problems with pthreads and ada this position paper discusses issues related to the recent effort to add real- time extensions to the posix (1003.1 |3|) standard, and in particular to the proposal for light-weight tasking in the form of pthreads |2|. it is claimed that this effort is relevant to the ada community, especially in the real-time domain. while offering great opportunities to improve the performance of multi-tasking, real-time ada applications, it may become not useful if the semantics of the specifications are not thought about very carefully, and the match with ada requirements is not satisfied. due to certain problems with the proposal as it stands now, it is suggested that the workshop take a close look at the draft, evaluate it, and decide on a plan of action to guarantee the suitability of pthreads to real-time ada applications before it is approved as a standard. offer pazy a forth exception handler b. j. rodriguez language constructs for managing change in process-centered environments change is pervasive during software development, affecting objects, processes, and environments. in process centered environments, change management can be facilitated by software-process programming, which formalizes the representation of software products and processes using software-process programming languages (sppls). to fully realize this goal sppls should include constructs that specifically address the problems of change management. these problems include lack of representation of inter- object relationships, weak semantics for inter-object relationships, visibility of implementations, lack of formal representation of software processes, and reliance on programmers to manage change manually. appl/a is a prototype sppl that addresses these problems. appl/a is an extension to ada.. the principal extensions include abstract, persistent relations with programmable implementations, relation attributes that may be composite and derived, triggers that react to relation operations, optionally- enforceable predicates on relations, and five composite statements with transaction-like capabilities. appl/a relations and triggers are especially important for the problems raised here. 
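to give a feel for the relations-plus-triggers combination that appl/a provides, here is a toy transliteration into python (appl/a itself extends ada, and every name below is invented for the example): a relation stores tuples and fires registered trigger callbacks on insert, which is enough to propagate a derivation dependency.

```python
# the flavor of appl/a's relations plus triggers, reduced to a tiny sketch.

class Relation:
    def __init__(self, name):
        self.name, self.tuples, self.triggers = name, [], []

    def on_insert(self, fn):
        self.triggers.append(fn)       # trigger reacting to relation operations

    def insert(self, row):
        self.tuples.append(row)
        for fn in self.triggers:
            fn(row)                    # e.g. propagate data or invoke a tool

derives = Relation("derives")          # (source artifact, derived artifact)
rebuild_log = []

# a trigger that records which derived artifacts are now out of date
derives.on_insert(lambda row: rebuild_log.append(f"schedule rebuild of {row[1]}"))

derives.insert(("parser.spec", "parser.adb"))
derives.insert(("parser.adb", "parser.o"))
print(rebuild_log)
```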
relations enable inter-object relationships to be represented explicitly and derivation dependencies to be maintained automatically. relation bodies can be programmed to implement alternative storage and computation strategies without affecting users of relation specifications. triggers can react to changes in relations, automatically propagating data, invoking tools, and performing other change management tasks. predicates and the transaction-like statements support change management in the face of evolving standards of consistency. together, these features mitigate many of the problems that complicate change management in software processes and process-centered environments. stanley m. sutton dennis heimbigner leon j. osterweil a comparison of object-oriented analysis and design methods (abstract) martin fowler the class concept in the simula programming language this paper is directed at apl-users who wants a quick introduction into the main features of the simula programming language. simula is a general purpose high-level language. good simula systems exist on most large computers. the paper describes the methods for program and data structuring in simula. these are well suited for structured programming and for the production of secure, modular programs. the paper also points at some major differences between simula and apl. jacob palme mats wallin thread-specific data and signal handling in multi-threaded applications this second part of a series on multi-threading deals with how to use c programs with one of the posix packages available for linux to handle signals and concurrent threads martin mccarthy directions in programming languages (panel discussion) paul w. abrahams gerry fisher daniel d. mccracken larry rosler guy l. steele a note on the modulo operation in edison g a hill the new developments in c the c language has developed considerably since the publication of the kernighan and ritchie book. the language development includes enumeration data types, a void type, long (more than 8 character) identifiers, and an expanded semantic structure. at the same time, the compiler technology that developed the portable c compiler (which was used to provide more than 30 production compilers on different machines) is evolving into pcc2, which offers improved maintenance and an easier porting process while handling a larger number of machine features. s. c. johnson l. rosler kernel korner: block device drivers michael k. johnson opium: a debugging environment for prolog development and debugging research mireille ducasse anna-maria emde experience using an automated metrics framework to improve the quality of ada software j. a. perkins r. s. gorzela kernel korner the wonderful world of linux 2.2: mr. pranevich gives us a look at the changes and improvements coming out in the new kernel joseph pranevich understanding com/corba interworking mike rosen successful experience with adasage reusable component library jon s. jensen howard d. stewart paul h. whittington porting themcc powerpc c/c++ compiler into an interactive development environment farooq butt compiler-controlled memory keith d. cooper timothy j. harvey take command bc: a handy utility: mr. 
mcandrew shows us how the bc command can be used for prototyping numerical algorithms alasdair mcandrew the galley parallel file system nils nieuwejaar david kotz constraints in concurrent object-oriented environments chris laffra jan van den bos the system-oriented editor - a tool for managing large software systems dick schefstrom debugging concurrent programs the main problems associated with debugging concurrent programs are increased complexity, the "probe effect," nonrepeatability, and the lack of a synchronized global clock. the probe effect refers to the fact that any attempt to observe the behavior of a distributed system may change the behavior of that system. for some parallel programs, different executions with the same data will result in different results even without any attempt to observe the behavior. even when the behavior can be observed, in many systems the lack of a synchronized global clock makes the results of the observation difficult to interpret. this paper discusses these and other problems related to debugging concurrent programs and presents a survey of current techniques used in debugging concurrent programs. systems using three general techniques are described: traditional or breakpoint style debuggers, event monitoring systems, and static analysis systems. in addition, techniques for limiting, organizing, and displaying a large amount of data produced by the debugging systems are discussed. charles e. mcdowell david p. helmbold book review: practical programming in tcl and tk john mclaughlin types in school noemi de la rocque rodriguez roberto ierusalimschy jose lucas rangel techniques for trusted software engineering premkumar t. devanbu philip w-l fong stuart g. stubblebine from the editor corporate linux journal staff experience and lessons learned in transporting ada software karen e. sivley conversational group service the purpose of this paper is to propose a way of tolerating software (design) faults in distributed systems relying on the well-known conversation (atomic action) approach. to do this, we shall consider differences between two programming paradigms: group communication and conversations, and discuss how a group communication service can be used to provide design fault tolerance by conversations. the main characteristics and peculiarities of this new conversational group service are described. alexander b. romanovsky version management in gypsy this paper describes the version manager of the gypsy programming support environment, and its integration with the object-oriented extension of unix1 on which it is built. ellis s. cohen dilip a. soni raimund gluecker william m. hasling robert w. schwanke michael e. wagner optimization of range checking an analysis is given for optimizing run-time range checks in regions of high execution frequency. these optimizations are accomplished using strength reduction, code motion and common subexpression elimination. test programs, using the above optimizations, are used to illustrate run-time improvements. victoria markstein john cocke peter markstein at the forge: multiple choice quizes, part 3 reuven lerner a multi-language syntax-directed editor a limitation of current selection-entry syntax-directed editors is that a particular editor is limited to manipulating programs in one particular programming language. a primary goal of the imegs research project was to construct a selection-entry syntax- directed editor which can manipulate programs in a wide variety of languages. 
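the range-check optimization abstract above can be illustrated by hand: the first function below pays a bounds check on every iteration, while the second hoists a single loop-invariant check out of the loop, which is the effect the paper obtains with code motion at the compiler level. this worked example is not taken from the paper.

```python
# hand-worked before/after for range-check optimization (unit-stride case).

def checked_sum(a, lo, hi):
    total = 0
    for i in range(lo, hi):
        if i < 0 or i >= len(a):          # range check executed every iteration
            raise IndexError(i)
        total += a[i]
    return total

def hoisted_sum(a, lo, hi):
    # code motion: the loop-invariant check is hoisted; the lo < hi guard
    # keeps the empty-range behaviour identical to checked_sum
    if lo < hi and (lo < 0 or hi > len(a)):
        raise IndexError((lo, hi))
    total = 0
    for i in range(lo, hi):
        total += a[i]
    return total

data = list(range(100))
assert checked_sum(data, 10, 20) == hoisted_sum(data, 10, 20) == sum(range(10, 20))
```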
an imegs prototype has been developed which contains a multi-language editor. donald j. bagert donald k. friesen a method of acquiring formal specifications from examples lu jian relfun: a relational/functional integration with valued clauses h boley exciting!: a newbie's view of the apl2000 conference teresa camarillo-mandell experiences with the spiral model as a process model generator barry boehm frank belz product review: va research var station ii jim dennis tools and methodology for user interface development jim rhyne roger ehrich john bennett tom hewett john sibert terry bleser perspectives on software quality assurance (panel session) thomas l. hannan john r. brown roger fujii the design and implementation of home home is a version of smalltalk which can be efficiently executed on a multiprocessor and can be executed in parallel by combining a smalltalk process with a mach thread and executing the process on the thread. home is nearly the same as ordinary smalltalk except that multiple processes may execute in parallel. thus, almost all applications running on ordinary smalltalk can be executed on home without changes in their code. home was designed and implemented based on the following fundamental policies: (1) theoretically, an infinite number of processes can become active; (2) the moment a process is scheduled, it becomes active; (3) no process switching occurs; (4) home is equivalent to ordinary smalltalk except for the previous three policies. the performance of the current implementation of home running on omron luna-88k, which had four processors, was measured by benchmarks which execute in parallel with multiple processes. in all benchmarks, the results showed that home's performance is much better than hps on the same workstation. kazuhiro ogata satoshi kurihara mikio inari norihisa doi object-oriented programs in realtime j. m. gwinn technical correspondence diane crawford effective partial redundancy elimination partial redundancy elimination is a code optimization with a long history of literature and implementation. in practice, its effectiveness depends on issues of naming and code shape. this paper shows that a combination of global reassociation and global value numbering can increase the effectiveness of partial redundancy elimination. by imposing a discipline on the choice of names and the shape of expressions, we are able to expose more redundancies. as part of the work, we introduce a new algorithm for global reassociation of expressions. it uses global information to reorder expressions, creating opportunities for other optimizations. the new algorithm generalizes earlier work that ordered fortran array address expressions to improve otpimization [25]. preston briggs keith d. cooper c++ versus lisp: a case study howard trickey fortran reflections jerry wagener knowing is better than thinking: a simple approach to inter-procedural optimization r schooler the ada+ front end and coder generator m. r. barbacci w. h. maddox t. d. newton r. g. stockton dependable distributed object systems rachid guerraoui jean charles fabre gul agha a process-oriented approach to configuration management yves bernard pierre lavency lj readers' choice linux journal readers rank their favorite linux- related products. 
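the naming discipline that the partial-redundancy-elimination abstract above relies on can be shown with a toy local value-numbering pass: canonicalizing the operands of commutative operators (one very small case of reassociation) exposes redundancies that purely textual matching would miss. the pass below is an invented illustration, not the authors' global algorithm.

```python
# toy local value numbering over a straight-line block of quadruples.

def value_number(block):
    table, names, out = {}, {}, []
    for dst, op, a, b in block:
        va, vb = names.get(a, a), names.get(b, b)
        # sort operands of commutative ops so "x+y" and "y+x" get the same key
        key = (op, *sorted((str(va), str(vb)))) if op in "+*" else (op, va, vb)
        if key in table:
            out.append((dst, "copy", table[key], None))   # redundant: reuse value
        else:
            table[key] = dst
            out.append((dst, op, a, b))
        names[dst] = table[key]
    return out

block = [("t1", "+", "x", "y"),
         ("t2", "+", "y", "x"),        # same value as t1 once operands are canonical
         ("t3", "*", "t1", "z"),
         ("t4", "*", "z", "t2")]       # becomes a copy of t3
for line in value_number(block):
    print(line)
```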
corporate linux journal staff designing programs to check their work (abstract) manuel blum interesting problems in transforming existing software for reusability this paper discusses some of the technical problems encountered in the automated transformation of ada software to improve its ability to be reused in other systems. it also presents approaches for addressing those problems. the specific transformations covered in this paper are: • replacing visible data structures with private types • converting non- generic units into generic units • extracting abstract data type (adt) packages • replacing pre- defined typed with user-defined types • parameterizing references to global variables for each transformation, the paper provides rationale for making the change, a description with examples showing how the code is changed, a discussion of potential problems in achieving a useful result, and decisions made for its implementation using an automated tool called the re-engineering mentor. despite the technical challenges presented by these and other automated transformations of ada software, the paper shows that significant improvements in the reusability of existing code can be achieved in a cost-effective manner. kathleen gilroy mapping ada onto embedded systems: memory constraints a. tetewsky a. clough r. racine r. whittredge sarek: a window system interface for object-oriented concurrent programming languages m. uehara c. numaoka y. yokote m. tokoro a discriminant metric for module cohesion the decomposition of a large program into modules can be guided by the use of a property called cohesion, first described by constantine. cohesion is a quality that describes the degree to which the different actions performed by a module contribute to a unified function. however, this technique may be difficult to apply due to the subjective nature of the definitions of levels of cohesion. in this paper a software metric is defined and proposed as a discriminant for classifying modules according to their cohesion. formal properties of the metric are derived which can be used to set the metric value ranges for module classification. thomas j. emerson a mechanism of process group for application reliability in distributed systems zhao hong huatian li why are we using java again? paul tyma spam: a microcode based tool for tracing operating system events we have developed a tool called spam (for system performance analysis using microcode), based on microcode modifications to a vax 8600, that traces operating system events as a side-effect to normal execution. this trace of interrupts, exceptions, system calls and context switches can then be processed to analyze operating system behavior for the purpose of debugging, tuning or development. spam allows measurements to be made on a fully operating unix system with little perturbation (typically less than 10%) and without the need for modifying the kernel. stephen w. melvin yale n. patt product review quickstart: replication & recovery v1.2: an overview and review of this replication and recovery product daniel lazenby linux for suits: a talk with tim o'reilly doc searls process decomposition through locality of reference in the context of sequential computers, it is common practice to exploit temporal locality of reference through devices such as caches and virtual memory. in the context of multiprocessors, we believe that it is equally important to exploit spatial locality of reference. 
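a small worked example of the spatial-locality point being made here: a block decomposition hands each process one contiguous slice of the index domain, while a cyclic decomposition scatters its indices, so the block layout lines up far better with cache lines and local memory. the helper names are invented for the illustration.

```python
# contrast a block decomposition (contiguous indices per process, good
# spatial locality) with a cyclic one (scattered indices).

def block_indices(n, p, rank):
    size = (n + p - 1) // p
    return list(range(rank * size, min(n, (rank + 1) * size)))

def cyclic_indices(n, p, rank):
    return list(range(rank, n, p))

n, p = 16, 4
for rank in range(p):
    print(rank, "block ", block_indices(n, p, rank))
    print(rank, "cyclic", cyclic_indices(n, p, rank))
```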
we are developing a system which, given a sequential program and its domain decomposition, performs process decomposition so as to enhance spatial locality of reference. we describe an application of this method - generating code from shared-memory programs for the (distributed memory) intel ipsc/2. a. rogers k. pingali mimic: a fast system/370 simulator c. may taxonomy of benchmarks russell m. clapp trevor mudge vismod: a beginner-friendly programming environment ricardo jimenez-peris marta patiño-martinez jorge pacios-martinez if writers can't program and programmers can't write, who's writing user documentation? gregory r mcarthur towards a classification of visibility rules michael klug an innovative software reengineering tools workshop - a test of market maturity and lessons learned pierre bourque alain abran a technical tour of ada edmond schonberg mark gerhardt charlene hayden reliable, reusable ada components for constructing large, distributed multi- task networks: networks architecture services (nas) this paper will introduce the key concepts of trw's reusable message based design software (network architecture services- nas) which has proven to be key to the ccpds-r project's progress to date. the nas software and supporting tools have provided the ccpds-r project team with reliable, powerful building blocks that have been integrated into extensive demonstrations to validate the critical design approaches. the ccpds-r pdr demonstration consisted of 130 ada tasks interconnected via 450 different task to task interfaces, executing in a network of 3 vax nodes. the extensive reuse of nas software building blocks and ada generics resulted in the translation of 120,000 ada source lines into over 2 million lines of executable machine language instructions. the nas software (about 20,000 ada source lines) was conceived in a trw independent research and development project in 1985, and has since been refined and evolved into a truly reusable state. although nas reuse is limited currently to digital equipment corporation vax vms networks, efforts are underway to provide heterogeneous nas capabilities. the advantages of nas usage are twofold: value added operational software through reuse of mission independent, performance tunable components which support open architectures, and overall project productivity enhancement as a result of nas support for rapid prototyping, runtime instrumentation toolsuite, and encapsulation of the difficult capabilities required in any distributed real-time system into a standard set of building blocks with simple applications interfaces. this isolation of the "hard parts" into an easily used standard software chipset, results in a large net reduction in applications software complexity, less reliance on scarce real-time programming experts, and a substantial reduction in overall project risk. this paper describes the message based design techniques which drove us to the development of nas, the capabilities and components inherent in the nas product, and the ccpds-r experience in using nas in a stringent real time command and control environment. w. royce early specification of user-interfaces: toward a formal approach j.-p. jacquot d. 
quesnot eelru: simple and effective adaptive page replacement yannis smaragdakis scott kaplan paul wilson book review: linux secrets corporate linux journal staff preliminary experience with a configuration control system for modular programs this paper describes some preliminary experience gathered during the implementation and early use of a program composition and version control system. this system has been designed and implemented as a part of the adele research project, a programming environment for the production of modular programs (estublier 83). this project has four main components: a) a program editor, interpreter and debugger; b) a parameterized code generator; c) a user interface; d) a program base, the subject of this paper. the current version of this environment has been developed on a multics system. the program base, including the system composition and version control mechanisms, has been used for six months, notably for its own development and maintenance. j. estublier s. ghoul s. krakowiak increasing the use of apl it appears that apl is used by only 5 to 10 percent of the programming population that might use it. after 15 years of commercial availability, many apl proponents find this a disappointing percentage. this paper reviews the main factors that inhibit the use of apl. it describes the factors, suggests means to overcome those factors, and gives examples of various apl systems that address the limiting factors. john w. myrna the rendezvous is dead - long live the protected object dragan macos frank mueller performance measurement of a parallel input/output system for the intel ipsc/2 hypercube james c. french terrence w. pratt mriganka das generating local addresses and communication sets for data-parallel programs generating local addresses and communication sets is an important issue in distributed-memory implementations of data-parallel languages such as high performance fortran. we show that for an array a affinely aligned to a template that is distributed across p processors with a cyclic(k) distribution, and a computation involving the regular section a(l:h:s), the local memory access sequence for any processor is characterized by a finite state machine of at most k states. we present fast algorithms for computing the essential information about these state machines, and extend the framework to handle multidimensional arrays. we also show how to generate communication sets using the state machine approach. performance results show that this solution requires very little runtime overhead and acceptable preprocessing time. siddhartha chatterjee john r. gilbert fred j. e. long robert schreiber shang- hua teng letters to the editor corporate linux journal staff from the editor corporate linux journal staff object-oriented integration testing paul c. jorgensen carl erickson loop transformations for architectures with partitioned register banks embedded systems require maximum performance from a processor within significant constraints in power consumption and chip cost. using software pipelining, processors can often exploit considerable instruction-level parallelism (ilp), and thus significantly improve performance, at the cost of substantially increasing register requirements. these increasing register requirements, however, make it difficult to build a high-performance embedded processor with a single, multi-ported register file while maintaining clock speed and limiting power consumption. 
some digital signal processors, such as the ti c6x, reduce the number of ports required for a register bank by partitioning the register bank into multiple banks. disjoint subsets of functional units are directly connected to one of the partitioned register banks. each register bank and its associate functional units is called a _cluster_. clustering reduces the number of ports needed on a per-bank basis, allowing an increased clock rate. however, execution speed can be hampered because of the potential need to copy "non- local" operands among register banks in order to make them available to the functional unit performing an operation. the task of the compiler is to both maximize parallelism and minimize the number of remote register accesses needed. previous work has concentrated on methods to partition virtual registers amongst the target architecture's clusters. in this paper, we show how high- level loop transformations can enhance the partitioning obtained by low-level schemes. in our experiments, loop transformations improved software pipelining by 27% on a machine with 2 clusters, each having 1 floating-point and 1 integer register bank and 4 functional units. we also observed a 20% improvement on a similar machine with 4 clusters of 2 functional units. in fact, by performing the described loop transformations we were able to show improvements of greater than 10% over schedules (for un-transformed loops) generated with the unrealistic assumption of a single multi-ported register bank. xianglong huang steve carr philip sweany on the assessment of safety-critical software systems j.-c. laprie optimizing compilers are here (mostly) m jazayeri m haden beyond objects: a software design paradigm based on process control mary shaw support for heterogeneity in the global distributed operating system martin turnbull automated testing in apl - an application of exception handling several apl systems have recently been enhanced to include exception handling facilities. the construction of an automated tester is presented as a case study of one such definition: the latent expression exception handling system [1] available on stsc's apl*plus@@@@ system. the automated tester was developed as a tool to manage the systematic verification of apl system features through programmed tests. a test is an apl program containing assertions that prescribe the outcome of associated statements. the tester verifies that the asserted outcomes are in fact realized during execution. this paper describes the central role of exception handling in the implementation of the automated tester. when tests are executed as apl programs, the exceptions generated by the assertion construct can be used to control interpretation of the assertion. other errors encountered during test execution must be intercepted and recorded, so that it is possible to assert that a statement results in an error. finally, since the tester should execute large sets of tests unattended, it should not be interrupted by improper tests or unexpected results. jan f. prins expressing object-oriented concepts in fortran 90 fortran 90 is a modern, powerful language with features that support important new programming concepts, including those used in object-oriented programming. this paper briefly summarizes how to express the concepts of data encapsulation, function overloading, classes, objects, inheritance, and dynamic dispatching. viktor k. decyk charles d. norton boleslaw k. 
szymanski model checking without a model: an analysis of the heart-beat monitor of a telephone switch using verisoft patrice godefroid robert s. hanmer lalita jategaonkar jagadeesan session 7b: reliability and complexity measures ii p. de feo visualizing unix synchronization operations the synchronization of concurrent tasks is a fundamental topic in computer science education. graphical tools that visualize the effects of synchronization operations are helpful to understand their effects and pitfalls. _paco_ and _andi_ are one such tool. c programs using unix system calls for process and semaphore management are instrumented by paco with specific output functions. when executing these programs, andi visualizes the effects of the system calls by a dynamic graphical output. carsten vogt pascal implementation of a lisp interpreter m furnari compiler techniques for maximizing fine-grain and coarse-grain parallelism in loops with uniform dependences in this paper, an approach to the problem of exploiting parallelism within nested loops is proposed. the proposed method first finds out all the initially independent computations, and then, based on them, identifies the valid partitioning bases to partition the entire iteration space of the loop nest. because the shape of the iteration space is taken into account, pseudo- dependence relations are eliminated and hence more parallelism is exploited. our approach provides a systematic method to maximize the degree of fine- or coarse-grain parallelism and is free from the open question of how to combine different loop transformations for the goal of maximizing parallelism. it is also shown that our approach can exploit more parallelism than other related work and have many advantages over them. yeong-sheng chen sheng-de wang chien-min wang a first encounter with f90 michael metcalf interface usage measurements in a user interface management system user interface management systems have provided support for most user interface design personnel with the exception of dialogue evaluators. potential support that a uims can provide to evaluators of user interfaces generated by the uims are discussed. metrics for measuring interactive behavior have been implemented as part of a user interface management system. it is shown how the external control and separate dialogue description features of uimss make possible such measurements. these metrics are automatically mapped back to the responsible portions of the dialogue description. reports are generated which aid interface evaluators in locating specific problems in a user interface design. dan r. olsen bradley w. halversen workshop on object-oriented reengineering (woor'99) serge demeyer harald gall automatic compiler-inserted i/o prefetching for out-of-core applications todd c. mowry angela k. demke orran krieger reply to ehrad konrad: application of measurement theory to software metrics - comments on the bollmann-zuse approach horst zuse peter bollmann-sdorra configuring bash a quick introduction to the bash shell. david blackman operating systems specialization (invited talk) (abstract only): experiences, opportunities and challenges this talk will present a systems-centric view of the opportunities, effectiveness, and challenges of applying specialization techniques to operating systems. the first part of the talk will focus on our experiences using manual and then tool-assisted specialization in the systematic optimization of legacy operating systems software. 
these experiences were gained in the synthetix project which explored the use of static, dynamic and incremental optimistic specialization in a variety of operating system contexts, including file system calls in hp-ux, linux signal delivery, berkeley packet filter processing, and sun rpc. the second part of the talk will focus on the lessons learned in synthetix, the implications for systems, and the key opportunities and challenges for future research. jonathan walpole is a professor in the computer science and engineering department at oregon graduate institute. his research interests are in the area of adaptive systems software and its application in distributed, mobile, multimedia computing environments. his work has focused on quality of service specification, adaptive resource management and dynamic specialization for enhanced performance, survivability and evolvability of large software systems. jonathan walpole corrigendum: "distributed termination" nissim francez distributed ada execution: a definitional void richard a. volz take command: the chmod command corporate linux journal staff embedding continuations in procedural objects continuations, when available as first-class objects, provide a general control abstraction in programming languages. they liberate the programmer from specific control structures, increasing programming language extensibility. such continuations may be extended by embedding them in procedural objects. this technique is first used to restore a fluid environment when a continuation object is invoked. we then consider techniques for constraining the power of continuations in the interest of security and efficiency. domain mechanisms, which create dynamic barriers for enclosing control, are implemented using fluids. domains are then used to implement an unwind-protect facility in the presence of first-class continuations. finally, we present two mechanisms, wind-unwind and dynamic-wind, that generalize unwind-protect. christopher t. haynes daniel p. friedman embedded systems news rick lehrbaum experience with a modular typed language: protel the support for modular software and the ability to perform type checking across module boundaries are becoming the mainstay of recent high level language design. this is well illustrated by languages such as mesa and the us department of defence's new standard language ada. at bell-northern research, protel, one of the first modular typed languages, has been used since 1975 to implement a substantial software system. the experience accumulated in building this system has given us a unique perspective. it has shown that the confidence of language designers in modular typed languages is well founded. it has also revealed some pitfalls which others will undoubtedly encounter. the purpose of this paper is to share our experience by outlining the nature of the problems and our solutions to them. p. m. cashin m. l. joliat r. f. kamel d. m. lasker system object model (som) and ada: an example of corba at work ibm's system object model (som) provides a powerful toolset for building object-oriented applications in multiple languages on multiple platforms. it is fully compliant with the object management group's (omg) common object request broker architecture (corba). oc systems' powerada product allows ada95 programmers "first-class" access to som. this article describes the use of som classes to build a simple "groupware" application in ada95. g.
vincent castellano multiple view programming languages charles fiterman session 3b: distributed systems s. budkowski hierarchical modularity to cope with the complexity of very large systems, it is not sufficient to divide them into simple pieces because the pieces themselves will either be too numerous or too large. a hierarchical modular structure is the natural solution. in this article we explain how that approach can be applied to software. our compilation manager provides a language for specifying where individual modules fit into a hierarchy and how they are related semantically. we pay particular attention to the structure of the global name space of program identifiers that are used for module linkage because any potential for name clashes between otherwise unrelated parts of a program can negatively affect modularity. we discuss the theoretical issues in building software hierarchically, and we describe our implementation of cm, the compilation manager for standard ml of new jersey. matthias blume andrew w. appel safe: a semantic technique for transforming programs in the presence of errors language designers and implementors have avoided specifying and preserving the meaning of programs that produce errors. this is apparently because being forced to preserve error behavior limits severely the scope of program optimization, even for correct programs. however, error behavior preservation is desirable for debugging, and error behavior must be preserved in any language that permits user-generated errors (i.e., exceptions). this article presents a technique for expressing general program transformations for languages that possess a rich collection of distinguishable error values. this is accomplished by defining a higher-order function called safe, which can be used to annotate those portions of a program that are guaranteed not to produce errors. it is shown that this facilitates the expression of very general program transformations, effectively giving program transformations in a language with many error values the same power and generality as program transformations in a language with only a single error value. using the semantic properties of safe, it is possible to provide some useful sufficient conditions for establishing the correctness of transformations in the presence of errors. in particular, a substitutability theorem is proven, which can be used to justify "in-context" optimizations: transformations that alter the meanings of subexpressions without changing the meaning of the whole program. finally, the effectiveness of the technique is demonstrated by some examples of its use in an optimizing compiler. alexander aiken john h. williams edward l. wimmers ooe: a compound document framework björn e. backlund analysis of file i/o traces in commercial computing environments improving the performance of the file system is becoming increasingly important to alleviate the effect of i/o bottlenecks in computer systems. to design changes to an existing file system or to architect a new file system it is important to understand current usage patterns. in this paper we analyze file i/o traces of several existing production computer sytems to understand file access behavior. our analysis suggests that a relatively small percentage of the files are active. the amount of total data active is also quite small for interactive environments. an average file encounters a relatively small number of file opens while receiving an order of magnitude larger number of reads to it. 
an average process opens quite a large number of files over a typical prime time period. what is more significant is that the effect of outliers on many of the characteristics we studied is dominant. a relatively small number of processes dominate the activity, and a very small number of files receive most of these operations. in addition, we provide a comprehensive analysis of the dynamic sharing of files in each of these enviroments, addressing both the simultaneous and sequential sharing aspects, and the activity to these shared files. we observe that although only a third of the active files are sequentially shared, they receive a very large proportion of the total operations. we analyze the traces from a given environment across different lengths of time, such as one hour, three hour and whole work-day intervals and do this for 3 different environments. this gives us an idea of the shortest length of the trace needed to have confidence in the estimation of the parameters. k. k. ramakrishnan prabuddha biswas ramakrishna karedla an object-oriented ll(1) parser generator bernd kuhl axel-toblas schreiner measurements of migratability and transportability s skelton the spineless g-machine recent developments in functional language implementations have resulted in the g-machine, a programmed graph-reduction machine. taking this as a basis, we introduce an optimised method of performing graph reduction, which does not need to build the spine of the expression being reduced. this spineless g-machine only updates shared expressions, and then only when they have been reduced to weak head normal form. it is thus more efficient than the standard method of performing graph reduction. we begin by outlining the philosophy and key features of the spineless g-machine, and comparing it with the standard g-machine. simulation results for the two machines are then presented and discussed. the spineless g-machine is also compared with tim, giving a series of transformations by which they can be interconverted. these open up a wide design space for abstract graph reduction machines, which was previously unknown. a full specification of the spineless machine is given in the appendix, together with compilation rules for a simple functional language. g. l. burn s. l. peyton jones j. d. robson simplifying and improving qualified types mark p. jones an implementation of the hamlyn sender-managed interface architecture greg buzzard david jacobson milon mackey scott marovich john wilkes the effect of interrupts on software pipeline execution on message-passing architectures rob f. van der wijngaart sekhar r. sarukkai pankaj mehra extensibility safety and performance in the spin operating system b. n. bershad s. savage p. pardyak e. g. sirer m. e. fiuczynski d. becker c. chambers s. eggers the relationship between software development environments and the software process the program committee originally planned for this panel session (and this introduction) to provide a summary of the 4th international workshop on the software process. i have taken the liberty of covering a slightly more general view of the relationship between software development environments and the software process. david notkin a new notation for arrows the categorical notion of monad, used by moggi to structure denotational descriptions, has proved to be a powerful tool for structuring combinator libraries. 
moreover, the monadic programming style provides a convenient syntax for many kinds of computation, so that each library defines a new sublanguage. recently, several workers have proposed a generalization of monads, called variously "arrows" or freyd-categories. the extra generality promises to increase the power, expressiveness and efficiency of the embedded approach, but does not mesh as well with the native abstraction and application. definitions are typically given in a point-free style, which is useful for proving general properties, but can be awkward for programming specific instances. in this paper we define a simple extension to the functional language haskell that makes these new notions of computation more convenient to use. our language is similar to the monadic style, and has similar reasoning properties. moreover, it is extensible, in the sense that new combining forms can be defined as expressions in the host language. ross paterson surveyor's forum: retargetable code generators m. ganapathi j. hennessy and c. n. fischer fad, a functional programming language that supports abstract data types the paper outlines the programming language fad. fad is a functional programming system of the kind described by backus (backus78]. fad supports abstract data types, parameterized types, and generic functions. a single scope rule establishes the encapsulation requirements for data type specification and program structuring. certain syntactic additions improve program readability as compared to pure functional notation. johannes j. martin objective view point: object-orientation and c++ (part i of ii) g. bowden wise software evolution through iterative prototyping neil goldman k. narayanaswamy generational reference counting: a reduced-communication distributed storage reclamation scheme this paper describes generational reference counting, a new distributed storage reclamation scheme for loosely-coupled multiprocessors. it has a significantly lower communication overhead than distributed versions of conventional reference counting. although generational reference counting has greater computational and space requirements than ordinary reference counting, it may provide a significant saving in overall execution time on machines in which message passing is expensive. the communication overhead for generational reference counting is one message for each copy of an interprocessor reference (pointer). unlike conventional reference counting, when a reference to an object is copied no message is sent to the processor on which the object lies. a message is sent only when a reference is discarded. unfortunately, generational reference counting shares conventional reference counting's inability to reclaim cyclical structures. in this paper, we present the generational reference counting algorithm, prove it correct, and discuss some refinements that make it more efficient. we also compare it with weighted reference counting, another distributed reference counting scheme described in the literature. b. goldberg call forwarding: a simple interprocedural optimization technique for dynamically typed languages this paper discusses call forwarding, a simple interprocedural optimization technique for dynamically typed languages. the basic idea behind the optimization is straightforward: find an ordering for the "entry actions" of a procedure, and generate multiple entry points for the procedure, so as to maximize the savings realized from different call sites bypassing different sets of entry actions. 
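(a minimal sketch, not the authors' implementation, of the call-forwarding idea just described: the callee's entry actions — here, run-time tag checks on the arguments of a dynamically typed add — are given a fixed order and extra entry points are generated so that a call site which has already established part of that checking can enter past it; the tagged value type and all names are invented for the illustration.)

```cpp
// sketch only: "call forwarding" as multiple entry points into one procedure,
// each skipping the entry actions a particular call site already guarantees.
#include <cassert>
#include <cstdio>

struct Value { bool is_int; long i; };           // hypothetical tagged value

long add_body(long a, long b) { return a + b; }  // body shared by all entries

// full entry point: performs every entry action (both tag checks).
long add_generic(const Value& a, const Value& b) {
    assert(a.is_int);                            // entry action 1
    assert(b.is_int);                            // entry action 2
    return add_body(a.i, b.i);
}

// forwarded entry point: call sites that already know `a` is an integer
// bypass entry action 1 and enter here, so only action 2 remains.
long add_a_checked(long a, const Value& b) {
    assert(b.is_int);
    return add_body(a, b.i);
}

int main() {
    Value x{true, 2}, y{true, 3};
    std::printf("%ld\n", add_generic(x, y));     // unknown argument types
    std::printf("%ld\n", add_a_checked(40, y));  // caller proved x is an int
}
```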
we show that the problem of computing optimal solutions to arbitrary call forwarding problems is np-complete, and describe an efficient greedy algorithm for the problem. experimental results indicate that (i) this algorithm is effective, in that the solutions produced are generally close to optimal; and (ii) the resulting optimization leads to significant performance improvements for a number of benchmarks tested. koen de bosschere saumya debray david gudeman sampath kannan a note on before and after metamethods chris zimmermann pretenuring for java pretenuring can reduce copying costs in garbage collectors by allocating long-lived objects into regions that the garbage collector will rarely, if ever, collect. we extend previous work on pretenuring as follows. (1) we produce pretenuring advice that is neutral with respect to the garbage collector algorithm and configuration. we thus can and do combine advice from different applications. we find that predictions using object lifetimes at each allocation site in java programs are accurate, which simplifies the pretenuring implementation. (2) we gather and apply advice to applications and the jalapeño jvm, a compiler and run-time system for java written in java. our results demonstrate that building combined advice into jalapeño from different application executions improves performance regardless of the application jalapeño is compiling and executing. this _build-time_ advice thus gives user applications some benefits of pretenuring without any application profiling. no previous work pretenures in the run-time system. (3) we find that application-only advice also improves performance, but that the combination of build-time and application-specific advice is almost always noticeably better. (4) our same advice improves the performance of generational and older-first collection, illustrating that it is _collector neutral_. stephen m. blackburn sharad singhai matthew hertz kathryn s. mckinley j. eliot b. moss designing and implementing choices: an object-oriented system in c++ roy h. campbell nayeem islam david raila peter madany limits to software estimation algorithmic (kcs) complexity results can be interpreted as indicating some limits to software estimation. while these limits are abstract they nevertheless contradict enthusiastic claims occasionally made by commercial software estimation advocates. specifically, if it is accepted that algorithmic complexity is an appropriate definition of the complexity of a programming project, then claims of purely objective estimation of project complexity, development time, and programmer productivity are necessarily incorrect. take command grep: searching for words: a command to help you find a specific word or a sentence in a file jan rooijackers debugging and the experience of immediacy david ungar henry lieberman christopher fry the predictability of branches in libraries brad calder dirk grunwald amitabh srivastava fweb: a literate programming system for fortran8x a. avenarius s. oppermann some issues and strategies in heap management and memory hierarchies paul r. wilson an exception handling model for parallel programming and its verification valerie issarny a functional macro expansion system for optimizing code generation: gaining context-sensitivity without losing confluence (poster) eero lassila translating algol 60 programs into ada r. d. huijsman j. van katwijk c. pronk w. j.
toetenel examination of a memory access classification scheme for pointer-intensive and numeric programs luddy harrison architecture recovery in ares harald gall mehdi jazayeri rene klosch wolfgang lugmayr georg trausmuth synchronous operations as first-class values synchronous message passing via channels is an interprocess communication (ipc) mechanism found in several concurrent languages, such as csp, occam, and amber. such languages provide a powerful selective i/o operation, which plays a vital role in managing communication with multiple processes. because the channel ipc mechanism is "operation-oriented," only procedural abstraction techniques can be used in structuring the communication/synchronization aspects of a system. this has the unfortunate effect of restricting the use of selective i/o, which in turn limits the communication structure. we propose a new, "value- oriented" approach to channel-based synchronization. we make synchronous operations first-class values, called events, in much the same way that functions are first-class values in functional programming languages. our approach allows the use of data abstraction techniques for structuring ipc. we have incorporated events into pml, a concurrent functional programming language, and have implemented run-time support for them as part of the pegasus system. j. h. reppy the mjøner beta case tool elmer sandvad an empirical evaluation of three methods for deadlock analysis of ada tasking programs static analysis of ada tasking programs has been hindered by the well known state explosion problem that arises in the verification of concurrent systems. many different techniques have been proposed to combat this state explosion. all proposed methods excel on certain kinds of systems, but there is little empirical data comparing the performance of the methods. in this paper, we select one representative from each of three very different approaches to the state explosion problem: partial-orders (representing state- space reductions), symbolic model checking (representing obdd-based approaches), and inequality necessary conditions (representing integer programming-based approaches). we apply the methods to several scalable concurrency examples from the literature and to one real ada tasking program. the results of these experiments are presented and their significance is discussed. james c. corbett object-oriented parallel computation for plasma simulation object- oriented techniques promise to improve the software design and programming process by providing an application-oriented view of programming while facilitating modification and reuse. since the software design crisis is particularly acute in parallel computation, these techniques have stirred the interest of the scientific parallel computing community. large-scale applications of ever-growing complexity, particularly in the physical sciences and engineering, require parallel processing for efficiency. since its introduction in the 1970s, fortran 77 has been the language of choice to model these problems, due to its efficiency, its numerical stability, and the body of existing fortran codes. however, the introduction of object-oriented languages provides new alternatives for parallel software development. fortran 90 adds modern extensions (including object-oriented concepts) to the established methods of fortran 77. alternatively, object-oriented methodologies can be explored through languages such as c++, eiffel, smalltalk, and many others. 
our selection among these required a language that was widespread and supported across multiple platforms (particularly supercomputers) with strong compiler optimizations. c++, while not a "pure" object-oriented language, was our choice, since it meets these criteria. charles d. norton boleslaw k. szymanski viktor k. decyk interprocedural optimization: eliminating unnecessary recompilation while efficient new algorithms for interprocedural data flow analysis have made these techniques practical for use in production compilation systems, a new problem has arisen: collecting and using interprocedural information in a compiler introduces subtle dependences among the procedures of a program. if the compiler depends on interprocedural information to optimize a given module, a subsequent editing change to another module in the program may change the interprocedural information and necessitate recompilation. to avoid having to recompile every module in a program in response to a single editing change to one module, we must develop techniques to more precisely determine which compilations have actually been invalidated by a change to the program's source. this paper presents a general recompilation test to determine which procedures must be compiled in response to a series of editing changes. three different implementation strategies, which demonstrate the fundamental tradeoff between the cost of analysis and the precision of the resulting test, are also discussed. keith d. cooper ken kennedy linda lorczon understanding red hat run levels how to easily add to or modify the existing subsystems of red hat distributions of linux mark f. komarinski typed common intermediate format zhong shao object oriented programming, tutorial a classical procedural program (written in cobol, fortran, basic, pascal, lisp or apl2) is made of sentences that execute sequentially in a predefined order, that depends only on the values of the data the program is working with. this order can usually be deduced by visual inspection of the program. a non- procedural program (written in prolog, for instance) contains a certain number of instructions that will not be executed in a predefined order. they receive control from an inference processor, a procedural program that decides in every moment the order in which the sentences of the program should receive control (should be fired). in both the procedural and the non-procedural cases, the basic unit of execution is the program. the data only provide values that will be used to perform computations or to decide the order of execution. a given application is a hierarchical set of programs (modules) each of which is capable of invoking other programs in the hierarchy. the data may be global (accessible from every program in the hierarchy) or local (accessible by the program where they belong and, sometimes, by those at a lower level in the hierarchy). in object-oriented programming (oop in short), things are different. here it is the data that are organized in a basic control hierarchy. one piece of data may be linked to another through a relation of descendancy, and this fact gives rise to a network (usually a tree) similar to the hierarchy of programs in procedural programming. there are also programs in oop, but they are appendages to the data (in the same way as in classical programming data are appendages of programs). 
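(an aside not present in the original tutorial: a minimal c++ rendering of the point just made — the programs, called methods, hang off the data, descendants in the hierarchy inherit or replace them, and invoking one amounts to sending a message whose recipient decides which method actually runs; the class and method names are invented.)

```cpp
// sketch only: methods as "appendages" of the data; a descendant in the
// object hierarchy inherits or overrides them.
#include <cstdio>

class Shape {                      // a piece of data ...
public:
    virtual ~Shape() = default;
    virtual double area() const { return 0.0; }  // ... with a program attached
};

class Square : public Shape {      // a descendant of shape in the hierarchy
public:
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }  // replaces the method
private:
    double side;
};

int main() {
    Square sq(3.0);
    Shape& s = sq;
    // "sending the message" area to the object: the recipient decides which
    // method runs (here, Square::area rather than Shape::area).
    std::printf("%f\n", s.area());
}
```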
it is possible to build global programs (accessible to all the data in the hierarchy) and local programs (accessible from certain objects and their descendants). in oop, the execution of a program is fired by means of a message that somebody (the user, another program or an object) sends to a given object. the recipient of the message decides which program should be executed (it may be a local program, or a global program which must be located through the network that defines the structure of the objects). there is a certain amount of confusion on what is object-oriented programming and what is not. this has happened before, with other fields in computer science. there are still people, for instance, that call artificial intelligence to any program that is written in prolog or lisp. in the same way, there are those who maintain that any program written in smalltalk, c++ or objective c is oop. as in the ai example, this is not always the case. another source of confusion comes from the fact that object-oriented programming has been frequently used to build complicated user interfaces, with window systems, icons and so forth, and this has produced the unexpected results that many people believe that any program including these interfaces is oop. again, this is obviously wrong. the basic elements of oop are objects, methods and messages. an object is a complex data element that possesses structure and is a part of an organization. it embodies three different concepts: a set of relations to other objects (usually represented by pointers). a set of properties (which have values). a set of methods (defined by means of executable code). a method is a procedural program written in any language. what other programming systems call functions, programs or procedures, object-oriented programming calls methods. there is no essential difference between programs in any language and methods, except for the fact that in pure oop the call instruction is not allowed. the reason for this is obvious: the call instruction is manuel alfonseca product review: spellcaster datacommute/bri isdn adaptor jay painter microdos: an experimental single-user operating system this paper describes an experimental implementation of a single-user operating system called microdos. it has adopted a layered software system model (hierarchical nature) and is written almost entirely in the high level language bcpl. the main feature of microdos is that it has avoided or simplified many of the major problems that are normally associated with large multi access operating systems. k. ranai new software engineering programs - worldwide (panel session) as the growth in value and importance of the software component in virtually all industrial products accelerates, the efficiency with which software is constructed, and its quality, become vital for the entire industry --- high technology or not. in response, new concentrated efforts have been recently created in europe, asia, north and south america, established and owned by private enterprise or government. it seems that these institutes, or programs within new institutes, place themselves between academic research and advanced development, and plan significant experimental work. the panel will examine the strategies and planned approaches of eight such programs. brief descriptions of some of them follow. laszlo a. belady nico haberman lim swee say john elmore w. m. murray r. w. witty k. 
kishida fuad gattaz sobrinho a new program structure to improve accuracy and readability of pascal software based on an analysis of errors in a piece of pascal software a new language feature is introduced to increase the degree of compile time checking of program logic and thus improve the confidence of the programmer in the correctness of a program. specifically this involves a form of abstract data type, augmented by restrictions on the use of operations provided with the type, and a means of allowing the programmer to bring logically related segments of program together textually. w. j. rogers stop the presses phil hughes announcement of the idl toolkit richard snodgrass a brief introduction to c++ jerry schwarz incremental software test approach for dod-std-2167a ada projects one of the key challenges for large ada software projects is to define and execute an efficient, cost effective, bounded test program that results in a quality product that meets all customer software requirements. this is particularly challenging in the current climate of government fixed price contracts. the command center processing and display system replacement (ccpds-r) project is being developed entirely in ada by trw for the u.s. air force on a fixed price basis. to mitigate downstream test risks, trw has defined an incremental test approach that satisfies trw and government objectives for informal development/integration testing and for formal requirements verification. features of ada are employed to create a software architecture that supports an incremental test philosophy and contributes to reduced integration effort and risk. the resulting test approach conforms to dod-std-2167a standards. m. springman quicksilver: a quasi-static compiler for java mauricio serrano rajesh bordawekar sam midkiff manish gupta a framework for call graph construction algorithms a large number of call graph construction algorithms for object-oriented and functional languages have been proposed, each embodying different tradeoffs between analysis cost and call graph precision. in this article we present a unifying framework for understanding call graph construction algorithms and an empirical comparison of a representative set of algorithms. we first present a general parameterized algorithm that encompasses many well-known and novel call graph construction algorithms. we have implemented this general algorithm in the vortex compiler infrastructure, a mature, multilanguage, optimizing compiler. the vortex implementation provides a "level playing field" for meaningful cross-algorithm performance comparisons. the costs and benefits of a number of call graph construction algorithms are empirically assessed by applying their vortex implementation to a suite of sizeable (5,000 to 50,000 lines of code) cecil and java programs. for many of these applications, interprocedural analysis enabled substantial speed-ups over an already highly optimized baseline. furthermore, a significant fraction of these speed-ups can be obtained through the use of a scalable, near-linear time call graph construction algorithm. 
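(a minimal sketch, not the paper's parameterized algorithm, of one simple point in the design space it studies — a class-hierarchy-analysis style resolution that, for a virtual call through a receiver of static type c, records call edges to every implementation of the method defined in c or in its subclasses; the class hierarchy and names below are invented.)

```cpp
// sketch only: class-hierarchy-analysis (cha) style call graph edges for a
// virtual call site "receiver.area()" with static receiver type Shape.
#include <cstdio>
#include <map>
#include <set>
#include <string>
#include <vector>

using Class = std::string;
using Method = std::string;

std::map<Class, std::vector<Class>> subclasses = {
    {"Shape", {"Circle", "Square"}}, {"Circle", {}}, {"Square", {}}};
std::map<Class, std::set<Method>> defines = {
    {"Shape", {"area"}}, {"Circle", {"area"}}, {"Square", {}}};

// collect every class whose definition of m could be the one invoked.
void cha_targets(const Class& c, const Method& m, std::set<Class>& out) {
    if (defines[c].count(m)) out.insert(c);   // c defines m itself
    for (const Class& sub : subclasses[c])    // and any subclass may override it
        cha_targets(sub, m, out);
}

int main() {
    std::set<Class> targets;
    cha_targets("Shape", "area", targets);    // yields Circle::area and Shape::area
    for (const Class& c : targets)
        std::printf("call edge -> %s::area\n", c.c_str());
}
```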
david grove craig chambers dyc (panel session) craig chambers the language of the year 2000: will it really be fortran charles koelbel increasing efficiency of symbolic model checking by accelerating dynamic variable reordering christoph meinel christian stangier a comparison of the model-based & algebraic styles of specification as a basis for test specification richard denney fips: a functional-imperative language for explorative programming jacek passia klaus-peter löhr literate programming christopher j. van wyk requirements for an effective architecture recovery framework nabor c. mendonça jeff kramer large, multimedia programming - concepts and challenges simon gibbs christian breiteneder program analysis for software engineering: new applications, new requirements, new tools daniel le metayer piwg measurement methodology daniel roy analyzing effects of cost estimation accuracy on quality and productivity osamu mizuno tohru kikuno katsumi inagaki yasunari takagi keishi sakamoto take command: wc alexandre valente sousa an introduction for the tri-ada session on reusability r. j. knapper ampl: a modified programming language fred a. masterson cml: a higher-order concurrent language john h. reppy letters to the editor corporate linux journal staff error messages: the neglected area of the man/machine interface the quality of error messages produced by software used in the field was tested by a simple experiment; it was found to be far from adequate. the results of the experiment are analyzed, and some responses which tend to corroborate the original findings are discussed. finally, some suggestions are made for improving the quality of error messages. p. j. brown automating parallel runtime optimizations using post-mortem analysis sanjeev krishnan laxmikant v. kale dynamic coordination architecture through the use of reflection carlos e. cuesta pablo de la fuente manuel barrio-solárzano take command good ol' sed: a nice little command to help you modify files hans de vreught middleware philip a. bernstein the comfy 6502 compiler henry g. baker rt pc distributed services overview charles h. sauer don w. johnson larry k. loucks amal a. shaheen-gouda todd a. smith melding transactions and objects steven s. popovich gail e. kaiser shyhtsun f. wu using the re-export paradigm to build composable ada software components bryce m. bardin christopher j. thompson generic packages in c the structuring achieved by generic packages in ada can be cheaply emulated in c by judicious use of the preprocessor. two files are required for the generic package: the specification and the body. two more files are used in the instantiation: one holding the instantiation parameters and one with auxiliary code. instantiation results in normal c header and object files (*.h and *.o). dependency control can be delegated to the make program. dick grune more ambiguities and insecurities in modula-2 m a torbett the detection of dangling references in c++ programs the smart pointer is a programming technique for the c++ language that extends the functionality of the simple pointer. smart pointers have previously been used to support persistence, distributed objects, reference counting, and garbage collection. this article will show how smart pointers can provide an inexpensive method for detecting dangling pointers to dynamic objects that can be added to any standard c++ implementation. richard a. eyre-todd implementation of task types in distributed ada p. krishnan r. a. volz r. j.
theriault managing variability in software architectures this paper presents experience with explicitly managing variability within a software architecture. software architects normally plan for change and put mechanisms in the architecture to support those changes. understanding the situations where change has been planned for and recording the options possible within particular situations is usually not done explicitly. this becomes important if the architecture is used for many product versions over a long period or in a product line context where the architecture is used to build a variety of different products. that is, it is important to explicitly represent variation and indicate within the architecture locations for which change has been allowed. we will describe how the management of variations in an architecture can be made more explicit and how the use of variation points connected to the choices a customer has when ordering a product can help to navigate to the appropriate places in the architecture. felix bachmann len bass single versus multiple inheritance in object oriented programming ghan bir singh software sizing problems in software engineering metrics effort estimation models such as cocomo (constructive cost model) or copmo (cooperative programming model) typically use kdsi (thousands of delivered source instructions) as the primary input variable to determine person-months and development time for software development projects. such models have the general form [boe81], [con86]: e = @@@@ where n = number of modules e = effort, in man-months (mm) s = software sizing estimate (kdsi) a,b = development mode constants c1 = complexity multipliers based upon project conditions a great deal of research has been conducted on effort model validation using databases of actual performance for a variety of projects, but little research has been reported on improving the quality of the original sizing estimates. pert-type loc (lines of code) sizing estimates have been suggested which utilize the activity-time mean and variance estimation equations used in pert networks to estimate the mean and variance of module size for a sub-divided software product [boe81]. these equations have been used in project management for almost thirty years, but were the subject of a great deal of criticism regarding their theoretical foundation, especially regarding the assumption of such a narrowly constrained beta distribution [cla62], [gru62], [mac64]. research has shown that more reliable estimates of the moments of pert-type distributions, especially of the variance, may be obtained by using percentile estimates rather than "end-point" estimates for the optimistic and pessimistic values. fifth and ninety-fifth percentile estimates lead to moment calculations which are robust to variations in the shape of the distribution [mod68]. it has also been found that these percentile estimates correspond more closely to the actual estimates obtained when asking for end points. 
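(for concreteness — a small worked example, with hypothetical numbers rather than figures from the paper, of the percentile-based pert sizing equations quoted in the next sentence: take a 5th-percentile estimate of 8 kdsi, a most likely size of 10 kdsi, and a 95th-percentile estimate of 14 kdsi for one module.)

```latex
% worked example with hypothetical numbers: mean and variance of one module's
% size estimate under the percentile-based PERT equations stated below.
\[
  s_i = \frac{a'_i + 4m_i + b'_i}{6} = \frac{8 + 4(10) + 14}{6} \approx 10.33\ \text{KDSI},
  \qquad
  \sigma_i^2 = \frac{(b'_i - a'_i)^2}{10.2} = \frac{(14 - 8)^2}{10.2} \approx 3.53 .
\]
```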
variations to the pert estimation equations which use these percentile estimates for module size instead of the standard end-point estimates are recommended: s_i = (a′_i + 4m_i + b′_i)/6 and σ_i^2 = (b′_i - a′_i)^2/10.2, where s_i = mean module size in kdsi, σ_i^2 = variance of module size, a′_i = 5th-percentile module size, m_i = most likely module size, and b′_i = 95th-percentile module size. overall project size and variance would follow the typical pert approach by summing the module mean and variance over the entire software product: s = Σ s_i and v = Σ σ_i^2, with the sums taken over the n modules. the project standard deviation could then be used to compute an upper confidence interval of s for model input to determine the desired level of confidence in the forecasted effort. an example of software sizing in the cocomo environment is demonstrated using the revised pert equations. effort forecasts calculated with these estimates are compared with the results using standard pert sizing. recommendations for additional research in improving effort forecasting are also presented. edward g. rodgers book review: programming perl phil hughes programming languages, oop, and c++ roger e. lessman adding generic functions to scheme thant tessman stop the presses: what price high-performance i/o? phil hughes a framework for efficient reuse of binary code in java this paper presents a compilation framework that enables efficient sharing of executable code across distinct java virtual machine (jvm) instances. high-performance jvms rely on run-time compilation, since static compilation cannot handle many dynamic features of java. these jvms suffer from large memory footprints and high startup costs, which are serious problems for embedded devices (such as hand-held personal digital assistants and cellular phones) and scalable servers. a recently proposed approach called quasi-static compilation overcomes these difficulties by reusing precompiled binary images after performing validation checks and stitching on them (i.e., adapting them to a new execution context), falling back to interpretation or dynamic compilation whenever necessary. however, the requirement in our previous design to duplicate and modify the executable binary image for stitching is a major drawback when targeting embedded systems and scalable servers. in this paper, we describe a new approach that allows stitching to be done on an indirection table, leaving the executable code unmodified and therefore writable to read-only memory. on embedded devices, this saves precious space in writable memory. on scalable servers, this allows a single image of the executable to be shared among multiple jvms, thus improving scalability. furthermore, we describe a novel technique for dynamically linking classes that uses _traps_ to detect when a class should be linked and initialized. like back-patching, the technique allows all accesses after the first to proceed at full speed, but unlike back-patching, it avoids the modification of running code. we have implemented this approach in the quicksilver quasi-static compiler for the jalapeño jvm. on the specjvm98 benchmark suite, our approach obtains code space savings, in writable memory, of between 82% and 89% of that from our previous quasi-static compiler, while delivering performance that is typically within 1% to 7% of that approach, and which is competitive with the performance of the jalapeño adaptive optimization system. pramod g. joisha samuel p. midkiff mauricio j.
serrano manish gupta the new korn shell ksh93, the latest major revision of the korn shell language, provides an alternative to tcl and perl david g. korn charles northrup jeffrey korn detecting defects in object-oriented designs: using reading techniques to increase software quality inspections can be used to identify defects in software artifacts. in this way, inspection methods help to improve software quality, especially when used early in software development. inspections of software design may be especially crucial since design defects (problems of correctness and completeness with respect to the requirements, internal consistency, or other quality attributes) can directly affect the quality of, and effort required for, the implementation. we have created a set of "reading techniques" (so called because they help a reviewer to "read" a design artifact for the purpose of finding relevant information) that gives specific and practical guidance for identifying defects in object-oriented designs. each reading technique in the family focuses the reviewer on some aspect of the design, with the goal that an inspection team applying the entire family should achieve a high degree of coverage of the design defects. in this paper, we present an overview of this new set of reading techniques. we discuss how some elements of these techniques are based on empirical results concerning an analogous set of reading techniques that supports defect detection in requirements documents. we present an initial empirical study that was run to assess the feasibility of these new techniques, and discuss the changes made to the latest version of the techniques based on the results of this study. guilherme travassos forrest shull michael fredericks victor r. basili efficient and safe-for-space closure conversion modern compilers often implement function calls (or returns) in two steps: first, a "closure" environment is properly installed to provide access for free variables in the target program fragment; second, the control is transferred to the target by a "jump with arguments (for results)." closure conversion---which decides where and how to represent closures at runtime---is a crucial step in the compilation of functional languages. this paper presents a new algorithm that exploits the use of compile-time control and data-flow information to optimize function calls. by extensive closure sharing and allocation, the new algorithm reduces heap allocation by 36% and memory fetches for local and global variables by 43%; and improves the already efficient code generated by an earlier version of the standard ml of new jersey compiler by about 17% on a decstation 5000. moreover, unlike most other approaches, our new closure-allocation scheme satisfies the strong safe-for-space-complexity rule, thus achieving good asymptotic space usage. zhong shao andrew w. appel a generative approach to universal cross assembler design p. p. k. chiu s. t. k. fu continuations and coroutines the power of first class continuations is demonstrated by implementing a variety of coroutine mechanisms using only continuations and functional abstraction. the importance of general abstraction mechanisms such as continuations is discussed. christopher t. haynes daniel p. friedman mitchell wand exploiting an event-based infrastructure to develop complex distributed systems g. cugola e. di nitto a. fuggetta promises: limited specifications for analysis and manipulation edwin c. chan john t. boyland william l. scherlis a newcomer's impressions of scheme gregory v.
wilson linux meta-faq corporate linux journal staff building reliable, high-performance communication systems from components although building systems from components has attractions, this approach also has problems. can we be sure that a certain configuration of components is correct? can it perform as well as a monolithic system? our paper answers these questions for the ensemble communication architecture by showing how, with help of the nuprl formal system, configurations may be checked against specifications, and how optimized code can be synthesized from these configurations. the performance results show that we can substantially reduce end-to-end latency in the already optimized ensemble system. finally, we discuss whether the techniques we used are general enough for systems other than communication systems. xiaoming liu christoph kreitz robbert van renesse jason hickey mark hayden kenneth birman robert constable the function-class the 'function-class' concept is introduced and explained in this paper. the 'function-class' specifies the function's input procedure, input requirement, output requirement and output procedure. it includes the argument evaluation process in its input procedure part, and the output value type in its output requirement part. but it has other components which can describe the function's behavior in more detail. the 'function-class' can be represented in a declarative form, i.e. in a calculus form using *, +, and operators. the function can also be defined in a declarative form. this format increases the readability of the program. the feasibility of the 'function class' is considered in the context of its implementation in lisp- like environments. the 'function-class' idea is not restricted to lisp language, although it has been originally developed in the lisp-1.9 context. there are several utilizations including program checking and optimized compilation. toshiaki kurokawa a validation of software metrics using many metrics and two resources in this paper are presented the results of a study in which several production software systems are analyzed using ten software metrics. the ten metrics include both measures of code details, measures of structure, and combinations of these two. historical data recording the number of errors and the coding time of each component are used as objective measures of resource expenditure of each component. the metrics are validated by showing: (1) the metrics singly and in combination are useful indicators of those components which require the most resources, (2) clear patterns between the metrics and the resources expended are visible when both resources are accounted for, (3) measures of structure are as valuable in examining software systems as measures of code details, and (4) the choice of which, or how many, software metrics to employ in practice is suggested by measures of "yield" and "coverage". dennis kafura james canning sleeping with the enemy: lisp and c in a large, profitable real-time application john r. hodgkinson stop the presses: usenix/uselinux in anaheim phil hughes the rendezvous and monitor concepts: is there an efficiency difference? the efficiency of ada's rendezvous concept is compared with concurrent pascal's monitor concept. the differences between the two approaches, as well as a number of issues relating to their implementation, are presented. the results indicate that a concurrent programming language should provide both types of concepts. w. eventoff d. harvey r. j. 
price edicates - a specification of calling sequences mark caslunan change/configuration management gathering j. young object-oriented logical specification of time-critical systems we define trio+, an object-oriented logical language for modular system specification. trio+ is based on trio, a first-order temporal language that is well suited to the specification of embedded and real-time systems, and that provides an effective support to a variety of validation activities, like specification testing, simulation, and property proof. unfortunately, trio lacks the ability to construct specifications of complex systems in a systematic and modular way. trio+ combines the use of constructs for hierarchical system decomposition and object-oriented concepts like inheritance and genericity with an expressive and intuitive graphic notation, yielding a specification language that is formal and rigorous, yet still flexible, readable, general, and easily adaptable to the user's needs. after introducing and motivating the main features of the language, we illustrate its application to a nontrivial case study extracted from a real-life industrial application. angelo morzenti pierluigi san pietro multibox parsers lev j. dyadkin insights into postscript paul snow shape in computing c. barry jay an operating system for real-time ada many applications of ada are being designed for multiple program environments, distributed among one or more loosely- or tightly-coupled embedded processors, with hard real-time requirements for interrupt responsiveness to real-world i/o devices. a special-purpose operating system has been designed and implemented to support multiple communicating ada and foreign language programs on multiple embedded processors. in this operating system, the ada task is the basic unit of process concurrency, and the inter-task, inter- program and inter-processor communication functions are provided not only by ada rendezvous, but also by packages whose bodies are directly integrated with the ada tasking scheduler. the operating system also provides the capability of servicing hardware interrupts in ada with very low latency and high throughput. h. rabbie d. nelson-gal a subjective view of lisp christian queinnec content of fortran 2000 john reid the duality of memory and communication in the implementation of a multiprocessor operating system mach is a multiprocessor operating system being implemented at carnegie-mellon university. an important component of the mach design is the use of memory objects which can be managed either by the kernel or by user programs through a message interface. this feature allows applications such as transaction management systems to participate in decisions regarding secondary storage management and page replacement. this paper explores the goals, design and implementation of mach and its external memory management facility. the relationship between memory and communication in mach is examined as it relates to overall performance, applicability of mach to new multiprocessor architectures, and the structure of application programs. m. young a. tevanian r. rashid d. golub j. eppinger the design of a next-generation process language stanley m. sutton leon j. osterweil correspondence mathew reedy systems analysis: the challenge of integrating two competing technologies as object oriented technologies continue to permeate industrial software development methodologies, it is imperative that the techniques used in these methodologies are re-evaluated. 
classic top down structured analysis and design techniques are competing against object oriented techniques within these methodologies. this paper compares and contrasts both of these paradigms drawing conclusions about their strengths and weaknesses and suggesting that the two paradigms within systems analysis may actually provide similar information. paula gabbert software reusability and the internet guillermo arango acme and acmestudio naming ada tasks at run-time j. cheng k. ushijma deriving modular designs from formal specifications david carrington david duke ian hayes jim welsh a forth-based hybrid neuron for neural nets paul frenger dual objects - an object model for distributed system programming jörg nolte wolfgang schröder-preikschat dejavu: deterministic java replay debugger for jalapeño java virtual machine the execution behavior of a java application can be non-deterministic due to multithreading. this non-determinism makes understanding and debugging multithreaded java applications a difficult and laborious process. dejavu (deterministic java replay utility) helps the user in understanding and debugging non-deterministic java applications by deterministically replaying the execution behavior of a non-deterministic execution. in this demo, we will present a debugger for the jalapeño java virtual machine that utilizes the replay capability provided by dejavu. the debugger helps in isolating non- deterministic failure(s) by faithfully reproducing the same execution behavior that led to the observe failure(s). jalapeno is a jvm being developed at ibm t. j. watson research center. the debugger provides the following features: (1) dejavu deterministically replays java programs; (2) remote reflection supports general debugging functionalities such as setting breakpoints, examining the program state, and detecting deadlocks; and (3) a gui provides an intuitive and easy to use interface. bowen alpern ton ngo jong-deok choi manu sridharan clos: integrating object-oriented and functional programming lisp has a long history as a functional language,* where action is invoked by calling a procedure, and where procedural abstraction and encapsulation provide convenient modularity boundaries. a number of attempts have been made to graft object-oriented programming into this framework without losing the essential character of lisp---to include the benefits of data abstraction, extensible type classification, incremental operator definition, and code reuse through an inheritance hierarchy. the common lisp object system (clos) [3], a result of the ansi standardization process for common lisp, represents a marriage of these two traditions. this article explores the landscape in which the major object-oriented facilities exist, showing how the clos solution is effective within the two contexts. richard p. gabriel jon l. white daniel g. bobrow an object oriented design method for reconfigurable computing systems martyn edwards peter green implementation versus binding to the x window environment scott cleveland profile-guided receiver class prediction david grove jeffrey dean charles garrett craig chambers apl87 i first began developing a formal language for use in teaching in the graduate program in automatic data processing established by professor howard aiken at harvard in 1955\. 
this language, now known as apl, has since passed through several phases, the main ones being documented in three publications [1-3]; my book a programming language in 1962, the apl\360 manual in 1968, and the aplsv manual in 1975. the last two were co-authored with a.d. falkoff. the specifications of the language provided by these publications were later supplemented by more philosophical studies that discussed the design principles followed, and the major design choices made. these include the design of apl [4], and the evolution of apl [5], by me and falkoff, and the story of o, by e.e. mcdonnell [6]. because of implementations produced by various manufacturers, and because of attempts to inject aspects of other languages (as in aplgol), many diverse lines of development have been pursued. these have been largely reported in manuals, in the proceedings of apl conferences, and in journals such as apl quote-quad (association for computing machinery), and vector (british computing society). in 1978 i began a line of development which has been reported largely in documents internal to ibm corp. [7] and to i.p. sharp associates [8-10], but also in apl conferences [11-13]. this work has culminated in a dictionary of apl, scheduled to appear in an early issue of apl quote-quad [14]; in what follows it will be referred to as "the dictionary". the present paper is a companion study in the manner of [4-6]. a preview of it was presented in november of last year at an internal ibm conference that commemorated the 20th anniversary of the initiation of the apl timesharing service within ibm. the major points to be discussed here include terminology, the apl alphabet, word formation, parsing rules, mixed functions, operators, and localization. in discussing decisions made in the early days by me and colleagues in the apl group in the t.j. watson research center, (notably a.d. falkoff and l.m. breed), i will use the term we; this usage is not meant to imply their agreement with the current thinking of myself and present colleagues at i.p. sharp associates as presented in the dictionary. although there is no current implementation of the entire dictionary, several implementations embody significant parts of it, such as the application of operators to derived and user-defined functions, and the production of "mixed" arrays by expressions such as 3 4 5, 'abcd'. two implementations [13, 15] are particularly close to the dictionary; the latter was used in all executed examples in this paper. kenneth e. iverson safely creating correct subclasses without seeing superclass code clyde ruby gary t. leavens replication in the harp file system barbara liskov sanjay ghemawat robert gruber paul johnson liuba shrira reverse engineering of use case realizations in uml dragan bojic dusan velasevic if it's there, use it mike delves umbriel - imperative programming for unsophisticated students this article discusses an experiment in designing and using umbriel, a minimal imperative programming language in the pascal tradition, for teaching the rudiments of programming in situations where the overwhelming complexities of many modern language implementations have become intolerable. p. d. terry efficient computation of interprocedural definition-use chains the dependencies that exist among definitions and uses of variables in a program are required by many language-processing tools. 
this paper considers the computation of definition-use and use-definition chains that extend across procedure boundaries at call and return sites. intraprocedural definition and use information is abstracted for each procedure and is used to construct an interprocedural flow graph. this intraprocedural data-flow information is then propagated throughout the program via the interprocedural flow graph to obtain sets of reaching definitions and/or reachable uses for each interprocedural control point, including procedure entry, exit, call, and return. interprocedural definition-use and/or use-definition chains are computed from this reaching information. the technique handles the interprocedural effects of the data flow caused by both reference parameters and global variables, while preserving the calling context of called procedures. additionally, recursion, aliasing, and separate compilation are handled. the technique has been implemented using a sun-4 workstation and incorporated into an interprocedural data-flow tester. results from experiments indicate the practicality of the technique, both in terms of the size of the interprocedural flow graph and the size of the data-flow sets. mary jean harrold mary lou soffa demeter tools/c++ (abstract) karl lieberherr ian holland walter hursch ignacio silva-lepe cun xiao an algebra approach to the deduction of data flow diagrams and object oriented diagrams from a set of specifications federico vazquez report: technology challenges at spc lisa finneran a very fast algorithm for ram compression compressed virtual memory systems have been suggested, and in some cases implemented, to improve the effectiveness of use of physical ram. however, most proposals and/or implementations are based on adaptive compression algorithms which achieve good compression ratios, but are slow compared to a local disk. hence, they can only give some advantage with very slow (e.g. network-mounted) swap devices. in this paper we show that in many cases memory pages contain highly compressible data, with a very large amount of zero-valued elements. this suggests the replacement of slow, adaptive compression algorithms with very fast ones based on static huffman codes. we present one such algorithm which, paired with a careful layout of the data, is able to compress 4kb pages at 40mb/s even when implemented in software on an inexpensive pentium 100 system. we also show that the algorithm can achieve interesting compression ratios despite its simplicity. since the compression/decompression speed of our algorithms exceeds disk bandwidth, its use in a compressed vm system can lead to both memory savings and speed improvements in servicing page faults. in this paper we discuss some possible applications of the algorithm in a compressed vm system. luigi rizzo demeter/adaptive programming karl j. lieberherr linux gazette mastering kernel modules with caldera: mr. nelson gives us step-by-step instructions for loading kernel modules, so we can keep our kernel lean david b. nelson process name resolution in fault-intolerant csp programs du zhang meiliu lu compositional pointer and escape analysis for java programs this paper presents a combined pointer and escape analysis algorithm for java programs. the algorithm is based on the abstraction of points-to escape graphs, which characterize how local variables and fields in objects refer to other objects.
each points-to escape graph also contains escape information, which characterizes how objects allocated in one region of the program can escape to be accessed by another region. the algorithm is designed to analyze arbitrary regions of complete or incomplete programs, obtaining complete information for objects that do not escape the analyzed regions. we have developed an implementation that uses the escape information to eliminate synchronization for objects that are accessed by only one thread and to allocate objects on the stack instead of in the heap. our experimental results are encouraging. we were able to analyze programs tens of thousands of lines long. for our benchmark programs, our algorithms enable the elimination of between 24% and 67% of the synchronization operations. they also enable the stack allocation of between 22% and 95% of the objects. john whaley martin rinard checking safety properties using compositional reachability analysis the software architecture of a distributed program can be represented by a hierarchical composition of subsystems, with interacting processes at the leaves of the hierarchy. compositional reachability analysis (cra) is a promising state reduction technique which can be automated and used in stages to derive the overall behavior of a distributed program based on its architecture. cra is particularly suitable for the analysis of programs that are subject to evolutionary change. when a program evolves, only the behaviors of those subsystems affected by the change need be reevaluated. the technique however has a limitation. the properties available for analysis are constrained by the set of actions that remain globally observable. properties involving actions encapsulated by subsystems may therefore not be analyzed. in this article, we enhance the cra technique to check safety properties which may contain actions that are not globally observable. to achieve this, the state machine model is augmented with a special trap state labeled as π. we propose a scheme to transform, in stages, a property that involves hidden actions to one that involves only globally observable actions. the enhanced technique also includes a mechanism aiming at reducing the debugging effort. the technique is illustrated using a gas station system example. shing chi cheung jeff kramer standard output guy l. steele 1996 readers' choice awards linux journal readers rank their favorite linux-related products gena shurtleff modulo scheduling for the tms320c6x vliw dsp architecture eric stotzer ernst leiss why apl2: a discussion of design principles apl2 represents a quantum leap in the function of the apl notation over earlier ibm offerings. achieving a consistent language required going back to the fundamentals and rethinking some of the original principles of apl and in a few cases making some changes. this paper discusses some of the extensions and gives the reasons why they were chosen over other possible extensions. james a. brown distributed operating systems sape j. mullender descriptors for pointer target arrays larry rolison oop: the shape of things to come randy charles source-level debugging of scalar optimized code ali-reza adl-tabatabai thomas gross style: an automated program style analyzer al lake curtis cook enhancement through extension: the extension interpreter d. notkin w. g. 
griswold data flow equations for explicitly parallel programs we present a solution to the reaching definitions problem for programs with explicit lexically specified parallel constructs, such as cobegin/coend or parallel_sections, both with and without explicit synchronization operations, such as post, wait or advance. the reaching definitions information for sequential programs is used to solve many standard optimization problems. in parallel programs, this information can also be used to explicitly direct communication and data ownership. although work has been done on analyzing parallel programs to detect data races, little work has been done on optimizing such programs. we show how the memory consistency model specified by an explicitly parallel programming language can influence the complexity of the reaching definitions problem. by selecting the "weakest" memory consistency semantics, we can efficiently solve the reaching definitions problem for correct programs. dirk grunwald harini srinivasan an automated fortran documenter we have written a set of programs designed to help r and d programmers document their fortran programs more effectively. the central program reads fortran source code and asks the programmer questions about things it has not heard of before. it inserts the answers to these questions as comments into the fortran code. the comments, as well as extensive cross-reference information, are also written to an unformatted file. other programs read this file to produce printed information or to act as an interactive document. timothy e. erickson technical correspondence diane crawford integrating message-passing and shared-memory: early experience this paper discusses some of the issues involved in implementing a shared-address space programming model on large-scale, distributed-memory multiprocessors. while such a programming model can be implemented on both shared-memory and message-passing architectures, we argue that the transparent, coherent caching of global data provided by many shared-memory architectures is of crucial importance. because message-passing mechanisms are much more efficient than shared-memory loads and stores for certain types of interprocessor communication and synchronization operations, however, we argue for building multiprocessors that efficiently support both shared-memory and message-passing mechanisms. we describe an architecture, alewife, that integrates support for shared-memory and message-passing through a simple interface; we expect the compiler and runtime system to cooperate in using appropriate hardware mechanisms that are most efficient for specific operations. we report on both integrated and exclusively shared-memory implementations of our runtime system and two applications. the integrated runtime system drastically cuts down the cost of communication incurred by the scheduling, load balancing, and certain synchronization operations. we also present preliminary performance results comparing the two systems. david kranz kirk johnson anant agarwal john kubiatowicz beng-hong lim the response of job classes with distinct policy functions (extended abstract) policy function schedulers provide a flexible framework for implementing a wide range of different scheduling schemes. in such schedulers, the priority of a job at any instant in time is defined by the difference between the time it spent in the system and an arbitrary function of its attained service time.
the latter is called the policy function and acts as the functional parameter that specifies a particular scheduling scheme. for instance, a constant policy function specifies the first-come, first-serve scheduling scheme. by changing the policy function, the system behavior can be adjusted to better conform with desired response characteristics. it is common to express response characteristics in terms of a response function, the average response time of a job conditioned on its service requirement in equilibrium. in this paper, we analyze processor-sharing m/g/1 systems in which the priorities of different classes of jobs are determined by distinct policy functions. manfred ruschitzka tasking troubles and tips l. rising a case for compositional file systems (extended abstract) rajesh bordawekar simple and effective link-time optimization of modula-3 programs modula-3 supports development of modular programs by separating an object's interface from its implementation. this separation induces a runtime overhead in the implementation of objects, because it prevents the compiler from having complete information about a program's type hierarchy. this overhead can be reduced at link time, when the entire type hierarchy becomes available. we describe opportunities for link-time optimization of modula-3, present two link-time optimizations that reduce the runtime costs of modula-3's opaque types and methods, and show how link-time optimization could provide c++ with the benefits of opaque types at no additional runtime cost. our optimization techniques are implemented in mld, a retargetable linker for the mips, sparc, and intel 486. mld links a machine-independent intermediate code that is suitable for link-time optimization and code generation. linking intermediate code simplifies implementation of the optimizations and makes it possible to evaluate them on a wide range of architectures. mld's optimizations are effective: they reduce the total number of instructions executed by up to 14% and convert as many as 79% of indirect calls to direct calls. mary f. fernández managed file distribution on the universe network the file distribution system on the universe network consists of a distributed set of co-operating agents which provide clients with a reliable bulk file collection, transfer and delivery service. the agent systems incorporate specialised techniques for optimizing use of the satellite channel, as well as making available facilities for broadcast file distribution. the distributed system architecture and protocols are described, with emphasis on the separation of control and data transfer. a detailed presentation is given of the way in which agents interact with both hosts and other agents to achieve both reliability and robustness in the face of breaks in host or network availability. a description is given of the special transfer methods used and some of the possible applications for such a system. an indication is given of possible future developments to enable evolution from an experimental to a service system.
christopher s cooper acm president's letter:pixel art adele goldberg robert flegal icse 97: picking up the gauntlet alfonso fuggetta object-oriented specification (abstract only) ole-johann dahl a tale of dxpc differential x protocol compression: article about using differential x protocol compression which compresses x messages up to over 7:1 justin gaither take command: dtree corporate linux journal staff implementation of 8-bit coded character sets in ada strohmeier alfred genillard christian weber mats object-oriented software development with traditional languages rick lutowski a generalized object model (abstract only) van nguyen brent hailpern optimistic active messages: a mechanism for scheduling communication with computation low-overhead message passing is critical to the performance of many applications. active messages reduce the software overhead for message handling: messages are run as handlers instead of as threads, which avoids the overhead of thread management and the unnecessary data copying of other communication models. scheduling the execution of active messages is typically done by disabling and enabling interrupts, or by polling the network. this primitive scheduling control, combined with the fact that handlers are not schedulable entities, puts severe restrictions on the code that can be run in a message handler. this paper describes a new software mechanism, optimistic active messages (oam), that eliminates these restrictions; oams allow arbitrary user code to execute in handlers, and also allow handlers to block. despite this gain in expressiveness, oams perform as well as active messages. we used oam as the base for an rpc system, optimistic rpc (orpc), for the thinking machines cm-5 multiprocessor; it consists of an optimized thread package and a stub compiler that hides communication details from the programmer. orpc is 1.5 to 5 times faster than traditional rpc (trpc) for small messages and performs as well as active messages (am). applications that primarily communicate using large data transfers or are fairly coarse-grained perform equally well, independent of whether ams, orpcs, or trpcs are used. for applications that send many short messages, however, the orpc and am implementations are up to three times faster than the trpc implementations. using orpc, programmers obtain the benefits of well-proven programming abstractions such as threads, mutexes, and condition variables, do not have to be concerned with communication details, and yet obtain nearly the performance of hand-coded active message programs. deborah a. wallach wilson c. hsieh kirk l. johnson m. frans kaashoek william e. weihl acm forum robert l. ashenhurst a review of the apl2000 user conference by richard woll richard woll deferring design decisions in an application framework james e. carey brent a. carlson when to white box test peter j. d. matthews take command: what is dd? corporate linux journal staff object oriented programming in aida apl m. gfeller remote procedure calls in linux an introduction to this vital software development technique ed petron a comparison of extended pascal and ada j. donaho using properties for uniform interaction in the presto document system most document or information management systems rely on hierarchies to organise documents (e.g. files, email messages or web bookmarks). however, the rigid structures of hierarchical schemes do not mesh well with the more fluid nature of everyday document practices. 
this paper describes presto, a prototype system that allows users to organise their documents entirely in terms of the properties those documents hold for users. properties provide a uniform mechanism for managing, coding, searching, retrieving and interacting with documents. we concentrate in particular on the challenges that property- based approaches present and the architecture we have developed to tackle them. paul dourish w. keith edwards anthony lamarca michael salisbury experience with a forth-like language j e jonak some uses of { and } we believe that the design of apl was also affected in important respects by a number of procedures and circumstances. firstly, from its inception apl has been developed by using it in a succession of areas. this emphasis on application clearly favors practicality and simplicity. the treatment of many different areas fostered generalization … \--- falkoff and iverson, "the design of apl" roger hui a logical approach to data structures russell turpin binary wrapping: a technique for instrumenting object code jon cargille barton p. miller eros: a fast capability system jonathan s. shapiro jonathan m. smith david j. farber borrowed-virtual-time (bvt) scheduling: supporting latency-sensitive threads in a general-purpose scheduler kenneth j. duda david r. cheriton spider - a language for control network programing t. golemanov k. krachanov e. golemanova a time complexity lower bound for randomized implementations of some shared objects prasad jayanti quantifying software designs this paper describes an effort to use metrics to evaluate software designs early in the design process. key facets of the work include a machine processable design notation and the definition of software design metrics. we believe that the future success of building an intelligent software design assistant depends on the ability to quantify attributes of a software design, as well as to have the representation of the design available for automated examination. john beane nancy giddings jon silverman mixed language programming realization and the provision of data types bo einarsson on parallel object oriented programming in fortran 90 the c++ programming language [6, 10] is well-known for its support of object oriented concepts, useful in abstraction modeling. containing many important features, its popularity is growing with a new generation of scientists anxious to bring clarity and flexibility to their programming efforts. nevertheless, most of the scientific applications in development and use today are based on fortran, the most popular language for scientific programming.fortran is not a static language, it has continually evolved to include the most recent proven ideas and concepts garnered from other programming languages. until recently, many of the most modern features were not available, complicating abstraction modeling for large scale development projects. this can make software difficult to comprehend, unsafe and potentially useless. the emergence of fortran 90 [3] has dramatically changed the prospects of fortran programming. not only are many of the most modern aspects of programming language techniques included in the standard, there are also specific new additions that will undoubtedly affect the next generation of all languages used in scientific programming [8]. charles d. norton viktor k. decyk boleslaw k. szymanski the object-oriented brewery: a comparison of two object-oriented development methods robert c. sharble samuel s. 
cohen evaluating software engineering methods and tools - part 3: selecting an appropriate evaluation method - practical issues barbara ann kitchenham programming pearls jon bentley best of tech support corporate linux journal staff automatically closing open reactive programs christopher colby patrice godefroid lalita jategaonkar jagadeesan customization and composition of distributed objects: middleware abstractions for policy management current middleware solutions such as corba and java's rmi emphasize compositional design by separating functional aspects of a system (_e.g._ objects) from the mechanisms used for interaction (_e.g._ remote procedure call through stubs and skeletons). while this is an effective solution for handling distributed interactions, higher-level requirements such as heterogeneity, availability, and adaptability require policies for resource management as well as interaction. we describe the _distributed connection language_ (dcl): an architecture description language based on the actor model of distributed objects. system components and the policies which govern an architecture are specified as encapsulated groups of actors. composition operators are used to build connections between components as well as customize their behavior. this customization is realized using a meta-architecture. we describe the syntax and semantics of dcl, and illustrate the language by way of several examples. mark astley gul a. agha parallel c/c++: convergence or divergence l. stanberry cost profile of a highly assured, secure operating system the logical coprocessing kernel (lock) began as a research project to stretch the state of the art in secure computing by trying to meet or even exceed the "a1" requirements of the trusted computer system evaluation criteria (tcsec). over the span of seven years, the project was transformed into an effort to develop and deploy a product: the standard mail guard (smg). since the project took place under a us government contract, the development team needed to maintain detailed records of the time spent on the project. the records from 1987 to 1992 have been combined with information about software code size and error detection. this information has been used to examine the practical impacts of high assurance techniques on a large-scale software development program. tasks associated with the a1 formal assurance requirements added approximately 58% to the development cost of security-critical software. in exchange for these costs, the formal assurance tasks (formal specifications, proofs, and specification code correspondence) uncovered 68% of the security flaws detected in lock's critical security mechanisms. however, a study of flaw detection during the smg program found that only 14% of all flaws detected were of the type that could be detected using formal assurance, and that the work of the formal assurance team only accounted for 19% of all flaws detected. while formal assurance is clearly effective at detecting flaws, its practicality hinges on the degree to which the formally modeled system properties represent all of a system's essential properties. richard e. smith apl and nested arrays - a dream for statistical computation alan sykes tom stroud improving apl performance with custom written auxiliary processors andrew k. dickey evolution of object behavior using context relations a collection of design patterns was described by gamma, helm, johnson, and vlissides in 1994.
recognizing that designs change, each pattern ensures that a certain system aspect can vary over time such as the operations that can be applied to an object or the algorithm of a method. the patterns are described by constructs such as the inheritance and reference relations, attempting to emulate more dynamic relationships. as a result, the design patterns demonstrate how awkward it is to program natural concepts of behavioral evolution when using a traditional object-oriented language. in this paper we present a new relation between classes: the context relation. it directly supports behavioral evolution, and it is meaningful at the analysis, design, and implementation level. at the design level we picture a context relation as a new form of arrow between classes. at the implementation level we use a small extension of c++. the basic idea is that if class c is context-related to a base class b, then b-objects can get their functionality dynamically altered by c-objects. our language construct for doing this is a generalization of the method update in abadi and cardelli's imperative object calculus. a c-object may be explicitly attached to a b-object, or it may be implicitly attached to a group of b-objects for the duration of a method invocation. we demonstrate how the context relation can be used to easily model and program the adapter, bridge, chain of responsibility, decorator, iterator, observer, state, strategy, and visitor patterns. linda m. seiter jens palsberg karl j. lieberherr pc note eugene styer apl90 denmark an ada™-compatible specification language this paper describes a notation for the formal specification of software packages. the main influences are the guarded commands of dijkstra and the algebraic semantics of guttag. however, a novel operator denoted by % is introduced, which allows algorithms to be abstracted in a specification, thereby creating a true specification language rather than another higher level language. the notation, called adl/1, is designed to be used in conjunction with ada™ but is equally suitable for other languages, and has been used for real time software written in assembler and in a pascal-like language. n. c.l. beale peyton s.l. jones how strong is weak mutation? a. jefferson offutt stephen d. lee the myth: anyone can code the software if the requirements and design are... the focus of this paper is the conjecture that the coding phase of a software system is difficult even in those instances where the functional requirements are well defined, the functional design is finalized, and the implementation issues are well understood. to study this conjecture, we examined an actual implementation of a stack to discover what decisions the programmer encounters when attempting to transform the functional design into operational software code. we examine decisions affecting the specification of the stack package as well as those affecting the implementation of the subprograms in the body of this package. j. a. perkins hyflofusion: harnessing the power of acid rain p. christopher staecker a note on lamport's mutual exclusion algorithm tai-kuo woo the role of formalized domain-specific software frameworks we use our experience in developing formal domain-specific software processes in an industrial setting to argue the benefits of (a) developing specialized software process models tailored to a particular class of software system, and (b) the use of formal methods and notations for extracting these models.
david garlan an example of the developer's documentation for an embedded computer system written in ada, part 2 t j wheeler software process modeling example marc i. kellner action system approach to the specification and design of distributed systems r. kurki-suonio h.-m. järvinen when objects collide: experiences with reusing multiple class hierarchies well-designed reusable class libraries are often incompatible due to architectural mismatches such as error-handling and composition conventions. we identify five pragmatic dimensions along which combinations of subsystems must match, and present detailed examples of conflicts resulting from mismatches. examples are drawn from our experiences of integrating five subsystem-level class hierarchies into an object-oriented hypertext platform. we submit that effective reuse will require that these pragmatic decisions be explicitly identified in descriptions of reusable software. such descriptions will enable developers to identify and combine subsystems whose architectures are compatible. lucy m. berlin recent trends in experimental operating systems research edward d. lazowska interface control and incremental development in the pic environment the pic environment is designed to provide support for interface control that facilitates incremental development of a software system. interface control, the description and analysis of relationships among system components, is important from the earliest stages of the software development process right through to the implementation and maintenance stages. incremental development, wherein a software system is produced through a sequence of relatively small steps and progress may be rigorously and thoroughly assessed after each step, must be accommodated by any realistic model of the software development process. this paper focuses on the analysis component of the pic environment and demonstrates how it contributes to precise interface control capabilities while supporting an incremental software development process. alexander l. wolf lori a. clarke jack c. wileden object oriented methodology demonstration (oomd) and discussion david brookman jump minimization in linear time m. v. s. ramanath marvin solomon an automated oracle for verifying gui objects recently, software testers have relied more on automated testing to test software. the automated testing method consists of three modules: test case design, execution, and verification. yet, to accomplish these three phases, we are always in a dilemma due to a lack of a verification function. nearly all the commercial automated testing tools cannot efficiently compare graphic objects though gui (graphic user interface) software is now more crucial than text-based user interfaces. this research develops a technique that aids automatic behavior verification for a particularly difficult problem: determining the correctness of screen output. methodology to capture and compare screen output is presented and a case study using microsoft® powerpoint® is described. efficient cooperative caching using hints prasenjit sarkar john hartman a study of exception handling and its dynamic optimization in java optimizing exception handling is critical for programs that frequently throw exceptions. we observed that there are many such exception-intensive programs in various categories of java programs. there are two commonly used exception handling techniques: stack unwinding optimizes the normal path, while stack cutting optimizes the exception handling path.
however, there has been no single exception handling technique to optimize both paths. takeshi ogasawara hideaki komatsu toshio nakatani computer-aided verification of reactive systems rajeev alur xsuse--adding more to the xfree86 offerings in mid 1997, s.u.s.e. started to release a small family of xservers, called xsuse, that are based on xfree86 and are freely available in binary form. this paper explains who is involved in doing this, why we are doing it and what dirk h. hohndel constructing usable documentation: a study of communicative practices and the early uses of mainframe computing in industry this study suggests that computer documentation is a complex technical communication genre, encompassing all the texts that mediate between complex human activities and computer processes. drawing on a historical study, it demonstrates that the varied forms given to documentation have a long history, extending back at least to the early days of commercial mainframe computing. the data suggests that (1) early forms of computer documentation were borrowed from existing genres, and (2) official and unofficial documentation existed concurrently, despite efforts to consolidate these divergent texts. the study thus provides a glimpse into the early experimental nature of documentation as writers struggled to find a meaningful way to communicate information about their organization's developing computer technology. mark zachry asap - a framework for evaluating run-time schedulers in embedded multimedia end-systems asawaree kalavade pratyush moghe evaluating software eng. methods and tools part 10: designing and running a quantitative case study in the last article we considered how to identify the context for a case study and how to define and validate a case study hypothesis. in this article, we continue my discussion of the eight steps involved in a quantitative case study by considering the remaining six steps: selecting the host projects; identifying the method of comparison; minimising the effect of confounding factors; planning the case study; monitoring the case study; analysing the results. barbara ann kitchenham lesley m. pickard exception handling in apl this paper examines apl exception handling facilities as they relate to applications programming. a brief background on exception handling is first presented. next, the qualities most desirable in an exception handler are discussed. these criteria are then used to examine and compare two different implementations of an exception handling facility, those of stsc, inc. and i. p. sharp associates (ipsa). finally, a set of apl-coded handlers is compared to the system-level implementations on the same basis. dennis r. adler verifying systems with integer constraints and boolean predicates: a composite approach tevfik bultan richard gerber christopher league understanding and using asynchronous message passing (preliminary version) message passing provides a way for concurrently executing processes to communicate and synchronize. in this paper, we develop proof rules for asynchronous message-passing primitives (i.e. "send no-wait"). two benefits accrue from this. the obvious one is that partial correctness proofs can be written for concurrent programs that use such primitives. this allows programs to be understood as predicate transformers, instead of by contemplating all possible execution interleavings.
the second benefit is that the proof rules and their derivation shed light on how interference arises when message- passing operations are used and on how this interference can be controlled. this provides insight into programming techniques to eliminate interference in programs that involve asynchronous activity. three safe uses of asynchronous message passing are described here: the transfer of values, the transfer of monotonic predicates, and the use of acknowledgments. richard d. schlichting fred b. schneider surveyor's forum: a recurring bug r. s. bird for text files tom zimmer merging task-centered ui design with complex system development: a risky business yolanda j. reimer ray ford nicholas p. wilde a portable implementation of the distributed systems annex in java yoav tzruya mordechai ben-ari in search of "real" ada: a software saga with a moral or two bryce m. bardin marion f. moon efficient dataflow analysis of logic programs a framework for efficient dataflow analyses of logic programs is investigated. a number of problems arise in this context: aliasing effects can make analysis computationally expensive for sequential logic programming languages; synchronization issues can complicate the analysis of parallel logic programming languages; and finiteness restrictions to guarantee termination can limit the expressive power of such analyses. our main result is to give a simple characterization of a family of flow analyses where these issues can be ignored without compromising soundness. this results in algorithms that are simple to verify and implement, and efficient in execution. based on this approach, we describe an efficient algorithm for flow analysis of sequential logic programs, extend this approach to handle parallel executions, and finally describe how infinite chains in the analysis domain can be accommodated without compromising termination. saumya k. debray a programming environment for a timeshared system in 1968 the stanford artificial intelligence laboratory began to construct a programming environment from a pdp-10, a pre-tops-10 dec1 timesharing system, and some innovative terminal hardware. by now this has developed into a programming environment for a kl-10 that integrates our editor with various other system functions, especially the lisp subsystem. we use the term 'sail' to refer to the stanford a. i. lab kl-10 computer running the waits timesharing system. [harvey 1982] by 'programming environment' we mean the mechanisms that allow a user to type text at his program or subsystem, and which manage output text. 2 we are talking about mechanical management of the interaction between user and program, not about any intelligent mediation. a good programming environment should be flexible enough to suit individuals, yet without requiring the mechanics of interaction to be re-learned for each new program. in this paper we describe our programming environment, what makes it unique, and why we think that it is not necessary to move to personal computers for a very usable programming environment. richard p. gabriel martin e. frost is forth dead? paul frenger a note on hennessy's "symbolic debugging of optimized code" david wall amitabh srivastava fred templin object-oriented concurrent reflective languages can be implemented efficiently hidehiko masuhara satoshi matsuoka takuo watanabe akinori yonezawa ada on fault-tolerant distributed systems john c. 
knight automat, an end-user approach in handling applications as multi-dimensional arrays "automat" is an end user, general purpose tool developed initially in apl-sv currently running under vsapl and apl2 (ibm pc version currently under consideration). the basic underlying concept is to tap the power of apl in handling multi- dimensional arrays, and make it available to the end user in a very natural and intuitive fashion. the productivity gains generated by the system are extremely high, as a new application can be set up in terms of fraction of hours, with immediately usable results. a lot of provisions have been made to handle instantaneously any changes in the structures, whether they are due to new business requirements or to initially overlooked facts. the system has been and still is successfully used for data consolidation / analysis, planning models and application prototypes. the use of apl as internal language enables the system to communicate very easily with other applications (e.g. graphics). jean marie monnier a case for using procedure calls for i/o w. a. baldwin a fully reusable class of objects for synchronization and communication in ada 95 this paper presents a very general class which can be reused to specify and implement any type exporting synchronization or communication properties. the new ada 95 features modelling inheritance, polymorphism and hierarchies of library units are used extensively in describing the architecture of the class and other new features (access to subprograms, protected types, …) are used for the specification and implementation of the components of the class.section 2 presents the general architecture of our class. sections 3, 4, 5 respectively give examples of specification, use and implementation of its components. section 6 concludes on recalling the full role of formal techniques in our approach (they appear in the present paper only to show that the semantics of our class is defined at a more abstract level than its implementation) and discussing a few interesting points about the way ada 95 is used here. patrick de bondeli using apl expressions in database operations the collateral analysis system (cas) is an analytic database written in _dyalog apl_ which combines the flexibility of a database with the power of a sophisticated analysis package. much of the power and flexibility of cas comes from the ability of users to create their own selection statements, calculations, and summary expressions.an expression is a combination of _constants, fields_ and _user functions_ which can form a selection statement, new field calculation, or frequency report specification.users can generate tailor-made reports without programming. they simply enter expressions and format data and save the information with the database. then, with the touch of a button, they can generate reports quickly and easily. cas expressions are merely apl expressions in disguise. paul s. mansour stephen m. mansour components and program analyses model checking has been applied quite successfully to hardware verification and shows promise for software verification. the key obstacle is the well-known state explosion problem. this report describes work done by the investigator under nsf support, in particular grants ccr 980-4736 and ccr 941-5496, to ameliorate state explosion. matthias felleisen a conceptual model for megaprogramming will tracz virtual memory peter j. denning representation of function variants for embedded system optimization and synthesis k. richter d. 
ziegenbein r. ernst l. thiele j. teich evolution of a software architecture for management information systems jeffrey s. poulin ndc++: an approach to concurrent extension of c++ jiajun chen guoliang zheng formal modeling and analysis of the hla component integration standard an increasingly important trend in the engineering of complex systems is the design of component integration standards. such standards define rules of interaction and shared communication infrastructure that permit composition of systems out of independently-developed parts. a problem with these standards is that it is often difficult to understand exactly what they require and provide, and to analyze them in order to understand their deeper properties. in this paper we use our experience in modeling the high level architecture (hla) for distributed simulation to show how one can capture the structured protocol inherent in an integration standard as a formal architectural model that can be analyzed to detect anomalies, race conditions, and deadlocks. robert j. allen david garlan james ivers letters to the editor corporate linux journal staff genie forth roundtable sergei baranoff user interface prototyping - concepts, tools, and experience dirk bäumer walter r. bischofberger horst lichter heinz zullighoven inheritance and child library units brad balfour issues in object-oriented requirements analysis donald l. ross semantic analysis of virtual classes and nested classes virtual classes and nested classes are distinguishing features of beta. nested classes originated from simula, but until recently they have not been part of mainstream object-oriented languages. c++ has a restricted form of nested classes and they were included in java 1.1. virtual classes is the beta mechanism for expressing generic classes and virtual classes is an alternative to parameterized classes. there has recently been an increasing interest in virtual classes and a number of proposals for adding virtual classes to other languages, extending virtual classes, and unifying virtual classes and parameterized classes have been made. although virtual classes and nested classes have been used in beta for more than a decade, their implementation has not been published. the purpose of this paper is to contribute to the understanding of virtual classes and nested classes by presenting the central elements of the semantic analysis used in the mjølner beta compiler. ole lehrmann madsen visual conventions for system design using ada 9x: representing asynchronous transfer of control jeffrey v. nickerson universal closure operator for prolog t vasak javascript brent w. benson an application of petri nets in structured analysis t. h. tse l. pong corrigendum: "the design and application of a retargetable peephole optimizer" jack w. davidson christopher w. fraser parallel compilation of ada units parallel compilation---compiling independent ada units or files in parallel---offers a potentially enormous savings in the total compilation time of a program or library. the speedup realized depends primarily on the inter-unit dependency structure and the number of processors available. we have developed a tool, adaexpress, which exploits the opportunities for parallelizing compilations that are inherent in ada libraries. adaexpress is currently targeted to a sequent symmetry multiprocessor environment. the compilation engine is a sequent ada compiler, a software leverage product that is derived from the verdix ada development system (vads®).
adaexpress automatically determines ada dependencies from sources. it then applies scheduling heuristics to the dependency structure to obtain a schedule with good, though suboptimal, overall compilation time. adaexpress implements this schedule as streams of parallel compilation processes. we present the details of adaexpress's graph- based scheduling algorithm. we also give an example using adaexpress on an actual program library, and we show the speedup achieved with four processors. b. cockerham generation of graphical representations from ada source code james h. cross kelly i. morrison charles h. may lazy and incremental program generation current program generators usually operate in a greedy manner in the sense that a program must be generated in its entirety before it can be used. if generation time is scarce, or if the input to the generator is subject to modification, it may be better to be more cautious and to generate only those parts of the program that are indispensable for processing the particular data at hand. we call this lazy program generation. another, closely related strategy is incremental program generation. when its input is modified, an incremental generator will try to make a corresponding modification in its output rather than generate a completely new program. it may be advantageous to use a combination of both strategies in program generators that have to operate in a highly dynamic and/or interactive environment. j. heering p. klint j. rekers from the publisher: a confession and some ramblings corporate linux journal staff use of forth in a course in computer algebra john j. wavrik the game of life: a clean programming tutorial and case study anthony h. dekker extensible access control for a hierarchy of servers jean bacon richard hayton sai lai lo ken moody the turing programming language turing, a new general purpose programming language, is designed to have basic's clean interactive syntax, pascal's elegance, and c's flexibility. richard c. holt james r. cordy overcoming communication obstacles in user-analyst interaction for functional requirements elicitation the importance of requirement engineering in the software development process has been widely recognised by the scientific community. one of the major error sources that arise in this phase is represented by ineffectual communication between users and analysts.valusek and fryback in [32] identify three classes of communication obstacles to a successful elicitation of requirements. purpose of this paper is to discuss these obstacles and to identify the structure of a case tool that may allow to overcome them. s. valenti m. panti a. cucchiarelli ewatch bill carlson chris garrity fortran 90/95/hpf information file (part 1, compilers) michael metcalf on mutual exclusion in faulty distributed systems abdelmadjid bouabdallah an application of queueing theory to the design of a message-switching computer system inexact or real-world queueing techniques are used to determine that the number of buffers provided in system design is indeed adequate to guard against message loss. jack gostl irwin greenberg goto machine paul frenger from reuse library experiences to application generation architectures reuse through application generators has been successful in the area of programming language systems. we analyzed three language system projects that realized transition from the initial ad hoc programs, through libraries of reusable modules to application generator solutions. 
we tried to understand the underlying thinking process and technical factors that made such a transition possible. based on this study, we generalized reuse experiences gained in the language system domain and formulated a reuse implementation framework. our framework is intended to facilitate the transition from component-based reuse to application generators in other domains. ultimately, we hope our framework will offer reuse implementation guidelines for companies to realize such a transition. initial findings are described in this paper. stan jarzabek flavers leon osterweil lori a. clarke automated formal methods: model checking and beyond e. allen emerson facts and figures about the york ada compiler i. c. wand j. r. firth c. h. forsyth l. tsao k. s. walker algorithm 803: a simpler macro processor macro processors have been in the computing tool chest since the late 1950's. their use, though perhaps not what it was in the heyday of assembly language programming, is still widespread. in the past, producing a full-featured macro processor has required significant effort, similar to that required to implement the front-end to a compiler augmented by appropriate text substitution capabilities. the tool described here adopts a different approach. the text containing macro definitions and substitutions is, in a sense, "compiled" to produce a program, and this program must then be executed to produce the final output. william a. ward a hierarchical description of the hermix distributed operating system distributed operating systems are often structured in a server-oriented way, where some system tasks are performed by server processes. for modularity reasons, hermix is built from a minimal kernel and many small servers; however, to get reasonable performance we use tools to merge servers together or to integrate them in our kernel. in this paper we analyze the most basic services needed in server-oriented systems. we give a description of these basic services and place them in a hierarchical structure. servers usually handle services from different layers and can therefore not be ordered hierarchically. our hierarchical description gives better insight into the relationships between inter-process communication, memory management based on swapping, and both low level and high level process management. yolande berbers pierre verbaeten tame: tailoring an ada measurement environment victor r. basili h. dieter rombach objective view point: a cornucopia of c++ resources g. bowden wise completeness, robustness, and safety in real-time software requirements specification m. s. jaffe n. g. leveson lispview: leverage through integration although lisp was the host for many of the first graphical user interface (gui) packages, popular activity in this area has shifted to more primitive but widely used languages such as c and c++. one explanation for this shift is that while lisp's strength in rapid prototyping and development led to the initial progress, it also tended to inspire an imperialist attitude: applications were often crafted exclusively in lisp, even when part of the application could make use of an existing conventional language library. we believe the ideal way to construct a commonlisp gui package today is to integrate proven c libraries with an object-oriented lisp framework. hans muller replicate each, anyone? in both the ipsa and apl2/nars extensions of apl, the slash symbol always represents an operator.
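as an editorial aside on the replicate and expand operations discussed in this entry, their effect on vectors can be sketched in python; this is a plain illustration of the data behaviour, not of apl2's syntactic-binding rules, and the fill element for expand is assumed to be zero.

# minimal sketch of apl-style replicate (compress) and expand on python lists.
def replicate(mask, data):
    # mask[i] copies of data[i]; a boolean (0/1) mask gives classic compression
    out = []
    for m, x in zip(mask, data):
        out.extend([x] * m)
    return out

def expand(mask, data):
    # a 1 in the mask takes the next element of data, a 0 inserts a fill element
    it = iter(data)
    return [next(it) if m else 0 for m in mask]

print(replicate([1, 0, 2], ["a", "b", "c"]))   # ['a', 'c', 'c']
print(expand([1, 0, 1, 0, 1], [10, 20, 30]))   # [10, 0, 20, 0, 30]
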
some new users of apl2, having been taught in the past that slash is an ambiguous symbol which is a dyadic function when it immediately follows data, find that the result when slash occurs between data and another operator is incompatible with that interpretation. the situation did not arise in apl1 because derived functions could not be operands. the classification of slash as an operator can be traced to the description given in the book a programming language, from which apl ultimately derives. it was done so that the closely related mask operation could be specified with the same symbol. in this paper the syntactic differences between dyadic functions and monadic operators with data operands are explicated using the apl2 concept of syntactic binding. two straightforward language extensions are considered based on the notions of a depth property, which apl2 shares with stsc's nars, and pairwise evaluation, which derives from syntactic binding. together they allow either dyadic functions or monadic operators with data operands to be executed in three structural contexts, which, in the cases of replicate and expand, can be loosely described as the same mask for each of several arrays, a separate mask for each array, and the same array for each of several masks. j. philip benkard the lion and the unicorn met prolog a recent paper published in the journal of automated reasoning, no. 1, 1985, described ways to apply automatic theorem-proving techniques to a logic puzzle. the prolog programming language is closely related to automatic theorem-proving. that paper did not mention prolog, but we use prolog to solve the logic puzzle that paper discussed. we then compare our results with the results given in that paper. bruce d. ramsey a new graphical user interface proposal for apl in the past apl had two significant advantages over many other problem-solving computational tools. the first of these was that the language elements of apl have a consistency and breadth that remains unchallenged, at least for the tasks for which they were designed. the other historic advantage of apl, that it provided an interactive problem-solving environment, has long since been eroded. indeed, we have seen a transformation of some programming languages into programming environments. the purpose of this paper is to explore some possible ways in which the face that apl presents to those who program with it could usefully be enhanced. dick bowman kaos (abstract only): a knowledge aided operator's system for the vm operator's console kaos is a system currently under development at southern illinois university which attempts to combine all necessary operating procedures for bringing up, running, and bringing down certain virtual systems. this system runs as a front-end to the vm operator's console. kaos is designed not only to set up the procedures for running the system but also to aid the operator should certain problems arise during the process of bringing up or bringing down a virtual system. the main flow of the program core is an expert system implemented in ops5 and psl; however, kaos calls on user-friendly menu-driven screens written in cms exec2 and ibm assembler languages. kaos also needs to make queries on the system, such as which virtual-to-real lines are connected and whether they match the given configuration. all the documentation for bringing up and bringing down each of the virtual systems is in kaos. the system will walk the operator step by step through the procedure.
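as an editorial aside before the sample dialog that follows, the general shape of such a rule-based console assistant can be sketched in a few lines of python; the link facts and rules below are hypothetical illustrations, not kaos's actual ops5 rule base.

# minimal sketch of a rule-based diagnosis step for printer-link problems.
# the observed facts and the rule table are hypothetical examples.
facts = {
    "bussprt": {"link": "inactive", "line_attached": False, "printer_on": True},
    "comprt":  {"link": "active",   "line_attached": True,  "printer_on": False},
}

rules = [
    (lambda f: not f["line_attached"],
     "line not attached: run the recovery exec to attach the line"),
    (lambda f: f["line_attached"] and not f["printer_on"],
     "line attached but no connection: ask the remote office to power on the printer"),
]

def diagnose(name):
    f = facts[name]
    suggestions = [advice for condition, advice in rules if condition(f)]
    return suggestions or ["no known cause; escalate to the operator"]

for link in facts:
    print(link, "->", diagnose(link))
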
if a problem arises the operator may query the kaos system for possible solutions. other problems that the operator overlooks may be flagged by kaos. the kaos system is more than a consolidation of information from the manuals; it is an expert system which actually goes out and makes cp queries to determine what might be the cause of a certain problem, and then instructs operators in one or more methods that may solve the problem. the choice of method at this point will always be made and implemented by the operators; kaos simply suggests what may be the cause and how it might be corrected. kaos runs on an ibm 3081 and is currently confined to handling queries and giving information about the following: rscs, ims, passthru, virtual-to-real line connections, virtual-to-virtual line connections, and the kaos system itself. the system can answer limited queries not related to bringing the system up or down. most of these queries concern line connections and line activity. the following is some sample dialog with kaos in bringing up an rscs release 1.3 machine. rscs is ibm's remote spooling communications subsystem network. it is a communications package which allows information to pass from one machine to another machine or to a printer. the operator's console has special privileges to send special messages querying the rscs system. in this example the link to bussprt (business printer) is not connected, because the rscs line to the printer is not attached. the comprt (communications printer) is not connected because the printer is not turned on. the kaos dialog is in capital letters, the operator's dialog is in mixed case, and comments are enclosed in (* *). the following lines should be connected by 8:10 on a weekday comprt, bussprt, eduprt, bitnet, … (* kaos knows that printers for offices should be *) (* up between 8:00 am and 5:00 p.m. on a weekday. *) the following links are not connected: link comprt active --- dmtvmc line 330 noh nod link bussprt inactive --- dmtrpt line 040 noh nod (* an inactive link is one which is not started, or *) (* for some reason has been brought down. an *) (* active link is a link which is started on this *) (* end, but not yet connected to the other side. *) link bussprt line 040 is not network attached to 2408. to attach bussprt issue bussprt recovery exec. type: smsg rscs bussprt smsg rscsbussprt activating link bussprt buss dmtrpt 040 * (* the business printer is not active, kaos will *) (* continue onto the next topic. *) link comprt is not connected. all lines appear active. the printer itself may be down. recommendation: call communication office 555-1212 and ask them to start the printer. leave the printer down. query kaos for further information. (* pf1 key is pressed, kaos will then continue *) (* with previous topics, and a new screen. *) to query further information on an rscs node type in link node name and hit enter. query all links on the rscs system. exit to previous topic. exit kaos system. ; (* pf3 key entered. kaos returns to main screen. *) in the above example kaos walks the operator through the procedure needed to recover from the error; however, the operator decides which recovery procedure to take, and issues all recovery procedures. in a fully implemented version of kaos, a severity code would be attached to a problem and kaos would attempt to correct less severe problems, while operators would be flagged for more complicated errors. different places run different systems and have different configurations.
the kaos system is being developed with this in mind. it should be relatively easy to configure the system for the unique set of virtual machines and lines. it should also adapt easily to changes made in the system's configuration. kathleen o'neil lossau advertisers index corporate linux journal staff more letters corporate linux journal staff writing cgi scripts in python michel vanaken a 100% portable inline-debugger jurgen heymann stack frames in forth bradford j. rodriguez automated assistance for conflict resolution in multiple perspective systems analysis and operation william n. robinson product review: metro-x 3.1.5 mark nassal reducing indirect function call overhead in c++ programs modern computer architectures increasingly depend on mechanisms that estimate future control flow decisions to increase performance. mechanisms such as speculative execution and prefetching are becoming standard architectural mechanisms that rely on control flow prediction to prefetch and speculatively execute future instructions. at the same time, computer programmers are increasingly turning to object-oriented languages to increase their productivity. these languages commonly use run-time dispatching to implement object polymorphism. dispatching is usually implemented using an indirect function call, which presents challenges to existing control flow prediction techniques. we have measured the occurrence of indirect function calls in a collection of c++ programs. we show that, although it is more important to predict branches accurately, indirect call prediction is also an important factor in some programs and will grow in importance with the growth of object-oriented programming. we examine the improvement offered by compile-time optimizations and static and dynamic prediction techniques, and demonstrate how compilers can use existing branch prediction mechanisms to improve performance in c++ programs. using these methods with the programs we examined, the number of instructions between mispredicted breaks in control can be doubled on existing computers. brad calder dirk grunwald new products corporate linux journal staff precise and efficient integration of interprocedural alias information into data-flow analysis data-flow analysis is a basis for program optimization and parallelizing transformations. the mechanism of passing reference parameters at call sites generates interprocedural aliases which complicate this analysis. solutions have been developed for efficiently computing interprocedural aliases. however, factoring the computed aliases into data-flow information has been mostly overlooked, although improper factoring results in imprecise (conservative) data-flow information. in this document, we describe a mechanism which, in factoring in interprocedural aliases, computes data-flow information more precisely and with less time and space overhead than previous approaches. michael burke jong-deok choi mediator languages - a proposal for a standard: report of an i3/pob working group held at the university of maryland, april 12 and 13, 1996 peter buneman louiqa raschid jeffrey ullman distributed last call optimization for portable parallel logic programming a difficult and challenging problem is the efficient exploitation of and and or parallelism in logic programs without making any assumptions about the underlying target machine(s).
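as an editorial aside, the sequential intuition behind last-call (tail-recursion) optimization can be shown in python: when the recursive call is the last action, the current frame can be reused, which is what the loop version below makes explicit. this is only the sequential analog, not the distributed scheme the entry describes.

# minimal sketch: a tail-recursive accumulator rewritten as a loop,
# the effect a last-call optimization achieves automatically.
def length_rec(xs, acc=0):
    if not xs:
        return acc
    return length_rec(xs[1:], acc + 1)   # last call: the frame could be reused

def length_lco(xs, acc=0):
    while xs:                            # the "reused frame" is the loop state
        xs, acc = xs[1:], acc + 1
    return acc

print(length_rec(list(range(50))), length_lco(list(range(50000))))
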
in earlier papers, we described the design of a binding environment for and and or parallel execution of logic programs on shared and nonshared memory machines and the performance of a compiler (called rolog) using this binding environment on a range of mimd parallel machines. in this paper, we present an important optimization for portable parallel logic programming, namely distributed last-call optimization, an analog of the tail recursion optimization for sequential prolog. this scheme has been implemented in the rolog compiler, which ports unchanged to several shared memory and nonshared memory machines. we describe the effect of this optimization on several or, and/or and and parallel benchmark programs. balkrishna ramkumar what's in a region?: or computing control dependence regions in near-linear time for reducible control flow regions of control dependence identify the instructions in a program that execute under the same control conditions. they have a variety of applications in parallelizing and optimizing compilers. two vertices in a control-flow graph (which may represent instructions or basic blocks in a program) are in the same region if they have the same set of control dependence predecessors. the common algorithm for computing regions examines each control dependence at least once. as there may be o(v x e) control dependences in the worst case, where v and e are the number of vertices and edges in the control-flow graph, this algorithm has a worst-case running time of o(v x e). we present algorithms for finding regions in reducible control-flow graphs in near-linear time, without using control dependence. these algorithms are based on alternative definitions of regions, which are easier to reason with than the definitions based on control dependence. thomas ball uml in action grady booch a framework for visualizing object-oriented systems this paper describes a new approach to visualizing program systems within the object-oriented paradigm. this approach is based on a tex-like notation which has been extended and generalized for specifying graphical layout of arbitrary objects. the clos meta-level architecture is used to associate visualization and application objects. we propose several useful techniques such as indirect values, slot and method demons, and instance-specific meta-objects. our techniques require no modifications to the systems which are selected for visualization. we demonstrate the feasibility of our approach using application domains such as clos debugging and constraint systems. volker haarslev ralf möller flattening and parallelizing irregular, recurrent loop nests irregular loop nests in which the loop bounds are determined dynamically by indexed arrays are difficult to compile into expressive parallel constructs, such as segmented scans and reductions. in this paper, we describe a suite of transformations to automatically parallelize such irregular loop nests, even in the presence of recurrences. we describe a simple, general loop flattening transformation, along with new optimizations which make it a viable compiler transformation. a robust recurrence parallelization technique is coupled to the loop flattening transformation, allowing parallelization of segmented reductions, scans, and combining-sends over arbitrary associative operators. we discuss the implementation and performance results of the transformations in a parallelizing fortran 77 compiler for the cray c90 supercomputer.
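as an editorial aside, the segmented-reduction pattern that loop flattening targets can be sketched in python: an irregular nested loop over rows of different lengths becomes one flat loop over a value array plus a segment descriptor. the compressed-row layout below is a generic illustration, not the authors' compiler representation.

# minimal sketch of a segmented sum over a flattened irregular loop nest.
# values holds all inner-loop elements back to back; starts marks row boundaries.
values = [3, 1, 4, 1, 5, 9, 2, 6]
starts = [0, 2, 2, 5, 8]          # row i occupies values[starts[i]:starts[i+1]]

def segmented_sum(values, starts):
    sums = [0] * (len(starts) - 1)
    row = 0
    for i, v in enumerate(values):            # one flat loop, no nesting
        while starts[row + 1] <= i:           # advance to the segment owning i
            row += 1
        sums[row] += v
    return sums

print(segmented_sum(values, starts))          # [4, 0, 10, 17]
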
in particular, we focus on important sparse matrix-vector multiplication kernels, for one of which we are able to automatically derive an algorithm used by one of the fastest library routines available. anwar m. ghuloum allan l. fisher on the foundations of artificial workload design the principles on which artificial workload model design is currently based are reviewed. design methods are found wanting for three main reasons: their resource orientation, with the selection of resources often unrelated to the performance impact of resource demands; their failure to define an accuracy criterion for the resulting workload model; and their neglect of the dynamics of the workload to be modeled. an attempt at establishing conceptual foundations for the design of interactive artificial workloads is described. the problems found in current design methods are taken into account, and sufficient conditions for the applicability of these methods are determined. the study also provides guidance for some of the decisions to be made in workload model design using one of the current methods. domenico ferrari design and implementation of an ada mathematics library t. mattson l. shanbeck guardians in a generation-based garbage collector this paper describes a new language feature that allows dynamically allocated objects to be saved from deallocation by an automatic storage management system so that clean-up or other actions can be performed using the data stored within the objects. the program has full control over the timing of clean-up actions, which eliminates several potential problems and often eliminates the need for critical sections in code that interacts with clean-up actions. our implementation is "generation-friendly" in the sense that the additional overhead within a generation-based garbage collector is proportional to the work already done there, and the overhead within the mutator is proportional to the number of clean-up actions actually performed. r. kent dybvig carl bruggeman david eby book review: the linux system administration handbook david a. bandel ada, model railroading, and real-time software engineering education (keynote address) john mccormick reflections on software research can the circumstances that existed in bell labs that nurtured the unix project be produced again? dennis m. ritchie communicating with icons as computer commands philip rubens robert krull validation of the coupling dependency metric as a predictor of run-time failures and maintenance measures aaron b. binkley stephen r. schach panel on inheritance peter wegner three questions about each bug you find t. van vleck why do some (weird) people inject faults? joão carreira joão gabriel silva screen editor and emulator donald s. higgins penguin's progress: the new building trade corporate linux journal staff a worldwide survey of base process activities towards software engineering process excellence yingxu wang graham king alex dorling dilip patel ian court geoff staples margaret ross detecting data races in cilk programs that use locks guang-ien cheng mingdong feng charles e. leiserson keith h. randall andrew f. stark understanding generics in ada95 david a. workman adding more "dl" to idl: towards more knowledgeable component interoperability alex borgida premkumar devanbu aspect-oriented programming: introduction tzilla elrad robert e. filman atef bader reusing software developments software development environments of the future will be characterized by extensive reuse of previous work.
this paper addresses the issue of reusability in the context in which design is achieved by the transformational development of formal specifications into efficient implementations. it explores how an implementation of a modified specification can be realized by replaying the transformational derivation of the original and modifying it as required by changes made to the specification. our approach is to structure derivations using the notion of tactics, and record derivation histories as an execution trace of the application of tactics. one key idea is that tactics are compositional: higher level tactics are constructed from more rudimentary ones using defined control primitives. this is similar to the approach used in lcf[12] and nuprl[1, 8]. given such a derivation history and a modified specification, the correspondence problem [21, 20] addresses how during replay a correspondence between program parts of the original and modified program is established. our approach uses a combination of name association, structural properties, and associating components to one another by intensional descriptions of objects defined in the transformations themselves. an implementation of a rudimentary replay mechanism for our interactive development system is described. for example, with the system we can first derive a program from a specification that computes some basic statistics such as mean, variance, frequency data, etc. the derivation is about 15 steps; it involves deriving an efficient means of computing frequency data, combining loops, and selecting data structures. we can then modify the specification by adding the ability to compute the maximum or mode and replay the steps of the previous derivation. allen goldberg a brief introduction to c w. m. mckeeman an open framework for reliable distributed computing benoît garbinato rachid guerraoui design of an actor language for implicit parallel programming yariv aridor shimon cohen amiram yehudai summary of the dynamo '00 panel discussion (panel session) ron cytron yoo c. chung michael j. voss letters to the editor corporate linux journal staff an introduction to s/sl: syntax/semantic language richard c. holt j. r. cordy david b. wortman transformations on a formal specification of user-computer interfaces j foley a multi-service storage architecture jean bacon ken moody sue thomson tim wilson experiments with typing in process modeling much of our recent work has been predicated on the hypothesis that extensive use of typing will be an important aspect of creating and using process models. in particular, we have been actively investigating type models specifically tailored for use in software environment and process model definition. we recently used class projects in two different software engineering courses as preliminary experiments on the efficacy of typing in creating process models. in this paper we sketch our type model work, describe the experiments, and offer a brief assessment of the results. jack c. wileden what makes one software architecture more testable than another? nancy s. eickelmann debra j. richardson to copy or not to copy: a compile-time technique for assessing when data copying should be used to eliminate cache conflicts o. temam e. d. granston w. jalby checkpoint - restart for apl applications the crash proofing of apl programs is relatively simple in an apl system having a file sub-system. the method is the classical checkpoint and restart, to which completion codes have been added.
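as an editorial aside (in python rather than apl, and not the authors' file-subsystem code), checkpointing with completion codes amounts to recording, after each completed step, a completion count plus enough state to resume from the next step after an external failure; the file name and work steps below are hypothetical.

# minimal sketch of checkpoint/restart with a completion code per step.
# the checkpoint file name and the work performed in step() are made up.
import json, os

CKPT = "job.ckpt"

def load_checkpoint():
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return json.load(f)
    return {"completed": 0, "state": {}}      # completion code 0: nothing done yet

def save_checkpoint(completed, state):
    with open(CKPT, "w") as f:
        json.dump({"completed": completed, "state": state}, f)

def step(i, state):
    state[f"step{i}"] = i * i                 # stand-in for real work
    return state

def run(total_steps=5):
    ckpt = load_checkpoint()
    for i in range(ckpt["completed"], total_steps):
        ckpt["state"] = step(i, ckpt["state"])
        save_checkpoint(i + 1, ckpt["state"])  # a crash after this point loses nothing
    return ckpt["state"]

print(run())
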
it can be used for single user programs or multiple user interactive programs with critical sections. it assures recovery from such externally generated failures as cpu, terminal, telephone line drop, or network crashes. the method can be used on already written programs by simple insertion in the existing code. it is particularly useful in application programs where the users are laymen. otway o. pardee transportable applications environment (tae) plus experiences in "object"-ively modernizing a user interface environment this paper describes the evolution of the transportable applications executive (tae) (developed at nasa/goddard space flight center) from a traditional procedural menu and command-oriented system to an object-oriented, modeless user interface management system, known as tae plus. the impetus for developing this environment and early experiments which led to its current implementation are addressed. the current version of tae plus provides design and prototyping functions, working in tandem with a mature application management system. the main components are (1) a user interface designers' workbench that allows an application developer to interactively lay out an application screen and define the static and/or dynamic areas of the screen; (2) an application programmer subroutine package that provides runtime services used to display and control workbench-designed "interaction objects" on the screen; and (3) an extension to the existing tae command language that provides commands for displaying and manipulating interaction objects, thus providing a means to quickly prototype an application's user interface. during tae plus development, many design and implementation decisions were based on the state-of-the-art within graphics workstations, windowing systems and object-oriented programming languages, and this paper shares some of the problems and issues experienced during implementation. some of the topics discussed include: lessons learned in using the smalltalk language to prototype the initial workbench; why c++ was selected (over other languages) to build the workbench; and experiences in using the x window system and stanford's interviews object library. the paper concludes with open issues and a description of the next steps involved in implementing the "totally modern" tae. martha r. szczur philip miller dynamo: a transparent dynamic optimization system we describe the design and implementation of dynamo, a software dynamic optimization system that is capable of transparently improving the performance of a native instruction stream as it executes on the processor. the input native instruction stream to dynamo can be dynamically generated (by a jit for example), or it can come from the execution of a statically compiled native binary. this paper evaluates the dynamo system in the latter, more challenging situation, in order to emphasize the limits, rather than the potential, of the system. our experiments demonstrate that even statically optimized native binaries can be accelerated by dynamo, and often by a significant degree. for example, the average performance of -o optimized specint95 benchmark binaries created by the hp product c compiler is improved to a level comparable to their -o4 optimized version running without dynamo. dynamo achieves this by focusing its efforts on optimization opportunities that tend to manifest only at runtime, and hence opportunities that might be difficult for a static compiler to exploit.
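as an editorial aside (a conceptual sketch, not dynamo's implementation), the runtime-only opportunity such systems exploit can be illustrated by hot-path counting over a stream of executed blocks: once a start block becomes hot, the blocks that follow it are recorded as a straight-line trace that an optimizer could then specialize. the block stream, hotness threshold, and trace length below are made-up parameters.

# minimal sketch of hot-trace selection from a stream of executed basic blocks.
from collections import Counter

def select_traces(block_stream, threshold=3, max_len=4):
    counts = Counter()
    traces = {}
    for i, head in enumerate(block_stream):
        counts[head] += 1
        if counts[head] >= threshold and head not in traces:
            # head is hot: record the next few executed blocks as a trace
            traces[head] = block_stream[i:i + max_len]
    return traces

stream = ["a", "b", "c", "a", "b", "c", "a", "b", "d", "a", "b", "c"]
print(select_traces(stream))   # e.g. {'a': ['a', 'b', 'd', 'a'], ...}
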
dynamo's operation is transparent in the sense that it does not depend on any user annotations or binary instrumentation, and does not require multiple runs, or any special compiler, operating system or hardware support. the dynamo prototype presented here is a realistic implementation running on an hp pa-8000 workstation under the hpux 10.20 operating system. vasanth bala evelyn duesterwald sanjeev banerjia an interactive debugging tool for c based on dynamic slicing and dicing static program slicing has gained wide recognition in both academic and practical arenas. several debugging tools have been developed that utilize static program slicing. dynamic slicing has also gained considerable popularity in recent years. due to the several advantages of dynamic slicing over static slicing, the objective of this work was to develop a debugging tool for c programs, called c-debug, using dynamic slicing and dicing techniques. this paper reports the design considerations of c-debug and the data structures used in the implementation of c-debug. based on the usage experiments with the c-debug debugging tool, limitations and possible future enhancements are also outlined. m. samadzadeh w. wichaipanitch performance testing of software systems filippos i. vokolos elaine j. weyuker migrating to linux, part 1: linux--not just for hackers anymore... norman m. jacobowitz parametrized methods jose de oliveira guimaraes a practical tool for distributing ada programs: telesoft's distributed ada configuration tool tom burger jim bladen richard a. volz ron theriault gary smith kali h. buhariwalla towards monolingual programming environments most programming environments are much too complex. one way of simplifying them is to reduce the number of mode-dependent languages the user has to be familiar with. as a first step towards this end, the feasibility of unified command/programming/debugging languages, and the concepts on which such languages have to be based, are investigated. the unification process is accomplished in two phases. first, a unified command/programming framework is defined and, second, this framework is extended by adding an integrated debugging capability to it. strict rules are laid down by which to judge language concepts presenting themselves as candidates for inclusion in the framework during each phase. on the basis of these rules many of the language design questions that have hitherto been resolved this way or that, depending on the taste of the designer, lose their vagueness and can be decided in an unambiguous manner. jan heering paul klint keeping it simple, but… gregg w. taylor software construction by object-oriented pictures: stimulus-response machines george w. cherry sharlit - a tool for building optimizers a complex and time-consuming function of a modern compiler is global optimization. unlike other functions of a compiler such as parsing and code generation which examine only one statement or one basic block at a time, optimizers are much larger in scope, examining and changing large portions of a program all at once. the larger scope means optimizers must perform many program transformations. each of these transformations makes its own particular demands on the internal representation of programs; each can interact with and depend on the others in different ways. this makes optimizers large and complex. despite their complexity, few tools exist to help in building optimizers. 
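as an editorial aside, the kind of reusable machinery an optimizer-building tool must provide can be sketched as a generic worklist solver parameterized by a transfer function and a meet operator; the python sketch below instantiates reaching definitions on a toy control-flow graph and is not sharlit's actual interface.

# minimal sketch of an iterative forward data-flow solver over a cfg,
# instantiated for reaching definitions; cfg, gen, and kill are toy inputs.
def solve_forward(cfg, gen, kill):
    preds = {b: [p for p in cfg if b in cfg[p]] for b in cfg}
    out = {b: set() for b in cfg}
    work = list(cfg)
    while work:
        b = work.pop()
        in_b = set().union(*(out[p] for p in preds[b]))   # meet: union over predecessors
        new_out = gen[b] | (in_b - kill[b])               # transfer function
        if new_out != out[b]:
            out[b] = new_out
            work.extend(cfg[b])                           # successors must be revisited
    return out

cfg  = {"entry": ["loop"], "loop": ["loop", "exit"], "exit": []}
gen  = {"entry": {"d1"}, "loop": {"d2"}, "exit": set()}
kill = {"entry": set(), "loop": {"d1"}, "exit": set()}
print(solve_forward(cfg, gen, kill))
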
this is in stark contrast with other parts of the compiler where years of experience have culminated in tools with which these other parts can be constructed easily. for example, parser generators are used to build front-ends, and peephole optimizers and tree matchers are used to build code generators. this paper presents sharlit, a tool to support the construction of modular and extensible global optimizers. we will show how sharlit helps in constructing data-flow analyzers and the transformations that use data-flow analysis information, both of which are major components of any optimizer. sharlit is implemented in c++ and uses c++ in the same way that yacc uses c. thus we assume the reader has some familiarity with c++[9]. steven w. k. tjiang john l. hennessy points-to analysis for java using annotated constraints the goal of points-to analysis for java is to determine the set of objects pointed to by a reference variable or a reference object field. this information has a wide variety of client applications in optimizing compilers and software engineering tools. in this paper we present a points-to analysis for java based on andersen's points-to analysis for c [5]. we implement the analysis by using a constraint-based approach which employs _annotated inclusion constraints._ constraint annotations allow us to model precisely and efficiently the semantics of virtual calls and the flow of values through object fields. by solving systems of annotated inclusion constraints, we have been able to perform practical and precise points-to analysis for java. atanas rountev ana milanova barbara g. ryder on statecharts with overlapping the problem of extending the language of statecharts to include overlapping states is considered. the need for such an extension is motivated and the subtlety of the problem is illustrated by exhibiting the shortcomings of naive approaches. the syntax and formal semantics of our extension are then presented, showing in the process that the definitions for conventional statecharts constitute a special case. our definitions are rather complex, a fact that we feel points to the inherent difficulty of such an extension. we thus prefer to leave open the question of whether or not it should be adopted in practice. david harel chaim-arie kahana ensuring strong typing in an object-oriented language (abstract) bertrand meyer kernel korner playing with binary formats: this article explains how kernel modules can add new binary formats to a system and shows a pair of examples alessandro rubini models for visualization in parallel debuggers the complexity of parallel programming has stimulated the development of a variety of debugging tools. this survey of recent research focuses on debugger visualization systems. the effectiveness of such systems is bounded by the degree to which their representations of run-time behavior correlate with the language structures used to incorporate parallelism, as well as the logical framework adopted by the programmer. current visualization systems are compared with the conceptual models supported by parallel languages. attention is drawn to the fact that debuggers are tied to specific models and that this association may restrict their usefulness and acceptability. c. m. pancake s. utter from the editor corporate linux journal staff agile software process and its experience mikio aoyama symple, an icon-based computer language erwind earl blount iterate applications not just prototypes john a. cupparo safe kernel extensions without run-time checking george c.
necula peter lee architectural support for fast symmetric-key cryptography jerome burke john mcdonald todd austin studies in ada style peter hibbard andy hisgen jonathan rosenberg mary shaw mark sherman a technique for tracing memory leaks in c++ steven j. beaty an object-oriented apl2 object oriented programming has become an important and accepted part of the computer software industry. nearly every new operating system that has recently arrived or is scheduled to arrive soon is object oriented in nature. if apl2 is to continue to be a mainstream computational environment, it is very important that it becomes object oriented. finding a way that apl2 may sit comfortably in these new operating systems is important. unfortunately, in both implementation and current programming teaching philosophy, existing apl2 does not easily offer the end user the ability to adopt this new approach. this paper proposes some small changes to the basic apl2 language definition which will permit objects to be defined and used. it also suggests some major changes to the end user's programming methodology. david a. selby distributed software prototyping with ads the difficulties ordinarily experienced in designing and developing large software systems are greatly increased in the case of distributed data processing (ddp) systems. thus there is an urgent need to support the software life cycle activities with effective prototyping tools and techniques. the architecture development system (ads) is being developed for use in prototyping ddp systems within a distributed testbed. ads features an interactive graphical user interface, a distributed runtime environment, and a prototyping framework based on the concept of abstract ddp objects with attributes and relationships. this paper presents the ads prototyping features and the user interface to describe, synthesize, execute and analyze system representations. then a discussion is presented of the applicability of ads support to the distributed software life cycle. the current implementation status of ads is also summarized. james w. hooper john t. ellis tony a. johnson systemc: a homogenous environment to test embedded systems _the systemc language is becoming a new standard in the eda field and many designers are starting to use it to model complex systems. systemc has been mainly adopted to define abstract models of hardware/software components, since they can be easily integrated for rapid prototyping. however, it can also be used to describe modules at a higher level of detail, e.g., rt-level hardware descriptions and assembly software modules. thus, it would be possible to imagine a systemc-based design flow, where the system description is translated from one abstraction level to the following one by always using systemc representations. the adoption of a systemc-based design flow would be particularly efficient for testing purpose as shown in this paper. in fact, it allows the definition of a homogeneous testing procedure, applicable to all design phases, based on the same error model and on the same test generation strategy. moreover, test patterns are indifferently applied to hardware and software components, thus making the proposed testing methodology particularly suitable for embedded systems. test patterns are generated on the systemc description modeling the system at one abstraction level, then, they are used to validate the translation of the system to a lower abstraction level. 
new test patterns are then generated for the lower abstraction level to improve the quality of the test set and this process is iterated for each translation (synthesis) step._ alessandro fin franco fummi maurizio martignano mirko signoretto embedded control as a path to forth acceptance philip koopman introduction to eiffel all four compilers for the new eiffel language are available for linux. dan wilder introduces us to the language dan wilder letter from the editor cameron donaldson a case study on design pattern discovery in ada little assistance, if any, is given to personnel who design software. this has caused the production of software which is less evolvable and of low quality. a case study was carried out to investigate the implications of design patterns during software building. the results suggest that design patterns can facilitate our understanding of software systems and reusing software designs in our future software building. hyoseob kim cornelia boldyreff adapting synchronization counters to the requirements of inheritance christian neusius protection traps and alternatives for memory management of an object-oriented language antony l. hosking j. eliot b. moss a critique of common lisp a major goal of the common lisp committee was to define a lisp language with sufficient power and generality that people would be happy to stay within its confines and thus write inherently transportable code. we argue that the resulting language definition is too large for many short-term and medium-term potential applications. in addition many parts of common lisp cannot be implemented very efficiently on stock hardware. we further argue that the very generality of the design with its different efficiency profiles on different architectures works against the goal of transportability. rodney a. brooks richard p. gabriel the long-term future of operating systems' maurice v. wilkes patterns in practice richard helm designing an efficient and scalable server-side asynchrony model for corba darrell brunsch carlos o'ryan douglas c. schmidt concepts and paradigms of object-oriented programming peter wegner the astro system i. b. elliott best of technical support corporate linux journal staff in defense of the use clause j. p. rosen an overview of actor languages gul agha software engineering session overview and introductory comments for the fourth consecutive year, the national acm conference is giving attention to the field of software engineering. this year builds on what has gone before---"structured program planning and design: standardization needs" (acm '79 - detroit), "more on structured design" (acm '80 - nashville), and the "software engineering tutorial" (acm '81 - los angeles)---providing for some measure of continuity. this session contains original papers stressing techniques for implementing reliable, maintainable, well-engineered software. presented are state-of-the-art applications of software engineering techniques addressing architectural, structural, behavioral, and informational considerations. the ariel pashtan paper "object oriented operating systems: an emerging design methodology" emphasizes architectural considerations. this survey analyzes eight major operating systems. these operating systems undergo a functional decomposition with a view toward the behavioral and informational requirements by way of the object model concept. 
pashtan shows us how object implementation techniques may be applied and leaves us to consider "whether 'thinking in terms of objects' will become a standard design methodology for operating systems". murray r. berkowitz a brief introduction to domain analysis guillermo arango towards effective embedded processors in codesigns: customizable partitioned caches _this paper explores an application-specific customization technique for the data cache, one of the foremost area/power consuming and performance determining microarchitectural features of modern embedded processors. the automated methodology for customizing the processor microarchitecture that we propose results in increased performance, reduced power consumption and improved determinism of critical system parts while the fixed design ensures processor standardization. the resulting improvements help to enlarge the significant role of embedded processors in modern hardware/software codesign techniques by leading to increased processor utilization and reduced hardware cost. a novel methodology for static analysis and a field-reprogrammable implementation of a customizable cache controller that implements a partitioned cache structure is proposed. the simulation results show a significant decrease in miss ratio compared to traditional cache organizations._ peter petrov alex orailglu object-oriented programming in classic-ada michael l. nelson gilberto f. mota introducing case tools into the software development group j. györkös i. rozman t. welzer object-oriented programming in smalltalk and ada though ada and modula-2 are not object-oriented languages, an object-oriented viewpoint is crucial for effective use of their module facilities. it is therefore instructive to compare the capabilities of a modular language such as ada with an archetypal object-oriented language such as smalltalk. the comparison in this paper is in terms of the basic properties of encapsulation, inheritance and binding, with examples given in both languages. this comparison highlights the strengths and weaknesses of both types of languages from an object-oriented perspective. it also provides a basis for the application of experience from smalltalk and other object-oriented languages to increasingly widely used modular languages such as ada and modula-2. ed seidewitz book review: the java series kirk petersen writing a java class to manage rpm package content a look inside rpm packages and how to use java to extract information. jean-yves mengant four refutations of text files bradford j. rodriguez assessing the quality of abstract data types written in ada as software systems have become more complex, a search for better abstraction mechanisms has led to the use of abstract data types (adts). to use adts more appropriately, however, it is imperative that their properties and characteristics be understood. in this paper we present a method of assessing the quality of adts in terms of cohesion and coupling. we argue that an adt that contains and exports only one domain and exports only operations that pertain to that domain has the best cohesive properties, and we argue that adts that make neither explicit nor implicit assumptions about other adts in the system have the best coupling properties. formal definitions are presented for each of the cohesion and coupling characteristics discussed. their application to ada® packages is also investigated, and we show how a tool can be developed to assess the quality of an ada package that represents an adt.
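as an editorial aside, the cohesion criterion just described (one exported domain, and only operations over that domain) can be approximated mechanically; the toy package summaries below are hypothetical, and the check is far cruder than the paper's formal definitions.

# minimal sketch: flag packages that export more than one type, or operations
# that do not mention the package's exported type. inputs are toy summaries.
packages = {
    "stacks": {"types": ["stack"], "ops": {"push": ["stack", "item"],
                                           "pop": ["stack"],
                                           "print_report": ["report"]}},
    "queues": {"types": ["queue"], "ops": {"enqueue": ["queue", "item"],
                                           "dequeue": ["queue"]}},
}

def cohesion_report(pkg):
    issues = []
    if len(pkg["types"]) != 1:
        issues.append("exports more than one domain")
    domain = pkg["types"][0]
    for op, params in pkg["ops"].items():
        if domain not in params:
            issues.append(f"operation {op} does not operate on {domain}")
    return issues or ["looks cohesive"]

for name, pkg in packages.items():
    print(name, "->", cohesion_report(pkg))
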
we analyzed nearly one hundred ada adt packages found in ada text books, articles about ada, and student projects and discovered that more than half of them had inferior cohesive characteristics and almost half of them allowed inferior coupling characteristics. d. w. embley s. n. woodfield experience with a module package in developing production quality pascal programs a module package (amp) is a preprocessor to a pascal compiler to support data encapsulation and modular system development. experience with amp in developing a software product at amdahl corporation has demonstrated its utility and robustness. sally warren bruce e. martin charles hoch automated data-member layout of heap objects to improve memory-hierarchy performance we present and evaluate a simple, yet efficient optimization technique that improves memory-hierarchy performance for pointer-centric applications by up to 24% and reduces cache misses by up to 35%. this is achieved by selecting an improved ordering for the data members of pointer-based data structures. our optimization is applicable to all type-safe programming languages that completely abstract from physical storage layout; examples of such languages are java and oberon. our technique does not involve programmers in the optimization process, but runs fully automatically, guided by dynamic profiling information that captures which paths through the program are taken and with what frequency. the algorithm first strives to cluster data members that are accessed closely after one another onto the same cache line, increasing spatial locality. then, the data members that have been mapped to a particular cache line are ordered to minimize load latency in case of a cache miss. thomas kistler michael franz acm algorithms policy f. t. krogh ada requirements methodology (arm) phillip crews darrell ward jerry mungle dynamics of process models in pml the ipse 2.5 project is concerned with the problem of how computer systems can be used in the development of information systems. the project is being carried out under the uk alvey programme software engineering strategy by a consortium comprising stc technology limited, international computers limited, university of manchester, dowty defense and air systems limited, serc rutherford appleton laboratories, plessey research roke manor ltd. and british gas plc. praxis systems has worked as a subcontractor to stc and icl. it finishes at the end of 1989. the project is concerned to provide facilities in support of people and organisations engaged in all aspects of computer systems development. of course this is a vast field and thus the project has focussed on "process modelling" and "formal methods". the former is concerned with coordination of the many processes undertaken in computer systems development. formal methods are mathematically based approaches to software development. this synopsis (a summary of [1]) of the project emphasises "process modelling". more details on the formal reasoning work of the project can be found in [2]. process modelling is the act of participating in processes. each and every member of an ipse 2.5 community is a process modeller: a programmer, a secretary, the chief programmer, a project manager, a salesperson, the personnel manager, the accountant, etc. furthermore, the devices of an office environment, printers, fax machines, photocopiers, etc., each perform roles in the office environment, and thus each of these is a process modeller.
this view has the important property that a process modelling language must be 'reflective' so that anything which has a purpose and place in a process model can be introduced in terms of the objects which are already supported in the language. modelling actions include: creating abstract processors --- roles; defining role behaviours --- role classes; defining inter and intra-role behaviours; instigating inter-role interactions; selecting agenda actions; performing selected actions. the ipse 2.5 project supports process modelling through the use of a process control engine (pce). a pce can be loaded with knowledge (a process model) of the roles (activities) to be carried out by the staff and tools of the organisation using the pce and thereafter the pce can control the progress of the organisation's processes; coordinating, aiding, enforcing, and triggering the desired actions between the organisations processing resources --- they thus cooperate in a meaningful fashion and work at a balanced rate. the process modelling language (pml) is used to interact with the pce and hence to 'load' process model fragments. pml development is based on work reported in [3]. a process model provides an appropriate context for each of the resources of a project to undertake the work ascribed to them and hence to enable development of a system. a further objective of the project has been the integration of managerial and technical procedures to enable controlled development of software systems using extant and new methodologies. the project focussed on the basic goal satisfaction (or planning) cycle of analyse the goal, create plan of action, resource and activate, monitor and revise actions. an important aspect of this approach is that revision constitutes 'process model evolution' --- a new behaviour! the project has spent some time on the challenging problem and implications of dynamic bindings and scoping of process model changes. in studying these problems the project developed models of a mini-company with three management levels of board, business centre, project. business centres sell software services and as a result start projects. the projects employ their own chosen technical methods, but report as directed from the business centre, which in turn reports to the board. much of the reporting is automated, and ensures timely and accurate reports of progress, accounting details, and specific items included on per project or business centre basis. staff may also be supplied semi-automatically through prompting for resourcing as and when the need arises. ipse 2.5 work has also used the above mentioned technology in the development of prototype ipses to support formal (obj/malpas) and informal (ssadm like methods) development methods. the project is now progressing on the evaluation of both formal reasoning ipse (mural), and the demonstrator pce. these evaluations will be reported as part of the project deliverables to alvey. r. a. snowdon clive roberts time-related issues in ada 9x ted baker bridging the gap between hypermedia design and implementation: a research prototype (abstract) fernando d. lyardet a provably correct operating system: -core ming-yuan zhu lei luo guang-zhe xiong data abstraction, data bases and conceptual modelling (position paper) there is no paper with a mathematical foundation that i know of that strikes more at the heart of the subject of this workshop than that of r.m. burstall and j.a. 
goguen [10], presented at the fifth international joint conference on artificial intelligence in august of 1977. j. w. thatcher refined types: highly differentiated type systems and their use in the design of intermediate languages j. r. rose a system-level synthesis algorithm with guaranteed solution quality u. nagaraj shenoy prith banerjee alok choudhary comments on the paper "parameterization: a case study, by will tracz" mats weker correctness is not congruent with quality d. w. ketchum computer aided hand tuning (caht): "applying case-based reasoning to performance tuning" for most parallel and high performance systems, tuning guides provide users with advice on optimizing the execution time of their programs. execution time may be very sensitive to small program changes. such modifications may be local (on a loop) or global (data structures and layout). in this paper, we propose to help end-users with the tuning process through an interactive tool complementary to existing compilers and automatic parallelizers. our goal is to provide a _live tuning guide_ capable of detecting optimization opportunities that are not caught by existing tools. our first prototype, called caht (computer aided hand tuning), targets smp architectures for openmp programs. caht relies on a very general technique, case-based reasoning. this technique is well suited to experimenting with and building an easily expandable and flexible system. our first implementation applies to scientific codes written in fortran 77. antoine monsifrot françois bodin the weak mutation hypothesis brian marick supporting dynamic data structures on distributed-memory machines compiling for distributed-memory machines has been a very active research area in recent years. much of this work has concentrated on programs that use arrays as their primary data structures. to date, little work has been done to address the problem of supporting programs that use pointer-based dynamic data structures. the techniques developed for supporting spmd execution of array-based programs rely on the fact that arrays are statically defined and directly addressable. recursive data structures do not have these properties, so new techniques must be developed. in this article, we describe an execution model for supporting programs that use pointer-based dynamic data structures. this model uses a simple mechanism for migrating a thread of control based on the layout of heap-allocated data and introduces parallelism using a technique based on futures and lazy task creation. we intend to exploit this execution model using compiler analyses and automatic parallelization techniques. we have implemented a prototype system, which we call olden, that runs on the intel ipsc/860 and the thinking machines cm-5. we discuss our implementation and report on experiments with five benchmarks. anne rogers martin c. carlisle john h. reppy laurie j. hendren comment on dpans x3.160-198x, extended pascal concerning the initial value clause mark molloy linux network programming, part 3 corba: the software bus: this month we are presented with an introduction to the networking of distributed objects and the use of corba ivan griffin mark donnelly john nelson software design is a good thing michael cook distributed algorithms in tla (abstract) tla (the temporal logic of actions) is a simple logic for describing and reasoning about concurrent systems.
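as an editorial aside (a toy explicit-state checker in python, not tlc and not tla+ syntax), the specify-then-check workflow the entry describes reduces to three ingredients: an initial state, a next-state relation, and an invariant checked over every reachable state.

# minimal sketch of explicit-state invariant checking by breadth-first search.
# the example models two processes each incrementing a shared counter twice.
from collections import deque

def initial():
    return (0, 0, 0)                      # (counter, steps done by p1, by p2)

def next_states(s):
    counter, p1, p2 = s
    if p1 < 2: yield (counter + 1, p1 + 1, p2)
    if p2 < 2: yield (counter + 1, p1, p2 + 1)

def invariant(s):
    counter, p1, p2 = s
    return counter == p1 + p2             # counter always equals total steps taken

def check():
    seen, queue = {initial()}, deque([initial()])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return f"violation in state {s}"
        for t in next_states(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return f"invariant holds over {len(seen)} reachable states"

print(check())
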
it provides a uniform way of specifying algorithms and their correctness properties, as well as rules for proving that one specification satisfies another. tla+ is a formal specification language based on tla, and tlc is a model checker for tla+ specifications. tla+ and tlc have been used to specify and check high-level descriptions of real, complex systems. because tla+ provides the full power of ordinary mathematics, it permits simple, straightforward specifications of the kinds of algorithms presented at podc. this tutorial will try to convince you to describe your algorithms in tla+. you will then be able to check them with tlc and use tla to prove their correctness as formally or informally as you want. (however, tla proofs do have one disadvantage that is mentioned below.) the tutorial will describe tla+ through examples and demonstrate how to use tlc. no knowledge of tla is assumed. tla does have the following disadvantages: it can describe only a real algorithm, not a vague, incomplete sketch of an algorithm. you can specify an algorithm's correctness condition in tla only if you understand what the algorithm is supposed to do. tla makes it harder to cover gaps in a proof with handwaving. some researchers may find these drawbacks insurmountable. leslie lamport improving computer program readability to aid modification james l. elshoff michael marcotty et++ - an object oriented application framework in c++ et++ is an object-oriented application framework implemented in c++ for a unix† environment and a conventional window system. the architecture of et++ is based on macapp and integrates a rich collection of user interface building blocks as well as basic data structures to form a homogeneous and extensible system. the paper describes the graphic model and its underlying abstract window system interface, shows composite objects as a substrate for declarative layout specification of complex dialogs, and presents a model for editable text allowing the integration of arbitrary interaction objects. andre weinand erich gamma rudolf marty a system for parallel programming the programming of efficient parallel software typically requires extensive experimentation with program prototypes. a programming system that supports rapid prototyping of parallel programs should provide high-level primitives with which programs can be explicitly, statically or dynamically tuned with respect to performance and reliability. when using such primitives, programmers should not need to interact explicitly or even be aware of the software tools involved in program construction and tuning, such as compilers, linkers, and loaders. in addition, programmers should be provided with the information about the executing program and the parallel hardware required for tuning. such information may include monitoring data about the current or previous program or even hints regarding appropriate tuning decisions. a programming system that includes primitives and tools for program tuning is presented in this paper. the system has been implemented, and has been tested with a variety of parallel applications on a network of unix workstations. k. schwan r. ramnath s. vasudevan d. ogle introducing scheme an extensible language that is easy to debug and easy to develop. robert sanders supporting the deployment of object oriented frameworks _frameworks [4] are usually large and complex, and typically reusers need to understand them well enough to effectively use them. 
this research concentrates on verifying applications built on top of oo frameworks. the idea is to get framework builders to specify a set of constraints for the correct usage of the framework and check them using static analysis techniques._ daqing hou editorial pointers diane crawford xforms marries perl how to add a powerful graphical user interface to perl scripts reza naima extendable, dispatchable task communication mechanisms the addition of object-oriented features to ada has left a disconnection between the object-oriented paradigm and the intertask communication and synchronisation paradigms. the lack of extensibility of tasks and protected types as well as the task synchronisation inheritance anomaly has made the design of systems that use them with object-oriented features more difficult. this paper proposes ada language changes that would make protected types and tasks partners in object-oriented programming and would cure the inheritance anomaly. stephen michell kristina lundqvist garbage-further investigations g. alan creak a comparison of cost estimation tools (panel session) there are only a handful of software cost estimation tools that are in general use today. the authors, or representatives, of the most "popular" tools were presented with a common problem to analyze as a basis for comparison. in this context, each was asked to address their analysis approach, input parameters used, parameters not used, and results generated. this paper contains the statement of the problem and a summary of the results which will be discussed at the panel session. b. kitchenbaum howard a. rubin a. jensen l. putnam p. rook structured assembly language programming for those of us who are essentially high level programmers, the intricacies and lack of structure in assembly language programs are often an insurmountable barrier to effective assembly language programming. this paper attempts to show a way to overcome this barrier. structured pseudocode is used to solve the problem just as if the solution were to be coded in pl/i, pascal, ada, or some other structured high level language. then the structured pseudocode is "compiled" into assembly language using appropriate labels to show the structure of the assembly language program. robert n. cook programming the premature loop exit: from functional to navigational c. k. yuen static type inference for parametric classes central features of object-oriented programming are method inheritance and data abstraction attained through hierarchical organization of classes. recent studies show that method inheritance can be nicely supported by ml-style type inference when extended to labeled records. this is based on the fact that a function that selects a field of a record can be given a polymorphic type that enables it to be applied to any record which contains such a field. several type systems also provide data abstraction through abstract type declarations. however, these two features have not yet been properly integrated in a statically checked polymorphic type system. this paper proposes a static type system that achieves this integration in an ml-like polymorphic language by adding a class construct that allows the programmer to build a hierarchy of classes connected by multiple inheritance declarations. moreover, classes can be parameterized by types allowing "generic" definitions. the type correctness of class declarations is statically checked by the type system.
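the paper's setting is an ml-like polymorphic language, not java; still, as a loose, hypothetical analogue of classes parameterized by types inside an inheritance hierarchy, the following java sketch (the class names and methods are ours, not the paper's) shows a statically checked "generic" class being extended:

// hypothetical analogue only: type-parameterized classes in a small hierarchy.
class Cell<T> {
    protected T contents;
    Cell(T initial) { contents = initial; }
    T get() { return contents; }
    void set(T value) { contents = value; }
}

// a subclass that adds behaviour while keeping the type parameter;
// the bound "T extends Comparable<T>" is checked statically.
class OrderedCell<T extends Comparable<T>> extends Cell<T> {
    OrderedCell(T initial) { super(initial); }
    // keep only the larger of the current and the proposed value
    void setIfGreater(T value) {
        if (value.compareTo(contents) > 0) {
            contents = value;
        }
    }
}

class CellDemo {
    public static void main(String[] args) {
        OrderedCell<Integer> c = new OrderedCell<>(3);
        c.setIfGreater(7);           // accepted: 7 > 3
        c.setIfGreater(5);           // ignored: 5 < 7
        System.out.println(c.get()); // prints 7
    }
}
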
the type system also infers a principal scheme for any type correct program containing methods and objects defined in classes. a. ohori p. buneman design and implementation of a c-based language for distributed real-time systems a rizk f halsall a fast method for finding an integer square root ken lyons extended pascal - illustrative features d a joslin what's gnu: texinfo arnold robbins parallel ada: issues in programming and implementation rakesh jha involving coroutines in interaction between functional and conventional language m. ivanovic z. budimac stop the presses corporate linux journal staff point-extent pattern for dimensioned numeric classes conrad weisert letters to the editor corporate linux journal staff software development using models r. s. d'ippolito c. p. plinta conspectus of software engineering environments aspects of software engineering environments are discussed, namely motivations, life cycle models, concepts, methods, description means and tools. some general conclusions about these aspects as well as about the area of software engineering environments are drawn. the paper is based on a study of selected software engineering environments. hans-ludwig hausen monika mullerburg testing by identifying k. a. foster hint-based cooperative caching this article presents the design, implementation, and measurement of a hint- based cooperative caching file system. hints allow clients to make decisions based on local state, enabling a loosely coordinated system that is simple to implement. the resulting performance is comparable to that of existing tightly coordinated algorithms that use global state, but with less overhead. simulations show that the block access times of our system are as good as those of the existing algorithms, while reducing manager load by more than a factor of seven, block lookup traffic by nearly a factor of two-thirds, and replacement traffic a factor of five. to verify our simulation results in a real system with real users, we implemented a prototype and measured its performance for one week. although the simulation and prototype environments were very different, the prototype system mirrored the simulation results by exhibiting reduced overhead and high hint accuracy. furthermore, hint-based cooperative caching reduced the average block access time to almost half that of nfs. prasenjit sarkar john h. hartman a practical tool kit for making portable compilers the amsterdam compiler kit is an integrated collection of programs designed to simplify the task of producing portable (cross) compilers and interpreters. for each language to be compiled, a program (called a front end) must be written to translate the source program into a common intermediate code. this intermediate code can be optimized and then either directly interpreted or translated to the assembly language of the desired target machine. the paper describes the various pieces of the tool kit in some detail, as well as discussing the overall strategy. andrew s. tanenbaum hans van staveren e. g. keizer johan w. stevenson (*standard-output*) guy l. steele compile c faster on linux an introduction to lcc, a compiler 75% smaller than gcc that also compiles more quickly and helps prevent some porting bugs. christopher w. fraser david r. hanson from the publisher: let's take linux seriously phil hughes talking with an apl via dde: teaching an old dog new tricks steven j. halasz andrei v. kondrashev using metamodels of methodologies to determine the needs for reusability support esteban a. 
pastor r. t. price ada versus fortran performance analysis using the acps dan j. byrne richard c. ham linux to go marjorie richardson the portability project dan nagle cooking with linux: the ghost of fun times past marcel writes about text-based games. marcel gagne genoa - a customizable, front-end-retargetable source code analysis framework code analysis tools provide support for such software engineering tasks as program understanding, software metrics, testing, and reengineering. in this article we describe genoa, the framework underlying application generators such as aria and gen++, which have been used to generate a wide range of practical code analysis tools. this experience illustrates the front-end retargetability of genoa; we describe the features of the genoa framework that allow it to be used with different front ends. while permitting arbitrary parse tree computations, the genoa specification language has special, compact iteration operators that are tuned for expressing simple, polynomial-time analysis programs; in fact, there is a useful sublanguage of the genoa language that can express precisely all (and only) polynomial-time (ptime) analysis programs on parse trees. thus, we argue that the genoa language is a simple and convenient vehicle for implementing a range of analysis tools. we also argue that the "front-end reuse" approach of genoa offers an important advantage for tools aimed at large software projects: the reuse of complex, expensive build procedures to run generated tools over large source bases. in this article, we describe the genoa framework and our experiences with it. premkumar t. devanbu fusion-based register allocation the register allocation phase of a compiler maps live ranges of a program to registers. if there are more candidates than there are physical registers, the register allocator must spill a live range (the home location is in memory) or split a live range (the live range occupies multiple locations). one of the challenges for a register allocator is to deal with spilling and splitting together. fusion-based register allocation uses the structure of the program to make splitting and spilling decisions, with the goal of moving overhead operations to infrequently executed parts of a program. the basic idea of fusion-based register allocation is to build up the interference graph. starting with some base region (e.g., a basic block, a loop), the register allocator adds basic blocks to the region and incrementally builds the interference graph. when there are more live ranges than registers, the register allocator selects live ranges to split; these live ranges are split along the edge that was most recently added to the region. this article describes fusion-based register allocation in detail and compares it with other approaches to register allocation. for programs from the spec92 suite, fusion-based register allocation can improve the execution time (of optimized programs, for the mips architecture) by up to 8.4% over chaitin-style register allocation. guei-yuan lueh thomas gross ali-reza adl-tabatabai grow: an apse stress tester kenneth c. elsom representation and evaluation of security policies tatyana ryutov clifford neuman performance analysis of checkpointing strategies a widely used error recovery technique in database systems is the rollback and recovery technique. this technique periodically saves the state of the system and records all activities on a reliable log tape. the operation of saving the system state is called checkpointing.
the elapsed time between two consecutive checkpointing operations is called the checkpointing interval. when the system fails, the recovery process uses the log tape and the state saved at the most recent checkpoint to bring the system to the correct state that preceded the failure. this process is called error recovery and consists of loading the most recent state and then reprocessing all the activities, stored on the log tape, that took place since the most recent checkpoint and prior to failure. former models of rollback and recovery assumed poisson failures and fixed (or exponential) checkpointing intervals. extending these models, we consider general failure distributions. we also allow checkpointing intervals to depend on the reprocessing time (the time elapsed between the most recent checkpoint prior to failure and the time of failure) and the failure distribution. furthermore, failures may occur during checkpointing and error recovery. our general model unifies a variety of models that have previously been investigated. we denote by f_i and t(f_i), i = 1, 2, ..., the ith failure that occurs during normal processing (not during error recovery) and the time of its occurrence, respectively. we refer to the time period l_i = t(f_{i+1}) - t(f_i), i = 1, 2, ..., as the ith cycle, whose length is l_i. it consists of two portions: the total error recovery time and the normal processing time. the reprocessing time associated with failure f_i is denoted by y_{i-1}. since the variables of the ith cycle depend at most on one variable of the (i-1)st cycle, namely y_{i-1}, the stochastic process of the reprocessing times {y_i; i ≥ 0} is a markov process. we obtain the transition probability density function and the stationary distribution of this process. the performance of the system is measured by the availability, the fraction of time the system is not checkpointing or recovering from errors. in equilibrium, the system availability is expressed as the ratio of the mean production time (normal processing time excluding checkpointing time) during a cycle and the mean length of the cycle. we obtain a general expression for the system availability in our general model. the checkpointing strategy is characterized by the sequence of checkpointing intervals. for the well-known equidistant checkpointing strategy, in which the checkpointing intervals are constant, we find that the resulting system availability depends only on the mean of the failure distribution. we define a checkpointing strategy as failure-dependent if the sequence of checkpointing intervals depends on the failure distribution. checkpointing strategies that result in a checkpointing operation immediately after error recovery are called reprocessing-independent strategies. we then introduce a novel checkpointing strategy, the equicost strategy, which is failure-dependent and reprocessing-independent. this strategy suggests that a checkpointing operation is to be performed whenever the mean reprocessing cost equals the mean checkpointing cost. interestingly, the equicost strategy leads to fixed checkpointing intervals for poisson failures. we compare the maximum system availability resulting from the equidistant and the equicost checkpointing strategies under weibull distributions, which are good approximations of actual failure distributions.
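as a compact restatement of the availability measure defined above (p_i, c_i and r_i below are our own shorthand for the production, checkpointing and error-recovery portions of the ith cycle, and e[.] denotes expectation; this is not the paper's notation), the equilibrium availability is

\[
  a \;=\; \frac{e[p_i]}{e[l_i]}, \qquad l_i \;=\; p_i + c_i + r_i ,
\]

that is, the mean production time per cycle divided by the mean cycle length, which is the quantity on which the equidistant and equicost strategies are compared below.
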
computational results based on weibull failure distributions (both increasing and decreasing failure rates) show that the equicost strategy achieves higher system availability than the equidistant strategy which is known to be optimal under poisson failures. asser n. tantawi manfred ruschitzka mostly parallel garbage collection hans-j. boehm alan j. demers scott shenker a proposal for implementing the concurrent mechanisms of ada this paper proposes a scheme for implementing the communication and synchronization mechanisms of ada. a minimum operating system kernel is assumed first. then the primitives and data structures used to interpret the concurrent activities are described. ada concurrent statements are translated into the calls to certain primitives. by properly explaining some details in it, the proposal can be implemented on various computer systems and supporting environments. xiaolin zang management of disk space with rebate the past decade has witnessed a proliferation of respositories whose workload consists of queries that retrieve information. these repositories provide on- line access to vast amount of data and serve as an integral component of many application domains (e.g., library information systems, scientific applications, entertainment industry). their storage subsystem is expected to be hierarchical consisting of memory, disk drives, and one or more tertiary storage devices. the database resides permanently on the tertiary storage devices and objects are swapped onto the magnetic disk drives on demand (and deleted once the disk storage capacity is exhausted). this may fragment the disk space over a period of time, resulting in a non-contiguous layout of an object across the surface of a disk drive. this is undesirable because, once the object is referenced, the disk drive is required to reposition its read head multiple times (incur seek operations) when retrieving the object, resulting in a low performance. this paper presents the design of rebate. rebate ensures the contiguous layout of each object across the surface of a disk drive by partitioning the available disk space into regions where each region manages objects of approximately the same size. we describe the tradeoffs of using rebate and its possible limitations. shahram ghandeharizadeh douglas j. ierardi on the effect of different counting rules for control flow operators on software science metrics in fortran halstead in his theory of software science, proposed that in the fortran language, each occurrence of a goto i for different label i's be counted as a unique operator. several writers have questioned the wisdom of this method of counting goto's. in this paper, we investigate the effect of counting goto's as several occurrences of a single unique operator on various software science metrics. some 412 modules from the international mathematical and statistical libraries (imsl) are used as the data base for this study. s. d. conte v. y. shen k. dickey imax the intel iapx 432 is an object-based microcomputer which, together with its operating system imax, provides a multiprocessor computer system designed around the ideas of data abstraction. imax is implemented in ada and provides, through its interface and facilities, an ada view of the 432 system. of paramount concern in this system is the uniformity of approach among the architecture, the operating system, and the language. some interesting aspects of both the external and internal views of imax are discussed to illustrate this uniform approach. 
kevin c. kahn william m. corwin t. don dennis herman d'hooge david e. hubka linda a. hutchins john t. montague fred j. pollack serving two masters: getting linux and windows 95 to coexist michael k. johnson a short note on implementing "new" machine instructions by software for efficient test of page accessibility jochen liedtke endeavors: a process system infrastructure arthur s. hitomi gregory alan bolcer richard n. taylor the design and management of c++ class libraries (abstract) arthuir riel automatic translation of fortran programs to vector form the recent success of vector computers such as the cray-1 and array processors such as those manufactured by floating point systems has increased interest in making vector operations available to the fortran programmer. the fortran standards committee is currently considering a successor to fortran 77, usually called fortran 8x, that will permit the programmer to explicitly specify vector and array operations. although fortran 8x will make it convenient to specify explicit vector operations in new programs, it does little for existing code. in order to benefit from the power of vector hardware, existing programs will need to be rewritten in some language (presumably fortran 8x) that permits the explicit specification of vector operations. one way to avoid a massive manual recoding effort is to provide a translator that discovers the parallelism implicit in a fortran program and automatically rewrites that program in fortran 8x. such a translation from fortran to fortran 8x is not straightforward because fortran do loops are not always semantically equivalent to the corresponding fortran 8x parallel operation. the semantic difference between these two constructs is precisely captured by the concept of dependence. a translation from fortran to fortran 8x preserves the semantics of the original program if it preserves the dependences in that program. the theoretical background is developed here for employing data dependence to convert fortran programs to parallel form. dependence is defined and characterized in terms of the conditions that give rise to it; accurate tests to determine dependence are presented; and transformations that use dependence to uncover additional parallelism are discussed. randy allen ken kennedy object oriented reuse: experience in developing a framework for speech recognition applications savitha srinivasan john vergo the object oriented model and its advantages jose de oliveira guimaraes smalltalk smalltalk is an object-oriented programming environment developed at the xerox palo alto research center over the last twelve years. all components of the smalltalk environment are represented as objects that communicate by exchanging messages. this includes all of the common data types and operating system features found in a typical programming environment. for example, smalltalk includes objects representing numbers, arrays, text, disk files, and independent processes. messages are exchanged to initiate arithmetic calculation, data storage and retrieval, text editing, file access and process scheduling. david robson problem report forms: a system for software configuration management s. c. kienle w. a. keller an integrated framework for quality scientific software development p. murali d. dutta r. n. biswas cross references are features r. w. schwanke m. a. platoff naval avionics center ada-based design languages workshop summary of events corporate inc. 
software productivity solutions the robustness of numa memory management richard p. larowe carla schlatter ellis laurence s. kaplan how to define a language using prolog there have been many papers, conferences and theses concerned with the task of defining programming languages. yet, twenty years after mccarthy's (1962) seminal paper there is still no widely accepted method which will deal with all parts of a language clearly and concisely. everyone who invents a language writes a bnf definition and the more theoretically minded will attempt a denotational semantics. for a complete definition of the syntax, attribute grammars are gaining increasing acceptance while more practically minded language designers will use one of the compiler definition languages such as meta4 or cdl. at the same time as the prolog language was defined, a grammar system called metamorphosis grammar (m-grammar for short) was invented by colmerauer (1978). this has been shown to be suitable (moss 1981) for defining both the full syntax and semantics of programming languages. the present paper attempts to give an overview of this method in the form of a practical guide to writing a language definition. this includes the context sensitive syntax, and semantics using denotational, relational or axiomatic methods. this definition is not simply of theoretical interest. since prolog is a practical programming language, though with a sound theoretical basis (warren, pereira, pereira 1977), the definition can be used as a prototyping tool for a new language. with the addition of plan-formation techniques one might form a prolog based compiler generator and by adding other axioms form a program proving system. chris d.s. moss content-dependent access control jonathan d. moffett morris s. sloman directory replication in distributed systems k. c. wong j. cornacchio fortran standard activities william b. clodius class-is-type is inadequate for object reuse it is well known that class and type are two different concepts in object-oriented programming languages (oopls). however, in many popular oopls, classes are used as types. in this paper, we argue that the class-is-type principle is a major obstacle to software reuse, especially to object reuse. the concepts of the basic entities, i.e., objects, object classes, object types, and object kinds, in the type hierarchy of oopls are revisited. the notion of object reuse is defined and elaborated. in addition, we show that parameterized types and generic functions are better served by used kind-bounded quantification than universal quantification and other mechanisms. sheng yu cst: c state transformers john d. ramsdell advertisers index corporate linux journal staff managing software processes in the environment melmac in this paper we introduce an approach to software process modeling and execution based on the distinction between an application level (oriented towards a comprehensive representation of software process models) and an intermediate level representation of software process models (oriented towards uniform and executable description of software process models). the application level representation of software models identifies various entities of software process models. for describing different entities of software process models different views are used. the entities specified within all the views are uniformly represented on the intermediate level by funsoft nets. 
funsoft nets are high-level petri nets which are adapted to the requirements of software process management. a mechanism for coping with software process model modifications raised in software process execution is introduced. this mechanism is based on modification points. moreover, we discuss the architecture of the environment melmac which supports software process modeling as well as software process execution. volker gruhn can principles of object-oriented system documentation be applied to user documentation? mary elizabeth raven richard thomson software quality test accuracy test coverage the software quality index, spql (software product quality level), is proposed. the index indicates the software quality based on program test results and consists of two subindices: the test accuracy index and the test coverage index. the test accuracy index can be measured by applying the capture- recapture method. pseudo defects called control defects are seeded prior to the test and their capture ratio is measured as the ratio of the number of detected defects to the estimated number of detectable defects (estimated by applying the software reliability growth model). two experimental results are presented and the spql is compared with both the ordinary capture-recapture method and the software reliability growth prediction based method. the result indicates that the spql is practical. m. ohba shell functions and path variables, part 2 mr. collyer continues his discussion with a detailed description of the addpath function. stephen collyer using style to understand descriptions of software architecture gregory abowd robert allen david garlan distributed data access in ac we have modified the c language to support a programming model based on a shared address space with physically distributed memory. with this model users can write programs in which the nodes of a massively parallel processor can access remote memory without message passing. ac provides support for distributed arrays as well as pointers to distributed data. simple array references and pointer dereferencing are sufficient to generate low-overhead remote reads and writes. we have implemented these ideas in a compiler based on the gnu c compiler and targeted at cray research's t3d. initial performance measurements show that ac generates code for remote accesses which is considerably faster than that of the native compiler for structures up to about 16 words in size and virtually equivalent for larger transfers. william w. carlson jesse m. draper an integrated toolset for engineering software configurations configuration management in toolkit oriented software development environments (sde), such as the unix system, is a long standing nuisance. mostly, one has to face the choice between poorly or not at all integrated, independent tools, or highly integrated, most specialized, and often language dependent environments. the first choice offers very limited support for a complex task that needs a broad informational basis. the second choice often takes away the programmers' most cherished tools, forces him to adopt some different work discipline, and thereby eventually restricts his creativity. the toolset described in this paper integrates a dedicated version control system and shape, a significantly enhanced make [feld79a] program, on the basis of a common object model. this object model comprises multiple versions of software objects as well as conventional file system objects. 
taking this approach made it possible to have a sufficiently integrated toolsystem for engineering software configurations while retaining the flexibility of the basic toolbox philosophy, permitting the use of 'off-the-shelf' tools, e.g. editors or compilers. axel mahler andreas lampen from recursion to iteration: what are the optimizations? transforming recursion into iteration eliminates the use of stack frames during program execution. it has been studied extensively. this paper describes a powerful and systematic method, based on incrementalization, for transforming general recursion into iteration: identify an input increment, derive an incremental version under the input increment, and form an iterative computation using the incremental version. exploiting incrementalization yields iterative computation in a uniform way and also allows additional optimizations to be explored cleanly and applied systematically, in most cases yielding iterative programs that use constant additional space, reducing additional space usage asymptotically, and run much faster. we summarize major optimizations, complexity improvements, and performance measurements. yanhong a. liu scott d. stoller compiling into apl in some programming situations apl code can become as obtuse as that of other languages. one possible way to deal with such instances is to create a high- order programming language (hopl) which provides an easy-to-code, easy-to- maintain structure and which may be compiled straight-forwardly into apl functions for execution. since the hopl is designed for relatively narrow fields of use, it can be tailored directly to the problem which it solves. two of the areas where the hopl concept has proven useful are implementation of a simple, interactive graphics system and selection of data records from a database. we draw examples from these areas to discuss the concept and use of hopl's. george r. mayforth controlling propagation of operations using attributes on relations controlling the propagation of operations through a collection of objects connected by various relationships has been a problem both for the object- oriented and the data base communities. operations such as copy, destroy, print, and save must propagate to some, but not all, of the objects in a collection. such operations can be implemented using ad hoc methods on objects, at the cost of extra work and loss of clarity. the use of propagation attributes on the relationships between objects permits a concise, intuitive specification of the manner in which operations should propagate from one object to another. these concepts have been implemented in the object-oriented language dsm and have been used to write applications. james rumbaugh mini-exec: a portable executive for 8-bit microcomputers as microprocessor systems and single-chip microcomputers become more complex, so do the software systems developed for them. in many cases, software is being designed that incorporates multiple control functions running asynchronously on a single microprocessor. here, discussion focuses on the motivation for running such multiple functions under the control of a real- time multitasking executive. a successfully implemented executive whose design is portable and suitable for use on most 8-bit microprocessors is presented. thomas l. 
wicklund about logical clocks for distributed systems michel raynal separating access control policy, enforcement, and functionality in extensible systems extensible systems, such as java or the spin extensible operating system, allow for units of code, or extensions, to be added to a running system in almost arbitrary fashion. extensions closely interact through low-latency but type-safe interfaces to form a tightly integrated system. as extensions can come from arbitrary sources, not all of whom can be trusted to conform to an organization's security policy, such structuring raises the question of how security constraints are enforced in an extensible system. in this paper, we present an access control mechanism for extensible systems to address this problem. our access control mechanism decomposes access control into a policy- neutral enforcement manager and a security policy manager, and it is transparent to extensions in the absence of security violations. it structures the system into protection domains, enforces protection domains through access control checks, and performs auditing of system operations. the access control mechanism works by inspecting extensions for their types and operations to determine which abstractions require protection and by redirecting procedure or method invocations to inject access control operations into the system. we describe the design of this access control mechanism, present an implementation within the spin extensible operating systems, and provide a qualitative as well as quantitative evaluation of the mechanism. robert grimm brian n. bershad a session with tinker: interleaving program testing with program design tinker is an experimental interactive programming system which integrates program testing with program design. new procedures are created by working out the steps of the procedure in concrete situations. tinker displays the results of each step as it is performed, and constructs a procedure for the general case from sample calculations. the user communicates with tinker mostly by selecting operations from menus on an interactive graphic display rather than by typing commands. this paper presents a demonstration of our current implementation of tinker. henry lieberman carl hewitt unix backup and recovery charles curley evaluation of java thread performance on two different multithreaded kernels modern programming languages and operating systems encourage the use of threads to exploit concurrency and simplify program structure. an integral and important part of the java language is its multithreading capability. despite the portability of java threads across almost all platforms, the performance of java threads varies according to the multithreading support of the underlying operating system and the way java virtual machine maps java threads to the native system threads. in this paper, a well-known compute-intensive benchmark, the ep benchmark, was used to examine various performance issues involved in the execution of threads on two different multithreaded platforms: windows nt and solaris. experiments were carried out to investigate thread creation and computation behavior under different system loads, and to explore execution features under certain extreme situations such as the concurrent execution of very large number of java threads. some of the experimental results obtained from threads were compared with a similar implementation using processes. 
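purely as a hypothetical illustration of the kind of thread-creation experiment described above (the thread count and the busy-work loop are ours, not the ep benchmark, and the sketch below is separate from the measurements the abstract goes on to report):

// hypothetical micro-benchmark: time the creation, execution, and joining of
// cpu-bound java threads; numThreads and the work function are illustrative only.
public class ThreadCreationBench {
    private static double busyWork(long iterations) {
        double acc = 0.0;
        for (long i = 1; i <= iterations; i++) {
            acc += Math.sqrt(i);          // simple compute-bound loop
        }
        return acc;
    }

    public static void main(String[] args) throws InterruptedException {
        int numThreads = args.length > 0 ? Integer.parseInt(args[0]) : 64;
        Thread[] workers = new Thread[numThreads];
        long start = System.nanoTime();
        for (int t = 0; t < numThreads; t++) {
            workers[t] = new Thread(() -> busyWork(1_000_000L));
            workers[t].start();           // thread creation and start
        }
        for (Thread w : workers) {
            w.join();                     // wait for all computations to finish
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(numThreads + " threads finished in " + elapsedMs + " ms");
    }
}
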
these results show that the performance of java threads differs depending on the various mechanisms used to map java threads to native system threads, as well as on the scheduling policies for these native threads. thus, this paper provides important insights into the behavior and performance of java threads on these two platforms, and highlights the pitfalls that may be encountered when developing multithreaded java programs. yan gu b. s. lee wentong cai an investigation into coupling measures for c++ lionel briand prem devanbu walcelio melo strategies for developing reusable software components in ada-95 shan barkataki stuart harte pat dousette gary johnson role model designs and implementations with aspect-oriented programming this paper describes research in applications of aspect-oriented programming (aop) as captured in the aspectj language. in particular, it compares object- oriented and aspect-oriented designs and implementations of role models. sections 1, 2, and 3 provide background information on role models, object- oriented role model implementations, and aspect-oriented programming, respectively. new aspect-oriented designs for role models are explored in sections 4, 5, and 6. the base reference for this exploration is the role object pattern. although useful for role models, this pattern introduces some problems at the implementation level, namely object schizophrenia, significant interface maintenance, and no support for role composition. our research has resulted in alternative aspect- oriented designs that alleviate some of these problems. section 7 discusses how an agent framework that implements role models has been partially reengineered with aspects. the reengineering addressed concerns that are orthogonal or cross cut both the core and the role behavior. the aspect oriented redesign significantly reduced code tangling, overall method and module count, and total lines of code. these results and other conclusions are presented in section 8. elizabeth a. kendall a role-based access control model for protection domain derivation and management trent jaeger frederique giraud nayeem islam jochen liedtke a future apl: examples and problems m. gfeffer java bytecode compression for low-end embedded systems a program executing on a low-end embedded system, such as a smart-card, faces scarce memory resources and fixed execution time constraints. we demonstrate that factorization of common instruction sequences in java bytecode allows the memory footprint to be reduced, on average, to 85% of its original size, with a minimal execution time penalty. while preserving java compatibility, our solution requires only a few modifications which are straightforward to implement in any jvm used in a low-end embedded system. lars ræder clausen ulrik pagh schultz charles consel gilles muller editorial saveen reddy programming with xforms, part 2: writing an application thor sigvaldason a time-sensitive object model for real-time systems process-oriented models for real-time systems focus on the timing constraints of processes, a focus that can adversely affect resulting designs. data dependencies between processes create scheduling interactions that limit the times at which processes may execute. processes are then designed to fit available windows in the overall system schedule. "fitting in" frequently involves fragmenting processes to fit scheduling windows and/or designing program and data structures for speed rather than for program comprehension. 
the result is often a system with very sensitive timing that is hard to understand and maintain. as an alternative to process-oriented design, we present time-sensitive objects: a data-oriented model for real-time systems. the time-sensitive object (tso) model structures systems as time-constrained data, rather than time-constrained processing. object values are extended to object histories in which a sequence of time-constrained values describes the evolution of the object over time. systems comprise a set of objects and their dependencies. the tso model describes the effects of object operations and the propagation of change among related objects. periodic objects, a class of objects within the tso model, are described in detail in this article and compared with traditional periodic processes. advantages of time-sensitive objects are identified, including greater scheduling independence when processes have data dependencies, more opportunity for concurrency, and greater inherent capability for detection of and tolerance to timing errors. h. rebecca callison distributed concurrent smalltalk: a language and system for the interpersonal environment t. nakajima y. yokote m. tokoro s. ochiai t. nagamatsu development of a parallel algorithmic language (abstract only) rolla - a new style of programming is needed to meet the increased demands for computational speed. a software system based on dataflow and designed to use parallelism in future architectures is envisioned. the language model is designed so that any algorithm can be easily communicated to both man and machine. the basic entity of the model is that of units which are interconnected by a series of data paths like pipes. a unit is any process, file or device capable of consuming or supplying data. consuming and supplying pairs are categorized by the type of flow of data between units. a notation has been developed to express the dataflow graphs which exposes the parallel nature of the model. ports, index numbers and process names allow an unambiguous representation of the graphs and support nesting to any level. further research is being done in the area of unit description and dataflow control. roger eggen john r. metzner towards an efficient exploitation of loop-level parallelism in java jose oliver eduard ayguade nacho navarro integrated concurrency analysis in a software development environment the inherent difficulties of analyzing concurrent software make reliance on a single technique or a single monolithic tool unsatisfactory. a better approach is to apply multiple analysis and verification techniques by coordinating the activities of a variety of small tool components. we describe how this approach has shaped the design of a set of tool components to support concurrency analysis in the arcadia-1 software development environment. implementation and experience with key components are described. m. young r. taylor k. forester d. brodbeck upcoming events corporate linux journal staff a solution to a problem with morel and renvoise's "global optimization by suppression of partial redundancies" morel and renvoise have previously described a method for global optimization and code motion by suppression of partial redundancies [1]. morel and renvoise use data flow analysis to determine expression computations that should be inserted at the end of certain basic blocks and to determine redundant computations that can be eliminated. the execution of these techniques results in the movement of loop invariant expressions out of the loop.
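as a hedged, generic illustration of loop-invariant code motion (the java fragment is ours, not an example from the paper), an expression whose operands do not change inside the loop can be hoisted so that it is evaluated only once:

class LoopInvariantDemo {
    // before: a*b is recomputed on every iteration even though a and b never change
    static int sumBefore(int[] data, int a, int b) {
        int sum = 0;
        for (int i = 0; i < data.length; i++) {
            sum += data[i] + a * b;   // redundant: same value on every iteration
        }
        return sum;
    }

    // after: the invariant expression is computed once, outside the loop
    static int sumAfter(int[] data, int a, int b) {
        int sum = 0;
        int invariant = a * b;        // hoisted loop-invariant expression
        for (int i = 0; i < data.length; i++) {
            sum += data[i] + invariant;
        }
        return sum;
    }
}
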
in addition to [1], morel and renvoise's techniques can also be applied to subexpressions of larger expressions. then, however, in certain special cases these optimization techniques move expressions to places where some of their subexpressions are neither available nor moved together with the expression. in this paper we present a modification of morel and renvoise's algorithm that avoids the situations described above. karl-heinz drechsler manfred p. stadel another d___1 acronym michael d. shapiro refining by architectural styles or architecting by refinements carlo montangero laura semini object-oriented programming through type extension in ada 9x ed seidewitz technical correspondence diane crawford stop the presses: linus wins the nokia award phil hughes a debate on language and tool support for design patterns design patterns have earned a place in the developer's arsenal of tools and techniques for software development. they have proved so useful, in fact, that some have called for their promotion to programming language features. in turn this has rekindled the age-old debate over the mechanisms that belong in programming languages versus those that are better served by tools. the debate comes full circle when one contemplates code generation and methodological tool support for patterns. the authors compare and contrast programming languages, tools, and patterns to assess their relative merits and to clarify their roles in the development process. craig chambers bill harrison john vlissides object oriented thinking in forth l. forsley sealed calls in java packages ayal zaks vitaly feldman nava aizikowitz linux as an internet kiosk kevin mccormick continuations without copying scott l. burson testing ada designers and code. test gen-ada testing tool t. s. radi active objects in hybrid most object-oriented languages are strong on reusability or on strong-typing, but weak on concurrency. in response to this gap, we are developing hybrid, an object-oriented language in which objects are the active entities. objects in hybrid are organized into domains, and concurrent executions into activities. all object communications are based on remote procedure calls. unstructured sends and accepts are forbidden. to this the mechanisms of delegation and delay queues are added to enable switching and triggering of activities. concurrent subactivities and atomic actions are provided for compactness and simplicity. we show how solutions to many important concurrent problems, such as pipelining, constraint management and "administration", can be compactly expressed using these mechanisms. o. m. nierstrasz product review: accelerated x cde and display server for pc unix bradley willson an overview of pcte and pcte+ the pcte project has defined a public tool interface on which software engineering environments can be constructed. the interface definition was put into the public domain in september 1986 and several implementations on several machines now exist. the pcte+ project was set up to define a public tool interface, based on the pcte work, that could also serve for the development of defense and other high-security applications. this paper summarises the current status of pcte activity, presents the principal concepts of pcte and the evolutions that are being proposed in the pcte+ project. gerard boudier ferdinando gallo regis minot ian thomas kernel korner: block device drivers: interrupts michael k.
johnson parameter interdependencies of file placement models in a unix system a file assignment case study of a computer system running unix is presented. a queueing network model of the system is constructed and validated. a modeling technique for the movement of files between and within disks is proposed. a detailed queueing network model is constructed for several file distributions in secondary storage. the interdependencies between the speed of the cpu, the swapping activity, the visit ratios and the multiprogramming level are examined and included in the modeling technique. the models predict the performance of several possible file assignments. the various file assignments are implemented and comparisons between the predicted and actual performance are made. the models are shown to accurately predict user response time. alfredo de j. perez-davila lawrence w. dowdy concurrent aggregates: using multiple-access data abstractions to manage complexity in concurrent programs andrew a. chien exceptions in object-oriented languages alexander borgida module test case generation while considerable attention has been given to techniques for developing complex systems as collections of reliable and reusable modules, little is known about testing these modules. in the literature, the special problems of module testing have been largely ignored and few tools or techniques are available to the practicing tester. without effective testing methods, the development and maintenance of reliable and reusable modules is difficult indeed. we describe an approach for systematic module regression testing. test cases are defined formally using a language based on module traces, and a software tool is used to automatically generate test programs that apply the cases. techniques for test case generation in c and in prolog are presented and illustrated in detail. d. hoffman c. brealey cool: system support for distributed programming rodger lea christian jacquemot eric pillevesse risks to the public in computer systems peter g. neumann designing robust java programs with exceptions exception handling mechanisms are intended to help developers build robust systems. although an exception handling mechanism provides a basis for structuring source code dealing with unusual situations, little information is available to help guide a developer in the appropriate application of the mechanism. in our experience, this lack of guidance leads to complex exception structures. in this paper, we reflect upon our experiences using the java exception handling mechanism. based on these experiences, we discuss two issues we believe underlie the difficulties encountered: exceptions are a global design problem, and exception sources are often difficult to predict in advance. we then describe a design approach, based on work by litke for ada programs, which we have used to simplify exception structure in existing java programs. martin p. robillard gail c. murphy linking codesign and reuse in embedded systems design this paper presents a complete codesign environment for embedded systems which combines automatic partitioning with reuse from a module database. special emphasis has been put on satisfying the requirements of industrial design practice and on the technical and economic constraints associated with automotive control applications. the object-oriented database architecture allows efficient management of a large number of modules. 
experimental results from a real-world example demonstrate the viability and advantages of the presented methodology. m. meerwein c. baumgartner w. glauert specification-based test oracles for reactive systems debra j. richardson stephanie leif aha t. owen o'malley the use and implementation of interval data types in fortran the use and implementation of interval data types in fortran are examined. in particular, the innovations introduced in the intrinsic support for intervals in forte developer 6 fortran 95 from sun microsystems inc. are described and justified. implications for future extensions to the fortran language are also described. g. william walster some sad remarks about string handling in c p. w. abrahams an incremental project plan: introducing cleanroom method and object-oriented development method yukio motoyoshi shigeru otsuki ajapack: experiments in performance portable parallel java numerical libraries shigeo itou satoshi matsuoka hirokazu hasegawa visual software engineering with rules george w. cherry implementing separate compilations in pascal g. nani query-based debugging of object-oriented programs raimondas lencevicius urs hölzle ambuj k. singh circular scheduling: a new technique to perform software pipelining suneel jain using tcl and tk from your c programs this month we'll show you how to use tcl and tk from your c programs. matt welsh automated fortran conversion gregory aharonian problems with software complexity measurement theoretical and methodological problems in the development of program-based measures of software complexity are enumerated. three issues are analyzed: the potential confounding influence of non-program factors, the desirable features of complexity measures, and the difficulties in validating complexity measures. the analysis suggests that researchers must find ways to control the effects of non-program factors, perhaps by restricting their focus to a well-defined programming environment. further, both the designers and users of complexity measures must be aware of the inherent limitations of such tools. robert l. sedlmeyer joseph k. kearney william b. thompson michael a. adler michael a. gray efficient atomic broadcast using deterministic merge we present an approach for merging message streams from producers distributed over a network, using a deterministic algorithm that is independent of any nondeterminism of the system, such as the amount of time the messages are delayed by the network, or their arrival order. thus, if this algorithm is replicated at multiple "mergers", then each merger will merge the message streams in exactly the same way. the technique is therefore a solution to atomic broadcast and global atomic multicast [12]. we assume that each producer has access to (approximately) synchronized clocks and can estimate the expected message rates of all producers. we propose an algorithm, called the bias algorithm. to measure the performance of the bias algorithm, we assume that messages are generated by memoryless processes operating at known message rates, and we measure the expected total merge delay at a given time l. for the case of two producer processes, we give optimal algorithms in this metric, and show that the optimal algorithm converges to our bias algorithm when l → ∞. we reconfirm our optimality result using dynamic programming theory, and we use simulations to validate the robustness of this optimality result under more realistic conditions. marcos kawazoe aguilera robert e. strom
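the bias algorithm itself is not reproduced here; purely as a hypothetical sketch of the simpler property the abstract relies on - that a merge driven only by timestamps and a fixed tie-breaking rule yields the same output at every merger, independent of network delays and arrival order - consider (java 16+ for records; all names are ours):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// hypothetical sketch, not the paper's bias algorithm: every merger orders the
// same set of messages by (timestamp, producerId), so all mergers agree on the
// merged sequence no matter how the network reorders or delays delivery.
class DeterministicMerge {
    record Message(int producerId, long timestamp, String payload) {}

    static List<Message> merge(List<Message> delivered) {
        List<Message> ordered = new ArrayList<>(delivered);
        ordered.sort(Comparator.comparingLong(Message::timestamp)
                               .thenComparingInt(Message::producerId));
        return ordered;
    }

    public static void main(String[] args) {
        // two mergers receiving the same messages in different arrival orders
        List<Message> arrivalAtMergerA = List.of(
            new Message(2, 105, "b1"), new Message(1, 100, "a1"), new Message(1, 110, "a2"));
        List<Message> arrivalAtMergerB = List.of(
            new Message(1, 110, "a2"), new Message(2, 105, "b1"), new Message(1, 100, "a1"));
        // both mergers produce the identical merged stream
        System.out.println(merge(arrivalAtMergerA).equals(merge(arrivalAtMergerB))); // true
    }
}
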
parametric polymorphism in java: an approach to translation based on reflective features mirko viroli antonio natali modeling implications of the personal software process watts s. humphrey microtal - a machine-dependent, high-level microprogramming language the design and implementation of a high-level microprogramming language is described. the language is a subset of an existing systems programming language, tal, which allows algorithms to be written and debugged using that language. the procedure may then be recompiled using the microtal compiler to produce a semantically equivalent microprogram which is accessed via an opcode rather than a procedure call. the microtal compiler automatically generates code to handle interrupts and page faults which may occur during the execution of the procedure. joel f. bartlett ada: a commercial flop and proud of it! - or - how to deal with java envy this paper provides a brief summary of a presentation provided at the 1998 sigada conference in washington dc. part of the "commercializing ada" workshop, the presentation discusses the alleged commercial failure of ada, and how we can improve ada's commercial lot in life. dave wood type-directed partial evaluation olivier danvy the state of software engineering practice watts s. humphrey david h. kitson tim c. kasse formal power series compiler support for garbage collection in a statically typed language we consider the problem of supporting compacting garbage collection in the presence of modern compiler optimizations. since our collector may move any heap object, it must accurately locate, follow, and update all pointers and values derived from pointers. to assist the collector, we extend the compiler to emit tables describing live pointers, and values derived from pointers, at each program location where collection may occur. significant results include identification of a number of problems posed by optimizations, solutions to those problems, a working compiler, and experimental data concerning table sizes, table compression, and time overhead of decoding tables during collection. while gc support can affect the code produced, our sample programs show no significant changes, the table sizes are a modest fraction of the size of the optimized code, and stack tracing is a small fraction of total gc time. since the compiler enhancements are also modest, we conclude that the approach is practical. amer diwan eliot moss richard hudson ussa - universal syntax and semantics analyzer boris burshteyn generic functions by nonstandard name scoping in apl we show how to achieve generic functions as in abstract datatypes (such as the simula class construct or ada package notion) for typeless languages, specifically apl. we do this by altering the standard dynamic scoping of names in apl to a scheme we call downward scoping. james t. kajiya commonly asked questions about ada: the standardized development language robert mathis tags miguel carrio scheme86: a system for interpreting scheme scheme86 is a computer system designed to interpret programs written in the scheme dialect of lisp. a specialized architecture, coupled with new techniques for optimizing register management in the interpreter, allows scheme86 to execute interpreted scheme at a speed comparable to that of compiled lisp on conventional workstations. andrew berlin henry wu tasking communication deadlocks in concurrent ada programs j. cheng k. araki k.
ushijima prompter: a knowledge based support tool for code understanding as an experiment for the application of knowledge-based techniques to large-scale software maintenance, a program called prompter which produces annotations for programs written in an assembly language is being developed. prompter adopts an object-oriented approach and represents both hardware and programming knowledge as class/instance definitions. object-oriented representation helps decompose the knowledge and organize it in a hierarchical way. it is shown that most of the conventions, such as data definitions, which are the major obstacles to understanding system programs can be well formalized in this framework. koichi fukunaga transparent forwarding: first steps traditional object-oriented systems tend to be single-user. as we move from personal to interpersonal computing, we must look for ways to extend our programming paradigms. this research extends the smalltalk-80 system to send messages transparently to objects residing on remote machines. we discuss two models for remote message sends, describe our current implementation, and suggest areas for future research. paul l. mccullough book review: the mosaic handbook for the x window system morgan hall domain analysis: from art form to engineering discipline g. arango tags: trains, agendas, and gerunds roger k. w. hui kenneth e. iverson the class storage and retrieval system: enhancing reusability in object-oriented systems michael l. nelson tilemahos poulis difficulties in developing re-usable software components arising from the lack of user redefinition of standard assignment steve roberts the apl 90 project: new directions in apl interpreters technology this paper presents some aspects of a new implementation of apl, the apl 90 system. this implementation includes concepts inherited from lisp, such as the notion of property lists, or the ability to easily manipulate the internal representation of user-defined functions or operators. these two features allow a great flexibility both in describing the system and using it. the work presented here is being implemented under unix on a sm 90 computer. jean-jacques girardot an adaptive tenuring policy for generation scavengers one of the more promising automatic storage reclamation techniques, generation scavenging, suffers poor performance if many objects live for a fairly long time and then die. we have investigated the severity of this problem by simulating a two-generation scavenger using traces taken from actual four-hour sessions. there was a wide variation in the sample runs, with garbage-collection overhead ranging from insignificant, during three of the runs, to severe, during a single run. all runs demonstrated that performance could be improved with two techniques: segregating large bitmaps and strings, and adapting the scavenger's tenuring policy according to demographic feedback. we therefore incorporated these ideas into a commercial smalltalk implementation. these two improvements deserve consideration for any storage reclamation strategy that utilizes a generation scavenger. david ungar frank jackson how to best return the value of a function m. sakkinen programming pearls jon bentley penguin's progress: just folks peter h. salus improving data locality with loop transformations in the past decade, processor speed has become significantly faster than memory speed. small, fast cache memories are designed to overcome this discrepancy, but they are only effective when programs exhibit data locality.
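as a hedged, generic illustration of the locality point above (the java fragment and names are ours, not taken from the article), the same reduction can be written with two loop orders; loop permutation, one of the transformations the abstract goes on to describe, selects the cache-friendly one:

// illustrative only: the same reduction with two loop orders.
class LoopOrderDemo {
    // poor spatial locality: the inner loop strides across rows,
    // touching a different row array (and cache line) on almost every access
    static double sumColumnMajor(double[][] a) {
        double sum = 0.0;
        for (int j = 0; j < a[0].length; j++)
            for (int i = 0; i < a.length; i++)
                sum += a[i][j];
        return sum;
    }

    // after loop permutation: the inner loop walks one row contiguously,
    // so successive accesses fall on the same cache line
    static double sumRowMajor(double[][] a) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[i].length; j++)
                sum += a[i][j];
        return sum;
    }
}
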
in this article, we present compiler optimizations to improve data locality based on a simple yet accurate cost model. the model computes both temporal and spatial reuse of cache lines to find desirable loop organizations. the cost model drives the application of compound transformations consisting of loop permutation, loop fusion, loop distribution, and loop reversal. to validate our optimization strategy, we implemented our algorithms and ran experiments on a large collection of scientific programs and kernels. experiments illustrate that for kernels our model and algorithm can select and achieve the best loop structure for a nest. for over 30 complete applications, we executed the original and transformed versions and simulated cache hit rates. we collected statistics about the inherent characteristics of these programs and our ability to improve their data locality. to our knowledge, these studies are the first of such breadth and depth. we found performance improvements were difficult to achieve because benchmark programs typically have high hit rates even for small data caches; however, our optimizations significantly improved several programs. kathryn s. mckinley steve carr chau-wen tseng detection of deadlocks in multiprocess systems j. g. hunt integrating object-oriented analysis and formal specifications betty h. c. cheng performance re-engineering of embedded real-time systems minsoo ryu jungkeun park kimoon kim yangmin seo seongsoo hong letter from the editor: changes at lj corporate linux journal staff user manuals: what does the user really need? the purpose of the user manual is to assist the user in effectively operating a system. this objective is complicated by the wide diversity of user skills, and user tasks. the quality of a user manual is affected not only by the level of effort exerted by the system developer in writing the manual, but also by the degree of matching between the user manual and the user's needs. the documentation writer must supply both effort and direction in writing the user manual. this paper addresses the direction issue. roy l. chafin mixjuice (poster session): an object-oriented language with simple and powerful module mechanism this paper describes an object-oriented language called _mixjuice_, which uses the differences of class hierarchies as a unit of information hiding and reuse. in this language, classes --- templates of objects --- and modules --- units of information hiding and reuse --- are completely orthogonal, thus supporting _separation of concerns_. mixjuice is basically based on java. however, its module mechanism is simpler and more powerful than that of java. the mixjuice programmers can make use of all java libraries and execution platforms. yuuji ichisugi pc++/streams: a library for i/o on complex distributed data sources the design and implementation of portable, efficient, and expressive mechanisms for i/o on complex distributed data structures---such as found in adaptive parallel applications---is a challenging problem that we address in this paper. we describe the design, programmer interface, implementation, and performance of pc++/streams, a library that provides an expressive mechanism for i/o on distributed arrays of variable-sized objects in pc++, an object-parallel language. pc++/streams is intended for developers of parallel programs requiring efficient high-level i/o abstractions for checkpointing, scientific visualization, and debugging.
pc++/streams is an implementation of d/streams, a language-independent abstraction for buffered i/o on distributed data structures. we describe the d/streams abstraction and present performance results on the intel paragon and sgi challenge showing that d/streams can be implemented efficiently and portably. jacob gotwals suresh srinivas dennis gannon apl: a profitability language sybron corporation is a diversified manufacturing company operating in twenty- one countries around the world. the complexity and geographical distribution of sybron operations have increased as the company has grown. in the last eight years, innovative apl systems combined with an extensive communications network have provided corporate and division executives with powerful new tools for management decision making. timely information is an invaluable resource. at sybron, it is available in similar form from common databases at appropriate levels of management regardless of geographical constraints. the power to acquire, analyze and distribute information, however, is not an end in itself. it has positive impact on the corporate 'bottom line' only if it is continuously responsive to organizational goals and business objectives. sybron's organizational goal is to provide a favorable return on investment to our shareholders. apl systems are a valuable tool in achieving that objective. william g. vonberg combination of inheritance hierarchies harold ossher william harrison portability and power with the f programming language the authors combine over forty years of language-design committee experience to create the world's most portable, yet efficient, powerful, yet simple programming language david epstein dick hendrickson the concert signature representation: idl as intermediate language in the concert multilanguage distributed programming system, interface specification is the responsibility of programming languages, not separate idl. however, an idl is still necessary in order to define equivalence between declarations in different languages. a single representation is also desirable internally to economize on aspects of the implementation. consequently, concert has an idl as an intermediate language, produced by compiler front- ends and not normally manipulated by programmers. it is formally separated into a contract, which defines interoperability and an endpoint modifier, which captures the local choice of representation. only contracts are used to define interface equivalence. our choice of what kinds of information to put in the contract was motivated by a desire to be minimal, thereby enabling maximum feasible interoperability between different expressions of the same abstraction in the same or different languages. joshua s. auerbach james r. russell the software knowledge base we describe a system for maintaining useful information about a software project. the "software knowledge base" keeps track of software components and their properties; these properties are described through binary relations and the constraints that these relations must satisfy. the relations and constraints are entirely user-definable, although a set of predefined libraries of relations with associated constraints is provided for some of the most important aspects of software development (specification, design, implementation, testing, project management). 
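a toy sketch of the binary-relational idea described in the software knowledge base entry above, in python (the relation names, component names, and the single constraint are hypothetical; the system's predefined relation libraries and query language are not modelled):

    # software knowledge base as binary relations plus user-definable constraints.
    from collections import defaultdict

    class KnowledgeBase:
        def __init__(self):
            self.relations = defaultdict(set)    # relation name -> set of (a, b) pairs
            self.constraints = []                # (label, predicate over the whole base)

        def add(self, relation, a, b):
            self.relations[relation].add((a, b))

        def constrain(self, label, predicate):
            self.constraints.append((label, predicate))

        def violations(self):
            return [label for label, pred in self.constraints if not pred(self)]

    def implemented_means_tested(kb):
        implemented = {a for a, _ in kb.relations["implements"]}
        tested = {a for a, _ in kb.relations["tested_by"]}
        return implemented.issubset(tested)

    kb = KnowledgeBase()
    kb.add("implements", "parser.c", "parser_spec")
    kb.add("tested_by", "parser.c", "parser_tests")
    kb.add("implements", "lexer.c", "lexer_spec")      # no test relation recorded
    kb.constrain("implemented components must be tested", implemented_means_tested)
    print(kb.violations())    # ['implemented components must be tested']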
the use of the binary relational model for describing the properties of software is backed by a theoretical study of the relations and constraints which play an important role in software development. bertrand meyer iso/iec 10514 - 1, the standard for modula-2: changes, clarifications and additions m. schönhacker c. pronk real-time programming and priority interrupt systems angel alvarez some ideas on support for fault tolerance in comandos, an object oriented distributed system brendan tangney vinny cahill chris horn dominic herity alan judge gradimir starovic mark sheppard connecting software components with declarative glue brian w. beach the implications of cache affinity on processor scheduling for multiprogrammed, shared memory multiprocessors raj vaswani john zahorjan process improvement towards iso 9001 certification in a small software organization elif demirors onur demirors oguz dikenelli billur keskin using the fortran 90 do while edward reid an extensible static analysis tool for cobol programs software tools are an important part of the programming environment. perhaps one of the most pervasive types of software tools is the "static analyzer", as exemplified by cross reference listing tools, and call graph generators. in this paper, we describe an "extensible" static analysis tool (cesat) which is based on the use of a data base management system and associated query language. the use of the dbms query language allows a virtually unlimited number of tools to be easily constructed, resulting in an "extensible" software tool. warren harrison using specialized procedures and specification-based analysis to reduce the runtime costs of modularity mark t. vandevoorde john v. guttag larger issues in user interface management d r olsen reengineering a legacy system using design patterns and ada-95 object-oriented features shan barkataki stu harte tong dinh representing monads we show that any monad whose unit and extension operations are expressible as purely functional terms can be embedded in a call-by-value language with "composable continuations". as part of the development, we extend meyer and wand's characterization of the relationship between continuation-passing and direct style to one for continuation-passing vs. general "monadic" style. we further show that the composable-continuations construct can itself be represented using ordinary, non-composable first-class continuations and a single piece of state. thus, in the presence of two specific computational effects - storage and escapes - any expressible monadic structure (e.g., nondeterminism as represented by the list monad) can be added as a purely definitional extension, without requiring a reinterpretation of the whole language. the paper includes an implementation of the construction (in standard ml with some new jersey extensions) and several examples. andrzej filinski retargetable compilation for low power most research to date on energy minimization in dsp processors has focused on hardware solutions. this paper examines the software-based factors affecting performance and energy consumption for architecture-aware compilation. in this paper, we focus on providing support for one architectural feature of dsps that makes code generation difficult, namely the use of multiple data memory banks.
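a rough illustration of the bank-assignment problem behind this feature (python; the variable names, the "accessed together" pairs, and the greedy strategy are hypothetical, not the paper's scheduling algorithm): variables referenced by the same instruction should land in different banks so the two accesses can issue in parallel.

    # greedy assignment of variables to two data memory banks, x and y.
    from collections import defaultdict

    paired = [("a", "b"), ("a", "c"), ("b", "c"), ("d", "a")]   # accessed together

    conflict = defaultdict(set)
    for u, v in paired:
        conflict[u].add(v)
        conflict[v].add(u)

    bank = {}
    for var in sorted(conflict, key=lambda v: -len(conflict[v])):   # most constrained first
        in_x = sum(1 for n in conflict[var] if bank.get(n) == "x")
        in_y = sum(1 for n in conflict[var] if bank.get(n) == "y")
        bank[var] = "y" if in_x > in_y else "x"

    parallel = sum(1 for u, v in paired if bank[u] != bank[v])
    print(bank, f"{parallel}/{len(paired)} paired accesses can go in parallel")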
this feature increases memory bandwidth by permitting multiple data memory accesses to occur in parallel when the referenced variables belong to different data memory banks and the registers involved conform to a strict set of conditions. we present novel instruction scheduling algorithms that attempt to maximize the performance, minimize the energy, and therefore, maximize the benefit of this architectural feature. experimental results demonstrate that our algorithms generate high performance, low energy codes for the dsp architectural features with multiple data memory banks. our algorithm led to improvements in performance and energy consumption of 48.3% and 66.6% respectively in our benchmark examples. wen-tsong shiue multi-level steering in distributed laboratories beth plale karsten schwan structured analysis and object-oriented development are not compatible donald firesmith highly reliable upgrading of components jonathan e. cook jeffrey a. dage surfing the net for software engineering notes mark doernhoefer an ada real-time executive rate scheduler marvin early programming on an already full brain christopher fry critical path reduction for scalar programs michael schlansker vinod kathail cascaded refactoring for framework refactoring of source code has been studied as a preliminary step in the evolution of object-oriented software. we extend the concept of refactoring to the whole range of models used to describe a framework in our methodology: feature model, use case model, architecture, design, and code. we view framework evolution as a two-step process: refactoring and extension. the refactoring step is a set of refactorings, one for each model, that cascades through them. the refactorings chosen for a model become the rationale or constraints for the choice of refactorings of the next model. the cascading of refactorings is aided by the alignment of the models. alignment is a traceable mapping between models that preserves the commonality-variability aspects of the models. greg butler lugang xu courage in profiles kenneth r. anderson mixin-based inheritance the diverse inheritance mechanisms provided by smalltalk, beta, and clos are interpreted as different uses of a single underlying construct. smalltalk and beta differ primarily in the direction of class hierarchy growth. these inheritance mechanisms are subsumed in a new inheritance model based on composition of mixins, or abstract subclasses. this form of inheritance can also encode a clos multiple-inheritance hierarchy, although changes to the encoded hierarchy that would violate encapsulation are difficult. practical application of mixin-based inheritance is illustrated in a sketch of an extension to modula-3. gilad bracha william cook envy: a non-volatile, main memory storage system this paper describes the architecture of envy, a large non-volatile main memory storage system built primarily with flash memory. envy presents its storage space as a linear, memory mapped array rather than as an emulated disk in order to provide an efficient and easy to use software interface. flash memories provide persistent storage with solid-state memory access times at a lower cost than other solid-state technologies. however, they have a number of drawbacks. flash chips are write-once, bulk-erase devices whose contents cannot be updated in-place. they also suffer from slow program times and a limit on the number of program/erase cycles.
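a toy sketch of the out-of-place update these constraints force (python; the page count and data are hypothetical, and no wear levelling or cleaning is modelled): a logical write always goes to a fresh physical page through a remap table, and the old copy is merely marked obsolete until an erase reclaims it.

    class ToyFlash:
        def __init__(self, pages=8):
            self.phys = [None] * pages      # physical pages; None means erased/free
            self.remap = {}                 # logical page -> current physical page
            self.obsolete = set()           # stale physical pages awaiting erase

        def _free_page(self):
            for p, contents in enumerate(self.phys):
                if contents is None:
                    return p
            raise RuntimeError("no free pages: a cleaning/erase pass is needed")

        def write(self, logical, data):
            p = self._free_page()           # cannot overwrite in place on flash
            self.phys[p] = data
            if logical in self.remap:
                self.obsolete.add(self.remap[logical])   # old copy becomes garbage
            self.remap[logical] = p

        def read(self, logical):
            return self.phys[self.remap[logical]]

    flash = ToyFlash()
    flash.write(0, "v1")
    flash.write(0, "v2")                    # the update lands on a new physical page
    print(flash.read(0), flash.remap, flash.obsolete)    # v2 {0: 1} {0}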
envy uses a copy-on-write scheme, page remapping, a small amount of battery-backed sram, and high bandwidth parallel data transfers to provide low latency, in-place update semantics. a cleaning algorithm optimized for large flash arrays is used to reclaim space. the algorithm is designed to evenly wear the array, thereby extending its lifetime. software simulations of a 2 gigabyte envy system show that it can support i/o rates corresponding to approximately 30,000 transactions per second on the tpc-a database benchmark. despite the added work done to overcome the deficiencies associated with flash memories, average latencies to the storage system are as low as 180ns for reads and 200ns for writes. the estimated lifetime of this type of storage system is in the 10 year range when exposed to a workload of 10,000 transactions per second. michael wu willy zwaenepoel garp: a graphical/textual language for concurrent programming concurrent systems in which the number of processes and their interconnections can change dynamically suffer from the problem of ensuring that process interconnections are correctly maintained at all times. we propose a hybrid solution to this problem in which processes are described textually, but interconnections are described graphically using a graph grammar to constrain the legal sets of processes and interconnections that the system may evolve. this paper discusses garp, a hybrid graphical/textual concurrent programming language that acts as a testbed for our ideas, and illustrates its use with an example. s. m. kaplan s. k. goering letters corporate linux journal staff writing device drivers: a systematic approach the paper presents the development of a method leading to systematic programming of i/o-interfaces (i/o-drivers). the concept of information hiding by means of interrupt routines allows one to achieve both clarity and efficiency. a. wupit α-coral: a multigrain, multithreaded processor architecture recently popularized hardware multithreading (hmt) architectures, such as smt, multiscalar and terra, do not provide flexible and efficient methods of thread management and synchronization in hardware. the α-coral architecture is a tool for investigation of a more dynamic approach to thread management. unlike other architectures, there are no strict requirements on timing and size of threads, and no static partitioning of resources. α-coral provides for a simultaneous multiprogramming and multithreading environment, which is mostly managed in hardware. to other architectures, α-coral adds on-demand register allocation, fast variable size thread creation and destruction, as well as quick synchronization through a shared register file. while other architectures attempt to port existing compilers, the α-coral architecture is supported by a custom compiler system. this system provides for a simple method of mapping hierarchical internal representation of the program to variable size threads. this paper examines a new approach to hardware multithreading, involving minimal extensions to the instruction set of conventional risc superscalar architectures. the α-coral architecture and compiling support introduce a multi-grain multithreaded architecture which extends wide-superscalar processor cores to support hierarchical multithreading. a simulator was developed and results are presented to demonstrate the feasibility of our design approach. mark n. yankelevsky constantine d.
polychronopoulos genie forth roundtable mike haas programming perl 3rd edition paul barry building awareness of system testing issues managers of large software projects with system test groups face a problem that they may not be aware of. vague notions among project members about the role of system testing tend to cloud the relationship between developers and testers with misunderstanding and disappointment. the main source of the problem is unrealistic expectations about what can be accomplished during the system test phase of the project. this has the effect of de-emphasizing the developers' responsibility for software quality and imposing a definition of success on system testers that is not possible to achieve. this paper describes a series of role awareness seminars at which a discussion technique was used to clarify expectations between developers and testers. transcending the discussion of roles was a fundamental message: the quality of software is determined during the development phase and cannot be radically improved during the system test phase. n. h. petschenik filters doing it your way: a look at several of the more flexible filters, programs that read some input, perform some operation on it, and write the altered data as output malcolm murphy the emergent approach to object allocation in computational field minoru uehara using uml for software process modeling dirk jäger ansgar schleicher bernhard westfechtel a case against using procedure calls for input/output r. fischer software reliability modeling accounting for program size variation due to integration or design changes estimation of software reliability quantities has traditionally been done on stable systems; i.e., systems that are completely integrated and are not undergoing design changes. also, it is assumed that test results are completely inspected for failures. this paper describes a method for relaxing the foregoing conditions by adjusting the lengths of the intervals between failures experienced in tests as compensation. the resulting set of failure intervals represents the set that would have occurred for a stable system in its final configuration with complete inspection. the failure intervals are then processed as they would be for a complete system. the approach is developed for the execution time theory of software reliability, but the concepts could be applied to many other models. the estimation of quantities of interest to the software manager is illustrated. j. d. musa a. iannino what really is rapid prototyping for real-time system? (abstract) al mok a note on the berry-meekings style metric a modification of the berry-meekings "style metric"---applied to software from the corporate environment---finds little relationship between this style metric and error proneness. warren harrison curtis r. cook generic programming: apl and smalltalk early generic concepts in programming languages were mixed-type arithmetic (e.g., "+" used with any combination of fixed- and floating-point numbers) and "print" functions which could be applied to any of a language's objects. generic concepts reduce the number of terms which must be remembered and permit considerable condensation of language design. three different paths in the development of generic-concept languages have been followed by apl, algol 68, and object-oriented languages such as simula 67 and smalltalk. apl was the earliest and one of the most interesting applications of generic-concept methods, but now makes the weakest use of these ideas.
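as a small aside, the generic-concept idea sketched in this entry has a direct modern analogue; a minimal python illustration using functools.singledispatch (the display operation and the registered types are hypothetical):

    from functools import singledispatch

    @singledispatch
    def display(value):                      # default behaviour for any type
        return f"({type(value).__name__}) {value!r}"

    @display.register
    def _(value: int):
        return f"integer {value}"

    @display.register
    def _(value: list):
        return "[" + ", ".join(display(v) for v in value) + "]"

    print(display(42))                       # integer 42
    print(display([1, "two", 3.0]))          # [integer 1, (str) 'two', (float) 3.0]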
my talk will use slides and movies to show how the message-object programming system smalltalk makes use of generic concepts embedded in class descriptions to give rise to a wide variety of simply programmed dynamic simulations including graphic animation, music synthesis, document layout and retrieval, and apl-like calculation features. many of these systems have been brought to life by child and adult novice programmers. a short list of references is given for those wishing more introductory information about smalltalk. alan c. kay rapid prototyping with tcl/tk a discussion of rapid prototyping and how it can benefit programmers in creating software to match the customer's needs richard schwaninger oop in the real world some advocates of oop have promised that it will make all code reusable, shorten development cycles, remove the applications backlog, cure the common cold and plug the hole in the ozone layer. how well does actual experience bear this out? success stories are published much more often than failures and false starts. this is very unfortunate since the latter two are often much more instructive. this panel will attempt to bring the less successful experiences to light, and hopefully, will provide lessons on how to do better. rick denatale charles irby john lalonde burton leathers reed phillips observ - a prototyping language and environment the observ methodology for software development is based on rapid construction of an executable specification, or prototype, of a system, which may be examined and modified repeatedly to achieve the desired functionality. the objectives of observ also include facilitating a smooth transition to a target system, and providing means for reusing specification, design, and code of systems and subsystems. we are particularly interested in handling embedded systems, which are likely to have concurrency and have some real-time requirements. the observ prototyping language combines several paradigms to express the behavior of a system. the object-oriented approach provides the basic mechanism for building a system from a collection of objects, with well-defined interfaces between them. we use finite-state machines to model the behavior of individual objects. at a lower level, activities that occur within objects are described either upon entry to a state or in transition between states, thus allowing a nonprocedural description. the environment provided to a prototype builder is as important as the language. we have made an attempt to provide flexible tools for executing or simulating the prototype being built, as well as for browsing and static checking. the first implementation of the tools was window based but not graphic. a graphic front end, named cruise, was developed afterwards. a simulation sequence focuses on a single object, which can be as complex as necessary, possibly the entire system, and expects all the interactions between it and the outside world to be achieved by communication between the simulator and the user. the simulator allows the user to easily switch back and forth from one object to another, simulating each object in isolation. to enable testing the behavior of a prototype in a realistic environment, it is possible to construct objects that imitate the environment objects. we also allow simulation of systems with missing pieces, by calling upon the user to simulate any such missing piece by himself.
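a minimal sketch of the finite-state-machine-per-object style of specification that observ applies to individual objects (python; the states, events, actions, and the valve example are all hypothetical):

    # each prototype object carries a table: (state, event) -> (next state, action).
    class FsmObject:
        def __init__(self, name, initial, transitions):
            self.name, self.state = name, initial
            self.transitions = transitions

        def send(self, event):
            next_state, action = self.transitions[(self.state, event)]
            if action:
                action(self)
            print(f"{self.name}: {self.state} --{event}--> {next_state}")
            self.state = next_state

    valve = FsmObject(
        "valve", "closed",
        {("closed", "open_cmd"):  ("open",   lambda o: print("  action: energize solenoid")),
         ("open",   "close_cmd"): ("closed", lambda o: print("  action: de-energize solenoid"))})

    valve.send("open_cmd")
    valve.send("close_cmd")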
shmuel tyszberowicz amiram yehudai optimizing dynamically-dispatched calls with run-time type feedback urs hölzle david ungar code reviews enhance software quality richard a. baker kava - a powerful and portable reflective java (poster session) _kava_ is a powerful and portable reflective java that uses byte code rewriting to implement behavioural reflection. in our poster we briefly overview the _kava_ system, give an example of its use, and contrast it with simple byte code rewriting. more details about kava are available from _http://www.cs.ncl.ac.uk/research/dependability/reflection_. ian s. welch robert j. stroud an example of a software process model based on unit workload network we are developing a software engineering and management database called kyotodb based on an object-base-data model. it has two types of objects. the first represents engineering data such as specifications, programs, etc., and encapsulates operations on these data. the second represents management data and encapsulates operations on the management data. each instance of the second type object, representing each individual unit workload, includes process programs and operations on management data (schedule, cost, quality, etc.) for that unit workload. the process program in the unit workload class invokes tools (editors, compilers, debuggers, etc.), and passes engineering data (files) or management data to those tools. the process programs also specify procedures to maintain semantic consistency between many engineering data, as well as between many human interactions. figure 1 shows an example hierarchical structure in kyotodb. figure 2 is an example of process program in class unit workload. yoshihiro matsumoto tsuneo ajisaka java bytecode to native code translation: the caffeine prototype and preliminary results cheng-hsueh a. hsieh john c. gyllenhaal wen-mei w. hwu postscript: the forgotten art of programming hans devreught a safe, efficient regression test selection technique regression testing is an expensive but necessary maintenance activity performed on modified software to provide confidence that changes are correct and do not adversely affect other portions of the software. a regression test selection technique chooses, from an existing test set, tests that are deemed necessary to validate modified software. we present a new technique for regression test selection. our algorithms construct control flow graphs for a procedure or program and its modified version and use these graphs to select tests that execute changed code from the original test suite. we prove that, under certain conditions, the set of tests our technique selects includes every test from the original test suite that can expose faults in the modified procedure or program. under these conditions our algorithms are safe. moreover, although our algorithms may select some tests that cannot expose faults, they are at least as precise as other safe regression test selection algorithms. unlike many other regression test selection algorithms, our algorithms handle all language constructs and all types of program modifications. we have implemented our algorithms; initial empirical studies indicate that our technique can significantly reduce the cost of regression testing modified software. gregg rothermel mary jean harrold a common error in the object structuring of object-oriented design methods d. c. rine tmc: target-meta-cross: an engineer's viewpoint greg lisle using posix threads to implement ada tasking: description of work in progress e.
w. giering t. p. baker automated support for seamless interoperability in polylingual software systems interoperability is a fundamental concern in many areas of software engineering, such as software reuse or infrastructures for software development environments. of particular interest to software engineers are the interoperability problems arising in _polylingual_ software systems. the defining characteristic of polylingual systems is their focus on uniform interaction among a set of components written in two or more different languages. existing approaches to support for interoperability are inadequate because they lack _seamlessness:_ that is, they generally force software developers to compensate explicitly for the existence of multiple languages or the crossing of language boundaries. in this paper we first discuss some foundations for polylingual interoperability, then review and assess existing approaches. we then outline polyspin, an approach in which interoperability can be made transparent and existing systems can be made to interoperate with no visible modifications. we also describe polyspinner, our prototype implementation of a toolset providing automated support for polyspin. we illustrate the advantages of our approach by applying it to an example problem and comparing polyspin's ease of use with that of an alternative, corba-style approach. daniel j. barrett alan kaplan jack c. wileden creating abstract superclasses by refactoring this paper focuses on object-oriented programming and one kind of structure-improving transformation (refactoring) that is unique to object-oriented programming: finding abstract superclasses. we decompose the operation of finding an abstract superclass into a set of refactoring steps, and provide examples. we discuss techniques that can automate or automatically support these steps. we also consider some of the conditions that must be satisfied to perform a refactoring safely; sometimes to satisfy these conditions other refactorings must first be applied. william f. opdyke ralph e. johnson debugging distributed applications with replay capabilities daniel neri laurent pautet samuel tardieu a study of the applicability of existing exception-handling techniques to component-based real-time software technology this study focuses on the current state of error-handling technology and concludes with recommendations for further research in error handling for component-based real-time software. with real-time programs growing in size and complexity, the quality and cost of developing and maintaining them are still deep concerns to embedded software industries. component-based software is a promising approach in reducing development cost while increasing quality and reliability. as with any other real-time software, component-based software needs exception detection and handling mechanisms to satisfy reliability requirements. the current lack of suitable error-handling techniques can make an application composed of reusable software nondeterministic and difficult to understand in the presence of errors. jun lang david b. stewart object oriented experiences john hunt more letters corporate linux journal staff windows nt as a personal or intranet server larry press response to remarks on recent algorithms for lalr lookahead sets fred ives remote network commands a guide to using rlogin, rcp, and rsh to transfer and manipulate data across a network.
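to give a flavour of the "creating abstract superclasses" refactoring described a few entries above, here is a small after-the-refactoring python sketch (the order classes and the duplicated total/shipping logic are hypothetical): shared behaviour has been hoisted into a new abstract base, and the step that varied between the subclasses is left abstract.

    from abc import ABC, abstractmethod

    class Order(ABC):                   # the extracted abstract superclass
        def __init__(self, amount):
            self.amount = amount

        def total(self):                # previously duplicated in both subclasses
            return self.amount + self.shipping()

        @abstractmethod
        def shipping(self):             # the part that differed
            ...

    class DomesticOrder(Order):
        def shipping(self):
            return 5.0

    class OverseasOrder(Order):
        def shipping(self):
            return 0.1 * self.amount

    print(DomesticOrder(100).total(), OverseasOrder(100).total())   # 105.0 110.0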
jens hartmann an implementation of standard ml modules standard ml includes a set of module constructs that support programming in the large. these constructs extend ml's basic polymorphic type system by introducing the dependent types of martin-löf's intuitionistic type theory. this paper discusses the problems involved in implementing standard ml's modules and describes a practical, efficient solution to these problems. the representations and algorithms of this implementation were inspired by a detailed formal semantics of standard ml developed by milner, tofte, and harper. the implementation is part of a new standard ml compiler that is written in standard ml using the module system. david macqueen modules in f walt brainerd java annotation-aware just-in-time (ajit) compilation system ana azevedo alex nicolau joe hummel a brief introduction to lisp g. j. sussman g. l. steele r. p. gabriel embedding python in multi-threaded c/c++ applications python provides a clean intuitive interface to complex, threaded applications. ivan pulleyn evaluating the limits of message passing via the shared attraction memory on cc-coma machines: experiences with tcgmsg and pvm kaushik ghosh stephen r. breit eliminating redundant barrier synchronizations in rule-based programs anurag acharya transparent disconnected operation for fault-tolerance james jay kistler m. satyanarayanan executable connectors: towards reusable design elements stephane ducasse tamar richner efficient java exception handling in just-in-time compilation seungll lee byung-sun yang suhyun kim seongbae park soo-mook moon kemal ebcioglu erik altman cats: computer aided testing of software this paper describes the implementation of an automated test system for apl functions. it extends an implementation of assertive comments in apl to derive a notation for formal specification using pre and post conditions. these conditions are apl statements and so can be built into test functions. data supplied to provide examples is captured and subjected to mutations to test behaviour under edge conditions. the techniques make extensive use of modern apl ideas such as defined operators, phrasal forms and function assignment. maurice jordan felix, an object-oriented operating system m. lester r. christensen uml 2001: a standardization odyssey cris kobryn an empirical study of the performance of the apl370 compiler w.-m. ching r. nelson n. shi from the editor: promoting linux corporate linux journal staff an empirical study of the reliability of unix utilities the following section describes the tools we built to test the utilities. these tools include the fuzz (random character) generator, ptyjig (to test interactive utilities), and scripts to automate the testing process. next, we will describe the tests we performed, giving the types of input we presented to the utilities. results from the tests will follow along with an analysis of the results, including identification and classification of the program bugs that caused the crashes. the final section presents concluding remarks, including suggestions for avoiding the types of problems detected by our study and some commentary on the bugs we found. we include an appendix with the user manual pages for fuzz and ptyjig. barton p. miller louis fredriksen bryan so book review: internet programming with python dwight johnson kernel korner: the bullet points: linux 2.4 the bullet points: linux 2.4: a look at what's new in the next kernel release joseph pranevich operating systems raphael a.
finkel interface specification with temporal logic m. kooij recovery in distributed systems using asynchronous message logging and checkpointing david b. johnson willy zwaenepoel impulse-86: a substrate for object-oriented interface design impulse-86 provides a general and extensible substrate upon which to construct a wide variety of interactive user interfaces for developing, maintaining, and using knowledge-based systems. the system is based on five major building blocks: editor, editor window, propertydisplay, menu, and operations. these building blocks are interconnected via a uniform framework and each has a well-defined set of responsibilities in an interface. customized interfaces can be designed by declaratively replacing some of the building blocks in existing impulse-86 templates. customization may involve a wide range of activities, ranging from simple override of default values or methods that control primitive operations (e.g., font selection), to override of more central impulse-86 methods (e.g., template instantiation). most customized interfaces require some code to be written---to handle domain-specific commands. however, in all cases, the impulse-86 substrate provides considerable leverage by taking care of the low-level details of screen, mouse, and keyboard manipulation. impulse-86 is implemented in strobe, a language that provides object-oriented programming support for lisp. this simplifies customization and extension. reid g. smith rich dinitz paul barth applying process programming to the model the primary thesis of this position paper is that process programming is analogous to programming in a key respect not previously emphasized: that it will proceed more effectively if preceded by a set of activities to determine the requirements, architecture, and design of the process. b. boehm f. belz interprocedural control flow analysis of first-order programs with tail-call optimization knowledge of low-level control flow is essential for many compiler optimizations. in systems with tail-call optimization, the determination of interprocedural control flow is complicated by the fact that because of tail-call optimization, control flow at procedure returns is not readily evident from the call graph of the program. this article shows how interprocedural control-flow analysis of first-order programs can be carried out using well-known concepts from parsing theory. in particular, we show that context-insensitive (or zeroth-order) control-flow analysis corresponds to the notion of follow sets in context-free grammars, while context-sensitive (or first-order) control-flow analysis corresponds to the notion of lr(1) items. the control-flow information so obtained can be used to improve the precision of interprocedural dataflow analyses as well as to extend certain low-level code optimizations across procedure boundaries. saumya k. debray todd a. proebsting linux code freeze linus torvalds a generic editor art griesser solving two-state logic problems with boolean arrays: an approach unique to apl many great advances in science and mathematics were preceded by notational improvements. while a given algorithm can be implemented in any general purpose programming language, discovery of algorithms is heavily influenced by the notation used to investigate them. it is often conjectured that apl notation leads to unique solutions. in logic problems the values for variables are often limited to two states or options: yes/no, true/false, on/off, guilt/innocence, etc.
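the boolean-array style described in this entry can be sketched outside apl as well; a small python version (the puzzle, the suspects, and all three constraints are hypothetical) enumerates every two-state assignment and keeps the consistent ones:

    from itertools import product

    suspects = ("alice", "bob", "carol")
    solutions = []
    for guilty in product((False, True), repeat=len(suspects)):
        g = dict(zip(suspects, guilty))
        facts = (
            sum(guilty) == 1,              # exactly one culprit
            g["alice"] or g["bob"],        # a witness saw alice or bob
            not g["alice"] or g["carol"],  # if alice did it, carol helped
        )
        if all(facts):
            solutions.append({name for name, is_guilty in g.items() if is_guilty})

    print(solutions)    # [{'bob'}] -- the only consistent assignment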
this paper provides one example of how apl provides a unique solution to such problems. kenneth fordyce manuel alfonseca james brown gerald sullivan understanding memory allocation of scheme programs manuel serrano hans-j. boehm from semi-syntactic lexical analyzer to a new compiler model y h shyu an overview of the alf project colin potts toward real microkernels jochen liedtke distributing forth frank sergeant virtual nodes/distributed systems working group anthony gargaro an empirical study of the effects of careful page placement in linux satyendra bahadur viswanathan kalyanakrishnan james westall basic compiler algorithms for parallel programs jaejin lee david a. padua samuel p. midkiff object race detection we present an on-the-fly mechanism that detects access conflicts in executions of multi-threaded java programs. access conflicts are a conservative approximation of data races. the checker tracks access information at the level of objects (_object races_) rather than at the level of individual variables. this viewpoint allows the checker to exploit specific properties of object-oriented programs for optimization by restricting dynamic checks to those objects that are identified by escape analysis as potentially shared. the checker has been implemented in collaboration with an "ahead-of-time" java compiler. the combination of static program analysis (escape analysis) and inline instrumentation during code generation allows us to reduce the runtime overhead of detecting access conflicts. this overhead amounts to about 16-129% in time and less than 25% in space for typical benchmark applications and compares favorably to previously published on-the-fly mechanisms that incurred an overhead of about a factor of 2-80 in time and up to a factor of 2 in space. christoph von praun thomas r. gross an introduction to ada the most important features of ada are covered briefly in this paper. as with many aspects of programming languages, what is most important is subjective. therefore the subjects selected for this introduction are the result of the author's experience in studying the language. you may end up with a separate list after your study of the language. ada is an attempt by the department of defense to introduce a single high-level language to the real-time computer world. a proliferation of programming languages are used in real-time computing. most of them are not taught in computer science departments. a high percentage of the application areas still use assembly language. for example, in the avionics area, assembly language is used more than 50% of the time and the dod approved language for this application, jovial, is not known by most programmers who develop avionic systems. sabina h. saib an experimental determination of sufficient mutant operators a. jefferson offutt ammei lee gregg rothermel roland h. untch christian zapf efficient loop-level parallelism in ada michael hind edmond schonberg a simplified domain-testing strategy bingchiang jeng elaine j. weyuker targeting a traditional compiler to a distributed environment g. eisenhauer r. jha j. m. kamrad integrating gnat and gcc richard kenner dcg: an efficient, retargetable dynamic code generation system dynamic code generation allows aggressive optimization through the use of runtime information. previous systems typically relied on ad hoc code generators that were not designed for retargetability, and did not shield the client from machine-specific details.
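python cannot emit machine code the way dcg does, but the benefit of fixing a computation's structure once from run-time values can be hinted at with closures; a small sketch (the power example and all names are hypothetical, not dcg's interface):

    def specialize_power(n):
        """build a power-of-n function whose square-and-multiply plan is fixed once."""
        steps = []
        for bit in bin(n)[2:]:              # plan the operation sequence up front
            steps.append("square")
            if bit == "1":
                steps.append("multiply")

        def power(x):
            result = 1
            for step in steps:
                result = result * result if step == "square" else result * x
            return result
        return power

    cube = specialize_power(3)              # the one-time "generation" step
    print(cube(2), cube(10))                # 8 1000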
we present a system, dcg, that allows clients to specify dynamically generated code in a machine-independent manner. our one-pass code generator is easily retargeted and extremely efficient (code generation costs approximately 350 instructions per generated instruction). experiments show that dynamic code generation increases some application speeds by over an order of magnitude. dawson r. engler todd a. proebsting shared variables this is a summary of the discussions which took place in the session on the sharing of variables in ada 9x. ken elsom evaluating framework architecture structural stability jagdish bansiya extending apl2 to include program control structures david a. selby composable ada software components and the re-export paradigm bryce bardin christopher thompson more letters corporate linux journal staff objects as closures: abstract semantics of object-oriented languages we discuss denotational semantics of object-oriented languages, using the concept of closure widely used in (semi) functional programming to encapsulate side effects. it is shown that this denotational framework is adequate to explain classes, instantiation, and inheritance in the style of simula as well as smalltalk-80. this framework is then compared with that of kamin, in his recent denotational definition of smalltalk-80, and the implications of the differences between the two approaches are discussed. uday reddy automatic generation of i/o prefetching hints through speculative execution (poster session) fay chang garth gibson sara in the design room this paper presents an overview of sara (system architects apprentice), an interactive environment which was created so that designers might enhance their capabilities to effectively design concurrent computer systems. sara's requirements, methodology, support tools, and experiences are summarized in turn. the paper ends with a discussion of collaborative capabilities needed in the "design room". gerald estrin enhanced derived type facilities mastering algorithms with c john kacur modeling the distributed termination convention of csp krzysztof r. apt nissim francez visualizing the performance of higher-order programs oscar waddell j. michael ashley criteria for software modularization a central issue in programming practice involves determining the appropriate size and information content of a software module. this study attempted to determine the effectiveness of two widely used criteria for software modularization, strength and size, in reducing fault rate and development cost. data from 453 fortran modules developed by professional programmers were analyzed. the results indicated that module strength is a good criterion with respect to fault rate, whereas arbitrary module size limitations inhibit programmer productivity. this analysis is a first step toward defining empirically based standards for software modularization. david n. card gerald t. page frank e. mcgarry the impact of operating system structure on memory system performance j. bradley chen brian n. bershad finding the object (workshop session) mark a. whiting d. michael devaney specification and verification of an object request broker gregory duval saustall - a new software environment saustall (sequential algorithmic universal set-theoretical associative logical language) started as an attempt to define a new programming language. i have widened its scope to include the total software environment. 
it is hoped that saustall will be accepted as a general purpose language considerably better than conventional programming languages. further, saustall attempts to overcome the conventional division of software into compilers, editors, command language, etc. it may be thought presumptuous to place before the public something which is a long way from availability. the reason for doing it is in the hope of getting reactions from the software community which will improve saustall. robert k. bell to see ourselves as others see us: evaluating software documentation joan lee grasp: an executable specification language for ada tasking problems involving ada tasking can be difficult to specify because there are few existing methods for visualizing them. currently, software engineers must specify tasking problems by hand using informal data flow diagrams [4] or booch diagrams [1]. several attempts have been made at providing a workstation environment for the specification of programs: notable examples are the tags (technology for the automated generation of systems) environment with iorl ® (input/output requirements language) [5], and the adagraph environment with pamela (process abstraction method for embedded large applications)[3]. these methods are useful for the specification of tasking problems, but they do have some drawbacks. while data flow diagrams, warnier-orr diagrams, and booch diagrams have been commonly used for specification, they are not directly executable, and must be rewritten as source code by a programmer. design errors may creep in all too easily during the translation. the automated tools also have their problems. iorl provides a good model for decomposing a system into concurrent processes, but it utilizes an unstructured, flowchart-like tool for the specification of process logic. a system is represented in iorl using a simple symbology of boxes and arrows that is not as refined as those of yourdon or gane and sarson. pamela is based on data flow diagram techniques, but uses additional symbols that may not be intuitive to those unfamiliar with the language. and, although pamela allows the overall structure of a system to be developed graphically, the logic associated with each procedural component must be developed as text: graphical representations of procedural logic are not supported. a graphical specification language (grasp: graphical approach to the specification of programs) for tasking problems is proposed to overcome these problems. the current version of the language utilizes two presently existing software engineering tools, the data flow diagram [4] and the warnier-orr diagram [2,7], with some modifications. (early in the research effort, it was determined that the data flow depicted in standard data flow diagrams was not sufficient for the specification of tasking programs, so this data flow was elaborated.) a grasp specification is first developed using grasp/g, a graphical language that incorporates data flow diagrams and warnier-orr diagrams. this specification may be automatically mapped into grasp/t, a textual variant of grasp/g that serves as an intermediate language. a grasp translator can then map this grasp/t specification into ada source code. the two-tiered approach allows the graphics-intensive grasp/g tools to be isolated for development on various machines, while leaving the grasp/t tools in an easily portable state. 
at present, a prototype of the grasp language [6] is being tested with grasp graphics and translation tools written in pascal for the apple macintosh computer. the prototype has been used to specify small problems such as the dining philosophers. while the prototype does not yet address the needs of complex i/o or elements unique to ada (generic instantiation, private units), it does handle the decomposition of a system into concurrent processes well, and "hooks" have been provided in the language for further work. kelly i. morrison namespaces semipermeable membranes for apl applications namespaces are proposed as an extension to apl which can provide for large apl applications the same sort of structure and communications control provided in living structures by semipermeable membranes. living systems are partitioned by a variety of semipermeable membranes. those membranes provide protection and isolation for subenvironments of the entity. they also control the flow of information and material between the subenvironments. large applications designed for maintainability and extensibility are similar to living systems. they are hierarchically subdivided into simple subunits which are defined in terms of other subunits and which communicate with them in controlled ways. apl and most other programming environments do not provide suitable means to enforce the isolation and communication control conventions that are appropriate for large applications. instead isolation depends on the good intentions and discipline of the implementers and the maintenance programmers. ronald c. murray elementary functions packages for ada this paper incorporates some of the discussions by the acm/sigada working group on ada numerics, the iso/tc97/sc22/wg9 rapporteur group on an elementary functions package, and the ada-europe ada numerics working group. this is not a report from any one of those groups nor does it attempt to reflect all the discussions or potential resolutions of the issues. this paper is the author's attempt to provide information on the current thinking of these groups about a standard specification for an elementary functions package for ada so that a broader group can assess the utility and trade-offs involved in such standard specifications and libraries. the following people have been involved in developing and influencing the ideas presented here: jim cody, paul cohen, sandy cohen, ken dritz, brian ford, graham hodgson, jan kok, gil myers, brian smith, jon squire, and bill whitaker. any confusion or inaccuracies are the author's. these people represent a combination of interests in the ada language ranging from numerical analysis to embedded application development. the sigada numerics working group (sigada numwg) has met at the sigada meetings in pittsburgh, pa (july, 1986), charleston, wv (november, 1986), fort lauderdale, fl (january, 1987), and seattle, wa (august, 1987), several times at argonne national laboratory near chicago, il and at various locations in the washington, dc area. the numerics working group of ada-europe has met many times on this and related topics. a number of proposals for elementary functions in ada ([firth 1982], [whitaker & eicholtz 1982], [witte 1983], [symm, wichmann, kok & winter 1984], [kok & symm 1984], and [kok 1987]) have been studied by the sigada numwg. these were considered along with recent work in numerical analysis about the calculation of the elementary functions and about the properties of floating point arithmetic. 
various aspects of ada's real arithmetic, type mechanism, generic packages, and library structures have also been considered as they relate to the specification and implementations of a potentially widely used package. the proposed package specification is given at the very end of the paper as a convenient reference point. robert f. mathis the linux position doc searls cost-optimal code motion we generalize knoop et al.'s lazy code motion (lcm) algorithm for partial redundancy elimination so that the generalized version also performs strength reduction. although knoop et al. have themselves extended lcm to strength reduction with their lazy strength reduction algorithm, our approach differs substantially from theirs and results in a broader class of candidate expressions, stronger safety guarantees, and the elimination of the potential for performance loss instead of gain. also, our general framework is not limited to traditional strength reduction, but rather can also handle a wide variety of optimizations in which data-flow information enables the replacement of a computation with a less expensive one. as a simple example, computations can be hoisted to points where they are constant foldable. another example we sketch is the hoisting of polymorphic operations to points where type analysis provides leverage for optimization. our general approach consists of placing computations so as to minimize their cost, rather than merely their number. so long as the cost differences between flowgraph nodes obey a certain natural constraint, a cost-optimal code motion transformation that does not unnecessarily prolong the lifetime of temporary variables can be found using techniques completely analogous to lcm. specifically, the cost differences can be discovered using a wide variety of forward data-flow analyses in a manner which we describe. max hailperin the game of life with ada tasks the game of life is a popular programming project, both in computer science courses and on the world wide web. it is commonly implemented with a bounded array of values representing the cells of life. this project provides an appealing visual representation of cells living or dying over repeated generations. an ada implementation using tasks allows a number of interesting features. state information can be retained between calls. a generic declaration defines a conceptually unbounded grid, which a distributed implementation with tasks can potentially implement. tasking communication provides an exercise in both inter-cell communication and in deadlock prevention. lastly, the independence of cell communication can allow for relaxing the synchronization of cell generations. gertrude levine a survey of object oriented analysis and design methods (tutorial) martin fowler factors affecting readability no one has found a way to really help writers create readable prose. robert gunning developed a method for calculating the 'fog index' and rudolph flesch worked out more than one formula for measuring the simplicity of writing. by one of flesch's formulas (the one without personal pronouns), ronald s. lemos in the february, 1985 issue of communications of the acm (cacm) was able to prove that cacm required two less years of school to read than datamation. statistics can prove anything. i have no idea what sophomore in high school could read the cacm cover to cover and even understand most of it. flesch's book 'the art of plain talk' was given to me at a yourdon systems analysis course.
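the readability scores mentioned in this entry are simple arithmetic over word, sentence, and syllable counts; a rough python sketch of the gunning fog index (the syllable heuristic is a crude vowel-group count, not gunning's or flesch's exact procedure, and the sample text is hypothetical):

    import re

    def syllables(word):
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def fog_index(text):
        # 0.4 * (average sentence length + 100 * fraction of 3+ syllable words)
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[a-zA-Z']+", text)
        complex_words = [w for w in words if syllables(w) >= 3]
        return 0.4 * (len(words) / len(sentences) + 100 * len(complex_words) / len(words))

    sample = ("the quality of a user manual is affected by the degree of matching "
              "between the manual and the reader's needs. short sentences help.")
    print(round(fog_index(sample), 1))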
the instructor handed it to each of us, saying something like "read this and you'll be a manager in no time" (supposedly, management is handed to the least efficient person who can also write well). the book is full of examples, mostly journalistic, showing how good writers evoke human interest. of course, these writers had human events, thoughts and feelings as their focal points, not software. i doubt whether any of the graduates of that week ever used flesch as a reference for grading their own documentation. how would bernard shaw have documented software? or mingus played it? this paper addresses these burning issues. chris hallgren let's evaluate performance algebraically marco bernardo x window system administration an introduction to x structure, configuration and customization jay ts visiclang - a visible compiler for clang d. resler establishing experience factories at daimler-benz: an experience report frank houdek kurt schneider eva wieser network objects andrew birrell greg nelson susan owicki edward wobber fortran 95 michael metcalf programming as driving: unsafe at any speed? christopher fry henry lieberman software engineering in the year 2001: the scientific engineering of software (panel session) michael sintzoff comparing mark-and-sweep and stop-and-copy garbage collection stop-and-copy garbage collection has been preferred to mark-and-sweep collection in the last decade because its collection time is proportional to the size of reachable data and not to the memory size. this paper compares the cpu overhead and the memory requirements of the two collection algorithms extended with generations, and finds that mark-and-sweep collection requires at most a small amount of additional cpu overhead (3-6%) but requires an average of 20% (and up to 40%) less memory to achieve the same page fault rate. the comparison is based on results obtained using trace-driven simulation with large common lisp programs. benjamin zorn software experience with concurrent c and lisp in a distributed system this paper discusses a software system made possible by the current advances in distributed processing, high-speed computer networks, and concurrent language implementation. the system discussed herein consists of several software layers. at the core of the system lies the unix operating system. this particular operating system provides an extensible set of communication domains which support communication within one unix system as well as within several computers inside a local area network. concurrent c, a concurrent programming language, provides the capability for dynamically creating processes within a set of interconnected computers. the user interface to the system consists of a lisp interpreter written in concurrent c. the resulting system uses a data-driven mechanism such that processes can start executing once their respective input data becomes available. the system has been used for testing tasks for the real-time robotics vision and for the simulation of large vlsi designs. roberto salama wentai liu ronald s. gyurcsik product review: happy hacking keyboard jeremy dinsel transitioning legacy assets to a product line architecture a successful software system evolves over time, but this evolution often occurs in an ad-hoc fashion. one approach to structure system evolution is the concept of software product lines where a core architecture supports a variety of application contexts.
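returning briefly to the garbage-collection comparison a few entries above, the mark-and-sweep half can be sketched in a few lines of python (the object graph and roots are hypothetical; generations, as studied in that paper, are not modelled):

    # heap as an explicit graph: object name -> names it references.
    heap = {
        "stack_frame": ["list_head"],
        "list_head": ["node_a"],
        "node_a": ["node_b"],
        "node_b": [],
        "orphan": ["orphan_child"],      # unreachable from the roots
        "orphan_child": [],
    }
    roots = ["stack_frame"]

    def mark(roots):
        marked, worklist = set(), list(roots)
        while worklist:
            obj = worklist.pop()
            if obj not in marked:
                marked.add(obj)
                worklist.extend(heap[obj])
        return marked

    def sweep(marked):
        for obj in list(heap):
            if obj not in marked:
                del heap[obj]            # reclaim everything unmarked

    sweep(mark(roots))
    print(sorted(heap))                  # orphan and orphan_child are gone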
however, in practice, the high cost and high risks of redevelopment as well as the substantial investments made to develop the existing systems most often mandate significant leverage of the legacy assets. yet, there is little guidance in the literature on how to transition legacy assets into a product line set-up. in this paper, we present re-place, an approach developed to support the transition of existing software assets towards a product line architecture while taking into account anticipated new system variants. we illustrate this approach with its application in an industrial setting. joachim bayer jean-françois girard martin wurthner jean-marc debaud martin apel a mechanism for environment integration this paper describes research associated with the development and evaluation of odin-an environment integration system based on the idea that tools should be integrated around a centralized store of persistent software objects. the paper describes this idea in detail and then presents the odin architecture, which features such notions as the typing of software objects, composing tools out of modular tool fragments, optimizing the storage and rederivation of software objects, and isolating tool interconnectivity information in a single centralized object. the paper then describes some projects that have used odin to integrate tools on a large scale. finally, it discusses the significance of this work and the conclusions that can be drawn about superior software environment architectures. geoffrey clemm leon osterweil the second international symposium on constructing software engineering tools (coset2000) (workshop session) jonathan gray louise scott ian ferguson visual aid for fortran program debugging this paper presents a new type of debugging tool for fortran programs. we call this the "dock" system, referring to the dock where ships stay during repairs. dock provides new debugging functions such as slow display of program execution. major functions and a brief indication of the methods of implementation are given in this paper. four "execution modes" control execution speed. execution is displayed on the screen in full screen mode. the source program listing is displayed with the executing statement shown in a different color or brightness. at certain places, the program stops temporarily in a status called "stationary mode". in the stationary mode, several debugging functions are available. k. takahashi t. aso m. kobayashi apl2 syntax: is it really right to left? this paper discusses the scanning of apl2 statements and shows that it is dynamic and that the language is inherently interpretive. james a. brown issues associated with porting applications to the compartmented mode workstation harvey h. rubinovitz self: the power of simplicity self is a new object-oriented language for exploratory programming based on a small number of simple and concrete ideas: prototypes, slots, and behavior. prototypes combine inheritance and instantiation to provide a framework that is simpler and more flexible than most object-oriented languages. slots unite variables and procedures into a single construct. this permits the inheritance hierarchy to take over the function of lexical scoping in conventional languages. finally, because self does not distinguish state from behavior, it narrows the gaps between ordinary objects, procedures, and closures. self's simplicity and expressiveness offer new insights into object-oriented computation. david ungar randall b. 
smith the c programming language rex jaeschke 1983 invited address solved problems, unsolved problems and non-problems in concurrency this is an edited transcript of a talk given at last year's conference. to preserve the flavor of the talk and the questions, i have done very little editing---mostly eliminating superfluous words and phrases, correcting especially atrocious grammar, and making the obvious changes needed when replacing slides by figures. the tape recorder was not functioning for the first few minutes, so i had to recreate the beginning of the talk. leslie lamport efficient implementation of adaptive software adaptive programs compute with objects, just like object-oriented programs. each task to be accomplished is specified by a so-called propagation pattern which traverses the receiver object. the object traversal is a recursive descent via the instance variables where information is collected or propagated along the way. a propagation pattern consists of (1) a name for the task, (2) a succinct specification of the parts of the receiver object that should be traversed, and (3) code fragments to be executed when specific object types are encountered. the propagation patterns need to be complemented by a class graph which defines the detailed object structure. the separation of structure and behavior yields a degree of flexibility and understandability not present in traditional object-oriented languages. for example, the class graph can be changed without changing the adaptive program at all. we present an efficient implementation of adaptive programs. given an adaptive program and a class graph, we generate an efficient object-oriented program, for example, in c++. moreover, we prove the correctness of the core of this translation. a key assumption in the theorem is that the traversal specifications are consistent with the class graph. we prove the soundness of a proof system for conservatively checking consistency, and we show how to implement it efficiently. jens palsberg cun xiao karl lieberherr a linear-time algorithm for computing the memory access sequence in data- parallel programs data-parallel languages, such as high performance fortran, are widely regarded as a promising means for writing portable programs for distributed-memory machines. novel features of these languages call for the development of new techniques in both compilers and run-time systems. in this paper, we present an improved algorithm for finding the local memory access sequence in computations involving regular sections of arrays with cyclic(k) distributions. after establishing the fact that regular section indices correspond to elements of an integer lattice, we show how to find a lattice basis that allows for simple and fast enumeration of memory accesses. the complexity of our algorithm is shown to be lower than that of the previous solution for the same problem. in addition, the experimental results demonstrate the efficiency of our method in practice. ken kennedy nenad nedeljkovic ajay sethi compilation techniques for block-cyclic distributions compilers for data-parallel languages such as fortran d and high-performance fortran use data alignment and distribution specifications as the basis for translating programs for execution on mimd distributed-memory machines. this paper describes techniques for generating efficient code for programs that use block-cyclic distributions. 
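the entry above on the local memory access sequence for cyclic(k) distributions derives the sequence from an integer-lattice basis; purely as a brute-force reference for what is being computed (not the linear-time algorithm itself), ownership and local offsets under a cyclic(b) distribution can be enumerated as in the sketch below (names and 0-based conventions are assumptions of this sketch).

```python
# brute-force reference for cyclic(b) ownership and the local access
# sequence of a regular section; the paper above computes this without
# scanning, using an integer-lattice basis.
def owned_indices(n, nprocs, b, pid):
    """global indices (0-based) owned by processor pid under cyclic(b)."""
    return [i for i in range(n) if (i // b) % nprocs == pid]

def local_access_sequence(n, nprocs, b, pid, lo, hi, step):
    """local offsets touched on pid by the regular section lo:hi:step
    (hi exclusive), with owned blocks packed contiguously in local memory."""
    local_of = {g: l for l, g in enumerate(owned_indices(n, nprocs, b, pid))}
    return [local_of[g] for g in range(lo, hi, step) if g in local_of]

print(owned_indices(20, 3, 2, 1))                    # [2, 3, 8, 9, 14, 15]
print(local_access_sequence(20, 3, 2, 1, 0, 20, 3))  # [1, 3, 5]
```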
these techniques can be applied to programs with symbolic loop bounds, symbolic array dimensions, and loops with non-unit strides. we present algorithms for computing the data elements that need to be communicated among processors both for loops with unit and non-unit strides, a linear-time algorithm for computing the memory access sequence for loops with non-unit strides, and experimental results for a hand-compiled test case using block-cyclic distributions. seema hiranandani ken kennedy john mellor-crummey ajay sethi an adaptive load balancing method in the computational field model minoru uehara mario tokoro an approach to the automatic generation of software functional architecture carol s. smidts on the evolution of graphical notation for program design p.-n. robillard auto-loading kernel modules: make your system leaner by modularizing the kernel. preston f. crow off-and-on tokens chris clark a text-based representation for program variants k. narayanaswamy performance measurement of parallel ada: an applications based approach andre goforth philippe collard matthew marquardt use of virus functions to provide a virtual apl interpreter under user control by inserting a simple statement at the beginning of an apl function, it is possible to cause the execution of a reconstructed version of the function, rather than of its original statements. this reconstruction can simulate the effect of an arbitrary change in the apl language; thus, the apl programmer can have available a private, "virtual" version of the interpreter. we show some general- and special-purpose functions which can be used for this purpose, and give elementary examples of their application. j. b. gunn local states in distributed computations: a few relations and formulas eddy fromentin michel raynal the calculation of easter ryan stansifer ada design language/case developers matrix judy kerner on the inadequacies of data state space sampling as a measure of the trustworthiness of programs larry j. morell jeffrey voas remark on "fast floating-point processing in common lisp" we explain why we feel that the comparison between common lisp and fortran in a recent article by fateman et al. in this journal is not entirely fair. j. k. reid a novel codesign methodology for real-time embedded cots multiprocessor-based signal processing systems the process of designing large real-time embedded signal processing systems is plagued by a lack of coherent specification and design methodology (sdm). powerful frameworks exist for each individual phase of this canonical design process, but no single methodology exists which enables these frameworks to work together coherently, i.e., allowing the output of a framework used in one phase to be consumed by a different framework used in the next phase. a specification and design methodology (sdm) known as "search-explore-refine" (ser) was developed by gajski, vahid, et al. for an application and technology domain that is different from that of real-time embedded signal processing systems implemented with commercial-off-the-shelf multiprocessing hardware and software. however, due to similarities between the fundamental design objects of these two domains, a new sdm was developed and prototyped based on ser known as the magic sdm. the "tools and rules" of the magic sdm are presented. the magic sdm achieves a high degree of model continuity based largely on its use of standards-based computation (vsipl) and communication (mpi) middleware. randall s. janka linda m.
wills a compacting incremental collector and its performance in a production quality compiler martin larose marc feeley yet another survey of ada training r. w. sebesta prototype software complexity metrics tools c. c. cook m. nanja using formal methods to reason about architectural standards kevin j. sullivan john socha mark marchukov interprocedural optimization: eliminating unnecessary recompilation michael burke linda torczon java, vhdl-ams, ada or c for system level specifications? wolfgang nebel defining families - commonality analysis mark a. ardis david a. cuka a comparative analysis of functional correctness douglas d. dunlop victor r. basili stop the presses corporate linux journal staff developing hp's network advisor using smalltalk in a large project team tom wisdom the visual compiler-compiler sic (abstract) lothar schmitz problem oriented protocol design the prevention of deadlock in certain types of distributed simulation systems requires special synchronization protocols. these protocols often create an excessive amount of performance-degrading communication; yet a protocol with the minimum amount of communication may not lead to the fastest network finishing time. we propose a protocol that attempts to balance the network's need for auxiliary synchronization information with the cost of providing that information. using an empirical study we demonstrate the efficiency of this protocol. also, we show that the synchronization requirements at different interfaces may vary; an integral part of our proposal assigns a protocol to an interface according to the interface's synchronization needs. david m. nicol paul f. reynolds panel on making products john uebbing feedback control real-time scheduling: support for performance guarantees in unpredictable environments chenyang lu john a. stankovic tarek abdelzaher sang h. son gang tao a definition optimization technique used in a code translation algorithm data flow analysis is used to optimize variable definitions in a program that translates microprocessor object code to a higher order language. david m. dejean george w. zobrist prototypes as assets, not toys: why and how to extract knowledge from prototypes kurt schneider on the assessment of ada performance john c. knight processor reordering algorithms toward efficient gen_block redistribution saeri lee hyun-gyoo yook mi-soo koo myong-soon park an incremental programming environment this document describes an incremental programming environment (ipe) based on compilation technology, but providing facilities traditionally found only in interpretive systems. ipe provides a comfortable environment for a single programmer working on a single program. in ipe the programmer has a uniform view of the program in terms of the programming language. the program is manipulated through a syntax-directed editor and its execution is controlled by a debugging facility, which is integrated with the editor. other tools of the traditional tools cycle (translator, linker, loader) are applied automatically and are not visible to the programmer. the only interface to the programmer is the user interface of the editor. peter h. feiler raul medina-mora at the forge: advanced "new" labels reuven m. lerner combining tasking and transaction this position paper discusses the issues in design and development of a transaction support for ada 95. transactions and other fault tolerance mechanisms are reviewed, and their applicability in a concurrent programming language is analyzed. 
possible ways of integration are presented, and implementation problems are discussed. jörg kienzle little giants - the new fortran subsets loren meissner a design framework to efficiently explore energy-delay tradeoffs comprehensive exploration of the design space parameters at the system level is a crucial task to evaluate architectural tradeoffs accounting for both energy and performance constraints. in this paper, we propose a system-level design methodology for the efficient exploration of the memory architecture from the energy-delay combined perspective. the aim is to find a sub-optimal configuration of the memory hierarchy without performing the exhaustive analysis of the parameters space. the target system architecture includes the processor, separated instruction and data level-one caches, the main memory, and the system buses. the methodology is based on the sensitivity analysis of the optimization function with respect to the tuning parameters of the cache architecture (mainly cache size, block size and associativity). the effectiveness of the proposed methodology has been demonstrated through the design space exploration of a real-world example: a microsparc2-based system running the mediabench suite. experimental results have shown an optimization speedup of 329 times with respect to the full search, while the near-optimal system-level configuration is within 10% of the optimal configuration found by the full search. william fornaciari donatella sciuto cristina silvano vittorio zaccaria an automata-theoretic approach to modular model checking in modular verification the specification of a module consists of two parts. one part describes the guaranteed behavior of the module. the other part describes the assumed behavior of the system in which the module is interacting. this is called the assume-guarantee paradigm. in this paper we consider assume-guarantee specifications in which the guarantee is specified by branching temporal formulas. we distinguish between two approaches. in the first approach, the assumption is specified by branching temporal formulas too. in the second approach, the assumption is specified by linear temporal logic. we consider guarantees in ∀ctl and ∀ctl*. we develop two fundamental techniques: building maximal models for ∀ctl and ∀ctl* formulas and using alternating automata to obtain space-efficient algorithms for fair model checking. using these techniques we classify the complexity of satisfiability, validity, implication, and modular verification for ∀ctl and ∀ctl*. we show that modular verification is pspace-complete for ∀ctl and is expspace-complete for ∀ctl*. we prove that when the assumption is linear, these bounds hold also for guarantees in ctl and ctl*. on the other hand, the problem remains expspace-hard even when we restrict the assumptions to ltl and take the guarantees as a fixed ∀ctl formula. orna kupferman moshe y. vardi panel: research and development issues in software reliability engineering michael lyu hans: an open linking engine based on microsoft ole (abstract) jesper lilleso jorgensen peter bak nielsen
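the modular model checking entry above deals with assume-guarantee specifications and automata-theoretic constructions; as background only, the non-modular special case of checking an invariant, i.e. the ∀ctl formula ag p, reduces to a reachability search over the state graph, sketched here over a toy transition system (this is not the algorithm of that paper).

```python
# explicit-state check of the invariant "ag p": p must hold in every
# reachable state of a toy kripke structure. background sketch only.
from collections import deque

def check_ag(initial, successors, p):
    """return (True, None) if p holds on all reachable states,
    otherwise (False, counterexample_state)."""
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        if not p(s):
            return False, s
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True, None

succ = lambda s: [(s + 1) % 6, (s + 2) % 6]      # states are integers mod 6
print(check_ag(0, succ, lambda s: s != 4))       # (False, 4): 4 is reachable
```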
the fortran 90 bindings for opengl it is important to provide a good fortran interface to opengl and related libraries for scientific visualization in mathematical software. opengl has had fortran 77 bindings for some time; however, these bindings rely upon several extensions to the fortran 77 standard. by using the new features of fortran 90 it is possible to define bindings for opengl that do not depend on any extensions to the standard and provide access to the full functionality of opengl. william f. mitchell an architecture for flexible, evolvable process-driven user-guidance environments complex toolsets can be difficult to use. user interfaces can help by guiding users through the alternative choices that might be possible at any given time, but this tends to lock users into the fixed interaction models dictated by the user-interface designers. alternatively, we propose an approach where the tool utilization model is specified by a process, written in a process definition language. our approach incorporates a user-interface specification that describes how the user-interface is to respond to, or reflect, progress through the execution of the process definition. by not tightly binding the user-guidance process, the associated user-interfaces, and the toolset, it is easy to develop alternative processes that provide widely varying levels and styles of guidance and to be responsive to evolution in the processes, user interfaces, or toolset. in this paper, we describe this approach for developing process-driven user-guidance environments, a loosely coupled architecture for supporting this separation of concerns, and a generator for automatically binding the process and the user interface. we report on a case study using this approach. although this case study used a specific process definition language and a specific toolset, the approach is applicable to other process definition languages and toolsets, provided they meet some basic, sound software engineering requirements. timothy j. sliski matthew p. billmers lori a. clarke leon j. osterweil generalized algorithmic debugging and testing this paper presents a method for semi-automatic bug localization, generalized algorithmic debugging, which has been integrated with the category partition method for functional testing. in this way the efficiency of the algorithmic debugging method for bug localization can be improved by using test specifications and test results. the long-range goal of this work is a semi-automatic debugging and testing system which can be used during large-scale program development of nontrivial programs. the method is generally applicable to procedural languages and is not dependent on any ad hoc assumptions regarding the subject program. the original form of algorithmic debugging, introduced by shapiro, was however limited to small prolog programs without side-effects, but has later been generalized to concurrent logic programming languages. another drawback of the original method is the large number of interactions with the user during bug localization. to our knowledge, this is the first method which uses category partition testing to improve the bug localization properties of algorithmic debugging. the method can avoid irrelevant questions to the programmer by categorizing input parameters and then matching these against test cases in the test database. additionally, we use program slicing, a data flow analysis technique, to dynamically compute which parts of the program are relevant for the search, thus further improving bug localization. we believe that this is the first generalization of algorithmic debugging for programs with side-effects written in imperative languages such as pascal. these improvements together make it more feasible to debug larger programs.
however, additional improvements are needed to make it handle pointer-related side-effects and concurrent pascal programs. a prototype generalized algorithmic debugger for a pascal subset without pointer side-effects and a test case generator for application programs in pascal, c, dbase, and lotus have been implemented. peter fritzson nahid shahmehri mariam kamkar tibor gyimothy context-insensitive alias analysis reconsidered recent work on alias analysis in the presence of pointers has concentrated on context-sensitive interprocedural analyses, which treat multiple calls to a single procedure independently rather than constructing a single approximation to a procedure's effect on all of its callers. while context-sensitive modeling offers the potential for greater precision by considering only realizable call-return paths, its empirical benefits have yet to be measured. this paper compares the precision of a simple, efficient, context-insensitive points-to analysis for the c programming language with that of a maximally context-sensitive version of the same analysis. we demonstrate that, for a number of pointer-intensive benchmark programs, context-insensitivity exerts little to no precision penalty. we also describe techniques for using the output of context-insensitive analysis to improve the efficiency of context-sensitive analysis without affecting precision. erik ruf filtering import: a basic mechanism for reusability m. ancona p. nieddu what is java, really? let's skip the hype. this article explains what java is and points you to the right places if you want to dive in. rudi cilibrasi concepts of object-oriented programming (abstract) raimund k. ege reusable ada packages for information systems development (rapid): an operational center of excellence for software reuse t. vogelsong object oriented programming: the fundamentals richie bielak linda in context how can a system that differs sharply from all currently fashionable approaches score any kind of success? here's how. nicholas carriero david gelernter standardized extensions to modula-2 c. pronk r. j. sutcliffe m. schönhacker a. wiedemann wait-free synchronization a wait-free implementation of a concurrent data object is one that guarantees that any process can complete any operation in a finite number of steps, regardless of the execution speeds of the other processes. the problem of constructing a wait-free implementation of one data object from another lies at the heart of much recent work in concurrent algorithms, concurrent data structures, and multiprocessor architectures. first, we introduce a simple and general technique, based on reduction to a consensus protocol, for proving statements of the form, "there is no wait-free implementation of x by y." we derive a hierarchy of objects such that no object at one level has a wait-free implementation in terms of objects at lower levels. in particular, we show that atomic read/write registers, which have been the focus of much recent attention, are at the bottom of the hierarchy: they cannot be used to construct wait-free implementations of many simple and familiar data types. moreover, classical synchronization primitives such as test&set and fetch&add, while more powerful than read and write, are also computationally weak, as are the standard message-passing primitives. second, nevertheless, we show that there do exist simple universal objects from which one can construct a wait-free implementation of any sequential object. maurice herlihy
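the wait-free synchronization entry above places primitives such as test&set strictly above read/write registers in its hierarchy; the standard illustration of that gap is that one test&set bit plus two single-writer registers solves wait-free consensus for two processes. a minimal sketch of that construction follows (the lock below only models the atomicity of the hardware test&set instruction, it is not mutual exclusion around the whole operation).

```python
import threading

class TwoProcessConsensus:
    """wait-free 2-process consensus from a single test&set bit plus two
    single-writer registers, illustrating the hierarchy gap described in
    the entry above. no process ever waits for the other."""
    def __init__(self):
        self._bit = False
        self._atomic = threading.Lock()   # stands in for hardware atomicity only
        self._proposal = [None, None]     # single-writer registers

    def _test_and_set(self):
        with self._atomic:
            old, self._bit = self._bit, True
            return old

    def decide(self, pid, value):
        self._proposal[pid] = value       # announce own proposal first
        if not self._test_and_set():
            return value                  # winner: decide own value
        return self._proposal[1 - pid]    # loser: adopt the winner's value

c = TwoProcessConsensus()
print(c.decide(0, "x"), c.decide(1, "y"))  # both processes decide "x"
```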
the flow of control notations pancode and boxcharts d. jonsson a theoretical comparison between mutation and data flow based test adequacy criteria aditya p. mathur weichen e. wong making real-time reactive systems reliable keith marzullo mark wood four dark corners of requirements engineering research in requirements engineering has produced an extensive body of knowledge, but there are four areas in which the foundation of the discipline seems weak or obscure. this article shines some light in the "four dark corners," exposing problems and proposing solutions. we show that all descriptions involved in requirements engineering should be descriptions of the environment. we show that certain control information is necessary for sound requirements engineering, and we explain the close association between domain knowledge and refinement of requirements. together these conclusions explain the precise nature of requirements, specifications, and domain knowledge, as well as the precise nature of the relationships among them. they establish minimum standards for what information should be represented in a requirements language. they also make it possible to determine exactly what it means for requirements engineering to be successfully completed. pamela zave michael jackson where does reuse start? will tracz a systematic kernel development jørgen f. sogaard-anderson camilla Østerberg rump hans henrik løvengreen pool: a persistent object-oriented language c. j. harrison majid naeem bounds on the shared memory requirements for long-lived & adaptive objects (extended abstract) in this paper we prove: for any constant d there is a large enough n such that there is no long-lived adaptive implementation of collect or renaming in the read/write model with n processes that uses d or fewer mwmr registers. in other words, there is no implementation of a long-lived and adaptive renaming or collect object in the atomic read/write model that uses o(1) multi-writer-multi-reader registers and any number of single-writer-multi-reader registers. in 1980 burns and lynch [1] proved that at least n multi-writer-multi-reader (mwmr) registers are necessary in any mutual exclusion algorithm that uses only mwmr registers (i.e., atomic registers). it is also relatively easy to see that any adaptive non-trivial algorithm uses at least one multi-writer-multi-reader (mwmr) register even when there are n single-writer-multi-reader (swmr) registers. here we extend the techniques of burns and lynch and prove that adaptive algorithms that use both swmr and mwmr registers, such as collect and renaming, need, in addition to the n swmr registers, a non-constant f(n) number of mwmr registers. yehuda afek pazi boxer dan touitou quality guidelines = designer metrics in spite of the significant body of research on traditional source code metrics, there has been a general failure to produce conclusive evidence as to their effectiveness for measuring software quality. we describe and recommend a potentially much more powerful and sensitive quality assessment alternative, software quality guidelines. software quality guidelines are presented as "designer metrics", that is, user-defined rules or constraints relating to measurable features of a program's structure, semantics, and syntax that affect its quality. to provide a methodology for designing, applying, and validating software quality guidelines, we recommend and briefly summarize ieee standard 1061.
this standard gives a process for constructing and implementing a software quality metrics framework that can be tailor-made to meet quality requirements for a particular project and/or organization. our paper then demonstrates how software quality guidelines fit within the ieee framework and gives an example illustrating how user-defined guidelines can be applied to evaluate or assess the quality of an ada source unit. this guideline-based assessment of quality is then compared with an analysis based on traditional mccabe and halstead metrics. finally, we introduce a tool being developed by saic, called adarevu, as an effective mechanism for implementing and applying user-defined quality guidelines for ada source code. david a. workman richard crutchfield software patterns douglas c. schmidt mohamed fayad ralph e. johnson future directions in ada - distributed execution and heterogeneous language interoperability toolsets while the ada community has seen and embraced the development of ada 95 [1], with its enhanced object oriented features and various annexes, much of the rest of the commercial world continues to ignore ada as a viable tool for software system building. efforts have been ongoing for some time to provide rationale showing the superiority of ada 95 over other choices such as c and c++, but with limited success in the commercial marketplace. in this paper, we put forward the idea that the ada community should focus on: 1) interoperability with components built in other languages, and 2) convenient, easy to use toolsets for composing distributed systems from heterogeneous language components. anthony gargaro gary smith ronald j. theriault richard a. volz raymond waldrop improving ipc by kernel design jochen liedtke linux system administration: undelete command mark komarinski eraser: a dynamic data race detector for multithreaded programs multithreaded programming is difficult and error prone. it is easy to make a mistake in synchronization that produces a data race, yet it can be extremely hard to locate this mistake during debugging. this article describes a new tool, called eraser, for dynamically detecting data races in lock-based multithreaded programs. eraser uses binary rewriting techniques to monitor every shared-memory reference and verify that consistent locking behavior is observed. we present several case studies, including undergraduate coursework and a multithreaded web search engine, that demonstrate the effectiveness of this approach. stefan savage michael burrows greg nelson patrick sobalvarro thomas anderson fault-tolerant computing based on mach ozalp babaoglu
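the eraser entry above enforces a locking discipline by intersecting, per shared variable, the set of locks held at each access (the lockset refinement); stripped of the binary-rewriting machinery, that core bookkeeping can be sketched with explicitly instrumented accesses as below (variable and lock names are illustrative, and the full eraser state machine for initialization and read-sharing is omitted).

```python
# lockset refinement at the core of the eraser approach described above:
# keep, per shared variable, the set of locks consistently held at every
# access; an empty candidate set signals a possible data race.
class LocksetChecker:
    def __init__(self):
        self.candidates = {}              # variable -> candidate lockset

    def access(self, var, locks_held):
        held = set(locks_held)
        if var not in self.candidates:
            self.candidates[var] = held   # first access initializes c(v)
        else:
            self.candidates[var] &= held  # refine c(v) by intersection
        if not self.candidates[var]:
            print(f"warning: possible data race on {var!r}")

chk = LocksetChecker()
chk.access("counter", {"mu"})             # thread 1 holds mu
chk.access("counter", {"mu", "m2"})       # thread 2 holds mu and m2: still ok
chk.access("counter", {"m2"})             # no common lock left: warning
```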
correspondent computing in the context of the preliminary study reported here, correspondent computing is loosely defined as the generation of effects (results or outcomes) which are correspondent to the effect of an operation in a computer program. the operation whose effect is the target of correspondent computing is called the primary operation. operations that generate correspondent effects are called correspondent operations. the philosophy of correspondence is that if executing an operation can produce a significant effect, an equivalently significant effect may be generated by another semantically correspondent operation. in general, correspondent operations can be categorized as the following three types. 1) reciprocal type. the behaviour of a reciprocal operation is semantically the inverse of the primary operation. 2) duplicate type. the behaviour of a duplicate operation is semantically equivalent to the primary operation. 3) residual (or complementary) type. an operation that fulfills the definition of correspondent operations but is neither a duplicate nor a reciprocal is termed a residual operation. all three types of correspondent operations can be easily derived from the primary operation on the basis of correspondence. we believe that correspondent computing has the potential of becoming a powerful tool and can be used in areas such as fault tolerance, artificial intelligence, graphics, database systems, etc. in the following paragraphs, we discuss the use of correspondent computing for error detection. from the viewpoint of software fault tolerance, the correct behavior of some operations is more critical than that of others, although every operation is a constituent part of the program. therefore, the effect generated by these operations must be examined, and hence they are the primary operations - the targets of correspondent computing. using correspondent computing, the effect of a primary operation can then be checked by comparing it with the effects generated by its correspondent operations. the specification of the comparative test is based on the precise relationship between the primary and correspondent operations. if an error occurs, the relationship between the two effects would not match that of the two operations. the greatest advantage of this error detection scheme is the simple and concise specification for the comparative test. as an example, suppose the program sum will sum up an array of n integer numbers. two correspondent operations are implemented for the purpose of error detection. array pa contains the original data of n integer numbers. pen-nan lee requirements-driven software test: a process-oriented approach muthu ramachandran a customizable substrate for concurrent languages we describe an approach to implementing a wide range of concurrency paradigms in high-level (symbolic) programming languages. the focus of our discussion is sting, a dialect of scheme that supports lightweight threads of control and virtual processors as first-class objects. given the significant degree to which the behavior of these objects may be customized, we can easily express a variety of concurrency paradigms and linguistic structures within a common framework without loss of efficiency. unlike parallel systems that rely on operating system services for managing concurrency, sting implements concurrency management entirely in terms of scheme objects and procedures. it, therefore, permits users to optimize the runtime behavior of their applications without requiring knowledge of the underlying runtime system. this paper concentrates on (a) the implications of the design for building asynchronous concurrency structures, (b) organizing large-scale concurrent computations, and (c) implementing robust programming environments for symbolic computing. suresh jagannathan jim philbin
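the correspondent computing entry above sketches its error-detection idea with a program sum over an array pa; to make the duplicate and reciprocal correspondents concrete, a small python rendering of that example (not the original implementation) could look like this.

```python
# sketch of the error-detection scheme from the correspondent computing
# entry: check the primary operation against a duplicate correspondent
# (same effect, different order) and a reciprocal correspondent (the inverse).
def primary_sum(pa):
    total = 0
    for x in pa:                 # primary operation: left-to-right summation
        total += x
    return total

def duplicate_sum(pa):
    total = 0
    for x in reversed(pa):       # duplicate correspondent: reversed order
        total += x
    return total

def reciprocal_check(pa, result):
    for x in pa:                 # reciprocal correspondent: undo the summation
        result -= x
    return result == 0

pa = [3, 1, 4, 1, 5]
s = primary_sum(pa)
assert s == duplicate_sum(pa) and reciprocal_check(pa, s)  # comparative test
```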
reclamation of memory for dynamic ada tasking while working on several large-scale ada projects, i have found what i consider a major problem with the ada tasking mechanism, that is, the reclamation of memory allocated to a dynamic task after the task has been terminated. i have written my own utility package to specifically deallocate the memory assigned to any such task. the problem arises from the fact that dynamically created tasks are generated by the use of allocators. the ada language reference manual (lrm) states "an implementation must guarantee that any object created by the evaluation of an allocator remains allocated for as long as this object or one of its subcomponents is accessible directly or indirectly, that is, as long as it can be denoted by some name." [1] this stipulation presents an interesting problem when applied to tasks. if the access type of a task is created within a library unit, memory allocated to execute this task will remain allocated for the entire span of the execution of the program, even though the actual task object may have been terminated. diagram 1 illustrates that memory is retained after a task is terminated. a more truly dynamic tasking model is needed in the ada language. the model which is provided within the language does allow for the creation of tasks at runtime. this facility is useful when creating tasks that exist for the lifetime of the program, or for applications where memory management is not relevant. the problem with the current tasking model is that it does not allow a task, whose master is a library unit, to be created dynamically, to perform a specific action, and then to terminate and release all the memory reserved for the execution of the task. this failure to properly deallocate memory seems to be a problem inherent in the ada language. it was at first believed that the algorithms used in ada compilers to implement dynamic allocation of objects would enable them to perform sufficient garbage collection to improve management of the heap at runtime. this, however, did not prove to be the case. an optional unchecked_deallocation procedure was added to the standard predefined library units that may be delivered with an ada development system. the unchecked_deallocation procedure does solve the problem of most storage reclamation, but the developers of the language did not allow memory allocated to dynamically created task objects to be reclaimed in a similar manner. the lrm states "if x designates a task object, the call free(x) has no effect on the task designated by the value of this task object." [2] i suspect that this specification was an attempt to prevent deallocating a currently active task. because the language implementors are specifically prohibited from properly deallocating dynamic task memory, it is left to the individual ada programmer to implement his or her own solution. for my solution i have identified two major functions which any algorithm attempting to solve this problem must address. first, the algorithm must have the ability to dynamically create tasks whose masters are blocks that can be exited, reclaiming the memory used to execute the tasks. second, the algorithm must increase the scope of this task to areas outside of this block to allow task communication. this increase of scope must be done within the confines of the ada language so that the utility is portable. the concept behind the utility is the declaration of an access type of a task type within a block statement or a subprogram that can be exited after the completion of the task (see diagram 2). i will refer to this block as the master block. since the access type is declared inside the master block, any task that is created by an allocator of that access type (referred to as the dynamic task) will have the master block as its master. after the task is terminated, the block is exited and all the storage allocated to that task is reclaimed.
note that if the environment task (the task that calls the main program) were to call the master block, all other activity within the environment task would be suspended until the dynamic task completed. upon completion, control would be released from the master block and given back to the environment task. the same effect would be exhibited by any other task calling the master block. obviously, this reduction of execution to a fully sequential path is not acceptable; a second task must be created which can call the master block. i will refer to this task as the base task. task communication can be accomplished by use of the unchecked_conversion utility (see diagram 2). to provide visibility to the dynamically created task, the access type within the master block is not checked when converting it to an object of a second access task type whose master is the program unit. this point will be elaborated on in the task communication section. several more complex questions remain in the description of the algorithm. how does the algorithm create more than one task? how does the algorithm determine when a task is terminated? how does the algorithm control access to the pointers to prevent access to deallocated storage? how does the algorithm allow for rendezvous with these tasks? these questions are addressed individually below. philip j. lefebvre lisp view: using common lisp and clos to hide a c toolkit hans muller the interactive performance of slim: a stateless, thin-client architecture taking the concept of thin clients to the limit, this paper proposes that desktop machines should just be simple, stateless i/o devices (display, keyboard, mouse, etc.) that access a shared pool of computational resources over a dedicated interconnection fabric --- much in the same way as a building's telephone services are accessed by a collection of handset devices. the stateless desktop design provides a useful mobility model in which users can transparently resume their work on any desktop console. this paper examines the fundamental premise in this system design that modern, off-the-shelf interconnection technology can support the quality-of-service required by today's graphical and multimedia applications. we devised a methodology for analyzing the interactive performance of modern systems, and we characterized the i/o properties of common, real-life applications (e.g. netscape, streaming video, and quake) executing in thin-client environments. we have conducted a series of experiments on the sun ray 1 implementation of this new system architecture, and our results indicate that it provides an effective means of delivering computational services to a workgroup. we have found that response times over a dedicated network are so low that interactive performance is indistinguishable from a dedicated workstation. a simple pixel encoding protocol requires only modest network resources (as little as a 1mbps home connection) and is quite competitive with the x protocol. tens of users running interactive applications can share a processor without any noticeable degradation, and many more can share the network. the simple protocol over a 100mbps interconnection fabric can support streaming video and quake at display rates and resolutions which provide a high-fidelity user experience. brian k. schmidt monica s. lam j. duane northcutt ada exception handling: an axiomatic approach a method of documenting exception propagation and handling in ada programs is proposed.
exception propagation declarations are introduced as a new component of ada specifications, permitting documentation of those exceptions that can be propagated by a subprogram. exception handlers are documented by entry assertions. axioms and proof rules for ada exceptions are given. these rules are simple extensions of previous rules for pascal and define an axiomatic semantics of ada exceptions. as a result, ada programs specified according to the method can be analyzed by formal proof techniques for consistency with their specifications, even if they employ exception propagation and handling to achieve required results (i.e., nonerror situations). example verifications are given. david c. luckham w. polak padded string: treating string as sequence of machine words pei-chi wu feng-jian wang a framework for management software project development ho leung tsoi ddd - a free graphical front-end for unix debuggers andreas zeller dorothea lutkehaus thoughts on "extended pascal - illustrative examples" ronald t. house the p2d2 project: building a portable distributed debugger robert hood multiple-domain analysis methods diane t. rover abdul waheed kde--the next generation ready to jazz up your kde desk marjorie richardson a rationale for the design and implementation of ada benchmark programs russell m. clapp trevor mudge the mutual exclusion problem has been solved "a common assumption underlying mutual exclusion algorithms in shared memory systems is that: b. a memory reference to an individual word is mutually exclusive." leslie lamport a comparative performance evaluation of write barrier implementation antony l. hosking j. eliot b. moss darko stefanovic the semantics of lazy (and industrious) evaluation since the publication of two influential papers on lazy evaluation in 1976 [henderson and morris, friedman and wise], the idea has gained widespread acceptance among language theoreticians---particularly among the advocates of "functional programming" [henderson80, backus78]. there are two basic reasons for the popularity of lazy evaluation. first, by making some of the data constructors in a functional language non-strict, it supports programs that manipulate "infinite objects" such as recursively enumerable sequences, which may make some applications easier to program. second, by delaying evaluation of arguments until they are actually needed, it may speed up computations involving ordinary finite objects. first, there are several semantically distinct definitions of lazy evaluation that plausibly capture the intuitive notion. second, non-trivial lazy spaces are similar in structure (under the approximation ordering) to universal domains (as defined by scott [scott81, scott76]) such as the pω model for the untyped lambda calculus. third, we prove that neither initial algebra specifications [adj76,77] nor final algebra specifications [guttag78, kamin80] have the power to define lazy spaces. fourth, although lazy spaces have the same "higher-order" structure as pω, they nevertheless have an elegant, natural characterization within first order logic. in this paper, we develop a simple, yet comprehensive first order theory of lazy spaces relying on three axiom schemes asserting (1) the principle of structural induction for finite objects; (2) the existence of least upper bounds for directed sets; and (3) the continuity of functions. robert cartwright james donahue
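the lazy evaluation entry above separates two benefits of non-strict constructors: support for "infinite objects" and call-by-need speedups. the first is easy to make concrete with explicit thunks, as in the sketch below (python standing in for the functional languages the paper has in mind; generators would hide the mechanism this makes visible).

```python
# a lazily constructed infinite stream of naturals, built from thunks
# (zero-argument functions) so that the delayed evaluation is explicit.
def cons(head, tail_thunk):
    return (head, tail_thunk)

def naturals(n=0):
    return cons(n, lambda: naturals(n + 1))   # the tail is not built yet

def take(stream, k):
    out = []
    while k > 0:
        head, tail_thunk = stream
        out.append(head)
        stream = tail_thunk()                 # force the tail only when needed
        k -= 1
    return out

print(take(naturals(), 5))   # [0, 1, 2, 3, 4]
```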
on the use of regular expressions for searching text the use of regular expressions for text search is widely known and well understood. it is then surprising that the standard techniques and tools prove to be of limited use for searching structured text formatted with sgml or similar markup languages. our experience with structured text search has caused us to reexamine the current practice. the generally accepted rule of "leftmost longest match" is an unfortunate choice and is at the root of the difficulties. we instead propose a rule which is semantically cleaner. this rule is generally applicable to a variety of text search applications, including source code analysis, and has interesting properties in its own right. we have written a publicly available search tool implementing the theory in the article, which has proved valuable in a variety of circumstances. charles l. a. clarke gordon v. cormack storage reclamation models for ada programs given ada's semantics regarding dynamically allocated objects, do programmers believe that storage reclamation is impractical? at first glance, it would appear that given these semantics, one cannot derive workable models for reclaiming all unneeded objects. in reality, ada provides features that allow programmers to define storage reclamation models that operate at close to 100 percent capacity. this paper describes methods by which ada programs can reclaim objects. examples of ada storage reclamation models are presented along with their associated algorithms. a taxonomy of units that perform storage reclamation is also discussed. geoffrey o. mendal corrigenda: laws of programming chris f. kemerer cooking with linux: linux leadership matt welsh using role components to implement collaboration-based designs michael vanhilst david notkin consistent detection of global predicates robert cooper keith marzullo programming the perl dbi bill cunningham equal rights for functional objects or, the more things change, the more they are the same henry g. baker abstract interpretation and program modelling david schmidt best of technical support gena shurtleff the evolution of an access control system don van wyck basic fvwm configuration: tips for using fvwm, an x-windows manager john m. fisk filter fusion todd a. proebsting scott a. watterson nobody reads documentation marc rettig performance measures of the ada rendezvous anthony sterrett marvin minei ramifications of re-introducing asynchronous exceptions to the ada language thomas j. quiggle c vs ada: arguing performance religion david syiek visualizing and querying software structures mariano consens alberto mendelzon arthur ryman reuse of debuggers for visualization of reuse robert biddle stuart marshall john miller-williams ewan tempero style-based refinement for software architecture david garlan the control of priority inversion in ada g. levine porting progress applications to linux an explanation of the work required to take an existing progress application and deploy it on linux, and the advantages and disadvantages of doing so thomas barringer efficient stack allocation for tail-recursive languages chris hanson interprocedural may-alias analysis for pointers: beyond k-limiting existing methods for alias analysis of recursive pointer data structures are based on two approximation techniques: k-limiting, and store-based (or equivalently location or region-based) approximations, which blur the distinction between elements of recursive data structures.
although notable progress in inter-procedural alias analysis has been recently accomplished, very little progress in the precision of analysis of recursive pointer data structures has been seen since the inception of these approximation techniques by jones and muchnick a decade ago. as a result, optimizing, verifying and parallelizing programs with pointers has remained difficult. we present a new parametric framework for analyzing recursive pointer data structures which can express a new natural class of alias information not accessible to existing methods. the key idea is to represent alias information by pairs of symbolic access paths which are qualified by symbolic descriptions of the positions for which the alias pair holds. based on this result, we present an algorithm for interprocedural may-alias analysis with pointers which on numerous examples that occur in practice is much more precise than recently published algorithms [cwz90, he90, lr92, cbc93]. alain deutsch apl2 and the cms system: exploiting the apl2/rexx connection apl2 and rexx are both powerful interpretive languages. apl2 generally isolates the user/programmer from the operating environment, whereas rexx includes integral hooks to directly access the environment. the second release of apl2 includes the ability to access rexx functions and some variables, using the external function call (associated processor) facility. apl2 programmers can use rexx and appropriately-designed rexx procedures and functions to, for example: access host system function and services; perform arithmetic with arbitrarily-large precision; provide tailored access to applications packages (e.g., script dcf); perform string and word manipulation; tailor the apl2 environment; carry-over variables between workspaces in a given apl2 session; communicate with the host after leaving apl2. rexx programmers can test rexx procedures from within apl2, and can access apl2 workspaces and operations, to bring the power of apl2 to manipulate rexx data objects. the rexx/apl2 interface is decried and some applications are discussed. attention is spent on the proper design of rexx and apl2 code to take advantage of (or even to allow the use of) the existing interface. d. m. weintraub optimistic parallelization of communicating sequential processes david f. bacon robert e. strom an object oriented extension to apl this paper describes an object oriented extension of apl, which is currently being implemented in a new apl system. this extension is integrated in a rather conservative way to apl. however, all the paradigms of object oriented languages (message passing, instance variables, methods, classes and inheritance) are made available to the user, without losing any of the successful features of apl. the paper first explains what are the interests of object oriented programming, and what is expected from this introduction in the language. it then discusses syntactic and semantic choices, and shows how the selected solutions fit the philosophy of apl. an example of this new style of programming in apl is proposed, then the internals of the implementation are presented, showing that this new possibility is made available at the cost of a minimal system overhead. finally, the paper discusses about the results obtained so far. jean jacques girardot sega sako taking concurrency seriously (position paper) i'd like to propose a challenge to language designers interested in concurrency: how well do your favorite constructs support highly-concurrent data structures? 
for example, consider a real-time system consisting of a pool of sensor and actuator processes that communicate via a priority queue in shared memory. processes execute asynchronously. when a sensor process detects a condition requiring a response, it records the condition, assigns it a priority, and places the record in the queue. whenever an actuator process becomes idle, it dequeues the highest priority item from the queue and takes appropriate action. the conventional way to prevent concurrent queue operations from interfering is to execute each operation as a critical section: only one process at a time is allowed to access the data structure. as long as one process is executing an operation, any other needing to access the queue must wait. although this approach is widely used, it has significant drawbacks. it is not fault-tolerant. if one process unexpectedly halts in the middle of an operation, then other processes attempting to access the queue will wait forever. although it may sometimes be possible to detect the failure and preempt the queue, such detection takes time, it may be unreliable, and it may be impossible to restore the data structure to a consistent state. critical sections force faster processes to wait for slower processes. such waiting may be particularly undesirable in heterogeneous architectures, where some processors may be much faster than others. for example, a fast actuator process should not have to remain idle whenever a much slower sensor process is enqueuing a new item. such waiting is also undesirable if each processor is dedicated to a single process, where delaying a process means idling a valuable hardware resource. similar concerns arise even in systems not subject to real-time demands or failures. for example, process execution speeds may vary considerably if processors are multiplexed among multiple processes. if a process executing in a critical region takes a page fault, exhausts its quantum, or is swapped out, then other runnable processes needing to use that resource will be unable to make progress. an implementation of a concurrent object is wait-free if it guarantees that any process will complete any operation within a fixed number of steps, independent of the level of contention and the execution speeds of the other processes. to construct a wait-free implementation of the shared priority queue, we must break each enqueue or dequeue operation into a non-atomic sequence of atomic steps, where each atomic step is a primitive operation directly supported by the hardware, such as read, write, or fetch-and-add. to show that such an implementation is correct. it is necessary to show that (1) each operation's sequence of primitive steps has the desired effect (e.g., enqueuing or dequeuing an item) regardless of how it is interleaved with other concurrent operations, and (2) that each operation terminates within a fixed number of steps regardless of variations in speed (including arbitrary delay) of other processes. support for wait-free synchronization requires genuinely new language constructs, not just variations on conventional approaches such as semaphores, monitors, tasks, or message-passing. i don't know what these constructs look like, but in this position paper, i would like to suggest some research directions that could lead, directly or indirectly, to progress in this area. we need to keep up with work in algorithms. 
to pick just one example, we now know that certain kinds of wait-free synchronization, e.g., implementing a fifo queue from read/write registers, require randomized protocols in which processes flip coins to choose their next steps [3, 1]. the implications of such results for language design remain unclear, but suggestive. we also need to pay more attention to specification. although transaction serializability has become widely accepted as the basic correctness condition for databases and certain distributed systems, identifying analogous properties for concurrent objects remains an active area of research [2]. m. herlihy dynamic selection and reuse of implementations in the object-oriented programming paradigm h. m. al-haddad k. m. george thomas gersten comparing and combining software defect detection techniques: a replicated empirical study murray wood marc roper andrew brooks james miller a fresh approach to representing syntax with static binders in functional programming tell category theorists about the concept of abstract syntax for a language and they may say "that's just the initial algebra for a sum-of- products functor on the category of sets". despite what you might think, they are trying to be helpful since the initiality property is the common denominator of both definitions by structural recursion and proofs by structural induction [5, sect. 4.4]. in recent years we have learned how to extend this initial algebra view of abstract syntax to encompass languages with statically scoped binders. in the presence of such binders one wants to abstract away from the specific names of bound variables, either by quotienting parse trees by a suitable notion of alpha-equivalence, or by replacing conventional trees with ones containing de bruijn indices [1]. by changing from the category of sets to other well-known, but still 'set-like' categories of sheaves or presheaves, one can regain an initial algebra view of this even more than normally abstract syntax---the pay-off being new and automatically generated forms of structural recursion and induction that respect alpha-equivalence [2, 3]. one good test of these new ideas is to see if they give rise to new forms of functional programming. in fact they do. the paper [6] sketches a functional programming language for representing and manipulating syntactical structure involving binders, based on the mathematical model of variable-binding in [3, 4]. in this ml-like language there are new forms of type for names and name-binding that come along with facilities for declaring fresh names, for binding names in abstractions and for pulling apart such name-abstractions via pattern-matching. the key idea is that properly abstract uses of names, i.e. ones that do not descend below the level of alpha-conversion, can be imposed on the user by a static type system that deduces information about the freshness of names. even though we appear to be giving users a 'gensym' facility, the type system restricts the way it can be used to the extent that we keep within effect-free functional programming, in the sense that the usual laws of pure functional programming remain valid (augmented with new laws for names and name-abstractions). in this talk i will introduce this new approach to representing languages static binders in functional programming and discuss some of the difficulties we have had verifying its semantic properties. andrew m. 
pitts predicting potential cobol performance on low level machine architectures as a cobol host, a computer architecture should efficiently execute those language constructs that are most frequently used in actual programs. however, when the language's control and data structures are at a far higher level than the control and data structures of the underlying machine, the compiler designer is faced with a large number of potential choices for mapping these high level structures to the low level architecture. this is the case for implementing cobol on a typical mini-computer architecture. restricting the choices to an implementable set meeting a given performance criteria and predicting the resulting performance level requires a complex analysis. this paper gives an example of the methodology used in one such analysis -- the performance of cobol on low level architectures. this area was chosen because of the difficulty of the problem caused by the lack of support of cobol constructs and data types on low level architectures used in most mini- and micro- computers. the motorola mc68000 and a risc-type architecture were selected as representative architectures. jerome a. otto product review: sockmail noah yasskin internal representation and rule development in object-oriented design this article proposes a cognitive framework describing the software development process in object-oriented design (ood) as building internal representations and developing rules. rule development (method construction) is performed in two problem spaces: a rule space and an instance space. rules are generated, refined, and evaluated in the rule space by using three main cognitive operations: infer, derive, and evoke. cognitive activities in the instance space are called mental simulations and are used in conjunction with the infer operation in the rule space. in an empirical study with college students, we induced different representations to the same problem by using problem isomorphs. initially, subjects built a representation based on the problem description. as rule development proceeded, the initial internal representation and designed objects were refined, or changed if necessary, to correspond to knowledge gained during rule development. differences in rule development processes among groups created final designs that are radically different in terms of their level of abstraction and potential reusability. the article concludes by discussing the implications of these results for object-oriented design. jinwoo kim f. javier lerch herbert a. simon rs-fdra: a register sensitive software pipelining algorithm for embedded vliw processors the paper proposes a novel software-pipelining algorithm, _register sensitive force directed retiming algorithm (rs- fdra)_, suitable for optimizing compilers targeting embedded vliw processors. the key difference between rs- fdra and previous approaches is that our algorithm can handle _code size constraints_ along with latency and resource constraints. this capability enables the exploration of pareto "optimal" points with respect to _code size_ and _performance_. rs-fdra can also minimize the increase in "_register pressure_" typically incurred by software pipelining. this ability is critical since, the need to insert spill code may result in significant performance degradation. 
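the rs-fdra abstract above turns on the tension between overlapping loop iterations and the registers that overlap consumes. as a rough illustration only (plain c, not vliw code, and not the rs-fdra algorithm), the rewritten loop below issues the load for iteration i+1 while finishing the arithmetic for iteration i; the extra live temporary is exactly the kind of added register pressure, and potential spill, that the abstract worries about.

    /* original loop: load and multiply happen serially in each iteration. */
    void scale(const int *a, int *b, int n) {
        for (int i = 0; i < n; i++) {
            int t = a[i];
            b[i] = t * 3;
        }
    }

    /* software-pipelined by hand: the load for iteration i+1 is issued
       while the multiply for iteration i completes.  the extra live value
       ("next") is added register pressure; with many pipeline stages a
       compiler may be forced to spill such values to memory. */
    void scale_pipelined(const int *a, int *b, int n) {
        if (n <= 0) return;
        int t = a[0];                  /* prologue */
        for (int i = 0; i < n - 1; i++) {
            int next = a[i + 1];       /* stage 1 of iteration i+1 */
            b[i] = t * 3;              /* stage 2 of iteration i   */
            t = next;
        }
        b[n - 1] = t * 3;              /* epilogue */
    }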
extensive experimental results are presented demonstrating that the extended set of optimization goals and constraints supported by rs-fdra enables a thorough compiler-assisted exploration of trade-offs among performance, code size, and register requirements, for time critical segments of embedded software components. cagdas akturan margarida f. jacome apl*plus iii for windows version 1.2 - brief notes dick bowman conservative garbage collection for general memory allocators this paper explains a technique that integrates conservative garbage collection on top of general memory allocators. this is possible by using two data structures named malloc-tables and jump-tables that are computed at garbage collection time to map pointers to the beginning of objects and their sizes. this paper describes malloc-tables and jump-tables, an implementation of a malloc/jump-table based conservative garbage collector for doug lea's memory allocator, and experimental results that compare this implementation with boehm-demers-weiser gc, a state-of-the-art conservative garbage collector. gustavo rodriguez-rivera java: introduction vishal shah introduction to special section on software testing the field of software testing spans mathematical theory, the art and practice of validation, and methodology of software development. to cover this range would require a textbook (or several texts), not a trio of articles. but the work presented in this special section is a kind of "test set." each paper is a significant contribution within one of the three broad areas. the reader must now make the assessment that is critical to any review of test points: are they representative? my own answer is 'no'; these articles are provocative and revealing rather than routine summaries. and perhaps that is what software testing is all about: good tests are the ones that provide new insights, not the ones that cover well-worn ground. r. hamlet anchoring data quality dimensions in ontological foundations yair wand richard y. wang writing an intelligent serial driver randolph bentson history management system scott a. kramer embedded uml: a merger of real-time uml and co-design in this paper, we present a proposal for a uml profile called `embedded uml'. embedded uml represents a synthesis of various ideas in the real-time uml community, and concepts drawn from the hardware-software co-design field. embedded uml first selects from among the competing real-time uml proposals, the set of ideas which best allow specification and analysis of mixed hw-sw systems. it then adds the necessary concept of underlying deployment architecture that uml currently lacks in complete form, using the notion of an embedded hw-sw `platform'. it supplements this with the concept of a `mapping', which is a platform-dependent refinement mechanism that allows efficient generation of an optimised implementation of the executable specification in both hw and sw. finally, it provides an approach which supports the development of automated analysis, simulation, synthesis and code generation tool capabilities which can be provided for design usage even while the embedded uml standardisation process takes place. grant martin luciano lavagno jean louis-guerin arabica richard taylor david redmiles the pan language-based editing system powerful editing systems for developing complex software documents are difficult to engineer.
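the conservative-collector abstract above depends on tables, built at collection time, that map an arbitrary word to the allocated object it might point into. the sketch below is only a schematic of that lookup, assuming a sorted array of (start, size) records gathered from the allocator; the malloc-table and jump-table layout in the paper itself is different.

    #include <stddef.h>
    #include <stdint.h>

    /* one record per live allocation, sorted by start address. */
    typedef struct {
        uintptr_t start;
        size_t    size;
    } alloc_record;

    /* conservative pointer test: given a word found on the stack or in a
       scanned object, return the record of the allocation it points into,
       or NULL if it does not look like a pointer to any live object. */
    const alloc_record *find_object(const alloc_record *table, size_t n,
                                    uintptr_t word) {
        size_t lo = 0, hi = n;          /* binary search over [lo, hi) */
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (word < table[mid].start)
                hi = mid;
            else if (word >= table[mid].start + table[mid].size)
                lo = mid + 1;
            else
                return &table[mid];     /* word falls inside this object */
        }
        return NULL;                    /* not a pointer into the heap */
    }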
besides requiring efficient incremental algorithms and complex data structures, such editors must accommodate flexible editing styles, provide a consistent, coherent, and powerful user interface, support individual variations and projectwide configurations, maintain a sharable database of information concerning the documents being edited, and integrate smoothly with the other tools in the environment. pan is a language-based editing and browsing system that exhibits these characteristics. this paper surveys the design and engineering of pan, paying particular attention to a number of issues that pervade the system: incremental checking and analysis, information retention in the presence of change, tolerance for errors and anomalies, and extension facilities. robert a. ballance susan l. graham michael l. van de vanter priority inversion in ada dennis cornhill lui sha predicting program behavior using real or estimated profiles david w. wall penguin's progress: hacking an industry doc searls linux programmer's reference second edition ibrahim f. haddad analysis and experience with an information system development methodology manmahesh kantipudi joseph e. urban a general approach for run-time specialization and its application to c charles consel françois noël guest president's letter: the future is not what it used to be oscar n. garcia schlaer-mellor object-oriented analysis rules neil lang observations on program-wide ada exception propagation peter t. brennan a portable platform for distributed event environments bernd bruegge writing man pages using groff learn how to document your programs just like real programmers do matt welsh issues in optimizing ada code david rosenfeld mike ryer a case-tool for supporting navigation in the class hierarchy the central role that class hierarchy plays in object-oriented systems is discussed. the study focuses on design and maintenance stages where the requirements of navigation facilities are greater than at other stages of the systems lifecycle. it will be shown that different kinds of navigation are needed and it is necessary to browse up and down the class hierarchy before it is possible to decide which classes have to be changed. a flexible navigation tool which can support these design and maintenance stages is also presented. the tool helps the designer to navigate the class hierarchy and to investigate the effect of the intended changes on the entire class hierarchy. a. putkonen m. kiekara a model for implementing an object-oriented design without language extensions jennifer hamilton step development tools: metastep language system step development tools (sdt) is a general-purpose microprogram development system. the metastep language system is composed of four tools of the sdt needed to write microprograms: a definition processor, a retargetable assembler, a retargetable cross-assembler, and a relocatable linker. these tools are of commercial quality, providing complete languages, quality diagnostics, full interface to other support tools, and high performance. the language system supports microcode debug by providing complete run-time information and by interacting directly with other debug tools provided in sdt. d. l. wilburn s. schleimer best of technical support corporate linux journal staff a truly generative semantics-directed compiler generator this paper describes semantic processing in the compiler generating system mug2.
mug2 accepts high-level descriptions of the semantics of a programming language including full runtime semantics, data flow analysis, and optimizing transformations. this distinguishes mug2 from systems such as yacc [joh75], hlp [hlp78], pqcc [pqc79], or its own former version [grw77] with respect to expressive power and convenience. in this respect, mug2 comes close to semantics-directed systems such as [mos76], [jos80], [set81], [pau82]. in contrast to these, mug2 is not a universal translator system where program independent semantic properties have to be evaluated at compilation time. the description concepts of mug2 allow a far-reaching separation of language vs. program dependent semantics, thus constituting a truly generative approach to semantics-directed compiler generation. harald ganzinger robert giegerich ulrich möncke reinhard wilhelm a technique for the architectural implementation of software subsystems this paper presents a technique to implement operating system software more efficiently: as part of the processor architecture. such an 'architectural' implementation has two significant advantages over a conventional 'software' implementation. first, it allows performance critical modules of the operating system to be implemented through specialized hardware or firmware. second, it allows the implementation of the operating system to evolve with technology without any change to the basic design structure. the basis for the proposed technique is the application of a software structuring methodology called 'type extension'. a model for the architectural implementation of a subsystem, which is structured through this methodology, is presented. it is shown that this model encompasses all conventional forms of implementation --- hardware, firmware, software. the model establishes that architectural implementation allows a range of implementation choices; a software implementation utilizes only a small subset of these choices. anand jagannathan some experimental estimators for developmental and delivered errors in software development projects experimental estimators are presented relating the expected number of software problem reports (b) in a software development project to the overall reported professional effort (e) in "man months", the number of subprograms (n), and the overall count of thousands of coded source statements of the software (s). [equation] these estimators are shown to be consistent with data obtained from the air force's rome air development center, the naval research laboratory, and japan's fujitsu corporation. although the results are promising, more data is needed to support the validity of these estimators. victor schneider an extensible program representation for object-oriented software brian a. malloy john d. mcgregor anand krishnaswamy murali medikonda modula-2 input/output procedure using polymorphic and open-ended data type extensions (abstract only) the modula-2 language does not include special statements or standard procedures for handling input and output as in other languages. by extending modula-2 to include a polymorphic structure, called a variant data type (1), and allowing open-ended structures, it is possible to define procedures that have a function and syntax similar to the i/o statements and procedures of other languages. the variant data type consists of a tag field followed by several options, each containing only one field (unless a structure definition is used).
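the modula-2 i/o abstract above builds its pascal-like read and write procedures on a variant data type: a tag field plus one option per basic type, so one procedure can accept arguments of several types. in c terms this is a tagged union; the sketch below (invented names, not the definition module the abstract refers to) shows the shape of such an outvar-style argument and a write routine that dispatches on the tag.

    #include <stdio.h>

    /* tag identifying which option of the variant is in use. */
    typedef enum { OV_INT, OV_REAL, OV_CHAR, OV_STRING } out_tag;

    /* an outvar-style variant: a tag field followed by the options. */
    typedef struct {
        out_tag tag;
        union {
            long        i;
            double      r;
            char        c;
            const char *s;
        } u;
    } outvar;

    /* a write procedure that accepts any option of the variant and
       dispatches on the tag, much as the abstract's write does. */
    void write_var(FILE *f, outvar v) {
        switch (v.tag) {
        case OV_INT:    fprintf(f, "%ld", v.u.i); break;
        case OV_REAL:   fprintf(f, "%g",  v.u.r); break;
        case OV_CHAR:   fputc(v.u.c, f);          break;
        case OV_STRING: fputs(v.u.s, f);          break;
        }
    }

an open parameter list, as in the abstract, would then be an array of such variants of arbitrary length.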
in an assignment operation, the tag field is implicitly adjusted to indicate the data type of the item assigned to the variant typed object. a variable of variant type is considered to be assignment compatible with another value if either both are exactly the same type or if one of the options of the variant type is compatible with the value being assigned to it. for open-ended data structures, the concept of an open array is generalized to allow stack and dynamically allocated open arrays to be defined, which allows them to be sized at run-time. the open array concept is extended to procedures allowing them to have open parameter lists which can accept a variable number of arguments when called. these concepts can further be used to eliminate the need for several other standard procedures in the language definition. the variant type and open parameter lists are used in the definition module given below to define a set of pascal-like input/output procedures. two variant data types are defined, invar for the read procedures and outvar for the write procedures. the options in these data types allow passing of the basic types as well as structured types using the array of word option. this option must be specified last in the list since it is type compatible with all types. the word var in type invar indicates that a field is accessed by reference. the data type file is a hidden type for use in defining file variables. the parameter lists for the input and output procedures allow the file variable to occur anyplace in the argument list. it can even occur several times or not at all in a single argument list, or several different file variables can appear. the read and write procedures can be programmed to not accept a file variable after the first argument. violations cannot be detected at compilation time and would have to be flagged as run-time errors. instead of adding these restrictions, they can be allowed as variations from standard pascal. the result is that a single read or write statement can act like several statements accessing different files. the file variables input and output are defined for use in accessing a terminal's keyboard and screen respectively, as in pascal. importing these file variables is only necessary if they are to be explicitly used; otherwise, they can be programmed as the defaults in the implementation module. the implementation module cannot know whether input and output have actually been imported by another module or whether they are used as the default file variables, so it must initialize them upon program startup. thomas r. leap type extensions software systems represent a hierarchy of modules. client modules contain sets of procedures that extend the capabilities of imported modules. this concept of extension is here applied to data types. extended types are related to their ancestor in terms of a hierarchy. variables of an extended type are compatible with variables of the ancestor type. this scheme is expressed by three language constructs only: the declaration of extended record types, the type test, and the type guard. the facility of extended types, which closely resembles the class concept, is defined in rigorous and concise terms, and an efficient implementation is presented. n. wirth interoperable tools based on omis thomas ludwig roland wismuller arndt bode an apl traceguide program zalman rubinstein introducing a+: ... a new apl system! jon mcgrew john mizel brian redman fortran object oriented programming examples loren p. meissner john s.
cuthbertson william b. clodius an eductive interpreter for lucid a. a. faustini w. w. wadge replacing passive tasks with ada9x protected records c. douglass locke thomas j. mesler david r. vogel a realization of a concurrent object-oriented programming chang-hyun jo chan- ho lee jea gi son forthought chuck moore application of axiomatic methods to a specification analyser the goal of this paper was to model a specification language and its analyser using axiomatic methods derived from those applied previously to abstract data type and state transition specifications. the models attempt to cover many interesting features of psl/psa, a widely used specification language and analyser for information systems. simple properties expected to hold for actual psl/psa were formalized and proved about some models, with assumptions about undefined parts. both model formulation and property proofs were performed within the affirm specification and verification system. the results show (1) the applicability of axiomatic methods for modeling a new kind of software system, (2) insights into the psl/psa class of specification system, (3) a possible route for formal definition of such analysers, and (4) additional lessons on the art of specification, modeling, verification, and validation. susan l. gerhart matchmaker: an interface specification language for distributed processing matchmaker, a language used to specify and automate the generation of interprocess communication interfaces, is presented. the process of and reasons for the evolution of matchmaker are described. performance and usage statistics are presented. comparisons are made between matchmaker and other related systems. possible future directions are examined. michael b. jones richard f. rashid mary r. thompson experiences in object oriented development john a. jurik roger s. schemenaur from the classroom to the real world (panel session) forth is interesting because of its differences. panelists will discuss experiences with forth as a university teaching tool, evaluation of forth and its derivative languages in practical computer applications, the position of forth in the industrial environment, and forth practices outside of the united states. lawrence forsley cryptographic sealing for information secrecy and authentication the problem of computer security can be considered to consist of four distinct components: secrecy (ensuring that information is only disclosed to authorized users), authentication (ensuring that information is not forged), integrity (ensuring that information is not destroyed), and availability (ensuring that access to information can not be maliciously interrupted). the paper describes a new protection mechanism called cryptographic sealing that provides primitives for secrecy and authentication. the mechanism is enforced with a synthesis of classical cryptography, public-key cryptography, and a threshold scheme. david k. gifford variable-length string input in ada jeffrey carter high level input/output in modula-2 albert l. crawford dynamic class loading in the java virtual machine sheng liang gilad bracha software reliability modeling (tutorial) there are a number of views as to what software reliability is and how it should be quantified. some people believe that this measure should be binary in nature so that an imperfect program would have zero reliability while a perfect one would have a reliability value of one. this view parallels that of program proving whereby the program is either correct or incorrect. 
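the type-extensions abstract above (wirth) rests on three constructs: extended record types, a type test, and a type guard. a common way to mimic the idea in c, shown below as a rough sketch with invented names rather than wirth's notation, is to embed the ancestor record as the first field of the extension, so a pointer to the extension also works as a pointer to the ancestor, with an explicit tag supplying the type test and a checked cast playing the role of the type guard.

    #include <assert.h>

    /* ancestor type, carrying a tag so extensions can be distinguished. */
    typedef enum { SHAPE_BASE, SHAPE_CIRCLE } shape_kind;

    typedef struct {
        shape_kind kind;
        double x, y;            /* fields common to all extensions */
    } shape;

    /* an extended record type: the ancestor is embedded first, so a
       circle pointer is compatible with a shape pointer. */
    typedef struct {
        shape  base;
        double radius;          /* field added by the extension */
    } circle;

    /* type test: does this shape actually refer to a circle? */
    int is_circle(const shape *s) {
        return s->kind == SHAPE_CIRCLE;
    }

    /* type guard: checked narrowing from ancestor to extension. */
    circle *as_circle(shape *s) {
        assert(is_circle(s));
        return (circle *)s;     /* valid: base is the first member */
    }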
others, however, feel that software reliability should be defined as the relative frequency of the times that the program works as intended by the user. this view is similar to that taken in testing where the percentage of successful runs is used as a measure of program quality. according to the latter viewpoint, software reliability is a probabilistic measure and can be defined as follows: let f be a class of faults, defined arbitrarily, and t be a measure of relevant time, the units of which are dictated by the application at hand. then the reliability of the software package with respect to the class of faults f and with respect to the metric t is the probability that no fault of the class occurs during the execution of the program for a prespecified period of relevant time. a number of models have been proposed during the past fifteen years to estimate software reliability and several other performance measures. these are based mainly on the failure history of software and can be classified according to the nature of the failure process studied as indicated below. times between failures models: in this class of models the process under study is the time between failures. the most common approach is to assume that the time between, say, the (i-1)st and ith failures, follows a distribution whose parameters depend on the number of faults remaining in the program during this interval. failure count models: the interest of this class of models is in the number of faults or failures in specified time intervals rather than in times between failures. the failure counts are assumed to follow a known stochastic process with a time dependent discrete or continuous failure rate. fault seeding models: the basic approach in this class of models is to "seed" a known number of faults in a program which is assumed to have an unknown number of indigenous faults. input domain based models: the basic approach taken here is to generate a set of test cases from an input distribution which is assumed to be representative of the operational usage of the program. because of the difficulty in obtaining this distribution, the input domain is partitioned into a set of equivalence classes, each of which is usually associated with a program path. in this tutorial we discuss the key models from the above classes and the related issues of parametric estimation, unification of models, bayesian interpretation, validation and comparison of models, and determination of optimum release time. amrit l. goel deciding when to forget in the elephant file system modern file systems associate the deletion of a file with the immediate release of storage, and file writes with the irrevocable change of file contents. we argue that this behavior is a relic of the past, when disk storage was a scarce resource. today, large cheap disks make it possible for the file system to protect valuable data from accidental delete or overwrite. this paper describes the design, implementation, and performance of the elephant file system, which automatically retains all important versions of user files. users name previous file versions by combining a traditional pathname with a time when the desired version of a file or directory existed. storage in elephant is managed by the system using file-grain user-specified retention policies.
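the fault-seeding class of reliability models summarized above estimates the unknown number of indigenous faults by planting a known number of artificial faults and observing how many of each population testing uncovers. the following is a minimal numeric sketch of the usual capture-recapture style estimate, not a specific model from the tutorial.

    /* fault seeding estimate: if testing finds seeded_found of the
       seeded_total planted faults and indigenous_found original faults,
       assume the same detection rate for both populations and solve
       indigenous_total = indigenous_found * seeded_total / seeded_found. */
    double estimate_indigenous_faults(int seeded_total, int seeded_found,
                                      int indigenous_found) {
        if (seeded_found == 0)
            return -1.0;   /* too little data to form the estimate */
        return (double)indigenous_found * seeded_total / seeded_found;
    }

    /* example: 20 faults seeded, 15 of them recovered, and 45 original
       faults found gives an estimate of about 60 indigenous faults. */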
this approach contrasts with checkpointing file systems such as plan-9, afs, and wafl that periodically generate efficient checkpoints of entire file systems and thus restrict retention to be guided by a single policy for all files within that file system.elephant is implemented as a new virtual file system in the freebsd kernel. douglas s. santry michael j. feeley norman c. hutchinson alistair c. veitch ross w. carton jacob ofir a fast apl algorithm for logic minimization this paper presents an algorithm for logic reduction which exploits apl's array and boolean processing capabilities. the traditional algorithm, the quine-mccluskey method, requires extensive looping and hence is very slow when implemented in an interpretive language. the apl algorithm accomplishes the same function as the quine-mccluskey method, but in a totally different way and much faster. alfred a. schwartz requirements for case tools in early software reuse v. karakostas inheritance of interface specifications (extended abstract) four alternatives for the semantics of inheritance of specifications are discussed. the information loss and frame axiom problems for inherited specifications are also considered. gary t. leavens ada-java communication in adept anthony gargaro towards a conceptual framework for object oriented software metrics neville i. churcher martin j. shepperd lazy evaluation of c++ static constructors marc sabatella on the feasibility of teaching backus-type functional programming (fp) as a first language frank g. pagan posix thread libraries the authors have studied five felix garcia javier fernandez lazy rewriting on eager machinery the article introduces a novel notion of lazy rewriting. by annotating argument positions as lazy, redundant rewrite steps are avoided, and the termination behavior of a term-rewriting system can be improved. some transformations of rewrite rules enable an implementation using the same primitives as an implementation of eager rewriting. wan fokkink jasper kamperman pum walters module weakness: a new measure yogesh singh pradeep bhatia semaphores in ada-94 kwok-bun yue opg: an optimizing parser generator george h. roberts task time lines as a debugging tool debugging distributed programs is more difficult than debugging sequential ones. this paper describes a means of making this easier by presenting the user with a "time line" view of an ada program. each time line shows the successive states of an ada task as easily recognisable symbols on a time axis. communication and sychronisation between tasks is shown graphically by lines connecting the time lines. the paper shows how this representation can be used to depict typical ada tasking activity, and describes a tool that provides a useful interface to the representation. j. s. briggs s. d. jamieson g. w. randall i. c. wand virtual classes: a powerful mechanism in object-oriented programming the notions of class, subclass and virtual procedure are fairly well understood and recognized as some of the key concepts in object-oriented programming. the possibility of modifying a virtual procedure in a subclass is a powerful technique for specializing the general properties of the superclass. in most object-oriented languages, the attributes of an object may be references to objects and (virtual) procedures. in simula and beta it is also possible to have class attributes. the power of class attributes has not yet been widely recognized. in beta a class may also have virtual class attributes. 
this makes it possible to defer part of the specification of a class attribute to a subclass. in this sense virtual classes are analogous to virtual procedures. virtual classes are mainly interesting within strongly typed languages where they provide a mechanism for defining general parameterized classes such as set, vector and list. in this sense they provide an alternative to generics. although the notion of virtual class originates from beta, it is presented as a general language mechanism. o. l. madsen b. moller-pedersen venturing forth with a flashlite paul frenger fortran to and from apl2 and j r. g. selfridge the fifth development environment paul a. snow k. michael parker structure exits, not loops until recently, pascal was the first programming language taught to students. as more schools choose ada or c++ as a first language, the debate on structured programming has been reopened ([rob95]). we are no longer restricted to the while-statement: exit/break-statements can be used to exit a loop from the middle, and return from a procedure or function is allowed within a loop statement. do these constructs violate the principle of structure programming? this article claims that more general loop constructs can be objectively justified, because they simplify the verification of programs. a program that is simple to verify is also easy to explain and understand. mordechai ben-ari compared anatomy of the programming languages pascal and c v hayward improving the accuracy of petri net-based analysis of concurrent programs spurious results are an inherent problem of most static analysis methods. these methods, in an effort to produce conservative results, overestimate the executable behavior of a program. infeasible paths and imprecise alias resolution are the two causes of such inaccuracies. in this paper we present an approach for improving the accuracy of petri net-based analysis of concurrent programs by including additional program state information in the petri net. we present empirical results that demonstrate the improvements in accuracy and, in some cases, the reduction in the search space that result from applying this approach to concurrent ada programs. a. t. chamillard lori a. clarke a simple technique for handling multiple polymorphism certain situations arise in programming that lead to multiply polymorphic expressions, that is, expressions in which several terms may each be of variable type. in such situations, conventional object-oriented programming practice breaks down, leading to code which is not properly modular. this paper describes a simple approach to such problems which preserves all the benefits of good object-oriented programming style in the face of any degree of polymorphism. an example is given in smalltalk-80 syntax, but the technique is relevant to all object-oriented languages. daniel h. h. ingalls letters to the editor corporate linux journal staff memory management support for tiled array organization (abstract) a novel method is described which adds support for sub-array ordered (tiled) arrays to a conventional mmu. the resulting pages contain two dimensional tiles of data rather than one dimensional strips, making tiled virtual memory transparently available to general purpose application programs. the method is further extended to support sub-arrays with three or more dimensions. the image mmu of the kodak prophecy color publishing system is shown as a product implementation for color picture arrays. 
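the "structure exits, not loops" abstract above argues that leaving a loop from the middle can be easier to verify and explain than the equivalent flag-controlled while loop. a small c illustration of the contrast (linear search; my example, not the article's):

    /* flag-controlled loop: the exit condition is split between the
       loop header and the body, which is what the reader must untangle. */
    int find_flag(const int *a, int n, int key) {
        int i = 0, found = 0;
        while (i < n && !found) {
            if (a[i] == key)
                found = 1;
            else
                i++;
        }
        return found ? i : -1;
    }

    /* mid-loop exit: the invariant ("a[0..i-1] does not contain key")
       and the reason for leaving the loop are stated at a single point. */
    int find_break(const int *a, int n, int key) {
        for (int i = 0; i < n; i++) {
            if (a[i] == key)
                return i;   /* exit from the middle, as the article allows */
        }
        return -1;
    }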
gary newman attitudes to ada in the uk high-reliability software sector (plenary session) ian gilchrist distributed ada: a suggested solution for ada 9x this paper contains a suggested solution to the problems of implementing ada in non-shared memory distributed systems for incorporation into ada9x. the solution is based on three fundamental principles: firstly that there should be minimal disturbance to the existing ada language definition; secondly that the security aspects of the language are not compromised; and thirdly that the requirements of implementors of distributed systems be met in full. b. dobbing evolution of configuration management rudy bazelmans documentation testing one definition of documentation is 'any written or pictorial information describing, defining, specifying, reporting, or certifying activities, requirements, procedures, or results.' (1). documentation is as important to a product's success as the product itself. if the documentation is poor, non-existent, or wrong, it reflects on the quality of the product and the vendor.at the bell atlantic systems integration & testing center documentation testing is an important function that receives as much attention as the testing of software and hardware. because the bell atlantic systems integration & testing center is iso9001 certified, an enormous effort was undertaken to ensure quality assurance of all products including documentation. both a test procedure and test plan for documentation has been implemented to ensure this quality.this article will describe what documentation is, why document testing is important, and how document testing is performed at the bell atlantic systems integration & testing center.other information pertaining to documentation, such as human factors, how to achieve document comprehensiveness, and comprehensibility, although important, are beyond the reach of this report. salvatore mamone viewpoint analysis: a case study j. c. s. p. leite endian-safe record representation clauses for ada programs mike mardis book review: operating systems: design and implementation, 2nd edition boytcho peytchev realizing software productivity through a software first design process toolset stephen a. bailey bruce a. burton the universe model: an approach for improving the modularity and reliability of concurrent programs we present the universe model, a new approach to concurrency management that isolates concurrency concerns and represents them in the modular interface of a component. this approach improves program comprehension, module composition, and reliability for concurrent systems. the model is founded on designer- specified invariant properties, which declare a component's dependencies on other concurrent components. process scheduling is then automatically derived from these invariants. we illustrate the advantages of this approach by applying it to a real-world example. reimer behrends r. e. kurt stirewalt extended shared-variable sessions this paper proposes two extensions to make shared variables of apl more useful: shared variables that persist across apl sessions, and a facility to reject incoming offers. karl soop roderic a. davis bochs: a portable pc emulator for unix/x kevin p. lawton free mdd-based software optimization techniques for embedded systems chunghee kim luciano lavagno alberto sangiovanni-vincentelli portability and the unix operating system daniel a. cañas laura m. 
esquivel performance analysis of the rio multimedia storage system with heterogeneous disk configurations jose renato santos richard muntz apl as the foundation for a universal computer language some ways in which universal computer languages differ from those in common use today. universal dictionary. universal alphabet. universal character representation. universal keyboard. universal communication among computer systems. elimination of an operating system as a separate entity. some advantages of universal computer languages over those in common use today. easier to teach and to learn. easier and faster creation and modification of software. faster execution. easier transfer of software from one system to another. fewer application software packages required. some of the changes and additions required to make apl an acceptable universal computer language. implement the apl interpreter at the chip level. add language features to meet operating system requirements. eliminate the features now required to communicate with operating systems of the present kind. improve communication among users of a system. improve the handling of integers. improve the handling of graphics. improve alphabet, character representation, and keyboard. the second edition of my monograph on this subject will be available at the copenhagen apl conference. stephen w. dunwell performance management activities within unix international corporate unix international on seamless prototyping hal berghel paths between imperative and functional programming thomas ball scope and access classes in apl this paper proposes and discusses some enhancements to the current treatment of the scope of apl objects (i.e., functions, variables) and introduces a scheme of access classes for variables within apl functions. m. rys soft documentation - challenges and techniques for handling mixed-media documents (panel) a pragmatic forum where practitioner panelists deliver short papers concerning the impact of the "distributed interactive environment" on the integration of disparate documentation (by different authors, on different media, in different locations) into a coherent document for transferring information to meet a given, yet variable, need-to-know of the reader/viewer(s). interactive design, development, and programming provide the opportunities for both technical and end-user personnel to create on-line documentation. the panelists will address new challenges and techniques involved in providing cohesive communication within the areas of: -terminology, writing/editing styles, management, quality control, access, retrieval, publication, distribution, update, and audit controls. interactive audience participation in an open discussion is encouraged. frank melanson scheduler-conscious synchronization efficient synchronization is important for achieving good performance in parallel programs, especially on large-scale multiprocessors. most synchronization algorithms have been designed to run on a dedicated machine, with one application process per processor, and can suffer serious performance degradation in the presence of multiprogramming. problems arise when running processes block or, worse, busy- wait for action on the part of a process that the scheduler has chosen not to run. we show that these problems are particularly severe for scalable synchronization algorithms based on distributed data structures. 
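the scheduler-conscious synchronization abstract above starts from the observation that busy-waiting for a lock holder that has been preempted wastes the waiter's whole quantum. the fragment below is not one of the paper's algorithms; it is a plain test-and-test-and-set lock in c11 atomics that yields the processor after a bounded amount of spinning, which is the crudest form of scheduler awareness.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <sched.h>          /* sched_yield(), posix */

    typedef struct {
        atomic_bool held;
    } spinlock;

    void spin_init(spinlock *l)   { atomic_init(&l->held, false); }
    void spin_unlock(spinlock *l) { atomic_store(&l->held, false); }

    void spin_lock(spinlock *l) {
        for (;;) {
            /* test-and-test-and-set: spin on a plain read first so the
               waiting processor mostly hits its own cache. */
            int spins = 0;
            while (atomic_load(&l->held)) {
                if (++spins == 1000) {
                    /* the holder may have been preempted: stop burning
                       the quantum and let the scheduler run someone else. */
                    sched_yield();
                    spins = 0;
                }
            }
            if (!atomic_exchange(&l->held, true))
                return;          /* we flipped it from false to true */
        }
    }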
we then describe and evaluate a set of algorithms that perform well in the presence of multiprogramming while maintaining good performance on dedicated machines. we consider both large and small machines, with a particular focus on scalability, and examine mutual-exclusion locks, reader-writer locks, and barriers. our algorithms vary in the degree of support required from the kernel scheduler. we find that while it is possible to avoid pathological performance problems using previously proposed kernel mechanisms, a modest additional widening of the kernel/user interface can make scheduler-conscious synchronization algorithms significantly simpler and faster, with performance on dedicated machines comparable to that of scheduler-oblivious algorithms. leonidas i. kontothanassis robert w. wisniewski michael l. scott why the use clause is beneficial r. racine a notation for problematic architecture interactions the progression of component-based software engineering (cbse) is essential to the rapid, cost-effective development of complex software systems. given the choice of well-tested components, cbse affords reusability and increases reliability. however, applications developed according to this practice can often suffer from difficult maintenance and control, problems that stem from improper or inadequate integration solutions. avoiding such unfortunate results requires knowledge of what causes the interoperability problems in the first place. the time for this assessment is during application design. in this paper, we define _problematic architecture interactions_ using a simple notation with extendable properties. furthermore, we delineate a multi-phase process for pre-integration analysis that relies on this notation. through this effort, potential problematic architecture interactions can be illuminated and used to form the initial requirements of an integration architecture. l. davis r. gamble j. payton g. jonsdottir d. underwood an online computation of critical path profiling jeffrey k. hollingsworth apl syntax and semantics this paper presents a working model of apl syntax and semantics that incorporates explicit representations of functions, operators, and syntax, thus providing a basis for the clear and explicit statement of extended facilities in the language, as well as a tool for experimentation upon them. use of the model is illustrated in the treatment of the syntax of operators, and in the discussion of a number of new or recently proposed facilities including indirect assignment, the operators axis, derivative, inverse, and til, and the functions link and from. the entire model is included in an appendix. kenneth e. iverson measurement of processor occupancy in a cyclic non-preemptive real-time control system an algorithm which is used to measure occupancy in stromberg-carlson's digital central office (dco) call processors is presented. the method is self-calibrating, making it independent of differences in processor speed, type, and other variables which affect processor performance. required is an external event generator which can generate an event to the processor at fixed time intervals. calibration needs one time interval with no events to occur before valid occupancy measurements may be made. no clock internal to the processor is required. the method is highly accurate even at high occupancies and incurs very little processor overhead to perform its measurements. processor occupancy can be displayed at the end of any user-specified time period. richard w.
moulton introduction to this classic reprint and commentaries nina wishblow overview of the debian gnu/linux system ian murdock paragon: novel uses of type hierarchies for data abstraction mark sherman linux journal demographics corporate linux journal staff keywords: special identifier idioms chris clark a block-and-actions generator as an alternative to a simulator for collecting architecture measurements m. huguet t. lang y. tamir minimizing system modification in an incremental design approach _in this paper we present an approach to mapping and scheduling of distributed embedded systems for hard real-time applications, aiming at minimizing system modification cost. we consider an incremental design process that starts from an already existing system running a set of applications. we are interested to implement new functionality so that the already running applications are disturbed as little as possible and there is a good chance that, later, new functionality can easily be added to the resulted system. the mapping and scheduling problem are considered in the context of a realistic communication model based on a tdma protocol._ paul pop petru eles traian pop zebo peng high performance fortran language specification (part iii) corporate high performance fortran forum ada 9x validation nelson h. weiderman configuration control in an ada programming support environment mark marcus kirk sattley c. mugur stefanescu operators for program control structured programming constructs such as if-then-else and do-while have traditionally been used to give a natural flow to programs written in one of the algol-like languages. languages with a functional basis, such as apl and lisp, provide (and are characterized by) alternative means of controlling the flow of expression. these include function modularity, the application of functions across data structures, the use of data structures to control program flow, and the mapping of operations across data and across other operations. in apl2, the use of these methods has been enhanced by the generalization of apl data structures, the addition of the primitive operator each(″), and the ability to define operators. this paper will examine ways in which these features may be used to give a non-procedural look to apl2 functions. edward v. eusebi object oriented operating systems: an emerging design methodology object oriented design of operating systems has evolved from pure protection considerations to a more general methodology of design as exemplified in intel's iapx-432 machine. this paper compares and contrasts, from an architectural point of view, eight major object oriented operating systems. five different architectural aspects have been chosen as a basis for this analysis. these aspects include: uniformity of the object approach, object type extensibility, the process concept, the domain concept, and object implementation techniques. ariel pashtan logical vs. physical file system backup norman c. hutchinson stephen manley mike federwisch guy harris dave hitz steven kleiman sean o'malley can entity-based information systems live with exceptions? (abstract only) alex borgida piwg analysis methodology daniel roy lakshmi gupta path feasibility, linear optimizers and the evaluate standard form p. david coward an integrated software environment for reuse r. demillo w. du r. stansifer design and code reuse based on fuzzy classification of components a bottleneck in software reuse is the classification schema and retrieval method of components. 
particularly when large repositories of components are available, classification and retrieval for reuse should be flexible to allow the selection also of components which, although not perfectly matching requirements, are adaptable with a limited effort. this paper presents a fuzzy classification model for a repository storing descriptors of components. these descriptors include fuzzy-weighted keyword pairs describing components' functionalities extracted from code and its design documentation. a mechanism for semi-automatic extraction of keywords and for automatic assignment of fuzzy weights to keyword pairs based on text retrieval algorithms is provided. e. damiani m. g. fugini 25 years of quantum cryptography the fates of _sigact news_ and quantum cryptography are inseparably entangled. the exact date of stephen wiesner's invention of "conjugate coding" is unknown but it cannot be far from april 1969, when the premier issue of _sigact news_---or rather _sicact news_ as it was known at the time---came out. much later, it was in _sigact news_ that wiesner's paper finally appeared [74] in the wake of the first author's early collaboration with charles h. bennett [7]. it was also in _sigact news_ that the original experimental demonstration for quantum key distribution was announced for the first time [6] and that a thorough bibliography was published [19]. finally, it was in _sigact news_ that doug wiedemann chose to publish his discovery when he reinvented quantum key distribution in 1987, unaware of all previous work but wiesner's [73, 5]. most of the first decade of the history of quantum cryptography consisted of this lone unpublished paper by wiesner. fortunately, bennett was among the few initiates who knew of wiesner's ideas directly from the horse's mouth. his meeting with the first author of this column in 1979 was the beginning of a most fruitful lifelong collaboration. it took us five more years to invent quantum key distribution [4], which is still today the best-known application of quantum mechanics to cryptography. the second author joined in slightly later, followed by a few others. but until the early 1990's, no more than a handful of people were involved in quantum cryptographic research. since then, the field has taken off with a vengeance, starting with artur k. ekert's proposal to use quantum nonlocality for cryptographic purposes [33]. the golden age started in earnest when ekert organized the first international workshop on quantum cryptography in broadway, england, in 1993. since then, many conferences have been devoted at least partly to quantum cryptography, which has become a major international topic. the purpose of the aforementioned 1993 bibliography in _sigact news_ was to cite as much as possible _all_ papers ever written on the subject, including unpublished manuscripts: there were 57 entries in total. today, such an undertaking would be nearly impossible owing to the explosion of new research in the field. the purpose of this column is to give an overview of the current research in quantum cryptography. it is not our intention to be exhaustive and we apologize in advance to any researcher whose work we may have omitted. note that we do not necessarily agree with the claims in every paper mentioned here: this column should not be construed as a seal of approval! gilles brassard claude crepeau new products corporate linux journal staff the whiteboard: tracking usability issues: to bug or not to bug?
chauncey wilson kara pernice coyne a framework for framework documentation greg butler rudolf k. keller hafedh mili examples of mathml stephen m. watt xuehong li an efficient approach to data flow analysis in a multiple pass global optimizer data flow analysis is a time-consuming part of the optimization process. as transformations are made in a multiple pass global optimizer, the data flow information must be updated to reflect these changes. various approaches have been used, including complete recalculation as well as partial recalculation over the affected area. the approach presented here has been designed for maximum efficiency. data flow information is completely calculated only once, using an interval analysis method which is slightly faster than a purely iterative approach, and which allows partial recomputation when appropriate. a minimal set of data flow information is computed, keeping the computation and update cost low. following each set of transformations, the data flow information is updated based on knowledge of the effect of each change. this approach saves considerable time over complete recalculation, and proper ordering of the various optimizations minimizes the amount of update required. s. jain c. thompson a compiler independent approach to test and configuration management for ada f. blumberg m. mcnickle a. reedy d. stephenson using grep: moving from dos? discover the power of this linux utility eric goebelbecker building projects with imake here's an explanation of how imake works and how you can use it to build your executables--an article for programmers with c and unix programming skills otto hammersmith an applicative compiler for a parallel machine a compiler for the applicative language hope is described. the compiler is itself written in hope and generates a machine independent compiler target language, suitable for execution on the parallel reduction machine alice. the advantages of writing a compiler in a very high level applicative language are discussed, and the use of program transformation and other techniques to turn the initial 'runnable specification' into a more efficient (if less clear) program are outlined. extensions to the hope language and the compiler which can exploit the parallelism and various execution modes of alice are described. ian w. moor a software development environment for law-governed systems this paper describes a software development environment based on a new approach for managing large-scale evolving systems. under this approach, the conventional notion of a system is augmented with a new component called the law of the system, which is an explicit and strictly enforced set of rules about the operation of the system, about its evolution, and about the evolution of the law itself. the resulting combination is called a law- governed system. naftaly h. minsky david rozenshtein comments on "a view from the trenches". ada vs. modula-2 vx. praxis j r greenwood session 9b: design methods r. t. yeh letters to the editor corporate linux journal staff block structured object programming charles rapin the current state of corba (invited presentation) brad balfour from the publisher corporate linux journal staff safely creating correct subclasses without seeing superclass code a major problem for object-oriented frameworks and class libraries is how to provide enough information about an extensible superclass so that programmers can safely create new subclasses without studying superclass code. 
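the data flow analysis abstract above measures its interval-based, incrementally updated scheme against full recomputation. for orientation only, the sketch below is the classical iterative bit-vector formulation of one such problem (live variables), not the interval method or the update strategy of the paper.

    #include <stdint.h>

    /* one bit per variable; per-block use/def sets plus the solution.
       the in/out fields are assumed to start at zero. */
    typedef struct {
        uint32_t use, def;                  /* local sets              */
        uint32_t in, out;                   /* solution being computed */
        int succ[2], nsucc;                 /* successor block indices */
    } block;

    /* classical iterative live-variable analysis: repeat the transfer
       function  in = use | (out & ~def),  out = union of successor ins,
       until nothing changes. */
    void liveness(block *b, int n) {
        int changed = 1;
        while (changed) {
            changed = 0;
            for (int i = n - 1; i >= 0; i--) {   /* reverse order converges faster */
                uint32_t out = 0;
                for (int s = 0; s < b[i].nsucc; s++)
                    out |= b[b[i].succ[s]].in;
                uint32_t in = b[i].use | (out & ~b[i].def);
                if (in != b[i].in || out != b[i].out) {
                    b[i].in = in;
                    b[i].out = out;
                    changed = 1;
                }
            }
        }
    }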
the goal of my work is to determine what information is needed so library providers do not have to give away the code of library superclasses. a closely related goal is to provide guidelines that simplify reasoning about classes that inherit from such frameworks and class libraries. the long-term goal of my research is to develop tool support to automatically generate some of the necessary documentation and to assist programmers in reasoning about how to create correct subclasses. clyde d. ruby concurrent replicating garbage collection we have implemented a concurrent copying garbage collector that uses replicating garbage collection. in our design, the client can continuously access the heap during garbage collection. no low-level synchronization between the client and the garbage collector is required on individual object operations. the garbage collector replicates live heap objects and periodically synchronizes with the client to obtain the client's current root set and mutation log. an experimental implementation using the standard ml of new jersey system on a shared-memory multiprocessor demonstrates excellent pause time performance and moderate execution time speedups. james o'toole scott nettles adapting a debugger for optimised programs william s. shu analysis of timing hazards in ada programs safety-critical ada programs often contain asynchronous tasks and are, therefore, prone to timing hazards. neither rigorous testing nor rigorous formal verification is currently feasible for timing hazard detection. we propose an inexpensive static analysis that can assist in the identification of timing hazards. only a few ada constructs can generate results that depend on the asynchronous timing of tasks within a program. using the techniques of data dependency analysis, these timing dependencies can be traced through the program. if an output is timing dependent, manual inspection is required to determine whether the timing dependency is deliberate or is an unintended timing hazard. louise e. moser p. m. melliar-smith on generic formal package parameters in ada 9x jun shen gordon v. cormack stop the presses: linus torvalds releases linux 1.2.0 phil hughes relational programs the objective of this research is to produce useful, low- cost methods for developing correct concurrent programs from formal specifications. in particular, we address the design and verification of the synchronization and communication portions of such programs. often, this portion can be implemented using a fixed, finite amount of synchronization related data, i.e., it is "finite-state." nevertheless, even when each program component contains only one bit of synchronization related data, the number of possible global synchronization states for _k_ components is about 2_k_, in general. because of this "state-explosion" phenomenon, the manual verification of large concurrent programs typically requires lengthy, and therefore error- prone, proofs. using a theorem prover increases reliability, but requires extensive formal labor to axiomatize and solve verification problems. automatic verification methods (such as reachability analysis and temporal logic model checking) use state- space exploration to decide if a program satisfies its specification, and are therefore also subject to state- explosion. 
to date, proposed techniques for ameliorating state-explosion either require significant manual labor, or work well only when the program is highly symmetric and regular (e.g., many functionally similar components connected in similar ways). to overcome these drawbacks, we advocate the _synthesis of programs from specifications._ this approach performs the refinement from specifications to programs automatically. thus, the amount of formal labor is reduced to writing a formal specification and applying the appropriate synthesis step at each stage of the derivation. while nontrivial, writing a formal specification is necessary in any methodology that guarantees correctness. farokh b. bastani fail-safe programming in compiler optimization jonathan l. schilling integration of complexity metrics with the use of decision trees we would like to present the use of the decision tree approach in the integration of various complexity metrics on the example of distinguishing between random and ordinary programs. following our proposition that randomness indicates meaninglessness, we can state that with the approach we are able to measure the "meaning" of computer programs. the main contribution stated in the paper is that the new way of metrics integration enables one to combine metrics with various measurement units. peter kokol janez brest milan zorman vili podgorelec embedded/real-time programming in ada pat rogers forth in space, or, so near yet so far out paul frenger comparing data synchronization in ada 9x and orca protected object types are one of three major extensions to ada 83 proposed by ada 9x. this language feature is intended for light-weight data synchronization between tasks. the orca parallel programming language has a very similar construct, the shared data-object, with which we have over five years of experience, both in usage and implementation. this paper compares protected objects and shared data-objects, with regard to design, usage, and implementation. henri e. bal enhancing array dataflow dependence analysis with on-demand global value propagation vadim maslov a hierarchical cpu scheduler for multimedia operating systems pawan goyal xingang guo harrick m. vin formalizing hierarchical object-oriented design method riri huang a reusability measurement framework and tool for ada 95 margaretha w. price steven a. demurjian donald m. needham portia: an instance-centered environment for smalltalk eric gold mary beth rosson reusable software components trudy levine dominators, super blocks, and program coverage in this paper we present techniques to find subsets of nodes of a flowgraph that satisfy the following property: a test set that exercises all nodes in a subset exercises all nodes in the flowgraph. analogous techniques to find subsets of edges are also proposed. these techniques may be used to significantly reduce the cost of coverage testing of programs. a notion of a super block, consisting of one or more basic blocks, is introduced; all basic blocks in a super block must be exercised by the same input. dominator relationships among super blocks are used to identify a subset of the super blocks whose coverage implies that of all super blocks and, in turn, that of all basic blocks. experiments with eight systems in the range of 1-75k lines of code show that, on the average, test cases targeted to cover just 29% of the basic blocks and 32% of the branches ensure 100% block and branch coverage, respectively. hiralal agrawal an object oriented architecture william j. dally james t.
an object oriented architecture william j. dally james t. kajiya reader survey results corporate linux journal staff a concurrent copying garbage collector for languages that distinguish (im)mutable data this paper describes the design and implementation of a concurrent compacting garbage collector for languages that distinguish mutable data from immutable data (e.g., ml) as well as for languages that manipulate only immutable data (e.g., pure functional languages such as haskell). the collector runs on shared-memory parallel computers and requires minimal mutator/collector synchronization. no special hardware or operating system support is required. our concurrent collector derives from sequential semi-space copying collectors. the collector requires that a heap object include a forwarding pointer in addition to its data fields. an access to an immutable object can be satisfied either by the original or the forwarded copy of the object. a mutable object is always accessed through the forwarded copy, if one exists. measurements of this collector in a standard ml compiler on a shared-memory computer indicate that it eliminates perceptible garbage-collection pauses by reclaiming storage in parallel with the computation proper. all observed pause times are less than 20 milliseconds. we describe extensions for the concurrent collection of multiple mutator threads and refinements to our design that can improve its efficiency. lorenz huelsbergen james r. larus functions and data can dance as equal partners it is difficult today to think of machines of the early fifties. a vaguely recollected machine, which probably combines features of more than one real machine, had as memory a rotating drum. the instruction format included an operation code and four addresses. the first three addresses were not unusual: the locations of two operands and the result of the operation. the fourth address was the location of the next instruction. a programmer had to pick a spot on the drum that would be under the read head when the instruction finished. it required a bit of ingenuity and lots of experimentation to get the most out of machines from that era. the job was not made any easier by the fact that the machines typically had about a thousand words of storage. it was not long before assemblers and compilers took over the job of managing machine operations. programs were divided into two major sections: procedures and data. the small storage capacity raised a problem: to store all numbers with the same number of bits required a choice. if the word size was small, precision was limited; if large, relatively few numbers could be stored. a compromise was reached by storing numbers in several different formats that used more storage when greater precision was required. data-types had arrived. it was to be several years before the term _polymorphism_ was applied in the computer field. an early use of the word is cited in the oed: "the various portraits of her majesty astonish by their perplexing polymorphism…" [**1839** _fraser's_ mag. xx. 699]. compilers in the fifties required a set of declarations that preceded the executable code. the names of variables were classified into groups that specified the storage formats of their respective members. different instruction sequences were needed to perform arithmetic on differently formatted numbers. nonnumeric data was a category by itself. before long nonnumeric subcategories, i.e., data-types, were recognized.
some of them formed nested sequences, just as the early numeric types --- bits, small integers, large integers, and floating point --- formed a natural sequence. the domain of a function was the set of data-types on which it could operate. a typical oops requires declaration of the data-types and functions that apply to them in a different way. instead of considering functions and the data-types that they can accept, it considers data-types and the sets of functions that can use them. as in the case of fortran, declarations must be complete before programming begins. the sets of functions could be thought of as _function-types,_ although the term has been little used, if at all. no matter just how the information on data and function types was stored in the early computer and compiler days, and no matter whether they were stored with the cpu or on disk or drum, not many thought then about arrays of functions. the michigan algorithm decoder (mad), a compiler of the late fifties, is the only place i know of which supported indexed function name variables. j. philip benkard an implementation of coco dan nagle a generic account of continuation-passing styles we unify previous work on the continuation-passing style (cps) transformations in a generic framework based on moggi's computational meta-language. this framework is used to obtain cps transformations for a variety of evaluation strategies and to characterize the corresponding administrative reductions and inverse transformations. we establish generic formal connections between operational semantics and equational theories. formal properties of transformations for specific evaluation orders follow as corollaries. essentially, we factor transformations through moggi's computational meta-language. mapping λ-terms into the meta-language captures computation properties (e.g., partiality, strictness) and evaluation order explicitly in both the term and the type structure of the meta-language. the cps transformation is then obtained by applying a generic transformation from terms and types in the meta-language to cps terms and types, based on a typed term representation of the continuation monad. we prove an adequacy property for the generic transformation and establish an equational correspondence between the meta-language and cps terms. these generic results generalize plotkin's seminal theorems, subsume more recent results, and enable new uses of cps transformations and their inverses. we discuss how to apply these results to compilation. john hatcliff olivier danvy corporate linux coexisting with the big boys: integrating linux into a large scale production network running sparcs and windows markolf gudjons x-isp and maintaining multiple account records even for the experienced administrator, x-isp provides an easy way to manage multiple accounts, keep track of usage expense and time on-line chris ledantec concurrent development of software systems m. aoyama caps as a requirements engineering tool robert steigerwald gary hughes valdis berzins
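the hatcliff-danvy entry above factors cps transformations through moggi's computational meta-language and covers several evaluation orders. as a much more modest illustration of what a cps transformation does, here is a plain call-by-value cps transform for a tiny lambda-calculus, in the style of plotkin's classic translation rather than the paper's factored construction; the term constructors and fresh-name scheme are invented for the sketch.

```python
# a minimal call-by-value cps transform for a tiny lambda-calculus
# (plotkin-style).  terms: ("var", x), ("lam", x, body), ("app", f, a);
# these constructors are assumptions of the sketch, and continuation
# variables are drawn from a "_k..." namespace assumed not to clash
# with source variables.

import itertools
fresh = (f"_k{i}" for i in itertools.count())

def cps(term):
    kind = term[0]
    k = next(fresh)
    if kind == "var":                # [[x]] = \k. k x
        return ("lam", k, ("app", ("var", k), term))
    if kind == "lam":                # [[\x.e]] = \k. k (\x. [[e]])
        _, x, body = term
        return ("lam", k, ("app", ("var", k), ("lam", x, cps(body))))
    if kind == "app":                # [[e1 e2]] = \k. [[e1]] (\f. [[e2]] (\v. f v k))
        _, e1, e2 = term
        f, v = next(fresh), next(fresh)
        return ("lam", k,
                ("app", cps(e1),
                 ("lam", f,
                  ("app", cps(e2),
                   ("lam", v,
                    ("app", ("app", ("var", f), ("var", v)), ("var", k)))))))
    raise ValueError(kind)

def show(t):
    return {"var": lambda: t[1],
            "lam": lambda: f"(\\{t[1]}. {show(t[2])})",
            "app": lambda: f"({show(t[1])} {show(t[2])})"}[t[0]]()

# ((\x. x) y) in cps: the application now threads an explicit continuation
print(show(cps(("app", ("lam", "x", ("var", "x")), ("var", "y")))))
```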
a practical and flexible flow analysis for higher-order languages a flow analysis collects data-flow and control-flow information about programs. a compiler can use this information to enable optimizations. the analysis described in this article unifies and extends previous work on flow analysis for higher-order languages supporting assignment and control operators. the analysis is abstract interpretation based and is parameterized over two polyvariance operators and a projection operator. these operators are used to regulate the speed and accuracy of the analysis. an implementation of the analysis is incorporated into and used in a production scheme compiler. the analysis can process any legal scheme program without modification. others have demonstrated that a 0cfa analysis can enable the optimizations, but a 0cfa analysis is o(n^3). an o(n) instantiation of our analysis successfully enables the optimization of closure representations and procedure calls. experiments with the cheaper instantiation show that it is as effective as 0cfa for these optimizations. j. michael ashley r. kent dybvig analyzing and measuring reusability in object-oriented design margaretha w. price steven a. demurjian experience with the automatic temporal analysis of multitasking ada designs in this paper, we report on experience gained in the temporal analysis of multitasking ada designs. the analysis tool-set, developed as part of the caede project, includes an operational specification language, and deadlock, starvation and critical race analyzers. we identify design parameters that lead to costly analysis, and then describe analysis heuristics that can lead to less costly analysis. several design examples in which we apply our heuristics are described. in one of the examples, a 50-fold reduction in analysis cost was obtained with the application of one of the heuristics. finally, we make recommendations for design environment features that would support the application of analysis heuristics. gerald m. karam raymond j. a. buhr research on synthesis of concurrent computing systems (extended abstract) the object of our research is the codification of programming knowledge for the synthesis of concurrent programs. we present sample rules and techniques that we show can be used to derive two concurrent algorithms: dynamic programming (for the class of problems that run in polynomial time on sequential machines) and array multiplication. for both derived concurrent versions the code runs in linear time. the concurrent versions are significant and complex algorithms, though they are not new and already have been reported in the literature. the synthesis knowledge for these derivations is embodied in seven synthesis rules. we expect these rules to generalize to other classes of algorithms. we have also discovered a pair of techniques called virtualization and aggregation. this pair of techniques (plus the seven rules) is shown to be powerful enough to synthesize kung's systolic array architecture [kung-76] from a specification of matrix multiplication. richard m. king the structure and performance of interpreters interpreted languages have become increasingly popular due to demands for rapid program development, ease of use, portability, and safety. beyond the general impression that they are "slow," however, little has been documented about the performance of interpreters as a class of applications. this paper examines interpreter performance by measuring and analyzing interpreters from both software and hardware perspectives. as examples, we measure the mipsi, java, perl, and tcl interpreters running an array of micro and macro benchmarks on a dec alpha platform. our measurements of these interpreters relate performance to the complexity of the interpreter's virtual machine and demonstrate that native runtime libraries can play a key role in providing good performance.
from an architectural perspective, we show that interpreter performance is primarily a function of the interpreter itself and is relatively _independent_ of the application being interpreted. we also demonstrate that high-level interpreters' demands on processor resources are comparable to those of other complex compiled programs, such as gcc. we conclude that interpreters, as a class of applications, do not currently motivate special hardware support for increased performance. theodore h. romer dennis lee geoffrey m. voelker alec wolman wayne a. wong jean-loup baer brian n. bershad henry m. levy representing and querying reusable object frameworks hafedh mili houari sahraoui ilham benyahia phoenix: a low-power fault-tolerant real-time network-attached storage device phoenix is a real-time network-attached storage device (nasd) that guarantees real-time data delivery to network clients even across single disk failure. the service interfaces that phoenix provides are best-effort/real- time reads/writes based on unique object identifiers and block offsets. data retrieval from phoenix can be serviced in server push or client pull modes. phoenix's real-time disk subsystem performance results from a standard cycle- based scan-order disk scheduling mechanism. however, the disk i/o cycle of phoenix is either completely active or completely idle. this on-off disk scheduling model effectively reduces the power consumption of the disk subsystem, without increasing the buffer size requirement. phoenix also exploits unused disk storage space and maintains additional redundancy beyond the generic raid5-style parity. this extra redundancy, typically in the form of block replication, reduces the time to reconstruct the data on the failed disk. this paper describes the design, implementation, and evaluation of phoenix, one of the first, if not the first, nasds that support fault- tolerant, real-time, and low-power network storage service. anindya neogi ashish raniwala tzi-cker chiueh kernel korner: booting the kernel alessandro rubini application and experimental evaluation of state space reduction methods for deadlock analysis in ada an emerging challenge for software engineering is the development of the methods and tools to aid design and analysis of concurrent and distributed software. over the past few years, a number of analysis methods that focus on ada tasking have been developed. many of these methods are based on some form of reachability analysis, which has the advantage of being conceptually simple, but the disadvantage of being computationally expensive. we explore the effectiveness of various petri net- based techniques for the automated deadlock analysis of ada programs. our experiments consider a variety of state space reduction methods both individually and in various combinations. the experiments are applied to a number of classical concurrent programs as well as a set of "real- world" programs. the results indicate that petri net reduction and reduced state space generation are mutually beneficial techniques, and that combined approaches based on petri net models are quite effective, compared to alternative analysis approaches. s. duri u. buy r. devarapalli s. m. shatz an object addressing mechanism for statically typed languages with multiple inheritance in this paper we are concerned with addressing techniques for statically typed languages with multiple inheritance. the addressing techniques are responsible for the efficient implementation of record field selection. 
in object-oriented languages, this record selection is equivalent to the access of methods. thus, the efficiency of these techniques greatly affects the overall performance of an object-oriented language. we will demonstrate that addresses, in such systems, cannot always be calculated statically and show how symbol tables have been used as address maps at run time. the essence of the paper is a new addressing technique that can statically calculate either the address of a field or the address of the address of the field. this technique is powerful enough to support an efficient implementation of multiple inheritance with implicit subtyping as described by cardelli. r. c. h. conner a. dearle r. morrison a. l. brown the ieee software engineering standards process software engineering has emerged as a field in recent years, and those involved increasingly recognize the need for standards. as a result, members of the institute of electrical and electronics engineers (ieee) formed a subcommittee to develop these standards. this paper discusses the ongoing standards development, and associated efforts. fletcher j. buckley standards for the sake of standards - a recipe for failure c. mckay linear expression bounding steven ryan efficient approximation for models of multiprogramming with shared domains queueing network models of multiprogramming systems with memory constraints and multiple classes of jobs are important in representing large commercial computer systems. typically, an exact analytical solution of such models is unavailable, and, given the size of their state space, the solution of models of this type is approached through simulation and/or approximation techniques. recently, a computationally efficient iterative technique has been proposed by brandwajn, lazowska and zahorjan for models of systems in which each job is subject to a separate memory constraint, i.e., has its own memory domain. in some important applications, it is not unusual, however, to have several jobs of different classes share a single memory "domain" (e.g., ibm's information management system). we present a simple approximate solution to the shared domain problem. the approach is inspired by the recently proposed technique which is complemented by a few approximations to preserve the conceptual simplicity and computational efficiency of this technique. the accuracy of the results is generally in fair agreement with simulation. alexandre brandwajn william m. mccormack acorn: apl to c on real numbers a prototype apl to c compiler (acorn: apl to c on real numbers) was produced while investigating improved tools for solving numerically intensive problems on supercomputers. acorn currently produces code which runs slower than hand- coded cray fortran, but we have identified the major performance bottlenecks, and believe we know how to remove them. although created in a short time on a limited budget, and intended only as a proof of the feasibility of compiling apl for numerically intensive environments, acorn has shown that straightforward compiled apl will be able to compete with hand-optimized fortran in many common supercomputer applications. robert bernecky charles brenner stephen b. jaffe george p. moeckel enterprise frameworks: guidelines for selection mohamed e. fayad david s. hamu the design of an object-oriented operating system (abstract): a case study of choices roy h. campbell peter w. madany session 8a: editors i. 
miyamoto analysis of operation delay and execution rate constraints for embedded systems rajesh k. gupta manageability, availability and performance in porcupine: a highly scalable, cluster-based mail service this paper describes the motivation, design, and performance of porcupine, a scalable mail server. the goal of porcupine is to provide a highly available and scalable electronic mail service using a large cluster of commodity pcs. we designed porcupine to be easy to manage by emphasizing dynamic load balancing, automatic configuration, and graceful degradation in the presence of failures. key to the system's manageability, availability, and performance is that sessions, data, and underlying services are distributed homogeneously and dynamically across nodes in a cluster. yasushi saito brian n. bershad henry m. levy optimization issues relating to subunits tom burger a module mechanism for constraints in smalltalk thinglab ii, a rewrite of thinglab, provides two representations of objects: fully-exposed and interpreted things, or hidden and compiled modules. both representations provide the full power of the thinglab ii constraint hierarchy (an ordering of constraint preferences), and both can be manipulated by the graphical user- interface. this paper briefly describes modules and their environmental support in thinglab ii. it also describes the process by which the modulecompiler translates a collection of objects (a thinglab ii thing) into a single object with compiled and optimized smalltalk-80 methods (a module). b. n. freeman-benson object management in a case environment evan w. adams masahiro honda terrence c. miller evaluating the performance efficiency of ada compilers mitchell j. bassman gerald a. fisher anthony gargaro integrating segmentation and paging protection for safe, efficient and transparent software extensions the trend towards extensible software architectures and component-based software development demands safe, efficient, and easy-to-use extension mechanisms to enforce protection boundaries among software modules residing in the same address space. this paper describes the design, implementation, and evaluation of a novel intra- address space protection mechanism called _palladium,_ which exploits the segmentation and paging hardware in the intel x86 architecture and efficiently supports safe kernel-level and user-level extensions in a way that is largely transparent to programmers and existing programming tools. based on the considerations on ease of extension programming and systems implementation complexity, _palladium_ uses different approaches to support user-level and kernel-level extension mechanisms. to demonstrate the effectiveness of the _palladium_ architecture, we built a web server that exploits the user-level extension mechanism to invoke cgi scripts as local function calls in a safe way, and we constructed a compiled network packet filter that exploits the kernel-level extension mechanism to run packet-filtering binaries safely inside the kernel at native speed. the current _palladium_ prototype implementation demonstrates that a protected procedure call and return costs 142 cpu cycles on a pentium 200mhz machine running linux. tzi-cker chiueh ganesh venkitachalam prashant pradhan the msl compiler writing project e. f. 
elsworth the solution of separable queueing network models using mean value analysis because it is more intuitively understandable than the previously existing convolution algorithms, mean value analysis (mva) has gained great popularity as an exact solution technique for separable queueing networks. however, the derivations of mva presented to date apply only to closed queueing network models. additionally, the problem of the storage requirement of mva has not been dealt with satisfactorily. in this paper we address both these problems, presenting mva solutions for open and mixed load independent networks, and a storage maintenance technique that we postulate is the minimum possible of any "reasonable" mva technique. john zahorjan eugene wong managing apl public code for an in-house apl system (before and after logos) this paper will present apl public library management concepts used in a large in-house apl system development environment and describe a number of tools developed for this purpose. public libraries, workspaces, functions and variables are discussed as well as documentation, reporting of changes, audit records, backup versions, and test versions. user tools discussed include functions for describing apl public files, libraries, workspaces, functions and variables, locating these objects, keyword searches, and a glossary of terms. other user facilities include a news system and an online system for submitting problem reports and requests for enhancements. further maintenance tools include automatic generation of wsdoc listings when changes are made and automatically generated reports providing a variety of information such as libraries or workspaces lacking descriptions, lists of apl public workspaces and files with their status, function source verification, and exception reporting of workspace changes. a discussion of logos, a programming environment for apl offered by ip sharp associates, is also presented, along with how it enhances the capabilities previously described. d. f. stoneburner scalable operating systems, or what do a million processors mean? falk langhammer an operating systems course using minix j. h. hays some notes on software design: reply to a reaction robert mclaughlin behavior sampling: a technique for automated retrieval of reusable components andy podgurski lynn pierce reusing a compiler m. ancona g. dodero a. clematis efficient call graph analysis we present an efficient algorithm for computing the procedure call graph, the program representation underlying most interprocedural optimization techniques. the algorithm computes the possible bindings of procedure variables in languages where such variables only receive their values through parameter passing, such as fortran. we extend the algorithm to accommodate a limited form of assignments to procedure variables. the resulting algorithm can also be used in analysis of functional programs that have been converted to continuation-passing style. we discuss the algorithm in relationship to other call graph analysis approaches. many less efficient techniques produce essentially the same call graph. a few algorithms are more precise, but they may be prohibitively expensive depending on language features. mary w. hall ken kennedy benchmark test to estimate optimization quality of compilers m. klerer h. liu
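the zahorjan-wong entry above extends mean value analysis beyond closed networks; as background, the core of classical mva for a closed, single-class, load-independent product-form network is a short recursion over the population size that uses little's law and the arrival theorem. the sketch below implements that textbook recursion, not the paper's open/mixed extension or its storage-maintenance technique; the station names and service demands are made-up inputs.

```python
# exact mean value analysis (mva) for a closed, single-class, product-form
# queueing network with load-independent queueing stations.  demands[k] is
# the total service demand at station k (seconds per job); n_jobs >= 1 is
# the customer population.  the inputs below are illustrative assumptions.

def mva(demands, n_jobs):
    assert n_jobs >= 1
    q = {k: 0.0 for k in demands}          # mean queue lengths at population 0
    for n in range(1, n_jobs + 1):
        # arrival theorem: an arriving job sees the queue lengths computed
        # for the network with one fewer customer
        r = {k: d * (1.0 + q[k]) for k, d in demands.items()}
        x = n / sum(r.values())            # little's law over the whole network
        q = {k: x * r[k] for k in r}       # little's law per station
    return x, r, q                         # throughput, residence times, queue lengths

demands = {"cpu": 0.05, "disk1": 0.08, "disk2": 0.04}
throughput, residence, queues = mva(demands, n_jobs=10)
print(f"throughput = {throughput:.2f} jobs/sec")
for k in demands:
    print(f"{k}: residence {residence[k]*1000:.1f} ms, mean queue {queues[k]:.2f}")
```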
deadlock control with ada95 the ada programming language is unique in its high-level concurrency facilities, supported by a secure and powerful set of programming constructs. ada 95 now has additional capabilities, specifically the protected type construct. although the ada rendezvous is prone to deadlock, a careful discipline can ensure that this type of dead state does not occur. restricting task communication assists in preventing deadlock, yet such restrictions may be too limiting and/or too inefficient for an application. indeed, several researchers have suggested that deadlock should not be prevented, but monitored and corrected during testing and/or run time. this paper takes the position that deadlock should be prevented by following specific guidelines in the use of the rendezvous. we define deadlock, show how it can occur with ada tasks, and outline mechanisms that are appropriate for its control. trudy levine on the automatic generation of data distributions s. k. s. gupta s. d. kaushik c.-h. huang j. r. johnson r. w. johnson p. sadayappan a portable high-speed pascal to c translator k. bothe b. hohberg ch. horn o. wikarski metamodeling: how, why and what? peter kokol network applications in ada 95 jo forth is alive and well and living in a hypercube somewhere in wyoming charles p. howerton measuring file access patterns in unix irene hu hypernode reduction modulo scheduling josep llosa mateo valero eduard ayguade antonio gonzalez review of the state-of-the-art (session summary) the opening session of the 5th international software process workshop covered a wide range of topics so it is not possible to capture its full scope in this brief summary. this paper briefly outlines the main views expressed by the participants and then provides a precis of the participants' comments. because of the dynamic nature of those discussions, however, i have taken the liberty of grouping related points for a more coherent presentation. i also take full responsibility for any errors or omissions. even though this first session was entitled "state-of-the-art," there was little discussion of actual process modeling experience. from the many examples given in the proceedings, however, there was considerable evidence of practical experience and the discussions reflected a general consensus that process modeling methods have been found both practical and helpful. while no examples were given of the unsuccessful application of formal methods, there was a strong minority view that such low-technology methods as procedure manuals and software standards were widely used and are often quite effective. there was also general agreement that tools which include an implicit process were not true process models. to qualify as a process model, the process used by the tool should be explicitly defined. a strong consensus also held that the time had now come to more broadly apply these modeling methods to a range of well-known software process activities. it was felt this would provide useful insights on their development and introduction. while there was no focused discussion on the objectives of process modeling, the purposes noted fell into three general categories: to provide a precise framework for understanding, experimenting with, and reasoning about the process; to facilitate process automation; to provide a basis for process control. an important role of process models is improvement of the combined human/technology activities involved in producing software. because of the dynamic nature of such people-intensive work, it was suggested that these models should include the recursive capability to improve themselves.
a subject that was widely discussed and returned to again in subsequent workshop sessions was the special impact of the human element in software process models. while it was agreed that the human element adds considerable complexity, there were widely divergent viewpoints. these ranged from considering human-related issues as outside our area of competence to believing that such issues were central to all process work. bill curtis opened this first session with a discussion of key issues and the following challenge: "how much of actual software development behavior will be affected (by process modeling) and what will be the benefit?" he then divided software process issues into two classes: the control process and the learning process. the former concerns management's need for an orderly framework for evaluating progress while the latter more closely approximates the exploratory and often intuitive nature of much software development work. sam redwine noted that software engineering could likely learn from the management methods used in developing and managing teams in professional sports. the ensuing discussion was then focused by bill curtis' suggestion that configuration management would be a good place to initially apply process models since this well-understood function clearly distinguishes between product creations and the control of product evolution. peter feiler pointed out that configuration management should be viewed as having both support and control functions since it both helps the professionals do quality work and provides a controlled framework for system evolution. a number of other software development areas were then suggested for process modeling, including bug tracking, the product build process, and testing. anthony finkelstein noted that his process research focused on aspects of the requirements process because he feels this area has less support, is less well-defined, and that any results are thus more likely to have a substantial impact. mark kellner raised the issue of the objectives of process modeling. with traditional software development, for example, software products are executed by machines while process programs must be understood and at least partially performed by people. this causes a fundamental paradigm shift. bill curtis suggested that an important role of process programs is to provide models for experimentation and learning. peter feiler also noted that process programs provide a precise basis for documenting, communicating, validating, simulating, controlling, and automating software development work. sam redwine further pointed out that an attachment to manny lehman's paper included a comprehensive listing of the potential roles of process models. bill curtis then asked how the subject of process programming differed from traditional computer science. colin tully noted that with process programs, we are trying to describe processes that are not entirely executed by machine. in this effort, we have tended to accept the existing paradigms for software development. since these do not seem to fit too well, he questioned whether we are on the right track and if we know what paradigms are most appropriate. gail kaiser and frank belz both felt that we would learn much from the paradigms of real-time systems design since both involve multiple asynchronous views of complex activities. peter feiler questioned whether process models differed that much from many other areas which involve people and tools in an overall process. 
a common problem is the search for promising areas to automate. karen huff then made the observation that when dealing with people we can often focus on what we want done rather than the more explicit details of how it is to be accomplished. watts humphrey added that distinctions should be made between dealing with machines and people. with people, for example, the analog of the instruction set is neither clear, consistent across environments, or stable. there are also questions of motivation and accuracy and, as demonstrated by manufacturing experience, people do not perform very well when treated as machines. as demonstrated by the japanese in automobile production or by alcoa with aluminum sheet production, human performance is enhanced when they feel they own the process they are using. this is achieved in such cases by involving them in continuous process improvement. colin tully pointed out that this was manny lehman's issue with process programming. dave garlan next asked why, in a state-of-the- art session, we were not hearing much about experience. was it that there were no real successes? lolo penedo pointed to the pml-pce work (described in the roberts and snowdon papers) as a good example. wilhelm schaefer noted they had considerable success in formally modeling the berlin headquarters of a large software operation. dieter rombach said that there is a growing body of modeling experience and that by devoting too much effort to selecting the right formalism we might repeat the futile programming search for the one right language. since there is not likely to be one best formalism, we should pick some real process examples and use available methods to model them. bill curtis then raised the question of the qualifications for a process program. is make, for example, a process program? bob balzer contended that a tool with a built in, though not explicit, process was not a process program. dewayne perry agreed, although he felt that make partially qualified in that it coordinated the operation of other tools. frank belz then pointed out one of the dangers of building implied processes into tools. by selecting a single way to do a job which could be done in several different ways, the entire process is constrained to this single alternative. taly minsky noted an important distinction between tools and the larger processes which control them. with configuration management, for example, we are not dealing with one person systems. often the actions of a single individual can impact many others. this generally requires a consensus decision system. when one module is to be changed or promoted, the other involved modules must be, so to speak, consulted. when conflicts arise, these must be flagged and resolved before the proposed action can be taken. this requires sets of rules which raises the further question of what rules are appropriate and who writes them (see discussion of session #3 on policies). taly also suggested that there should likely be a rule-making hierarchy with different people having different rule- making authority. bob balzer then suggested a separation of process modeling issues. one category concerned the activity domains which we can learn about and mechanize. as we see what is successful, we can make changes to provide further improvements. the other issue concerns the formal representation of the process, including its use of tools. mark dowson objected to the neatness of this paradigm. 
he interpreted bob balzer as saying that you first devise a model of the activity of interest, you then formalize its representation, and then finally mechanize it for execution. actual processes, he feels, do not work this way. we typically start with a vague idea of how to proceed, hack up a mechanization which works, and then improve it until it performs effectively. when we have worked with it long enough, we may finally understand the process and may in fact end up in much the same place. steve reiss suggested that in this work we should distinguish between those classes of models which can be mechanized and those that cannot. the latter, he feels, have to be dynamic because they generally deal with human behavior and management issues. watts humphrey suggested that bob balzer include a third category: developing and improving the process. in fact, processes should generally have the recursive character of including the means for their own improvement. to achieve process ownership, effective human-intensive processes should then include these three activity classes. bob agreed that improvement was an essential process activity. mark dowson suggested an important difference between reasoning about a process and process execution. he much prefers to learn from the process rather than from some product which contains an implied process. deciphering a process by studying a tool that embodies it is about as productive as attempting to understand a program from its object code. that is what convinced him of the importance of specification languages. karen huff also noted that a useful way to improve one's understanding of a process is to separate the enacting activities from those concerned with deciding what to do. david notkin then raised an issue which largely occupied the concluding discussion of this session. he suggested that we might be using formalization and technology because that is what we like. he asserted that we can do a lot with low-technology processes. mark dowson agreed and noted that a lot of current process models in use are based on manuals, procedures and standards. we cannot expect to find the one right formalization for everything; maybe different parts of the process should use different approaches. watts humphrey noted that the current low-technology representations of process models often cause problems. for example, the standards and procedures manuals in many organizations are out of date and gather dust. while real processes are not static, their representations in tools and manuals tend to petrify. when we learn to use this technology appropriately it should help us to validate processes, to provide an appropriate and understandable level of abstraction, to support their execution, and to help keep them dynamic. if the process representation cannot conveniently be kept current, it will soon become obsolete and either unduly constrain behavior, be ignored, or be replaced. marc kellner reinforced this view with his experiences in modeling several military processes for maintaining large software systems. in all cases they have such low-tech process descriptions as standards and procedures manuals but they are always out of date and unused. in no case that they have studied are the people actually following the official processes which are documented in the manuals. on close examination, in fact, they have found that these manuals are quite inaccurate and misleading. clive roberts noted that low-technology process standards are much better than nothing.
often, in fact, projects fail through not understanding their own process. dewayne perry added that, even when the programmers understand their process, they can also fail when they do not have the proper tools to understand their problems. mark dowson concluded that projects that get their process wrong are sure to fall on their face. bob balzer returned to david notkin's point with the following three comments. we are now stuck with informal and out of date process definitions. if we do not explicitly represent our processes, we must leave them to the management system, and that has not worked very well for software so far. with formalization, we can get below the surface of our processes to better understand and improve them. bill curtis concluded the session by noting that organizations work with processes of all levels of formality. the typical financial audit and control systems are quite formal. arthur anderson, for example, has an extensive training program for their new employees that teaches them exactly how to work with clients. while these guidelines may not be followed precisely in every case, they provide a working framework. in software, many organizations do not have any explicit representation of their process. it is like being lost; they feel out of control. often, in fact, they are. watts humphrey an adaptive globally-synchronizing clock algorithm and its implementation on a myrinet-based pc cluster cheng liao margaret martonosi douglas w. clark a message passing standard for mpp and workstations jack j. dongarra steve w. otto marc snir david walker general strategies for dynamic reconfiguration in recent years there has been a great deal of attention in the software engineering community on the development of techniques and tools that provide support for _dynamic reconfiguration,_ the ability to make changes to a running application. the changes of interest include 1) adding/removing/moving components; 2) adding/removing bindings (communication channels); 3) changing the characteristics of the components or bindings. my work in this area has focused both on software support for dynamic reconfiguration of parallel applications and on frameworks and static software analysis techniques for determining the validity of component-level adaptations in the context of dynamic reconfiguration. elizabeth l. white an evaluation of staged run-time optimizations in dyc brian grant matthai philipose markus mock craig chambers susan j. eggers kernel korner: the "virtual file system" in linux alessandro rubini extensible file system (elfs): an object-oriented approach to high performance file i/o scientific applications often manipulate very large sets of persistent data. over the past decade, advances in disk storage device performance have consistently been outpaced by advances in the performance of the rest of the computer system. as a result, many scientific applications have become i/o-bound, i.e. their run-times are dominated by the time spent performing i/o operations. consequently, the performance of i/o operations has become critical for high performance in these applications. the elfs approach is designed to address the issue of high performance i/o by treating files as typed objects. typed file objects can exploit knowledge about the file structure and type of data. typed file objects can selectively apply techniques such as prefetching, parallel asynchronous file access, and caching to improve performance. 
also, by typing objects, the interface to the user can be improved in two ways. first, the interface can be made easier to use by presenting file operations in a more natural manner to the user. second, the interface can allow the user to provide an "oracle" about access patterns, that can allow the file object to improve performance. by combining these concepts with the object-oriented paradigm, the goal of the elfs methodology is to create flexible, extensible file classes that are easy to use while achieving high performance. in this paper we present the elfs approach and our experiences with the design and implementation of two file classes: a two dimensional dense matrix file class and a multidimensional range searching file class. john f. karpovich andrew s. grimshaw james c. french a scheme for the translation of the tucker taft select-and statement into standard ansi ada ronald j. theriault concepts and paradigms of object-oriented programming peter wegner some comments on "a solution to a problem with morel and renvoise's 'global optimization by suppression of partial redundancies'" abstract: drechsler and stadel presented a solution to a problem with morel and renvoise's "global optimization by suppression of partial redundancies." we cite some earlier generalizations of morel and renvoise's algorithm that solve the same problem, and we comment on their applicability. arthur sorkin safety nets for error trapping l. morgenstern java: a language for software engineering (tutorial) jim waldo orphan problems and remedies in distributed systems djemal h. abawajy book review: linux installation and getting started phil hughes some improvements on utah standard lisp k namba documenting the software development process the software engineering process group (sepg) at the data systems division of litton systems, inc., was given the task of documenting the software development process used within the division. this paper describes how the sepg at litton accomplished this task. it discusses the sources we used for guidance and describes the resulting documentation for defining the software development process and the methods and tools that support the process. after reviewing the existing software process documentation at litton, the sepg concluded that three separate documents were required: a revised set of software policies and procedures (ppgs), a software engineering handbook, and a software management handbook. the sepg established working groups to develop these documents. the working group responsible for the software engineering handbook decided to develop it as a user manual for the software development process. following weiss' guidelines for developing a usable user manual, the working group developed storyboards for sections of the manual. a model initially developed at ibm and refined by sei and others was used to describe the software development process as a series of work tasks, each of which has entry criteria, exit criteria, objectives, and steps to perform. several authors developed the storyboards and the corresponding modules of the handbook. the handbook was partitioned into short modules, each of which has a topic sentence and a figure (where applicable). the result is a modular software engineering handbook that is easy to read and maintain. the use of working groups and the development of the software engineering handbook as a user manual proved to be efficient and effective methods for generating high quality software process documentation. june s. hopkins jean m. 
jernow guarded methods vs. inheritance anomaly szabolcs ferenczi clarity mcode: a retargetable intermediate representation for compilation brian t. lewis l. peter deutsch theodore c. goldstein book review: unix network programming, volume 1, second edition david bausum product-based process models ataru t. nakagawa kokichi futatsugi operations for programming in the all a primary goal of software engineering is to improve the process of software development. it is being recognised that recent integrated programming environments have made significant progress towards this aim. this paper describes new operations, suitable for such environments, which are applicable in a much wider scope of programming, termed here as programming in the all. development of software in this new scope is carried out incrementally in program fragments of various types, called fragtypes. fragtypes range from a simple expression type to a complete subsystem type, and therefore are suited to the development of non- trivial software. the proposed operations on fragtypes have been incorporated in the design of the programming environment mupe-2 for modula-2, which is currently under development at mcgill university. nazim h. madhavji linux.com marjorie richardson an approach to software configuration control the purpose of this paper is to discuss the process by which a system's life cycle and its associated life cycle products are managed to ensure the quality and integrity of the system. we call this process configuration control. although many of the ideas in this paper are applicable to systems in general, the focus of this paper is on configuration control of systems with software content. it is becoming apparent to many, in both government and private industry, that the high cost of maintenance of existing computer systems may be attributed to poor configuration control early in the system's life cycle. for example, in an article entitled "a corporate road, map for systems development in the '80s, the following claim appears, william bryan stanley siegel gary whiteleather macroexpand-all: an example of a simple lisp code walker richard c. waters code reader vikas k. kamat user administration: how to successfully manage your users david bandel garbage: two new structures g. alan creak fault tolerance of clock synchronization algorithms krasimir djambazov edita djambazova an introduction to algol 68 henry j. bowlden a survey of extensions to apl a survey of proposed extensions to the apl language is made with emphasis placed on the motivations for various proposals, the differences between them and the consequences of their adoption. some issues of a more general nature concerning the purpose, process and direction of language extension are also discussed. an extensive bibliography is provided with annotations concerning the nature, development and influence of various authors' works. areas of extension encompassed by the survey include nested arrays, complex numbers, uniform application of functions, laminar extension, primitive functions, control structures, direct definition, operators, system functions and variables, name scope control and event trapping. karl fritz ruehr the structure and value of modularity in software design the concept of information hiding modularity is a cornerstone of modern software design thought, but its formulation remains casual and its emphasis on changeability is imperfectly related to the goal of creating added value in a given context. 
we need better explanatory and prescriptive models of the nature and value of information hiding. we evaluate the potential of a new theory---developed to account for the influence of modularity on the evolution of the computer industry---to inform software design. the theory uses _design structure matrices_ to model designs and _real options_ techniques to value them. to test the potential utility of the theory for software we apply it to parnas's kwic designs. we contribute an extension to design structure matrices, and we show that the options results are consistent with parnas's conclusions. our results suggest that such a theory does have potential to help inform software design. principal values and branch cuts in complex apl complex numbers are useful in science and engineering and, through analogy to the complex plane, in two-dimensional graphics, such as those for integrated- circuit layouts. the extension of apl to complex numbers requires many decisions. almost all have been discussed in detail in a recent series of papers. one topic requiring further discussion is the choice of branch cuts and principal values for the primitive apl functions that require them. conventional mathematical notation and the experience of other computer languages are of only moderate help. for example, one cannot find in the mathematics or computer-science literature a definitive value for the principal value of the arcsin of 3. the extension of apl to the complex domain presents a unique opportunity to define a set of choices that will best serve apl and other languages. this paper recommends locations of all branch cuts, directions of continuity of the branch cuts, and values at the branch points. it also recommends that comparison tolerance be used in the selection of principal values. the results apply to apl, other languages, applications packages, and vlsi hardware for complex calculations. paul penfield letters to the editor corporate linux journal staff the space problem russell m. clapp trevor mudge bugfind: a tool for debugging optimizing compilers j. m. caron p. a. darnell improving the precision of inca by preventing spurious cycles the inequality necessary condition analyzer (inca) is a finite-state verification tool that has been able to check properties of some very large concurrent systems. inca checks a property of a concurrent system by generating a system of inequalities that must have integer solutions if the property can be violated. there may, however, be integer solutions to the inequalities that do not correspond to an execution violating the property. inca thus accepts the possibility of an inconclusive result in exchange for greater tractability. we describe here a method for eliminating one of the two main sources of these inconclusive results. stephen f. siegel george s. avrunin limitations of graham-glanville style code generation d spector p k turner endeavors richard taylor david redmiles modelling stars using xml we suppose collections of xml data described by document type definitions (dtds). this data has been generated by applications and plays a role of oltp database(s). a star schema, a well-known technique used in data warehousing, can be applied. then dimension information is supposed to be contained in xml data. we will use the notions of subdtd and view, and formulate referential integrity constraints in xml environment. 
we use simple pattern matching capabilities of current xml query languages for xml view specification and tree embedding algorithms for these purposes. a dimension hierarchy is defined as a set of logically connected collections of xml data. facts may be also conceived as elements of an xml document. due to the structural complexity of xml data the approach requires subtler formal model than it is done with conventional dimension and fact tables described by classical star schemes. in consequence, our approach captures more from heterogeneity of source databases than it is done in classical relational approaches to data warehousing. jaroslav pokorny taming architectural evolution in the world of software development _everything_ evolves. so, then, do software architectures. unlike source code, for which the use of a configuration management (cm) system is the predominant approach to capturing and managing evolution, approaches to capturing and managing architectural evolution span a wide range of disconnected alternatives. this paper contributes a novel architecture evolution environment, called mae, which brings together a number of these alternatives. the environment facilitates an incremental design process in which all changes to all architectural elements are integrally captured and related. key to the environment is a rich system model that combines architectural concepts with those from the field of cm. not only does this system model form the basis for mae, but in precisely capturing architectural evolution it also facilitates automated support for several innovative capabilities that rely on the integrated nature of the system model. this paper introduces three of those: the provision of design guidance at the architectural level, the use of specialized software connectors to ensure run- time reliability during component upgrades, and the creation of component- level patches to be applied to deployed system configurations. andre van der hoek marija mikic-rakic roshanak roshandel nenad medvidovic acquiring experiences with executable process models the primary thesis of this position paper is that it is mandatory to experiment with executable process models in order to obtain user feedback, to identify requirements on architectural components to support such models, and to investigate the impact of automation in the process itself. this paper briefly describes three generations of investigations with respect to the formalizing, modeling, and encoding of software life-cycle processes. providing for executable processes has been one of the most important goals of this work. maria h. penedo ackermann's function in ada b a wichmann book review: essential linux marjorie richardson proof linking: modular verification of mobile programs in the presence of lazy, dynamic linking although mobile code systems typically employ link-time code verifiers to protect host computers from potentially malicious code, implementation flaws in the verifiers may still leave the host system vulnerable to attack. compounding the inherent complexity of the verification algorithms themselves, the need to support lazy, dynamic linking in mobile code systems typically leads to architectures that exhibit strong interdependencies between the loader, the verifier, and the linker. to simplify verifier construction and provide improved assurances of verifier integrity, we propose a modular architecture based on the concept of proof linking. 
this architecture encapsulates the verification process and removes dependencies between the loader, the verifier, and the linker. we also formally model the process of proof linking and establish properties to which correct implementations must conform. as an example, we instantiate our architecture for the problem of java bytecode verification and assess the correctness of this instantiation. finally, we briefly discuss alternative mobile code verification architectures enabled by the proof-linking concept. philip w. l. fong robert d. cameron timing-driven hw/sw codesign based on task structuring and process timing simulation dinesh ramanathan ali dasdan rajesh gupta automated compiler construction based on top-down syntax analysis and attribute evaluation w. f. elsworth m. b. a. parkes systems programming in concurrent prolog concurrent prolog [28] combines the logic programming computation model with guarded-command indeterminacy and dataflow synchronization. it will form the basis of the kernel language [21] of the parallel inference machine [36], planned by japan's fifth generation computers project. this paper explores the feasibility of programming such a machine solely in concurrent prolog (in the absence of a lower-level programming language), by implementing in it a representative collection of systems programming problems. ehud shapiro in support of the ada 9x real-time facilities the ada9x mapping has been criticised as being over ambitious and at odds with the "minimum change maximum benefit" ajpo dictum. consequently there is a need to cut back the scope of the proposed changes. this report considers the proposed changes which affect the programming of real-time single processor and multiprocessor systems. we argue that protected records, select waiting, asynchronous transfer of control, nested asynchronous transfer of control are all essential core facilities, and if any of these are omitted the success of ada 9x in addressing the needs of the real-time community would be in serious jeopardy. if language simplification are deemed necessary, we would advocate the removal of user-defined timers, finalisation (but not abort exit handlers), and protected record functions. a. burns a. j. wellings test data generation v. rajanna product review news dick bowman letters to the editor corporate linux journal staff distributed programming, a purely functional approach (poster) eleni spiliopoulou ian holyer neil davies a tribute to fig-forth paul frenger fantastic, unique, uml tool for the java environment (fuut-je) (poster session) in this extended abstract, we describe an experimental, uml model driven, java prototyping and development tool. ghica van emde boas from the editor corporate linux journal staff a multiple-stack manipulation procedure james f. korsh gary laison formal specification and verification of the kernel functional unit of the osi session layer protocol and service using ccs this paper describes an application of formal methods to protocol specification, validation and verification. formal methods can be incorporated in protocol design and testing so that time and resources are saved on implementation, testing, and documentation. in this paper we show how formal methods can be used to write the control sequence, i.e. pseudo code, which can be formally tested using automated support. 
the formal specification serves as a blueprint for a correct implementation with desired properties. as a formal method we chose a process algebra called "plain" calculus of communicating systems (ccs). our specific objectives were to: 1) build a ccs model of the kernel functional unit of osi session layer service; 2) obtain a session protocol specification through stepwise refinement of the service specification; and 3) verify that the protocol specification satisfies the service specification. we achieved all of our objectives. verification and validation were accomplished by using the ccs model checker, the edinburgh concurrency workbench (cwb). we chose plain ccs because of its succinct, abstract, and modular specifications, its strong mathematical foundation which allows for formal reasoning and proofs, and the existence of the automated support tool which supports temporal logic. the motivation for this work is: 1) testing the limits of ccs's succinct notation; 2) combining ccs and temporal logic; and 3) using a model-checker on a real-life example. milica barjaktarovic shiu-kai chin kamal jabbour book review: advanced programming in the unix environment david bausum surveyor's forum: the file assignment problem lawrence w. dowdy derrell v. foster automatic input of flow chart in document image the technology of document image processing has the possibility of automatic document input. by utilizing it, the flow chart in document image as the alternate expression of fortran source statements is automatically input into the computer. the paper reports the algorithm and the experimental results of field segmentation and classification in document image, the recognition of flow chart including control lines and blocks, and hand-written alphanumerical characters in the blocks and the explanatory fields, the conversion of flow chart into program functions, and the generation of fortran source statements. it showed the possibility that program functions such as initialization, input/output format and branch address were determined by the analysis of the flow chart and explanatory description under its writing rule, and also the restriction that the flow chart should be of good quality in order to recognize the flow and characters and should be subject to the writing rule. s. ito a continuum of disk scheduling algorithms a continuum of disk scheduling algorithms, v(r), having endpoints v(0) = sstf and v(1) = scan, is defined. v(r) maintains a current scan direction (in or out) and services next the request with the smallest effective distance. the effective distance of a request that lies in the current direction is its physical distance (in cylinders) from the read/write head. the effective distance of a request in the opposite direction is its physical distance plus r x (total number of cylinders on the disk). by use of simulation methods, it is shown that this definitional continuum also provides a continuum in performance, both with respect to the mean and with respect to the standard deviation of request waiting time. for objective functions that are linear combinations of the two measures, μw + kσw, intermediate points of the continuum are seen to provide performance uniformly superior to both sstf and scan. a method of implementing v(r) and the results of its experimental use in a real system are presented.
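to make the v(r) selection rule above concrete, here is a minimal sketch in python (not the authors' implementation; the function and parameter names are invented for illustration): a request in the current scan direction costs its cylinder distance from the head, while a request behind the head is penalized by r times the disk width.

    # hypothetical illustration of the v(r) effective-distance rule described above
    def next_request(requests, head, direction, r, total_cylinders):
        def effective_distance(cyl):
            toward = (cyl >= head) if direction > 0 else (cyl <= head)
            distance = abs(cyl - head)
            return distance if toward else distance + r * total_cylinders
        return min(requests, key=effective_distance)

    # example: head at cylinder 100 sweeping outward on a 1000-cylinder disk, r = 0.2
    print(next_request([90, 130, 400], head=100, direction=+1, r=0.2, total_cylinders=1000))  # -> 130

with r = 0 the penalty vanishes and the nearest request always wins (sstf); with r = 1 any request behind the head costs more than any request ahead of it, so the sweep runs to completion before reversing (scan).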
robert geist stephen daniel on being apl literate gregg taylor context-sensitive interprocedural points-to analysis in the presence of function pointers this paper reports on the design, implementation, and empirical results of a new method for dealing with the aliasing problem in c. the method is based on approximating the points-to relationships between accessible stack locations, and can be used to generate alias pairs, or used directly for other analyses and transformations. our method provides context-sensitive interprocedural information based on analysis over invocation graphs that capture all calling contexts including recursive and mutually-recursive calling contexts. furthermore, the method allows the smooth integration for handling general function pointers in c. we illustrate the effectiveness of the method with empirical results from an implementation in the mccat optimizing/parallelizing c compiler. maryam emami rakesh ghiya laurie j. hendren isolation and analysis of optimization errors mickey r. boyd david b. whalley assurance technology r. j. martin reuse of compiler analysis in a programming environment productivity in the development of software can be increased by reusing code and design analysis. following this approach we have developed an incremental optimizing compiler that reuses target code and compiler analysis. in order to be practical, it shares a database of information with other tools in a programming environment. the analysis performed by a compiler is reused to greatly reduce the recompilation time during program development and to incrementally produce target code that is optimized across a larger unit than the unit of recompilation. the resulting code is as optimized as that produced in a batch environment, while saving up to 96% of the time for recompilation. the database used in the incremental optimizing compiler is also useful for other tools in a programming environment and is an approach to solving the problems of debugging optimized code. m. p. blivens m. l. soffa beyond your first shell script how to write versatile, robust bourne shell scripts that will run flawlessly under other shells as well brian rice experimental evaluation of a generic abstract interpretation algorithm for prolog abstract interpretation of prolog programs has attracted many researchers in recent years, partly because of the potential for optimization in prolog compilers and partly because of the declarative nature of logic programming languages that make them more amenable to optimization than procedural languages. most of the work, however, has remained at the theoretical level, focusing on the developments of frameworks and the definition of abstract domains. this paper reports our effort to verify experimentally the practical value of this area of research. it describes the design and implementation of the generic abstract interpretation algorithm gaia that we originally proposed in le charlier et al. [1991], its instantiation to a sophisticated abstract domain (derived from bruynooghe and janssens [1988]) containing modes, types, sharing, and aliasing, and its evaluation both in terms of performance and accuracy. the overall implementation (over 5000 lines of pascal) has been systematically analyzed on a variety of programs and compared with the complexity analysis of le charlie et al. [1991] and the specific analysis systems of hickey and mudambi [1989], taylor [1989; 1990], van roy and despain [1990], and warren et al. [1988]. 
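as a toy illustration of what a generic abstract interpretation algorithm looks like when it is separated from its abstract domain (this sketch is not gaia, and its sign domain is far simpler than the mode/type/sharing/aliasing domain described above), a least-fixpoint engine can be written once and then instantiated with a domain and a transfer function:

    # hypothetical sketch: a generic fixpoint engine parameterized by an abstract domain,
    # instantiated here with sign analysis of "x := 0; while ...: x := x + 1"
    def lfp(transfer, entry, join):
        state = entry
        while True:
            updated = join(state, transfer(state))
            if updated == state:          # no change: least fixpoint reached
                return state
            state = updated

    # abstract domain: sets of signs drawn from {'-', '0', '+'}
    def add_one(signs):                    # transfer function for "x := x + 1"
        out = set()
        for s in signs:
            out |= {'-', '0', '+'} if s == '-' else {'+'}
        return frozenset(out)

    entry = frozenset({'0'})               # x := 0 before the loop
    invariant = lfp(add_one, entry, lambda a, b: frozenset(a | b))
    print(sorted(invariant))               # ['+', '0']: x stays non-negative in the loop

the same engine could, in principle, be instantiated with much richer domains; the point of the experiments reported above is precisely to measure what such genericity costs in performance and accuracy.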
baudouin le charlier pascal van hentenryck jack: a toolkit for manipulating articulated figures the problem of positioning and manipulating three dimensional articulated figures is often handled by ad hoc techniques which are cumbersome to use. in this paper, we describe a system which provides a consistent and flexible user interface to a complex representation for articulated figures in a 3d environment. jack is a toolkit of routines for displaying and manipulating complex geometric figures, and it provides a method of interactively manipulating arbitrary homogeneous transformations with a mouse. these transformations may specify the position and orientation of figures within a scene or the joint transformations within the figures themselves. jack combines this method of 3d input with a flexible and informative screen management facility to provide a user-friendly interface for manipulating three dimensional objects. cary b. phillips norman i. badler minimum cost interprocedural register allocation steven m. kurlander charles n. fischer functional programming with apl2 norman thomson best of tech support gena shurtleff design of a new user interface for apl much progress has been made in apl systems during the last 10 years, however, most implementations still use the standard del editor, systems commands and a character oriented, user- typed, command interface. even the apl systems available for new graphics based workstation systems, such as macintosh and sun, do not fully exploit the user interface standards for those systems. it is now time to consider establishing a new standard user interface for apl running on hardware systems which support bit mapped graphics displays and a pointing device. this paper presents such a design which is based in part on the macintosh user interface, but includes special features for handling the apl workspace environment. john e. howland adapt and ada 9x s. j. goldsack a. a. holzbacher-valero r. volz r. waldrop fortran 95 gotcha! (variation on a theme) norman s. clerman a form-based approach to human engineering methodologies experience in the development and maintenance of software leads to the design of methodologies for different phases of the software engineering process. such methodologies attempt to usefully support the programmer's thought process for re-creating only good, standard patterns of programming without limiting creativity. however methodologies, as they are generally used, are limited in their impact on software quality. in this paper we present an approach for human engineering methodologies based on forms. the advantages of using a form-based interface for a software engineering environment are discussed by focusing on the design of forms, on the impact of forms on the software engineering process, and on the improved tool support facilitated by the standardization achieved by forms. h. c. kuo c. h. li j. ramanathan the mantis parallel debugger steven s. lumetta david e. culler the debian distribution jon a. murdock software engineering research agendas (panel session): a view from the trenches peter freeman ilene birkwood jack munson bob martin al pietrasanta vic basili susan gerhart management barriers to software reuse harry f. joiner calculating joint queue-length distributions in product-form queuing networks a new computational algorithm called distribution analysis by chain (dac) is developed. 
this algorithm computes joint queue-length distributions for product-form queuing networks with single-server fixed rate, infinite server, and queue-dependent service centers. joint distributions are essential in problems such as the calculation of availability measures using queuing network models. the algorithm is efficient since the cost to evaluate joint queue-length probabilities is of the same order as the number of these probabilities. this contrasts with the cost of evaluating these probabilities using previous algorithms. the dac algorithm also computes mean queue lengths and throughputs more efficiently than the recently proposed recal and mvac algorithms. furthermore, the algorithm is numerically stable and its recursion is surprisingly simple. edmundo de souza e silva s. s. lavenberg focus: kernel internals don marti semicolon-bracket notation: a hidden resource in apl the semicolon- bracket (sb) notation for indexing an array in apl has often been regarded unfavorably because of its anomalous character relative to other apl functions: the function symbol is a pair of brackets rather than a single character; the bracket pair is bound to the associated array at a higher precedence than any other function; and the requisite number of semicolons must appear literally within the brackets, so that the construct within the brackets when one or more semicolons are present is not an apl object, and therefore cannot be substituted for by another apl expression without semicolons. nevertheless, this notation provides a simple way to index arrays of any rank, and has proven to be very convenient in practice. in this paper, it will be shown that the sb notation can (a) easily be extended to defined functions, and that it can also be used (b) to allow a multiplicity of names to be used unambiguously to the left of assignment, (c) to allow multiple statements on a line in a manner similar to that provided by the diamond separator in some apl systems, and (d) to extend and generalize the use of bracketed expressions as an axis operator. a. d. falkoff a linux kernel module implementation of restricted ada tasking an ada tasking kernel is implemented as a layer beneath the linux operating system on a pc- compatible machine. this implementation is derived from yodaiken's real-time linux kernel, with new scheduling and synchronization primitives introduced specifically to support the gnat ada runtime system. primitive operations for real-time ada tasks are directly implemented on top of the underlying hardware in the form of a loadable linux kernel module. this design not only reduces execution overhead and improves control over execution timing for tasks, but also opens the door for a simple restricted-tasking runtime system that could be certified for safety-critical applications. hongfeng shen theodore p. baker recovery time of dynamic allocation processes artur czumaj novice-to-novice: a beginner's guide to compiling source code corporate linux journal staff the benefits of enumerated types in modula-2 mark cashman the myth and realities of c.a.s.e. for documentation hypertext seems to be the major focus of user documentation groups, and even some system documentation people. but system developers, engineers and architects are interested in c.a.s.e. more of our documentation work will be coming from or going into c.a.s.e. solutions and integrated systems, and documentation groups should begin to look very seriously at what c.a.s.e. 
advertises itself to be, what it is in fact, and what role documentors are likely to play in the face of this touted "software development revolution." c.a.s.e. is an acronym (and we thought those went out in the 70s) for computer-aided software engineering. very little is new here, except the combination. a c.a.s.e. tool is one of dozens of possible pieces of software, techniques, and documents that were developed since the 70s, and which, theoretically, can be combined to create a black box for system generation. the object of the exercise is to put clear, organized, system documentation into the black box, and out comes a working system, for whichever computer (called a platform) you want, with complete user documentation, both on-line and "manual". the concept should be highly threatening to documentation people, and yet i have not heard a single murmur in the community. the reason, i suspect, is not only that hypertext is more distracting, being perceived as more glamorous, but also that c.a.s.e., in order to sound modern and revolutionary, has enveloped itself with a great deal of new jargon. the first step of the process, supposedly, needs to be done only the first time you begin using c.a.s.e. actually, it is such an expensive step, that most firms ignore it, hoping for the quick and dirty answer. plus ça change, plus c'est la meme chose. in this first step the system architect analyzes the organization, known as "the enterprise", that wants the computer system. he or she determines the minimum amount of data that the enterprise uses to do its business, and establishes the relations and processes that link pieces of data into meaningful combinations, called "information." the result is "the enterprise model". a system is conceived as a view of the enterprise model, that is, one department's use of the data and processes that really belong to the enterprise. those of you who are familiar with this material may realize that i am simplifying some of this, but those of you who are not familiar with it are probably already confused, right? for those who are a trifle bewildered, let me get a bit more specific. we have an analyst who has created a very theoretical model of a corporation that says that a company is just a collection of data, and processes (programs, people, authorizations) that manipulate that data. since most corporations keep more data than they need, and do many processes at least twice, this model can be useful in cutting waste. because the object of the effort is to create systems that are clean, simple, and use a database, you can see why you begin to write your first c.a.s.e. system by doing not only a spring cleaning, but a complete analysis of the foundation of your house! because this model is so important, and because corporations are always changing, this model must be kept as a large software (and sometimes hardware) system of its own, with a huge database that documents each item of data, each process, and each combination. this work ordinarily takes years, because the data must be reconciled to show that customer name, and ship-to name, are in the end actually the same, and that each of them points to that basic data item called name. i have chosen an easy one, but you can imagine. the crucial thing about the model is that it must be right. 
and since the documentation is going to be used later for creating working systems and documentation, it should be clear and concise. some system analysts or software engineers believe that this accuracy means that only they can write this sort of documentation. they completely lose track of the importance of the text for later documents, and for the transfer of knowledge to another group of c.a.s.e. analysts who will build the view or modify it. this is not throwaway documentation, and it must be done especially well. such work is enormously time-consuming, as i have already made clear, and so i have seen groups just put in the name of a data item, its length, and its data type (alphanumeric, usually), and ignore description, aliases, and other information not important now, but vital to a succeeding generation of c.a.s.e. tool users. once the enterprise model is completely documented, system analysis can begin. the system is established as a view, by selecting the data elements it inputs and outputs. if the documentation of the data has been correctly done, all one needs is a list. the system is probably already documented as a high-level process, but the details are designed using "upper c.a.s.e." or front end c.a.s.e. tools. these are microcomputer-based drawing tools for ye olde demarco and gane and sarson diagrams, with a little module at the back that makes a small database of the relations and data items. all of this is fun, so this is usually where less far-thinking firms want to begin using c.a.s.e., and it is probably where there is the largest population of c.a.s.e. tools in the spectrum of possibilities. diagrams are sexy, and people who can't draw think that this is all just a game, but can be passed off as hard work. for the user, there are screen mock-ups that fit into this process. these are usually part of a "fourth generation language", and are either hidden, or very obvious, as part of the bottom level of the diagramming technique, creating pseudo-code for the "primitive" level of diagrams. the second-largest number of c.a.s.e. tools is probably in this part of the spectrum. they take the pseudocode, which they tell you is not really programming, but documentation, and the data descriptions, and create some form of brute-force computer code. the object of this exercise is to create something nearly foolproof, that is repeatable. some code-generators produce cobol because it is comforting, or c because it is trendy. supposedly, you put this code on your machine, compile it, or simply copy load modules, depending on the sophistication of the tool, and the whole system is supposed to run first go. what actually happens is that there needs to be real programming done in order to tweak the system. if you have an unusual platform, or one with some unusual devices running on it, or you want to have a real-time system, there must be code written to tailor the brute-force code to fit. virtually no vendor of c.a.s.e. tools has all of these parts. they may present a unified package by buying the rights to remarket someone else's portion of the spectrum. the big 8 (or are there only four now?) accounting and consulting firms usually just buy the company with the c.a.s.e. tools instead of the marketing rights, but the result is the same: there are some rough edges that need to be smoothed over by more code and perhaps some documentation.
all right, i have been describing a system---one made up of hardware for graphics, such as "mouses", software for diagramming and code generation, and methodology for doing the right steps in the right order. the usual possibilities for documentation are apparent, but what is special about this tool? in some ways there is nothing special about it. it only brings together the collection of problems that we have been working on for a long time, and are still hoping to solve, such as better on-line help, better systems documentation for tools that can be changed as the user desires, and better procedures and forms. in another way, it is very special and important, because the heart of c.a.s.e. is documentation. but the conglomeration of these documentation tools is now being put into the hands of software engineers and system analysts, and marketed as though no writing of prose were necessary. to make my point even clearer, let me explain the usual look and feel of c.a.s.e. products: the data dictionary, encyclopaedia, or central repository (i've heard all of these three names, and there may be many others) is just an on-line glossary. the standard names of data are entered, followed by pseudonyms, type (alphanumeric), and length (4 bytes, packed). following this meager description is some prose to say how the data should be used, how it differs from something similar, if it does, and any other important information to the potential user of this data. if this glossary entry is a composite item, made up of other data items, then there is a list of what it contains, usually provided by the software, e.g. "mailing label" contains "full name", and "address". when you look up address you find it contains adr line1, adr line2…postal code. what most of the project spends its time doing is looking at these data entries to decide if this is the one they want, and if it is, connecting it to process entries, or other data entries. for instance, an analyst wants to build a system for keeping an inventory of office furniture (there have been thefts lately), so he needs to find some identifier for the office furniture, and he might just use the catalogue number from the purchase order for this furniture. he then builds a new record that attaches that catalogue number to the name of the employee who has been assigned that desk. the employee name, and the catalogue number exist in the glossary, so he must find them, probably by starting a search through the personnel record description, and the purchase order records. he then copies the data names, and creates his new record containing them. he then attaches the record to his new system name. sorting and searching requires some good text, preferably brief, and to the point. there are a great many names in any enterprise. naming data well is a special skill that is best done by someone with a good vocabulary, but only prose will make it clear in the end that this is the right data item, and where it can be copied from. sorting and searching is fully 50% of any c.a.s.e.-related system-development project once the glossary is begun. these encyclopaedic data bases can, of course, be searched in any order.
so here we have not only the usual problems of on-line documentation---how to put it all on a single screen and make it clear---but also the complications of a hypertext---we know not in what order the information will appear to our reader; and we can give him no clues in terms of position in a book, as to what he might have been expected to read before he began reading here. once the data has been identified, or more usually, while it is being identified, the children are playing in the demarco, gane-sarson sandbox. here are diagrams that are good tools for the designer, but which are useless without the glossary, and so are often ignored once the preliminary design phase is done. the circles and boxes ordinarily cannot have the well-designed names shown in demarco, et al., because these diagrams are being used by the system to generate linkages for use later in the code generator. in some systems comments are allowed, and are usually ignored. the diagrams are the most attractive part of the process, but they are often cumbersome as they extend past the boundaries of the screen, or, when reduced, are too small to read. so while they are updatable and so on, they are employed much less than the glossary over the life cycle of the system. imposed on all of this development so far is the notion of life cycle. this is some plan that can only partially be imposed by the system, and involves a good deal of good common sense. everyone, without exception, hated the 1970s system development methodologies that were books of long-winded aphorisms followed by forms to copy a thousand times and fill out, one for each of the one thousand data items to be used in this system. and yet behind all these forms and writing down of mother's reminders to wash behind your ears, there was some important stuff, which cannot be done away with. most c.a.s.e. tools try to gloss over the methodology by giving you a diagram, and telling you to wing it, hoping that you have learned enough in a course on the subject to do the right thing. teaching planning, good habits and good models on which to base standards is still a job for writing, film or some other, non-software medium. all c.a.s.e. tools give lip-service to the methodology. none addresses it squarely for someone who has never used a c.a.s.e. tool before. the bulk would be bad psychology; the exposure to the real length of time all of this will take, and the effort involved would kill sales. screen mock-ups begin pretty early on. supposedly this is where user testing begins. the usual 4gl method of mocking up screens is integrated with the glossary and the diagramming tools to put the right data with the right length of field on the screen. text for prompts may be taken from the glossary. it is a rare firm that sees this testing as the first opportunity for usability testing of the documentation and user interface, because the c.a.s.e. myth says that documentation will come as part and parcel of the system development process. this means that the documentation that users will be expected to see will be the prose in the glossary---if anyone could be bothered to write it. although there is nothing to stop an enlightened system developer from hiring a psychologist to help in the development of the user interface, or a good word monger to put the mot juste in the prompt to keep the screen clear, and easy to use, yet not condescending, there is also nothing in the c.a.s.e. methodology to promote the use of experts here beside the user, who knows his system best. 
here i may add that in addition to the myth of c.a.s.e. there is something of a myth that users know what they want, meaning what they need. the reality is that they know what they like, and that may just be the best of whatever they have seen so far. shown something better, they usually ask for the system to be redesigned, and the cyclical part of that life cycle comes into play. most c.a.s.e. tools recognize the cyclical problem, and try to take care of it here in the screen development. much time may be wasted here because only software engineers and users are involved rather than experts in user interfaces. once the screens are accepted, then coding begins. of course this is not supposed to be programming, just the creation of pseudo-code, that last element in the yourdon, demarco, gane-sarson methodology. actually it is programming in some version of a fourth generation language, with some third generation features to take care of "platform" dependencies. since this coding is part of the c.a.s.e. product, no system documentation is needed over and above the glossary and diagramming. actually, since the tools are usually separate, either there is some "reverse engineering" to take the code back to the glossary, or the code is really undocumented. as we know from many published papers, 4gls are not easily interpreted without the help of documents. once the generated code is merged with the platform-dependent modules, the code is supposed to be running. actually, patches may well be required. there is practically no way to document the patches neatly in any of the glossary or diagramming niches, except in text. some c.a.s.e. tools, which i will carefully not name, then claim that it is possible to use all the documentation that has been accumulated thus far to generate the user documentation automatically. what happens is a long printout of the help screens, which are, in turn, the text from the glossary. reams of text with sgml tags are technically documentation, but of the sordid kind that i, for one, would like to see abolished. there can never be a point to documentation built out of homogeneous blocks of object descriptions. there is no place for insight. if the c.a.s.e. tool permits the addition of text for prefaces and the like, then the documentation becomes ancillary to the construction of the system, and all the usual problems of keeping the documentation current will reappear. having sat through what must seem like a diatribe on the design of c.a.s.e. tools, let me say that i like them, and believe that they are exciting additions to system development. what i decry is the promotion of them as minimizing the need for experts, such as documentation and user interface specialists, and even good programmers who can code in assembler. it is a myth that a few good software engineers, and a few sterling users, will necessarily build great systems with c.a.s.e. tools in practically no time. let me highlight the role of documentation experts in c.a.s.e.: needs document. in order to get any system implemented in any methodology, it is still necessary to have a needs document to ask for money and manpower. talent is required to get money where it is tight, and there is lots of scope here for the right sort of expert. feasibility studies. feasibility studies, now written in plain language, must be made available to users as well as upper management with the new notions of c.a.s.e. talent is once again required. requests for proposals.
many firms will need to hire consultants to develop the enterprise models and systems. good requests for proposal are hard to find, and as a result, estimates for the work are often wildly out of line, and the whole project costs too much. a really good writer would be invaluable for these. there is not sufficient work being done in this area, and i believe that a few textbooks could be written for this work. information technology strategies. once the enterprise model is developed, and integrated into what is usually called an "information technology strategy", this plan has to be promulgated in the company, and among potential contractors to develop systems to adhere to it. these documents are either detailed lists of hardware and software, or collections of aphorisms that would do the boy scouts proud. i haven't seen any work done in this area, and there certainly should be. glossary entries. the texts that appear in the encyclopaedia for modelling are critical. the whole notion of c.a.s.e. stands or falls by them. the wording must be honed to perfection so that it is easy to comprehend in a glance at the screen, and yet perfectly accurate. good writing skills should be complemented by genius in order to make the processes of searching and selecting items just tolerable. to permit the text to be used as user documentation, as well as system documentation, requires heaven-sent revelation beyond the scope of genius. although there is certainly need for, and scope for, research in technique here, i quietly harbor the belief that only art, not science, can deal with this problem. still, real people need to do this, and there should be real training in how to approach the creativity of the effort. screen design and prompts. the wording on screens, as well as the layout, the typefaces, the amount of reverse video, and the balance of information on one or many screens, takes some special expertise. the placement of the context-sensitive help in relation to the fields on the screen, and the manipulation of that help text, all need the same kind of attention as people are now giving to hypertext applications. this kind of help is a version, a variant, or a close cousin of a hypertext application, with special constraints to ensure that the text is concurrent with the glossary text. i have not yet seen anyone advertise hypertext as a c.a.s.e. tool, but this may be my limited acquaintance with the myriad c.a.s.e. products on the market. code documentation. system documentation of the insightful kind, which i believe still to be in its infancy, is very much needed for these projects. project documentation. project documentation is also a requirement here, as the system development methodologies have been shrunk to a few diagrams---who is responsible, and who has accomplished what so far? this is an on-going and thankless task, and perhaps rather mechanical, but tactical, as well as strategic, action is necessary. training materials. training in methodology needs to be done, if not in a fat book, then in a course. more training materials. user training is still necessary. even for systems that are so carefully designed that the user gets the point immediately, there is always some advanced knowledge, additional policy, or procedure indoctrination necessary to make things come together. tutorials, procedures and policies. tutorials and other user documentation need to be developed that may not easily come from wedges of text used to describe specific data items or program routines.
i should point out here that i have mentioned that the better sort of glossary contains documentation about procedural elements of a system as well as programs, but usually there is just a pointer to the policy and procedure manual, not a complete procedure or policy. there is a bit of text about the book containing the procedures and where to get a copy, if you are lucky. policy and philosophy. c.a.s.e. is based on ideas, about information resource management, about reusable code, about strategic planning. all of these ideas may require not only the information technology strategic plan document, but corporate policies to ensure that the plan is followed, and to integrate the plan into the corporate goals. policy writing is not normally considered a documentation function, but it most certainly must be done with the cooperation of the documentation leader of the enterprise model. wording must match, and so on. there is certainly scope for research into how this can best be achieved. i perhaps have omitted something, and you will see it, and comment on it. documentation, it seems to me, is made more valuable because of c.a.s.e. tools, and requires more skill than is now given to the training of software engineers and analysts. most computer science programmes do not give any instruction in writing skills for documentation, and throw the wrong kind of professional into the breach of c.a.s.e. while i have every sympathy for the marketers of c.a.s.e. tools who want to emphasize the good design, the coherence of system and corporate goals, and the benefits of simplified maintenance, there is a myth here that all of this can be done without very many people, and that those few are system engineers and users. the truth is that a great deal of the value of c.a.s.e. lies in particularly well constructed documentation, and good programming skills in a high-level language. there is no simple black box that eliminates the need for talent, for clear thinking, for programming and for maintenance. the only thing that may have been eliminated is the coding of simple tasks in cobol. none of the documentation is eliminated; rather, adroitly constructed texts and supplementary materials are more important than ever before. c.a.s.e. tools require the skills of experts to use, and they challenge us to do much further research to bring the promise of c.a.s.e. to fruition. d. patterson a programming language framework for designing user interfaces programming language researchers increasingly recognize that a high proportion of application development costs involve the interface with users of the application, including various dialogues, input formats, error checking, help and explanation messages, and the like. they also increasingly recognize that maintenance costs tend to overshadow development costs. these two factors even multiply their adverse effects: as user needs evolve, it is the interface with a system which generally requires the most maintenance. the user relationship is even said to account for about 60 percent of the maintenance problem [lientz and swanson 81]. surprisingly, few programming language constructs are designed to address the area of user interface design. on the contrary, traditional programming language constructs are strongly oriented towards improving programmers' effectiveness in developing the algorithmic and data manipulation aspects of an application. a programmer is basically left to reinvent, each time, the required procedures to deal with user commands and inputs.
michel pilote software architecture (panel): the next step for object technology bruce anderson mary shaw larry best kent beck structural testing of programs a. a. omar f. a. mohammed rt prolog: a real time prolog written in ada prolog is a very powerful language and its declarative nature lends itself quite well to the development of expert systems and natural language interfaces. there are many laboratory situations in which an expert system could help technicians or operators accomplish their task. however, in these same situations there is usually need for a real time capability. a possible scenario would be to have an expert system executing in prolog which must respond to some external interrupt, either from a sensor or an operator. the expert system should be able to suspend its current processing and service the interrupt. the interrupt might need to be serviced in a procedural manner or it might entail the answering of a different question by the expert system. after the interrupt request has been serviced, the expert system should return to its previous task. the above capability is difficult to obtain in prolog as it is not designed to be a real time language. it is also difficult for an expert system to stop the search for one answer, answer a second question, and then resume the first quest. we are performing investigative research in the development of a real time prolog, called rt prolog, which would have the above capabilities. an expert system written in rt prolog would have the capability of responding to external interrupts and then resuming its original task. the interrupt can be serviced either in a procedural manner or by another copy of the expert system. to accomplish this, the rt prolog interpreter will be developed as a task in ada. an interrupt then can activate another task as a service routine. this task can be a normal procedural service routine or even another copy of the prolog interpreter. this scheme should provide the needed flexibility to handle both the expert system and real time aspects of the problem. rt prolog is being developed on an ibm at using the janus/ada compiler from rr software. the first step in the development was to build a small prolog interpreter in ada. this interpreter will then be made into a task type, so that other copies can be spawned, if needed. the last step will be to test the interrupt capability of the system by using a data acquisition system. g. scott owen a parallel `make' utility based on linda's tuple-space we describe a prototype of a parallel `make' utility that executes on multiple workstations and achieves a significant real-time speedup. this utility is implemented by means of logically shared memory based on the linda system's tuple space. it makes work with distributed computing easy to conduct, since it can be built on top of an existing operating systems without modifying it, and since it permits easy experimentation with strategies for distributing the work. c. j. fleckenstein d. hemmendinger towards a framework for investigating temporal properties in interaction helen parker chris roast jawed siddiqi is ada too big? a designer answers the critics many have criticized the department of defense's new computer language, ada, saying it is too large, too complicated, or too difficult to use. are they right? and are there some simplifications that could be made to ada without destroying its usefulness? brian a. 
wichmann why state-of-the-art is not state-of-the-practice (panel) richard denney dick kemmerer nancy leveson alberto savoia ansi forth report george shaw efficient object sampling via weak references the performance of automatic memory management may be improved if the policies used in allocating and collecting objects had knowledge of the lifetimes of objects. to date, approaches to the pretenuring of objects in older generations have relied on profile-driven feedback gathered from trace runs. this feedback has been used to specialize allocation sites in a program. these approaches suffer from a number of limitations. we propose an alternative that, through efficient sampling of objects, allows for on-line adaptation of allocation sites to improve the efficiency of the memory system. in doing so, we make use of a facility already present in many collectors such as those found in java virtual machines: weak references. by judiciously tracking a subset of allocated objects with weak references, we are able to gather the necessary statistics to make better object-placement decisions. ole agesen alex garthwaite asap - a simple assertion pre-processor igor d.d. curcio how use case modeling policies have affected the success of various projects (or how to improve use case modeling) periannan chandrasekaran structured programming in macintosh assembly language (abstract) ever since dijkstra first pointed out the harmfulness of the goto twenty years ago, programmers in high-level languages have become more and more accustomed to writing structured programs. no such development has taken place in assembly language due to the lack of such constructs as the if-then, if-then-else, while-do, and repeat-until statements of pascal. it is possible, however, to achieve the same effect by the exercise of programming discipline if all branching in the program is made to follow the rules of structured programming. for example, an if-then-else statement such as if x = 0 then y := 1 else y := 2 endif may be implemented in macintosh assembly language as
if1    tst.w  x(a5)     ;if x <> 0 then goto else1
       bne    else1
       move.w #1,y(a5)  ;y := 1
       bra    endif1    ;goto endif1
else1  move.w #2,y(a5)  ;y := 2
endif1
while such programs may appear somewhat strange at first, consistent application of the discipline makes it very easy to convert any algorithm expressed as a structured high-level language program to macintosh assembly language completely mechanically and, therefore, correctly. a textbook on macintosh assembly language incorporating these principles is under development by the author. clint foulk an apl compiler for the unix timesharing system an apl compiler running under the unix timesharing system on a pdp 11/70 is described. timothy a. budd more flexible formatting with sgmltools a brief overview of the latest sgmltools is presented by one of its developers cees de groot interweave: object caching meets software distributed shared memory michael l. scott sandhya dwarkadas srinivasan parthasarthy rajeev balasubramonian deqing chen grigorios magklis athanasios papathanasiou eduardo pinheiro umit rencuzogullari chunquiang tang sh-boom: the sound of the risc market changing george william shaw a statement-oriented approach to data abstraction j. steensgaard-madsen cyberdesk: a framework for providing self-integrating ubiquitous software services anind k. dey gregory abowd mike pinkerton andrew wood solving ordinary differential equations using taylor series george corliss y. f.
chang fault tolerance in distributed ada 95 in this paper we present a project to provide fault tolerance in distributed ada 95 applications by means of replication of partitions. replication is intended to be largely transparent to an application. a group communication system is used for replica management. we examine some of the possibilities for implementing such a system and highlight some of the difficulties encountered in the context of the programming language ada 95. thomas wolf the frequency of dynamic pointer references in "c" programs barton p. miller new methods for dynamic storage allocation (fast fits) the classical methods for implementing dynamic storage allocation can be summarized thus: first fit and best fit: the available blocks of storage are linked together in address order. storage is allocated from the first available block of sufficient length, or from a block with the minimum excess length. storage can be allocated or released in multiples of two words. in the long run, if numerous pieces of storage of more-or-less random lengths are allocated and released at more-or-less random intervals, the storage becomes fragmented, and a number of uselessly small blocks develop, particularly near the beginning of the list. although these fragments usually comprise a small proportion of the storage (typically around 10 per cent), a lot of time can be wasted chaining through them. buddy methods: here the task of managing the storage is reduced in size by constraining the way in which the storage can be divided up, e.g. into blocks with lengths which are powers of 2. this eliminates chaining through long lists of uselessly small blocks; on the other hand, space is wasted in rounding up the length requested to an allowable size, and typically about 40 per cent more storage is required to satisfy the same allocations than when using first fit or best fit. the methods presented in this paper are externally compatible with first fit and best fit, and require roughly the same amount of storage for a given sequence of allocations. they use, however, a completely different internal data structure, one effect of which is to reduce the number of blocks that have to be visited to perform a typical allocation or release operation. these new methods exhibit roughly the same space performance as first fit, and a time performance which falls between those of first fit and buddy. c. j. stephenson eat your own dog food brent w. benson binding performance at language design time an important research goal in software engineering and programming languages is the development of principles underlying the specification of computable problems, the translation of these problems into efficient and correct programs, and the performance analysis of these programs. this paper uncovers some of these principles, which are used to design a problem specification language l1 restricted to problems that can be compiled automatically into programs whose worst case asymptotic time and space complexities are linear in the input/output space. any problem expressible in l1 can be compiled into a linear cost program. a compiler for l1 has been implemented in the rapts transformational programming system. j. cai r. paige managing stored voice in the etherphone system the etherphone system was developed at xerox parc to explore methods of integrating voice into existing distributed personal computing environments.
an important component of the etherphone system, the voice manager, provides operations for recording, playing, editing, and otherwise manipulating digitized voice based on an abstraction that we call voice ropes. it was designed to allow: unrestricted use of voice in client applications, sharing among various clients, editing of voice by programs, integration of diverse workstations into the system, security at least as good as that of conventional file servers, and automatic reclamation of the storage occupied by unneeded voice. as with text, we want the ability to incorporate voice easily into electronic mail messages, voice-annotated documents, user interfaces, and other interactive applications. because the characteristics of voice differ greatly from those of text, special mechanisms are required for managing and sharing stored voice. the voice manager reduces the work generally associated with building voice applications by providing a convenient set of application- independent abstractions for stored voice. clients view voice ropes as immutable sequences of voice samples referenced by unique identifiers. in actuality, a voice rope consists of a list of intervals within voice files that are stored on a special voice file server. a database stores the many-to-many relationships that exist between voice ropes and files. maintaining voice on a publicly accessible server facilitates sharing among various clients. these facilities for managing stored voice in the etherphone system were designed with the intent of moving voice data as little as possible. once recorded in the voice file server, voice is never copied until a workstation sends a play request; at this point the voice is transmitted directly to an etherphone, a microprocessor-based telephone instrument. in particular, although workstations initiate most of the operations in the etherphone system, there is little reason for them to receive the actual voice data since they have no way of playing it. adding such voice facilities to a diverse and complex software base presents challenging problems to the systems builder since much of the existing workstation and server software cannot be changed or extended. manipulating stored voice solely by textual references, besides allowing efficient sharing and resource management, has made it easy to integrate voice into documents. the only requirements placed on a workstation in order to make use of the voice services are that it have an associated etherphone and an rpc implementation. the etherphone system uses secure rpc for all control functions and des encryption for transmitted voice. these ensure the privacy of voice communication, which is important even in a research environment, although the network is inherently vulnerable to interception of information. storing the voice in its encrypted form protects the voice on the server and also means that the voice need not be reencrypted when played. all in all, the voice manager provides better security than most conventional file servers. the performance of operations for editing and managing recorded voice must be compatible with human response times: sub- second response at a peak rate of several operations per second is more than adequate. performance measurements confirm that the voice manager easily meets these requirements. 
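as a rough sketch of the voice-rope abstraction described above (the names here are invented; this is not the etherphone code), a rope can be modelled as an immutable tuple of intervals into stored voice files, so that an edit such as concatenation registers a new rope in a small database without copying or decrypting any samples:

    # hypothetical model of a "voice rope": an immutable sequence of (file, start, length)
    # intervals referring to samples stored on the voice file server
    from dataclasses import dataclass
    from typing import Tuple
    import uuid

    @dataclass(frozen=True)
    class VoiceRope:
        rope_id: str
        intervals: Tuple[Tuple[str, int, int], ...]   # (voice_file_id, first_sample, sample_count)

    ropes = {}                                        # the "simple database": rope id -> rope

    def register(intervals):
        rope = VoiceRope(str(uuid.uuid4()), tuple(intervals))
        ropes[rope.rope_id] = rope
        return rope.rope_id

    def record(voice_file_id, sample_count):          # a fresh recording covers one whole file
        return register([(voice_file_id, 0, sample_count)])

    def concatenate(first_id, second_id):             # editing by reference: no samples move
        return register(ropes[first_id].intervals + ropes[second_id].intervals)

    greeting = record("vf-greeting", 8000)
    body = record("vf-body", 24000)
    message = concatenate(greeting, body)
    print(ropes[message].intervals)

reclaiming obsolete voice then reduces to noticing which file intervals are no longer referenced by any rope in the database, which is what the modified style of reference counting mentioned in the conclusion below provides.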
in conclusion, the major technical contributions presented in this paper involve the use of simple databases to: describe the results of editing operations such that existing voice passages need not be moved, copied, or decrypted, and provide a modified style of reference counting that allows the automatic reclamation of obsolete voice. approximately 50 etherphones are in daily use in the computer science laboratory at xerox parc. we have had a voice mail system running since 1984 and a prototype voice editor available for demonstrations and experimental use since the spring of 1986. d. terry d. swinehart argo/uml richard taylor david redmiles validation, verification, and testing of computer software w. richards adrion martha a. branstad john c. cherniavsky a multiple-platform multi-language distributed object-oriented messaging system5 yen-ping shan john debinder rick denatale cindy krauss pat mueller software architecture based on communicating residential environments this paper describes an alternative approach to software architecture, where the classical division of responsibilities between operating systems, programming languages and compilers, and so forth is revised. our alternative is organized as a set of self-contained environments which are able to communicate pieces of software between them, and whose internal structure is predominantly descriptive and declarative. the base structure within each environment (its diversified shell) is designed so that it can accomodate such arriving software modules. the presentation of that software architecture is done in the context of an operational implementation, the screen system (system of communicating residential environments). erik sandewall claes strömberg henrik sörensen manageable object-oriented development: abstraction, decomposition, and modeling john a. anderson john d. sheffler elaine s. ward alexi - a case study in design issues for lisp capabilities in ada d. douglas smith error repair in shift-reduce parsers local error repair of strings during cfg parsing requires the insertion and deletion of symbols in the region of a syntax error to produce a string that is error free. rather than precalculating tables at parser generation time to assist in finding such repairs, this article shows how such repairs can be found during shift-reduce parsing by using the parsing tables themselves. this results in a substantial space saving over methods that require precalculated tables. furthermore, the article shows how the method can be integrated with lookahead to avoid finding repairs that immediately result in further syntax errors. the article presents the results of experiments on a version of the lalr(1)-based parser generator bison to which the algorithm was added. bruce j. mckenzie corey yeatman lorraine de vere techniques for improving language-based editors syned is a working language-based editor which includes a complete parser for editing the c language. the design ideas which permit parsing in syned also result in the solution of several important language-based editor problems. we describe these ideas in sufficient detail to make them accessible to others. j. r. horgan d. j. moore c++ for scientific applications of iterative nature tom houlder fastcgi persistent applications for your web server: fastcgi allows apache to run and manage persistent cgi-like scripts, overcoming cgi's worst shortcomings paul heinlein application of formal methods to system and software specification william g. 
wood a multiparadigm approach to compiler construction timothy p. justice rajeev k. pandey timothy a. budd an annotation language for optimizing software libraries this paper introduces an annotation language and a compiler that together can customize a library implementation for specific application needs. our approach is distinguished by its ability to exploit high level, domain- specific information in the customization process. in particular, the annotations provide semantic information that enables our compiler to analyze and optimize library operations as if they were primitives of a domain- specific language. thus, our approach yields many of the performance benefits of domain-specific languages, without the effort of developing a new compiler for each domain. this paper presents the annotation language, describes its role in optimization, and illustrates the benefits of the overall approach. using a partially implemented compiler, we show how our system can significantly improve the performance of two applications written using the plapack parallel linear algebra library. samuel z. guyer calvin lin on application of self-similar pictures in education zoran putnik sigdoc 98: program, travel, and registration news phyllis galt programmable design environments: integrating end-user programming with domain-oriented assistance michael eisenberg gerhard fischer verification of a distributed algorithm (abstract) kaisa sere marina walden ada as a preprocessor language p. l. baker cellular disco: resource management using virtual clusters on shared-memory multiprocessors despite the fact that large-scale shared-memory multiprocessors have been commercially available for several years, system software that fully utilizes all their features is still not available, mostly due to the complexity and cost of making the required changes to the operating system. a recently proposed approach, called disco, substantially reduces this development cost by using a virtual machine monitor that leverages the existing operating system technology.in this paper we present a system called cellular disco that extends the disco work to provide all the advantages of the hardware partitioning and scalable operating system approaches. we argue that cellular disco can achieve these benefits at only a small fraction of the development cost of modifying the operating system. cellular disco effectively turns a large-scale shared-memory multiprocessor into a virtual cluster that supports fault containment and heterogeneity, while avoiding operating system scalability bottle-necks. yet at the same time, cellular disco preserves the benefits of a shared-memory multiprocessor by implementing dynamic, fine- grained resource sharing, and by allowing users to overcommit resources such as processors and memory. this hybrid approach requires a scalable resource manager that makes local decisions with limited information while still providing good global performance and fault containment.in this paper we describe our experience with a cellular disco prototype on a 32-processor sgi origin 2000 system. we show that the execution time penalty for this approach is low, typically within 10% of the best available commercial operating system for most workloads, and that it can manage the cpu and memory resources of the machine significantly better than the hardware partitioning approach. 
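purely as a toy illustration of the virtual-cluster idea in the entry above (this is not cellular disco; all names are invented), one can picture the monitor grouping physical cpus into cells and multiplexing each cell's virtual machines only onto that cell's cpus, so that resources are shared, and may be overcommitted, inside a cell while a failure stays contained within it:

    # hypothetical sketch: a "cell" owns some physical cpus and schedules the virtual cpus
    # of its own virtual machines onto them; other cells are untouched by its decisions
    class Cell:
        def __init__(self, name, physical_cpus):
            self.name = name
            self.physical_cpus = physical_cpus        # e.g. ["cpu0", "cpu1"]
            self.vms = {}                             # vm name -> number of virtual cpus

        def add_vm(self, vm_name, vcpus):
            self.vms[vm_name] = vcpus                 # overcommit allowed: total vcpus may exceed cpus

        def placement(self):
            # round-robin the cell's vcpus over its own physical cpus only
            assignment = []
            index = 0
            for vm_name, vcpus in self.vms.items():
                for v in range(vcpus):
                    cpu = self.physical_cpus[index % len(self.physical_cpus)]
                    assignment.append((f"{vm_name}.vcpu{v}", cpu))
                    index += 1
            return assignment

    cell_a = Cell("a", ["cpu0", "cpu1"])
    cell_a.add_vm("vm1", 2)
    cell_a.add_vm("vm2", 3)                           # 5 vcpus on 2 cpus: overcommitted
    print(cell_a.placement())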
kinshuk govil dan teodosiu yongqiang huang mendel rosenblum a critique of the ada 9x mutual control mechanism (requeue) and an alternative mapping (onlywhen) ufuk verun tzilla elrad ada vs. modula-2: a response from the ivory tower m b feldman extension of record types niklaus wirth limitations on the portability of real time ada programs this paper describes areas of the ada language where transportability is restricted by the fact that implementation choices are allowed. transportability guidelines have been developed previously for ada [2,3,10], but have taken the approach that only features of the language which are guaranteed to be supported by all implementations should be used. the "common intersection" approach rules out use of many features that were included in ada because they are required for real-time applications. this paper addresses the transportability of real-time ada programs from a realistic point of view. specifically, time-critical applications are of little use being transportable if they execute too slowly on all available implementations; and reasonable people will not wish to transport an application to architectures and/or compilation systems that do not support the requirements of the application. a practical approach to managing these difficulties is presented as a set of transportability guidelines. these guidelines are intended to improve the transportability of programs which must rely on language features that are implementation dependent. t. griest m. bender surveyor's forum: a recurring bug helmut richter a survey of adaptable grammars h. christiansen estimating software fault content before coding stephen g. eick clive r. loader m. david long lawrence g. votta scott vander wiel semantical interprocedural parallelization: an overview of the pips project françois irigoin pierre jouvelot remi triolet a facility for the downward extension of a high-level language this paper presents a method whereby a high-level language can be extended to provide access to all the capabilities of the underlying hardware and operating system of a machine. in essence, it is a facility that allows a user to make special purpose extensions to a language without requiring the compiler to be modified for each extension. extensions are specified in an assembler-like language that is used at compile time to produce executable code to be combined with compiler-generated code. this facility has been implemented in a systems-programming language and was designed to provide access to facilities not directly available in the language. the way in which the facility was implemented calls for a minimum of user-visible language changes and is well suited for generating code sequences for any language. the facility provides the extension writer access to information available in the high-level language during compilation, permits the selective generation of user-defined code sequences depending on the context in which they are being used, provides for the integration of this code with compiler-generated code, and provides for the generation of user-understandable error messages when an extension is used incorrectly. thomas n. turba an experimental study of people creating spreadsheets nine experienced users of electronic spreadsheets each created three spreadsheets. although participants were quite confident that their spreadsheets were accurate, 44 percent of the spreadsheets contained user- generated programming errors. 
with regard to the spreadsheet creation process, we found that experienced spreadsheet users spend a large percentage of their time using the cursor keys, primarily for the purpose of moving the cursor around the spreadsheet. users did not spend a lot of time planning before launching into spreadsheet creation, nor did they spend much time in a separate, systematic debugging stage. participants spent 21 percent of their time pausing, presumably reading and/or thinking, prior to the initial keystrokes of spreadsheet creation episodes. polly s. brown john d. gould processor frequency setting for energy minimization of streaming multimedia application in this paper, we describe a software-controlled approach for adaptively minimizing energy in embedded systems for realtime multimedia processing. energy is optimized by clock speed setting: the software controller dynamically adjusts processor clock speed to the frame rate requirements of the incoming multimedia stream. the speed-setting policy is based on a system model that correlates clock speed with best-case, average-case and worst-case sustainable frame rate, accounting for data-dependency in multimedia streams. experiments on an mp3 decoding application show that computational energy can be drastically reduced with respect to fixed-frequency operation. andrea acquaviva luca benini bruno riccò oaklisp: an object-oriented scheme with first class types the scheme papers demonstrated that lisp could be made simpler and more expressive by elevating functions to the level of first class objects. oaklisp shows that a message based language can derive similar benefits from having first class types. kevin j. lang barak a. pearlmutter a tour through cedar warren teitelman the promise of multiparadigm languages as pedagogical tools this paper presents a discussion of why languages that support multiple paradigms (i.e. multiparadigm languages) have the potential to be good pedagogical tools for teaching programming skills. several examples are given that demonstrate how different programming paradigms are expressed in a working multiparadigm language. the examples, though brief, provide a glimpse of how much expressiveness a simple multiparadigm design can embody and they suggest that the potential role of multiparadigm languages as teaching tools is promising. john placer assessing ada 9x oop: building a reusable components library bernard banner edmond schonberg burs automata generation a simple and efficient algorithm for generating bottom-up rewrite system (burs) tables is described. a small code-generator generator implementation produces burs tables efficiently, even for complex instruction set descriptions. the algorithm does not require novel data structures or complicated algorithmic techniques. previously published methods for on-the-fly elimination of states are generalized and simplified to create a new method, triangle trimming, that is employed in the algorithm. a prototype implementation, burg, generates burs tables very efficiently. todd a. proebsting io-lite: a unified i/o buffering and caching system this article presents the design, implementation, and evaluation of io-lite, a unified i/o buffering and caching system for general-purpose operating systems. io-lite unifies all buffering and caching in the system, to the extent permitted by the hardware.
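the frequency-setting entry above ("processor frequency setting for energy minimization of streaming multimedia application") picks the lowest clock speed that still sustains the incoming frame rate. a minimal sketch of that selection step, assuming a hypothetical table of available frequencies and a per-frame cycle count measured offline; the paper's controller also tracks best-, average- and worst-case sustainable rates, which is omitted here.

    # minimal sketch: choose the lowest clock frequency whose decode time
    # still fits in one frame period (frequencies and cycle count are assumed).
    AVAILABLE_FREQS_HZ = [100e6, 200e6, 300e6, 400e6]   # assumed platform levels

    def pick_frequency(cycles_per_frame: float, frame_rate_hz: float) -> float:
        """return the smallest frequency whose per-frame work meets the deadline."""
        period_s = 1.0 / frame_rate_hz
        for f in sorted(AVAILABLE_FREQS_HZ):
            if cycles_per_frame / f <= period_s:
                return f
        return max(AVAILABLE_FREQS_HZ)      # saturate: run flat out if nothing fits

    # example: 2.5e6 cycles per frame at 38.28 frames/s -> 100 mhz suffices
    print(pick_frequency(2.5e6, 38.28))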
in particular, it allows applications, the interprocess communication system, the file system, the file cache, and the network subsystem to safely and concurrently share a single physical copy of the data. protection and security are maintained through a combination of access control and read-only sharing. io-lite eliminates all copying and multiple buffering of i/o data, and enables various cross- subsystem optimizations. experiments with a web server show performance improvements between 40 and 80% on real workloads as a result of io- lite. vivek s. pai peter druschel willy zwaenepoel best of technical support corporate linux journal staff techniques for avoiding conditional execute in apl2 s. m. mansour intensional equality ;=) for continuations andrew w. appel a distributed file service based on optimistic concurrency control sape j. mullender andrew s. tanenbaum from the publisher: staff changes and an activism request phil hughes lightweight lexical source model extraction software engineers maintaining an existing software system often depend on the mechanized extraction of information from system artifacts. some useful kinds of information---source models---are well known: call graphs, file dependences, etc. predicting every kind of source model that a software engineer may need is impossible. we have developed a lightweight approach for generating flexible and tolerant source model extractors from lexical specifications. the approach is lightweight in that the specifications are relatively small and easy to write. it is flexible in that there are few constraints on the kinds of artifacts from which source models are extracted (e.g., we can extract from source code, structured data files, documentation, etc.). it is tolerant in that there are few constraints on the condition of the artifacts. for example, we can extract from source that cannot necessarily be compiled. our approach extended the kinds of source models that can be easily produced from lexical information while avoiding the constraints and brittleness of most parser- based approaches. we have developed tools to support this approach and applied the tools to the extraction of a number of different source models (file dependences, event interactions, call graphs) from a variety of system artifacts (c, c++, clos, eiffel. tcl, structured data). we discuss our approach and describe its application to extract source models not available using existing systems; for example, we compute the implicitly-invokes relation over field tools. we compare and contrast our approach to the conventional lexical and syntactic approaches of generating source models. gail c. murphy david notkin a knowledge structure for reusing abstract data types in ada software production david w. embley scott n. woodfield profile-guided optimization across process boundaries we describe a profile-driven compiler optimization technique for _inter- process optimization_, which dynamically inlines the effects of sending messages. profiling is used to find optimization opportunities, and to dynamically trigger recompilation and optimization at run-time. we apply the optimization technique on the concurrent programming language erlang, letting recompilation take place in a separate erlang process, and taking advantage of the facilities provided by erlang to dynamically replace code at run-time. we have implemented a prototype inter-process profiler and optimizer, that can handle small programs. 
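the "lightweight lexical source model extraction" entry above derives source models such as call graphs from lexical patterns instead of a full parse. a minimal sketch of that idea over c-like text; the pattern below is deliberately approximate (it tolerates code that does not compile and will also admit some false matches), which is the flexibility/tolerance trade-off the entry describes. the keyword list and the "file maps to callee set" shape of the output are assumptions for illustration.

    import re

    # crude lexical notion of a "call": an identifier followed by '(',
    # with a few control keywords filtered out.
    CALL_RE = re.compile(r'\b([a-zA-Z_]\w*)\s*\(')
    KEYWORDS = {"if", "while", "for", "switch", "return", "sizeof"}

    def call_graph(sources: dict) -> dict:
        """map each file name to the set of callee names found lexically."""
        graph = {}
        for name, text in sources.items():
            callees = {m.group(1) for m in CALL_RE.finditer(text)} - KEYWORDS
            graph[name] = callees
        return graph

    example = {"main.c": "int main(void){ if (init()) { run(argc); } return 0; }"}
    print(call_graph(example))   # {'main.c': {'main', 'init', 'run'}} -- note the false match on the definition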
measurements on synthetic benchmarks show encouraging speedups of up to 1.8 times. erik johansson sven-olof nyström optimizing compilation of clp( r ) constraint logic programming (clp) languages extend logic programming by allowing the use of constraints from different domains such as real numbers or boolean functions. they have proved to be ideal for expressing problems that require interactive mathematical modeling and complex combinatorial optimization problems. however, clp languages have mainly been considered as research systems, useful for rapid prototyping, by not really competitive with more conventional programming languages where efficiency is a more important consideration. one promising approach to improving the performance of clp systems is the use of powerful program optimizations to reduce the cost of constraint solving. we extend work in this area by describing a new optimizing compiler for the clp language clp( r ). the compiler implements six powerful optimizations: reordering of constraints, removal of redundant variables, and specialization of constraints which cannot fail. each program optimization is designed to remove the overhead of constraint solving when possible and keep the number of constraints in the store as small as possible. we systematically evaluate the effectiveness of each optimization in isolation and in combination. our empirical evaluation of the compiler verifies that optimizing compilation can be made efficient enough to allow compilation of real-world programs and that it is worth performing such compilation because it gives significant time and space performance improvements. andrew d. kelly kim marriott andrew macdonald peter j. stuckey roland yap towards a discipline of class composition assembling classes to form systems is a key concept in object-oriented software development. however, in order to be composed, classes have to be compatible in a certain way. in current object-oriented languages the compatibility with a specific behaviour is expressed by inheritance from a class which describes this behaviour. unfortunately inheritance is neither intended to express compatibility nor does it enforce compatibility. this deficiency may lead to inconsistent systems. therefore, in the project "10-- 14" inheritance is restricted to express compatibility between classes. two kinds of compatibilities are distinguished: compatibility required for the integration of subsystems (conformance) and compatibility required for reuse (imitation). a formal basis for the discipline is developed in order to get mathematically precise notions and to enable the verification of object- oriented systems. a design strategy is built on top of these compatibility notions. in actual case studies the discipline has already shown to support a clean design and to promote the reusability of class hierarchies. franz weber the als ada compiler global optimizer d. a. taffs m. w. taffs j. c. rienzo t. r. hampson system administration: ip masquerading code follow-up chris kostick a greedy concurrent approach to incremental code generation the psep system represents a novel approach to incremental compilation for block structured languages. psep implements a very fine grain, "greedy" approach as a highly concurrent system of two processes: an editor and a code generator. the design allows the two processes to execute without locking their shared data objects, utilizing semantic information about the concurrent system to guarantee the consistency of the shared objects. 
this design is compared with the more common "demand" approach to incremental compilation. ray ford duangkaew sawamiphakdi best of technical support corporate linux journal staff ada usage/performance specification m. w. borger j. b. goodenough product review: xvscan michael montoure an object-oriented approach to domain analysis s. shlaer s. j. mellor questions from the osw booth kim johnson measuring cognitive activities in software engineering pierre n. robillard patrick d'astous francoise detienne willemien visser using standard fortran - past, present, and future alan clarke preliminary ideas of a conceptual programming language cheng-xiang zhai software support for speculative loads anne rogers kai li actor reflection without meta-objects tanaka tomoyuki estimating the number of test cases required to satisfy the all-du-paths testing criterion the all-du-paths software testing criterion is the most discriminating of the data flow testing criteria of rapps and weyuker. unfortunately, in the worst case, the criterion requires an exponential number of test cases. to investigate the practicality of the criterion, we develop tools to count the number of complete program paths necessary to satisfy the criterion. this count is an estimate of the number of test cases required. in a case study of an industrial software system, we find that in eighty percent of the subroutines the all-du-paths criterion is satisfied by testing ten or fewer complete paths. only one subroutine out of 143 requires an exponential number of test cases. j. bieman j. schultz using scheduler information to achieve optimal barrier synchronization performance parallel programs frequently use barriers to synchronize successive steps in an algorithm. in the presence of multiprogramming the choice of spinning versus blocking barriers can have a significant impact on performance. we demonstrate how competitive spinning techniques previously designed for locks can be extended to barriers, and we evaluate their performance. we design an additional competitive spinning technique that adapts more quickly in a dynamic environment. we then propose and evaluate a new method that obtains better peformance than previous techniques by using scheduler information to decide between spinning and blocking. the scheduler information technique makes optimal choices incurring little overhead. leonidas kontothanassis robert w. wisniewski the embeddable common lisp giuseppe attardi benchmarking ada: a rationale matthew j. lodge abstract class hierarchies, factories, and stable designs friedrich steimann shifting gears: changing algorithms on the fly to expedite byzantine agreement amotz bar-noy danny dolev cynthia dwork h. raymond strong automatic verification of requirements implementation requirements of event-based systems can be automatically analyzed to determine if certain safety properties hold. however, we lack comparable methods to verify that implementations maintain the properties guaranteed by the requirements. we have built a tool that compares implementations written in c with their requirements. requirements describe events which cause state transitions. implementations are annotated to describe changes in the values of their requirement's variables, and dataflow analysis techniques are used to determine the set of events which cause particular state changes. 
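the barrier-synchronization entry above ("using scheduler information to achieve optimal barrier synchronization performance") weighs spinning against blocking at a barrier under multiprogramming. the sketch below shows only the simpler competitive spin-then-block policy that the entry builds on (spin for roughly a context switch's worth of time, then block); the scheduler-information technique itself is not reproduced, and the spin budget is an assumed figure.

    import threading, time

    SPIN_BUDGET_S = 0.0005      # assumed: roughly the cost of a context switch

    class SpinThenBlockBarrier:
        """competitive spinning barrier: spin briefly, then fall back to blocking."""
        def __init__(self, parties: int):
            self.parties = parties
            self.count = 0
            self.generation = 0
            self.lock = threading.Lock()
            self.event = threading.Event()

        def wait(self):
            with self.lock:
                gen = self.generation
                self.count += 1
                if self.count == self.parties:       # last arrival releases everyone
                    self.count = 0
                    self.generation += 1
                    self.event.set()
                    self.event = threading.Event()   # fresh event for the next phase
                    return
                event = self.event
            deadline = time.monotonic() + SPIN_BUDGET_S
            while time.monotonic() < deadline:       # spin phase
                if self.generation != gen:
                    return
            event.wait()                             # block phase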
to show that an implementation is consistent with its requirements, we show that each event causing a change of state in the implementation appears in the requirements, and that all the events specified to cause state changes in the requirements appear in the implementation. the annotation language encourages programmers to describe local program behaviors. these behaviors are collected into system-level behaviors, which are compared to those in the requirements. since our analysis is not based on program code, annotations can describe behaviors at any level of granularity. we illustrate the use of our tool with several different annotations of a temperature-control system. marsha chechik john gannon programming languages considered as abstract data types this paper is an investigation of programming languages from the viewpoint of abstract data types. the elements of this abstract data type are programs or program segments and the operators are the grammatical rules of the grammar. given this framework, the paper explores the algebraic axioms needed to define this programming language abstract data type (pladt). j. craig cleaveland a survey of structured and object-oriented software specification methods and techniques this article surveys techniques used in structured and object-oriented software specification methods. the techniques are classified as techniques for the specification of external interaction and internal decomposition. the external specification techniques are further subdivided into techniques for the specification of functions, behavior, and communication. after surveying the techniques, we summarize the way they are used in structured and object- oriented methods and indicate ways in which they can be combined. this article ends with a plea for simplicity in diagram techniques and for the use of formal semantics to define these techniques. the appendices show how the reviewed techniques are used in 6 structured and 19 object-oriented specification methods. roel wieringa application of software reliability modelling to product quality and test process w. k. ehrlich j. p. stampfel j. r. wu an algebraic data type specification language and its rapid prototyping environment luc jadoul luc duponcheel willy van puymbroeck less complex elementary functions the textbook definitions of complex elementary functions---e.g., sin(z)---are usually given in terms of formulae which involve complex operations. in this paper, we give formulae for complex elementary functions in which the real and imaginary parts can be computed separately using only real operations. many of these formulae are obvious, but some---e.g., asin(z), acos(z), atan(z), asinh(z), acosh(z), atanh(z)---are not obvious and/or not well-known. henry g. baker helping the automated validation process of user interfaces systems bruno d'ausbourg christel seguin guy durrieu pierre roche a checkable interface language for pointer-based structures we present a technique for analyzing structural constraints on data aggregates in high-level languages. our technique includes a formal constraint language and a dataflow algorithm for automatically checking equality constraints. the constraint language is used to augment the type information on program interfaces. for example, one can specify that a procedure must return aggregates a and b where each element in aggregate a points to some element in aggregate b, and that parameter c will have the properties of a rooted tree both on input and output. 
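the "less complex elementary functions" entry above computes complex elementary functions so that real and imaginary parts are obtained separately with only real operations. a minimal sketch for sin(z), using the standard identity sin(x + iy) = sin(x)cosh(y) + i cos(x)sinh(y); the delicate cases the entry highlights (asin, acos, atanh and the like) need the care described there and are not attempted here.

    import math, cmath

    def csin(x: float, y: float) -> complex:
        """sin(x + iy) computed with real operations only."""
        return complex(math.sin(x) * math.cosh(y), math.cos(x) * math.sinh(y))

    # quick check against the library implementation
    z = 1.3 + 0.7j
    print(csin(z.real, z.imag), cmath.sin(z))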
our dataflow algorithm tracks the constraints which must apply at each statement in order for the procedure to satisfy its interface, and detects invalid programs which fail to satisfy the constraints on their interfaces. we apply our technique to several examples. our work is motivated by the requirements for expressive interface definition languages for distributed systems, and by the desire to mechanically check program modules against their interfaces. our analysis techniques also yield information which may enable compilers and stub generators to produce better implementations. james r. russell robert e. strom daniel m. yellin supporting distributed applications: experience with eden andrew p. black extending apl: what more can a programmer ask for? this paper explores certain underdeveloped parts of apl which ought to grow if apl is to qualify as implementation language for large and maintainable software systems. following specific problems are discussed: * integrating apl with other parts of information processing environment. • operating systems. • programs written in other languages. • data bases. * using independently developed functions and subsystems. • problems with names. • problems with space. * execution control of object attributes. * communications among functions. • passing parameters. • returning values. • transferring control. * information hiding modules. • packaging related functions. • packaging data with functions. • local functions. dragan bozinovic strategies for the lossless encoding of strings as ada identifiers henry g. baker program optimization and parallelization using idioms programs in languages such as fortran, pascal, and c were designed and written for a sequential machine model. during the last decade, several methods to vectorize such programs and recover other forms of parallelism that apply to more advanced machine architectures have been developed (particularly for fortran, due to its pointer-free semantics). we propose and demonstrate a more powerful translation technique for making such programs run efficiently on parallel machines which support facilities such as parallel prefix operations as well as parallel and vector capabilities. this technique, which is global in nature and involves a modification of the traditional definition of the program dependence graph (pdg), is based on the extraction of parallelizable program structures ("idioms") from the given (sequential) program. the benefits of our technique extend beyond the above-mentioned architectures and can be viewed as a general program optimization method, applicable in many other situations. we show a few examples in which our method indeed outperforms existing analysis techniques. shlomit s. pinter ron y. pinter not screens nor files but words wolf wejgaard stop the presses michael k. johnson programming atomic actions in ada a. burns a. j. wellings xinix time-sharing operating system jorge buenabadd chávez theoretical comparison of testing methods comparison of software testing methods is meaningful only if sound theory relates the properties compared to actual software quality. existing comparisons typically use anecdotal foundations with no necessary relationship to quality, comparing methods on the basis of technical terms the methods themselves define. in the most seriously flawed work, one method whose efficacy is unknown is used as a standard for judging other methods! 
random testing, as a method that can be related to quality (in both the conventional sense of statistical reliability, and the more stringent sense of software assurance), offers the opportunity for valid comparison. r. hamlet computer-time garbage collection by sharing analysis simon b. jones daniel le metayer position paper for ispw kouichi kishida phone-based cscw: tools and trials telephones are the most ubiquitous, best-networked, and simplest computer terminals available today. they have been used for voice mail but largely overlooked as a platform for asynchronous cooperative-work applications such as event calendars, issue discussions, and question-and-answer gathering. hypervoice is a software toolkit for constructing such applications. its building blocks are high-level presentation formats for collections of structured voice messages. the presentation formats can themselves be presented and manipulated, enabling significant customization of applications by phone. results of two field trials suggest social-context factors that will influence the success or failure of phone-based cooperative work applications in particular settings. paul resnick analysis and development of java grande benchmarks j. a. mathew p. d. coddington k. a. hawick an experimental testbed for embedded real time ada 95 the modifications made to ada during the 9x process have resulted in a language that is ideally suited to programming real-time systems. in this paper we investigate the difficulties in realising this potential. in particular, we consider the issues raised when porting the public gnat system on to a bare processor and producing a predictable and effective run-time system. as we were also concerned with issues of distribution via a can broadcast bus, support for the can protocol was included in the run-time system. in addition to investigating the performance of real-time ada 95 applications we were also interested in more general issues associated with embedded kernel support for ada 95. to facilitate these investigations and obtain the required level of performance, the thread package via which gnat implements various aspects of the ada language has been re-implemented. we discuss the architecture selected for this embedded kernel implementation and its relation to the architecture of the gnat compiler. w. m. walker p. t. woolley a. burns code reusability in the large versus code reusability in the small as a general rule, the goals of software engineering involve the development of techniques for improving software development productivity. it is no surprise, then, that a lot of attention has been focused on facilitating the reuse of program code. however, most of this attention has been directed with only shortsighted and self-supporting goals, and has thus condemned code reuse techniques to limited areas of success. this paper brings to light some of the issues involving code reusability, which include technical, social, economic, and psychological considerations. code reusability "in the large" is contrasted with code reusability "in the small," and methods for improving code reusability are examined. mitchell d. lubars developing domain knowledge through the reuse of project experiences software development is no longer a homogenous field. software is being developed for an increasingly diverse set of applications and user populations, each with different characteristics and development constraints.
as a consequence, researchers and practitioners have begun to realize the importance of identifying and understanding the characteristics and special development needs of application domains. this paper presents a method for developing and refining knowledge about application domains by creating a repository of project experiences. subsequent projects can then benefit from these experiences by locating similar projects and reusing the knowledge accumulated in the repository. we develop a framework for a system to capture relationships between development projects and resources for developing software, including process models, methods, technologies, and tools. we then show how this information can be reused to improve the productivity and quality of software development efforts. scott henninger architectural analysis of component-based systems victoria stavridou efficient software performance estimation methods for hardware/software codesign kei suzuki alberto sangiovanni-vincentelli insuring that ada compiler systems satisfy user needs jorg e. rodriguez static slicing in the presence of goto statements a static program slice is an extract of a program which can help our understanding of the behavior of the program; it has been proposed for use in debugging, optimization, parallelization, and integration of programs. this article considers two types of static slices: executable and nonexecutable. efficient and well-founded methods have been developed to construct executable slices for programs without goto statements; it would be tempting to assume these methods would apply as well in programs with arbitrary goto statements. we show why previous methods do not work in this more general setting, and describe our solutions that correctly and efficiently compute executable slices for programs even with arbitrary goto statements. our conclusion is that goto statements can be accommodated in generating executable static slices. jong-deok choi jeanne ferrante exploiting reusable specifications through analogy neil maiden alistair sutcliffe delta storage for arbitrary non-text files christoph reichenberger highly efficient and encapsulated re-use of synchronization code in concurrent object-oriented languages satoshi matsuoka kenjiro taura akinori yonezawa a general-arrays implementation of association lists association lists are a data structure in which arbitrary values can be bound to equally arbitrary keys, and from which the bound values can subsequently be retrieved by specifying the appropriate key. association lists are quite easy to implement in apls which support general arrays; the implementation here is as two-column matrices with keys in the first column and the corresponding values in the second. michael kent modeling and analysis of a virtual reality system with time petri nets rajesh mascarenhas dinkar karumuri ugo buy robert kenyon concurrency analysis in the presence of procedures using a data-flow framework evelyn duesterwald mary lou soffa session 2b: error processing r. kemmerer executable documentation: testing the documentation, documenting the testing f. ballard bof on inheritance william cook penguin's progress: desktops of the future peter salus sentry: a novel hardware implementation of classic operating system mechanisms a hardware protection mechanism called the sentry is introduced which requires minimal overhead to be invoked and no overhead when providing protection. it is fail-safe, fault tolerant, and can be added to most existing architectures. 
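the association-list entry above ("a general-arrays implementation of association lists") represents bindings as a two-column matrix, keys in the first column and values in the second. a minimal sketch of the same layout outside apl, kept as a list of two-element rows rather than a hash table so that the column structure of the original stays visible.

    def bind(alist, key, value):
        """add or overwrite a binding; alist is a list of [key, value] rows."""
        for row in alist:
            if row[0] == key:
                row[1] = value
                return alist
        alist.append([key, value])
        return alist

    def lookup(alist, key, default=None):
        """retrieve the value bound to key, scanning the first column."""
        for k, v in alist:
            if k == key:
                return v
        return default

    table = []
    bind(table, "colour", "teal")
    bind(table, (1, 2), "arbitrary keys are fine")
    print(lookup(table, (1, 2)))    # 'arbitrary keys are fine'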
it provides a convenient low level service suitable as the underlying primitive around which more sophisticated mechanisms can be established. gene c. barton a translator from c to a -calculus representation michael karr a graphical fp language f g pagan extensible protected types o. p. kiddle a. j. wellings structured interrupts ted hills java servlets an introduction to writing and running java servlets on linux doug welzel integrating configuration management into a generic environment the software development process consists of a number of complex activities for work coordination, organization, communication, and disciplines that are essential for achieving quality software, maintaining system integrity, and keeping the software process manageable. software engineering environments can be helpful instruments in pursuing these goals when they are integrated, open to extension, and capable of adapting to real processes as they occur in software development projects. adaptability and the ability to perform adaptations rapidly are crucial features of sees. in this paper we are presenting an approach to rapid environment extension that provides the means to capture characteristics of software development processes and realize environment support for these processes by using existing tools. an object oriented environment infrastructure is the basis for achieving these goals while providing and maintaining an integrated behavior of the environment. the presented approach is demonstrated by defining a set of classes for version control and configuration management that model the behavior of an existing configuration management toolkit. axel mahler andreas lampen java and distributed object models: an analysis marjan hericko matjaz b. juric ales zivkovic ivan rozman tomaz domajnko marjan krisper tsiopak - a proposal for a new sharp apl file system communication between os files is often necessary when information that is manipulated and stored in sharp apl files is required to be used by non-apl routines. for example, one may use apl for number crunching and have the output processed by sas or fortran to plot the data. the arising demand for procedures that would make data accessible from apl to other systems motivated the author to take action. tsiopak was developed with the intention to provide a standard interface between sharp apl and the mvs operating system facilitating the accessibility of data created through apl and other languages. in addition, the reader will find that starting from the section titled the tsiopak file system, the format for each section looks very much like the sharp apl reference manual. this is not a coincidence, what the author has tried to achieve with this work is to have a file system functionally identical to sharp's using os files. the author's efforts have gone as far as trying to replicate the format of sharp's manual. in addition, parts of the sharp apl file system have not been implemented in tsiopak due to the lack of time and resources. hence, this paper is a proposal for a new file system which would make apl-generated data generally available. carlos g. leon hoard emery d. berger kathryn s. mckinley robert d. blumofe paul r. wilson modeling components and frameworks with uml cris kobryn interface language for supporting programming styles we suggest a novel application of a formal specification language: we use it to support some programming conventions and encourage certain styles of programming. 
the larch/c interface language (lcl) is a language for documenting the interfaces of ansi c programs. it is designed to support a style of c programming where a program is organized around a set of software modules. even though c does not support abstract data types, lcl supports the specifications of abstract data types, and provides guidelines on how abstract types can be implemented in c. a lint-like program checks for some conformance of c code to its lcl specification. yang meng tan configuration management using sysl r. thomson i. sommerville evolutionary design of complex software (edcs) john salasin howie shrobe using a fine-grained comparative evaluation technique to understand and design software visualization tools paul mulholland real-time and embedded systems john a. stankovic gfsr pseudorandom number generation using apl this paper demonstrates the effectiveness of apl for implementing the generalized feedback shift register (gfsr) approach to producing a random number stream. various approaches to generating streams of pseudorandom numbers computationally have been devised, dating at least as far back as norbert wiener. these are normally discussed in the context of a course in modeling and simulation, since there may be practical implications to consider when building simulation models in apl or any other high level language. this is particularly the case for those circumstances when a (pseudo) random number sequence needs to be repeated, or when multiple streams are needed. in such cases a pseudorandom number generator other than that supplied in the programming language command set needs to be implemented. the linear congruential method for pseudorandom number generation is still commonly used, and is easily implemented in apl or any other high level language; however, it is known to have undesirable n-space uniformity characteristics. moreover, the period length of its random number stream is limited by the underlying machine's word size. this is a serious issue, since at present day computer speeds, a simulation run could exhaust such a random number stream in a few hours. very long period (vlp) pseudorandom number generators remedy this deficiency.one class of these, generalized feedback shift register (gfsr) pseudorandom number generators, is based on algebraic manipulation of irreducible trinomials of high order. the manner in which this is accomplished particularly lends itself to elegant apl implementation. charles winton simple objects for standard ml john reppy jon riecke analyzing exotic instructions for a retargetable code generator exotic instructions are complex instructions, such as block move, string search, and string edit, which are found on most conventional computers. recent retargetable code generator and instruction set analysis systems have not dealt with exotic instructions. a method to analyze exotic instructions is presented which provides the information needed by a retargetable code generator. the analysis uses source-to-source transformations to prove the equivalence of high-level language operators to exotic instructions. examples are presented which illustrate the analysis process. thomas m. morgan lawrence a. rowe non-linear type extensions bob brown communication issues working group roger van scoy extending the scope of syntactic abstraction oscar waddell r. 
kent dybvig design of a high-speed prolog machine (hpm) ryosei nakazaki akihiko konagaya shin'ichi habata hideo shimazu mamoru umemutra masahiro yamamoto minoru yokota takashi chikayama layering and multiple views of data abstraction in ada: techniques and experiences this paper describes the results of a study to investigate alternative techniques for using private types and packages to limit visibility of declarative items in ada. shortfalls of the conventional technique of applying private types, with regard to issues of recompilation, source code integrity, limiting visibility, performance, support of debugging, and interrelating abstractions, are identified. possibly other developers of large ada systems have encountered these issues; an analysis of over two million lines of ada software developed for the army, navy, and air force shows that private types are rarely used. this paper describes a series of unconventional techniques for using private types and packages to layer more abstract packages on top of less abstract packages, and to develop multiple views of abstract data types applicable to different classes of users. the promising techniques were applied on an actual software engineering problem, the development of a tree- builder component of an ada-to-diana translator. a description of experiences, in which the techniques realized benefits for changing and debugging the software and for improving reliability, is provided. s. keller j. perkins k. o'leary leases: an efficient fault-tolerant mechanism for distributed file cache consistency caching introduces the overhead and complexity of ensuring consistency, reducing some of its performance benefits. in a distributed system, caching must deal with the additional complications of communication and host failures. leases are proposed as a time-based mechanism that provides efficient consistent access to cached data in distributed systems. non-byzantine failures affect performance, not correctness, with their effect minimized by short leases. an analytic model and an evaluation for file access in the v system show that leases of short duration provide good performance. the impact of leases on performance grows more significant in systems of larger scale and higher processor performance. c. gray d. cheriton estimating software fault-proneness for tuning testing activities giovanni denaro technical correspondence r. l. earle r. b. kieburtz a. silberschatz power conscious fixed priority scheduling for hard real-time systems youngsoo shin kiyoung choi a less dynamic memory allocation scheme for algol-like languages the conventional storage allocation scheme for block structured languages requires the allocation of stack space and the building of a display with each procedure call. this paper describes a technique for analyzing the call graph of a program in a block structured language that makes it possible to eliminate these operations from many call sequences, even in the presence of recursion. thomas p. murtagh comparison of program testing strategies elaine j. weyuker stewart n. weiss dick hamlet "teaching systems design through systems documentation" this paper describes a teaching methodology, illustrating: • use of case studies having general appeal to full-time students and professional persons, • generally-accepted system documents, • a prototype sequence for presentation of each document type, • measures of success of the methodology, and • pitfalls. john p. 
walter building partitioned architectures based on the ravenscar profile the requirement to support software partitioning is a recurring theme within high integrity and safety critical systems. the partition concept is used to implement differing access protection levels for applications of varying criticality levels executing on the same processor. partitions can also be used in fault tolerant systems that require high availability, redundancy or dynamic re-configuration. the _ravenscar profile_ was a major output of the 8th international real-time ada workshop. the profile defines a subset of the ada95 tasking constructs that matches the requirements of safety critical, high integrity and hard real-time systems by eliminating constructs with high overhead or non-deterministic behavior (semantically or temporally) whilst retaining those elements that form the basic building blocks for constructing analyzable and deterministic real-time software. this paper describes how a cots ada95 compilation system that implements the ravenscar profile can be used in the implementation of a partitioned architecture in an integrated modular avionics context based on the arinc 653 application executive (apex) standard. brian dobbing experience with cst: programming and implementation cst is a programming language based on smalltalk-80 that supports concurrency using locks, asynchronous messages, and distributed objects. in this paper, we describe cst: the language and its implementation. example programs and initial programming experience with cst are described. our implementation of cst generates native code for the j-machine, a fine-grained concurrent computer. some compiler optimizations developed in conjunction with that implementation are also described. w. horwat a. a. chien w. j. dally patterns for decoupling data structures and algorithms dung (“zung”) nguyen stephen b. wong a new approach to fault tolerance in distributed ada programs j. c. knight m. e. rouleau prettyprinting in an interactive programming environment prettyprint algorithms designed for printing programs on paper are not appropriate in an interactive environment where the interface to the user is a crt screen. we describe a data representation and an algorithm that allow the efficient generation of program displays from a parsed internal representation of a program. the displays show the structure of the program by consistent and automatic indentation. they show the program in varying levels of detail by replacing unimportant parts with ellipsis marks. the relative importance of program parts is determined jointly by the structure of the program and by the current focus of attention of the programmer. martin mikelsons tadpoles and frogs: metamorphosis in apl code thomas j. pritchard integrating the understanding of quality in requirements specification and conceptual modeling the notion of _quality_ of information system models and other conceptual models is not well understood. however, recent quality frameworks have tried to take a more systematic approach. we have earlier developed a framework for understanding and assessing the quality of models in general, with emphasis on models made in conceptual modeling languages. at the same time, there is a long tradition on discussing quality of more specialized models, e.g. in the form of requirements specifications.
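the prettyprinting entry above displays a parsed program with automatic indentation and replaces parts far from the current focus of attention with ellipsis marks. a minimal sketch of that display step over a toy nested-list program representation; the depth threshold and the tree encoding are assumptions for illustration, not the paper's data structure.

    def render(node, focus_path=(), depth_budget=2, indent=0):
        """print a toy syntax tree, eliding subtrees far from the focus path."""
        pad = "  " * indent
        if not isinstance(node, list):                 # leaf token
            print(pad + str(node))
            return
        head, children = node[0], node[1:]
        on_focus = bool(focus_path) and focus_path[0] == head
        if depth_budget <= 0 and not on_focus:
            print(pad + head + " ...")                 # ellipsis for unimportant parts
            return
        print(pad + head)
        rest = focus_path[1:] if on_focus else ()
        for child in children:
            render(child, rest, depth_budget - (0 if on_focus else 1), indent + 1)

    program = ["proc", ["decls", ["var", "x"], ["var", "y"]],
                       ["body", ["if", ["cond", "x>y"], ["then", ["assign", "x", "y"]]]]]
    # subtrees away from the focused if-statement collapse to "... "
    render(program, focus_path=("proc", "body", "if"), depth_budget=1)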
several authors have created taxonomies of useful properties of models and requirements specifications, the most comprehensive overview being presented by alan davis.we have in this paper extended our quality framework for models based on the work by davis on quality in requirement specifications, looking upon a requirements specification as a specific type of model. comparing the approaches we find on the one hand that the properties as summarized by davis are subsumed by our framework on a high level, and that there are aspects within our framework that are not covered by davis. on the other hand, the comparison has resulted in a useful extension and deepening of our framework on this specific kind of model, and in this way improved the practical applicability of our framework when applied to discussing the quality of requirements specifications. john krogstie support for specifying temporal behavior in ada designs r. j. a. buhr g. m. karam r. casselman the multiway rendezvous the multiway rendezvous is a natural generalization of the rendezvous in which more than two processes may participate. the utility of the multiway rendezvous is illustrated by solutions to a variety of problems. to make their simplicity apparent, these solutions are written using a construct tailor-made to support the multiway rendezvous. the degree of support for multiway rendezvous applications by several well-known languages that support the two- way rendezvous is examined. since such support for the multiway rendezvous is found to be inadequate, well-integrated extensions to these languages are considered that would help provide such support. arthur charlesworth towards a unified event-based software architecture john c. grundy john g. hosking warwick b. mugridge systematic reuse: a scientific or an engineering method? ruben prieto-diaz an introduction to the extended pascal language tony hetherington whither object orientation? what is object orientation, anyway? walter g. fil scheduling issues in ada 9x c. douglass locke encapsulation constructs in systems programming languages w. f. appelbe a. p. ravn the zark library of utility functions the primary appeal of apl has always been its _productivity._ you can develop and maintain computer applications faster with apl than with any other high-level programming language. apl has a number of distinct traits that contribute to this productivity advantage over other languages. however, language productivity is only one side of the programmer productivity triangle. the other two are programmer experience and utility software. the programmer with the most productive language, the most experience, and the best utility software will develop better code, and faster than the programmer who is deficient in any of these three areas. experienced apl programmers generally score well in the first two areas, but poorly in the third, probably because apl obviates the perceived need for high quality utility software. programmers in other languages do not have this luxury and so tend to develop and rely upon more extensive and higher quality utility software. consequently, the productivity advantage of apl programmers over other programmers is not as significant as it can be. zarklib, the zark library of utility functions, is a scheme that attempts to remedy this problem. it is both an extensive collection of apl utility software and a motivational scheme. 
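the multiway rendezvous entry above generalizes the two-way rendezvous so that more than two processes take part in one interaction. a minimal one-shot sketch built on a barrier: each participant deposits a value, and every participant sees all the deposits once the whole group has arrived. the tailor-made construct the entry uses for its solutions is not reproduced; this is only the synchronization skeleton, and the names are illustrative.

    import threading

    class MultiwayRendezvous:
        """all `parties` participants meet, exchange values, and proceed together."""
        def __init__(self, parties: int):
            self.parties = parties
            self.slots = {}
            self.lock = threading.Lock()
            self.barrier = threading.Barrier(parties)

        def meet(self, name, value):
            with self.lock:
                self.slots[name] = value
            self.barrier.wait()             # block until the whole group has arrived
            return dict(self.slots)         # each participant sees all contributions

    rv = MultiwayRendezvous(3)
    def worker(i):
        seen = rv.meet(f"p{i}", i * i)
        print(f"p{i} saw {sorted(seen.items())}")

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
    for t in threads: t.start()
    for t in threads: t.join()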
it guarantees its growth and maintenance by employing a "pyramid" scheme that motivates apl programmers to give until it hurts, and then give again. gary a. bergquist comparison of delivered reliability of branch, data flow and operational testing: a case study many analytical and empirical studies of software testing effectiveness have used the probability that a test set exposes at least one fault as the measure of effectiveness. that measure is useful for evaluating testing techniques when the goal of testing is to gain confidence that the program is free from faults. however, if the goal of testing is to improve the reliability of the program (by discovering and removing those faults that are most likely to cause failures when the software is in the field) then the measure of test effectiveness must distinguish between those faults that are likely to cause failures and those that are unlikely to do so. delivered reliability was previously introduced as a means of comparing testing techniques in that setting. this paper empirically compares reliability delivered by three testing techniques, branch testing, the all-uses data flow testing criterion, and operational testing. the subject program is a moderate-sized c-program (about 10,000 loc) produced by professional programmers and containing naturally occurring faults. phyllis g. frankl yuetang deng case: a testbed for modeling, measurement and management graham tate june verner ross jeffery communication optimizations for parallel c programs yingchun zhu laurie j. hendren a general, fine-grained, machine independent, object-oriented language birger andersen from formal models to formally based methods: an industrial experience we address the problem of increasing the impact of formal methods in the practice of industrial computer applications. we summarize the reasons why formal methods so far did not gain widespead use within the industrial environment despite several promising experiences. we suggest an evolutionary rather than revolutionary attitude in the introduction of formal methods in the practice of industrial applications, and we report on our long-standing experience which involves an academic institution. politecnico di milano, two main industrial partners, enel and cise, and occasionally a few other industries. our approach aims at augmenting an existing and fairly deeply rooted informal industrial methodology with our original formalism, the logic specification language trio. on the basis of the experiences we gained we argue that our incremental attitude toward the introduction of formal methods within the industry could be effective largely independently from the chosen formalism. emanuele ciapessoni piergiorgio mirandola alberto coen- porisini dino mandrioli angelo morzenti design and analysis of hierarchical software metrics ronald e. prather observations on the odmg-93 proposal for an object-oriented database language won kim generating a test oracle from program documentation: work in progress a fundamental assumption of software testing is that there is some mechanism, an oracle, that will determine whether or not the results of a test execution are correct. in practice this is often done by comparing the output, either automatically or manually, to some pre-calculated, presumably correct, output [17]. however, if the program is formally documented it is possible to use the specification to determine the success or failure of a test execution, as in [1], for example. 
this paper discusses ongoing work to produce a tool that will generate a test oracle from formal program documentation. in [9], [10] and [11] parnas et al. advocate the use of a relational model for documenting the intended behaviour of programs. in this method, tabular expressions are used to improve readability so that formal documentation can replace conventional documentation. relations are described by giving their characteristic predicate in terms of the values of concrete program variables. this documentation method has the advantage that the characteristic predicate can be used as the test oracle -- it simply must be evaluated for each test execution (input & output) to assign pass or fail. in contrast to [1], this paper discusses the testing of individual programs, not objects as used in [1]. consequently, the method works with program documentation, written in terms of the concrete variables, and no representation function need be supplied. documentation in this form, and the corresponding oracle, are illustrated by an example. finally, some of the implications of generating test oracles from relational specifications are discussed. dennis peters david l. parnas a model solution for the c3i domain this paper1 briefly describes a specific portion of recent work performed by the domain specific software architecture (dssa) project at the software engineering institute (sei) --- the development and use of a model solution for message translation and validation in the c3i domain. based on this experience and our involvement with programs in the c3i domain, future considerations are described. these considerations involve identifying potential models within a domain and making recommendations for developing and documenting model solutions which will enable the models to be reused. c. plinta k. lee object management issues for software engineering environments workshop report during recent years, several research efforts in the area of software development environments have focused on the provision of uniform object management systems (oms) as a framework for tool integration and communication. this paper summarizes discussions of an oms workshop on the issues that arise in defining an appropriate data model for an oms. maria h. penedo erhard ploedereder ian thomas acm forum robert l. ashenhurst distributed real-time system specification and verification in aptl in this article, we propose a language, asynchronous propositional temporal logic (aptl), for the specification and verification of distributed hard real- time sytems. aptl extends the logic tptl by dealing explicitly with multiple local clocks. we propose a distributed-system model which permits definition of inequalities asserting the temporal precedence of local clock readings. we show the expressiveness of aptl through two nontrivial examples. our logic can be used to specify and reason about such important properties as bounded clock rate drifting. we then give a 220(n) tableau-based decision procedure for determining aptl satisfiability, where n is the size (number of bits) of the input formula. farn wang aloysius k. mok e. allen emerson the impact project (panel session): determining the impact of software engineering research upon practice the purpose of this panel is to introduce the impact project to the community, and to engage the community in a broad ranging discussion of the project's goals, approaches, and methods. some of the project's early findings and directions will be presented. leon j. osterweil lori a. 
clarke michael evangelist jeffrey kramer dieter rombach alexander l. wolf introduction lynellen d. s. perry the last word stan kelly-bootle a comparison of the object-oriented and process-oriented paradigms (abstract only) rob strom acm president's letter: are operating systems obsolete? peter j. denning coordinating distributed applets with shade/java p. ciancarini d. rossi apl2os: design considerations for a nested array file system apl2os is an external function for the apl2 system, designed to enable apl2 applications to access operating system files (and information about these files) in a straightforward and efficient way, using the power of apl2 syntax to maximum advantage. the design goals and approaches for apl2os are discussed, in the context of a summary of its features. david m. weintraub software architecture: a roadmap david garlan book review: the no b.s. guide to linux zach beane the design of an integrated, interactive and incremental programming environment we are currently implementing a system to help experienced programmers during the development, implementation and debugging of their programs. this system, built on top of a screen oriented structural editor, offers possibilities of attaching descriptors to every portion of the program and to maintain - simultaneously - different versions of the program being written, including tentative hypothetical versions. it comprises a mecanism to maintain minimal consistency between modified parts of code, the non-modified parts of code and the attached descriptors, as well as an evaluation module functioning in different modes : normal evaluation, symbolic evaluation and checking evaluation. the standard programming aids, such as indexors, pretty printers, trace packages, undo- and history-facilities are generalized to handle the descriptors and unfinished programs as well. harald wertz iterators: signs of weakness in object-oriented languages henry g. baker new tiling techniques to improve cache temporal locality yonghong song zhiyuan li backtracking without trailing in clp ( rlin ) existing clp languages support backtracking by generalizing traditional prolog implementations: modifications to the constraint system are trailed and restored on backtracking. although simple and efficient, trailing may be very demanding in memory space, since the constraint system may potentially be saved at each choice point. this article proposes a new implementation scheme for backtracking in clp languages over linear (rational or real) arithmetic. the new scheme, called semantic backtracking, does not use trailing but rather exploits the semantics of the constraints to undo the effect of newly added constraints. semantic backtracking reduces the space complexity compared to implementations based on trailing by making it essentially independent of the number of choice points. in addition, semantic backtracking introduces negligible space and time overhead on deterministic programs. the price for this improvement is an increase in backtracking time, although constraint-solving time may actually decrease. the scheme has been implemented as part of a complete clp system clp (rlin) and compared analytically and experimentally with optimized trailing implementations. experimental results on small and real-life problems indicate that semantic backtracking produces significant reduction in memory space, while keeping the time overhead reasonably small. 
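the "backtracking without trailing" entry above contrasts its semantic backtracking with the conventional scheme in which changes to the constraint store are recorded on a trail and undone at choice points. a minimal sketch of that conventional trailing baseline (not of the paper's semantic scheme), over an assumed store that simply collects constraints.

    class TrailingStore:
        """constraint store with a trail: additions since a choice point
        are recorded so that backtracking can restore the previous state."""
        def __init__(self):
            self.constraints = []
            self.trail = []          # constraints added since program start
            self.marks = []          # stack of choice points (trail lengths)

        def add(self, constraint):
            self.constraints.append(constraint)
            self.trail.append(constraint)

        def choice_point(self):
            self.marks.append(len(self.trail))

        def backtrack(self):
            mark = self.marks.pop()
            while len(self.trail) > mark:    # undo everything added since the mark
                self.trail.pop()
                self.constraints.pop()       # additions are the only tracked change here

    store = TrailingStore()
    store.add("x >= 0")
    store.choice_point()
    store.add("x + y <= 3")
    store.backtrack()
    print(store.constraints)     # ['x >= 0']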
pascal van hentenryck viswanath ramachandran a workplan for business process reengineering and a challenge for information science and technology bill schwartz betty w. hwang c. jinshong hwang introduction to object-oriented design (abstract) lori stipp grady booch automatic inline allocation of objects julian dolby guest editor's introduction richard a. demillo a functional for loop c. k. yuen paradigms for design and implementation in ada an examination of the respective advantages and disadvantages of three characteristic paradigms of design and implementation in ada illustrates the importance of choosing the appropriate paradigm for a given set of circumstances. václav rajlich locality in distributed computations the defining characteristic of a distributed system is the (temporal) distance between components; communication time is non-trivial compared to processing time. because of this, the design of efficient distributed computations involves trade-offs between maximizing the amount of parallelism and minimizing communication costs. we argue that any sort of centralized control, whether in user-level algorithms or at the systems level, impairs efficiency. we then introduce a scheme for automatic process synchronization that meets our locality criterion. david k. garnick a. toni cohen vads apse: an integrated ada programming support environment ed matthews greg burns scene: using scenario diagrams and active text for illustrating object-oriented programs kai koskimies hanspeter mössenböck stars process concepts summary hal hart jerry doland dick drake bill ett jim king herb krasner lee osterweil jim over exact analysis of the cache behavior of nested loops we develop from first principles an exact model of the behavior of loop nests executing in a memory hierarchy, by using a nontraditional classification of misses that has the key property of composability. we use presburger formulas to express various kinds of misses as well as the state of the cache at the end of the loop nest. we use existing tools to simplify these formulas and to count cache misses. the model is powerful enough to handle imperfect loop nests and various flavors of non-linear array layouts based on bit interleaving of array indices. we also indicate how to handle modest levels of associativity, and how to perform limited symbolic analysis of cache behavior. the complexity of the formulas relates to the static structure of the loop nest rather than to its dynamic trip count, allowing our model to gain efficiency in counting cache misses by exploiting repetitive patterns of cache behavior. validation against cache simulation confirms the exactness of our formulation. our method can serve as the basis for a static performance predictor to guide program and data transformations to improve performance. siddhartha chatterjee erin parker philip j. hanlon alvin r. lebeck message pattern specifications: a new technique for handling errors in parallel object oriented systems as object oriented techniques enable the fabrication of ever more sophisticated systems, the need grows for a mechanism to ensure the consistent and 'correct' behaviour of each object at run-time. we describe a new, in-source specification mechanism, message pattern specifications (mps), to directly satisfy this need in a succinct, orthogonal and disciplined manner. targeted for use in parallel object oriented systems, mps allows programmers to enunciate the 'legal' patterns of run-time behaviour in which their objects may engage.
furthermore, it supports the definition of methods for object recovery or graceful failure in case these specifications are violated during execution. jan a. purchase russel l. winder operating system support for high-speed communication peter druschel rationale for fault exposure ratio k john d. musa portable resource control in java preventing abusive resource consumption is indispensable for all kinds of systems that execute untrusted mobile code, such as mobile object systems, extensible web servers, and web browsers. to implement the required defense mechanisms, some support for resource control must be available: accounting and limiting the usage of physical resources like cpu and memory, and of logical resources like threads. java is the predominant implementation language for the kind of systems envisaged here, even though resource control is a missing feature on standard java platforms. this paper describes the model and implementation mechanisms underlying the new resource-aware version of the j-seal2 mobile object kernel. our fundamental objective is to achieve complete portability, and our approach is therefore based on java bytecode transformations. whereas resource control may be targeted towards the provision of quality of service or of usage-based billing, the focus of this paper is on security, and more specifically on prevention of denial-of-service attacks originating from hostile or poorly implemented mobile code. walter binder jane g. hulaas alex villazón improving ada tasking performance gary frankel book review: unix systems for modern architectures randolph bentson software engineering in the year 2001 (panel session) robert balzer michael jackson alan kay michael sintzoff fp with data abstraction and strong typing this paper begins by presenting arguments for including data abstraction and compile time type checking in functional programming languages, and discussing in general terms the mechanisms required to provide support for these features. it then goes on to present brief introductions to the algebraic style of formally specifying abstract data types and to the fp style of writing functional programs. the middle section describes a version of fp that includes type checking and data abstraction. the key to this is the development of a framework for describing the fp type system in terms of the already existing algebra of fp programs. the paper concludes with an example program illustrating the style of fp programming made possible by our extensions. john guttag james horning john williams a generic object-oriented concurrency mechanism for extensibility and reuse of synchronization components laurent thomas toward a formal theory of extensible software as software projects continue to grow in scale and scope, it becomes important to reuse software. an important kind of reuse is _extensibility,_ i.e., the extension of software without accessing existing code to edit or copy it. in this paper, we propose a rigorous, semantics-based definition of software extensibility. then we illustrate the utility of our definitions by applying them to several programs. the examination shows how programming style affects extensibility and also drives the creation of a variant of an existing design pattern. we consider programs in both object-oriented and functional languages to prove the robustness of our definitions. shriram krishnamurthi matthias felleisen active design documents guy a.
boy which paradigm can improve the reliability of next-generation measurement system software satoshi imai takahiro yamaguchi givargis a. danialy concurrent reading and writing of clocks as an exercise in synchronization without mutual exclusion, algorithms are developed to implement both a monotonic and a cyclic multiple-word clock that is updated by one process and read by one or more other processes. leslie lamport analyzing stores and references in a parallel symbolic language we describe an analysis of a parallel language in which processes communicate via first-class mutable shared locations. the sequential core of the language defines a higher-order strict functional language with list data structures. the parallel extensions permit processes and shared locations to be dynamically created; synchronization among processes occurs exclusively via shared locations. the analysis is defined by an abstract interpretation on this language. the interpretation is efficient and useful, facilitating a number of important optimizations related to synchronization, processor/thread mapping, and storage management. suresh jagannathan stephen weeks an abstract machine for clp(r) an abstract machine is described for the clp(r) programming language. it is intended as a first step toward enabling clp(r) programs to be executed with efficiency approaching that of conventional languages. the core constraint logic arithmetic machine (clam) extends the warren abstract machine (wam) for compiling prolog with facilities for handling real arithmetic constraints. the full clam includes facilities for taking advantage of information obtained from global program analysis. joxan jaffar peter j. stuckey spiro michaylov roland h. c. yap developments in pascal-fc g. l. davies recovering software architecture from multiple source code analyses melissa p. chase steven m. christey david r. harris alexander s. yeh software testability: an experiment in measuring simulation reusability software reuse can be more readily enabled if the testing of the code in the previous environment is still applicable to the new environment. the reusability of previous verification efforts is an important parameter in assessing the "immediate" reusability of the code; in this paper, the verification technique that we are focusing on is software testing. this paper presents the use of a software testability measure, sensitivity analysis, as a quantitative assessment of the reusability of previous verification. the ability to reuse verification is a factor to consider in determining the reusability of code. we have applied this technique to a large nasa supersonic software simulation, high speed civil transport (hsct), and the reusability results of that application suggest a possible concern about the sufficiency of the original verification. j. voas j. payne r. mills j. mcmanus programming pearls jon bentley a set of output functions for a utility library richard levine assessing and enhancing software testing effectiveness although many techniques for testing software have been proposed over the last twenty years, there is still not enough solid evidence to indicate which (if any) of these techniques are effective. it is difficult to perform meaningful comparisons of the cost and effectiveness of testing techniques; in fact, even defining these terms in a meaningful way is problematic. consider an erroneous program _p,_ its specification _s,_ and a test data adequacy criterion _c_ (such as 100% branch coverage).
even if we restrict the size of the test sets to be considered, there are a huge number of different test sets that satisfy criterion _c_ for _p_ and _s._ since these adequate test sets typically have different properties, in order to investigate effectiveness (or other properties) rigorously, the entire space of test sets must be considered (according to some reasonable probability distribution) and appropriate probabilistic analysis and/or statistical sampling techniques must be used. in earlier research, supported by nsf grant ccr-9206910, we developed analytical tools and an experiment design to address these issues and applied them to comparing a number of well-known testing techniques. the primary measure of effectiveness considered was the probability that an adequate test set would detect at least one fault, and most of the experiment subjects were fairly small. the main thread of this research project extends that work in several directions: additional measures of cost and effectiveness are considered, analytical and experimental tools are developed for these measures, and experiments are conducted on larger programs. writing c functions callable by fortran and vice versa peter s. shenkin the marketing of forth workshop report warren bean non-technical aspects of using ada david a. feinberg object-oriented specification of reactive systems h.-m. järvinen r. kurki-suonio m. sakkinen k. systä at the forge: writing modules for mod_perl reuven m. lerner emerging issues (session summary) david garlan crl/pascal: a pascal-oriented cross reference language and its applications xu baowen an ada 9x subset for inheritance-based reuse and its translation to ada 83 (part 1) michael hirasuna programming pearls jon bentley an efficient cache-based access anomaly detection scheme sang l. min jong-deok choi literate programming christopher j. van wyk cooking with linux matt welsh bugs in the programs security on the internet is receiving increasing attention as more and more organizations are becoming dependent on the network. the use of the internet for electronic commerce, government operations, research activities, and entertainment has now reached the point that attacks against the network and the systems connected to it have become major news items. while the press highlights a few high-profile incidents, the actual number of attacks is much higher. the cert coordination center works with the internet community to deal with incidents and responded to over 8,000 incidents in 1999. the incident projection for year 2000 is 17,000 to 20,000. at the same time, the amount of damage resulting from the incidents is also increasing. while the press often focuses on cases of web site graffiti, more serious cases of financial fraud, extortion, and debilitating denial of service attacks are being reported at increasing rates. richard d. pethia safe and leakproof resource management using ada83 limited types henry g. baker delegation with ada 9x liisa räihä time-machine computing: a time-centric approach for the information environment this paper describes the concept of time-machine computing (tmc), a time-centric approach to organizing information on computers. a system based on time-machine computing allows a user to visit the past and the future states of computers. when a user needs to refer to a document that he/she was working on at some other time, he/she can travel in the time dimension and the system restores the computer state at that time.
since the user's activities on the system are automatically archived, the user's daily workspace is seamlessly integrated into the information archive. the combination of spatial information management of the desktop metaphor and time traveling allows a user to organize and archive information without being bothered by folder hierarchies or the file classification problems that are common in today's desktop environments. tmc also provides a mechanism for linking multiple applications and external information sources by exchanging time information. this paper describes the key features of tmc, a time-machine desktop environment called "timescape," and several time-oriented application integration examples. jun rekimoto interfacing user processes and kernel in high level language m. ancona a. clematis v. gianuzzi ada 9x reusable components jeffrey r. carter the algorithms capture approach to ada transition david p. wood an evaluation of the real-time performances of svr4.0 and svr4.2 unix is one of the most widely used operating systems on current workstations. however, unix was originally designed as a multitasking and time-sharing system with little concern for supporting real-time applications. recent versions of unix have incorporated real-time features and the designers of these systems claim to provide better response times than the standard unix kernel. in order to assess the benefits of these new features and verify these claims, this paper compares the real-time performances of two popular versions of unix, namely system v release 4.0 and system v release 4.2 for the intel platform. sherali zeadally surveyors' forum: notations for concurrent programming gregory r. andrews fred b. schneider cluster: an informal report shang lujun concurrent programming vs. concurrency control: shared events or shared data two views of concurrency in an object system exist. those pursuing concurrent programming believe that activities in the real world are inherently concurrent and therefore objects are themselves active. objects engage in shared events by sending and receiving messages. communicating sequential processes [hoar85a] and actors [agha86a] embrace this view. on the other hand, those pursuing models of concurrency control believe that objects are data and that concurrent access to data needs to be controlled by the system according to some correctness notion. database transactions, atomic objects [weih84a, schw84a] and nested objects [mart88a] embrace this view. concurrent programming, in our view, places a significant burden on programming. correct concurrent behavior is specified as combinations of interactions within a potentially large set of concurrent objects. a programmer must verify that the implementations of all the objects never produce undesirable interactions. correctness of concurrent behavior is left to the programmer. we are pursuing models embracing concurrency control primarily because a programmer is not required to consider concurrency. the operations on an object can be specified in terms of preconditions and postconditions and traditional program verification techniques can be used to verify an operation's implementation. a programmer only considers the serial behavior of an object in isolation; he need not concern himself with how other concurrent activities might affect the object. correctness of interleavings is left to the system. serializability is the usual correctness notion for concurrency control algorithms.
in transaction terminology, each competing transaction executes a sequence of basic actions. any interleaving of the actions is correct if it is equivalent to some serial execution of the transaction. serializability allows a transaction to be programmed in isolation, that is without considering possible interleavings with other transactions. the system may indeed interleave the actions of several transactions but it is up to the system to make the interleaving appear serial. concurrent programming is apparently more general. a programmer can implement anything, including undesirable interactions like deadlock. the price for this generality is that the programmer must reason about global orderings of events and thus correctness is difficult to show. the traditional transaction model is not general enough for programming shared object systems. for example, several researchers, [bern87a, garc87a, pu88a], have recognized that transactions are too restrictive for long-lived activities. the problem is that the transaction model is too conservative. only reading and writing a data item at a single layer of abstraction is modeled. once a read-write, write-read or write-write dependency is established between two transactions, it remains for the life of the transaction and limits further interleavings. our approach is to discover and explore less restrictive correctness notions that still allow programmers to implement operations on objects in isolation. in [mart88a] we present two such correctness notions: externally serializable computations and semantically verifiable nonserializable computations. both correctness notions assume the nested object model. in [mart87a] we give a nested object solution to the dining philosophers' problem [dijk71a]. nested objects incorporate both the semantics of an object and the data abstraction hierarchy of an object. nested objects form a nested object system. a nested object system is hierarchical; objects exist at different levels of the system. the execution of an operation on an object at level i results in the execution of operations on objects at level i-1. however, only top level objects are viewed externally. a computation at level i is a description of the state change made to level i objects and the return values produced by executing a partially ordered set of operations on level i objects. the computations at each level together form an n-level system computation. externally serializable computations are n-level system computations in which the top level objects are left in states that could be produced by serial computations. however, lower level objects may be left in states that no serial computation could produce. because both data abstraction hierarchies and operations semantics are considered in the nested object model, dependencies established between concurrent computations can be systematically ignored. long-lived computations can execute efficiently if dependencies can later be ignored. nested objects are more general than other models of concurrency control. transactions are two-level nested objects that read and write basic data items. atomic objects are two-level nested objects that perform abstract operations. the 1988 object based concurrent programming workshop did not directly address the differences between concurrent programming and concurrency control. 
perhaps future workshops can contrast the generality, the applicability, the programmability, the security and the performance implications of models from both concurrent programming and concurrency control. bruce martin dhrystone benchmark (ada version 2): rationale and measurement rules r. p. weicker commonobjects: an overview alan snyder wisconsin architectural research tool set mark d. hill james r. larus alvin r. lebeck madhusudhan talluri david a. wood a comparison of ada and java as a foundation teaching language java has entered the software arena in unprecedented fashion, upstaging languages and technologies that are longstanding players in the industry. almost unheard of before 1995, the language and its surrounding technology are attracting increasing attention not merely in the hardware and software communities but also among lay users and in the popular press. this phenomenon has not escaped the attention of academia, and a growing number of colleges and universities are looking at java as a candidate "foundation" language on which to base computer science curricula. this paper looks at the role of a programming language for teaching computer science courses and compares ada and java against the identified criteria. it concludes that ada is the superior choice, based on technical factors such as its more comprehensive feature set and its methodology-neutral design, and also taking into account external factors including the availability of good but inexpensive compilation environments. section 1 provides a brief overview of the java technology. section 2 identifies the criteria relevant to choosing a programming language for a foundation-level computer science course. section 3 compares ada and java with respect to the criteria related to technical aspects of the language, and section 4 compares the languages with respect to external factors. section 5 summarizes the conclusions of this analysis. appendix a furnishes a summary of the java language. it is assumed that the reader is familiar with ada, and thus an ada language summary is not included. web sites with links to ada information include [sigada], [kempe1], and [adaic]. benjamin m. brosgol informed prefetching and caching r. h. patterson g. a. gibson e. ginting d. stodolsky j. zelenka an experiment in estimating reliability growth under both representative and directed testing brian mitchell steven j. zeil di: an interactive debugging interpreter for applicative languages s. k. skedzielewski r. k. yates r. r. oldehoeft rethinking documentation and interface: reflections on categorical approaches dennis wixon score '82 - a summary (at ibm systems research institute, 3/23-3/24/82) "score '82", the first workshop on software counting rules, was attended by practitioners who are working with "software metrics". the concern was with methodologies for counting such software measurables as the number of "operators", "operands" or the number of lines of code in a program. a "metric" can be a directly countable "measurable" or a quantity computable from one or several such "measurables". "metrics" quantify attributes of the software development process, the software itself, or some aspect of the interaction of the software with the processor that hosts it. in general, a "metric" should be useful in the development of software and in measuring its quality. it should have some theory to support its existence, and it should be based on actual software data.
this workshop was concerned principally with the data aspects of "metrics", especially with the rules underlying the collection of the data from which they are computed. john e. gaffney slicing object-oriented software loren larsen mary jean harrold technical correspondence diane crawford the testing of an apl compiler wai-mee ching alex katz treating failure as state l. wong b. c. ooi apl2 - a very superior fortran properly written programs in apl2 exceed the corresponding apl programs in brevity and clarity by as large a margin as apl programs exceed those written in fortran. this paper takes a topic from the area of finance and uses it to illustrate this contention. norman thomson linux on low-end hardware trenton b. tuggle ps-algol's device-independent output statement p. c. philbrow i. armour m. p. atkinson j. livingstone modulo scheduling with multiple initiation intervals nancy j. warter-perez noubar partamian extending the scope of the program library kjell-hakan narfelt dick schefstrom units of distribution for distributed ada trevor mudge modeling and validation of the real-time mach scheduler hiroshi arakawa daniel i. katcher jay k. strosnider hideyuki tokuda the program dependence graph and its use in optimization in this paper we present an intermediate program representation, called the program dependence graph (pdg), that makes explicit both the data and control dependences for each operation in a program. data dependences have been used to represent only the relevant data flow relationships of a program. control dependences are introduced to analogously represent only the essential control flow relationships of a program. control dependences are derived from the usual control flow graph. many traditional optimizations operate more efficiently on the pdg. since dependences in the pdg connect computationally related parts of the program, a single walk of these dependences is sufficient to perform many optimizations. the pdg allows transformations such as vectorization, that previously required special treatment of control dependence, to be performed in a manner that is uniform for both control and data dependences. program transformations that require interaction of the two dependence types can also be easily handled with our representation. as an example, an incremental approach to modifying data dependences resulting from branch deletion or loop unrolling is introduced. the pdg supports incremental optimization, permitting transformations to be triggered by one another and applied only to affected dependences. jeanne ferrante karl j. ottenstein joe d. warren best of technical support corporate linux journal staff a fortran iv to quickbasic translator rizaldo b. caringal phan minh dung algorithms and models max j. schindler displaying idl instances r. snodgrass prolog puzzles r. j. casimir rt-modula2: an embedded in modula2 language for writing concurrent and real time programs juan hernández juan antonio sanchez formal specification and development of an ada compiler - a vdm case study the vienna development method (vdm) has been employed by dansk datamatik center (ddc) on a large-scale, industrial ada compiler development project. vdm is a formal specification and development method in that it insists on the initial specifications and all design steps being expressed in a formal (mathematically based) notation. 
this paper gives an overview of how vdm was used in the various steps of the ddc ada project, and we guide the reader through the steps involved from the initial formal specification of ada down to the actually coded multipass compiler. finally we report on the quantitative and qualitative experiences we have gained, both as regards the technical suitability of vdm for the project and as regards the implications on software management and quality assurance. geert b. clemmensen ole n. oest software systems architecting (abstract) eberhardt rechtin combining cfg and recursive functions to get a new language chen haiming debugging program design documents: graphics vs text-based methodologies f. layne wallace judy solano workshop summary first international workshop on requirements engineering: foundation of software quality (refsq'94) klaus pohl gernot starke peter peters will the real se metaphors please stand up!: (or, i never metaphor i didn't like!) david a. nelson oo process and metrics for effort estimation granville miller axiomatic bootstrapping: a guide for compiler hackers if a compiler for language l is implemented in l, then it should be able to compile itself. but for systems used interactively commands are compiled and immediately executed, and these commands may invoke the compiler; so there is the question of how ever to cross-compile for another architecture. also, where the compiler writes binary files of static type information that must then be read in by the bootstrapped interactive compiler, how can one ever change the format of digested type information in binary files? here i attempt an axiomatic clarification of the bootstrapping technique, using standard ml of new jersey as a case study. this should be useful to implementors of any self-applicable interactive compiler with nontrivial object-file and runtime-system compatibility problems. andrew w. appel website automation toolkit andrew johnson apl: a preferred language jack rudd from objects to actors: study of a limited symbiosis in smalltalk-80 in this paper we describe an implementation of actors in smalltalk-80, named actalk. this attempt is designed as a minimal extension preserving the smalltalk-80 language. actors are active and autonomous objects, as opposed to standard passive smalltalk-80 objects. an actor is built from a standard smalltalk-80 object by associating a process with it and by serializing the messages it could receive into a queue. we will study the cohabitation and synergy between the two models of computation: transfer of active messages (message and thread of activity) between passive objects, and exchange of passive messages between active objects. we propose a sketch of methodology in order to have a safe combination between these two programming paradigms. we show how to extend the actalk kernel into various extensions to define the basic actor model of computation, higher level programming constructs such as the 3 types of message passing (asynchronous, synchronous, and eager) proposed in the abcl/1 programming language, and distributed architectures. all these examples are constructed as simple extensions (by using inheritance) of our kernel model. j.-p. briot software reuse and standardization for smes: the cim-exp perspective george l.
kovacs a comparative evaluation of object definition techniques for large prototype systems although prototyping has long been touted as a potentially valuable software engineering activity, it has never achieved widespread use by developers of large-scale, production software. this is probably due in part to an incompatibility between the languages and tools traditionally available for prototyping (e.g., lisp or smalltalk) and the needs of large-scale-software developers, who must construct and experiment with large prototypes. the recent surge of interest in applying prototyping to the development of large-scale, production software will necessitate improved prototyping languages and tools appropriate for constructing and experimenting with large, complex prototype systems. we explore techniques aimed at one central aspect of prototyping that we feel is especially significant for large prototypes, namely that aspect concerned with the definition of data objects. we characterize and compare various techniques that might be useful in defining data objects in large prototype systems, after first discussing some distinguishing characteristics of large prototype systems and identifying some requirements that they imply. to make the discussion more concrete, we describe our implementations of three techniques that represent different possibilities within the range of object definition techniques for large prototype systems. jack c. wileden lori a. clarke alexander l. wolf formal modeling of synchronization methods for concurrent objects in ada 95 one important role for ada programming is to aid engineering of concurrent and distributed software. in a concurrent and distributed environment, objects may execute concurrently and need to be synchronized to serve a common goal. three basic methods by which objects in a concurrent environment can be constructed and synchronized have been identified [1]. to formalize the semantics of these methods and to provide a formal model of their core behavior, we provide some graphic models based on the petri net formalism. the purpose of this formal modeling is to illustrate the possibility of automatic program analysis for object-oriented features in ada-95. models for the three distributed-object synchronization methods are discussed, and a potential deadlock situation for one of the methods/models is illustrated. we conclude with some comparison of the three methods in terms of the model abstractions. ravi k. gedela sol m. shatz haiping xu book review: red hat linux secrets, second edition duane hellums a simple and efficient implementation approach for single assignment languages functional and single assignment languages have semantically pure features that do not permit side effects. this lack of side effects makes detection of parallelism in programs much easier. however, the same property poses a challenge in implementing these languages efficiently. a preliminary implementation of a compiler for the single assignment language, sisal, is described. the compiler uses reference counts for memory management and copy avoidance. performance results on a wide range of benchmark programs show that sisal programs compiled by our implementation run 2-3 times slower on average than the same programs written in c, pascal, and fortran, and orders of magnitude faster than other implementations of single assignment languages. extensions of these techniques for multiprocessor implementations are proposed. kourosh gharachorloo vivek sarkar john l.
hennessy esprit de corps suite spencer rugaber introduction to multi-threaded programming a description of thread programming basics for c programmers brian masney smalltalk and exploratory programming d. w. sandberg surveyor's forum: retargetable code generators m. ganapathi j. l. hennessy c. n. fischer codewizard for linux ben crowder transactions for concurrent object-oriented programming systems concurrent object-oriented programming systems (coops) require support for fault tolerance, concurrency control, consistent commitment of changes and program-initiated rollback. it is sometimes suggested that the classical transaction processing model successfully applied in databases and operating systems be integrated directly into coops facilities. this is clearly desirable, but by itself is too limiting. coops applications require several granularities of transaction-like facilities. a number of transaction-like mechanisms were addressed at the workshop, and no doubt there are many others suitable for coops. here i briefly survey four levels of granularity that were discussed at the workshop. in their workshop paper linearizable concurrent objects and in [1], maurice herlihy and jeannette wing describe a fine granularity correctness condition for coops, linearizability. linearizability requires that each operation should appear to "take effect" instantaneously and that the order of non-concurrent operations should be preserved. my workshop paper concurrent meld discusses the incorporation of atomic blocks as a programming facility into the meld coops. atomic blocks in effect implement linearizability at the level of blocks (essentially compound statements, which may encompass an entire method). in his workshop paper nested objects and in [4], bruce martin describes small grain mechanisms for both externally serializable and semantically verifiable operations for coops. externally serializable operations enforce serializability among top-level operations but permit non-serializable computations on subobjects. semantically verifiable operations do not enforce serializability at all, but instead consider the semantics of apparently conflicting operations in preventing inconsistencies from being introduced. my paper also discusses our plans for integrating medium granularity transactions --- that is, classical transactions with atomicity and serializability as the correctness conditions --- into meld. serializable transactions have been implemented in a separate version of meld, not yet integrated into the main line. this implementation uses an algorithm for distributed optimistic concurrency control with multiple versions [2], but two phase locking, timestamps, etc. seem equally feasible for coops depending on the anticipated applications. we are currently working on large grain transaction-like facilities for coops. our particular concern is an important class of coops applications --- open-ended activities [5] such as cad/cam, vlsi design, medical informatics, office automation and software development --- characterized by uncertain duration, uncertain developments and interaction with other concurrent activities. classical transactions are not suitable for open-ended activities for the following reasons: fault tolerance implies atomicity, so if there is some failure, the transaction is rolled back and retried.
this is entirely inappropriate when the 'transaction' consists of fixing a bug by browsing and editing a number of source files, compiling and linking, executing test cases and generating traces, etc., which may easily take several hours. concurrency control requires that users are isolated, and the transactions appear to have been executed in some serial order. but it is not acceptable for a programmer to be locked out from editing a source file just because some other programmer had previously read the file but has not yet finished his changes to other files. consistent commitment implies simultaneously making all of a transaction's updates publicly available only when the first transaction completes and commits. but programmers must release certain source files so they can be viewed and/or compiled and linked by other programmers cooperating on the same subsystem, while continuing in-progress work on other files. an abort deletes all changes made during the transaction, so they are never available to other transactions. a programmer may realize that his original ideas about the cause of a program error were incorrect and decide to start over, but he may want to keep his incorrectly modified versions available for reference. thus the traditional transaction model is not suitable for open-ended activities such as software development, at least where the 'transactions' are at the granularity of bug fixes, completion of a milestone, or release of a product. an extended transaction processing model, with supporting programming facilities, is necessary for coops. we have investigated an extended transaction model, commit-serializability, as a candidate for integration into meld and other coops. the term commit-serializability denotes a model where all committed transactions are in fact serializable in the standard sense, but these transactions may not correspond in a simple way to those transactions that were initiated. in particular, the initiated transactions may be divided during operation and parts committed separately in such ways that these transactions are not serializable. consider two in-progress transactions t1 and t2. t1 is divided under program or user control into t3 and t4, and shortly thereafter t3 commits while t4 continues. t2 may view the committed updates of t3, some of which were made by t1 before the division, and then itself commits. t4 may then view the committed updates of t2 before it commits. t2, t3 and t4 are serializable, but t1 and t2 are not. the originally initiated transaction t1 in effect disappears, and in particular is neither committed nor aborted. commit-serializability is supported by two new transaction processing operations, split-transaction and join-transaction [5], in addition to the standard begin-transaction, commit-transaction and abort-transaction operations. the split-transaction operation supports the kind of division described above; the inverse join-transaction operation merges a completed transaction into an in-progress transaction to commit their results together. the two-way versions of the split and join operations take the following arguments. either statement may be executed as the last statement of any method invoked as part of a transaction t.
split-transaction (a: (areadset, awriteset, amessage), b: (breadset, bwriteset, bmessage)) join-transaction (s: tid) when the split-transaction operation is invoked during a transaction t, there is a treadset consisting of all objects read by t but not updated and a twriteset consisting of all objects updated by t. in coops, treadset is the set of instance variables read by t, including the instance variables of objects (transitively) sent a message where the responding method accessed but did not modify these instance variables of that object. twriteset is the set of instance variables assigned by t, including the instance variables of objects (transitively) sent a message where the responding method modified these instance variables of that object. in both cases, the object whose method initiated t acts as coordinator of the cohort. treadset is divided, not necessarily disjointly, into areadset and breadset. twriteset is divided disjointly into awriteset and bwriteset; awriteset normally must also be disjoint from breadset and bwriteset from areadset. amessage and bmessage are the selectors of messages sent to the same object ($self in meld) to continue the two transactions. in the special case where a is immediately committed, amessage is the operation commit-transaction, and objects in awriteset may also appear in either breadset or bwriteset. when the join-transaction operation is invoked during a transaction t, target transaction s must be ongoing --- and not in conflict with t. treadset and twriteset are added to sreadset and swriteset, respectively, and s may continue or commit. one difficulty of join-transaction that did not arise for split-transaction is the necessity of naming another transaction s, other than the current one t. this can be done by treating each transaction as a special object, and references to such special objects can be stored in instance variables (tid in the figure), sent as arguments in messages and even sent messages --- although these objects respond only to transaction operations. (this is orthogonal to treating different classes of transaction processing characteristics as mixin superclasses, as in avalon [3].) more work on naming issues is needed. in the cases of both split-transaction and join-transaction, the originally initiated transaction t is divided or merged, respectively, so the net effect is as if it had never existed. the tables, logs, etc. used in the transaction manager implementation may be updated to expunge knowledge of t and replace it with knowledge of a and b or s, respectively, or this may be optimized to use t's identification for one of a or b, or for s. g. e. kaiser linux system administration: adding a new disk to a linux system aeleen frisch a research environment for incremental data flow analysis this article describes an environment intended for the study of incremental data flow analysis and the discovery of the types of programs for which this technique is effective. high-level data flow analysis using attribute grammars allows calculation of both data flow information and measurement information as computations on attribute sets. our results show that in the worst case our method of incremental data flow analysis uses a large amount of cpu time (and memory), but for programs with the same structure as a "typical program", performance is quite good. zhiqiang tan karen a.
lemone hcc - a portable ansi c compiler (with a code generator for the powerpcs) mohd hanafiah abdullah a style analysis of c programs a large quantity of well-respected software is tested against a series of metrics designed to measure program lucidity, with intriguing results. although slanted toward software written in the c language, the measures are adaptable for analyzing most high-level languages. r. e. berry b. a. e. meekings considerations in choosing a concurrent/distributed object-oriented programming language michael l. nelson why software fails this note summarizes conclusions from a three year study about why released software fails. our method was to obtain mature-beta or retail versions of real software applications and stress test them until they fail. from an analysis of the causal faults, we have synthesized four reasons why software fails. this note presents these four classes of failures and discusses the challenges they present to developers and testers. the implications for software testers are emphasized. james a. whittaker alan jorgensen sniff (abstract): a pragmatic approach to a c++ programming environment walter r. bischofberger a functional approach to type constraints of generic definitions myung ho kim re-creation and evolution in a programming environment john r. nestor fortran 90 is almost object-oriented: some thoughts about fortran and oop lawrie schonfelder serializing web application requests mr. wilson tells us how he improved web response time and kept users happy using the generic network queueing system (gnqs) colin wilson computer user manuals in print: do they have a future? what sort of a role will the printed page play in the computer user manuals of the future? i believe that print does have a future in this area, but not perhaps the future we might have foreseen five years ago. at that time nothing was less controversial than the viability of print as a medium of documentation. that viability is in question now, and to show how the questioning developed i propose to examine its beginnings in the recent past. i shall then go into some detail on how the controversy about print is being maintained at present. finally, i shall explain how i feel the controversy is likely to be resolved, by making four predictions about the future of computer user manuals, and those whose job it is to produce them. john b. mckee sai-sddl ...a tool for automated documentation sai-sddl (science applications, inc. software design and documentation language) is a licensed computer program which aids in the documentation as well as the design of computer software. the three components of sai-sddl are: the processor, which is a pascal program; the language, which is simple, unrestrictive, and keyword-oriented; and the methodologies, which are numerous and well established. this workshop focuses on the methodologies and the results obtained by applying them. donald a. heimburger marcia a. metcalfe mtool: a method for detecting memory bottlenecks aaron goldberg john hennessy adaptable object migration: concept and implementation migration is one example of the insufficiently used potentials of distributed systems. although migration can enhance the efficiency and the reliability of distributed systems, it is still rarely used. two limitations contained in nearly all existing migration implementations prevent widespread usage: migration is restricted to processes and the migration mechanism, i.e.
the way state is transferred, is not adaptable to changing requirements. in our approach, migration is an operation provided by every object of any type. triggered by higher level migration policies, the object migrates itself using an object-specific migration mechanism. changing requirements are handled by higher level migration policies that adapt migration by exchanging the object's mechanisms. adaptable migration was implemented within the birlix operating system. different migration mechanisms are accomplished by different meta objects, which can be attached to other objects. if an object has to be migrated, the meta object does the migration. changing environmental requirements are handled by exchanging the meta object. as a result, each object has its own migration mechanism. the approach has been examined by implementing a couple of well-known migration mechanisms via meta objects. this paper describes the meta object implementation of the charlotte migration mechanism. wolfgang lux noncorrecting syntax error recovery a parser must be able to continue parsing after encountering a syntactic error to check the remainder of the input. to achieve this, it is not necessary to perform corrections on either the input text or the stack contents. a formal framework is provided in which noncorrecting syntax error recovery concepts are defined and investigated. the simplicity of these concepts allows the statement of provable properties, such as the absence of spurious error messages or the avoidance of skipping input text. these properties are due to the fact that no assumptions about the nature of the errors need be made to continue parsing. helmut richter graph-based code selection techniques for embedded processors code selection is an important task in code generation for programmable processors, where the goal is to find an efficient mapping of machine-independent intermediate code to processor-specific machine instructions. traditional approaches to code selection are based on tree parsing which enables fast and optimal code selection for intermediate code given as a set of data-flow trees. while this approach is generally useful in compilers for general-purpose processors, it may lead to poor code quality in the case of embedded processors. the reason is that the special architectural features of embedded processors require performing code selection on data-flow graphs, which are a more general representation of intermediate code. in this paper, we present data-flow graph-based code selection techniques for two architectural families of embedded processors: media processors with support for simd instructions and fixed-point dsps with irregular data paths. both techniques exploit the fact that, in the area of embedded systems, high code quality is a much more important goal than high compilation speed. we demonstrate that certain architectural features can only be utilized by graph-based code selection, while in other cases this approach leads to a significant increase in code quality as compared to tree-based code selection. a "card-marking" scheme for controlling intergenerational references in generation-based garbage collection on stock hardware p. r. wilson t. g. moher designing component-based frameworks using patterns in the uml grant larsen the use of lexical affinities in requirements extraction y. s. maarek d. m. berry comments on "the cost of selective recompilation and environment processing" bevin r. brett notes on termination of occam processes d.
talla code selection through object code optimization jack w. davidson christopher w. fraser are we still having fun?: a minority report from hopl-ii scott guthery programming marjorie richardson linux programming bible ben crowder specware jim mcdonald semantics for the architecture interchange language, acme david wile asynchronous transfer of control in ada 9x the taft proposal is described in detail. it involves introducing a new "and" clause into the select statement thereby providing a means of programming asynchronous transfer of control without the use of the abort facility. an evaluation of the proposal considers some of its limitations, the major one of which is the possibility of deadlock if a group of tasks use the facility to affect each other. ways of removing this difficulty are considered. the model is then compared with the alternative of introducing asynchronous exceptions. finally a unified model is presented that combines the advantages of the taft proposal and asynchronous exceptions. a. burns a. j. wellings g. l. davies analysis of transaction management performance there is currently much interest in incorporating transactions into both operating systems and general-purpose programming languages. this paper provides a detailed examination of the design and performance of the transaction manager of the camelot system. camelot is a transaction facility that provides a rich model of transactions intended to support a wide variety of general-purpose applications. the transaction manager's principal function is to execute the protocols that ensure atomicity. the conclusions of this study are: a simple optimization to two-phase commit reduces logging activity of distributed transactions; non-blocking commit is practical for some applications; multithreaded design improves throughput provided that log batching is used; multi-casting reduces the variance of distributed commit protocols in a lan environment; and the performance of transaction mechanisms such as camelot depends heavily upon kernel performance. d. duchamp icmake part 3 frank brokken k. kubat system administration marjorie richardson leapfrogging: a portable technique for implementing efficient futures a future is a language construct that allows programmers to expose parallelism in applicative languages such as multilisp [5] with minimal effort. in this paper we describe a technique for implementing futures, which we call leapfrogging, that reduces blocking due to load imbalance. the utility of leapfrogging is enhanced by the fact that it is completely platform-independent, is free from deadlock, and places a bound on stack sizes that is at most a small constant times the maximum stack size encountered during a sequential execution of the same computation. we demonstrate the performance of leapfrogging using a prototype implementation written in c++. david b. wagner bradley g. calder the measured performance of personal computer operating systems j. b. chen y. endo k. chan d. mazieres a. dias m. seltzer m. d. smith a methodology for software cost estimation ali arifoglu using linux and dos together taking the pain out of installing linux on a machine for the first time marty leisner research directions in software reuse: where to go from here? software reuse is no longer in its infancy. we are able to look back at more than 15 years of research and should use the opportunity of such a symposium to critically evaluate the past research in order to identify promising future research areas in software reuse.
in this paper, we give a broader view of reuse and some of the so far less-considered areas, which we believe may help software reuse get off the ground. we mention our ongoing research in software reuse, discussing reuse experiments in the areas of long-term software evolution and component programming. furthermore, we indicate the critical importance of interactions among the reuse and related communities within software engineering, such as the object-oriented and the software maintenance communities. harald gall mehdi jazayeri rene klosch execution analysis of dsm applications: a distributed and scalable approach lionel brunie laurent lefèvre olivier reymann modeling distributed file systems this paper describes different methods and techniques used to model, analyze, evaluate and implement distributed file systems. distributed file systems are categorized by the distributed system hardware and software architecture in which they are implemented, as well as by the file systems' functions. in addition, distributed file system performance depends on the load executed in the system. modeling and analysis of distributed file systems requires new methods to approximate the complexity of the system and to provide a useful solution. the complexity of the distributed file system is reflected in the possible placement of the files, file replication, and migration of files and processes. the synchronization mechanisms are needed to control file access. file sharing involves load sharing in a distributed environment. anna hac generalized and stationary scrolling we present a generalized definition of scrolling that unifies a wide range of existing interaction techniques, from conventional scrolling through pan and zoom systems and fish-eye views. furthermore it suggests a useful class of new scrolling techniques in which objects do not move across the display. these "stationary scrolling" techniques do not exhibit either of two problems that plague spatial scrolling systems: discontinuity in salience and the undermining of the user's spatial memory. randall b. smith antero taivalsaari reusable software components trudy levine evaluating software engineering methods and tools - part 4: the influence of human factors chris sadler barbara ann kitchenham an ada code generator for vax 11/780 with unix this paper describes the final phase of an ada compiler which produces code for the vax 11/780 running the unix operating system. problems encountered in the implementation of subprogram calls, parameter passing, function return values, and exception handling are discussed and their solutions outlined. an underlying requirement for the code generator has been speed of implementation consistent with being a test bed for an ada implementation. to accomplish this, a common model for the target environment has been assumed. the assumptions include: the vax is a stack machine, a single address space is used, only the general case is implemented (no optimization of special cases), the hardware does as much work as possible, run time routines for lengthy code sequences are acceptable, and the conventions given in the vax architecture, hardware, and software manuals are used. the code generator has been running on a pdp-10 with tops-10, producing a vax assembly language source program as output. it has been available to local users since the beginning of 1980.
mark sherman andy hisgen david alex lamb jonathan rosenberg design and verification of secure systems this paper reviews some of the difficulties that arise in the verification of kernelized secure systems and suggests new techniques for their resolution. it is proposed that secure systems should be conceived as distributed systems in which security is achieved partly through the physical separation of its individual components and partly through the mediation of trusted functions performed within some of those components. the purpose of a security kernel is simply to allow such a 'distributed' system to actually run within a single processor; policy enforcement is not the concern of a security kernel. this approach decouples verification of components which perform trusted functions from verification of the security kernel. this latter task may be accomplished by a new verification technique called 'proof of separability' which explicitly addresses the security relevant aspects of interrupt handling and other issues ignored by present methods. j. m. rushby termworks (abstract): a flexible framework for implementing (nearly) arbitrary kinds of terms andreas tonne how to verify concurrent ada programs: the application of model checking ada 95 is an expressive concurrent programming language with which it is possible to build complex multi-tasking applications. much of the complexity of these applications stem from the interactions between the tasks. this paper argues that model checking tools are now mature enough that they can be used by engineers to verify the logical correctness of their tasking algorithms. the paper illustrates the approach by showing the correctness of an ada implementation of the atomic action protocol. a. burns a. j. wellings the ada test and verification systems (atvs) c. hobin book review: linux: installation, configuration, use michael scott shappe a user interface toolkit based on graphical objects and constraints one of the most difficult aspects of creating graphical, direct manipulation user interfaces is managing the relationships between the graphical objects on the screen and the application data structures that they represent. coral (constraint-based object-oriented relations and language) is a new user interface toolkit under development that uses efficiently- implemented constraints to support these relationships. using coral, user interface designers can construct interaction techniques such as menus and scroll bars. more importantly, coral makes it easy to construct direct- manipulation user interfaces specialized to particular applications. unlike previous constraint- based toolkits, coral supports defining constraints in the abstract, and then applying them to different object instances. in addition, it provides iteration constructs where lists of items (such as those used in menus) can be constrained as a group. coral has two interfaces: a declarative interface that provides a convenient way to specify the desired constraints, and a procedural interface that will allow a graphical user interface management system (uims) to automatically create coral calls. pedro szekely brad myers software error analysis: a real case study involving real faults and mutations the paper reports on a first experimental comparison of software errors generated by real faults and by 1st-order mutations. the experiments were conducted on a program developed by a student from the industrial specification of a critical software from the civil nuclear field. 
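to make the fault-versus-mutation comparison in the error-analysis entry here concrete, a small hypothetical sketch (not the study's program or tool): a first-order mutant differs from the original by one operator, and a value-based oracle shows on which inputs the created error is masked and on which it propagates to a failure.

```python
# tiny illustration of a first-order mutation: one operator is changed and a
# value-based oracle checks on which inputs the error propagates to failure.
# the program and the fault are made up for illustration.

def original(values):
    total = 0
    for v in values:
        total += v          # sum the inputs
    return total

def mutant(values):
    total = 0
    for v in values:
        total -= v          # "+=" mutated to "-="
    return total

for case in ([0, 0, 0], [1, 2, 3]):
    outcome = "masked" if original(case) == mutant(case) else "propagates to failure"
    print(case, outcome)
# [0, 0, 0] masks the error (both outputs are 0); [1, 2, 3] kills the mutant.
```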
emphasis was put on the analysis of errors produced upon activation of 12 real faults by focusing on the mechanisms of error creation, masking, and propagation up to failure occurrence, and on the comparison of these errors with those created by 24 mutations. the results involve a total of 3730 errors recorded from program execution traces: 1458 errors were produced by the real faults, and the 2272 others by the mutations. they are in favor of a suitable consistency between errors generated by mutations and by real faults: 85% of the 2272 errors due to the mutations were also produced by the real faults. moreover, it was observed that although the studied mutations were simple faults, they can create erroneous behaviors as complex as those identified for the real faults. this lends support to the representativeness of errors due to mutations. murial daran pascale thevenod-fosse recycling continuations jonathan sobel daniel p. friedman subsystem design guidelines for extensible general-purpose software paul grefen roel wieringa linearity and the pi-calculus the economy and flexibility of the pi- calculus make it an attractive object of theoretical study and a clean basis for concurrent language design and implementation. however, such generality has a cost: encoding higher-level features like functional computation in pi- calculus throws away potentially useful information. we show how a linear type system can be used to recover important static information about a process's behavior. in particular, we can guarantee that two processes communicating over a linear channel cannot interfere with other communicating processes. after developing standard results such as soundness of typing, we focus on equivalences, adapting the standard notion of barbed bisimulation to the linear setting and showing how reductions on linear channels induce a useful "partial confluence" of process behaviors. for an extended example of the theory, we prove the validity of a tail-call optimization for higher-order functions represented as processes. naoki kobayashi benjamin c. pierce david n. turner model like an egyptian when the ancient egyptians built the pyramids, they built them with a strong sturdy base, and then added layers in gradually smaller increments until the top came to a point. we visit these monuments today and marvel at their architectural ingenuity. imagine---they had advanced to such a stage, that they were able to determine that the pyramids would stand a lot longer if they were built with the point at the top, instead of at the bottom. why hasn't the software development community been able to achieve this same level of advancement? one of the big questions facing many software organizations today is choosing between object-oriented or structured analysis as the method for analyzing requirements. while many make this choice based on being trendy, or building on past experience, etc., they fail to realize that the result of their choice may be the equivalent of building an upside-down pyramid. the key to making the right choice is based on an in-depth understanding of the patterns that exist in the designs of systems in specific application groups. our experience applying the practical ada design method (padadm) across a variety of applications has identified that communication systems, database systems, and control systems have distinguishing patterns in how they aggregate ada design objects. 
when analyzing the requirements for these systems, knowledge of these patterns is fundamental for choosing an analysis method that aggregates its information in ways that map to those of the final design. all systems will contain a mixture of data (object), process (function), and state (behavior) characteristics, but certain analysis methods choose one of these characteristics as its base, and then build the rest on top. if your system characteristics aren't distributed with the same percentages as the method expects, you're building an upside-down pyramid. so, lets stop choosing the trendiest method, and look for the one that lets us model like the egyptians! michael i. frankel how and why to encapsulate class trees dirk riehle gaining general acceptance for uimss b a myers pulse: a methodology to develop software product lines joachim bayer oliver flege peter knauber roland laqua dirk muthig klaus schmid tanya widen jean- marc debaud modeling industrial embedded systems with uml the main purpose of this paper is to present how the unified modeling language (uml) can be used for modeling industrial embedded systems. by using a car radios production line as a running example, the paper demonstrates the modeling process that can be followed during the analysis phase of complex control applications. in order to guarantee the continuity mapping of the models, the authors propose some guidelines to transform the use case diagrams into a single object diagram, which is one of the main diagrams for the next development phases. joão m. fernandes ricardo j. machado henrique d. santos graphic formalisms should integrate communication, control, and data flow george w. cherry the jini architecture: dynamic services in a flexible network ken arnold implementing ada 9x features using posix threads: design issues e. w. giering frank mueller t. p. baker authentication in distributed systems: theory and practice butler lampson martín abadi michael burrows edward wobber team: a support environment for testing, evaluation, and analysis current research indicates that software reliability needs to be achieved through the careful integration of a number of diverse testing and analysis techniques. to address this need, the team environment has been designed to support the integration of and experimentation with an ever growing number of software testing and analysis tools. to achieve this flexibility, we exploit three design principles: component technology so that common underlying functionality is recognized; generic realizations so that these common functions can be instantiated as diversely as possible; and language independence so that tools can work on multiple languages, even allowing some tools to be applicable to different phases of the software lifecycle. the result is an environment that contains building blocks for easily constructing and experimenting with new testing and analysis techniques. although the first prototype has just recently been implemented, we feel it demonstrates how modularity, genericity, and language independence further extensibility and integration. lori a. clarke debra j. richardson steven j. zeil a library of high level control operators christian queinnec automatic construction of incremental lr(1) - parsers dashing yeh uwe kastens stimulus-response machines: a new visual formalism for describing classes and objects george w. cherry building tcl-tk guis for hrt-hood systems this work explores tcl-tk 8.0 as a building tool for script-based guis in ada95 real-time systems. 
tcl-tk 8.0 is a library that makes graphic programming easier, but it suffers from being non-thread-safe. an application architecture is proposed, the deferred server, which provides transparent use of tcl-tk to multithreaded ada95 applications via tash, a thick binding that allows ada95 single-threaded code to use tcl-tk. we find that only a minimal extension to tcl-tk 8.0 and tash is required to support it, and a successful prototype has been implemented based on these ideas. likewise, the early integration of tcl-tk graphic user interfaces in hrt-hood designs is examined; unfortunately, in this respect, we conclude that this is not feasible. however, an hrt-hood-conformant distributed configuration is outlined in which the user interface becomes a multithreaded remote service based on the deferred server architecture. juan carlos diaz martin isidro irala veloso jose manuel rodriguez garcia using ibcs2 under linux eric youngdale optimal mutex policy in ada 95 priority inversion is any situation where low priority tasks are served before higher priority tasks. it is recognized as a serious problem for real-time systems. in this paper, we present some of the important features of the real-time annex of ada 95. we also implement the _optimal mutex policy (omp)_ in ada 95 to better illustrate ada's new usefulness for real-time programming. a detailed discussion of this protocol and other related issues is presented. jim abu-ras take command init: init is the driving force that keeps our linux box alive, and it is the one that can put it to death. this article is meant to summarize why init is so powerful and how you can instruct it to behave differently from its default behavior.
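a rough illustration of ceiling-based locking of the kind discussed in the optimal-mutex-policy entry above (a generic sketch, not the paper's omp protocol and not ada's runtime; the class and method names are invented): a lock records a ceiling priority, rejects tasks whose priority exceeds it, and runs the holder at the ceiling while the lock is held, which bounds priority inversion to a single critical section.

```python
# generic sketch of ceiling locking (invented names, illustrative only).

class CeilingLock:
    def __init__(self, ceiling):
        self.ceiling = ceiling
        self.holder = None

    def seize(self, task):
        if task.priority > self.ceiling:
            raise RuntimeError("ceiling violation: task priority exceeds lock ceiling")
        if self.holder is not None:
            raise RuntimeError("lock already held (a real runtime would queue the task)")
        self.holder = task
        task.active_priority = self.ceiling      # boosted while inside

    def release(self, task):
        assert self.holder is task
        self.holder = None
        task.active_priority = task.priority     # back to base priority

class Task:
    def __init__(self, name, priority):
        self.name, self.priority, self.active_priority = name, priority, priority

lock = CeilingLock(ceiling=10)
low = Task("low", priority=3)
lock.seize(low)
print(low.active_priority)   # 10 while holding the lock
lock.release(low)
print(low.active_priority)   # 3 again
```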

λs≤ calculus, a model of the constructors and kinds of tilt's intermediate language. inspired by coquand's result for type theory, we prove decidability of constructor equivalence for λs≤ by exhibiting a novel --- though slightly inefficient --- type-directed comparison algorithm. the correctness of this algorithm is proved using an interesting variant of kripke-style logical relations: unary relations are indexed by a single possible world (in our case, a typing context), but binary relations are indexed by two worlds. using this result we can then show the correctness of a natural, practical algorithm used by the tilt compiler. christopher a. stone robert harper does case make the customer happier robert mclaughlin the common pse interface set (cais) object management system (oms) r. m. thall the implementation of the cilk-5 multithreaded language matteo frigo charles e. leiserson keith h. randall eliminating redundancies in sum-of-product array computations array programming languages such as fortran 90, high performance fortran and zpl are well-suited to scientific computing because they free the scientist from the responsibility of managing burdensome low-level details that complicate programming in languages like c and fortran 77. however, these burdensome details are critical to performance, thus necessitating aggressive compilation techniques for their optimization. in this paper, we present a new compiler optimization called array subexpression elimination (ase) that lets a programmer take advantage of the expressibility afforded by array languages and achieve enviable portability and performance. we design a set of micro-benchmarks that model an important class of computations known as stencils and we report on our implementation of this optimization in the context of this micro-benchmark suite. our results include a 125% improvement on one of these benchmarks and a 50% average speedup across the suite. we also show a 32% improvement on the zpl port of the nas mg parallel benchmark and a 29% speedup over the hand-optimized fortran version. further, the compilation time is only negligibly affected. steven j. deitz bradford l. chamberlain lawrence snyder avcs: the apl version control system this paper describes avcs, which is an apl-oriented version control system devised as a tool to track the history of software projects and to control concurrent access to project components. the basics of version control systems are explained, and specific aspects of applying a version control system methodology to project development in apl environments are considered. particular attention is given to features which differentiate the approach accepted in avcs from that of available version control and project management systems. specification of avcs's data maintenance, programming and user interface is presented to the extent required in order to explain how the system works. in conclusion, the possible application of avcs to solving problems of porting apl projects across different environments is outlined. nikolai i. puntikov maxim a. volodin alexei a. kolesnikov laboratory experiment with the 3rolesplaying method the paper addresses the problem of conducting software engineering experiments during laboratory classes. there is a need for experimental investigations. if experiments are performed in the lab together with training, they must also have high educational value. in the paper, the combination of education and research is discussed.
then summary of the 3rolesplaying method which satisfies goals of both of them is presented. results of an experiment that took place in the technical university of gdansk, poland in the summer semester of 1997/98 are described, and verification of the results value is made. anna bobkowska experiences in developing an ada cross compiler hansheng chen yuneng chen deiqui shen lin xu hanming jiang ren shi actra - an industrial strength concurrent object-oriented programming system jeff mcaffer john duimovich on performance and space usage improvements for parallelized compiled apl code loop combination has been a traditional optimization technique employed in apl compilers, but may introduce dependencies into the combined loop. we propose an analysis method by which the compiler can keep track of the change of the parallelism when combining high-level primitives. the analysis is necessary when the compiler needs to decide a trade-off between more parallelism and a further combination. we also show how the space usage, as well as the performance, improves by using system calls with the aid of garbage collection to implement a dynamic memory allocation. a modification of the memory management scheme can also increase available parallelism. our experimental results indicate that the performance and the space usage improve appreciably with the above enhancements. dz-ching ju wai-mee ching chuan-lin wu minimizing cost of local variables access for dsp-processors erik eckstein andreas krall experience report on software reuse project: its structure, activities, and statistical results sadahiro isoda using simulation to build inspection efficiency benchmarks for development projects lionel briand khaled el emam oliver laitenberger thomas fussbroich linux survey results phil hughes assertive comments in apl programming current practice in apl programming (and in most other languages) is to use comments to describe what is intended by subsequent lines within a function. an alternative approach to commenting is described, wherein assertions (a concept borrowed from program proof techniques) are inserted into apl code, specifying the present state of relevant variables, etc. it is shown that this approach is especially appropriate for apl, as most apl design is accomplished by working from a desired end result, backwards. david m. weintraub program slicing program slicing is a method used by experienced computer programmers for abstracting from programs. starting from a subset of a program's behavior, slicing reduces that program to a minimal form which still produces that behavior. the reduced program, called a "slice", is an independent program guaranteed to faithfully represent the original program within the domain of the specified subset of behavior. finding a slice is in general unsolvable. a dataflow algorithm is presented for approximating slices when the behavior subset is specified as the values of a set of variables at a statement. experimental evidence is presented that these slices are used by programmers during debugging. experience with two automatic slicing tools is summarized. new measures of program complexity are suggested based on the organization of a program's slices. mark weiser a practical method for code generation based on exhaustive search an original method for code generation has been developed in conjunction with the construction of a compiler for the c programming language on the dec-10 computer. 
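to make the slicing idea in the program-slicing entry above concrete, a deliberately simplified sketch: for straight-line assignments with no control flow, a backward slice keeps only the statements whose definitions the criterion variable transitively depends on (the statement list below is made up for illustration).

```python
# minimal backward slice over straight-line assignments (no control flow).
# each statement is (target, set_of_variables_used); the slice for a
# criterion variable keeps the statements it transitively depends on.

program = [
    ("a", {"x"}),        # a = x + 1
    ("b", {"y"}),        # b = y * 2
    ("c", {"a"}),        # c = a - 3
    ("d", {"b", "c"}),   # d = b + c
]

def backward_slice(program, criterion):
    needed, keep = {criterion}, []
    for index in range(len(program) - 1, -1, -1):   # walk backwards
        target, uses = program[index]
        if target in needed:
            keep.append(index)
            needed.discard(target)
            needed |= uses                           # now need its inputs
    return sorted(keep)

print(backward_slice(program, "c"))   # [0, 2]: only the statements defining a and c
```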
the method is comprehensive, determining evaluation order and doing register allocation and instruction selection simultaneously. it uses exhaustive search rather than heuristics, and is table-driven, with most machine-specific information isolated in the tables. testing and evaluation have shown that the method is effective, that the search process is not too time consuming, and that the compiler is capable of producing code as good as that of other optimizing compilers. david w. krumme david h. ackley hybrid global/local search strategies for dynamic voltage scaling in embedded multiprocessors in this paper, we explore a hybrid global/local search optimization framework for dynamic voltage scaling in embedded multiprocessor systems. the problem is to find, for a multiprocessor system in which the processors are capable of dynamically varying their core voltages, the optimum voltage levels for all the tasks in order to minimize the average power consumption under a given performance constraint. an effective local search approach for static voltage scaling based on the concept of a _period graph_ has been demonstrated in [1]. to make use of it in an optimization problem, the period graph must be integrated into a global search algorithm. _simulated heating_, a general optimization framework developed in [19], is an efficient method for precisely this purpose of integrating local search into global search algorithms. however, little is known about the management of computational (compile-time) resources between global search and local search in hybrid algorithms, such as those coordinated by simulated heating. in this paper, we explore various hybrid search management strategies for power optimization under the framework of simulated heating. we demonstrate that careful search management leads to significant power consumption improvement over add-hoc global search / local search integration, and explore alternative approaches to performing hybrid search management for dynamic voltage scaling. neal k. bambha shuvra s. bhattacharyya jurgen teich eckart zitzler best of technical support corporate linux journal staff a methodology for benchmarking java grande applications j. m. bull l. a. smith m. d. westhead d. s. henty r. a. davey testing nondeterministic message-passing programs with nope dieter kranzlmuller object-oriented megaprogramming (panel) peter wegner william scherlis james purtilo david luckham ralph johnson the perils of top-down design richard a. zahniser software quality assessment technology necessities for software quality measurement and assurance technology have been increased. b. boehm and mccall proposed software evaluation criteria. based on these studies, g. murine developed software quality metrics (sqm). sqm was applied to several projects in nec experimentally. an outline of the experiment will be presented and the results discussed. software quality measurement and assurance technology (sqmat) was developed in nec as a total technology for applying to various types and size of software projects, throughout the software life cycle. toshihiko sunazuka motoei azuma noriko yamagishi call path profiling robert j. hall toward a theory of maximally concurrent programs (shortened version) typically, program design involves constructing a program p that implements a given specification s; that is, the set p of executions of p is a subset of the set s of executions satisfying s. in many cases, we seek a program p that not only implements s, but for which p = s. 
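the voltage-scaling entry above integrates a local search into a global one; as a generic illustration of that hybrid structure (a sketch under simplifying assumptions, not the simulated-heating framework or the period-graph local search, and the objective below is a stand-in rather than a power model), a global loop of random restarts can invoke a budgeted hill-climbing local search on each candidate:

```python
import random

# generic hybrid global/local search sketch (illustrative only).

def objective(x):
    return (x - 3.7) ** 2 + 2              # stand-in cost to minimise

def local_search(x, steps, step_size=0.1):
    for _ in range(steps):                 # simple budgeted hill climbing
        candidate = x + random.uniform(-step_size, step_size)
        if objective(candidate) < objective(x):
            x = candidate
    return x

def hybrid_search(global_restarts=20, local_steps=50):
    best = None
    for _ in range(global_restarts):       # global exploration by restarts
        start = random.uniform(-10.0, 10.0)
        refined = local_search(start, local_steps)
        if best is None or objective(refined) < objective(best):
            best = refined
    return best

random.seed(0)
print(round(hybrid_search(), 2))           # close to 3.7, the stand-in optimum
```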
then, every execution satisfying the specification is a possible execution of the program; we then call p maximal for the specification s. we argue that maximality is an important criterion in the context of designing concurrent programs because it disallows implementations that do not exhibit enough concurrency. in addition, a maximal solution can serve as a basis for deriving a variety of implementations, each appropriate for execution on a specific computing platform. this paper also describes a method for proving the maximality of a program with respect to a given specification. even though we prove facts about possible executions of programs, there is no need to appeal to branching time logics; we employ a fragment of linear temporal logic for our proofs. the method results in concise proofs of maximality for several non-trivial examples. the method may also serve as a guide in constructing maximal programs. rajeev joshi jayadev misra issues in the design and specification of class libraries gregor kiczales john lamping mi - an object oriented environment for integration of scientific applications scientific and engineering software is often produced by integration of existing software components of the size of a whole program. however, on the average, scientific software was not developed for reusability and is quite distant from the user model of the application problem; integration and retrofitting is as such a costly process. an architecture, methodology and several c++ class libraries for supporting integration are introduced. the architecture separates a software component layer, and an integration layer. the latter in based on the concept of software model, that is an abstraction of components and a representation of the system differing from its actual physical structure. the methodology is based on matching needs with existing models. the c++ class libraries are explained in some detail. the application to two major systems is analysed and the ideas behind seven other systems are briefly outlined. some lessons learned are summarised in the conclusions. andrea spinelli paolo salvaneschi mauro cadei marino rocca analyzing the role of aspects in software design j. andres diaz pace marcelo r. campo semantic correctness of structural editing tadeusz gruzlewski zbigniew weiss estimating software attributes: some unconventional points of view tom gilb data abstraction in glisp glisp is a high-level language that is based on lisp and is compiled into lisp. it provides a versatile abstract- data-type facility with hierarchical inheritance of properties and object- centered programming. the object code produced by glisp is optimized, so that it is about as efficient as handwritten lisp. an integrated programming environment is provided, including editors for programs and data-type descriptions, interpretive programming features, and a display-based inspector/editor for data. glisp is fully implemented. gordon s. novak parametrized programming in lileanna will tracz where is the evidence against arrays and pointers? markku sakkinen failure and success factors in reuse programs: a synthesis of industrial experiences michel ezran maurizio morisio colin tully letters to the editor corporate linux journal staff embedding security policies into a distributed computing environment this paper discusses the implementation of security policies in multipolicy systems. 
multipolicy systems are systems supporting a multitude of security policies, each policy governing the applications within its own and precisely defined security domain. the paper argues that within multipolicy systems, traditional approaches for implementing security policies such as security kernels are both too weak and too strong. in order to support this thesis, we will discuss architectural issues of the implementation of policy separation, policy persistency, total mediation and putting off-the-shelf applications under the control of security policies. whenever our statements are illustrated by examples, these examples are taken from a case study we implemented for the osf distributed computing environment. udo halfmann winfried e. kuhnhauser clanger: an interpreted systems programming language clanger is a powerful, yet simple, command language for the nemesis operating system. it uses runtime type information to interface directly with operating system components. clanger is a combination of command-line interpreter, scripting language, debugger and prototyping tool. this paper describes why such a language is possible, how it is being implemented, and outlines the language as it currently stands. timothy roscoe protecting internal state variables from subclasses the use of child packages to encapsulate derived tagged types and their operations should protect the state variables of the parent type that are to be manipulated by the parent operations alone. this protection can be achieved through the declaration of a data component of a limited record type, previously declared with no attributes and an access type discriminant that visibly points to an incomplete definition. the details of the data component (either scalar or composite, depending on application), whose full type declaration is in the parent package body, are accessible only through the discriminant from within the package body. this mechanism provides full information hiding of the details and maximum protection of the contents of those instance variables. susan fife dorchak s. rollins guild incorporating load dependent servers in approximate mean value analysis queueing network performance modelling technology has made tremendous strides in recent years. two of the most important developments in facilitating the modelling of large and complex systems are hierarchical modelling, in which a single load dependent server is used as a surrogate for a subsystem, and approximate mean value analysis, in which reliable approximate solutions of separable models are efficiently obtained. unfortunately, there has been no successful marriage of these two developments; that is, existing algorithms for approximate mean value analysis do not accommodate load dependent servers reliably. this paper presents a successful technique for incorporating load dependent servers in approximate mean value analysis. we consider multiple class models in which the service rate of each load dependent server is a function of the queue length at that server. in other words, load dependent center k delivers "service units" at a total rate of f_k(n_k) when n_k customers are present. we present extensive experimental validation which indicates that our algorithm contributes an average error in response times of less than 1% compared to the (much more expensive) exact solution. in addition to the practical value of our algorithm, several of the techniques that it employs are of independent interest. john zahorjan edward d. lazowska
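the mean-value-analysis entry above extends approximate mva to load-dependent rates f_k(n_k); for orientation, the classical exact mva recursion for the simpler single-class, load-independent case looks as follows (a textbook sketch, not the paper's algorithm; the service demands are made-up numbers).

```python
# classical exact mva recursion for a closed, single-class network of
# load-independent centers (textbook sketch; assumes customers >= 1).

def exact_mva(service_demands, customers):
    """service_demands[k] = visit ratio * mean service time at center k."""
    queue = [0.0] * len(service_demands)           # mean queue lengths at n-1
    for n in range(1, customers + 1):
        # residence time at each center with n customers in the system
        residence = [d * (1 + q) for d, q in zip(service_demands, queue)]
        throughput = n / sum(residence)
        queue = [throughput * r for r in residence]
    return throughput, residence

x, r = exact_mva([0.105, 0.180, 0.039], customers=20)
print(round(x, 3), [round(v, 3) for v in r])       # throughput and residence times at n=20
```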
point of view: lisp as an alternative to java erann gat modlisp this paper discusses the design and implementation of modlisp, a lisp-like language enhanced with the idea of modes. this extension permits, but does not require, the user to declare the types of various variables, and to compile functions with the arguments declared to be of a particular type. it is possible to declare several functions of the same name, with arguments of different type (e.g. plus could be declared for integer arguments, or rational, or real, or even polynomial arguments) and the system will apply the correct function for the types of the arguments. the modlisp language differs from other abstract data type languages such as clu [liskov & zilles, 1974; liskov et al, 1977] and russell [donahue, 1977] in that it allows dynamic construction of new parametrised data types and possesses a unified semantics covering interpreted and compiled code, which can call one another at will. in short, it is lisp-like. james h. davenport richard d. jenks a browsing interface for s-expressions kei yuasa failure correction techniques for large disk arrays the ever increasing need for i/o bandwidth will be met with ever larger arrays of disks. these arrays require redundancy to protect against data loss. this paper examines alternative choices for encodings, or codes, that reliably store information in disk arrays. codes are selected to maximize mean time to data loss or minimize disks containing redundant data, but are all constrained to minimize performance penalties associated with updating information or recovering from catastrophic disk failures. we also present codes that give highly reliable data storage with low redundant data overhead for arrays of 1000 information disks. g. a. gibson l. hellerstein r. m. karp d. a. patterson interactive editing systems: part ii norman meyrowitz andries van dam developing language neutral class libraries with the system object model (som) mike conner nurcan coskun scott danforth larry loucks andy martin larry raper roger sessions plasma-ii: an actor approach to concurrent programming g. lapaime p. salle using godiva for data flow analysis terrence w. pratt developing parallel applications using high-performance simulation eric a. brewer william e. weihl analytical and empirical evaluation of software reuse metrics prem devanbu sakke karstu walcelio melo william thomas observations on software architecture/style analysis will tracz software engineering: a motivation dan ghica the shared regions approach to software cache coherence on multiprocessors the effective management of caches is critical to the performance of applications on shared-memory multiprocessors. in this paper, we discuss a technique for software cache coherence that is based upon the integration of a program-level abstraction for shared data with software cache management. the program-level abstraction, called shared regions, explicitly relates synchronization objects with the data they protect. cache coherence algorithms are presented which use the information provided by shared region primitives, and ensure that shared regions are always cacheable by the processors accessing them. measurements and experiments of the shared region approach on a shared-memory multiprocessor are shown.
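a conceptual sketch of the shared-region abstraction just described (illustrative only, not the paper's implementation; the class and method names are invented): the lock is tied to the data it protects, and the acquire and release points are where a software coherence scheme would refresh or write back the locally cached copy.

```python
# conceptual sketch of a shared region: a lock is associated with the data it
# protects, and acquire/release are where software coherence actions happen.

class SharedRegion:
    def __init__(self, data):
        self._data = dict(data)      # authoritative copy ("memory")
        self._locked = False

    def acquire(self):
        assert not self._locked, "sketch supports one holder at a time"
        self._locked = True
        # coherence action on entry: drop any stale cached copy, read fresh
        return dict(self._data)

    def release(self, local_copy):
        # coherence action on exit: write the local copy back
        self._data.update(local_copy)
        self._locked = False

region = SharedRegion({"hits": 0})
view = region.acquire()
view["hits"] += 1
region.release(view)
print(region._data)                  # {'hits': 1}
```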
comparisons with other software based coherence strategies, including a user- controlled strategy and an operating system-based strategy, show that this approach is able to deliver better performance, with relatively low corresponding overhead and only a small increase in the programming effort. compared to a compiler-based coherence strategy, the shared regions approach still performs better than a compiler that can achieve 90% accuracy in allowing cacheing, as long as the regions are a few hundred bytes or larger, or they are re-used a few times in the cache. harjinder s. sandhu benjamin gamsa songnian zhou mpc++ approach to parallel computing environment mpg++ is an extension of c++, which supports control parallel programming and metalevel programming. the mpg++ metalevel programming facility enables users to extend or modify c++ language features. library designers may supply abstractions in mpg++ source files where those abstractions are defined by the mpg++ metalevel programming facility. the users can use the right abstractions for their demands by just including files in their source files. yutaka ishikawa linuxworld conference & expo marjorie richardson towards a practical specification language recognition of the value of formal specifications in the design and verification of large software systems is becoming more widespread. specification languages themselves, however, are difficult to develop in part because of the inherent conflict between the goals of clarity and formalism required by these languages. this paper discusses the role of specification languages, examples of specifications in two currently implemented languages, affirm and special, and makes some suggestions towards a more practical specification language. anne-marie g. discepolo dynamic currency determination in optimized programs compiler optimizations pose many problems to source-level debugging of an optimized program due to reordering, insertion, and deletion of code. on such problem is to determine whether the value of a varible is current at a breakpoint---that is, whether its actual value is the same as its expected value. we use the notion of dynamic currency of a variable in source-level debugging and propose the use of a minimal unrolled graph to reduce the run- time overhead of dynamic currency determination. we prove that the minimal unrolled graph is an adequate basis for performing bit-vector data flow analyses at a breakpoint. this property is used to perform dynamic currency determination. it is also shown to help in recovery of a dynamically noncurrent variable. d. m. dhamdhere k. v. sankaranarayanan the ada binding for posix s. boyd seta1 working group on building, debugging and testing real-time and distributed systems tucker taft experiences with compiler-directed storage reclamation james hicks some empirical results regarding the efficiency of the entrapment procedure for scheduling jobs on identical machines (abstract only) amar dev amar eugen vasilecu screen management in the "real world" an application-based screen management system was developed in apl for a mainframe-based interactive database. it requires no special hardware or software support and therefore runs on any asynchronous communications device. the system includes features unique to most on-line databases yet common to many screen-oriented pc systems. the foundation of the design is the division of all report information into discrete, logical sections. 
by dividing a report in this way and by providing the user with a powerful set of commands to allow movement both between and within these logical sections, one achieves a very flexible and user-friendly interface. edmund w. stawick a more efficient rmi for java christian nester michael philippsen bernhard haumacher letters corporate linux journal staff transactions and objects (workshop session) bruce martin krithi ramamritham the lifeline of apl: education howard a. peelle three approximation techniques for astral symbolic model checking of infinite state real-time systems astral is a high-level formal specification language for real-time systems. it has structuring mechanisms that allow one to build modularized specifications of complex real-time systems with layering. based upon the astral symbolic model checker reported in [13], three approximation techniques to speed up the model checking process for use in debugging a specification are presented. the techniques are random walk, partial image and dynamic environment generation. ten mutation tests on a railroad crossing benchmark are used to compare the performance of the techniques applied separately and in combination. the test results are presented and analyzed. zhe dang richard a. kemmerer a practical experience with ada* portability james e. walker s. denise skyles pamela gilliam a project-based course in compiler construction the paper describes the experience gained by teaching a project-based course in compiler construction. the course is a blend of theoretical concepts and practical considerations that go into the development of a compiler. a project in compiler writing is an important component of this course. asp, a subset of standard pascal, is used as the source language. the compiler for asp is to be developed in various phases: character manipulator, lexical analyzer, syntax analyzer, semantic analyzer, and code generator. the recursive descent method is used to parse the various syntactic entities. the code generator emits code for a hypothetical machine called aoc (algol object code). a simulator executes this code. harbans l. sathi using interactive multimedia for teaching and learning object oriented software design (poster session) sun-hea choi sandra cairncross experience with enactable software process models marc i. kellner cryptanalysis and protocol failures (abstract) in this lecture, examples will be given of key distribution protocols that distribute keys to unintended recipients, secrecy protocols that publicly reveal the contents of (supposedly) secret communications, and digital signature protocols that make forgery easy --- all based on cryptoalgorithms that are sound so far as is known. in at least one case the cryptographic algorithm that is employed is vernam encryption/decryption with a properly chosen one time key which is well known to be unconditionally secure; in spite of which the protocol fails totally. from the standpoint of applications there is scarcely any topic of greater importance than the cryptanalysis of protocols, since protocols are --- in the vernacular of advertising --- "where the rubber meets the road", i.e. where the principles of cryptography get applied to the practice of insuring the integrity of information. the design and/or analysis of cryptographic algorithms is the domain of the mathematician and the cryptographer and can be carried out in large part without regard to applications.
the design and analysis of protocols, however, is inextricably linked to the system in which the protocol is to be used, and originates with an application: the function of the protocol being to realize the integrity properties required by the application. cryptographic algorithms are simply component elements in the design of protocols --- and as we've indicated, the security of the one does not necessarily imply the security of the other. when expressed in this way, protocol failures do not seem so improbable or surprising as they do when described as defined above. in real life though, almost every example of a true protocol failure is also an example of what can aptly be characterized as "well i'll be damned" discoveries, since this describes the reaction of most people when they first have such a failure pointed out to them. similarly, if a protocol calls for one of the participants --- who may be a "trusted" key generation bureau for example --- to start by constructing a composite number as the product of two primes, chosen so as to make the factorization of their product be computationally infeasible, the suspicion must be that the product is not of this form. it is easy to verify in probability that a number is not a prime, and computationally feasible for numbers of a few hundred decimal digits in size to do so deterministically. it is generally believed by computational number theorists, however, that it is just as difficult to test whether a composite number is the product of more than two factors as it is to factor it. consequently, if a protocol calls for such a composite number to be generated by one of the participants, it is essential in the cryptanalysis to examine whether there are any exploitable consequences of it being the product of more than two prime numbers. for example, it is easy to conceal a covert channel in a signature protocol that calls for the use of a modulus which is the product of two primes, if the modulus is the product of three primes instead. there is a long list --- too long for a single paper and much too long for an abstract --- of examples of protocol failures that derive from a quantity not being what it is supposed to be, or what it is advertised to be. the two examples above should give the reader a feeling for what is involved in protocol analysis. the cryptanalysis of protocols consists of three steps: carefully enumerate all of the properties of all of the quantities involved, both those explicitly stated in the protocol and those implicitly assumed in the setting. take nothing for granted. in other words, go through the list of properties assuming that none of them are as they are claimed or tacitly assumed to be unless a proof technique exists to verify their nature. for each such violation of a property, critically examine the protocol to see if this makes any difference in the outcome of the execution of the protocol. combinations of parameters as well as single parameters must be considered. finally, if the outcome can be influenced as a result of a violation of one or more of the assumed properties, it is essential to then determine whether this can be exploited to advance some meaningful deception. there are several well known protocols in which it is possible to influence the outcome by violating the assumed properties of one or more of the parameters involved, but in which no known meaningful deception can be worked or furthered as a result.
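the lecture abstract above leans on the fact that compositeness can be checked cheaply in probability; the textbook miller-rabin test below (a standard sketch, not part of the lecture) is the kind of check an analyst can apply to a supplied modulus, though it says nothing about the harder question of whether a composite has exactly two prime factors.

```python
import random

# textbook miller-rabin probabilistic compositeness test: a True result means
# the number is definitely composite; many False results mean it is prime with
# overwhelming probability.

def is_composite_witness(a, n):
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    x = pow(a, d, n)
    if x in (1, n - 1):
        return False
    for _ in range(r - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return False
    return True                          # 'a' witnesses that n is composite

def probably_prime(n, rounds=20):
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    return not any(is_composite_witness(random.randrange(2, n - 1), n)
                   for _ in range(rounds))

print(probably_prime(561))               # False: a carmichael number is caught
print(probably_prime(2**61 - 1))         # True: a mersenne prime
```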
protocol failures occur whenever the function of the protocol can be subverted as a consequence of the violations. this lecture will illustrate the application of these rules for the cryptanalysis of protocols with several examples of pure protocol failures discovered using them. gustavus j. simmons type inference with rank 1 polymorphism for type-directed compilation of ml atsushi ohori nobuaki yoshida a specification for ada machine code insertions t j fleck object identity identity is that property of an object which distinguishes each object from all others. identity has been investigated almost independently in general- purpose programming languages and database languages. its importance is growing as these two environments evolve and merge. we describe a continuum between weak and strong support of identity, and argue for the incorporation of the strong notion of identity at the conceptual level in languages for general purpose programming, database systems and their hybrids. we define a data model that can directly describe complex objects, and show that identity can easily be incorporated in it. finally, we compare different implementation schemes for identity and argue that a surrogate-based implementation scheme is needed to support the strong notion of identity. setrag n. khoshafian george p. copeland viewing a dssa in context: problems versus solutions jean-marc debaud objective view point: the abcs of writing c++ classes: operators g. bowden wise correction f. h. d. van batenberg test metrics for software quality this paper discusses bell northern research's experience in utilizing an extended set of test metrics for assuring the quality of software. the theory and use of branch and path class coverage is discussed and the reaction of users in described. this paper also discusses the effect of using co-resident inspection procedures in achieving cost-effective testing for a high degree of test coverage. james ronback how does pascal-xsc compare to other programming languages with respect to the ieee standard? pascal janssens annie cuyt introducing ada 9x john barnes automatic resumption mechanism for program debugging takao shimomura forms based documentation to support structured develomemt and c.a.s.e. implementation one of the most important aspects of structured development is the creation and enforcement of standards. standards define how a given methodology is to be used within your organization. examples of standards might include which forms and other documents must be bundled together steps in the approval process when and by whom certain project steps must be done maximum size of a module of code it is by now common to have some minimal standards for actual program code and program documentation. however, too often only lip service is given to enforcement of organizational standards for system analysis and system design. in my view, documentation and standardization of these development stages is crucial. despite the current vogue, such documentation standards will not be achieved simply by purchasing and adopting a case tool. as a supplement to such a tool, especially at the beginning, a manual forms based method is necessary. through the use of forms, analysts are provided with an initial set of standards which may be used in your projects. there is a learning curve the development team must go through in order to gain experience using standards. 
after one or more projects, a decision is often made to modify the initial minimum set of standards to fit the needs of a particular company. this process is similar to that which you must carry out in tailoring the software life cycle model to your own framework. in practice, a manual system reduces the time it takes to learn a new set of tools. d. bellin a review of ibm's apl2 for os/2 (entry ed.) gregg taylor a methodology for high-level software specification construction ying jing he zhijun wu zhaohui li jiangyun fan weicheng xu zhaohui viewing a programming environment as a single tool programming environments support the creation, modification, execution and debugging of programs. the goal of integrating a programming environment is more than simply building tools that share a common data base and provide a consistent user interface. ideally, the programming environment appears to the programmer as a single tool; there are no firewalls separating the various functions provided by the environment. this paper describes the techniques used to integrate magpie, an interactive programming environment for pascal. display windows, called browsers, provide a consistent approach for interacting with the pascal source code or the execution state of the program. incremental compilation allows the programmer to specify debugging actions in pascal, eliminating the need for a separate debugging language. norman m. delisle david e. menicosy mayer d. schwartz and/or parallel execution of logic programs: exploiting dependent and- parallelism zheng yuhua tu honglei xie li linux in a box for dummies ralph krause operating system support for adaptive distributed real-time systems in dragon slayer h. f. wedde g. s. alijani w. g. brown s. chen g. kang a global communication optimization technique based on data-flow analysis and linear algebra reducing communication overhead is extremely important in distributed-memory message-passing architectures. in this article, we present a technique to improve communication that considers data access patterns of the entire program. our approach is based on a combination of traditional data-flow analysis and a linear algebra framework, and it works on structured programs with conditional statements and nested loops but without arbitrary goto statements.the distinctive features of the solution are the accuracy in keeping communication set information, support for general alignments and distributions including block-cyclic distribu-tions, and the ability to simulate some of the previous approaches with suitable modifications. we also show how optimizations such as message vectorization, message coalescing, and redundancy elimination are supported by our framework. experimental results on several benchmarks show that our technique is effective in reducing the number of messages (anaverage of 32% reduction), the volume of the data communicated (an average of 37%reduction), and the execution time (an average of 26% reduction). m. kandemir p. banerjee a. choudhary j. ramanujam n. shenoy linux gazette: two cent tips marjorie richardson technical correspondence: on tanenbaum, van staveren, and stevenson's ``using peephole optimization on intermediate code'' steven pemberton combinatory representation of mobile processes a theory of combinators in the setting of concurrent processes is formulated. 
the new combinators are derived from an analysis of the operation called asynchronous name passing, just as an analysis of logical substitution gave rise to the sequential combinators. a system with seven atoms and fixed interaction rules, but with no notion of prefixing, is introduced, and is shown to be capable of representing input and output prefixes over arbitrary terms in a behaviourally correct way, just as sk-combinators are closed under functional abstraction without having it as a proper syntactic construct. the basic equational correspondence between concurrent combinators and a system of asynchronous mobile processes, as well as the embedding of the finite part of π-calculus in concurrent combinators, is proved. these results will hopefully serve as a cornerstone for further investigation of the theoretical as well as pragmatic possibilities of the presented construction. kohei honda nobuko yoshida an empirical study of a wide-area distributed file system the evolution of the andrew file system (afs) into a wide-area distributed file system has encouraged collaboration and information dissemination on a much broader scale than ever before. we examine afs as a provider of wide-area file services to over 100 organizations around the world. we discuss usage characteristics of afs derived from empirical measurements of the system. our observations indicate that afs provides robust and efficient data access in its current configuration, thus confirming its viability as a design point for wide-area distributed file systems. mirjana spasojevic m. satyanarayanan keep your eye on the ball russell fish ada reuse within the context of an ada programming support environment a. reedy c. shotton e. yodis f. c. blumberg on some extensions of syntactic error recovery technique based on phrase markers t g muchnick interfaces for strongly-typed object-oriented programming this paper develops a system of explicit interfaces for object-oriented programming. the system provides the benefits of module interfaces found in languages like ada and modula-2 while preserving the expressiveness that gives untyped object- oriented languages like smalltalk their flexibility. interfaces are interpreted as polymorphic types to make the system sufficiently powerful. we use interfaces to analyze the properties of inheritance, and identify three distinct kinds of inheritance in object- oriented programming, corresponding to objects, classes, and interfaces, respectively. object interfaces clarify the distinction between interface containment and inheritance and give insight into limitations caused by equating the notions of type and class in many typed object-oriented programming languages. interfaces also have practical consequences for design, specification, and maintenance of object-oriented systems. p. s. canning w. r. cook w. l. hill w. g. olthoff debugging optimized code with dynamic deoptimization self's debugging system provides complete source-level debugging (expected behavior) with globally optimized code. it shields the debugger from optimizations performed by the compiler by dynamically deoptimizing code on demand. deoptimization only affects the procedure activations that are actively being debugged; all other code runs at full speed. deoptimization requires the compiler to supply debugging information at discrete interrupt points; the compiler can still perform extensive optimizations between interrupt points without affecting debuggability. 
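the self debugging entry here hinges on the compiler recording, at each interrupt point, how to rebuild the source-level state; the fragment below is an invented, minimal illustration of such a mapping (not self's actual data structures or addresses).

```python
# invented, minimal illustration of per-interrupt-point debugging metadata:
# for each interrupt point in optimized code, record where each source-level
# variable lives so a debugger can reconstruct (deoptimize to) the source state.

scope_descriptors = {
    # interrupt point -> {source variable: location in optimized code}
    0x10: {"i": ("register", "r3"), "sum": ("register", "r4")},
    0x24: {"i": ("stack", 8), "sum": ("register", "r4")},
}

machine_state = {"registers": {"r3": 7, "r4": 28}, "stack": {8: 7}}

def reconstruct(interrupt_point, state):
    """rebuild the source-level variable values at an interrupt point."""
    recovered = {}
    for var, (kind, where) in scope_descriptors[interrupt_point].items():
        if kind == "register":
            recovered[var] = state["registers"][where]
        else:
            recovered[var] = state["stack"][where]
    return recovered

print(reconstruct(0x24, machine_state))   # {'i': 7, 'sum': 28}
```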
at the same time, the inability to interrupt between interrupt points is invisible to the user. our debugging system also handles programming changes during debugging. again, the system provides expected behavior: it is possible to change a running program and immediately observe the effects of the change. dynamic deoptimization transforms old compiled code (which may contain inlined copies of the old version of the changed procedure) into new versions reflecting the current source-level state. to the best of our knowledge, self is the first practical system providing full expected behavior with globally optimized code. urs hölzle craig chambers david ungar ape - a set of macros to format ada programs s., sankar within arm's reach: compilation of left-linear rewrite systems via minimal rewrite systems a new compilation technique for left-linear term- rewriting systems is presented, where rewrite rules are transformed into so- called minimal rewrite rules. these minimal rules have such a simple form that they can be viewed as instructions for an abstract rewriting machine (arm). wan fokkink jasper kamperman pum walters on the inevitable intertwining of specification and implementation contrary to recent claims that specification should be completed before implementation begins, this paper presents two arguments that the two processes must be intertwined. first, limitations of available implementation technology may force a specification change. for example, deciding to implement a stack as an array (rather than as a linked list) may impose a fixed limit on the depth of the stack. second, implementation choices may suggest augmentations to the original specification. for example, deciding to use an existing pattern-match routine to implement the search command in an editor may lead to incorporating some of the routine's features into the specification, such as the ability to include wild cards in the search key. this paper elaborates these points and illustrates how they arise in the specification of a controller for a package router. william swartout robert balzer an empirical investigation of program spectra mary jean harrold gregg rothermel rui wu liu yi reasoning about continuations with control effects we present a new static analysis method for first-class continuations that uses an effect system to classify the control domain behavior of expressions in a typed polymorphic language. we introduce two new control effects, goto and comefrom, that describe the control flow properties of expressions. an expression that does not have a goto effect is said to be continuation following because it will always call its passed return continuation. an expression that does not have a comefrom effect is said to be continuation discarding because it will never preserve its return continuation for later use. unobservable control effects can be masked by the effect system. control effect soundness theorems guarantee that the effects computed statically by the effect system are a conservative approximation of the dynamic behavior of an expression. the effect system that we describe performs certain kinds of control flow analysis that were not previously feasible. we discuss how this analysis can enable a variety of compiler optimizations, including parallel expression scheduling in the presence of complex control structures, and stack allocation of continuations. the effect system we describe has been implemented as an extension to the fx-87 programming language. p. jouvelot d. k. 
gifford towards a metrics suite for object oriented design shyam r. chidamber chris f. kemerer pinching pennies while losing dollars all real-time embedded software projects eventually hit the wall when they trade elegance for efficiency. the major recipient of the blame is often 'that darn tasking' or 'those silly oo concepts'. the actual culprit, unknowingly, is the front line software engineer. this paper is a group of test cases that can be applied to compiler to give direction to the developers as to the performance of the compiler. tony lowe strategies for data abstraction in lisp the benefits of abstract data types are many and are generally agreed upon [liskov and zilles 1974, linden 1976]. new languages are being constructed which provide for and enforce the use of data abstractions [liskov et al 1977, wulf et al 1976]. however, many of us are not in a position to use these new languages, but must stick to our installation's compiler. how then can we obtain the benefits of data abstraction? we discuss the implementation of data abstraction in a lisp program and the subtleties involved in doing so: specifically, how it is possible to enforce proper data abstraction in a language which does not provide for abstract data types. barbara k. steele classification in object-oriented systems peter wegner tools: a unifying approach to object-oriented language interpretation k. koskimies j. paakki a very fast prolog compiler on multiple architectures toshiaki kurokawa naoyuki tamura yasuo asakawa hideaki komatsu an introduction to java servlet programming vandana pursnani declarative event-oriented programming conal elliott user interface support for the integration of software tools: an iconic model of interaction this paper presents a model of interaction based on an iconic representation of objects. an application of the model to an iconic shell for unix is described. finally a client server architecture for the implementation of the model is introduced. we show that a software development environment can take advantage of such a model and architecture in order to provide a consistent, adaptable and extensible user interface. michel beaudouin-lafon ada and c: differences as the language for system programming zensho nakao masaya kinjo masahiro nakama a probe-based monitoring scheme for an object-oriented distributed operating system partha dasgupta the gnat compilation model one of the novel features of gnat is its unusual approach to the compilation process and the handling of the ada library. the words novel and unusual only apply from a traditional ada compilation perspective. by contrast, a typical c or c++ programmer would find many aspects of the model quite familiar. in gnat, sources are independently compiled to produce a set of objects, and the set of object files thus produced is submitted to the binder/linker to generate the resulting executable. this approach removes all order-of- compilation considerations, and eliminates the traditional monolithic library structure. not only is the model very simple to understand, but it makes it easier to build hybrid systems in multiple languages, and is much more compatible with conventional configuration management tools (ranging from the simple unix make program to sophisticated compilation management environments) than the conventional library structure. needless to say, the approach we present is fully compatible with the ada rules of compilation. 
robert dewar fuzzy array dataflow analysis exact array dataflow analysis can be achieved in the general case if the only control structures are do-loops and structural ifs, and if loop counter bounds and array subscripts are affine expressions of englobing loop counters and possibly some integer constants. in this paper, we begin the study of dataflow analysis of dynamic control programs, where arbitrary ifs and whiles are allowed. in the general case, this dataflow analysis can only be fuzzy. jean-françois collard denis barthou paul feautrier letters to the editor corporate linux journal staff an rca 1802 software simulator alberto pasquale a checklist for developing software quality metrics reesa e. abrams the measurement of software science parameters in software designs metrics of software quality have historically focused on code quality despite the importance of early and continuous quality evaluation in a software development effort. while software science metrics have been used to measure the psychological complexity of computer programs as well as other quality related aspects of algorithm construction, techniques to measure software design quality have not been adequately addressed. in this paper, software design quality is emphasized. a general formalism for expressing software designs is presented, and a technique for identifying and counting software science parameters in design media is proposed. paul a. szulewski mark h. whitworth philip buchan j. barton dewolf pi: a case study in object-oriented programming pi is a debugger written in c + +. this paper explains how object-oriented programming in c + + has influenced pi's evolution. the motivation for object- oriented programming was to experiment with a browser-like graphical user interface. the first unforeseen benefit was in the symbol table: lazy construction of an abstract syntax-based tree gave a clean interface to the remainder of pi, with an efficient and robust implementation. next, though not in the original design, pi was easily modified to control multiple processes simultaneously. finally, pi was extended to control processes executing across multiple heterogeneous target processors. t. a. cargill fortran 90/95/hpf information file (part 1, compilers) michael metcalf the role of software configuration management in a measurement-based software engineering program kristopher g. sprague the wild-west revisited david r. pitts barbara h. miller protected records in ada 9x mike kamrad programming pearls jon bentley derivation of fault tolerance properties of distributed algorithms philippe queinnec gerard padiou varying length strings in fortran j. l. schonfelder joint actions based authorization schemes authorization policy requirements in commercial applications are often richer compared to military applications in terms of the types of privileges required, and more complex in terms of both the nature and degree of interactions between participating objects. delegation and joint action mechanisms allow a more flexible and dynamic form of access control, thereby enabling the representation of sophisticated authorization policies. this paper explores some issues that need to be addressed when designing such joint actions based authorization policies. we describe some approaches to supporting joint actions based authorization policies, and their ramifications for trust of various components of the implementation. 
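as a minimal sketch of a joint-action check of the kind discussed above (the operation name, roles, and two-approver rule are hypothetical), an operation is authorized only when approvals from all required roles are present and come from distinct principals:

# illustrative only: each listed role must approve, and the approvals must come
# from distinct, already-authenticated principals.
REQUIRED_ROLES = {"release_funds": {"initiator", "approver"}}

def authorized(operation, approvals):
    """approvals: iterable of (principal, role) pairs."""
    needed = REQUIRED_ROLES.get(operation, set())
    roles_granted = {role for _, role in approvals}
    principals = {principal for principal, _ in approvals}
    return needed <= roles_granted and len(principals) >= len(needed)

print(authorized("release_funds", [("alice", "initiator"), ("bob", "approver")]))    # True
print(authorized("release_funds", [("alice", "initiator"), ("alice", "approver")]))  # False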
we consider an example from the medical field, and define attributes relevant to the design of joint action schemes and present three schemes for supporting joint action based authorization policies. vijay varadharajan phillip allen reimplementing the cedar file system using logging and group commit the workstation file system for the cedar programming environment was modified to improve its robustness and performance. previously, the file system used hardware-provided labels on disk blocks to increase robustness against hardware and software errors. the new system does not require hardware disk labels, yet is more robust than the old system. recovery is rapid after a crash. the performance of operations on file system metadata, e.g., file creation or open, is greatly improved. the new file system has two features that make it atypical. the system uses a log, as do most database systems, to recover metadata about the file system. to gain performance, it uses group commit, a concept derived from high performance database systems. the design of the system used a simple, yet detailed and accurate, analytical model to choose between several design alternatives in order to provide good disk performance. r. hagmann ubiquitous applications: embedded systems to mainframe over the last 10 years, smalltalk has moved from the "parc" to main street as a standard object-oriented (oo) fifth generation language (5gl) for enterprise computing. to meet the needs of application developers, smalltalk environments and tools have matured from the original research implementations to full- featured, multiplatform development environments. a recent study of development tools conducted by software productivity research in massachusetts for a software productivity consortium ranked smalltalk first in most categories. what is surprising about this study is the application: a demanding telephone switch traditionally dominated by c or proprietary talc languages such as chill, protel, and plex. the fact that smalltalk ranked so highly is a testimony that smalltalk is an application 5gl that scales. this article discusses the major technical challenges addressed by smalltalk implementors and application developers working on a wide spectrum of applications. dave thomas applicative programming and digital design steven d. johnson how to write a c++ language extension proposal for ansi-x3j16/iso-wg21 corporate x3j16 working group on extensions dr. ada 95 george morrone object oriented framework development marcus eduardo markiewicz carlos j. p. de lucena towards a weighted operational profile e. k. aggarwal m. pavan kumar vinay santurkar radha ratnaparkhi source-to-source translation: ada to pascal and pascal to ada an implementation of translators between ada and pascal is described. the method used is to define subsets of each language between which there is a straightforward translation and to translate each source program to its respective sublanguage by program transformations. a common internal tree representation is used. the underlying organization of the translators is described, and some of the difficulties we have confronted and solved are discussed. paul f. albrecht phillip e. garrison susan l. graham robert h. 
hyerle patricia ip bernd krieg bruckner working results on software re-engineering julio cesar sampaio do prado leite using integer sets for data-parallel program analysis and optimization vikram adve john mellor-crummey new development of apl technology of modelling: apl*plus + c++ compiler dmitri gusev igor pospelov an apl compiler for a vector processor timothy a. budd completely validated software: error-based validation completeness (panel session) w. e. howden toward user-defined element types and architectural styles robert deline the design of an object oriented graphics interface (abstract only) a description of an advanced graphics interface design that provides the applications developer with a very high level graphics environment is presented. the object oriented design is shown to be appropriate to achieving device and implementation independence. this approach is also shown to provide a flexible means of managing non-graphic information associated with graphic objects. implementation, using standard graphics primitives, is proposed. dennis moreau the bottom line (workshop session): using oop in a commercial environment k. c. burgess yakemovic suzana hutz bruwin with only one process viewable and operational at any moment, the standard terminal forces the user to continually switch between contexts. yet this is unnatural and counter-intuitive to the normal working environment of a desk where the worker is able to view and base subsequent actions on multiple pieces of information. the window manager is an emerging computing paradigm which allows the user to create multiple terminals on the same viewing surface and to display and act upon these simultaneous processes without loss of context. though several research efforts in the past decade have introduced window managers, they have been based on the design or major overhaul of a language or operating system; the window manager becomes a focus of---rather than a tool of---the system. while many of the existing implementations provide wide functionality, most implementations and their associated designs are not readily available for common use; extensibility is minimal. this paper describes the design and implementation of bruwin, the brown university window manager, stressing how such a design can be adapted to a variety of computer systems and output devices, ranging from alphanumeric terminals to high-resolution raster graphics displays. the paper first gives a brief overview of the general window manager paradigm and existing examples. next we present an explanation of the user-level functions we have chosen to include in our general design. we then describe the structure and design of a window manager, outlining the five important parts in detail. finally, we describe our current implementation and provide a sample session to highlight important features. norman meyrowitz margaret moser data encapsulation using fortran 77 modules traditionally, the preferred method for developing numerical models in the fortran language, as standardized in 1977, has been the top-down structured approach. this method emphasizes the relation between procedures necessary to solve a particular programming problem. an alternative perspective is to consider data and manipulation of data fundamental to solution of a programming problem. a programming construct arising from this perspective may be termed a module. a module is comprised of a set of procedures and data they manipulate. 
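as a rough analogue in another language (this is not the fortran 77 technique itself; the module and its names are illustrative), the same discipline keeps module data private and reachable only through its procedures:

# illustrative module analogue: the datum below is private by convention and is
# read and changed only through the procedures that accompany it.
_count = 0  # module data

def increment(step=1):
    """procedure that manipulates the module data."""
    global _count
    _count += step

def current():
    """information computed from module data, obtained indirectly."""
    return _count

increment()
increment(2)
print(current())  # 3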
modules may be used to encapsulate data, that is, hide module data from and prevent their direct use by routines outside the module. information computed from module data is obtained indirectly through invocation of procedures. modules, comprised of associated groups of data and respective procedures, provide program structure beyond that available using procedural-oriented practices alone. construction and use of modules potentially improves program clarity, reduces argument passing, and encapsulates data at the level where they are manipulated. use of fortran 77 modules, like the use of top-down structured programming, is a programming technique allowed but not supported or enforced by the fortran 77 language. consequently, successful implementation of modules with fortran 77 and realization of potential benefits depend on voluntary adherence to the programming technique. use of other computer languages (such as modula-2, ada, c++, and fortran 90) which, to varying degrees, support or enforce programming techniques similar to the use of fortran 77 modules might significantly increase programming efficiency and realization of potential benefits. lewis l. delong david b. thompson janice m. fulford grail/kaos: an environment for goal-driven requirements engineering r. darimont e. delor p. massonet a. van lamsweerde computer capacity planning using queueing network models this paper presents several computer capacity planning case studies using a modeling tool, best/1, derived from the theory of queueing networks. all performance predictions were evaluated based on the selected service levels such as response times and throughputs. advantages and disadvantages of using the modeling approach are also briefly discussed. t. l. lo analysis of function applications in deep arrays function arrays are easy to understand in apl when the leaves are primitive scalar functions. herein other kinds of functions, called generically subscalars, are studied as items in function arrays. some example primitives have an internal analog of structure. they are explicated using an experimental depth operator and some defined subprimitive functions. subscalar leaves of function arrays interrupt the promulgation of scalar conformance in a resumable fashion reminiscent of the data types of kajiya [6]. j. philip benkard control structures in apl*plus iii christopher h. lee performance evaluation of concurrent systems using timed petri nets it is shown that the behavior of petri nets with exponentially distributed firing times can be represented by labeled directed "state" graphs in which labels describe the probabilities of transitions between vertices of the graph. for bounded petri nets the corresponding state graphs are finite, and are isomorphic to finite state markov chains; stationary descriptions can thus be obtained by standard techniques. an immediate application of such a model is performance analysis of concurrent systems, and in particular queueing systems with exponentially distributed arrival and service times. a simple example of an interactive computer system model is used as an illustration of performance evaluation. wlodzimierz m. zuberek run-time check elimination for ada 9x an approach to the elimination of run-time checks in ada 9x is presented. the approach is a flow analysis based on a combination of range propagation and assertion propagation.
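as a minimal illustration of the range-propagation half of such an approach (the ranges and bounds below are hypothetical), a subscript check can be dropped whenever the propagated range of the index provably lies inside the declared bounds:

# illustrative sketch: a subscript check is redundant when the propagated range
# of the index lies entirely within the declared array bounds.
def check_needed(index_range, array_bounds):
    (low, high), (first, last) = index_range, array_bounds
    return not (first <= low and high <= last)

# index propagated to lie in 1..10, array declared over 1..20: check removable.
print(check_needed((1, 10), (1, 20)))  # False, the check can be eliminated
print(check_needed((0, 25), (1, 20)))  # True, the check must stay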
range propagation computes estimates for the dynamic characteristics of program entities, for example the values of objects, while assertion propagation maintains valid assertions derived from assignments and conditions of the program. this approach offers a simple alternative to the more complex approach of a theorem prover. peter lutzen moller a prototype approach to instrument network software (abstract only) this paper presents the prototype approach used in the development of a distributed instrumentation network and the benefits derived from it. each node in the network is a dedicated microcomputer which is used for instrument control, data acquisition, and preliminary data analysis. data is saved on a mass storage at the control node. general purpose interface bus (ieee488) is used in the network. a prototype was developed primarily to study the feasibility of the system and get user recommendation for change in requirements. due to inherent characteristics of this language the prototype was compact and the development was fast. only two of the seven nodes were included in the prototype. due to the success of the prototype the actual system is currently being developed. the prototype was helpful in several ways. many of the techniques used in the prototype are adapted to the actual system, which is in a different computer and language. in a small scale project with low staffing and tight time schedules this is found to be a more practical approach. this work is supported by doe contract de-ac02-80et15601. thomas philip safety checking of machine code we show how to determine statically whether it is safe for untrusted machine code to be loaded into a trusted host system. our safety-checking technique operates directly on the untrusted machine-code program, requiring only that the initial inputs to the untrusted program be annotated with typestate information and linear constraints. this approach opens up the possibility of being able to certify code produced by any compiler from any source language, which gives the code producers more freedom in choosing the language in which they write their programs. it eliminates the dependence of safety on the correctness of the compiler because the final product of the compiler is checked. it leads to the decoupling of the safety policy from the language in which the untrusted code is written, and consequently, makes it possible for safety checking to be performed with respect to an extensible set of safety properties that are specified on the host side. we have implemented a prototype safety checker for sparc machine-language programs, and applied the safety checker to several examples. the safety checker was able to either prove that an example met the necessary safety conditions, or identify the places where the safety conditions were violated. the checking times ranged from less than a second to 14 seconds on an ultrasparc machine. zhichen xu barton p. miller thomas reps prevention of task overruns in real-time non-preemptive multiprogramming systems real-time multiprogramming systems, in which a hardware processor is dynamically assigned to run multiple software processes each designed to control an important device (user), are considered. each software process executes a task in response to a service request repeatedly coming from the corresponding user. 
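to make the setting concrete, a small sketch of a non-preemptive dispatcher that, whenever the processor becomes free, starts the pending request whose deadline is earliest (the kind of relative-urgency rule examined below); all task names, deadlines, and service times are hypothetical:

# illustrative sketch: tasks are never preempted; whenever the processor is
# free, the pending request with the earliest deadline is started next.
import heapq

def dispatch(requests):
    """requests: list of (deadline, name, service_time); returns (name, finish, met)."""
    heapq.heapify(requests)
    clock, schedule = 0, []
    while requests:
        deadline, name, service_time = heapq.heappop(requests)
        clock += service_time                      # runs to completion
        schedule.append((name, clock, clock <= deadline))
    return schedule

print(dispatch([(9, "sensor", 3), (4, "valve", 2), (12, "logger", 5)]))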
each service task is associated with a strict deadline, and thus the design problem that we are concerned with is to ensure that the service tasks requested can always be executed within the associated deadlines, i.e., no task overrun occurs. this problem was studied by several investigators for the cases where preemptive scheduling strategies are used. in contrast, very few studies have been conducted for cases of non-preemptive scheduling. in this paper we show that a non-preemptive strategy, called the relative urgency non-preemptive (runp) strategy, is optimal in the sense that if a system runs without a task overrun under any non-preemptive strategy, it will also run without a task overrun under the runp strategy. then an efficient procedure used at design time for detecting the possibility of a task overrun in a system using the runp strategy is presented. the procedure is useful in designing overrun-free real-time multiprogramming systems that yield high processor utilizations. some special types of systems using the runp strategy for which even simpler detection procedures are available are also discussed. k. h. kim mahmoud naghibzadeh exploiting process lifetime distributions for dynamic load balancing mor harchol-balter allen b. downey pluggable authentication modules for linux: an implementation of a user-authentication api andrew g. morgan mondrian: a teachable graphical editor henry lieberman virtual memory primitives for user programs andrew w. appel kai li orthogonal latin squares: an application of experiment design to compiler testing orthogonal latin squares---a new method for testing compilers---yields the informational equivalent of exhaustive testing at a fraction of the cost. the method has been used successfully in designing some of the tests in the ada compiler validation capability (acvc) test suite. robert mandl programming in c++ stephen c. dewhurst kathy stark designing executable abstractions gerard j. holzmann facilitating abstraction and reuse with expecttk navid sabbaghi a representation model for procedural program maintenance ramachenga r. valasareddi doris l. carver ada translation tools development: automatic translation of fortran to ada m. parsian b. basdell y. bhayat i. caldwell n. garland b. jubanowsky j. robinette system commands: an apl backwater revisited system commands have always had a very important role in, or, more accurately, just outside, the apl language. they are the pidgin dialect which provides communication between the clean, abstract world of pure apl and the often messy environment of the apl system which allows apl programs to be executed. despite their importance in the development and use of all apl applications, very little attention has been paid to them. they remain, in contrast to apl itself, ineffectual, irregular, and inconsistent. they have no regular syntax and return no usable result. there are two possible ways to deal with this situation: get rid of them, or make them better. to date, most apl systems have been gradually implementing the first approach. one by one, system commands have been superseded by system functions and variables which do approximately the same thing. in the case of system variables such as ⎕io and ⎕rl, whose values affect the actual results of calculations as implicit arguments, this change has properly moved the ability to set these parameters into the province of the language itself.
in the case of the system functions, however, the move has blurred the useful distinction between the language and the host environment and introduced a most unattractive collection of obscure english words and abbreviations into a previously ideographic language. by moving the english words out of apl and into the realm of system commands, we open up the possibility of having the actual words used be a function of some natural language selection facility such as ⎕nlt in ibm's apl2. one of the problems with the ⎕-function approach is that apl functions take data arguments and return data results. this is, of course, what allows them to be used in ever more complex expressions. unfortunately, since the environmental constructs acted on by system functions are not data, the arguments and results of system functions often bear only slightly on the main action, which is taken as a side effect. another problem is that an apl function works within a certain name environment (workspace), and generally cannot affect things outside of that environment. as large applications increasingly tend to rely on cooperation between many different workspaces, the need to make extra-workspace adjustments to the environment has become apparent. this is especially true in analogic apl, which is designed to facilitate the sharing of both functions and variables between workspaces. starting a brand new implementation of apl is a great opportunity. we at analogic decided to take advantage of it by providing a better environment for our favorite language to run in. our approach was to leave the language proper alone (analogic is committed to strict adherence to the iso apl standard), but to rethink the way that apl applications interact with their environment. to take advantage of these enhancements, an applications programmer must set up a variety of links between different workspaces or name contexts. usually, this only needs to be done once at the time an application is put together. the exact mechanism, called threaded workspaces, is described elsewhere [1]. the important points for this discussion are that the actions generally need not be dynamic and that there are many variations in the details of the connections. we decided that system functions were not the answer. doing as many things as we envisioned by using system functions would have entailed either an unacceptably large number of confusingly named ⎕-functions (like the 135 found in stsc apl*plus/pc) or unacceptably obscure arguments to a smaller number of ⎕-functions (like the left arguments of ⎕ws and ⎕fd in sharp apl). system commands, many of which are already required by the iso standard, can provide the answer so long as they are given a regular and self-documenting syntax, and the universe of names visible to any command is expanded to include names in workspaces other than the one in which the command is invoked. it should be understood that analogic apl is a work in progress. the ideas expressed here reflect the current state of the specification of analogic apl. details such as the names of individual fields are likely to change as the implementation progresses and we gather experience. michael j. a. berry probability of apl linda alvord cooking with linux matt welsh solving the "n<=8 queens" problem with csp and modula-2 m e goldsby when are two classes equivalent? k. rangarajan a.
balasubramaniam automated application programming environment with the availability of high-performance, low-cost hardware, apl provides a cost-effective means of developing custom software for the small business environment, and perhaps the best alternative to trying to adapt to an off- the-shelf package. since apl does not inherently deal with system specific screen manipulation and file management techniques, programming these interface requirements from apl is typically tedious, potentially reducing the attractiveness of apl as the development language. strategies for dealing with the realities of screen and file management in the context of small business systems are discussed in this paper. low level functions with universal application are described along with code generation utilities to automatically produce the user and file interfaces for application packages. the programmer makes use of the screen and file management utilities to specify screen i/o and file management requirements resulting in a set of automatically generated screen and file functions for use with the application program. thus, the programmer is relieved of the tedious task of dealing with the programming of the user and file interfaces and can concentrate on the "core" of the application program. furthermore, system specific code in the application program (for displays and files) minimally affects portability since the utilities confine such code to a handful of low level functions. a working example of a small business system constructed with the assistance of screen and file management utilities is discussed. yap s. chua charles n. winton performing data flow testing on classes mary jean harrold gregg rothermel cheap tricks frank c. sergeant concurrent compacting garbage collection of a persistent heap james o'toole scott nettles david gifford a framework for event-based software integration although event-based software integration is one of the most prevalent approaches to loose integration, no consistent model for describing it exists. as a result, there is no uniform way to discuss event-based integration, compare approaches and implementations, specify new event-based approaches, or match user requirements with the capabilities of event-based integration systems. we attempt to address these shortcomings by specifying a generic framework for event-based integration, the ebi framework, that provides a flexible, object-oriented model for discussing and comparing event- based integration approaches. the ebi framework can model dynamic and static specification, composition, and decomposition and can be instantiated to describe the features of most common event-based integration approaches. we demonstrate how to use the framework as a reference model by comparing and contrasting three well-known integration systems: field, polylith, and corba. daniel j. barrett lori a. clarke peri l. tarr alexander e. wise dynamic priorities, priority scheduling and priority inheritance scheduling requirements are a major portion of the set of requirements that must be addressed by the ada 9x project. this is evident by the numerous revision requests (rrs) submitted (over 25) to the ada 9x project office and the subsequent revision issues (ri-7005 and ri-7007). 
the related 9x requirements are: * r1 - flexible scheduling * r2 - flexible entry queue ordering * ir2 - rendezvous performance * ir3 - extended priority support * wr1 - flexible select algorithm this report summarizes the workshop's discussions on this important area. the discussions first considered dynamic priorities, and then priority scheduling and priority inheritance. fred maymir-ducharme the glow cache coherence protocol extensions for widely shared data stefanos kaxiras james r. goodman adaptable, reusable code this paper discusses the concept of adaptability as a means for reaping the cost and schedule reduction benefits of reuse. adaptability strives to implement the variability identified by domain analyses while managing the cost of implementation, extension, and use. the paper discusses a context for understanding different domain-specific reuse approches relative to adaptability and analyzes experience in designing and developing adaptable code. the experience is drawn from the arpa software technology for adaptable, reliable systems (stars) joint demonstration project with u.s. navy. margaret j. davis multiprocessor real-time threads karsten schwan hongyi zhou generic virtual memory management for operating system kernels we discuss the rationale and design of a generic memory management interface, for a family of scalable operating systems. it consists of a general interface for managing virtual memory, independently of the underlying hardware architecture (e.g. paged versus segmented memory), and independently of the operating system kernel in which it is to be integrated. in particular, this interface provides abstractions for support of a single, consistent cache for both mapped objects and explicit i/o, and control of data caching in real memory. data management policies are delegated to external managers. a portable implementation of the generic memory management interface for paged architectures, the paged virtual memory manager, is detailed. the pvm uses the novel history object technique for efficient deferred copying. the gmi is used by the chorus nucleus, in particular to support a distributed version of unix. performance measurements compare favorably with other systems. e. abrossimov m. rozier m. shapiro introducing the network information service for linux nis is a system for sharing system information between machines. mr. brown tells us how to set up and use it preston brown points-to analysis in almost linear time bjarne steensgaard static evaluation of functional programs static evaluation underlies essentially all techniques for a priori semantic program manipulation, i.e. those that stop short of fully general execution. included are such activities as type checking, partial evaluation, and, ultimately, optimized compilation. this paper describes a novel approach to static evaluation of programs in functional languages involving infinite data objects, i.e. those using normal order or "lazy" evaluation. its principal features are abstract interpretation on a domain of demand patterns, and a notion of function "reversal". the latter associates with each function f a derived function f' mapping demand patterns on f to demand patterns on its formal parameter. this is used for a comprehensive form of strictness analysis, aiding in efficient compilation. this analysis leads to a revised notion of basic block, appropriate as an intermediate representation for a normal order functional language. 
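as a minimal sketch of the demand-pattern idea described above, for a toy expression language of our own (variables, constants, strict addition, and a lazy pair constructor; all of it illustrative), the "reversal" of an expression maps a demand on its result to the demand it places on a chosen variable:

# illustrative strictness sketch: propagate the demand on an expression's result
# backwards to the demand it places on one chosen variable.
def join(d1, d2):
    return "whnf" if "whnf" in (d1, d2) else "none"

def demand_on(expr, var, demand="whnf"):
    if demand == "none":
        return "none"
    kind = expr[0]
    if kind == "var":
        return demand if expr[1] == var else "none"
    if kind == "const":
        return "none"
    if kind == "add":        # strict in both operands
        return join(demand_on(expr[1], var, demand),
                    demand_on(expr[2], var, demand))
    if kind == "pair":       # lazy constructor: demands nothing of its parts yet
        return "none"
    raise ValueError(kind)

print(demand_on(("add", ("var", "x"), ("const", 1)), "x"))   # whnf: strict in x
print(demand_on(("pair", ("var", "x"), ("const", 1)), "x"))  # none: not strict in x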
an implementation of the analysis technique in prolog is sketched, as well as an effort currently underway to apply the technique to the generation of optimized g-machine code. gary lindstrom parallel programming with coordination structures steven lucco oliver sharp gated ssa-based demand-driven symbolic analysis for parallelizing compilers peng tu david padua ada 9x implementation joyce l. tokar a distributed file system in apl this paper describes an extension to the apl-s system that allows accessing directly from one host computer the files based on other computers tied to the same public network. apl-net makes easy the development of distributed processing applications by handling internally all synchronization problems inherent to cooperating processes on different machines. jean-pierre barasz designing new languages or new language manipulation systems using ml the ml language and the denotational theory of programming languages fit together to yield a powerful tool in (new) language design. to effectively support this claim, we give a running definition of a simple pascal-like programming language with a new and non-standard programming construct. all the provided examples have been tested on a unix-4.2bsd implementation of ml. p. jouvelot garbage collection of timestamped data in stampede stampede is a parallel programming system to facilitate the programming of interactive multimedia applications on clusters of smps. in a stampede application, a variable number of threads can communicate data items to each other via channels, which are distributed, synchronized data structures containing timestamped data such as images from a video camera. channels are not queue-like: threads may produce and consume items out of timestamp order; they may produce and consume items sparsely (skipping timestamps), and multiple threads (including newly created threads) may consume an item in a channel. these flexibilities are required due to the complex dynamic parallel structure of applications, to support increased parallelism, and because of real-time requirements. under these circumstances, a key issue is the "garbage collection condition": when can an item in a channel be garbage collected? in this paper we specify precisely stampede's semantics concerning timestamps, and we describe two associated garbage collection conditions& a weak condition, and a more expensive but stronger condition. we then describe a distributed, concurrent algorithm that implements these two gc conditions. we present performance numbers that show little or no application-level performance penalty to using this algorithm for aiding automatic garbage collection in a cluster. we conclude with some remarks about the implementation in the stampede system. rishiyur s. nikhil umakishore ramachandran an object-oriented operating system interface this paper discusses an object-oriented interface from the smalltalk-80 programming environment to a unix-like operating system. this interface imposes an object-oriented paradigm on operating system facilities. we discuss some of the higher order abstractions that were created to make use of these facilities, and discuss difficulties we encountered implementing this interface. several examples of cooperating smalltalk and operating system processes are presented. juanita j. ewing ecos graphs: a dataflow programming language j. c. huang jose muñoz hal watt george zvara objective view point: object-orientation and c++ (part ii of ii) g. 
bowden wise using perturbation analysis to measure variation in the information content of test sets we define the information content of test set t with respect to a program p to be the degree to which the behavior of p on t approximates the overall behavior of p. informally, the higher the information content of a test set, the greater the likelihood an error in the data state of a program will be manifested under testing.perturbation analysis injects errors into the data state of an executing program and traces the impact of those errors on the intervening states and the program's output. the injection is performed by perturbation functions that randomly change the program's data state. using perturbation analysis we demonstrate that different test sets may satisfy the same testing criterion but have significantly different information content.we believe that "consistency of information content" is a crucial measure of the quality of a testing strategy. we show how perturbation analysis may be used to assess individual testing strategies and to compare different testing strategies.the "coupling effect" of mutation testing implies that there is little variation among mutation-adequate test sets for a program. this implication is investigated for two simple programs by analyzing the variation among several mutation-adequate test sets. larry morell branson murrill parameterized specifications for software reuse jingwen cheng ada features and real-time embedded applications elisabeth broe christiansen core php programming: using php to build dynamic web sites allen riddell programming and debugging distributed real-time applications in ada a. d. hutcheon d. s. snowden a. j. wellings supporting the restructuring of data abstractions through manipulation of a program visualization with a meaning-preserving restructuring tool, a software engineer can change a program's structure to ease future modifications. however, deciding how to restructure the program requires a global understanding of the program's structure, which cannot be derived easily by directly inspecting the source code. we describe a manipulable program visualization---the star diagram\\---that supports the restructuring task of encapsulating a global data structure. the star diagram graphically displays information pertinent to encapsulation, and direct manipulation of the diagram causes the underlying program to be restructured. the visualization compactly presents all statements in the program that use the given global data structure, helping the programmer to choose the functions that completely encapsulate it. additionally, the visualization elides code unrelated to the data structure and to the task and collapses similar expressions to help the programmer identify frequently occurring code fragments and manipulate them together. the visualization is mapped directly to the program text, so manipulation of the visualization also restructures the program. we present the star diagram concept and describe an implementation of the star diagram built upon a meaning-preserving restructuring tool for scheme. we also describe our creation of star diagram generators for c programs, and we test the scalability of the star diagram using large c and mumps programs. robert w. bowdidge william g. griswold upfront corporate linux journal staff beyond traditional program slicing traditional program slices are based on variables and statements. 
slices consist of statements that potentially affect (or are affected by) the value of a particular variable at a given statement. two assumptions are implicit in this definition: 1) that variables and statements are concepts of the programming language in which the program is written, and 2) that slices consist solely of statements.generalised slicing is an extension of traditional slicing where variables are replaced by arbitrary named program entities and statements by arbitrary program constructs. a model of generalised slicing is presented that allows the essence of any slicing tool to be reduced to a node marking process operating on a program syntax tree. slicing tools can thus be implemented in a straight- forward way using tree-based techniques such as attribute grammars.a variety of useful program decompositions are shown to be instances of generalised slicing including: call graph generation, interface extraction, slicing of object-oriented inheritance hierarchies and slices based on type dependences. examples are also given of how slicing can enhance understanding of formal compiler specifications and aid the creation of subset language specifications. anthony m. sloane jason holdsworth achieving reusability through an interactive ada interface builder karen mackey mike downs judy duffy jim leege on evolution of fortran robert r. van tuyl are applicative languages inefficient? carl g. ponder patrick mcgeer anthony p-c. ng an integration of logic and object-oriented programming f. mellender stack assembler language for a compiler course gerald wildenberg coordinating user interfaces for consistency j. nielsen fsm - a fullscreen manager fsm is a set of utility functions used in developing fullscreen apl systems at the upjohn company. defining panels and validating input data can be a tedious and time consuming process. fsm simplifies the construction of user dialogues by providing two methods for designing panels, by creating help panels for each screen and input field, and by providing three types of documentation on each panel. it automates checking of user input, scrolling on selected fields, aborting input under program or user signal, and controlling field processing with edit codes, missing values, fill characters, and left/right justification. fsm was originally developed in conjunction with work on a pharmaceutical manufacturing receiving system. the use of nested arrays simplified implementation and contributes to its ease of use. the purpose of the paper will be to describe the capabilities of fsm as influenced by feedback from other development analysts. r. j. busman macro processing in high-level languages a macro language is proposed. it enables macro processing in high-level programming languages. macro definitions in this language refer to the grammars of the respective programming languages. these macros introduce new constructs in programming languages. it is described how to automatically generate macro processors from macro definitions and programming language grammars written in the lex-yacc format. examples of extending high-level languages by means of macros are given. alexander sakharov system acquisition based on software product assessment jean mayrand françois coallier a formal framework for the derivation of machine-specific optimizers robert giegerich modalities in analysis and verification mads dam a precompiler for modular, transportable pascal max j. egenhofer andrew u. 
frank brackets are a pl's best friend bracket notation has been widely criticized, recently at apl86, as inconsistent and anomalous, and operators and functions have been developed in the new apls to wipe out at least some uses of brackets. although adin falkoff has valiantly tried to defend them (and jon mcgrew apologetically), brackets clearly need all the friends they can get. this paper has been written to point out again that brackets have more virtues than vices, that they represent a valuable notational convenience that has not yet been fully exploited, and that they could well be exploited to stand for another component of apl on a par with functions and operators. n. holmes graph rewrite systems for program optimization graph rewrite systems can be used to specify and generate program optimizations. for termination of the systems several rule-based criteria are developed, defining exhaustive graph rewrite systems. for nondeterministic systems stratification is introduced, which automatically selects single normal forms. to illustrate how far the methodology reaches, parts of the lazy code motion optimization are specified. the resulting graph rewrite system classes can be evaluated by a uniform algorithm, which forms the basis for the optimizer generator optimix. with this tool several optimizer components have been generated, and some numbers on their speed are presented. uwe assmann partial method compilation using dynamic profile information the traditional tradeoff when performing dynamic compilation is that of fast compilation time versus fast code performance. most dynamic compilation systems for java perform selective compilation and/or optimization at a method granularity. this is not the optimal granularity level. however, compiling at a sub-method granularity is thought to be too complicated to be practical. this paper describes a straightforward technique for performing compilation and optimizations at a finer, sub-method granularity. we utilize dynamic profile data to determine intra-method code regions that are rarely or never executed, and compile and optimize the code without those regions. if a branch that was predicted to be rare is actually taken at run time, we fall back to the interpreter or dynamically compile another version of the code. by avoiding compiling and optimizing code that is rarely executed, we are able to decrease compile time significantly, with little to no degradation in performance. furthermore, ignoring rarely-executed code can open up more optimization opportunities on the common paths. we present two optimizations---partial dead code elimination and rare-path-sensitive pointer and escape analysis---that take advantage of rare path information. using these optimizations, our technique is able to improve performance beyond the compile time improvements. john whaley examples of event handling in apl2 event handling is a feature that has recently been added to many apl systems. ibm's vsapl release 4.0 and apl2 include a dyadic system function ⎕ea (execute alternate) which provides a simple event handling facility. this dyadic function can easily be simulated using the event handling facilities offered by other vendors. event handling provides a significant new direction in capability to apl. some applications that are inconvenient or impractical are easily accomplished using event handling. the examples given show some novel uses that may not be obvious on first introduction to ⎕ea.
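as a rough analogue of the execute-alternate pattern in another language (purely illustrative; this is not apl2's ⎕ea), a primary expression is evaluated and an alternate expression is run only if the primary signals an error:

# illustrative analogue of "execute alternate": evaluate the primary source
# text, falling back to the alternate source text if it raises an error.
def execute_alternate(alternate_src, primary_src, env=None):
    env = {} if env is None else env
    try:
        return eval(primary_src, env)
    except Exception:
        return eval(alternate_src, env)

print(execute_alternate("'division error'", "10 / 2"))  # 5.0
print(execute_alternate("'division error'", "10 / 0"))  # 'division error'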
although the examples given here are specific to apl2 the ideas behind them may be used on any apl system with sufficiently powerful event handling. alan graham dynamically configurable distributed objects michael j. lewis andrew s. grimshaw rehost of a real-time interrupt-driven simulation onto a dos/pc/ada environment using ood daniel f. waterhouse daniel l. dyke efficient implementation of bit-vector operation in common lisp henry g. baker high-integrity code generation for state-based formalisms we are attempting to create a translator for a formal state-based specification language (rsml-ε) that is suitable for use in safety-critical systems. for such a translator, there are two main concerns: the generated code must be shown to be semantically equivalent to the specification, and it must be fast enough to be used in the intended target environment. we address the first concern by providing a formal proof of the translation, and by keeping the implementation of the tool as simple as possible. the second concern is addressed through a variety of methods: (1) decomposing a specification into parallel subtasks, (2) providing provably- correct optimizations, and (3) making worst-case performance guarantees on the generated code. michael w. whalen upfront corporate linux journal staff practical results from measuring software quality robert b. grady software reuse - issues and prospectives (panel session) guillermo arango martin l. griss will tracz mansour zand framework for debugging domain-specific languages premkumar devanbu using kids as a tool support for vdm y. ledru search and strategies in opl opl is a modeling language for mathematical programming and combinatorial optimization. it is the first language to combine high-level algebraic and set notations from mathematical modeling languages with a rich constraint language and the ability to specify search procedures and strategies that are the essence of constraint programming. this paper describes the facilities available in opl to specify search procedures. it describes the abstractions of opl to specify both the search tree (search) and how to explore it (strategies). the paper also illustrates how to use these high-level constructs to implement traditional search procedures in constraint programming and scheduling. pascal van hentenryck laurent perron jean-françois puget lessons from the design of the eiffel libraries the nature of programming is changing. most of the software engineering literature still takes for granted a world of individual projects, where the sole aim is to produce specific software systems in response to particular requirements, little attention being paid to each system's relationship to previous or subsequent efforts. this implicit model seems unlikely to allow drastic improvements in software quality and productivity. such order-of-magnitude advances will require a process of industrialization, not unlike what happened in those disciplines which have been successful at establishing a production process based on the reuse of quality-standardized components. this implies a shift to a "new culture" [14] whose emphasis is not on projects but instead on components. the need for such a shift was cogently expressed more than 20 years ago by doug mcilroy in his contribution, entitled mass- produced software components [10], to the now-famous first conference on software engineering: software production today appears in the scale of industrialization somewhere below the more backward construction industries. 
i think its proper place is considerably higher, and would like to investigate the prospects for mass- production techniques in software. [...] my thesis is that the software industry is weakly founded [in part because of] the absence of a software components subindustry [...] a components industry could be immensely successful. although reuse has enjoyed modest successes since this statement was made, by all objective criteria mcilroy's prophecy has not been fulfilled yet; many technical and non-technical issues had to be addressed before reuse could become a reality on the scale he foresaw. (see [1] and [20] for a survey of current work on reuse.) one important development was needed to make this possible: the coming age of object-oriented technology, which provides the best known basis for reusable software construction. (that the founding document of object-oriented methods, the initial description of simula 67, was roughly contemporary with mcilroy's paper tends to confirm a somewhat pessimistic version of redwine and riddle's contention [18] that "it takes on the order of 15 to 20 years to mature a technology to the point that it can be popularized to the technical community at large.") much of the current excitement about object- oriented software construction derives from the growing realization that the shift is now technically possible. this article presents the concerted efforts which have been made to advance the cause of component-based software development in the eiffel environment [12, 17] through the construction of the basic eiffel libraries. after a brief overview of the libraries, this article reviews the major language techniques that have made them possible (with more background about eiffel being provided by the sidebar entitled "major eiffel techniques"); it then discusses design issues for libraries of reusable components, the use of inheritance hierarchies, the indexing problem, and planned developments. bertrand meyer process modelling and empirical studies of software evolution (workshop) rachel harrison martin shepperd john w. daly converting an ada 83 application to ada 95 this paper can also be found at the oc systems web site. oliver cole book review: programming with gnu software randyl britten the structuring of systems using upcalls david d. clark stimulus response machines: an ada-based graphic formalism for describing class and object behavior george w. cherry an algebraic semantics of subobjects jonathan g. rossie daniel p. friedman testing object-oriented software (abstract) edward berard spill-free parallel scheduling of basic blocks b. natarajan m. schlansker an explorative journey from architectural tests definition down to code tests execution _our research deals with the use of the software architecture (sa) as a reference model for the conformance testing of the implemented system with respect to its architectural specification, at the integration test level. having formerly identified an approach to derive architectural test plans, we investigate here the practical meaning of a high level test case defined in terms of architectural processes and messages, such as the ones derived by our approach. indeed, establishing a relation between sa tests (here formulated as paths derived over labeled transition systems expressing the sa dynamics) and concrete, executable tests is not obvious at all. 
in this paper we describe the steps to be followed to refine architectural tests into code level tests, and we do so in an empirical context by illustrating our hands-on experience in running some of the derived architectural tests on the trmcs case study. we present interesting insights and some preliminary attempts to generalize problems and solutions._ antonia bertolino paola inverardi henry muccini smalltalk in the business world (panel): the good, the bad, and the future yen-ping shan ken auer andrew j. bear jim adamczyk adele goldberg tom love dave thomas ada 95 and c++ their role in future object-oriented development (panel) john lewis bill loftus rajiv tewari specifying reusable components using z: realistic sets and dictionaries r. l. london k. r. milsted newlines and lexer states chris clark nrl invitational workshop on testing and proving: two approaches to assurance carl e. landwehr susan l. gerhart john mclean donald i. good nancy leveson affix grammar driven code generation affix grammars are used to describe the instruction set of a target architecture for purposes of compiler code generation. a code generator is obtained automatically for a compiler using attributed parsing techniques. a compiler built on this model can automatically perform most popular machine-dependent optimizations, including peephole optimizations. code generators based on this model demonstrate retargetability for the vax-11, iapx-86, z-8000, pdp-11, mc-68000, ns32032, fom, and ibm-370 architectures. mahadevan ganapathi charles n. fischer conference on computer-aided software engineering summary report tanehiro tatsuta advertisers index corporate linux journal staff dynamic compilation in jalapeño (panel session) ron cytron vivek sarkar making a difference - the impact of inspections paul sawyer alicia flanders dennis wixon multilanguage programming on the jvm: the ada 95 benefits the latest trend in our industry, "pervasive computing", predicts the proliferation of numerous, often invisible, computing devices embedded in consumer appliances connected to the ubiquitous internet. secure, reliable applications combined with simplicity of use will make or break a company's reputation in this market. the java "write once, run anywhere" paradigm, introduced by sun in the mid-90s, is embodied in a widely available computing platform targeting pervasive devices. although the java virtual machine was designed to support the semantics of the java programming language, it can also be used as a target for other languages. the ada 95 programming language is a good match for the java platform from the standpoint of its portability, security, reliability, and rich feature set. in this article we explain the features that have made ada the language of choice for software-critical applications and how these features complement the java programming language while increasing the overall reliability and flexibility of the java platform. franco gasperoni gary dismukes the coupling effect: fact or fiction fault-based testing strategies test software by focusing on specific, common types of errors. the coupling effect states that test data sets that detect simple types of faults are sensitive enough to detect more complex types of faults. this paper describes empirical investigations into the coupling effect over a specific domain of software faults. all the results from this investigation support the validity of the coupling effect.
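as a minimal sketch of this fault-based setting (the program, the seeded fault, and the tests are all hypothetical), a test set detects, or "kills", a mutant when some test distinguishes the mutant's output from the original's:

# illustrative mutation-testing check: does the test set detect (kill) a
# simple seeded fault?
def original(a, b):
    return a + b

def mutant(a, b):  # simple fault: operator replaced
    return a - b

def kills(tests, p, m):
    return any(p(*t) != m(*t) for t in tests)

weak_tests = [(0, 0)]            # cannot tell + from - at this point
good_tests = [(0, 0), (2, 3)]
print(kills(weak_tests, original, mutant))  # False
print(kills(good_tests, original, mutant))  # True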
the major conclusion from this investigation is that by explicitly testing for simple faults, we are also implicitly testing for more complicated faults. this gives us confidence that fault-based testing is an effective means of testing software. a. offutt inc: a language for incremental computations an incremental computation is one that is performed repeatedly on nearly identical inputs. incremental computations occur naturally in many environments, such as compilers, language-based editors, spreadsheets, and formatters. this article describes a proposed tool for making it easy to write incremental programs. the tool consists of a programming language, inc, and a set of compile-time transformations for the primitive elements of inc. a programmer defines an algorithm in inc without regard to efficient incremental execution. the transformations automatically convert this algorithm into an efficient incremental algorithm. inc is a functional language. the implementation of an inc program is a network of processes. each inc function is transformed into a process that receives and transmits messages describing changes to its inputs and outputs. we give an overview to the language and illustrate the incremental techniques employed by inc. we present the static and incremental complexity bounds for the primitive inc functions. we also present some example programs illustrating inc's flexibility. daniel m. yellin robert e. strom efficient interprocedural analysis for program parallelization and restructuring an approach to efficient interprocedural analysis for program parallelization and restructuring is presented. such analysis is needed for parallelizing loops which contain procedure calls. our approach captures call effect on data dependencies by propagating the precise information of array subscripts from the called procedure. this allows the optimizing compiler to choose an efficient yet precise data dependence test scheme depending on the complexity of array reference patterns. the other existing methods do not provide such flexibility, hence may suffer from either imprecision or inefficiency. the paper also discusses usage of classical summary information in several important transformations for program parallelization. experimental results are reported. zhiyuan li pen- chung yew software tools and environments steven p. reiss letters to the editor corporate linux journal staff extending ordinary inheritance schemes to include generalization the arrangement of classes in a specialization hierarchy has proved to be a useful abstraction mechanism in class-based object oriented programming languages. the success of the mechanism is based on the high degree of code reuse that is offered, along with simple type conformance rules. the opposite of specialization is generalization. we will argue that support of generalization in addition to specialization will improve class reusability. a language that only supports specialization requires the class hierarchy to be constructed in a top down fashion. support for generalization will make it possible to create super-classes for already existing classes, hereby enabling exclusion of methods and creation of classes that describe commonalties among already existing ones. we will show how generalization can coexist with specialization in class-based object oriented programming languages. furthermore, we will verify that this can be achieved without changing the simple conformance rules or introducing new problems with name conflicts. c. h. 
pedersen subset/g pl/i and the pl/i standard subset/g pl/i (g for general purpose) is a subset of full standard pl/i. both subset/g pl/i and standard pl/i are defined by standards issued by the american national standards institute. subset/g evolved in the late 1970's as a result of a growing realization that full pl/i was a remarkably effective (if much maligned) language but at the same time a difficult language to implement and to teach. subset/g was designed so as to preserve the most useful properties of pl/i while deleting features that were either little used, uneconomic to implement, or inappropriate to what we now know about good programming practice. full standard pl/i is a descendant of the f-level pl/i language originally developed by ibm in the early 1960's. one of the design objectives of the original language was that it should be applicable to scientific programming, commercial programming, and systems programming. part of the original rationale for this objective was that pl/i was intended to replace fortran, cobol, and assembly language. but there was also another reason: the growing number of applications that spanned more than one category. subset/g also has this design objective, although some other design objectives of early pl/i were dropped, notably the principle that any construct that could reasonably be given a meaning should be acceptable. that rationale remains a major reason why subset/g is a significant and useful language despite the many other languages that have emerged since pl/i was first designed. paul w. abrahams architectures with pictures raymond j. a. buhr ronald s. casselman component primer jon hopkins a flexible semantic analyzer for ada a technique for writing semantic analysis phases of compilers is described. the technique uses simula classes and virtual procedures to create a flexible and modular program. this technique is used to implement a semantic analysis phase of a compiler front end for the preliminary ada language. because the design is extremely flexible and modular, the front end is able to accommodate changes in the ada language and its semantics as they are published. several problems were encountered when implementing ada's semantics. these problems are described and their solutions presented. the front end also produces tcolada, the specified intermediate language for various ada compiler contracts. this output has been used by an experimental compiler back end. [9] the front end is written as two programs which perform lexical analysis, syntactic analysis, semantic analysis, and tcolada generation. the front end is coded in simula, and has been running on dec tops-10 and tops-20 systems since september 1979. mark s. sherman martha s. borkan algorithms for translating ada multitasking algorithms are presented for translating the multitasking constructs of ada into the language ada-m. the purpose of the translation is to study various implementations of ada tasking and their relative problems, merits, and efficiencies. the multiprocessing constructs of ada-m are lower level than those of ada and, hence, flexible enough to permit development of a variety of compilation techniques for ada tasking. ada-m is sufficiently high-level, however, to permit the implementations to be developed quickly and understandably. requirements for data structures, scheduling, and other pertinant elements of ada tasking compilation are identified by the translation. d. r. 
stevenson a framework for run-time systems and its visual programming language alan m. durham ralph e. johnson online data-race detection via coherency guarantees dejan perkovic peter j. keleher reveng: a cost-effective approach to reverse-engineering xavier a. debest rudiger knoop jurgen wagner the finnapl keyword editor most terminals in use do not support apl characters. although many microcomputers support soft fonts and some interpreters are equipped with a keyword option, and end-user apl applications should do without apl characters when in normal use, there still is a need for managing apl programming from terminals without apl characters. the authors propose a simple toolbox for the management of such situations in the vm/sp environment. juhani sandberg olli paavola tauno ylinen a comparative study of coarse- and fine-grained safe regression test-selection techniques regression test-selection techniques reduce the cost of regression testing by selecting a subset of an existing test suite to use in retesting a modified program. over the past two decades, numerous regression test-selection techniques have been described in the literature. initial empirical studies of some of these techniques have suggested that they can indeed benefit testers, but so far, few studies have empirically compared different techniques. in this paper, we present the results of a comparative empirical study of two safe regression test-selection techniques. the techniques we studied have been implemented as the tools dejavu and testtube; we compared these tools in terms of a cost model incorporating precision (ability to eliminate unnecessary test cases), analysis cost, and test execution cost. our results indicate that, in many instances, despite its relative lack of precision, testtube can reduce the time required for regression testing as much as the more precise dejavu. in other instances, particularly where the time required to execute test cases is long, dejavu's superior precision gives it a clear advantage over testtube. such variations in relative performance can complicate a tester's choice of which tool to use. our experimental results suggest that a hybrid regression test-selection tool that combines features of testtube and dejavu may be an answer to these complications; we present an initial case study that demonstrates the potential benefit of such a tool. john bible gregg rothermel david s. rosenblum how to structure a f90 procedure library clive page propagators and concurrent constraints chris laffra jan van den bos fortran 8x - its public review brian t. smith an alternative to asynchronous transfer of control in ada 9x charles j. antonelli richard a. volz a formal approach to program inversion in this paper, we introduce a formal approach to inverting programs, which is developed in [2]. we demonstrate its usefulness in programming by applying it to develop an in-place algorithm for the so-called lu-multiplication, which might be otherwise hard to find. wei chen volatility analysis framework for product lines evolution of a software intensive system is unavoidable. in fact, evolution can be seen as a part of the reuse process. during the evolution of the software asset, the major part of the system functionality is normally reused. so the key issue is to identify the volatile parts of the domain requirements. additionally, there is promise that tailored tool support may help in supporting evolution in software intensive systems.
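the dejavu/testtube comparison above turns on a simple cost trade-off: total regression cost is the selection analysis cost plus the cost of running the selected tests. the sketch below uses invented numbers (the fractions, times, and tool-style labels are assumptions, not the paper's data) to show why a coarse, cheap selector can win when tests are fast and a precise, expensive one wins when tests are slow.

```python
def regression_cost(analysis_time, selected_fraction, n_tests, avg_test_time):
    """Total cost = selection analysis + running the selected subset of tests."""
    return analysis_time + selected_fraction * n_tests * avg_test_time

coarse  = dict(analysis_time=5.0,  selected_fraction=0.60)   # coarse but cheap selector
precise = dict(analysis_time=60.0, selected_fraction=0.20)   # precise but expensive selector

for avg_test_time in (0.1, 10.0):          # cheap tests vs. expensive tests
    for name, tool in (("coarse", coarse), ("precise", precise)):
        cost = regression_cost(tool["analysis_time"], tool["selected_fraction"],
                               n_tests=500, avg_test_time=avg_test_time)
        print(f"avg test {avg_test_time:>5}s  {name:>7} selector: {cost:8.1f}s total")
```

with these made-up figures the coarse selector wins when a test takes 0.1 seconds and the precise one wins when a test takes 10 seconds, mirroring the variation the study reports.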
in this paper, we describe the volatility analysis method for product lines. this highly practical method has been used in multiple domains and is able to express and estimate common types of evolutional characteristics. the method is able to represent volatility at multiple levels and has the capacity to tie the volatility estimation to one product line member specification. we also briefly describe current tool support for the method. the main contribution of this paper is a volatility analysis framework that can be used to describe how requirements are estimated to evolve in the future. the method is based on the definition hierarchy framework. juha savolainen juha kuusela tool support for software architecture frances paulisch the derivation of microcode by symbolic execution given a description of a computer called the "target" and a microprocessor called the "host" we would like to generate a microprogram which when executed by the host will simulate the target. we accomplish this by first breaking the target down into a set of small segments. then, logical conditions for an appropriate microcode are generated for each segment by a combination of symbolic execution and simplification. when completed, the segments are assembled into a working code by solving the set of logical conditions. john wade ulrich the design, implementation, and evaluation of jade jade is a portable, implicitly parallel language designed for exploiting task-level concurrency. jade programmers start with a program written in a standard serial, imperative language, then use jade constructs to declare how parts of the program access data. the jade implementation uses this data access information to automatically extract the concurrency and map the application onto the machine at hand. the resulting parallel execution preserves the semantics of the original serial program. we have implemented jade as an extension to c, and jade implementations exist for shared-memory multiprocessors, homogeneous message-passing machines, and heterogeneous networks of workstations. in this article we discuss the design goals and decisions that determined the final form of jade and present an overview of the jade implementation. we also present our experience using jade to implement several complete scientific and engineering applications. we use this experience to evaluate how the different jade language features were used in practice and how well jade as a whole supports the process of developing parallel applications. we find that the basic idea of preserving the serial semantics simplifies the program development process, and that the concept of using data access specifications to guide the parallelization offers significant advantages over more traditional control-based approaches. we also find that the jade data model can interact poorly with concurrency patterns that write disjoint pieces of a single aggregate data structure, although this problem arises in only one of the applications. martin c. rinard monica s. lam an integrated tool environment for ada compiler validations michael tonndorf augmentation of object-oriented programming by concepts of abstract data type theory: the modpascal experience object-oriented programming and abstract data type (adt) theory have emerged from the same origin of computer science: the inability to deal efficiently with 'programming in the large' during the early seventies. each of the approaches has led to significant practical and theoretical results respectively.
nevertheless it is still unsatisfactory that up to now the mutual influence seems to be limited to more or less syntactical issues (e.g. the provision of packages, clusters, forms). in this paper we report on the object-oriented language modpascal that was developed as part of the integrated software development and verification (isdv) project. we show how the essence of concepts of adt theory such as algebraic specifications, enrichments, parameterized specifications or signature morphisms as well as their semantics can be consistently integrated in an imperative object-oriented language. furthermore, as the experience of using modpascal as target language of the isdv system has shown, we claim that without similar support of theoretical concepts techniques like formal specification of programs or algebraic verification lose their power and even applicability. walter g. olthoff exploiting non-uniform reuse for cache optimization claudia leopold the revival transformation the notion that a definition of a variable is dead is used by optimizing compilers to delete code whose execution is useless. we extend the notion of deadness to that of partial deadness, and define a transformation, the revival transformation, which eliminates useless executions of a (partially dead) definition by tightening its execution conditions without changing the set of uses which it reaches or the conditions under which it reaches each of them. lawrence feigen david klappholz robert casazza xing xue reliable object storage to support atomic actions brian m. oki barbara h. liskov robert w. scheifler ada validation := ada conformity assessment phil brashear the consequences of one's first programming language who has not seen programs written in one programming language that have the style of another language? having experienced "fortran with semicolons" and "c with a basic flavor" over the years, it occurred to me to wonder whether the programmer's first programming language had an effect on programming ability as profound as the effect of one's native language on one's thought patterns. over many years of programming, teaching programming, and debugging other people's programs, it seemed to me that something akin to the sapir-whorf hypothesis applied to programmers---especially those who have never been taught to abstract the development of algorithms from the development of programs. indeed, i began to worry about the consequences of loosing on the computer centers (and computer science departments) a horde of programmers whose total background is a rudimentary basic learned in elementary or high school from a teacher who has virtually no actual programming experience. richard l. wexelblat object-oriented mechanisms three groups of requirements were considered in this session: * object-oriented programming---or more specifically, the mechanisms of inheritance and polymorphism. * general references * finalisation. experience of object-oriented programming amongst the majority of delegates was fairly limited, so rather than attempt to assess the value of object-oriented mechanisms in themselves attention was focussed on their potential impact on the real-time requirements.
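the revival-transformation entry earlier on this page can be pictured with a minimal before/after example. this is an illustrative sketch, not code from the paper: the assignment to t is partially dead because it executes on every path but is used on only one, and tightening its execution condition removes the useless executions without changing the use it reaches (assuming the computation has no side effects).

```python
def expensive(x):
    """Stand-in for a costly, side-effect-free computation."""
    return x * x

# Before: the assignment to t executes on every call, but its value is
# used only when flag is true, so t is partially dead on the other path.
def before(x, flag):
    t = expensive(x)          # executed unconditionally
    if flag:
        return t + 1
    return 0

# After a revival-style transformation: the definition's execution
# condition is tightened to exactly the paths on which it is used.
def after(x, flag):
    if flag:
        t = expensive(x)      # now executed only when its value is needed
        return t + 1
    return 0

assert before(7, True) == after(7, True) and before(7, False) == after(7, False)
```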
the session began with a brief introduction of the requirements, and of the possible strategies for satisfying them in ada 9x, followed by a general discussion. colin atkinson optimization of array subscript range checks compile-time elimination of subscript range checks is performed by some optimizing compilers to reduce the overhead associated with manipulating array data structures. elimination and propagation, the two methods of subscript range check optimization, are less effective for eliminating global redundancies especially in while-loop structures with nonconstant loop guards. this paper describes a subscript range check optimization procedure that can eliminate more range checks than current methods. two transformations called inner-loop guard elimination and conservative expression substitution are introduced to enhance propagation of range checks in nested while-loops and to define a partial order on related range checks. global elimination is improved by considering range checks performed before control reaches a statement and after control leaves a statement. a unique feature of this method is the simplification of the available range-check analysis system for global elimination. jonathan m. asuru incremental testing of object-oriented class structures mary jean harrold john d. mcgregor kevin j. fitzpatrick beyond the hype (panel session): sequel to the trial of the gang of four neil harrison frank buschmann james coplien david ungar john vlissides tips for optimizing memory usage: how to make the most of your computer's memory. jeff tranter issues in supporting event-based architectural styles antonio carzaniga elisabetta di nitto david s. rosenblum alexander l. wolf experiences in implementing ada 9x protected records and requeue paul reed scalable on-the-fly detection of the first races in parallel programs jeong-si kim yong-kee jun software caching and computation migration in olden the goal of the olden project is to build a system that provides parallelism for general purpose c programs with minimal programmer annotations. we focus on programs using dynamic structures such as trees, lists, and dags. we demonstrate that providing both software caching and computation migration can improve the performance of these programs, and provide a compile-time heuristic that selects between them for each pointer dereference. we have implemented a prototype system on the thinking machines cm-5. we describe our implementation and report on experiments with ten benchmarks. martin c. carlisle anne rogers bigmac ii: a fortran language augmentation tool this paper describes the motivation, design, implementation, and some preliminary performance characteristics of bigmac ii, a macro definition capability for creating language enhancers and translators. bigmac ii enables the user to specify transformations through strex, a fortran-like language, which enables the specification of macros which are then used to interpretively alter incoming programs. bigmac ii is specially adapted to the processing of fortran programs. this paper shows how it can be used as a deprocedurizer (or flattener), a dialect-to-dialect translator, a portability and version control aid, and a device for creating language enhancements as sophisticated as new control structures and abstract data types. eugene w. myers leon j.
osterweil native code process-originated migration in a heterogeneous environment dynamic migration has been investigated in several research efforts as a vehicle for load sharing, resource sharing, communication overhead reduction, failure robustness, and several other contexts. this paper presents a summary of related work in dynamic migration, and suggests motivations for considering heterogeneous migration. next, a design for heterogeneous migration, along with its restrictions, is presented. a prototype implementation is described. conclusions are drawn and future work is suggested. charles m. shub letters to the editor corporate linux journal staff a brief look at extension programming before and now liisa räihä the architecture of a uml virtual machine current software development tools let developers model a software system and generate program code from the models to run the system. however, generating code and installing a non-trivial system induces a time delay between changing the model and executing it that makes rapid model prototyping awkward if not impossible. this paper presents the architecture of a virtual machine for uml that interprets uml models without any intermediate code-generation step. the paper shows how to embed uml in a metalevel architecture so that a key property of model-based systems, the causal connection between models and model instances, is guaranteed. with this architecture, changes to a model have immediate effects on its execution, providing users with rapid feedback about the model's structure and behavior. this approach supports model innovation better than today's code-generation approaches. dirk riehle steven fraleigh dirk bucka-lassen nosa omorogbe ordered mutation testing i. m. m. duncan d. j. robson a model for implementing euclid modules and prototypes richard c. holt david b. wortman hopt: a myopic version of the stochopt automatic file migration policy the stochopt automatic file migration policy (proposed by a.j. smith) minimizes the expected retention and recall costs of an arbitrarily sized file. we consider the application of the stochopt policy to a file system in which the file inter-reference time (irt) distributions are characterized by strictly monotonically decreasing hazard rates (sdhr) (also known as decreasing failure rates, dfr). we show that in this case the stochopt policy can be simply stated in terms of a scaled hazard rate, i.e., the hazard rate divided by the file size. such decreasing failure rate distributions have been used by smith to model empirically observed file inter-reference times. frank olken the efficiency of storage management schemes for ada programs rajiv gupta mary lou soffa book review: cgi developer's resource reuven lerner tm: a systematic methodology of software metrics lem o. ejiogu building distributed ada applications from specifications and functional components dennis l. doubleday mario r. barbacci charles b. weinstock michael j. gardner randall w. lichota the java developer's kit arman danesh performance of real/ix (tm) - fully preemptive real time unix borko furht j. parker d. grostick demise of the metacompiler in cmforth jay melvin temporal sequence learning and data reduction for anomaly detection the anomaly-detection problem can be formulated as one of learning to characterize the behaviors of an individual, system, or network in terms of temporal sequences of discrete data. we present an approach based on instance-based learning (ibl) techniques.
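the hopt/stochopt entry above reduces the migration decision to a size-scaled hazard rate. the toy sketch below, with an assumed pareto inter-reference-time model and an invented cost threshold, is only meant to show the shape of such a rule; it is not olken's actual derivation.

```python
def pareto_hazard(t, alpha=1.2, x_min=1.0):
    """Hazard rate of a Pareto(alpha, x_min) inter-reference time:
    h(t) = alpha / t for t >= x_min, which is strictly decreasing (DFR)."""
    return alpha / max(t, x_min)

def should_migrate(time_since_last_ref, file_size_blocks, threshold=1e-3):
    """Migrate when the size-scaled hazard rate (chance per unit time of an
    imminent re-reference, per block of space being held) drops below a
    cost-derived threshold."""
    scaled_hazard = pareto_hazard(time_since_last_ref) / file_size_blocks
    return scaled_hazard < threshold

# A large, long-idle file is evicted before a small, recently used one.
print(should_migrate(time_since_last_ref=5_000.0, file_size_blocks=800))   # True
print(should_migrate(time_since_last_ref=10.0,    file_size_blocks=4))     # False
```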
to cast the anomaly-detection task in an ibl framework, we employ an approach that transforms temporal sequences of discrete, unordered observations into a metric space via a similarity measure that encodes intra-attribute dependencies. classification boundaries are selected from an a posteriori characterization of valid user behaviors, coupled with a domain heuristic. an empirical evaluation of the approach on user command data demonstrates that we can accurately differentiate the profiled user from alternative users when the available features encode sufficient information. furthermore, we demonstrate that the system detects anomalous conditions quickly---an important quality for reducing potential damage by a malicious user. we present several techniques for reducing data storage requirements of the user profile, including instance-selection methods and clustering. an empirical evaluation shows that a new greedy clustering algorithm reduces the size of the user model by 70%, with only a small loss in accuracy. terran lane carla e. brodley tsl: task sequencing language david helmbold david luckham non-volatile memory for fast, reliable file systems mary baker satoshi asami etienne deprit john ousterhout margo seltzer handling crosscutting constraints in domain-specific modeling jeff gray ted bapty sandeep neema james tuck a distributed debugger for amoeba we describe a debugger that is being developed for distributed programs in amoeba. a major goal in our work is to make the debugger independent of the amoeba kernel. our design integrates many facilities found in other debuggers, such as execution replay, breakpointing, and an event-based view of the execution of the target program. this paper discusses the influence of amoeba's architecture on the attainability of our goals and the desired functionality of the debugger. we also consider such problems as how to deal with timeouts and interactions between the target program and its environment. i. j. p. elshoff on the type extensions of oberon-2 libero nigro preliminary experience from the dice system, a distributed incremental compiling environment the dice system is a highly integrated programming environment which provides programmer support in the case where the programming environment resides in a host computer and the program is running on a target computer that is connected to the host. such a system configuration is also suitable for remote debugging and maintenance of production versions of programs that have been installed in a user environment. the system contains tools such as a screen-oriented structure editor, a statement-level incremental compiler, a screen-oriented debugger and a program database. the debugger uses the same facility for pointing and display as does the editor, and it uses the incremental compiler for insertion of breakpoints and statement evaluation. most of these tools are automatically generated from compact descriptions. this paper describes some aspects of a prototype version of the system and gives some preliminary data on the performance. also, strategies for implementing portable programming environments are discussed and are exemplified by the dice system. peter fritzson optimistic concurrency control for abstract data types maurice herlihy software first: applying ada megaprogramming technology to target platform selection trades a. r. filarey w. e. royce r. rao p. schmutz l.
doan-minh the effects of solid state paging devices in a large time-sharing system this paper reports the results of some measurements taken on the effects two new solid state paging devices, the stc 4305 and the intel 3805, have on paging performance in the michigan terminal system at the university of michigan. the measurements were taken with a software monitor using various configurations of the two solid state devices and the fixed head disk, which they replace. measurements were taken both during regular production and using an artificial load created to exercise the paging subsystem. the results confirmed the expectation that the solid state paging devices provide shorter page-in waiting times than the fixed-head disk, and also pointed up some of the effects which their differing architectures have on the system. john sanguinetti a presentation and comparison of four information system development methodologies p mannino john reid reports john reid design of a machine-independent optimizing system for emulator development methods are described to translate a certain machine-independent intermediate language (iml) to efficient microprograms for a class of horizontal microprogrammable machines. the iml is compiled directly from a high-level microprogramming language used to implement a virtual instruction set processor as a microprogram. the primary objective of the iml-to-host machine interface design is to facilitate language portability. transportability is accomplished by use of a field description model and a macro expansion table which describe the host machine to the translator system. register allocation scheme and control flow analysis are employed to allocate the symbolic variables of the iml to the general-purpose registers of the host machine. a set of 5-tuple microoperations (function, input, output, field, phase) is obtained with the aid of the field description model. then a compaction algorithm is used to detect the parallelism of microoperations and to generate suboptimal code for a horizontal microprogrammable machine. the study concludes with a description of the effects of the above methods upon the quality of microcode produced for a specific commercial computer. perng-ti ma t. g. lewis genie forth roundtable paul thomas eiffel: a case tool supporting object-oriented software construction david rine implementation of a prototype cais environment p carr r stevenson j alea j berthold g croucher visicola, a model and a language for visibility control in programming languages michael klug object-oriented programming techniques with ada: an example p. collard system administration: pgfs: the postgres file system brian bartholomew disk arm movement in anticipation of future requests when a disk drive's access arm is idle, it may not be at the ideal location. in anticipation of future requests, movement to some other location may be advantageous. the effectiveness of anticipatory disk arm movement is explored. various operating conditions are considered, and the reduction in seek distances and request response times is determined for them. suppose that successive requests are independent and uniformly distributed. by bringing the arm to the middle of its range of motion when it is idle, the expected seek distance can be reduced by 25 percent. nonlinearity in time versus distance can whittle that 25 percent reduction down to a 13 percent reduction in seek time. 
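the 25 percent figure in the disk-arm entry above can be checked directly for the idealized case it assumes, independent requests uniform on a seek range scaled to [0, 1]:

```latex
\[
  \mathbb{E}\,|X-Y| \;=\; \int_0^1\!\!\int_0^1 |x-y|\,dx\,dy \;=\; \tfrac13
  \qquad\text{(arm left at the previous request)},
\]
\[
  \mathbb{E}\,\bigl|X-\tfrac12\bigr| \;=\; \int_0^1 \bigl|x-\tfrac12\bigr|\,dx \;=\; \tfrac14
  \qquad\text{(arm parked at the middle)},
\]
\[
  \frac{\tfrac13-\tfrac14}{\tfrac13} \;=\; \tfrac14 \;=\; 25\%.
\]
```

leaving the arm at the previous request gives an expected seek of 1/3 of the range, parking it at the middle gives 1/4, and the relative saving is exactly one quarter; the nonlinear seek-time effects mentioned next are what shrink this to the 13 percent figure.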
nonuniformity in request location, nonpoisson arrival processes, and high arrival rates can whittle the reduction down to nothing. however, techniques are discussed that maximize those savings that are still possible under those circumstances. various systems with multiple arms are analyzed. usually, it is best to spread out the arms over the disk area rather than to bring both arms to the middle. richard p. king a module system for scheme this paper presents a module system designed for large-scale programming in scheme. the module system separates specifications of objects from their implementations, permitting the separate development, compilation, and testing of modules. the module system also includes a robust macro facility. we discuss our design goals, the design of the module system, implementation issues, and our future plans. pavel curtis james rauen thinking objectively: an introduction to software stability mohammed e. fayad adam altman concurrent objects in ada 95 although implicit parallelism is one of the most important advantages besides methodological and software engineering aspects of the object-oriented paradigm, only a few papers are directly dedicated to object-oriented concurrency. based on [boyd87], [clark87], [collard89], and [baker91], each of which appeared in the acm ada letters, this paper addresses the implementation of concurrent object classes using ada 95. it offers a uniform template of the structure of concurrent object subclasses leading to polymorphic methods. finally, a brief remark is made on the importance of generic object classes. hans loeper amro khattab peter neubert a paradigm for distributed debugging three critical problems associated with distributed debugging are controlling the debugging process in the absence of a global clock; maintaining transparency so that the debugger does not change the order or timing of events; and reproducing an execution sequence to be able to verify that a fault has been corrected. a paradigm is put forward that successfully addresses these three problems. to demonstrate the feasibility of this paradigm, an instantiation has been constructed. a description is given of the resulting debugger for a dataflow language. nancy j. wahl stephen r. schach programming languages: towards greater commonality brian l. meek iso/iec 10514 - 1, the standard for modula-2: process aspects c. pronk m. schönhacker compiling optimized array operations at run-time experiments were performed for apl functions on a vax-750 running unix to compare the execution times of two types of code, interpreted code and run-time compiled code, for the dyadic function subtract. the data type and size varied over integer, real, one element, and 5x5x5x5x5 (=3125) element arrays. the experiment showed a range of more than 2.5:1 in execution times of interpretive code to compiled code. thomas w. christopher ralph w. wallace metrics for targeting candidates for reuse: an experimental approach stephen r. schach xuefeng yang constraining control continuations, when available as first-class objects, provide a general control abstraction in programming languages. they liberate the programmer from specific control structures, increasing programming language extensibility. such continuations may be extended by embedding them in functional objects. this technique is first used to restore a fluid environment when a continuation object is invoked. we then consider techniques for constraining the power of continuations in the interest of security and efficiency.
domain mechanisms, which create dynamic barriers for enclosing control, are implemented using fluids. domains are then used to implement an unwind-protect facility in the presence of first-class continuations. finally, we demonstrate two mechanisms, wind-unwind and dynamic-wind, that generalize unwind-protect. daniel p. friedman christopher t. haynes effective software architecture design: from global analysis to uml descriptions _it is now generally accepted that separating software architecture into multiple views can help in reducing complexity and in making sound decisions about design trade-offs. our four views are based on current practice; they are loosely coupled, and address different engineering concerns [1]. this tutorial will teach you how global analysis can improve your design, and how to use uml to describe these views. you will learn: (1) the purpose of having separate software architecture views, (2) the difference between using uml for software architecture and the use of uml for designing oo implementations, (3) how to apply global analysis to analyze factors that influence the architecture and to develop strategies that guide the design, (4) the importance of designing for anticipated change to produce more maintainable architectures, and (5) how to incorporate software architecture design in your software process._ _this tutorial is aimed at experienced software engineers, architects, and technical managers. it is assumed that participants know the basic uml diagrams. experience in developing models and software design is helpful._ robert l. nord daniel j. paulish dilip soni christine hofmeister a practical method for lr and ll syntactic error diagnosis and recovery this paper presents a powerful, practical, and essentially language- independent syntactic error diagnosis and recovery method that is applicable within the frameworks of lr and ll parsing. the method generally issues accurate diagnoses even where multiple errors occur within close proximity, yet seldom issues spurious error messages. it employs a new technique, parse action deferral, that allows the most appropriate recovery in cases where this would ordinarily be precluded by late detection of the error. the method is practical in that it does not impose substantial space or time overhead on the parsing of correct programs, and in that its time efficiency in processing an error allows for its incorporation in a production compiler. the method is language independent, but it does allow for tuning with respect to particular languages and implementations through the setting of language-specific parameters. michael g. burke gerald a. fisher the development of a modula-2 validation suite k. n. king david a. crick composite design patterns dirk riehle efficient periodic execution of ada tasks the ada language has been faulted for not providing an efficient and reliable mechanism to support periodic execution of tasks. formulations that rely on delay statements in each periodic task or on user-supplied dispatcher tasks are unreliable or inefficient. this paper presents a simple model based on periodic pseudo- interrupts. this model can be efficiently implemented within the confines of the 1983 ada language standard. thomas j. quiggle towards fully abstract semantics for local variables the store model of halpern-meyer-trakhtenbrot is shown---after suitable repair ---to be a fully abstract model for a limited fragment of algol in which procedures do not take procedure parameters. 
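the periodic-execution entry above hinges on the difference between relative delays, which let completion-time jitter accumulate, and releases computed from absolute times. the sketch below is a language-neutral illustration in python, not the paper's ada mechanism; the function and parameter names are invented.

```python
import time

def run_periodic(body, period_s, iterations):
    """Drift-free periodic execution: each release time is computed from the
    previous release, so jitter in body() does not accumulate (the analogue of
    scheduling against absolute deadlines rather than sleeping a relative delay)."""
    next_release = time.monotonic()
    for _ in range(iterations):
        body()
        next_release += period_s                              # absolute next release
        time.sleep(max(0.0, next_release - time.monotonic()))

run_periodic(lambda: print("tick", round(time.monotonic(), 2)),
             period_s=0.1, iterations=5)
```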
a simple counter-example involving a parameter of program type shows that the model is not fully abstract in general. previous proof systems for reasoning about procedures are typically sound for the hmt store model, so it follows that theorems about the counter-example are independent of such proof systems. based on a generalization of standard cpo based models to structures called locally complete partial orders (lcpo's), improved models and stronger proof rules are developed to handle such examples. a. r. meyer k. sieber response to a note on structured interrupts ted hills the role of opaque types to build abstractions a. corradi l. leonardi interactive design of structures: a program for everyone in this paper we present a program for the design and analysis of planar member frame structures. the intuitive graphic user interface covers all steps necessary for designing a load bearing structure, from geometry design to system optimization. this project is the first step in converting from a pool of apl*plus ii programs towards interactive windows applications written in dyalogapl/w, and was carried out by dipl.-ing. johann riebenbauer, institute of structural design (institut für tragwerkslehre) at the technical university of graz, austria, and the group of civil engineering (ig für bauwesen) zenkner & handel. the computational section of the described application is composed of proven space frame algorithms, which were developed by prof. peter klement over the last three decades in apl, and which represent the backbone of the new graphic user interface (gui) in dyalogapl/w. (in the appendix the reader can find a small vocabulary, which can be used to translate the german words in the illustrations.) johann riebenbauer joachim hoffmann automatic incremental state saving darrin west kiran panesar suif: an infrastructure for research on parallelizing and optimizing compilers robert p. wilson robert s. french christopher s. wilson saman p. amarasinghe jennifer m. anderson steve w. k. tjiang shih-wei liao chau-wen tseng mary w. hall monica s. lam john l. hennessy designing reusable designs (panel session): experiences designing object-oriented frameworks allen wirfs-brock john vlissides ward cunningham ralph johnson lonnie bollette linux web server toolkit a review of the "linux web server toolkit", a book that takes the reader completely through the procedure of building a web server keith p. de solla the design of the rexx language m cowlishaw ada and asis: justification of differences in terminology and mechanisms sergey rybin alfred strohmeier best of technical support corporate linux journal staff modulo scheduling of loops in control-intensive non-numeric programs daniel m. lavery wen-mei w. hwu genericity versus inheritance reconsidered: self-reference using generics as shown by the work of bertrand meyer, it is possible to simulate genericity using inheritance, but not vice-versa. this is because genericity is a parameterization mechanism with no way to deal with the polymorphic typing introduced using inheritance. nevertheless, if we focus on the use of inheritance as an implementation technique, its key feature is the dynamic binding of self-referential operation calls. this turns out to be basically a parameterization mechanism that can in fact be simulated using generics and static binding. and for some applications this approach may actually be of more than academic interest.
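the genericity-versus-inheritance entry above is easy to see in miniature: the dynamically bound self-call of a template method can be recovered by passing the varying step in as an explicit parameter. python stands in here for a statically bound generic facility, and the classes and names are purely illustrative.

```python
# Inheritance version: the self-call to step() is dynamically bound.
class Algorithm:
    def run(self, x):
        return self.step(x) + 1      # self-referential call, resolved at run time
    def step(self, x):
        return x

class Doubling(Algorithm):
    def step(self, x):
        return 2 * x

# Parameterised version: the same variability expressed without dynamic
# binding, by making the 'self-referential' operation an explicit parameter.
class ParamAlgorithm:
    def __init__(self, step):
        self.step = step             # bound once, at instantiation
    def run(self, x):
        return self.step(x) + 1

assert Doubling().run(3) == ParamAlgorithm(lambda x: 2 * x).run(3) == 7
```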
ed seidewitz a bayesian differential debugging model for software reliability an assumption commonly made in early models of software reliability is that the failure rate of a program is a constant multiple of the number of bugs remaining. this implies that all bugs have the same effect upon the overall failure rate. the assumption is challenged and an alternative proposed. the suggested model results in earlier bug-fixes having a greater effect than later ones (the worst bugs show themselves earlier and so are fixed earlier), and the dfr property between bug-fixes (confidence in programs increases during periods of failure-free operation, as well as at bug-fixes). the model shows a high degree of mathematical tractability, and allows a range of reliability measures to be calculated exactly. predictions of total execution time to achieve a target reliability are obtained. b. littlewood an efficient garbage compaction algorithm johannes j. martin software architecture classification for estimating the cost of cots integration daniil yakimovich james m. bieman victor r. basili verified compilation in micro-gypsy w. young test plan methodology reza pazirandeh a lisp machine lisp is the second oldest computer-programming language still in active use. our implementation is based on a powerful microprogrammed processor designed specifically for lisp. this processor supports a tagged macro-architecture; it manipulates items which have a built-in data-type field. the system supports several important new storage-management features, including a real-time garbage collector with hardware assist (using the basic algorithm of baker). the software itself is written in lisp to a much larger extent than in previous systems. in fact, there are only two levels in which code is written: lisp and microcode. among other things this improves the consistency of system interfaces. the system design incorporates the personal computer philosophy. we believe the personal computer will predominate in the future since it is preferable to time-sharing in most cases and technological trends are greatly reducing its cost penalty. in the case of very large programs, the personal computer can be cost-effective today, due to the phenomenon of thrashing encountered in time-sharing systems. richard d. greenblatt thomas f. knight john t. holloway david a. moon a modula-2 implementation of csp m collado r morales j j moreno design of guis from a programming perspective this paper presents a design method for graphical user interfaces. the method is based on semantic specification. based on formalized style rules and guidelines, a user interface design proposal is derived from the functionality of an application. the advantages of this method are: the automation of parts of the design process, automatic design evaluation, and automatic mapping to multiple user interface toolkits. the paper proposes a specification language for graphical user interfaces. this language will ease the transition from the functional design of an application to the user interface design by a semantically driven designer of user interfaces. the paper gives an ada example of the use of the language. ole lauridsen the perils of reconstructing architectures s. jeromy carrière rick kazman adapters and binders: overcoming problems in the design and implementation of the c++-stl volker simonis letters to the editor corporate linux journal staff using xinetd jose demonstrates how to start configuring and tweaking xinetd.
jose nazario anonymous "things" used as locals l. morgenstern pattern-based reverse-engineering of design components rudolf k. keller reinhard schauer sebastien robitaille patrick page using key object opportunism to collect old objects barry hayes assessing software libraries by browsing similar classes, functions and relationships amir michail david notkin software development environment based on hypernet nenad marovac task concurrency management methodology to schedule the mpeg4 im1 player on a highly parallel processor platform this paper addresses the concurrent task management of complex multi-media systems, like the mpeg4 im1 player, with emphasis on how to derive energy-cost _vs_ time- budget curves through task scheduling on a multi-processor platform. starting from the original "standard" specification, we extract the concurrency originally hidden by implementation decisions in a "grey-box" model. then we have applied two high-level transformations on this model to improve the task-level concurrency. finally, by scheduling the transformed task-graph, we have derived energy-cost _vs_ time-budget curves. these curves will be used to get globally optimized design decisions when combining subsystems into one complete system or to be used by a dynamic scheduler. the results on the mpeg4 im1 player confirm the validity of our assumptions and the usefulness of our approach. chun wong paul marchal peng yang an approach to program behavior modeling and optimal memory control percy tzelnic izidor gertner the mirage rapid interface prototyping system james e. mcdonald paul d. j. vandenberg melissa j. smartt guidelines for embedded software documentation nenad marovac programming languages and systems for prototyping concurrent applications concurrent programming is conceptually harder to undertake and to understand than sequential programming, because a programmer has to manage the coexistence and coordination of multiple concurrent activities. to alleviate this task several high-level approaches to concurrent programming have been developed. for some high-level programming approaches, prototyping for facilitating early evaluation of new ideas is a central goal. prototyping is used to explore the essential features of a proposed system through practical experimentation before its actual implementation to make the correct design choices early in the process of software development. approaches to prototyping concurrent applications with very high-level programming systems intend to alleviate the development in different ways. early experimentation with alternate design choices or problem decompositions for concurrent applications is suggested to make concurrent programming easier. this paper presents a survey of programming languages and systems for prototyping concurrent applications to review the state of the art in this area. the surveyed approaches are classified with respect to the prototyping process. wilhelm hasselbring towards a compiler front-end for ada this paper discusses the current development of a compiler front-end for ada at the university of karlsruhe. the front-end is independent of the target- machine and will compile ada into an intermediate language aida, essentially an attributed structure tree. the front-end is written in its own language using ada-0 and lis as preliminary compilers for the bootstrap. the compiler in its present form relies heavily on the formal definition of ada which is under development at cii and iria. 
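the energy-cost versus time-budget curves in the task-concurrency entry earlier on this page are, in effect, pareto frontiers over candidate schedules. the sketch below uses invented (time, energy) pairs to show the pruning step; it is not the authors' scheduler.

```python
def pareto_front(points):
    """Keep the (time, energy) pairs not dominated by any other candidate,
    i.e. those for which no other schedule is both at least as fast and
    at least as cheap."""
    return sorted(p for p in points
                  if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points))

# Hypothetical (time_budget_ms, energy_mJ) pairs for candidate schedules.
schedules = [(10, 90), (12, 70), (15, 40), (15, 55), (20, 38), (25, 37), (11, 95)]
print(pareto_front(schedules))   # [(10, 90), (12, 70), (15, 40), (20, 38), (25, 37)]
```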
gerhard goos georg winterstein getting to know gdb mike loukides andy oram output from generic packages ada83 suffers from being a non-polymorphic statically bound language in that the types of data need to be known before run-time. the generic mechanism helps to overcome this limitation in that a non-type specific package can be written and instantiated for several actual data types, depending on the generic formal parameter chosen. one problem that is not solved by the generic mechanism is that of input and output of data values. since actual data types are represented in different ways, there is no single routine which can be called upon to get or put items of a generic formal type. this paper outlines a possible solution to this problem. paul slater an automated documentation system for a large scale manufacturing engineering research project the automated manufacturing research facility (amrf) being implemented at the national bureau of standards (nbs) involves the development of a software system integrating the various information processing, communications and data storage functions required in a totally automated manufacturing environment. the project contains a five year software development effort by more than thirty research staff organizationally partitioned into many units working concurrently on different parts of the system and supplemented by software acquired through procurement or contractual effort. as the facility is implemented in modular blocks, new software development will be undertaken as research into factory automation technology continues. in such a research environment, a system is needed for maintaining documentation for the software life cycle for the following purposes: (1) tracking progress of individual module development, (2) allowing for the availability of up-to-date information on module description to other members of the project who need to interface to a given module, (3) developing a cross reference of module and data element relationships, (4) providing a documentation format that includes specific reporting requirements to upper level management, and (5) generating working level documentation that can be easily modified and serve as an up-to-date reference for anyone with interest in the project or special subproject. this paper describes the structure of a software system that is functioning in a research environment and satisfies the following design constraints: (1) provide minimum additional effort on the part of the system developers, and (2) reflect the structured thinking of the developer during the software life cycle.
the key to such a management system is the early documentation of the system modules identified through a decomposition of all functions from the integrated system down to the lowest level subroutines. each component module of the system has a group of seven documents to track it through the life cycle. the decomposition provides each developer with an understanding of where his module fits into the overall system and permits module documentation to be produced that can be limited to just a few pages. the documenter is encouraged to enter information as it becomes known so that the system always reflects the latest status of the research effort. the automated system is being developed as an on-line interactive menu-driven system that allows the developer to enter information about his modules. the final system version will be capable of the following functions: (1) full- screen editing, (2) menu driven interface between the user and the system, (3) cross referencing between module and data elements, (4) generation of reports, (5) generation of work schedules, (6) monitoring of system milestones, (7) checking for appropriate information, and (8) managing a data element dictionary. howard m. bloom carl e. wenger the design of a virtual machine for ada an implementation of ada should be based on a machine-independent translator generating code for a virtual machine, which can be realised on a variety of machines. this approach, which leads to a high degree of compiler portability, has been very successful in a number of recent language implementation projects and is the approach which has been specified by the u.s. army and air force in their requirements for ada implementations. this paper discusses the rationale, requirements and design of such a virtual machine for ada. the discussion concentrates on a number of fundamental areas in which problems arise: basic virtual machine structure, including storage structure and addressing; data storage and manipulation; flow of control; subprograms, blocks and exceptions; and task handling. l. j. groves w. j. rogers module weakness: a new measure yogesh singh pradeep bhatia forcing the completion of abnormal tasks d. papay code generation for fixed-point dsps this paper examines the problem of code-generation for digital signal processors (dsps). we make two major contributions. first, for an important class of dsp architectures, we propose an optimal o(n) algorithm for the tasks of register allocation and instruction scheduling for expression trees. optimality is guaranteed by sufficient conditions derived from a structural representation of the processor instruction set architecture (isa). second, we develop heuristics for the case when basic blocks are directed acyclic graphs (dags). guido araujo sharad malik dynamically displaying a pascal program in color this paper describes a method of using color to display the actual structure of a pascal program on a color monitor. this enhancement not only increases a programmer's understanding of the code, but also aids in detecting common structural errors. the paper identifies several structures deserving of color and the properties that must be adhered to when assigning colors to these structures. a simple coloring scheme illustrates this discussion. the last section describes enhancements and directions for future research. john f. cigas "=" considered harmful dean w. 
gonzalez a methodology for controlling the size of a test suite this paper presents a technique to select a representative set of test cases from a test suite that provides the same coverage as the entire test suite. this selection is performed by identifying, and then eliminating, the redundant and obsolete test cases in the test suite. the representative set replaces the original test suite and thus, potentially produces a smaller test suite. the representative set can also be used to identify those test cases that should be rerun to test the program after it has been changed. our technique is independent of the testing methodology and only requires an association between a testing requirement and the test cases that satisfy the requirement. we illustrate the technique using the data flow testing methodology. the reduction that is possible with our technique is illustrated by experimental results. m. jean harrold rajiv gupta mary lou soffa graphical applications using metacard how to write graphical applications using metatalk, a metacard scripting language scott raney hardware-software co-synthesis of fault-tolerant real-time distributed embedded systems santhanam srinivasan niraj k. jha a compilation of automatic differentiation tools presented at the 1995 international convention on industrial and applied mathematics, hamburg chris bischof fred dilley as strong as possible mobility (poster session) an executing thread, in an object oriented programming language, is spawned, directly or indirectly, by a main process. this in turn gets its instructions from a primary class. in java there is no close coupling of a thread and the objects from which they were created. the use of a container abstraction allows us to group threads and their respective objects into a single structure. a container that holds threads whose variables are all housed within the container is a perfect candidate for strong migration. to achieve this we propose a combination of three techniques to allow the containers to migrate in a manner that approaches strong mobility yet does not resort to retaining bindings to resources across distant and unreliable networks. tim walsh paddy nixon simon dobson a survey of process migration mechanisms jonathan m. smith practical uses of operators in sharp apl/hp at apl86 in manchester, the language features of sharp apl/hp on a hewlett packard range of minicomputers were introduced in a paper written by this author, entitled 'apl procedures (user defined operators, functions and token strings)'. this is the first of several papers which have been planned to supplement this introductory work. the models of several new and existing operators are described in detail, all of which may be directly executed in sharp apl/hp. the models take advantage of various features of sharp apl/hp including extended assignment and procedure arrays [1] (often referred to as function arrays in their most common form). the paper will pursue a comparison of the less general implementation of operators in apl2, with special reference to ed eusebi's pioneering work in the field of 'operators for program control and recursion' [2 3], wherein an alternative definition for several of these operators will be proposed. 
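the test-suite-size entry above relies only on an association between testing requirements and the test cases that satisfy them, and then removes redundant tests. the sketch below is a generic greedy reduction in that spirit (the paper's own heuristic differs in detail), and the requirement names and test identifiers are made up.

```python
def representative_set(requirements_to_tests):
    """Greedy reduction: repeatedly pick the test covering the most still-unmet
    requirements until every requirement satisfied by some test is covered."""
    uncovered = {r for r, tests in requirements_to_tests.items() if tests}
    tests_to_reqs = {}
    for r, tests in requirements_to_tests.items():
        for t in tests:
            tests_to_reqs.setdefault(t, set()).add(r)
    chosen = []
    while uncovered:
        best = max(tests_to_reqs, key=lambda t: len(tests_to_reqs[t] & uncovered))
        chosen.append(best)
        uncovered -= tests_to_reqs[best]
    return chosen

# Hypothetical def-use (data-flow) requirements and the tests exercising them.
mapping = {"du1": {"t1", "t2"}, "du2": {"t2"}, "du3": {"t3"}, "du4": {"t2", "t3"}}
print(representative_set(mapping))   # ['t2', 't3'] gives the same coverage with fewer tests
```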
the paper will then proceed to describe 'operators applying to enclosed arrays' (for example a pervasive operator) and various 'mathematical operators' (instances of which are the power and transitive closure operators), always highlighting these models with examples of their practical application. robert hodgkinson deterministic replay of java multithreaded applications jong-deok choi harini srinivasan frangipani: a scalable distributed file system chandramohan a. thekkath timothy mann edward k. lee letters to the editor corporate linux journal staff a highly integrated tool set for program development support this paper describes the design of the integrated user interface of the software development environment ipsen (integrated programming support environment). we explain the characteristic features of the ipsen user interface, namely the structured layout of the screen, the command-driven tool activation, and especially the highly integrated use of the ipsen tool set. we demonstrate those features by taking a sample set of tools of the ipsen environment. that tool set supports all the programming-in-the-small activities within ipsen. finally, we sketch the realization of two prototypes running on an ibm-at and a net of sun workstations. gregor engels thorsten janning wilhelm schäfer mode: a uims for smalltalk while the model-view-controller (mvc) framework has contributed to many aspects of user interface development in smalltalk, interfaces produced with mvc often have highly coupled model, view, and controller classes. this coupling and the effort required to use mvc make user interface creation a less effective aspect of smalltalk. the mode development environment (mode) is a user interface management system (uims) which addresses the above issues. mode is composed of two major components: the mode framework and the mode composer. the mode framework accommodates an orthogonal design which decouples the user interface components and increases their reusability. the mode composer reduces the effort of using mode by providing a direct-manipulation user interface to its users. this paper discusses the importance of orthogonality and illustrates its incorporation into the design of mode. a comparison of the mode framework and the mvc framework is included. yen-ping shan synchronization problems solvable by generalized pv systems peter b. henderson yechezkel zalcstein applying reusability to software process definition j. mogilensky d. stipe some comments on "pitfalls in prolog programming" w f clocksin using predicate abstraction to reduce object-oriented programs for model checking while it is becoming more common to see model checking applied to software requirements specifications, it is seldom applied to software implementations. the automated software engineering group at nasa ames is currently investigating the use of model checking for actual source code, with the eventual goal of allowing software developers to augment traditional testing with model checking. because model checking suffers from the state-explosion problem, one of the main hurdles for program model checking is reducing the size of the program. in this paper we investigate the use of abstraction techniques to reduce the state-space of a real-time operating system kernel written in c++. we show how informal abstraction arguments could be formalized and improved upon within the framework of predicate abstraction, a technique based on abstract interpretation. 
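as an aside to the predicate-abstraction entry above (the entry resumes below): independent of the authors' tool, the basic move is to replace concrete state by the truth values of a few predicates and to update those booleans conservatively, choosing nondeterministically when an operation's effect cannot be decided from the predicates alone. a minimal hypothetical java sketch over the single predicate count > 0, with invented names:

import java.util.Random;

// A concrete counter and its boolean abstraction over the predicate p: count > 0.
public class CounterAbstraction {
    static class Counter {                    // concrete model
        int count = 0;
        void inc() { count++; }
        void dec() { if (count > 0) count--; }
        boolean isEmpty() { return count == 0; }
    }

    static class AbstractCounter {            // abstract model: only p is kept
        boolean p = false;                    // initially count == 0
        static final Random choice = new Random();
        void inc() { p = true; }              // count > 0 certainly holds afterwards
        void dec() {
            // decrementing a positive count may or may not leave it positive,
            // so the abstraction picks either outcome (the conservative choice)
            if (p) p = choice.nextBoolean();
        }
        boolean isEmpty() { return !p; }
    }

    public static void main(String[] args) {
        AbstractCounter a = new AbstractCounter();
        a.inc();
        a.dec();
        System.out.println("possibly empty: " + a.isEmpty());
    }
}

a model checker explores both outcomes of the nondeterministic choice, which is what makes the abstraction an over-approximation of the concrete behaviours.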
we introduce some extensions to predicate abstraction that allow it to be used within the class-instance framework of object-oriented languages. we then demonstrate how these extensions were integrated into an abstraction tool that performs automated predicate abstraction of java programs. william visser seungjoon park john penix two eiffel implementations dan wilder efficient implementation of concurrent object-oriented programs l. v. kale fast i/o efficient file processing apl is a very powerful language in array processing. when properly combined with file processing it can improve efficiency while dramatically reducing cpu and connect time. this paper will illustrate a technique that has been implemented at the travelers insurance company using ibm's apl2 product running under tso mvs/xa. lori d. mcnichols linux apprentice: shell functions and path variables part 1 a description of a set of shell utilities to simplify the maintenance of your path variables. stephen collyer an automatic object-oriented parser generator for ada although many parser generator tools (aka compiler compilers) are available, very few create a parse tree. instead, the user must add actions to the grammar to create the parse tree. this is a tedious, mechanical task, which could easily be automated. we propose a simple scheme to map a grammar to an object-oriented hierarchy, and provide a tool, called adagoop, that creates lexer and parser specifications complete with all of the actions necessary to create the parse tree. martin c. carlisle structured programming with limited private types in ada: nesting is for the soaring eagles henry g. baker ada compiler evaluation capability (acec) data analysis: an overview air force systems command impacts of life cycle models on software configuration management alan m. davis edward h. bersoff integrating noninterfering versions of programs the need to integrate several versions of a program into a common one arises frequently, but it is a tedious and time-consuming task to integrate programs by hand. to date, the only available tools for assisting with program integration are variants of text-based differential file comparators; these are of limited utility because one has no guarantees about how the program that is the product of an integration behaves compared to the programs that were integrated. this paper concerns the design of a semantics-based tool for automatically integrating program versions. the main contribution of the paper is an algorithm that takes as input three programs a, b, and base, where a and b are two variants of base. whenever the changes made to base to create a and b do not "interfere" (in a sense defined in the paper), the algorithm produces a program m that integrates a and b. the algorithm is predicated on the assumption that differences in the behavior of the variant programs from that of base, rather than differences in the text, are significant and must be preserved in m. although it is undecidable whether a program modification actually leads to such a difference, it is possible to determine a safe approximation by comparing each of the variants with base. to determine this information, the integration algorithm employs a program representation that is similar (although not identical) to the dependence graphs that have been used previously in vectorizing and parallelizing compilers.
the algorithm also makes use of the notion of a program slice to find just those statements of a program that determine the values of potentially affected variables. the program-integration problem has not been formalized previously. it should be noted, however, that the integration problem examined here is a greatly simplified one; in particular, we assume that expressions contain only scalar variables and constants, and that the only statements used in programs are assignment statements, conditional statements, and while-loops. susan horwitz jan prins thomas reps requirements for an imperative language to host logic programming in a seamless way giuseppe callegarin calculation and use of an environment's characteristic software metric set since both cost/quality goals and production environments differ, this study presents an approach for customizing a characteristic set of software metrics to an environment. the approach is applied in the software engineering laboratory (sel), a nasa goddard production environment, to 49 candidate process and product metrics of 652 modules from six (51,000 - 112,000 line) projects. for this particular environment, the method yielded the characteristic metric set {source lines, fault correction effort per executable statement, design effort, code effort, number of i/o parameters, number of versions}. the uses examined for a characteristic metric set include forecasting the effort for development, modification, and fault correction of modules based on historical data. victor r. basili richard w. selby the impact of operating system scheduling policies and synchronization methods on the performance of parallel applications anoop gupta andrew tucker shigeru urushibara programming large and flexible systems in ada o. roubine book review: unix philosophy belinda frasier modeling and analyzing software architectures robert t. monroe editorial pointers diane crawford storage use analysis and its applications manuel serrano marc feeley book review: perl 5 by example sid wentworth letters to the editor corporate linux journal staff constraint programming pascal van hentenryck critical success factors for implementing software quality plans john g. sefcik on the conversion of indirect to direct recursion procedure inlining can be used to convert mutual recursion to direct recursion. this allows use of optimization techniques that are most easily applied to directly recursive procedures, in addition to the well-known benefits of inlining. we present tight (necessary and sufficient) conditions under which inlining can transform all mutual recursion to direct recursion, and those under which heuristics to eliminate mutual recursion always terminate. we also present a technique to eliminate mutually recursive circuits that consist of only tail calls. from this, we conclude that tail recursion elimination should be interleaved with inlining. owen kaser c. r. ramakrishnan shaunak pawagi the definition of dependence distance several definitions of dependence distance can be found in the literature. a single coherent definition is the vector distance between the iteration vectors of two iterations involved in a dependence relation. different ways to associate iteration vectors with iterations can give different dependence distances to the same program, and have different advantages. michael wolfe acm president's letter: the horse or the herring david h.
brandin jspace: implementation of a linda system in java pascal ledru performance analysis of concurrent-read exclusive-write martin reiman paul e. wright operating-system directed power reduction this paper presents a new approach for power reduction by taking a global, software-centric view. it analyzes the sources of power consumption: tasks that require services from hardware components. when a component is not used by any task, it can enter a sleeping state to save power. operating systems have detailed information about tasks; therefore, the os is the best place for identifying hardware idleness and shutting down unused components. we implement this technique in linux and show that it can save more than 50% power compared to traditional hardware-centric shutdown techniques. yung-hsiang lu luca benini giovanni de micheli using partial-order methods in the formal validation of industrial concurrent programs we have developed a formal validation tool that has been used on several projects that are developing software for at&t's 5ess telephone switching system. the tool uses holzmann's supertrace algorithm to check for errors such as deadlock and livelock in networks of communicating processes. the validator invariably finds subtle errors that were missed during thorough simulation and testing; however, the brute-force search it performs can result in extremely long running times, which can be frustrating to users. recently, a number of researchers have been investigating techniques known as _partial-order methods_ that can significantly reduce the running time of formal validation by avoiding redundant exploration of execution scenarios. in this paper, we describe the design of a partial-order algorithm for our validation tool and discuss its effectiveness. we show that a careful compile-time static analysis of process communication behavior yields information that can be used during validation to dramatically improve its performance. we demonstrate the effectiveness of our partial-order algorithm by presenting the results of experiments with actual industrial examples drawn from a variety of 5ess application domains, including call processing, signalling, and switch maintenance. patrice godefroid doron peled mark staskauskas optimistic concurrency control for abstract data types maurice herlihy a study of locking objects with bimodal fields object locking can be efficiently implemented by bimodal use of a field reserved in an object. the field is used as a lightweight lock in one mode, while it holds a reference to a heavyweight lock in the other mode. a bimodal locking algorithm recently proposed for java achieves the highest performance in the absence of contention, and is still fast enough when contention occurs. however, mode transitions inherent in bimodal locking have not yet been fully considered. the algorithm requires busy-wait in the transition from the light mode (inflation), and does not make the reverse transition (deflation) at all. we propose a new algorithm that allows both inflation without busy-wait and deflation, but still maintains an almost maximum level of performance in the absence of contention. we also present statistics on the synchronization behavior of real multithreaded java programs, which indicate that busy-wait in inflation and absence of deflation can be problematic in terms of robustness and performance.
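as an aside to the bimodal-locking entry above (the entry resumes below): the heart of such schemes is a single word per object that either encodes a lightweight owner directly or, once inflated, refers to a heavyweight monitor. the hypothetical java sketch below shows only the encoding and the uncontended fast path; it is non-reentrant, its names are invented, and it deliberately omits the inflation and deflation transitions that the entry is actually about.

import java.util.concurrent.atomic.AtomicLong;

// Bimodal lock word (illustration only):
//   0            unlocked
//   positive id  thin mode, id of the owning thread
//   negative n   fat mode, an index into a table of heavyweight monitors (not shown)
public class BimodalLockWord {
    private final AtomicLong word = new AtomicLong(0);

    boolean tryThinLock() {        // uncontended fast path: a single compare-and-set
        return word.compareAndSet(0, Thread.currentThread().getId());
    }

    boolean tryThinUnlock() {
        return word.compareAndSet(Thread.currentThread().getId(), 0);
    }

    boolean isInflated() { return word.get() < 0; }

    public static void main(String[] args) {
        BimodalLockWord lock = new BimodalLockWord();
        System.out.println("acquired: " + lock.tryThinLock());    // true
        System.out.println("inflated: " + lock.isInflated());     // false
        System.out.println("released: " + lock.tryThinUnlock());  // true
    }
}

when tryThinLock fails under contention, the published algorithms differ in how the word is switched to fat mode and back, which is exactly the busy-wait and deflation question the entry raises.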
actually, an implementation of our algorithm shows increased robustness, and achieves performance improvements of up to 13.1% in server-oriented benchmarks. tamiya onodera kiyokuni kawachiya coside and parallel object-oriented languages r. winder g. roberts m. wei achieving independence in logarithmic number of rounds benny chor michael o. rabin tobac: a test case browser for testing object-oriented software ernst siepmann a. richard newton interactive blackbox debugging for concurrent languages we describe a novel approach to the design of portable integrated debugging tools for concurrent languages. our design partitions the tools set into two categories. the language specific tools take into account the particular features of a programming language for on-line experimenting with and monitoring of distributed programs. the language independent tools support off-line presentation and analysis of the monitored information. the separation of the language independent tools from the language specific tools allows adapting the tools to support debugging for a broad category of concurrent programming languages. the separation of interactive experimentation from off-line analysis provides for efficient exploitation of both user time and machine resources. we present an overview of our design and describe the implementation of a prototype debugging facility for occam. g. goldszmidt s. katz s. yemini a voyage to oberon atanas radenski a trace-driven analysis of the unix 4.2 bsd file system john k. ousterhout herve da costa david harrison john a. kunze mike kupfer james g. thompson extending compound assignments for c++ compound assignments were created for efficiency, to handle a subset of the things that simple assignments can do. this gain in efficiency is due to the interpretation of the compound assignment as a message that modifies self ("this"). however, c++ does not extend the same interpretation of compound assignments to user-defined types. in this paper, we illustrate how the notion of compound assignments is natural in c++, how it can be implemented and indicate that the resulting gain in efficiency is akin to the gain in efficiency obtained by optimizing c compilers that translate complex expressions to simple ones that use compound assignments. t. b. dinesh asynchronous transfer of control working group thomas j. quiggle the benefits of bottom-up design this paper examines an inconsistency in generic 'top-down' design methods and standards employed in the implementation of reliable software. many design approaches adopt top-down ordering when defining the structure, interfaces, and processing of a system. however, strict adherence to a top-down sequencing does not permit accurate description of a system's error handling functions. the design of a system's response to errors is becoming critical as the reliability requirements of systems increase. this paper describes how top-down methods such as object oriented design and structured design do not adequately address the issues of error handling, and suggests using a bottom-up substep within these methods to eliminate the problem. gregory mcfarland integrating e-mail in a programming class: implications for teaching programming malini krishnamurthi on integration of programming paradigms alan mycroft linux in banking mr.
shoham tells us how his company set up an internet banking system using linux for a bank in western canada idan shoham a graph based architectural (re)configuration language for several different reasons, such as changes in the business or technological environment, the configuration of a system may need to evolve during execution. support for such evolution can be conceived in terms of a language for specifying the dynamic reconfiguration of systems. in this paper, continuing our work on the development of a formal platform for architectural design, we present a high- level language to describe architectures and for operating changes over a configuration (i.e., an architecture instance), such as adding, removing or substituting components or interconnectons. the language follows an imperative style and builds on a semantic domain established in previous work. therein, we model architectures through categorical diagrams and dynamic reconfiguration through algebraic graph rewriting. michel wermelinger antónia lopes jose luiz fiadeiro application screen management: an apl2 approach this paper describes a comprehensive full screen management system written in apl2. it provides an application i/o interface that may be adapted to a variety of screen management specific subsystems in both interactive and background environments. besides the usual character and numeric field support, additional field types and attributes are supported by the system described here. this system differs from past efforts in several respects. it is written in apl2, and takes full advantage of apl2's extended data structures and vector notation for its implementation and application code interfaces. additionally, it was designed as a generalized application- user i/o interface, and as such, provides a great deal of isolation between the application and the specific screen management subsystem employed by the screen functions. it is also easily extended to support new features, such as additional field types, as well as new screen management subsystem specific facilities as they become available. to date, such extensions have been made without impacting the existing application base. in actual application usage, these functions have been found to be easy to use, and have in most instances resulted in simpler application code than would have been possible otherwise. stephen deerhake a study of large object spaces michael hicks luke hornof jonathan t. moore scott m. nettles fault tolerant distributed ada s. arevalo a. alvarez using idle workstations in a shared computing environment the butler system is a set of programs running on andrew workstations at cmu that give users access to idle workstations. current andrew users use the system over 300 times per day. this paper describes the implementation of the butler system and tells of our experience in using it. in addition, it describes an application of the system known as gypsy servers, which allow network server programs to be run on idle workstations instead of using dedicated server machines. d. nichols development of successful object-oriented frameworks todd hansen scheduling periodic time-critical tasks on a multiprocessor system (abstract only) we study the problem of scheduling periodic-time-critical tasks on a multiprocessor system. a periodic-time-critical task consists of a certain number of computations, each arising at a fixed interval of time, called the period of the task. 
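as an aside to the scheduling entry above (the entry resumes below with the definition of the utilization factor): a heuristic in the spirit described there assigns tasks to processors first-fit by utilization, that is, computation time divided by period. the admission test in the hypothetical java sketch below, the rate-monotonic bound n(2^(1/n) - 1), is an assumption made for illustration and not necessarily the authors' test; the names are invented.

import java.util.ArrayList;
import java.util.List;

// First-fit partitioning of periodic tasks by utilization factor.
public class FirstFitPartitioner {
    record Task(double computation, double period) {
        double utilization() { return computation / period; }
    }

    // rate-monotonic schedulability bound for n tasks on one processor (assumed test)
    static double rmBound(int n) { return n * (Math.pow(2.0, 1.0 / n) - 1); }

    static List<List<Task>> assign(List<Task> tasks) {
        List<List<Task>> processors = new ArrayList<>();
        for (Task t : tasks) {
            boolean placed = false;
            for (List<Task> proc : processors) {
                double u = proc.stream().mapToDouble(Task::utilization).sum();
                if (u + t.utilization() <= rmBound(proc.size() + 1)) {
                    proc.add(t);
                    placed = true;
                    break;
                }
            }
            if (!placed) {                         // open a new processor
                List<Task> fresh = new ArrayList<>();
                fresh.add(t);
                processors.add(fresh);
            }
        }
        return processors;
    }

    public static void main(String[] args) {
        List<Task> tasks = List.of(new Task(1, 2), new Task(1, 3), new Task(2, 5));
        System.out.println("processors used: " + assign(tasks).size());   // 2 here
    }
}

worst-case ratios such as the bound of 2 quoted in the entry are proved against exactly this kind of utilization-driven packing argument.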
the ratio of the computation time to the period is called the utilization factor of the task. each computation has a prescribed deadline which must be guaranteed to be met, failing which an irreparable loss is likely to result. the scheduling problem is to specify the processor on which a computation is to be made and the order in which such computations will be made, such that every computation meets its deadline. the goal is to minimize the number of processors needed. the problem is np-complete. as such, we devise a heuristic algorithm and analyze its performance. the algorithm assigns tasks based on the utilization factors of the tasks. we analyze the worst case performance of the algorithm by obtaining bounds on the ratio of the maximum number of processors needed by the algorithm to the minimum number of processors required. it is shown that this ratio is 2. s. k. dhall s. davari-hadikiashari a region-based memory manager for prolog we extend tofte and talpin's region-based model for memory management to support backtracking and cuts, which makes it suitable for use with prolog and other logic programming languages. we describe how the extended model can be implemented and report on the performance of a prototype implementation. the prototype implementation performs well when compared to a garbage-collecting prolog implementation using comparable technology for non-memory-management issues. henning makholm an approach to software design documentation this paper presents an approach to developing system design documentation and then programming the software code directly into the specifications. an introduction to the purpose and content of design documentation is given, along with a description of current practices which are used in the development process. it is recommended that the documents be developed using automated methods. the code is then implemented directly into the files containing the specifications. these techniques should result in better configuration management and control of software changes. it is more likely that the specifications will be kept current throughout the life cycle of the system. the documentation should be more useful to the maintenance software engineer. the methods should be less costly compared to current techniques. mike burlakoff lessons from norstar's distributed call processing system liam casey two issues in parallel language design in this article, we discuss two programming language features that have value for expressibility and efficiency: nonstrictness and nondeterminism. our work arose while assessing ways to enhance a currently successful language, sisal [mcgraw et al. 1985]. the question of how best to include these features, if at all, has led not to conclusions but to an impetus to explore the answers in an objective way. we will retain strictness for efficiency reasons and explore the limits it may impose, and we will experiment with a carefully controlled form of nondeterminism to assess its expressive power. a. p. w. böhm r. r. oldehoeft critical run-time design tradeoffs in an ada implementation (panel session) robert a. conti a method for programming languages comparative analysis ivan stanev looking for the event list e. l. miranda porting apl-programs via ascii-transliteration a scheme for translating apl characters to ascii text is introduced, and an apl application is developed which reads and writes objects to and from files using the scheme.
the system is available freely for several apl dialects; it can be used to port apl applications across different environments. the translation scheme used is proposed as a standard for the ascii representation of apl characters, and feedback on this scheme is invited. johann mitlöhner forth interface to ms-dos interrupts r. h. davis an overview of c++ bjarne stroustrup two flaws of the current c++ language definition ronald fischer design of a distributed implementation of abcl/1 in this paper we describe the design decisions leading to a distributed implementation of the abcl language. we briefly review the abcl (actor based concurrent language) computation model. we discuss some slight changes we bring to the current abcl/1 language in order to distribute objects among several machines. identification, reference, allocation and communication are the four concepts which need some evolution. the status of a first prototype implementation on a network of workstations (lisp/suns+ethernet) is reported. j.-p. briot j. de ratuld mesa from the perspective of a designer turned user the mesa language and run-time environment were designed for the purpose of building systems programs such as compilers, operating systems, graphics software and so on. it is a relatively high-level language with strong type checking and supports the ideas of modular programming and abstract data types. although the system is compiler-based, one can debug programs interactively in source-language terms. the language has been in rather heavy use since 1976 by a programming community of a few hundreds of professional programmers within the xerox corporation, and a large amount of code has been written in it. this talk will give a designer's retrospective view of mesa from the vantage point of a serious user and will attempt to evaluate its strengths and weaknesses as a vehicle for systems programming. james g. mitchell empirical measurements of six allocation-intensive c programs benjamin zorn dirk grunwald documenting framework behavior neelam soundarajan model checking java programs automatic state exploration tools (model checkers) have had some success when applied to protocols and hardware designs, but there are fewer success stories about software. this is unfortunate, since the software problem is worsening even faster than the hardware and protocol problems. model checking of concurrent programs is especially interesting, because they are notoriously difficult to test, analyze, and debug by other methods. this talk will be a description of our initial efforts to check java programs using a model checker. the model checker supports dynamic allocation, thread creation, and recursive procedures (features that are not necessary for hardware verification), and has some special optimizations and checks tailored to multi-threaded java programs. i will also discuss some of the challenges for future efforts in this area. david dill software development in core: the application of ada and spiral development richard simonian the distributed programming environment on the internet chang-hyun jo jea gi son younwoo kang phill soo lim testing races in parallel programs with an otot strategy suresh k. damodaran-kamal joan m. francioni letters to the editor corporate linux journal staff test and analysis of software architectures some dod programs now require prospective contractors to demonstrate the superiority of their software architectures for new weapons systems.
this acquisition policy provides new software engineering challenges that focus heavily on the test and analysis of software architectures in order to determine the "best" architecture in terms of its implementability, affordability, extendability, scalability, adaptability, and maintainability --- not overlooking whether or not it will meet the functional requirements of the system. will tracz experiences developing and using an object-oriented library for program manipulation tim bingham nancy hobbs dave husson objective view point: an overview of the standard template library g. bowden wise dynamic typing for distributed programming in polymorphic languages while static typing is widely accepted as being necessary for secure program execution, dynamic typing is also viewed as being essential in some applications, particularly for distributed programming environments. dynamics have been proposed as a language construct for dynamic typing, based on experience with languages such as clu, cedar/mesa, and modula-3. however, proposals for incorporating dynamic typing into languages with parametric polymorphism have serious shortcomings. a new approach is presented to extending polymorphic languages with dynamic typing. at the heart of the approach is the use of dynamic type dispatch, where polymorphic functions may analyze the structure of their type arguments. this approach solves several open problems with the traditional approach to adding dynamic typing to polymorphic languages. an explicitly typed language xmldyn is presented; this language uses refinement kinds to ensure that dynamic type dispatch does not fail at run-time. safe dynamics are a new form of dynamics that use refinement kinds to statically check the use of run-time dynamic typing. run-time errors are isolated to a separate construct for performing run-time type checks. dominic duggan in-house documentation in a small college canisius college is a private, jesuit college with about 2,500 day students and 2,000 evening students. there is one computer center serving both academic and administrative needs with a staff of 19 people. as a functional unit within the computer center, user services consists of a manager, two programmer analysts, and a documentation librarian. we are responsible for three supervised public computing sites: an apple lab, an ibm lab, and the main computing lab which contains apples, ibm pcs and vax terminals. there are two non-staffed public computing sites containing vax terminals. to staff the labs, there is a minimum of 10 student consultants and about 35 site operators reporting to the public site coordinator, a graduate assistant position usually filled by an mba student. the staff of user services supports instructional and research computing as well as administrative uses of microcomputers. we don't have the luxury to specialize in a particular type of computer, software, or user. each of us needs to be familiar with much of the supported software and be ready to answer those quick questions from a variety of sources. often, a well-written handout is the fastest and most effective way of doing this. a well-written handout is one you give to a person and you don't see that person again to explain the handout. this paper will describe the methods by which we develop, catalogue, and distribute what are hopefully well-written handouts. documentation can be formal or informal. we are all familiar with the following scenario. someone asks you a question.
you go through the procedure with them and accomplish the task at hand. but for that person to perform this procedure on their own, the next day or a few months from now, what was done needs to be written down or, if appropriate, pointed out in the manual. the most accurate instructions are written as you work through the procedure, not after the procedure is done. but every question and every procedure can not become a formal document. so what should be documented? obviously, handouts that address frequently asked questions are worthwhile and handouts that introduce new capabilities or methods to the user are valuable. start with the application packages that are in heaviest use on your campus. the most common type of document is designed to get a person started on a software package. it provides the basics: how to start up the package, how to enter information, how to load, save and print the document or file. it does not cover more advanced topics mark castner cathy bacon christopher alexander: an introduction for object-oriented designers doug lea composition patterns: an approach to designing reusable aspects requirements such as distribution or tracing have an impact on multiple classes in a system. they are _cross-cutting_ requirements, or _aspects_. their support is, by necessity, scattered across those multiple classes. a look at an individual class may also show support for cross-cutting requirements tangled up with the core responsibilities of that class. scattering and tangling make object-oriented software difficult to understand, extend and reuse. though design is an important activity within the software lifecycle with well-documented benefits, those benefits are reduced when cross-cutting requirements are present. this paper presents a means to mitigate these problems by separating the design of cross-cutting requirements into _composition patterns_. composition patterns require extensions to the uml, and are based on a combination of the subject-oriented model for composing separate, overlapping designs, and uml templates. this paper also demonstrates how composition patterns map to one programming model that provides a solution for separation of cross-cutting requirements in code--- aspect-oriented programming. this mapping serves to illustrate that separation of aspects may be maintained throughout the software lifecycle. siobhán clarke robert j. walker optimizing parallel programs with explicit synchronization we present compiler analyses and optimizations for explicitly parallel programs that communicate through a shared address space. any type of code motion on explicitly parallel programs requires a new kind of analysis to ensure that operations reordered on one processor cannot be observed by another. the analysis, based on work by shasha and snir, checks for cycles among interfering accesses. we improve the accuracy of their analysis by using additional information from post-wait synchronization, barriers, and locks. we demonstrate the use of this analysis by optimizing remote access on distributed memory machines. the optimizations include message pipelining, to allow multiple outstanding remote memory operations, conversion of two-way to one-way communication, and elimination of communication through data re-use. the performance improvements are as high as 20-35% for programs running on a cm-5 multiprocessor using the split-c language as a global address layer. 
arvind krishnamurthy katherine yelick concurrent meld our original goal was to design a programming language for writing software engineering environments. the most important requirements were reusability and the ability to integrate separately developed tools [1]. our scope was later expanded to general applications, and then to parallel and distributed systems. our current focus is on 'growing' distributed software environments and tools, that is, building a core environment or tool assuming a long-term evolution path. meld is a multiparadigm language that combines object- oriented, macro dataflow, transaction processing and module interconnection styles of programming [2]. the most unusual aspect is the dataflow at the source level among the inputs and outputs of statements. classes may define constraints in addition to instance variables and methods, which are triggered by changes to instance variables and interleaved along with the statements of a method. meld's constraints are unidirectional, and similar in purpose to active values. figure 1 gives a trivial example. when a message o is received by an instance of class c, the two statements in method o and any constraints affected by either statement are executed in dataflow order. this means that instance variable b is computed by o as a function of argument c, and only then the constraint recomputes a as a function of the new value of b. simultaneously with either of these computations, or before or after or in between, d is assigned to the result of a function of argument e; there is no data dependency between this statement and either the other statement in the method or the constraint. meld's multiple paradigms lead to three granularities of concurrency: statements from the macro dataflow for fine to medium grain concurrency within a method or among methods. this permits a smaller granularity than dataflow among the inputs and outputs of methods. atomic methods, which provide a low- level form of concurrency control, force a larger granularity. objects for medium to large grain parallelism, with synchronous and asynchronous message passing among remote or local objects. many other concurrent object-oriented languages provide synchronous or asynchronous message passing, but not both. atomic transactions for high- level concurrency control among users (including tools where interleaved operation is inappropriate). most other object-based systems provide only one form of concurrency control. a method may be invoked synchronously or asynchronously. in the synchronous case, the caller waits for return with respect to its own thread of control. in the asynchronous case, the caller continues and the invocation creates a new thread of control. programs may involve an arbitrary number of threads created dynamically during program execution. several threads may operate within the same address space and one thread may, in effect, operate across multiple address spaces. in either case, the invoked method runs concurrently with any other methods currently active on the same object. these other methods may be reading and writing the same instance variables. the default synchronization among such methods is done by dataflow, as within a single method. there is a serious problem with this approach. figure 2 operates as indicated if m and n happen to begin execution at exactly the same time, due to simultaneous arrival of messages m and n from other objects. 
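as an aside to the meld entry above (the entry resumes below): executing the statements of a method in dataflow order amounts to topologically sorting them by the variables they read and write. the hypothetical java sketch below, with invented names and not meld itself, uses the same b, c, a, d, e shape as the example discussed above.

import java.util.*;

// Order statements so that a statement writing a variable runs before any
// statement reading it (a toy model of source-level dataflow ordering).
public class DataflowOrder {
    record Stmt(String name, Set<String> reads, Set<String> writes) {}

    static List<Stmt> order(List<Stmt> stmts) {
        Map<Stmt, Integer> indegree = new LinkedHashMap<>();
        Map<Stmt, List<Stmt>> succs = new LinkedHashMap<>();
        for (Stmt s : stmts) { indegree.put(s, 0); succs.put(s, new ArrayList<>()); }
        for (Stmt producer : stmts)
            for (Stmt consumer : stmts)
                if (producer != consumer
                        && !Collections.disjoint(producer.writes(), consumer.reads())) {
                    succs.get(producer).add(consumer);
                    indegree.merge(consumer, 1, Integer::sum);
                }
        Deque<Stmt> ready = new ArrayDeque<>();
        for (Stmt s : stmts) if (indegree.get(s) == 0) ready.add(s);
        List<Stmt> ordered = new ArrayList<>();
        while (!ready.isEmpty()) {
            Stmt s = ready.poll();
            ordered.add(s);
            for (Stmt t : succs.get(s))
                if (indegree.merge(t, -1, Integer::sum) == 0) ready.add(t);
        }
        return ordered;   // shorter than the input if the dataflow graph has a cycle
    }

    public static void main(String[] args) {
        Stmt s1 = new Stmt("b := f(c)", Set.of("c"), Set.of("b"));
        Stmt s2 = new Stmt("a := g(b)", Set.of("b"), Set.of("a"));   // the constraint
        Stmt s3 = new Stmt("d := h(e)", Set.of("e"), Set.of("d"));   // independent
        order(List.of(s1, s3, s2)).forEach(s -> System.out.println(s.name()));
    }
}

s3 carries no dependency on the other two, so, as in the entry's description, it may legitimately run before, between, or after them.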
d is computed from the value of c given as argument to n, b is computed from the new value of d, and a from the new value of b. however, if message n arrives a bit later than m, then b is computed from the old value of d, and then the new value of d is computed from the argument c and a is computed from the new value of b. both statements in n could be executed concurrently since there is no dataflow between them. on the other hand, if message m arrives a bit later than n, then b may be computed from either the old or new value of d and a may be computed from either the old or new value of b. this is obviously rather bewildering for the programmer, since it is necessary that the resulting computation be deemed 'correct' in all of these cases. we support this to permit maximum concurrency for those applications (and programmers!) that can handle the non-determinism. our intuition is that non-determinism will not be a problem in many cases. the programmer has in mind a parallel algorithm where he thinks in terms of the dataflow necessary to produce the correct solution. he typically uses a conventional language, such as fortran or c, to implement his algorithm. the parallelizing compiler must then uncover the parallelism again using dataflow techniques. we avoid this hide-and-seek by allowing the programmer to make the dataflow explicit. there are many programs, however, written without cognizance of the dataflow. we also support these programmers with sequential blocks and atomic blocks. figure 3 shows method o of class c written as a sequential block rather than a parallel block (i.e., a parallel block is enclosed in curly braces and a sequential block in square brackets). in this example, all of method o executes in the order in which the statements are written. any constraints whose inputs are changed during the execution of o are triggered as before. sequential blocks remove concurrency within methods, making them easier to write without the need for the single-assignment mindset, but do not affect concurrency among methods. figure 4 indicates the ordering if m and n happen to start at the same time. b is computed from the old value of d, a from the new value of b and then the new value of d from the argument c. but if there is a race condition, a may see the new value of b or b may see the new value of d, but not both. in figure 5, method o executes atomically with respect to the receiver object. atomic blocks are indicated with parentheses. both b and d are updated, and only after o terminates is the constraint triggered. the two statements in o may themselves be executed in either sequential or dataflow order, since it is necessary to include them within an inner sequential or parallel block, not shown, if the atomic block is used. in this case it does not matter. methods m and n are serialized in figure 6, so they appear atomic to each other. we currently support pessimistic concurrency control by locking the object at the computational grain size of individual methods, or a block within a method. we are integrating real transactions that cut across methods and objects. we use distributed optimistic concurrency control with multiple versions, since we expect a majority of read-only transactions (in our primary application domain of distributed software development tools).
atomic blocks using validation rather than locking will be surrounded by angle brackets rather than parentheses, and may be either sequential or parallel; begin_transaction, abort_transaction and commit_transaction statements are provided for transactions that begin in one method and may end in another according to circumstances determined at run-time. g. e. kaiser reflection as a mechanism for software integrity verification the integrity verification of a device's controlling software is an important aspect of many emerging information appliances. we propose the use of reflection, whereby the software is able to examine its own operation, in conjunction with cryptographic hashes as a basis for developing a suitable software verification protocol. for more demanding applications meta- reflective techniques can be used to thwart attacks based on device emulation strategies. we demonstrate how our approach can be used to increase the security of mobile phones, devices for the delivery of digital content, and smartcards. diomidis spinellis the last word stan kelly-bootle a note on the detection of an ada compiler bug while debugging an anna program s. sankar applicative caching: programmer control of object sharing and lifetime in. distributed implementations of applicative languages the "referential transparency" of applicative language expressions demands that all occurrences of an expression in a given context yield the same value. in principle, that value therefore needs to be computed only once. however, in recursive programming, a context usually unfolds dynamically, precluding textual recognition of multiple occurrences, so that such occurrences are recomputed. to remedy the lack, in applicative languages, of an ability to store and retrieve a computed value under programmer control, "caching functionals" are proposed which allow the programmer to selectively avoid recomputation without overt use of assignment. the uses and implementation of such mechanisms are discussed, including reasons and techniques for purging the underlying cache. our approach is an extension of the early notion of "memo function", enabling improved space utilization and a "building-block" approach. robert m. keller m. ronan sleep mapping uml designs to java william harrison charles barton mukund raghavachari correctness and composition of software architectures mark moriconi xiaolei qian demonic memory for process histories demonic memory is a form of reconstructive memory for process histories. as a process executes, its states are regularly checkpointed, generating a history of the process at low time resolution. following the initial generation, any prior state of the process can be reconstructed by starting from a checkpointed state and re-executing the process up through the desired state, thereby exploiting the redundancy between the states of a process and the description of that process (i.e., a computer program). the reconstruction of states is automatic and transparent. the history of a process may be examined as though it were a large two-dimensional array, or address space- time, with a normal address space as one axis and steps of process time as the other. an attempt to examine a state that is not physically stored triggers a "demon" which reconstructs that memory state before access is allowed. regeneration requires an exact description of the original execution of the process. 
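as an aside to the demonic-memory entry above (the entry resumes below with the treatment of non-deterministic events): stripped of the virtual-copy machinery, the core idea is sparse checkpointing plus deterministic re-execution from the nearest earlier snapshot. the java sketch below is hypothetical, with invented names, and assumes an immutable or copied state and a purely deterministic step function.

import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;
import java.util.function.UnaryOperator;

// Reconstruct the state at any past step from sparse checkpoints by re-running
// the deterministic step function from the nearest earlier checkpoint.
public class CheckpointHistory<S> {
    private final UnaryOperator<S> step;                 // deterministic transition
    private final int interval;                          // snapshot every `interval` steps
    private final NavigableMap<Integer, S> checkpoints = new TreeMap<>();

    CheckpointHistory(S initial, UnaryOperator<S> step, int interval) {
        this.step = step;
        this.interval = interval;
        checkpoints.put(0, initial);
    }

    S run(int steps) {                                   // forward execution, low-resolution history
        S s = checkpoints.get(0);
        for (int i = 1; i <= steps; i++) {
            s = step.apply(s);
            if (i % interval == 0) checkpoints.put(i, s);
        }
        return s;
    }

    S stateAt(int t) {                                   // on-demand regeneration (the "demon")
        Map.Entry<Integer, S> base = checkpoints.floorEntry(t);
        S s = base.getValue();
        for (int i = base.getKey(); i < t; i++) s = step.apply(s);
        return s;
    }

    public static void main(String[] args) {
        CheckpointHistory<Integer> h = new CheckpointHistory<>(0, x -> x + 3, 100);
        h.run(1000);
        System.out.println(h.stateAt(257));              // 771, regenerated from step 200
    }
}

recorded non-deterministic inputs, as the entry goes on to explain, would be replayed inside the regeneration loop rather than recomputed.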
if the original process execution depends on non-deterministic events (e.g., user input), these events are recorded in an exception list, and are replayed at the proper points during re- execution. while more efficient than explicitly storing all state changes, such a checkpointing system is still prohibitively expensive for many applications; each copy (or snapshot) of the system's state may be very large, and many snapshots may be required. demonic memory saves both space and time by using a virtual copy mechanism. (virtual copies share unchanging data with the objects that they are copies of, only storing differences from a prototype or original [mibk86].) in demonic memory, the snapshot at each checkpoint is a virtual copy of the preceding checkpoint's snapshot. hence it is called a virtual snapshot. in order to make the virtual snapshot mechanism efficient, state information is initially saved in relatively large units of space and time, on the order of pages and seconds, with single-word/single-step regeneration undertaken only as needed. this permits the costs of indexing and lookup operations to be amortized over many locations. p. r. wilson t. g. moher combining analyses, combining optimizations modern optimizing compilers use several passes over a program's intermediate representation to generate good code. many of these optimizations exhibit a phase-ordering problem. getting the best code may require iterating optimizations until a fixed point is reached. combining these phases can lead to the discovery of more facts about the program, exposing more opportunities for optimization. this article presents a framework for describing optimizations. it shows how to combine two such frameworks and how to reason about the properties of the resulting framework. the structure of the frame work provides insight into when a combination yields better results. to make the ideas more concrete, this article presents a framework for combining constant propagation, value numbering, and unreachable-code elimination. it is an open question as to what other frameworks can be combined in this way. cliff click keith d. cooper object-oriented concurrent programming in cst cst is a programming language based on smalltalk-80 that supports concurrency using locks, asynchronous messages, and distributed objects. distributed objects have their state distributed across many nodes of a machine, but are referred to by a single name. distributed objects are capable of processing many messages simultaneously and can be used to efficiently connect together large collections of objects. they can be used to construct a number of useful abstractions for concurrency. we describe the cst language and give examples of its use. w. j. dally a. a. chien how to use modules harvey richardson mapping performance data for high-level and data views of parallel program performance r. bruce irvin barton p. miller foundations of object-based concurrent programming (panel session) gul agha akinori yonezawa peter wegner samson abramsky testing distributed ada programs e. j. dowling toward a resourceful method of software fault tolerance ray giguette johnette hassell an object-oriented model of software configuration management hal render roy campbell adl - an interface definition language for specifying and testing software this paper describes an interface definition language called adl which extends omg's corba interface definition language with formal specification constructs. 
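as an aside to the combining-analyses entry above (the adl entry resumes below): the constant-propagation component of such a combined framework works over the familiar three-level lattice, and its meet operation is where facts arriving from different paths are merged. a hypothetical java sketch of just that lattice, with invented names:

// Three-level constant-propagation lattice: TOP (no information yet), a known
// constant, and BOTTOM (not a constant). meet() combines facts from two paths.
public class ConstLattice {
    enum Kind { TOP, CONST, BOTTOM }

    final Kind kind;
    final int value;                          // meaningful only when kind == CONST

    private ConstLattice(Kind k, int v) { kind = k; value = v; }
    static final ConstLattice TOP = new ConstLattice(Kind.TOP, 0);
    static final ConstLattice BOTTOM = new ConstLattice(Kind.BOTTOM, 0);
    static ConstLattice of(int v) { return new ConstLattice(Kind.CONST, v); }

    ConstLattice meet(ConstLattice other) {
        if (kind == Kind.TOP) return other;
        if (other.kind == Kind.TOP) return this;
        if (kind == Kind.CONST && other.kind == Kind.CONST && value == other.value) return this;
        return BOTTOM;
    }

    @Override public String toString() {
        return kind == Kind.CONST ? Integer.toString(value) : kind.toString();
    }

    public static void main(String[] args) {
        System.out.println(of(3).meet(of(3)));            // 3
        System.out.println(of(3).meet(of(4)));            // BOTTOM
        System.out.println(ConstLattice.TOP.meet(of(4))); // 4
    }
}

roughly speaking, combining this lattice with reachability information lets a path not yet known to execute contribute TOP rather than BOTTOM, which is one source of the extra facts the entry attributes to combined frameworks.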
in addition to adl's use in formal documentation, adl's primary use is for testing software. adl can be adapted for use with most programming languages. this paper also presents an overview of a testing technology based on adl and presents the highlights of a test-data description language (tdd) used to describe test-data. sriram sankar roger hayes advanced module systems (invited talk) robert harper benjamin c. pierce practical adaption of the global optimization algorithm of morel and renvoise d. m. dhamdhere programming models for irregular applications katherine a. yelick bidirectional object layout for separate compilation andrew c. myers experiences with the ada semantic interface specification (asis): developing a tool with a view steven j. blake james b. bladen measuring formal specification with -metric this paper explains the use of - metrics in analysing and comparison of various formal specification languages. peter kokol a comparison of the effects of structured vs. non-structured and modularized vs. non-modularized programs on run time (abstract only) this paper presents the basis for analytical research, examining the various effects of modifying or amending a particular program in both its basic structure and its degree of modularity on its ultimate run time. we begin with a program which produces as output a series of tables of high card and distribution counts of 1000 bridge hands. the deal of each bridge hand is accomplished by a random number generator function and the program employs a test module to ascertain valid results. the original program is written in turbo pascal with a high degree of modularity. it is later modified by an "unwrapping process" to produce a less modularized program which effects the same results as the original. the comparison of source lines of code, degree of modularity, number of modules, and run time in addition to actual output are noted and analyzed. the programs were executed on an ibm pc. the original program contains 272 lines of code, six procedures (in addition to the main control module), and has an execution time of approximately 12 minutes. the first of the six procedures is the deal of the cards (which is invoked once per iteration, for a total of 1000 calls). the next three procedures are counting modules (each of which is invoked four times per iteration for a total of 12000 calls). the fifth procedure is a test module which supplies the programmer with a sample hand and counting data for validation purposes (which is invoked once per 100 deals for a total of 10 calls). finally, the sixth procedure is one which computes frequency data (and is invoked a total of 6 times after the 1000 deal iteration has terminated). the program is highly structured with a complexity measure of 212. due to its high degree of modularity, it can be read and debugged with little or no difficulty. the less modularized version is acquired by an unwrapping process as follows: the deal procedure is eliminated and its code is placed directly in the main module; the three counting modules are no longer invoked as procedures but are instead placed within the main module. since each of these three modules had been called four times per iteration (once per each bridge hand) this results in four times as much code; the test and frequency modules are left intact. this version of the program contains 373 lines of code, two procedures (in addition to the main control module), and has an execution time of approximately 13 minutes. 
the first procedure is the test module (which as before is invoked once per 100 deals for a total of 10 calls). the second procedure is the frequency procedure (invoked six times after the 1000 deal iteration). the main module is now much larger than in the original version, having absorbed four procedures with a high degree of repetition of code. this program contains a complexity measure of 316. due to its lack of modularity and much higher sloc and complexity measures, its readability and facility for debugging are greatly reduced. it is especially noteworthy that the unwrapped version of the program, although containing 13,000 fewer procedure calls than the original, did not have a shorter run time than the original. a summary of the results: domenick j. pinto sandra k. honda developing object-oriented frameworks using domain models mehmet aksit francesco marcelloni bedir tekinerdogan gaps marc rettig performance limitations of the java core libraries allan heydon marc najork a reuse and composition protocol for services dorothea beringer laurence melloul gio wiederhold object-oriented framework-based software development: problems and experiences jan bosch peter molin michael mattsson perolof bengtsson reuse with protégé-ii: from elevators to ribosomes john h. gennari russ b. altman mark a. musen software engineering data analysis techniques (tutorial) amrit l. goel miyoung shin is apl a programming language or isn't it? it is a matter of both pride and concern to apl programmers that apl appears to develop in ways different from all other programming languages. this talk will attempt to answer why this is so. comparisons will be made between the directions of apl development and those of the algol family and lisp. alan j. perlis manual and compiler for the terse and modular language dem chris houser fundamental limitations on the use of prefetching and stream buffers for scientific applications daniel m. pressel composition and refinement of discrete real-time systems reactive systems exhibit ongoing, possibly nonterminating, interaction with the environment. real-time systems are reactive systems that must satisfy quantitative timing constraints. this paper presents a structured compositional design method for discrete real-time systems that can be used to combat the combinatorial explosion of states in the verification of large systems. a composition rule describes how the correctness of the system can be determined from the correctness of its modules, without knowledge of their internal structure. the advantage of compositional verification is clear. each module is both simpler and smaller than the system itself. composition requires the use of both model-checking and deductive techniques. a refinement rule guarantees that specifications of high-level modules are preserved by their implementations. the statetime toolset is used to automate parts of compositional designs using a combination of model-checking and simulation. the design method is illustrated using a reactor shutdown system that cannot be verified using the statetime toolset (due to the combinatorial explosion of states) without compositional reasoning. the reactor example also illustrates the use of the refinement rule. jonathan s. ostroff software process support through software configuration management software development embodies a range of software processes. such processes can be captured in software process models.
two of the reasons for describing software processes through models are to document and communicate a particular process to others, and to encode knowledge of the process in a form processable by computer. software process modeling has received attention from researchers in recent years - as this series of workshops indicates. efforts are expended on determining what processes are to be modeled, what the power of the modeling language is, and how the model can be instantiated and executed. this position paper examines a different approach to supporting software processes. software development environments (sde) support the evolution of software through teams of developers by providing software version and configuration management and system build functionality. this sde functionality supports and embodies certain aspects of the software process. we have examined and experimented hands-on with a number of commercial software development environments. the environments include the rational environment, apollo dsee, sun nse, ist istar, biin sms, and atherton software backplane. these environments have advanced the functionality provided to support software evolution over commonly used development support tools such as unix make/rcs, and dec vms mms/cms. a number of observations can be made about these environments and the way they have attempted to - at least partially - capture (i.e., instantiate) software processes. separation of mechanisms and policy: this notion has been in practice in operating systems for a number of years. the primitives (mechanisms) should be abstract enough to contain some of the semantics of the process model to be instantiated. for example, the concept of managed workspace or transaction provides a higher-level abstraction than the check-out/check-in primitives and a working directory. a good set of primitives does not unnecessarily restrict the ability to build desirable executable process models. such restrictions are detected when process model instantiations and executions are attempted. for example, the check-out operation usually performs two functions - making a program unit modifiable, and locking the unit to prevent concurrent modification. if optimistic development (i.e., permitting concurrent development) is to be supported, the user would have to resort to the branching function. sun nse directly supports optimistic development, but currently does not provide the primitives to support serialized development (i.e., locking). policies can generally be encoded by writing an envelope of functions on top of the available primitives. in summary, slow progress is being made in separation of mechanism and policy and in encoding process models. source code control evolves to software configuration management: development support in common practice provides source code control for individual files and a system build facility. newer sdes have expanded their functionality to support object code management, management of configurations, transparent access to source repositories, management of work areas, and primitives for reflecting development activities. in doing so, they relieve developers from managing versions of object code themselves, from constantly retrieving copies of files from the repository for viewing purposes, and from concerning themselves with the version history of individual files while being focused on evolving a system.
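as an aside to the position paper above (which continues below): the check-out example just given, where one operation conflates making a unit modifiable with locking it, can be made concrete by keeping the two primitives separate and letting a policy layer compose them. the java sketch below is hypothetical, with invented names, and is not tied to any of the environments named above.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Mechanisms: two independent primitives. Policies: different compositions of them.
public class CheckoutPolicies {
    static class Repository {
        private final Set<String> locked = new HashSet<>();
        private final Map<String, Integer> workingCopies = new HashMap<>();

        void makeModifiable(String unit) {               // primitive 1: never blocks
            workingCopies.merge(unit, 1, Integer::sum);
        }
        boolean lock(String unit) {                      // primitive 2: reserve the unit
            return locked.add(unit);
        }
    }

    // serialized (pessimistic) policy: take the lock, then take a working copy
    static boolean serializedCheckout(Repository repo, String unit) {
        if (!repo.lock(unit)) return false;              // someone else holds it
        repo.makeModifiable(unit);
        return true;
    }

    // optimistic policy: everyone gets a copy; conflicts surface at merge time
    static void optimisticCheckout(Repository repo, String unit) {
        repo.makeModifiable(unit);
    }

    public static void main(String[] args) {
        Repository repo = new Repository();
        System.out.println(serializedCheckout(repo, "parser.adb"));   // true
        System.out.println(serializedCheckout(repo, "parser.adb"));   // false, already locked
        optimisticCheckout(repo, "parser.adb");                       // always permitted
    }
}

with the primitives split this way, either policy is just an envelope over the same mechanisms, which is the separation the paper argues for.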
the models embedded in the functionality of these environments are often not clearly described by the manufacturer, and sometimes best understood when related to the model in another environment. configuration composition vs. evolution: these are two major approaches in maintaining configurations. configuration composition refers to defining a configuration template, which lists the components. together with a set of selection criteria, this template can be instantiated into a bound configuration. the binding may take several steps, such as source variant selection, version selection, tool version selection, and tool parameter selection. appropriate selection criteria allow for a range of desirable configuration instantiations, such as experimental, conservative, or stable. apollo dsee is a good example of this approach. the evolution approach is reflected in environments such as sun nse. the user initially creates a system configuration, populates it with elements, and preserves it. changes to the system are performed relative to a preserved configuration. most configuration management operations are performed at the level of configurations, while versioning of individual files is largely transparent. the rational environment is an environment that combines the two approaches. each of the two major approaches gives a different perspective on a development process. repository-oriented vs. transaction-oriented: again, two major approaches in managing software evolution will create different perceptions of a development process. the repository-oriented approach (or product-oriented approach) centers its support for development management on the products to be managed. the history of product evolution is reflected in the repository and its organization. environments, such as biin sms, have applied the repository mechanisms to provide for management of the work area. the transaction-oriented approach (or process-oriented approach) is centered around the changes and the activities necessary for the changes. istar is an example of a purely process-oriented support facility by directly modeling development steps and integrating them with project planning. other transaction-oriented environments, such as sun nse, take a more modest approach by providing nested transactions as a key concept. a transaction plays the role of a change activity. a transaction log reflects product history. non-terminating transactions act as repositories. as a matter of fact, one can find a spectrum of environment support ranging from repository-oriented to transaction-oriented: repository, work area management, nested transactions, activities/tasks, trigger-based task instantiation, and plan-based task instantiation. concurrency and coordination: some environment builders have recognized the need for different degrees of freedom in concurrency and coordination of development. a closely cooperating team of developers working on a particular subsystem wants more freedom passing partially complete pieces among themselves than developers of different subsystems. some environments support both a cooperating team producing the next configuration releasable outside the team, and teams independently working on different development paths of the same subsystem (e.g., field release with bug fixes and further development) or different subsystems (i.e., partitioning of the system). this is new functionality, and different environments provide different degrees of freedom.
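the separation of mechanism and policy described in this entry can be made concrete with a small sketch. the following is a hypothetical python example (not the api of rational, dsee, nse, istar, sms, or any other environment named above): policy-free check-out/check-in and lock primitives form the mechanism layer, and a serialized (pessimistic) or optimistic development policy is written as a thin envelope on top of them.

```python
# Hypothetical sketch: policy-free check-out/check-in primitives,
# with serialized (locking) and optimistic policies layered on top.

class Repository:
    """Mechanism layer: versioned elements, no locking policy built in."""
    def __init__(self):
        self.versions = {}   # name -> list of revisions
        self.locks = {}      # name -> holder (used only if a policy wants it)

    def check_out(self, name):
        # Return the latest revision; says nothing about who may modify it.
        return self.versions.get(name, [""])[-1]

    def check_in(self, name, text):
        self.versions.setdefault(name, [""]).append(text)

    def lock(self, name, user):
        if self.locks.get(name) not in (None, user):
            raise RuntimeError(f"{name} is locked by {self.locks[name]}")
        self.locks[name] = user

    def unlock(self, name, user):
        if self.locks.get(name) == user:
            del self.locks[name]


class SerializedPolicy:
    """Policy envelope: pessimistic development, one writer at a time."""
    def __init__(self, repo):
        self.repo = repo

    def edit(self, name, user):
        self.repo.lock(name, user)           # reserve before modifying
        return self.repo.check_out(name)

    def commit(self, name, user, text):
        self.repo.check_in(name, text)
        self.repo.unlock(name, user)


class OptimisticPolicy:
    """Policy envelope: concurrent edits, conflicts detected at commit."""
    def __init__(self, repo):
        self.repo = repo

    def edit(self, name, user):
        base = self.repo.check_out(name)
        return base, base                    # (working copy, remembered base)

    def commit(self, name, user, text, base):
        latest = self.repo.check_out(name)
        if latest != base:                   # someone committed in between
            raise RuntimeError("conflict: merge required")
        self.repo.check_in(name, text)


if __name__ == "__main__":
    repo = Repository()
    repo.check_in("main.adb", "procedure Main is begin null; end;")
    serialized = SerializedPolicy(repo)
    src = serialized.edit("main.adb", "alice")
    serialized.commit("main.adb", "alice", src + " -- fixed")
```

either policy is built entirely on top of the same primitives, which is the property the position paper argues a good set of mechanisms should have.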
scopes of visibility: related to the previous point is the desire to provide scopes of visibility. individual developers should be able to make snapshots of their work by freezing a configuration without making it visible to others. developers should be able to make their work available to their team mates before it becomes visible to a test team or the general public. various approaches are being tried. use of access control mechanisms seems to be an obvious choice, but they tend to be one of the least developed areas in environments. the use of multiple repositories (one for each scope of visibility) has the problem of potentially high cost of recompilation as elements are moved from repository to repository. some repository mechanisms provide a viewing mechanism based on a status attribute. different users are limited to viewing elements with different attribute values. evolution-oriented environments utilize nested transactions to reflect restrictions in visibility of changes. the above reflects the state-of-the-art in commercial software development environments and their support for the process of software evolution. many of these environments are robust enough that they can be employed in real projects and scaled up to handle management of large system development. on one hand, it is encouraging to see that there is progress being made in the functionality provided by these environments. on the other hand, it can be a little discouraging to see the limitations that exist in capturing software processes, validating their models, instantiating them, and evolving and adapting them. this is where researchers in the area of process modeling can provide guidance to environment builders as to appropriate mechanisms to be made available, as well as facilities for capturing process models, executing them, and allowing for adaptations and changes during execution. peter p. feiler sparse arrays in j roger k. w. hui extending object oriented programming in smalltalk smalltalk is an object oriented programming language with behavior invoked by passing messages between objects. objects with similar behavior are grouped into classes. these classes form a hierarchy. when an object receives a message, the class or one of its superclasses provides the corresponding method to be executed. we have built an experimental personal information environment (pie) in smalltalk that extends this paradigm in several ways. a pie object, called a node, can have multiple perspectives, each of which provides independent specialized behaviors for the object as a whole, thus providing multiple inheritance for nodes. nodes have metadescription to guide viewing of the objects during browsing, provide default values, constrain the values of attributes, and define procedures to be run when values are sought or set. all nodes have unique names which allow objects to migrate between users and machines. finally, attribute lookup for nodes is context sensitive, thereby allowing alternative descriptions to be created and manipulated. this paper first reviews smalltalk, then discusses our implementation of each of the above capabilities within pie, a smalltalk system for representing and manipulating designs. we then describe our experience with pie applied to software development and technical writing. our conclusion is that the resulting hybrid is a viable offspring for exploring design problems. ira p. goldstein daniel g.
bobrow live memory analysis for garbage collection in embedded systems patrik persson professional development seminars: introduction to pascal this intensive one-day tutorial introduces the concepts and abstractions upon which pascal is based, current methods for specifying the syntax of a language, pascal's relation to software engineering, and the status of the pascal standardization activity. a multimedia environment is used to develop pascal programs on-line. at the end of the session, participants should be able to read and analyze pascal code, differentiate between "true pascal" and non-conforming pascal-like code, and have a working knowledge of the syntactic definitions of pascal: bnf, railroad diagrams, and a pascal standard. a. winsor brown software effort metrics: how to join them p. kokol antipatterns in software architecture t. mobray a certifying compiler for java this paper presents the initial results of a project to determine if the techniques of proof-carrying code and certifying compilers can be applied to programming languages of realistic size and complexity. the experiment shows that: (1) it is possible to implement a certifying native-code compiler for a large subset of the java programming language; (2) the compiler is freely able to apply many standard local and global optimizations; and (3) the pcc binaries it produces are of reasonable size and can be rapidly checked for type safety by a small proof-checker. this paper also presents further evidence that pcc provides several advantages for compiler development. in particular, generating proofs of the target code helps to identify compiler bugs, many of which would have been difficult to discover by testing. christopher colby peter lee george c. necula fred blau mark plesko kenneth cline three way logic in apl true-false logic can be expanded to include a neutral "do not know" option. new primitive functions replace _and, or_ and _not_ in such a way as to generalize the rules of logic. primitive functions in apl support this extension with ease. further generalizations to multilevel logic are also possible. zdenek v. jizba do we really need sqa to produce quality software?: no! well maybe. it depends. yes! joel henry bob blasewitz task assignment in a distributed system (extended abstract): improving performance by unbalancing load mark e. crovella mor harchol-balter cristina d. murta proposal for a prototyping kit the three-levelled ansi/sparc architecture for database systems forms a framework on which software prototypes can be built. the external level corresponds to screen panels, the conceptual level to the data model, and the internal level to the stored files of the prototype. the paper identifies prototype design tools and building-blocks with respect to this architecture. a screen design tool illustrates ideas using nested arrays. a novel aspect is that the paper takes a step towards an operational formulation of the prototyping approach by emphasizing the computerized implementation. jan jantzen war on the workspace!: supporting continuously changing commercial software using a relational database we describe a relatively simple but effective set of techniques that is used to support a commercial computer application maintained by several developers and used by several thousand businesses. the techniques allow for complete control over a system that is in a continuous state of maintenance and enhancement.
all functions and data are controlled by a proprietary relational database written in apl with almost no reliance on the contents of the workspace. the workspace contains only the few functions required to initialize the application and utility functions that are there simply to avoid defining them at load time. all other functions are defined as they are needed by specific user commands and erased thereafter. edward j. shaw an integrated system for program testing using weak mutation and data flow analysis the idea of weak mutation testing is to construct test data which would force program components such as expressions and variable references to produce a wrong 'result' if they were to contain certain types of error, for example, off-by-a-constant or wrong-variable. the idea of data flow driven testing is to construct test data which forces the execution of different interactions between variable definitions and references in a program. this paper describes a tool for fortran 77 programs which has been developed to help a user apply the weak mutation and data flow testing techniques. the tool instruments a given source program and collects a program execution history. it is then able to report on the completeness of the test data with respect to weak mutation and a family of data flow path selection criteria. some preliminary experiments with use of the tool are described. m. r. girgis m. r. woodward "d2r": a dynamic dataflow representation for task scheduling johann rost ts: an optimizing compiler for smalltalk ts (typed smalltalk) is a portable optimizing compiler that produces native machine code for a typed variant of smalltalk, making smalltalk programs much faster. this paper describes the structure of ts, the kinds of optimizations that it performs, the constraints that it places upon smalltalk, the constraints placed upon it by an interactive programming environment, and its performance. ralph e. johnson justin o. graver laurance w. zurawski industrial priorities for software engineering research (panel) barry boehm a complete notation for ada charts j m bishop a taxonomy of coordination mechanisms used by real-time processes a taxonomy of the coordination mechanisms for the synchronization and communication of concurrent processes is proposed. the taxonomy deals with the issues of a real-time software architecture that are application domain independent. the taxonomy will help the designer select the best coordination mechanism for a particular situation. a software reuse domain analysis methodology has been used to develop the taxonomy. while ada is the programming language that has been considered here, some of the attributes and guidelines are still valid for other programming languages. jose l. fernandez integrated program measurement and documentation tools this paper describes an attempt to integrate the collection and the efficient utilisation of measurements in the development and the use of programs. the work presented consists of three parts: the design of both static and dynamic measurement tools; examples of data processing on measurements collected on a sample of pascal programs; and the design of a quantitative documentation of a program, which is automatically built as measurements are collected. the first and third steps have been developed inside an existing programming environment, mentor, and we shall discuss the advantages we found in integrating the tools in such an environment.
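the static measurement step described in the preceding abstract can be sketched briefly. the example below is hypothetical and written in python rather than inside an environment such as mentor; the measures (line count, routine count, comment count) and the record layout are illustrative assumptions only.

```python
# Hypothetical sketch of a static measurement pass feeding a small
# quantitative documentation record (measures and fields are illustrative).
import re
from dataclasses import dataclass

@dataclass
class QuantDoc:
    name: str
    lines: int
    routines: int
    comment_lines: int

def measure(name: str, source: str) -> QuantDoc:
    text = source.splitlines()
    # Very rough static measures; a real tool would parse the language.
    routine_pat = re.compile(r"^\s*(procedure|function)\b", re.IGNORECASE)
    routines = sum(1 for ln in text if routine_pat.match(ln))
    comments = sum(1 for ln in text if ln.lstrip().startswith(("{", "(*")))
    return QuantDoc(name, len(text), routines, comments)

if __name__ == "__main__":
    sample = """{ toy pascal fragment }
procedure hello;
begin
  writeln('hi')
end;
"""
    print(measure("sample.pas", sample))
```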
anne schroeder an automatic trace analysis tool generator for estelle specifications this paper describes the development of tango, an automatic generator of backtracking trace analysis tools for single-process specifications written in the formal description language, estelle. a tool generated by tango automatically checks the validity of any execution trace against the given specification, and supports a number of checking options. the approach taken was to modify an estelle-to-c++ compiler. discussion about nondeterministic specifications, multiple observation points, and on-line trace analysis follow. trace analyzers for the protocols lapd and tp0 have been tested and performance results are evaluated. issues in the analysis of partial traces are also discussed. s. alan ezust gregor v. bochmann higher-order dataflow and its implementation on stock hardware p. rondogiannis w. w. wadge surfing the net for software engineering notes mark doernhoefer static analysis of upper and lower bounds on dependences and parallelism existing compilers often fail to parallelize sequential code, even when a program can be manually transformed into parallel form by a sequence of well- understood transformations (as in the case for many of the perfect club benchmark programs). these failures can occur for several reasons: the code transformations implemented in the compiler may not be sufficient to produce parallel code, the compiler may not find the proper sequence of transformations, or the compiler may not be able to prove that one of the necessary transformations is legal. when a compiler fails to extract sufficient parallelism from a program, the programmer may try to extract additional parallelism. unfortunately, the programmer is typically left to search for parallelism without significant assistance. the compiler generally does not give feedback about which parts of the program might contain additional parallelism, or about the types of transformations that might be needed to realize this parallelism. standard program transformations and dependence abstractions cannot be used to provide this feedback. in this paper, we propose a two-step approach to the search for parallelism in sequential programs. in the first step, we construct several sets of constraints that describe, for each statement, which iterations of that statement can be executed concurrently. by constructing constraints that correspond to different assumptions about which dependences might be eliminated through additional analysis, transformations, and user assertions, we can determine whether we can expose parallelism by eliminating dependences. in the second step of our search for parallelism, we examine these constraint sets to identify the kinds of transformations needed to exploit scalable parallelism. our tests will identify conditional parallelism and parallelism that can be exposed by combinations of transformations that reorder the iteration space (such as loop interchange and loop peeling). this approach lets us distinguish inherently sequential code from code that contains unexploited parallelism. it also produces information about the kinds of transformations needed to parallelize the code, without worrying about the order of application of the transformations. furthermore, when our dependence test is inexact we can identify which unresolved dependences inhibit parallelism by comparing the effects of assuming dependence or independence. 
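a toy illustration of the first step described above, constructing, per statement, a description of which iterations may run concurrently, is given below. this is a deliberate simplification in python, not the authors' constraint system: it only handles a single loop with a constant, non-negative dependence distance d, where iterations i and j (i < j) conflict exactly when j - i equals d.

```python
# Toy illustration (not the authors' constraint system): for the loop
#     for i in range(n): a[i] = a[i - d] + 1      (with d >= 0)
# iteration j reads the element written by iteration j - d, so iterations
# i < j conflict exactly when j - i == d.

def conflicting_pairs(n: int, d: int):
    """Iteration pairs (i, j), i < j, that must not run concurrently."""
    return {(i, j) for i in range(n) for j in range(i + 1, n) if j - i == d}

def is_fully_parallel(n: int, d: int) -> bool:
    # No loop-carried dependence means every iteration pair may run concurrently.
    return not conflicting_pairs(n, d)

if __name__ == "__main__":
    print(conflicting_pairs(6, 2))    # {(0, 2), (1, 3), (2, 4), (3, 5)}
    print(is_fully_parallel(6, 0))    # True: d = 0, no carried dependence
    print(is_fully_parallel(6, 2))    # False: distance-2 dependences remain
```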
we are currently exploring the use of this information in programmer-assisted parallelization. william pugh david wonnacott programming pearls jon bentley planning and design of information systems using oodpm offer drori memo andreas geyer-schulz aspect-oriented programming with adaptive methods karl lieberherr doug orleans johan ovlinger surveyor's forum: idiomatic programming james l. peterson a low-level tasking package for ada a standard package of low-level ada tasking operations is needed, to support real-time embedded applications for which the existing ada tasking operations are too slow or provide insufficient control over timing. this paper suggests criteria for such a package, and gives examples of how specific low-level tasking operations might be used to solve some important real-time problems. t. p. baker parallel changes in large scale software development: an observational case study dewayne e. perry harvey p. siy lawrence g votta built-in faculties of spider programming language for heuristic algorithms implementation e. golemanova t. golemanov k. krachanov using critics to analyze evolving architectures jason e. robbins david m. hilbert david f. redmiles a gesture based user interface prototyping system gid, for gestural interface designer, is an experimental system for prototyping gesture-based user interfaces. gid structures an interface as a collection of "controls": objects that maintain an image on the display and respond to input from pointing and gesture-sensing devices. gid includes an editor for arranging controls on the screen and saving screen layouts to a file. once an interface is created, gid provides mechanisms for routing input to the appropriate destination objects even when input arrives in parallel from several devices. gid also provides low level feature extraction and gesture representation primitives to assist in parsing gestures. r. b. dannenberg d. amon interactive modular programming in scheme this paper presents a module system and a programming environment designed to support interactive program development in scheme. the module system extends lexical scoping while maintainig its flavor and benefits and supports mutually recursive modules. the programming environment supports dynamic linking, separate compilation, production code compilation, and a window-based user interface with multiple read-eval-print contexts. sho-huan simon tung compiler and tool support for debugging object protocols we describe an extension to the java programming language that supports static conformance checking and dynamic debugging of object "protocols," i.e., sequencing constraints on the order in which methods may be called. our java protocols have a statically checkable subset embedded in richer descriptions that can be checked at run time. the statically checkable subtype conformance relation is based on nierstrasz' proposal for regular (finite- state) process types, and is also very close to the conformance relation for architectural connectors in the wright architectural description language by allen and garlan. richer sequencing properties, which cannot be expressed by regular types alone, can be specified and checked at run time by associating predicates with object states. we describe the language extensions and their rationale, and the design of tool support for static and dynamic checking and debugging. 
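the run-time side of the protocol checking described in the preceding abstract can be approximated with a small finite-state wrapper. the sketch below is hypothetical python, not the authors' java extension: a protocol maps each object state to the methods allowed in that state, and an illegal call raises an error.

```python
# Hypothetical sketch of run-time protocol checking: a finite-state
# automaton records which methods may be called in which object state.

class ProtocolError(Exception):
    pass

class ProtocolChecked:
    # protocol: state -> {method name -> next state}
    PROTOCOL = {}
    START = None

    def __init__(self):
        self._state = self.START

    def _step(self, method):
        allowed = self.PROTOCOL.get(self._state, {})
        if method not in allowed:
            raise ProtocolError(
                f"{method}() not allowed in state {self._state!r}")
        self._state = allowed[method]

class File(ProtocolChecked):
    # open must precede read/write; close returns to the closed state.
    PROTOCOL = {
        "closed": {"open": "open"},
        "open":   {"read": "open", "write": "open", "close": "closed"},
    }
    START = "closed"

    def open(self):        self._step("open")
    def read(self):        self._step("read");  return b""
    def write(self, data): self._step("write")
    def close(self):       self._step("close")

if __name__ == "__main__":
    f = File()
    f.open(); f.read(); f.close()     # legal call sequence
    try:
        f.read()                      # protocol violation: read after close
    except ProtocolError as e:
        print("caught:", e)
```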
sergey butkevich marco renedo gerald baumgartner michal young implementation of event handling in gna95gp gnat ada 95 graphics package (gna95gp) is a 2d graphics library that supports graphics application development in ada 95. it provides support for 2d drawing, event handling and storage of graphics primitives. this paper addresses the implementation of the event handling module of gna95gp under windows 3.1. using this event handling module, an user can write interactive programs under pc environment in ada 95 language. this event handling module is the result of an attempt to develop an event handling package using microsoft windows functions and ada 95 programming language features. this paper describes event handling and its implementation in gna95gp. similar to other graphics packages such as gks, srgp, and phigs, event handling is an important part in gna95gp. shan kuang k. m. george lan li a note on "on the conversion of indirect to direct recursion" in the article "on the conversion of indirect to direct recursion"(acm lett. program. lang. 2, 1-4. pp. 151-164), a method was introduced to convert indirect to direct recursion. it was claimed that for any call graph, there is a mutual-recursion elimination sequence if and only if no strongly connected component contains two node-disjoint circuits. we first give a counterexample and then provide a correction. ting yu owen kaser automatic checking of instruction specifications mary fernández norman ramsey analysis capabilities for requirements specified in statecharts b. e. melhart n. g. leveson m. s. jaffe metrics for real-time systems t. hand garbage collection and memory management huw evans peter dickman kernel korner: linux 2.4 spotlight: isa plug and play if you are tired of the comple joseph pranevich a first look at kde programming mr. sweet teaches us how to write an application for the kde desktop--for the experienced gui programmer david sweet reasoning about object-oriented programs that use subtypes programmers informally reason about object-oriented programs by using subtype relationships to classify the behavior of objects of different types and by letting supertypes stand for all their subtypes. we describe formal specification and verification techniques for such programs that mimic these informal ideas. our techniques are modular and extend standard techniques for reasoning about programs that use abstract data types. semantic restrictions on subtype relationships guarantee the soundness of these techniques. gary t. leavens william e. weihl orbit: an optimizing compiler for scheme david kranz richard kelsey jonathan rees paul hudak james philbin interactive system for structured program production the automation of program development procedures is one of the important subjects to be solved for improving productivity. the documentation of program specification and coding process involve many mechanical operations. presently, however, they are done mostly by human efforts. this greatly hinders the improvement of both productivity and program quality. for the purpose of solving all the problems, we developed an experimental system sdl/pad. upon receipt of the specifications described in a structured form which has been defined in accordance with structured design, the sdl/pad. system automatically generates programs written with a computer language such as pl/m or the like and specification documents. this system eliminates the troublesome inconsistency between documents and source programs. h. maezawa m. kobayashi k. 
saito y. futamura implementing fortran77 support in the gnu gdb debugger farooq butt hbench:java: an application-specific benchmarking framework for java virtual machines xiaolan zhang margo seltzer feature analysis of turbo prolog w hankley basic operations of the visicola scope model michael klug ada, c, c++, and java vs. the steelman this paper compares four computer programming languages (ada95, c, c++, and java) with the requirements of "steelman", the original 1978 requirements document for the ada computer programming language. this paper provides a view of the capabilities of each of these languages, and should help those trying to understand their technical similarities, differences, and capabilities. david a. wheeler transporting a portable operating system: unix to an ibm minicomputer the "portable" unix operating system was transported to an ibm series/1 minicomputer. the process of transporting is described with emphasis on (1) adapting to the target machine architecture; (2) the selection of the approach taken to transporting; (3) a description of the problems encountered; (4) the degree of portability of the unix system; and (5) a summary of the portability lessons learned. paul j. jalics thomas s. heines forum diane crawford linux around the world corporate linux journal staff letters to the editor corporate linux journal staff authentication and revocation in spm extended abstract claudio calvelli vijay varadharajan designing configurable software; compass implementation concepts eric w. booth michael e. stark apl function definition notation a pure functional notation for defining apl objects is described, and constructed with previous work in this area. the notation is extended to address both theoretical and pragmatic programming considerations. the notation is compatible with existing implementations, and is shown to straightforwardly incorporate popular extensions to the language. john bunda a small hybrid jit for embedded systems geetha manjunath venkatesh krishnan getting started with aspectj gregor kiczales erik hilsdale jim hugunin mik kersten jeffrey palm william griswold some uses of truncated boolean vectors in analysis the truncated first occurrence vector (tfov) is a data construct which the author has found useful in a variety of application areas. it is based on comparisons of adjacent items in a set ordered by some criterion. the use of the tfov and its several derivatives improve efficiency and reduce the load on system resources. a contributing factor in performance improvement is the algorithm used in the apl implementations of monadic grade and its offshoot dyadic grade. it is interesting to note that an ordered set improves the efficiency of processes other than those utilizing the tfov. calculation of the range and median of a numeric set is an example of this. as new primitives have been added to apl, construction and generation of the tfov has been simplified. the paper shows both the early and the later idioms used in programs. the techniques for calculating frequency distributions and the related statistical mode or modes are shown. the development of sub-totals by group in simple (single criterion) and hierarchical cases are illustrated. the generation of (ordered) defining subsets and "arrival" and "departure" sequences are shown. along with the ordered defining subset (ods) a pointer to the ods can be developed. this technique has been found useful in applications requiring data compaction. 
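the construct described in this abstract can be recreated outside apl. the sketch below is python rather than apl, so the vector operations are spelled out with lists; it builds a first-occurrence boolean vector by comparing adjacent items of an ordered key vector and uses it for frequency counts and sub-totals by group, two of the applications mentioned above.

```python
# Sketch of the "first occurrence" idea in Python instead of APL:
# given keys sorted by some criterion, mark where each new group starts
# by comparing adjacent items, then use the marks for counts and sub-totals.

def first_occurrence(sorted_keys):
    """Boolean vector: True where an item differs from its predecessor."""
    return [i == 0 or sorted_keys[i] != sorted_keys[i - 1]
            for i in range(len(sorted_keys))]

def group_subtotals(sorted_keys, values):
    marks = first_occurrence(sorted_keys)
    totals, counts = [], []
    for k, v, new in zip(sorted_keys, values, marks):
        if new:
            totals.append([k, 0.0])
            counts.append([k, 0])
        totals[-1][1] += v
        counts[-1][1] += 1
    return totals, counts   # per-group sub-totals and frequencies

if __name__ == "__main__":
    keys   = ["a", "a", "b", "b", "b", "c"]        # already ordered
    values = [1.0, 2.0, 5.0, 5.0, 1.0, 7.0]
    print(first_occurrence(keys))   # [True, False, True, False, False, True]
    print(group_subtotals(keys, values))
```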
the tfov has proven useful in the development of control vectors for formatting tables, especially for the insertion of sub-totals. a function for pagination of documents which eliminates "widows" has been developed, as have functions for reformatting paragraphs in text processing. in these latter cases, while the ordering algorithm is simply that of position, the concept is a direct outgrowth of experience with the tfov. the techniques and concepts discussed in this paper are not intended to be exhaustive but are intended to stimulate programmers in considering an approach often overlooked. it is interesting to note that, while these ideas are applicable in other languages, their recognition required the primitives and arrays of apl. howard j. smith y+: a yacc preprocessor for certain semantic actions joseph c. h. park a comment on the notation of the wirfs-brock et al object-oriented design method rose mazhindu-shumba dyad definition kenneth e. iverson the designer as user: building requirements for design tools from design practice software tools that support the design and development of interactive computing systems are an exciting possibility. the potential pay-off is great: user interface management systems, for example, promise not only to speed the process of specifying, implementing and maintaining user interface code, but also to guide the content of the user interfaces they support. as for any tool intended for human use, however, the success of software design tools will hinge on a thorough understanding of the problems they seek to address---design as it is practiced in the real world. mary beth rosson wendy kellogg susanne maass multitasking without compromise: a virtual machine evolution the multitasking virtual machine (called from now on simply mvm) is a modification of the java virtual machine. it enables safe, secure, and scalable multitasking. safety is achieved by strict isolation of applications from one another. resource controls augment security by preventing some denial-of-service attacks. improved scalability results from an aggressive application of the main design principle of mvm: share as much of the runtime as possible among applications and replicate everything else. the system can be described as a 'no compromise' approach --- all the known apis and mechanisms of the java programming language are available to applications. mvm is implemented as a series of carefully tuned modifications to the java hotspot virtual machine, including the dynamic compiler. this paper presents the design of mvm, focusing on several novel and general techniques: an in-runtime design of lightweight isolation, an extension of a copying, generational garbage collector to provide best-effort management of a portion of the heap space, and a transparent and automated mechanism for safe execution of user-level native code. mvm demonstrates that multitasking in a safe language can be accomplished with a high degree of protection, without constraining the language, and with competitive performance characteristics. grzegorz czajkowski laurent daynes towards a minimal object-oriented language for distributed and concurrent programming matthias radestock susan eisenbach user manager software mr.
williams presents a tool to handle all of your user-administration tasks branden williams an exercise in evaluating significance of software quality criteria the paper presents an exercise in mapping user requirements to software quality model in order to disclose the product quality characteristics the developer should take special care of. house of quality of the quality function deployment method [1,2] has been adapted for dealing with system requirements. stanislaw szejko compiling functional languages with flow analysis suresh jagannathan andrew wright expanding (\\\\) on patterns gregg w. taylor linux makes the big leagues, hewlett packard interworks 97 sam williams linux apprentice understanding /dev: this article gives us a basic introduction to device files and their uses preston f. crow analysis of queueing network models with population size constraints and delayed blocked customers queueing network models - qnm's with population size constraints and delayed blocked customers occur due to multiprogramming level - mpl constraints in computer systems and window flow- control mechanisms in computer communication networks - ccn's. the computational cost of existing algorithms is unacceptable for large numbers of chains and high population sizes. a fast approximate solution technique based on load concealment is presented to solve such qnm's. the solution procedure is non-iterative in the case of fixed rate poisson arrivals, while iteration is required in the case of quasi-random arrivals. each iteration requires the solution of a single chain network of queues comprised of stations visited by each chain. we then present an algorithm to detect saturated chains and determine their maximum throughput. a fast solution algorithm due to reiser for closed chains is also extended to the case of quasi-random arrivals. the accuracy of the proposed solution techniques is compared to previous techniques by applying it to a test case, reported in the literature, and a set of randomly generated examples. alexander thomasian paul bay the quality of questionnaire based software maintenance studies magne jøgensen completely validated software: mathematics-based software engineering for completely validated software (panel session) richard c. linger mostly reuse: another code sharing option phillip mccoog rick smith efficient composite data flow analysis applied to concurrent programs gleb naumovich lori a. clarke leon j. osterweil software visualization for debugging ron baecker chris digiano aaron marcus discovering auxiliary information for incremental computation yanhong a. liu scott d. stoller tim teitelbaum the maintenance of intermediate values in goal-directed evaluation in programming languages that support goal-directed evaluation to make use of alternative results, an expression can produce a value, suspend, and later be resumed to produce another value. this causes control backtracking to earlier points in a computation and complicates the maintenance of intermediate values. this paper presents a space-efficient algorithm computing the lifetimes of intermediate values that is used by an optimizing compiler for the icon programming language. the algorithm is applicable to other programming languages that employ goal-directed evaluation. kenneth walker ralph e. griswold a production system language kdops jing li yulin feng software architecture and mobility p. ciancarini c. mascolo specification techniques for object-oriented software (abstract) mahesh h. 
dodani extensionality and intensionality of the ambient logics davide sangiorgi book reviews: linux in a nutshell sid wentworth exploiting short-lived variables in superscalar processors luis a. lozano guang r. gao corba and the future of application development bill beckwith programming with fortran changes made to, and proposed for, fortran seem to be directed at improving the code that can be written in it. at the same time, supporters of fortran deplore fortran's declining popularity, and suggest changes to improve fortran code further. however, fortran's popularity might be boosted by making it a better system to program in by providing support for the programming process rather than improving the product. neville holmes "transitioning an asis application: version 1 to ada95 2.0" asis (ada semantic interface specification) applications written to the "version 1" standard or subsequent 1.x permutations of this standard (such as version 1.1.1) were developed with and for ada83. porting such applications to the ada95 asis standard, version 2.0, is a non-trivial task. such an effort needs to address the major changes to the semantic abstractions of the language as represented by the version 2.0 ada95 asis specification. this paper is an experience report that documents an approach used to port several version 1.0 asis applications to version 2.0 compliance. a methodology for "finding" the mappings between the two specifications is documented along with the results (a mapping table of the asis queries) of that approach. joseph r. wisniewski towards an evolutive kernel of measurements on ada sources developed on an integrated software engineering environment henry basson jean claude derniame inside risks: a few old coincidences computer puns considered harmful: presented here are two old examples of harmful input sequences that might be called computer puns. each has a double meaning, depending upon context. xerox parc's pioneering wysiwyg editor bravo [1] had a lurking danger. in edit mode, bravo interpreted the sequence edit as "everything deleted insert t," which did exactly that---transformed a large file into the letter 't' without blinking. after the first two characters, it was still possible to undo the 'ed,' but once the 'i' was typed the only remaining fallback was to replay the recorded keystroke log from the beginning of the editing session (except for 'edit') against the still-unaltered original file. a similar example was reported by norman cohen of softech: he had been entering text using the university of maryland line editor on the univac 1100 for an hour or two, when he entered two lines that resulted in the entire file being wiped out. the first line contained exactly 80 characters (demarcated by a final carriage return); the second line began with the word "about." cohen said: "because the first line was exactly 80 characters long, the terminal handler inserted its own cr just before mine, but i started typing the second line before the generated cr reached the terminal. when i finished entering the second line, a series of queued output lines poured out of the terminal. it seems that, having received the cr generated by the terminal handler, the editor interpreted my cr as a request to return from input mode to edit mode. in edit mode, the editor processed the second line by interpreting the first three letters as an abbreviation for abort and refused to be bothered by the rest of the line.
had the editing session been interrupted by a system crash, an autosave feature would have saved all but the last 0-20 lines i had entered. however, the editor treated the abort request as a deliberate action on my part, and nothing was saved. two wrongs make a right (sometimes):a somewhat obscure wiring fault remained undetected for many years in the harvard mark i. each decimal memory register consisted of 23 ten-position stepping switches (plus a sign switch). registers were used dually as memory locations and as adders. the wires into (and out of) the least significant two digits of the last register were crossed, so that the least significant position was actually the second-to-least position and vice versa with respect to memory. no problems arose for many years during which that register was used fortuitously only for memory in the computation of tables of bessel functions of the nth kind; the read-in error corrected itself on read-out. the problem finally manifested itself on the n + 1st tables when that register was used as an adder and a carry went in the wrong direction. this was detected only because it was standard practice in those days to difference the resulting tables by hand (using very old adding machines). things have changed and we have learned a lot; however, similar problems continue to arise, often in new guises. discussion:today's systems have comparable dangers lurking, with even more global effects. in user interfaces, we have all experienced a slight error in a command having devastating consequences. in software, commands typed in one window or in one directory may have radically different effects in other contexts. programs are often not written carefully enough to be independent of environmental irregularities and less-than-perfect users. search paths provide all sorts of opportunities for similar computer puns (including triggering of trojan horses). accidental deletion is still quite common, although we now have undelete operations. in hardware, various flaws in chip designs have persisted into delivery. many of you will have similar tales to tell. please contribute them. conclusions: designers of human interfaces should spend much more time anticipating human foibles. crosschecking and backups are ancient techniques, but still essential. computers do not generally appreciate puns. peter g. neumann example process program code, coded in appl/a leon osterweil from design to redesign software engineering environments have to support design methodologies whose main activity is not the generation of new independent programs, but the maintenance, integration, modification and explanation of existing ones. especially for software systems in ill- structured problem domains where detailed specifications are not available (like artificial intelligence and human-computer communication), incremental, evolutionary redesign has to be efficiently supported. to achieve this goal we have designed and constructed an object-oriented, knowledge-based user interface construction kit and a large number of associated tools and intelligent support systems to be able to exploit this kit effectively. answers to the "user interface design question" are given by providing appropriate building blocks that suggest the way user interfaces should be built. the object-oriented system architecture provides great flexibility, enhances the reusability of many building blocks, and supports redesign. 
because existing objects can be used either directly or with minor modifications, the designer can base a new user interface on standard and well-tested components. g. fischer a. c. lemke c. rathke a pragmatic set operation and its implementation in c tom bennet batch vs. timesharing: closing the gap this paper describes and analyzes the current state of ibm timesharing. the functionality of the timesharing system is described through the evolution of the remote job processing facility. the constraints to the replacement of batch by timesharing are discussed, acknowledging that these constraints are offset by the potential of timesharing to provide an enhanced computing environment via its man-machine interface. finally the actual implementations of common computing tasks are compared against the full potential inherent to timesharing. bruce lakin industrializing software production software production is very complex, even if tool-aided, as tools are complex too. specialization is necessary to simplify the work. stating analogy between traditional industry and software production, and drawing lessons from observations of some actual experiences, we propose a systematical specialization approach, namely industrialized programming (ip). ip is based on a two- dimension life cycle. every phase contains intelligence, formalization and checking. the formalization seems like "execution" in traditional industry. ip is composed of: 1-analysts for creative tasks in the intelligence step; 2-specialists for the formalization tasks, each of them is specialized in one part of formalization techniques for all analysts. when accurately managed, ip will allow anyone to work according to his own capacity, without negative human side-effects. huang weiqiao documentation: document distribution maintenance pamela j. purcell macro-by-example: deriving syntactic transformations from their specifications this paper presents two new developments. first, it describes a "macro-by- example" specification language for syntactic abstractions in lisp and related languages. this specification language allows a more declarative specification of macros than conventional macro facilities do by giving a better treatment of iteration and mapping constructs. second, it gives a formal semantics for the language and a derivation of a compiler from the semantics. this derivation is a practical application of semantics- directed compiler development methodology. e. e. kohlbecker m. wand vmcm, a pcte based version and configuration management system k. berrada f. lopez r. minot standard fixpoint iteration for java bytecode verification java bytecode verification forms the basis for java-based internet security and needs a rigorous description. one important aspect of bytecode verification is to check if a java virtual machine (jvm) program is statically well-typed. so far, several formal specifications have been proposed to define what the static well-typedness means. this paper takes a step further and presents a chaotic fixpoint iteration, which represents a family of fixpoint computation strategies to compute a least type for each jvm program within a finite number of iteration steps. since a transfer function in the iteration is not monotone, we choose to follow the example of a nonstandard fixpoint theorem, which requires that all transfer functions are increasing, and monotone in case the bigger element is already a fixpoint. 
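the kind of computation described in the preceding abstract can be illustrated with a generic worklist-style fixpoint iteration over a small finite lattice. the sketch below is python and deliberately toy-sized; it is not the jvm verifier: the lattice, the flow graph, and the identity transfer function are illustrative assumptions, and the artificial top element stands in for "not well-typed".

```python
# Generic worklist fixpoint iteration over a small finite lattice
# (a toy stand-in for dataflow-style bytecode verification, not the JVM).

# Lattice: BOT <= INT, REF <= TOP ; TOP is the artificial "error" element.
BOT, INT, REF, TOP = "bot", "int", "ref", "top"

def join(a, b):
    if a == b or b == BOT:
        return a
    if a == BOT:
        return b
    return TOP                        # incompatible facts collapse to TOP

def fixpoint(successors, transfer, nodes, entry, entry_fact):
    fact = {n: BOT for n in nodes}
    fact[entry] = entry_fact
    work = [entry]
    while work:
        n = work.pop()
        out = transfer(n, fact[n])    # per-node transfer function
        for s in successors.get(n, []):
            new = join(fact[s], out)  # merge with what s already has
            if new != fact[s]:
                fact[s] = new
                work.append(s)        # s changed: revisit it
    return fact

if __name__ == "__main__":
    succ = {0: [1], 1: [2], 2: [1, 3]}   # a tiny flow graph with a loop 1 -> 2 -> 1

    def transfer(n, f):
        return f                          # identity transfer, for brevity

    print(fixpoint(succ, transfer, [0, 1, 2, 3], 0, INT))
```

because the lattice is finite and join only moves facts upward, the iteration reaches its least fixpoint in a bounded number of steps regardless of the order in which the worklist is drained.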
the resulting least type is the artificial top element if and only if the jvm program is not statically well-typed. the iteration is standard and close to sun's informal specification and most commercial bytecode verifiers. zhenyu qian using failure cost information for testing and reliability assessment a technique for incorporating failure cost information into algorithms designed to automatically generate software-load-testing suites is presented. a previously introduced reliability measure is also modified to incorporate this cost information. examples are presented to show the usefulness of including cost information when testing or assessing software. elaine j. weyuker trace: a tool for logging operating system call transactions diomidis spinellis a mobile debugger for mobile programs m. ranganathan anurag acharya laurent andrey virginie schaal winwin: a system for negotiating requirements ellis horowitz joo h. lee june sup lee decompile java class files with soot! (poster session) jerome miecznikowski etienne gagnon decay-usage scheduling in multiprocessors decay-usage scheduling is a priority-aging time-sharing scheduling policy capable of dealing with a workload of both interactive and batch jobs by decreasing the priority of a job when it acquires cpu time, and by increasing its priority when it does not use a cpu. in this article we deal with a decay-usage scheduling policy in multiprocessors modeled after widely used systems. the priority of a job consists of a base priority and a time-dependent component based on processor usage. because the priorities in our model are time dependent, a queueing-theoretic analysis---for instance, for the mean job response time---seems impossible. still, it turns out that as a consequence of the scheduling policy, the shares of the available cpu time obtained by jobs converge, and a deterministic analysis for these shares is feasible: we show how for a fixed set of jobs with large processing demands, the steady-state shares can be obtained given the base priorities, and conversely, how to set the base priorities given the required shares. in addition, we analyze the relation between the values of the scheduler parameters and the level of control it can exercise over the steady-state share ratios, and we deal with the rate of convergence. we validate the model by simulations and by measurements of actual systems. d. h. j. epema a distributed hypercube file system for the hypercube, an autonomous physically interconnected file system is proposed. the resulting distributed file system consists of an i/o organization and a software interface. the system is loosely-coupled architecturally but from an operating systems point of view a tightly-coupled system is formed in which interprocessor messages are handled differently from file accesses. a matrix multiplication algorithm is given to show how the distributed file system is utilized. r. j. flynn h. hadimioglu polymorphism and type checking in object-oriented languages p. grogono a. bennett debunking the myths about fortran craig t. dedo relaxing the constraints on ada's limited private types through functional expressions john beidler modeling software tools with icon this paper describes a new software test automation tool, a powerful new programming language, and the software development process that resulted when these tools were combined. a small development team of software developers and potential customers devised the unconventional process to meet a short deadline.
the process produced an operational prototype or model of the entire software system that customers were able to use during the time it was being developed. the first model of the buster automated testing system was conceived, designed, and implemented ahead of schedule in less than six months, complete with many features and components. the buster system provides a test-information subsystem with facilities for multi-project test sharing, per-project test storage and planning, and test downloading for lab use. a separate test execution facility is also included that features test-result logging, a results database, and per-session i/o recording. the customer, at&t 3b4000 system test, reports that system soak tests that had taken three weeks now can be completed in one week, using buster. the software modeling technique that was used to create the buster test system is a new idea that can be used to produce reliable low-cost software in many applications. unlike more conventional software engineering approaches, including rapid prototyping, the model can be used by customers as it is slowly evolved into a finished product. the model is used to embody and test designs and identify missing requirements before making large investments in production level code and documentation. in addition, software modeling makes it possible to develop comprehensive system test suites long before production level software is available. the process is composed of three major components: brainstorming and team building with customers, high-level language engineering, and automated software testing. o. r. fonorow texas: good, fast, cheap persistence for c++ vivek singhal sheetal v. kakkad paul r. wilson best of technical support corporate linux journal staff reuse-driven interprocedural slicing mary jean harrold ning ci a methodology for monitor development in concurrent programs j. Ángel velasquez-iturbide reliance on correlation data for complexity metric use and validation stephen macdonell book review: the future of software david k. billings end user programming/informal programming howie goodell sarah kuhn david maulsby carol traynor c++'s destructors can be destructive l. s. tang efficient algorithms for bidirectional debugging this paper discusses our research into algorithms for creating an efficient bidirectional debugger in which all traditional forward movement commands can be performed with equal ease in the reverse direction. we expect that adding these backwards movement capabilities to a debugger will greatly increase its efficacy as a programming tool. the efficiency of our methods arises from our use of event counters that are embedded into the program being debugged. these counters are used to precisely identify the desired target event on the fly as the target program executes. this is in contrast to traditional debuggers that may trap back to the debugger many times for some movements. for reverse movements we re-execute the program (possibly using two passes) to identify and stop at the desired earlier point. our counter-based techniques are essential for these reverse movements because they allow us to efficiently execute through the millions of events encountered during re-execution. two other important components of this debugger are its i/o logging and checkpointing. we log and later replay the results of system calls to ensure deterministic re-execution, and we use checkpointing to bound the amount of re-execution used for reverse movements.
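the counter idea described in the boothe abstract above can be illustrated with a toy sketch. the python below is hypothetical and not the authors' implementation: a deterministic program is instrumented with an event call, a forward run records how often each event occurred, and a "reverse movement" becomes a re-execution that stops when the counter reaches the chosen occurrence.

```python
# Toy illustration of counter-based reverse movement: re-execute the
# (deterministic) program and stop at the k-th occurrence of an event,
# instead of trapping to the debugger at every occurrence.

class StopAt(Exception):
    pass

def run(program, stop_event=None, stop_count=None):
    """Execute `program`, counting events; optionally stop at an occurrence."""
    counters = {}
    def event(name, state):
        counters[name] = counters.get(name, 0) + 1
        if name == stop_event and counters[name] == stop_count:
            raise StopAt((name, stop_count, dict(state)))
    try:
        program(event)
    except StopAt as stop:
        return stop.args[0], counters
    return None, counters

def demo_program(event):
    x = 0
    for i in range(5):
        x += i
        event("loop_iteration", {"i": i, "x": x})   # instrumented event

if __name__ == "__main__":
    # Forward run records that "loop_iteration" happened 5 times ...
    _, counts = run(demo_program)
    # ... so a movement to "2 iterations before the end" becomes a
    # re-execution that stops at occurrence counts[...] - 2.
    target = counts["loop_iteration"] - 2
    stopped_at, _ = run(demo_program, "loop_iteration", target)
    print("stopped at:", stopped_at)
```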
short movements generally appear instantaneous, and the time for longer movements is usually bounded within a small constant factor of the temporal distance moved back. bob boothe a plea for readable pleas for a readable prolog programming style randy m. kaplan distributed software engineering dennis heimbigner alexander l. wolf type inference and type checking for functional programming languages takuya katayama proxy: an interpreter for rapid prototyping burt leavenworth layered virtual machine/object-oriented design ken shumate comparison as a value-yielding operation markku sakkinen compilation using modules with f90 michael schwarz an ad hoc approach to the implementation of polymorphism r. morrison a. dearle r. c. h. connor a. l. brown adam, an ada simulation library magnus sjöland rune thyselius barbro sjöland parsing and compiling using prolog this paper presents the material needed for exposing the reader to the advantages of using prolog as a language for describing succinctly most of the algorithms needed in prototyping and implementing compilers or producing tools that facilitate this task. the available published material on the subject describes one particular approach in implementing compilers using prolog. it consists of coupling actions to recursive descent parsers to produce syntax- trees which are subsequently utilized in guiding the generation of assembly language code. although this remains a worthwhile approach, there is a host of possibilities for prolog usage in compiler construction. the primary aim of this paper is to demonstrate the use of prolog in parsing and compiling. a second, but equally important, goal of this paper is to show that prolog is a labor-saving tool in prototyping and implementing many non-numerical algorithms which arise in compiling, and whose description using prolog is not available in the literature. the paper discusses the use of unification and nondeterminism in compiler writing as well as means to bypass these (costly) features when they are deemed unnecessary. topics covered include bottom-up and top-down parsers, syntax-directed translation, grammar properties, parser generation, code generation, and optimizations. newly proposed features that are useful in compiler construction are also discussed. a knowledge of prolog is assumed. jacques cohen timothy j. hickey ada(*) vs. modula-2: a view from the trenches richard bielak making requirements measurable (tutorial) bashar nuseibeh suzanne robertson open distributed processing (panel) oscar nierstrasz alan snyder anthony s. williams william cook genoa: a customizable language- and front-end independent code analyzer premkumar t. devanbu higher order objects in pure object-oriented languages thomas kuhne a modest proposal: c++ resyntaxed ben werther damian conway a conceptual programming environment conceptual programming means having programmers work directly with their models of how their system is put together. it means providing them with the means for designing, coding and maintaining systems on a computer using the pictures and text they normally use on paper. an environment for conceptual programming requires flexibility to support a wide range of languages and graphics to support languages based on pictures. the garden system is a prototype conceptual programming environment that uses an object-oriented framework to meet these requirements. s. p. reiss on the need for a popular formal semantics david a. 
schmidt a practical method of documenting and verifying ada programs with packages we present a method of formal specification of ada programs containing packages. the method suggests concepts and guidelines useful for giving adequate informal documentation of packages by means of comments. the method depends on (1) the standard inductive assertion technique for subprograms, (2) the use of history sequences in assertions specifying the declaration and use of packages, and (3) the addition of three categories of specifications to ada package declarations: (a) visible specifications, (b) boundary specifications, (c) internal specifications. axioms and proof rules for the ada package constructs (declaration, instantiation, and function and procedure call) are given in terms of history sequences and package specifications. these enable us to construct formal proofs of the correctness of ada programs with packages. the axioms and proof rules are easy to implement in automated program checking systems. the use of history sequences in both informal documentation and formal specifications and proofs is illustrated by examples. david c. luckham wolfgang polak borrowed-virtual-time (bvt) scheduling: supporting latency-sensitive threads in a general-purpose scheduler systems need to run a larger and more diverse set of applications, from real-time to interactive to batch, on uniprocessor and multiprocessor platforms. however, most schedulers either do not address latency requirements or are specialized to complex real-time paradigms, limiting their applicability to general-purpose systems.in this paper, we present _borrowed-virtual-time (bvt) scheduling,_ showing that it provides low-latency for real-time and interactive applications yet weighted sharing of the cpu across applications according to system policy, even with thread failure at the real-time level, all with a low-overhead implementation on multiprocessors as well as uniprocessors. it makes minimal demands on application developers, and can be used with a reservation or admission control module for hard real-time applications. kenneth j. duda david r. cheriton verifying security protocols with brutus due to the rapid growth of the "internet" and the "world wide web" security has become a very important concern in the design and implementation of software systems. since security has become an important issue, the number of protocols in this domain has become very large. these protocols are very diverse in nature. if a software architect wants to deploy some of these protocols in a system, they have to be sure that the protocol has the right properties as dictated by the requirements of the system. in this article we present brutus, a tool for verifying properties of security protocols. this tool can be viewed as a special-purpose model checker for security protocols. we also present reduction techniques that make the tool efficient. experimental results are provided to demonstrate the efficiency of brutus. e. m. clarke s. jha w. marrero surveyor's forum: recovering an error david p. reed completeness in formal specification language design for process-control systems we show how uml class diagrams can be used to document design by refinement in the early design stages. this is illustrated by an example from the area of embedded real-time and hybrid systems. a precise semantics is given for the uml class diagrams by translation to the z schema calculus. e.-r. olderog a. p. 
ravn a process-oriented methodology for assessing and improving software trustworthiness a high-level, technical summary of the trusted software methodology (tsm) is provided in this paper. the trust principles and trust classes that comprise the tsm are presented and several engineering investigations and case studies surrounding the tsm are outlined. appendices are included that highlight important areas of the tsm. edward amoroso carol taylor john watson jonathan weiss toward better software automation adid jazaa the algorithm capture approach to ada transition d. wood accelerating apl programs with sac the paper investigates, how sac, a purely functional language based on c syntax, relates to apl in terms of expressiveness and run-time behavior. to do so, three different excerpts of real world apl programs are examined. it is shown that after defining the required apl primitives in sac, the example programs can be re-written in sac with an almost one-to-one correspondence. run-time comparisons between interpreting apl programs and compiled sac programs show that speedups due to compilation vary between 2 and 500 for three representative benchmark programs. clemens grelck sven-bodo scholz stop the presses corporate linux journal staff evaluating software engineering methods and tools part 6: identifying and scoring features barbara ann kitchenham lindsay jones hybrid run-time power management technique for real-time embedded system with voltage scalable processor this paper presents a new run-time power management technique for real-time embedded systems which consist of a voltage scalable processor and power controllable peripheral devices. we have observed that there exist significant trade-offs in terms of energy consumption between the dynamic power management (dpm) scheme and the dynamic voltage scaling (dvs) scheme over a wide range of system operating conditions. the proposed technique fully exploits workload- variation slack time by partitioning the task into several timeslots and shut down the unneeded peripheral device on timeslot-by-timeslot basis. through extensive simulations, the novelty and the usefulness of the proposed technique are demonstrated. minyoung kim soonhoi ha toward precise measurements using software normalization pu-lin yeh jin-cherng lin an application-independent concurrency skeleton in ada 95 matthew b. dwyer matthew j. craig eric runquist toward a non-atomic era: l-exclusion as a test case most of the research in concurrency control has been based on the existence of strong synchronization primitives such as test and set. following lamport, recent research promoting the use of weaker primitives, "safe" rather than "atomic," has resulted in construction of atomic registers from safe ones, in the belief that they would be useful tools for process synchronization. we argue that the properties provided by atomic operations may be too powerful, masking core difficulties of problems and leading to inefficiency. we therefore advocate a different approach, to skip the intermediate step of achieving atomicity, and solve problems directly from safe registers. though it has been shown that "test and set" cannot be implemented from safe registers, we show how to achieve a fair solution to l-exclusion, a classical concurrency control problem previously solved assuming a very powerful form of atomic "test and set". we do so using safe registers alone and without introducing atomicity. 
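as a rough illustration of the dpm-versus-dvs trade-off described in the hybrid run-time power management abstract above, the following back-of-the-envelope sketch compares the two schemes for one task with slack; the capacitance, voltages, device power, and the linear voltage/frequency scaling are invented assumptions for illustration, not the paper's model.

    # hypothetical energy comparison: dvs stretches the task to its deadline,
    # dpm runs at full speed and powers the peripheral down during the slack.
    # all constants are made-up assumptions, not taken from the paper.

    def cpu_energy(cycles, volt, freq):
        C = 1e-9                                  # assumed switched capacitance per cycle
        return cycles * C * volt ** 2

    def run_dvs(cycles, deadline, full_freq=1e9, full_volt=1.8, device_power=0.5):
        # lower frequency/voltage to just meet the deadline; peripherals stay on
        freq = cycles / deadline
        volt = full_volt * freq / full_freq       # assumed linear v-f scaling
        return cpu_energy(cycles, volt, freq) + device_power * deadline

    def run_dpm(cycles, deadline, full_freq=1e9, full_volt=1.8, device_power=0.5):
        # run at full speed, then shut the peripheral down for the remaining slack
        busy = cycles / full_freq
        return cpu_energy(cycles, full_volt, full_freq) + device_power * busy

    if __name__ == "__main__":
        cycles, deadline = 5e8, 1.0               # half the deadline is slack
        print("dvs energy:", run_dvs(cycles, deadline))
        print("dpm energy:", run_dpm(cycles, deadline))

with these invented numbers dvs wins; raising device_power (a peripheral-heavy system) tips the balance toward dpm, which is the kind of operating-condition dependence the abstract refers to.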
the solution is based on the construction of a simple novel non-atomic synchronization primitive. danny dolev eli gafni nir shavit lambda, the ultimate label or a simple optimizing compiler for scheme optimizing compilers for higher-order languages need not be terribly complex. the problems created by non-local, non-global variables can be eliminated by allocating all such variables in the heap. lambda lifting makes this practical by eliminating all non-local variables except for those that would have to be allocated in the heap anyway. the eliminated non-local variables become local variables that can be allocated in registers. since calls to known procedures are just gotos that pass arguments, lifted lambda expressions are just assembly language labels that have been augmented by a list of symbolic names for the registers that are live at that label. william d. clinger lars thomas hansen transparency and reflection in distributed systems the recursive composition of systems to form functionally equivalent transparently distributed systems is an important paradigm for constructing distributed systems. the extent to which such recursive transparency can be achieved depends crucially on the semantics and functionality offered by the underlying systems. it is therefore important that systems should be designed so that their functionality scales gracefully in a distributed environment.in order to build a transparent extension to a system, it is necessary to be able to intercept its basic operations and extend their meaning to a distributed environment. this requires the underlying system to have a clean structure with well-defined interfaces and the reflective capability necessary to intercept and extend operations crossing those interfaces. thus, both reflection and transparency are important aspects of the design of extensible distributed systems.it is possible to make transparent extensions to object-oriented systems built on top of micro-kernel architectures but the lack of reflective capabilities in the current generation of object-oriented programming languages can make this unnecessarily awkward. more research is required to develop languages whose computational model and implementation are a better match for the underlying platforms which support them. robert stroud improving the quality of compiler construction with object-oriented techniques david basanta gutierrez candida luengo diez raul izquierdo castanedo jose emilio labra gayo juan manuel cueva concurrent programming language - lisptalk chang li painless panes for smalltalk windows current windowing systems (i.e., macintosh, smalltalk) give the user flexibility in the layout of their computer display, but tend to discourage construction of new window types. glazier is a knowledge-based tool that allows users to construct and test novel or special purpose windows for smalltalk applications. the use of glazier does not require understanding smalltalk's windowing framework (goldberg, 1984; goldberg & robson, 1983). as a new window is specified, glazier automatically constructs the necessary smalltalk class, and methods (programs). windows are interactively specified in a glazier window - the user specifies type and location of panes through mouse motions. panes can contain text, bit-maps, lists, dials, gauges, or tables. the behavior of a pane is initially determined by glazier as a function of the pane type and related defaults. 
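the lambda-lifting step described in the "lambda, the ultimate label" abstract above can be pictured with a small invented example; python stands in for scheme here, and the functions are illustrations rather than code from the paper: a free variable of a nested function becomes an explicit parameter, so the helper can be hoisted to the top level and needs no closure.

    # before lifting: add_n captures the free variable n from its enclosing scope
    def scale_all(xs, n):
        def add_n(x):              # n is a non-local (free) variable here
            return x + n
        return [add_n(x) for x in xs]

    # after lifting: the free variable becomes a parameter, so the helper is a
    # top-level function ("just a label") with no environment to carry around
    def add_n_lifted(x, n):
        return x + n

    def scale_all_lifted(xs, n):
        return [add_n_lifted(x, n) for x in xs]

    assert scale_all([1, 2, 3], 10) == scale_all_lifted([1, 2, 3], 10) == [11, 12, 13]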
these default behaviors allow the window to operate, but do not always display the application information desired. in that case, the user can fix the window's behavior by further specification. such alterations require only knowledge of the application, not of the windowing system. glazier allows the prototyping and development of full-fledged smalltalk windows, and allows a flexibility that will change window usage in two ways. first, it will allow end users to construct special purpose windows for viewing data from an application in manners unanticipated by the system designers. second, system developers will be encouraged to prototype and evaluate many window configurations before settling on a final choice. both alternatives will result in windows that are more satisfying to the end-user. the makeup of smalltalk or macintosh-style windows is typically viewed as a fixed component of the computer interface. windows are provided to the end user by the system designer and cannot be customized. sadly, users are not allowed the flexibility of their window contents that windows allow for display contents. thus, the user is forced to use windows that may not precisely fit the needs for his or her use of the application. of course, the option of adding new windows is available to some users. a skilled smalltalk user can construct a special-purpose window in an afternoon. completion of such a task requires detailed knowledge of smalltalk's model- view-controller (mvc) paradigm (goldberg, 1984; goldberg & robson, 1983). this is perceived as an inconvenient, tedious task and is hardly something a novice smalltalk programmer can or should attempt. this paper discusses glazier, a tool that encapsulates knowledge about building smalltalk windows, and assists a user in developing new smalltalk windows. glazier works as an assistant, relieving the user from the burden of thinking about windowing details. instead, the user needs only to understand how to operate the data structures for the application being displayed by the window. window development now becomes a symbiotic process, glazier provides the knowledge on how to build the window and the user provides knowledge about how the application is used and how the window should behave. there are a numerous other systems to support interface development in a like manner. bass (1985) describes a system for developing vt100 style interfaces on top of base-level applications. the system supports a wide range of user needs, but cannot be configured dynamically by the user. the trillium system (henderson, 1986) supports prototyping of copying machine interfaces and allows designers to build and test control panels. other user interface management systems support construction of front ends for applications (hayes, szekely, & lerner, 1985). none of these systems, however, has provided the user or developer with a dynamic environment for building generic windows. glazier allows a user to build a wide range of window types, and use them as they are being built. this paper will discuss the operation of the glazier, the method for constructing windows, and finally the implications of this new window construction technique. james h. 
alexander reusable software components trudy levine static worst-case timing analysis of ada roderick chapman alan burns andy wellings take command a little devil called tr: here's a useful command for translating or deleting characters in a file hans de vreught building a timeline editor from prefab parts: the architecture of an object- oriented application this article describes interval, a software tool that allows authors to create dynamic timelines. it is one tool in intermedia, a framework developed at brown university's institute for research in information and scholarship (iris) that allows professors and students to create linked multimedia documents and encourages exploration, connectivity, and visualization of ideas. the system was written using an object-oriented extension to c, macapp, and a set of underlying building blocks, or functional groups of objects. this paper describes interval and discusses the architecture of the interval application, focusing on the design of the object-oriented architecture and on the use of appropriate building blocks. concluding sections evaluate object- oriented programming and outline future work. l. nancy garrett karen e. smith pointer analysis for programs with structures and casting suan hsi yong susan horwitz thomas reps the validation and implementation of real-time robotics systems using cleopatra object-oriented physically-correct specifications azer bestavros ada software engineering and optimized code gary frankel decentralized simulation of resource managers jeffrey m. jaffe a forth exception handler b. j. rodriguez estimating the distribution of software complexity within a program this paper proposes an approach to the characterization of complexity within computer software source texts. we estimate the information content of individual program tokens as the basis for a relative ordering of tokens by their 'uncertainty' or 'perculiarity' within the context of the program in which they reside. the analysis method used is in part an extension of software science methods. the information gained from the analysis highlights language usage anomalies and potential errors. this information may be useful in guiding software review activities. more theoretical work and experimental validation will be necessary before the analysis technique may be used in a productive environment. thomas g. moher requirements for a layered software architecture supporting cooperative multi- user interaction flavio de paoli andrea sosio a practical approach to semantic configuration management a configuration management (cm) tool is supposed to build a consistent software system following incremental changes to the system. the notion of consistency usually is purely syntactic, having to do with the sorts of properties analyzed by compilers. semantic consistency traditionally has been studied in the field of formal methods and has been considered an impractical goal for cm. although the semantic cm problem is undecidable, it is possible to obtain a structural approximation of the semantic effects of a change in a finite number of steps. our approximation technique is formalized in logic and is based on information- theoretic properties of programs. the method in its present form applies to many but not all software systems, and it is programming-language independent. 
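the abstract above on estimating the distribution of software complexity orders program tokens by their information content; a minimal sketch of that idea is to score each token by the self-information of its relative frequency within the program. the tokenizer, the sample snippet, and the frequency-based estimator below are assumptions for illustration; the paper's actual estimator, which extends software science measures, may differ.

    # rare tokens carry more information: -log2 of their relative frequency
    import math, re
    from collections import Counter

    def token_information(source):
        tokens = re.findall(r"[A-Za-z_]\w*|[^\sA-Za-z_]", source)
        counts = Counter(tokens)
        total = sum(counts.values())
        # bits of information per occurrence of each distinct token
        return {t: -math.log2(c / total) for t, c in counts.items()}

    if __name__ == "__main__":
        snippet = "if (x > 0) { y = x; } else { y = -x; } return y;"
        info = token_information(snippet)
        for tok, bits in sorted(info.items(), key=lambda kv: -kv[1])[:5]:
            print(f"{tok!r}: {bits:.2f} bits")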
to the best of our knowledge, the semantic cm problem has not been formalized previously in nonsemantic terms, and we believe that our simplified formulation offers the potential for considerably more powerful debugging and configuration management tools. m. moriconi optimal testing policies for software systems an important problem of practical concern is to determine how much testing should be done before a system is considered ready for release. this decision, of course, depends on the model for the software failure phenomenon and the criterion used for evaluating system readiness. in this paper, we first develop a cost model based on the time dependent failure rate function of goel and okumoto. next, we derive policies that yeild the optimal values of the level of test effort (b*) and software release time (t*). the sensitivity of the optimal solution is also numerically evaluated. amrit l. goel the modular structure of complex systems this paper discusses the organization of software that is inherently complex because there are very many arbitrary details that must be precisely right for the software to be correct. we show how the software design technique known as information hiding or abstraction can be supplemented by a hierarchically- structured document, which we call a module guide. the guide is intended to allow both designers and maintainers to identify easily the parts of the software that they must understand without reading irrelevant details about other parts of the software. the paper includes an extract from a software module guide to illustrate our proposals. d. l. parnas p. c. clements d. m. weiss improving the software reusability in object-oriented programming jingwen cheng linux apprentice linux security for beginners: mr. withers takes a look at basic security issues and how to solve them using available tools alex withers what happened to integrated environments? (panel session) 20 years ago, in 1979, a landmark community-wide process was launched to establish notional requirements for integrated software engineering environments. the resulting "stoneman" document was published in february 1980. bred of the software engineering research community and catalyzed by the government ada sponsor, this "integrated environment movement" branched out and was embraced widely in the software engineering community in the 1980's as a needed, achievable, centrist approach to accelerate the benefits of disciplined software engineering into mainstream practice. the case tool industry bloomed as products integrating lifecycle activities and artifacts emerged, and research evolved to environments integrated by support for emerging, maturing notions of software processes. yet, at the end of the 1990's, this movement appears to have virtually died, and more and more production software organizations are instead (or again) using old-fashioned stand-alone development tools and struggling to match up tools and their outputs and inputs to do software engineering. the purpose of this panel is to explore the reasons why the integrated environment movement has retreated, whether the recent tools and methods are or are not achieving the goals of integrated environments, whether those goals have been superceded by other goals better served without integrated environments, and what these examinations indicate about future requirements and research needs. hal hart barry boehm s. 
tucker taft tony wasserman directions for user interface management systems research m green experimental comparison of software metrics (abstract only) software complexity metrics attempt to objectively measure the difficulty involved in creating and maintaining a program. this experiment will compare five complexity metrics as measures for reading comprehension of programs. the metrics compared are halstead's effort, mccabe's cyclomatic, oviedo's program complexity, gilb's logical complexity, and data dependency tree count. the metrics will be applied to independent groups of four functionally equivalent pl/i programs. these results will be compared to the subjective ranking of a control group of competent programmers after agreement among the experts has been determined. these results will be compared using the friedman two-way analysis of variance by ranks and a multiple comparison procedure. the friedman two-way analysis of variance by ranks determines if a significant difference exists between metrics. once this has been established the multiple comparison procedure will be used to determine how the metrics differ. beth clark why ada is not just another programming language since there have been hundreds of high level languages developed over the past twenty years, many people are asking why there is so much fuss about ada. the question is frequently asked---why isn't ada just another programming language. although ada was developed to meet department of defense requirements, those requirements were really for embedded computer systems i.e. those in which the computer is integrated with additional hardware. such systems certainly exist in the nonmilitary environment e.g. process control, microwave ovens, and so ada is applicable to many cases which have no connection with the military. in addition to being a programming language, ada provides support for software engineering concepts, as well as a programming support environment, and it is this combination which is unique. ada is unique non-technically for social, economic, and political reasons which relate to the way in which it was developed and the way in which it is being viewed by many people and organizations. ada is unique technically because of its support for the concept of software components, its excellent blend of modern useful features, and its support for the production of very large software systems. jean e. sammet engines build process abstractions engines are a new programming language abstraction for timed preemption. in conjunction with first class continuations, engines allow the language to be extended with a time-sharing implementation of process abstraction facilities. to illustrate engine programming techniques, we implement a round-robin process scheduler. the importance of simple but powerful primitives such as engines is discussed. christopher t. haynes daniel p. friedman feature inversion: a practice on language versions determination k. c. wong cooking with linux: tasty kde desktop themes marcel gagne palette: an extensible visual editor eric j. golin steven danz susan larison diana miller-karlow implementing functional languages on a combinator-based reduction machine s. m. sarwar s. j. hahn j. a. davis aplo: a simple modern apl a. graham logic enhancement: a method for extending logic programming languages languages based on first order logic have rapidly gained popularity as practical programming languages in several fields. 
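the engines abstract above builds a round-robin scheduler from timed preemption plus first-class continuations; python has neither, but generators give a rough analogue in which "fuel" is counted in steps rather than real time and a paused generator plays the role of the saved continuation. the worker names and fuel counts below are invented; this sketches the scheduling pattern, not the scheme construct itself.

    from collections import deque

    def engine(gen, fuel):
        """run gen for at most `fuel` steps; return ('done', value) or ('expired', gen)."""
        try:
            for _ in range(fuel):
                next(gen)
            return ("expired", gen)       # preempted: the paused generator is the "continuation"
        except StopIteration as stop:
            return ("done", stop.value)

    def round_robin(tasks, fuel=3):
        queue = deque(tasks)
        while queue:
            status, rest = engine(queue.popleft(), fuel)
            if status == "expired":
                queue.append(rest)        # re-enqueue the unfinished task

    def worker(name, steps):
        for i in range(steps):
            print(name, "step", i)
            yield

    if __name__ == "__main__":
        round_robin([worker("a", 5), worker("b", 2), worker("c", 4)])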
with experience, several problems with the most popular such language, prolog, have come to light. several proposals for changes and extensions to prolog have been made, but proposals have been expensive to build and evaluate. an inexpensive method for extension is described that relies on preprocessors and checkers written in prolog itself. the method is efficient and applies to any logic programming language that permits manipulation of programs as objects. several extensions have been built, including modules, macros, functional notation, repetition notation, debugging, and profiling; the first three are explored in detail. paul r eggert d val schorre static interpretation of modules martin elsman upfront corporate linux journal staff analysis of steady-state segment storage utilizations in a log-structured file system with least-utilized segment cleaning the steady-state distribution of storage utilizations of segments in a log-structured file system with least- utilized (greedy) segment cleaning is found using analytic methods. furthermore, it is shown that as the number of segments increases, this distribution approaches a limiting continuous distribution, which is also derived. these results could be useful for preliminary performance analysis of lfs-type system designs prior to the development of detailed simulation models or actual implementation. john t. robinson expressional loops this paper proposes an expressional loop notation (xloop) based on the ideas described in [16,17] which makes it practical to express loops as compositions of functions. the primary benefit of xloop is that it brings the powerful metaphor of expressions and decomposability to bear on the domain of loops. wherever this metaphor can be applied, it makes algorithms much easier to construct, understand, and modify. xloop applies the expressional metaphor to loops by introducing a new data type series. a series is an ordered one dimensional sequence of data objects. series are used to represent intermediate results during a computation. algorithms which would typically be rendered as iterative loops are instead represented as compositions of functions operating on series. for example, the program sum_vect computes the sum of the elements in a vector of integers by using enum_vector to create a series of the integers in the vector and then using sum to compute their sum. richard c. waters adding concurrency to a statically type-safe object-oriented programming language p. a. buhr g. ditchfield c. r. zarnke best of technical support corporate linux journal staff best of technical support corporate linux journal staff improving fast mutual exclusion eugene styer a study of dead data members in c++ applications peter f. sweeney frank tip hope: an experimental applicative language an applicative language called hope is described and discussed. the underlying goal of the design and implementation effort was to produce a very simple programming language which encourages the construction of clear and manipulable programs. hope does not include an assignment statement; this is felt to be an important simplification. the user may freely define his own data types, without the need to devise a complicated encoding in terms of low- level types. the language is very strongly typed, and as implemented it incorporates a typechecker which handles polymorphic types and overloaded operators. 
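the sum_vect example described in the expressional loops abstract above can be approximated with python generators standing in for xloop series: the loop is written as a composition of series functions instead of an explicit iteration. the names mirror the abstract's example, but the code is an illustration, not waters' notation.

    def enum_vector(vec):
        # produce a series (lazy sequence) of the elements of vec
        for x in vec:
            yield x

    def series_sum(series):
        # reduce a series to the sum of its elements
        total = 0
        for x in series:
            total += x
        return total

    def sum_vect(vec):
        # the loop is expressed as a composition of series functions
        return series_sum(enum_vector(vec))

    assert sum_vect([3, 1, 4, 1, 5]) == 14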
functions are defined by a set of recursion equations; the left- hand side of each equation includes a pattern used to determine which equation to use for a given argument. the availability of arbitrary higher-order types allows functions to be defined which 'package' recursion. lazily-evaluated lists are provided, allowing the use of infinite lists which could be used to provide interactive input/output and concurrency. hope also includes a simple modularisation facility which may be used to protect the implementation of an abstract data type. r. m. burstall d. b. macqueen d. t. sannella language translators: a reasoned synopsis the authors present a reasoned synopsis of the state of the art of compiler techniques. a classification scheme is employed that divides compilers according to two layers. the first layer is the input layer, dealing with the kind of language that the compiler will find. the second layer is the architectural layer, that will define the compiler's internals and the method employed to produce the output. luigi benedicenti tullio vernazza object-oriented programming werner w. schulz corrigendum: ``external representations of objects of user-defined type'' peter j. l. wallis practice of quality modelling and measurement on software life-cycle m. hirayama h. sato j. tsuda the ada compiler validation capability the ada compiler validation capability consists of tests, tools, procedures, and documentation designed to enforce (and encourage) development of compilers that conform to the ada language standard. in this paper, we discuss our approach to solving the principal problems faced in developing and using such a capability. john b. goodenough a bounded first-in, first-enabled solution to the _l_-exclusion problem this article presents a solution to thefirst-come, first-enabled -exclusionproblem of fischer et al. [1979]. unlike their solution, thissolution does not use powerful read-modify-write synchronizationprimitives and requires only bounded shared memory. use of theconcurrent timestamp system of dolevand shavir [1989] is key in solving the problem within bounded sharedmemory. yehuda afek danny dolev eli gafni michael merritt nir shavit a glimpse of icon a language for the rest of us: this article gives a quick introduction to the programming language icon, developed at the university of arizona clinton jeffery shamim mohamed adaptable components grady h. campbell adverbial programming m. berry ada 9x: a technical summary s. tucker taft interpartition communication with shared active packages pascal ledru sajjan g. shiva a technique for creating small fast compiler frontends a technique using minimal perfect hash functions to generate small fast table driven frontends is described. a parser for the pascal language generated by this method is then presented. thomas j. sager standards development preference survey leonard l. tripp the design of an interactive compiler for optimizing microprograms microprogramming has traditionally been done in assembly language because of the perceived need for fast execution; compiler technology does not yet exist for discovering and performing many of the clever tricks of an experienced microprogrammer. unfortunately, programming at the machine- instruction level is both tedious and error-prone. 
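the abstract above on small fast compiler frontends uses minimal perfect hash functions for table-driven lookup; the following toy sketch searches for a salt that hashes a small keyword set onto 0..n-1 with no collisions, giving one-probe keyword recognition. the keyword set and the md5-based hash family are assumptions for illustration only; the paper's actual construction (and its pascal keyword table) are different.

    import hashlib

    KEYWORDS = ["if", "then", "else", "while", "do", "begin", "end", "var"]

    def make_hash(words):
        n = len(words)
        for salt in range(200000):
            def h(w, s=str(salt)):
                return hashlib.md5((s + w).encode()).digest()[0] % n
            if len({h(w) for w in words}) == n:   # bijective onto 0..n-1
                return h
        raise RuntimeError("no salt found; enlarge the search range")

    h = make_hash(KEYWORDS)
    table = [None] * len(KEYWORDS)
    for w in KEYWORDS:
        table[h(w)] = w

    def is_keyword(ident):
        # one hash and one string comparison decide keyword-ness
        return table[h(ident)] == ident

    assert all(is_keyword(w) for w in KEYWORDS)
    assert not is_keyword("foo")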
a possible compromise between these two approaches is that of an interactive compiler, where the programmer guides the crafting of critical data structures and sections of code, while the compiler ensures that the resulting code has the same semantics as the original program, generates code where speed is not critical, and performs bookkeeping tasks. we are in the process of implementing a prototype of such a system. this paper describes the system being developed and discusses some of the key design issues. s. r. vegdahl software review paul frenger a generic customizable framework for inverse local consistency gerard verfaillie david martinez christian bessière a proposed standard set of principles for object-oriented development david c. rine confessions of a used-program salesman: lessons learned software reuse is the second oldest programming profession. ever since the first program logic board was wired, people have been looking for ways of saving time and money by building upon other's efforts and not "not re- inventing any wheels." this article summarizes the lessons i have learned as used-program salesman. using this analogy, i will examine efforts made to institutionalize software reuse. will tracz investigations of the software testing coupling effect fault-based testing strategies test software by focusing on specific, common types of faults. the coupling effect hypothesizes that test data sets that detect simple types of faults are sensitive enough to detect more complex types of faults. this paper describes empirical investigations into the coupling effect over a specific class of software faults. all of the results from this investigation support the validity of the coupling effect. the major conclusion from this investigation is the fact that by explicitly testing for simple faults, we are also implicitly testing for more complicated faults, giving us confidence that fault-based testing is an effective way to test software. a. jefferson offutt verification of andf components this paper presents validation work done on andf at the open software foundation research institute. the ultimate andf scenario splits a compiler into two separate components (producer/installer). this changes the compiler validation process as the two components have to be validated separately. this paper presents the originality and the difficulties of such an approach and summarizes the status of two pieces of software to which the osf-ri has contributed: the andf validation suite and the general andf interpreter. frederic broustaut christian fabre francois de ferriere eric ivanov mauro fiorentini use of petri net invariants to detect static deadlocks in ada programs b. shenker t. murata s. m. shatz experience with causally and totally ordered communication support: a cautionary tale robert cooper the objectworld, a classless, object-based, visual programming language the objectworld is an experimental programming system combining the concepts of object-oriented programming and visual programming to enable software reuse. new objects are assembled from prefabricated ones. the concepts of object-oriented programming are characterized and the objectworld is classified according to these concepts. two examples demonstrate programming in the objectworld. franz penz thomas wollinger a systematic approach to multiple inheritance implementation j. 
templ granularity of modules in object-based concurrent systems we examine the interaction of abstraction, distribution, and synchronization in determining the granularity of modules in object-based concurrent systems. the relation between linearizability and serializability as correctness criteria for processes with internal concurrency is explored. module granularity in object-based programming languages depends on the following factors: the abstraction boundary (unit of encapsulation) the information-hiding interface a module presents to its users or clients the distribution boundary (unit of name space) the boundary of the name space accessible from within a module the synchronization boundary (unit of concurrency) the boundary at which threads entering a module synchronize with ongoing activities in the module the abstraction boundary specifies the unit of encapsulation and is the fundamental determinant of object granularity. it determines the interface between an object and its clients and the form in which resources provided by an object to its clients may be invoked. the distribution boundary is the boundary of accessible names visible from within an object. the abstraction boundary is encountered by a user of a module looking inward while the distribution boundary is encountered by an agent within a module looking outward. the distribution boundary may be coarser than the abstraction boundary, as in block-structure languages, or finer, when a large abstraction (say in airline reservation system) is implemented by distributed components. when the abstraction and distribution boundaries coincide, then modules can communicate only by message passing. we say a module is distributed if its distribution boundary coincides with its abstraction boundary. distribution is defined independently of the notion of concurrency in terms of inaccessibility of nonlocal names. concurrency may likewise be defined independently of distribution in terms of threads and thread synchronization. threads may in general interact in an undisciplined fashion in a single large namespace with no notion of abstraction or distribution. object-based programming requires a disciplined use of threads for objects with a well- defined notion of abstraction and distribution. concurrent objects (processes) require incoming threads associated with messages or remote procedure calls to synchronize with ongoing activities within a process. if synchronization can occur only at the abstraction boundary of processes, then processes are internally sequential and our model is one of communicating sequential processes. in this case, the unit of abstraction coincides with the unit of concurrency. communicating sequential processes whose name space boundary coincides with their abstraction boundary are called distributed sequential processes. distributed sequential processes have the same interface for abstraction, distribution, and concurrency, as illustrated in figure 1. distributed sequential processes are attractively simple, both conceptually and in terms of their implementation. however, their expressive power may be inadequate for certain applications. abstractions that encapsulate a large data structure, such as the database of an airline reservation system, may require fine- grained concurrency within an abstraction both to model the application accurately and for purposes of efficiency. 
conversely, a network of sequential computers with multiple objects at each node is naturally modeled by a unit of concurrency coarser than its unit of abstraction. in order to understand design alternatives for internal process concurrency we distinguish between sequential processes with a single thread, quasi- concurrent processes with at most one active thread, and concurrent processes with multiple threads, as illustrated in figure 2. sequential process: a process that has just one thread. quasi-concurrent process: a process that has at most one active thread. concurrent process: a process that may have multiple active threads. languages supporting sequential processes include ada [dod], csp [ho1], and nil [sy]. messages to perform operations are queued at the process interface until the already executing process is ready to accept them. rendezvous synchronization joins an incoming thread with the active thread for purposes of execution and then separates the threads so that invoking and invoked processes may again proceed in parallel. languages supporting quasi-concurrent processes include abcl/1 [ybs] and concurrent smalltalk [yt]. monitors [ho, ha] are quasi-concurrent processes which allow threads to be suspended while waiting for a condition to be fulfilled and to be resumed when the condition is satisfied. monitors differ from sequential processes in having internal "condition queues" of suspended threads as well as entry queues of threads at the interface. a waiting thread can become active only if the currently executing thread terminates or is suspended. suspended threads usually have priority over incoming threads so that incoming threads can execute only if there is no suspended thread ready to resume execution. the process is the synchronization boundary for incoming threads, but suspended threads are synchronized at a finer level of granularity, namely at the statement level. argus [ls], clouds [lb], and actor languages [ag] have fine-grained concurrent processes with multiple threads. concurrent processes do not require incoming threads to be synchronized at the process boundary. synchronization can be delayed till a thread attempts to access shared data in critical regions (atomic objects in argus). at this time the thread may be suspended until the shared data can safely be accessed. concurrent processes support fine-grained concurrency control that permits synchronization to be delayed from process entry time to the time of entry to critical regions. both quasi-concurrent and fully-concurrent processes require fine-grained synchronization within the abstraction boundary. quasi-concurrent processes support fine-grained synchronization at the statement level by means of conditional wait statements that automatically resume when their condition becomes true. concurrent processes must perform both synchronization and data protection within processes and may use locking to protect data within a process from concurrent access by multiple threads [ls]. sequential and quasi-concurrent processes do not need to protect local data from concurrent access since at most one thread can be active within a process. however, quasi-concurrent processes may leave data in an unstable state when a process is suspended, and may require transaction protocols [bg] to ensure correctness. 
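the quasi-concurrent (monitor) discussion above, with at most one active thread and internal condition queues of suspended threads, corresponds to a familiar pattern; the bounded buffer below is a generic python illustration of that pattern using threading.Condition, not code from any of the languages cited, and python's condition variables only approximate classical monitor signalling.

    import threading
    from collections import deque

    class BoundedBuffer:
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = deque()
            self.lock = threading.Condition()      # monitor lock plus condition queue

        def put(self, item):
            with self.lock:                        # enter the monitor
                while len(self.items) >= self.capacity:
                    self.lock.wait()               # suspend; another thread may enter
                self.items.append(item)
                self.lock.notify_all()             # resume threads waiting for data

        def get(self):
            with self.lock:
                while not self.items:
                    self.lock.wait()
                item = self.items.popleft()
                self.lock.notify_all()             # resume threads waiting for space
                return item

    if __name__ == "__main__":
        buf = BoundedBuffer(2)
        results = []
        consumer = threading.Thread(target=lambda: results.extend(buf.get() for _ in range(5)))
        consumer.start()
        for i in range(5):
            buf.put(i)
        consumer.join()
        print(results)   # [0, 1, 2, 3, 4]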
languages like abcl/1 and concurrent smalltalk, and monitor languages which support quasi-concurrent processes without transactions, cannot protect local data from being modified while a thread executing a transaction on that data is suspended. correctness for concurrent processes may be defined in terms of correctness for sequential processes in terms of linearizability [hw]. a computation within a concurrent process is said to be linearizable if it is equivalent to a sequential computation in which methods are executed in an instantaneous non-overlapping order within their interval of invocation and response. a linearized computation for a typed process is acceptable if it is consistent with the requirements (axioms) of the data type. for example, a computation within a concurrent queue is acceptable if it can be linearized so that queue elements are enqueued before they are dequeued and dequeued in a first-in- first-out order. linearizability and acceptability are locally definable. a computation on a collection of concurrent objects is acceptable if there is an acceptable linearization for each of the objects in the collection. a program is correct if all its potential computations are acceptable. linearizability allows correctness to be specified for each type in terms of its sequential properties (axioms). but it does not allow dependency relations between objects of the kind that arise in transactions to be specified, such as the dependence of two account objects between which funds are transferred. correctness of concurrently executing transactions may be defined by serializability, namely by the requirement that computations with concurrently executing transactions are equivalent to some serial order of execution of the transactions. serializability flattens a collection of concurrently executing transactions into some serial order of execution. this is a much more radical restructuring of the computation than linearizability which freely allows concurrency between objects since this has no impact on local correctness. linearizability and serializability have in common the idea of reasoning about concurrent computations by postulating their equivalence to some sequential computation. linearizability is less ambitious in requiring only sequential equivalence for concurrently executing methods of individual objects and is able to capture type correctness. serializability aims to capture exclusive access to shared resources by sequences of methods and requires a much stronger form of sequential equivalence that treats sequences of methods rather than individual methods as the units to be serialized. objects and transactions are two complementary structuring mechanisms for concurrent systems. objects partition the system state into encapsulated components with characteristic behavior determined by sets of applicable operations. transactions partition system actions into sequences with exclusive access to shared objects during the complete duration of their execution. object-based transaction systems support structuring of applications in terms of both objects and transactions. concurrent object-based languages such as abcl/1 and concurrent smalltalk do not contain a notion of transactions while transaction systems are generally not object-based. in order to develop object-based transaction systems, both the notion of object- based encapsulation and of transaction-based atomicity must be supported in a single system. 
the complementary concepts of linearizability and serializability reduce reasoning about concurrent systems to reasoning about equivalent sequential systems. modularity in object-based transaction systems differs from the traditional object-based notion of modularity. it encompasses temporal modularity associated with atomic actions as well as state-based modularity. module granularity in such systems is determined by temporal boundaries of atomic actions as well as by spatial boundaries associated with abstraction, distribution, and synchronization. the space-time interaction of spatial and temporal module granularity is the subject of current research. peter wegner faster combinator reduction using stock hardware a. c. norman decoupling synchronization from local control for efficient symbolic model checking of statecharts william chan richard j. anderson paul beame david h. jones david notkin william e. warner product review: bru 2000 for x11 garrett smith a note on the vector c language kuo-cheng li new language features and other language issues (session summary) andy wellings long term file migration: development and evaluation of algorithms the steady increase in the power and complexity of modern computer systems has encouraged the implementation of automatic file migration systems which move files dynamically between mass storage devices and disk in response to user reference patterns. using information describing 13 months of user disk data set file references, we develop and evaluate (replacement) algorithms for the selection of files to be moved from disk to mass storage. our approach is general and demonstrates a general methodology for this type of problem. we find that algorithms based on both the file size and the time since the file was last used work well. the best realizable algorithms tested condition on the empirical distribution of the times between file references. acceptable results are also obtained by selecting for replacement that file whose size times time to most recent reference is maximal. comparisons are made with a number of standard algorithms developed for paging, such as working set, vmin, and gopt. sufficient information (parameter values, fitted equations) is provided so that our algorithms may be easily implemented on other systems. alan jay smith memory allocation with lazy fits dynamic memory allocation is an important part of modern programming languages. it is important that it be done fast without wasting too much memory. memory allocation using lazy fits is introduced, where pointer increments, which is very fast, is used as the primary allocation method and where conventional fits such as best fit or first fit are used as backup. some experimental results showing how lazy fits might perform are shown, and shows that the approach has the potential to be useful in actual systems. yoo chung soo-mook moon yesterday, my program worked. today, it does not. why? imagine some program and a number of changes. if none of these changes is applied ("yesterday"), the program works. if all changes are applied ("today"), the program does not work. which change is responsible for the failure? we present an efficient algorithm that determines the minimal set of failure-inducing changes. our delta debugging prototype tracked down a single failure-inducing change from 178,000 changed gdb lines within a few hours. 
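the delta debugging abstract just above searches for a minimal set of failure-inducing changes; a cut-down sketch of that idea is shown below, shrinking a change set as long as a test still fails on the remainder. it is a simplified illustration, not zeller's full ddmin algorithm, and the example "changes" and failure condition are invented.

    def simplify(changes, fails):
        """return a smaller subset of changes for which `fails` still returns True."""
        n = 2
        while len(changes) >= 2:
            chunk = max(1, len(changes) // n)
            subsets = [changes[i:i + chunk] for i in range(0, len(changes), chunk)]
            for subset in subsets:
                rest = [c for c in changes if c not in subset]
                if fails(rest):              # the failure survives without this chunk
                    changes = rest
                    n = max(n - 1, 2)
                    break
            else:
                if n >= len(changes):        # already at single-change granularity
                    break
                n = min(n * 2, len(changes))
        return changes

    if __name__ == "__main__":
        changes = list(range(1, 21))
        fails = lambda applied: 7 in applied and 13 in applied   # hypothetical failure
        print(simplify(changes, fails))      # -> [7, 13]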
andreas zeller there can be only one: a summary of the unix standardization movement vernard martin ada in concert in a concert, it is truly marvelous how a diverse group of musicians playing an assortment of musical instruments come together to produce beautiful music. a point that is frequently made by those familiar with the new ada standard, referred to here as ada(95), is how the features in ada(95) come together to enhance the software development process. this paper contains several examples that illustrate how the new features in ada(95) dramatically improve the correlation between the design of a solution and its eventual implementation at the source code level when compared to ada(83). the example, binary tree packaging, illustrates the design and implementation of recursive data structures. a data structure is recursive if the structure contains one or more substructures of the type being defined. as a result, recursive data structures lend themselves to processing with recursive algorithms. lists, binary trees, and n-ary trees are examples of data structures that may be defined recursively. the example presented here is a package that implements binary trees using a recursive paradigm. the paper compares an ada 83 solution with a solution using ada 95. the paper focuses on how ada 95 allows us to create a solution that more closely mirrors the specifications for the recursive data structure with improved time, space, and coding efficiency over an ada 83 implementation. in a sense, this paper provides one illustration of how the '95 standard improves ada. jack beidler correctness is congruent with quality c. m. lott the ot idea life-cycle (panel): from eureka! to shrink wrap laura hill bruce anderson adele goldberg gregor kiczales colin scott kevin tyson using linux in embedded and real-time systems when you need an embedded operating system, linux is a good place to start. here's why. rick lehrbaum automated method-extraction refactoring by using block-based slicing refactoring improves the design of existing code but is not simple to do by hand. this paper proposes a mechanism that automatically refactors methods of object-oriented programs by using program slicing. to restructure a method without changing its observable behavior, the mechanism uses block-based slicing that does not extract the fragments of code from the whole program but from the region consisting of some consecutive basic blocks of the program. a refactoring tool implementing the mechanism constructs a new method that contains the extracted code and re-forms the source method. with this tool, a programmer indicates only a variable of interest in the code that he/she wants to refactor and then selects a suitable method from among the candidates created by the tool. the programmer does not have to test the refactored code since the mechanism is based on data- and control-flow analysis. thus the tool enables programmers to avoid manual refactoring whose process is error-prone and time-consuming. katsuhisa maruyama linux system initialization david a. bandel exception handling in fortran john reid research directions in software composition oscar nierstrasz theo dirk meijler covering the life cycle with ada: ada all the way george w. cherry experience with evolutionary prototyping in a large software project s hekmatpour compiler-based i/o prefetching for out-of-core applications current operating systems offer poor performance when a numeric application's working set does not fit in main memory.
as a result, programmers who wish to solve "out-of-core" problems efficiently are typically faced with the onerous task of rewriting an application to use explicit i/o operations (e.g., read/write). in this paper, we propose and evaluate a fully automatic technique which liberates the programmer from this task, provides high performance, and requires only minimal changes to current operating systems. in our scheme the compiler provides the crucial information on future access patterns without burdening the programmer; the operating system supports nonbinding prefetch and release hints for managing i/o; and the operating systems cooperates with a run-time layer to accelerate performance by adapting to dynamic behavior and minimizing prefetch overhead. this approach maintains the abstraction of unlimited virtual memory for the programmer, gives the compiler the flexibility to aggressively insert prefetches ahead of references, and gives the operating system the flexibility to arbitrate between the competing resource demands of multiple applications. we implemented our compiler analysis within the suif compiler, and used it to target implementations of our run-time and os support on both research and commercial systems (hurricane and irix 6.5, respectively). our experimental results show large performance gains for out-of-core scientific applications on both systems: more than 50% of the i/o stall time has been eliminated in most cases, thus translating into overall speedups of roughly twofold in many cases. angela demke brown todd c. mowry orran krieger the closure statement: a programming language construct allowing ultraconcurrent execution martin rem fast instruction cache performance evaluation using compile-time analysis david b. whalley an abstract type for statistics collection in simula although the use of abstract types has been widely advocated as a specification and implementation technique, their use has often been associated with programming languages that are not widely available, and examples published to date are rarely taken from actual applications. simula is a widely available language that supports the use of abstract types. the purposes of this paper are (1) to demonstrate the application of the concepts of data abstraction to a common problem; (2) to demonstrate the use of data abstraction in a widely available language; and (3) to provide a portable facility for statistics collection that may make the use of simula more attractive. a discussion of the background of and requirements for an abstract type for statistics collection is presented, followed by a specification for the type using traces. a simula implementation, with examples of its use, is given. finally, implementation of the abstract type in other languages is discussed. carl e. landwehr on distribution of coordinated atomic actions the concept of coordinated atomic (ca) actions was introduced about two years ago. since then this research has addressed many new problems and, in particular, problems of ca action distribution and implementation. recently we have gained some experience in implementing different schemes in ada 95 and java. our intention within this paper is to discuss how distributed ca action schemes can be realised. in particular, we outline different ways of action component distribution, trade-offs, applications for which these schemes are applicable. 
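the simula abstract above motivates an abstract type for statistics collection whose clients only tally observations and query summaries; the python class below is an analogue of that interface, not the paper's simula implementation, and the welford-style online accumulation is my own choice of hidden representation.

    import math

    class StatCounter:
        def __init__(self):
            self._n = 0
            self._mean = 0.0
            self._m2 = 0.0        # running sum of squared deviations from the mean

        def tally(self, x):
            self._n += 1
            delta = x - self._mean
            self._mean += delta / self._n
            self._m2 += delta * (x - self._mean)

        def count(self):
            return self._n

        def mean(self):
            return self._mean

        def stddev(self):
            return math.sqrt(self._m2 / (self._n - 1)) if self._n > 1 else 0.0

    if __name__ == "__main__":
        response_times = StatCounter()
        for t in [12.0, 9.5, 14.2, 11.1, 10.3]:
            response_times.tally(t)
        print(response_times.count(), response_times.mean(), response_times.stddev())

the point of the abstraction is that callers see only tally/count/mean/stddev, so the accumulation method can change without touching client code.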
we discuss a wide range of schemes (some of them have not yet been implemented) based on a classification of various approaches to ca action distribution; to do this we analyse all possible ways of different action component distribution. we believe that this general discussion should help to better understand the current state of ca action implementation and is important for future research in ca actions. a. romanovsky a. f. zorzo object-oriented experiences mohamed fayad dennis de champeaux log files: an extended file service exploiting write-once storage a log service provides efficient storage and retrieval of data that is written sequentially (append-only) and not subsequently modified. application programs and subsystems use log services for recovery, to record security audit trails, and for performance monitoring. ideally, a log service should accommodate very large, long-lived logs, and provide efficient retrieval and low space overhead. in this paper, we describe the design and implementation of the clio log service. clio provides the abstraction of log files: readable, append-only files that are accessed in the same way as conventional files. the underlying storage medium is required only to be append-only; more general types of write access are not necessary. we show how log files can be implemented efficiently and robustly on top of such storage media---in particular, write-once optical disk. in addition, we describe a general application software storage architecture that makes use of log files. this work was supported in part by the defense advanced research projects agency under contracts n00039-84-c-0211 and n00039-86-k-0431, by national science foundation grant dcr-83-52048, and by digital equipment corporation, bell-northern research and at&t; information systems. r. finlayson d. cheriton xil and yil: the intermediate languages of tobey typically, the choice of intermediate representation by a particular compiler implementation seeks to address a specific goal. the intermediate language of the tobey compilers, xil, was initially chosen to facilitate the production of highly optimal scalar code, yet, it was easily extended to a higher level form yil in order to support a new suite of optimizations which in most existing compilers are done at the level of source to source translation. in this paper we will discuss those design features of xil that were important factors in the production of optimal scalar code. in addition we will demonstrate how the strength of the yil abstraction lay in its ability to access the underlying low level representation. kevin o'brien kathryn m. o'brien martin hopkins arvin shepherd ron unrau fortran 8x draft loren p. meissner software engineering for user interfaces the discipline of software engineering can be extended in a natural way to deal with the issues raised by a systematic approach to the design of human- machine interfaces. two main points are made: that the user should be treated as part of the system being designed, and that projects should be organized to take account of the current (small) state of a priori knowledge about how to design interfaces. because the principles of good user-interface design are not yet well specified (and not yet known), interfaces should be developed through an iterative process. this means that it is essential to develop tools for evaluation and debugging of the interface, much the same way as tools have been developed for the evaluation and debugging of program code. 
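the clio abstract above describes log files as readable, append-only files; a minimal sketch of such an interface is shown below, where append returns the offset of a length-prefixed record and read retrieves it, and nothing is ever overwritten. the class and method names are hypothetical, not clio's api.

    import os

    class LogFile:
        def __init__(self, path):
            self._f = open(path, "a+b")       # append-only writes, random-access reads

        def append(self, record: bytes) -> int:
            """append one length-prefixed record and return its offset."""
            self._f.seek(0, os.SEEK_END)
            offset = self._f.tell()
            self._f.write(len(record).to_bytes(4, "big") + record)
            self._f.flush()
            return offset

        def read(self, offset: int) -> bytes:
            self._f.seek(offset)
            length = int.from_bytes(self._f.read(4), "big")
            return self._f.read(length)

        def close(self):
            self._f.close()

    if __name__ == "__main__":
        log = LogFile("audit.log")            # hypothetical file name
        pos = log.append(b"user alice logged in")
        print(log.read(pos))
        log.close()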
we need to develop methods of detecting bugs in the interface and of diagnosing their cause. the tools for testing interfaces should include measures of interface performance, acceptance tests, and benchmarks. developing useful measures is a non-trivial task, but a start can and should be made. stephen w. draper donald a. norman apl editor features for productivity and quality this paper describes features for a full screen apl program-and-data editor which provide capability above and beyond basic text editing functions. these features result from the integration of numerous separate utility operations into the editor. the combined processes enable the apl programmer to be more productive while at the same time generate a higher quality result. one such implementation, editfs, modelled on spf, is the basis of this description. timothy p. holls suggestion for a parametrized class model jose de oliveira guimaraes completeness criteria for testing elementary program functions program testing metrics are based on criteria for measuring the completeness of a set of program tests. branch testing measures the percentage of program branches that are traversed during a set of tests. mutation testing measures the ability of a set of tests to distinguish a program from similar programs. a criterion for test completeness is introduced in this paper which measures the ability of a set of tests to distinguish between functions which are implemented by parts of programs. the criterion is applied to functions which are implemented by different kinds of programming language statements. it is more effective than branch testing and incorporates some of the advantages of mutation testing. its effectiveness can be discussed formally and it can be described as part of an integrated approach to testing. a tool can be used to implement the method. william e. howden consistency management for complex applications peri tarr lori a. clarke preliminary results with the initial implementation of qlisp qlisp, a dialect, of common lisp, has been proposed as a multiprocessing programming language which is suitable for studying the styles of parallel programming at the medium-grain level. an initial version of qlisp has been implemented on a multiprocessor and a number of experiments with it conducted. this paper describes the implementation and reports on some of the experiments. ron goldman richard gabriel a language-based design for portable data files c. burch enforcing trace properties by program transformation we propose an automatic method to enforce trace properties on programs. the programmer specifies the property separately from the program; a program transformer takes the program and the property and automatically produces another "equivalent" pogram satisfying the property. this separation of concerns makes the program easier to develop and maintain. our approach is both static and dynamic. it integrates static analyses in order to avoid useless transformations. on the other hand, it never rejects programs but adds dynamic checks when necessary. an important challenge is to make this dynamic enforcement as inexpensive as possible. the most obvious application domain is the enforcement of security policies. in particular, a potential use of the method is the securization of mobile code upon receipt. thomas colcombet pascal fradet preemptable remote execution facilities for the v-system marvin m. theimer keith a. lantz david r. 
cheriton testing and enhancing a prototype program fusion engine patricia johann simplegraphics: tcl/tk visualization of real-time multi-threaded and distributed applications visualization of complex software applications is an exciting and challenging field. useful displays are invaluable for developers in analysis of their software systems, and for meaningful system presentations to customers at a higher conceptual level. there have been many notable visualization examples applicable to data-centric systems, including the grasp code visualizer[11], the uw illustrating compiler[14], and the information space visualizations embodied in cone trees[7] and data walls[8]. as the conceptual and abstraction levels of these advanced graphics increase and become more prevalent, and as the real-world applications become more complicated from concurrency and distribution, the interfacing cohesion process becomes more difficult. this paper describes concepts and issues for integrating visualization techniques into multi-threaded and distributed applications, and shows foundational uses available across application domains. _simplegraphics_ is a term used in this paper to describe these powerful and rapidly developed visualizations applied to soft real-time applications. for the applications described here to meet the "simple" criteria they must be both conceptually simple and quickly developed - in, say, under 10 minutes. this leads to their use when appropriate in a "what if" manner, as well as taking them out if no longer needed. looking ahead, the newer embedded devices, such as the _autopc_ or the _palmpilot,_ have a small graphics footprint and will probably not allow for complicated graphics, leading to more use of abstractions like _simplegraphics._ newer object-oriented concurrency and distribution abstractions are being explored and applied to augment the work since the last international real-time ada workshop (irtaw)[6][5]. with pervasive concurrent and distributed systems under development, it becomes clear that better controls and visualization techniques are necessary, and ada provides elegant solution techniques that can evolve with changing requirements. the graphics language, tcl/tk[9], is used here both for its portability and its higher abstraction (through less syntax) supporting rapid development feedback. the benefits and impacts from this work include highlighting multi-threaded real-time issues, showing rapid turnaround through reuse, robust development through objects, better visualization techniques, and techniques for interfacing to other languages. in short, the goals of the 9th irtaw are addressed in various aspects within this body of work. scott arthur moody samuel kwok dale karr automatic management of operating-system resources olin shivers from software requirements to architectures _the first international workshop from software requirements to architectures (straw01) was held in toronto, ontario, canada, on may 14, 2001, just before the 23rd international conference on software engineering (icse). this brief paper outlines the motivation, goals and organisation of the workshop._ jaelson castro jeff kramer kernel korner michael k.
johnson using large families for handling priority requests a burns take command: the cpio command corporate linux journal staff experimental results from dynamic slicing of c programs program slicing is a program analysis technique that has been studied in the context of several different applications in the construction, optimization, maintenance, testing, and debugging of programs. algorithms are available for constructing slices for a particular execution of a program (dynamic slices), as well as for approximating a subset of the behavior over all possible executions of a program (static slices). however, these algorithms have been studied only in the context of small abstract languages. program slicing is bound to remain an academic exercise unless one can not only demonstrate the feasibility of building a slicer for nontrivial programs written in a real programming language, but also verify that a type of slice is sufficiently thin, on the average, for the application for which it is chosen. in this article, we present results from using slice, a dynamic program slicer for c programs, designed and implemented to experiment with several different kinds of program slices and to study them both qualitatively and quantitatively. several application programs, ranging in size (i.e., number of lines of code) over two orders of magnitude, were sliced exhaustively to obtain average worst-case metrics for the size of program slices. g. a. venkatesh portable run-time support for dynamic object-oriented parallel processing mentat is an object-oriented parallel processing system designed to simplify the task of writing portable parallel programs for parallel machines and workstation networks. the mentat compiler and run-time system work together to automatically manage the communication and synchronization between objects. the run-time system marshals member function arguments, schedules objects on processors, and dynamically constructs and executes large-grain data dependence graphs. in this article we present the mentat run-time system. we focus on three aspects---the software architecture, including the interface to the compiler and the structure and interaction of the principal components of the run-time system; the run-time overhead on a component-by-component basis for two platforms, a sun sparcstation 2 and an intel paragon; and an analysis of the minimum granularity required for application programs to overcome the run-time overhead. andrew s. grimshaw jon b. weissman w. timothy strayer exploitation of software test technology edward f. miller small-scale structural reengineering of software william l. scherlis debugging standard ml without reverse engineering we have built a novel and efficient replay debugger for our standard ml compiler. debugging facilities are provided by instrumenting the user's source code; this approach, made feasible by ml's safety property, is machine-independent and back-end independent. replay is practical because ml is normally used functionally, and our compiler uses continuation-passing style; thus most of the program's state can be checkpointed quickly and compactly using call-with-current-continuation. together, instrumentation and replay support a simple and elegant debugger featuring full variable display, polymorphic type resolution, stack trace-back, breakpointing, and reverse execution, even though our compiler is very highly optimizing and has no run-time stack. andrew p. tolmach andrew w.
appel concurrent object-oriented programming jacques cohen a proposal for control structures in apl an extension to apl is presented that introduces the "sequence" operator, allowing "case" and "while" blocks to be used as general control structures. these fully exploit apl's array-oriented expressivity and allow greater clarity in apl programming. denis p. samson dsl implementation using staging and monads the impact of domain specific languages (dsls) on software design is considerable. they allow programs to be more concise than equivalent programs written in a high-level programming language. they relieve programmers from making decisions about data-structure and algorithm design, and thus allow solutions to be constructed quickly. because dsls are at a higher level of abstraction they are easier to maintain and reason about than equivalent programs written in a high-level language, and perhaps most importantly they can be written by domain experts rather than programmers. the problem is that dsl implementation is costly and prone to errors, and that high-level approaches to dsl implementation often produce inefficient systems. by using two new programming language mechanisms, program staging and monadic abstraction, we can lower the cost of dsl implementations by allowing reuse at many levels. these mechanisms provide the expressive power that allows the construction of many compiler components as reusable libraries, provide a direct link between the semantics and the low-level implementation, and provide the structure necessary to reason about the implementation. tim sheard zine-el-abidine benaissa emir pasalic towards an architecture handbook bruce anderson a generalized object model van nguyen brent hailpern a generative approach to reusing of ada subsystems james solderitsch timothy schreyer toward special-purpose program verification paul eggert real-time programming with gnat: specialised kernels versus posix threads the fact that most of the gnat ports are based on non real-time operating systems leads to a reduced usability for developing real-time systems. on the other hand, existing ports over real-time operating systems are excessively complex, since gnat uses only a reduced set of their functionality, and with very specific semantics. this paper describes the implementation of a low-level tasking support for the gnat run-time. in order to achieve a predictable real-time behaviour we have developed a very simple library, built to fit only the gnat tasking requirements. we have also designed a bare machine kernel which provides the minimum environment needed by the upper layers. juan a. de la puente jose f. ruiz jesus m. gonzalez-barahona letters to the editor corporate linux journal staff dragoon: an object-oriented notation supporting the reuse and distribution of ada software among the more radical proposals for changes to the ada standard in ada9x are those advocating the introduction of "object-oriented" features exemplified by languages such as smalltalk and eiffel. dragoon is a language which supports the fundamental concepts of object-oriented programming in an ada style, while retaining most of the features of current ada. in other words it supports "ada-oriented" object-oriented programming. it also employs enhanced forms of multiple inheritance to support concurrency and distribution. therefore, although not designed specifically as a proposal for the ada9x project, dragoon may serve as a basis for the development of ada in an "object-oriented" direction.
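the program-staging idea mentioned in the dsl abstract above can be pictured with a small sketch: instead of interpreting a dsl term on every call, a first stage generates specialized code once. this is only an illustration of staging in general, not the paper's system, which combines staging with monadic abstraction in a typed functional language.

```python
# a tiny "dsl" for integer powers: interpret the exponent on every call ...
def power_interp(x, n):
    result = 1
    for _ in range(n):
        result *= x
    return result

# ... or "stage" it: given n, generate a specialized function once, so the
# loop over n is paid at generation time rather than at every call.
def power_staged(n):
    body = " * ".join(["x"] * n) if n > 0 else "1"
    code = f"def specialized(x):\n    return {body}\n"
    namespace = {}
    exec(code, namespace)          # second stage: compile the generated code
    return namespace["specialized"]

cube = power_staged(3)             # generated source: return x * x * x
assert cube(5) == power_interp(5, 3) == 125
```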
this paper provides a brief overview of the language. c. atkinson andrea de maio r. bayan interprocedural static analysis of sequencing constraints this paper describes a system that automatically performs static interprocedural sequencing analysis from programmable constraint specifications. we describe the algorithms used for interprocedural analysis, relate the problems arising from the analysis of real-world programs, and show how these difficulties were overcome. finally, we sketch the architecture of our prototype analysis system (called cesar) and describe our experiences to date with its use, citing performance and error detection characteristics. kurt m. olender leon j. osterweil formal semantics of the data types of ada: abridged version steven holtsberg improving gang scheduling through job performance analysis and malleability the openmp programming model provides parallel applications with a very important feature: job malleability. job malleability is the capacity of an application to dynamically adapt its parallelism to the number of processors allocated to it. we believe that job malleability provides applications with the flexibility that a system needs to achieve its maximum performance. we also argue that a system has to take its decisions not only based on user requirements but also based on run-time performance measurements to ensure the efficient use of resources. job malleability is the application characteristic that makes run-time performance analysis possible. without malleability, applications would not be able to adapt their parallelism to the system decisions. to support these ideas, we present two new approaches to attack the two main problems of gang scheduling: the excessive number of time slots and the fragmentation. our first proposal is to apply a scheduling policy inside each time slot of gang scheduling to distribute processors among applications considering their efficiency, calculated based on run-time measurements. we call this policy performance-driven gang scheduling. our second approach is a new re-packing algorithm, compress&join, that exploits job malleability. this algorithm modifies the processor allocation of running applications to adapt it to the system's needs and minimize the fragmentation and number of time slots. these proposals have been implemented on an sgi origin 2000 with 64 processors. results show the validity and convenience of both considering the job performance analysis calculated at run-time when deciding the processor allocation, and using a flexible programming model that adapts applications to system decisions. julita corbalan xavier martorell jesus labarta a register allocation technique using guarded pdg akira koseki hideaki komatsu yoshiaki fukazawa coordinating first-order multiparty interactions a first-order multiparty interaction is an abstraction mechanism that defines communication among a set of formal process roles. actual processes participate in a first-order interaction by enroling into roles, and execution of the interaction can proceed when all roles are filled by distinct processes. as in csp, enrolement statements can serve as guards in alternative commands. the enrolement guard-scheduling problem then is to enable the execution of first-order interactions through the judicious scheduling of roles to processes that are currently ready to execute enrolement guards.
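a toy sketch of the first-order interaction notion used in the abstract above: processes enrol into named roles, and the interaction may proceed only once every role is filled by a distinct process. the class below is an illustration only and has no relation to the paper's distributed scheduling algorithm, which must also handle enrolement used as a guard in alternative commands.

```python
class Interaction:
    """toy coordinator: an interaction may fire once all of its roles are
    filled by distinct participants (illustration, not the paper's algorithm)."""

    def __init__(self, name, roles):
        self.name = name
        self.roles = {role: None for role in roles}

    def enrol(self, process_id, role):
        if self.roles[role] is not None:
            return False                       # role already taken
        if process_id in self.roles.values():
            return False                       # roles must go to distinct processes
        self.roles[role] = process_id
        return True

    def ready(self):
        return all(p is not None for p in self.roles.values())

transfer = Interaction("transfer", ["sender", "receiver"])
transfer.enrol("p1", "sender")
transfer.enrol("p2", "receiver")
assert transfer.ready()                        # both roles filled: may execute
```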
we present a fully distributed and message-efficient algorithm for the enrolement guard-scheduling problem, the first such solution of which we are aware. we also describe several extensions of the algorithm, including: generic roles; dynamically changing environments, where processes can be created and destroyed at run time; and nested-enrolement, which allows interactions to be nested. yuh-jzer joung scott a. smolka evaluating the tradeoffs of mobile code design paradigms in network management applications mario baldi gian pietro picco specification prototyping of concurrent ada programs in dproto ramon d. acosta generalization of pick's theorem for surface of polyhedra pick's theorem is one of the rare gems of elementary mathematics because a very innocent sounding hypothesis implies a very surprising conclusion (bogomolny 1997). yet the statement of the theorem can be understood by a fifth grader. call a polygon a lattice polygon if the co-ordinates of its vertices are integers. pick's theorem asserts that the area of a lattice polygon _p_ is given by _a(p) = i(p) + b(p)/2 - 1 = v(p) - b(p)/2 - 1_ where _i(p), b(p)_ and _v(p)_ are the number of _interior_ lattice points, the number of _boundary_ lattice points and the _total_ number of lattice points of _p_ respectively. it is worth mentioning that _i(p)_ (understood as a _digital area_) has been a _digital mapping standard_ in the usa for a decade (morrison, j. l. 1988 and 1989). because pick's theorem was first published in 1899, our planned presentation was timed to its 100th anniversary. currently it has greater importance than realized heretofore because pick's theorem forms a connection between the old _euclidean_ and the new _digital (discrete) geometry._ during this long period many proofs of pick's theorem have been given and many attempts made to generalize it from simple polygons towards complex polygon networks, and to extend it in the direction of 3d geometrical objects as well. it has also turned out that nowadays the _inverse pick's formulas_ come to the fore instead of the original ones, as a consequence of the powerful spread of digital geometry and mapping. today the question is not the old one: how can we produce traditional area without co-ordinates, using only inside points and boundary points. just on the contrary: _how is it possible to simply determine digital boundary and digital area (namely the number of boundary points and inside points) using known co-ordinates of vertices._ the inverse formulas are: _b(p) = gcd(Δx, Δy, Δz)_ (1d pick's theorem) and _i(p) = a(p) - b(p)/2 + 1_ (2d pick's theorem), where gcd is the greatest common divisor of the co-ordinate differences of each pair of neighboring vertices. our main object is not to present these formulas, but to show that _pick's theorem (after adequate redrafting) is indeed valid for every spatial triangle determined by three arbitrary points of a 3d lattice._ the original planar theorem is only a special case of it. however, if it is true then it is valid not only for triangles but also for all irregular polygons which lie in space and have their vertices in spatial lattice points. finally, if the extended pick's theorem is true for every face of a lattice polyhedron then it is true for the total surface as well. consequently we developed simple and effective algorithms which solve enumeration tasks without time- and memory-wasting immediate computing.
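the planar formulas quoted in the abstract above are easy to check mechanically. the sketch below computes b(p) from the per-edge gcd formula and recovers i(p) via the inverse 2d formula, using the shoelace formula for the area; it only illustrates those published formulas and is not the authors' surface algorithms.

```python
from math import gcd

def boundary_points(vertices):
    """b(p): lattice points on the boundary, summing gcd(|dx|, |dy|) per edge."""
    b = 0
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1]):
        b += gcd(abs(x2 - x1), abs(y2 - y1))
    return b

def area(vertices):
    """a(p) by the shoelace formula (vertices in order, lattice co-ordinates)."""
    s = 0
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

def interior_points(vertices):
    """i(p) = a(p) - b(p)/2 + 1, the inverse 2d pick formula from the abstract."""
    return area(vertices) - boundary_points(vertices) / 2 + 1

# 4x3 lattice rectangle: area 12, 14 boundary points, 6 interior points
rect = [(0, 0), (4, 0), (4, 3), (0, 3)]
assert area(rect) == 12 and boundary_points(rect) == 14 and interior_points(rect) == 6
```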
these algorithms make it possible, using the vertex co-ordinate list and the topological description of a convex or non-convex polyhedron (cube, prism, tetrahedron etc.), to answer many elementary questions: for example, _how many voxels can be found on the complex surface of a polyhedron, how many on its edges or on its individual faces._ we also succeeded in extending our results to the surface of non-cornered geometric objects (circle, sphere, cylinder, cone, ellipsoid etc.), but that has to be the subject of another presentation. mihaly agfalvi istvan kadar erik papp a brief survey of papers on scheduling for pipelined processors sanjay m. krishamurthy turing plus: a comparison with c and pascal s. perelgut j. r. cordy offers - a tool for hierarchical implicit analysis of sequential object-oriented programs rajeev r. raje daniel j. pease edward t. guy some measurements of timeline gaps in vax/vms joe gwinn stop the presses phil hughes multijava: modular open classes and symmetric multiple dispatch for java curtis clifton gary t. leavens craig chambers todd millstein test case generation using prolog for the validation of the kernel system calls of a family of unix systems a knowledge-based test environment was conceived. a prototype version is currently implemented in prolog. the knowledge base consists essentially of three parts: test case specifications of the various system calls, a test suite generator with predicates including information about unix system properties and sound test practices, and a test protocol archive including utilities to extract and prepare reports about the test results. all information in the knowledge base is stored as horn clauses, i.e. facts and rules immediately to be consulted and executed by a prolog interpreter. herbert pesch peter schnupp hans schaller anton paul spirk linking programs incrementally linking is traditionally a batch process that resolves cross-references between object modules and run-time libraries to produce a stand-alone executable image. because most program changes only involve a small part of the program, we have implemented an incremental linker, named inclink, that processes only the changed modules. inclink generates a new executable in time proportional to the size of the change; in contrast, a batch linker generates an executable in time proportional to the size of the program. to minimize updates to the executable, inclink allocates extra space for every module. by allocating 24 percent more space in the executable for overflows, inclink can update a module in place over 97 percent of the time. measurements show that inclink is more than an order of magnitude faster than the unix [2] batch linker and that 88 percent of all links will take less than 2 s of cpu time on a microvax-2, independent of program size. russell w. quong mark a. linton a new iteration mechanism for the c++ programming language myung ho kim how to achieve modularity in distributed object allocation franco zambonelli the cathedral and the bazaar peter salus implicit context: easing software evolution and reuse software systems should consist of simple, conceptually clean software components interacting along narrow, well-defined paths. all too often, this is not reality: complex components end up interacting for reasons unrelated to the functionality they provide. we refer to knowledge within a component that is not conceptually required for the individual behaviour of that component as extraneous embedded knowledge (eek).
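a concrete picture of what the implicit-context abstract above calls extraneous embedded knowledge: the function below conceptually needs only its records, yet it also depends on one particular logger name and threads a parameter it never uses. the names are invented for illustration.

```python
# extraneous embedded knowledge (eek), illustrated with invented names:
# 'sort_records' conceptually needs only the records, yet it also knows one
# particular global logger name and passes along a 'request_id' it never uses.
GLOBAL_LOGGER = print                          # dependence on a particular name

def sort_records(records, request_id):         # 'request_id' is extraneous here
    GLOBAL_LOGGER(f"sorting for request {request_id}")
    return sorted(records)

# the implicit-context proposal would let such concerns be recovered from the
# call history instead of being threaded through every interface.
print(sort_records([3, 1, 2], request_id="r42"))
```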
eek creeps into a system in many forms, including dependences upon particular names and the passing of extraneous parameters. this paper proposes the use of implicit context as a means for reducing eek in systems by combining a mechanism to reflect upon what has happened in a system, through queries on the call history, with a mechanism for altering calls to and from a component. we demonstrate the benefits of implicit context by describing its use to reduce eek in the java swing library. robert j. walker gail c. murphy linux programming hints michael k. johnson object design of modularity, reuse and quality (abstract) douglas bennett fortran access to ieee 754 exceptions keith bierman which model of programming for lisp: sequential, functional or mixed? c. k. yuen an experimental study of individual subjective effort estimation and combinations of the estimates martin höst claes wohlin value dependence graphs: representation without taxation the value dependence graph (vdg) is a sparse dataflow-like representation that simplifies program analysis and transformation. it is a functional representation that represents control flow as data flow and makes explicit all machine quantities, such as stores and i/o channels. we are developing a compiler that builds a vdg representing a program, analyzes and transforms the vdg, then produces a control flow graph (cfg) [asu86] from the optimized vdg. this framework simplifies transformations and improves upon several published results. for example, it enables more powerful code motion than [clz86, fow87], eliminates as many redundancies as [awz88, rwz88] (except for redundant loops), and provides important information to the code scheduler [br91]. we exhibit a fast, one-pass method for elimination of partial redundancies that never performs redundant code motion [kfs92, ds93] and is simpler than the classical [mr79, dha91] or ssa [rwz88] methods. these results accrue from eliminating the cfg from the analysis/transformation phases and using demand dependences in preference to control dependences. daniel weise roger f. crew michael ernst bjarne steensgaard orthogonal version management one part of the "software configuration management" --- software version control --- is the task of controlling different versions of documents. most existing version control systems accomplish this task by managing variant and revision trees of single documents. the structure of these trees depends on the chronological evolution of the software project. we call this form of organization "intermixed organization" of variants and revisions. this paper points out the disadvantages of that organization, introduces a new way of version management - the "orthogonal organization" - and then compares the two organizations by means of an example. c. reichenberger a gqm-based tool to support the development of software quality measurement plans this paper focuses on the goal-question-metric (gqm) approach, which has been proposed as a goal-oriented approach for the measurement of products and processes in software engineering. first, the gqm approach is characterized, and then the _gqm-plan_ tool is described. the _gqm-plan_ tool was developed to support the preparation of measurement plans based on gqm. janaina c. abib tereza g. kirner constraintlisp: an object-oriented constraint programming language constraintlisp is an object-oriented constraint programming language.
it is based on the constraint satisfaction problem (csp) model in artificial intelligence and the object-oriented technology. this integration provides both the declarative representation and efficiency of constraint solving of constraint programming, and the powerful problem modelling capability, maintainability, reusability and extensibility of the object-oriented technology. constraintlisp is aimed at providing practical solutions to real life constrained search problems such as scheduling, resource allocation, etc. it is also designed and implemented in an object-oriented approach. bing liu yuen-wah ku the task dependence net in ada software development jingde cheng the meta4 programming language meta4 is an object oriented language with first order reasoning capability that also includes reasoning about the _change_ that the classes go through over time, efficient partial evaluation via runtime propagation of specific values, explicit run- time code generation via a source code to machine code compiling engine - not an interpreter. coupled with a strong encapsulation principle and certain reasonable language restrictions and programmer supplied annotations, meta4 allows all parts of the program, including the compiler itself to be extended and specialized over time. therefore, meta4 is _mutable_. we introduce the concept of _run-time grammar extension_, an elegant and powerful facility for custom- programming of new _language semantics_ at the _grammar level_. in other words, its _semantics_ and _grammar_ can be extended or even modified, even at run time to solve specific problems. jason w. kim structural defects in object-oriented programming david rine a taxonomy of ada packages ken sumate kjell nielsen efficient software-based fault isolation robert wahbe steven lucco thomas e. anderson susan l. graham technical correspondence diane crawford separating key management from file system security no secure network file system has ever grown to span the internet. existing systems all lack adequate key management for security at a global scale. given the diversity of the internet, any particular mechanism a file system employs to manage keys will fail to support many types of use.we propose separating key management from file system security, letting the world share a single global file system no matter how individuals manage keys. we present sfs, a secure file system that avoids internal key management. while other file systems need key management to map file names to encryption keys, sfs file names effectively contain public keys, making them _self-certifying pathnames._ key management in sfs occurs outside of the file system, in whatever procedure users choose to generate file names.self-certifying pathnames free sfs clients from any notion of administrative realm, making inter-realm file sharing trivial. they let users authenticate servers through a number of different techniques. the file namespace doubles as a key certification namespace, so that people can realize many key management schemes using only standard file utilities. finally, with self-certifying pathnames, people can bootstrap one key management mechanism using another. these properties make sfs more versatile than any file system with built-in key management. david mazières michael kaminsky m. frans kaashoek emmett witchel an experimental coprocessor for implementing persistent objects on an ibm 4381 c. j. georgiou s. l. palmer p. l. 
rosenfeld new wave prototyping: use and abuse of vacuous prototypes hal berghel tacit definition roger k. w. hui kenneth e. iverson eugene e. mcdonnell eiffel linda: an object-oriented linda dialect robert jellinghaus the design of very fast portable compilers a. s. tanenbaum m. f. kaashoek k. g. langendoen c. j. h. jacobs efficient implementation of the smalltalk-80 system the smalltalk-80* programming language includes dynamic storage allocation, full upward funargs, and universally polymorphic procedures; the smalltalk-80 programming system features interactive execution with incremental compilation, and implementation portability. these features of modern programming systems are among the most difficult to implement efficiently, even individually. a new implementation of the smalltalk-80 system, hosted on a small microprocessor- based computer, achieves high performance while retaining complete (object code) compatibility with existing implementations. this paper discusses the most significant optimization techniques developed over the course of the project, many of which are applicable to other languages. the key idea is to represent certain runtime state (both code and data) in more than one form, and to convert between forms when needed. l. peter deutsch allan m. schiffman surveyor's forum: retargetable code generators william a. wulf joe newcomer bruce leverett rick cattell paul knueven software architecture recovery of a program family wolfgang eixelsberger michaela ogris harald gall berndt bellay programming pearls jon bentley distributed compilation metrics (abstract) software metrics typically are used to determine the complexity of code, the cost of software development, the effort required to develop software, or the probability of software errors. as programs increase in size and complexity, more of the limited cpu resource is consumed during compilation. in an effort to reduce the overall time for compilation of large (in excess of 1,000,000 lines of code) embedded ada programs, a distributed ada compilation system implemented on a network of computers has been developed. tests of this system, using small to medium size (1,000 to 50,000 lines of code) ada programs, have shown various levels of performance improvement over sequential compiles. the level of improvement, however, has not been consistent. at times, when the system is executing multiple concurrent compiles, there is a significant loss of efficiency. there has been relatively little work done on compilation time metrics [1,2]; and even less work done on distributed compilation time metrics which could be used to predict these major reductions in compilation performance. our research presents two models for predicting compilation time for large ada programs in a distributed environment. the first model involves the application of several standard metrics (lines of code, data structures, i/o intensity) and some non-standard metrics for compilation time. the weaknesses discovered in this model for a distributed environment are discussed. the second model combines some standard metrics with a scheduling algorithm to both predict distributed compilation time and determine an allocation of packages among distributed processors to optimize compilation time. we present tests of both of these models using modified piwg z tests. compilation time predictions from these models are also shown for larger programs. 
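the second model in the distributed-compilation abstract above couples predicted compile times with a scheduling step that allocates packages to processors. the paper's own algorithm is not reproduced here; the sketch below is just the classic longest-processing-time greedy heuristic, shown only to make the idea of allocating from predicted compile times concrete.

```python
import heapq

def allocate_packages(predicted_seconds, n_processors):
    """greedy lpt heuristic: assign the largest predicted compile times first,
    always to the currently least-loaded processor (illustrative only)."""
    # heap of (current_load, processor_index, assigned_packages)
    heap = [(0.0, p, []) for p in range(n_processors)]
    heapq.heapify(heap)
    for pkg, secs in sorted(predicted_seconds.items(), key=lambda kv: -kv[1]):
        load, proc, pkgs = heapq.heappop(heap)
        heapq.heappush(heap, (load + secs, proc, pkgs + [pkg]))
    return sorted(heap)            # (load, processor, packages) per processor

predicted = {"pkg_a": 120.0, "pkg_b": 45.0, "pkg_c": 90.0, "pkg_d": 30.0}
for load, proc, pkgs in allocate_packages(predicted, 2):
    print(f"processor {proc}: {pkgs} (predicted {load:.0f} s)")
```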
problems with an approach based on compilation time efficiency and recommendations for continued research in compilation metrics are discussed. donal gotterbarn timothy d. hammer using test case metrics to predict code quality and effort r. harrison l. g. samaraweera reflections on metaclass programming in som this paper reports on the evolution of metaclass programming in som (the ibm system object model). initially, som's use of explicit metaclasses introduced metaclass incompatibilities. this was cured by having som dynamically derive an appropriate metaclass by interpreting the "metaclass declaration" as a constraint. in effect, inheritance is given a new dimension, because the constraint is also inherited. the derived metaclass is the least solution to all these constraints. subsequently, this cure led to the possibility of metaclasses conflicting over the need to assign meaning to a method. the cure for this problem is a framework that facilitates the programming of metaclasses that cooperate on the assignment of meaning to methods. scott danforth ira r. forman dynamic instrumentation of threaded applications zhichen xu barton p. miller oscar naim design and development of minix distributed operating system students and faculty of university of mississippi are in the process of transforming minix into a truly distributed operating system. minix is a complete operating system and has all the components such as 1) process manager, 2) memory manager, 3) file manager, 4) device drivers, 5) inter process communication, 6) real time clock, 7) general i/o, 8) utilities, 9) state saver and 10) timing service. unlike major operating systems which are monolithic in structure, minix is itself a collection of processes that communicate with each other through message passing. the design of the minix distributed system involves the addition of a server process called net to the existing operating system. the functionality of the net server can be broadly classified into four major routines. they are: 1) the communication manager which transmits and receives the frames from the remote hosts through the hslan driver installed in kernel. it also performs the error detection and correction function and maintains the transmission protocol. 2) the interprocess communication manager which maps message buffer into the fixed size frames to be transmitted by the communication manager. it also has primitives such as request, reply, flow controlled send to initiate and maintain a virtual circuit with the remote system. 3) the resource manager which is employed as a child process of the net server process holds the status of the network. it maintains information such as remote logical address, number of process running in each system, resources available at each site etc. 4) finally, the network service manager which services the remote file request from the local process and also the local file request from the remote host. addition of these components in the net process would enhance the capabilities of the operating system and provide users access to remote file systems and remote resources and also enable users to exploit multiprocessor capabilities with the help of well defined algorithms and tools. to accomplish the transformation of minix the following changes have been made to incorporate the net process. modify a system tool called build which patches the independent files bootblok, kernel, memory manager, file manager and init into the memory resident portion of minix. 
build was modified so that one more component net could be added to the minix image. these were non-trivial changes because in addition to combining the object module build also puts the cs & ds for all these components at the beginning of kernel data space, so that kernel can load their memory maps in the proc table during system initialization. increase the storage size in the kernel data space where the cs & ds of all the components are stored so that build can install the cs & ds of net. assign and make an entry into the memory manager proc table so that net could make system calls to the memory manager. assign an entry into the file system proc table and assign the working directory, real uid, effective uid for the net process. the net process like other processes is designed such that it will continuously wait for and respond to messages from other processes. initially, it will be blocked waiting to receive a message from any process. modify the dump routine to display the status of the net process. finally, the net process is designed to continuously wait for its service request messages from other processes. the net process is structured in such a way that the type of request is resolved and switched into a table of service routines. on accomplishing the service request the process loops back to receive the next request, thus providing the foundation for the development of the internal net routines discussed above. k. s. ramesh program integration for languages with procedure calls given a program base and two variants, a and b, each created by modifying separate copies of base, the goal of program integration is to determine whether the modifications interfere, and if they do not, to create an integrated program that incorporates both sets of changes as well as the portions of base preserved in both variants. text-based integration techniques, such as the one used by the unix diff3 utility, are obviously unsatisfactory because one has no guarantees about how the execution behavior of the integrated program relates to the behaviors of base, a, and b. the first program integration algorithm to provide such guarantees was developed by horwitz, prins, and reps. however, a limitation of that algorithm is that it only applied to programs written in a restricted language---in particular, the algorithm does not handle programs with procedures. this article describes a generalization of the horwitz-prins-reps algorithm that handles programs that consist of multiple (and possibly mutually recursive) procedures. we show that two straightforward generalizations of the horwitz-prins-reps algorithm yield unsatisfactory results. the key issue in developing a satisfactory algorithm is how to take into account different calling contexts when determining what has changed in the variants a and b. our solution to this problem involves identifying two different kinds of affected components of a and b: those affected regardless of how the procedure is called, and those affected by a changed or new calling context. the algorithm makes use of interprocedural program slicing to identify these components, as well as components in base, a, and b with the same behavior. david binkley susan horwitz thomas reps an open-ended data representation model for eu_lisp the goal of this paper is to describe an open-ended type system for lisp with explicit and full control of bit-level data representations. this description uses a reflective architecture based on a metatype facility. 
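the minix abstract above describes the net server as a process that continuously waits for service request messages and switches on the request type into a table of service routines. the skeleton below sketches that receive-and-dispatch shape in python rather than minix's c, with invented message fields and handler names; it only illustrates the loop structure, not the actual minix code.

```python
# skeleton of a message-driven server loop in the style described above.
# request types and handler names are invented for illustration.

def handle_remote_file_request(msg): return {"status": "ok", "op": "remote_file"}
def handle_local_file_request(msg):  return {"status": "ok", "op": "local_file"}
def handle_status_query(msg):        return {"status": "ok", "op": "status"}

SERVICE_TABLE = {
    "REMOTE_FILE": handle_remote_file_request,
    "LOCAL_FILE": handle_local_file_request,
    "STATUS": handle_status_query,
}

def net_server(receive, send):
    """block on receive(), dispatch on the request type, reply, and loop."""
    while True:
        msg = receive()                      # blocks until a request arrives
        if msg is None:                      # sentinel used here to stop the demo
            break
        handler = SERVICE_TABLE.get(msg["type"], lambda m: {"status": "error"})
        send(msg["source"], handler(msg))

# tiny in-memory demo of the loop
inbox = [{"type": "STATUS", "source": 7}, None]
net_server(receive=lambda: inbox.pop(0),
           send=lambda dest, reply: print(dest, reply))
```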
this low-level formalism solves the problem of an harmonious design of a class taxononomy inside a type system. a prototype for this framework has been written in le-lisp and is used to build the integrated type and object systems of the eu_lisp proposal. christian queinnec pierre cointe corba: from vision to reality karen d. boucher an evaluation of java's i/o capabilities for high-performance computing phillip m. dickens rajeev thakur predicting execution time of real-time programs on contemporary machines marion g. harmon function points in an ada object-oriented design? ernie rains letters to the editor corporate linux journal staff the standard factor p. billingsley compiling objects into actors (abstract only) peter de jong best of technical support corporate linux journal staff a translator from small euclid to pascal p. e. pintelas k. p. ventouris m. d. papassimakopoulou formalization of the control stack j. a. velazques iturbide reusing single system requirements from application family requirements mike mannion hermann kaindl joe wheadon barry keepence securely wrapping cots products bob balzer disconnected operation in the coda file system disconnected operation is a mode of operation that enables a client to continue accessing critical data during temporary failures of a shared data repository. an important, though not exclusive, application of disconnected operation is in supporting portable computers. in this paper, we show that disconnected operation is feasible, efficient and usable by describing its design and implementation in the coda file system. the central idea behind our work is that caching of data, now widely used for performance, can also be exploited to improve availability. james j. kistler m. satyanarayanan equal rights - and wrongs - in lisp kent m. pitman challenges for forth james c. brakefield rethinking the taxonomy of fault detection techniques michael young richard n. taylor pdl/ada - a design language based on ada this paper discusses (1) some general concepts of design languages, (2) the development of the specific design language denoted as pdl/ada, (3) the specific features of pdl/ada, and (4) some problems encountered and techniques used in defining pdl/ada. an appendix shows two examples. because of space constraints, each of these items can only be touched on briefly. this paper assumes that the reader has a basic familiarity with the ada language (herein after referred to as ada), but detailed knowledge is not required. the prime technical focus of the work has been to replace an existing design language and notation which supports a specific design methodology with a design language based on ada without impacting the methodology. there was a clearcut decision to use ada to obtain the dual benefits of having a design language which was a subset of a programming language while still retaining just the concepts needed for a design (rather than a programming) language. jean e. sammet douglas w. waugh robert w. reiter implementation and evaluation of alternative process schedulers in minix r. guerrero l. leguizamon r. gallard an integrated general purpose automated test environment as software systems become more and more complex, both the complexity of the testing effort and the cost of maintaining the results of that effort increase proportionately. most existing test environments lack the power and flexibility needed to adequately test significant software systems. 
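to make the self-certifying pathname idea from the sfs abstract above concrete: the file name itself carries the server's location together with a hash of its public key, so a client can authenticate the server without any separate key-management step. the path syntax and hash below are simplified stand-ins, not the exact sfs format.

```python
import hashlib

def host_id(public_key: bytes) -> str:
    """hash of the server's public key (sfs defines its own scheme; this is a
    simplified stand-in using sha-256 truncated for readability)."""
    return hashlib.sha256(public_key).hexdigest()[:32]

def self_certifying_path(location: str, public_key: bytes, rest: str) -> str:
    # the name itself commits to the server's key
    return f"/sfs/{location}:{host_id(public_key)}/{rest}"

def server_is_authentic(path: str, presented_key: bytes) -> bool:
    """client check: the key presented by the server must hash to the id
    embedded in the path the user asked for."""
    spec = path.split("/")[2]                 # "<location>:<hostid>"
    _, expected = spec.split(":")
    return host_id(presented_key) == expected

key = b"example public key bytes"
p = self_certifying_path("sfs.example.org", key, "home/alice/notes.txt")
assert server_is_authentic(p, key)
assert not server_is_authentic(p, b"some other key")
```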
the convex integrated test environment (cite) is discussed as an answer to the need for a more complete and powerful general purpose automated software test system. peter a. vogel global multimedia system design exploration using accurate memory organization feedback arnout vandecappelle miguel miranda erik brockmeyer francky catthoor diederik verkest technical correspondence diane crawford multitasking/multiuser systems y. m. lee e. conjura goal-directed object-oriented programming in unicon clinton l. jeffery designing components versus objects: a transformational approach a good object-oriented design does not necessarily make a good component-based design, and vice versa. what design principles do components introduce? this paper examines component-based programming and how it expands the design space in the context of an event-based component architecture. we present a conceptual model for addressing new design issues these components afford, and we identify fundamental design decisions in this model that are not a concern in conventional object-oriented design. we use javabeans-based examples to illustrate concretely how expertise in component-based design, as embodied in a component taxonomy and implementation space, impacts both design and the process of design. the results are not exclusive to javabeans---they can apply to any comparable component architecture. david h. lorenz john vlissides the art of managing multiple processes mohamed e. fayad the situation in object-oriented specification and design george w. cherry approaches to change in an object-oriented database (abstract only) stanley zdonik best of technical support corporate linux journal staff the arcadia research project the arcadia research project is being conducted by the arcadia consortium to develop innovative technology in support of advanced project support environments. the arcadia consortium consists of a collection of separately- funded, informally coordinated research and development projects at the university of california at irvine, the university of colorado at boulder, the university of massachusetts at amherst, stanford university, the aerospace corporation, incremental systems corporation, and trw defense systems group1. some individuals who have moved to other organizations remain active participants in the consortium. the arcadia project is expected to provide significant advances in the ability to construct environments with such key characteristics as: scalability to the demands of very large projects; adaptability to differences among projects and unforeseen changes; extensibility to accommodate technology evolution; and pro-active support for the automatable aspects of the software life-cycle. aspects of these characteristics may become available for use in the environments of the mid-to-late 1990s. 
among the research areas being investigated by the consortium are: a) environment architecture, to provide an interaction model and execution framework for all environment components; b) process research and programming, to develop and automate improved software life-cycle models and to devise process programming language capabilities; c) object management, to provide enhanced support for advanced typing models and persistent object management; d) user interface management, to provide for graphical human-computer interaction interface methodology and system support; e) language support, to provide for standard representations and tool-building aids in support of multiple languages; f) analysis, to provide for improved analytical capabilities in support of software development; g) measurement and evaluation, to provide for improved automated support for analyzing and evaluating software processes and environment components. the research approach emphasizes the role of process programming and its impact on both the environment architecture and the user interaction metaphor. the consortium members have already made contributions in several of these areas, including (but not limited to): iris, "intermediate representation including semantics", a new intermediate representation for programs that is structurally uniform and can be semantically adapted to accommodate different programming languages; chiron, a prototype user interface methodology and system based on the ideas of separation of tool functionality from object depiction and separation of an abstract depiction from the physical depiction of objects; pgraphite, a prototype application generator for persistent and transient graphical data structures in ada programs; appl/a, a prototype process programming language based on ada, extending it to include, for example, programmable relations, and triggered effects; pmdb+, an extension of the project master data base conceptual model to include behavior of processes; and anna, an annotation language and toolset to support formalization in software development practices. ploiting the advantages of the inheritance relation. furthermore, particular transitions in the nets (e.g. transitions allocating persons to activities and documents or defining certain preconditions) can be described in a rule-based manner thus exploiting the benefits of declarative programming. a first and very rapid prototype of an fpe based on the above mentioned ideas about the process modeling language has been developed on top of the ai tool kee (knowledge engineering environment). however, the current specimen work is concentrating on developing a stable language definition and an fpe which fits into the framework of the esf reference architecture. the current specimen consortium lead by the university of dortmund/stz (d) incorporates as further members cap sesa innovation in grenoble (f), a sema- group branch in cologne (d), and stc technology in newcastle-under-lyme (gb). arcadia consortium larger scale systems require higher-level abstractions m. shaw cooking with linux: smell of fresh-baked kernels marcel gagne the rebirth of apl? john manges defect content estimations from review data claes wohlin per runeson emeralds: a small-memory real-time microkernel khawar m. zuberi padmanabhan pillai kang g. shin 1999 readers' choice awards: you voted, we counted--here are the results jason kroll code reviews the best code reviews are the ones that actually get done. 
larry colen a stakeholder-centric software architecture analysis approach sonia bot chung-horng lung mark farrell ocm - a monitoring system for interoperable tools roland wismuller jörg trinitis thomas ludwig conservative pretty printing martin ruckert a retargetable debugger we are developing techniques for building retargetable debuggers. our prototype, ldb, debugs c programs compiled for the mips r3000, motorola 68020, sparc, and vax architectures. it can use a network to connect to faulty processes and can do cross-architecture debugging. ldb's total code size is about 16,000 lines, but it needs only 250--550 lines of machine-dependent code for each target. ldb owes its retargetability to three techniques: getting help from the compiler, using a machine-independent embedded interpreter, and choosing abstractions that minimize and isolate machine-dependent code. ldb reuses existing compiler function by having the compiler emit postscript code that ldb later interprets; postscript works well in this unusual context. norman ramsey david r. hanson a lisp environment at ibm t j watson research m. mikelsons c. n. alberga c. f. skutt implementation strategies for continuations scheme and smalltalk continuations may have unlimited extent. this means that a purely stack-based implementation of continuations, as suffices for most languages, is inadequate. several implementation strategies have been described in the literature. determining which is best requires knowledge of the kinds of programs that will commonly be run. danvy, for example, has conjectured that continuation captures occur in clusters. that is, the same continuation, once captured, is likely to be captured again. as evidence, danvy cited the use of continuations in a research setting. we report that danvy's conjecture is somewhat true in the commercial setting of macscheme+toolsmith, which provides tools for developing macintosh user interfaces in scheme. these include an interrupt-driven event system and multitasking, both implemented by liberal use of continuations. we describe several implementation strategies for continuations and compare four of them using benchmarks. we conclude that the most popular strategy may have a slight edge when continuations are not used at all, but that other strategies perform better when continuations are used and danvy's conjecture holds. will clinger anne hartheimer eric ost architectural patterns for complex real-time systems brian selic object time (keynote address) (abstract only) brian selic stars methodology area summary: volume ii: preliminary views on the software life cycle and methodology selection catherine w. mcdonald william riddle christine youngblut the effectiveness of error seeding b. meek k. k. siu linux distributions corporate linux journal staff product review: perforce software configuration management system tom bjorkholm a compiler for lazy ml lml is a strongly typed, statically scoped functional language with lazy evaluation. it is compiled through a number of program transformations which make the code generation easier. code is generated in two steps: first code for an abstract graph manipulation machine, the g-machine; from this code, machine code is generated. some benchmark tests are also presented. lennart augustsson cilk: an efficient multithreaded runtime system cilk (pronounced "silk") is a c-based runtime system for multi-threaded parallel programming.
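the performance of such a multithreaded runtime is commonly summarized by its total work and its critical path, written t1 and t-infinity; to a first approximation, and ignoring scheduling overheads, the model the cilk abstract goes on to invoke is

```latex
% work/critical-path model for a multithreaded computation on P processors
% (first-order approximation; constants and overheads omitted).
T_P \;\approx\; \frac{T_1}{P} \;+\; T_\infty ,
\qquad\text{with the lower bounds}\qquad
T_P \;\ge\; \frac{T_1}{P} \quad\text{and}\quad T_P \;\ge\; T_\infty .
```

so a computation whose work greatly exceeds its critical path can expect nearly linear speedup on p processors, which is why the abstract advises reducing work and critical path rather than tuning the scheduler.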
in this paper, we document the efficiency of the cilk work-stealing scheduler, both empirically and analytically. we show that on real and synthetic applications, the "work" and "critical path" of a cilk computation can be used to accurately model performance. consequently, a cilk programmer can focus on reducing the work and critical path of his computation, insulated from load balancing and other runtime scheduling issues. we also prove that for the class of "fully strict" (well-structured) programs, the cilk scheduler achieves space, time and communication bounds all within a constant factor of optimal. the cilk runtime system currently runs on the connection machine cm5 mpp, the intel paragon mpp, the silicon graphics power challenge smp, and the mit phish network of workstations. applications written in cilk include protein folding, graphic rendering, backtrack search, and the *socrates chess program, which won third prize in the 1994 acm international computer chess championship. robert d. blumofe christopher f. joerg bradley c. kuszmaul charles e. leiserson keith h. randall yuli zhou fixpoint computation for polyvariant static analyses of higher-order applicative programs this paper presents an optimized general-purpose algorithm for polyvariant, static analyses of higher-order applicative programs. a polyvariant analysis is a very accurate form of analysis that produces many more abstract descriptions for a program than does a conventional analysis. it may also compute intermediate abstract descriptions that are irrelevant to the final result of the analysis. the optimized algorithm addresses this overhead while preserving the accuracy of the analysis. the algorithm is also parameterized over both the abstract domain and degree of polyvariance. we have implemented an instance of our algorithm and evaluated its performance compared to the unoptimized algorithm. our implementation runs significantly faster on average than the other algorithm for benchmarks reported here. j. michael ashley charles consel an object-oriented approach to educational software in building physics an object-oriented building-physics software a.t.o.n. (which is the acronym for the german "general thermal and ecological verifications") is described as an example for educational software written in apl. the software was developed at the department for structural analysis at the technical university of graz, and has been successfully used for teaching building-physics. a.t.o.n. was designed in only four months. it comes with an user-friendly interface, which is easy and effortless to learn. every manipulation of data is immediately monitored in graphical windows. in order to achieve this "question & answer"-concept a hybrid system was set up. this system consists of quick c- and fortran-dlls combined with the dyalogapl interpreter using causewaypro as an interactive development environment. georg reichard verification and validation of requirements for mission critical systems steve easterbrook error messages in f david epstein minimizing reference count updating with deferred and anchored pointers for functional data structures henry g. baker book reviews: running linux zach beane phased inspections and their implementation john c. knight e. 
ann myers at the forge: missing cgi.pm and other mysteries reuven lerner synchronization in portable device drivers we present an overview of the synchronization mechanisms offered to device drivers by different operating systems and develop a foundation for writing portable device drivers by unifying these mechanisms. our foundation has been used to implement an efficient portable cluster adapter driver for three different operating systems as part of the runtime system for a heterogeneous pc cluster. we show how our portable synchronization mechanisms map to the native synchronization mechanisms of these three operating systems. stein j. ryan apl procedures (user defined operators, functions and token strings) this paper describes some central aspects of an apl implementation on a hewlett packard minicomputer. the development of these ideas led to an elegant, consistent underlying structure for all procedures, where a procedure is defined as a structured sequence of apl expressions, instances of which are niladic functions, ambivalent functions, monadic operators and dyadic operators. further to this idea, the introduction of two new functions (tokenize and detokenize) and a single hyperoperator ( ) gave rise to the following features; ability to manipulate functions and operators as apl objects extended assignment applied to all apl objects ability to store preset (or initialized) values into the header of any procedure make direct use of the (usually restricted) facet of tokenizing and detokenizing in apl to generate token strings, which may be applied by the programmer to form individual variants of fx, cr and/or editing. these extensions have been superimposed upon a basic imprint of sharp apl. robert hodgkinson indentation styles in c john c. hansen roger yim adaptive and efficient mutual exclusion (extended abstract) a distributed algorithm is adaptive if its performance depends on k, the number of processes that are concurrently active during the algorithm execution (rather than on n, the total number of processes). this paper presents adaptive algorithm for mutual exclusion using only read and write operations. the worst case step complexity cannot be a measure for the performance of mutual exclusion algorithms, because it is always unbounded in the presence of contention. therefore, a number of different parameters are used to measure the algorithm's performance: the remote step complexity is the maximal number of steps performed by a process where a wait is counted as one step. the system response time is the time interval between subsequent entries to the critical section, where one time unit is the minimal interval in which every active process performs at least one step. the algorithm presented here has o(k) remote step complexity and o(log k) system response time, where k is the point contention. the space complexity of this algorithm is o(nn), where n is the range of processes' names. the space complexity of all previously known adaptive algorithms for various long-lived problems depends on n. we present a technique that reduces the space complexity of our algorithm to be a function of n, while preserving the other performance measures of the algorithm. hagit attiya vita bortnikov adaptation and commitment technology (act) william l. scherlis control transformations through petri nets e. l. 
miranda probe: a formal specification-based testing system ahmed al-amayreh abdullah mohd zin asynchronous transfer of control and interrupt handling this is a summary of the discussions on two requirement areas: * asynchronous transfer of control * interrupt handling thomas j. quiggle teaching object programming with ada 95 mike feldman jack beidler parsing with c++ classes damian conway concurrent reusable abstract data types jeffrey r. carter extended dynamic dependent and-parallelism in ace gopal gupta enrico pontelli letters to the editor corporate linux journal staff relating viewpoints: a preface to viewpoints 96 anthony finkelstein program indentation and comprehensibility richard j. miara joyce a. musselman juan a. navarro ben shneiderman some observations on using ching's apl-to-c translator john k. bates pragmatic definition of an object-oriented development process for ada m. whitcomb b. clark unboxed values and polymorphic typing revisited peter j. thiemann distributed file systems and distributed memory thomas w. doeppner alphabet soup: the internationalization of linux, part i stephen turnbull psail: a portable sail to c compiler - description and tutorial p. f. lemkin data space testing a complete software testing process must concentrate on examination of the software characteristics as they may impact reliability. software testing has largely been concerned with structural tests, that is, tests of program logic flow. in this paper, a companion software test technique for the program data called data space testing is described. an approach to data space analysis is introduced with an associated notation. the concept is to identify the sensitivity of the software to a change in a specific data item. the collective information on the sensitivity of the program to all data items is used as a basis for test selection and generation of input values. michael paige effects of buffering semantics on i/o performance jose carlos brustoloni peter steenkiste ada tasking model and prolog predication cathy c. ncube a requirements to test tracking system (rtts) edward r. rang karen h. thelen object engineering (abstract) r. stonewall ballard an overview of miranda d turner object oriented design in a real-time multiprocessor environment object oriented design (ood), with its emphasis on data encapsulation and rigorously defined message passing interfaces, offers great possibilities for improving the reliability and maintainability of large-scale software. these improvements are made feasible for the first time in military avionics applications by the recent emergence of new computers with significant increases in processing power and memory capacity. ironically, this newly available hardware, because it features modularity and multiprocessing, calls for distributed software, and thereby introduces an interesting challenge to the object oriented software view. the message passing or operation invoking interface between two objects which reside on the same processor is well defined and readily handled within the ood paradigm. how to extend the paradigm to cover two objects hosted on two different processors requires some thought, since only data, not tasks, can be passed from one processor to another. this distributed-objects challenge was faced early on in the development of ada software for the f-16a/b vhsic1 core avionics processor (vcap).
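(one common way to bridge that gap, sketched here only as an illustration and not as the rop interfaces themselves, is a proxy object that turns each operation invocation into a message to the processor that owns the real object; every name and the wire format below are invented.)

    import json, socket

    class RemoteObjectProxy:
        # stands in for an object hosted on another processor: method
        # invocations become messages, since only data can cross processors.
        def __init__(self, host, port, object_id):
            self.addr = (host, port)
            self.object_id = object_id

        def invoke(self, operation, *args):
            request = json.dumps({"object": self.object_id,
                                  "op": operation,
                                  "args": list(args)}).encode()
            with socket.create_connection(self.addr) as s:
                s.sendall(request)
                s.shutdown(socket.SHUT_WR)      # done sending; wait for reply
                reply = s.makefile("rb").read()
            return json.loads(reply)

    # a caller treats the remote object much as it would a local one:
    # nav = RemoteObjectProxy("processor-b", 9000, "nav-filter")
    # position = nav.invoke("current_position")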
this thirty-month project, initiated in december 1987 by the ogden air logistics command (oo-alc), is prototyping hardware and software for a possible advanced avionics architecture (a3) upgrade to the f-16a/b fire control and stores management systems. the solution devised for vcap is called the remote object paradigm (rop). this paper describes relevant features of the vcap architecture and application, explains the rop methodology, and presents performance statistics to show the adequacy of rop to support hard real-time requirements. k. mcquown reusable software components trudy levine automatic function placement in distributed storage systems khalil amiri david petrou greg ganger garth gibson how much time do software professionals spend communicating? s. l. sullivan regions: an abstraction for expressing array computation most array languages, including fortran 90, matlab, and apl, provide support for referencing arrays by extending the traditional array subscripting construct found in scalar languages. we present an alternative to subscripting that exploits the concept of _regions_---an index set representation that can be named, manipulated with high-level operators, and syntactically separated from array references. this paper develops the concept of region-based programming and describes its benefits in the context of an idealized array language called _rl._ we show that regions simplify programming, reduce the likelihood of errors, and enable code reuse. furthermore, we describe how regions accentuate the locality of array expressions and how this locality is important when targeting parallel computers. we also show how the concepts of region-based programming have been used in zpl, a fully-implemented practical parallel programming language in use by scientists and engineers. in addition, we contrast region-based programming with the array reference constructs of other array languages. bradford l. chamberlain e. christopher lewis calvin lin lawrence snyder letters to the editor corporate linux journal staff stop the presses: an amazing year phil hughes book review: successful c++ book saveen reddy caching considerations for generational garbage collection paul r. wilson michael s. lam thomas g. moher on the relationships among three software metrics automatable metrics of software quality appear to have numerous advantages in the design, construction and maintenance of software systems. while numerous such metrics have been defined, and several of them have been validated on actual systems, significant work remains to be done to establish the relationships among these metrics. this paper reports the results of correlation studies made among three complexity metrics which were applied to the same software system. the three complexity metrics used were halstead's effort, mccabe's cyclomatic complexity and henry and kafura's information flow complexity. the common software system was the unix operating system. the primary result of this study is that halstead's and mccabe's metrics are highly correlated while the information flow metric appears to be an independent measure of complexity. sallie henry dennis kafura kathy harris 64 bit oberon gunter dotzel hartmut goebel the ninja project jose e. moreira samuel p. midkiff manish gupta pedro v.
artigas peng wu george almasi gc: the data-flow graph format of synchronous programming based on an abstraction of the time as a discrete logical time, the synchronous languages, armed with a strong semantics, enable the design of safe real-time applications. some of them are of imperative style, while others are declarative. academic and industrial teams involved in synchronous programming defined together three intermediate representations, on the way to standardization: • ic, a parallel format of imperative style, • gc, a parallel format of data-flow style, • oc, a sequential format to describe automata. in this paper, we describe more specifically the format gc, and its links with the synchronous data-flow language signal. thanks to the first experimentations, gc reveals itself as a powerful representation for graph transformations, code production, optimization, hardware synthesis, etc. pascal aubry thierry gautier linux system administration mark komarinski the workspace manager: a change control system for apl rexford h. swain daniel f. jonusz portable profiling and tracing for parallel, scientific applications using c++ sameer shende allen d. malony janice cuny peter beckman steve karmesin kathleen lindlan quantitative comparison of power management algorithms yung-hsiang lu eui-young chung tajana šimunic luca benini giovanni de micheli letters to the editor corporate linux journal staff integrating program transformations in the memory-based synthesis of image and video algorithms in this paper we discuss the interaction and integration of two important program transformations in high-level synthesis---tree height reduction and redundant memory-access elimination. intuitively, these program transformations do not interfere with one another as they optimize different operations in the program graph and different resources in the synthesized system. however, we demonstrate that integration of the two tasks is necessary to better utilize available resources. our approach involves the use of a "meta-transformation" to guide transformation application as possibilities arise. results observed on several image and video benchmarks demonstrate that transformation integration increases performance through better resource utilization. david j. kolson alexandru nicolau nikil dutt parallelizing nonnumerical code with selective scheduling and software pipelining instruction-level parallelism (ilp) in nonnumerical code is regarded as scarce and hard to exploit due to its irregularity. in this article, we introduce a new code-scheduling technique for irregular ilp called "selective scheduling" which can be used as a component for superscalar and vliw compilers. selective scheduling can compute a wide set of independent operations across all execution paths based on renaming and forward-substitution and can compute available operations across loop iterations if combined with software pipelining. this scheduling approach has better heuristics for determining the usefulness of moving one operation versus moving another and can successfully find useful code motions without resorting to branch profiling. the compile-time overhead of selective scheduling is low due to its incremental computation technique and its controlled code duplication. we parallelized the spec integer benchmarks and five aix utilities without using branch probabilities.
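(the core step of computing operations available across all execution paths can be caricatured in a few lines; the sketch below only intersects syntactically identical operations and ignores the renaming and forward-substitution that make the real technique powerful, so it illustrates the idea rather than the algorithm.)

    def movable_above_branch(paths):
        # toy model: an operation occurring on every path leaving a branch
        # can be hoisted above it (operations are plain strings here; no
        # renaming, no data-dependence or speculation checks).
        return set.intersection(*(set(p) for p in paths))

    then_path = ["x = load p", "t = a + b", "c = t * 2"]
    else_path = ["x = load p", "d = a - b"]
    print(movable_above_branch([then_path, else_path]))   # {'x = load p'}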
the experiments indicate that a fivefold speedup is achievable on realistic resources with a reasonable overhead in compilation time and code expansion and that a solid speedup increase is also obtainable on machines with fewer resources. these results improve previously known characteristics of irregular ilp. soo-mook moon kemal ebcioglu technical opinion: comparing java vs. c/c++ efficiency differences to interpersonal differences lutz prechelt group consistency model which separates the intra-group consistency maintenance from the inter-group consistency maintenance in large scale dsm systems according to the characteristics of large scale network computing systems, we proposed a group consistency model based on the concept of group to construct a dsm system. the novel model can use different inter-group and intra-group consistencies and lend itself to flexible, easily-managable, and application-suitable dsm in large scale systems. a group consistency model, which applies entry consistency among groups and lazy release consistency in a group, together with its implementation policy is discussed in this paper. it employs write-update and multiple-writer protocols in a group, and thus facilitates the simultaneous read and write in a group. the suitable protocols eliminate the false sharing and reduce the data acquiring time in a group. furthermore, the inter-group consistency also suits the features of data sharing among groups and transmits the data modifications originated from a group in bulk to reduce the network traffic. in the end, an example using group consistency model is given and the trivial group consistency is discussed. qun li hua ji li xie storage and sequence association corporate high performance fortran forum object-oriented programming for embedded systems stuart maclean sean smith trip report: uist'90 the annual symposium on user interface software and technology snowbird, utah, october 3-5, 1990 ellis s. cohen process migration in demos/mp process migration has been added to the demos/mp operating system. a process can be moved during its execution, and continue on another processor, with continuous access to all its resources. messages are correctly delivered to the process's new location, and message paths are quickly updated to take advantage of the process's new location. no centralized algorithms are necessary to move a process. a number of characteristics of demos/mp allowed process migration to be implemented efficiently and with no changes to system services. among these characteristics are the uniform and location independent communication interface, and the fact that the kernel can participate in message send and receive operations in the same manner as a normal process. michael l. powell barton p. miller self organising software architectures jeff magee jeff kramer linux means business linux for internet business applications: a look at how one company is moving aheadusing linux to provide internet services to its clients uche ogbuji optview: a new approach for examining optimized code caroline tice susan l. graham a c-to-forth compiler alexander sakharov the astoot approach to testing object-oriented programs this article describes a new approach to the unit testing of object-oriented programs, a set of tools based on this approach, and two case studies. 
in this approach, each test case consists of a tuple of sequences of messages, along with tags indicating whether these sequences should put objects of the class under test into equivalent states and/or return objects that are in equivalent states. tests are executed by sending the sequences to objects of the class under test, then invoking a user-supplied equivalence-checking mechanism. this approach allows for substantial automation of many aspects of testing, including test case generation, test driver generation, test execution, and test checking. experimental prototypes of tools for test generation and test execution are described. the test generation tool requires the availability of an algebraic specification of the abstract data type being tested, but the test execution tool can be used when no formal specification is available. using the test execution tools, case studies involving execution of tens of thousands of test cases, with various sequence lengths, parameters, and combinations of operations were performed. the relationships among likelihood of detecting an error and sequence length, range of parameters, and relative frequency of various operations were investigated for priority queue and sorted-list implementations having subtle errors. in each case, long sequences tended to be more likely to detect the error, provided that the range of parameters was sufficiently large and likelihood of detecting an error tended to increase up to a threshold value as the parameter range increased. roong-ko doong phyllis g. frankl distributed communications communications among multiple ada programs is a must, either as an explicit link or as an invisible link hidden beneath a distributed ada veneer. unisys has defined two ada package specifications for inter-program communications, that are derived from the requirements developed at the third international workshop on real-time ada issues (rtaw3). unisys is implementing these packages on several embedded real-time testbeds, both as standalone capabilities for application developers and as the communication layer supporting a distributed ada system. these specifications are proposed as a basis for a secondary standard for inter-program communication for ada9x. this paper describes how these specifications match the rtaw3 requirements and the motivation for their features. some details of the specifications are interspersed throughout the text, with a full specification contained in the appendix. m. kamrad j. cross a concurrent, generational garbage collector for a multithreaded implementation of ml this paper presents the design and implementation of a "quasi real-time" garbage collector for concurrent caml light, an implementation of ml with threads. this two- generation system combines a fast, asynchronous copying collector on the young generation with a non-disruptive concurrent marking collector on the old generation. this design crucially relies on the ml compile-time distinction between mutable and immutable objects. damien doligez xavier leroy ufo: a personal global file system based on user-level extensions to the operating system in this article we show how to extend a wide range of functionality of standard operation systems completely at the user level. our approach works by intercepting selected system calls at the user level, using tracing facilities such as the /proc file system provided by many unix operating systems. the behavior of some intercepted system calls is then modified to implement new functionality. 
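(the flavour of this kind of user-level extension can be conveyed by a small python analogy; ufo itself attaches to unmodified binaries through /proc-based system-call tracing, whereas the sketch below merely wraps a library call inside one process, so it is only a loose illustration.)

    import builtins, io, urllib.request

    _real_open = builtins.open

    def global_open(path, mode="r", *args, **kwargs):
        # redirect "remote-looking" paths: fetch over http and hand back a
        # file-like object, so unchanged code sees remote files as local.
        if isinstance(path, str) and path.startswith("http://"):
            data = urllib.request.urlopen(path).read()
            return io.BytesIO(data) if "b" in mode else io.StringIO(data.decode())
        return _real_open(path, mode, *args, **kwargs)

    builtins.open = global_open
    # existing code can now call open("http://example.com/data.txt").read()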
this approach does not require any relinking or recompilation of existing applications. in fact, the extensions can even be dynamically "installed" into already running processes. the extensions work completely at the user level and install without system administrator assistance. individual users can choose what extensions to run, in effect creating a personalized operating system view for themselves. we used this approach to implement a global file system, called ufo, which allows users to treat remote files exactly as if they were local. currently, ufo supports file access through the ftp and http protocols and allows new protocols to be plugged in. while several other projects have implemented global file system abstractions, they all require either changes to the operating system or modifications to standard libraries. the article gives a detailed performance analysis of our approach to extending the os and establishes that ufo introduces acceptable overhead for common applications even though intercepting individual system calls incurs a high cost. albert d. alexandrov maximilian ibel klaus e. schauser chris j. scheiman at the forge: keeping programs trim with cgi_lite reuven lerner embedded documentation for semi-automatic program construction and software reuse the purpose of this paper is to present a mechanism to classify software components for software reuse and semi-automatic program construction. the paper goes on to describe how is the classification information incorporated into software components through flagged sentences in embedded software documentation. nenad marovac a convergent systems viewpoint on viewpoints jack c. wileden alan kaplan a taste of the modula-2 standard mark woodman reliability growth modeling from fault failure rates a variety of reliability growth models provide quantified measures of test effectiveness in terms that are directly relevant to project management [lyu96], but at the cost of restricting testing to _representative_ selection, in which test data is chosen to reflect the operational distribution of the program's inputs. during testing, data is collected on the observed times between program failures (or, similarly, numbers of failures within a time interval). these observations are fitted to one of various models, which can then be used to estimate the current reliability of the program. steven j. zeil incremental generation of lexical scanners it is common practice to specify textual patterns by means of a set of regular expressions and to transform this set into a finite automaton to be used for the scanning of input strings. in many applications, the cost of this preprocessing phase can be amortized over many uses of the constructed automaton. in this paper new techniques for lazy and incremental scanner generation are presented. the lazy technique postpones the construction of parts of the automaton until they are really needed during the scanning of input. the incremental technique allows modifications to the original set of regular expressions to be made and reuses major parts of the previous automaton. this is interesting in applications such as environments for the interactive development of language definitions in which modifications to the definition of lexical syntax and the uses of the generated scanners alternate frequently. j. heering p. klint j. 
rekers realistic compilation by partial evaluation michael sperber peter thiemann structured analysis and object oriented analysis the object-oriented paradigm still faces an open challenge: delivering huge software systems routinely and cost effectively. to quote ed yourdon: "a system composed of 100,000 lines of c++ is not to be sneezed at, but we don't have that much trouble developing 100,000 lines of cobol today. the real test of oop will come when systems of 1 to 10 million lines of code are developed." although the object-oriented community has an opening flirtation with exploratory programming and rapid prototyping by exploiting reuse via inheritance, there is for now, in my opinion, no hope that huge systems can be developed without giving due attention to what a target system is supposed to do, which should produce an (electronic) (graphical) (pseudo-formal) document, the requirements, that a customer can initially sign off. we believe as well that for huge systems a programming language independent design activity, that bridges the requirements and the actual programming effort, is mandatory. it goes without saying that we do not suggest that these activities constitute a waterfall sequence. consequently, the object-oriented community needs to address the question whether well established analysis techniques, like structured analysis, jackson's jsd, etc. can be reused for object-oriented system development or whether a dedicated object-oriented analysis (and design) method is called for. the panel members have been asked to consider the following question: what is the relationship between structured analysis (sa) and object oriented analysis (ooa)? more specifically: can sa be used effectively to produce the requirements for a system that will be designed and implemented in an oo fashion? if not, is it possible to adjust sa, and what needs to be added? if sa cannot be used at all, what is the key obstacle? in case sa and ooa have different applicability ranges, how do we circumscribe --- positively and negatively --- these ranges? any overlap? we appreciate that the organizing committee of this conference has selected this crucial topic. dennis de champeaux larry constantine ivar jacobson stephen mellor paul ward edward yourdon a periodic symmetrically-initiated load balancing algorithm for distributed systems k. benmohammed-mahieddine p. m. dew complex event processing (cep) david luckham cooking with linux matt walsh determinacy testing for nondeterminate logic programming languages this paper describes an algorithm for the code generation of determinacy testing for nondeterminate flat concurrent logic programming languages. languages such as andorra and pandora require that procedure invocations suspend if there is more than one candidate clause potentially satisfying the goal. the algorithm described has been developed specifically for a variant of flat pandora based on fghc, although the concepts are general. we have extended kliger and shapiro's decision-graph construction algorithm to compile "don't-know" procedures that must suspend for nondeterminate goal invocation. the determinacy test is compiled into a decision graph quite different from those of committed-choice procedures: they are more similar to decision trees optimized by code sharing. we present both empirical data of compilation results (code size and graph characteristics), and a correctness proof for our code-generation algorithm. e. tick m. korsloot generating java trace data steven p.
reiss manos renieris object-oriented programming transition strategies peter j. barclay stephen j. jackson design and implementation of generics for the .net common language runtime the microsoft.net common language runtime provides a shared type system, intermediate language and dynamic execution environment for the implementation and inter-operation of multiple source languages. in this paper we extend it with direct support for parametric polymorphism (also known as generics), describing the design through examples written in an extended version of the c# programming language, and explaining aspects of implementation by reference to a prototype extension to the runtime. our design is very expressive, supporting parameterized types, polymorphic static, instance and virtual methods, "f-bounded" type parameters, instantiation at pointer and value types, polymorphic recursion, and exact run-time types. the implementation takes advantage of the dynamic nature of the runtime, performing just-in-time type specialization, representation-based code sharing and novel techniques for efficient creation and use of run-time types. early performance results are encouraging and suggest that programmers will not need to pay an overhead for using generics, achieving performance almost matching hand-specialized code. andrew kennedy don syme higher-order concurrent programs with finite communication topology (extended abstract) concurrent ml (cml) is an extension of the functional language standard ml(sml) with primitives for the dynamic creation of processes and channels and for the communication of values over channels. because of the powerful abstraction mechanisms the communication topology of a given program may be very complex and therefore an efficient implementation may be facilitated by knowledge of the topology. this paper presents an analysis for determining when a bounded number of processes and channels will be generated. the analysis proceeds in two stages. first we extend a polymorphic type system for sml to deduce not only the type of cml programs but also their communication behaviour expressed as terms in a new process algebra. next we develop an analysis that given the communication behaviour predicts the number of processes and channels required during the execution of the cml program. the correctness of the analysis is proved using a subject reduction property for the type system. hanne riis nielson flemming nielson threads and input/output in the synthesis kernal the synthesis operating system kernel combines several techniques to provide high performance, including kernel code synthesis, fine-grain scheduling, and optimistic synchronization. kernel code synthesis reduces the execution path for frequently used kernel calls. optimistic synchronization increases concurrency within the kernel. their combination results in significant performance improvement over traditional operating system implementations. using hardware and software emulating a sun 3/160 running sunos, synthesis achieves several times to several dozen times speedup for unix kernel calls and context switch times of 21 microseconds or faster. h. massalin c. pu how best to provide the services is programmers need robert l. glass static grouping of small objects to enhance performance of a paged virtual memory james w. 
stamos the ravenscar tasking profile - experience reporting the ravenscar profile was defined at the 18th international real-time ada workshop as a simple subset of the tasking features of ada, in order to support efficient, high integrity applications that need to be analysed for their timing properties. ada compiler vendor aonix subsequently implemented the profile via its raven product line, and is currently engaged in producing the formal certification material for qualifying the runtime system to the most stringent set of guidelines --- do-178b level a. this paper describes some of the experiences in implementing and qualifying the profile, and some early indications from the user community of its value to address the application domains that it was intended for. brian dobbing george romanski bernhardt and garay's expanding literacies robert r. johnson a distributed object-oriented framework for dependable multiparty interactions in programming distributed object-oriented systems, there are several approaches for achieving binary interactions in a multiprocess environment. usually these approaches take care only of synchronisation or communication. in this paper we describe a way of designing and implementing a more general concept: multiparty interactions. in a multiparty interaction, several parties (objects or processes) somehow "come together" to produce an intermediate and temporary combined state, use this state to execute some activity, and then leave this interaction and continue their normal execution. the concept of multiparty interactions has been investigated by several researchers, but to the best of our knowledge none have considered how failures in one or more participants of the multiparty interaction can be dealt with. in this paper, we propose a general scheme for constructing dependable multiparty interactions in a distributed object-oriented system, and describe its implementation in java. in particular, we extend the notion of multiparty interaction to include facilities for handling exceptions. to show how our scheme can be used, we use our framework to build an abstraction mechanism that supports cooperative and competitive concurrency in distributed systems. this mechanism is then applied to program a system in which multiparty interactions are more than simple synchronisations or communications. a. f. zorzo r. j. stroud software engineering with ada in a new key: formalizing and visualizing the object paradigm george w. cherry object-oriented programming with flavors this paper describes symbolics' newly redesigned object-oriented programming system, flavors. flavors encourages program modularity, eases the development of large, complex programs, and provides high efficiency at run time. flavors is integrated into lisp and the symbolics program development environment. this paper describes the philosophy and some of the major characteristics of symbolics' flavors and shows how the above goals are addressed. full details of flavors are left to the programmers' manual, reference guide to symbolics common lisp. (5) david a. moon reuse (panel): truth or fiction paul mccollough bob atkinson adele goldberg martin griss john morrison critical slicing for software fault localization developing effective debugging strategies to guarantee the reliability of software is important. 
by analyzing the debugging process used by experienced programmers, we have found that four distinct tasks are consistently performed: (1) determining statements involved in program failures, (2) selecting suspicious statements that might contain faults, (3) making hypotheses about suspicious faults (variables and locations), and (4) restoring program state to a specific statement for verification. this research focuses support for the second task, reducing the search domain for faults, which we refer to as _fault localization._we explored a new approach to enhancing the process of fault localization based on dynamic program slicing and mutation-based testing. in this new approach, we have developed the technique of critical slicing to enable debuggers to highlight suspicious statements and thus to confine the search domain to a small region. the critical slicing technique is partly based on "statement deletion" mutant operator of the mutation-based testing methodology. we have explored properties of critical slicing, such as the relationship among critical slicing, dynamic program slicing, and executable static program slicing; the cost to construct critical slices; and the effectiveness of critical slicing. results of experiments support our conjecture as to the effectiveness and feasibility of using critical slicing for fault localization.this paper explains our technique and summarizes some of our findings. from these, we conclude that a debugger equipped with our proposed fault localization method can reduce human interaction time significantly and aid in the debugging of complex software. richard a. demillo hsin pan eugene h. spafford interactive editing systems: part i norman meyrowitz andries van dam scheduling and page migration for multiprocessor compute servers several cache-coherent shared-memory multiprocessors have been developed that are scalable and offer a very tight coupling between the processing resources. they are therefore quite attractive for use as compute servers for multiprogramming and parallel application workloads. process scheduling and memory management, however, remain challenging due to the distributed main memory found on such machines. this paper examines the effects of os scheduling and page migration policies on the performance of such compute servers. our experiments are done on the stanford dash, a distributed-memory cache-coherent multiprocessor. we show that for our multiprogramming workloads consisting of sequential jobs, the traditional unix scheduling policy does very poorly. in contrast, a policy incorporating cluster and cache affinity along with a simple page-migration algorithm offers up to two-fold performance improvement. for our workloads consisting of multiple parallel applications, we compare space-sharing policies that divide the processors among the applications to time-slicing policies such as standard unix or gang scheduling. we show that space-sharing policies can achieve better processor utilization due to the operating point effect, but time-slicing policies benefit strongly from user-level data distribution. our initial experience with automatic page migration suggests that policies based only on tlb miss information can be quite effective, and useful for addressing the data distribution problems of space-sharing schedulers. rohit chandra scott devine ben verghese anoop gupta mendel rosenblum semaphore primitives and starvation-free mutual exclusion eugene w. 
stark best of technical support corporate linux journal staff letter on erts and lego richard currey time management for real-time systems loïc briand a note on "protecting against uninitialized abstract objects in modula-2" m a torbett parsing with c++ deferred expressions damian conway the process of object-oriented design dennis de champeaux doug lea penelope faure lessons learned with acme an environment integration experiment john w. gintell gerard memmi reuse through inheritance: a quantitative study of c++ software according to proponents of object-oriented programming, inheritance is an excellent way to organize abstraction and a superb tool for reuse. yet, few quantitative studies of the actual use of inheritance have been conducted. quantitative studies are necessary to evaluate the actual usefulness of structures such as inheritance. we characterize the use of inheritance in 19 existing c++ software systems containing 2,744 classes. we measure the class depth in the inheritance hierarchies, and the number of child and parent classes in the software. we find that inheritance is used far less frequently than expected. james m. bieman josephine xia zhao porting scientific and engineering programs to linux one can compile scientific and engineering code under linux using free fortran 77 options charles t. kelsey gary l. masters applying domain and design knowledge to requirements engineering w. lewis johnson martin s. feather david r. harris "improving software engineering management practice through modeling and experimentation with enabling technologies" (track introduction) david rine toward boxology: preliminary classification of architectural styles mary shaw paul clements more on fortran 8x pointer proposals loren meissner self-reproducing programs in common lisp peter norvig on the scale and performance of cooperative web proxy caching while algorithms for cooperative proxy caching have been widely studied, little is understood about cooperative-caching performance in the large-scale world wide web environment. this paper uses both trace-based analysis and analytic modelling to show the potential advantages and drawbacks of inter-proxy cooperation. with our traces, we evaluate quantitatively the performance-improvement potential of cooperation between 200 small-organization proxies within a university environment, and between two large-organization proxies handling 23,000 and 60,000 clients, respectively. with our model, we extend beyond these populations to project cooperative caching behavior in regions with millions of clients. overall, we demonstrate that cooperative caching has performance benefits only within limited population bounds. we also use our model to examine the implications of future trends in web-access behavior and traffic. alec wolman m. voelker nitin sharma neal cardwell anna karlin henry m. levy tcolada and the "middle end" of the pqcc ada compiler a compiler is traditionally partitioned into a (mostly) machine independent front end which performs lexical, syntactic, and semantic analysis, and a machine dependent back end which performs optimization and code generation. in the ada compiler being implemented at carnegie-mellon university in the pqcc project, it is useful to identify a set of phases occurring at the start of the back end - i.e., "middle end" \- after semantic analysis but before optimization. 
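(a generic example of the kind of expansion such a middle end performs, not taken from the pqcc compiler itself, is lowering a high-level array-indexing node into an explicit bounds check plus address arithmetic, leaving a lower-level tree for the optimizer and code generator; the node shapes in the python sketch below are invented.)

    def expand_indexing(node, elem_size):
        # turn ("index", base, index, length) into a lower-level tree:
        # a bounds check followed by load(base + index * elem_size).
        op, base, index, length = node
        assert op == "index"
        check = ("check_bounds", index, length)
        addr = ("add", base, ("mul", index, ("const", elem_size)))
        return ("seq", check, ("load", addr))

    print(expand_indexing(("index", ("addr", "a"), ("var", "i"), ("const", 10)), 4))
    # ('seq', ('check_bounds', ('var', 'i'), ('const', 10)),
    #         ('load', ('add', ('addr', 'a'), ('mul', ('var', 'i'), ('const', 4)))))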
these phases, known collectively as "cwvm" (an abbreviation for "compiler writer's virtual machine") make basic representational choices and reflect these in an expanded program tree. this paper describes both tcolada \\- the intermediate language interface produced by the front end - and the phases comprising cwvm. tcolada is a graph structured high level representation of the source program which includes both the symbol table and the program tree. the cwvm phases perform transformations of the tcolada graph which fall into three categories: language oriented (e.g., expansion of checking for constructs such as array indexing), virtual machine oriented (e.g., translation of up-level addressing into "display" vector accesses), and actual machine oriented (e.g., expansion of component selection into address arithmetic). benjamin m. brosgol parallel searching for binary cartesian product files the problem of distributing buckets in a file among m disks to facilitate parallel searching for a set of queries is analysed in this paper. we are particularly concerned with the file distribution problem for binary cartesian product files, and partial match queries. a method is proposed and shown to be strict optimal under certain conditions. the performance of the proposed method is compared with those of an "ideal" strict optimal and du's heuristic allocation method. yuan y. sung migrating to linux, part 2 we continue with our look at converting an office from a commercial operating system to linux norman m. jacobowitz jim hebert viewing object as patterns of communicating agents following our own experience developing a concurrent object-oriented language as well of that of other researchers, we have identified several key problems in the design of a concurrency model compatible with the mechanisms of object- oriented programming. we propose an approach to language design in which an executable notation describing the behaviour of communicating agents is extended by syntactic patterns that encapsulate language constructs. we indicate how various language models can be accommodated, and how mechanisms such as inheritance can be modeled. finally, we introduce a new notion of types that characterizes concurrent objects in terms of their externally visible behaviour. oscar nierstrasz michael papathomas source control using vm/sp and cms john murray forth and the freebsd bootloader paul frenger using symbolic execution for verifying safety-critical systems safety critical systems require to be highly reliable and thus special care is taken when verifying them in order to increase the confidence in their behavior. this paper addresses the problem of formal verification of safety critical systems by providing empirical evidence of the practical applicability of symbolic execution and of its usefulness for checking safety-related properties. in this paper, symbolic execution is used for building an operational model of the software on which safety properties, expressed by means of a path description language (pdl), can be assessed. alberto coen-porisini giovanni denaro carlo ghezzi mauro pezze measurements of migratability and transportability stan skelton java paradigms for mobile agent facilities neelakantan sundaresan vinay rajagopalan configuration editing, generation and test within working contexts in a software development environment any progress is reflected in modifications of design documents. 
these changes must be attended by recording of versions in order to restore consistent states and to rebuild delivered systems for error detection. the introduction of versions implies the need for version selection mechanisms, to achieve the same degree of operability as known in versionless environments. this paper introduces a version selection mechanism based on the notion of working contexts. examples generated by the add document management system illustrate how editing, generation and test of configurations are eased using working contexts. hans-ulrich kobialka potentials and limitations of fault-based markov prefetching for virtual memory pages gretta bartels anna karlin darrell anderson jeffrey chase henry levy geoffrey voelker managing multi-variant software configuration peter j. nicklin debugging embedded systems implemented in c tom hand in search of design principles for programming environments stephanie houde royston sellman linux and unix shell programming marjorie richardson wrap a security blanket around your computer tcp_wrappers: a simple, elegant and effective means to safeguard your network services lee brotzman an organized, devoted, project-wide reuse effort many projects have struggled with attempting reuse; however, both size and complexity of a large project create additional obstacles that can minimize the potential for software reuse. therefore, successful software reuse cannot be guaranteed solely by the use of ada, object-oriented development methods, taxonomies, storage and retrieval systems, commercial libraries, software methods, standards, or configuration management systems! it was proposed [bowen, 1990] that in order for software reuse to succeed on large projects, these methods and techniques must be combined in a coordinated reuse effort. the software process must address issues related to the management of the development effort, in addition to addressing purely technical issues. this paper will present how the concept of an organized, devoted, project-wide reuse effort has been practically applied by computer sciences corporation (csc), over the past year on the advanced automation system (aas), as part of the team led by international business machines (ibm) corporation. gregory m. bowen experiments with computer software complexity and reliability experiments with quantitative assessment and prediction of software reliability are presented. the experiments are based on the analysis of the error and the complexity characteristics of a large set of programs. the first part of the study concerns the data collection process and the analysis of the error data and complexity measures. the relationships between the complexity profile and the error data of the procedures of the programs are then investigated with the help of the discriminant statistical analysis technique. the results of these analyses show that an estimation of the error proneness of a procedure can be derived from the analysis of its complexity profile. d. potier j. l. albin r. ferreol a. bilodeau five principles for the formal validation of models of software metrics lem o. ejiogu the silicon palimpsest: a programming model for electrically reconfigurable processors charles johnsen david l. fox evaluating software engineering methods and tools: part 9: quantitative case study methodology this article is the first of three articles describing how to undertake a quantitative case study based on work done as part of the desmet project [1], [2].
in the context of methods and tool evaluations, case studies are a means of evaluating methods and tools as part of the normal software development activities undertaken by an organisation. the main benefit of such case studies is that they allow the effect of new methods and tools to be assessed in realistic situations. thus, case studies provide a cost-effective means of ensuring that process changes provide the desired results. however, unlike formal experiments and surveys, case studies do not have a well-understood theoretical basis. this series of articles provides guidelines for organising and analysing case studies so that your investigations of new technologies will produce meaningful results. barbara ann kitchenham lesley m. pickard programming pearls jon bentley bob floyd the performance of job classes with distinct policy functions manfred ruschitzka efficient shared memory with minimal hardware support shared memory is widely regarded as a more intuitive model than message passing for the development of parallel programs. a shared memory model can be provided by hardware, software, or some combination of both. one of the most important problems to be solved in shared memory environments is that of cache coherence. experience indicates, unsurprisingly, that hardware-coherent multiprocessors greatly outperform distributed shared-memory (dsm) emulations on message-passing hardware. intermediate options, however, have received considerably less attention. we argue in this position paper that one such option---a multiprocessor or network that provides a global physical address space in which processors can make non-coherent accesses to remote memory without trapping into the kernel or interrupting remote processors---can provide most of the performance of hardware cache coherence at little more monetary or design cost than traditional dsm systems. to support this claim we have developed the _cashmere_ family of software coherence protocols for ncc-numa (non-cache-coherent, non-uniform-memory access) systems, and have used execution-driven simulation to compare the performance of these protocols to that of full hardware coherence and distributed shared memory emulation. we have found that for a large class of applications the performance of ncc-numa multiprocessors rivals that of fully hardware-coherent designs, and significantly surpasses the performance realized on more traditional dsm systems. leonidas i. kontothanassis michael l. scott "alfonse, wait here for my signal!" stephen j. hartley kernel korner: the sysctl interface alessandro rubini compositional parallel programming languages in task-parallel programs, diverse activities can take place concurrently, and communication and synchronization patterns are complex and not easily predictable. previous work has identified compositionality as an important design principle for task-parallel programs. in this article, we discuss alternative approaches to the realization of this principle, which holds that properties of program components should be preserved when those components are composed in parallel with other program components. we review two programming languages, strand and program composition notation, that support compositionality via a small number of simple concepts, namely, monotone operations on shared objects, a uniform addressing mechanism, and parallel composition.
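(a minimal example of such a monotone shared object is a single-assignment cell, whose state only ever moves from "unknown" to one definite value; the python sketch below illustrates the concept and is not code from strand or program composition notation.)

    import threading

    class WriteOnceCell:
        # a single-assignment variable: it may be bound at most once, and
        # readers block until the value exists, so no reader can ever
        # observe two conflicting states.
        def __init__(self):
            self._ready = threading.Event()
            self._lock = threading.Lock()
            self._value = None

        def put(self, value):
            with self._lock:
                if self._ready.is_set():
                    raise ValueError("cell already written")
                self._value = value
                self._ready.set()

        def get(self):
            self._ready.wait()
            return self._value

    cell = WriteOnceCell()
    threading.Thread(target=lambda: cell.put(10)).start()
    print(cell.get())   # 10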
both languages have been used extensively for large-scale application development, allowing us to provide an informed assessment of both their strengths and their weaknesses. we observe that while compositionality simplifies development of complex applications, the use of specialized languages hinders reuse of existing code and tools and the specification of domain decomposition strategies. this suggests an alternative approach based on small extensions to existing sequential languages. we conclude the article with a discussion of two languages that realized this strategy. ian foster a method for recording and analyzing design processes tsuyoshi nakajima related field analysis we present an extension of field analysis (sec [4]) called _related field analysis_ which is a general technique for proving relationships between two or more fields of an object. we demonstrate the feasibility and applicability of related field analysis by applying it to the problem of removing array bounds checks. for array bounds check removal, we define a pair of related fields to be an integer field and an array field for which the integer field has a known relationship to the length of the array. this related field information can then be used to remove array bounds checks from accesses to the array field. our results show that related field analysis can remove an average of 50% of the dynamic array bounds checks on a wide range of applications. we describe the implementation of related field analysis in the swift optimizing compiler for java, as well as the optimizations that exploit the results of related field analysis. aneesh aggarwal keith h. randall performance assertion checking sharon e. perl william e. weihl standardizing reuse roy rada james moore for text files t. zimmer melding data flow and object-oriented programming meld combines concepts from data flow and object-oriented programming languages in a unique approach to tool reusability. meld provides three units of abstraction --- equations, classes and features --- that together allow sufficient options for granularity and encapsulation to support the implementation of reusable tools and the composition of existing tools in parallel (i.e., interleaved) as well as in series. gail e. kaiser david garlan notes on "a methodology for implementing highly concurrent data objects" joseph p. skudlarek letters to the editor corporate linux journal staff status of dapse distributed ada support s boyd static analysis to reduce synchronization costs in data-parallel programs manish gupta edith schonberg imperative functional programming u. s. reddy a new notion of encapsulation generally speaking, a "module" is used as an "encapsulation mechanism" to tie together a set of declarations of variables and operations upon them. although there is no standard way to instantiate or use a module, the general idea is that a module describes the implementation of all the values of a given type. we believe that this is too inflexible to provide enough control: one should be able to use different implementations (given by different modules) for variables (and values) of the same type. when incorporated properly into the notation, this finer grain of control allows one to program at a high level of abstraction and then to indicate how various pieces of the program should be implemented. it provides simple, effective access to earlier-written modules, so that they are useable in a more flexible manner than is possible in current notations. 
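(the idea of choosing a different implementation module for different variables of the same abstract type can be mimicked roughly as follows; the python sketch is only an analogy, since the paper's point is a notation in which such choices are indicated separately from the abstract program.)

    class ListSet:
        # one implementation module for the abstract type "set"
        def __init__(self):
            self.items = []
        def insert(self, x):
            if x not in self.items:
                self.items.append(x)
        def member(self, x):
            return x in self.items

    class HashSet:
        # a second implementation module for the same abstract type
        def __init__(self):
            self.items = set()
        def insert(self, x):
            self.items.add(x)
        def member(self, x):
            return x in self.items

    # the program is written against the abstract type; which module backs
    # each variable is chosen declaration by declaration:
    scratch_set = ListSet()     # a handful of elements: a plain list will do
    symbol_table = HashSet()    # large and heavily used: pick the hashed module
    for s in (scratch_set, symbol_table):
        s.insert("x")
        assert s.member("x")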
it generalizes to provide the ability to indicate structural transformations, in a disciplined fashion, in order to achieve efficiency with respect to time or space. however, the program will still be understood at the abstract level and the transformations or implementations will be looked at only to deal with efficiency concerns. finally, some so-called "data types" (e.g. stack and subranges of the integers) can more properly be looked upon simply as restricted implementations of more general types (e.g. sequence and integer). thus, the notion of subtype becomes less important. david gries jan prins a specification-based adaptive test case generation strategy for open operating system standards aki watanabe ken sakamura the spin-off illusion: reuse is not a by-product based on the desire of enterprise management to find a smooth, low-risk way of introducing reuse methodology, this paper discusses several approaches taken. the illusion of using the by-product of software development to obtain reusable assets is the subject of the second approach. the third successful approach follows the software factory paradigm and has been successfully implemented at several ibm sites. michael wasmund consistency checking for multiple view software architectures consistency is a major issue that must be properly addressed when considering multiple view architectures. in this paper, we provide a formal definition of views expressed graphically using diagrams with multiplicities and propose a simple algorithm to check the consistency of diagrams. we also put forward a simple language of constraints to express more precise (intra-view and inter-view) consistency requirements. we sketch a complete decision procedure to decide whether diagrams satisfy a given constraint expressed in this language. our framework is illustrated with excerpts of a case study: the specification of the architecture of a train control system. pascal fradet daniel le metayer michael perin exploring issues of operating systems structuring: from microkernel to extensible systems the microkernel concept was once the most advocated approach for operating system development. unfortunately, before its publicized advantages have been fully realized in an operating system implementation, current operating system researchers claim its weaknesses and make their way toward developing "extensible" operating systems. new operating systems like spin, aegis, cache kernel, apertos and scout employ new concepts to support application specific customization and optimal allocation of system resources, in order to boost the performance of certain applications. the microkernel concept in itself never contradicts this purpose, as it is to provide basic efficient primitives for construction of system services. probably, the crucial problem is that the os architecture of most current microkernel implementations cannot suitably meet the new requirements of extensibility. in this paper, we try to explore issues in developing a better os architecture that can fully enhance os extensibility. moreover, we investigate how microkernel abstraction can be remodeled to support better reconfiguration in operating systems. to cope with the conflicting issues of efficiency, flexibility and ease of reconfiguration, we suggest and discuss an approach to structuring operating systems. the approach is characterised by a lightweight meta-abstraction mechanism and progressive reflective reconfiguration. w. h. cheung anthony h. s.
loong efficient context-sensitive pointer analysis for c programs this paper proposes an efficient technique for context-sensitive pointer analysis that is applicable to real c programs. for efficiency, we summarize the effects of procedures using partial transfer functions. a partial transfer function (ptf) describes the behavior of a procedure assuming that certain alias relationships hold when it is called. we can reuse a ptf in many calling contexts as long as the aliases among the inputs to the procedure are the same. our empirical results demonstrate that this technique is successful---a single ptf per procedure is usually sufficient to obtain completely context- sensitive results. because many c programs use features such as type casts and pointer arithmetic to circumvent the high-level type system, our algorithm is based on a low-level representation of memory locations that safely handles all the features of c. we have implemented our algorithm in the suif compiler system and we show that it runs efficiently for a set of c benchmarks. robert p. wilson monica s. lam generics and verification in ada this paper explores the restrictions a mechanism in the style of the ada generics facility would have to satisfy in order to be amenable to existing verification techniques. "generic verification" is defined and defended as the appropriate goal for any such facility. criteria are developed for generic verification to be possible and then ada is evaluated with respect to these criteria. an example of the application of these techniques to an ada unit is presented to show that generic verification is possible at least on a subclass of ada generic units. finally some potential applications of verified generic units are presented. william d. young donald i. good sound polymorphic type inference for objects jonathan eifrig scott smith valery trifonov architectural modeling in industry - an experience report juha kuusela alessandro maccari jianli xu observations on the portability of ada i/o edmund r. matthews a simple implementation technique for mixin inheritance michael e. goldsby profile driven weighted decomposition karen a. tomko edward s. davidson the last word stan kelly-bootle npic - new paltz interprocedural compiler michael hind a scary tale - sperry avionics module-testing bites the dust? n leveson the specification and testing of quantified progress properties in distributed systems _there are two basic parts to the behavioral specification of distributed systems: safety and progress. in earlier work, we developed a tool to monitor progress properties of corba components specified using the temporal operator_ transient. _in this paper, we address the specification and testing of transient properties that are quantified (over both bounded and unbounded domains)._ _we categorize typical quantifications that arise in practical systems and discuss possible implementation strategies. we define functional transience, a subclass of quantified transient properties that can be monitored in constant space and time. we outline the design and implementation of a tool for testing these properties in corba components._ prakash krishnamurthy paolo a. g. sivilotti object orientation and fortran 2002: part i malcolm cohen integrating software tool communication within an environment peraphon sophatsathit joseph urban acm algorithms policy fred t. krogh commonobjects: object-oriented programming in common lisp alan snyder automated regression test generation bogdan korel ali m. 
al-yami prolog dialects: a deja vu of basics r a sosnowski design and implementation of a configurable mixed-media file system silvano maffeis representing software systems architectures richard f. hilliard more letters corporate linux journal staff load distribution and balancing support in a workstation-based distributed system in distributed systems, load distribution and balancing are primary functions aimed at improving system performance and user convenience. incoming task allocation and remote process execution are the main responsibilities of a well-designed system seeking such performance improvements. both aspects involve a number of non-trivial tasks. as a basis for further automatic system decisions in a distributed environment, this paper proposes a user-supervised processor allocation scheduler and shows which information should be collected, and when and how to collect and disseminate it, to support the user's decisions. the main problems to be considered when implementing remote process execution are discussed, and a design for an alternative system attempting to solve these problems is also shown. d. arredondo m. errecalde f. piccoli m. printista r. gallard s. flores public case-base and tool kit using a validated rrm software systems are important, yet poorly understood entities. users, and at times even developers, do not always understand what is occurring within the software in use. the exploratory visualization project attempts to address the technical issues involved in helping users understand running computations, especially long-lived distributed computations. the three facets of this problem that we investigate in our project are 1) creating accurate views of a running execution, 2) providing comprehensive and efficient access to a computation, and 3) responding to the user's interactions to promote understanding and optimize data collection. d. rine n. nada supporting the process of satisfying information needs with reusable software libraries: an empirical study retrieval tools for component-based software reuse libraries face two interrelated problems. the first is the ill-defined nature of information needs. the second is that large repositories will often use unfamiliar and esoteric vocabulary to describe software components. codefinder, a retrieval system designed to help developers locate software components for reuse, addresses these issues through an innovative combination of retrieval by reformulation and spreading activation. an empirical study comparing codefinder with two other systems showed evidence that subjects using codefinder with ill-defined tasks or mismatching vocabulary performed better than subjects using the other systems. the study confirmed the utility of spreading activation and retrieval by reformulation techniques for satisfying information needs of the kind encountered in software design. scott henninger the implementation of set functions in apl: a proposal this paper has been translated from "l'implantation de fonctions pour les ensembles en apl (une proposition)" (_les nouvelles d'apl,_ no. 22, march 1997, pp. 81-86). proposals are formulated for the implementation of the primitive set functions: unique, union, intersection, and difference. user-defined functions are given, simulating those proposed primitive functions, illustrated with some examples. a note added to a letter to the editor published in _les nouvelles d'apl_ [1] emphasizes the need to extend apl with primitive functions for handling sets.
definitions are proposed in this paper for such primitives: the monadic function unique and the dyadic functions union, intersection, and difference. user defined functions are given which are equivalent to those proposed primitives, illustrated with some examples. concepts are based on those described by d. livingstone and by the authors referenced in his paper [2]. the number of occurrences of the items in the results are adapted to the needs imposed by handling relational database algebra [2]. joseph de kerf objects for changeable systems magnus christerson schematic pseudocode for program constructs and its computer automation by schemacode to achieve program control flow representation that is relatively independent of any given programming language, schematic pseudocode (spc) uses a perceptual notation system composed of schemata whose syntax rules are described by a grammar. source code documentation is supported by operational comments, and translation into a target procedural language is fully automatic. pierre n. robillard apl-berlin-2000 - an observation garth foster structural specification-based testing: automated support and experimental evaluation in this paper, we describe a testing technique, called structural specification-based testing (sst), which utilizes the formal specification of a program unit as the basis for test selection and test coverage measurement. we also describe an automated testing tool, called adlscope, which supports sst for program units specified in sun microsystems' assertion definition language (adl). adlscope automatically generates coverage conditions from a program's adl specification. while the program is tested, adlscope determines which of these conditions are covered by the tests. an uncovered condition exhibits aspects of the specification inadequately exercised during testing. the tester uses this information to develop new test data to exercise the uncovered conditions. we provide an overview of sst's specification- based test criteria and describe the design and implementation of adlscope. specification-based testing is guided by a specification, whereby the testing activity is directly related to what a component under test is supposed to do, rather than what it actually does. specification-based testing is a significant advance in testing, because it is often more straightforward to accomplish and it can reveal failures that are often missed by traditional code-based testing techniques. as an initial evaluation of the capabilities of specification- based testing, we conducted an experiment to measure defect detection capabilities, code coverage and usability of sst/adlscope; we report here on the results. juei chang debra j. richardson experiences in building c++ front end fuqing yang hong mei wanghong yuan qiong wu yao guo a laboratory for teaching object oriented thinking it is difficult to introduce both novice and experienced procedural programmers to the anthropomorphic perspective necessary for object-oriented design. we introduce crc cards, which characterize objects by class name, responsibilities, and collaborators, as a way of giving learners a direct experience of objects. we have found this approach successful in teaching novice programmers the concepts of objects, and in introducing experienced programmers to complicated existing designs. k. beck w. 
cunningham adapting unix for a multiprocessor environment existing unix data protection and synchronization mechanisms present difficulties when adapting unix to a multiprocessor environment, but solutions do exist. m. d. janssens j. k. annot a. j. van de goor reflective facilities in smalltalk-80 computational reflection makes it easy to solve problems that are otherwise difficult to address in smalltalk-80, such as the construction of monitors, distributed objects, and futures, and can allow experimentation with new inheritance, delegation, and protection schemes. full reflection is expensive to implement. however, the ability to override method lookup can bring much of the power of reflection to languages like smalltalk-80 at no cost in efficiency. b. foote r. e. johnson encapsulating objects with confined types object-oriented languages provide little support for encapsulating objects. reference semantics allows objects to escape their defining scope. the pervasive aliasing that ensues remains a major source of software defects. this paper introduces kacheck/j, a tool for inferring object encapsulation properties in large java programs. our goal is to develop practical tools to assist software engineers; thus we focus on simple and scalable techniques. kacheck/j is able to infer _confinement_ for java classes. a class and its subclasses are confined if all of their instances are encapsulated in their defining package. this simple property can be used to identify accidental leaks of sensitive objects. the analysis is scalable and efficient; kacheck/j is able to infer confinement on a corpus of 46,000 classes (115 mb) in 6 minutes. christian grothoff jens palsberg jan vitek multithreading in c++ john english architecting for large-scale systematic component reuse martin l. griss modal types as staging specifications for run-time code generation philip wickline peter lee frank pfenning rowan davies session 10b: programming methodologies m. zelkowitz a new resource for c++ programmers and an invitation for participation g. bowden wise a synchronization mechanism for typed objects in a distributed system d. decouchant s. krakowiak m. meysembourg m. riveill x. r. de pina cobol: is it dying - or thriving? robert l. glass ensemble: a graphical user interface development system for the design and use of interactive toolkits user interface development systems (uids), as opposed to user interface management systems or ui toolkits, focus on supporting the design and implementation of the user interface. this paper describes ensemble, an experimental uids that begins to explore the electronic creation of interaction techniques as well as the corresponding design processes. issues related to the impact on the components of the development system are discussed. finally, problems with the current implementation and future directions are presented. m. k. powers the "catch 22" of re-engineering w. kozaczynski building an apl2 x-windows interface for vm and aix with a general apl2-to-c interface this paper describes apl2/x, an interface between x-windows and apl2, designed and built at the ibm cambridge scientific center. it currently runs under vm/cms and aix. the apl2/x vm version of the interface uses the apl2 associated processor 11 to communicate with x. the apl2/x aix version uses a new auxiliary processor, ap144, to achieve the same functionality. apl2/x enables all of the x-windows xlib calls and data structures for use from the apl2 environment.
in so doing, it enables apl2 to make use of a true windowing environment. several apl2 sample programs using the interface have been coded to illustrate and validate the interface.the basic x-windows xlib is actually a large application written in c. apl2/x has therefore been built upon, as well as heavily influenced, a general apl2-to-c interface that runs on various platforms: apl2 under vm/cms and mvs/tso, apl2 for the risc system/6000 under aix, and apl2/pc under pc/dos. john r. jensen kirk a. beaty cost: common object support toolkit cass w. everitt john van der zwaag robert j. moorhead dynamic semantic specification by two-level grammars for a block structured language with subroutine parameters m. badii f. abdollahzadeh multilevel exit and cycle aren't so bad w van snyder introducing granularity-dependent quantitative distance and diameter measures in common-sense reasoning contexts in this paper, i will present a method for constructing correlatedseries of granularity-dependent distance and diameter measures onthe basis of a theory of qualitative spatial concepts. eachgranularity-dependent measure function has as its range a discretesubset of r. as we proceed through a series of such functions, thedistance between the values in the range will become smaller andthe measurements returned by the functions will becomecorrespondingly more precise. my method for constructing the seriesof functions is partially based on work in kranz, luce, suppes, andtversky, _the foundations of measurement_, vol. 1. i use aresult from that volume to prove that my series of distance anddiameter measures converge to continuous, extensive distance anddiameter measures. but it is the discrete measures in the series,not the continuous limit measures, that should be used in analysesof common-sense concepts. unlike the continuous measure functions,arbitrary values for the discrete measure functions can, in mostcontexts, be determined through practical procedures. moreover, theability to move from one granularity-level to the next isappropriate for common-sense contexts in which the level ofprecision is typically kept at the minimum necessary to accomplishthe task at hand. maureen donnelly toward an objected-oriented forth leonard zettel technical correspondence: steensgaard-madsen's reply j. steensgaard-madsen a proposal for blocks and exits in apl a simple method of expressing structured programming constructs is proposed which avoids the need for the large number of keywords found in most structured languages. with this simple method, a countably infinite series of constructs may be formed. it is demonstrated that all conventional structured programming constructs may be expressed using only the constructs corresponding to -1, 0, 1, and 2. jim p. fiegenschue availability in the sprite distributed file system mary baker john ousterhout design of concurrent software bo sanden lazy versus strict philip wadler graphics for object oriented software engineering (workshop session) edward v. barard extending the bash prompt terminal and xterm prompts can be created incorporating standard escape sequences to give user name, current working directory, time and more giles orr virtual memory on systems without hardware support d. weinstein problem dynamics and working set principle as applied to concurrent processing for fast and efficient problem solving we need to not only exploit concurrency in the problem structure but also the problem dynamics which allows one to use result sharing and partial evaluation [kri 87]. 
intuitively it is clear that the interplay of problem structure and problem dynamics can be exploited to identify only the essential part of a computation as it evolves over a period of time. ignoring dynamics may result in redundant and hence inefficient, slow, and expensive computation. the notion of working set in the context of efficient processing of programs was introduced by denning [den 68]. for cost-effective computation it is essential to have no more computational resources than that demanded by the average working set of the computation. similarly, for performance reasons, the amount of computational resources must be at least equal to that demanded by the working set of the computation. thus, the idea of generating a working set and scheduling computational tasks from the working set is key to both cost- effective and high-performance computations. the working set model based on locality properties has been extensively studied in [den 68] which assumes sequential computational model. the design and implementation of a working set dispatcher is discussed in [ros 73]. the working set of processes in running state are computed periodically and the paging algorithm never chooses replacement pages from the working set of running processes. the notion of working set as applied to concurrent processing is proposed in [tok 83]. the contribution towards working set here is two-fold: due to principle of locality and simultaneity of execution. unlike in a sequential machine, where the computation flow is more like a thread, in a concurrent machine the computation flow is like a wavefront on the data flow graph. a potentially costly component of a data flow machine is the mechanism to recognize when an instruction node has all of its operands available. for cost and performance reasons only the working set of nodes need to be checked. the working set of computation need to be automatically created based on the structure of the data flow graph of the computation and node fetch and replacement policies. when problem dynamics is exploited in the form of result sharing and better decomposition techniques, as considered in [kri 87], the potential for efficient problem solving increases by supporting the working set creation and dispatch logic. we need both fast and dynamic analysis of the computation graph in order to identify the working set of computation. some of the issues involved in this respect are discussed. s. krishnaprasad fourth generation problems r. j. casimir manufacturing cheap, resilient, and stealthy opaque constructs christian collberg clark thomborson douglas low book review: managing afs: andrew file system daniel lazenby metah steve vestal address trace compression through loop detection and reduction e. n. elnozahy using c in cs1: evaluating the stanford experience eric s. roberts an iterative-design model for reusable object-oriented software we present an iterative-design approach for reusable object-oriented software that augments existing design methods by incorporating iteration into the design methodology and focuses on the set of problems within the domain, encouraging reuse of existing design information. the model has five separate stages which are described, before an example design is outlined using the model with sample code constructs in c++. our results have shown a high degree of code reuse when using the model, directly attributable to two distinct design stages. an analysis of these results is also presented. 
sanjiv gossain bruce anderson obtaining sequential efficiency for concurrent object-oriented languages concurrent object-oriented programming (coop) languages focus the abstraction and encapsulation power of abstract data types on the problem of concurrency control. in particular, pure fine-grained concurrent object- oriented languages (as opposed to hybrid or data parallel) provides the programmer with a simple, uniform, and flexible model while exposing maximum concurrency. while such languages promise to greatly reduce the complexity of large-scale concurrent programming, the popularity of these languages has been hampered by efficiency which is often many orders of magnitude less than that of comparable sequential code. we present a sufficiency set of techniques which enables the efficiency of fine- grained concurrent object-oriented languages to equal that of traditional sequential languages (like c) when the required data is available. these techniques are empirically validated by the application to a coop implementation of the livermore loops. john plevyak xingbin zhang andrew a. chien composition based object-oriented software development (coosd) methodology and supporting tools (abstract) jan bosch programming ecology or apl and the world at large apl, like any programming language, interacts with various environments and individuals. how it does so and how it evolves in response to these external influences are important to the continuing health and survival of the language. it can also have a profound effect on the evolution of these same environments and individuals, since apl is a part of their universe, just as they are part of its universe. this paper examines the demands and opportunities of interacting with these external influences and the means by which apl has tried to deal with them as a set of mutually interdependent systems, i.e., an ecology. it also proposes some future evolutionary directions for apl which should not only insure its survival but gain it a dominant position in the programming ecology. not the least conclusion is that apl must be responsible for its own success. jim lucas a comment on s. kang's and h. lee's paper on "analysis and solution of non- preemptive policies for scheduling readers and writers" (osr 32(2)) winfried e. k software review: to warp or not to warp scott ramsay macdonald focus on embedded systems rick lehrbaum uims support for direct manipulation interfaces s e hudson a geometrical data-parallel language jean-luc dekeyser dominique lazure philippe marquet design of the opportunistic garbage collector the opportunistic garbage collector (ogc) is a generational garbage collector for stock hardware and operating systems. while incorporating important features of previous systems, the ogc includes several innovations. a new bucket brigade heap organization supports advancement thresholds between one and two scavenges, using only two or three spaces per generation, and without requiring per-object counts. opportunistic scavenging decouples scavenging from the filling of available memory, in order to hide potentially disruptive scavenge pauses and improve efficiency. card marking efficiently records which small areas of the heap may contain pointers into younger generations, and is supported by a refinement of the crossing map technique, to enable scanning of arbitrary cards. p. r. wilson t. g. moher using annotated interface definitions to optimize rpc bryan ford mike hibler jay lepreau on "fortran reliability" (editorial) loren p. 
meissner towards an omg idl mapping for common lisp tom mowbray kendall white detecting uninitialized modula-2 abstract objects s greenfield r norton are file names enough? walter g. piotrowski concurrent haskell simon peyton jones andrew gordon sigbjorn finne gao report fgmsd-80-4 revisited b i blum overloading in preliminary ada ada permits the overloading of enumeration literals, aggregates, subprograms and operators, i.e. the declaration of the same designator with different meanings in the same scope. this leads to difficulties during the semantic analysis of expressions and subprogram calls. for selecting the meaning not only the designator but also the types of its operands or parameters and the type of its result must be used. we show that the identification of expressions is possible in two passes, the first bottom-up, the second top- down. guido persch georg winterstein manfred dausmann sophia drossopoulou objekt - a persistent object store with an integrated garbage collector d m harland b beloff reuse technologies and their niches ted j. biggerstaff the concurrent object-oriented language braid matthew huntbach extended pascal - illustrative features d a joslin programming practices: analysis of ada source developed for the air force, army,and navy in this paper, we discuss programming practices that have been applied to the development of ada source for the air force, army, and navy. we focus on practices shared by various developers producing ada software for different military applications. we identified these practices by applying automated, hierarchical, ada- specific metrics frameworks to the analysis of segments of ada source for several military projects. in total, over 2 million text lines of ada source were analyzed, with the amount of code examined for any individual project ranging from a few thousand to more than a million lines. our discussion includes the relationship of these practices to 1) the augmentation or attenuation of quality, 2) published guidelines and coding standards for ada, 3) programming practices for other von neumann languages, and 4) excerpts of analyzed ada source. j. perkins data locality and load balancing in cool large-scale shared memory multiprocessors typically support a multilevel memory hierarchy consisting of per-processor caches, a local portion of shared memory, and remote shared memory. on such machines, the performance of parallel programs is often limited by the high latency of remote memory references. in this paper we explore how knowledge of the underlying memory hierarchy can be used to schedule computation and distribute data structures, and thereby improve data locality. our study is done in the context of cool, a concurrent object- oriented language developed at stanford. we develop abstractions for the programmer to supply optional information about the data reference patterns of the program. this information is used by the runtime system to distribute tasks and objects so that the tasks execute close (in the memory hierarchy) to the objects they reference. we demonstrate the effectiveness of these techniques by applying them to several applications chosen from the splash parallel benchmark suite. our experience suggests that improving data locality can be simple through a combination of programmer abstractions and smart runtime scheduling. rohit chandra anoop gupta john l. hennessy the illinois functional programming interpreter a. d. robison a recursive interpreter for the icon programming language j. o'bagy r. e. 
griswold testing for linear errors in nonlinear computer programs faten h. afifi lee j. white steven j. zeil on the use of passive tasks in ada r. h. pierce reusable real-time executive in ada design issues alejandro alonso juan a. de la puente a note on the speed of prolog j. paakki evolutionary steps toward a distributed operating system: theory and implementation allison j. mull p. tobin maginnis automatic generation and use of abstract structure operators tim sheard an extensible programming environment for modula-3 this paper describes the design and implementation of a practical programming environment for the modula-3 programming language. the environment is organised around an extensible intermediate representation of programs and makes extensive use of reusable components. the environment is implemented in modula-3 and exploits some of the novel features of the language. mick jordan programming with the xforms library, part 1 thor sigvaldason data-localization for fortran macro-dataflow computation using partial static task assignment akimasa yoshida kenichi koshizuka hironori kasahara the use of goals to surface requirements for evolving systems annie i. antón colin potts local package instances are not equivalent to generic formal package parameters jun shen gordon v. cormack dominic duggan stop the presses michael k. johnson object-oriented metrics: an annotated bibliography robin whitty upfront corporate linux journal staff using formal procedure parameters to represent and transmit complex data structures niklas holsti automatic performance prediction to support cross development of parallel programs matthias schumann an expanded view of messages donald firesmith interview with orest zborowski phil hughes tachyon common lisp: an efficient and portable implementation of cltl2 tachyon common lisp is an efficient and portable implementation of common lisp 2nd edition. the design objective of tachyon is to apply both advanced optimization technology developed for risc processors and lisp optimization techniques. the compiler generates very fast codes comparable to, and sometimes faster than the code generated by unix c compiler. comparing with the most widely used commercial common lisp, tachyon common lisp compiled code is 2 times faster and the interpreter is 6 times faster than the lisp in gabriel benchmark suit. tachyon common lisp is the fastest among the lisp systems known to the authors. atsushi nagasaka yoshihiro shintani tanji ito hiroshi gomi junichi takahashi testing generic ada packages with ape daniel hoffman jayakrishnan nair paul strooper an alternative view of polymorphism we shall outline the traditional approaches to polymorphism, in the light of strachey's original work, and languages in the style of russell, and review the implications for programming language design of the developing interset amongst theoreticians in this subject. it will be shown that the "parametric type-polymorphism" favoured by the majority of today's designers, is actually a very limited form of polymorphism, and we shall show how a much more general concept of polymorhism can be constructed by placing arbitrary constraints on abstract storage types. we will then investigate ways of employing these "guarded cells", outlining several possible applications. 
the essence of our proposal is that in any language in which there is an unfettered abstraction mechanism, and a sufficiently rich universe of discourse, including types which are properly manipulable values, many forms of polymorphism can be built as abstractions, and so need not be built directly into the fabric of that language. we interpret this to mean that polymorphism is not a major factor in programming language design, whereas we take the view that achieving maximum expressivity is crucial to a design. we shall conclude by relating our work to that of theoreticians in the field, and showing the implications of our proposal for program verification. david m. harland martyn w. szyplewski john b. wainwright using tuple space communication in distributed object-oriented languages when object-oriented languages are applied to distributed problem solving, the form of communication restricted to direct message sending is not flexible enough to naturally express complex interactions among the objects. we transformed the tuple space communication model[29] for better affinity with object-oriented computation, and integrated it as an alternative method of communication among the distributed objects. to avoid the danger of potential bottleneck, we formulated an algorithm that makes concurrent pattern matching activities within the tuple space possible. satoshi matsuoka satoru kawai real-time requirements a. j. wellings avoiding weak links peter g. neumann support for constructing environments with multiple views john c. grundy john g. hosking warwick b. mugridge robert w. amor png: the definitive guide michael j. hammel an example of event-driven asynchronous scheduling with ada james v. chelini donna d. hughes leonard j. hoffman denise m. brunelle process oriented programming this paper argues that co-routines are related to concurrent processes. by adding interrupt mechanisms languages containing co-routine mechanisms can be used for concurrency. it is also the intention to bring some of the terminology used in simula constructs to the attention of others dealing with concurrency in object oriented languages. the use of processes to avoid global states in user interfaces are shortly commented upon. b. magnusson building linux clusters glen otero transactions and synchronization in a distributed operating system matthew j. weinstein thomas w. page brian k. livezey gerald j. popek kernel korner: running linux with broken memory rick van rein distributed shared memory with versioned objects michael j. feeley henry m. levy automatic microcode generation for horizontally microprogrammed processors a procedure is described which permits applications problems coded in a higher level language to be compiled to microcode for horizontally microprogrammed processors. an experimental language has been designed which is suitable for expressing computationally oriented problems for such processors in a distributed processing environment. source programs are compiled first to a machine independent intermediate language and then to a machine dependent form consisting of elementary microoperations, with optimizations performed during each step. the microoperations are then compacted into executable microinstructions for a specific target machine. the procedure has been implemented for experimental purposes and used to compile several different types of applications programs. the experimental results are presented with an interpretation and analysis, along with recommendations for future study. robert j. 
sheraga john l. gieser case studies on testing object-oriented programs roong-ko doong phyllis g. frankl determination of the conditional response for quantum allocation algorithms theodore brown apl toolkit richard levine normal forms for algebraic specifications of reusable ada packages robert steigerwald valdis berzins letters to the editor corporate linux journal staff comparing the reliability provided by tasks or protected objects for implementing a resource allocation service c. kaiser j. f. pradat-peyre profiles in mass storage: a tale of two systems the los alamos common file system (cfs) and the ncar mass storage system (mss) are file storage and file management systems that serve heterogeneous computing networks of supercomputers, general purpose computers, scientific workstations and personal computers. this paper details philosophical, implementation and performance aspects of the two mass storage systems. areas covered include the computing environment, the user interface, storage strategies and file movement strategies. b. collins d. kitts m. devaney beyond definition/use: architectural interconnection large software systems require decompositional mechanisms in order to make them tractable. traditionally, mils and idls have played this role by providing notations based on definition/use bindings. in this paper we argue that current mil/idls based on definition/use have some serious drawbacks. a significant problem is that they fail to distinguish between "implementation" and "interaction" relationships between modules. we propose an alternative model in which components interact along well-defined lines of communication ---or connectors. connectors are defined as protocols that capture the expected patterns of communication between modules. we show how this leads to a scheme that is much more expressive for architectural relationships, that allows the formal definition of module interaction, and that supports its own form of automated checks and formal reasoning. robert allen david garlan book review: internet public access guide** morgan hall chart drawing: a simplified tool for improving computer system documentation this paper takes a different approach to overcoming this documentation problem. the approach taken incorporates within wylbur, an on- line program maintenance system, a set of documentation tools for allowing the system designers and programmers to automatically draw the necessary charts, and more importantly, to maintain these charts as the system and program designs evolve. the documentation tools, called chart drawing, utilize the facilities of wylbur and do not require any additional hardware or software capabilities. the use of the chart drawing system also provides standardization in chart presentation and the use of charting symbols. kenneth a. hough larry r. hart reasoning of real-time distributed programming languages r. k. shyamasundar j. hooman r. gerth visualizing the geographic distribution of network users in real-time michael liedtke book review: beginning linux programming mark shacklette the causes and effects of infeasible paths in computer programs an analysis is presented of infeasible paths found in the nag library of numerical algorithms. the construction of program paths designed to maximise structural testing measures is shown to be impossible without taking infeasibilities into account. methods for writing programs which do not contain infeasible paths are also discussed. d. hedley m. a. 
hennell specifying data availability in multi-device file systems john wilkes raymie stata is object-oriented programming structured programming? bernd muller what your dos manual doesn't tell you about linux liam greenwood mixing apples and oranges: or what is an ada line of code anyway? d. firesmith the design of the c++ booch components this paper describes design issues encountered developing a reusable component library. the design applied encapsulation, inheritance, composition and type parameterization. the implementation uses various c++ mechanisms, including: virtual and static member functions, templates, and exceptions. the resulting library contains about 500 components (mostly template classes and functions) and an optional utility for instantiating templates. the components provide variations of basic collection/container abstractions with various time and space complexities. a key insight gained from this project: the design process centered on developing a "template for the templates" \--- designing a component framework and orderly process for generating the template classes. grady booch michael vilot is the answer always ada? patricia k. lawlis penguin playoffs awards: and the winners are… peter salus marjorie richardson jason kroll linux out of the real world: plant experiments runlinux ride the space shuttle kuzminsky cray pascal this paper presents an investigation of the design decisions taken in the implementation of a compiler for pascal on the cray-1 computer. the structured nature of pascal statements and data structures is contrasted with the 'powerful computing engine' nature of the cray-1 hardware. the accepted views of pascal as a simple one-pass language and the cray-1 as a vector processor are laid aside in favour of a multi-pass approach, taking account of the machine's scalar capabilities. the project as a whole, aims to produce highly efficient run-time code for applications likely to be programmed in pascal. some statistics are given to indicate the nature of such applications. n. h. madhavji i. r. wilson run-time code generation and modal-ml philip wickline peter lee frank pfenning real-time software engineering in ada: observations and recommendations m. borger m. klein r. veltre kernel korner: kernel-level exception handling joerg pommnitz using lifetime predictors to improve memory allocation performance dynamic storage allocation is used heavily in many application areas including interpreters, simulators, optimizers, and translators. we describe research that can improve all aspects of the performance of dynamic storage allocation by predicting the lifetimes of short-lived objects when they are allocated. using five significant, allocation-intensive c programs, we show that a great fraction of all bytes allocated are short-lived (> 90% in all cases). furthermore, we describe an algorithm for liftetime prediction that accurately predicts the lifetimes of 42--99% of all objects allocated. we describe and simulate a storage allocator that takes adavantage of lifetime prediction of short-lived objects and show that it can significantly improve a program's memory overhead and reference locality, and even, at times, improve cpu performance as well. david a. barrett benjamin g. zorn session 7a: testing c. v. ramamoorthy reasoning about java classes: preliminary report bart jacobs joachim van den berg marieke huisman martijn van berkum u. hensel h. 
tews clue: a common lisp user interface environment kerry kimbrough lamott oren an inverted approach to configuration management d. b. miller r. g. stockton c. w. krueger matching-based incremental evaluators for hierarchical attribute grammar dialects although attribute grammars have been very effective for defining individual modules of language translators, they have been rather ineffective for specifying large program-transformational systems. recently, several new attribute grammar "dialects" have been developed that support the modular specification of these systems by allowing modules, each described by an attribute grammar, to be composed to form a complete system. acceptance of these new hierarchical attribute grammar dialects requires the availability of efficient batch and incremental evaluators for hierarchical specifications. this paper addresses the problem of developing efficient incremental evaluators for hierarchical specifications. a matching- based approach is taken in order to exploit existing optimal change propagation algorithms for nonhierarchical attribute grammars. a sequence of four new matching algorithms is presented, each increasing the number of previously computed attribute values that are made available for reuse during the incremental update. alan carle lori pollock graphical specification of object oriented systems the graphical notation objectcharts, introduced in this paper, allows a developer to precisely specify the behaviour of object classes and to reason about the behaviour of particular configurations of objects. objectcharts combine object oriented analysis and design techniques and harel's statecharts to give a diagrammatic specification technique for object oriented systems. stephen bear phillip allen derek coleman fiona hayes multilingual software engineering using ada and c david k. hughes design and implementation of the interface to compiled languages in apl*plus ii apl*plus ii is a second- generation 32-bit apl interpreter that runs under the ms dos operating system. a recent release has introduced two new system functions --- ona and omload \--- that enable apl to dynamically load and execute programs written in compiled languages such as c and fortran. these enhancements presented both design and implementation challenges. on the design side, it was necessary to make it as easy as possible for the apl programmer to integrate non-apl routines into applications. the implementation side required considerable technical creativity and invention, since the ms dos environment provides no built-in functions for dynamic loading and execution of programs. james g. wheeler comments on "a correct and unrestrictive implementation of general semaphores" david hemmendinger apl compared with other languages according to halstead's theory joseph l. f. de kerf knowledge production from different worlds: what can happen when technical writers speak for engineers at the turn of the 20th century, technical writers in the united states were mostly engineers who both developed technology and wrote about it. during world war ii, however, some engineers seeking to increase the efficiency of technology development separated their engineering from their communication tasks. this trend opened up a new occupation for non- engineering technical writers who communicated knowledge made by engineers. 
while this specialization may have allowed engineers to develop technology more efficiently, it also allowed non-scientists to give voice to scientific knowledge and by the 1970s created tensions between practitioners in scientific fields and liberal arts-trained technical writers. how could non- scientists give scientific knowledge its material form through communication? and did this arrangement between engineers and writers too often render engineers mute within their own professions? this paper traces a cultural history of technical writing practice in the united states and explores current trends in the academy which aim to prepare engineers more adequately for communicating about their work. finally, this paper suggests that technical editors, as distinguished from traditional technical writers, can accommodate both an engineer's need to give voice to technology developments and a writer's contributions to shaping that voice into effective communication. bernadette longo the f programming language ralph frisbie richard hendrickson michael metcalf abstraction & modual decomposition - an example arthur gittleman punchcard voting systems b. williams a specification and code generation tool for message translation and validation charles plinta richard d'ippolito roger van scoy freeing the essence of a computation kenneth r. anderson automatic design and implementation of language data types s. t. shebs r. r. kessler what can programming languages contribute to software engineering, and vice versa? (panel) gregor kiczales authors' response walter tichy rolf adams annette weinert book review: learning the bash shell danny yee object-oriented development of real-time systems: verification of functionality and performance j. c. browne high resolution timing with low resolution clocks and microsecond resolution timer for sun workstations peter b. danzig stephen melvin a linux-based automatic backup system a step-by-step procedure for establishing a backup system that will save time and money. michael o'brien support for dynamic binding in strongly typed languages r e gantenbein classifying ada packages d l ross ada in the ecliplse project support environment r. h. pierce letters to the editor corporate linux journal staff profiling in an object-oriented design environment that supports ada 9x and ada 83 code generation object-oriented techniques for design and development have taken a strong hold in academia, industry, and government. our efforts in this area have been in the development of the object-oriented design environment, adam, that is programming-language independent and generates compilable code in ada 83, ada 9x, c++, and ontos c++. a key aspect of adam, short for active design and analyses modeling, is the requirement that software engineers supply profiles when defining the different components in their applications. a profile contains information on both the content (the purpose and constituent pieces) and context (interdependencies) for all components in an application. profiles are critical since they force software engineers to thoroughly understand and define each portion of an application. they are fundamental to the support of the ada 9x code generation process, and also provide the basis for analyzing an application. in this paper, we focus on profiles in adam and their support for the recently developed ada 9x code generator. 
we also briefly report on the role that adam has and will play in education and retraining, as related to object-oriented design and upgrading skills from ada 83 to ada 9x. k. el guemhioui steven a. demurjian t. j. peters h. j. c. ellis building monitors with unix and c neil dunstan translation of the protected type mechanism in ada 83 several features of the ada 94 language are useful to improve the efficiency of ada programs. especially the protected type mechanism is useful to improve the efficiency of concurrent ada programs sharing common data structures. in order to facilitate the transition to ada 94 before ada 94 compilers are widely available, this paper proposes the use of an adapter which can be either a methodology, or an automatic translator. the adapter accepts source including protected objects and produces ada 83 source. the results of several tests show that the performances of concurrent programs can be dramatically improved by using protected objects. pascal ledru a conversation with linus torvalds associate publisher belinda frazier talks with linux about the alpha prot and getting ready for 1.2 belinda frazier object-oriented design and pamela stowe boyd efficient dynamic dispatch without virtual function tables: the smalleiffel compiler olivier zendra dominique colnet suzanne collin documentation's recognition problem: what can we do about it? diana patterson chris hallgren a compiler approach to scalable concurrent-program design we describe a compilation system for the concurrent programming language program composition notation (pcn). this notation provides a single- assignment programming model that permits concurrent- programming concerns such as decomposition, communication, synchronization, mapping, granularity, and load balancing to be addressed separately in a design. pcn is also extensible with programmer-defined operators, allowing common abstractions to be encapsulated and reused in different contexts. the compilation system incorporates a concurrent-transformation system that allows abstractions to be defined through concurrent source-to-source transformations; these convert programmer-defined operators into a core notation. run-time techniques allow the core notation to be compiled into a simple concurrent abstract machine which can be implemented in a portable fashion using a run-time library. the abstract machine provides a uniform treatment of single-assignment and mutable data structures, allowing data sharing between concurrent and sequential program segments and permitting integration of sequential c and fortran code into concurrent programs. this compilation system forms part of a program development toolkit that operates on a wide variety of networked workstations, multicomputers, and shared-memory multiprocessors. the toolkit has been used both to develop substantial applications and to teach introductory concurrent-programming classes, including a freshman course at caltech. ian foster stephen taylor profile assisted register allocation william c. kreahling cindy norris linux apprentice: power printing with magicfilter bill cunningham tool support for requirements formalisation jaelson f. b. castro christian j. gautreau marco a. toranzo software architecture - a rational metamodel philippe kruchten formal software engineering carl a. gunter elsa l. gunter pamela zave applying design of experiments to software testing: experience report i. s. dunietz w. k. ehrlich b. d. szablak c. l. mallows a. 
iannino optimal clock synchronization we present a simple, efficient, and unified solution to the problems of synchronizing, initializing, and integrating clocks for systems with different types of failures: crash, omission, and arbitrary failures with and without message authentication. this is the first known solution that achieves optimal accuracy---the accuracy of synchronized clocks (with respect to real time) is as good as that specified for the underlying hardware clocks. the solution is also optimal with respect to the number of faulty processes that can be tolerated to achieve this accuracy. t. k. srikanth sam toueg of pyramids and igloos: a brief cultural perspective s. ron oliver animating an actor programming model interest in object-oriented programming systems for parallel computing has been growing for many years. this paper explores a concurrent object-oriented programming model known as actors, and describes an animation (or simulation) of this model implemented in another object-oriented programming system, namely smalltalk-80. this animation has been used for educational purposes, particularly at final-year undergraduate level. trevor p. hopkins style and literacy in apl … then the strut changed to the restless walk of a caged madman, then he whirled, and to a clash of cymbals in the orchestra and a cry of terror (perhaps faked) in the gallery, mascodagama turned over in the air and stood on his head. ---nabokov, ada "see me jump," said dick. "oh, my! this is fun. come and jump. come and do what i do." \---gray et al., the new fun with dick and jane there is a persistent belief in the apl community, reflecting perhaps some of the prejudice against apl outside that community, that "good style" in apl involves writing very short statements, using as few primitives as possible in each. it is easy enough to find an example in a discussion of apl in a general computing magazine: mathematicians and engineers love apl for its conciseness and power, but there's quite a price to pay: apl programs are almost unreadable. it's very easy to write a single-line program that would take an average apl programmer a good fifteen minutes to figure out. … typically, good apl programmers write one line of comment for every line of code and try to keep their program lines short. [1] such opinions don't spring forth full-grown from the forehead of zeus; they have their origins in the apl community (where else would an author, who does have his objective facts about apl straight, go for information about apl?). for example, a recent issue of apl quote quad carried a "style guide" for functions submitted to the journal, including this: "vertical: write each function vertically with several short lines, rather than horizontally with long lines" [2]. another example: an otherwise admirable document on apl programming standards, circulated in the san francisco area, contains an appendix that introduces two functions by saying "the following are two functions which do the same thing. the first is a one liner which is pornographic. the second uses the same code but broken down into more readable pieces." [3] we will examine those two functions later in this paper. first, let's explore this belief. short function lines seem to be a crucial point. that must mean, for example, that it should be considered better to write s +ω c pω z s÷c than z (+/ω)÷pω to calculate an average. the justification is that the short statements are easier to read. this claim has gone unexamined for far too long. 
we will consider it from two perspectives: in what way is a multiplicity of short statements easier to read than a single, longer one? for whom is this style in general easier to read? the first question is no doubt easy to answer. after all, if there's only one primitive (and assignment) in an expression, there can't be too much doubt about what that expression does, right? in c ω above, for instance, c is clearly the number of elements in ω whereas in a longer, more complicated expression, we might be daunted by the large number of funny squiggles, and never notice the p ω buried in there somewhere. but does this really get us much farther in finding out what's going on? we're calculating an average, after all---there are two other statements involved. let's see: the first statement, s +/ω is just as easy: we're adding up the elements in \gw. oh, and giving them the name s. hope that's not too complicated, two things at once; pity we couldn't break it up further. only one to go. well, z s÷c is quite easy. nothing to it either. we're dividing s by c, of course. but, wait--- what was it we were doing? let's see, s was a sum, and c was, hold on a minute while we check, a count, that's right. so we're dividing the sum of things in ω by the number of things in ω\--- of course, this must be an average. there, we're done. easy, wasn't it? go ahead, laugh. we know, you don't have trouble keeping three statements straight in your head. but when you're done laughing, think about the general claim in the light of this example. while breaking up a line into short segments does indeed produce a program whose lines are easier to read than the original, that isn't the appropriate unit of comparison: the lines in the fragmented version do nothing of interest. comparing the two programs is more interesting--- and here, the shoe is on the other foot. the fragmentation of thought, and the introduction of extra names into the calculation, makes it harder to keep track of what's going on, because to determine the meaning of the final result you must be aware of definitions that exist only in the immediate context, whereas in the brief and obvious (+/ω)÷pω everything is out where you can see it, and no reference is necessary to other parts of the expression. consider again the example paraphrased into english: instructions for computing an average (version 1) add up a bunch of numbers and divide by how many you had. instructions for computing an average (version 2) add up @@@@ bunch of numbers, and call the result "ess". count the numbers you added up, and call the result "cee". divide ess by cee. but why stop here? english is an even richer language than apl; it can probably provide even, ah, clearer versions of the instructions: instructions for computing an average (version 3) some instructions follow. the instructions are about to begin. you have a list of numbers. reserve a spot called "ess" to put sums in. put a zero in ess. is there at least one number in the list? if not, go to step 12. name the first number in the list "ex". add ex and ess, and call the result "ess" now. remove the first number from the list. go back to step 6. … we think we'll spare you (not to mention ourselves) the rest. actually we took some liberties there; that could be a lot clearer yet. so. of versions 1, 2, and 3 of the instructions, which do you think needs comments most? 
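for readers who do not follow the apl glyphs, the same contrast can be sketched in python; the function names and the use of python here are ours, not the article's. the fragmented style names every intermediate result, while the single expression keeps the whole thought visible at once.

def average_fragmented(w):
    s = sum(w)        # add up the numbers and call the result s
    c = len(w)        # count the numbers and call the result c
    return s / c      # divide s by c -- the reader must recall what s and c were

def average(w):
    # the whole thought in one place: the sum of w divided by the count of w,
    # a rough analogue of the single apl expression discussed above
    return sum(w) / len(w)

data = [3, 1, 4, 1, 5, 9]
assert average_fragmented(data) == average(data) == 23 / 6

either version computes the same number; the difference the article is pointing at lies entirely in how much the reader must hold in mind while following the definitions.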
maybe there are people who disagree, but our own feeling is that we'd definitely need to accompany 3 with some descriptive text (suppose we hadn't said what these instructions were meant to achieve?), probably some such text would also be handy for 2, and 1 needs it not at all. we assume none of our readers would actually prefer version 3, so let's leave it aside for now. but anyone who likes his apl expressions sold short (would you believe that as a typo and we meant "told"? no? oh well) presumably feels utterances such as version 2 of the instructions above are somehow more natural for human beings. we can only recommend experiment. we'd suggest the following one: next time you want a raise, be clear. don't tell your boss something confusing like "i want another thirty thousand dollars". say, "call my salary 'ess'. call thirty thousand dollars 'tee'. add ess and tee, and call the result 'ess'. thanks!" who knows, it may work better that way. but we doubt it would be because of greater clarity… the intermediate names are only part of the story, though. version 3 of the instructions was not meant just to amuse us; it illustrates how easy it becomes to lose track of what's going on when a process is broken down into ridiculously tiny steps. it also illustrates a more dangerous aspect of the "short and vertical" style of apl programming: carried just a little farther, it becomes scalar thinking--- conditioning a writer of apl to use this style risks encouraging inappropriate, inefficient use of the language as a tool. the following function comes from an application in a real business environment: what's wrong with this function? why, in one sense, nothing at all: it ran, it gave the proper answers, its author was happy with it for a long time. in another sense--- well, would you like to maintain it? or would you rather deal with something like the following: there are, of course, a number of differences between the two, and some of them depend upon examination of the subfunction gpcalba (not shown here). one thing leaps immediately to one's notice: while there were a couple of fairly long lines in the first version, the second was certainly not produced by breaking them up! the two functions are equivalent; it took careful reading of gpcalba, a fairly involved function, to notice that there was no need at all for the nested loops and associated extra parameters in singleprems\\--- all the calculations were (or could be made) parallel for all cases, and all cases were always covered. had the consultant who did the work not been literate in apl, he could scarcely have made this simplification. perhaps more important--- the original form arose because scalar thinking is pervasive. people's approach to problems is conditioned by their habits. the first version of singleprems was written by an author not fluent in apl, who found it more congenial to address problems bit by bit. it reflects scalar thinking in its style: despite having middling long lines 2 and 6, most of the function conforms quite nicely to the "short and vertical" model. if you're conditioned to look at algorithms in tiny bits, chances are you'll look at problems the same way--- which means you'll lose much of the power of apl. consider again, for a moment, the last two versions of the instructions for computing an average given above, with attention to style. what situation can you conceive, in which you would express yourself that way? 
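the singleprems and gpcalba functions themselves are not reproduced in the text, so the following python sketch is only an invented analogue of the kind of rewrite described: the first version works case by case in an explicit loop, the second expresses the same made-up premium calculation once, over all cases at once.

def premiums_case_by_case(ages, rates):
    # "scalar thinking": visit each policy individually
    result = []
    for i in range(len(ages)):
        result.append(ages[i] * rates[i] + 10.0)
    return result

def premiums_all_cases(ages, rates):
    # the same hypothetical calculation stated once for every case in parallel,
    # in the spirit of replacing nested loops with whole-array operations
    return [age * rate + 10.0 for age, rate in zip(ages, rates)]

ages, rates = [30, 45, 60], [1.2, 1.5, 2.0]
assert premiums_case_by_case(ages, rates) == premiums_all_cases(ages, rates)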
version 3, in particular: you would never address another human being that way (well, save maybe your boss, if you did think that would get you a raise). you might address a machine that way. but, we hope, only when you didn't have any other choice. there are indeed many situations when you do have to address machines that way: when writing an apl interpreter, for instance. the tiny steps are in fact closer to how machines must execute our instructions, than to how human beings conceive of them. this is a partial answer to one of our questions: who finds instructions easier when they're broken up into very short bits? some machines do. apl doesn't require this because of its history: it was not originally defined to instruct machines. it is a human language. but (tempting though it might be) we don't really feel it's fair to question the humanity of everyone who can't read the apl we write. fortunately, it isn't necessary. there are human beings we might want to address with very short sentences, made out of a very limited vocabulary: people who don't speak the language we're using very well. people whose language we don't speak very well. or, in writing: people of marginal literacy. people just learning to read. implied in these categories is the answer to our other question: in what way are instructions easier to understand when they're broken up into tiny bits? they're easier to understand in that you can focus on the meanings of the words themselves--- which you might want to do when you're not very sure of them. in all the cases in which people find it easier to understand broken english, it is generally expected that the problem is temporary. for some reason, in the case of apl, we have trouble even admitting that this is the problem. in apl, when the literacy problem is recognized, it's usually attacked by limiting--- often voluntarily--- the style of those who are literate to the comprehension level of those who are not. the problem is, indeed, partly one of writing. some people write english that's pretty hard to follow too. but at the level where it makes a critical difference to comprehension, the level at which one is reduced to two or three words per sentence, the problem is simply one of learning to read. this is also the case with apl. the major difference between apl and most other executable languages is simply that apl has a syntax sufficient to express thought; the others do not, and must use a sequence of steps instead. any accompanying thoughts had better be expressed in comments, in these other languages, even if they're simply thoughts descriptive of the process. in apl, the best description of the thought is in the apl itself, and as in english, the thought can be expressed most concisely, directly, and meaningfully when we're not restricting ourselves to an illiterate audience. comments in english or some other natural language may still be desirable, but the useful ones are not descriptive, they're intentional: not what is this doing--- which the apl expresses better than english would--- but why is it doing it; or what did the author expect it to do. let's look at a different real example, one published to argue for the precise opposite of the position we take in this paper. the accompanying figure lists the two functions from [3] mentioned at the outset. consider just the apl for the nonce, leaving aside the comments. we'll come back to those.
spd may look a little forbidding at first--- if you're not used to reading apl, or if you think you should read it in the order a computer will execute it. but there's no need for reading in that direction: in the culture in which apl was developed, human beings are used to reading from left to right--- and that's a fine way of reading apl. as iverson remarks in a discussion of apl parsing rules, one important consequence of these rules is that in an unparenthesized sentence the right argument of any verb is the result of the entire phrase to the right of it. a sentence such as 3×p⌈q*|r-5 can therefore be read from left to right; the overall result is three times the result of the remaining phrase, which is the maximum of p and the part following the ⌈, and so on. [4] let's read our example, then, from left to right, as we're accustomed to. the first thing that leaps to our attention is a parenthesis; we don't know much about what it encloses yet, but we notice that immediately to its right is a ⍴. we see immediately, then, that the result of spd is a reshape of some value (giving a matrix result). since what we're reshaping has just been transposed (reading on to the right), the parallel between the ⍉ arguments, and the left argument of the reshape, is very suggestive of collapsing two axes into one. this impression is reinforced by reading a little farther; the object transposed was itself the result of another reshape, and it in turn was the result of a ↑ on the right argument. a glance at the expression to the left of the ↑, though it may not reveal exactly what the value is, shows clearly that we're dealing with an overtake, since d is assigned right there as the shape of x, and (reading the parenthesized phrase also from left to right) we see that the argument is more than that. so: that's the whole function. we've just skimmed it, but this skimming tells us most of the story: the result is a matrix containing all of the argument x, but rearranged in some way, and also some padding. for many purposes, we could stop right there: a little experimentation would tell us the rest we might need to know. but we can read more carefully, too, and discover as much detail as we need. look for a moment at the other version, spread. perhaps you can skim it as quickly as the first; we couldn't. there are rather more temporary variables involved to keep track of; the final reshape is not so suggestively associated with a transpose; the overtake is so buried and separated from its argument that, even knowing what it is and where it must be, we have trouble finding it. but we didn't really finish reading the short spd; in particular, the expression c d+(col\col\- (d px)[1]), d seemed rather mysterious. notice, however, line [14] of the long spread--- it has exactly the same expression! slightly different variable names, and the x assignment has been moved, making it a little harder to see what one of the variables is… but no substantial difference. the rewrite, in other words, did absolutely nothing to clarify the one obscure part of this function. it is fairly clear that breaking this up further wouldn't do it. what does this do? well, there are two approaches--- one could try to simplify and analyze with no idea of where one was going; or one could take some knowledge of the intention of the function, together with reading of the rest of the function, to form a conjecture, which could then be verified. we took the second path; here's where comments come in handy--- we needed to know the author's intent.
knowing the purpose of both functions (the comments on the first seem a little more helpful here), it was easy to conjecture that the overtake must be to pad the original data x to a number of rows which is an even multiple of the number of "logical columns" desired, col--- therefore, our conjecture was that this phrase must be equivalent to c (colx (1 d),÷col), 1 d x. a short proof verified that this was indeed the case. note that the "skimming" we went through in the first place was crucial to form the conjecture (which made the analysis, we suspect, much shorter than if we'd had no idea where we were going). this skimming, as we began by showing, is much easier when all the context is immediately in front of us. that is, for the one part of both functions that is hard to understand, spd--- the "nasty" one-liner--- makes it easier to discover the meaning than the broken-up spread. our central conclusion was arrived at before examining this example: that the problem of written communications, in apl as in english, requires skills on the part of the reader as well as on the part of the writer. the responsibility for communication has been laid too heavily on writers in the apl community. modifying one's apl writing style to cater to an illiterate audience has been a recommended approach. we have tried to show, first, that becoming accustomed to a less expressive style of writing can hamper a writer's thinking in approaching a problem; second, that reading skills are valuable in themselves; and, finally, that --- assuming readers of apl are willing and able to read --- the often-recommended "short and vertical" style makes it harder, rather than easier, to read apl, especially when obscure phrases are encountered in either style. as with any aesthetic issue, the question of good style in apl cannot be settled prescriptively. a final, and perhaps the weightiest, reason for developing apl reading skills is that "read" is, in the final analysis, the most sensible advice one can give to writers concerned with improving their own style. iverson has remarked that perhaps the most important habit in the development of good style in a language remains to be mentioned, the habit of critical reading. such reading should not be limited to collections of well-turned and useful phrases… nor should it be limited to topics in a reader's particular specialty… one may benefit from the critical reading of mediocre writing as well as good; good writing may present new turns of phrase, but mediocre writing may spur the reader to improve upon it. [5] perlis and rugaber [6] have advocated teaching the recognition of particular phrases (often called "idioms" in the apl community) as a useful step in teaching both reading, and writing, of apl. published collections of such phrases include perlis and rugaber's report the apl idiom list [7] and the more recent finnapl idiom library [8]. as iverson remarks, such collections are certainly one kind of useful reading matter. but by themselves they are unlikely to make anyone literate, and in fact careless use of such collections in teaching can sometimes disguise illiteracy rather than promote literacy, if students feel encouraged to simply accept, recognize, and copy such phrases rather than actually reading and analyzing them (see pesch [9] for more discussion of this issue).
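(another aside not in the original article: the listings of spd and spread themselves are not reproduced legibly here, but the behaviour the article describes, namely padding the rows of a table to an even multiple of the number of "logical columns" col and then laying the row-blocks side by side, can be sketched in python/numpy as follows. the function and argument names are ours, and the sketch assumes only what the article says the functions do.)

```python
import numpy as np

def spread_sketch(x, col):
    """lay the rows of the 2-d array x out in `col` logical columns, padding with zeros."""
    rows, width = x.shape
    padded_rows = -(-rows // col) * col          # round the row count up to a multiple of col (the "overtake")
    y = np.vstack([x, np.zeros((padded_rows - rows, width), dtype=x.dtype)])
    blocks = y.reshape(col, padded_rows // col, width)       # col blocks of consecutive rows
    return blocks.transpose(1, 0, 2).reshape(padded_rows // col, col * width)  # blocks side by side

# example: five rows spread across two logical columns gives a three-row matrix with zero padding
print(spread_sketch(np.arange(10).reshape(5, 2), 2))
```

whether one writes this as a single expression or as half a dozen named steps is, of course, exactly the stylistic question at issue.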
we can provide no easy answers: but this much is clear--- writers of apl must assume a literate audience (as writers of english do) if they are to use the language effectively; and readers of apl (which is to say all of us) can read best by reading more. in the end, greater literacy is its own reward. michael j. berry roland h. pesch cartablanca - a pure-java, component-based systems simulation tool for coupled non-linear physics on unstructured grids this paper describes a component-based non-linear physical system simulation prototyping package written entirely in java using object-oriented design to provide scientists and engineers a "developer-friendly" software environment for large-scale computational method and physical model development. the software design centers on the jacobian-free newton-krylov solution method surrounding a finite-volume treatment of conservation equations. this enables a clean component-based implementation. we first provide motivation for the development of the software and then describe software structure. discussion of software structure includes a description of the use of java's built-in thread facility that enables data-parallel, shared-memory computations on a wide variety of unstructured grids with triangular, quadrilateral, tetrahedral and hexahedral elements. we also discuss the use of java's inheritance mechanism in the construction of a hierarchy of physics-systems objects and linear and non-linear solver objects that simplify development and foster software re-use. as a complement to the discussion of these object hierarchies, we provide a brief review of the jacobian-free newton-krylov nonlinear system solution method and discuss how it fits into our design. following this, we show results from preliminary calculations and then discuss future plans including the extension of the software to distributed memory computer systems. w. b. vanderheyden e. d. dendy n. t. padial-collins a proposal for standard ml robin milner web counting with msql and apache learn all about apache modules and msql programming using a web counting program as an example randy jay yarger deriving hard real-time embedded systems implementations directly from sdl specifications object-oriented methodologies together with formal description techniques (fdt) are a promising way to deal with the increasing complexity of hard real-time embedded systems. however, fdts do not take into account non-functional aspects such as real-time constraints. based on a new real-time execution model for fdt sdl proposed in previous works, a way to derive implementations of hard real-time embedded systems directly from sdl specifications is presented. in order to achieve this, we propose a middleware that supports this model to organize the execution of the tasks generated from the sdl system specification. additionally, a worst case real-time analysis, including the middleware overhead, is presented. finally, an example of generating the implementation from the sdl specification is developed, together with a performance study. j. m. alvarez m. diaz l. llopis e. pimentel j. m. troyal a view matcher for learning smalltalk the view matcher is a structured browser for smalltalk/v. it presents a set of integrated and dynamic views of a running application, intended to coordinate and rationalize a programmer's early understanding of smalltalk and its environment. we describe the system through two user scenarios involving exploration of the model-view-controller paradigm. john m. carroll janice a. singer rachel k. e.
bellamy sherman r. alpert igor: a system for program debugging via reversible execution typical debugging tools are insufficiently powerful to find the most difficult types of program misbehaviors. we have implemented a prototype of a new debugging system, igor, which provides a great deal more useful information and offers new abilities that are quite promising. the system runs fast enough to be quite useful while providing many features that are usually available only in an interpreted environment. we describe here some improved facilities (reverse execution, selective searching of execution history, substitution of data and executable parts of the programs) that are needed for serious debugging and are not found in traditional single-thread debugging tools. with a little help from the operating system, we provide these capabilities at reasonable cost without modifying the executable code and running fairly close to full speed. the prototype runs under the dune distributed operating system. the current system only supports debugging of single-thread programs. the paper describes planned extensions to make use of extra processors to speed the system and for applying the technique to multi- thread and time dependent executions. stuart i. feldman channing b. brown javelin++: scalability issues in global computing michael o. neary sean p. brydon paul kmiec sami rollins peter cappello optimization in fortran vs. c optimal register assignment to loops for embedded code generation one of the challenging tasks in code generation for embedded systems is register assignment. when more live variables than registers exist, some variables will necessarily be accessed from data memory. because loops are typically executed many times and are often time-critical, good register assignment in loops is exceedingly important as accessing data memory can degrade performance. the issue of finding an optimal register assignment to loops has been open for some time. in this article, we present a technique for optimal (i.e., spill minimizing) register assignment to loops. first we present a technique for register assignment to architecture styles that are characterized by a consolidated register file. then we extend the technique to include architecture styles that are characterized by distributed memories and/or a combination of general- and special-purpose registers. experimental results demonstrate that although the optimal algorithm may be computationally prohibitive, heuristic versions obtain results with performance better than that of an existing graph coloring approach. david j. kolson alexandru nicolau nikil dutt ken kennedy the use and disuse of apl: an empirical study a sample of over 200 workspaces containing over 80,000 lines of code were analyzed to determine their composition and structure. statistics were gathered on the frequency of all monadic and dyadic functions, operators, system variables and functions as well as other workspace and defined function characteristics. the results of the study indicated that only a fraction of the apl primitive function set was being utilized despite the language's richness. the ubiquitous 80-20 phenomenon was well represented. in general, 80% of all apl usage occurred with 20% of the available function set. this seems to support a contention that more awareness needs to be directed toward "thinking primitive" instead of only "thinking array". some comparisons with a previous study by saal and weiss will be made. 
raymond kanner creating conditional transform words g. feierbach system documentation as software accuracy, timeliness, flexibility, and low cost---computer buyers want better documentation, they want it now, they want it to suit their needs, and they want it cheap. unfortunately, current documentation methods have failed to satisfy these needs. the inadequacies of the traditional technical manual are well known to anyone who has tried to use one, regardless of the herculean effort it probably took to collect and publish the damn thing. typically, the resources that could have been expended in making the manual easier to read were expended in an impossible battle to make the information available in a reasonable amount of time, without totally sacrificing the accuracy of its contents. the solution to this dilemma lies in changing the way that we view documentation. we must stop thinking of documentation as pieces of paper and start thinking of it as software. geoffrey james sickler a learning curve based simulation model for software development noriko hanakawa syuji morisaki ken-ichi matsumoto a variation on "take" frederick macaskill turbodos(tm) multiprocessor operating system the turbodos operating system is a product of software 2000, inc., and is trademarked and copyrighted by them. at musys corp., we have used turbodos in conjunction with various slave processor boards to construct a wide variety of s-100 based computer systems, ranging from two to over sixty users. turbodos is designed for multiprocessor networks of z-80 based computers, although single user versions are available. extensive use is made of the z-80 instruction set to achieve a highly table oriented and reentrant architecture, which is very adaptable to the user's environment. in addition to musys, many companies are selling turbodos for specific hardware configurations on an oem basis. this is one of the primary distinctions with other multiprocessor operating systems, which are supported by only a single vendor. william a. schultz on the cost of fault-tolerant consensus: when there are no faults (preliminary version) we consider the consensus problem in an asynchronous model enriched with unreliable failure detectors and in the partial synchrony model. we consider algorithms that solve consensus and tolerate crash failures and/or message omissions. we prove tight lower bounds on the number of communication steps performed by such algorithms in failure-free executions. we present in a unified framework a number of related lower bound results. thus, we shed light on the relationship among different known lower and upper bounds, and at the same time, illustrate a general technique for obtaining simple and elegant lower bound proofs. we also illustrate matching upper bounds: we present algorithms that achieve the lower bound. idit keidar sergio rajsbaum error handling in large, object-based ada systems philippe kruchten assertion-oriented automated test data generation bogdan korel ali m. al-yami a practical and flexible flow analysis for higher-order languages j. michael ashley ad index corporate linux journal staff directions in computer security one of the primary thrusts in operating system security has come from the department of defense (dod), which early recognized the need for security controls in open use, multi-user, resource-shared computer systems.1 two features in particular, mandatory access controls and security kernel technology, have been strongly promoted by the dod.
mandatory access controls, necessary to support a security policy that cannot be circumvented by any user (in dod's case, the national security policy regarding personnel clearances and data classifications) are being studied for their applicability to business2 and industry security problems. security kernel technology is an implementation of the reference monitor concept, a security enforcement abstraction which views a computer system as composed of subjects (e.g., processes, users) and objects (e.g., files) and a reference monitor which checks each access by a subject to an object. in the past ten years, several attempts to build secure operating systems have utilized security kernel technology. while none of these attempts was practical from a performance point of view, the security kernel research still serves as a basis for current attempts to build secure systems. in a continuing effort to promote secure systems for dod use, the dod computer security center was formed in 1981. one of the first tasks of the center was to draft a "trusted computer system evaluation criteria" which defines various levels of protection for computer systems.3 in addition to listing feature requirements, including auditing, labelling, mandatory access controls, discretionary access controls, identification and authentication, the criteria discuss both the structure and development techniques used to produce trusted systems. anne-marie g. claybrook runtime coupling of data-parallel programs m. ranganathan a. acharya g. edjlali a. sussman j. saltz reifying variants in configuration management using a solid software configuration management (scm) is mandatory to establish and maintain the integrity of the products of a software project throughout the project's software life cycle. even with the help of sophisticated tools, handling the various dimensions of scm can be a daunting (and costly) task for many projects. the contribution of this article is to (1) propose a method (based on the use of creational design patterns) to simplify scm by reifying the variants of an object-oriented software system into language-level objects and (2) show that newly available compilation technology makes this proposal attractive with respect to performance (memory footprint and execution time) by inferring which classes are needed for a specific configuration and optimizing the generated code accordingly. jean-marc jezequel discussing aspects of aop tzilla elrad mehmet aksits gregor kiczales karl lieberherr harold ossher thread-specific heaps for multi-threaded programs garbage collection for a multi-threaded program typically involves either stopping all threads while doing the collection or involves copious amounts of synchronization between threads. however, a lot of data is only ever visible to a single thread, and such data should ideally be collected without involving other threads. given an escape analysis, a memory management system may allocate thread-specific data in thread-specific heaps and allocate shared data in a shared heap. garbage collection of data in a thread-specific heap can be done independent of other threads and of data in their thread-specific heaps. for multi-threaded programs, thread-specific heaps allow reduced garbage collection latency for active threads. on multi-processor computers, thread-specific heaps allow concurrent garbage collection of different thread-specific heaps with minimal synchronization overhead.
we present an escape analysis and a sample memory management system using thread-specific heaps. bjarne steensgaard apl programming without tears: it is time for a change p. naeve b. strohmeier p. wolf authentication, access control, and audit ravi sandhu pierangela samarati "the technical issues confronting ada" a few in-depth presentations exploring current technical (and management/control) issues confronting ada implementers and users. 4 distinguished ada experts now producing and using ada technology report their current progress. hal hart apl and mastermind a course in programming languages should teach students about the syntax and semantics of programming languages, as well as make them aware that a good choice of a programming language can often make a problem much easier to solve. this last point can best be made by giving students a problem that is much easier to solve in one language than in most others.the game of mastermind is an excellent vehicle for doing this. it is easy to understand, fun to play, and wonderfully easy to program in apl. mastermind has also been used to teach many concepts in combinatorics [5]. this paper outlines how to program the game of mastermind in apl. it also presents a strategy which is extremely simple but surprisingly effective. richard zaccone design and implementation of a distributed virtual machine for networked computers this paper describes the motivation, architecture and performance of a distributed virtual machine (dvm) for networked computers. dvms rely on a distributed service architecture to meet the manageability, security and uniformity requirements of large, heterogeneous clusters of networked computers. in a dvm, system services, such as verification, security enforcement, compilation and optimization, are factored out of clients and located on powerful network servers. this partitioning of system functionality reduces resource requirements on network clients, improves site security through physical isolation and increases the manageability of a large and heterogeneous network without sacrificing performance. our dvm implements the java virtual machine, runs on x86 and dec alpha processors and supports existing java-enabled clients. emin gun sirer robert grimm arthur j. gregory brian n. bershad the trw software productivity system this paper presents an overview of the trw software productivity system (sps), an integrated software support environment based on the unix operating system, a wide range of trw software tools, and a wideband local network. section 2 summarizes the quantitative and qualitative requirements analysis upon which the system is based. section 3 describes the key architectural features and system components. finally, section 4 discusses our conclusions and experience to date. barry w. boehm james f. elwell arthur b. pyster e. donald stuckle robert d. williams a distributed unix system based on a virtual circuit switch the popular unixtm operating system provides time-sharing service on a single computer. this paper reports on the design and implementation of a distributed unix system. the new operating system consists of two components: the s-unix subsystem provides a complete unix process environment enhanced by access to remote files; the f-unix subsystem is specialized to offer remote file service. a system can be configured out of many computers which operate either under the s-unix or the f-unix operating subsystem. the file servers together present the view of a single global file system. 
a single-service view is presented to any user terminal connected to one of the s-unix subsystems. computers communicate with each other through a high-bandwidth virtual circuit switch. small front-end processors handle the data and control protocol for error and flow-controlled virtual circuits. terminals may be connected directly to the computers or through the switch. operational since early 1980, the system has served as a vehicle to explore virtual circuit switching as the basis for distributed system design. the performance of the communication software has been a focus of our work. performance measurement results are presented for user process level and operating system driver level data transfer rates, message exchange times, and system capacity benchmarks. the architecture offers reliability and modularly growable configurations. the communication service offered can serve as the foundation for different distributed architectures. g. w.r. luderer h. che j. p. haggerty p. a. kirslis w. t. marshall linux expo at union bank of switzerland martin sjoelin actra-a multitasking/multiprocessing smalltalk d. a. thomas j. mcaffer b. barry design of a test plan and its test cases for a translator trong wu a transformation-access model for program visualization action-on-data displays cloyd l ezell experiences creating a portable cedar cedar is the name for both a language and an environment in use in the computer science laboratory at xerox parc since 1980. the cedar language is a superset of mesa, the major additions being garbage collection and runtime types. neither the language nor the environment was originally intended to be portable, and for many years ran only on d-machines at parc and a few other locations in xerox. we recently re- implemented the language to make it portable across many different architectures. our strategy was, first, to use machine-dependent c code as an intermediate language, second, to create a language-independent layer known as the portable common runtime, and third, to write a relatively large amount of cedar-specific runtime code in a subset of cedar itself. by treating c as an intermediate code we are able to achieve reasonably fast compilation, very good eventual machine code, and all with relatively small programmer effort. because cedar is a much richer language than c, there were numerous issues to resolve in performing an efficient translation and in providing reasonable debugging. these strategies will be of use to many other porters of high-level languages who may wish to use c as an assembler language without giving up either ease of debugging or high performance. we present a brief description of the cedar language, our portability strategy for the compiler and runtime, our manner of making connections to other languages and the unix* operating system, and some measures of the performance of our "portable cedar". r. atkinson a. demers c. hauser c. jacobi p. kessler m. weiser six misconceptions about reliable distributed computing werner vogels robbert van renesse ken birman apl: "abstract programming language" frederick macaskill a note on h.e. tompkins's minimum-period cobol style r r baldwin extending graham-glanville techniques for optimal code generation we propose a new technique for constructing code-generator generators, which combines the advantages of the graham-glanville parsing technique and the bottom-up tree parsing approach. machine descriptions are similar to yacc specifications. 
the construction effectively generates a pushdown automaton as the matching device. this device is able to handle ambiguous grammars, and can be used to generate locally optimal code without the use of heuristics. cost computations are performed at preprocessing time. the class of regular tree grammars augmented with costs that can be handled by our system properly includes those that can be handled by bottom-up systems based on finite-state tree parsing automata. parsing time is linear in the size of the subject tree. we have tested the system on specifications for some systems and report table sizes. code analysis of safety-critical and real-time software using asis the ravenscar profile is a restricted tasking profile that supports applications requiring separate threads of control yet would satisfy the certification requirements of high-integrity (safety-critical) real-time systems. if the ravenscar profile were to be used for systems having safety-critical and real-time requirements, it would be valuable to demonstrate that the application satisfies the restrictions. code analysis is an important technique to support this demonstration. ada semantic interface specification (asis) based tools provide an excellent capability for the automatic identification of violations to that set of the ravenscar profile restrictions, which can be determined through static code analysis. all but one of these restrictions can be identified using static code analysis using asis. this paper provides an approach to building such an asis-based tool. this tool might promote the use of automatic tools for the analysis of the ravenscar profile and other tasking profiles to support safety-critical and real-time requirements. this paper should be viewed as work in progress. william currie colket local type inference we study two partial type inference methods for a language combining subtyping and impredicative polymorphism. both methods are local in the sense that missing annotations are recovered using only information from adjacent nodes in the syntax tree, without long-distance constraints such as unification variables. one method infers type arguments in polymorphic applications using a local constraint solver. the other infers annotations on bound variables in function abstractions by propagating type constraints downward from enclosing application nodes. we motivate our design choices by a statistical analysis of the uses of type inference in a sizable body of existing ml code. benjamin c. pierce david n. turner access-right expressions richard b. kieburtz abraham silberschatz linux gazette mup: music publisher: here's a look at notation editors for producing sheet music under linux bob van der poel a pragmatic ada software design/development methodology a pragmatic design method is described which has been successfully applied to two medium size ada projects. this design method parallels (albeit at a much less sophisticated level) the domain oriented methodologies currently becoming popular with the advent of products such as microsoft visual basic (tm). wayne pullan an example of process description in hfsp hfsp considers software activities, in the first approximation, as mathematical functions which map their input objects to output objects, and defines them through hierarchical functional definition. activity a with input x1, …, xn and output y1, …, ym is denoted by a(x1, …, xn | y1, …, ym). execution of a is performed functionally and it does nothing other than computing y1, …, ym from x1, …, xn.
we call x1, …, xn, y1, …, ym attributes of a. if the activity is complex and cannot be performed by simply invoking tools, we have to decompose it into subactivities. we continue this decomposition process until every activity resulting from decomposition becomes a primitive one which could be performed by invoking existing tools or performed by human mental activity such as thinking or decision making. activity decomposition must specify how an activity, say a, is decomposed into subactivities a1, …, ak and what relationship e holds among attributes of the activities involved. it also has to specify the condition c when this decomposition can happen. a a1, …, ak when c where e. figure 1 shows a hfsp description of a simplified version of the jsp process where every step can be completed successfully. it consists of type definition, object and tool definition, and activity definition parts. the activity definition part defines the jsp process as a set of activity decompositions. it also specifies how attributes are bound to the values of objects in the objectbase and this is described in the with section using get and put. for the activities defined here, no decomposition condition is attached as every activity is decomposed uniquely. explicit definitions for attributes are omitted in this example as we use the following convention: when an output attribute y of an activity ai is given to another activity aj as one of its input attributes x, we omit the attribute definition x = y. instead, we put y for x in aj. we also used the convention that the name of a type is used as the name of an attribute of the type. if there are several attributes of the same type, we distinguish them by attaching a modifier of the form `.xxx` to the type, like in `datatree.in` and `datatree.out`. this process description is only valid for the case that we never make a mistake in performing activities and the specification given is simple enough. redoing is introduced in hfsp to handle the cases when these conditions are not met [2,3]. figure 2 is an example of a redoing decomposition applied to the case where (1) extracting data structure is mistaken in the `makeprogtree` step, (2) it is detected in `composetree`, and (3) `makeprogtree` has to be redone. in figure 3, redoing is used to cope with structure clash. the `orderclash` found in `composetree` is handled by redoing `makeprogtree`. takuya katayama masato suzuki designing new language or new manipulation systems using ml p jouvelot problems, methods, and structures (abstract) michael jackson x3j3 meeting, november 1987 john reid collaboration and composition: issues for a second generation process language over the past decade a variety of process languages have been defined and applied to software engineering environments. the idea of using a process language to encode a software process as a "process model", and enacting this using a process-sensitive environment is now well established. many prototype process-sensitive environments have been developed; but their use in earnest has been limited. we are designing a second generation process language which is a significant departure from current conventional thinking. firstly a process is viewed as a set of mediated collaborations rather than a set of partially ordered activities. secondly emphasis is given to how process models are developed, used, and enhanced over a potentially long lifetime. in particular the issue of composing both new and existing model fragments is central to our development approach.
this paper outlines these features, and gives the motivations behind them. it also presents a view of process support for software engineering drawing on our decade of experience in exploiting a "first generation" process language, and our experience in designing and exploiting programming languages. b. c. warboys d. balasubramaniam r. m. greenwood g. n. c. kirby k. mayes r. morrison d. s. munro improvements to graph coloring register allocation we describe two improvements to chaitin-style graph coloring register allocators. the first, optimistic coloring, uses a stronger heuristic to find a k-coloring for the interference graph. the second extends chaitin's treatment of rematerialization to handle a larger class of values. these techniques are complementary. optimistic coloring decreases the number of procedures that require spill code and reduces the amount of spill code when spilling is unavoidable. rematerialization lowers the cost of spilling some values. this paper describes both of the techniques and our experience building and using register allocators that incorporate them. it provides a detailed description of optimistic coloring and rematerialization. it presents experimental data to show the performance of several versions of the register allocator on a suite of fortran programs. it discusses several insights that we discovered only after repeated implementation of these allocators. preston briggs keith d. cooper linda torczon a retro/prospective on apl graphpak this paper suggests two general directions that one could take to modernize apl2 graphpak and revitalize its evolution. one direction springboards off lessons learned during graphpak's first decade and from some thinking that evolved during early (c. 1980) experiments with general arrays. the second direction exploits general arrays and apl2 functionality at a user-level. some experiments are reported relating to both areas walt niehoff extrinsic procedures corporate high performance fortran forum changes and extensions in the c family of languages roger e. lessman specification-based testing of synchronous software synchronous programming makes the implementation of reactive software easier and safer. automatic formal verification methods based on model-checking have been developed within the synchronous approach to prove the satisfaction by the software of safety properties. but these methods often require huge memory or time amounts. as a solution to that problem we propose a set of formally defined testing techniques allowing for automatic test data generation. these techniques can be used independently or as a complement to formal verification, since they need the same set of specifications. ioannis parissis farid ouabdesselam fusing loops with backward inter loop data dependence amit ganesh linux for the end user-phase 1 clay shirky evolutionary design of complex software (edcs) demonstration days 1999 this report summarizes the product/technology demonstrations given at defense advanced research projects agency (darpa) evolutionary design of complex software (edcs) program demonstration days, held 28-29 june 1999 at the sheraton national hotel, arlington, va. wayne stidolph book review: discover linux marjorie richardson compiler correctness for parallel languages mitchell wand a metaobject protocol for c++ shigeru chiba migrating to linux, part 3: the future of linux in the soho environment norman m. 
jacobowitz scheduled activity cron and at: these linux utilities can make your computer do the right thing at the right time. john raithel java 2 distributed object middleware performance analysis and optimization matjaz b. juric ivan rozman simon nash efficient high-level iteration with accumulators accumulators are proposed as a new type of high-level iteration construct for imperative languages. accumulators are user-programmed mechanisms for successively combining a sequence of values into a single result value. the accumulated result can either be a simple numeric value such as the sum of a series or a data structure such as a list. accumulators naturally complement constructs that allow iteration through user-programmed sequences of values such as the iterators of clu and the generators of alphard. a practical design for high-level iteration is illustrated by way of an extension to modula-2 called modula plus. the extension incorporates both a redesigned mechanism for iterators as well as the accumulator design. several applications are illustrated including both numeric and data structure accumulation. it is shown that the design supports efficient iteration both because it is amenable to implementation via in-line coding and because it allows high-level iteration concepts to be implemented as encapsulations of efficient low-level manipulations. robert d. cameron full functional programming in a declarative ada dialect paul a. bailes dan johnston eric salzman li wang an axiomatic approach to information flow in programs a new approach to information flow in sequential and parallel programs is presented. flow proof rules that capture the information flow semantics of a variety of statements are given and used to construct program flow proofs. the method is illustrated by examples. the applications of flow proofs to certifying information flow policies and to solving the confinement problem are considered. it is also shown that flow rules and correctness rules can be combined to form an even more powerful proof system. gregory r. andrews richard p. reitman thread scheduling for multiprogrammed multiprocessors nimar s. arora robert d. blumofe c. greg plaxton designing families of data types using exemplars designing data types in isolation is fundamentally different from designing them for integration into communities of data types, especially when inheritance is a fundamental issue. moreover, we can distinguish between the design of families--- integrated types that are variations of each other---and more general communities where totally different but cohesive collections of types support specific applications (e.g., a compiler). we are concerned with the design of integrated families of data types as opposed to individual data types; that is, on the issues that arise when the focus is intermediate between the design of individual data types and more general communities of data types. we argue that design at this level is not adequately served by systems providing only class-based inheritance hierarchies and that systems which additionally provide a coupled subtype specification hierarchy are still not adequate. we propose a system that provides an unlimited number of uncoupled specification hierarchies and illustrate it with three: a subtype hierarchy, a specialization/generalization hierarchy, and a like hierarchy. 
we also resurrect a relatively unknown smalltalk design methodology that we call programming-by-exemplars and argue that it is an important addition to a designer's grab bag of techniques. the methodology is used to show that the subtype hierarchy must be decoupled from the inheritance hierarchy, something that other researchers have also suggested. however, we do so in the context of exemplar-based systems to additionally show that they can already support the extensions required without modification and that they lead to a better separation between users and implementers, since classes and exemplars can be related in more flexible ways. we also suggest that class-based systems need the notion of private types if they are to surmount their current limitations. our points are made in the guise of designing a family of list data types. among these is a new variety of lists that have never been previously published: prefix-sharing lists. we also argue that there is a need for familial classes to serve as an intermediary between users and the members of a family. wilf r. lalonde dependency diagrams mark ray object oriented programming and virtual functions in conventional languages (an extended abstract) henrik johansson fast mutual exclusion for uniprocessors in this paper we describe restartable atomic sequences, an optimistic mechanism for implementing simple atomic operations (such as test-and-set) on a uniprocessor. a thread that is suspended within a restartable atomic sequence is resumed by the operating system at the beginning of the sequence, rather than at the point of suspension. this guarantees that the thread eventually executes the sequence atomically. a restartable atomic sequence has significantly less overhead than other software-based synchronization mechanisms, such as kernel emulation or software reservation. consequently, it is an attractive alternative for use on uniprocessors that do not support atomic operations. even on processors that do support atomic operations in hardware, restartable atomic sequences can have lower overhead. we describe different implementations of restartable atomic sequences for the mach 3.0 and taos operating systems. these systems' thread management packages rely on atomic operations to implement higher-level mutual exclusion facilities. we show that improving the performance of low-level atomic operations, and therefore mutual exclusion mechanisms, improves application performance. brian n. bershad david d. redell john r. ellis object structure in the emerald system emerald is an object-based language for the construction of distributed applications. the principal features of emerald include a uniform object model appropriate for programming both private local objects and shared remote objects, and a type system that permits multiple user-defined and compiler-defined implementations. emerald objects are fully mobile and can move from node to node within the network, even during an invocation. this paper discusses the structure, programming, and implementation of emerald objects, and emerald's use of abstract types.
andrew black norman hutchinson eric jul henry levy the unified modeling language user guide geoff glasson frameworks and pattern languages: an intriguing relationship davide brugali katia sycara extensions for multi-module records in conventional programming languages an extended record facility is described that supports multi-module records by providing: incremental and distributed record type definition, allowing field names of a record to be declared in different modules with different subscopes relative to the root record declaration. field-level declaration of access to records by modules and procedures. specification of record representation in terms of the underlying computer memory. we also describe the uses of records that motivate these extensions as well as the compiler modifications required to implement this extended record facility. from this work, we conclude that the extended record support can be added as a simple, natural extension to existing programming language designs and its implementation entails only modest additions to the compiler and linker with no significant compilation time cost. d. r. cheriton m. e. wolf stop the presses: the linux trademark phil hughes two models of concurrent objects we propose two models of concurrent objects that address, respectively, methodological and semantic issues of object-oriented programming languages. the first is a conceptual model to aid in the design of object-oriented languages for concurrent and distributed applications, and the second is a computational model that can be used to define the semantics of such languages. the second model has evolved, in a sense, from the first, though it is intended to be both more neutral and more general. traditional approaches to concurrency can be divided into two camps: those that view the world in terms of synchronized accesses to shared memory, and those that view everything in terms of message passing. a pure, shared memory view is inappropriate for object-oriented languages, since it separates data from the processes that manipulate them. once we add data abstraction to the shared memory view, however, the differences between the two camps begin to cloud over. the remaining difference is between the kinds of "objects" that are passive or active. we propose a unifying model consisting of processes and threads. processes have a state, and may be either active (changing state) or dormant. threads are virtual, and simply indicate which processes are active. there is at most one thread in a given process at any time. threads may move from one process to another if the latter process is dormant. a thread that is not in any process is suspended. the total amount of concurrent activity in a system at any time is thus defined by the total number of threads actually in processes, that is, the number of active processes. each thread originates in some process that "owns" it. we can then distinguish between passive "server" processes without their own threads, and autonomous processes that own one or more threads. we can now use these notions of processes and threads as a reference model for describing the view of concurrency in the object model of a particular language or system. for example, we can distinguish passive and active objects by whether they have their own threads or not. we can also identify the granularity of concurrency by the correspondence between objects and processes. 
typically "top-level" objects will map to processes, and sub-objects will map to part of the state of a process, but we may also consider objects with internal concurrency that correspond to systems of processes. the differences between various shared memory models and message-passing models can be understood in terms of the policies which determine when a thread may enter a process (i.e., locks, waits and signals, synchronous or asynchronous message-passing, etc.). we have designed and implemented a concurrent object-oriented language called hybrid, based on this model, in which an object is either a process, or is inside a process, as part of another object [nierstrasz 1987; konstantas, et al. 1988]. [nier87c] [konst88] objects communicate with one another by invoking operations and responding to invocations in a remote procedure call fashion. the trace of call/return communications corresponds to a thread. we can understand communication between objects in different processes in terms of the policy for admitting a thread. threads are suspended on a queue (effectively a message queue) if the target is either busy, or blocked on a call of its own. this basic policy can be modified through the use of two language constructs, delay queues and delegation. a delay queue enables a process to selectively delay threads attempting to invoke certain operations. delegation in hybrid is a mechanism that enables a process to switch between threads by not blocking when calling an object in another process. two other constructs, one for managing hierarchies of threads, and another for managing transactions, were designed, but not implemented. although the language design adopted a message-passing communication model, the prototype implementation modeled threads as lightweight processes, and processes as shared, passive entities. the point is that the conceptual model of processes and threads made it fairly easy to propose and design communication and synchronization primitives consistent with a concurrent object-oriented paradigm, independently of the implementation strategy. although this model is useful as a framework for understanding concurrent objects and for designing language constructs, it is inadequate as a computational model. in particular, it says nothing about either the "state" of a process, or the events that cause it to change state. we see the need for a computational model that will be useful: (1) for defining the semantics of concurrent, object-oriented languages like hybrid; (2) for comparing mechanisms of various languages and their implementation environments; and (3) for aiding language designers by providing a basis for language definition tools. we propose a new computational model that combines ideas from ccs [milner 1980] and actors [agha 1986]. [miln80] [agha86] our motivation for a new approach is based on the following positions: (1) concurrency cannot be modeled by non-determinism. (2) programs are not functions. the first statement means that we reject approaches that attempt to interpret concurrency by an interleaving semantics. instead, events in a concurrent computation should be seen as being partially ordered. we believe there is an important difference between multiple observers seeing different orders of events in a truly concurrent computation, and a mono-processor non-deterministically selecting a particular total (i.e., serialized) order on the events of a pseudo-concurrent computation.
the second statement expresses the conviction that standard views of programs as functions from inputs to outputs, not only discriminate against object- oriented languages by separating program from data, but they are poor at capturing concurrent computations built up of systems of cooperating programs. instead, we believe that computation, especially concurrent computation, can be better modeled in terms of communicating systems of concurrent agents. rather than distinguishing between the finite control and the "input" to a computation, we model them together as an initial system of concurrent agents (i.e., processes, or "objects"). the progress of a computation can be observed as a partial order of events, where each event represents a (synchronous) communication between a pair of agents, and yields a new, possibly concurrent, behaviour for each of the participants of the event. the "output" of a partial computation is a new system of agents, which may then continue to participate in events, if any are possible. computations may or may not terminate. we have designed and implemented a simple ccs-like language called abacus based on a subset of these ideas [nierstrasz 1988]. [nier88d] agents are specified using behaviour expressions which encapsulate the events (communications) the agent may participate in. behaviour expressions consist of input and output offers (i.e., guards) on event names. operators on behaviour expressions include non-deterministic choice and concurrent composition. a behaviour expression for a system of agents is a static description of possible computations that may result. events may take place when there are matching offers between concurrently composed agents. the resulting partial order of events is effectively a history of the computation for multiple observers. any initial sequence of observed events yields a new behaviour expression that describes the remaining possible computations. the current version of abacus has only the power of finite automata (there is a finite set of reachable states for any system). we are presently searching for the right set of primitives that will extend abacus to be computationally complete, yet permit behaviour expressions to remain directly interpretable. our long term goal is to use abacus as a tool for defining the semantics of languages like hybrid, and for providing a formal and implementable basis for studying and comparing constructs of concurrent and object-oriented programming languages. [agha 1986] g.a. agha, actors: a model of concurrent computation in distributed systems, the mit press, cambridge, massachusetts, 1986. [konstantas, et al. 1988] d. konstantas, o.m. nierstrasz and m. papathomas, "an implementation of hybrid, a concurrent object-oriented language", in active object environments, ed. d.c. tsichritzis, centre universitaire d'informatique, university of geneva, june 1988. [milner 1980] r. milner, a calculus of communicating systems, lecture notes in computer science 92, springer-verlag, 1980. [nierstrasz 1987] o.m. nierstrasz, "active objects in hybrid", acm sigplan notices, proceedings oopsla '87, vol. 22, no. 12, pp. 243-253, dec. 1987. [nierstrasz 1988] o. nierstrasz, "mapping object descriptions to behaviours", in active object environments, ed. d.c. tsichritzis, centre universitaire d'informatique, university of geneva, june 1988. o. m. nierstrasz an extended frame language this paper describes several extensions to the traditional frame based programming semantics. 
we generalize the treatment of slots to include frame-like semantics; in addition we allow slots to be referenced by complex terms, whereas previously described frame based systems typically allow only scalar slot references. the resulting frame system naturally accommodates complex data structures and provides an extremely powerful mechanism for specifying complex relationships between objects represented by frames. f. p. block n. c. chan a help system for command driven applications michael coyne aaron konstam lj interviews ldp's greg hankins marjorie richardson an end-to-end approach to the resequencing problem françois baccelli erol gelenbe brigitte plateau threaded code designs for forth interpreters p. joseph hong extensions to static scoping scope rules are presented as they appear in the language l, which is currently under development at mcgill. these rules permit a programmer to specify explicitly the duration and visibility of all objects. such specification can be used to declare variables or to create data structures which persist from one invocation of a block to the next. in addition, the data may be caused to persist from one run of the program to the next, obviating temporary files which are outside the scope and control of the programming language. the visibility of an object may also be specified to be larger than the block that contains its declaration. this facility allows the programmer to export some objects, yet keep others private. finally, a facility for call-site visibility is presented that provides some of the expressive power of dynamic scoping and macros without their inherent type insecurity. g. v. cormack process synchronization and ipc craig e. wills the ctalk programming language: a strategic evolution of apl this document presents ctalk, which is a new programming language quite different from apl itself. ctalk is an attempt to combine the "best" features of apl, but also of other languages such as lisp [9], and offer a language that looks "satisfactory" according to the criteria of the mainstream of computer science. the most important features of ctalk are its syntax, which is very close to that of the c programming language, the adoption of fundamental concepts that were missing in apl, such as lexical binding, but also the integration of a wide range of array operations that are borrowed from apl. it is shown here that ctalk can be implemented more simply and more efficiently than apl, while keeping all the power of apl operations, and being an acceptable alternative to apl itself. jean-jacques girardot the effects of symbology and spatial arrangement on the comprehension of software specifications seventy-two participants were presented with specifications for each of three modular-sized computer programs. nine different specification formats were prepared for each program. these formats varied along two dimensions: type of symbology and spatial arrangement. the type of symbology included natural language, constrained language (pdl), and ideograms (flowchart symbols). the spatial arrangement included sequential, branching, and hierarchical versions. the participants answered a series of comprehension questions on each program using only the program specifications. three types of questions were presented: forward-tracing, backward-tracing, and input-output. both forward- and backward-tracing questions were answered more quickly from specifications presented in pdl or ideograms than in natural language. 
forward-tracing questions were answered most quickly from a branching arrangement, and backward-tracing questions were answered more quickly from branching and hierarchical arrangements. response times to the input-output questions did not vary significantly as a function of the type of symbology or the spatial arrangement. sylvia b. sheppard elizabeth kruesi bill curtis an approach to communication-efficient data redistribution we address the development of efficient methods for performing data redistribution of arrays on distributed-memory machines. data redistribution is important for the distributed-memory implementation of data parallel languages such as high performance fortran. an algebraic representation of regular data distributions is used to develop an analytical model for evaluating the communication cost of data redistribution. using this algebraic representation and the analytical model, an approach to communication- efficient data redistribution is developed. implementation results on the intel ipsc/860 are reported. s. d. kaushik c.-h. huang r. w. johnson p. sadayappan an efficient solution to the mutual exclusion problem using unfair and weak semaphore s. haldar d. subramanian generic implementation via analogies in the ada programming languages h harrison d b liu signature matching: a key to reuse amy moormann zaremski jeannette m. wing jomp - an openmp-like interface for java j. m. bull m. e. kambites world domination eric takes a serious look at what the world will be like when linux is the dominant operating system--or is he just kidding eric raymond a construction approach for software agents using components this paper presents a construction approach for software agents. a software agent is regarded as a main frame plus some components, and it is constructed by selecting suitable main frame and software components, assembling and running through the control mechanism on the main frame. the multi-agent system is organised on the client/server model. the developed system based on this method profits from software reuse in the distributed environment. liu hong lin zongkai zeng guangzhou performing work efficiently in the presence of faults cynthia dwork joseph y. halpern orli waarts optimization russell m. clapp trevor mudge determining the extent of lookahead in syntactic error repair many syntactic error repair strategies examine several additional symbols of input to guide the choice of a repair; a problem is determining how many symbols to examine. the goal of gathering all relevant information is discussed and shown to be impractical; instead we can gather all information relevant to choosing among a set of "minimal repairs." we show that finding symbols with the property "moderate phrase-level uniqueness" is sufficient to establish that all information relevant to these minimal repairs has been seen. empirical results on the occurrence of such symbols in pascal are presented. jon mauney charles n. fischer object based data engineering: the necessary evil of ada development jeffrey l. richardson from the publisher: january 2000 phil hughes efficient binary i/o of idl objects j. m. newcomer software metrics, measurement theory, and viewpoints erhard konrad programming in j/windows j has been available as shareware for the last four years. the core language has been largely complete from outset, but it is only comparatively recently that the implementation (at least for the pc) has included features needed for serious application development. 
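as a concrete illustration of the regular redistribution problem discussed in the kaushik, huang, johnson and sadayappan entry above: the python sketch below (purely illustrative, with hypothetical function names; it is not the authors' algebraic framework or cost model) computes, for a one-dimensional array, which elements each processor must send when changing from a block to a cyclic distribution. counting the messages and elements in the resulting schedule is a crude stand-in for the analytical communication-cost evaluation described in the entry.

```python
# sketch: block -> cyclic redistribution of a 1-d array over p processors.
# owner_block / owner_cyclic are the usual hpf-style owner functions; the
# schedule maps (sender, receiver) -> list of global indices to transfer.
from collections import defaultdict

def owner_block(i, n, p):
    # contiguous blocks of ceil(n/p) elements per processor
    b = -(-n // p)
    return i // b

def owner_cyclic(i, n, p):
    # round-robin assignment of elements to processors
    return i % p

def redistribution_schedule(n, p):
    sched = defaultdict(list)
    for i in range(n):
        src, dst = owner_block(i, n, p), owner_cyclic(i, n, p)
        if src != dst:
            sched[(src, dst)].append(i)
    return sched

if __name__ == "__main__":
    # 16 elements over 4 processors: print each required message and its size
    for (src, dst), idxs in sorted(redistribution_schedule(16, 4).items()):
        print(f"p{src} -> p{dst}: {len(idxs)} elements {idxs}")
```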
the most significant change was the introduction of windows support in release 6, which for the first time allowed full screen applications to be written in j. following the release of 6.2, which included many additions to the windows support and a much better session manager, i started using the windows version of j in small applications, for which i would normally have used apl. since then i have switched over to using j for almost all development, except for client work where apl is specifically required. i believe that j is already comparable to commercial apl's for application development and will soon be superior. in this paper, i will describe programming the j/windows interface, and discuss some practical considerations in using it. chris burke software reliability measurement session many people think of reliability as a devoutly wished for but seldom present attribute of a program. this leads to the idea that one should make a program as reliable as one possibly can. unfortunately, in the real world software reliability is usually achieved at the expense of some other characteristic of the product such as program size, run or response time, maintainability, etc. or the process of producing the product such as cost, resource requirements, scheduling, etc. one wishes to make explicit trade-offs among the software product and process rather than let them happen by chance. such trade-offs imply the need for measurement. because of mounting development and operational costs, pressures for obtaining better ways of measuring reliability, have been mounting. this session deals with this crucial area. john d. musa supporting reuse and configuration: a port based scm model d. aquilino p. asirelli p. inverardi p. malara a comparison of the object-oriented and process paradigms rob strom viewstamped replication: a general primary copy brian m. oki barbara h. liskov control system development tools this paper provides a core of apl algorithms for control system development and demonstrates their use by solving a typical control problem. in doing so it outlines useful numerical techniques for simulating dynamic systems and for solving some of the central equations of control theory. although some sections of the paper are addressed to apl2 users, the majority of the paper applies to apl. moreover, by doing a little extra work to handle complex numbers and by installing a "callable" compiled eigenvalue-eigenvector routine, all of the material presented can be adapted to any apl system. while apl is a comfortable environment for control system development, apl2 contains two especially useful enhancements: 1) complex numbers included in a natural way, and 2) the function eigen. apl2's facility with complex numbers permits the direct and clear coding of frequency domain methods such as root locus, bode plots, and the generation of transfer functions. apl2's facility with complex numbers also makes it possible to include a native eigenvalue-eigenvector utility function, eigen. this function generates the eigenvalues and eigenvectors of general square matrices, which can then be used for root locus studies, for transforming system equations to canonical forms, and for efficiently solving the riccati and lyapunov equations. non- eigen-based functions are also provided so that all apl users will find enough tools to model, simulate, analyze, and develop regulators, observers, and filters for linear dynamic systems. 
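the control system development tools entry above relies on eigenvalue and eigenvector computations for root-locus studies and for solving the riccati and lyapunov equations; its code is apl/apl2, so the short python sketch below (using numpy, an assumption of this illustration rather than anything from the paper) only shows the core idea: a linear system dx/dt = a·x is asymptotically stable exactly when every eigenvalue of a has a negative real part.

```python
# sketch: eigenvalue-based stability check for a linear system  dx/dt = A x.
# a continuous-time system is asymptotically stable when every eigenvalue
# of A has a strictly negative real part.
import numpy as np

def is_stable(A):
    eigenvalues = np.linalg.eigvals(A)
    return bool(np.all(eigenvalues.real < 0)), eigenvalues

if __name__ == "__main__":
    # lightly damped two-state system (hypothetical example matrix)
    A = np.array([[0.0, 1.0],
                  [-2.0, -0.5]])
    stable, eigs = is_stable(A)
    print("eigenvalues:", eigs)
    print("stable:", stable)
```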
scott kimbrough focus on embedded systems rick lehrbaum a taxonomy of datatypes brian meek faults on its sleeve: amplifying software reliability testing most of the effort that goes into improving the quality of software paradoxically does not lead to quantitative, measurable quality. software developers and quality-assurance organizations spend a great deal of effort preventing, detecting, and removing "defects" - parts of software responsible for operational failure. but software quality can be measured only by statistical parameters like hazard rate and mean time to failure, measures whose connection with defects and with the development process is little understood. at the same time, direct reliability assessment by random testing of software is impractical. the levels we would like to achieve, on the order of 10^6 - 10^8 executions without failure, cannot be established in reasonable time. some limitations of reliability testing can be overcome but the "ultrareliable" region above 10^8 failure-free executions is likely to remain forever untestable. we propose a new way of looking at the software reliability program. defect-based efforts should amplify the significance of reliability testing. that is, developers should demonstrate that the actual reliability is better than the measurement. we give an example of a simple reliability-amplification technique, and suggest applications to systematic testing and formal development methods. dick hamlet jeff voas focus on software david a. bandel software pipelining utilizing parallelism at the instruction level is an important way to improve performance. because the time spent in loop execution dominates total execution time, a large body of optimizations focuses on decreasing the time to execute each iteration. software pipelining is a technique that reforms the loop so that a faster execution rate is realized. iterations are executed in overlapped fashion to increase parallelism. let {abc}^n represent a loop containing operations a, b, c that is executed n times. although the operations of a single iteration can be parallelized, more parallelism may be achieved if the entire loop is considered rather than a single iteration. the software pipelining transformation utilizes the fact that a loop {abc}^n is equivalent to a{bca}^(n-1)bc. although the operations contained in the loop do not change, the operations are from different iterations of the original loop. various algorithms for software pipelining exist. a comparison of the alternative methods for software pipelining is presented. the relationships between the methods are explored and possibilities for improvement highlighted. vicki h. allan reese b. jones randall m. lee stephen j. allan an apl compiler: the sofremi-agl compiler, a tool to produce low-cost efficient software alain guillon object-oriented programming in ada83 - genericity rehabilitated henry g. baker who pays for standards? who should pay for standards? l. p. meissner configuration management in biin sms r. w. schwanke e. s. cohen r. gluecker w. m. hasling d. a. soni m. e. wagner configuration support for system description, construction and evolution j. kramer j. magee m. sloman the influence of system design complexity research on the design of module interconnection languages d. c. ince debugging distributed programs using controlled re-execution distributed programs are hard to write. 
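the identity quoted in the software pipelining entry above, {abc}^n = a{bca}^(n-1)bc, is easy to check operationally; the python sketch below (purely illustrative, not any of the surveyed pipelining algorithms) builds both operation traces for a small n and confirms they are identical, which is exactly the re-association that lets b and c of one iteration overlap with the a of the next in the steady state.

```python
# sketch: check that {abc}^n and a{bca}^(n-1)bc produce the same operation trace.
def original_loop(n):
    trace = []
    for i in range(n):
        trace += [("a", i), ("b", i), ("c", i)]
    return trace

def pipelined_loop(n):
    assert n >= 1
    trace = [("a", 0)]                     # prologue: first a
    for i in range(n - 1):                 # steady state: b,c of i with a of i+1
        trace += [("b", i), ("c", i), ("a", i + 1)]
    trace += [("b", n - 1), ("c", n - 1)]  # epilogue: drain the last iteration
    return trace

if __name__ == "__main__":
    n = 4
    assert original_loop(n) == pipelined_loop(n)
    print(pipelined_loop(n))
```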
a distributed debugger equipped with the mechanism to re-execute the traced computation in a controlled fashion can greatly facilitate the detection and localization of bugs. this approach gives rise to a general problem, called predicate control problem, which takes a computation and a safety property specified on the computation, and outputs a controlled computation that maintains the property. we define a class of global predicates, called region predicates, that can be controlled efficiently in a distributed computation. we prove that the synchronization generated by our algorithm is optimal. further, we introduce the notion of an admissible sequence of events and prove that it is equivalent to the notion of predicate control. we then give an efficient algorithm for the class of disjunctive predicates based on the notion of an admissible sequence. neeraj mittal vijay k. garg a programmer controlled approach to data and control abstraction traditionally, data abstraction languages have only provided a means to extend the language "upward" to include new procedures and data types not present in the base language. this paper introduces a complementary approach, which also allows programmers to extend the language "downward" and thus to override many of the previously preempted decisions concerning the nature and implementation of various language constructs. in order to illustrate the approach, several extension examples are presented that involve control of decisions below the level of pascal-like languages. implementation of the programmer defined language constructs is also discussed and benchmark results are reported that show them comparable---and in many cases exceeding---in efficiency the corresponding built-in constructs of conventional languages. juha heinänen exploiting style in architectural design environments david garlan robert allen john ockerbloom practical programmer: inspections - some surprising findings robert l. glass distributed software architectures (tutorial) jeff kramer jeff magee letters to the editor corporate linux journal staff editorial diane crawford are booleans safe? b. a. wichmann proposal for a monotonic multiple inheritance linearization previous studies concerning multiple inheritance convinced us that a better analysis of conflict resolution mechanisms was necessary. in [dhhm92], we stated properties that a sound mechanism has to respect. among them, a monotonicity principle plays a critical role, ensuring that the inheritance mechanism behaves "naturally" relative to the incremental design of the inheritance hierarchy. we focus here on linearizations and present an intrinsically monotonic linearization, whereas currently used linearizations are not. this paper describes the algorithm in detail, explains the design choices, and compares it to other linearizations, with loops and clos taken as references. in particular, this new linearization extends clos and loops linearizations, producing the same results when these linearizations are sound. r. ducournau m. habib m. huchard m. l. mugnier playing with binary formats alessandro rubin why specint95 should not be used to benchmark embedded systems tools jakob engblom design concepts as basis for organizing software catalogs mehdi jazayeri georg trausmuth design by decomposition of multiparty interactions in raddle87 i. r. forman data abstraction and hiding in fortran few computing techniques have been as successful for software development as fortran's external procedure facilities. 
these may be viewed as a practical way to provide libraries of abstract operators for manipulating numeric and boolean, and in fortran 77 also character, data objects. the implementation details of such operators are hidden from, and are of no concern to, the user. proposed fortran 8x allows programmers to define new data objects. intrinsic operations do not exist for programmer-defined objects, and procedural abstraction is the only mechanism for providing operations for such objects. thus implementation details of programmer-defined object operators may also be hidden from the user. moreover, data object definitions themselves may be made externally, and used without local redefinition. thus the implementation details of any data object may also be ignored by the user. fortran 8x therefore provides many of the features of data abstraction as well as procedural abstraction. the proposed fortran 8x mechanisms for data object definition and external data libraries will be described, with examples of their use as data abstraction facilities. jerrold l. wagener higher-order abstract syntax we describe motivation, design, use, and implementation of higher-order abstract syntax as a central representation for programs, formulas, rules, and other syntactic objects in program manipulation and other formal systems where matching and substitution or unification are central operations. higher-order abstract syntax incorporates name binding information in a uniform and language generic way. thus it acts as a powerful link integrating diverse tools in such formal environments. we have implemented higher-order abstract syntax, a supporting matching and unification algorithm, and some clients in common lisp in the framework of the ergo project at carnegie mellon university. f. pfenning c. elliot novice to novice: linux installation and x-windows dean oisboid holmes: a system to support software product lines giancarlo succi jason yip eric liu witold pedrycz oospec: an executable object-oriented specification language mohammad n. paryavi william j. hankley translation: myth or reality? (panel) grady booch steven fraser robert c. martin steven j. mellor michael lee steven garone martin fowler douglas c. schmidt marie lenzi take command: finding files and more eric goebelbecker compile-time support for efficient data race detection in shared-memory parallel programs john mellor-crummey uninitialized modula-2 abstract object instances, yet again j bondy magpie: mpi's collective communication operations for clustered wide area systems thilo kielmann rutger f. h. hofman henri e. bal aske plaat raoul a. f. bhoedjang using view-based models to formalize architecture description kurt lichtner paulo s.c. alencar donald cowan fostering debugging communities on the web john domingue paul mulholland the glorious future vs. the good old days (editorial) jeanne c. adams efficient accommodation of may-alias information in ssa form we present an algorithm for incrementally including may-alias information into static single assignment form by computing a sequence of increasingly precise (and correspondingly larger) partial ssa forms. our experiments show significant speedup of our method over exhaustive use of may-alias information, as optimization problems converge well before most may-aliases are needed. 
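the higher-order abstract syntax entry above (pfenning and elliot) describes representing binders of the object language by binders of the metalanguage; the python sketch below is a minimal, hypothetical rendering of that idea for a tiny lambda calculus, where object-level abstraction is a host-language function and beta-reduction is just function application. the authors' implementation is in common lisp, so this is an illustration of the representation, not their code.

```python
# sketch: higher-order abstract syntax for a tiny lambda calculus.
# binders are represented by python functions, so capture-avoiding
# substitution comes for free from the host language.
from dataclasses import dataclass
from typing import Callable, Union

Term = Union["Var", "Lam", "App"]

@dataclass
class Var:
    name: str                       # free variables only; bound variables never appear

@dataclass
class Lam:
    body: Callable[["Term"], "Term"]  # the binder is a metalanguage function

@dataclass
class App:
    fun: "Term"
    arg: "Term"

def whnf(t):
    # reduce to weak head normal form; beta-reduction is body(arg)
    if isinstance(t, App):
        f = whnf(t.fun)
        if isinstance(f, Lam):
            return whnf(f.body(t.arg))
        return App(f, t.arg)
    return t

if __name__ == "__main__":
    const = Lam(lambda x: Lam(lambda y: x))
    print(whnf(App(App(const, Var("a")), Var("b"))))   # -> Var(name='a')
```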
ron cytron reid gershbein testing the correctness of tasking supervisors with tsl specifications this paper describes the application of behavior specifications to the testing of tasking supervisors, an important component of an implementation of a concurrent programming language. the goal of such testing is to determine whether or not a tasking supervisor correctly implements the semantics of its associated language. we have tested a distributed tasking supervisor for the ada programming language by monitoring the execution behavior of ada tasking programs that have been compiled and linked with the supervisor. this behavior is checked for consistency with an event-based formalization of the ada tasking semantics expressed in the tsl specification language. the tsl runtime system automatically performs all monitoring and consistency checking at runtime. our approach improves upon other approaches to testing tasking supervisors, particularly the ada compiler validation capability (acvc), and also an approach described by klarund. in contrast with these other approaches, in our approach (1) we test only the behavior of the tasking supervisor, not the behavior of the test programs; and (2) any ada tasking program may be employed as test data, because the tsl specifications we construct describe the semantics of ada language statements, not the semantics of application programs. d. rosenblum d. luckham a documentation system, philosophy and implementation at the uab university computer center the university computer center at the university of alabama at birmingham (uab/tucc) supports the primary administrative functions of the university, as well as some research and academic programs. a relatively small, specialized user services staff must provide assistance to several hundred users with widely varying needs. the traditional methods of maintaining documentation, which require dedicated people and resources to update and enter the information, are impossible to implement here given the size and makeup of the staff and the diversity of the user base. the solution to this problem has been the development of an on-line system, called usdr (user services documentation retrieval), which permits information in many forms (e.g., script output, tapes from vendors, word processing files) to be retrieved and printed, without the need to rekey or reformat documentation into a common form or location in the system. this paper will discuss the underlying concepts of usdr and an overview of the system which is actually run at uab/tucc. landy manderson eliciting interactive systems requirements in a language-centered user- designer collaboration: a semiotic approach marcelo soares pimenta richard faust a decentralized model for information flow control andrew c. myers barbara liskov generators and the replicator control structure in the parallel environment of alloy the need for searching a space of solutions appears often. many problems, such as iteration over a dynamically created domain, can be expressed most naturally using a generate-and-process style. serial programming languages typically support solutions of these problems by providing some form of generators or backtracking. a parallel programming language is more demanding since it needs to be able to express parallel generation and processing of elements. failure driven computation is no longer sufficient and neither is multiple-assignment to generated values. 
we describe the replicator control operator used in the high level parallel programming language alloy. the replicator control operator provides a new view of generators which deals with these problems. thanasis mitsolides malcolm harrison tcfs transparent cryptographic file system: think of tcfs as an extended nfs. it acts just like nfs, but allows a user to protect files using encryption ermelindo mauriello apl as a prototyping language: case study of a compiler development project we are applying prototyping method using apl to a commercial compiler's development project. this paper will discuss the following matters based on our one year experience of the project: merits of apl as a prototyping language. an environment of the prototype. a representation of tables and the intermediate language of compilers in an apl environment. a strategy of transforming the apl prototype into final product written in pascal. evaluation of this method. matsuki yoshino simulating the object-oriented paradigm to nial philip j. beaudet michael a. jenkins locality, causality and continuations concurrency and distribution are topics exacerbated by the omnipresence of internet. although many languages address these topics, few offer a real opportunity to control and coordinate widely spread dynamic computations. this paper presents such a language and its prominent features. besides explaining the software architecture of the implementation (based on objects and generic functions), it also presents an original coherency protocol for shared mutable variables. we first recall, in section 1, the main features of our scheme-based, concurrent, distributed and computation-oriented language already presented in more details and examples in [qd93]. section 2 explains how to systematically implement a concurrent and distributed interpreter for that language, using a set of program transformations combining abstract continuation passing style (acps) and object-oriented lifting. the originality of this implementation is that it chiefly uses objects and generic functions in a style that allows to concentrate the problems related to concurrency and migration of computations into the sole discriminating behavior of generic functions. acps is not only used to reify continuations but also to enforce locality of computations in presence of distal objects. in section 3, we propose a new (to our knowledge) protocol to manage shared mutable variables. this protocol enhances [msrn92], does not require atomic broadcast, tolerates short communication breakdowns and uses bounded circular clocks. this result comes from the use of a distributed gc (which allows us to maintain an approximation of global virtual time) and from the exploitation of causality as stated by continuations. to give a continuation a value (and a store) clearly expresses that the computations that are present in the continuation causally depend on the invoker of the continuation. finally the computation- orientation of our language and mainly the ability to control groups of threads, concurrently running on multiple sites for the completion of the evaluation of a single expression, is shortly sketched in section 4. as usual, related works and conclusions end this paper. christian queinnec the impact of selected concurrent language constructs on the sam run-time system myra jean prelle ann m. wollrath thomas j. brando edward h. 
bensley a suite of www-based tools for advanced course management a collection of tools for creation of advanced and comprehensive course home pages is presented. the tools cover the spectrum from course overview pages and hypertext teaching materials to interactive services that support the teaching activities during the course. from the teacher's perspective the tools allow for abstraction from details and automation of routine work in the authoring process. seen from a student's perspective the comprehensive linking of course plans, teaching material, and interactive services provides for a valuable organization of a large body of information. kurt nørmark evaluating the locality benefits of active messages a major challenge in fine-grained computing is achieving locality without excessive scheduling overhead. we built two j-machine implementations of a fine-grained programming model, the berkeley threaded abstract machine. one implementation takes an active messages approach, maintaining a scheduling hierarchy in software in order to improve data cache performance. another approach relies on the j-machine's message queues and fast task switch, lowering the control costs at the expense of data locality. our analysis measures the costs and benefits of each approach, for a variety of programs and cache configurations. the active messages implementation is strongest when miss penalties are high and for the finest-grained programs. the hardware- buffered implementation is strongest in direct-mapped caches, where it achieves substantially better instruction cache performance. ellen spertus william j. dally supporting the regression testing in lisp programs richard c. waters a tale of two directories: implementing distributed shared objects in java maurice herlihy michael p. warres a tool for the collection of industrial software metrics data k. k. aggarwal a fast general-purpose hardware synchronization mechanism john t. robinson synchronization primitives for a multiprocessor: a formal specification formal specifications of operating system interfaces can be a useful part of their documentation. we illustrate this by documenting the threads synchronization primitives of the taos operating system. we start with an informal description, present a way to formally specify interfaces in concurrent systems, give a formal specification of the synchronization primitives, briefly discuss the implementation, and conclude with a discussion of what we have learned from using the specification for more than a year. a. birrell j. guttag j. horning r. levin shortcut deforestation in calculational form akihiko takano erik meijer experience with an estelle development system anthony chung deepinder sidhu model and techniques to specify, develop and use a framework: a meta modeling approach object-oriented frameworks are very popular for the efficiency they provide in reusing software. however, their use by instantiation, their enrichment by composition, and their extension by specialization, are all complicated operations requiring to be simplified by the introduction of new techniques and models. pascal rapicault compiler structure engineering with attribute grammars ilka miloucheva hans loeper performance evaluation of the orca shared-object system orca is a portable, object-based distributed shared memory (dsm) system. this article studies and evaluates the design choices made in the orca system and compares orca with other dsms. 
the article gives a quantitative analysis of orca's coherence protocol (based on write-updates with function shipping), the totally ordered group communication protocol, the strategy for object placement, and the all-software, user-space architecture. performance measurements for 10 parallel applications illustrate the trade-offs made in the design of orca and show that essentially the right design decisions have been made. a write-update protocol with function shipping is effective for orca, especially since it is used in combination with techniques that avoid replicating objects that have a low read/write ratio. the overhead of totally ordered group communication on application performance is low. the orca system is able to make near-optimal decisions for object placement and replication. in addition, the article compares the performance of orca with that of a page-based dsm (treadmarks) and another object-based dsm (crl). it also analyzes the communication overhead of the dsms for several applications. all performance measurements are done on a 32-node pentium pro cluster with myrinet and fast ethernet networks. the results show that orca programs send fewer messages and less data than the treadmarks and crl programs and obtain better speedups. henri e. bal raoul bhoedjang rutger hofman ceriel jacobs koen langendoen tim ruhl m. frans kaashoek an efficient implementation scheme of concurrent object-oriented languages on stock multicomputers several novel techniques for efficient implementation of concurrent object-oriented languages on general purpose, stock multicomputers are presented. these techniques have been developed in implementing our concurrent object-oriented language abcl on a fujitsu laboratory's experimental multicomputer ap1000 consisting of 512 sparc chips. the proposed intra-node scheduling mechanism reduces the cost of local message passing. the cost of intra-node asynchronous message passing is about 20 sparc instructions in the best case, including locality checking, dynamic method lookup, and scheduling. the minimum latency of asynchronous internode message passing is about 9μs, or about 120 instructions, employing the self-dispatching mechanism independently proposed by eicken et al. a large scale benchmark which involves 9,000,000 message passings shows 440 times speedup on the 512-node system compared to the sequential version of the same algorithm. we rely on simple hardware support for message passing and use no specialized architectural support for object-oriented computing. thus, we are able to enjoy the benefits of future progress in standard processor technology. our result shows that concurrent object-oriented languages can be implemented efficiently on conventional multicomputers. kenjiro taura satoshi matsuoka akinori yonezawa concurrent use of generic types in modula-2 m e goldsby methodology standards: help or hindrance? david monarchi grady booch brian henderson-sellers ivar jacobson steve mellor james rumbaugh rebecca wirfs-brock introducing modula-3: the right tool for building complex linux applications. geoff wyant a note on pci: distributed processes communicating by interrupts uwe petermann andrzej szalas optimizing pattern matching we present improvements to the backtracking technique of pattern-matching compilation. several optimizations are introduced, such as commutation of patterns, use of exhaustiveness information, and control flow optimization through the use of labeled static exceptions and context information. 
these optimizations have been integrated in the objective-caml compiler. they have shown good results in increasing the speed of pattern-matching intensive programs, without increasing final code size. fabrice le fessant luc maranget property specification patterns for finite-state verification matthew b. dwyer george s. avrunin james c. corbett performance aspects of disk space management dynamic disk space management is a set of strategies for allocating files to disk storage devices so as to reduce seek time, device contention, and storage fragmentation. in this paper, the impact on system throughput and user response time is projected, via simulation model results, for a small general-purpose operating system which manages large capacity disk drives. james o. dyal michael k. draughn checking progress with action priority: is it fair? the liveness characteristics of a system are intimately related to the notion of fairness. however, the task of explicitly modelling fairness constraints is complicated in practice. to address this issue, we propose to check lts (labelled transition system) models under a strong fairness assumption, which can be relaxed with the use of action priority. the combination of the two provides a novel and practical way of dealing with fairness. the approach is presented in the context of a class of liveness properties termed progress, for which it yields an efficient model- checking algorithm. progress properties cover a wide range of interesting properties of systems, while presenting a clear intuitive meaning to users. dimitra giannakopoulou jeff magee jeff kramer the charrette ada compiler the charrette ada compiler is a working compiler for a substantial subset of the preliminary ada language. the ada source program is translated into an equivalent program in an intermediate implementation language. the result of the compilation process is machine language generated for this intermediate program. this paper provides a brief overview of the compiler with special attention given to the primary translation phase. emphasis is placed on the transformation of ada type and subtype information and the representation of objects. the translation of several interesting statement and expression forms is also outlined. jonathan rosenberg david alex lamb andy hisgen mark sherman reporting test results dale gaumer daniel roy complete validated software: validation through testing (panel session) john c. cherniavsky portability and reusability: common issues and differences james d. mooney concurrent object access in blaze 2 p. mehrotra j. van rosendale dynamic code replacement and ada ken tindell building high-performance applications and services in java: an experiential study sandeep k. singhal binh q. nguyen michael fraenkel richard redpath jimmy nguyen hamming numbers, lazy evaluation, and eager disposal c. k. yuen fortran standards corporate x3j3 staff editing apl objects with cms xedit the vm/sp system editor xedit is used to edit apl objects. two approaches are possible: with an apl function, or with an exec written in rexx (the new vm/sp system interpreter). the advantages of the latter are that nothing need be copied into the user's workspace, and that apl code can be executed from xedit, and xedit commands from apl. no new auxiliary processor is needed. the technique is in use at ibm yorktown. 
norman brenner managing reentrant structures using reference counts automatic storage management requires that one identify storage unreachable by a user's program and return it to free status. one technique maintains a count of the references from user's programs to each cell, since a count of zero implies the storage is unreachable. reentrant structures are self-referencing; hence no cell in them will have a count of zero, even though the entire structure is unreachable. a modification of standard reference counting can be used to manage the deallocation of a large class of frequently used reentrant structures, including two-way and circularly linked lists. all the cells of a potentially reentrant structure are considered as part of a single group for deallocation purposes. information associated with each cell specifies its group membership. internal references (pointers from one cell of the group to another) are not reference counted. external references to any cell of this group are counted as references to the group as a whole. when the external reference count goes to zero, all the cells of the group can be deallocated. this paper describes several ways of specifying group membership, properties of each implementation, and properties of mutable and immutable group membership. daniel g. bobrow data-flow design as a visual programming language s. eisenbach l. mcloughlin c. sadler automatic isolation of compiler errors this paper describes a tool called vpoiso that was developed to isolate errors automatically in the vpo compiler system. the two general types of compiler errors isolated by this tool are optimization and nonoptimization errors. when isolating optimization errors, vpoiso relies on the vpo optimizer to identify sequences of changes, referred to as transformations, that result in semantically equivalent code and to provide the ability to stop performing improving (or unnecessary) transformations after a specified number have been performed. a compilation of a typical program by vpo often results in thousands of improving transformations being performed. the vpoiso tool can automatically isolate the first improving transformation that causes incorrect output of the execution of the compiled programs by using a binary search that varies the number of improving transformations performed. not only is the illegal transformation automatically isolated, but vpoiso also identifies the location and instant the transformation is performed in vpo. nonoptimization errors occur from problems in the front end, code generator, and necessary transformations in the optimizer. if another compiler is available that can produce correct (but perhaps more inefficient) code, then vpoiso can isolate nonoptimization errors to a single function. automatic isolation of compiler errors facilitates retargeting a compiler to a new machine, maintenance of the compiler, and supporting experimentation with new optimizations. david b. whalley linux distributions comparison corporate linux journal staff domain understanding and the software process some application domains for which software is written are well understood, and some are not. this distinction is crucial to understanding---and improving---the software process. pamela zave towards a theory of packages a model for packages is introduced, along with operations for their manipulation. the model is based on the unifying principle that programs should be represented by trees, and packages by substitutions on trees. 
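returning to the bobrow entry above on managing reentrant structures with reference counts: the python sketch below (hypothetical names, not the original implementation) keeps a single external reference count per group, ignores pointers between cells of the same group, and frees every cell of the group when the external count reaches zero, which is how a circular list whose cells only point at each other can still be reclaimed.

```python
# sketch: group-based reference counting for reentrant (self-referencing) structures.
# internal references (cell -> cell within one group) are not counted; only
# references from outside the group raise the group's count.
class Group:
    def __init__(self):
        self.external_refs = 0
        self.cells = []

class Cell:
    def __init__(self, group):
        self.group = group
        self.links = []                       # references to other cells (any group)
        group.cells.append(self)

    def link(self, other):
        self.links.append(other)
        if other.group is not self.group:
            other.group.external_refs += 1    # crossing a group boundary: count it

def add_external_ref(cell):
    cell.group.external_refs += 1

def drop_external_ref(cell, free):
    cell.group.external_refs -= 1
    if cell.group.external_refs == 0:
        for c in cell.group.cells:            # whole group is unreachable
            free(c)

if __name__ == "__main__":
    g = Group()
    a, b = Cell(g), Cell(g)
    a.link(b); b.link(a)                      # circular list: internal refs, not counted
    add_external_ref(a)                       # one reference from a user program
    drop_external_ref(a, free=lambda c: print("freed", id(c)))
```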
operations are defined on packages, that allow the construction of any package from a collection of basic packages. a programming environment, based on this model, would allow manipulations and operations that are not possible in current languages. information hiding and encapsulation are automatically supported by the model. a typing mechanism is presented, which allows polymorphic types. the typing does not affect the typeless aspect of the model. snorri agnarsson m. s. krishnamoorthy a survival strategy for apl echoing the concerns of the conference call for papers, apl no longer holds the market niche which it once had - over the past few years new developments in the world of computing have overtaken many of the areas which were once our exclusive province. given that the computing industry is one which focuses hard on 'winners', we have to ask ourselves whether or not there is a long-term place for apl, what that place might be and how we can attain it. the contention of this paper is that an honorable future for apl can be found and that we can reach this position more easily if we work toward it together the exact path to be outlined here is not necessarily the 'correct' one but at the present stage of the game our most urgent need is to consider as many of the facts as we can muster. dick bowman machine-adaptable dynamic binary translation dynamic binary translation is the process of translating and optimizing executable code for one machine to another at runtime, while the program is "executing" on the target machine. dynamic translation techniques have normally been limited to two particular machines; a competitor's machine and the hardware manufacturer's machine. this research provides for a more general framework for dynamic translations, by providing a framework based on specifications of machines that can be reused or adapted to new hardware architectures. in this way, developers of such techniques can isolate design issues from machine descriptions and reuse many components and analyses. we describe our dynamic translation framework and provide some initial results obtained by using this system. david ung cristina cifuentes a modest proposal concerning variables and assignment statements frank a. adrian introducing data decomposition into vdm for tractable development of programs jian lu data abstraction from a programming language viewpoint this paper traces the development of data abstraction concepts in programming languages. a data abstraction, or abstract data type, describes a collection of abstract entities and operations on the entities. a program which uses a data abstraction can access or modify the entities only through the abstract operations. specific research topics discussed in the paper include: the role of type in a programming language, the formal specification of the semantics of a data abstraction, data abstraction language construct design issues, type hierarchies, and type-checking. lawrence a. rowe version management in distributed network environment bogdan korel horst wedde srinivas magaraj kalique nawaz venugopal dayana letters to the editor corporate linux journal staff is there anything "time-honored" in the field of software? robert l. glass improved sharing of apl workspaces and libraries years of experience have revealed deficiencies in apl's traditional library structure and its facilities for managing saved workspaces. as part of the development of a new file system, stsc undertook to resolve these deficiencies in a revised design. 
the rules governing the use of shared workspaces have been broadened and placed under the control of an access matrix, giving the user great flexibility in controlling how invidual workspaces are used. apl libraries have been generalized and likewise placed under access-matrix control. system facilities are provided that allow any workspace-management action to be performed under program control. the concept of the apl workspace has hardly changed since the earliest apl systems; it is worthwhile to wonder why. to programmers weary from battling the job control languages of that era's operating systems, the workspace must have seemed a concept of charming and startling simplicity. to the computer novice, who found in apl a shortcut through the maze of technical lore, the notion of the workspace was quickly assimilated into intuition. james g. wheeler software reliability apportionment using the analytic hierarchy process k. k. aggarwal yogesh singh fault classes and error detection capability of specification-based testing some varieties of specification-based testing rely upon methods for generating test cases from predicates in a software specification. these methods derive various test conditions from logic expressions, with the aim of detecting different types of faults. some authors have presented empirical results on the ability of specification-based test generation methods to detect failures. this article describes a method for cokmputing the conditions that must be covered by a test set for the test set to guarantee detection of the particular fault class. it is shown that there is a coverage hierarchy to fault classes that is consistent with, and may therefore explain, experimental results on fault-based testing. the method is also shown to be effective for computing mcdc-adequate tests. d. richard kuhn the df command phil hughes a framework for isolating connection expection management partha pratim pal mixing coroutines and processes in an ada tasking implementation s. vestal an overview of common lisp a dialect of lisp called "common lisp" is being cooperatively developed and implemented at several sites. it is a descendant of the maclisp family of lisp dialects, and is intended to unify the several divergent efforts of the last five years. we first give an extensive history of lisp, particularly of the maclisp branch, in order to explain in context the motivation for common lisp. we enumerate the goals and non-goals of the language design, discuss the language features of primary interest, and then consider how these features help to meet the expressed goals. finally, the status (as of may 1982) of six implementations of common lisp is summarized. guy l. steele adding parametric polymorphism to the common object request broker architecture (corba) (poster session) the research displayed in this poster showcases issues surrounding adding parametric polymorphism to corba. the merits of parametric polymorphism are widely published, yet there is no support for the parametric polymorphism paradigm in corba. this research should be of special interest to c++ programmers, and other programmers accustomed to generic programming, frustrated by this lack of support. wayne l. bethea analysis is necessary, but far from sufficient (abstract only): experiences building and deploying successful tools for developers and testers why are there so few successful "real-world" programming and testing tools based on academic research? 
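as a concrete illustration of the kuhn entry above on fault classes: for a boolean specification predicate and a particular faulty mutation of it, the inputs that detect the fault are exactly those on which the two predicates differ. the python sketch below (illustrative only, brute force, not the paper's method) enumerates that detection condition; a test set guarantees detection of the fault class when it covers at least one of these detecting inputs for every fault in the class.

```python
# sketch: compute the inputs that detect a fault, i.e. where the original
# predicate and the faulty predicate disagree (their exclusive-or is true).
from itertools import product

def detection_condition(original, faulty, n_vars):
    detecting = []
    for values in product([False, True], repeat=n_vars):
        if original(*values) != faulty(*values):
            detecting.append(values)
    return detecting            # empty list => the fault is undetectable

if __name__ == "__main__":
    spec = lambda a, b, c: (a and b) or c
    # variable negation fault: b replaced by (not b)
    vnf = lambda a, b, c: (a and not b) or c
    for row in detection_condition(spec, vnf, 3):
        print(row)
```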
this talk focuses on program analysis tools, and proposes a surprisingly simple explanation with interesting ramifications. for a tool aimed at developers or testers to be successful, people must use it - and must use it to help accomplish their existing tasks, rather than as an end in itself. if the tool does not help them get their job done, or the effort to learn and/or use the tool is too great, users will not perceive enough value; the tool will not get significant usage, even if it is free. this talk focuses on the often-overlooked consequences of this seemingly basic statement in two major areas: program analysis, and the work beyond core analysis that must be done to make a successful tool. examples will be drawn from tools that have been successfully used in industry (sold commercially, and developed for internal use). jon pincus a whirlwind tour of forth resources paul frenger verification of requirements for safety-critical software the purpose of this paper is to describe a methodology for the verification of safety-critical software. the methodology is implemented with use-case modeling notation of the unified modeling language (uml). the methodology also contains techniques for creating requirements-based test cases from scenarios. the test cases are formatted for test scripts that exercise the software. paul b. carpenter tutorial: a quick introduction to software reliability modeling jarrett rosenberg smalltalk under the umbrella: the travelers' smalltalk experience john cunningham objects and domain engineering (panel) sanjiv gossain don batory hassan gomaa mitch lubars christopher pidgeon ed seidewitz high-performance java cherri pancake christian lengauer the ups and downs of programmer stress robert l. glass what is the proper size of a program module? (abstract only) there are a variety of views about structured programming, and thus a number of ways to describe it. however, we all know that a well structured program must be easy to read and easy to modify. as a consequence, modularity became a central concept in structured programming. automobiles are assembled from components (i.e. modules) instead of individual parts, so that their structure looks simple, and thus easier for auto-mechanics to handle the necessary repairs. the human ability in handling complexity is limited by short term memory capacity, which has been known to range from 5 to 9 simultaneous facts. however, computer programs contain statements presented on a medium such as paper, and nobel-prize winner psychologist herbert simon once suggested that the short term memory capacity on paper for humans exceeds that in the human brain. the short term memory capacity has been used successfully, in our class, as a guideline in setting the size for program modules. the readability of the student programs was markedly improved, and thus greatly reduced the time needed for grading the programs. a main program usually contains a larger number of declarations (non-active statements) than a sub-program does; but each one of them is a program module. thus, our class had, after discussion, chosen to limit each program module to no more than 12 active statements. as a result, the student programs were much easier to read. we are currently studying the possibility of establishing a more comprehensive measure for module complexity that is easy to use. 
guided by research on measuring the readability of natural language texts, we are exploring the following: counting the number of identifiers declared as constants, types, and variables. counting the number of operators in each active statement and considering the use of different operators as more complex than the repetition of the same operator. determining to what extent some active statements, such as do loops and conditional branches, should be assigned a larger count, because they appear to be more complex than assignment statements. these ideas and others will be discussed during our presentation. colleagues in the audience will be invited to offer their ideas and suggestions for further studies. note: a part of this paper's content was presented by yin-min wei at the 1985 ohio computer science colleague's conference, bowling green state university, ohio yin-min wei edgar howell on beyond oop a. michael berman top-down parsing in coco-2 h. dobler prism - productivity improvement for software engineers and managers an integrated set of tools to support top-down, structured software design is described. the tools provide multiple views of the design process and share a common, iconic user interface among themselves and with an extensive set of office system utilities. the multiple design views are supported by a multiple window operating system running on a high performance microcomputer with a high resolution, bit-mapped display and a mouse. the system provides support for proven software engineering methodologies from data flow design through software maintenance. high quality documentation is automatically produced during each phase of the design. the tool set is language-independent and open-ended for ease of future expansion. doug rosenberg techniques for debugging parallel programs with flowback analysis jong-deok choi barton p. miller robert h. b. netzer a structure coverage tool for ada software systems liqun wu victor r. basili karl reed a good hoare axiom system for an algol-like language clarke has shown that it is impossible to obtain a relatively complete axiomatization of a block-structured programming language if it has features such as static scope, recursive procedure calls with procedure parameters, and global variables, provided that we take first-order logic as the underlying assertion language [cl]. we show that if we take a more powerful assertion language, and hence a more powerful notion of expressiveness, such a complete axiomatization is possible. the crucial point is that we need to be able to express weakest preconditions of commands with free procedure parameters. the axioms presented here are natural and reflect the syntax of the programming language. such an axiom system provides a tool for understanding how to reason about languages with powerful control features. joseph y. halpern simple translation of goal-directed evaluation todd a. proebsting why bother with catocs? robbert van renesse the design of an object oriented architecture this paper proposes a new object model, called the distributed object model, wherein the model is unified as a protection unit, as a method of data abstraction, and as a computational unit, so as to realize reliable, maintainable, and secure systems. an object oriented architecture called zoom is designed based on this object model. a software simulator and cross assembler for this architecture have been implemented. 
the feasibility and performance of the architecture are discussed according to program sizes and estimated hardware size and execution speed. yutaka ishikawa mario tokoro patterns, teams and domain engineering steven fraser deborah leishman robert mclellan porting the oberon system to alphaaxp gunter dotzel hartmut goebel cooperating transactions against the epos database reidar conradi carl chr. malm availability of f2c - a fortran to c converter s. i. feldman d. m. gay m. w. maimone n. l. schryer object-oriented analysis for evolving systems mitchell lubars greg meredith colin potts charles richter roman-9x: a technique for representing object models in ada 9x notation gary j. cernosek token based solutions to m resources allocation problem aomar maddi the time problem russell m. clapp trevor mudge comparing and assessing programming languages: basis for a qualitative methodology jarallah alghamdi joseph urban polymorphism considered harmful carl ponder bill bush perspectives on case tool integration nicholas wybolt 1st workshop on open source software engineering _open source software (oss) has recently become the focus of considerable interest, yet there remains a need for rigorous analytical inquiry into the subject. this workshop seeks to articulate oss as an se paradigm and to address the requirements of oss in terms of methodology & process, tools & enabling technologies, and human resources & project management. format: round-table discussion. size: maximum 40 participants. position papers required. the workshop report will be published in a special issue of iee proceedings - software on open source software engineering, and workshop participants will be encouraged to submit full research papers based on their position papers for possible inclusion in the special issue._ joseph feller brian fitzgerald andre van der hoek encapsulators: a new software paradigm in smalltalk-80 certain situations arise in programming that lead to multiply polymorphic expressions, that is, expressions in which several terms may each be of variable type. in such situations, conventional object-oriented programming practice breaks down, leading to code which is not properly modular. this paper describes a simple approach to such problems which preserves all the benefits of good object-oriented programming style in the face of any degree of polymorphism. an example is given in smalltalk-80 syntax, but the technique is relevant to all object-oriented languages. geoffrey a. pascoe pure versus impure lisp nicholas pippenger the role of explicit type management schemes in the implementation of abstract data types in ada larry latour testing: principles and practice stephen r. schach linux counter corporate linux journal staff apl and coroutines norman thomson fortran at ten gigaflops: the connection machine convolution compiler mark bromley steven heller tim mcnerney guy l. steele note on a problem with reed and long's fbr results this short note discusses a problem we found in a recent article in _operating systems review_ [1]. the problem is related to apparent inconsistencies in results presented for a data cache algorithm, known as fbr (frequency based replacement), which we first described in [2]. a property of this algorithm is that cache miss ratios obtained using fbr can be related to lru cache miss ratios for any given trace. application of this property to the results in [1] reveals apparent inconsistencies. 
these inconsistencies could be the result of errors either in implementing the algorithm or in recording the results. this note also highlights this property of fbr, which should help contribute to its understanding. john t. robinson murthy v. devarakonda x-ability: a theory of replication different replication mechanisms provide different solutions to the same basic problem. however, there is no precise specification of the problem itself, only of particular classes of solutions, such as active replication and primary-backup. having a precise specification of the problem would help us better understand the space of possible solutions. we present a formal definition of the problem solved by replication in the form of a correctness criterion called x-ability (exactly-once ability). an x-able service has obligations to its environment and its clients. it must update its environment under exactly-once semantics. furthermore, it must provide idempotent, non-blocking request processing and deliver consistent results to its clients. x-ability is a local property: replicated services can be specified and implemented independently, and later composed in the implementation of more complex replicated services. svend frølund rachid guerraoui monte carlo debugging: a brief tutorial monte carlo debugging, which has been relied upon for some applications at the author's data center, is briefly discussed with the presentation of some guidelines. r. charles bell an efficient separate compilation strategy for very large programs this paper describes the design of a compiling system that supports the efficient compilation of very large programs. the system consists of front ends for different languages, a common program database to store the intermediate code, and various back ends, optimizers, debuggers and other development tools. the compiling system achieves efficiency of use by minimizing the number of system components that must be invoked when a small change is made in a program. a new separate compilation strategy is presented that is both easy and natural to use and does not require language extensions for its use. the database provides the necessary contextual information to support separate compilation and to facilitate complete compile-time checking. also, the use of this database affords a unique opportunity to reduce substantially the cost of recompilation and to support an efficient source patching facility. andres rudmik barbara g. moore kernel corner writing a linux driver the main goal of this article is to learn what a driver is, how to implement a driver for linux and how to integrate it into the operating system. an article for the experienced c programmer fernando matia genie forth roundtable brad rodriguez development cost and size estimation starting from high-level specifications this paper addresses the problem of estimating cost and development effort of a system, starting from its complete or partial high-level description. in addition, some modifications to evaluate the cost- effectiveness of reusing vhdl-based designs, are presented. the proposed approach has been formalized using an approach similar to the cocomo analysis strategy, enhanced by a project size prediction methodology based on a vhdl function point metric. 
the proposed design size estimation methodology has been validated through a significant benchmark, the leon-1 microprocessor, whose vhdl description is in the public domain. william fornaciari fabio salice umberto bondi edi magini experiences with the partitions model brian dobbing dispatching on the function result this paper is a tutorial on dynamic dispatching of functions with a controlling result. a case shows how to use this construct to replicate a value extracted from a heterogeneous data structure. mordechai ben-ari a framework for reducing the cost of instrumented code instrumenting code to collect profiling information can cause substantial execution overhead. this overhead makes instrumentation difficult to perform at runtime, often preventing many known _offline_ feedback-directed optimizations from being used in online systems. this paper presents a general framework for performing _instrumentation sampling_ to reduce the overhead of previously expensive instrumentation. the framework is simple and effective, using code-duplication and _counter-based sampling_ to allow switching between instrumented and non-instrumented code. our framework does not rely on any hardware or operating system support, yet provides a high frequency sample rate that is tunable, allowing the tradeoff between overhead and accuracy to be adjusted easily at runtime. experimental results are presented to validate that our technique can collect accurate profiles (93-98% overlap with a perfect profile) with low overhead (averaging 6% total overhead with a naive implementation). a jalapeño-specific optimization is also presented that reduces overhead further, resulting in an average total overhead of 3%. matthew arnold barbara g. ryder experience with porting techniques on a cobol 74 compiler the problems of compiler construction have largely been solved for cobol '74, but a remaining fundamental consideration in a commercial environment is the cost of compiler development. this can be reduced by the use of portable software, but the cost of porting to a new system remains significant. in the microprocessor system market, two approaches have been followed. the first is to produce software for a virtual machine environment and simulate this on real hardware by means of an interpreter. most software, including language processors and large parts of the operating system, can be transferred almost unchanged to a new processor. nearly all commercial software for microprocessors has been developed on this basis. the alternative approach is to produce software which, by virtue of certain characteristics, can be made to be portable to new real machine environments, provided the appropriate high level language compiler is available. we have developed an ans cobol 74 compiler for microprocessors using this second approach [1]. since the compiler compiles cobol at a level approaching level 2 in the main modules to the true machine code of the target machine, the same approach becomes available to the applications programmer. as far as can be established, this work is unique. both compiler and object code interface directly to the machine environment. no interpreter is required for their execution. the compiler system runs on, and generates code for, three machines. in addition, since the compiler is written in a high level language, portable cross compilers are available. i. m. kipps organizational badge collecting james w.
moore roy rada overlapping token definitions chris clark sherlock holmes: the mystery of the vanishing variable solving errors in other apl systems is frequently a tedious and ultimately an expensive chore. fortunately apl is able to fully describe its own environment without the need for 'special language query tools'. this paper describes utilities which extract directive knowledge surrounding a program error. used as a systematic discipline, they can greatly facilitate debugging and prevent potential errors in any system. roger willink clock face david steinbrook programming pearls jon bentley don knuth doug mcilroy the meta4 programming language overview (poster session) jason w. kim ada runtime environment working group - a framework for describing ada runtime environment implementation benefits of c++ language mechanisms c++ was designed by bjarne stroustrup at at&t bell laboratories in the early 1980s as an extension to the c language, providing data abstraction and object-oriented programming facilities. c++ provides a natural syntactic extension to c, incorporating the class construct from simula. a design principle was to remain compatible and comparable with c in terms of syntax, performance and portability. another goal was to define an object-oriented language that significantly increased the amount of static type checking provided, with user-defined types (classes) and built-in types being part of a single unified type system obeying identical scope, allocation and naming rules. these aims have been achieved, providing some underlying reasons why c++ has become so prevalent in the industry. the approach has allowed a straightforward evolution from existing c-based applications to the new facilities offered by c++, providing an easy transition for both software systems and programmers. the facilities described are based on release 2.0 of the language, the version on which the ansi and iso standardization of c++ is being based. david jordan x3j3 avoids deadlock loren p. meissner knowledge-based ooa and ood ka-wing wong the persistent store as an enabling technology for integrated project support environments the software engineering community has recognised the need for integrated project support environments (ipses) for some time. the technique of integration as a method of cost saving applies to all levels in the hierarchy of problem solving, both hardware and software. this paper discusses one such level, that in which the ipse is implemented and in particular the use of a persistent store as an enabling technology for ipses. the facilities of the language ps-algol necessary to support an ipse are illustrated by example and it is demonstrated how an ipse's base may be provided by a persistent store that supports first class procedures as data objects. the need for a type secure object system which allows static and dynamic binding is demonstrated and finally the secure transactional base of ps-algol is shown to be a necessary and sufficient condition to provide secure version control and concurrent access to both programs and data. ronald morrison alan dearle peter j. bailey alfred l. brown malcolm p. atkinson using memoization to achieve polynomial complexity of purely functional executable specifications of non-deterministic top-down parsers r. a. frost programming pearls jon bentley report from the trade show floor matthew cunningham configuration management in the pact software engineering environment i.
simmonds acm president's letter: the challenge of the fifth generation david h. brandin event graph visualization for debugging large applications dieter kranzlmuller siegfried grabner jens volkert the design and implementation of a certifying compiler george c. necula peter lee closing panel (panel session): where are we standing? can we say "reuse is dead, long live reuse" or is it too soon? manzour zand paul bassett ruben prieto-diaz object-oriented software testing robert v. binder characteristics of modern system implementation languages j. m. bishop r. faria objective view poiint: the abcs of writing c++ classes g. bowden wise a paradigm change in software engineering christiane floyd dynamic rescheduling: a technique for object code compatibility in vliw architectures thomas m. conte sumedh w. sathaye using cause-effect analysis to understand the performance of distributed programs wagner meira thomas j. leblanc virgílio a. f. almeida perturbation analysis of high level instrumentation for spmd programs the process of instrumenting a program to study its behavior can lead to perturbations in the program's execution. these perturbations can become severe for large parallel systems or problem sizes, even when one captures only high level events. in this paper, we address the important issue of eliminating execution perturbations caused by high-level instrumentation of spmd programs. we will describe perturbation analysis techniques for common computation and communication measurements, and show examples which demonstrate the effectiveness of these techniques in practice. sekhar r. sarukkai allen d. malony multiparadigm communications in java for grid computing vladimir getov gregor von laszewski michael philippsen ian foster architectural support for ada in the rational environment (panel session) dave stevenson introduction to the attribute driven design method this tutorial will introduce the attribute driven design (add) method. add is a method for designing the software architecture of a system or collection of systems based on an explicit articulation of the quality attribute goals for the system(s). the method is appropriate for any quality attributes but has been particularly elaborated for the attributes of performance, modifiability, security, reliability/availability and usability. the method has been used for designing the software architecture of products ranging from embedded to information systems. felia buchmann len bass the unix eighth edition network file system this talk will summarize the design goals for, performance of, and troubles with the eighth edition unix network file system. the principal goal was to make available the file systems of other machines to programs running on one machine (the client) without changing the semantics of system calls. the network file system did not need to provide for execution of programs on other machines, since our network already had a mechanism for that. the requirement that the semantics of system calls be preserved has several implications: names of remote files must be syntactically and semantically the same as names of local files; the current directory can be either a local directory or a remote directory, so the kernel has to be modified. these goals have been met. no program had to be recompiled to use remote files, and except for bugs, and a few subtle points discussed below, programs cannot distinguish between local and remote files. 
this is an abstract of a longer paper which will include the results of the current reimplementation. peter j. weinberger parameterization: a case study will tracz io-lite: a unified i/o buffering and caching system vivek s. pai peter druschel willy zwaenepoel an overview of the corba portable object adapter irfan pyarali douglas c. schmidt using prototypical objects to implement shared behavior in object-oriented systems a traditional philosophical controversy between representing general concepts as abstract sets or classes and representing concepts as concrete prototypes is reflected in a controversy between two mechanisms for sharing behavior between objects in object oriented programming languages. inheritance splits the object world into classes, which encode behavior shared among a group of instances, which represent individual members of these sets. the class/instance distinction is not needed if the alternative of using prototypes is adopted. a prototype represents the default behavior for a concept, and new objects can re-use part of the knowledge stored in the prototype by saying how the new object differs from the prototype. the prototype approach seems to hold some advantages for representing default knowledge, and incrementally and dynamically modifying concepts. delegation is the mechanism for implementing this in object oriented languages. after checking its idiosyncratic behavior, an object can forward a message to prototypes to invoke more general knowledge. because class objects must be created before their instances can be used, and behavior can only be associated with classes, inheritance fixes the communication patterns between objects at instance creation time. because any object can be used as a prototype, and any messages can be forwarded at any time, delegation is the more flexible and general of the two techniques. henry lieberman separating content from form: a language for formatting on-line documentation and dialog recent research has demonstrated the advantages of separating management of the user interface from the application program. a user interface system should integrate access to on-line help and documentation with other dialog for interacting with the program into a uniform environment. we describe such a user interface management system, called ice, with emphasis on its facilities for authoring networks of frames containing help information and menus for interacting with the application program. authors can write help and dialog using a language similar to the scribe document processing system, widely used at cmu and elsewhere. but instead of generating hardcopy documents for different printing devices, ice produces interactive "softcopy" documents, consisting of a network of frames combining documentation and interface. in ice the screen layout of frames and the style of interaction is specified in a format file which is separate from the dialog file that contains the text to appear in the frames. this separation allows the dialog author to write the text without having to worry much about its precise appearance on the screen. the display designer can specify the actual format independently. the same text can be formatted in different ways to make use of different display devices and to allow experimentation with alternative formats and styles of interaction. 
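the separation described in the ice entry above (dialog text kept in one file, screen format in another) can be pictured with a small sketch; the data layout, names, and rendering below are hypothetical and do not reflect ice's actual file formats or api.

    # sketch of the content/format separation described in the ice entry above.
    # the dictionaries and render() function are hypothetical; they only show how
    # the same dialog text can be presented under different format specifications.

    dialog = {  # "dialog file": what each frame says
        "welcome": {"title": "getting started", "body": "pick a command to begin."},
        "help": {"title": "help", "body": "press ? on any frame for assistance."},
    }

    formats = {  # "format files": how frames appear on a given display
        "tty": {"width": 40, "rule": "-"},
        "bitmap": {"width": 72, "rule": "="},
    }

    def render(frame_id, style):
        """combine one frame of dialog content with one format specification."""
        frame, fmt = dialog[frame_id], formats[style]
        rule = fmt["rule"] * fmt["width"]
        return "\n".join([rule, frame["title"].center(fmt["width"]), rule, frame["body"]])

    if __name__ == "__main__":
        print(render("welcome", "tty"))     # same content ...
        print(render("welcome", "bitmap"))  # ... different presentation

keeping the two inputs separate is what allows the same text to be re-rendered for a new display device without touching the dialog file, which is the point the ice abstract makes.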
charlie wiecha max henrion mohca-java: a tool for c++ to java conversion support scott malabarba premkumar devanbu aaron stearns a study of file sizes and functional lifetimes the performance of a file system depends strongly on the characteristics of the files stored in it. this paper discusses the collection, analysis and interpretation of data pertaining to files in the computing environment of the computer science department at carnegie-mellon university (cmu-csd). the information gathered from this work will be used in a variety of ways: 1. as a data point in the body of information available on file systems. 2. as input to a simulation or analytic model of a file system for a local network, being designed and implemented at cmu-csd [1]. 3. as the basis of implementation decisions and parameters for the file system just mentioned. 4. as a step toward understanding how a user community creates, maintains and uses files. m. satyanarayanan software technology maturation we have reviewed the growth and propagation of a variety of software technologies in an attempt to discover natural characteristics of the process as well as principles and techniques useful in transitioning modern software technology into widespread use. what we have looked at is the technology maturation process, the process by which a piece of technology is first conceived, then shaped into something usable, and finally "marketed" to the point that it is found in the repertoire of a majority of professionals. a major interest is the time required for technology maturation --- and our conclusion is that technology maturation generally takes much longer than popularly thought, especially for major technology areas. but our prime interest is in determining what actions, if any, can accelerate the maturation of technology, in particular that part of maturation that has to do with transitioning the technology into widespread use. our observations concerning maturation facilitators and inhibitors are the major subject of this paper. samuel t. redwine william e. riddle the incorporation of testing into verification (abstract): direct, modular, and hierarchical correctness degrees leo marcus thin locks: featherweight synchronization for java david f. bacon ravi konuru chet murthy mauricio serrano give-n-take - a balanced code placement framework give-n-take is a code placement framework which uses a general producer-consumer concept. an advantage of give-n-take over existing partial redundancy elimination techniques is its concept of production regions, instead of single locations, which can be beneficial for general latency hiding. give-n-take guarantees balanced production, that is, each production will be started and stopped once. the framework can also take advantage of production coming "for free," as induced by side effects, without disturbing balance. give-n-take can place production either before or after consumption, and it also provides the option to hoist code out of potentially zero-trip loop (nest) constructs. give-n-take uses a fast elimination method based on tarjan intervals, with a complexity linear in the program size in most cases. we have implemented give-n-take as part of a fortran d compiler prototype, where it solves various communication generation problems associated with compiling data-parallel languages onto distributed-memory architectures.
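as a rough illustration of the balanced producer-consumer placement idea in the give-n-take entry above (a hand-written sketch, not code from the paper; the thread-based channel and all function names are hypothetical stand-ins for compiler-generated communication), the version below starts the production early and delays the matching consumption so that independent local work overlaps the latency, with each production started and stopped exactly once:

    # sketch only: the production/consumption placement idea from the give-n-take
    # entry above, using a thread as a stand-in for an asynchronous send/receive
    # pair that a data-parallel compiler might generate. names are hypothetical.
    import threading
    import queue

    def start_production(data):
        """begin an asynchronous 'production' (think: a non-blocking send)."""
        box = queue.Queue(maxsize=1)
        threading.Thread(target=lambda: box.put([x * 2 for x in data])).start()
        return box  # handle that the matching consumption will wait on

    def consume(handle):
        """the matching 'consumption' (think: a blocking receive)."""
        return handle.get()

    def naive_placement(data):
        # production and consumption are adjacent, so nothing hides the latency.
        return consume(start_production(data))

    def balanced_placement(data):
        # give-n-take-style placement: the production is hoisted before the
        # independent local work and consumed as late as possible, so the
        # production region spans the work in between (started once, stopped once).
        handle = start_production(data)
        local = sum(i * i for i in range(100_000))  # independent local computation
        remote = consume(handle)
        return local, remote

    if __name__ == "__main__":
        print(balanced_placement([1, 2, 3]))

the actual framework derives such placements automatically from a dataflow analysis over tarjan intervals; the sketch only shows the shape of code such a placement aims to produce.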
reinhard von hanxleden ken kennedy gcode: a revised standard for a graph representation for functional programs mike joy tom axford time space tradeoffs in vector algorithms for apl functions t. budd when bears are blue and bulls are red analysts deal with real numerical data. often the data is collected over time. because numbers are abstract, analysts often use graphs to make numerical data more meaningful. real data typically fluctuates in some random fashion. however, patterns often exist in the data. an example would be the seasonal fluctuations in commodities. analysts use a variety of techniques to help them find patterns that have the potential to improve their skill in predicting future trends. this paper suggests some ways to make some of these patterns more obvious. we are here presenting techniques to enhance traditional methods of providing information in graphs. by taking advantage of current computer technology to provide graphic images and to draw graphs, new and exciting visual images are possible. in addition, animation strategies allow us to simulate the passage of time. finally, we add a whimsical touch. by animating the graphs of mathematical functions we illuminate aspects of the graphing process which might otherwise go unnoticed. the focus of this paper is on curve fitting. apl and j provide the matrix divide function for finding the coefficients of the constant, linear, quadratic, cubic or nth degree polynomial which best fit a collection of data. each of these polynomials also has a polynomial derivative. these derivatives are useful for three reasons. select a point on the x-axis. the y-value of the polynomial is the height of the polynomial for that x-value. but the y-value of the derived polynomial for the chosen x-value provides information about the original polynomial in three ways. first the y-value for the chosen x-value is the slope of the original polynomial at that point. the second useful aspect of the derivative is that positive and negative values indicate when the original polynomial is increasing or decreasing. a derivative is one degree less than the original polynomial so for example, a cubic polynomial has a quadratic polynomial derivative. applications will demonstrate how the slope of the tangent together with the use of images and color can enhance the reader's ability to assimilate data. these enhancements give additional meaning to graphs and add life to the data. linda alvord tama traberman technical correspondence diane crawford from manufacturing document requirements to customized authoring and automated publishing framework marc le bissonnais francois prunet a replicated unix file system (extended abstract) barbara liskov robert gruber paul johnson liuba shrira does aspect-oriented programming work? gail c. murphy robert j. walker elisa l. a. baniassad martin p. robillard albert lai mik a. kersten applying object-oriented metrics to ada 95 ada 95 adds many new and notable features to the ada 83 standard. the additions include such aspects as object-oriented programming, hierarchical libraries, and protected objects. the enhancements to the language may have a profound impact on the way developers design software in ada. consequently, the way in which the new software designs are assessed needs to be addressed. recent studies suggest traditional functionally-oriented metrics are not applicable to object-oriented software. as a result, new measures are being proposed that may be applicable to object-oriented design.
some of these metrics have been validated on small to medium sized projects written in c++ and smalltalk. this paper demonstrates how to apply these metrics to ada 95. william w. pritchett letter from the editor michael k. johnson modeling the software process: modeling the software engineering process (panel session) gregory a. hansen technology insertion: establishing an object-oriented life-cycle methodology john a. anderson hypercube experiments with joyce b. andersen the design and implementation of a karel compiler and interpreter rose hoskins the benefits and costs of dyc's run-time optimizations dyc selectively dynamically compiles programs during their execution, utilizing the run-time-computed values of variables and data structures to apply optimizations that are based on partial evaluation. the dynamic optimizations are preplanned at static compile time in order to reduce their run-time cost; we call this staging. dyc's staged optimizations include (1) an advanced binding-time analysis that supports polyvariant specialization (enabling both single-way and multiway complete loop unrolling), polyvariant division, static loads, and static calls, (2) low-cost, dynamic versions of traditional global optimizations, such as zero and copy propagation and dead-assignment elimination, and (3) dynamic peephole optimizations, such as strength reduction. because of this large suite of optimizations and its low dynamic compilation overhead, dyc achieves good performance improvements on programs that are larger and more complex than the kernels previously targeted by other dynamic compilation systems. this paper evaluates the benefits and costs of applying dyc's optimizations. we assess their impact on the performance of a variety of small to medium-sized programs, both for the regions of code that are actually transformed and for the entire application as a whole. our study includes an analysis of the contribution to performance of individual optimizations, the performance effect of changing the applications' inputs, and a detailed accounting of dynamic compilation costs. brian grant markus mock matthai philipose craig chambers susan j. eggers pancode assessed d. jonsson updating a master file - yet one more time for several years i have been teaching a file updating algorithm which is essentially the same as that in dwyer's admirable paper [1]. there is one unjustified objection to the algorithm that perceptive students and people with batch processing experience almost invariably raise and which is not addressed by dwyer. the objection is based on a situation which arises in a batch processing environment with several users, where key-ordered sequential update is used. j. inglis a niche for structured flowcharts (abstract only) the research investigated the preferences for graphical methods, such as structured flowcharts, and for verbal methods, such as pseudocode, when learning short, relatively complex algorithms. the research summarizes the data from six replications using 193 students and 16 data structures classes. a preference for flowcharts was hypothesized under eight conditions. all eight conditions produced large differences which were statistically significant. the preferences for flowcharts ranged from 75.1% to 89.1%. probabilities associated with these eight significant conditions ranged from p ≤ .0000001 × 10^-23 to p ≤ .0000001 × 10^-5. no statistically significant preferences were found for pseudocode.
the results indicate that graphical methods should be strongly considered when teaching relatively complex algorithms. more specifically, the results indicate that data structures students strongly prefer a structured flowchart presentation in books and in lectures. the students also prefer to view a structured flowchart version of an algorithm before viewing a pseudocode version when given the opportunity to use both algorithmic learning techniques. including two other recent studies on structured flowchart preference, there are now eight full- and-partial replications which have produced nearly identical findings. the students found the data structures algorithms easier to comprehend when structured flowcharts were used. the results of these eight replications support the use of structured flowcharts in the instruction of short, relatively complex algorithms. the research does not support the use of structured flowcharts in any other context. david scanlan planning for os/2 support cathy leffler the surprising dynamics of a simple year 2000 bug these are the reactions (and an analysis of their reasons) of a very simple program containing a rather simple form of century-dependent code. these reactions are extremely surprising and emerge from an interesting daisy-chain of effects. lutz prechelt writing portable fortran procedures r. a. vowels an approach to large-scale collection of application usage data over the internet david m. hilbert david f. redmiles interprocedural pointer alias analysis we present practical approximation methods for computing and representing interprocedural aliases for a program written in a language that includes pointers, reference parameters, and recursion. we present the following contributions: (1) a framework for interprocedural pointer alias analysis that handles function pointers by constructing the program call graph while alias analysis is being performed; (2) a flow-sensitive interprocedural pointer alias analysis algorithm; (3) a flow- insensitive interprocedural pointer alias analysis algorithm; (4) a flow-insensitive interprocedural pointer alias analysis algorithm that incorporates kill information to improve precision; (5) empirical measurements of the efficiency and precision of the three interprocedural alias analysis algorithms. michael hind michael burke paul carini jong-deok choi an evaluation of software test environment architectures nancy s. eickelmann debra j. richardson mutually-distrusting cooperative file systems william j. bolosky john r. doucher marvin theimer inheritance versus containment rainer h. liffers portability of software hermann kaindl a fully reusable class of objects for synchronization and communication in ada 95 patrick de bondeli the proper image for linux dr. bentson did a survey of linux kernel developers to find out about their backgrounds. here are the results randolph bentson the cornell program synthesizer: a syntax-directed programming environment programs are not text; they are hierarchical compositions of computational structures and should be edited, executed, and debugged in an environment that consistently acknowledges and reinforces this viewpoint. the cornell program synthesizer demands a structural perspective at all stages of program development. its separate features are unified by a common foundation: a grammar for the programming language. 
its full-screen derivation-tree editor and syntax-directed diagnostic interpreter combine to make the synthesizer a powerful and responsive interactive programming tool. tim teitelbaum thomas reps using oop for real-time programming (workshop session) brian m. barry mostly-copying reachability-based orthogonal persistence we describe how reachability-based orthogonal persistence can be supported even in uncooperative implementations of languages such as c++ and modula-3, and without modification to the compiler. our scheme extends bartlett's mostly- copying garbage collector to manage both transient objects and resident persistent objects, and to compute the reachability closure necessary for stabilization of the persistent heap. it has been implemented in our prototype of reachability-based persistence for modula-3, yielding performance competitive with that of comparable, but non-orthogonal, persistent variants of c++. experimental results, using the 007 object database benchmarks, reveal that the mostly-copying approach offers a straightforward path to efficient orthogonal persistence in these uncooperative environments. the results also characterize the performance of persistence implementations based on virtual memory protection primitives. antony l. hosking jiawan chen an architecture for viewpoint environments based on omg/corba wolfgang emmerich using models in software engineering the software engineering institute (sei) has participated in several projects1 in which the focus was on helping contractors make use of good software engineering methods and ada. during this participation, we have learned several important lessons about the development of software for both large- scale and embedded systems. we have noticed that after a long period of time where the focus on productivity generated searches for new methodologies, tools, and ways to write reusable software, the emphasis has shifted to quality, in recognition of the fact that the new methods and tools were not adequate to address the problems occurring at the design level. we propose that the industry instead concentrate the search for the old methods still in use in the other branches of engineering, and apply those methods to the software problem. r. d'ilppolito distributed c++ harold carr robert kessler mark swanson choosing an object-oriented domain framework garry froehlich h. james hoover paul g. sorenson a problem with extended pascal r. t. house kernel korner: loadable kernel module programming and system call interception nitesh dhanjani gustavo rodriguez-rivera distributed termination discussed is a distributed system based on communication among disjoint processes, where each process is capable of achieving a post-condition of its local space in such a way that the conjunction of local post-conditions implies a global post-condition of the whole system. the system is then augmented with extra control communication in order to achieve distributed termination, without adding new channels of communication. the algorithm is applied to a problem of constructing a sorted partition. nissim francez object-oriented documentation johannes sametinger the stream machine: a data flow architecture for real-time applications the stream machine is a software architecture designed to support the development and evolution, as well as the efficient execution, of software that performs both data acquisition and process control under real-time constraints. 
a stream machine program consists of a set of concurrently executing modules communicating through streams of data. streams provide essentially a data flow style of communication, thereby supporting deterministic data acquisition and calculation. the basic constructs have been augmented by several time-based operations to support process control software. in addition, explicit declarations of timing constraints layered on stream machine programs are currently being explored for resource allocation and scheduling. our experience to date with two implementations of the stream machine suggests that it facilitates the mixture of software that performs both data acquisition and process control. paul barth scott guthery david barstow object-oriented programming in scheme we describe a small set of additions to scheme to support object-oriented programming, including a form of multiple inheritance. the extensions proposed are in keeping with the spirit of the scheme language and consequently differ from lisp-based object systems such as flavors and the common lisp object system. our extensions mesh neatly with the underlying scheme system. we motivate our design with examples, and then describe implementation techniques that yields efficiency comparable to dynamic object-oriented language implementations considered to be high performance. the complete design has an almost-portable implementation, and the core of this design comprises the object system used in t, a dialect of scheme. the applicative bias of our approach is unusual in object-oriented programming systems. norman adams jonathan rees recommendations and future trends russell m. clapp trevor mudge daniel roy upcoming events corporate linux journal staff third eye - specification-based analysis of software execution traces (poster session) another concept of third eye is the tracing state. tracing state is a set of event types generated in that state, other event types are filtered out and not reported. the system is always in a specific tracing state. tracing states correspond to specifications. a program specification describes a set of constraints on events. the event types used in a specification have to be monitored to validate a trace against this specification. all event types contained in a specification and monitored for this specification form a tracing state. tracing states also control the overhead of tracing on the executing system. the third eye framework includes modules for event type definition, event generation and reporting, tracing state definition and management, trace logging, query and browsing interfaces. modules of event type definition, event reporting facility and tracing state controler are integrated with the software of the system under trace (sut). the rest of the modules are independent from the sut and can be deployed on a different execution platform to minimize the influence on system performance. trace delivery for logging and analysis uses alternative interfaces to accommodate devices with different data storage and connectivity capabilities. we have implemented third eye framework prototype currently used by the third eye project team in collaboration with product development teams in nokia's business units. we used third eye to test a number of software systems: the memory subsystem of one of nokia's handsets, apache web server, and wap (wireless application protocol) client. wap is an industrial standard for applications and services that operate over wireless communication networks. 
we validated message sequences in this protocol by adding events in the functions that correspond to the protocol primitives and then checking whether the event sequence corresponds to the protocol message sequence. events are mapped to prolog facts and constraints are expressed as prolog rules. third eye can be used for debugging, monitoring, specification validation, and performance measurements. these scenarios use typed events---a concept simple and yet expressive enough to be shared by product designers and developers. the third eye has an open architecture allowing easy replacement of third-party tools, including databases, analysis and validation tools. third eye is a practical framework for specification-based analysis and adaptive execution tracing of software systems. raimondas lencevicius alexander ran rahav yairi automating comments p.-n. robillard the software life cycle support environment (slcse): a computer based framework for developing software systems the software life cycle support environment (slcse) is a vax/vms-based software development environment framework which presents a common and consistent user interface accessing a comprehensive set of software development tools supporting the full spectrum of dod-std-2167a software life cycle activities from requirements analysis to maintenance. these tools utilize a project database which maintains information relevant not only to the software under development (e.g., requirements allocation, software interfaces, etc.), but also information relating to the project as a whole (e.g., schedules, milestones, quality assurance, configuration management, etc.). the project database supports the dod-std-2167a life cycle model and associated data item descriptions (dids). slcse's framework approach supports the integration of new tools into the environment and permits the slcse to evolve over time and adapt to advances in software engineering technology. tom strelich integrating segmentation and paging protection for safe, efficient and transparent software extensions tzi-cker chiueh ganesh venkitachalam prashant pradhan design, implementation, and evaluation of optimizations in a just-in-time compiler kazuaki ishizaki motohiro kawahito toshiaki yasue mikio takeuchi takeshi ogasawara toshio suganuma tamiya onodera hideaki komatsu toshio nakatani a string case statement for ur/forth p. frenger the impact of apl2 on teaching apl with the advent of significant new features in apl, for example apl2, attention to and an understanding of the fundamental principles and concepts of apl is especially important. this expository paper is aimed primarily at the apl educator, present or future, and the person seriously pursuing a more in-depth mastery of the language. the paper states some of the underlying concepts and principles of the language and why they are important. within the presentation several techniques which the author has found successful in conveying these concepts are stated. finally, the fact that apl2 permits another new and different approach to problem solving is discussed and illustrated. raymond p. polivka generating machine specific optimizing compilers roger hoover kenneth zadeck hybrid slicing: integrating dynamic information with static analysis program slicing is an effective technique for narrowing the focus of attention to the relevant parts of a program during the debugging process.
however, imprecision is a problem in static slices, since they are based on all possible executions that reach a given program point rather than the specific execution under which the program is being debugged. dynamic slices, based on the specific execution being debugged, are precise but incur high run-time overhead due to the tracing information that is collected during the program's execution. we present a hybrid slicing technique that integrates dynamic information from a specific execution into a static slice analysis. the hybrid slice produced is more precise than the static slice and less costly than the dynamic slice. the technique exploits dynamic information that is readily available during debugging---namely, breakpoint information and the dynamic call graph. this information is integrated into a static slicing analysis to more accurately estimate the potential paths taken by the program. the breakpoints and call/return points, used as reference points, divide the execution path into intervals. by associating each statement in the slice with an execution interval, hybrid slicing provides information as to when a statement was encountered during execution. another attractive feature of our approach is that it allows the user to control the cost of hybrid slicing by limiting the amount of dynamic information used in computing the slice. we implemented the hybrid slicing technique to demonstrate the feasibility of our approach. rajiv gupta mary lou soffa john howard concurrent behaviors s. lujun f. changpeng x. lihul model checking java programs (abstract only) automatic state exploration tools (model checkers) have had some success when applied to protocols and hardware designs, but there are fewer success stories about software. this is unfortunate, since the software problem is worsening even faster than the hardware and protocol problems. model checking of concurrent programs is especially interesting, because they are notoriously difficult to test, analyze, and debug by other methods. this talk will be a description of our initial efforts to check java programs using a model checker. the model checker supports dynamic allocation, thread creation, and recursive procedures (features that are not necessary for hardware verification), and has some special optimizations and checks tailored to multi-threaded java programs. i will also discuss some of the challenges for future efforts in this area. david dill software quality analysis and measurement service activity in the company takeshi tanaka minoru aizawa hideto ogasawara atsushi yamada modeling configuration as transactions g. w. kaiser oopsla distributed object management john r. rymer richard mark soley william stephen andreas ian fuller neal jacobson richard a. demers progress in programming languages kim b. bruce a framework for using formal methods in object-oriented software development richard holt dennis dechampeaux compiling object-oriented data intensive applications processing and analyzing large volumes of data plays an increasingly important role in many domains of scientific research. high-level language and compiler support for developing applications that analyze and process such datasets has, however, been lacking so far. in this paper, we present a set of language extensions and a prototype compiler for supporting high-level object-oriented programming of data intensive reduction operations over multidimensional data.
we have chosen a dialect of java with data-parallel extensions for specifying collections of objects, a parallel for loop, and reduction variables as our source high-level language. our compiler analyzes parallel loops and optimizes the processing of datasets through the use of an existing run-time system, called active data repository (adr). we show how loop fission followed by interprocedural static program slicing can be used by the compiler to extract required information for the run-time system. we present the design of a compiler/run-time interface which allows the compiler to effectively utilize the existing run-time system. a prototype compiler incorporating these techniques has been developed using the titanium front-end from berkeley. we have evaluated this compiler by comparing the performance of compiler generated code with hand customized adr code for three templates, from the areas of digital microscopy and scientific simulations. our experimental results show that the performance of compiler generated versions is, on the average 21% lower, and in all cases within a factor of two, of the performance of hand coded versions. renato ferreira gagan agrawal joel saltz parallelizing compilers michael wolfe typed closure conversion yasuhiko minamide greg morrisett robert harper an efficient implementation of self, a dynamically-typed object-oriented language based on prototypes we have developed and implemented techniques that double the performance of dynamically-typed object-oriented languages. our self implementation runs twice as fast as the fastest smalltalk implementation, despite self's lack of classes and explicit variables. to compensate for the absence of classes, our system uses implementation-level maps to transparently group objects cloned from the same prototype, providing data type information and eliminating the apparent space overhead for prototype-based systems. to compensate for dynamic typing, user-defined control structures, and the lack of explicit variables, our system dynamically compiles multiple versions of a source method, each customized according to its receiver's map. within each version the type of the receiver is fixed, and thus the compiler can statically bind and inline all messages sent to self. message splitting and type prediction extract and preserve even more static type information, allowing the compiler to inline many other messages. inlining dramatically improves performance and eliminates the need to hard-wire low-level methods such as +, ==, and iftrue:. despite inlining and other optimizations, our system still supports interactive programming environments. the system traverses internal dependency lists to invalidate all compiled methods affected by a programming change. the debugger reconstructs inlined stack frames from compiler-generated debugging information, making inlining invisible to the self programmer. c. chambers d. ungar e. lee implementation of state machines with tasks and protected objects state machines are common in real-time software. they are also a standard way of describing the life of an object in object-oriented analysis and design. their use is fairly straightforward but non-trivial. the examples in this article show implementations of state machines and associated activities by means of tasks and protected objects. they cover a range of synchronization and communication needs between activity and state machine. bo i.
sanden logic optimization and code generation for embedded control applications we address software optimization for embedded control systems. the esterel language is used as the front-end specification; esterel compiler v6 is used to partition the control circuit and data path; the resulting intermediate representation of the design is a control-data network. this paper emphasizes the optimization of the control circuit portion and the code generation of the logic network. the new control-data network representation has four types of nodes: control, multiplexer, predicate and data expression; the control portion is a multi-valued logic network (mv-network). we use an effective multi-valued logic network optimization package called mvsis for the control optimization. it includes algebraic methods to perform multi-valued algebraic division, factorization and decomposition and logic simplification methods based on observability don't cares. we have developed methods to evaluate a control-data network based on both an mdd and sum-of-products representation of the multi-valued logic functions. the mdd-based approach uses multi-valued intermediate variables and generates code according to the internal bdd structure. the sop-based code is proportional to the number of cubes in the logic network. preliminary results compare the two approaches and the optimization effectiveness. yunjian jiang robert k. brayton practical programmer: software teams i have often heard the phrase, "we see what we know." as technicians, we concentrate on technical ways to manage complexity: abstraction, design techniques, high- level languages, and so on. that is what we know best. but when the tale is told of a project that failed, the blame is often laid not on technical difficulties, but on management and interpersonal problems. in the last six months, i have seen firsthand how attention to the social organization of a software team can make a big difference in the success of a development project. i work in a "research and development" group. "research" means that some aspects of the project are experimental---we do not know for sure what is going to work. "development" means we are expected to produce high- quality software for real users. so while we want to encourage creative thought, we must pay heed to the lessons of commercial software developers in quality assurance, testing, documentation, and project control. our all-wise project leader decided we also needed to pay heed to the lessons of sociology. in particular, we began to apply the ideas found in larry constantine's work on the organization of software teams. our efforts have resulted in a team that is productive, flexible, and comfortable. i thought these qualities are unusual enough to merit a column on the subject. marc rettig object oriented programming and fortran kurt hirchert talking about modules and delivery adding a module system to lisp enhances program security and efficiency, and help the programmer master the complexity of large systems, thus facilitating application and delivery. talk's module system is based on a simple compilation model which takes macros into account and provides a solid basis for automatic module management tools. higher-level structuring entities--- libraries and executables---group modules into deliverable goods. 
the module system is secure because it validates interfaces, efficient because it separates compilation dependencies from execution dependencies, and useful because it offers a simple processing model, automatic tools, and a graceful transition from development to delivery. harley davis pierre parquier nitsan seniak h/direct: a binary foreign language interface for haskell sigbjorn finne daan leijen erik meijer simon peyton jones a truly implementation independent gui development tool over the last few years, graphical user interface programming has become increasingly prevalent. many libraries and languages have been developed to simplify this task. additionally, design tools have been created that allow the programmer to "draw" their desired interface and then have code automatically generated. unfortunately, use of these tools locks the programmer into a particular implementation. even if the tool targets a multi-platform library (e.g. tcl/tk or jvm), the flexibility of the result is constrained. we present a truly implementation and platform independent solution. rapid generates ada code targeted to an object-oriented set of graphical user interface specifications with absolutely no implementation dependent information. the pattern used to derive these specifications is an improvement over the "abstract factory" pattern, as it allows both the specification and implementation to take advantage of inheritance. the user can then select an implementation (for example, tcl/tk or jvm) at compile time. rapid itself is also written using the same specifications; therefore it requires no modification to target a new implementation or to use a new implementation itself. rapid is currently being used to design the user interface for a satellite ground station. martin c. carlisle operators considered harmful martin gfeller somebody still uses assembly language? richard a. sevenich an automated method of referencing ada reusable code using lil george c. harrison moving apl towards objects karl soop upfront corporate linux journal staff design considerations in language processing tools for ada the ada language system (als) is a complete programming environment for the development of ada programs. this paper discusses the design objectives of those portions of the als which support translation and execution of ada programs, particularly the compiler, linker, and program library. the als capabilities for maintenance of software configuration control are highlighted. tradeoffs in the design of the compiler phase structure and intermediate languages are presented. w. babich l. weissman m. wolfe stop the presses phil hughes dynamic ceiling priorities and ada 95 ada 95 provides dynamic priorities for tasks but ceiling priorities for protected objects are statically assigned by means of a pragma. the semantic complexity as well as the potential inefficiency of any resulting implementation seem to have been the main objections to including dynamic ceilings in ada 95. nevertheless, in frequent scenarios such as multi-moded real-time systems, it is desirable for ceiling priorities to be dynamic, to accommodate the different priority assignments given to tasks in the different modes. this problem is usually worked around by applying the so-called ceiling of ceilings as a static ceiling for all modes, which may result in lower schedulability as extra blocking potentially occurs. 
in this paper we investigate how to use a hypothetical _set_ceiling_ dynamic operation, without incurring the risk of semantic ambiguities, and how a multi-moded system can benefit from its existence. jorge real andy wellings a model for scheduling protocol-constrained components and environments steve haynal forrest brewer a comparison of two object-oriented design methodologies (abstract only) with the increasing use of ada* in light of dod directive 5000.31, "the ada programming language shall become the single, common, computer programming language for defense mission-critical applications." there has been an accompanying increase in interest in the proper software engineering techniques to ensure that the powerful capabilities of ada will be used to their fullest. at one ada development site, the design and implementation approach used has been that proposed by grady booch, object-oriented design (ood): identify the objects of interest in the problem space and their attributes; identify the operations suffered by and required of each object; establish the visibility of each object in relation to other objects; establish the interface of each object. a proposal has been made to compare the deliverables and the end results of the above method with that of david parnas, the software cost reduction method (scr): identify a list of design decisions for which change cannot be ruled out (data structures, algorithms, etc.); make each design decision the "secret" of one module which comprises sub-programs that cannot be coded without knowledge of the "secret"; design module interfaces that are insensitive to anticipated changes. *ada is a registered trademark of the u.s. department of defense (ada joint program office) linda rising design issues and team support: experiences of an ada tool vendor experiences, insights gained, and lessons learned in the course of developing two ada-based graphic design support tools are summarized. the insights and lessons learned fall into two categories. the first category involves support for various design methods and approaches, as well as the graphic notation used in providing that support. the second category involves other tool features that are required or desired by teams of software engineers designing mid-sized or large ada systems. b. crawford d. baker dealing with atomicity in object-based distributed systems rachid guerraoui sigada '98: acm sigada international conference michael hind a tool for ada program manipulations: mentor-ada v. donzeau-gouge b. lang b. me'le'se pancode and boxcharts: structured programming revisited dan jonsson designing specification languages for process control systems: lessons learned and steps to the future previously, we defined a blackbox formal system modeling language called rsml (requirements state machine language). the language was developed over several years while specifying the system requirements for a collision avoidance system for commercial passenger aircraft. during the language development, we received continual feedback and evaluation by faa employees and industry representatives, which helped us to produce a specification language that is easily learned and used by application experts. since the completion of the rsml project, we have continued our research on specification languages. this research is part of a larger effort to investigate the more general problem of providing tools to assist in developing embedded systems.
our latest experimental toolset is called spectrm (specification tools and requirements methodology), and the formal specification language is spectrm-rl (spectrm requirements language). this paper describes what we have learned from our use of rsml and how those lessons were applied to the design of spectrm-rl. we discuss our goals for spectrm-rl and the design features that support each of these goals. nancy g. leveson mats p. e. heimdahl jon damon reese module: an encapsulation mechanism for specifying and implementing abstract data types an encapsulation mechanism, the module, for specifying abstract data types is described. the module employs a constructive specification method for specifying abstract data types. the constructive method consists of two specifications: a logical structure specification and a semantics of operations specification. the specification and implementation of a nontrivial data type, a symboltable data type, is given and proof of implementation correctness is discussed and illustrated. billy g. claybrook marvin p. wyckoff an algebra for program fragments program fragments are described either by strings in the concrete syntax or by constructor applications in the abstract syntax. by defining conversions between these forms, both may be intermixed. program fragments are constructed by terminal and nonterminal symbols from the grammar and by variables having program fragments as values. basic operations such as valuetransfer, composition and decomposition are defined for program fragments allowing more complicated operations to be implemented. usual operations such as testing for equality are defined, and in addition more specialized operations such as testing that a program fragment is derivable from another and converting program fragments in concrete form to abstract form are defined. by introducing regular expressions in the grammar these may be used in program fragments in concrete form. by defining constructors for regular expressions these may also be used in program fragments in abstract form. bent bruun kristensen ole lehrmann madsen birger møller-pedersen kristen nygaard eiffel: programming for reusability and extendibility b meyer position paper for sigops workshop on fault tolerance support in distributed systems andrew birrell kernel korner driving one's own audio device: in this article alessandro will show the design and implementation of a custom audio device, paying particular attention to the software driver. the driver, as usual, is developed as a kernel module. even though … the space required by the execution is at most s1p, where s1 is the minimum serial space requirement. we also show that the expected total communication of the algorithm is at most o(p t∞ (1 + nd) smax), where smax is the size of the largest activation record of any thread and nd is the maximum number of times that any thread synchronizes with its parent. this communication bound justifies the folk wisdom that work-stealing schedulers are more communication efficient than their work-sharing counterparts. all three of these bounds are existentially optimal to within a constant factor. robert d. blumofe charles e. leiserson the gnat project: a gnu-ada 9x compiler we describe the general organization of the gnat compiler, its relationship to the gcc multilanguage compiler system, and some of the architectural details of the system. this paper serves as an introduction to the following papers in this session.
edmond schonberg bernard banner modularity: a phased approach to documentation sandra pakins graphical representation of programs in a demonstrational visual shell - an empirical evaluation an open question in the area of programming by demonstration (pbd) is how to best represent the inferred program. without a way to view, edit, and share programs, pbd systems will never reach their full potential. we designed and implemented two graphical representation languages for a pbd desktop similar to the apple macintosh finder. although a user study showed that both languages enabled nonprogrammers to generate and comprehend programs, the study also revealed that the language that more closely reflected the desktop domain doubled users' abilities to accurately generate programs. trends suggest that the same language was easier for users to comprehend. these findings suggest that it is possible for a pbd system to enable nonprogrammers to construct programs and that the form of the representation can impact the pbd system's effectiveness. a paper-and-pencil evaluation of the two versions of the pbd desktop prior to the study supported these findings and provided interesting feedback on the interaction between usability evaluations and user studies. in particular, the comparison of the paper-and-pencil evaluation with the empirical evaluation suggested that nonempirical evaluation techniques can provide guidance into how to interpret empirical data and, in particular, that pbd systems need to provide support for programming-strategy selection in order to be successful. francesmary modugno albert t. corbett brad a. myers visual programming languages from an object-oriented perspective (abstract) allen l. ambler margaret m. burnett database theory for supporting specification-based database system development we report on the development of a formal theory of databases designed to support specification-based development of database systems. this theory formalizes database systems which include non-first normal form relations, complex integrity constraints, transactions, and embedded data types such as integers, character strings, and user-defined types. our theory is based on two axiomatized algebras (abstract data types) and is being used to mechanically prove the properties of relational algebra and functional dependencies, as well as the relationships between integrity constraints and the primitive operations on databases, e.g., inserts and deletes of tuples. we are also using the theory to prove whether or not specific transactions obey complex integrity constraints which can include universal and existential quantifiers. the latter proofs (as well as failed attempts at proofs) can be used during the design of specific systems and in the optimization of system implementations. david stemple tim sheard producers and consumers views of software quality (panel session) as this very acm workshop/symposium indicates, software quality is of great concern to both producers and users of software. it should be obvious to those who have attended the earlier sessions today and to those who will attend the sessions tomorrow that quality is something that cannot be tested into a system or added to a system. it must be integral from the start of the definition of the system's requirements through each phase of analysis, design, implementation, integration, testing, and installation. software quality implies an engineering-type approach to the development of software.
it implies the use of a disciplined development environment, and the use of tools and techniques to provide assurances throughout the software development process that both the software and its baseline specifications are complete, consistent, and traceable from one to another. peter gross incremental context-dependent analysis for language-based editors thomas reps tim teitelbaum alan demers wine wine allows you to run programs compiled for ms windows under linux, freebsd, and netbsd. learn what wine is, and how it works. bob amstadt michael k. johnson commentary on "object-oriented documentation" by johannes sametinger craig boyle the gutenberg operating system kernel panayiotis chrysanthis krithi ramamritham david stemple stephen vinter introduction to the special issue of the sigplan notices on the object-oriented programming workshop peter wegner bruce shriver four refutations of text files b. j. rodriguez evolution of the high level programming languages: a critical perspective masud ahmad malik cpu: practical components for systems software matthew flatt alastair reid jay lepreau extending java to support shared resource protection and deadlock detection in threads programming anita j. van engen michael k. bradshaw nathan oostendorp event-based performance perturbation: a case study allen d. malony microsoft's .net: platform in the clouds aaron weiss generating mc68000 code for ada we adapted the code generator of the karlsruhe ada compiler to produce symbolic mc68000 code. the essential part of the code generator - the code selection phase - is produced automatically by the code generator synthesis system (cgss) out of a formal description of the target machine. therefore our main task consisted of writing a target machine description (tmd) for the motorola mc68000. in this paper we report our experience in writing the target machine description and in adapting the whole code generator. hans-stephan jansohn rudolf landwehr jochen hayek michael thätner using regular approximations for generalisation during partial evaluation on-line partial evaluation algorithms include a generalisation step, which is needed to ensure termination. in partial evaluation of logic and functional programs, the usual generalisation operation applied to computation states is the most specific generalisation (_msg_) of expressions. this can cause loss of information, which is especially serious in programs whose computations first build some internal data structure, which is then used to control a subsequent phase of execution - a common pattern of computation. if the size of the intermediate data is unbounded at partial evaluation time then the _msg_ will lose almost all information about its structure. hence the second phase of computation cannot be effectively specialised. in this paper a generalisation based on regular approximations is presented. regular approximations are recursive descriptions of term structure closely related to tree automata. a regular approximation of computation states can be built during partial evaluation. the critical point is that when generalisation is performed, the upper bound on regular descriptions can be combined with the _msg_, thus preserving structural information including recursively defined structure. the domain of regular approximations is infinite and hence a widening is incorporated in the generalisation to ensure termination.
an algorithm for partial evaluation of logic programs, enhanced with regular approximations, along with some examples of its use will be presented. john p. gallagher julio c. peralta reifying configuration management for object-oriented software j.-m. jezequel unrolling-based optimizations for modulo scheduling daniel m. lavery wen-mei w. hwu a new solution to lamport's concurrent programming problem using small shared variables gary l. peterson combining contracts and exemplar-based programming for class hiding and customization for performance reasons, client applications often need to influence the implementation strategies of libraries whose services they use. if an object-oriented library contains multiple service classes customized for different usage patterns, applications can influence service implementations by instantiating the customized classes that match their needs. however, with many similar service classes, it can be difficult for applications to determine which classes to instantiate. choosing the wrong class can result in very subtle errors since a customized class might use optimizations that work only over a restricted domain. in this paper, we show how client-side software contracts and exemplar-based class factories can be used to construct customized server objects. by expressing priorities and requirements in contracts, clients can delegate service class selection to the library and thereby avoid implicit dependencies on the library implementation. we have used this approach in the implementation of a real-time database system. victor b. lortz kang g. shin automatic i/o hint generation through speculative execution fay chang garth a. gibson trap-driven simulation with tapeworm ii tapeworm ii is a software-based simulation tool that evaluates the cache and tlb performance of multiple-task and operating system intensive workloads. tapeworm resides in an os kernel and causes a host machine's hardware to drive simulations with kernel traps instead of with address traces, as is conventionally done. this allows tapeworm to quickly and accurately capture complete memory referencing behavior with a limited degradation in overall system performance. this paper compares trap-driven simulation, as implemented in tapeworm, with the more common technique of trace-driven memory simulation with respect to speed, accuracy, portability and flexibility. richard uhlig david nagle trevor mudge stuart sechrest measuring requirements testing: experience report theodore hammer linda rosenberg lenore huffman lawrence hyatt open, distributed coordination with finesse andrew berry simon kaplan ada 9x protected types in pascal-fc a. burns g. l. davies the base-rate fallacy and the difficulty of intrusion detection many different demands can be made of intrusion detection systems. an important requirement is that an intrusion detection system be effective; that is, it should detect a substantial percentage of intrusions into the supervised system, while still keeping the false alarm rate at an acceptable level. this article demonstrates that, for a reasonable set of assumptions, the false alarm rate is the limiting factor for the performance of an intrusion detection system. this is due to the base-rate fallacy phenomenon, that in order to achieve substantial values of the bayesian detection rate p(intrusion | alarm), we have to achieve a (perhaps in some cases unattainably) low false alarm rate.
a selection of reports of intrusion detection performance is reviewed, and the conclusion is reached that there are indications that at least some types of intrusion detection have far to go before they can attain such low false alarm rates. stefan axelsson a high speed foreground multitasker for forth h. leverenz product review: microstation 95 for linux bradley willson synchronizing clocks in the presence of faults algorithms are described for maintaining clock synchrony in a distributed multiprocess system where each process has its own clock. these algorithms work in the presence of arbitrary clock or process failures, including "two-faced clocks" that present different values to different processes. two of the algorithms require that fewer than one-third of the processes be faulty. a third algorithm works if fewer than half the processes are faulty, but requires digital signatures. leslie lamport p. m. melliar-smith partition testing, stratified sampling, and cluster analysis andy podgurski charles yang the society of objects mario tokoro global data flow analysis by decomposition into primes the concept of prime program is applied as a decomposition technique to the global data flow analysis problem. this is done both in the abstract and through the use of the live variables problem as an example. it is also shown how the prime program decomposition is equivalent to the arrangement of a certain matrix (associated with the global data flow analysis problem) into block triangular form. ira r. forman techniques for obtaining high performance in java programs this survey describes research directions in techniques to improve the performance of programs written in the java programming language. the standard technique for java execution is interpretation, which provides for extensive portability of programs. a java interpreter dynamically executes java bytecodes, which comprise the instruction set of the java virtual machine (jvm). execution time performance of java programs can be improved through compilation, possibly at the expense of portability. various types of java compilers have been proposed, including just-in-time (jit) compilers that compile bytecode into native processor instructions on the fly; direct compilers that directly translate the java source code into the target processor's native language; and bytecode-to-source translators that generate either native code or an intermediate language, such as c, from the bytecodes. additional techniques, including bytecode optimization, dynamic compilation, and executing java programs in parallel, attempt to improve java run-time performance while maintaining java's portability. another alternative for executing java programs is a java processor that implements the jvm directly in hardware. in this survey, we discuss the basic features, and the advantages and disadvantages, of the various java execution techniques. we also discuss the various java benchmarks that are being used by the java community for performance evaluation of the different techniques. finally, we conclude with a comparison of the performance of the alternative java execution techniques based on reported results. iffat h. kazi howard h. chen berdenia stanley david j. lilja a reuse approach based on object orientation: its contributions in the development of case tools the aim of this paper is to present an approach to facilitate reuse. this approach, which is based on an object-oriented design method, describes a way of structuring components and a reuse library.
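the base-rate argument in the axelsson entry above follows directly from bayes' rule: when intrusions are rare, even a perfect detection rate yields a small p(intrusion | alarm) unless the false alarm rate is extremely low. the rates in the sketch below are assumptions chosen only to illustrate the effect, not figures from the article.

```cpp
// Bayes' rule illustration of the base-rate fallacy for intrusion detection.
// All rates below are assumed values, chosen for illustration only.
#include <cstdio>

int main() {
    double p_intrusion = 2e-5;   // assumed base rate: 2 intrusions per 100,000 events
    double p_detect    = 1.0;    // assumed detection rate P(alarm | intrusion)
    double p_false     = 1e-3;   // assumed false alarm rate P(alarm | no intrusion)

    double p_alarm = p_detect * p_intrusion + p_false * (1.0 - p_intrusion);
    double p_intrusion_given_alarm = p_detect * p_intrusion / p_alarm;

    // Prints roughly 0.0196: only about 2% of alarms correspond to real intrusions.
    std::printf("P(intrusion | alarm) = %.4f\n", p_intrusion_given_alarm);
    return 0;
}
```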
two concepts, domain and theme, are introduced to allow a classification of components by the services that they offer and by application domain. the library itself is organized in three hierarchical levels (general, dedicated and personal), where the reusable components are stored according to their degree of "interest" (general interest, by application type or particular). so, the library is generic and could cluster various reusable component types (specification components, design components, packages,…). the contributions of this approach in the development of case tools are also emphasized. henda hadjami ben ghezala farouk kamoun stop the presses corporate linux journal staff vipers: a data flow visual programming environment based on the tcl language this paper presents vipers, a new, general purpose, visual programming environment, based on an augmented data-flow model. the novel design peculiarity of vipers is the use of a fully programmable interpretive command language (tcl) to define the flow graph operators, as well as to control the behaviour and the appearance of the whole programming environment. we show how such a choice, with its resulting synergies, can lead to a multifeatured, flexible and convenient programming environment, where the application developer's tasks are remarkably simplified. massimo bernini mauro mosconi performance comparison a look at computational performance for an application running on an x86 cpu with linux and windows 95/98/nt, and how they compare tim newman residual test coverage monitoring christina pavlopoulou michal young understandability of the software engineering method as an important factor for selecting a case tool the article highlights the understandability of a software engineering methodology as an important criterion for selecting a case tool. this aspect is treated through the comparison of learning properties for two very well-known methodologies on which case tools are usually based. the first one is sa-sd and the second one is jsd. to compare the two methodologies, a group of young engineers was tested. each of them wrote a seminar theme, answered a questionnaire and explained his observations. at the end of the paper, a general conclusion is presented. i. rozman j. györkös k. rizmañ interface and execution models in the fluke kernel bryan ford mike hibler jay lepreau roland mcgrath patrick tullmann coven: brewing better collaboration through software configuration management our work focuses on building tools to support collaborative software development. we are building a new programming environment with integrated software configuration management which provides a variety of features to help programming teams coordinate their work. in this paper, we detail a hierarchy-based software configuration management system called _coven_, which acts as a collaborative medium for allowing teams of programmers to cooperate. by providing a family of inter-related mechanisms, our system provides powerful support for cooperation and coordination in a manner which matches the structure of development teams. mark c. chu-carroll sara sprenkle manevro: a new approach to class based programming manevro is a class based programming language designed primarily for system programming. using a forth-like environment, the compiler produces extremely efficient code. one of the main goals of the project was to create a machine-independent compiler, thus increasing the portability of code.
srdjan mijanovic multiple downcasting techniques in this paper, we describe and compare techniques that implement multiple downcasting in strongly-typed languages. we conclude that multimethods stand out as the single best technique. in the first section, we describe seven commonly used techniques. in the second section, we compare these seven techniques using ten criteria. and, in the third section, we comment on some additional techniques. multiple downcasting occurs so often that developers use a variety of terms to describe it, and a variety of language constructs and patterns to implement it. "feed the animals," "driver-vehicle," and "parallel hierarchies" are well-known examples that require multiple downcasting. "multiple type dispatch" and "covariant subclassing" identify different facets of multiple downcasting. "dynamic type casting," "typecase statements," and the "visitor pattern" are frequently used to implement multiple downcasting. we believe that multiple downcasting is not a mistake or the result of poor program design, rather multiple downcasting is a specific technique that implements a specific kind of application semantics. in the animal hierarchy shown in figure 1, cows eat grass, wolves eat meat, and in the superclass, animals eat food. the generalization that animals eat food is imprecise: it ignores the facts that cows only eat grass and wolves only eat meat. so, how should developers write a function to safely "feed the animals" without getting the types mixed up? in program 1, we show how a developer might like to "feed the animals" in c++. the feed_unsafely function may fail, while the feed_safely function works properly for all combinations of foods and animals. the animal hierarchy will not compile in c++, because the eat method is covariant. in this paper, we elaborate ideas and examples from a middle-out concept of hierarchy. we adapted the "feed the animals" example from are cows animals? by shang. puppydog p. o. p. l. b. s. raccoon contextbox (extended abstract) (poster session): a visual builder for context beans we present an assembly-design environment that supports the javabeans extensible runtime containment and services protocol. the environment provides: a vehicle for demonstrating the java component model; a third-party client for testing beancontext and beancontextchild components; and a prototype illustrating how a visual builder might unify visual and context nesting during component assembly. david h. lorenz predrag petkovic the operating system kernel as a secure programmable machine to provide modularity and performance, operating system kernels should have only minimal embedded functionality. today's operating systems are large, inefficient and, most importantly, inflexible. in our view, most operating system performance and flexibility problems can be eliminated simply by pushing the operating system interface lower. our goal is to put abstractions traditionally implemented by the kernel out into user-space, where user-level libraries and servers abstract the exposed hardware resources. to achieve this goal, we have defined a new operating system structure, _exokernel,_ that safely exports the resources defined by the underlying hardware. to enable applications to benefit from full hardware functionality and performance, they are allowed to download additions to the supervisor-mode execution environment.
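one of the commonly used techniques named in the multiple-downcasting entry above is a runtime-checked downcast inside the overriding method. the c++ sketch below is a hypothetical reconstruction of the "feed the animals" situation, not the paper's program 1: because c++ rejects covariant parameter types in overrides, the override keeps the base signature eat(const food&) and narrows its argument with dynamic_cast.

```cpp
// Hypothetical "feed the animals" sketch using a checked downcast.
// The override keeps the base signature and narrows the argument at run time.
#include <iostream>

struct Food          { virtual ~Food() = default; };
struct Grass : Food  {};
struct Meat  : Food  {};

struct Animal {
    virtual ~Animal() = default;
    virtual bool eat(const Food& f) = 0;   // returns true if the food was accepted
};

struct Cow : Animal {
    bool eat(const Food& f) override {
        // Checked downcast: only Grass is acceptable to a Cow.
        if (dynamic_cast<const Grass*>(&f)) { std::cout << "cow eats grass\n"; return true; }
        return false;
    }
};

struct Wolf : Animal {
    bool eat(const Food& f) override {
        if (dynamic_cast<const Meat*>(&f)) { std::cout << "wolf eats meat\n"; return true; }
        return false;
    }
};

int main() {
    Cow cow; Wolf wolf; Grass grass; Meat meat;
    Animal* animals[] = { &cow, &wolf };
    Food*   foods[]   = { &grass, &meat };
    for (Animal* a : animals)
        for (Food* f : foods)
            if (!a->eat(*f)) std::cout << "refused\n";   // type mix-ups are caught, not undefined
}
```

multimethods, which the entry concludes are the best technique, would instead dispatch on the dynamic types of both the animal and the food, removing the explicit cast.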
to guarantee that these extensions are safe, techniques such as code inspection, inlined cross-domain procedure calls, and secure languages are used. to test and evaluate exokernels and their customization techniques a prototype exokernel, _aegis,_ is being developed. dawson r. engler m. frans kaashoek james w. o'toole non-markovian petri nets kishor s. trivedi a. bobbio g. ciardo r. german a. puliafito m. telek good news, bad news: experience building software development environment using the object-oriented paradigm this paper presents our experience building an extendible software development environment using the object-oriented paradigm. we have found that object instances provide a natural way to model program constructs, and to capture complex relationships between different aspects of a software system. the object-oriented paradigm can be efficiently implemented on standard hardware and software, and provides some degree of extendibility without requiring major modifications to the existing implementation. unfortunately, we have also found that some natural extensions that we would like to make to the environment are not easily incorporated. we argue that the lack of extendibility is due to the object-oriented paradigm's lack of support for providing modifications and extensions to the object-oriented paradigm itself. w. h. harrison p. f. sweeney j. j. shilling verifying the correctness of compiler transformations on basic blocks using abstract interpretation timothy s. mcnerney porting linux to a power pc board an experiment and experience in using linux in an embedded application he zhu xiaoqiang chen comparing java implementations for linux no hype here--find out what java really is and what choices you have with java for linux. michael hirsch duration calculus in cooz xiaodong yuan jiajun chen guoliang zheng what is the future of software engineering standards?: discussion paper leonard l. tripp prism: a reverse engineering toolset this paper presents a process for the reengineering of computer-based control systems, and describes tools that automate portions of the process. the intermediate representation (ir) for capturing features of computer-based systems during reverse engineering is presented. a novel feature of the ir is that it incorporates the control system software architecture, a view that enables information to be captured at five levels of granularity: the program level, the task level, the package level, the subprogram level, and the statement level. a reverse engineering toolset that constructs the ir from ada programs, displays the ir, and computes metrics for concurrency, communication and object-orientedness is also presented. lonnie r. welch computing reachable states of parallel programs david p. helmbold charles e. mcdowell surveyor's forum: the file assignment problem j. g. kollias michalis hatzopoulos remembering preston briggs cooz: a complete object-oriented extension to z yuan xiaodong hu deqiang xu hao li yong zheng guoliang a survey of issues to be considered in the development of an object-oriented development methodology for ada r. ladden the next generation of computer assistance for software engineering aidan ward software process enactment in oikos despite much research work in progress to model the different facets of software process enactment from different approaches, there are no models yet generally recognized as adequate, and there is need for more experimentation.
we describe the oikos environment and its coordination language esp: they provide an infrastructure in which experiments may be performed and evaluated. oikos predefines a number of services offering basic facilities, like access to data bases, workspaces, user interfaces, etc. services are customizable, in a declarative way that matches naturally the way esp defines and controls the software process. esp allows one to define services, to structure them in a dynamic hierarchy, and to coordinate them according to the blackboard paradigm. the concepts of environment and of software process and their interplay are naturally characterized in oikos, in terms of sets of services and of the hierarchy. in the paper, an example taken from a real project (the specification of a small language and the implementation of its compiler) shows how oikos and esp are effective for software process enactment. as it is, esp embeds prolog as its sequential component, and combines it smoothly with the blackboard approach to deal with concurrency and distribution. in any case, most of the concepts used to model and enact software processes are largely independent of logic programming. v. ambriola p. ciancarini c. montangero block structure and object oriented languages (abstract only) ole madsen a real-time data plotting program how to program using the qt windowing system in x. david watt book review: structuring xml documents terry dawson hardware support for dynamic activation of compiler-directed computation reuse daniel a. connors hillery c. hunter ben-chung cheng wen-mei w. hwu experience with flamingo: a distributed, object-oriented user interface system the flamingo window management system is based on a remote method invocation mechanism that provides separate processes running in a heterogeneous, distributed computing environment with complete access to flamingo's objects and methods. this object-oriented interface has made flamingo a kernel window manager into which device drivers, graphics libraries, window managers and user interfaces can be dynamically loaded. this paper discusses the strengths and weaknesses of flamingo's system architecture, and introduces a new architecture which will provide a network-wide object space with object protection, migration, and garbage collection. david b. anderson non-intrusive object introspection in c++: architecture and application tyng-ruey chuang y. s. kuo chien-min wang structured apl: a proposal for block structured control flow in apl robert g. willhoft comments on software quality evaluation guidebook john b. bowen resource-bounded partial evaluation olivier danvy nevin hentze karoline malmkjær kde the highway ahead: in this article, mr. dalheimer describes some of the plans being made for future versions of kde kalle dalheimer a reply to "a note on metrics of pascal programs" victor scheider an experiment on a concurrent object-oriented programming language chang-hyan jo technical correspondence the column, "benchmarks for lan performance evaluation," by larry press (aug. 1988, pp. 1014-1017) presented a technique for quickly benchmarking the performance of lans in an office environment. our interest was piqued since office automation is growing in importance. as a result, an empirical analysis of the press benchmark programs was conducted. the results indicated that these benchmarking programs were appropriate for the benchmarking of lans in an office environment. diane crawford using representation clauses as an operating system interface karl a.
nyberg a practical parallel garbage collection algorithm and its implementation one of the major problems of list processing programs is that of garbage collection. this paper presents a new practical parallel garbage collection algorithm and its improvements, and proposes a special processor for parallel garbage collection. for the parallel garbage collection system, an urgent requirement is to reduce the garbage collector cycle time that is defined as the total execution time for the marking and reclaiming phase. the effect of improvements discussed here reduces the garbage collector cycle time to one half of that for the original algorithm. the performance of the processor tailored for parallel garbage collection is six times faster than that of an ordinary processor, while it requires slightly more hardware than a typical channel controller. this processor satisfies the effectiveness condition for parallelism, even if the list process node consumption rate is high, e.g. when a compiled program is executed. yasushi hibino law-governed software processes naftaly h. minsky we are no longer a priesthood nora h. sabelli a brief introduction to icon david r. hanson system administration high availability linux web servers: if a web server goes down, here's one way to save time and minimize traffic loss: configuring multiple hosts to serve the same ip address aaron gowatch domain-oriented design environments gerhard fischer type checking concurrent i/o in parallel programming languages multityped data structures may be shared by two or more processes. process i/o to these structures is assumed to be physically interleaved but logically parallel. this article addresses a syntactic mechanism to specify a type for such structures and extends an example language and its type-checking algorithm to these structures. w. h. carlisle career: mechanisms for ensuring the integrity of distributed object systems david s. rosenblum software pipelining irregular loops on the tms320c6000 vliw dsp architecture the tms320c6000 architecture is a leading family of digital signal processors (dsps). to achieve peak performance, this vliw architecture relies heavily on software pipelining. traditionally, software pipelining has been restricted to _regular_ (for) loops. more recently, software pipelining has been extended to _irregular_ (while) loops, but only on architectures that provide special-purpose hardware such as rotating (predicate and general-purpose) register files, specific instructions for filling/draining software pipelined loops, and possibly hardware support for speculative code motion. in contrast, the tms320c6000 family has a limited, static register file and no specialized hardware beyond the ability to predicate instructions using a few static registers. in this paper, we describe our experience extending a production compiler for the tms320c6000 family to software pipeline irregular loops. we discuss our technique for preprocessing irregular loops so that they can be handled by the existing software pipeliner. our approach is much simpler than previous approaches and works very well in the presence of the dsp applications and the target architecture which characterize our environment. with this optimization, we achieve impressive speedups on several key dsp and non-dsp algorithms. elana granston eric stotzer joe zbiciak what is multi-threading?
a primer on multi-threading: the process whereby linux manages several tasks simultaneously martin mccarthy linux out of the real world: plant experiments run by linux ride the space shuttle sebastian kuzminsky apl object manager in order to co-ordinate apl code written for the european commission, which has several different computers (mainframes, minis and micros) and thus several different apl's, a comprehensive set of programming standards was developed. the standards outline, amongst other topics, how workspaces, apl objects and files should be structured, especially with respect to documentation. to aid the programmer to follow the standards, a development environment was produced and is now being used to develop systems written in dyalog apl. the development environment guides the user through various programming tasks and organizes the documentation associated with each apl object. this paper discusses the concepts and facilities of the development environment with specific references to the standards. b. bykerk common sense and real time executives william e. drissel maintaining the evolution of software objects in an integrated environment this paper discusses the organization of software objects in an integrated environment from the software configuration management view. we will emphasize organization by evolution, by membership and by composition. the paper introduces program base objects which are version controlled in order to maintain the evolution of software objects. different specializations of such objects make it possible to maintain source objects and to support organization of software objects by membership and by composition in a uniform way. a. gustavsson reflection facilities and realistic programming yutaka ishikawa the san francisco project: business process components and infrastructure verlyn johnson on the coding of dependencies in distributed computations abstract claude jard guy-vincent jourdan verification of the application of coding rules for ada to enhance portability of real-time applications in order to improve the portability of code for real time applications written in ada, a set of general coding rules has been compiled. in a second step, ada code constructs offending these rules were identified. in order to support the verification of the application of coding rules a static analysis tool was developed, which tests ada source code for these constructs and which indicates the offending constructs. since a syntactical analysis is insufficient to perform all necessary tests, the tool uses the asis-interface to ada libraries in order to obtain the necessary semantic information. different source code was tested with the tool. obvious differences can be observed depending on the application of the source code and its porting. p. obermayer j. schröer r. a. peek n. collienne a. klimek r. landwehr towards a distributed object-oriented propagation model using ada95 representing interdependencies between the objects of an object-oriented software application requires design-time mechanisms for specifying object interrelationships, as well as software constructs for the runtime maintenance of these relationships. we present a portion of our software engineering research environment adam (short for active design and analyses modeling), which incorporates a technique for design-time modeling of _propagations_ (our term for the relationships between interdependent objects).
we examine the adam environment's support for the automated generation of ada95 software constructs that maintain object interdependency at runtime. we focus on our propagation model's use of ada95 tasking constructs and protected objects, with an emphasis on the source level mechanisms through which our model utilizes concurrency. donald m. needham steven a. demurjian thomas j. peters timing variation in dual loop benchmark n. altman n. weiderman building a web weather station mr. howard tells us how he gathers and outputs weather information to the web using linux, perl and automated ftp chris howard toward a generic framework for computing subsystem interfaces spiros mancoridis capsules: a data abstraction facility for pascal described is a data type abstraction facility that has been implemented as an extension to the university of minnesota pascal 6000 compiler. the facility provides an encapsulation that establishes a static scope of identifiers with controlled visibility. the facility was developed primarily for instructional purposes for use throughout an undergraduate computer science curriculum in which pascal is the major language for program implementation. the paper begins with a brief background in which earlier abstraction facilities influencing the development of the capsule are described. then a description of the capsule and capsule parameterization (generics) facilities is provided, followed by an outline of the implementation of the capsule in a pascal compiler. plans for a generic capsule library and some comments concerning the use of the system in undergraduate instruction at temple university are given in the last section. alessio giacomucci frank l. friedman simultaneous reference allocation in code generation for dual data memory bank asips we address the problem of code generation for dsp systems on a chip. in such systems, the amount of silicon devoted to program rom is limited, so application software must be sufficiently dense. additionally, the software must be written so as to meet various high-performance constraints, which may include hard real-time constraints. unfortunately, current compiler technology is unable to generate high-quality code for dsps, whose architectures are highly irregular. thus, designers often resort to programming application software in assembly---a time-consuming task. in this paper, we focus on providing support for an architectural feature of dsps that makes code generation difficult, namely multiple data memory banks. this feature increases memory bandwidth by permitting multiple data memory accesses to occur in parallel when the referenced variables belong to different data memory banks and the registers involved conform to a strict set of conditions. we present an algorithm that attempts to maximize the benefit of this architectural feature. while previous approaches have decoupled the phases of register allocation and memory bank assignment, thereby compromising code quality, our algorithm performs these two phases simultaneously. experimental results demonstrate that our algorithm not only generates high-quality compiled code, but also improves the quality of completely-referenced code. ashok sudarsanam sharad malik on the power and limitations of strictness analysis strictness analysis is an important technique for optimization of lazy functional languages. it is well known that all strictness analysis methods are incomplete, i.e., fail to report some strictness properties.
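the needham, demurjian and peters entry above maintains object interdependencies at run time with ada 95 tasks and protected objects; the sketch below is only a loose c++ analogy of such a propagation (an observer-style notification guarded by a mutex standing in for a protected object), with all names invented for illustration.

```cpp
// Loose C++ analogy to a runtime propagation between interdependent objects:
// when the subject changes, the change is propagated to registered dependents.
// A mutex plays the role that a protected object plays in the Ada 95 design.
#include <functional>
#include <iostream>
#include <mutex>
#include <vector>

class Subject {
public:
    void depends(std::function<void(int)> dependent) {
        std::lock_guard<std::mutex> lock(m_);
        dependents_.push_back(std::move(dependent));
    }
    void set(int value) {
        std::lock_guard<std::mutex> lock(m_);
        value_ = value;
        for (auto& d : dependents_) d(value_);   // propagate the change
    }
private:
    std::mutex m_;
    int value_ = 0;
    std::vector<std::function<void(int)>> dependents_;
};

int main() {
    Subject s;
    s.depends([](int v) { std::cout << "dependent sees " << v << "\n"; });
    s.set(42);
}
```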
in this paper, we provide a precise and formal characterization of the loss of information that leads to this incompleteness. specifically, we establish the following characterization theorem for mycroft's strictness analysis method and a generalization of this method, called ee-analysis, that reasons about exhaustive evaluation in nonflat domains: mycroft's method will deduce a strictness property for program p iff the property is independent of any constant appearing in any evaluation of p. to prove this, we specify a small set of equations, called e-axioms, that capture the information loss in mycroft's method and develop a new proof technique called e-rewriting. e-rewriting extends the standard notion of rewriting to permit the use of reductions using e-axioms interspersed with standard reduction steps. e-axioms are a syntactic characterization of information loss and e-rewriting provides an algorithm-independent proof technique for characterizing the power of analysis methods. it can be used to answer questions on completeness and incompleteness of mycroft's method on certain natural classes of programs. finally, the techniques developed in this paper provide a general principle for establishing similar results for other analysis methods such as those based on abstract interpretation. as a demonstration of the generality of our technique, we give a characterization theorem for another variation of mycroft's method called dd-analysis. r. sekar i. v. ramakrishnan p. mishra best of technical support corporate linux journal staff crl: high-performance all-software distributed shared memory k. l. johnson m. f. kaashoek d. a. wallach execution monitoring and debugging tool for ada using relational algebra a. di maio s. ceri s. crespi reghizzi issues in the full scale use of formal methods for automated testing experience from a full scale effort to apply formal methods to automated testing in the open systems software arena is described. the formal method applied in this work is based upon the clemson automated testing system (cats) which includes a formal specification language, a set of guidelines describing how to use the method effectively, and tool support capable of translating formal specifications into executable tests. this method is currently being used to develop a full scale test suite for ieee's ada language binding to posix. following an overview of cats, an experience report consisting of results, lessons learned and future directions is presented. j. l. crowley j. f. leathrum k. a. liburdy a framework for construction and evaluation of high-level specifications for program analysis techniques abstract interpretation introduced the notion of formal specification of program analyses. denotational frameworks are convenient for reasoning about such specifications. however, implementation considerations make denotational specifications complex and hard to develop. we present a framework that facilitates the construction and understanding of denotational specifications for program analysis techniques. the framework is exemplified by specifications for program analysis techniques from the literature and from our own research. this approach allows program analysis techniques to be incorporated into automatically generated program synthesizers by including their specifications with the language definition. g. a. venkatesh from the editor phil hughes rationale for the design of reusable abstract data types implemented in ada c. genillard n. ebel a.
strohmeler interfaces and specifications for the smalltalk-80 collection classes william r. cook selecting a case tool l. zucconi the trade shows randolph bentson arnold robbins a review of apl-berlin-2000 by stephen m. mansour stephen m. mansour how do you pronounce oo-era-rdbms-oms? timothy e. lindquist robert g. munck best of technical support corporate linux journal staff a comparison of data flow path selection criteria a number of path selection testing criteria have been proposed throughout the years. unfortunately, little work has been done on comparing these criteria. to determine what would be an effective path selection criterion for revealing errors in programs, we have undertaken an evaluation of these criteria. this paper reports on the results of our evaluation for those path selection criteria based on data flow relationships. we show how these criteria relate to each other, thereby demonstrating some of their strengths and weaknesses. lori a. clarke andy podgurski debra j. richardson steven j. zeil pool: an unbounded array pei-chi wu feng-jian wang an ada reuse support system for windows 95/nt this paper describes a software resource that is being developed as part of the graduation requirement for the master in software engineering degree at the university of scranton. this project evolved from a series of experiments that were performed in undergraduate and graduate courses at the university. a basic editor was developed as part of an undergraduate course in rapid prototyping. several students used that project as the basis for undergraduate senior projects. all undergraduates are required to complete a project as a degree requirement. this basic editor was handed over to a graduate course in software generation and maintenance and used as the starting point for the construction of various software project management features. the system was constructed to support ada source code development. however, the system could be readily modified to support source code management in other languages, notably c++. this paper describes the construction of resources that encourage the use of reusable software. subsequent sections describe the overall framework of the system and selected details that carry out features that make reuse attractive. the system is called **reuse** (the **re**use **u**niversity of **s**cranton **e**nvironment). reuse is an ada programming environment which facilitates and promotes code reuse by individual developers or teams of developers. it provides centralized storage of project files, a package browser, automatic function and procedure call creation, a compiler interface, interactive error processing, multiple simultaneous editors, standard windows tools (menus, toolbars, etc.), and other features to help the developer write and reuse ada code efficiently. reuse was developed in microsoft visual basic 4.0 (32-bit) for the windows 95 / nt operating systems. david battaglia austin burke john beidler moving decision points outward from applications and utilities and into command level the user command interface of an operating system has come to be regarded as the outermost level in a layered hierarchy. in keeping with this view one should attempt to move decision points outward from the lower operating layers, so that all required actions can be defined at command level.
that will allow the description of those actions in a form that matches the form of other commands normally entered from the keyboard, so that only a minimal amount of programming skill will be required. an example of this strategy is developed for implementation under generic msdos (microsoft) and rt11 (digital equipment corporation). maarten van swaay terminable statements and destructive computation boyko b. bantchev metric-driven reengineering for static concurrency analysis an approach to statically analyzing a concurrent program not suited for analysis is described. the program is reengineered to reduce the complexity of concurrency-related activities, thereby reducing the size of the concurrency state space. the key to the reengineering process is a metric set that characterizes program task interaction complexity and provides guidance for restructuring. an initial version of a metric set is proposed and applied to two examples to demonstrate the utility of the reengineering-for-analysis process. the reengineering has potential benefits apart from supporting analyzability, following the dictum that if it is hard to analyze, it is hard to understand and maintain. david l. levine richard n. taylor views of software development environments w. stinson product reviews mathematica version 3.0 for linux: review of new maple release. contacting waterloo for new version patrick galbraith treaty of orlando henry lieberman lynn stein david ungar a problem-oriented analysis of basic uml static requirements modeling concepts the unified modeling language (uml) is a standard modeling language in which some of the best object-oriented (oo) modeling experiences are embedded. in this paper we illustrate the role formal specification techniques can play in developing a precise semantics for the uml. we present a precise characterization of requirements-level (problem-oriented) class diagrams and outline how the characterization can be used to semantically analyze requirements class diagrams. robert france load time forth capabilities r. f. hoselton adapting ada distribution and fault tolerance during the third international real time ada workshop held at nemacolin woodlands, pennsylvania, in june 1989, the present authors formed a study group to review the changes in the ada standard which would make the language more suitable for programming distributed systems. it was accepted that one of the important advantages of distributing embedded systems was the potential for recovery from partial failure of hardware, and that the techniques of programming recovery from such failure were not well supported by the current form of ada. the working group made specific proposals, which have since been refined at a further meeting and discussions by electronic mail. the present paper reviews the causes of difficulty, and then outlines the new proposals, explaining their rationale in relation to the identified requirements. a. b. gargaro s. j. goldsack r. k. powers r. a. volz a. j. wellings a note on "towards a type theory for active objects" joel macauslan the architecture and performance of security protocols in the ensemble group communication system: using diamonds to guard the castle ensemble is a group communication system built at cornell and the hebrew universities. it allows processes to create _process groups_ within which scalable reliable fifo-ordered multicast and point-to-point communication are supported.
the system also supports other communication properties, such as causal and total multicast ordering, flow control, and the like. this article describes the security protocols and infrastructure of ensemble. applications using ensemble with the extensions described here benefit from strong security properties. under the assumption that trusted processes will not be corrupted, all communication is secured from tampering by outsiders. our work extends previous work performed in the horus system (ensemble's predecessor) by adding support for multiple partitions, efficient rekeying, and application-defined security policies. unlike horus, which used its own security infrastructure with nonstandard key distribution and timing services, ensemble's security mechanism is based on off-the-shelf authentication systems, such as pgp and kerberos. we extend previous results on group rekeying, with a novel protocol that makes use of diamondlike data structures. our diamond protocol allows the removal of untrusted members within milliseconds. in this work we are considering configurations of hundreds of members, and further assume that member trust policies are symmetric and transitive. these assumptions dictate some of our design decisions. integrating obstacles in goal-driven requirements engineering axel van lamsweerde emmanuel letier crops: coordinated restructuring of programs and storage larry carter jeanne ferrante a general framework for formalizing uml with formal languages _informal and graphical modeling techniques enable developers to construct abstract representations of systems. object-oriented modeling techniques further facilitate the development process. the_ unified modeling language _(uml), an object-oriented modeling approach, could be broad enough in scope to represent a variety of domains and gain widespread use. currently, uml comprises several different notations with no formal semantics attached to the individual diagrams. therefore, it is not possible to apply rigorous automated analysis or to execute a uml model in order to test its behavior, short of writing code and performing exhaustive testing. we introduce a general framework for formalizing a subset of uml diagrams in terms of different formal languages based on a homomorphic mapping between metamodels describing uml and the formal language. this framework enables the construction of a consistent set of rules for transforming uml models into specifications in the formal language. the resulting specifications derived from uml diagrams enable either execution through simulation or analysis through model checking, using existing tools. this paper describes the use of this framework for formalizing uml to model and analyze embedded systems. a prototype system for generating the formal specifications and results from an industrial case study are also described._ william e. mcumber betty h. c. cheng machine-oriented languages in the apl environment many apl systems, especially mainframe systems, have made it inconvenient or impossible for users to write subroutines in machine-oriented languages (e.g., fortran, c, or assembler). this deficiency has had a serious impact on apl, preventing it from competing with conventional compiled languages for speed and access to hardware capabilities. the trend in modern apl systems, particularly microcomputer systems, is to provide more convenient and flexible access to machine-oriented programs.
this paper describes the machine-level interfaces provided by several apl systems, lists desirable features of such interfaces, gives examples of what machine-oriented programs can be used for, and takes a speculative look at the future. jim weigang self-organizing distributed operating system: implementation and problem using ada this paper introduces a new concept: "self-organization" in distributed operating systems. up to now, the communication facilities in distributed systems were almost always based on the assumption of human intervention. software systems for cim (computer integrated manufacturing), however, should be highly intelligent, i.e., highly dependable and dynamically evolving (and/or gracefully degrading) systems. the most serious drawback of ada when developing distributed software is the rendezvous mechanism of tasking, which provides a one-to-one communication facility. a one-to-many communication mechanism should be provided. furthermore, the communication facility in a distributed system should be self-organized so as to fit the requirements of the problem, such as in neural nets. the main idea of this paper is to "self-organize" the communication facilities according to (physical and/or virtual) structural change of communication networks. the mechanism and implementation strategy for the self-organizing distributed operating system are described. a prototype system-1 is being constructed on a network of ten suns using the verdix ada compiler. the advantages of using ada over other languages in constructing a distributed operating system are the dynamic data structures and packages supported by the separate compilation facility. problems faced by this project will also be discussed. shohei fujita reducing the effects of infeasible paths in branch testing branch testing, which is one of the most widely used methods for program testing, see white [1] for example, involves executing a selected set of program paths in an attempt to exercise all program branches. criteria for selecting such paths have, to date, received scant attention in the literature and it is the issue of developing a suitable path selection strategy to which this paper is addressed. specifically, a selection strategy, which aims at reducing the number of infeasible paths generated during the branch testing exercise is proposed. the strategy is founded on an assertion concerning the likely feasibility of program paths. statistical evidence in support of the assertion is provided, a method implementing the strategy is described, and the results obtained from applying the method to a set of program units are reported and analysed. d. yates n. malevris benchmarking implementations of lazy functional languages pieter h. hartel koen g. langendoen letters to the editor corporate linux journal staff replicated distributed programs eric c. cooper a program complexity metric based on data flow information in control graphs this paper presents a new approach to measuring program complexity with the use of data flow information in programs. a complexity metric, called du, is defined for the control graph of a structured program. this new metric is different from other control-graph based metrics in that it is based on "representative" data flow information in a control graph. an algorithm for computing the value of du(g) for a control graph g is given. the lower and upper bounds of du(g) are provided. the du metric is shown to have several advantages over other control-graph based complexity metrics.
kuo-chung tai linux programming tips michael k. johnson duals: an alternative to variables rick hoselton view-based mechanisms for structured and distributed enactment denis avrilionis pierre-yves cunin operators for recursion recursion is frequently used in functional programming languages. in lisp, it is the primary means of traversing data structures, and, along with cond, is instrumental in controlling program flow. in apl2, the generalization of arrays to include arrays of arrays encourages the use of recursion. this usage is enhanced by the generalization of, and extensions to, operators. in particular, the ability to define operators allows us to create operators which simplify the coding of recursive logic. this paper studies the use of operators to provide program control for solutions to problems which involve recursion. for the most part, iteration on the elements of a list is used to replace recursive function calls. the program control methods used in this paper are introduced in a companion paper, operators for program control (eusebi[3]). the reader is encouraged to read that paper first. edward v. eusebi advances in high-speed networking william stallings book review: x user tools danny yee liseb: a language for modeling living systems with apl2 the paper discusses the design and implementation of liseb, a class-based language built on top of apl2 to respond to challenges posed by modelling living systems from a medical point of view. liseb capitalises on several features of apl and on some lessons learned from its history. living systems are modelled as open systems: environments in which concurrent mobile autonomous agents interact. modelling of these properties required extensions of traditional object- oriented paradigms and of their previous translations under apl: a) every object performs a sequence of actions dynamically modified to adapt to circumstances; b) a new policy of message management is introduced uniformly encompassing broadcast and directly addressed communication. an example of a simulation using liseb illustrates these concepts. p. bottoni m. mariotto p. mussio a discipline of software architecture peter j. denning pamela a. dargan from the editor jerrold l. wagener python update andrew kuchling ada's design goals and object-oriented programming jeffrey r. carter hotdraw (abstract): a structured drawing editor framework for smalltalk ralph johnson dipc: the linux way of distributed programming this article discusses the main characteristics of distributed inter-process communication (dipc), a relatively simple system software that provides uses of the linux operating system with both the distributed shared memory and the message mohsen sharifi kamran karimi verification of real-time designs: combining scheduling theory with automatic formal verification we present an automatic approach to verify designs of real-time distributed systems for complex timing requirements. we focus our analysis on designs which adhere to the hypothesis of analytical theory for fixed-priority scheduling. unlike previous formal approaches, we draw from that theory and build small formal models (based on timed automata) to be analyzed by means of model checking tools. we are thus integrating scheduling analysis into the framework of automatic formal verification. victor a. braberman miguel felder some notes on converting programs from mainframes to the ibm-pc using ms- fortran 3.30 - there is good news and bad news ronald m. sawey corba vs. 
ada 95 dsa: a programmer's view recent evolutions in computer networks have made it possible to consider distributed platforms as possible architectures for numerous applications. the classes of distributed applications are numerous and go from time consuming application (performance quest) to geographically distributed applications through safety and critical softwares. as beforehand, hardware evolutions have been fast paced and much earlier than software techniques to build applications on these distributed targets. in this paper, we shall investigate and compare three different approaches: the language approach of ada 95 for distributed systems vs. two distributed platforms. and for this evaluation, we shall take the developer's view, that is to go beyond the technical response but to take into account the developer of the distributed application. yvon kermarrec intrinsic and library procedures corporate high performance fortran forum safe pointers christoph grein multiple representation of abstract data types and reuse of realizations h. m. al-haddad k. m. george g. e. hedrick d. d. fisher a reexamination of "optimization of array subscript range checks" jonathan asuru proposed recently an enhanced method for optimizing array subscript range checks. the proposed method is however unsafe and may generate optimized programs whose behavior is different from the original program. two main flaws in asuru's method are described, together with suggested remedies and improvements. wei-ngan chin eak-khoon goh new products corporate linux journal staff emeralds: a small-memory real-time microkernel emeralds (extensible microkernel for embedded, real-time, distributed systems) is a real-time microkernel designed for small-memory embedded applications. these applications must run on slow (15-25mhz) processors with just 32-128 kbytes of memory, either to keep production costs down in mass-produced systems or to keep weight and power consumption low. to be feasible for such applications, the os must not only be small in size (less than 20 kbytes), but also have low-overhead kernel services. unlike commercial embedded oss which rely on carefully-crafted code to achieve efficiency, emeralds takes the approach of re-designing the basic os services of task scheduling, synchronization, communication, and system call mechanism by using characteristics found in small-memory embedded systems, such as small code size and _a priori_ knowledge of task execution and communication patterns. with these new schemes, the overheads of various os services are reduced 20-40% without compromising any os functionality. khawar m. zuberi padmanabhan pillai kang g. shin capturing design expertise in software architecture design environments robert t. monroe a parallel prolog: the construction of a data driven model an argument is presented for the implementation of a prolog-like language using data driven execution, as a step towards the solution of the problems associated with multiprocessor machine architectures. to facilitate this, a number of changes and extensions to the execution control mechanism of prolog have been implemented. among the notable features of the system are the use of conditional and (cand) and conditional or (cor) constructs to allow the programmer sequential control in the context of a parallel execution system, and mechanisms supporting dataflow execution within groups of parallel literals. a tentative solution proposed for the evaluation of negative literals is also being investigated. michael j. 
wise functional array fusion this paper introduces a new approach to optimizing array algorithms in functional languages. we are specifically aiming at an efficient implementation of irregular array algorithms that are hard to implement in conventional array languages such as fortran. we optimize the storage layout of arrays containing complex data structures and reduce the running time of functions operating on these arrays by means of equational program transformations. in particular, this paper discusses a novel form of combinator loop fusion, which by removing intermediate structures optimizes the use of the memory hierarchy. we identify a combinator named loop p that provides a general scheme for iterating over an array and that in conjunction with an array constructor replicate p is sufficient to express a wide range of array algorithms. on this basis, we define equational transformation rules that combine traversals of loop p and replicate p as well as sequences of applications of loop p into a single loop p traversal. our approach naturally generalizes to a parallel implementation and includes facilities for optimizing load balancing and communication. a prototype implementation based on the rewrite rule pragma of the glasgow haskell compiler is significantly faster than standard haskell arrays and approaches the speed of hand coded c for simple examples. manuel m. t. chakravarty gabriele keller the ravenscar profile the ravenscar profile is described. all its features and restrictions are noted. also, the means of designating the profile is presented. detailed motivations for the profile are not given. the aim of the paper is to summarize the outcome of deliberations at the 8th and 9th international workshops on real-time ada issues. alan burns failure is not just one value b. meek concurrent object oriented 'c' (cooc) rajiv trehan nobuyuki sawashima akira morishita ichiro tomoda toru imai ken-ichi maeda microcomputers as remote nodes of a distributed system a time-sharing system may be extended into a general-purpose distributed computing facility through the use of remote microcomputer systems. in such an extension the microcomputer system is not merely a remote terminal but a complete computer system. it becomes part of an integrated network of computers, each with significant local computational power with access to host system data resources. the implementation described emphasizes ease-of-use by minimizing any differences between local and remote data use. michael a. pechura didactic instructional tool for topics in computer science we consider the design of computer-enhanced teaching environment to be primarily a didactic task, and concentrate on representation of a domain structure and pedagogical view of the teaching process. at the core of our tutoring environment for computability topics is the ability of the system to engage students' previous knowledge of programming in comprehension of abstract concepts. elena trichina visual object-oriented programming margaret m. burnett a technique for automatically porting dialects of pascal to each other michael j. sorens how to make a recoverable server by synchronization code inheriting a. romanovsky softool change/configuration management j. martinis cons should not cons its arguments, or, a lazy alloc is a smart alloc henry g. 
baker valence and precedence in apl extensions extended operator syntax, defined operators, and strand notation are evaluated as apl extensions using the concept of syntactic binding, which is equivalent to the function scope rule and is more readily extended to other classes of objects. changing syntax has wider impact than adding function, and its side effects should be considered before adoption. j. philip benkard a plea for a readable prolog programming style r. mclaughlin fine-grained dynamic instrumentation of commodity operating system kernels ariel tamches barton p. miller on iterative constructs the wheel is repeatedly reinvented because it is a good idea. perhaps anson's "a generalized iterative construct and its semantics" [1] confirms that "a generalized control structure and its formal definition" [2], and the earlier "an alternative control structure and its formal definition" [3] presented good ideas. however, there are several misstatements in [1] that should be corrected. as anson points out, [2] contained definitions of constructs equivalent to both do term and do upon. however, he is incorrect when he suggests that it emphasized do term because of efficiency considerations. by writing "there is a pragmatic justification for either definition! ", i made it clear that that was not the reason for my choice. do term has two, quite different, advantages. (a) do term is more general. an implementation of do term may, in fact, be do upon if desired. further, a programmer using do term can achieve the effects of do upon by choosing his guards accordingly. (b) do term, like dijkstra's do od, eases the verification of programs by maintaining independence of guarded commands. the verification procedure for such constructs as do od and do term is (1) verify that the union of the guards is true in all states where the program will be invoked; (2) verify that each guarded command, on its own, will do no wrong. for do upon the second step is complicated by the need to consider the terminating commands in the list when considering an iterating command. anson argues that the semantics of do term are more complex. the minor syntactic difference between his two definitions is a consequence of the clumsiness of wp semantics. in the relational semantics used in [2], the change from do upon to do term meant the addition of one simple definition. as mills' [4] has explained, programmers should not be deriving the semantics of their programs from the text as anson's analysis suggests. we do not write programs arbitrarily and then try to determine their semantics. instead, programmers should be verifying that the program they have written has the semantics that they set out to achieve. fortunately, this verification is much easier than the inductive derivation of semantics described in [1]. as explained above, verification is easier for do term than do upon. anson suggests that a stronger weakest-precondition "seems to imply a weaker construct." on the contrary, do term can describe algorithms that cannot be described with do upon. anson also suggests that in do term termination is more difficult to obtain. programmers can obtain the behavior that they want in either. with do term the guards may be longer. for those that want to reduce the length of the guards, [2] offered a third alternative, a deterministic construct. this construct forced left-to-right consideration of the commands. 
this alternative has the verification disadvantages of do upon (the guarded command semantics are not independent), but, by putting the terminating commands first, one can achieve everything that anson values in do upon. in fact, with the deterministic construct, one can often achieve guards that are shorter than they would be with do upon. do upon seems to be a compromise between do term and the deterministic construct, a compromise with some of the disadvantages of both extremes and the advantages of neither. anson has not provided the full semantics of the constructs in question. it has been known for many years (e.g., majster [5]) that wp alone does not define the semantics of a program. two programs with the same wp can differ in their behavior in important ways. to provide a complete semantics of the constructs one must define both wp and wlp. that was one of the reasons for using a relational semantics in [2] and [3]. when i wrote [2] i deliberately chose do term over do upon because i felt that the simplicity of verification compensated for the longer guards. i also valued the ability to describe the algorithms that cannot be described with do upon. i continue to prefer the syntax used in [2]. i believe that readers who consider the facts above will make the same choice. the discussion of these issues is made a bit academic by the four-year delay between anson's submission of his paper (which apparently coincided with the publication of [2]) and the publication of [1]. in that time a generalization of both schemes has been published as a technical report [6] and has been submitted for publication. in this generalization the decision about whether a command is iterating or terminating can be made during execution, and the semantics must be that of do term. further generalizations make the seman tics of the constructs more practical, since side-effects are accurately treated in all cases. a method for reducing the length of guards and avoiding duplicated subexpressions is also provided. david lorge parnas automated support for encapsulating abstract data types robert w. bowdidge william g. griswold programming pearls jon bentley the role of distributed, real-time ada & c++ on the airborne surveillance testbed (ast) program the airborne surveillance testbed (ast) program, managed by smdc for bmdo, is a technology demonstration program that supports development, test, and evaluation of defensive systems to counter intercontinental and theater ballistic missiles (icbms and tbms) and their warheads. the heart of the ast program is a boeing 767 aircraft equipped with a raytheon-built, large-aperture, multiband, high data rate infrared sensor and a wide variety of processing equipment designed to detect, track, and discriminate ballistic missiles at long ranges. a raytheon interceptor seeker (part of a navy risk reduction effort) has recently been integrated onto the aircraft; a staring medium wave infrared (mwir) camera is currently being added as well. onboard processing capabilities include a concurrent turbohawk (multi-cpu powerpc flight computer) along with a variety of custom and off- the-shelf signal processing equipment, sgi workstations, dec alphas, and pcs, largely programmed in ada and c++. these systems are linked via scramnet, ethernet, 1553b, rs422 and rs232, and communicate externally via various radio systems. since the start of the program in 1984, ast has been making use of ada83, ada95, and c++ for both simulations and embedded flight software. 
during that time, we have gathered a lot of experience in the use of ada for real-time distributed systems, especially concerning: • the pitfalls of task scheduling algorithms and priorities, • the benefits of the (careful) use of generics, • the importance of some changes between ada83 and ada95, • interfacing ada software to hardware (and standardized interrupt handling), and • the importance of proper use of exception handling to ensure fault tolerance. henry a. lortz timothy a. tibbetts an overview of dod-std-1838a (proposed) the common apse interface set: revision a five-year effort under the ada joint program office has developed a proposed standard for a host system interface as seen by tools running in an ada programming support environment (apse). standardization of this interface as dod-std-1838a will have a number of desirable effects for the department of defense, including tool portability, tool integration, data transportability, encouragement of a market in portable tools, and better programmer productivity. as the capability of tools to communicate with each other is a central requirement in apses, the common apse interface set (cais) has paid particular attention to facilitating such communication in a host-independent fashion. cais incorporates a well-integrated set of concepts tuned to the needs of writers and users of integrated tool sets. this paper covers several of these concepts: • the entity management system used in place of a traditional filing system, • object typing with inheritance, • process control including atomic transactions, • access control and security, • input/output methods, • support for distributed resource control, and • facilities for inter-system data transport. robert munck patricia oberndorf erhard ploedereder richard thall reducing the impact of software prefetching on register pressure david w. shrewsbury cindy norris oop in the real world (panel session) richard denatale john lalonde burton leathers reed philips higher order objects in pure object-oriented languages thomas kuhne software methodology: astm pete schilling concurrent organizational objects peter de jong inlining semantics for subroutines which are recursive henry g. baker improving the performance of sml garbage collection using application-specific virtual memory management we improved the performance of garbage collection in the standard ml of new jersey system by using the virtual memory facilities provided by the mach kernel. we took advantage of mach's support for large sparse address spaces and user-defined paging servers. we decreased the elapsed time for a realistic application by as much as a factor of 4. eric cooper scott nettles indira subramanian dynamic i/o power management for hard real-time systems power consumption is an important design parameter for embedded and portable systems. software-controlled (or dynamic) power management (dpm) has recently emerged as an attractive alternative to inflexible hardware solutions. dpm for hard real-time systems has received relatively little attention. in particular, energy-driven i/o device scheduling for real-time systems has not been considered before. we present the first online dpm algorithm, which we call low energy device scheduler (ledes), for hard real-time systems. ledes takes as inputs a predetermined task schedule and a device-usage list for each task and it generates a sequence of sleep/working states for each device.
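(editorial aside: the inputs and outputs just described for ledes -- a predetermined task schedule plus per-task device-usage lists in, a per-device sequence of sleep/working states out -- can be illustrated with a small python sketch. everything below, including the wakeup-latency handling and all names, is an assumption for illustration; it is not the ledes algorithm.)

# illustrative only -- not the ledes algorithm. given a fixed task schedule and
# per-task device-usage lists, emit conservative working intervals per device:
# a device must be working while any task that uses it runs, and is woken
# wakeup[dev] time units before such a task starts; it may sleep elsewhere.

tasks = [("t1", 0, 4), ("t2", 4, 7), ("t3", 9, 12)]       # (name, start, finish)
uses = {"t1": {"disk"}, "t2": {"net"}, "t3": {"disk", "net"}}
wakeup = {"disk": 2, "net": 1}                             # sleep-to-working latency

def device_states(tasks, uses, wakeup):
    states = {}
    for dev, lat in wakeup.items():
        busy = sorted((max(0, s - lat), f)
                      for (name, s, f) in tasks if dev in uses[name])
        merged = []
        for s, f in busy:                  # merge overlapping working intervals
            if merged and s <= merged[-1][1]:
                merged[-1] = (merged[-1][0], max(merged[-1][1], f))
            else:
                merged.append((s, f))
        states[dev] = merged               # the device sleeps outside these intervals
    return states

print(device_states(tasks, uses, wakeup))
# {'disk': [(0, 4), (7, 12)], 'net': [(3, 7), (8, 12)]}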
it guarantees that real-time constraints are not violated and it also minimizes the energy consumed by the i/o devices used by the task set. ledes is energy-optimal under the constraint that the start times of the tasks are fixed. we present a case study to show that ledes can reduce energy consumption by almost 50%._ vishnu swaminathan krishnendu chakrabarty s. s. iyengar modules for standard ml the functional programming language ml has been undergoing a thorough redesign during the past year, and the module facility described here has been proposed as part of the revised language, now called standard ml. the design has three main goals: (1) to facilitate the structuring of large ml programs; (2) to support separate compilation and generic library units; and (3) to employ new ideas in the semantics of data types to extend the power of ml's polymorphic type system. it is based on concepts inherent in the structure of ml, primarily the notions of a declaration, its type signature, and the environment that it denotes. david macqueen batces solution #2: a simplified sa/ood approach michael hirasuna idioms and problem solving techniques in apl2 alan graham asistint vasiliy fofanov sergey rybin alfred strohmeier consummating virtuality to support more polymorphism in c++ wen-ke chen jia-su sun zhi-min tang exception handling and object-oriented programming: towards a synthesis the paper presents a discussion and a specification of an exception handling system dedicated to object-oriented programming. we show how a full object- oriented representation of exceptions and of protocols to handle them, using meta-classes, makes the system powerful as well as extendible and solves many classical exception handling issues. we explain the interest for object- oriented programming of handlers attached to classes and to expressions. we propose an original algorithm for propagating exceptions along the invocation chain which takes into account, at each stack level, both kind of handlers. any class can control which exceptions will be propagated out of its methods; any method can provide context-dependant answers to exceptional events. the whole specification and some keys of our smalltalk implementation are presented in the paper. christophe dony experimental results from an automatic test case generator constraint-based testing is a novel way of generating test data to detect specific types of common programming faults. the conditions under which faults will be detected are encoded as mathematical systems of constraints in terms of program symbols. a set of tools, collectively called godzilla, has been implemented that automatically generates constraint systems and solves them to create test cases for use by the mothra testing system. experimental results from using godzilla show that the technique can produce test data that is very close in terms of mutation adequacy to test data that is produced manually, and at substantially reduced cost. additionally, these experiments have suggested a new procedure for unit testing, where test cases are viewed as throw-away items rather than scarce resources. richard a. demillo a. jefferson offutt making use: a design representation john m. carroll an automatically generated, realistic compiler for imperative programming language we describe the automatic generation of a complete, realistic compiler from formal specifications of the syntax and semantics of sol/c, a nontrivial imperative language "sort of like c." 
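(editorial aside on the godzilla abstract a little above: constraint-based testing encodes, as constraints on the program's inputs, the condition under which a common fault -- here, a mutant -- behaves differently from the original code, and then solves those constraints to obtain a test case. the toy python sketch below shows that idea for a single relational-operator mutant; the predicates, the naive "solver", and all names are invented for illustration and are not the godzilla tool.)

# toy illustration of constraint-based test generation: find an input on which
# a mutant of a predicate differs from the original (the "necessity" condition),
# so a test using that input can kill the mutant.

def original(x):           # original branch predicate
    return x > 10

def mutant(x):             # relational-operator mutant: '>' replaced by '>='
    return x >= 10

def solve(constraint, domain):
    """naive constraint 'solver': search the input domain for a witness."""
    for x in domain:
        if constraint(x):
            return x
    return None

# necessity constraint: the two predicates must evaluate differently
necessity = lambda x: original(x) != mutant(x)
test_input = solve(necessity, range(-100, 101))
print(test_input)          # 10 -- the only input distinguishing '>' from '>='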
the compiler exhibits a three-pass structure, is efficient, and produces object programs whose performance characteristics compare favorably with those produced by commercially available compilers. to our knowledge, this is the first time that this has been accomplished. u. f. pleban p. lee stores and partial continuations as first-class objects in a language and its environment g. f. johnson d. duggan concise reference manual for the series macro package richard c. waters testtalk: software test description language debra richardson multi-dimensional modelling and measurement of software designs design structure measures are examples of a class of metrics that may be derived early on in a software project; they are useful indicators of design weaknesses - weaknesses which, if uncorrected, lead to problems of implementation, reliability and maintainability. unfortunately, structure metrics are limited in their ability to model system architecture since they are insensitive to component size. thus, architectures that trade structural complexity for very large components may not be detected. this paper has two concerns. first, we consider the problem of adequately measuring component size at design time. various existing metrics are evaluated and found to be deficient. consequently, a new, more flexible approach, based upon the traceability from system requirements to design components, is proposed. second, we address the issue of multi-dimensional modelling (in this case structure and size). we apply outlier analysis techniques to identify three classes of problem design component and relate our work to an empirical study of 62 modules. the results suggest that augmenting the more traditional approach of a single structure metric with an additional perspective, that of module size, considerably enhances the ability of design metrics to isolate problem components. it is our belief that far more sophisticated software modelling techniques, such as the multi-dimensional approach we present, are required if measurement and modelling is to reach its full potential as an integral part of software engineering processes. martin shepperd darrel ince 'c and tcc: a language and compiler for dynamic code generation dynamic code generation allows programmers to use run-time information in order to achieve performance and expressiveness superior to those of static code. the 'c (tick c) language is a superset of ansi c that supports efficient and high-level use of dynamic code generation. 'c provides dynamic code generation at the level of c expressions and statements and supports the composition of dynamic code at run time. these features enable programmers to add dynamic code generation to existing c code incrementally and to write important applications (such as "just-in-time" compilers) easily. the article presents many examples of how 'c can be used to solve practical problems. the tcc compiler is an efficient, portable, and freely available implementation of 'c. tcc allows programmers to trade dynamic compilation speed for dynamic code quality: in some applications, it is most important to generate code quickly, while in others code quality matters more than compilation speed. the overhead of dynamic compilation is on the order of 100 to 600 cycles per generated instruction, depending on the level of dynamic optimization. measurements show that the use of dynamic code generation can improve performance by almost an order of magnitude; two- to four-fold speedups are common.
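(editorial aside: the 'c/tcc abstract above is about generating code at run time from run-time information. as a language-neutral illustration of that idea -- using python's built-in compile/exec rather than the 'c syntax, which is not reproduced here -- the sketch below specializes an exponentiation routine once the exponent is known at run time.)

# illustration of run-time code generation in the spirit of the abstract above,
# using python's compile/exec instead of the 'c language constructs.

def specialize_power(n):
    """generate, at run time, a function computing x**n with the loop unrolled."""
    body = "return " + " * ".join(["x"] * n) if n > 0 else "return 1"
    src = "def power_{n}(x):\n    {body}\n".format(n=n, body=body)
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["power_%d" % n]

cube = specialize_power(3)     # code for x*x*x is generated and compiled now
print(cube(5))                 # 125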
in most cases, the overhead of dynamic compilation is recovered in under 100 uses of the dynamic code; sometimes it can be recovered within one use. massimiliano poletto wilson c. hsieh dawson r. engler m. frans kaashoek upfront corporate linux journal staff bridging the requirements/design gap in dynamic systems with use case maps (ucms) two important aspects of future software engineering techniques will be the ability to seamlessly move from analysis models to design models and the ability to model dynamic systems where scenarios and structures may change at run-time. use case maps (ucms) are used as a visual notation for describing causal relationships between responsibilities of one or more use cases. ucms are a scenario-based software engineering technique most useful at the early stages of software development. the notation is applicable to use case capturing and elicitation, use case validation, as well as high-level architectural design and test case generation. ucms provide a behavioural framework for evaluating and making architectural decisions at a high level of design. architectural decisions may be based on performance analysis of ucms. ucms bridge the gap between requirements and design by combining behaviour and structure in one view and by flexibly allocating scenario responsibilities to architectural components. they also provide dynamic (run-time) refinement capability for variations of scenarios and structure and they allow incremental development and integration of complex scenarios. therefore, ucms address the issues mentioned above. daniel amyot gunter mussbacher synchronous coordination in the log coordination model koen de bosschere jean- marie jacquet the omg, corba, orbix and ada an object request broker (orb) mediates between applications - including distributed ones. this document presents the design goals and philosophy that lead iona technologies to produce the object request broker, orbix. the ada language binding is described, and some simple programming examples are given to illustrate its operation.the introduction discusses the needs for orbs and the industry initiatives that arose from that need - culminating in the omg's corba specifications. david clarke two object oriented decomposition methods vaclav rajlich joao silva parsing distfix operators the advantages of user-defined distfix operators---a syntactic convenience that enhances the readability of programs \---can be obtained as an extension of almost any programming language without requiring dynamic changes to the parser. simon l. peyton jones composing first-class transactions nicholas haines darrell kindred j. gregory morrisett scott m. nettles jeannette m. wing incremental global reoptimization of programs although optimizing compilers have been quite successful in producing excellent code, two factors that limit their usefulness are the accompanying long compilation times and the lack of good symbolic debuggers for optimized code. one approach to attaining faster recompilations is to reduce the redundant analysis that is performed for optimization in response to edits, and in particulars, small maintenance changes, without affecting the quality of the generated code. although modular programming with separate compilation aids in eliminating unnecessary recompilation and reoptimization, recent studies have discovered that more efficient code can be generated by collapsing a modular program through procedure inlining. 
to avoid having to reoptimize the resultant large procedures, this paper presents techniques for incrementally incorporating changes into globally optimized code. an algorithm is given for determining which optimizations are no longer safe after a program change, and for discovering which new optimizations can be performed in order to maintain a high level of optimization. an intermediate representation is incrementally updated to reflect the current optimizations in the program. analysis is performed in response to changes rather than in preparation for possible changes, so analysis is not wasted if an edit has no far-reaching effects. the techniques developed in this paper have also been exploited to improve on the current techniques for symbolic debugging of optimized code. lori l. pollock mary lou soffa enhancement for multiple-inheritance james hendler uniforum '96: this unix and open systems trade show report belinda frazier a mark-and-sweep collector c++ our research is concerned with compiler-independent, tag-free garbage collection for the c++ programming language. we have previously presented a copying collector based on root registration. this paper presents a mark-and- sweep garbage collector that ameliorates shortcomings of the previous collector. we describe the two collectors and discuss why the new one is an improvement over the old one. we have tested this collector and a conservative collector in a vlsi cad application, and this paper discusses the differences. currently this prototype of the collector imposes too much overhead on our application. we intend to solve that problem, and then use the techniques described in this paper to implement a generational mark-and-sweep collector for c++. daniel r. edelson a measure of program locality and its application although the phenomenon of locality has long been recognized as the single most important characteristic of program behaviour, relatively little work has been done in attempting to measure it. recent work has led to the development of an intrinsic measure of program locality based on the bradford-zipf distribution. potential applications for such a measure are many, and include the evaluation of program restructuring methods (manual and automatic), the prediction of system performance, the validation of program behaviour models, and the enhanced understanding of the phenomena that characterize program behaviour. a consideration of each of these areas is given in connection with the proposed measure, both to increase confidence in the validity of the measure and to illustrate a methodology for dealing with such problems. richard b. bunt jennifer m. murphy shikharesh majumdar an implementation of a code generator specification language for table driven code generators this paper discusses an implementation of glanville's code generator generator for producing a code generator for a production pascal compiler on an amdahl 470. we successfully replaced the hand written code generator of an existing compiler with one which was produced automatically from a formal specification. this paper first outlines glanville's original scheme, then describes extensions which were necessary for generating code for a production compiler. peter l. bird an aristotelian understanding of object-oriented programming derek rayside gerard t. campbell matching data storage to application needs dawson dean richard zippel control of structure and evaluation scalar conformance controls the structure phase of expression evaluation. 
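(editorial aside, referring back to the mark-and-sweep collector abstract a few entries above (edelson): the two classic phases -- marking everything reachable from a root set, then sweeping and reclaiming the rest -- can be sketched over a toy object graph as below. this is a generic illustration of mark-and-sweep, not the paper's c++ collector, and the heap layout is invented.)

# generic mark-and-sweep sketch over a toy heap; not the c++ collector above.

heap = {                         # object id -> ids it references
    "a": ["b", "c"], "b": ["d"], "c": [], "d": [], "e": ["e"],   # 'e' is a garbage cycle
}
roots = ["a"]

def mark(heap, roots):
    marked, stack = set(), list(roots)
    while stack:
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)
            stack.extend(heap[obj])        # trace outgoing references
    return marked

def sweep(heap, marked):
    for obj in list(heap):
        if obj not in marked:              # unreachable: reclaim it
            del heap[obj]

live = mark(heap, roots)
sweep(heap, live)
print(sorted(heap))                        # ['a', 'b', 'c', 'd']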
program control of this phase is achieved through insertion and deletion of scalar levels of structure. the basic functions to do this are enclose and disclose with axis. their application in some common cases is made easier with proposed extensions to bracket index and interval. this approach provides the function of a rank operator with greater flexibility while keeping data structure information with the data. a number of simple expression transformations have proved useful experimentally. two operators are defined which in their several invocations provide full and partial currying and commutation of left and right arguments; composition, power, and dual of functions; composition, conversion between dyadic and monadic function syntax. the variations in the definitions of partitions have led to suggestions that more than one primitive operation may be needed. extensions to each and enclose are proposed which provide a wide range of function while satisfying numerous identities. since these two operations are each used in apl2 to create a level of structure, the extensions are appropriate. j. philip benkard bringing the high end to the low end: high performance device drivers of the linux pc robert geist james westall memory management with explicit regions david gay alex aiken edward reid:some comments on fortran 8x edward reid programming marjorie richardson object-oriented program development using clos and clim (abstract) chris richardson process programming and process models at the fourth software process workshop the final session was devoted to identifying "emerging issues". the issues that were identified fell into two categories. one had to do with language issues --- how to provide the facilities that support the writing of the process programs that implement process models. the other had to do with reporting on experience with writing, evaluating, and sharing process models. since the last workshop we have made significant progress on several of the issues concerned with specifying and implementing process programs. in the paragraphs below we will comment briefly on these. the context in which we have explored these issues is the e-l system, a new programming language and supporting environment that became operational in prototype form in the summer of 1989. while we cannot provide much detail about e-l here, we believe that it provides many of the facilities --- including both language and system facilities --- required to support process modeling and programming. a central concern of e-l is extensibility, both in the language as well as the environment. the environment has a shared software development database, accommodates multiple simultaneous users, and facilitates communication and cooperation among a variety of users. from the user's point of view, the material that would constitute a "program" or "module" in a conventional environment is, in e-l, organized into documents. a document has parts, recursively, and, ultimately, is composed of various objects like program fragments, text, and pictures. we call document as well as their constituents artifacts. an artifact is typed --- examples being documents, text in latex format, pictures in postscript, and e-l program fragments in the surface syntax or some extension of it.1 in general, an artifact includes other artifacts and has a predecessor2. once an artifact has been created (and committed) it is not subject to change, although an edit that starts with a copy of it may be the basis for developing its successor. 
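(editorial aside: the artifact model just described -- typed artifacts that recursively include other artifacts, carry a predecessor link, and become immutable once committed, so that editing a copy produces a successor -- can be rendered in a few lines. the class and field names below are assumptions for illustration; this is one possible reading of the passage, not the e-l implementation.)

# illustrative rendering of the artifact model described above: typed, immutable
# once committed, recursively composed, with a predecessor link for versions.

class Artifact:
    def __init__(self, kind, content=None, parts=(), predecessor=None):
        self.kind = kind                   # e.g. "document", "text", "program-fragment"
        self.content = content
        self.parts = tuple(parts)          # artifacts included in this artifact
        self.predecessor = predecessor     # the version this one was derived from
        self._committed = False

    def commit(self):
        self._committed = True
        return self

    def __setattr__(self, name, value):
        if getattr(self, "_committed", False):
            raise AttributeError("committed artifacts are immutable")
        object.__setattr__(self, name, value)

    def successor(self, **changes):
        """editing starts from a copy; the result records this artifact as predecessor."""
        fields = dict(kind=self.kind, content=self.content,
                      parts=self.parts, predecessor=self)
        fields.update(changes)
        return Artifact(**fields)

para = Artifact("text", "motivation for the design ...").commit()
frag = Artifact("program-fragment", "def f(x): ...").commit()
doc = Artifact("document", parts=[para, frag]).commit()
doc2 = doc.successor(parts=[para, frag.successor(content="def f(x): return x").commit()]).commit()
print(doc2.predecessor is doc)             # True -- versions form a chain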
a given artifact is, typically, shared in the sense of being included in several different documents. examples of sharing include references to library entities, a reference to some artifact in the "current" version of some document as well as references to that same artifact in "previous" versions, and a reference to an artifact in the "current release" of a product as well as in one or more "current developments" of that product. there are three major reasons for using documents and artifacts in the organizing framework for e-l. the first is to provide for literate programming in the sense articulated by knuth. that is, program constructs like functions and types are presented in the context of a document3 that motivates, explains, and defends the design decisions that underlie these constructs. an "implementation module" is simply the collection of leaves of some document that are program entities. the second reason is to provide for sharing of artifacts so that, given two versions of some document, we can identify what is common and what is different about them. the third reason is that the structure provided facilitates the addition of new types of artifacts. the user interface to e-l provides each user with a "window" into the collection of artifacts that are of interest to that user and supports three main activities. the first is browsing --- following various "links" to inspect some collection of artifacts and attributes. in addition to the includes and included- by links that provide a natural way to navigate within a document, there are other links that may be followed like calls, called-by, and recently-modified. the second activity that is supported is editing --- evolving some document or set of documents. the editor insures that the integrity of artifacts is maintained.4 the third activity is program execution, usually aided by a collection of sophisticated debugging tools. the user is not concerned with conventional tool invocation \--- calling compilers, analyzers, text processing tools, and the like. in general, such tools, whose purpose is to derive new artifacts or attributes, are scheduled automatically and run opportunistically. a number of the programming language and environment issues that were identified at the last workshop are dealt with directly by e-l, including: the basic system provides facilities for dealing with concurrency and communication, including mechanisms that provide "notification" to interested users upon the occurrence of certain events. the artifact machinery provides the basis for sharing and containment as well as for hierarchies and decomposition. the type system of the e-l language is quite rich and permits user-defined type constructors, expandable unions, and type templates --- "types", useable in function headers, that contain parameters that are bound for use in function bodies. several other issues are being investigated: long term execution a model for writing programs that execute indefinitely has been developed, a mechanism that transforms such models into program fragments that contain the necessary constructs to insure the persistence of such programs between "active" periods has been designed, and a prototype implementation of the facility is scheduled to be operational in the fall of 1989. rules an extension to e-l that provides for rules that react to changes in the database and permit arbitrary programmed responses to such changes has been designed. a preliminary overview of this extension is provided in [che89]. 
sketches a "sketch" is a representation of a process program that depicts the process as a state transition diagram. we are currently designing a sketch facility for which we expect to have an operational prototype early in 1990. specific process models we are currently developing specifications for several process models, probably including models for handling software trouble reports, cooperative program development and modification, and developing and maintaining families of programs targeted for a variety of parallel architectures. we are also investigating a process model that has nothing to do with the software process but is concerned with the usaf process for issuing rfps. support for process modeling and analysis we are presently investigating the use of colored hierarchical petri nets (see [jen86]) for high level specification of process models as well as the use of the design/cpm tool5 construct graphical representations of these models. our experience to this point is very positive. thomas e. cheatham much ado about patterns robert zubek active network vision and reality: lessions from a capsule-based system although active networks have generated much debate in the research community, on the whole there has been little hard evidence to inform this debate. this paper aims to redress the situation by reporting what we have learned by designing, implementing and using the ants active network toolkit over the past two years. at this early stage, active networks remain an open research area. however, we believe that we have made substantial progress towards providing a more flexible network layer while at the same time addressing the performance and security concerns raised by the presence of mobile code in the network. in this paper, we argue our progress towards the original vision and the difficulties that we have not yet resolved in three areas that characterize a "pure" active network: the capsule model of programmability; the accessibility of that model to all users; and the applications that can be constructed in practice. david wetherall testing and evolutionary development anneliese von mayrhauser architectural considerations for next generation file systems we evaluate two architectural alternatives---partitioned and integrated---for designing next generation file systems. whereas a partitioned server employs a separate file system for each application class, an integrated file server multiplexes its resources among all application classes; we evaluate the performance of the two architectures with respect to sharing of disk bandwidth among the application classes. we show that although the problem of sharing disk bandwidth in integrated file systems is conceptually similar to that of sharing network link bandwidth in integrated services networks, the arguments that demonstrate the superiority of integrated services networks over separate networks are not applicable to file systems. furthermore, we show that: (i) an integrated server outperforms the partitioned server in a large operating region and has slightly worse performance in the remaining region, (ii) the capacity of an integrated server is larger than that of the partitioned server, and (iii) an integrated server outperforms the partitioned server by up to a factor of 6 in the presence of bursty workloads. prashant shenoy pawan goyal harrick m. vin efficient interpretation of prolog programs j. 
barklund choices (class hierarchical open interface for custom embedded systems) roy campbell garry johnston vincent russo putting widgets in their place stephen uhler software process models ian sommerville the chaining approach for software test data generation software testing is very labor intensive and expensive and accounts for a significant portion of software system development cost. if the testing process could be automated, the cost of developing software could be significantly reduced. test data generation in program testing is the process of identifying a set of test data that satisfies a selected testing criterion, such as statement coverage and branch coverage. in this article we present a chaining approach for automated software test data generation which builds on the current theory of execution-oriented test data generation. in the chaining approach, test data are derived based on the actual execution of the program under test. for many programs, the execution of the selected statement may require prior execution of some other statements. the existing methods of test data generation may not efficiently generate test data for these types of programs because they only use control flow information of a program during the search process. the chaining approach uses data dependence analysis to guide the search process, i.e., data dependence analysis automatically identifies statements that affect the execution of the selected statement. the chaining approach uses these statements to form a sequence of statements that is to be executed prior to the execution of the selected statement. the experiments have shown that the chaining approach may significantly improve the chances of finding test data as compared to the existing methods of automated test data generation. roger ferguson bogdan korel acm president's letter:the state of acm - 1982 david h. brandin corba and corba services for dsa comparing corba and the ada 95 distributed systems annex shows that an advantage of corba is its common object services, providing standard, frequently-used components for distributed application development. this paper presents our implementation of similar services for the dsa. we also introduce new developments of our team that aim at providing close interaction between corba and ada applications. part of the work presented here was accomplished by the adabroker team: fabien azavant, emmanuel chavane, jean-marie cottin, tristan gingold, laurent k bler, vincent niebel, and sebastien ponce. laurent pautet thomas quinot samuel tardieu efficient java rmi for parallel programming java offers interesting opportunities for parallel computing. in particular, java remote method invocation (rmi) provides a flexible kind of remote procedure call (rpc) that supports polymorphism. sun's rmi implementation achieves this kind of flexibility at the cost of a major runtime overhead. the goal of this article is to show that rmi can be implemented efficiently, while still supporting polymorphism and allowing interoperability with java virtual machines (jvms). we study a new approach for implementing rmi, using a compiler-based java system called manta. manta uses a native (static) compiler instead of a just- in-time compiler. to implement rmi efficiently, manta exploits compile-time type information for generating specialized serializers. also, it uses an efficient rmi protocol and fast low-level communication protocols.a difficult problem with this approach is how to support polymorphism and interoperability. 
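(editorial aside on the efficient-java-rmi abstract in progress here: the null-rmi timings reported later in this abstract measure the round-trip time of a remote invocation with no arguments and a trivial result. the python sketch below measures the analogous quantity using the standard library's xmlrpc modules; it is an rpc analogy only, not java rmi, the sun protocol, or the manta system.)

# measuring "null" remote-call latency, analogous to the null-rmi benchmark in
# the abstract above, but using python's standard xmlrpc modules as a stand-in.

import threading, time
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(lambda: 0, "null_call")     # no arguments, trivial result
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy("http://127.0.0.1:%d" % port)
n = 1000
start = time.perf_counter()
for _ in range(n):
    proxy.null_call()
elapsed = time.perf_counter() - start
print("average null-call round trip: %.1f microseconds" % (elapsed / n * 1e6))
server.shutdown()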
one of the consequences of polymorphism is that an rmi implementation must be able to download remote classes into an application during runtime. manta solves this problem by using a dynamic bytecode compiler, which is capable of compiling and linking bytecode into a running application. to allow interoperability with jvms, manta also implements the sun rmi protocol (i.e., the standard rmi protocol), in addition to its own protocol.we evaluate the performance of manta using benchmarks and applications that run on a 32-node myrinet cluster. the time for a null-rmi (without parameters or a return value) of manta is 35 times lower than for the sun jdk 1.2, and only slightly higher than for a c-based rpc protocol. this high performance is accomplished by pushing almost all of the runtime overhead of rmi to compile time. we study the performance differences between the manta and the sun rmi protocols in detail. the poor performance of the sun rmi protocol is in part due to an inefficient implementation of the protocol. to allow a fair comparison, we compiled the applications and the sun rmi protocol with the native manta compiler. the results show that manta's null-rmi latency is still eight times lower than for the compiled sun rmi protocol and that manta's efficient rmi protocol results in 1.8 to 3.4 times higher speedups for four out of six applications. jason maassen rob van nieuwpoort ronald veldema henri bal thilo kielmann ceriel jacobs rutger hofman the integration of virtual memory management and interprocess communication in accent (abstract only) robert fitzgerald richard f. rashid acm algorithms policy f. t. krogh a translation language complete for database update and specification s. abiteboul v. vianu infant mortality and generational garbage collection henry g. baker composition of multi-site software (chaims) gio wiederhold dorothea beringer neal sample laurence melloul session 6b: reliability and complexity measures i a. marmor-squires automatic language conversion and its place in the transition to ada p. j. l. wallis standardizing production of domain components john favaro take command implementing a deltree command in linux: removing a software package is made easyusing dr. ekdahl's deltree command graydon l. ekdahl interprocedural constant propagation in a compiling system that attempts to improve code for a whole program by optimizing across procedures, the compiler can generate better code for a specific procedure if it knows which variables will have constant values, and what those values will be, when the procedure is invoked. this paper presents a general algorithm for determining for each procedure in a given program the set of inputs that will have known constant values at run time. the precision of the answers provided by this method are dependent on the precision of the local analysis of individual procedures in the program. since the algorithm is intended for use in a sophisticated software development environment in which local analysis would be provided by the source editor, the quality of the answers will depend on the amount of work the editor performs. several reasonable strategies for local analysis with different levels of complexity and precision are suggested and the results of a prototype implementation in a vectorizing fortran compiler are presented. david callahan keith d. cooper ken kennedy linda torczon a strongly typed persistent object store if we examine present day computer systems we find many dichotomies and discontinuities in their design. 
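(editorial aside, referring back to the interprocedural constant propagation abstract above (callahan, cooper, kennedy, torczon): the goal is to determine, for each procedure, which formal parameters are bound to the same known constant at every call site. the python sketch below is a heavily simplified version of that idea over an invented call graph; it is not the paper's algorithm and omits its jump functions.)

# minimal sketch of interprocedural constant detection: a formal parameter is a
# known constant if every call site passes the same constant (possibly forwarded
# from one of the caller's own constant formals). hypothetical program only.

# call sites: (caller, callee, {formal: actual}); an actual is either
# ("const", value) or ("formal", name-of-caller-formal)
calls = [
    ("main", "f", {"a": ("const", 4), "b": ("const", 1)}),
    ("main", "f", {"a": ("const", 4), "b": ("const", 2)}),
    ("f",    "g", {"x": ("formal", "a")}),
]
formals = {"main": [], "f": ["a", "b"], "g": ["x"]}
BOTTOM = object()                          # "not a single known constant"

def interprocedural_constants(calls, formals):
    consts = {(p, f): None for p in formals for f in formals[p]}   # None = no info yet
    changed = True
    while changed:
        changed = False
        for caller, callee, binding in calls:
            for formal, actual in binding.items():
                if actual[0] == "const":
                    val = actual[1]
                else:                       # forwarded from a caller formal
                    val = consts[(caller, actual[1])]
                if val is None:
                    continue                # no information from this site yet
                old = consts[(callee, formal)]
                new = val if old in (None, val) else BOTTOM
                if new != old:
                    consts[(callee, formal)], changed = new, True
    return consts

result = interprocedural_constants(calls, formals)
print({k: v for k, v in result.items() if v not in (None, BOTTOM)})
# {('f', 'a'): 4, ('g', 'x'): 4} -- 'b' is not constant across call sites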
we contend that the present dependence on a plethora of mechamisms increases the cost of understanding and maintaining software for even the simplest of activities. it is important to remove this incoherence now since it is placing a considerable overhead on the users and developers of computer systems. we report on our current research on programming languages and environments and propose a persistent information space architecture (pisa) capable of integrating all activities. we have identified the following problems faced by users of current systems which when negated become requirements of modern systems in order to achieve simplicity and integration. these are controlling complexity, orthogonal persistence, controlled system evolution, protection of data and concurrent computation. we do not regard this list as exhaustive. the complexity of the system must be kept under control, so that developers and users can concentrate on the application rather than the complexity of the system. this depends on establishing consistent rules which apply throughout the design and being parsimonious in the introduction of new concepts into these designs. the discontinuity between the method of using data that is short term and manipulated by program and long term data that is manipulated by the file system or dbms causes unnecessary complexity. we have defined the persistence of data to be the length of time for which the data exists and is useable. we aspire to systems where the use of data is independent of its persistence. the uses of data (including program) are neither limited nor predictable. it is necessary to support the construction of unanticipated software systems or databases which make use of pre-existing data (or program) even when the data and program were defined independently of one another. for large scale, widely or continuously used systems any alteration to part of the system should not require total rebuilding. we require a mechanism which will allow the programmer to control the units of reconstruction. some large bodies of data are inherently valuable. it is necessary to protect them from misuse from hardware and software failure. this implies both a type and protection system to meet all users needs together with recovery mechanisms to limit the losses due to component failure. a large body of data requires a community effort for its construction and maintenance. any useful body of data is likely to be of concurrent interest to many users, probably in dispersed geographic locations. different models of concurrency and transactions may have to be accommodated by the underlying mechanism. we have designed and implemented the language ps-algol as a testbed for experiments on the above requirements. we report here on some results and propose further experiments in the search for better programming systems. central to our aim of building a total system capable of providing for all programming activity in an integrated manner is our persistent information space. this space is made up of objects which may be simple or highly structured and are part of the universe of discourse defined by the type system of the pisa architecture. the type system must therefore be rich enough to satisfy all our requirements. this we recognise as a research challenge. the information is persistent; the programmer has no knowledge of where the data resides. this may be locally in main store or disk, or remotely in non- local stores. 
the programmer is relieved of the burden of organising the physical storage of data in the system and presented with a conceptually simple model of data. the mechanisms for binding in the persistent information space are those of the name space together with those used for introducing names in the architecture language. at present we feel that it is premature to construct mechanisms and protocols for concurrency and transactions. we subscribe to the view that it is more sensible to build in a primitive for synchronization and a mechanism for specifying non-deterministic parallel computation and to construct the required protocols out of them by layers of abstraction. the reason for this is that although all of the protocols may be built out of the primitives it is not obvious that they form a hierarchy. at a lower level the persistent information space is supported by a stable store mechanism for reliability. the stable store may be distributed over many storage devices and processors and it is currently the focus of some research to build such a system. the approach proposed is to design the environment and the language as a coherent whole. the initial steps outlined in this paper are sketches of the way certain facilities, such as flexible binding, reliable and long term storage currently provided by the operating system may be profitably specified as part of a language. elsewhere we publish work where we demonstrate that this approach is feasible. malcolm p. atkinson alan dearle ronald morrison making languages more powerful by removing limitations jozef de man object-oriented design ian m. holland karl j. lieberherr reuse that pays a company builds a software system capable of running a diesel engine in a week, and in one case over a weekend, as opposed to the full year that it used to take. another company builds one of its typical systems with 13 software engineers instead of the more than 100 it once required, and at the same time decreases the systems defect rate ten-fold. still another increases its software-intensive product offerings from four per year to 50 per year. imagine being able to use one person to integrate and test 1.5 million source lines of ada for a real-time command-and-control system onboard a ship, with safety-critical requirements? or increasing software productivity four-fold over three years, as another company has done? these organizations all achieved their results through strategic software reuse. we software people have been promising the benefits of reuse for decades. are we finally achieving a reuse strategy that lives up to its hype? linda m. northrop using symbolic execution for verification of ada tasking programs a method is presented for using symbolic execution to generate the verification conditions required for proving correctness of programs written in a tasking subset of ada. the symbolic execution rules are derived from proof systems that allow tasks to be verified independently in local proofs, which are then checked for cooperation. the isolation nature of this approach to symbolic execution of concurrent programs makes it better suited to formal verification than the more traditional interleaving approach, which suffers from combinatorial problems. the criteria for correct operation of a concurrent program include partial correctness, as well as more general safety properties, such as mutual exclusion and freedom from deadlock. laura k. 
dillon approaches to specification-based testing current software testing practices focus, almost exclusively, on the implementation, despite widely acknowledged benefits of testing based on software specifications. we propose approaches to specification-based testing by extending a wide variety of implementation-based testing techniques to be applicable to formal specification languages. we demonstrate these approaches for the anna and larch specification languages. d. richardson o. o'malley c. tittle the essence of objects "the essence of objects" is an ongoing program of research in programming languages, type systems, and distributed programming at the university of pennsylvania, supported by the national science foundation under career grant ccr-9701826, _principled foundations for programming with objects._ papers on most of the topics discussed in this outline can be found via http://www.cis.upenn.edu/bcpierce/papers. benjamin c. pierce the design of a class mechanism for moby kathleen fisher john reppy on concurrent programming fred b. schneider scheduler activations: effective kernel support for the user-level management of parallelism thomas e. anderson brian n. bershad edward d. lazowska henry m. levy adaptive search techniques applied to software testing an experiment was performed in which executable assertions were used in conjunction with search techniques in order to test a computer program automatically. the program chosen for the experiment computes a position on an orbit from the description of the orbit and the desired point. errors were inserted into the program randomly using an error generation method based on published data defining common error types. assertions were written for the program and it was tested using two different techniques. the first divided up the range of the input variables and selected test cases from within the sub-ranges. in this way a "grid" of test values was constructed over the program's input space. the second used a search algorithm from optimization theory. this entailed using the assertions to define an error function and then maximizing its value by varying the input variables. the results indicate that this search testing technique was as effective as the grid testing technique in locating errors and was more efficient. in addition, the search testing technique located critical input values which helped in writing correct assertions. j. p. benson position paper for oopsla '92 panel on oop for languages based on strong, static typing s. tucker taft impact of economics on compiler optimization compile-time program optimizations are similar to poetry: more are written than are actually published in commercial compilers. hard economic reality is that many interesting optimizations have too narrow an audience to justify their cost in a general-purpose compiler, and custom compilers are too expensive to write. an alternative is to allow programmers to define their own compile-time optimizations. this has already happened accidentally for c++, albeit imperfectly, in the form of template metaprogramming. this paper surveys the problems, the accidental success, and what directions future research might take to circumvent current economic limitations of monolithic compilers. arch d. robison the state of reuse: perceptions of the reuse community the ninth workshop on institutionalizing software reuse (wisr9) was held january 7-9, 1999, bringing together established reuse researchers from academia and industry.
on the first day of the workshop, a survey was taken to collect feedback about the reuse community's collective beliefs and disagreements. preliminary results were then shared with the participants during the last plenary session of the workshop. this article presents the results of the survey, which capture the opinions of an important segment of the reuse community. stephen h. edwards supporting software reuse within an integrated software development environment (position paper) significant gains in programmer productivity have been achieved through the use of simple abstraction mechanisms that enhance the reuse of code. there are other useful forms of abstraction (over arbitrary identifier bindings, for example) which could further increase reuse rates, but are not well supported by programming languages; such forms may be better expressed by exploiting mechanisms provided by an integrated programming environment. this paper outlines ongoing work which aims to provide programming environment mechanisms that support the reuse of code via various forms of abstraction that complement those traditionally provided by programming languages. the concept of derivation-based reuse is also defined, and a generic framework for its support is outlined. in addition, a collection of environment mechanisms, intended to fit within this framework, are outlined. keith j. ransom chris d. marlin eli: a complete, flexible compiler construction system robert w. gray steven p. levi vincent p. heuring anthony m. sloane william m. waite on e. w. dijkstra's position paper on "fairness" f. b. schneider l. lamport enhancing program comprehension: formatting and documenting mouloud arab data specialization todd b. knoblock erik ruf wisr'92: fifth annual workshop in software reuse: working group reports martin griss will tracz the java syntactic extender (jse) the ability to extend a language with new syntactic forms is a powerful tool. a sufficiently flexible macro system allows programmers to build from a common base towards a language designed specifically for their problem domain. however, macro facilities that are integrated, capable, and at the same time simple enough to be widely used have been limited to the lisp family of languages to date. in this paper we introduce a macro facility, called the java syntactic extender (jse), with the superior power and ease of use of lisp macro systems, but for java, a language with a more conventional algebraic syntax. the design is based on the dylan macro system, but exploits java's compilation model to offer a full procedural macro engine. in other words, syntax expanders may be implemented in, and so use all the facilities of, the full java language. jonathan bachrach keith playford transforming high-level data-parallel programs into vector operations efficient parallel execution of a high-level data-parallel language based on nested sequences, higher order functions and generalized iterators can be realized in the vector model using a suitable representation of nested sequences and a small set of transformational rules to distribute iterators through the constructs of the language. jan f. prins daniel w. palmer producer: a tool for translating smalltalk-80 to objective-c source to source translation tools provide a way of integrating the strengths of production programming environments like c/unix with rapid prototyping environments like smalltalk-80 into a comprehensive hybrid environment that spans more of the software development life-spiral than ever before.
this paper describes a tool-assisted process for translating smalltalk-80 programs into objective-c, and shows how the tool, called producer, is used in practice. to assist others in using this translation tool, we have made producer publicly available without charge on usenet. brad j. cox kurt j. schmucker software analysis: a roadmap daniel jackson martin rinard abstracts in software engineering j. v. guttag j. j. horning j. m. wing the pan language-based editing system for integrated development powerful editing systems for developing complex software documents are difficult to engineer. besides requiring efficient incremental algorithms and complex data structures, such editors must integrate smoothly with the other tools in the environment, maintain a sharable database of information concerning the documents being edited, accommodate flexible editing styles, provide a consistent, coherent, and empowering user interface, and support individual variations and project-wide configurations. pan is a language-based editing and browsing system that exhibits these characteristics. this paper surveys the design and engineering of pan, paying particular attention to a number of issues that pervade the system: incremental checking and analysis, information retention in the presence of change, tolerance for errors and anomalies, and extension facilities. robert a. ballance susan l. graham michael l. vande vanter a study of the prather software metric (abstract) due to the increasing complexity of modern software code, we need ways of effectively measuring the difficulty of programs to detect problem code during development. program metrics [conte, 86] are a partial answer to this need. they offer us various ways of measuring the length, modularity, nesting, and data flow of software. the prather metric, developed by ronald e. prather [prather, 84], is a statement structure metric which measures statement nesting and control structures as a function of the simple statements they contain. for our research, we developed a program which takes pascal programs as input and returns the prather measures of the programs. our project was to examine the prather metric by comparing it with the statement count and mccabe measures, both commonly used metrics, and see if there was a relationship between prather and the other measures. if there is a correlation, this would show that prather is not truly a unique measure, but it contains aspects of statement count or mccabe's measure. we also examined the use of the prather metric as a cheating checker on student programs. we found that prather may be better at picking out possible cheaters than statement count and mccabe's measure. we concluded that this was due to the nature of the prather metric: it includes aspects of statement count and number of paths plus a measure of nesting. scott moore ronald curtis a rendezvous with linda the ada 9x revision found the rendezvous mechanism too complex to implement as the communication mechanism in a distributed system. in this paper another distributed programming model, linda, is related to the rendezvous mechanism. linda can be used to allow tasks to communicate in a rendezvous-like manner. sequences of linda operations that roughly correspond to different rendezvous operations are given and a small extension of linda is presented in order to handle more of the rendezvous constructs. linda can be used to communicate across partitions as well as within partitions.
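as an aside to the linda entry just above, the rendezvous-like exchange it describes can be pictured with a small, self-contained python sketch of a tuple space; the class, the tuple layouts, and the thread roles here are illustrative assumptions, not code from that paper.

```python
import threading

class TupleSpace:
    """Minimal in-process Linda-style tuple space (illustrative only)."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):                      # deposit a tuple
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def in_(self, *pattern):                 # withdraw a matching tuple, blocking
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(pattern, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

# a rendezvous-like exchange: the "caller" deposits a request and blocks on the
# reply; the "acceptor" withdraws the request, does the work, and replies.
ts = TupleSpace()

def caller():
    ts.out(("request", "double", 21))
    print("caller got:", ts.in_("reply", None))

def acceptor():
    _, op, arg = ts.in_("request", None, None)
    ts.out(("reply", arg * 2 if op == "double" else arg))

t1 = threading.Thread(target=acceptor); t2 = threading.Thread(target=caller)
t1.start(); t2.start(); t1.join(); t2.join()
```

the blocking in_ operation is what gives the exchange its rendezvous flavour: the caller cannot proceed until an acceptor has withdrawn the request and deposited a matching reply.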
kristina lundqvist göran wall challenges in type systems research martin odersky sa-2-ada: a methodology for deriving ada designs from structured analysis specifications george h. marschalk algorithmic modifications to the jacobi-davidson parallel eigensolver to dynamically balance external cpu and memory load clusters of workstations (cows) and smps have become popular and cost effective means of solving scientific problems. because such environments may be heterogeneous and/or time shared, dynamic load balancing is central to achieving high performance. our thesis is that new levels of sophistication are required in parallel algorithm design and in the interaction of the algorithms with the runtime system. to support this thesis, we illustrate a novel approach for application-level balancing of external cpu and memory load on parallel iterative methods that employ some form of local preconditioning on each node. there are two key ideas. first, because all nodes need not perform their portion of the preconditioning phase to the same accuracy, the code can achieve perfect load balance, dynamically adapting to external cpu load, if we stop the preconditioning phase on all processors after a fixed amount of time. second, if the program detects memory thrashing on a node, it recedes its preconditioning phase from that node, hopefully speeding the completion of competing jobs and hence the relinquishing of their resources. we have implemented our load balancing approach in a state-of-the-art, coarse grain parallel jacobi-davidson eigensolver. experimental results show that the new method adapts its algorithm based on runtime system information, without compromising the overall convergence behavior. we demonstrate the effectiveness of the new algorithm in a cow environment under (a) variable cpu load and (b) variable memory availability caused by competing applications. richard tran mills andreas stathopoulos evgenia smirni the philosophy of lisp we consider here the importance of an overall systems viewpoint in avoiding computer-related risks. according to webster's, a system is a regularly interacting or interdependent group of items forming a unified whole. in computer systems, one person's components may be another person's system, and one person's system may in turn be one of another person's components. that is, each layer of abstraction may have its own concept of a system. we speak of a memory system, a multiprocessor system, a distributed system, a multisystem system, a networked system, and so on. a system design can most effectively be considered as a unified whole when it is possible to analyze the interdependent subsystems individually and then to evaluate, reason about, and test the behavior of the entire system based on the interactions among the subsystems. this is particularly true of distributed systems that mask the presence of distributed storage, processing, and control. at each layer of abstraction, it is desirable to design (sub)systems that are context-free, but in reality there may be subtle interactions that must be accommodated---particularly those involving the operating environment. kenneth h. sinclair david a. moon parsing with c++ constructors philip w. hall using standard fortran - past, present, and future alan clarke ada box structures: starting with objects cameron m. donaldson edward r. comer andres rudmik transparent recovery in distributed systems (position paper) david f.
bacon up-down parsing with prefix grammars p k turner a note on metrics of pascal programs gordon davies a. tan programming by multiset transformation jean-pierre banâtre daniel le metayer working group report library and representation subgroup of methods and tools for design, specification, and reuse james solderitsch deterministic execution testing of concurrent ada programs an execution of a concurrent program p exercises a sequence of synchronization events, called a synchronization sequence or a syn-sequence. this non-deterministic execution behavior creates a problem during the testing phase of p: when testing p with input x, a single execution is insufficient to determine the correctness of p with input x. in this paper, we show how to accomplish deterministic execution testing of a concurrent ada program according to a given syn-sequence. we first define the format of a syn-sequence which provides sufficient information for deterministic execution. then we show how to transform a concurrent ada program p into a slightly different program p* (also written in ada) so that an execution of p* with (x,s) as input, where x and s are an input and syn-sequence of p respectively, determines whether or not s is feasible for p with input x and produces the same result as p with input x and syn-sequence s would, provided that s is feasible. tools for transforming concurrent ada programs for deterministic execution testing are described. r. carver k. c. tai which pointer analysis should i use? during the past two decades many different pointer analysis algorithms have been published. although some descriptions include measurements of the effectiveness of the algorithm, qualitative comparisons among algorithms are difficult because of varying infrastructure, benchmarks, and performance metrics. without such comparisons it is not only difficult for an implementor to determine which pointer analysis is appropriate for their application, but also for a researcher to know which algorithms should be used as a basis for future advances. this paper describes an empirical comparison of the effectiveness of five pointer analysis algorithms on c programs. the algorithms vary in their use of control flow information (flow-sensitivity) and alias data structure, resulting in worst-case complexity from linear to polynomial. the effectiveness of the analyses is quantified in terms of compile-time precision and efficiency. in addition to measuring the direct effects of pointer analysis, precision is also reported by determining how the information computed by the five pointer analyses affects typical client analyses of pointer information: mod/ref analysis, live variable analysis and dead assignment identification, reaching definitions analysis, dependence analysis, and conditional constant propagation and unreachable code identification. efficiency is reported by measuring analysis time and memory consumption of the pointer analyses and their clients. michael hind anthony pioli fast strictness analysis based on demand propagation r. sekar i. v. ramakrishnan challenges for future real-time operating systems tom hand improved interpretation of unix-like file names embedded in data when the data processed by a program span several files, the common practice of including file names as data in some of the files leads to difficulties in moving or sharing that data. in systems using tree structured directories, this problem can be solved by making a syntactic distinction between absolute and relative file names.
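the absolute-versus-relative distinction described in the entry above can be sketched in a few lines of python; the helper below is a hypothetical illustration of the general idea, not the interpretation rules proposed in that paper.

```python
import os

def resolve_embedded_name(containing_file, embedded_name):
    """Interpret a file name found inside a data file.

    Absolute names are taken as-is; relative names are resolved against the
    directory of the file that contains them, so a group of files that refer
    to one another can be moved or shared as a unit.  (Illustrative sketch;
    the helper name is hypothetical.)
    """
    if os.path.isabs(embedded_name):
        return embedded_name
    base_dir = os.path.dirname(os.path.abspath(containing_file))
    return os.path.normpath(os.path.join(base_dir, embedded_name))

# example: /project/config.dat refers to "tables/rates.dat"
print(resolve_embedded_name("/project/config.dat", "tables/rates.dat"))
# -> /project/tables/rates.dat
print(resolve_embedded_name("/project/config.dat", "/etc/passwd"))
# -> /etc/passwd
```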
douglas w. jones preliminary design use cases: combining use cases and event response lists for reuse and legacy system enhancement this report describes an approach to combining use cases with elements from event response lists to help improve requirements understanding. the format suggested for the combination, called the _preliminary design use case,_ proves particularly effective in facilitating reuse of existing system elements. it also supports the definition of extensions to existing systems, and encourages dialogue between customers, analysts and designers. the approach has resulted from industrial practice and has proven particularly successful. declan martin oo testing in the real world (panel): lessons for all john d. mcgregor ed berard don firesmith brian marick dave thomson a correct implementation of general semaphores david hemmendinger fault tolerance (session summary) mike kamrad a life cycle based approach to ada software configuration management a. reedy d. stephenson e. dudar f. c. blumberg pgraphite: an experiment in persistent typed object management defining, creating, and manipulating persistent typed objects will be central activities in future software environments. pgraphite is a working prototype through which we are exploring the requirements for the persistent object capability of an object management system in the arcadia software environment. pgraphite represents both a set of abstractions that define a model for dealing with persistent objects in an environment and a set of implementation strategies for realizing that model. pgraphite currently provides a type definition mechanism for one important class of types, namely directed graphs, and the automatic generation of ada implementations for the defined types, including their persistence capabilities. we present pgraphite, describe and motivate its model of persistence, outline the implementation strategies that it embodies, and discuss some of our experience with the current version of the system. jack c. wileden alexander l. wolf charles d. fisher peri l. tarr a presentation manager based on application semantics we describe a system for associating the user interface entities of an application with their underlying semantic objects. the associations are classified by arranging the user interface entities in a type lattice in an object-oriented fashion. the interactive behavior of the application is described by defining application operations in terms of methods on the types in the type lattice. this scheme replaces the usual "active region" interaction model, and allows application interfaces to be specified directly in terms of the objects of the application itself. we discuss the benefits of this system and some of the difficulties we encountered. s. mckay w. york m. mcmahon conceptual module querying for software reengineering elisa l. a. baniassad gail c. murphy syntactic closures in this paper we describe syntactic closures. syntactic closures address the scoping problems that arise when writing macros. we discuss some issues raised by introducing syntactic closures into the macro expansion interface, and we compare syntactic closures with other approaches. included is a complete implementation. alan bawden jonathan rees effective whole-program analysis in the presence of pointers understanding large software systems is difficult. traditionally, automated tools are used to assist program understanding. however, the representations constructed by these tools often require prohibitive time and space.
demand-driven techniques can be used to reduce these requirements. however, the use of pointers in modern languages introduces additional problems that do not integrate well with these techniques. we present new techniques for effectively coping with pointers in large software systems written in the c programming language and use our techniques to implement a program slicing tool. first, we use a fast, flow-insensitive, points-to analysis before traditional data-flow analysis. second, we allow the user to parameterize the points-to analysis so that the resulting program slices more closely match the actual program behavior. such information cannot easily be obtained by the tool or might otherwise be deemed unsafe. finally, we present data-flow equations for dealing with pointers to local variables in recursive programs. these equations allow the user to select an arbitrary amount of calling context in order to better trade performance for precision. to validate our techniques, we present empirical results using our program slicer on large programs. the results indicate that cost-effective analysis of large programs with pointers is feasible using our techniques. darren c. atkinson william g. griswold diff, patch, and friends de-mystifying patches and the tools used to create and apply them. michael k. johnson cappuccino - a c++ to java translator frank buddrus jorg schodel modified structured decision table and its complexity a. k. misra b. d. chaudhary a single model for files and processes p singleton k h bennett o p brereton self-stabilizing extensions for message-passing systems shmuel katz kenneth perry persistent shared object support in the guide system: evaluation & related work the purpose of the guide project is to explore the use of shared objects for communication in a distributed system, especially for applications that require cooperative work. since 1986, two prototypes have been implemented respectively on top of unix (guide-1) and mach 3.0 (guide-2). they have been used for the development of distributed cooperative applications, allowing us to validate or reject many design choices in the system. this paper gathers the lessons learned from our experience and compares the basic design choices with those in other distributed object-oriented systems. the lessons may be summarized as follows. this system layer must provide a generic interface for the support of several object-oriented languages. it must manage fine grained objects and enforce protection between objects and processes. these requirements can be achieved with an acceptable trade-off between protection and efficiency. daniel hagimont p.-y. chevalier a. freyssinet s. krakowiak s. lacourte j. mossière x. rousset de pina linux on the desktop marjorie richardson the design of a data flow analyzer this paper presents an efficient inter-procedural data flow analysis algorithm for precisely determining aliases in programs that employ a rich set of parameter passing mechanisms and pointer data types. this approach handles the use of pointers bounded to a data type as in pascal, as well as unbounded pointers that can point to the same locations to which variables map. in the last step of this approach, the alias information is used to compute data flow information that is required for optimization. anita l.
chow andres rudmik advanced techniques for understanding, profiling, and debugging object oriented systems chris laffra ashok malhotra vicki de mey effective fine-grain synchronization for automatically parallelized programs using optimistic synchronization primitives this article presents our experience using optimistic synchronization to implement fine-grain atomic operations in the context of a parallelizing compiler for irregular, object-based computations. our experience shows that the synchronization requirements of these programs differ significantly from those of traditional parallel computations, which use loop nests to access dense matrices using affine access functions. in addition to coarse-grain barrier synchronization, our irregular computations require synchronization primitives that support efficient fine-grain atomic operations. the standard implementation mechanism for atomic operations uses mutual exclusion locks. but the overhead of acquiring and releasing locks can reduce performance. locks can also consume significant amounts of memory. optimistic synchronization primitives such as load-linked/store-conditional are an attractive alternative. they require no additional memory and eliminate the use of heavyweight blocking synchronization constructs. we evaluate the effectiveness of optimistic synchronization by comparing experimental results from two versions of a parallelizing compiler for irregular, object-based computations. one version generates code that uses mutual exclusion locks to make operations execute atomically; the other version uses optimistic synchronization. we used this compiler to automatically parallelize three irregular, object-based benchmark applications of interest to the scientific and engineering computation community. the presented experimental results indicate that the use of optimistic synchronization in this context can significantly reduce the memory consumption and improve the overall performance. martin c. rinard an apl/370 compiler and some performance comparisons with apl interpreter and fortran the experimental apl/370 e-compiler compiles a subset of apl which is large enough for most scientific and engineering uses, directly into 370 assembly code. the compiler does not require variable declarations. the front-end employs extensive type-shape analysis based on global dataflow analysis. the back-end takes the parse trees, graphs and tables produced by the front-end and generates 370-code which runs independently of the interpreter. the compiled code executes at 2-10 times the speed of the interpreter on several one-line functions, and this ratio increases significantly with iterative programs. the code quality is comparable to that produced by an optimizing fortran compiler on fortran programs corresponding to our test cases. this removes the performance penalty of apl in computation intensive applications. wai-mee ching evaluation of processor code efficiency for embedded systems this paper evaluates the code efficiency of the arm, java, and x86 instruction sets by compiling the spec cpu95/cpu2000/jvm98 and caffeinemark benchmarks, in terms of code sizes, basic block sizes, instruction distributions, and average instruction lengths.
as a result, mainly because (i) the java architecture is a stack machine, (ii) there are only four local variables which can be accessed by a 1-byte instruction, and (iii) additional instructions are provided for network security, the code efficiency of java turns out to be inferior to that of arm thumb. moreover, through this efficiency analysis it can be claimed that a more efficient code architecture can be constructed by taking minute account of the customization of an instruction set as well as the number of registers. morgan hirosuke miki mamoru sakamoto shingo miyamoto yoshinori takeuchi toyohiko yoshida isao shirakawa amt - the ada maintenance toolchest a. von mayrhauser a foundation for an efficient multi-threaded scheme system we have built a parallel dialect of scheme called sting that differs from its contemporaries in a number of important respects. sting is intended to be used as an operating system substrate for modern parallel programming languages. the basic concurrency management objects in sting are first-class lightweight threads of control and virtual processors (vps). unlike high-level concurrency structures, sting threads and vps are not encumbered by complex synchronization protocols. threads and vps are manipulated in the same way as any other scheme structure. sting separates thread policy decisions from thread implementation ones. implementations of different parallel languages built on top of sting can define their own scheduling and migration policies without requiring modification to the runtime system or the provided interface. process migration and scheduling can be customized by applications on a per-vp basis. the semantics and implementation of threads minimize the cost of thread creation and put a premium on storage locality. the storage management policies in sting lead to better cache and page utilization, and allow users to experiment with a variety of different execution regimes---from fully delayed to completely eager evaluation. suresh jagannathan jim philbin on the notion of inheritance one of the most intriguing---and at the same time most problematic---notions in object-oriented programming is inheritance. inheritance is commonly regarded as the feature that distinguishes object-oriented programming from other modern programming paradigms, but researchers rarely agree on its meaning and usage. yet inheritance is often hailed as a solution to many problems hampering software development, and many of the alleged benefits of object-oriented programming, such as improved conceptual modeling and reusability, are largely credited to it. this article aims at a comprehensive understanding of inheritance, examining its usage, surveying its varieties, and presenting a simple taxonomy of mechanisms that can be seen as underlying different inheritance models. antero taivalsaari documenting software systems with views software professionals rely on internal documentation as an aid in understanding programs. unfortunately, the documentation for most programs is usually out-of-date and cannot be trusted. without it, the only reliable and objective information is the source code itself. personnel must spend an inordinate amount of time exploring the system by looking at low-level source code to gain an understanding of its functionality. one way of producing accurate documentation for an existing software system is through reverse engineering.
this paper outlines a reverse engineering methodology for building subsystem structures out of software building blocks, and describes how documenting a software system with views created by this process can produce numerous benefits. it addresses primarily the needs of the software engineer and technical manager as document users. scott r. tilley hausi a. muller mehmet a. orgun three dimensional software modelling joseph gil stuart kent treating statement sequences as block objects hanspeter mössenböck system information retrieval collect your system configuration files and store them on a separate machine dan lasley extending use cases throughout the software lifecycle the relevance of _use cases_ throughout the software development life cycle is considered in the context of an actual project. extensions, called _behavior cases_ and _test cases,_ are proposed to address design and testing activities. d. kirner r. porter p. punniamoorthy m. schuh d. shoup s. tindall d. umphress a new approach for efficient implementation of ada multi-tasking gansheng li claw, a high level, portable, ada 95 binding for microsoft windows randall brukardt tom moran message conversion and a new type system for oo model jianhua zhao jiajun chen guoliang zheng disposable memo functions (extended abstract) byron cook john launchbury the portable common runtime approach to interoperability operating system abstractions do not always reach high enough for direct use by a language or applications designer. the gap is filled by language-specific runtime environments, which become more complex for richer languages (commonlisp needs more than c++, which needs more than c). but language-specific environments inhibit integrated multi-lingual programming, and also make porting hard (for instance, because of operating system dependencies). to help solve these problems, we have built the portable common runtime (pcr), a language-independent and operating-system-independent base for modern languages. pcr offers four interrelated facilities: storage management (including universal garbage collection), symbol binding (including static and dynamic linking and loading), threads (lightweight processes), and low-level i/o (including network sockets). pcr is "common" because these facilities simultaneously support programs in several languages. pcr supports c, cedar, scheme, and commonlisp intercalling and runs pre-existing c and commonlisp (kyoto) binaries. pcr is "portable" because it uses only a small set of operating system features. the pcr source code is available for use by other researchers and developers. m. weiser a. demers c. hauser task allocation in distributed systems: a survey of practical strategies a general distributed processing model is formulated, and the problem of task allocation is defined. a survey is made of various task allocation strategies that have been proposed in recent literature. each strategy is described briefly with respect to the particular distributed processing model for which it is designed, and its tested performance and intended applicability are discussed. camille c. price software specification and design must "engineer" quality and cost iteratively tom gilb constructing a class library for microsoft windows lon fisher kernel korner: the elf object file format by dissection eric youngdale reflection and metalevel architectures in object-oriented programming (workshop session) mamdouh h.
ibrahim objects and database standards (panel) rick cattell frank manola richard soley jeff sutherland mary loomis classification as a paradigm for computing (abstract only) peter wegner the systems engineer and the software crisis d. m. johnson view-3 and ada: tools for building systems with many tasks ann kratzer mark sherman cooking with linux--the french connection mr. gagne provides us with several recipes from his famed french kitchen marcel gagne choices in server-side programming: a comparative programming exercise one of the fastest growing and changing fields for software developers is in writing applications that are used across the "world wide web", which is in turn a client-server system that runs on the internet. servers, known by name, can be accessed over the internet, and a protocol, known as http (hypertext transfer protocol) is used to send requests to servers for text, images (still and moving), audio, and other information. a very large amount of business will be conducted this way, now and in the future, having grown from almost nothing only 5 years ago. application development on the web is largely a matter of writing programs that are executed by the server, and we're going to focus our attention on two programming and operating environments that can be used to deliver server-side software.we would like to show how apl can be used to develop programs that run on the server, and we will contrast it with another means of developing and delivering server-side programs. it is not our goal to deliver any judgment about whether or not one system is "better" than the other (we believe that any such statement would be simplistic to the point of being misleading); we intend only to share the results of our experience, help the reader understand what technologies that are available, and assist in making an informed choice if participation in a project of this nature is part of what the reader does. robert g. brown willi hahn how to make ad-hoc polymorphism less ad hoc this paper presents type classes, a new approach to ad-hoc polymorphism. type classes permit overloading of arithmetic operators such as multiplication, and generalise the "eqtype variables" of standard ml. type classes extend the hindley/milner polymorphic type system, and provide a new approach to issues that arise in object-oriented programming, bounded type quantification, and abstract data types. this paper provides an informal introduction to type classes, and defines them formally by means of type inference rules. p. wadler s. blott running windows applications now need to run ms windows applications right away? then desqview/x offers a solution ron bardarson transparent fault tolerance for distributed ada applications the advent of open architectures and initiatives in massively parallel supercomputing, combined with the maturation of distributed processing methods and algorithms, has enabled the implementation of responsive software-based fault tolerance. expanding capabilities of distributed ada runtime environments further stimulate the incorporation of hardware fault tolerance into critical, realtime embedded systems. through the integration of proven ada program component distribution and virtually synchronous communication protocols, we have established a benchmark fault tolerant system, which layers transparently between an ada application and the runtime environment. 
such transparency allows rapid reconfiguration of distribution and fault tolerance characteristics without change to the source code, thus enhancing portability, scalability, and reuse. the ada fault tolerance project has implemented software technologies which penetrate the envelope of an ada program to detect, diagnose, and recover from hardware faults. these realtime facilities interact with the rational distributed application development and runtime environment systems to service replicated ada software tasks (i.e., threads of control). the deployed system proves that all replicated threads, including those of independently distributed components, can achieve timely consensus during periodic fault detection cycles through transparently embedded voting protocols. our implementation uses a hybrid redundancy computation strategy and relies on a communication layer which provides virtual synchrony via a causal multicast protocol. mark a. breland steven a. rogers guillaume p. brat kenneth l. nelson an introduction to jdbc mr. konchady presents some of the benefits of using java over cgi as well as the basics of managing a departmental database with java. manu konchady migration of processes, files, and virtual devices in the mdx operating system load management in distributed systems is usually focused on balancing process execution and communication load. stress on storage media and i/o-devices is considered only indirectly or disregarded. for i/o-intensive processes this imposes severe restrictions on balancing algorithms: processes have to be placed relative to fixed allocated resources. therefore, beyond process migration, there is a need for a migration of all operating system objects, like files, pipes, timers, virtual terminals, and print jobs. in addition to new options for balancing cpu loads, this also makes it possible to balance the loads associated with these objects like storage capacity or i/o-bandwidth. this paper presents a concept for a general migration of nearly all operating system objects of a unix environment. the migrations of these objects all work in the same unix-compliant and transparent manner. objects can be moved throughout a distributed system independently of each other and at any time, according to a user defined policy. the migration mechanism is implemented as part of the mdx operating system; we present performance measurements. we believe that most of the mechanism can also apply to other message-passing based distributed operating systems. harald schrimpf elaboration order issues in ada 9x mats weber experimental implementation of an ada tasking run-time system on the multiprocessor computer cm anders ardö letters to the editor corporate linux journal staff a review of the quality aspects of the approved software engineering standards f. buckley experiences of software quality management using metrics through the life-cycle hideto ogasawara atsushi yamada michiko kojo a persistent runtime system using persistent data structures zhiqing liu process programming: a structured multi-paradigm approach could be achieved w. deiters v. gruhn w.
schäfer stop the presses corporate linux journal staff on embedding boolean as a subtype of integer markku sakkinen reengineering of old systems to an object-oriented architecture ivar jacobson fredrik lindström effective exploitation of a zero overhead loop buffer gang-ryung uh yuhong wang david whalley sanjay jinturkar chris burns vincent cao asis for gnat: goals, problems and implementation strategy this article describes the approach taken to implement the ada semantic interface specification (asis) for the gnat ada compiler. the paper discusses the main implementation problems and their solutions. it also describes the current state of the implementation. this is a slightly revised version of the paper published in the proceedings of the ada-europe'95 conference. sergey rybin alfred strohmeier eugene zueff elint in lamina: application of a concurrent object language the design and performance of an "expert system" signal interpretation application written in a concurrent object-based programming language, lamina, is described together with a synopsis of the programming model that forms the foundation of the language. the effects of load balancing and the limits imposed by task granularity and message transmission costs are studied and their consequences to application performance are measured over the range of one to 250 processors as simulated in simple/care, an extensively instrumented simulation system and computer array model. b. a. delagi n. p. saraiya including a user interface management system (uims) in the performance relationship model j. e trumbly k. p. arnett alternatives to construct-based programming misconceptions in this paper, we investigate whether or not most novice programming bugs arise because students have misconceptions about the semantics of particular language constructs. three high frequency bugs are examined in detail --- one that clearly arises from a construct-based misconception, one that does not, and one that is less cut and dry. based on our empirical study of 101 bug types from three programming problems, we will argue that most bugs are not due to misconceptions about the semantics of language constructs. j. c. spohrer e. soloway the kernel of modula-2 integrated environment z. guoliang z. chengxiang the amadeus grt: generic runtime support for distributed persistent programming vinny cahill seán baker chris horn gradimir starovic comparison of the programming languages c and pascal alan r. feuer narain h. gehani scalable software libraries don batory vivek singhal marty sirkin jeff thomas the jalapeño dynamic optimizing compiler for java michael g. burke jong-deok choi stephen fink david grove michael hind vivek sarkar mauricio j. serrano v. c. sreedhar harini srinivasan john whaley the cost of selective recompilation and environment processing when a single software module in a large system is modified, a potentially large number of other modules may have to be recompiled. by reducing both the number of compilations and the amount of input processed by each compilation run, the turnaround time after changes can be reduced significantly. potential time savings are measured in a medium-sized, industrial software project over a three-year period. the results indicate that a large number of compilations caused by traditional compilation unit dependencies may be redundant. 
on the available data, a mechanism that compares compiler output saves about 25 percent, smart recompilation saves 50 percent, and smartest recompilation may save up to 80 percent of compilation work. furthermore, all compilation methods other than smartest recompilation process large amounts of unused environment data. in the project analyzed, only a fraction of the environment symbols processed by the compiler are actually used. reading only the actually used symbols would reduce total compiler input by about 50 percent. combining smart recompilation with a reduction in environment processing might double to triple perceived compilation speed and double linker speed, without sacrificing static type safety. rolf adams walter tichy annette weinert letters corporate linux journal staff supporting evolution and maintenance by using a flexible automatic code generator jacqueline floch software metrics for object-oriented systems the application of software complexity metrics within the object-oriented paradigm is examined. several factors affecting the complexity of an object are identified and discussed. halstead's software science metrics and mccabe's cyclomatic complexity metric are extended to an object. a limit for the cyclomatic complexity of an object is suggested. j. chris coppick thomas j. cheatham a practical method for syntactic error diagnosis and recovery our goal is to develop a practical syntactic error recovery method applicable within the general framework of viable prefix parsing. our method represents an attempt to accurately diagnose and report all syntax errors without reporting errors that are not actually present. successful recovery depends upon accurate diagnosis of errors together with sensible "correction" or alteration of the text to put the parse back on track. the issuing of accurate and helpful diagnostics is achieved by indicating the nature of the recovery made for each error encountered. the error recovery is prior to and independent of any semantic analysis of the program. however, the method does not exclude the invocation of semantic actions while parsing or preclude the use of semantic information for error recovery. the method assumes a framework in which an lr or ll parser, driven by the tables produced by a parser generator, maintains an input symbol buffer, state or prediction stack, and parse stack. the input symbol buffer contains part or all of the sequence of remaining input tokens, including the current token. the lr state stack is analogous to the ll prediction stack; except when restricting our attention to the ll case, prediction stack shall serve as a generic term indicating the lr state or ll prediction stack. the parse stack contains the symbols of the right hand sides that have not yet been reduced. michael burke gerald a. fisher a runtime supervisor to support ada tasking: rendezvous and delays g. a. riccardi t. p. baker draft report on requirements for a common prototyping system r. p. gabriel comment on "on the application of a popular notation to semantics" and reply from the author oleg kiselyov dick botting architecture of the advanced development environment (abstract only) reusing large software components has the potential to increase development productivity and improve software quality. the advanced development environment consists of five components: the resource of reusable software, the release management system, the automated system design, the catalog, and the workstation.
the catalog maintains descriptive information about reusable components and the facilities in the workstation. a reusable component can be a subsystem, a feature, a transaction or a program. subsystems consist of features; features consist of transactions and transactions consist of programs. the workstation provides graphics and inquiries capabilities for the environment. the reusable component selector, the screen formatter and the database modeler are the major utilities in the workstation. james ambroise yat kat chan ada and beyond a. nico habermann the change request process m. lacroix p. lavency learning the interaction between pointers and scope in c++ traditionally, pointers, and their interaction with scope in c++ have been a source of frustration and confusion for students in our computer science ii course. since problem-solving is known to improve learning [6], we set out to develop software that would help our students better understand these concepts by repeatedly solving problems based on them. in this paper, we will first describe the design and features of this software. we conducted tests in two sections of our computer science ii course this fall to evaluate the effectiveness of using this software. the results have been very encouraging: the class average in both the sections increased by 100% from the pretest to the post-test. we will also present the design and results of these tests. amruth n. kumar balanced job bound analysis of queueing networks applications of queueing network models to computer system performance prediction typically involve the computation of their equilibrium solution. when numerous alternative systems are to be examined and the numbers of devices and customers are large, however, the expense of computing the exact solutions may not be warranted by the accuracy required. in such situations, it is desirable to be able to obtain bounds on the system solution with very little computation. asymptotic bound analysis (aba) is one technique for obtaining such bounds. in this paper, we introduce another bounding technique, called balanced job bounds (bjb), which is based on the analysis of systems in which all devices are equally utilized. these bounds are tighter than aba bounds in many cases, but they are based on more restrictive assumptions (namely, those that lead to separable queueing network models). j. zahorjan k. c. sevcik d. l. eager b. i. galler examples of applying software estimate tool fumiko fujiwara takushi goto sadao araki when bad programs happen to good people henry g. baker elements of style: analyzing a software design feature with a counterexample detector we illustrate the application of nitpick, a specification checker, to the design of a style mechanism for a word processor. the design is cast, along with some expected properties, in a subset of z. nitpick checks a property by enumerating all possible cases within some finite bounds, displaying as a counterexample the first case for which the property fails to hold. unlike animation or execution tools, nitpick does not require state transitions to be expressed constructively, and unlike theorem provers, operates completely automatically without user intervention. using a variety of reduction mechanisms, it can cover an enormous number of cases in a reasonable time, so that subtle flaws can be rapidly detected. daniel jackson craig a. damon opening up ada-tasking t. baker sisal: a safe and efficient language for numerical calculations the benefits of sisal and a call for action. 
d. j. raymond extending ada to assist multiprocessor embedded development the ada language supports high level techniques for inter-processor communication and also supports low level implementation of data address manipulation. the language however requires the software developers to manage the low-level manipulations. this paper proposes a technique to incorporate the high level concepts with low level implementation for multiprocessor communications. tony lowe design criteria for a pc-based common user interface to remote information systems (abstract only) the problems associated with retrieval by casual users of information stored in remote is&r systems and the possible utilization of personal computers to solve these problems are discussed. a standardized system which will allow the user to access information stored in many distinct systems through a single common interface is described. the intent of this system is to spare the user the necessity of learning multiple command languages in order to access multiple systems and also retain the full retrieval capabilities of each system. several levels of interaction are provided to facilitate new user learning phase activity and allow the intermediate and advanced users to interact with the system with the minimum necessary prompting. the system is designed to maximize utilization of local processing and display capabilities and to provide built-in evaluation tools. philip hall from pascal to delphi to object pascal-2000 the evolution of pascal to delphi and the powerful and elegant features of that language and rad environment are discussed. yet the evolution was not smooth; some inconsistencies and glitches built up. so the next version of the language, named object pascal 2000, is suggested. it eliminates the inconsistencies, improves the language and sets the fundamentals for standardizing it. alexander gofen the java factor sandeep singhal binh nguyen design and implementation of a multi-tool ada front end rodney m. bates viswa santhanam donald e. johnson a static performance estimator to guide data partitioning decisions vasanth balasundaram geoffrey fox ken kennedy ulrich kremer modular interprocedural pointer analysis using access paths: design, implementation, and evaluation in this paper we present a modular interprocedural pointer analysis algorithm based on access-paths for c programs. we argue that access paths can reduce the overhead of representing context-sensitive transfer functions and effectively distinguish non-recursive heap objects. and when the modular analysis paradigm is used together with other techniques to handle type casts and function pointers, we are able to handle significant programs like those in the speccint92 and speccint95 suites. we have implemented the algorithm and tested it on a pentium ii 450 pc running linux. the observed resource consumption and performance improvement are very encouraging. ben-chung cheng wen-mei w. hwu systematic software reuse (panel session): objects and frameworks are not enough martin griss ted biggerstaff sallie henry ivar jacobson doug lea will tracz building abstract iterators using continuations christopher coyle peter grogono lamport's algorithm reconsidered a modification to lamport's algorithm that eliminates busy waiting is presented. a processor that is waiting to enter its critical section is freed to do other useful work. the algorithm is modified further to recognize and honor priorities. h. neil singleton ronnie g.
ward setting up a sparcstation john little files in pygmy forth frank sergeant high performance clusters: state of the art and challenges ahead david e. culler foundations and extensions of object-oriented programming (abstract only) j. a. goguen j. meseguer using a document parser to automate software testing patricia lutsky linux means business: highway pos system marc l allen bpl: a sketch of a language derived from apl john mark smeenk lightweight monitor for java vm byung-sun yang junpyo lee jinpyo park soo-mook moon kemal ebcioglu erik altman the ergo support system: an integrated set of tools for prototyping integrated environments the ergo support system (ess) is an engineering framework for experimentation and prototyping to support the application of formal methods to program development, ranging from program analysis and derivation to proof-theoretic approaches. the ess is a growing suite of tools that are linked together by means of a set of abstract interfaces. the principal engineering challenge is the design of abstract interfaces that are semantically rich and yet flexible enough to permit experimentation with a wide variety of formally-based program and proof development paradigms and associated languages. as part of the design of ess, several abstract interface designs have been developed that provide for more effective component integration while preserving flexibility and the potential for scaling. a benefit of the open architecture approach of ess is the ability to mix formal and informal approaches in the same environment architecture. the ess has already been applied in a number of formal methods experiments. peter lee frank pfenning gene rollins william scherlis that's vimprovement! a better vi oualline details the enhancement of the vim vi clone. steve oualline hare: an optimizing portable compiler for scheme dan teodosiu apl function variants and system labels while apl extensions are adding power to the language by generalizing and adding operators, the necessity of defining additional attributes of functions is becoming increasingly visible and no longer ignorable. this paper describes the function variant problem, and proposes a solution by introducing the concept of system labels. three system labels are described: ⎕id for the identity function, ⎕inv for the inverse function, ⎕fl for the fill function. david a. rabenhorst linux administration a beginner's guide harvey friedman transparent files in apl (a preliminary proposal) file-related operations in apl could be classified according to whether manipulation of the file is restricted solely to apl or whether the file is to be accessible via another language. in the latter case communication is often dependent on the other language and should not affect apl language design, except for the provision of a general facility. guidelines for using shared variables and defined functions to this end are presented. files strictly intra-apl can and should be independent of such external communication, and a detailed proposal is presented whereby such internal files can be viewed and manipulated as if they were ordinary variables. the same conventions could also be extended to cover named objects in the workspace, thus giving enhanced access control within the workspace. concerns addressed include syntax and nomenclature for reference to controlled or extra-workspace named objects, space constraints, error classification and reporting, upward compatibility, and to some extent, implementation concerns.
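the transparent-files idea above, a file manipulated as if it were an ordinary workspace variable, has a rough analogue outside apl; as a hedged illustration, python's standard shelve module exposes a disk file as a dictionary-like object (the file name below is arbitrary).

```python
import shelve

# a persistent mapping that is read and written like an ordinary variable;
# the backing file name is only for illustration.
with shelve.open("workspace_vars") as ws:
    ws["rates"] = [1.25, 1.30, 1.28]        # assignment persists to the file
    ws["label"] = "quarterly rates"

with shelve.open("workspace_vars") as ws:   # a later session sees the same data
    print(ws["label"], ws["rates"])
```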
jim lucas regression test selection for java software regression testing is applied to modified software to provide confidence that the changed parts behave as intended and that the unchanged parts have not been adversely affected by the modifications. to reduce the cost of regression testing, test cases are selected from the test suite that was used to test the original version of the software---this process is called regression test selection. a _safe_ regression-test-selection algorithm selects every test case in the test suite that may reveal a fault in the modified software. this paper presents a safe regression-test-selection technique that, based on the use of a suitable representation, handles the features of the java language. unlike other safe regression test selection techniques, the presented technique also handles incomplete programs. the technique can thus be safely applied in the (very common) case of java software that uses external libraries of components; the analysis of the external code is not required for the technique to select test cases for such software. the paper also describes retest, a regression-test-selection algorithm that can be effective in reducing the size of the test suite. mary jean harrold james a. jones tongyu li donglin liang ashish gujarathi generalization of leaner object-oriented slicing the leaner object-oriented slicing technique in [2], albeit effective in providing further code reduction from an object-oriented slice, fails to handle an object-oriented slice with multiple inheritance nets. that is, an object-oriented slice in a tree form with multiple branches is beyond the ability of leaner object-oriented slicing. this paper proposes a new approach which extends the applicability of code reduction by leaner object-oriented slicing. a generalized leaner object-oriented slicing method can now be achieved which provides further code reduction for general object-oriented slices. rob law a framework for encapsulating card-oriented, interactive, legacy applications over the last several years lockheed martin has invested ir&d funds to explore lowering the cost of ownership of command and control systems for dod software intensive systems. two key technologies that address this issue are developing software components that can be reused across a family of systems within the command and control domain and methods for reusing legacy system applications. this paper describes how these two approaches have been merged using object oriented technology to provide a reusable framework for encapsulating legacy applications. the object oriented technology (oot) that we consider key to lowering cost of ownership and to strategies for reuse of legacy code is described as design patterns [gamma] and frameworks [coplien]. design patterns exist at a level of abstraction that provides a high return on reuse investment because they model stable relationships that are not likely to change as a system evolves and which can be reused across systems [coplien]. it is not the intent of this paper to describe these terms. rather it is our purpose to illustrate the advantages of this level of oot. al butkus barbara l.
mitchell on the use of trace sampling for architectural studies of desktop applications patrick crowley jean-loup baer framelets - small and loosely coupled frameworks wolfgang pree kai koskimies from the publisher: first conference on freely redistributable software corporate linux journal staff orient84/k (abstract only): an object oriented concurrent programming language for knowledge systems mario tokoro yutaka ishikawa the syntax of apl, an old approach revisited many efforts have been made at past apl conferences to propose new approaches to the analysis of apl and to introduce new features in the language. however, most of these proposals take very little account of the work that has been done on syntax analysis in the 'main stream' of computer science. the proposed approach, as described in this paper, can therefore be considered an old approach. we first describe the fundamental syntactic devices of apl. those devices are then arranged as a standard bnf. we can exhibit some properties of the language described, and consequently select from the literature an appropriate technique to perform the syntax analysis of apl expressions. the proposed method uses a push-down automaton. although the language is shown to be not lr(k), an adaptation of this technique works well using backtracking. the whole process has been modelled in apl. a first application transforms a grammar expressed as a bnf into a set of tables. a second application uses these tables to perform the syntax analysis of apl statements. the latter programs have been rewritten in c, integrated in an apl system, and used to test different dialects of apl, such as iso-apl or apl2. results show the influence of the cost of syntax analysis on the processing of apl expressions. jean jacques girardot florence rollin analysis of techniques in the assessment of ultra-reliable computerized systems angelo morzenti fabio a. schreiber borrow, copy or steal?: loans and larceny in the orthodox canonical form anthony j. h. simons ada compiler selection for embedded targets a. tetewsky r. racine introduction to commentaries on minimalism beyond the nurnberg funnel robert r. johnson support for garbage collection at every instruction in a java compiler james m. stichnoth guei-yuan lueh michal cierniak maximum processing rates of memory bound systems r. m. bryant requirements analysis for ada compilers a guide to the selection and specification of ada compilers was recently produced by the portability working group of ada-europe. most of the points addressed are applicable as criteria for the evaluation of any language compiler. peter j. l. wallis brian a. wichmann a testbed for investigating real-time ada issues m. borger m. klein n. weiderman optimistic incremental specialization: streamlining a commercial operating system c. pu t. autrey a. black c. consel c. cowan j. inouye l. kethana j. walpole k. zhang programs worth one thousand words: visual languages bring programming to the masses lorrie cranor ajay apte a minimalist approach to framework documentation good documentation is crucial for the success of frameworks. in this research, a new documenting approach is proposed combining existing document styles in a kind of "minimalist" framework manual with a special emphasis on framework understandability and usability, rather than on describing framework design. benefits and drawbacks are evaluated on frameworks of different domains and complexity.
ademar aguiar busybox: a swiss army knife for linux learn how to save disk space with this open source tool for embedded systems. nicholas wells the role of process models in software and systems development and evolution in seeking to maximise opportunity for achieving common understanding and real progress on specific issues, the announcement for the 5th process workshop adopts a somewhat narrow view of the role of process models. by so focussing the discussion, the fact that desirable characteristics of models will vary with the purpose for which they are to be used, the benefit one hopes to derive from their development or use and the value attached to the benefit to be gained from such activity has had to be disregarded. yet process models can serve many purposes as summarised, for example, in the introduction to the proceedings of the 4th workshop [spw88] and also on the attached table. though the roles included in that table are neither newly identified nor controversial they are listed so that they may be kept in the back of our minds as the workshop discussion progresses. the individual importance of these roles can clearly not be ordered or even quantified. their relative significance will depend on the goals of the work during which they are developed or used. the primary motivation underlying my work with process models over the past years has been the search for a better understanding of the software development and evolution process. this has led to conclusions which are, perhaps, self evident to many computer scientists, [dem79], [fet88], [var79, 89] (as extreme examples) and others [bon77]. they are, however, not widely understood by the general public and, more importantly, by those involved in the definition and acquisition of computer systems for specific applications. in view of the increasing dependence of mankind on computers and, hence, on software it appears important to bring these, explicitly, into the open and to also examine their implications in the narrower confines of the topics discussed in the process workshop series. in the first instance my studies concentrated on an examination of models of program evolution in recognition of the fact that understanding and controlling that phenomenon demanded understanding and control of the programming process [leh69, 85a]. this investigation led directly to the spe classification scheme in which e-type programs, in particular, are defined as programs that implement applications or solve problems from and in the real world. in developing this schema process models played a fundamental role [leh80b]. these models had no direct or immediate relationship to development practice but, nevertheless, led to the insight that is reflected in the lst process model [leh84] that later formed the conceptual underpinning of a working environment [leh85b]. the work led to the view that an e-type program is a model of a model of a … model of an application in the real world [leh80b]. this abstract total-process model was enriched by turski's view [tur81] which regarded successive model pairs as a theory and a model of that theory or, equally, as a specification and an implementation of that specification. at the first process workshop [spw84] turski's interpretation of the "three-legged" model was, in fact, to see both the real world application (concept) and the final implementation as models of a specification that forms the bridge between concept and realisation.
equally, the source and target representations at the core of each step of the lst process, for example, may be viewed as a theory and a model of that theory. from turski's view it follows that each implementation is godel incomplete. its properties, including functional properties, cannot be completely known from within the system. by their involvement in the development process and through system usage humans become an integral part of the system. it is their insights, viewpoints, theories, algorithms, definitions, formalisations, reactions, interpretations and so on that drive and direct the abstraction, refinement and evolution processes. they determine the degree of satisfaction that the final solution system yields. hence it may be observed that for any software system implementing a solution to a real world problem, modelling some aspect of the real world, there exists a degree of godel-type uncertainty about its properties [leh89]. the definition, specification and development process must seek to limit this uncertainty so that it is of no practical significance; reflecting only abstract representational incompleteness. this is not the only type of uncertainty related to program behaviour under execution relative to the operational environment. a primary property of e-type programs is the fact that installation and operation of a system change the application and its operational domain. implicitly if not explicitly, the system includes a model of itself. also the acts of conceiving, developing, installing, using and adapting a software system change understanding of the application and its needs, methods of solution, the relative value of specific features of system functionality, opportunities for enhancement and human perception of all these. this leads to declining satisfaction with the system unless the system is adapted to the new reality. because of the many feedback paths the system displays heisenberg-type uncertainty [leh77]. the more precise the knowledge of the application, of system properties and of their respective domains the less satisfaction does the solution system deliver in terms of what are, at the time of usage, the system properties perceived to be desirable. mismatch between system properties and human needs and desires cannot be removed, except momentarily. development, adaptation and evolution processes and their management are key to minimisation of the consequences of this inherent uncertainty. there is also a third type of uncertainty. the domain of an e-type application is, in general, unbounded and, effectively, continuous. the solution system is finite and discrete. the process of deriving one from the other involves a variety of abstractions on the basis of assumptions, about the application, its domain, perceived needs and opportunities, human responses to real world and solution system events, computational algorithms, theories about all these and so on. some assumptions will be explicitly stated, others will be implicit. all will be built into the final system. but the real world is essentially dynamic, forever changing. even without feedback effects as discussed above, exogenous changes in the real world will change the facts on which the assumption set is based. however careful and complete the control on the validity of assumptions is at the time they are built into the system some, at least, will, at best, be less than fully valid when the program is executed, or better, when the results of execution are applied. 
that is when, to be fully satisfactory, a program needs to be correct. initial correctness at the time of implementation is merely a means to an end. the assumption set must be maintained correct without significant delay by appropriate changes to program or documentation texts, an impossible task even if assumptions could be precisely pinpointed. pragmatic uncertainty in system behaviour is inevitable. this analysis leads to the following uncertainty principle for computer application: the outcome of software system operation in the real world is inherently uncertain with the precise area of uncertainty also not knowable. this position paper is not the place to explore this assertion in detail. some aspects have been discussed elsewhere [leh77, 89]. more immediately the principle throws a new light on the expectations to be associated with software engineering, the system and software development process and the models constructed to represent that process. one may have many views of the process. the new one that emerges from the above discussion is of software engineering in general and the software development and evolution (maintenance) process in particular as the means to minimise uncertainty and its consequences and to maintain user satisfaction. project models must reflect this responsibility. m. m. lehman progress on an ansi standard for apl this paper discusses what a programming language standard is and how one is developed. it also reports the progress to date on the planned ansi standard for apl. through the process of standardization, many existing differences between apl systems will be resolved, resulting in a benefit to the apl community that will fully justify the effort. clark wiedmann new ideas for generic components in ada the creation of reusable software components is an important part of modern software practice. generic templates are one technique for designing these components. a generic template is a module containing algorithms which can operate on some class of data types where the specific data type is not known until later in the development process. many languages, including ada, support this technique. in ada, generic templates must be type-safe at compile time. we examine some features of ada which allow us to define the type as an entire package module and to instantiate with that module. the main theme of this paper will be generic formal package parameters. richard riehle selecting a linux cd phil hughes procol: a protocol-constrained concurrent object-oriented language procol is a simple concurrent object-oriented language supporting a distributed, incremental and dynamic object environment. its communication is based on unidirectional messages. objects are only bound during actual message transfer and not during the processing of the message. this short duration object binding promotes parallelism. the communication leading to access has to obey an explicit protocol in each object. it takes the form of a specification of the occurrence and sequencing of the interaction between the object and its communication partners. the use of such protocols fosters structured, safer and potentially verifiable communication between objects. j. van den bos what orientation should ada objects take? j. p. rosen magic tooltool interface erik fretheim j. w.
degroat the coming-of-age of software architecture research mary shaw a brief survey of systems providing process or object migration facilities mark nuttall a pragmatic approach to c++, eiffel and ada 9x programming gabriela o. de vivo marco de vivo book review: linux application development andrew johnson real-time threads karsten schwan hongyi zhou ahmed gheith generalized hierarchy chart generator peter j. thomas working with xml xml (extensible markup language) is fast becoming a standard method for data organization, structure, and exchange on the internet. the extensibility of xml allows web authors to create their own markup for any type of information. this extensibility is mobilizing science and industry to customize xml for different fields such as mathematics (mathml) and business (xbrl). in this tutorial we will create an on-line multiple choice test in xml while discussing the xml syntax and its validation techniques. we will also present different methods for formatting, displaying and manipulating xml documents inside a browser. the tutorial will end with a discussion of the current status of xml. ali m. farahani the fortran (not the foresight) saga: the light and the dark brian meek recovery of a reference architecture: a case study wolfgang eixelsberger convivial error recovery this paper presents a new approach to an event control mechanism (ecm) for apl. after an informal review of some typical ecm implementations, it describes the general architecture of an ecm and shows how the mentioned implementations map onto it. an extended ecm is then presented that combines many features of existing implementations with some new capabilities. it gives the user access to the immediate context of the subexpression where an error occurred and introduces new control functions to allow execution to move to a determined calling environment. the proposed additions aim at a "convivial" utilization of the ecm suitable both to automatic control and to interactive debugging of apl applications. denis samson yves ouellet modeling the software process: software process modeling experience (panel session): panel session position paper marc i. kellner linux kernel internals corporate linux journal staff production logic synthesis john darringer daniel brand william h. joyner louise trevillyan john v. gerbi dragoon and ada: the wedding of the nineties s. genolini a. di maio m. de michele classes versus prototypes in object-oriented languages a. h. borning reference escape analysis: optimizing reference counting based on the lifetime of references young gil park benjamin goldberg experience with commonloops commonloops is an object-oriented language embedded in common lisp. it is one of two such languages selected as starting points for the common lisp object system (clos) which is currently being designed as a standard object-oriented extension to common lisp. this paper reports on experiences using the existing portable commonloops (pcl) implementation of commonloops. the paper is divided into two parts: a report on the development of a window system application using the commonloops programming language, and a description of the implementation of another object-oriented language (commonobjects) on top of the commonloops metaclass kernel, paralleling the two aspects of commonloops: the programming language and the metaclass kernel.
usage of the novel features in commonloops is measured quantitatively, and performance figures comparing commonloops, commonobjects on commonloops, and the native lisp implementation of commonobjects are presented. the paper concludes with a discussion about the importance of quantitative assessment for programming language development. james kempf warren harris roy d'souza alan snyder protected types with entry barriers depending on parameters of the entries: some practical examples entry barriers of a protected type have one subtle restriction: they are not allowed to depend on parameters of the entry. this paper reviews how to implement protected types with entry barriers depending on parameters of the entries. pascal ledru optimization of validation test suite coverage timothy e. lindquist kurt m. gutzmann david l. remkes gary mckee surpassing the tlb performance of superpages with less operating system support many commercial microprocessor architectures have added translation lookaside buffer (tlb) support for superpages. superpages differ from segments because their size must be a power of two multiple of the base page size and they must be aligned in both virtual and physical address spaces. very large superpages (e.g., 1mb) are clearly useful for mapping special structures, such as kernel data or frame buffers. this paper considers the architectural and operating system support required to exploit medium-sized superpages (e.g., 64kb, i.e., sixteen times a 4kb base page size). first, we show that superpages improve tlb performance only after invasive operating system modifications that introduce considerable overhead. we then propose two subblock tlb designs as alternate ways to improve tlb performance. analogous to a subblock cache, a complete-subblock tlb associates a tag with a superpage-sized region but has valid bits, physical page number, attributes, etc., for each possible base page mapping. a partial-subblock tlb entry is much smaller than a complete-subblock tlb entry, because it shares physical page number and attribute fields across base page mappings. a drawback of a partial-subblock tlb is that base page mappings can share a tlb entry only if they map to consecutive physical pages and have the same attributes. we propose a physical memory allocation algorithm, page reservation, that makes this sharing more likely. when page reservation is used, experimental results show partial-subblock tlbs perform better than superpage tlbs, while requiring simpler operating system changes. if operating system changes are inappropriate, however, complete-subblock tlbs perform best. madhusudhan talluri mark d. hill quantifying software quality in software quality assurance the quality of the program must be defined in practical and measurable terms. this is accomplished by first standardizing the error reporting technique, identifying the types and frequency of occurrence of software errors experienced, and finally, predicting the error arrival rate of the software programs. this paper details a practical approach to quantifying software quality by investigating empirical software error data. software failure trend analysis and residual error predictions are the by-products of this technique. successfully applied to software engineering projects, the author reports on the findings and details an implementation plan. kenneth s. hendis fairness in processor scheduling in time sharing systems s. haldar d. k. 
subramanian letters to the editor corporate linux journal staff fortran 90/95/hpf information file (part 2, utilities) michael metcalf type reconstruction for coercion polymorphism ryan stansifer dan wetklow openj: an extensible system level design language jianwen zhu daniel d. gajski concerning the fortran 8x draft loren p. meissner module-sensitive program specialisation dirk dussart rogardt heldal john hughes prioritizing test cases for regression testing test case prioritization techniques schedule test cases in an order that increases their effectiveness in meeting some performance goal. one performance goal, rate of fault detection, is a measure of how quickly faults are detected within the testing process; an improved rate of fault detection can provide faster feedback on the system under test, and let software engineers begin locating and correcting faults earlier than might otherwise be possible. in previous work, we reported the results of studies that showed that prioritization techniques can significantly improve rate of fault detection. those studies, however, raised several additional questions: (1) can prioritization techniques be effective when aimed at specific modified versions; (2) what tradeoffs exist between fine granularity and coarse granularity prioritization techniques; (3) can the incorporation of measures of fault proneness into prioritization techniques improve their effectiveness? this paper reports the results of new experiments addressing these questions. sebastian elbaum alexey g. malishevsky gregg rothermel mechanisms for abstraction in ada david a. smith programming pearls jon bentley don knuth javascripting into the next millennium aaron weiss prolog compared with lisp this is a report on a very concrete experiment to compare the relative speeds of dec-10 prolog (edinburgh implementation) and dec-10 lisp (texas implementation of uci-lisp, or tlisp). the comparison was made on the occasion of programming a theorem prover for the propositional calculus, natural-deduction type. the prover is complicated enough so as to try sufficiently sophisticated weaponry of each language, yet simple enough for the human eye to follow both programs in parallel and make sure that they are accomplishing very much the same thing. the experiment was not undertaken with the direct intention of determining the relative performances of the two languages in general, but with the more specific goal of providing a basis for choosing a language for a large research project. nevertheless, the results are interesting enough so as to make us consider them worth divulging, especially because they seem to contradict the notion, repeatedly argued for by prolog advocates, that performances of the two languages are similar in relation to speed. claudio gutierrez library for architecture-independent development of structured grid applications max lemke daniel quinlan a dynamic optimization framework for a java just-in-time compiler the high performance implementation of java virtual machines (jvm) and just-in-time (jit) compilers is directed toward adaptive compilation optimizations on the basis of online runtime profile information. this paper describes the design and implementation of a dynamic optimization framework in a production-level java jit compiler.
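as a toy illustration only, and not the framework the abstract describes, the sketch below shows the general shape of counter-based tier promotion in a tiered execution system; the class name, tier names, and thresholds are all invented. the abstract resumes after the sketch.

    import java.util.HashMap;
    import java.util.Map;

    // toy model of counter-based tier promotion (illustrative only;
    // names and thresholds are invented, not the system in the abstract)
    public class TierPromotionSketch {
        enum Tier { INTERPRETED, QUICK_OPT, FULL_OPT }

        private static final int QUICK_THRESHOLD = 1_000;
        private static final int FULL_THRESHOLD = 10_000;

        private final Map<String, Integer> invocationCounts = new HashMap<>();
        private final Map<String, Tier> tiers = new HashMap<>();

        // called on every (simulated) method invocation; returns the tier
        // the method should execute at after this call
        Tier recordInvocation(String methodName) {
            int count = invocationCounts.merge(methodName, 1, Integer::sum);
            Tier current = tiers.getOrDefault(methodName, Tier.INTERPRETED);
            if (current == Tier.INTERPRETED && count >= QUICK_THRESHOLD) {
                current = Tier.QUICK_OPT;   // "recompile" at the quick level
            } else if (current == Tier.QUICK_OPT && count >= FULL_THRESHOLD) {
                current = Tier.FULL_OPT;    // "recompile" at the full level
            }
            tiers.put(methodName, current);
            return current;
        }

        public static void main(String[] args) {
            TierPromotionSketch sketch = new TierPromotionSketch();
            Tier t = Tier.INTERPRETED;
            for (int i = 0; i < 20_000; i++) {
                t = sketch.recordInvocation("hotLoop");
            }
            System.out.println("hotLoop ends up at tier " + t); // FULL_OPT
        }
    }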
our approach is to employ a mixed mode interpreter and a three level optimizing compiler, supporting quick, full, and special optimization, each of which has a different set of tradeoffs between compilation overhead and execution speed. a lightweight sampling profiler operates continuously during the entire program's execution. when necessary, detailed information on runtime behavior is collected by dynamically generating instrumentation code which can be installed to and uninstalled from the specified recompilation target code. value profiling with this instrumentation mechanism allows fully automatic code specialization to be performed on the basis of specific parameter values or global data at the highest optimization level. the experimental results show that our approach offers high performance and a low code expansion ratio in both program startup and steady state measurements in comparison to the compile-only approach, and that the code specialization can also contribute modest performance improvement. toshio suganuma toshiaki yasue motohiro kawahito hideaki komatsu toshio nakatani a model of noisy software engineering data (status report) roseanne tesoriero marvin zelkowitz a comment on english neologisms and programming language keywords the choice of keywords in the design of programming languages is compared to the formation of neologisms, or new words, in natural languages. examination of keywords in high-level programming languages shows that they are formed using mechanisms analogous to those observed in english. the use of mirror words as closing keywords is a conspicuous exception. caroline m. eastman ada 9x and oop s. tucker taft the experimental validation and packaging of software technologies forrest shull victor r. basili marvin zelkowitz the search for software quality, or one more trip down the yellow brick road fletcher j. buckley if versus rule beth tibbits single-user capabilities in interprocess communication stephen russell a high-level programming and command language unifying programming and command languages is a promising idea that has yet to be thoroughly exploited. most attempts at such unification have used lisp or traditional languages, such as pascal. this paper describes the command and programming language ez, which attempts to unify command and programming languages by using high-level string-processing concepts, such as those in snobol4 and icon. ez has particularly simple data abstractions that attempt to bridge the gap between the data abstractions of command languages and those of programming languages. this is accomplished by type fusion, which pushes the differences between some classes of types, such as strings and text files, out of the language and into the implementation. the language, its use, and its implementation are described. christopher w. fraser david r. hanson empirical analysis of the correlation between amount-of-reuse metrics in the c programming language william curry giancarlo succi michael smith eric liu raymond wong field analysis: getting useful and low-cost interprocedural information we present a new limited form of interprocedural analysis called field analysis that can be used by a compiler to reduce the costs of modern language features such as object-oriented programming, automatic memory management, and run-time checks required for type safety. unlike many previous interprocedural analyses, our analysis is cheap, and does not require access to the entire program.
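a minimal sketch, with an invented class, of the kind of per-field fact such an analysis might establish: a private field assigned exactly once in the constructor can be proved non-null and of a known exact type, which is the kind of information that enables check removal and call resolution. the abstract resumes after the sketch.

    // illustrative only: a class whose field properties a compiler could infer
    // from the declared access restrictions and the code of this one class.
    public final class Counter {
        // private and assigned only in the constructor, always to a non-null
        // int[] of length 1: an analysis restricted to this class can prove
        // the field is never null and its dynamic type is exactly int[].
        private final int[] slot;

        public Counter() {
            this.slot = new int[1];
        }

        public void increment() {
            // with the facts above, a compiler could omit the implicit null
            // check on 'slot' here (and similar checks in other methods).
            slot[0]++;
        }

        public int value() {
            return slot[0];
        }

        public static void main(String[] args) {
            Counter c = new Counter();
            c.increment();
            c.increment();
            System.out.println(c.value()); // prints 2
        }
    }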
field analysis exploits the declared access restrictions placed on fields in a modular language (e.g. field access modifiers in java) in order to determine useful properties of fields of an object. we describe our implementation of field analysis in the swift optimizing compiler for java, as well as a set of optimizations that exploit the results of field analysis. these optimizations include removal of run-time tests, compile-time resolution of method calls, object inlining, removal of unnecessary synchronization, and stack allocation. our results demonstrate that field analysis is efficient and effective. speedups average 7% on a wide range of applications, with some run times reduced by up to 27%. compile time overhead of field analysis is about 10%. sanjay ghemawat keith h. randall daniel j. scales towards an architecture handbook bruce anderson patterns: building blocks for object-oriented architectures bruce anderson peter coad mark mayfield letters to the editor corporate linux journal staff microcomputers and multi-tasking machine control microcomputers offer the world of machine control an unequaled tool in providing flexible control of large and complicated machines. traditionally a microcomputer is allotted for each machine to be controlled thereby massing several microcomputers. however, factors such as economic considerations or logistics may force a designer to stretch a microcomputer beyond its traditional capacities. a method of providing concurrent control of multiple machines using a single uni-tasking microcomputer system is investigated. the investigation is founded upon a working system, designed and built for the usda-ars soils laboratory in morris, minnesota. the analysis, therefore, has a certain practical flair to it that can be found useful for other projects where multiple tasks need to be handled simultaneously. anthony a. kempka what you see is what you test: a methodology for testing form-based visual programs gregg rothermel lixin li christopher dupuis margaret burnett compliant mappings of ada programs to the dod-std-2167 static structure j k grau k a gilroy fortran programming tools under linux are you a fortran user migrating to linux from a non-unix environment? steve shows you how to take the linux plunge without sacrificing your "native" programming capability. steven hughes using java reflection to automate extension language parsing an extension language is an interpreted programming language designed to be embedded in a domain-specific framework. the addition of domain-specific primitive operations to an embedded extension language transforms that vanilla extension language into a domain-specific language. the luxworks processor simulator and debugger from lucent uses tcl as its extension language. after an overview of extension language embedding and luxworks experience, this paper looks at using java reflection and related mechanisms to solve three limitations in extension language - domain framework interaction. the three limitations are gradual accumulation of ad hoc interface code connecting an extension language to a domain framework, over-coupling of a domain framework to a specific extension language, and inefficient command interpretation. java reflection consists of a set of programming interfaces through which a software module in a java system can discover the structure of classes, methods and their associations in the system.
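a minimal sketch of that inspection mechanism using only the standard java.lang.reflect api; the 'cmd' naming convention and the classes below are invented for illustration and are not the luxworks interface. the abstract resumes after the sketch.

    import java.lang.reflect.Method;

    // minimal sketch: discover and invoke "command" methods by naming convention.
    // the 'cmd' prefix and the Commands class are invented for illustration.
    public class ReflectionDispatchSketch {

        public static class Commands {
            public String cmdEcho(String arg) { return arg; }
            public String cmdUpper(String arg) { return arg.toUpperCase(); }
        }

        // find a method named cmd<Name>(String) on the target and invoke it
        static String dispatch(Object target, String name, String arg) throws Exception {
            for (Method m : target.getClass().getMethods()) {
                if (m.getName().equalsIgnoreCase("cmd" + name)
                        && m.getParameterCount() == 1
                        && m.getParameterTypes()[0] == String.class) {
                    return (String) m.invoke(target, arg);
                }
            }
            throw new IllegalArgumentException("unknown command: " + name);
        }

        public static void main(String[] args) throws Exception {
            Commands c = new Commands();
            System.out.println(dispatch(c, "echo", "hello"));  // hello
            System.out.println(dispatch(c, "upper", "hello")); // HELLO
        }
    }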
java reflection and a naming convention for primitive domain operations eliminate ad hoc interface code by supporting recursive inspection of a domain command interface and translation of extension language objects into domain objects. java reflection, name-based dynamic class loading, and a language-neutral extension language abstraction eliminate language over-coupling by transforming the specific extension language into a runtime parameter. java reflection and command objects eliminate inefficiency by bypassing the extension language interpreter for stereotyped commands. overall, java reflection helps to eliminate these limitations by supporting reorganization and elimination of handwritten code, and by streamlining interpretation. dale parson specification languages and their implementations this paper describes some of the historical software engineering background leading to specification languages. these specification languages hold great promise for the future. one of the main problems is the derivation of implementations. we propose a methodology to translate lotos specifications into implementations. the principles behind the framework and its concepts are introduced using a simple example. finally, we explain how implementation techniques proposed by other authors can be integrated into our framework. ludo cuypers connecting viewpoints by shared phenomena michael jackson summary of changes to fortran draft jeanne c. adams jerrold l. wagener copy elimination in functional languages copy elimination is an important optimization for compiling functional languages. copies arise because these languages lack the concepts of state and variable; hence updating an object involves a copy in a naive implementation. copies are also possible if proper targeting has not been carried out inside functions and across function calls. targeting is the proper selection of a storage area for evaluating an expression. by abstracting a collection of functions by a target operator, we compute targets of function bodies that can then be used to define an optimized interpreter to eliminate copies due to updates and copies across function calls. the language we consider is typed lambda calculus with higher-order functions and special constructs for array operations. our approach can eliminate copies in divide and conquer problems like quicksort and bitonic sort that previous approaches could not handle. we also present some results of implementing a compiler for a single assignment language called sal on some small but tough programs. our results indicate that it is possible to approach performance comparable to imperative languages like pascal. k. gopinath j. l. hennessy low-cost pathways towards formal methods use martin s. feather a case study in test environment evolution k. f. donnelly k. a. gluck an experimental evaluation of simple methods for seeding program errors this paper describes an experiment in which simple syntactic alterations were introduced into program text in order to evaluate the testing strategy known as error seeding. the experiment's goal was to determine if randomly placed syntactic manipulations can produce failure characteristics similar to those of indigenous errors found within unseeded programs. as a result of a separate experiment, several programs were available, all of which were written to the same specifications and thus were intended to be functionally equivalent.
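for concreteness, a seeded error of the simple syntactic kind described here might look like the following invented example, in which a single loop bound is altered; it is not taken from the experiment. the abstract resumes after the sketch.

    // illustrative only: a simple syntactic alteration of the kind used in
    // error seeding, where one token in otherwise correct code is changed.
    public class SeededErrorExample {

        // original: returns the largest element of a non-empty array
        static int maxOriginal(int[] a) {
            int max = a[0];
            for (int i = 1; i < a.length; i++) {
                if (a[i] > max) max = a[i];
            }
            return max;
        }

        // seeded variant: the loop bound 'i < a.length' has been altered to
        // 'i < a.length - 1', so the last element is never examined; the fault
        // is observed only on inputs whose unique maximum sits in the last slot,
        // one reason seeded errors can show very different mean times to failure.
        static int maxSeeded(int[] a) {
            int max = a[0];
            for (int i = 1; i < a.length - 1; i++) {
                if (a[i] > max) max = a[i];
            }
            return max;
        }

        public static void main(String[] args) {
            int[] ok  = {3, 9, 2, 7, 5};
            int[] bad = {3, 7, 2, 5, 9};
            System.out.println(maxOriginal(ok)  + " " + maxSeeded(ok));  // 9 9
            System.out.println(maxOriginal(bad) + " " + maxSeeded(bad)); // 9 7
        }
    }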
the use of functionally equivalent programs allowed the influence of individual programmer styles to be removed as a variable from the error seeding experiment. each of six different syntactic manipulations was introduced into each program and the mean times to failure for the seeded errors were observed. the seeded errors were found to have a broad spectrum of mean times to failure independent of the syntactic alteration used. we conclude that it is possible to seed errors using only simple syntactic techniques that are arbitrarily difficult to locate. in addition, several unexpected results indicate that some issues involved in error seeding have not been addressed previously. john c. knight paul e. ammann a critical look at some ada features s. srinivasan user responses to constraints in computerized design tools (extended abstract) donald l. day pcol - a protocol-constrained object language jan van der bos editor's letter j. kaye grau saving traces for ada debugging carol h. ledoux d. stott parker an extended form of must alias analysis for dynamic allocation the paper presents methods that we have implemented to improve the quality of the def-uses reported for dynamically allocated locations. the methods presented are based on the ruggieri/murtagh naming scheme for dynamically created locations. we expand upon this scheme to name dynamically allocated locations for some user-written allocation routines. using this expanded naming scheme, we introduce an inexpensive, non-iterative, and localized calculation of extended must alias analysis to handle dynamically allocated locations, and show how this information can be used to improve def-use information. this is the first attempt to specify must alias information for names which represent a set of dynamically allocated locations. empirical results are presented to illustrate the usefulness of our method. we consider this work a step towards developing practical re-engineering tools for c. rita z. altucher william landi consistent generics in modula-2 j beidler p jackowitz some problems in distributing real-time ada programs across machines richard a. volz trevor n. mudge arch w. naylor john h. mayer an interactive source commenter for prolog programs prolog meta-circular interpreters, i.e., interpreters for prolog written in prolog, perform at least two operations on an object program - they parse it and execute its instructions. there is a useful variant of the meta-circular interpreter, the meta-circular parser, which, as its name suggests, parses an object program without executing its instructions. the value of such a parser is that it provides an elegant means to modify prolog source code. as the object program is parsed, new information in the form of additional instructions, comments, etc., can be selectively inserted. the prolog source code commenter we describe is a meta-circular parser with facilities added to allow a user to interactively enter comments. as a prolog program is parsed into its basic components, the user is allowed to view each component and enter an appropriate comment. the result is a new fully commented (and formatted) source program. david roach hal berghel john r. talburt object-oriented programming: an objective sense of style we introduce a simple, programming language independent rule (known in-house as the law of demeter) which encodes the ideas of encapsulation and modularity in an easy-to-follow form for the object-oriented programmer.
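a minimal java rendering of the rule's intent, with invented classes: a method should send messages only to its own fields, its parameters, and objects it creates, rather than reaching through objects obtained from other objects. the abstract resumes after the sketch.

    // illustrative only: invented classes showing a law-of-demeter violation
    // and a compliant alternative.
    class Engine {
        private boolean running;
        void start() { running = true; }
        boolean isRunning() { return running; }
    }

    class Car {
        private final Engine engine = new Engine();

        // exposing the engine invites callers to reach through it
        Engine getEngine() { return engine; }

        // compliant interface: the car talks to its own part on the caller's behalf
        void start() { engine.start(); }
        boolean isStarted() { return engine.isRunning(); }
    }

    public class DemeterExample {
        public static void main(String[] args) {
            Car car = new Car();

            // violation: the caller navigates through an object obtained from
            // another object ("don't talk to strangers")
            car.getEngine().start();

            // compliant: the caller sends messages only to its immediate collaborator
            car.start();
            System.out.println(car.isStarted()); // true
        }
    }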
you tend to get the following related benefits when you follow the law of demeter while simultaneously minimizing code duplication, the number of method arguments and the number of methods per class: easier software maintenance, less coupling between your methods, better information hiding, narrower interfaces, methods which are easier to reuse, and easier correctness proofs using structural induction. we discuss two important interpretations of the law (strong and weak) and we prove that any object-oriented program can be transformed to satisfy the law. we express the law in several languages which support object-oriented programming, including flavors, smalltalk-80, clos, c++ and eiffel. k. lieberherr i. holland a. riel communication between ada programs in diadem it has been recognized for some time that ada is not well designed for the purpose of programming networks of loosely-coupled processors, and numerous projects have been set up to find a solution. the diadem project had the particular goal of constructing flexible ada programs that could be executed in a range of distributed systems, including, as a limiting case, a centralized uniprocessor. to this end a technique was developed by which the ada rendezvous could be used as a remote communication mechanism, supported in a highly portable way on a range of different network architectures and communication systems. this was achieved by the definition of a special package providing a "standard interface" to the transport layer services of the host communication system. after outlining the basic principles of the diadem approach, and describing the first prototype implementation of the communication mechanism, this paper discusses how a standard interface similar to that used in diadem could provide an efficient, yet portable, technique for supporting remote communication in time-critical (hard) real-time, distributed systems. c. atkinson s. j. goldsack an example p4 process program for rebus this p4 (pronounced "p-quad") program describes a fragment of rebus, which is a program for creating a graph of functional requirements. dennis heimbigner from uml to java, building a 3-tier architecture: case study the successful use of object technology requires far more than simply the adoption of a modern software technology or notation such as uml, java, corba or com. what is crucial is knowing how to use these technologies to build commercially robust software systems. in this session the speaker draws on his experience at nasa, at&t, ibm, nova gass, and other leading companies to illustrate the pitfalls and best practices of component-based software development. timothy d. korson linux gazette: disk hog: tracking system disk usage ivan griffin pascal to forth leonard morgenstern algebraic software architecture reconfiguration the ability to reconfigure software architectures in order to adapt them to new requirements or a changing environment has been of growing interest, but there is still not much formal work in the area. most existing approaches deal with run-time changes in a deficient way. the language to express computations is often at a very low level of specification, and the integration of two different formalisms for the computations and reconfigurations sometimes requires substantial changes. to address these problems, we propose a uniform algebraic approach with the following characteristics. components are written in a high-level program design language with the usual notion of state.
the approach combines two existing frameworks---one to specify architectures, the other to rewrite labelled graphs---just through small additions to either of them. it deals with certain typical problems such as guaranteeing that new components are introduced in the correct state (possibly transferred from the old components they replace). it shows the relationships between reconfigurations and computations while keeping them separate, because the approach provides a semantics to a given architecture through the algebraic construction of an equivalent program, whose computations can be mirrored at the architectural level. michel wermelinger jose luiz fiadeiro performance and productivity on unconventional operating systems (panel) elizabeth d. rather tom dowling andrew kobziar cameron lowe report of the integration mechanisms working group david mundie the new crop of java virtual machines (panel) lars bak john duimovich jesse fang scott meyer david ungar device management in turing plus r. c. holt remark on algorithm 622: a simple macroprocessor a number of updates to the macroprocessor are described that bring the code into line with the fortran 77 standard. this is followed by an outline of how the macroprocessor was used for the rapid porting of geophysical software from a 64-bit supercomputer environment to a number of different unix workstations. finally a number of deficiencies remaining in the macroprocessor are noted and workarounds suggested where possible. stewart a. levin diana (panel session): an intermediate representation language for ada stowe boyd rudolph krutar george romanski tucker taft tom wilcox mapping concurrent programs to vliw processors hester bakewell donna j. quammen pearl y. wang projecting functional models of imperative programs m. harman s. danicic new products corporate linux journal staff ada/x window system bindings: conversion strategies karl s. mathias mark a. roth system commands in gcos-apl l. j. dickey uninitialized modula-2 abstract objects, revisited j savit the a+ programming language, a different apl this paper introduces a+, which is an array-oriented programming language derived from apl, and explains the motivations for creating such a language. after discussing some problems of current apl systems, such as the lack of control structures, the use of dynamic binding, and the difficulty of handling functions, we present how these problems are solved in a+. the most important feature of a+ is that it uses lexical binding rather than dynamic binding as apl does, which solves some old problems of apl while making it a safer language, and offers a new kind of programming style. jean-jacques girardot a model for the reuse of software design information kevin w. jameson polymorphic effect systems we present a new approach to programming languages for parallel computers that uses an effect system to discover expression scheduling constraints. this effect system is part of a 'kinded' type system with three base kinds: types, which describe the value that an expression may return; effects, which describe the side-effects that an expression may have; and regions, which describe the area of the store in which side-effects may occur. types, effects and regions are collectively called descriptions. expressions can be abstracted over any kind of description variable -- this permits type, effect and region polymorphism.
unobservable side-effects can be masked by the effect system; an effect soundness property guarantees that the effects computed statically by the effect system are a conservative approximation of the actual side-effects that a given expression may have. the effect system we describe performs certain kinds of side-effect analysis that were not previously feasible. experimental data from the programming language fx indicate that an effect system can be used effectively to compile programs for parallel computers. j. m. lucassen d. k. gifford focus on software david a. bandel adl - a documentation language richard h smith higher-order distributed objects we describe a distributed implementation of scheme that permits efficient transmission of higher-order objects such as closures and continuations. the integration of distributed communication facilities within a higher-order programming language engenders a number of new abstractions and paradigms for distributed computing. among these are user-specified load-balancing and migration policies for threads, incrementally linked distributed computations, and parameterized client-server applications. to our knowledge, this is the first distributed dialect of scheme (or a related language) that addresses lightweight communication abstractions for higher-order objects. henry cejtin suresh jagannathan richard kelsey completely validated software: in defense of coverage criteria (panel session) elaine j. weyuker beg, borrow, or steal (workshop session): using multidisciplinary approaches in empirical software engineering research the goal of this workshop is to provide an interactive forum for software engineers and empirical researchers to investigate the feasibility of applying proven methods from other research disciplines to software engineering research. participants submitted position papers describing problems that might benefit from a multidisciplinary approach. expert guest speakers from software engineering and other disciplines will address the issues highlighted in the papers with the goal of encouraging more multidisciplinary research. janice singer margaret-anne storey susan elliott sim extended control services in operating system interfaces grant r. guenther optimizing primary data caches for parallel scientific applications: the pool buffer approach liuxi yang josep torrellas a system for computer music performance a computer music performance system (cmps) is a computer system connected to input devices (including musical keyboards or other instruments) and to graphic and audio output devices. a human performer generates input events using the input devices. the cmps responds to these events by computing and performing sequences of output actions whose intended timing is determined algorithmically. because of the need for accurate timing of output actions, the scheduling requirements of a cmps differ from those of general-purpose or conventional real-time systems. this paper describes the scheduling facilities of formula, a cmps used by many musicians. in addition to providing accurate timing of output action sequences, formula provides other basic functions useful in musical applications: (1) per-process virtual time systems with independent relationships to real time; (2) process grouping mechanisms and language-level control structures with time-related semantics, and (3) integrated scheduling of tasks (such as compiling and editing) whose real-time constraints are less stringent than those of output action computations. david p. 
anderson ron kuivila yodl or yet one other document language karel kubat in search of program complexity p. f. sorensen my hairiest bug war stories marc eisenstadt modules and type checking in pl/ll the type system of a programming language system pl/ll is described. pl is a simple object-oriented programming language and ll is a language for composing pl modules into programs. the goals of the pl/ll system are to enable the programming of efficient object-oriented computations and to provide the powerful linking language ll for facilitating the construction of large programs. the goal of the type system is to ensure efficient and secure object handling through a combination of static and dynamic type checking, and to preserve this property across module boundaries. the solution is based on (i) the module and linking concepts of ll, (ii) a language construct in pl for the safe creation of linked data structures, and (iii) a limited form of type polymorphism and type unification. lars-erik thorelli constraint-based array dependence analysis traditional array dependence analysis, which detects potential memory aliasing of array references, is a key analysis technique for automatic parallelization. recent studies of benchmark codes indicate that limitations of analysis cause many compilers to overlook large amounts of potential parallelism, and that exploiting this parallelism requires algorithms to answer new questions about array references, not just get better answers to the old questions of aliasing. we need to ask about the flow of values in arrays, to check the legality of array privatization, and about the conditions under which a dependence exists, to obtain information about conditional parallelism. in some cases, we must answer these questions about code containing nonlinear terms in loop bounds or subscripts. this article describes techniques for phrasing these questions in terms of systems of constraints. conditional dependence analysis can be performed with a constraint operation we call the "gist" operation. when subscripts and loop bounds are affine, questions about the flow of values in array variables can be phrased in terms of presburger arithmetic. when the constraints describing a dependence are not affine, we introduce uninterpreted function symbols to represent the nonaffine terms. our constraint language also provides a rich language for communication with the dependence analyzer, by either the programmer or other phases of the compiler. this article also documents our investigations of the practicality of our approach. the worst-case complexity of presburger arithmetic indicates that it might be unsuitable for any practical application. however, we have found that analysis of benchmark programs does not cause the exponential growth in the number of constraints that could occur in the worst case. we have studied the constraints produced during our analysis, and identified characteristics that keep our algorithms free of exponential behavior in practice. william pugh david wonnacott putting "engineering" into software engineering (abstract) mary shaw limitations of using tokens for mutual exclusion edward t. ordman data abstraction mechanisms in sina/st this paper describes a new data abstraction mechanism in an object-oriented model of computing. the data abstraction mechanism described here has been devised in the context of the design of the sina/st language.
in sina/st no language constructs have been adopted for specifying inheritance or delegation, but rather, we introduce simpler mechanisms that can support a wide range of code sharing strategies without selecting one among them as a language feature. sina/st also provides stronger data encapsulation than most existing object-oriented languages. this language has been implemented on the sun 3 workstation using smalltalk. mehmet aksit anand tripathi other compiler support working group joseph k. cross multithreaded rendezvous: a design pattern for distributed rendezvous ricardo jimenez-peris marta patiño-martinez sergio arevalo experiences in configuration management for modula-2 m. jordan burg: fast optimal instruction selection and tree parsing christopher w. fraser robert r. henry todd a. proebsting dyalog apl arrives in the u.s. gregg w. taylor a naming system for feature-based service specification in distributed operating systems the paper describes a naming system that allows a service to evolve or reconfigure in functionality by adding and removing features and still co-exist with its previous versions. the underlying naming model has two aspects: (1) (attribute-name, attribute-value) pair-based characterization of service features, which allows the meta information on a service to be represented as a collection of such pairs (as in x.500 and universal naming protocol). at a low level, the name server provides parse and match operations on the (attribute, value) pairs, with which high-level name binding operations, viz., name registration and name resolution, are constructed. (2) a data-driven communication paradigm which enables different versions of a client and server to communicate with one another. in this paradigm, a server matches the attributes requested by a client with those it supports, and invokes service-specific functions named by the attributes. since attributes refer to orthogonal features, client and server can evolve independently by adding or removing attributes and still communicate. with this model of specifying services, name server functions may be factorized from service-specific functions and implemented in a generic fashion in terms of parse and match operations and function invocations. the paper also describes language support for the naming system and implementation issues. k. ravindran k. k. ramakrishnan linux means business: linux in camouflage corporate linux journal staff distributed programming with intermediate idl several heterogeneous-language distributed programming systems have been developed which either use an explicit interface definition language (idl) for the specification of distributed objects or which directly translate server language specifications to corresponding client language representations. in this paper, we present a new approach which combines the advantages of these prior systems. our approach uses an idl as an implicit intermediate step in the translation from server to client language rather than as an explicit user-provided definition. gary w. smith richard a. volz functions as objects in a data flow based visual language data flow based visual programming languages are an active area of research in visual programming languages. some recent data flow visual programming languages have implemented higher order functions, allowing functions to be passed to/from functions.
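by analogy only, and outside any visual language, the sketch below shows what it means for functions to be passed to and returned from functions, using ordinary java functional interfaces; the names are invented. the abstract resumes after the sketch.

    import java.util.function.Function;

    // illustrative analogy only: higher-order functions in plain java,
    // i.e., functions passed to and returned from other functions.
    public class HigherOrderSketch {

        // takes a function as an argument and applies it twice
        static int applyTwice(Function<Integer, Integer> f, int x) {
            return f.apply(f.apply(x));
        }

        // returns a new function built from a parameter (a function-valued result)
        static Function<Integer, Integer> adder(int n) {
            return x -> x + n;
        }

        public static void main(String[] args) {
            Function<Integer, Integer> addFive = adder(5);
            System.out.println(applyTwice(addFive, 1));    // 11
            System.out.println(applyTwice(x -> x * 2, 3)); // 12
        }
    }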
this paper describes a data flow visual programming language in which the first class citizenship of programs has been taken a step further, and programs can be manipulated as data with the same kind of flexibility that lisp offers in manipulating programs as data. alex fukunaga wolfgang pree takayuki dan kimura an empirical study of static call graph extractors informally, a call graph represents calls between entities in a given program. the call graphs that compilers compute to determine the applicability of an optimization must typically be conservative: a call may be omitted only if it can never occur in any execution of the program. numerous software engineering tools also extract call graphs with the expectation that they will help software engineers increase their understanding of a program. the requirements placed on software engineering tools that compute call graphs are typically more relaxed than for compilers. for example, some false negatives---calls that can in fact take place in some execution of the program, but which are omitted from the call graph---may be acceptable, depending on the understanding task at hand. in this article, we empirically show a consequence of this spectrum of requirements by comparing the c call graphs extracted from three software systems (mapmaker, mosaic, and gcc) by nine tools (cflow, cawk, cia, field, gct, imagix, lsme, mawk, and rigiparse). a quantitative analysis of the call graphs extracted for each system shows considerable variation, a result that is counterintuitive to many experienced software engineers. a qualitative analysis of these results reveals a number of reasons for this variation: differing treatments of macros, function pointers, input formats, etc. the fundamental problem is not that variances among the graphs extracted by different tools exist, but that software engineers have little sense of the dimensions of approximation in any particular call graph. in this article, we describe and discuss the study, sketch a design space for static call graph extractors, and discuss the impact of our study on practitioners, tool developers, and researchers. although this article considers only one kind of information, call graphs, many of the observations also apply to static extractors of other kinds of information, such as inheritance structures, file dependences, and references to global variables. gail c. murphy david notkin william g. griswold erica s. lan functional programming in c++ using the fc++ library brian mcnamara yannis smaragdakis an approach for exploring code improving transformations although code transformations are routinely applied to improve the performance of programs for both scalar and parallel machines, the properties of code-improving transformations are not well understood. in this article we present a framework that enables the exploration, both analytically and experimentally, of properties of code-improving transformations. the major component of the framework is a specification language, gospel, for expressing the conditions needed to safely apply a transformation and the actions required to change the code to implement the transformation. the framework includes a technique that facilitates an analytical investigation of code-improving transformations using the gospel specifications. it also contains a tool, genesis, that automatically produces a transformer that implements the transformations specified in gospel.
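as a rough illustration of the condition/action split that gospel captures, the following python sketch encodes one constant-folding transformation over an invented three-address representation; it is not gospel syntax and not the genesis tool.

```python
# a transformation is a precondition over the code plus an action that
# rewrites it; the representation below is a made-up stand-in.
ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

code = [("t1", 4, "*", 2),        # t1 := 4 * 2
        ("x", "t1", "+", "y")]    # x  := t1 + y

def can_fold(instr):
    """precondition: both operands are literal constants."""
    _dst, a, _op, b = instr
    return isinstance(a, int) and isinstance(b, int)

def fold(instr):
    """action: rewrite the instruction as a plain constant assignment."""
    dst, a, op, b = instr
    return (dst, ops[op](a, b))

print([fold(i) if can_fold(i) else i for i in code])
# [('t1', 8), ('x', 't1', '+', 'y')]
```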
we demonstrate the usefulness of the framework by exploring the enabling and disabling properties of transformations. we first present analytical results on the enabling and disabling properties of a set of code transformations, including both traditional and parallelizing transformations, and then describe experimental results showing the types of transformations and the enabling and disabling interactions actually found in a set of programs. deborah l. whitfield mary lou soffa assuring the correctness of configured software descriptions s. c. choi w. s. scacchi a non-fragmenting non-moving, garbage collector gustavo rodriguez-rivera michael spertus charles fiterman operating system support for persistent and recoverable computations john rosenberg alan dearle david hulse anders lindström stephen norris disk scheduling for mixed-media workloads in a multimedia server y. rompogiannakis g. nerjes p. muth m. paterakis p. triantafillou g. weikum ada debugging and testing support environments this paper presents analysis and design considerations for ada programming support environments (apses) to support interactive debugging and testing of embedded, real time software at the ada source code level. the analysis is based on the "stoneman" requirements specification for apses (1). important factors in the analysis and design of ada debugging and testing support systems include the requirement for a source level system, the host machine-target machine configurations, the real time and concurrent nature of target software, and the kapse virtual machine interface to the apse data base. although this paper is specifically concerned with debugging and testing issues, the methods utilized and the results obtained are of general applicability. the following sections of the paper address general analysis considerations, source level support environments, design considerations for an interactive source level debugger, and kapse design considerations. richard e. fairley object-oriented fortran 77 (a practitioner's view) bob patton visual specification of blocks in programming languages moreshwar r. bhujade more on fortran coding conventions r. n. caffin specifying cognitive interface requirements chris roast a domain-specific software architecture engineering process outline will tracz lou coglianese patrick young object oriented librarianship tsvi bar-david equivalence analysis: a general technique to improve the efficiency of data-flow analyses in the presence of pointers existing methods to handle pointer variables during data-flow analyses can make such analyses inefficient both in time and space because the data-flow analyses must store and propagate large sets of data facts that are introduced by dereferences of pointer variables. this paper presents _equivalence analysis,_ a general technique to improve the efficiency of data-flow analyses in the presence of pointers. the technique identifies equivalence relations among the memory locations accessed by a procedure and ensures that two equivalent memory locations share the same set of data facts in a procedure and in the procedures that are called by that procedure. thus, a data-flow analysis needs to compute the data-flow information only for a representative memory location in an equivalence class. the data-flow information for other memory locations in the equivalence class can be derived from that of the representative memory location.
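a minimal sketch of the bookkeeping described here, assuming a plain union-find over memory-location names: facts are stored once per representative, so any member of an equivalence class sees them. this is an illustration, not the authors' algorithm.

```python
# memory locations proven equivalent are unioned; data-flow facts are kept
# once per representative location.
parent = {}

def find(loc):
    parent.setdefault(loc, loc)
    while parent[loc] != loc:
        parent[loc] = parent[parent[loc]]   # path compression
        loc = parent[loc]
    return loc

def union(a, b):
    parent[find(a)] = find(b)

facts = {}   # representative location -> set of data-flow facts

def add_fact(loc, fact):
    facts.setdefault(find(loc), set()).add(fact)

def facts_of(loc):
    return facts.get(find(loc), set())

# if *p and x are proven equivalent, a fact recorded for one is visible for both.
union("*p", "x")
add_fact("*p", "defined@s1")
print(facts_of("x"))    # {'defined@s1'}
```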
our empirical studies indicate that equivalence analysis may effectively improve the efficiency of many data-flow analyses. donglin liang mary jean harrold a bnf parser in forth brad rodriguez shared functions and variables as an aid to applications design analogic apl incorporates true sharing of named objects into apl for the first time. both functions and variables may be shared. true shared variables are variables which are visible and accessible to multiple tasks without the aid of any intervening processor. they fill roles traditionally taken by shared file systems and ⎕svo-style shared variables. their straightforward design allows an apl programmer to develop whatever sharing protocol is most appropriate to the application. one could, for example, build a complete shared file system or ⎕svo-style shared variable processor written in apl. all of the features of analogic apl discussed in this paper have been fully specified. the system is now being implemented. a substantial subset of these features will be demonstrable in helsinki in june, 1984. michael j.a. berry fast i/o - efficient file processing dick bowman timing results of various compilers using an optimization quality benchmark h. liu m. klerer readers' choice awards 1997 gena shurtleff a future for apl martin gfeller linux apprentice beginner's guide to jdk: this article covers the use of the java development kit on a linux platform. it includes a general introduction to java, installing the jdk 1.1.6, compiling java support into the linux kernel, writing a simple ja < gordon chamberlin software engineering for the cobol environment in an attempt to improve the productivity of their 70 development staff, skandinaviska enskilda banken has built an integrated set of manual and automatic tools for the implementation of cobol programs. it was possible to use a number of modern programming techniques, including software engineering methods, in a cobol environment. the project required 31 person-months; the aims, current status, and initial results are reported. michael evans user-level process checkpoint and restore for migration m. bozyigit m. wasiq kernel korner: a non-technical look inside the ext2 file system randy appleton an empirical study of the object-oriented paradigm and software reuse john a. lewis sallie m. henry dennis g. kafura robert s. schulman a note on covariance and contravariance unification myles f. barrett marshall e. giguere the architecture of montana: an open and extensible programming environment with an incremental c++ compiler montana is an open, extensible integrated programming environment for c++ that supports incremental compilation and linking, a persistent code cache called a codestore, and a set of programming interfaces to the codestore for tool writers. codestore serves as a central source of information for compiling, browsing, and debugging. codestore contains information about both the static and dynamic structure of the compiled program. this information spans files, macros, declarations, function bodies, templates and their instantiations, program fragment dependencies, linker relocation information, and debugging information. montana allows the compilation process to be extended and modified [11]. montana has been used as the basis of a number of tools [1,7], and is also used as the infrastructure of a production compiler, ibm's visual age c++ 4.0 [8].
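as a hedged illustration of what a codestore-style program database buys tool writers, the following python sketch fakes a tiny declaration and dependency store with two queries; the schema and function names are invented and are not montana's interfaces.

```python
# tools look up declarations and fragment dependencies instead of re-parsing;
# everything below is an illustrative stand-in for such a store.
codestore = {
    "decls": {"Buffer::put": {"file": "buffer.cpp", "kind": "member function"},
              "Buffer":      {"file": "buffer.h",   "kind": "class"}},
    "deps":  {"Buffer::put": ["Buffer"]},   # fragment -> fragments it depends on
}

def lookup(name):
    return codestore["decls"].get(name)

def needs_recompile(changed):
    """fragments whose dependencies include a changed fragment."""
    return [f for f, ds in codestore["deps"].items() if changed in ds]

print(lookup("Buffer::put"))
print(needs_recompile("Buffer"))   # the incremental compilation working set
```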
michael karasick an intermediate design language and its analysis a simple relational language is presented that has two desirable properties. first, it is sufficiently expressive to encode, fairly naturally, a variety of software design problems. second, it is amenable to fully automatic analysis. this paper explains the language and its semantics, and describes a new analysis scheme (based on a stochastic boolean solver) that dramatically outperforms existing schemes. daniel jackson a language-based editing process for visual object-oriented programming chung-hua wang feng-jian wang process reuse using a template approach: a case-study from avionics w. lam new products corporate linux journal staff data abstraction, controlled iteration, and communicating processes iterators provide access to elements of an abstract structured object in some sequence. it is argued that parallel composition of iterators should be achieved implicitly by means of a generalized for loop rather than by use of mutually interacting coroutines. the generalized for loop employs controlled iteration, which is shown to be a powerful yet inexpensive construct. the generalized for loop is consistent with block structure, and, for program proof purposes, is much more tractable than an unrestricted loop. concrete programming examples are used extensively. alfs t. berztiss software documentation as an engineering process jose r. hilera leon a. gonzalez jose a. guitierrez j. m. martinez the design and application of a retargetable peephole optimizer peephole optimizers improve object code by replacing certain sequences of instructions with better sequences. this paper describes po, a peephole optimizer that uses a symbolic machine description to simulate pairs of adjacent instructions, replacing them, where possible, with an equivalent single instruction. as a result of this organization, po is machine independent and can be described formally and concisely: when po is finished, no instruction, and no pair of adjacent instructions, can be replaced with a cheaper single instruction that has the same effect. this thoroughness allows po to relieve code generators of much case analysis; for example, they might produce only load/add-register sequences and rely on po to, where possible, discard them in favor of add-memory, add-immediate, or increment instructions. experiments indicate that naive code generators can give good code if used with po. jack w. davidson christopher w. fraser an incremental flow- and context-sensitive pointer aliasing analysis jyh-shiarn yur barbara g. ryder william a. landi the scheme of things: implementing lexically scoped macros jonathan rees dear ada doug bryan a measure for composite module cohesion sukesh patel william chu rich baxter the click modular router click is a new software architecture for building flexible and configurable routers. a click router is assembled from packet processing modules called _elements._ individual elements implement simple router functions like packet classification, queueing, scheduling, and interfacing with network devices. complete configurations are built by connecting elements into a graph; packets flow along the graph's edges.
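a minimal sketch of the element-graph idea, assuming invented element classes and push processing only (pull processing and flow-based router context are omitted); it is not click's c++ api.

```python
# packets are pushed through a graph of small processing elements.
class Counter:
    def __init__(self): self.count = 0
    def push(self, packet): self.count += 1

class Classifier:
    """route packets to one of two downstream elements by a header field."""
    def __init__(self, out_ip, out_other):
        self.out_ip, self.out_other = out_ip, out_other
    def push(self, packet):
        (self.out_ip if packet.get("proto") == "ip" else self.out_other).push(packet)

ip_count, other_count = Counter(), Counter()
graph_entry = Classifier(ip_count, other_count)      # connect elements into a graph
for pkt in [{"proto": "ip"}, {"proto": "arp"}, {"proto": "ip"}]:
    graph_entry.push(pkt)
print(ip_count.count, other_count.count)             # 2 1
```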
several features make individual elements more powerful and complex configurations easier to write, including _pull processing,_ which models packet flow driven by transmitting interfaces, and _flow-based router context,_ which helps an element locate other interesting elements. we demonstrate several working configurations, including an ip router and an ethernet bridge. these configurations are modular---the ip router has 16 elements on the forwarding path---and easy to extend by adding additional elements, which we demonstrate with augmented configurations. on commodity pc hardware running linux, the click ip router can forward 64-byte packets at 73,000 packets per second, just 10% slower than linux alone. robert morris eddie kohler john jannotti m. frans kaashoek a mobility-aware file system for partially connected operation the advent of affordable off-the-shelf wide-area wireless networking solutions for portable computers will result in partial (or intermittent) connectivity becoming the common networking mode for mobile users. this paper presents the design of pfs, a mobility-aware file system specially designed for partially connected operation. pfs supports the extreme modes of full connection and disconnection gracefully, but unlike other mobile file systems, it also provides an interface for mobility-aware applications to direct the file system in its caching and consistency decisions in order to fully exploit intermittent connectivity. using pfs, it is possible for an application to maintain consistency on only the critical portions of its data files. since pfs provides adaptation at the file system level, even unaware applications can 'act' mobile-aware as a result of the transparent adaptation provided by pfs. dane dwyer vaduvur bharghavan an error complexity model for software reliability measurement yutaka nakagawa shuetsu hanata m.h. halstead's software science - a critical examination karl popper has described the scientific method as "the method of bold conjectures and ingenious and severe attempts to refute them". software science has made "bold conjectures" in postulating specific relationships between various 'metrics' of software code and in ascribing psychological interpretations to some of these metrics. this paper describes tests made on the validity of the relationships and interpretations which form the foundations of software science. the results indicate that the majority of them represent neither natural laws nor useful engineering approximations. peter g. hamer gillian d. frewin programming paradigms involving exceptions: a software quality approach j. a. perkins r. s. gorzela technical correspondence diane crawford part iii: implementing components in resolve paolo bucci joseph e. hollingsworth joan krone bruce w. weide superoptimizer: a look at the smallest program henry massalin on the role of language constructs for framework design görel hedin jørgen lindskov knudsen an investigation on the use of machine learned models for estimating correction costs mauricio a. de almeida hakim lounis walcelio l. melo software architecture, analysis, generation, and evolution (saage) barry boehm neno medvidovic structured requirements definition in the 80s in the last few years, structured programming and structured systems design have become increasingly popular. recently, there has been considerable application of structuring to the front-end of the systems life cycle, in particular to the requirements definition process.
this paper discusses the experience with one methodology, the langston, kitch structured systems development methodology (ssd), and forecasts what the major developments in requirements development will be in the 1980s. kenneth t. orr a validation framework for a maturity measurement model for safety-critical software systems vijay k. vaishnavi martin d. fraser hartstone: synthetic benchmark requirements for hard real-time applications nelson weiderman specification formalisms for component-based concurrent systems this project builds on my ongoing research into design formalisms for, and the automatic verification of, concurrent systems. the difficulties such systems pose for system engineers are well-known and result in large part from the complexities of process interaction and the possibilities for nondeterminism. my work is motivated by a belief that mathematically rigorous specification and verification techniques will ultimately lead to better and easier-to-build concurrent systems. my specific research interests lie in the development of fully automatic analysis methods and process-algebraic design formalisms for modeling system behavior. i have worked on algorithms for checking properties of, and refinement relations between, system descriptions [ch93, cs93]; the implementation and release of a verification tool, the cwb-nc [cs96] (see http://www.cs.sunysb.edu/~rance to obtain the distribution); case studies [bcl99, ecb97]; and the formalization of system features, such as real time, probability, and priority, in process algebra [bcl99, cdsyar]. the aims of this project include the development of expressive and usable formalisms for specifying and reasoning about properties of _open, component-based_ concurrent systems. more specifically, my colleagues and i have been investigating new approaches for describing component requirements and automated techniques for determining when finite-state components meet their requirements. the key topics under study include the following. **a temporal logic for open systems.** we are working on a notation for conveniently expressing properties constraining the behavior of open systems. **implicit specifications.** _implicit specifications_ use system _contexts,_ or "test harness," to define requirements for open systems. we are studying expressiveness issues and model-checking algorithms for such specifications. **automatic model-checker generation.** we have been developing a _model-checker generator_ that, given a temporal logic and "proof rules" for the logic, automatically produces an efficient model checker. rance cleaveland making sense of software engineering environment framework standards barbara cuthill fast procedure calls a mechanism for control transfers should handle a variety of applications (e.g., procedure calls and returns, coroutine transfers, exceptions, process switches) in a uniform way. it should also allow an implementation in which the common cases of procedure call and return are extremely fast, preferably as fast as unconditional jumps in the normal case. this paper describes such a mechanism and methods for its efficient implementation. butler w. lampson the quick start guide to the gimp, part 2 michael j. hammel comments on debugging hypercubes in a von neumann language joel macauslan take command: apropos corporate linux journal staff surveyor's forum: retargetable code generators christopher w. fraser on the vax/vms time-critical process scheduling the vax/vms process schedule is briefly described.
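a toy discrete-time simulation of a priority-driven round-robin discipline of the kind this entry analyzes; the quantum and workloads are invented, and the simulation is no substitute for the derived mean and variance.

```python
# at each tick the highest-priority non-empty queue is served, round-robin
# within that queue, one quantum at a time.
from collections import deque

def simulate(jobs, quantum=1):
    """jobs: list of (name, priority, service_time); higher priority runs first."""
    queues = {}
    for name, prio, need in jobs:
        queues.setdefault(prio, deque()).append([name, need])
    clock, finish = 0, {}
    while any(queues.values()):
        prio = max(p for p, q in queues.items() if q)   # highest ready priority
        job = queues[prio].popleft()
        run = min(quantum, job[1])
        clock += run
        job[1] -= run
        if job[1] == 0:
            finish[job[0]] = clock           # response time measured from t=0
        else:
            queues[prio].append(job)         # back to the end of its queue
    return finish

print(simulate([("tc1", 2, 3), ("tc2", 2, 3), ("batch", 1, 5)]))
```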
a simple priority-driven round-robin queuing model is then constructed to analyze the behavior of the time-critical processes of vax/vms under such a schedule. mean and variance of the conditional response time of a process at a given priority are derived, conditioned on the amount of service time required by that process. numerical results are given with comparisons to the ordinary priority queuing systems. y. t. wang producing reusable object-oriented components: a domain-and-organization-specific perspective developing reusable object-oriented software requires a designer to determine how to structure a software system so as to achieve the necessary functionality, while at the same time increasing the reuse potential of the software. we introduce a set of reusability metrics intended to be iteratively applied during the design and implementation parts of the software life-cycle to help guide the production and identification of reusable components. component identification centers on the application's domain, with reuse focusing specifically on an organization's future systems. our approach requires the developer to subjectively categorize classes, identify component boundaries, and specify where components are related. our metrics provide reuse valuations on the couplings between components. based upon the results of applying our metrics, we provide refactoring guidelines to increase the separation between components in a manner that improves component reusability. we include an application of our metrics to a commercial object-oriented framework. margaretha w. price donald m. needham steven a. demurjian framework description using concern-specific design patterns composition antónio rito silva francisco assis rosa teresa gonçalves using linux in a training environment b. scott burkett toolcase: a repository of computer-aided software engineering tools panagiotis linos the relationship between theory and practice in software engineering robert l. glass book review: unix programming tools andrew l. johnson vax-11 fortran: a flaw in efficient code generation m. r. khalil garbage collection in generic libraries gor v. nishanov sibylle schupp lustre: a declarative language for real-time programming lustre is a synchronous data-flow language for programming systems which interact with their environments in real-time. after an informal presentation of the language, we describe its semantics by means of structural inference rules. moreover, we show how to use this semantics in order to generate efficient sequential code, namely, a finite state automaton which represents the control of the program. formal rules for program transformation are also presented. p. caspi d. pilaud n. halbwachs j. a. plaice redirect, pipe, and stdio in ms-dos this paper describes the results of efforts to develop ms-dos application programs which provide acceptable i/o redirection. the characteristics considered "acceptable" are defined, and compared with results obtained from some compilers. problems related to writing character i/o routines and their solutions are described. arthur zachai managing stack frames in smalltalk j. e. b. moss a module concept for viewpoints christian piwetz michael goedicke isolation-only transactions for mobile computing qi lu m.
satyanarayanan on failure of the pruning technique in "error repair in shift-reduce parsers" a previous article presented a technique to compute the least-cost error repair by incrementally generating configurations that result from inserting and deleting tokens in a syntactically incorrect input. an additional mechanism to improve the run-time efficiency of this algorithm by pruning some of the configurations was discussed as well. in this communication we show that the pruning mechanism may lead to suboptimal repairs or may block all repairs. certain grammatical errors in a common construct of the java programming language also lead to the above kind of failure. eberhard bertsch mark-jan nederhof robust, distributed references and acyclic garbage collection marc shapiro peter dickman david plainfosse facade: a typed intermediate language dedicated to smart cards the use of smart cards to run software modules on demand has become a major business concern for application issuers. such down-loadable executable content needs to be trusted by the card execution environment in order to ensure that an instruction on a memory area is compliant with the definition of the data stored in this area (i.e. its type). current solutions for smart cards rely on three techniques. for java card, either an off-card verifier-converter performs a static verification of type-safety, or a defensive virtual machine performs the verification at runtime. for other types of open smart cards, no type-checking is carried out and the trust is only based on the containment of applications. static verification is more efficient and flexible than dynamic techniques. nevertheless, as the java verifier cannot fit into a card, the trust is dependent on an external third-party. in this way, the card security has been partly turned to the outside. we propose and describe the facade language for which the type-safety verification can be performed statically on-card. gilles grimaud jean-louis lanet jean-jacques vandewalle info: interactive apl documentation a large body of apl code may be hard to understand and analyze, particularly if you are not its author. a code system that spans multiple workspaces (wss) compounds that problem. _info_ is a multi-ws system written in apl+win that provides convenient interactive documentation of apl+win and apl+dos multi-ws systems. the user has a variety of displays that give insight into the system structures and relationships within single- or multi-ws systems. an administrator can easily set up and maintain the info static analysis data bases for any number of ws groups. this paper demonstrates info by showing how info documents itself. a distribution package contains the complete code and instructions for setting up this self-documentation. george mebus javascript application cookbook ben crowder multitasking software components d. w. gonzalez ada modules s. e. watson inertia - the reluctance of code motion? william s. shu letters to the editor corporate linux journal staff effective use of assertions in c++ mike a. marin letters corporate linux journal staff describing distributed environments (abstract only) the distributed execution analysis suite project (dea-suite) [1] is concerned with developing a set of integrated tools for studying the execution of programs in a distributed environment. dea-suite consists of an environment to: (1) allow the specification of a candidate distributed execution environment.
(2) assign the distributable portions of a program to the sites of execution in the distributed execution environment. (3) schedule the distributable portions of a program as they become enabled, and finally (4) simulate the execution of the program on the candidate distributed execution environment. one aim is to provide an environment. the presentation's focus will be on the description of distributed environments within the dea-suite project. paul a. ponville software engineering tutorial the field of software engineering has undergone some of the most profound changes in the last decade. in recent years, the national acm conferences have been giving increasing attention to software engineering---"structured program planning and design: standardization needs" (acm '79 - detroit) and "more on structured design" (acm '80 - nashville). this session contains treatment of software engineering with emphasis on real applications. presented are state-of-the-art software techniques including the latest refinements in structured analysis and design and programming methodologies. several issues related to the portrayal of software design include architectural, structural, behavioral, and informational considerations. stressed are techniques for implementing reliable, maintainable software. also offered are software management approaches. murray r. berkowitz gordon davis kenneth t. orr james a. senn darrell ward deciding when to forget in the elephant file system douglas s. santry michael j. feeley norman c. hutchinson alistair c. veitch ross w. carton jacob ofir real-time ada demonstration l. lucas d. dent distribution of ada tasks onto a heterogeneous environment haruhiko nishida takumi itoh ryuji nakayama a possible approach to object-oriented reengineering of cobol programs marijana tomic the cost of standardizing components for software reuse giancarlo succi francesco baruchelli new type signatures for legacy fortran subroutines we are currently developing a methodological framework for reverse engineering fortran77 programs used by electricite de france, in which the first step is the construction of an algebraic specification which faithfully represents the fortran code. to construct this specification, we must decide on a coherent set of "profiles" (type signatures) for the specifications of the fortran sub-programs. we propose an analysis of the dynamic aliases between formal and actual sub-program arguments in order to derive these profiles. in many examples of real fortran code this analysis does not give satisfactory results if arrays are treated as indivisible. instead, we must take into account which fragment of the array may really be accessed by the sub-program. we have therefore implemented our analysis as an extension of the _pips_ code parallelisation tool, which provides us with a precise analysis of inter-procedural array data-flow. nicky williams preston analysis and testing of concurrent object-oriented software k. c. tai asynchronous communication on occam n. b. serbedzija a net method for specification of reusable software h. s. dhama d. shtern orchestrating interactions among parallel computations many parallel programs contain multiple sub-computations, each with distinct communication and load balancing requirements. the traditional approach to compiling such programs is to impose a processor synchronization barrier between sub-computations, optimizing each as a separate entity.
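as a hedged sketch of the alternative developed in the remainder of this entry, the snippet below pipelines two sub-computations through a bounded queue instead of separating them with a barrier; the stage bodies are placeholders, not the authors' compiler output.

```python
# instead of finishing all of stage 1 before stage 2 starts, the two
# sub-computations overlap through a bounded buffer.
import queue
import threading

work = queue.Queue(maxsize=4)          # bounded buffer between sub-computations

def stage1(n):
    for i in range(n):
        work.put(i * i)                # produce partial results as they are ready
    work.put(None)                     # end-of-stream marker

def stage2(results):
    while (item := work.get()) is not None:
        results.append(item + 1)       # consume concurrently with stage 1

results = []
t1 = threading.Thread(target=stage1, args=(8,))
t2 = threading.Thread(target=stage2, args=(results,))
t1.start(); t2.start(); t1.join(); t2.join()
print(results)
```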
this paper develops a methodology for managing the interactions among sub-computations, avoiding strict synchronization where concurrent or pipelined relationships are possible. our approach to compiling parallel programs has two components: symbolic data access analysis and adaptive runtime support. we summarize the data access behavior of sub-computations (such as loop nests) and split them to expose concurrency and pipelining opportunities. the split transformation has been incorporated into an extended fortran compiler, which outputs a fortran 77 program augmented with calls to library routines written in c and a coarse-grained dataflow graph summarizing the exposed parallelism. the compiler encodes symbolic information, including loop bounds and communication requirements, for an adaptive runtime system, which uses runtime information to improve the scheduling efficiency of irregular sub-computations. the runtime system incorporates algorithms that allocate processing resources to concurrently executing sub-computations and choose communication granularity. we have demonstrated that these dynamic techniques substantially improve performance on a range of production applications including climate modeling and x-ray tomography, especially when large numbers of processors are available. susan l. graham steven lucco oliver sharp errata, amendments and interpretations for the fortran 90 standards document jerrold l. wagener system development methodology using logos the development of applications written in apl has traditionally both benefited by and suffered from the freedom offered by the environment. a consequence of this freedom is that few applications are designed from the perspectives of consistency, modularity, and structure. this paper describes how logos, a programming environment for apl, helps improve the development and maintenance of apl applications. through the use of basic support facilities and integrated tools, logos encourages a modular design within applications and a greater consistency among them. the support facilities provide such functionality as the paging of large applications, or the parallel testing of multiple versions of software, with minimal effort. specific examples of system design and development are given in the paper. programming methodologies in widespread practice are examined, and their application to apl with logos is described. the use of logos in top-down design and in application prototyping is also discussed. david b. allen mark r. dempsey leslie h. goldsmith what is a procedure call? brian l. meek fault tolerant distributed ada s. arevalo a. alvarez 801 storage: architecture and programming the ibm rt pc implements the necessary features of 801 storage architecture. the upper 4 bits of a 32-bit short address select one of 16 segment registers. a 12-bit segment id from the register replaces the 4 bits to form a 40-bit long virtual address. this creates a single large space of 4096 256m-byte segments. only the supervisor may load segment registers and may therefore control access to and sharing of segments. a long virtual address is translated to real by an inverted page table, in which each entry contains the virtual page address currently allocated to a real page. hardware searches the table using chained hashing. if a given virtual address is found in a table entry, the index of that entry is the desired real page address.
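a minimal model of the lookup just described, assuming arbitrary sizes and a trivial hash: one table entry per real page frame, chained hashing on the virtual page number, and the index of the matching entry is the real page frame.

```python
# sizes and the hash function are illustrative, not the 801 hardware's.
NFRAMES = 8
table = [None] * NFRAMES          # frame index -> virtual page (None if free)
chain = {}                        # hash bucket -> list of frame indices

def insert(vpage, frame):
    table[frame] = vpage
    chain.setdefault(vpage % NFRAMES, []).append(frame)

def translate(vpage):
    """return the real frame holding vpage, or None (page fault)."""
    for frame in chain.get(vpage % NFRAMES, []):
        if table[frame] == vpage:
            return frame
    return None

insert(0x2a, frame=3)
insert(0x32, frame=5)             # 0x32 % 8 == 0x2a % 8: same hash chain
print(translate(0x2a), translate(0x32), translate(0x99))   # 3 5 None
```

note that the table grows with real storage, not with the virtual address space, which is the point the entry makes about table size.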
table size is related only to real storage size rather than to virtual size as with conventional segment and page tables. the inverted page table includes a transaction locking mechanism. each entry contains bits to represent read and write locks, for 128-byte lines within the page, granted to the transaction id also in the entry. lock fault interrupt occurs when storage access by the current transaction (id in a register) is not permitted by locks and id in a table entry. page protection bits may instead be used when transaction locking is not required. a cpr transaction is all the storage actions by a process on a set of file segments, performed between two calls to commit for that set of segments, including actions of the supervisor in directory segments when files are created, renamed, etc. file open options influence transaction processing. when open options specify file sharing and implicit functions of serialization and atomic commit, we call the file segments database storage. transaction locking hardware is used. lock fault interrupts invoke the storage manager to grant locks and later, when a transaction commits, the storage manager writes log records of changed storage. strong transaction properties are achieved, without explicit calls from programs which access storage and independently of superimposed data organization or access pattern. a language-based approach to invoking transaction functions seems to be more explicit and restrictive. pl.8 is a pl/i dialect for systems programming and the pl.8 compiler is part of the 801 project. to simplify file programming, a persistent storage class (i.e. storage in a file segment) and a refp type (capability to a file segment id) were added to pl.8. persistent storage may be used with any type, including arrays of structures and based structures in areas (appropriate for records, indices, etc.). semantics of computation with persistent variables depend only on type and so are the same as computation in working storage classes automatic, static, and controlled (heap). computation with large aggregates over multiple file segments is possible, exceeding 32-bit addressing. database storage is a new way to implement certain storage management functions in an operating system, built on and similar in spirit to virtual storage. both are very general, transparent, and rather monolithic approaches to storage management, one for storage hierarchy, the other for storage concurrency and recovery. we believe that database storage will perform well for a wide range of applications and that the simplicity it offers is too attractive to dismiss. as in the early days of virtual storage, the challenge is to understand and exploit its characteristics. a. chang m. mergen shallow binding makes functional arrays fast henry g. baker disjoint eager execution: an optimal form of speculative execution augustus k. uht vijay sindagi kelley hall enumerated data types the origins of many language features are hard to pin down. not so with enumerated data types. these were invented by niklaus wirth in the design of the pascal programming language [1]. they permit convenience and greater safety in performing tasks that had to be handled by mapping concepts to a subrange of the integers in previous languages. the feature is useful enough and easy enough to define and implement that no really modern language is without some form of it. the feature is still missing from fortran. the proposed new standard for fortran adds a version of the feature to the language [2].
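the property argued for here, shown with python's enum module rather than pascal or fortran: named enumerators replace bare integer codes and are not silently interchangeable with them.

```python
# named values instead of "magic" integer codes; mixing with plain ints is
# rejected rather than silently accepted.
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

def paint(c: Color):
    if not isinstance(c, Color):
        raise TypeError("expected a Color, not a bare integer")
    return f"painting {c.name.lower()}"

print(paint(Color.GREEN))
print(Color.GREEN == 2)      # False: enumerators are not interchangeable with ints
```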
however, it is present as part of a c-interoperability mechanism only. unfortunately, it leaves out most of the aspects of the idea that make it useful to a programmer. and it is not even necessary for interoperation with c. this article describes the concept of enumerated data types and the properties the feature should include. it then describes the proposed version of the feature in fortran. some suggestions will be made about future directions. jim giles transporting an ada software tool: a case study stewart french a structural framework for the formal representation of cooperation anthony finkelstein the "wolf fence" algorithm for debugging the "wolf fence" method of debugging time-sharing programs in higher languages evolved from the "lions in south africa" method that i have taught since the vacuum-tube machine language days. it is a quickly converging iteration that serves to catch run-time errors. edward j. gauss upfront corporate linux journal staff software development process audits - a general procedure to assist development organizations in improving software quality and productivity, the at&t bell laboratories quality assurance center created a design quality group to independently evaluate the software development processes and associated development products of our at&t projects. these software development process audits examine software engineering techniques and tools in practice, as they fit into the overall development environment. our strategy behind these audits is to assemble a team which, with the involvement of the developers and their managers, will: characterize the existing development process; identify project strengths and areas for improvements; and recommend possible improvements. this paper details the general approach behind this strategy. stewart g. crawford m. hosein fallah software process in the organizational context: more data on european practices santiago rementeria launching a successful cpme program in a multi-vendor environment rogert g ruckert john d dean the design and implementation of tripwire: a file system integrity checker at the heart of most computer systems is a file system. the file system contains user data, executable programs, configuration and authorization information, and (usually) the base executable version of the operating system itself. the ability to monitor file systems for unauthorized or unexpected changes gives system administrators valuable data for protecting and maintaining their systems. however, in environments of many networked heterogeneous platforms with different policies and software, the task of monitoring changes becomes quite daunting. tripwire is a tool that aids unix system administrators and users in monitoring a designated set of files and directories for any changes. used with system files on a regular (e.g., daily) basis, tripwire can notify system administrators of corrupted or altered files, so corrective actions may be taken in a timely manner. tripwire may also be used on user or group files or databases to signal changes. this paper describes the design and implementation of the tripwire tool. it uses interchangeable "signature" (usually, message digest) routines to identify changes in files, and is highly configurable. tripwire is no-cost software, available on the internet, and is currently in use on thousands of machines around the world. gene h. kim eugene h.
spafford from the editor: lg and ielg corporate linux journal staff error repair with validation in lr-based parsing when the compiler encounters an error symbol in an erroneous input, the local error-repair method repairs the input by either inserting a repair string before the error symbol or deleting the error symbol. although the extended fmq of fischer et al. and the method of mckenzie et al. report the improved quality of diagnostic messages, they suffer from redundant parse stack configurations. this article proposes an efficient lr error-recovery method with validation, removing repairs that give the same validation result as a previously considered, lower-cost repair. moreover, its execution speed is proportional to the length of the stack configuration. the algorithm is implemented on bison, the gnu lalr(1) parser generating system. experimental results are presented. ik-soon kim kwang-moo choe constructing reusable specifications through analogy chia-chu chiang david neubart the missing link in requirements engineering hermann kaindl generating efficient and complete ada from a case tool goran hemdal ada information hiding - a design goal missing? w g piotrowski work structures and shifts: an empirical analysis of software specification teamwork salah bendifallah walt scacchi the under-appreciated unfold jeremy gibbons geraint jones ada: adding reliability and efficiency to task communication in programming distributed control systems k. m. sacha architecture-based software testing antonia bertolino paola inverardi automatic determination of recommended test combinations for ada compilers ada compilers are validated using the ada compiler validation capability (acvc) test suite, containing over 4000 individual test programs. each test program focuses, to the extent possible, on a single language feature. despite the advantages of this "atomic testing" methodology, it is often the unexpected interactions between language features that result in compilation problems. this research investigated techniques to automatically identify recommended combinations of ada language features for compiler testing. a prototype program was developed to analyze the ada language grammar specification and generate a list of recommended combinations of features to be tested. the output from this program will be used within the ada features identification system (afis), a configuration management tool currently under development for the acvc test suite. the prototype uses an annotated ada language grammar to drive a test case generator. the generated combinations of ada features are analyzed to select the combinations to be tested. while the skill and intuition of the compiler tester are essential to the annotation of the ada grammar, the prototype demonstrated that automated support tools can be used to identify recommended combinations for ada compiler testing. james s. marr patricia k. lawlis extended pascal is no problem j. mauney object-oriented graphics for requirements analysis and logical design donald firesmith recent advances in software measurement (abstract and references for talk) v. r. basili the object-oriented systems life cycle in software engineering, the traditional description of the software life cycle is based on an underlying model, commonly referred to as the "waterfall" model (e.g., [4]).
this model initially attempts to discretize the identifiable activities within the software development process as a linear series of actions, each of which must be completed before the next is commenced. further refinements to this model appreciate that such completion is seldom absolute and that iteration back to a previous stage is likely. various authors' descriptions of this model relate to the detailed level at which the software building process is viewed. at the most general level, three phases to the life cycle are generally agreed upon: 1) analysis, 2) design and 3) construction/implementation (e.g., [36], p. 262; [42]) (figure 1(a)). the analysis phase covers from the initiation of the project, through to users- needs analysis and feasibility study (cf. [15]); the design phase covers the various concepts of system design, broad design, logical design, detailed design, program design and physical design. following from the design stage(s), the computer program is written, the program tested, in terms of verification, validation and sensitivity testing, and when found acceptable, put into use and then maintained well into the future. in the more detailed description of the life cycle a number of subdivisions are identified (figure 1(b)). the number of these subdivisions varies between authors. in general, the problem is first defined and an analysis of the requirements of current and future users undertaken, usually by direct and indirect questioning and iterative discussion. included in this stage should be a feasibility study. following this a user requirements definition and a software requirements specification, (srs) [15], are written. the users requirements definition is in the language of the users so that this can be agreed upon by both the software engineer and the software user. the software requirements specification is written in the language of the programmer and details the precise requirements of the system. these two stages comprise an answer to the question of what? (viz. problem definition). the user-needs analysis stage and examination of the solution space are still within the overall phase of analysis but are beginning to move toward not only problem decomposition, but also highlighting concepts which are likely to be of use in the subsequent system design; thus beginning to answer the question how? on the other hand, davis [15] notes that this division into "what" and "how" can be subject to individual perception, giving six different what/how interpretations of an example telephone system. at this requirements stage, however, the domain of interest is still very much that of the problem space. not until we move from (real-world) systems analysis to (software) systems design do we move from the problem space to the solution space (figure 2). it is important to observe the occurrence and location of this interface. as noted by booth [6], this provides a useful framework in object-oriented analysis and design. the design stage is perhaps the most loosely defined since it is a phase of progressive decomposition toward more and more detail (e.g., [41]) and is essentially a creative, not a mechanistic, process [42]. consequently, systems design may also be referred to as "broad design" and program design as "detailed design" [20]. brookes et al. [9] refer to these phases as "logical design" and "physical design." 
in the traditional life cycle these two design stages can become both blurred and iterative; but in the object-oriented life cycle the boundary becomes even more indistinct. the software life cycle, as described above, is frequently implemented based on a view of the world interpreted in terms of a functional decomposition; that is, the primary question addressed by the systems analysis and design is what does the system do, viz. what is its function? functional design, and the functional decomposition techniques used to achieve this, is based on the interpretation of the problem space and its translation to solution space as an interdependent set of functions or procedures. the final system is seen as a set of procedures which, apparently secondarily, operate on data. functional decomposition is also a top-down analysis and design methodology. although the two are not synonymous, most of the recently published systems analysis and design methods exhibit both characteristics (e.g., [14, 17]) and some also add a real-time component (e.g., [44]). top-down design does impose some discipline on the systems analyst and program designer; yet it can be criticized as being too restrictive to support contemporary software engineering designs. meyer [29] summarizes the flaws in top-down system design as follows: 1. top-down design takes no account of evolutionary changes; 2. in top-down design, the system is characterized by a single function---a questionable concept; 3. top-down design is based on a functional mindset, and consequently the data structure aspect is often completely neglected; 4. top-down design does not encourage reusability. (see also discussion in [41], p. 352 et seq.) brian henderson-sellers julian m. edwards script: a communication abstraction mechanism in this paper, we introduce a new abstraction mechanism, called a script, which hides the low-level details that implement patterns of communication. a script localizes the communication between a set of roles (formal processes), to which actual processes enroll in order to participate in the action of the script. the paper discusses the addition of scripts to the languages csp and ada, as well as to a shared-variable language with monitors. nissim francez brent hailpern disconnected operation in the coda file system james j. kistler m. satyanarayanan objects in concurrent logic programming languages kenneth kahn eric dean tribble mark s. miller daniel g. bobrow interfacing ada with verification languages david leeson glenn macewen david andrews the rewards of generating true 32-bit code michael franz improving the ratio of memory operations to floating-point operations in loops over the past decade, microprocessor design strategies have focused on increasing the computational power on a single chip. because computations often require more data from cache per floating-point operation than a machine can deliver and because operations are pipelined, idle computational cycles are common when scientific applications are executed. to overcome these bottlenecks, programmers have learned to use a coding style that ensures a better balance between memory references and floating-point operations. in our view, this is a step in the wrong direction because it makes programs more machine-specific. a programmer should not be required to write a new program version for each new machine; instead, the task of specializing a program to a target machine should be left to the compiler. but is our view practical?
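a back-of-the-envelope version of the static balance estimate this entry describes, for the kernel y[i] += a[i][j] * x[j]; the operation counts are hand-derived and the machine balance is an assumed figure, not a measurement.

```python
# loop balance = memory operations per iteration / flops per iteration,
# compared against the balance the target machine can sustain.
def loop_balance(mem_ops, flops):
    return mem_ops / flops

MACHINE_BALANCE = 0.8   # assumed: 0.8 memory accesses per flop sustainable

# y[i] += a[i][j] * x[j]; y[i] is held in a register across the j loop.
original = loop_balance(mem_ops=2, flops=2)        # load a[i][j], x[j]
# unroll-and-jam the i loop by two: x[j] is loaded once for both rows.
unrolled = loop_balance(mem_ops=3, flops=4)        # a[i][j], a[i+1][j], x[j]

for name, bal in (("original", original), ("unroll-and-jam by 2", unrolled)):
    verdict = "memory bound" if bal > MACHINE_BALANCE else "within machine balance"
    print(f"{name}: balance {bal:.2f} -> {verdict}")
```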
can a sophisticated optimizing compiler obviate the need for the myriad of programming tricks that have found their way into practice to improve the performance of the memory hierarchy? in this paper we attempt to answer that question. to do so, we develop and evaluate techniques that automatically restructure program loops to achieve high performance on specific target architectures. these methods attempt to balance computation and memory accesses and seek to eliminate or reduce pipeline interlock. to do this, they estimate statically the balance between memory operations and floating-point operations for each loop in a particular program and use these estimates to determine whether to apply various loop transformations. experiments with our automatic techniques show that integer-factor speedups are possible on kernels. additionally, the estimate of the balance between memory operations and computation, and the application of the estimate are very accurate---experiments reveal little difference between the balance achieved by our automatic system and that made possible by hand optimization. steve carr ken kennedy a programmable preprocessor approach to efficient parallel language design matthew rosing robert b. schnabel robert p. weaver practical applications of a syntax directed program manipulation environment we present applications in various domains of a system built around a syntax directed editor: the mentor system. the main characteristics of the system are the abstract representation of data, programmability of the command language, and language independence. the applications presented belong to the area of program editing and manipulation, extensions of programming languages through the development of preprocessors, processing of multi-formalism documents and program portability. v donzeau-gouge b. lang b. melese objective view point: casting in c++: bringing safety and smartness to your programs g. bowden wise scaling down ada (or towards a standard ada subset) henry f. ledgard andrew singer summary of fortran 88 loren p. meissner visual support for incremental abstraction and refinement in ada 95 t. dean hendrix james h. cross larry a. barowski karl s. mathias linkage in the nemesis single address space operating system timothy roscoe notes on what language maturity means, and how to measure it j. e. sammet ada for ms-windows applications paul r. pukite reusable software components trudy levine using aspectc to improve the modularity of path-specific customization in operating system code layered architecture in operating system code is often compromised by execution path-specific customizations such as prefetching, page replacement and scheduling strategies. path-specific customizations are difficult to modularize in a layered architecture because they involve dynamic context passing and layer violations. effectively they are vertically integrated slices through the layers. an initial experiment using an aspect-oriented programming language to refactor prefetching in the freebsd operating system kernel shows significant benefits, including easy (un)pluggability of prefetching modes, independent development of prefetching modes, and overall improved comprehensibility. yvonne coady gregor kiczales mike feeley greg smolyn microcomputer operating systems panel: unix are multi-user microcomputer systems a good idea? they tend to be reinventions of the timesharing mainframe mistakes of the 60's and 70's. why not give each user a real micro?
this argument leads one either to multiprocessor systems, or to networking. but unix is an interesting system, since it's inherently multi-programmed, and because multi-programming if done correctly makes your system inherently multi-user. this is a case of the principle that if you make things general, you often get unexpected benefits. unix also inherently facilitates sharing of files and other resources. other systems tend to make communal sharing difficult. unix evolved to serve a small community of cooperating users, hence a "supermicro" unix for a small department should work well (and does). unix has also, of course, been used in much larger (and less cooperative!) environments with considerable success (but not without some system maintenance). ian f. darwin performance and architectural evaluation of the psi machine kazuo taki katzuto nakajima hiroshi nakashima morihiro ikeda an object-oriented design and implementation of reusable graph objects with c++: a case study wing ning li ravi kiran two impediments to the proper use of ada donald g. firesmith linda meets minix p. ciancarini n. guerrini an example of event-driven asynchronous scheduling with ada james v. chelini donna d. hughes leonard j. hoffman denise m. brunelle asynchronous transfer of control in ada rapid event driven mode shifts and certain kinds of error recovery require a task asynchronously to affect the flow of control of another task in ways that cannot be conveniently achieved by abort alone…. w. j. toetenel j. van katwijk "the continuing adventures of "[" smith, only writer diana patterson spin - an extensible microkernel for application-specific operating system services application domains such as multimedia, databases, and parallel computing, require operating system services with high performance and high functionality. existing operating systems provide fixed interfaces and implementations to system services and resources. this makes them inappropriate for applications whose resource demands and usage patterns are poorly matched by the services provided. the _spin_ operating system enables system services to be defined in an application-specific fashion through an _extensible_ microkernel. it offers applications fine-grained control over a machine's logical and physical resources through run-time _adaptation_ of the system to application requirements. brian n. bershad craig chambers susan eggers chris maeda dylan mcnamee przemyslaw pardyak stefan savage emin gun sirer addressing security for object-oriented design and ada 95 development during the 1990s, we have seen an explosive growth in object-oriented design and development. ada95 and its strong ties to dod and government software, and java with an increasing impact on commercial internet-based and general-purpose software, both expand the base of software professionals working on object-oriented platforms. security is a paramount concern in both languages; in ada 95 to manage access to government sensitive data, and in java to control the effects of platform-independent software. organizations will demand high constancy and high assurance in object-oriented software, across a wide range of domains. for example, health care systems need instant access to data in life-critical situations, while cad applications must insure that the most up-to-date specifications on mechanical parts are available in a shared manner to promote cooperation and facilitate productivity.
the successful ada 95 tool chest for the year 2000 must contain tools and techniques that support the definition, analyses, and enforcement of security for object-oriented software. this paper investigates security for object-oriented design with a focus on its realization in ada 95. john a. reisner steven a. demurjian systematic high-level interrupts in forth b. ahmad a. adiga data requirements for implementation of n-process mutual exclusion using a single shared variable james e. burns paul jackson nancy a. lynch michael j. fischer gary l. peterson acm forum robert l. ashenhurst compiling real-time programs into schedulable code we present a programming language with first-class timing constructs, whose semantics is based on time-constrained relationships between observable events. since a system specification postulates timing relationships between events, realizing the specification in a program becomes a more straightforward process. using these constraints, as well as those imposed by data and control flow properties, our objective is to transform the code so that its worst-case execution time is consistent with its real-time requirements. to accomplish this goal we first translate an event-based source program into intermediate code, in which the timing constraints are imposed on the code itself, and then use a compilation technique which synthesizes feasible code from the original source program. seongsoo hong richard gerber stochastic reliability growth: a model with applications to computer software faults and hardware design faults an assumption commonly made in early models of software reliability is that the failure rate of a program is a constant multiple of the number of faults remaining. this implies that all faults have the same effect upon the overall failure rate. the assumption is challenged and an alternative proposed. the suggested model results in earlier fault-fixes having a greater effect than later ones (the worst faults show themselves earlier and so are fixed earlier), and the dfr property between fault-fixes (confidence in programs increases during periods of failure-free operations, as well as at fault- fixes). the model shows a high degree of mathematical tractability, and allows a range of reliability measures to be calculated exactly. predictions of total execution time to achieve a target reliability, and total number of fault- fixes to target reliability, are obtained. it is suggested that the model might also find applications in those hardware reliability growth situations where design errors are being eliminated. b. littlewood using annotations to reduce dynamic optimization time _dynamic compilation and optimization are widely used in heterogenous computing environments, in which an intermediate form of the code is compiled to native code during execution. an important trade off exists between the amount of time spent dynamically optimizing the program and the running time of the program. the time to perform dynamic optimizations can cause significant delays during execution and also prohibit performance gains that result from more complex optimization._ _in this research, we present an annotation framework that substantially reduces compilation overhead of java programs. annotations consist of analysis information collected off-line and are incorporated into java programs. the annotations are then used by dynamic compilers to guide optimization. 
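a rough illustration of how offline annotations might steer a dynamic compiler, as in the krintz and calder abstract above (which continues below); the annotation names and the compilation decisions here are invented, not the ones from their framework.

    # sketch: offline analysis attaches annotations to methods; a (simulated)
    # dynamic compiler consults them instead of re-deriving the facts at run time.
    # the annotation names and method names are invented examples.

    OFFLINE_ANNOTATIONS = {
        # method name -> facts proved ahead of time
        "Image.blur": {"hot": True,  "no_null_checks_needed": True},
        "Image.save": {"hot": False, "no_null_checks_needed": False},
    }

    def compile_method(name, annotations):
        facts = annotations.get(name, {})
        plan = []
        if facts.get("hot"):
            plan.append("optimize aggressively")     # skip the profiling warm-up phase
        else:
            plan.append("compile quickly, no optimization")
        if facts.get("no_null_checks_needed"):
            plan.append("omit null checks")          # safety was proved offline
        return plan

    if __name__ == "__main__":
        for m in ("Image.blur", "Image.save"):
            print(m, "->", compile_method(m, OFFLINE_ANNOTATIONS))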
the annotations we present reduce compilation overhead incurred at all stages of compilation and optimization as well as enable complex optimizations to be performed dynamically. on average, our annotation optimizations reduce optimized compilation overhead by 78% and enable speedups of 7% on average for the programs examined._ chandra krintz brad calder book review: linux configuration and installation, second edition harvey friedman daistish: systematic algebraic testing for oo programs in the presence of side-effects daistish is a tool that performs systematic algebraic testing similar to gannon's daists tool [2]. however, daistish creates effective test drivers for programs in languages that use side effects to implement adts; this includes c++ and most other object-oriented languages. the functional approach of daists does not apply directly in these cases. the approach in our work is most similar to the astoot system of doong and frankl [1]; daistish differs from astoot by using guttag-style algebraic specs (functional notation), by allowing aliasing of type names to tailor the application of parameters in test cases, and by retaining the abilities of daists to compose new test points from existing ones. daistish is a perl script, and is compact and practical to apply. we describe the implementation and our experiments in both eiffel and c++. our work has concentrated on solving the semantics-specific issues of correctly duplicating objects for comparison; we have not worked on methods for selecting specific test cases. daistish consists of a perl script and supporting documentation. the current distribution can be obtained via www at http://www.cs.unc.edu/~stotts/daistish/ merlin hughes david stotts the power of partial translation: an experiment with the c-ification of binary prolog paul tarau bart demoen koen de bosschere kernel korner supporting multiple kernel versions: expect scripts to help you support multiple versions of the kernel across different platforms. tony wildish introducing virtual instance variables in classes to provide sufficient support for encapsulation li xuedong zheng guoliang a scalable load balancing system for nows sanglu lu xie li optimizing the idle task and other mmu tricks cort dougan paul mackerras victor yodaiken operating system kernel automatic construction each year a large amount of resources has been devoted to porting operating system kernels from one machine to another. this paper proposes a new system which is able to construct operating system kernels automatically based on kernel functions specification and hardware interface specification. the system can increase the efficiency and productivity of porting operating systems or building new systems. it can also generate more reliable kernels than human programmers. xiaohua jia mamoru maekawa validating programs without specifications this work was supported by the office of naval research and the naval weapons center w. howden two-level hybrid interpreter/native code execution for combined space-time program efficiency t. pittman algorithms for on-the-fly garbage collection mordechai ben-ari a simple bucket-brigade advancement mechanism for generation-based garbage collection p. r. wilson object oriented methods using fortran 90 brian j. dupee a distributed garbage collector for active objects this paper presents an algorithm that performs garbage collection in distributed systems of active objects (i.e., objects having their own threads of control).
our proposition extends the basic marking algorithm proposed by kafura in [1] to a distributed environment. the proposed garbage collector is made up of a set of local garbage collectors, one per site, loosely coupled to a (logically centralized) global garbage collector that maintains a global snapshot of the system state relevant to garbage collection. the specific features of the proposed garbage collector are that local garbage collectors need not be synchronized with each other for detecting garbage objects, and that faulty sites and communication channels are tolerated. the paper describes the proposed garbage collector, together with its implementation and performance for a concurrent object-oriented language running on a local area network of workstations. isabelle puaut shared logging services for fault-tolerant distributed computing dean daniels roger haskin jon reinke wayne sawdon linux events corporate linux journal staff configuring designs for reuse anssi karhinen alexander ran tapio tallgren software reuse standards james baldo james moore david rine embedded control systems development with giotto giotto is a principled, tool-supported design methodology for implementing embedded control systems on platforms of possibly distributed sensors, actuators, cpus, and networks. giotto is based on the principle that time- triggered task invocations plus time-triggered mode switches can form the abstract essence of programming real-time control systems. giotto consists of a programming language with a formal semantics, and a retargetable compiler and runtime library. giotto supports the automation of control system design by strictly separating platform-independent functionality and timing concerns from platform-dependent scheduling and communication issues. the time- triggered predictability of giotto makes it particularly suitable for safety- critical applications with hard real-time constraints. we illustrate the platform- independence and time-triggered execution of giotto by coordinating a heterogeneous flock of intel x86 robots and lego mindstorms robots. thomas a. henzinger benjamin horowitz christoph meyer kirsch engineering the enterprise: evolving quality systems malcolm slovin donn di nunno 'those silly bastards': a report on some users' views of documentation russell e. borland a cots-based design editor for user specified domains bob blazer efficient implementation of the first-fit strategy for dynamic storage allocation we describe an algorithm that efficiently implements the first-fit strategy for dynamic storage allocation. the algorithm imposes a storage overhead of only one word per allocated block (plus a few percent of the total space used for dynamic storage), and the time required to allocate or free a block is o(log w), where w is the maximum number of words allocated dynamically. the algorithm is faster than many commonly used algorithms, especially when many small blocks are allocated, and has good worst-case behavior. it is relatively easy to implement and could be used internally by an operating system or to provide run-time support for high-level languages such as pascal and ada. a pascal implementation is given in the appendix. r. p. brent technical correspondence diane crawford an apl simulation of feedback systems practical feedback systems involve interacting linear and nonlinear components. 
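returning to the brent abstract above on dynamic storage allocation: brent's structure achieves o(log w) per request, which the simple free-list sketch below does not attempt; it only illustrates the first-fit placement rule itself, with invented block sizes.

    # sketch: the first-fit policy over an address-ordered free list.
    # brent's paper gets o(log w) per request with a cleverer structure;
    # this linear scan only illustrates the placement and coalescing rules.

    class FirstFitHeap:
        def __init__(self, size):
            self.free = [(0, size)]          # list of (address, length) in address order

        def allocate(self, n):
            for i, (addr, length) in enumerate(self.free):
                if length >= n:              # first block that fits wins
                    if length == n:
                        del self.free[i]
                    else:
                        self.free[i] = (addr + n, length - n)
                    return addr
            raise MemoryError("no block large enough")

        def release(self, addr, n):
            # insert the freed block and coalesce with adjacent free blocks
            self.free.append((addr, n))
            self.free.sort()
            merged = [self.free[0]]
            for a, l in self.free[1:]:
                pa, pl = merged[-1]
                if pa + pl == a:
                    merged[-1] = (pa, pl + l)
                else:
                    merged.append((a, l))
            self.free = merged

    if __name__ == "__main__":
        h = FirstFitHeap(100)
        a = h.allocate(30)
        h.allocate(20)
        h.release(a, 30)
        print(h.allocate(10))   # 0: first fit reuses the freed block at the front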
modern techniques of design are based on representing a feedback system by a set of first order nonlinear differential equations and using a digital computer to obtain experimental solutions. in this paper, apl notation is used in a concise development of the mathematical basis of a computing algorithm, and in the realization of an actual system which is effective from the standpoints of ease of use, complexity of feedback systems that can be modeled, and computing efficiency. ease of use is important because a feedback systems engineer is not necessarily skilled in computing and, when using a computer simulation, should be permitted to think in terms of the customary concepts of feedback systems. wilbur r. lepage richard mcfee a diagram for object-oriented programs we introduce a notation for diagramming the message sending dialogue that takes place between objects participating in an object-oriented computation. our representation takes a global point of view which emphasizes the collaboration between objects implementing the behavior of individuals. we illustrate the diagram's usage with examples drawn from the smalltalk-80 virtual image. we also describe a mechanism for automatic construction of diagrams from smalltalk code. ward cunningham kent beck prettyprinting an algorithm for prettyprinting is given. for an input stream of length n and an output device with linewidth m, the algorithm requires time o(n) and space o(m). the algorithm is described in terms of two parallel processes: the first scans the input stream to determine the space required to print logical blocks of tokens; the second uses this information to decide where to break lines of text; the two processes communicate by means of a buffer of size o(m). the algorithm does not wait for the entire stream to be input, but begins printing as soon as it has received a full line of input. the algorithm is easily implemented. dereck c. oppen dimensional analysis in ada p. rogers generation scavenging: a non-disruptive high performance storage reclamation algorithm many interactive computing environments provide automatic storage reclamation and virtual memory to ease the burden of managing storage. unfortunately, many storage reclamation algorithms impede interaction with distracting pauses. generation scavenging is a reclamation algorithm that has no noticeable pauses, eliminates page faults for transient objects, compacts objects without resorting to indirection, and reclaims circular structures, in one third the time of traditional approaches. we have incorporated generation scavenging in berkeley smalltalk(bs), our smalltalk-80 implementation, and instrumented it to obtain performance data. we are also designing a microprocessor with hardware support for generation scavenging. david ungar concurrent programming language support for invocation handling: design and implementation ronald a. olsson complexity measurement of electronic switching system (ess) software we have been developing a tool that measures the complexity of software: 1) to predict the quality of software products and 2) to allocate proportionally more testing resources to complex modules. the software being measured is real-time and controls telephone switching systems. this software system is large and its development is distributed over a period of several years, with each release providing enhancements and bug fixes. we have developed a two-stage tool consisting of a parser and an analyzer. 
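returning to the lepage and mcfee abstract above on feedback system simulation: the original uses apl; the python sketch below only illustrates the underlying idea of integrating first-order state equations numerically, with an invented plant and controller rather than the systems from the paper.

    # sketch: simulate a feedback loop by integrating a first-order state
    # equation with euler steps. the plant, gain, and saturation limits
    # are invented examples.

    def simulate(setpoint=1.0, kp=4.0, dt=0.01, steps=500):
        x = 0.0                                   # single plant state
        history = []
        for _ in range(steps):
            error = setpoint - x
            u = max(-2.0, min(2.0, kp * error))   # proportional control with saturation
            dx = -x + u                           # first-order plant: x' = -x + u
            x += dt * dx                          # euler step
            history.append(x)
        return history

    if __name__ == "__main__":
        trace = simulate()
        print(round(trace[-1], 3))   # about 0.8: the steady-state error of a pure proportional loop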
the parser operates on the source code and produces operator, operand, and miscellaneous tables. these tables are then processed by an analyzer program that calculates the complexity measures. changes for tuning our halstead counting rules involve simple changes to the analyzer only. during the development there were problems and issues to be confronted dealing with static analysis and code metrics. these are also described in this paper. in several systems we found that more than 80% of software failures can be traced to only 20% of the modules in the system. the mccabe complexity and some of halstead's metrics score higher than the count of executable statements in their correlations with field failures. it is reasonable to expect that we could devote more effort to the review and test of high- complexity modules and increase the quality of the software product that we send to the field. david r. gross mary a. king michael r. murr michael r. eddy a composite model checking toolset for analyzing software systems tevfik bultan from the editor: open source's first six months eric s. raymond software reuse myths will tracz procedure merging with instruction caches scott mcfarling xfm 1.3 a file and applications manager: a non-file manager user discovers the usefulness andflexibility of xfm. robert dalrymple design metrics and ada paul a. szulewski nancy m. sodano british apl jill moss an integrated ada programming environment: awa g. xing h. rau b. liu j. shen m.-y. zhu translating an adapt partition to ada9x s. j. goldsack a. a. holzbacher-valero r. volz r. waldrop first step to forth engine construction yong m. lee edward conjura the internet worm program: an analysis eugene h. spafford x/motif programming and god said "let there be light"! ibrahim f. haddad describing working environments in opm in opm, each software process provides a working environment in which programmers can actually work in order to accomplish a designated task, rather than prescribing the algorithm of the task [1], or giving a behavioral description of the task [4]. a process will (i) collect necessary resources, (ii) collect necessary activities, and (iii) specify certain constraint on the execution of activities. a process will also (iv) navigate activities to be performed by a human, (v) execute activities asked by a human, and (vi) execute some activities automatically when certain conditions are met. each working environment may consist of a different set of resources and activities depending on the task to be performed within it. thus the software development environment as a whole will be a collection of smaller and heterogeneous working environments. in opm, process templates are described in a process programming language called galois [3] which is an extension of c++ [2]. as an example consider the working on bug process illustrated in figure 1. in the working on bug process, a typical edit-compile-run cycle will be performed in order to fix a bug of given source files. figure 2 will give a skeleton of the working on bug process in galois. yasuhiro sugiyama ellis horowitz second chances: rationale for the reprint series t. r. girill early experiences with euclid the programming language euclid was designed to be used for the construction of reliable and efficient systems software. this paper discusses the authors' experience in the design and implementation of the first large (about 60,000 source lines) piece of software written in euclid. 
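returning to the gross, king, murr, and eddy abstract above: a sketch of the analyzer stage it describes, computing halstead's measures from operator and operand frequency tables; the table contents below are invented, and the mccabe measure is omitted.

    # sketch: halstead measures computed from the operator/operand tables
    # that a parser front end would emit. table contents are made up.
    import math

    operators = {"=": 5, "+": 3, "if": 2, "while": 1}   # token -> occurrence count
    operands  = {"i": 6, "n": 4, "sum": 5, "0": 2}

    n1, n2 = len(operators), len(operands)              # distinct operators / operands
    N1, N2 = sum(operators.values()), sum(operands.values())

    vocabulary = n1 + n2
    length     = N1 + N2
    volume     = length * math.log2(vocabulary)
    difficulty = (n1 / 2) * (N2 / n2)
    effort     = difficulty * volume

    print(f"length={length} volume={volume:.1f} difficulty={difficulty:.1f} effort={effort:.0f}")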
the emphasis in this paper is on how the various language features in euclid affected the implementation of the software. david b. wortman james r. cordy register allocation & spilling via graph coloring in a previous paper we reported the successful use of graph coloring techniques for doing global register allocation in an experimental pl/i optimizing compiler. when the compiler cannot color the register conflict graph with a number of colors equal to the number of available machine registers, it must add code to spill and reload registers to and from storage. previously the compiler produced spill code whose quality sometimes left much to be desired, and the ad hoc techniques used took considerable amounts of compile time. we have now discovered how to extend the graph coloring approach so that it naturally solves the spilling problem. spill decisions are now made on the basis of the register conflict graph and cost estimates of the value of keeping the result of a computation in a register rather than in storage. this new approach produces better object code and takes much less compile time. g. j. chaitin macros for defining c++ classes conrad weisert problems in application software maintenance the problems of application software maintenance in 487 data processing organizations were surveyed. factor analysis resulted in the identification of six problem factors: user knowledge, programmer effectiveness, product quality, programmer time availability, machine requirements, and system reliability. user knowledge accounted for about 60 percent of the common problem variance, providing new evidence of the importance of the user relationship for system success or failure. problems of programmer effectiveness and product quality were greater for older and larger systems and where more effort was spent in corrective maintenance. larger scale data processing environments were significantly associated with greater problems of programmer effectiveness, but with no other problem factor. product quality was seen as a lesser problem when certain productivity techniques were used in development. bennet p. lientz e. burton swanson composition of before/after metaclasses in som in som, the ibm system object model, a class is a run-time object that defines the behavior of its instances by creating an instance method table. because classes are objects, their behavior is defined by other classes (called metaclasses). for example, a "before/after metaclass" can be used to define the implementation of classes that, by suitable construction of their instance method tables, arrange for each invocation of a method to be preceded by execution of a "before method" and followed by execution of an "after method". this paper introduces and solves the problem of composing different before/after metaclasses in the context of som. an enabling element in the solution is som's concept of derived metaclasses, i.e., at run-time a som system derives the appropriate metaclass of a class based on the classes of its parents and an optional metaclass constraint. ira r. forman scott danforth hari madduri system administration: how to log friends and influence people mark komarinski a distributed virtual machine to support software process vincenzo ambriola giovanni a.
cignoni more comments about "object-oriented documentation" russell borland automatic abstraction for model checking software systems with interrelated numeric constraints model checking techniques have not been effective in important classes of software systems characterized by large (or infinite) input domains with interrelated linear and non-linear constraints over the input variables. various model abstraction techniques have been proposed to address this problem. in this paper, we wish to propose _domain abstraction_ based on data equivalence and trajectory reduction as an alternative and complement to other abstraction techniques. our technique applies the abstraction to the input domain (environment) instead of the model and is applicable to _constraint-free_ and deterministic _constrained_ data transition system. our technique is automatable with some minor restrictions. yunja choi sanjai rayadurgam mats p.e. heimdahl fast parallel implementation of lazy languages - the equals experience this paper describes equals, a fast parallel implementation of a lazy functional language on a commercially available shared-memory parallel machine, the sequent symmetry. in contrast to previous implementations, we detect parallelism automatically by propagating exhaustive (normal form) demand. another important difference between equals and previous implementations is the use of reference counting for memory management instead of garbage collection. our implementation shows that reference counting leads to very good scalability, low memory requirements and improved locality. we compare our results with sequntial sml/nj as well as parallel (v, g-machine and gaml implementations. o. kaser s. pawagi c. r. ramakrishnan i. v. ramakrishnan r. c. sekar an o(nlog n) unidirectional algorithm for the circular extrema problem gary l. peterson the notion of inheritance in object-oriented programming kenneth baclawski bipin indurkhya using the vax/vms lock manager with ada tasks james j. maloney object-oriented design: a responsibility-driven approach object- oriented programming languages support encapsulation, thereby improving the ability of software to be reused, refined, tested, maintained, and extended. the full benefit of this support can only be realized if encapsulation is maximized during the design process. we argue that design practices which take a data-driven approach fail to maximize encapsulation because they focus too quickly on the implementation of objects. we propose an alternative object-oriented design method which takes a responsibility-driven approach. we show how such an approach can increase the encapsulation by deferring implementation issues until a later stage. r. wirfs-brock b. wilkerson simulating inheritance with ada e. perez what's gnu? plan 9 part ii arnold robbins formal specification of corba services: experience and lessons learned remi bastide philippe palanque ousmane sy david navarre timing extensions to structured analysis for real time systems l. peters laziness pays! using lazy synchronization mechanisms to improve non-blocking constructions we present a simple and efficient wait- free implementation of lazy large load-linked/store-conditional (lazy-ll/sc), which can be used to atomically modify a dynamically-determined set of shared variables in a lock-free manner. the semantics of lazy-ll/sc is weaker than that of similar objects used by us previously to design lock-free and wait- free constructions, and as a result can be implemented more efficiently. 
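the lazy-ll/sc abstract above (which continues below) builds on the load-linked/store-conditional pattern; the sketch below is only a toy simulation of that pattern, using a version counter and a retry loop, not the wait-free lazy-ll/sc construction from the paper.

    # sketch: simulated load-linked/store-conditional. sc succeeds only if no
    # store happened since the matching ll, so an update can retry optimistically
    # without holding a lock across the computation.
    import threading

    class LLSCCell:
        def __init__(self, value):
            self._value, self._version = value, 0
            self._lock = threading.Lock()        # stands in for the hardware primitive

        def ll(self):
            with self._lock:
                return self._value, self._version

        def sc(self, seen_version, new_value):
            with self._lock:
                if self._version != seen_version:
                    return False                 # someone stored in between: fail
                self._value, self._version = new_value, self._version + 1
                return True

    def atomic_increment(cell):
        while True:                              # classic optimistic retry loop
            value, version = cell.ll()
            if cell.sc(version, value + 1):
                return

    if __name__ == "__main__":
        cell = LLSCCell(0)
        threads = [threading.Thread(target=lambda: [atomic_increment(cell) for _ in range(1000)])
                   for _ in range(4)]
        for t in threads: t.start()
        for t in threads: t.join()
        print(cell.ll()[0])                      # 4000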
however, we show that lazy-ll/sc is strong enough to be used in existing non- blocking universal constructions and to build new ones. mark moir book review: inside linux corporate linux journal staff object oriented requirements analysis in an ada project maria manuel freitas ana moreira pedro guerreiro structured programming in the 1980s the "structured revolution" is over ten years old. as we enter the 1980s, the major trends growing out of the work on structured program design can be seen. in this paper, the author discusses the major elements of development for the next ten years. among them are: 1. the emergence of the theoretical programmer. 2. structured program design. 3. data structured design. 4. an algebra of logical structures. 5. improvements in programming languages, operating systems, and data base management systems. 6. models and generators. kenneth t. orr compiling a functional language this paper summarizes my experience in implementing a compiler for a functional language. the language is ml(1) [milner 84] and the compiler was first implemented in 1980 as a personal project when i was a postgraduate student at the university of edinburgh(2). at the time, i was familiar with programming language semantics but knew very little about compiler technology; interpreters had been my main programming concern. major influences in the design of this compiler have been [steele 77] [steele 78] and the implementation folklore for statically and dynamically scoped dialects of lisp [allen 78]. as a result, the internal structure of the compiler is fairly unorthodox, if compared for example with [aho 78]. anyway, a compiler for a language like ml has to be different. ml is interactive, statically scoped, strongly typed, polymorphic, and has first class higher- order functions, type inference and dynamic allocation. these features preclude many well-known implementation styles, particularly the ones used for lisp (because of static scoping), the algol family (because of functional values) and c (because of nested scoping and strong typing). the interaction of these features is what gives ml its "character", and makes compilation challenging. the compiler has been recently partially converted to the new ml standard. the major points of interest which are discussed in this paper are: (a) the interactive interpreter-like usage; (b) the polymorphic type inference algorithm; (c) the compilation of pattern matching; (d) the optimization of the representation of user defined data types; (e) the compilation of functional closures, function application and variable access; (f) the intermediate abstract machine and its formal operational semantics; (g) modules and type-safe separate compilation. luca cardelli ada measurement based on software quality principles s. e. keller j. a. perkins parasitic methods: an implementation of multi-methods for java john boyland giuseppe castagna the pascal-xt code generator k. h. drechsler m. p. stadel a semantic model of program faults program faults are artifacts that are widely studied, but there are many aspects of faults that we still do not understand. in addition to the simple fact that one important goal during testing is to cause failures and thereby detect faults, a full understanding of the characteristics of faults is crucial to several research areas in testing. these include fault-based testing, testability, mutation testing, and the comparative evaluation of testing strategies. 
in this workshop paper, we explore the fundamental nature of faults by looking at the differences between a syntactic and semantic characterization of faults. we offer definitions of these characteristics and explore the differentiation. specifically, we discuss the concept of "size" of program faults --- the measurement of size provides interesting and useful distinctions between the syntactic and semantic characterization of faults. we use the fault size observations to make several predictions about testing and present preliminary data that supports this model. we also use the model to offer explanations about several questions that have intrigued testing researchers. a. jefferson offutt j. huffman hayes software requirements negotiation: some lessons learned barry boehm alexander egyed status of object-oriented cobol (panel) j. g. van stee dan clarke david filani dmitry lenkov raymond obin a source code documentation system for ada y. c. wu t. p. baker open implementation analysis and design chris maeda arthur lee gail murphy gregor kiczales on prolog and the occur check problem k. marriott h. sondergaard letters to the editor corporate linux journal staff fine-grained mobility in the emerald system the emerald compiler analyzes object definitions and attempts to produce efficient implementations commensurate with the way in which objects are used. for example, an object that moves around the network will require a very general remote procedure call implementation; however, an object that is completely internal to that mobile object can be implemented using direct memory addressing and inline code or procedure calls. we wanted to achieve performance competitive with standard procedural languages in the local case and standard remote procedure call systems in the remote case. these goals are not trivial in a location-independent object- based environment. to meet them, we relied heavily on an appropriate choice of language semantics, a tight coupling between the compiler and run-time kernel, and careful attention to implementation. as an example of emerald's local performance, table 1 shows execution times for several local emerald operations executed on a micro vax ii1. the "resident global invocation" time is for a global object (i.e., one that can move around the network) when invoked by another object resident on the same node. by comparison, other object-based distributed systems are typically over 100 times slower for local invocations of their most general objects [6, 1]. the emerald language uses call- by-object-reference parameter passing semantics for all invocations, local or remote. while call-by-object-reference is the natural semantics for object- based systems, it presents a potential performance problem in a distributed environment. when a remotely invoked object attempts to access its arguments, those accesses will typically require remote invocations. because emerald objects are mobile, it may be possible to avoid some of these remote references by moving argument objects to the site of a remote invocation. from this table we can compute the benefit of call-by-move for a simple argument object. for this simple argument object, the additional cost of call- by-move was 2 milliseconds while call-by-visit cost 6.4 milliseconds. these are computed by subtracting the time for a remote invocation with an argument reference that is local to the destination. 
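returning to the offutt and hayes abstract above on fault size: one back-of-the-envelope way to estimate a fault's semantic size is to sample inputs and count how often the faulty program disagrees with the correct one; the two program versions and the input domain below are invented examples, not their study.

    # sketch: estimate the semantic size of a fault as the fraction of the
    # input domain on which the faulty program's output differs from the
    # correct program's output.
    import random

    def correct(x):
        return abs(x)

    def faulty(x):
        return x if x >= -1 else -x     # boundary fault (>= -1 instead of >= 0): wrong only at x == -1

    def semantic_size(good, bad, domain, samples=100_000):
        disagree = 0
        for _ in range(samples):
            x = random.choice(domain)
            if good(x) != bad(x):
                disagree += 1
        return disagree / samples

    if __name__ == "__main__":
        domain = list(range(-1000, 1001))
        # a syntactically tiny fault with a tiny semantic size (about 1/2001)
        print(semantic_size(correct, faulty, domain))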
the call-by-visit time includes sending the invocation message and the argument object, performing the remote invocation (which then invokes its argument), and returning the argument object with the reply. had the argument been a reference to a remote object (i.e., had the object not been moved), the incremental cost would have been 30.8 milliseconds. these measurements are somewhat of a lower bound because the cost of moving an object depends on the complexity of the object and the types of objects it names. emerald currently executes on a small network of microvax iis and has recently been ported to the sun 32. we have concentrated on implementing fine-grained mobility in emerald while minimizing its impact on local performance. this has presented significant problems; however, through the use of language support and a tightly-coupled compiler and kernel, we believe that our design has been successful in meeting both its conceptual and performance goals. e. jul h. levy n. hutchinson a. black using java servlets with database connectivity the persistent nature of java servlets makes them ideal for database/web technology. mr. mcdonald takes a look at using servlets with postgresql and jdbc bruce mcdonald static checking of interrupt-driven software resource-constrained devices are becoming ubiquitous. examples include cell phones, palm pilots, and digital thermostats. it can be difficult to fit required functionality into such a device without sacrificing the simplicity and clarity of the software. increasingly complex embedded systems require extensive brute-force testing, making development and maintenance costly. this is particularly true for system components that are written in assembly language. static checking has the potential of alleviating these problems, but until now there has been little tool support for programming at the assembly level. in this paper we present the design and implementation of a static checker for interrupt-driven z86-based software with hard real-time requirements. for six commercial microcontrollers, our checker has produced upper bounds on interrupt latencies and stack sizes, as well as verified fundamental safety and liveness properties. our approach is based on a known algorithm for model checking of pushdown systems, and produces a control-flow graph annotated with information about time, space, safety, and liveness. each benchmark is approximately 1000 lines of code, and the checking is done in a few seconds on a standard pc. our tool is one of the first to give an efficient and useful static analysis of assembly code. it enables increased confidence in correctness, significantly reduced testing requirements, and support for maintenance throughout the system life- cycle. dennis brylow niels damgaard jens palsberg developing a guide using object-oriented programming joseph a. konstan lawrence a. rowe gnatdist: a configuration language for distributed ada 95 applications yvon kermarrec laurent nana laurent pautet lisp: introduction john foderaro technical correspondence i read with interest peter pearson's article, "fast hashing of variable-length text strings" (june 1990, pp. 677-680). in it he defines a hash function, given a text c1 … cn, by exclusive or'ing the bytes and modifying each intermediate result through a table of 256 randomish bytes. diane crawford futures: a mechanism for concurrency among objects a future provides the basic primitive through which a user in an object- oriented distributed system can achieve concurrency. 
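returning to the technical correspondence above on pearson's string hashing: the scheme is described concretely enough to sketch, exclusive-or each byte into the running value and push the result through a 256-entry table of shuffled bytes; the table below is just a seeded shuffle for illustration, not pearson's published table.

    # sketch of the hashing scheme discussed in the correspondence: xor each
    # byte into the running hash and replace the result via a 256-byte table.
    import random

    rng = random.Random(42)
    TABLE = list(range(256))
    rng.shuffle(TABLE)

    def pearson_hash(text):
        h = 0
        for byte in text.encode("utf-8"):
            h = TABLE[h ^ byte]
        return h                         # a single byte, 0..255

    if __name__ == "__main__":
        for s in ("alpha", "alphb", "beta"):
            print(s, pearson_hash(s))    # small input changes scatter the output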
it is based on the notion of being able to translate what appears to be a remote procedure call into a request for computation to be scheduled by the system. the mechanism required to make this translation is discussed. refinements to the future mechanism allow futures to be passed as arguments to other procedure calls. this provides the user with the added flexibility of implementing synchronization schemes suited to specific needs. further, it simplifies the process of migrating an object which has outstanding futures at the time of migration. a. chatterjee characterization of object behaviour in standard ml of new jersey we describe a method of measuring lifetime characteristics of heap objects, and discuss ways in which such quantitative object behaviour measurements can help improve language implementations, especially garbage collection performance. for standard ml of new jersey, we find that certain primary aspects of object behaviour are qualitatively the same across benchmark programs, in particular the rapid object decay. we show that the heap-only allocation implementation model is the cause of this similarity. we confirm the weak generational hypothesis for sml/nj and discuss garbage collector configuration tuning. our approach is to obtain object statistics directly from program execution, rather than simulation, for reasons of simplicity and speed. towards this end, we exploit the flexibility of the garbage collector toolkit as a measurement tool. careful numerical analysis of the acquired data is necessary to arrive at relevant object lifetime measures. this study fills a gap in quantitative knowledge of the workings of heap-based compilers and their run-time systems, and should be useful to functional language implementors. darko stefanovic j. eliot b. moss an array operation synthesis scheme to optimize fortran 90 programs an increasing number of programming languages, such as fortran 90 and apl, are providing a rich set of intrinsic array functions and array expressions. these constructs which constitute an important part of data parallel languages provide excellent opportunities for compiler optimizations. in this paper, we present a new approach to combine consecutive data access patterns of array constructs into a composite access function to the source arrays. our scheme is based on the composition of access functions, which is similar to a composition of mathematic functions. our new scheme can handle not only data movements of arrays of different numbers of dimensions and segmented array operations but also masked array expressions and multiple sources array operations. as a result, our proposed scheme is the first synthesis scheme which can synthesize fortran 90 reshape, eoshift, merge, and where constructs together. experimental results show speedups from 1.21 to 2.95 for code fragments from real applications on a sequent multiprocessor machine by incorporating the proposed optimizations. gwan-hwan hwang jenq kuen lee dz-ching ju nesting in ada programs is for the birds given a data abstraction construct like the ada package and in light of current thoughts on programming methodology, we feel that nesting is an anachronism. in this paper we propose a nest-free program style for ada that eschews nested program units and declarations within blocks and instead heavily utilizes packages and context specifications as mechanisms for controlling visibility. 
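returning to the hwang, lee, and ju abstract above on array operation synthesis: a much-simplified sketch of composing access functions so that consecutive array operations need no temporaries; the shift and reverse mappings below are invented stand-ins for fortran 90 intrinsics such as eoshift and reshape.

    # sketch: instead of materializing a temporary for each array operation,
    # compose the index mappings and read the source array once per element.

    def shift(k, n):
        # access function of a circular shift by k over length n
        return lambda i: (i + k) % n

    def reverse(n):
        return lambda i: n - 1 - i

    def compose(f, g):
        # result[i] = source[f(g(i))]: one composite access function
        return lambda i: f(g(i))

    def apply_access(source, access):
        return [source[access(i)] for i in range(len(source))]

    if __name__ == "__main__":
        a = [10, 20, 30, 40, 50]
        naive = apply_access(apply_access(a, shift(1, 5)), reverse(5))   # two passes, one temporary
        fused = apply_access(a, compose(shift(1, 5), reverse(5)))        # one pass, no temporary
        print(naive, fused, naive == fused)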
we view this proposal as a first step toward the development of programming methods that exploit the novel language features available in ada. consideration of this proposal's ramifications for data flow, control flow, and overall program structure substantiates our contention that a tree structure is seldom a natural representation of a program and that nesting therefore generally interferes with program development and readability. lori a. clarke jack c. wileden alexander l. wolf reflections on the verification of the security of an operating system kernel this paper discusses the formal verification of the design of an operating system kernel's conformance to the multilevel security property. the kernel implements multiple protection structures to support both discretionary and nondiscretionary security policies. the design of the kernel was formally specified. mechanical techniques were used to check that the design conformed to the multilevel security property. all discovered security flaws were then either closed or minimized. this paper considers the multilevel security model, the verification methodology, and the verification tools used. this work is significant for two reasons. first, it is for a complete implementation of a commercially available secure computer system. second, the verification used off-the-shelf tools and was not done by the verification environment researchers. jonathan m. silverman understanding the limitations of causally and totally ordered communication david r. cheriton dale skeen file hoarding under nfs and linux dorota m. huizinga heather sherman ada as a software transition tool our agency plans to use the ada programming language as a vehicle to transport a locally written inquiry system from burroughs' equipment to another vendor's hardware. this is being done in the following manner. a bootstrap ada translator has been written in pascal to generate burroughs' algol. the ada translator will be recoded into ada. the inquiry system will be rewritten into ada from algol and executed on the current burroughs' machine. when the new hardware is selected, the ada translator will be retargeted to generate an efficient language on that machine, thereby transporting the inquiry system. the ada translator will be discarded for an ada compiler on the new system. gary l. filipski donald r. moore john e. newton generalized regular expressions: a programming exercise in haskell e. p. wentworth ada and the x window system stu lewin kirk beitz christopher byrnes michael hardy rich hilliard craig warsaw how to steal from a limited private account: why mode in out parameters for limited types must be passed by reference henry g. baker practicing judo: java under dynamic optimizations a high-performance implementation of a java virtual machine (jvm) consists of efficient implementation of just-in-time (jit) compilation, exception handling, synchronization mechanism, and garbage collection (gc). these components are tightly coupled to achieve high performance. in this paper, we present some static anddynamic techniques implemented in the jit compilation and exception handling of the microprocessor research lab virtual machine (mrl vm), i.e., lazy exceptions, lazy gc mapping, dynamic patching, and bounds checking elimination. our experiments used ia-32 as the hardware platform, but the optimizations can be generalized to other architectures. michal cierniak guei-yuan lueh james m. 
stichnoth distributed ada and real-time (session summary) michael gonzalez harbour distributed computing in a nump (non-uniform message-passing) environment cui- qing yang optimizing memory usage in higher-order programming languages: theoretical and experimental studies mitchell wand william d. clinger resourceful systems for fault tolerance, reliability, and safety above all, it is vital to recognize that completely guaranteed behavior is impossible and that there are inherent risks in relying on computer systems in critical environments. the unforeseen consequences are often the most disastrous [neumann 1986]. section 1 of this survey reviews the current state of the art of system reliability, safety, and fault tolerance. the emphasis is on the contribution of software to these areas. section 2 reviews current approaches to software fault tolerance. it discusses why some of the assumptions underlying hardware fault tolerance do not hold for software. it argues that the current software fault tolerance techniques are more accurately thought of as delayed debugging than as fault tolerance. it goes on to show that in providing both backtracking and executable specifications, logic programming offers most of the tools currently used in software fault tolerance. section 3 presents a generalization of the recovery block approach to software fault tolerance, called _resourceful systems_. systems are resourceful if they are able to determine whether they have achieved their goals or, if not, to develop and carry out alternate plans. section 3 develops an approach to designing resourceful systems based upon a functionally rich architecture and an explicit goal orientation. russell j. abbott window management, graphics, and operating systems robert l. gordon dynamically-valued constants: an underused language feature jonathan l. schilling protecting against uninitialized abstract objects in modula-2 r s weiner automatic alignment of array data and processes to reduce communication time on dmpps this paper investigates the problem of aligning array data and processes in a distributed-memory implementation. we present complete algorithms for compile- time analysis, the necessary program restructuring, and subsequent code- generation, and discuss their complexity. we finally evaluate the practical usefulness by quantitative experiments. the technique presented analyzes complete programs, including branches, loops, and nested parallelism. alignment is determined with respect to offset, stride, and general axis relations. placement of both data and processes are computed in a unifying framework based on an extended preference graph and its analysis. dynamic redistributions are derived. the experimental results are very encouraging. the optimization algorithms implemented in our modula-2* compiler improved the execution times of the programs by an average over 40% on a maspar mp-1 with 16384 processors. michael philippsen the il programming language randy m. kaplan creating formal specifications from requirements documents edward g. amoroso the val language: description and analysis james r. mcgraw structured tools and conditional logic: an empirical investigation prior research has identified two psychological processes that appear to be used by programmers when they perform design and coding tasks: (a) taxonomizing---identifying the conditions that evoke particular actions; and (b) sequencing---converting the taxa into a linear sequence of program code. 
three structured tools---structured english, decision tables, and decision trees---were investigated in a laboratory experiment to determine how they facilitated these two processes. when taxonomizing had to be undertaken, structured english outperformed decision tables, and decision trees outperformed structured english. when sequencing had to be undertaken, decision trees and structured english outperformed decision tables, but decision trees and structured english evoked the same level of performance. iris vessey ron weber unreachable procedures in object-oriented programming unreachable procedures are procedures that can never be invoked. their existence may adversely affect the performance of a program. unfortunately, their detection requires the entire program to be present. using a long-time code modification system, we analyze large, linked, program modules of c++, c, and fortran. we find that c++ programs using object-oriented programming style contain a large fraction of unreachable procedure code. in contrast, c and fortran programs have a low and essentially constant fraction of unreachable code. in this article, we present our analysis of c++, c, and fortran programs, and we discuss how object-oriented programming style generates unreachable procedures. amitabh srivastava novice to novice: dos emulation with dosemu dean oisboid automatic compiler recognition of monitor tasks jonathan l. schilling johan olmutz nielsen fortran conditional compilation: preliminary specification loren meissner the cip method: component- and model-based construction of embedded systems cip is a model-based software development method for embedded systems. the problem of constructing an embedded system is decomposed into a functional and a connection problem. the functional problem is solved by constructing a formal reactive behavioural model. a cip model consists of concurrent clusters of synchronously cooperating extended state machines. the state machines of a cluster interact by multi-cast events. state machines of different clusters can communicate through asynchronous channels. the construction of cip models is supported by the cip tool, a graphical modelling framework with code generators that transform cip models into concurrently executable cip components. the connection problem consists of connecting generated cip components to the real environment. this problem is solved by means of techniques and tools adapted to the technology of the interface devices. construction of a cip model starts from the behaviour of the processes of the real environment, leading to an operational specification of the system behaviour in constructive steps. this approach allows stable interfaces of cip components to be specified at an early stage, thus supporting concurrent development of their connection to the environment. hugo fierz self-assessment procedure viii: a self-assessment procedure dealing with the programming language ada peter wegner an example case study on ada tasking ken sumate comparing frameworks and layered refinement object-oriented frameworks are a popular mechanism for building and evolving large applications and software product lines. this paper describes an alternative approach to software construction, java layers (jl), and evaluates jl and frameworks in terms of flexibility, ease of use, and support for evolution. our experiment compares schmidt's ace framework against a set of ace design patterns that have been implemented in jl. 
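returning to the srivastava abstract above on unreachable procedures: detecting them amounts to a reachability pass over the whole program's call graph; the call graph below is invented, and a real tool would build it from the linked program and add conservative edges for indirect (virtual) calls.

    # sketch: mark procedures reachable from the entry point; everything
    # else can never be invoked.
    from collections import deque

    CALL_GRAPH = {
        "main":   ["parse", "render"],
        "parse":  ["lex"],
        "render": [],
        "lex":    [],
        "debug_dump": ["lex"],        # never called: unreachable
    }

    def unreachable(graph, entry="main"):
        seen, work = {entry}, deque([entry])
        while work:
            for callee in graph.get(work.popleft(), []):
                if callee not in seen:
                    seen.add(callee)
                    work.append(callee)
        return sorted(set(graph) - seen)

    if __name__ == "__main__":
        print(unreachable(CALL_GRAPH))   # ['debug_dump']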
we show how problems of framework evolution and overfeaturing can be avoided using jl's component model, and we demonstrate that jl scales better than frameworks as the number of possible application features increases. finally, we describe how constrained parametric polymorphism and a small number of language features can support jl's model of loosely coupled components and stepwise program refinement. richard cardone calvin lin icmake part 2 frank brokken k. kubat a definition of lines of code for ada juergen f. h. winkler product review informix on linux: first impressions: notes on installing and configuring informix's port to linux fred butzen a distributed architecture for programming environments programming environments are typically based on concepts, such as syntax and semantics, and they provide functionalities, such as parsing, editing, type- checking, and compiling. most existing programming environments are designed in a fully integrated manner, where parsers, editors, and semantic tools are tightly coupled. this leads to systems that are the sum of all their components, with obvious implications in terms of size, reusability, and maintainability. in this paper, we present a proposal for a distributed architecture for programming environments. dominique clement introducing ada the ada programming language was finalized in july 1980 with the publication of the proposed standard ada language reference manual [a1]. that standard document is being reviewed for clarity and internal consistency under american national standards institute (ansi) canvas procedures in preparation for the issuance of an ansi standard for ada. ada is also on the agenda of the international standards organization (iso). the united states department of defense (dod) has sponsored the development of ada to provide a standard machine independent high order language (hol) for software which is embedded in or procured as part of major defense systems. examples of the intended applications include communications, avionics, air traffic control, missile guidance, and battlefield or shipboard decision support. computers are usually dedicated to these applications. frequently, they must operate in unfriendly environments, and are specially hardened to withstand shocks, vibrations, temperature fluctuations and other environmental stresses. dod's annual investment for software to run on these embedded computers far exceeds that for data processing applications such as payroll, inventory and financial management [c3]. william e. carlson larry e. druffel david a. fisher william a. whitaker descriptive and predictive aspects of the 3cs model: seta1 working group summary larry latour tom wheeler bill frakes alphabet soup the internationalization of linux, part 1: mr. turnbull takes a look at the problems faced when different character sets and the need for standardization stephen turnbull data typing in apl d. livingstone h. gharib synopsis of orca, a simple language implementation of phase abstractions lawrence snyder the software stack data-type as an operating system service jon w. osterlund multi-view description of software architectures valerie issarny titos saridakis apostolos zarras common ada bindings to compartmented mode workstations this paper describes a project to build a common set of ada bindings to the compartmented mode workstation (cmw) operating system. the paper discusses various design alternatives for creating these bindings. 
an object-oriented cmw model is described together with an object-oriented common ada-cmw interface specification. a two-layered design solution is presented and its merits and demerits are discussed. issues relating to the development of these bindings including testing, ada 83 vs. ada 9x issues, and portability are described and some of the ways to handle these issues are discussed. c. armstrong r. venkatraman parametric polymorphism for java: a reflective solution jose h. solorzano suad alagic m7: reuse library interoperability group (rig) (panel): purpose and progress james w. moore oopde (workshop session): object oriented program development environments dmitry lenkov mike monegan hierarchical schedulers, performance guarantee, and resource management john regehr john a. stankovic a portable sampling-based profiler for java virtual machines john whaley attribute propagation by message passing the goal of our work is to use the paradigms of syntax-directed editing to perform very sophisticated semantic checking, for example, to check flow- sensitive properties such as whether a variable is necessarily defined before it is used. semantic checking of this power is much more important than syntax checking because it relieves the programmer of the need to keep track of numerous details as the program grows in complexity. there has been a great deal of recent work on syntax-directed editing. this work primarily serves the needs of novice programmers: making the task of entering and editing programs easier and less error prone. most syntax- directed editors guarantee that the program fragment under construction is syntactically correct at all times. many of them also detect simple semantic errors such as undeclared variables or type errors. few, however, attempt to perform more semantic analysis than a typical (non-optimizing) compiler. alan demers anne rogers frank kenneth zadeck aplelegance - the art of staying within one's depth a barrier to the successful use of apl2 is the need to control the changes of depth which arise from using the operator each with functions such as enclose and disclose. the "spilt pepper" effect following a deluge of "each"'s can be just as damaging as the chains of left- brackets, and of quotes with their accompanying @@@@'s which are the sign of an iso apl programmer losing his or her mental grip. the temptations of a fast interactive system as a substitute for thought are considerable, and should be resisted. to this end there are places in which informal perceptions of how apl2 functions and operators work can be more valuable than the formal descriptions given in the manuals, and this paper focuses on some points where such pedagogical issues arise. it consists of 4 sections, each with rules or precepts for the disciplined and controlled use of nestedness. norman thomson tutorial on programming language research rowe: i am going to talk a little about data abstraction from a programming language viewpoint. i suspect that what i will say in some places will be every bit as controversial amongst programming language folks, as what dennis said was amongst database folks. historically, programming languages evolve continually, from very low-level representations or descriptions of computations to higher-level descriptions. the idea is to express computations in a way that makes them easier to write, faster to debug, and make them survive change. all these marvelous buzzwords! 
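returning to the demers, rogers, and zadeck abstract above on attribute propagation: the flow-sensitive property they name, whether a variable is necessarily defined before it is used, can be checked on straight-line code with a single pass; the tiny statement format below is invented for illustration.

    # sketch: definitely-defined-before-use check over straight-line code.
    # statements are (target, [sources]) pairs; a real editor would run this
    # incrementally over its syntax tree and handle branches by intersecting
    # the defined sets of the two arms.

    def check_def_before_use(statements, initially_defined=()):
        defined = set(initially_defined)
        errors = []
        for lineno, (target, sources) in enumerate(statements, 1):
            for v in sources:
                if v not in defined:
                    errors.append(f"line {lineno}: '{v}' may be used before definition")
            defined.add(target)
        return errors

    if __name__ == "__main__":
        program = [
            ("x", []),           # x = constant
            ("y", ["x"]),        # y = f(x)
            ("z", ["y", "w"]),   # z = g(y, w)   -- w is never defined
        ]
        for msg in check_def_before_use(program):
            print(msg)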
reusable software components trudy levine reconciling responsiveness with performance in pure object-oriented languages dynamically dispatched calls often limit the performance of object-oriented programs, since object-oriented programming encourages factoring code into small, reusable units, thereby increasing the frequency of these expensive operations. frequent calls not only slow down execution with the dispatch overhead per se, but more importantly they hinder optimization by limiting the range and effectiveness of standard global optimizations. in particular, dynamically dispatched calls prevent standard interprocedural optimizations that depend on the availability of a static call graph. the self implementation described here offers two novel approaches to optimization. type feedback speculatively inlines dynamically dispatched calls based on profile information that predicts likely receiver classes. adaptive optimization reconciles optimizing compilation with interactive performance by incrementally optimizing only the frequently executed parts of a program. when combined, these two techniques result in a system that can execute programs significantly faster than previous systems while retaining much of the interactiveness of an interpreted system. urs hölzle david ungar book review: unix power tools samuel ockman performance of cache coherence in stackable filing j. heidemann g. popek software unit test coverage and adequacy objective measurement of test quality is one of the key issues in software testing. it has been a major research focus for the last two decades. many test criteria have been proposed and studied for this purpose. various kinds of rationales have been presented in support of one criterion or another. we survey the research work in this area. the notion of adequacy criteria is examined together with its role in software dynamic testing. a review of criteria classification is followed by a summary of the methods for comparison and assessment of criteria. hong zhu patrick a. v. hall john h. r. may a demand-driven analyzer for data flow testing at the integration level evelyn duesterwald rajiv gupta mary lou soffa using an object-relationship model for rapid prototyping bogdan d. czejdo ralph p. tucci caching in the sprite network file system this paper describes a simple distributed mechanism for caching files among a networked collection of workstations. we have implemented it as part of sprite, a new operating system being implemented at the university of california at berkeley. a preliminary version of sprite is currently running on sun-2 and sun-3 workstations, which have about 1-2 mips processing power and 4-16 mbytes of main memory. the system is targeted for workstations like these and newer models likely to become available in the near future; we expect the future machines to have at least five to ten times the processing power and main memory of our current machines, as well as small degrees of multiprocessing. we hope that sprite will be suitable for networks of up to a few hundred of these workstations. because of economic and environmental factors, most workstations will not have local disks; instead, large fast disks will be concentrated on a few server machines. in sprite, file information is cached in the main memories of both servers (workstations with disks), and clients (workstations wishing to access files on non-local disks). on machines with disks, the caches reduce disk-related delays and contention.
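returning to the hölzle and ungar abstract above: a sketch of the type-feedback idea, record the receiver classes seen at a call site and, once one class dominates, guard a specialized path with a class test; the 90% threshold and the shape classes below are invented, and the "inlined" fast path is only simulated.

    # sketch: per-call-site receiver-class profile driving speculative
    # specialization, with a class-test guard and a generic fallback.
    from collections import Counter

    class CallSite:
        def __init__(self):
            self.profile = Counter()
            self.predicted = None

        def record(self, receiver):
            self.profile[type(receiver)] += 1
            cls, hits = self.profile.most_common(1)[0]
            if hits / sum(self.profile.values()) >= 0.9:
                self.predicted = cls             # speculate on the dominant receiver class

        def invoke(self, receiver, fast_paths):
            self.record(receiver)
            if (self.predicted is not None and type(receiver) is self.predicted
                    and self.predicted in fast_paths):
                return fast_paths[self.predicted](receiver)   # specialized ("inlined") body
            return receiver.area()                            # fallback: normal dynamic dispatch

    class Circle:
        def __init__(self, r): self.r = r
        def area(self): return 3.14159 * self.r * self.r

    class Square:
        def __init__(self, s): self.s = s
        def area(self): return self.s * self.s

    if __name__ == "__main__":
        site = CallSite()
        fast = {Circle: lambda c: 3.14159 * c.r * c.r}
        shapes = [Circle(1)] * 50 + [Square(2)] + [Circle(2)] * 50
        total = sum(site.invoke(s, fast) for s in shapes)
        print(round(total, 2), site.predicted.__name__)       # the single Square took the slow path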
on clients, the caches also reduce the communication delays that would otherwise be required to fetch blocks from servers. in addition, client caches reduce contention for the network and for the server machines. since server cpus appear to be the bottleneck in several existing network file systems [saty85, lazo86], client caching offers the possibility of greater system scalability as well as increased performance. sprite uses the file servers as centralized control points for cache consistency. each server guarantees cache consistency for all the files on its disks, and clients deal only with the server for a file: there are no direct client-client interactions. the sprite algorithm depends on the fact that the server is notified whenever one of its files is opened or closed, so it can detect when concurrent write-sharing is about to occur. sprite handles sequential write-sharing using version numbers. when a client opens a file, the server returns the current version number for the file, which the client compares to the version number associated with its cached blocks for the file. if they are different, the file must have been modified recently on some other workstation, so the client discards all of the cached blocks for the file and reloads its cache from the server when the blocks are needed. the delayed-write policy used by sprite means that the server doesn't always have the current data for a file (the last writer need not have flushed dirty blocks back to the server when it closed the file). servers handle this situation by keeping track of the last writer for each file; when a client other than the last writer opens the file, the server forces the last writer to write all its dirty blocks back to the server's cache. this guarantees that the server has up-to-date information for a file whenever a client needs it. the file system module and the virtual memory module each manage a separate pool of physical memory pages. virtual memory keeps its pages in approximate lru order through a version of the clock algorithm [nels86]. the file system keeps its cache blocks in perfect lru order since all block accesses are through the "read" and "write" system calls. each system keeps a time-of-last- access for each page or block. whenever either module needs additional memory (because of a page fault or a miss in the file cache), it compares the age of its oldest page with the age of the oldest page from the other module. if the other module has the oldest page, then it is forced to give up that page; otherwise the module recycles its own oldest page. we used a collection of benchmark programs to measure the performance of the sprite file system. on average, client caching resulted in a speedup of about 10-40% for programs running on diskless workstations, relative to diskless workstations without client caches. with client caching enabled, diskless workstations completed the benchmarks only 0-12% more slowly than workstations with disks. client caches reduced the server utilization from about 5-18% per active client to only about 1-9% per active client. since normal users are rarely active, our measurements suggest that a single server should be able to support at least 50 clients. we also compared the performance of sprite to both the andrew file system [saty85] and sun's network file system (nfs) [sand85]. we did this by executing the andrew file system benchmark [howa87] concurrently on multiple sprite clients and comparing our results to those presented in [howa87] for nfs and andrew. 
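the sprite abstract above lays out a version-number check for sequential write-sharing: on open, the client compares the server's current version of the file with the version attached to its cached blocks and discards them on a mismatch. the c sketch below walks through only that client-side check in a single process; every structure and function name is invented here, and the delayed-write / last-writer flush is merely noted in a comment.

```c
/* simplified version-number consistency check, in the spirit of the
 * protocol described above; not sprite's actual code. */
#include <stdio.h>

struct server_file  { int version; int last_writer; }; /* last-writer flush omitted */
struct client_cache { int cached_version; int nblocks; };

/* server side of open: report the current version (a write open would also
 * update last_writer and possibly force the last writer to flush). */
static int server_open(const struct server_file *f) { return f->version; }

static void client_open(struct client_cache *c, const struct server_file *f) {
    int v = server_open(f);
    if (c->nblocks > 0 && c->cached_version != v) {
        /* file changed on some other workstation since we cached it */
        printf("version %d != cached %d: discarding %d cached blocks\n",
               v, c->cached_version, c->nblocks);
        c->nblocks = 0;
    }
    c->cached_version = v;  /* blocks loaded from now on carry version v */
}

int main(void) {
    struct server_file f = { 7, -1 };
    struct client_cache c = { 6, 12 };  /* stale: cached under version 6 */
    client_open(&c, &f);                /* discards the 12 stale blocks  */
    client_open(&c, &f);                /* now consistent: nothing to do */
    return 0;
}
```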
for a single client, sprite is about 30% faster than nfs and about 35% faster than andrew. as the number of concurrent clients increased, the nfs server quickly saturated. the andrew system showed the greatest scalability: each client accounted for only about 2.4% server cpu utilization, vs. 5.4% in sprite and over 20% in nfs. m. nelson b. welch j. ousterhout encapsulation and inheritance in object-oriented programming languages object-oriented programming is a practical and useful programming methodology that encourages modular design and software reuse. most object-oriented programming languages support data abstraction by preventing an object from being manipulated except via its defined external operations. in most languages, however, the introduction of inheritance severely compromises the benefits of this encapsulation. furthermore, the use of inheritance itself is globally visible in most languages, so that changes to the inheritance hierarchy cannot be made safely. this paper examines the relationship between inheritance and encapsulation and develops requirements for full support of encapsulation with inheritance. alan snyder automatic synthesis of deadlock free connectors for com/dcom applications many software projects are based on the integration of independently designed software components that are acquired on the market rather than developed within the project itself. sometimes interoperability and composition mechanisms provided by component based integration frameworks cannot solve the problem of binary component integration in an automatic way. notably, in the context of component based concurrent systems, the binary component integration may cause deadlocks within the system. in this paper we present a technique to allow connectors synthesis for deadlock-free component based architectures [2] in a real scale context, namely in the context of com/dcom applications. this technique is based on an architectural, connector-based approach which consists of synthesizing a com/dcom connector as a com/dcom server that can route requests of the clients through a deadlock-free policy. this work also provides guidelines to implement an automatic tool that derives the implementation of the routing deadlock-free policy within the connector from the dynamic behavior specification of the com components. it is then possible to avoid the deadlock by using com composition mechanisms to insert the synthesized connector within the system while leaving the system com servers unmodified. we present a successful application of this technique on the (com version of the) problem known as _"the dining philosophers"._ depending on the type of deadlock we have a strategy that automatically operates on the connector part of the system architecture in order to obtain a suitably equivalent version of the system which is deadlock-free. paola inverardi massimo tivoli session 8b: panel - software engineering and social responsibility s. gerhart programming pearls, 2nd edition harvey friedman a software process model based on unit workload network yohihiro matsumoto kiyoshi agusa tsuneo ajisaka "ada after one year: how have government politics, industry concerns, and academia mixed?" a potpourri of mini-debates and talks constituting an overview of what's happening throughout the community in ada and delving into "psychological/societal/political" issues. 6 activists wrestling with ada introduction, representing diverse constituencies, have agreed to express their views and expose the issues.
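the connector-synthesis abstract above routes client requests through a deadlock-free policy and is demonstrated on the dining philosophers. as a generic illustration of one such policy (global resource ordering, which is not the paper's synthesized com/dcom connector), the pthreads sketch below has each philosopher acquire the lower-numbered fork first, which breaks the circular wait.

```c
/* dining philosophers made deadlock-free by a fixed global fork ordering */
#include <pthread.h>
#include <stdio.h>

#define N 5
static pthread_mutex_t fork_mtx[N];

static void *philosopher(void *arg) {
    int id = (int)(long)arg;
    int left = id, right = (id + 1) % N;
    int first  = left < right ? left : right;   /* always take the lower-  */
    int second = left < right ? right : left;   /* numbered fork first     */
    for (int round = 0; round < 3; round++) {
        pthread_mutex_lock(&fork_mtx[first]);
        pthread_mutex_lock(&fork_mtx[second]);
        printf("philosopher %d eats (forks %d,%d)\n", id, first, second);
        pthread_mutex_unlock(&fork_mtx[second]);
        pthread_mutex_unlock(&fork_mtx[first]);
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    for (int i = 0; i < N; i++) pthread_mutex_init(&fork_mtx[i], NULL);
    for (int i = 0; i < N; i++) pthread_create(&t[i], NULL, philosopher, (void *)(long)i);
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}
```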
audience interaction will be encouraged! hal hart three-tier architecture professor ortiz presents a little of the theory behind the three-tier architecture and shows how it may be applied using linux, java and minisql. ariel ortiz ramirez automatic interoperability test generation for source-to-source translators mark molloy kristy andrews james herren david cutler paul del vigna apl two by two-syntax analysis by pairwise reduction benkard explains the precedence rules of apl with a syntactic binding hierarchy [1,2]. this paper uses the hierarchy as a basis for developing a more formal description of apl syntax which can be applied directly in a syntax analyzer. the flexibility of this approach makes it useful as a basis for examining the effect of proposed syntax changes and the consequences of previous design choices. j. d. bunda j. a. gerth transparent adaptive parallelism on nows using openmp alex scherer honghui lu thomas gross willy zwaenepoel automatic generation of global optimizers deborah whitfield mary lou soffa programming pearls jon bentley encodings of non-binary constraint satisfaction problems kostas stergiou toby walsh next generation object-oriented programming languages: extending the paradigm fred cummins roman cunis john lamping writing efficient smalltalk programs (abstract) ken auer understanding the sources of variation in software inspections in a previous experiment, we determined how various changes in three structural elements of the software inspection process (team size and the number and sequencing of sessions) altered effectiveness and interval. our results showed that such changes did not significantly influence the defect detection rate, but that certain combinations of changes dramatically increased the inspection interval. we also observed a large amount of unexplained variance in the data, indicating that other factors must be affecting inspection performance. the nature and extent of these other factors now have to be determined to ensure that they had not biased our earlier results. also, identifying these other factors might suggest additional ways to improve the efficiency of inspections. acting on the hypothesis that the "inputs" into the inspection process (reviewers, authors, and code units) were significant sources of variation, we modeled their effects on inspection performance. we found that they were responsible for much more variation in defect detection than was process structure. this leads us to conclude that better defect detection techniques, not better process structures, are the key to improving inspection effectiveness. the combined effects of process inputs and process structure on the inspection interval accounted for only a small percentage of the variance in inspection interval. therefore, there must be other factors which need to be identified. adam porter harvey siy audris mockus lawrence votta best of technical support corporate linux journal staff an ada-lisp interface generator souripriya das stephen r. schach future of ieee standard for ada pdl to be considered the ansi/ieee standard describing ada as a program design language was approved by the ieee in 1986 and reaffirmed in 1992. normal ieee procedures would require that it next be reconsidered for withdrawal, revision or reaffirmation in 1997. the completion of the ada 95 language revision, though, suggests that reconsideration of the standard should be accelerated.
after a quick overview of ieee standardization, this article briefly describes the history of the ada/pdl standard and its contents. it poses some of the issues relevant to reconsideration of the standard. finally, it solicits volunteers to help determine the future of the standard. james w. moore static, dynamic and run-time modeling of compound classes rakesh agarwal giorgio bruno marco torchiano linux apprentice windows/linux dual boot don't want to give up windows while you learn linux? here's how to use both on the same machine vince veselosky on the logical foundations of staged computation (invited talk) dividing a computation into stages and optimizing later phases using information from earlier phases is a familiar technique in algorithm design. in the realm of programming languages, staged computation has found two important realizations: partial evaluation and run-time code generation. a priori, these are fundamentally operational concepts, concerned with how a program executes, but not what it computes. in this talk we provide a logical foundation for staged computation which is consistent with the operational intuition. we concentrate on run-time code generation which is related to modal logic via an interpretation of constructive proofs as programs. this correspondence yields new insights into issues of language design and leads to a static type system in which staging errors become type errors. we sketch the language pml (for phased ml), whose design has been directly motivated by our foundational reconstruction, and discuss our ongoing compiler construction effort [1; 2; 3; 4]. frank pfenning objective view point: statics: schizophrenia for c++ programmers g. bowden wise ada design and implementation: remarks by the session chairman michael b. feldman physical type checking for c the effectiveness of traditional type checking in c is limited by the presence of type conversions using type casts. because the c standard allows arbitrary type conversions between pointer types, neither c compilers, nor tools such as _lint,_ can guarantee type safety in the presence of such type conversions. in particular, by using casts involving pointers to structures (c structs), a programmer can interpret any memory region to be of any desired type, further compromising c's weak type system. not only do type casts make a program vulnerable to type errors, they hinder program comprehension and maintenance by creating latent dependencies between seemingly independent pieces of code. to address these problems, we have developed a stronger form of type checking for c programs, called _physical type checking._ physical type checking takes into account the layout of c struct fields in memory. this paper describes an inference-based physical type checking algorithm. our algorithm can be used to perform static safety checks, as well as compute useful information for software engineering applications. satish chandra thomas reps integrating object-oriented programming and protected objects in ada 95 integrating concurrent and object-oriented programming has been an active research topic since the late 1980's. there is now a plethora of methods for achieving this integration. the majority of approaches have taken a sequential object-oriented language and made it concurrent. a few approaches have taken a concurrent language and made it object-oriented. the most important of this latter class is the ada 95 language, which is an extension to the object-based concurrent programming language ada 83.
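the physical-type-checking abstract above judges casts between struct pointer types by the memory layout of their fields rather than by their names. the example below (ours, not the paper's) shows the two situations such an analysis distinguishes: a cast to a layout-prefix-compatible struct, and a cast that silently reinterprets an int field as a float.

```c
/* illustration of layout-compatible vs. layout-incompatible pointer casts;
 * the second read is exactly the kind of reinterpretation a physical type
 * checker is meant to flag (and is undefined behavior under strict aliasing). */
#include <stdio.h>

struct rgb   { int r, g, b; };
struct rgba  { int r, g, b, a; };
struct weird { float r; int g, b; };  /* same size as rgb, different layout */

static int red_of(const struct rgb *p) { return p->r; }

int main(void) {
    struct rgba c = { 10, 20, 30, 255 };

    /* prefix-compatible layout: every rgb field lines up with an rgba field */
    printf("red = %d\n", red_of((const struct rgb *)&c));

    /* layout-incompatible: the compiler accepts the cast, but struct weird's
     * float r overlays the int r of rgba as a raw bit pattern. */
    const struct weird *w = (const struct weird *)&c;
    printf("misread r as float = %f\n", (double)w->r);
    return 0;
}
```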
arguably, ada 95 does not fully integrate its models of concurrency and object-oriented programming. for example, neither tasks nor protected objects are extensible. this article discusses ways in which protected objects can be made more extensible. a. j. wellings b. johnson b. sanden j. kienzle t. wolf s. michell ccg: a prototype coagulating code generator w. g. morris acm forum robert l. ashenhurst distributed objects wolfgang emmerich neil roodyn a review of the apl2000 user conference by kevin weaver kevin weaver from the editor corporate linux journal staff grapple example: processes as plans as an example of a small part of the software process, consider the activities preliminary to building a new version of a software system; one task involves setting up an appropriate environment in which to apply the compilation and linking tools. this task can be done in several different ways. a common approach under unix entails collecting all the compilation units representing the baseline from which the new system version is to be developed into a "reference directory" and collecting all new compilation units unique to the new system version into a "working directory". when two (or more) new system versions are being developed from the same baseline, they can share one reference directory, but each system version always needs its own working directory. obviously, one directory cannot serve as both a reference and a working directory. in this approach, setting up an environment consists of a hierarchy of goals shown informally in the figure below. setting up the reference directory and setting up the working directory break down into further subgoals: the programmer has to commit an appropriate directory, empty it of extraneous files, and fill it with the relevant files. these goals are accomplished with operating system commands to make directories (mkdir), copy files (copy), move files (move), and delete files (delete). the state schema for this example includes four types of entities: directories, files, contents of files, and systems. predicates are defined to record that files are in directories, contents are stored in files, contents are of different kinds (source, include, executable, etc.), contents can be part of systems, contents can represent the executable forms of systems, etc. directories can be used by systems as reference or working directories. the example shows four characteristics that we believe are typical for processes. first, there is an obvious hierarchical structure. second, there are many different action sequences that can be used to achieve a given goal. in some cases, there are several operators to choose from. for example, removing extraneous files can be achieved by delete or move, or a combination thereof. in other cases, the actions chosen depend on the context---whether there are existing, uncommitted directories, or what the current placement of files in those directories is. third, there is a high degree of interleaving of tasks. for example, actions to set up the reference directory can be intermixed with actions to set up the working directory, or actions from two separate setup-environment plans can be intermixed. fourth, conflicts can arise that will destroy the effectiveness of the plans.
if the reference directory is already set up, and the programmer is now setting up the working directories, then there are better ways of emptying the working directory of extraneous files than moving them into the reference directory: this will destroy the "readiness" of the reference directory, meaning that more work will have to be done to reestablish "readiness". such a conflict leads, at best, to an inefficient plan; if not detected, it leads to an improper plan (when the file is never subsequently moved out of the reference directory). the definitions given below are taken from an example that runs in the grapple plan-based process support system. karen e. huff a programming language based on the concepts of objects and fields the object-oriented model is very successful at representing various phenomena in our real world from an entity-category point of view. however, it is not sufficient to model the dynamic actions of autonomous entities under this paradigm. this model falls short of representing the relationships between entities and the environments. in this paper, we propose an experimental object-oriented programming language to manage the relationships together. fumihiko nishio toyohide watanabe noboru sugie an introduction to partial evaluation partial evaluation provides a unifying paradigm for a broad spectrum of work in program optimization, compiling, interpretation, and the generation of automatic program generators [bjørner et al. 1987; ershov 1992; and jones et al. 1993]. it is a program optimization technique, perhaps better called program specialization, closely related to but different from jørring and scherlis' staging transformations [1986]. it emphasizes, in comparison with burstall and darlington [1977] and jørring and scherlis [1986] and other program transformation work, full automation and the generation of program generators as well as transforming single programs. much partial evaluation work to date has concerned automatic compiler generation from an interpretive definition of a programming language, but it also has important applications to scientific computing, logic programming, metaprogramming, and expert systems; some pointers are given later. neil d. jones fine grained data management to achieve evolution resilience in a software development environment a software development environment (sde) exhibits evolution resilience if changes to the sde do not adversely affect its functionality or performance, and also do not introduce delays in returning the sde to an operational state after a change. evolution resilience is especially difficult to achieve when manipulating fine grained data, which must be tightly bound to the language in which the sde is implemented to achieve adequate performance. we examine a spectrum of approaches to tool integration that range from high sde-development-time efficiency to high sde-execution-time efficiency. we then present a meta-environment, a specific sde tailored to the development of target sde's, that supports easy movement of individual tools along this spectrum. richard snodgrass karen shannon the last word stan kelly-bootle walks into the apl design space like any living language, apl is evolving. the design space for its future development is wide open. some trails into that space are known, for other languages have followed them. we will walk on some trails, visit some sights and also some dead ends. we will see which development areas could solve some old problems of apl, without changing its fabric.
the goal of the paper is to enable the reader to explore the apl design space on his or her own. to that end i will point the reader to some good books and articles about problems and solutions only touched upon in the paper. martin gfeller the impact of interprocedural analysis and optimization in the rn programming environment in spite of substantial progress in the theory of interprocedural data flow analysis, few practical compiling systems can afford to apply it to produce more efficient object programs. to perform interprocedural analysis, a compiler needs not only the source code of the module being compiled, but also information about the side effects of every procedure in the program containing that module, even separately compiled procedures. in a conventional batch compiler system, the increase in compilation time required to gather this information would make the whole process impractical. in an integrated programming environment, however, other tools can cooperate with the compiler to compute the necessary interprocedural information incrementally as the program is being developed, decreasing both the overall cost of the analysis and the cost of individual compilations. a central goal of the rn project at rice university is to construct a prototype software development environment that is designed to build whole programs, rather than just individual modules. it employs interprocedural analysis and optimization to produce high-quality machine code for whole programs. this paper presents an overview of the methods used by the environment to accomplish this task and discusses the impact of these methods on the various environment components. the responsibilities of each component of the environment for the preparation and use of interprocedural information are presented in detail. keith d. cooper ken kennedy linda torczon an incremental specification flow for real time embedded systems (poster paper) alex niemegeers gjalt de jong and/or programs: a new approach to structured programming a simple tree-like programming/specification language is presented. the central idea is the dividing of conventional programming constructs into the two classes of and and or subgoaling, the subgoal tree itself constituting the program. programs written in the language can, in general, be both nondeterministic and parallel. the syntax and semantics of the language are defined, a method for verifying programs written in it is described, and the practical significance of programming in the language assessed. finally, some directions for further research are indicated. david harel one more time - how to update a master file barry dwyer introduction to gawk how to speed up your programming tasks using the gnu version of awk. ian gordon experiences teaching concurrency in ada ronald j. leach a visualization tool for parallel and distributed computing using the lilith framework david a. evensky ann c. gentile pete wyckoff a note on structured interrupts pradeep hatkanagalekar p4: a logic language for process programming dennis heimbigner system administration: upgrading the linux kernel mark komarinski graphics marjorie richardson preventing spams and relays the smtpd package is a useful mail daemon for stopping spam, thereby saving money and resources john wong effective use of abort in programming mode changes a. burns t. j. quiggle efficient implementation of object-oriented programming languages (abstract) craig chambers david ungar case & reengineering: from archaeology to software perestroika e.
j. chikofsky programming pearls: little program, a lot of fun jon l. bentley multiple-view case support for object-oriented design (abstract) shang-cheng chyou "chairman" fails to get vote bob stephen system structure and software maintenance performance an experiment is designed to investigate the relationship between system structure and maintainability. an old, ill-structured system is improved in two sequential stages, yielding three system versions for the study. the primary objectives of the research are to determine how or whether the differences in the systems influence maintenance performance; whether the differences are discernible to programmers; and whether the differences are measurable. experienced programmers perform a portfolio of maintenance tasks on the systems. results indicate that system improvements lead to decreased total maintenance time and decreased frequency of ripple effect errors. this suggests that improving old systems may be worthwhile and may yield benefits over the remaining life of the system. system differences are not discernible to programmers; apparently programmers are unable to separate the complexity of the systems from the complexity of the maintenance tasks. this finding suggests a need for further research on the efficacy of subjectively based software metrics. finally, results indicate that a selected set of automatable, objective complexity metrics reflected both the improvements in the system and programmer maintenance performance. these metrics appear to offer potential as project management tools. virginia r. gibson james a. senn the geneva convention on the treatment of object aliasing john hogg doug lea alan wills dennis dechampeaux richard holt goal-oriented programming, or composition using events, or threads considered harmful robbert van renesse software specification and design with ada: a disciplined approach ken shumate common lisp object system specification x3j13 document 88-002r d. g. bobrow l. g. demichiel r. p. gabriel s. e. keene g. kicsales d. a. moon on converting a compiler into an incremental compiler malcolm crowe clark nicol michael hughes david mackay identifying procedural structure in cobol programs the principal control-flow abstraction mechanism in the cobol language is the perform statement. normally, perform statements are used in a straightforward manner to define parameterless procedures (where global variables are used to pass data into and out of procedure bodies). however, unlike most procedural constructs, distinct performed procedures can share code in arbitrarily complicated ways. in addition, performs can also be used in such a way as to cause transfers of control that do not correspond to normal call/return semantics.in this paper, we show how a cobol program can be efficiently transformed into a semantically-equivalent _procedurally well-structured_ representation, in which conventional procedures (i.e., with the usual call and return semantics and without code sharing) and procedure call statements replace performed code and perform statements. this transformation process properly accounts for the non-procedural control flow that can result from ill-behaved perform statements.the program representation derived from our analysis can be used directly in program understanding applications, program restructuring tools, and inter-language translators. in addition, it can be used as the starting point for a variety of context-sensitive program analyses, e.g., program slicing. john field g. 
ramalingam implementing signatures for c++ we outline the design and detail the implementation of a language extension for abstracting types and for decoupling subtyping and inheritance in c++. this extension gives the user more of the flexibility of dynamic typing while retaining the efficiency and security of static typing. after a brief discussion of syntax and semantics of this language extension and examples of its use, we present and analyze three different implementation techniques: a preprocessor to a c++ compiler, an implementation in the front end of a c++ compiler, and a low-level implementation with back-end support. we follow with an analysis of the performance of the three implementation techniques and show that our extension actually allows subtype polymorphism to be implemented more efficiently than with virtual functions. we conclude with a discussion of the lessons we learned for future programming language design. gerald baumgartner vincent f. russo book review: teach yourself perl david flood penguin's progress: getcha program! peter h. salus novice-to-novice: keyboards, consoles, and vt cruising corporate linux journal staff south african business uses linux to connect the story of a company replacing windows systems with linux to obtain better speed and greater reliability paul daniels asis has been approved as iso standard object-oriented development for ada r. m. ladden continuation-passing, closure-passing style we implemented a continuation-passing style (cps) code generator for ml. our cps language is represented as an ml datatype in which all functions are named and most kinds of ill-formed expressions are impossible. we separate the code generation into phases that rewrite this representation into ever-simpler forms. closures are represented explicitly as records, so that closure strategies can be communicated from one phase to another. no stack is used. our benchmark data shows that the new method is an improvement over our previous, abstract- machine based code generator. a. w. appel t. jim ada and hatley-pirbhai d. roberts rod ontjes analysis of the paging behavior of unix jeffrey c. becker arvin park quicktalk: a smalltalk-80 dialect for defining primitive methods quicktalk is a dialect of smalltalk-80 that can be compiled directly into native machine code, instead of virtual machine bytecodes. the dialect includes "hints" on the class of method arguments, instance variables, and class variables. we designed the dialect to describe primitive smalltalk methods. quicktalk achieves improved performance over bytecodes by eliminating the interpreter loop on bytecode execution, by reducing the number of message send/returns via binding some target methods at compilation, and by eliminating redundant class checking. we identify changes to the smalltalk-80 system and compiler to support the dialect, and give performance measurements. mark b. ballard david maier allen wirfs- brock an analysis of length equation using a dynamic approach t y chen s c kwan re2c: a more versatile scanner generator it is usually claimed that lexical analysis routines are still coded by hand, despite the widespread availability of scanner generators, for efficiency reasons. while efficiency is a consideration, there exist freely available scanner generators such as gla [gray 1988] that can generate scanners that are faster than most hand- coded ones. 
however, most generated scanners are tailored for a particular environment, and retargeting these scanners to other environments, if possible, is usually complex enough to make a hand-coded scanner more appealing. in this paper we describe re2c, a scanner generator that not only generates scanners that are faster (and usually smaller) than those produced by any other scanner generator known to the authors, including gla, but that also adapt easily to any environment. peter bumbulis donald d. cowan the f programming language michael metcalf the most influential papers from the issta research community (panel) richard hamlet richard kemmerer edward f. miller debra j. richardson objective viewpoint: the java3d programming model george crawford empirical investigation of a novel approach to check the integrity of software engineering measuring processes (poster session) this distribution is counter-intuitive for at least two reasons. first, it would seem "obvious" that the numbers drawn from a list generated from widely different arbitrary processes would have roughly equal probabilities for the digits 1 and 9 to be first digits. this is not normally the case. if the list of numbers does not have artificial limits, or include invented numbers such as postal codes, then approximately 30% of the numbers will have 1 as their first digit, but only 5% will have 9 as their first digit. deviations from the expected benford distribution indicate the presence of some special characteristic of the data. the second, more theoretically challenging, problem is: what is the underlying property associated with so many widely different processes which generates lists of numbers that follow benford's law? we have conducted an empirical investigation to determine under what circumstances various software metrics follow benford's law, and whether any special characteristics, or irregularities, in the data can be uncovered if the data are found not to follow the law. the more tricky problem of understanding why the list of metrics might follow benford's law is left to another study. lists were formed from three software metrics extracted from 100 public domain industrial java projects. these metrics were lines of code (loc), fan-out (fo) and mccabe cyclomatic complexity (mcc). given that a benford's law analysis requires a list of considerable length, the data were divided into two groups. the first group was from projects containing more than 100 files. this was intended as the "control group" and was expected to follow benford's law if that law was applicable to the analysis of software engineering metrics. to study the sensitivity of the digital analysis technique to project size, projects with a smaller number of files were compared to the control group. the empirical results indicate that the first digits of numbers in lists of loc metrics extracted from the projects more closely followed the probabilities predicted by benford's law than the "equal probability of occurrence" suggested by intuitive reasoning. this was shown using both qualitative and quantitative measures. the fo and mcc metrics did not follow the standard benford's law as well as did the loc metrics. this is because the fo and mcc lists contain a significant number of numbers less than 10 and follow a different first digit distribution. further investigation of the digital analysis technique is necessary to evaluate the applicability of benford's law in the total context of software metrics.
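the benford's-law study above compares observed first-digit frequencies of metric values against the predicted p(d) = log10(1 + 1/d), roughly 30% for a leading 1 and under 5% for a leading 9. the c sketch below computes both sides of that comparison for a small made-up list of per-file loc values; the real study used metrics from 100 java projects, so the numbers here are illustrative only.

```c
/* first-digit distribution of a metric list vs. the benford prediction */
#include <math.h>
#include <stdio.h>

static int first_digit(long v) {
    if (v < 0) v = -v;
    while (v >= 10) v /= 10;
    return (int)v;
}

int main(void) {
    long loc[] = { 12, 187, 95, 1450, 23, 310, 77, 129, 1980, 14,
                   260, 41, 1120, 19, 530, 88, 171, 36, 1400, 15 };
    int n = (int)(sizeof loc / sizeof loc[0]);
    int count[10] = { 0 };

    for (int i = 0; i < n; i++)
        if (loc[i] > 0)
            count[first_digit(loc[i])]++;

    printf("d  observed  benford\n");
    for (int d = 1; d <= 9; d++)
        printf("%d  %8.3f  %7.3f\n",
               d, (double)count[d] / n, log10(1.0 + 1.0 / d));
    return 0;
}
```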
skylar lei michael smith giancarlo succi oop in languages providing strong, static typing (panel session) david bulman asynchronous parallel algorithm for mining association rules on a shared-memory multi-processors david w. cheung kan hu shaowei xia software metrics - an experimental analysis b. mehndiratta p. s. grover correspondence b. j. wu a software technology evaluation program (abstract only) a wealth of potentially beneficial software engineering tools, practices, and techniques has emerged in the past several years. simultaneously, realization has grown that all software engineering technologies are not equally effective for all software development problems and environments. the software engineering laboratory (sel), a cooperative project of the national aeronautics and space administration, computer sciences corporation, and the university of maryland, conducts an extensive technology evaluation program. measurement is the basic prerequisite for evaluation. the sel collects measures on the production of fortran software for spacecraft navigation systems. recent sel investigations demonstrated that the use of structured programming and quality assurance improves software reliability. however, the major factor in both productivity and reliability continues to be personnel capability. such technology evaluation programs provide an empirical basis for defining software development standards and selecting tools. david n. card a report on random testing random testing of programs is usually (but not always) viewed as a worst case of program testing. test case generation that takes into account the program structure is usually preferred. path testing is an often proposed ideal for structural testing. path testing is treated here as an instance of partition testing. (partition testing is any testing scheme which forces execution of at least one test case from each subset of a partition of the input domain.) simulation results are presented which treat path and partition testing in a reasonably favorable way, and yet still suggest that random testing may often be more cost effective. results of actual random testing experiments are presented which tend to confirm the viability of random testing as a useful validation tool. joe w. duran simeon ntafos reliable atomic broadcast in distributed systems with omission faults g. le lann g. bres cde plug-and-play the programming infrastructure, such as tooltalk, is a major strength of the common desktop environment. this article illustrates client and server plug-and-play through the use of the desktop's application programming interfaces (apis) 2 and m nonprime, can be reduced to the 2-operand case by isomorphic transformation. computation results of 2-operand residue arithmetic operations are provided. applications to rns arithmetic implementation are discussed. christos a. papachristou the road to better reliability and yield embedded dfm tools kees veelenturf mixed-level simulation from a hierarchical chdl a system for mixed-level design simulation from a comprehensive chdl is introduced. the chdl supports hierarchical design descriptions at behavioral, logic and electrical levels. a design is described as a set of interconnected blocks, where each block may be described at any of the three levels. a mixed-level simulation capability is described. designs may be simulated at any of these levels, or at a mixture of levels. simultaneous simulation of a given block at two levels is also supported.
problems related to implementation of such a system are discussed. william a. johnson jeri jane crowley j. d. ray multi chip modules r. h. bruce w. p. meuli j. ho power comparisons for barrel shifters k. acken m. irwin r. owens 1991 acm/sigda international workshop on formal methods in vlsi design p. a. subrahmanyam an efficient procedure for the synthesis of fast self-testable controller structures the bist implementation of a conventionally synthesized controller in most cases requires the integration of an additional register only for test purposes. this leads to some serious drawbacks concerning the fault coverage, the system speed and the area overhead. a synthesis technique is presented which uses the additional test register also to implement the system function by supporting self-testable pipeline-like controller structures. it will be shown, that if the need of two different registers in the final structure is already taken into account during synthesis, then the overall number of flipflops can be reduced, and the fault coverage and system speed can be enhanced. the presented algorithm constructs realizations of a given finite state machine specification which can be trivially implemented by a self- testable structure. the efficiency of the procedure is ensured by a very precise characterization of the space of suitable realizations, which avoids the computational overhead of previously published algorithms. sybille hellebrand hans-joachim wunderlich diagnosing programmable interconnect systems for fpgas fabrizio lombardi david ashen xiaotao chen wei kang huang a hardware environment for prototyping and partitioning based on multiple fpgas marc wendling wolfgang rosenstiel delay fault coverage: a realistic metric and an estimation technique for distributed path delay faults mukund sivaraman andrzej j. strojwas dummy feature placement for chemical-mechanical polishing uniformity in a shallow trench isolation process manufacturability of a design that is processed with shallow trench isolation (sti) depends on the uniformity of the chemical-mechanical polishing (cmp) step in sti. the cmp step in sti is a dual-material polish, for which all previous studies on dummy feature placement for single-material polish [3, 11, 1] are not applicable. based on recent semi-physical models of polish pad bending [5], local polish pad compression [2, 10], and different polish rates for materials present in a dual-material polish [2, 13], this paper derives a time-dependent relation between post-cmp topography and layout pattern density for cmp in sti. using the dependencies derived, the first formulation of dummy feature placement for cmp in sti is given as a nonlinear programming problem. an iterative approach is proposed to solve the dummy feature placement problem. computational experience on four layouts from motorola is given. ruiqi tian xiaoping tang d. f. wong memory systems doug burger parallel mixed-level power simulation based on spatio-temporal circuit partitioning mauro chinosi roberto zafalon carlo guardiani performance consequences of parity placement in disk arrays edward k. lee randy h. katz free performance and fault tolerance (extended abstract): using system idle capacity efficiently srini tridandapani anton t. dahbura charles u. martel john matthews arun k. somani merlin: semi-order-independent hierarchical buffered routing tree generation using local neighborhood search amir h. 
salek jinan lou massoud pedram minimizing the application time for manufacturer testing of fpga (abstract) f. fummi a. marshall l. pozzi m. g. sami accurate layout area and delay modeling for system level design c. ramachandran f. j. kurdahi d. d. gajski a. c.-h. wu v. chaiyakul evaluation of parts by mixed-level dc-connected components in logic simulation dah-cherng yuan lawrence t. pillage joseph t. rahmeh an investigation of power delay trade-offs on powerpc circuits qi wang sarma b. k. vrudhula shantanu ganguly reducing the complexity of defect level modeling using the clustering effect jose t. de sousa vishwani d. agrawal algorithms for multiple-criterion design of microprogrammed control hardware this paper describes two algorithms designed to optimize memory size and controller performance for a microprogrammed controller. the algorithms accept two inputs: a set of interconnected registers and logical operators called the data paths, and a control flow graph which describes how these data paths are to be exercised. the autonomy algorithm identifies data path elements which should be controlled directly from the microword without encoding. this algorithm aids the effectiveness of the subsequent encoding algorithm by eliminating some signals from consideration. a second algorithm, the attraction algorithm, determines which microoperations will execute in parallel and which will be encoded into separate microinstruction formats. this algorithm accepts a microword width constraint and implements parallel operations in the microcode and the corresponding encoding. both the parallelism and the encoding are determined by the algorithm. application of these algorithms to an example, the pdp**-11/40, has produced a control store design 14 percent wider and equal in parallelism to an equivalent portion of the human design. andrew w. nagle alice c. parker module placement for analog layout using the sequence-pair representation florin balasa koen lampaert an automatically verified generalized multifunction arithmetic pipeline matthias mutz controlling and sequencing a heavily pipelined floating-point operator andre seznec karl courtel analyzing the working set characteristics of branch execution sangwook p. kim gary s. tyson selective value prediction value prediction is a relatively new technique to increase instruction-level parallelism by breaking true data dependence chains. a value prediction architecture _produces_ values, which may be later _consumed_ by instructions that execute speculatively using the predicted value.this paper examines selective techniques for using value prediction in the presence of predictor capacity constraints and reasonable misprediction penalties. we examine prediction and confidence mechanisms in light of these constraints, and we minimize capacity conflicts through instruction filtering. the latter technique filters which instructions put values into the value prediction table. we examine filtering techniques based on instruction type, as well as giving priority to instructions belonging to the longest data dependence path in the processor's active instruction window. we apply filtering both to the producers of predicted values and the consumers. in addition, we examine the benefit of using different confidence levels for instructions using predicted values on the longest dependence path. brad calder glenn reinman dean m. tullsen new design error modeling and metrics for design validation sungho kang stephen a. 
szygenda exploiting off-chip memory access modes in high-level synthesis preeti ranjan panda nikil d. dutt alexandru nicolau buffer insertion with accurate gate and interconnect delay computation charles j. alpert anirudh devgan stephen t. quay cycle-accurate energy consumption measurement and analysis: case study of arm7tdmi we introduce an energy consumption analysis of complex digital systems through a case study of arm7tdmi risc processor by using a new energy measurement technique. we developed a cycle-accurate energy consumption measurement system based on charge transfer which is robust to spiky noise and is capable of collecting a range of power consumption profiles in real time. the relative energy variation of the risc core is measured by changing the opcode, the instruction fetch address, the register number, in each pipeline stage, respectively. we demonstrated energy characterization of a pipelined risc processor for high-level power reduction. naehyuck chang kwanho kim hyung gyu lee a new approach to the rectilinear steiner tree problem we discuss a new approach to constructing the rectilinear steiner tree (rst) of a given set of points in the plane, starting from a minimum spanning tree (mst). the main idea in our approach is to determine l-shaped layouts for the edges of the mst, so as to maximize the overlaps between the layouts, thus minimizing the cost (i.e., wire length) of the resulting rst. we describe a linear time algorithm for constructing a rst from a mst, such that the rst is optimal under the restriction that the layout of each edge of the mst is an l-shape. the rst's produced by this algorithm have 8-33% lower cost than the mst, with the average cost improvement, over a large number of random point sets, being about 9%. the running time of the algorithm on an ibm 3090 processor is under 0.01 seconds for point sets with cardinality 10. we also discuss a property of rst's called stability under rerouting, and show how to stabilize the rst's derived from our approach. stability is a desirable property in vlsi global routing applications. jan-ming ho g. vijayan c. k. wong cosmos: a compiled simulator for mos circuits r. e. bryant d. beatty k. brace k. cho t. sheffler atpg tools for delay faults at the functional level s. tragoudas m. michael locating logic design errors via test generation and don't-care propagation sy-yen kuo hope: an efficient parallel fault simulator for synchronous sequential circuits h. k. lee d. s. ha a mos/lsi oriented logic simulator a logic simulator capable of efficiently modelling complex mos/lsi circuits is presented. the circuit is simulated at the combinational logic and transmission gate level using a set of six node-states. gate models have inertial delay and assignable nominal rise and fall delays. both unidirectional and bidirectional transmission gates are accurately simulated, and functional models are provided for rom, ram, etc. dan holt dave hutchings paola: a tool for topological optimization of large plas this paper presents a tool, called paola, for optimizing the layout of large plas used as decoders in vlsi systems. the optimization techniques it uses are heuristic. they involve compacting the and/or matrices by cutting and reorganizing the input/output lines in order to reduce the number of columns in these matrices. they also allow the lengthening of the shape of the pla and the lateral access to the input/output segments. 
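the rectilinear steiner tree abstract above starts from a rectilinear minimum spanning tree and then picks l-shaped layouts for its edges to maximize overlap. the sketch below constructs only that starting point, a manhattan-distance mst via prim's algorithm on a few made-up pin locations; the l-shape selection and overlap maximization of the paper are not reproduced.

```c
/* prim's algorithm over manhattan distance: the mst from which an rst
 * construction of the kind described above would begin. */
#include <stdio.h>
#include <stdlib.h>

struct pt { int x, y; };

static int manhattan(struct pt a, struct pt b) {
    return abs(a.x - b.x) + abs(a.y - b.y);
}

int main(void) {
    struct pt pin[] = { {0, 0}, {4, 1}, {1, 5}, {6, 6}, {3, 3} };
    enum { N = sizeof pin / sizeof pin[0] };
    int in_tree[N] = { 1 };           /* grow the tree from pin 0          */
    int best_cost[N], best_from[N];   /* cheapest connection into the tree */
    int total = 0;

    for (int i = 1; i < N; i++) { best_cost[i] = manhattan(pin[0], pin[i]); best_from[i] = 0; }

    for (int step = 1; step < N; step++) {
        int v = -1;
        for (int i = 0; i < N; i++)   /* pick the cheapest fringe pin */
            if (!in_tree[i] && (v < 0 || best_cost[i] < best_cost[v])) v = i;
        in_tree[v] = 1;
        total += best_cost[v];
        printf("edge %d-%d length %d\n", best_from[v], v, best_cost[v]);
        for (int i = 0; i < N; i++)   /* relax the remaining pins */
            if (!in_tree[i] && manhattan(pin[v], pin[i]) < best_cost[i]) {
                best_cost[i] = manhattan(pin[v], pin[i]);
                best_from[i] = v;
            }
    }
    printf("mst wirelength = %d\n", total);
    return 0;
}
```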
this eases the topological adaptation of different blocks in order to reduce the surface of the interconnection network between them. the layout of the pla uses internal topological conflicts and ground refresh lines positions to improve accessibility to the input/output segments created inside the and/or matrices. this system has been tested on several examples including industrial plas. it gives an area reduction of the or matrix of up to 50% on plas having 3000 to 5000 positions in the or matrix with a computing time of 4 to 5 minutes on a main frame hb-68 computer running with the multics operating system. samuel chuquillanqui tomas perez segovia all digital built-in delay and crosstalk measurement for on-chip buses chauchin su yue-tsang chen mu-jeng huang gen-nan chen chung-len lee dynamic fault collapsing and diagnostic test pattern generation for sequential circuits vamsi boppana w. kent fuchs analog system verification in the presence of parasitics using behavioral simulation edward w. y. liu henry c. chang alberto l. sangiovanni-vincentelli an efficient verification algorithm for parallel controllers krzysztof bilinski e. l. dagless jonathan saul janusz szajna single chip or hybrid system integration ivo bolsens efficient floorplan area optimization the floorplan area optimization problem is to determine the shape and dimensions of all the modules when the topology of the floorplan is given. the objective is to minimize the area of the resulting floorplan. existing methods only apply to slicing floorplans. we present in this paper an algorithm for general hierarchical floorplans. our algorithm combines the curve-adding technique used in the case of slicing floorplans and a new technique that computes shape curves by iterative modification of the shape and dimensions of individual modules. we also present an optimal points selection algorithm using the technique of dynamic programming to further enhance the process of shape curves construction. d. f. wong p. s. sakhamuri an analog performance estimator for improving the effectiveness of cmos analog systems circuit synthesis adrian nunez-aldana ranga vemuri using partitioning to help convergence in the standard-cell design automation methodology hema kapadia mark horowitz a cmos continuous-time field programmable analog array c. a. looby c. lyden a graphical hardware design language gdl is a graphical hardware design language that separates design decisions into three interrelated, but distinct domains: behavioral, structural and physical. specific language features are provided to represent a design in each of these domains. this report describes the process model for gdl. functional behavior is separated into distinct activities called "processes" (autonomous control centers.) the computations performed by a process are specified in its behavior graph. processes may communicate with each other through ports where the channel between two ports may be an abstract logical link or may be a physical bus. provisions are made for synchronization. the paper concludes with an evaluation of gdl and suggestions for future research. paul j. drongowski jwahar r. bami ranganathan ramaswamy sundar iyengar tsu-hua wang a fault simulation methodology for mems r. rosing a. m. richardson a. p. dorey low power, testable dual edge triggered flip-flops r. llopis m. 
sachdev a flexible specification framework for hardware-software codesign (poster paper) jose manuel moya santiago domínguez francisco moya juan carlos lopez hierarchical test generation under intensive global functional constraints j. lee j. h. patel slic - symbolic layout of integrated circuits d. gibson s. nance a probability-based approach to vlsi circuit partitioning shantanu dutt wenyong deng automatic layout of silicon-on-silicon hybrid packages in modern, high speed computer systems, performance and density limits are being set more by interconnection and packaging constraints than by transistor characteristics. the most severe limitation comes from the single-chip packages that carry the vlsi circuits. multichip, silicon-on-silicon hybrid packages can significantly improve performance by eliminating this level of packaging. a system has been developed to automatically generate hybrid layouts given a schematic description and layouts of the vlsi circuits. this paper describes the hybrid technology, the design automation system foundation, and the hybrid layout system. this layout method, in combination with the fabrication technology, produces layouts that are 5 to 8 times more dense than the same circuits implemented with single-chip packages on printed circuit boards. simulations show that clock speeds can be increased by a factor of two. b. preas m. pedram d. curry efficient linear circuit analysis by pade approximation via the lanczos process peter feldmann roland w. freund testing redundant asynchronous circuits by variable phase splitting luciano lavagno antonio lioy michael kishinevsky interlaced accumulation programming for low power dsp h. kojima a. shridhar reliability study of combinatorial circuits edgar holmann ivan r. linscott g. leonard tyler verifying large-scale multiprocessors using an abstract verification environment due to a patent dispute, full text of this article is not available at this time. dennis abts mike roberts hybrid fpga architecture alireza kaviani stephen brown detecting undetectable controller faults using power analysis j. carletta c. a. papachristou m. nourani bridging fault detection in fpga interconnects using iddq this paper presents a vector generation approach for testing interconnects in configurable (sram-based) field programmable gate arrays (fpgas). the proposed approach detects bridging faults and is based on quiescent current (iddq) monitoring. compared with previous voltage-based methods, iddq testing has the advantage of utilizing a small number of programming phases for configuring the fpga during the test process with negligible observability requirements, even under multiple faults. algorithms for test generation which exploit the homogeneous nature of the fpga array are described. an example using the xc4000 is described in detail. for testing the xc4000 series interconnect, a total of 20 phases and 11 vectors are required: 11 phases for s (switch) block testing, and 9 phases for c (connection) block testing. l. zhao d. m. h. walker f. lombardi architectural support for reduced register saving/restoring in single-window register files the use of registers in a processor reduces the data and instruction memory traffic. since this reduction is a significant factor in the improvement of the program execution time, recent vlsi processors have a large number of registers which can be used efficiently because of the advances in compiler technology.
however, since registers have to be saved/restored across function calls, the corresponding register saving and restoring (rsr) memory traffic can almost eliminate the overall reduction. this traffic has been reduced by compiler optimizations and by providing multiple-window register files. although these multiple-window architectures produce a large reduction in the rsr traffic, they have several drawbacks which make the single-window file preferable. we consider a combination of hardware support and compiler optimizations to reduce the rsr traffic for a single-window register file, beyond the reductions achieved by compiler optimizations alone. basically, this hardware keeps track of the registers that are written during execution, so that the number of registers saved is minimized. moreover, hardware is added so that a register is saved in the activation record of the function that uses it (instead of in the record of the current function); in this way a register is restored only when it is needed, rather than wholesale on procedure return. we present a register saving and restoring policy that makes use of this hardware, discuss its implementation, and evaluate the traffic reduction when the policy is combined with intraprocedural and interprocedural compiler optimizations. we show that, on the average for the four general- purpose programs measured, the rsr traffic is reduced by about 90 percent for a small register file (i.e., 32 registers), which results in an overall data memory traffic reduction of about 15 percent. miquel huguet tomás lang tearing based automatic abstraction for ctl model checking woohyuk lee abelardo pardo jae-young jang gary hachtel fabio somenzi a one-bit-signature bist for embedded operational amplifiers in mixed-signal circuits based on the slew-rate detection i. rayane j. velasco-medina m. nicolaidis atpg based on a novel grid-addressable latch element susheel j. chandra tom ferry tushar gheewala kerry pierce fpga logic block architecture for digit-serial dsp applications (abstract) hanho lee sarvesh shrivastava gerald e. sobelman lisa - machine description language for cycle-accurate models of programmable dsp architectures stefan pees andreas hoffmann vojin zivojnovic heinrich meyr switch bound allocation for maximizing routability in timing-driven routing of fpgas kai zhu d. f. wong be careful with don't cares daniel brand reinaldo a. bergamaschi leon stok compression-relaxation: a new approach to performance driven placement for regular architectures we present a new iterative algorithm for performance driven placement applicable to regular architectures such as fpgas. our algorithm has two phases in each iteration: a compression phase and a relaxation phase. we employ a novel compression strategy based on the longest path tree of a cone for improving the timing performance of a given placement. compression might cause a feasible placement to become infeasible. the concept of a slack neighborhood graph is introduced and is used in the relaxation phase to transform an infeasible placement to a feasible one using a mincost flow formulation. our analytical results regarding the bounds on delay increase during relaxation are validated by the rapid convergence of our algorithm on benchmark circuits. we obtain placements that have 13% less critical path delay (on the average) than those generated by the xilinx automatic place and route tool (apr) on technology mapped mcnc benchmark circuits with significantly less cpu time than apr. anmol mathur c. l. 
liu a defect-tolerant and fully testable pla this paper presents a defect-tolerant and fully testable pla allowing for the repair of a defective chip. the repair process is described. special emphasis is devoted to the location of defects inside a pla. the defect location mechanism is completely topological and circuit independent and therefore easy to adapt to existing pla generators. yield considerations for this type of pla are presented. n. wehn m. glesner k. caesar p. mann a. roth mixing buffers and pass transistors in fpga routing architectures the routing architecture of an fpga consists of the length of the wires, the type of switch used to connect wires (buffered, unbuffered, fast or slow) and the topology of the interconnection of the switches and wires. fpga routing architecture has a major influence on the logic density and speed of fpga devices. previous work [], based on a 0.35um cmos process, has suggested an architecture consisting of length 4 wires (where the length of a wire is measured in terms of the number of logic blocks it passes before being switched) in which half of the programmable switches are active buffers, and half are pass transistors. in that work, however, the topology of the routing architecture prevented buffered tracks from connecting to pass-transistor tracks. this restriction prevents the creation of interconnection trees for high fanout nets that have a mixture of buffers and pass transistors. electrical simulations suggest that connections closer to the leaves on interconnection trees are faster using pass transistors, but it is essential to buffer closer to the source. this latter effect is well known in regular asic routing [2]. in this work we propose a new routing architecture that allows liberal switching between buffered and pass transistor tracks. we explore various versions of the architecture to determine the density-speed trade-off. we show that one version of the new architecture results in fpgas with 10% faster critical path delay yet uses the same area as the previous architecture that does not allow such switching. we also show that the new architecture allows a useful area-speed trade off and several versions of the new architecture result in fpgas with an 8% gain in area-delay product over the previous architecture that does not allow the switching. mike sheng jonathan rose the chip layout problem: an automatic wiring procedure k. a. chen m. feuer k. h. khokhani n. nan s. schmidt improving i/o performance with a conditional store buffer lambert schaelicke al davis esp: a new standard cell placement package using simulated evolution esp (evolution-based standard cell placement) is a new program package designed to perform standard cell placement and includes macro-block placement capabilities. it uses the new heuristic method of simulating an evolutionary process in order to minimize the cell interconnection wire length. while achieving results comparable to or better than the popular simulated annealing algorithm, esp performs its task about ten times faster. r.-m. kling p. banerjee improving prediction for procedure returns with return-address-stack repair mechanisms kevin skadron pritpal s. ahuja margaret martonosi douglas w. clark the theory of signature testing for vlsi several methods for testing vlsi chips can be classified as signature methods. both conventional and signature testing methods apply a number of test patterns to the inputs of the circuit.
the difference is that a conventional method examines each output, while a signature method first accumulates the outputs in some data compression device, then examines the signature - the final contents of the accumulator - to see if it agrees with the signature produced by a good chip. signature testing methods have several advantages, but they run the risk that masking may occur. masking is said to occur if a faulty chip and a good chip behave differently on the test patterns, but the signatures are identical. when masking occurs, the signature testing method will incorrectly conclude that the chip is good, whereas a conventional method would discover that the chip is defective. this paper gives theoretical justification to the use of several signature testing techniques. we show that for these methods, the probability that masking will occur is small. an important difference between this and other work is that our results require very few assumptions about the behavior of faulty chips. they hold even in the presence of so-called correlated errors or even if the circuit were subject to sabotage. when we speak of the probability of masking, we use the probabilistic approach of gill, rabin and others. that is, we introduce randomness into the testing method in a way which can be controlled by the designer. thus, one theorem assumes that the order of the input patterns - or the patterns themselves - is random; another assumes that the connections between the chip and the signature accumulator are made randomly, and a third assumes that the signature accumulator itself incorporates a random choice. most of the results of this paper use a particularly simple and practical signature accumulator based on a linear feedback shift register. j. lawrence carter architecture of centralized field-configurable memory as the capacities of fpgas grow, it becomes feasible to implement the memory portions of systems directly on an fpga together with logic. we believe that such an fpga must contain specialized architectural support in order to implement memories efficiently. the key feature of such architectural support is that it must be flexible enough to accommodate many different memory shapes (widths and depths) as well as allowing different numbers of independently-addressed memory blocks. this paper describes a family of centralized field-configurable memory architectures which consist of a number of memory arrays and dedicated mapping blocks to combine these arrays. we also present a method for comparing these architectures, and use this method to examine the tradeoffs involved in choosing the array size and mapping block capabilities. steven j. e. wilton jonathan rose zvonko g. vranesic on designing ulm-based fpga logic modules in this paper, we give a method to design fpga logic modules, based on an extension of classical work on designing universal logic modules (ulm). specifically, we give a technique to design a class of logic modules that specialize to a large number of functions under complementations and permutations of inputs, bridging of inputs and assignment of 0/1 to inputs. thus, many functions can be implemented using a single logic module. the significance of our work lies in our ability to generate a large set of such logic modules. a choice can be made from this set based on design criteria. we demonstrate the technique by generating a set of 471 8-input functions that have a much higher coverage than the 8-input cells employed by actel's fpgas.
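the theory-of-signature-testing abstract above centers on a signature accumulator built from a linear feedback shift register. a minimal sketch of such a serial-input accumulator follows; the 16-bit width and the feedback taps are arbitrary illustrative choices, not parameters taken from the paper.

    # illustrative 16-bit serial-input lfsr signature register.
    # the feedback taps are an example polynomial only, not one
    # prescribed by the paper discussed above.

    TAPS = (0, 2, 3, 5)   # feedback tap positions, example only
    WIDTH = 16

    def shift(state, in_bit):
        """shift one circuit response bit into the signature register."""
        feedback = in_bit
        for t in TAPS:
            feedback ^= (state >> t) & 1
        return ((state << 1) | feedback) & ((1 << WIDTH) - 1)

    def signature(response_bits, seed=0):
        """compress a stream of response bits into a final signature."""
        state = seed
        for b in response_bits:
            state = shift(state, b)
        return state

    # masking corresponds to a faulty response stream compressing to the
    # same final signature as the fault-free stream.
    good = signature([1, 0, 1, 1, 0, 0, 1])
    bad  = signature([1, 0, 1, 0, 0, 0, 1])
    assert good != bad   # holds for this example; not guaranteed in general

masking in this sketch means two different response streams ending in the same state; the abstract argues that, once randomness is introduced into the test procedure, this happens only with small probability.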
our functions can specialize to up to 23 times the number of functions that actel functions can. we also show that by carefully optimizing these functions one can obtain multi-level implementations of them that have delays within 10% of the delays of actel modules. we demonstrate the effectiveness of these modules in mapping benchmark circuits. we observed a 16% reduction in area and a 21% reduction in delay using our logic modules instead of actel's on these circuits. shashidhar thakur d. f. wong pace - a microprogram evaluation system this paper describes the pace (product assurance code evaluation) system, a tool for evaluating microprograms. pace incorporates both static analysis and dynamic analysis capabilities and it provides features that enable systematic and comprehensive evaluations of large-scale microcoded systems. the pace static analysis capability performs a control flow analysis of the code being evaluated, reports various anomalous program constructs, and generates a program flow graph that is subsequently employed by pace's dynamic analysis procedures. the pace dynamic analysis capability uses encoded execution trace data to produce microcode test-coverage reports and formatted code-execution traces. the dynamic analysis capability provides quantitative code execution coverage data that enables an assessment of testing thoroughness and is useful in the identification of effective regression test cases. robert e. skibbe general purpose router numerous solutions to the problem of detailed routing of wires on a chip have been proposed for two routing layers but few are general enough to also handle switchboxes, more than two layers, variable channel widths, or multiple-layer problems with stacked terminals (3-d routing) without extensive modifications. we propose a different routing approach that not only can solve the two layer problem but the other problems as well. the inherent parallelism of the approach leads to a coarse-grained parallel algorithm. r. j. enbody h. c. du a study of i/o behavior of perfect benchmarks on a multiprocessor the i/o behavior of some scientific applications, a subset of perfect benchmarks, executing on a multiprocessor is studied. the aim of this study is to explore the various patterns of i/o access of large scientific applications and to understand the impact of this observed behavior on the i/o subsystem architecture. i/o behavior of the program is characterized by the demands it imposes on the i/o subsystem. it is observed that implicit i/o or paging is not a major problem for the applications considered and the i/o problem is mainly manifest in the explicit i/o done in the program. various characteristics of i/o accesses are studied and their impact on architecture design is discussed. a. l. narasimha reddy prithviraj banerjee algebraic analysis of nondeterministic behavior this paper is concerned with the analysis of design errors that lead to unpredictable response of digital systems. besides classical topics, such as hazards and races, the analysis of malfunctions in real circuits is also included. after defining the notion of behavior and nondeterministic response, a general approach for detecting such design problems through algebraic analysis is presented. compared with existing simulation methods, the algebraic technique provides results of improved accuracy. another basic advantage is the ability to accommodate modular synthesis of digital systems.
examples show how the proposed methods deal with sequential circuits under various delay assumptions. in particular, analysis of designs based on nominal delay parameters and on window delays is presented. a novel method, aiming at spike detection, is also presented. the ability of the algebraic analysis to detect errors in a modular design environment is illustrated by means of an example. finally, the topic of nondeterministic behavior at rtl is briefly discussed. notably, an algebraic method for deriving setup and hold time constraints from the circuit delay parameters is proposed. s. leinwand t. lamdan communication-time trade-offs in network synchronization baruch awerbuch board-level multiterminal net routing for fpga-based logic emulation we consider a board-level routing problem applicable to fpga-based logic emulation systems such as the realizer system [varghese et al. 1993] and the enterprise emulation system [maliniak 1992] manufactured by quickturn design systems. optimal algorithms have been proposed for the case where all nets are two-terminal nets [chan and schlag 1993; mak and wong 1995]. we show how multiterminal nets can be handled by decomposition into two-terminal nets. we show that the multiterminal net decomposition problem can be modeled as a bounded-degree hypergraph-to-graph transformation problem where hyperedges are transformed to spanning trees. a network flow-based algorithm that solves both problems is proposed. it determines if there is a feasible decomposition and gives one whenever such a decomposition exists. wai-kei mak d. f. wong global stacking for analog circuits b. arsintescu s. spânoche logic synthesis for efficient pseudoexhaustive testability andrzej krasniewski control optimization based on resynchronization of operations david c. ku dave filo giovanni de micheli wattch: a framework for architectural-level power analysis and optimizations power dissipation and thermal issues are increasingly significant in modern processors. as a result, it is crucial that power/performance tradeoffs be made more visible to chip architects and even compiler writers, in addition to circuit designers. most existing power analysis tools achieve high accuracy by calculating power estimates for designs only after layout or floorplanning are complete. in addition to being available only late in the design process, such tools are often quite slow, which compounds the difficulty of running them for a large space of design possibilities. this paper presents wattch, a framework for analyzing and optimizing microprocessor power dissipation at the architecture-level. wattch is 1000x or more faster than existing layout-level power tools, and yet maintains accuracy within 10% of their estimates as verified using industry tools on leading-edge designs. this paper presents several validations of wattch's accuracy. in addition, we present three examples that demonstrate how architects or compiler writers might use wattch to evaluate power consumption in their design process. we see wattch as a complement to existing lower- level tools; it allows architects to explore and cull the design space early on, using faster, higher-level tools. it also opens up the field of power- efficient computing to a wider range of researchers by providing a power evaluation methodology within the portable and familiar simplescalar framework. david brooks vivek tiwari margaret martonosi wire-sizing for delay minimization and ringing control using transmission line model youxin gao d. f. 
wong palace: a layout generator for scvs logic blocks a novel approach to the automatic layout synthesis of dynamic cmos circuits is presented. a set of logic expressions is realized in a row of cells. taking multi-level boolean expressions as input, logic transistors are placed and routed. efficient solutions are achieved by permuting the variables of the expressions and by row folding. the layout is designed on a coarse grid taking timing requirements into account and afterwards adapted to the geometric design rules by a compactor. a comparison to handcrafted layouts shows that the results of palace are nearly equivalent. at the same time the design productivity is increased significantly. knut m. just edgar auer werner l. schiele alexander schwaferts a digital partial built-in self-test structure for a high performance automatic gain control circuit a. lechner j. ferguson a. richardson b. hermes the ispd98 circuit benchmark suite from 1985-1993, the mcnc regularly introduced and maintained circuit benchmarks for use by the design automation community. however, during the last five years, no new circuits have been introduced that can be used for developing fundamental physical design applications, such as partitioning and placement. the largest circuit in the existing set of benchmark suites has over 100,000 modules, but the second largest has just over 25,000 modules, which is small by today's standards. this paper introduces the ispd98 benchmark suite which consists of 18 circuits with sizes ranging from 13,000 to 210,000 modules. experimental results for three existing partitioners are presented so that future researchers in partitioning can more easily evaluate their heuristics. charles j. alpert a mixed-mode simulator to provide flexibility and efficiency in logic and timing verification of mos vlsi circuits, it is desirable that various portions of a circuit can be described and simulated at appropriate levels of detail. such a capability is provided by the mixed-mode simulator described here. this simulator allows different elements of a circuit to be modeled and simulated at different levels of detail. the modeling levels are mos transistor level, logic gate level and functional level. the simulation levels are timing, multiple delay and unit delay. the simulator is being used on production lsi chips and its performance is discussed. v. d. agrawal a. k. bose p. kozak h. n. nham e. pacas-skewes 1991 international workshop on formal methods in vlsi design: trip report richard boulton a general dispersive multiconductor transmission line model for interconnect simulation in spice mustafa celik andreas c. cangellaris configuration compression for fpga-based embedded systems fpgas are a promising technology for developing high-performance embedded systems. the density and performance of fpgas have drastically improved over the past few years. consequently, the size of the configuration bit-streams has also increased considerably. as a result, the cost-effectiveness of fpga- based embedded systems is significantly affected by the memory required for storing various fpga configurations. this paper proposes a novel compression technique that reduces the memory required for storing fpga configurations and results in high decompression efficiency. decompression efficiency corresponds to the decompression hardware cost as well as the decompression rate. the proposed technique is applicable to any sram-based fpga device since configuration bit-streams are processed as raw data. 
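the configuration-compression abstract above treats fpga configuration bit-streams as raw data; its actual compression algorithm is not stated in the abstract, so the sketch below only illustrates the general idea with a simple run-length coder over raw configuration bytes. the function names and the example bit-stream are made up for illustration.

    # generic run-length coder over a raw configuration byte stream.
    # this only illustrates compressing bit-streams treated as raw data;
    # it is not the technique of the paper summarized above.

    def rle_encode(data: bytes) -> bytes:
        out = bytearray()
        i = 0
        while i < len(data):
            run = 1
            while i + run < len(data) and data[i + run] == data[i] and run < 255:
                run += 1
            out += bytes((run, data[i]))     # (count, value) pairs
            i += run
        return bytes(out)

    def rle_decode(coded: bytes) -> bytes:
        out = bytearray()
        for k in range(0, len(coded), 2):
            count, value = coded[k], coded[k + 1]
            out += bytes([value]) * count
        return bytes(out)

    # long constant runs, typical of sparsely used configuration frames,
    # compress well; the example data is invented.
    bitstream = bytes([0x00] * 512 + [0xFF] * 16 + [0x3A, 0x3A, 0x07])
    packed = rle_encode(bitstream)
    assert rle_decode(packed) == bitstream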
the required decompression hardware is simple and is independent of the individual semantics of configuration bit-streams or specific features of the on-chip configuration mechanism. moreover, the time to configure the device is not affected by our compression technique. using our technique, we demonstrate up to 41% savings in memory for configuration bit-streams of several real-world applications. andreas dandalis viktor k. prasanna a digital method for testing embedded switched capacitor filters m. robson g. russell system design based on single language and single-chip java asip microcontroller sergio akira ito luigi carro ricardo pezzuol jacobi test generation for lsi a new automatic test generation approach for lsi circuits has been presented in the companion papers [1,2]. in this paper we generate tests for a typical lsi circuit using the new approach. the goal of this study is to gain insight into the problems involved in using the test generation procedures. a formal model c for a 1-bit microprocessor slice is defined which has all the main features of commercially available bit slices such as the am2901. the circuit c is modeled as a network of interconnected functional modules. the functions of the individual modules are described using binary decision diagrams, or equivalently using experiments derived from the diagrams. using our test generation technique, we derive tests for the circuit c capable of detecting various faults covered by our fault model [1]. it is shown that backtracking is rarely needed while generating tests for c. also, we show that generating a multiple vector test is not required for any of the faults considered in the study. the length of the circuit's test sequence is significantly reduced using the fault collapsing method. a discussion of how to model some of the features of lsi circuits that are not included in the circuit c is presented. a comparison between the length of the test generated by our method and other manually-generated ones is also presented. magdy s. abadir hassan k. reghbati logic clause analysis for delay optimization berhard rohfleisch bernd wurth kurt antreich a fault list reduction approach for efficient bridge fault diagnosis jue wu gary s. greenstein elizabeth m. rudnick timing verification by formal signal interaction modeling in a multi-level timing simulator a new multi-level macromodeling technique for timing simulation has been developed. this technique is based upon the modeling of the behavior of subcircuits under single input changes. the possible interactions between multiple input changes determine the range of validity of the models. a formal method for developing the model validity conditions is presented. this work establishes a bridge between timing analysis by using single input change models, and timing simulation which correctly models signal interactions. the availability of a formal criterion for the validity of the models allows the dynamic identification of the parts of the circuit that require more accurate models. as a result, the cost advantage of high level models can be fully exploited while still allowing critical interactions to be simulated with high accuracy. j. benkoski a. j. strojwas improving functional density through run-time constant propagation michael j. wirthlin brad l.
hutchings an exact solution to the minimum size test pattern problem this article addresses the problem of test pattern generation for single stuck-at faults in combinational circuits, under the additional constraint that the number of specified primary input assignments is minimized. this problem has different applications in testing, including the identification of "don't care" conditions to be used in the synthesis of built-in self-test (bist) logic. the proposed solution is based on an integer linear programming (ilp) formulation which builds on an existing propositional satisfiability (sat) model for test pattern generation. the resulting ilp formulation is linear on the size of the original sat model for test generation, which is linear on the size of the circuit. nevertheless, the resulting ilp instances represent complex optimization problems, that require dedicated ilp algorithms. preliminary results on benchmark circuits validate the practical applicability of the test pattern minimization model and associated ilp algorithm. paulo f. flores horácio c. neto joão p. marques-silva generating highly-routable sparse crossbars for plds a method for evaluating and constructing sparse crossbars which are both area efficient and highly routable is presented. the evaluation method uses a network flow algorithm to accurately compute the percentage of random test vectors that can be routed. the construction method attempts to maximize the spread of the switch locations, such that any given subset of input wires can connect to as many output wires as possible. based on hall's theorem, we argue that this increases the likelihood of routing. the hardest test vectors to route are those which attempt to use all of the crossbar outputs. results in this paper show that area-efficient sparse crossbars can be constructed by providing more outputs than required and a sufficient number of switches. in a few specific case studies, it is shown that sparse crossbars with about 90% fewer switches than a full crossbar can be constructed, and these crossbars are capable of routing over 95% of randomly chosen routing vectors. in one case, a new switch matrix which can replace the one in the altera flex8000 family is shown. this new switch matrix uses approximately 14% more transistors, yet can increase the routability of the most difficult test vectors from 1% to over 96%. guy lemieux paul leventis david lewis low power techniques and design tradeoffs in adaptive fir filtering for prml read channels in this paper, we describe area and power reduction techniques for a low- latency adaptive finite-impulse response filter for magnetic recording read channel applications. various techniques are used to reduce area and power dissipation while speed remains as the main performance criterion for the target application. a parallel transposed direct form architecture operates on real-time input data samples and employs a fast, low- area multiplier based on selection of radix-8 pre-multiplied coefficients in conjunction with one-hot encoded bus leading to a very compact layout and reduced power dissipation. area, speed and power comparisons with other low- power implementation options are also shown. the proposed filter has been fabricated using a 0.18 l-effective cmos technology and operates at 550 msamples/s. khurram muhammad robert b. staszewski poras t. 
balsara formal verification of behavioral vhdl specifications: a case study felix nicoli laurence pierre towards adaptable hierarchical placement for fpgas florent de dinechin wayne luk steve mckeever experiences and issues in vhdl-based synthesis stephen e. lim david c. hendry ping f. yeung coupled noise estimation for distributed rc interconnect model janet m. wang qingjian yu ernest s. kuh fault tolerant and bist design of a fifo cell p. prinetto f. corno m. sonza reorda gated-vdd: a circuit technique to reduce leakage in deep-submicron cache memories deep-submicron cmos designs have resulted in large leakage energy dissipation in microprocessors. while sram cells in on-chip cache memories always contribute to this leakage, there is a large variability in active cell usage both within and across applications. this paper explores an integrated architectural and circuit-level approach to reducing leakage energy dissipation in instruction caches. we propose gated-vdd, a circuit-level technique to gate the supply voltage and reduce leakage in unused sram cells. our results indicate that gated-vdd together with a novel resizable cache architecture reduces energy-delay by 62% with minimal impact on performance. michael powell se-hyun yang babak falsafi kaushik roy t. n. vijaykumar an algorithm for face-constrained encoding of symbols using minimum code length manuel martinez maría j. avedillo jose m. quintana jose l. huertas energy recovery for the design of high-speed, low-power static rams n. tzartzanis w. athas an asip design methodology for embedded systems kayhan kucukcakar 2^n-way jump microinstruction hardware and an effective instruction binding method a scheme is developed by which multiway microprogram jumps may be made to any of 2^n next possible microinstructions as a function of any selection of n > 1 logically independent tests. an efficient method of binding microinstructions to memory locations allows this to be done at very low cost, both in terms of speed and hardware. independent simultaneous tests are a necessity if horizontally microcodable machines are to continue to get wider, since algorithms presumably have fixed operations/tests ratios. this scheme will give parallelizers for such machines maximum flexibility in rearranging flow control. joseph a. fisher a discussion on non-blocking/lockup-free caches in our previous contribution in the june 1996 issue of can, we presented a discussion on lockup-free caches. the article raised a couple of issues with the audience, which we will attempt to address in this addendum. samson belayneh david r. kaeli verification of hardware descriptions by retargetable code generation this paper proposes a new method for hardware verification. the basic idea is the application of a retargetable compiler as verification tool. a retargetable compiler is able to compile programs into the machine code of a specified hardware (target). if the program is the complete behavioural specification of the target, the compiler can be used to verify that a properly programmed structure implements the behaviour. methods, algorithms and applications of an existing retargetable compiler are described. l. nowak p. marwedel yield improvement and repair trade-off for large embedded memories yervant zorian multi-level logic optimization by implication analysis this paper proposes a new approach to multi-level logic optimization based on atpg (automatic test pattern generation).
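the 2^n-way jump abstract above binds microinstructions to memory locations so that a multiway jump can be taken cheaply. one common realization, sketched below, aligns the 2^n candidate targets on a 2^n boundary and or-s the n condition bits into the low-order address bits; this illustrates the general idea only and is not necessarily the paper's exact binding method.

    # illustrative address formation for a 2**n-way microprogram jump:
    # the n condition bits are or-ed into the low-order bits of a base
    # address that is aligned on a 2**n boundary.  a common realization,
    # not necessarily the binding method of the paper above.

    def multiway_target(base: int, conditions: list[int]) -> int:
        n = len(conditions)
        assert base % (1 << n) == 0, "base must be aligned on a 2**n boundary"
        offset = 0
        for i, c in enumerate(conditions):
            offset |= (c & 1) << i
        return base | offset

    # a 4-way jump (n = 2) dispatching on, say, a zero flag and a carry flag:
    assert multiway_target(0x40, [0, 0]) == 0x40
    assert multiway_target(0x40, [1, 0]) == 0x41
    assert multiway_target(0x40, [0, 1]) == 0x42
    assert multiway_target(0x40, [1, 1]) == 0x43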
previous atpg-based methods for logic minimization suffered from the limitation that they were quite restricted in the set of possible circuit transformations. we show that the atpg-based method presented here allows (in principle) the transformation of a given combinational network c into an arbitrary, structurally different but functionally equivalent combinational network c'. furthermore, powerful heuristics are presented in order to decide what network manipulations are promising for minimizing the circuit. by identifying indirect implications between signals in the circuit, transformations can be derived which are "good" candidates for the minimization of the circuit. in particular, it is shown that recursive learning can derive "good" boolean divisors justifying the effort to attempt a boolean division. for 9 out of 10 iscas-85 benchmark circuits our tool hannibal obtains smaller circuits than the well-known synthesis system sis. wolfgang kunz prem r. menon efficient algorithms for acceptable design exploration in this paper, we present an efficient approach to find effective module selections under resource, latency, and power constraints. the framework contains two phases: choosing a resource configuration, and determining a module binding for each resource. the first phase applies _inclusion scheduling_ to estimate generic resources required. in the second phase, _module utility_ measurement is used to determine module selections. a heuristic which perturbs module utility values until they lead to superior selections according to design objectives are also proposed. the experiments on well-known benchmarks show the effectiveness of the approach when comparing the obtained module selections with the results from enumerating all module selections, as well as mssr and psga. chantana chantrapornchai edwin h.-m. sha xiaobo hu robust delay-fault test generation and synthesis for testability under a standard scan design methodology kwang-ting cheng srinivas devadas kurt keutzer ota amplifiers design on digital sea-of-transistors array jung hyun choi sergio bampi controller re-specification to minimize switching activity in controller/data path circuits a. raghunathan s. dey n. jha k. wakabayashi the case for sram main memory the growing cpu-memory gap is resulting in increasingly large cache sizes. as cache sizes increase, associativity becomes less of a win. at the same time, since costs of going to dram increase, it becomes more valuable to be able to pin critical data in the cache---a problem if a cache is direct-mapped or has a low degree of associativity. something else which is a problem for caches of low associativity is reducing misses by using a better replacement policy. this paper proposes that l2 cache sizes are now starting to reach the point where it makes more sense to manage them as the main memory of the computer, and relegate the traditional dram main memory to the role of a paging device. the paper details advantages of an sram main memory, as well as problems that need to be solved, in managing an extra level of virtual to physical translation. philip machanick boolean matching for complex plbs in lut-based fpgas with application to architecture evaluation in this paper, we developed boolean matching techniques for complex programmable logic blocks (plbs) in lut-based fpgas. a complex plb can not only be used as a k-input lut, but also can implement some wide functions of more than k variables. 
we apply previous functional decomposition methods and develop new ones to match wide functions to plbs. we can determine exactly whether a given wide function can be implemented with a xc4000 clb or the other three plb architectures (including the xc5200 clb). we evaluate functional capabilities of the four plb architectures on implementing wide functions in mcnc benchmarks. experiments show that the xc4000 clb can be used to implement up to 98% of 6-cuts and 88% of 7-cuts in mcnc benchmarks, while two of the other three plb architectures have a smaller cost in terms of logic capability per silicon area. our results are useful for designing future logic unit architectures in lut based fpgas. jason cong yean-yow hwang attacking the semantic gap between application programming languages and configurable hardware it is difficult to exploit the massive, fine-grained parallelism of configurable hardware with a conventional application programming language such as c, pascal or java. the difficulty arises from the mismatch between the synchronous, concurrent processing capability of the hardware and the expressiveness of the language - the so-called "semantic gap." we attack this problem by using a programming model matched to the hardware's capabilities that can be implemented in any (unmodified) object-oriented language, and building a corresponding compiler. the result is application code that can be developed, compiled, debugged and executed on a personal computer using conventional tools (such as visual c++ or visual cafe), and then recompiled without modification to the configurable hardware target. a straightforward c++ implementation of the serpent encryption algorithm compiled with our compiler onto a virtex xcv1000 fpga yielded an implementation that was smaller (3200 vs. 4502 clbs) and faster (77 mhz vs. 38 mhz) than an independent vhdl implementation with the same degree of pipelining. a tuned version of the source yielded an implementation that ran at 95 mhz. greg snider barry shackleford richard j. carter the vhsic hardware description language (vhdl) program the emergence of the importance of vlsi design automation and the vlsi custom/semicustom industry has spurred a wide-spread interest in hardware description languages. starting in 1981, the vhsic program has acted as a catalyst to develop a standard hardware description language that could beneficially serve the government, industry, and academic communities. this panel will discuss from different viewpoints the issues associated with vlsi interoperability standards and the potential role of the vhsic hardware description language. al dewey selecting partial scan flip-flops for circuit partitioning this paper presents a new method of selecting scan flip-flops (ffs) in partial scan designs of sequential circuits. scan ffs are chosen so that the whole circuit can be partitioned into many small subcircuits which can be dealt with separately by a test pattern generator. this permits easy automatic test pattern generation for arbitrarily large sequential circuits. algorithms of selecting scan ffs to allow such partitioning and of scheduling tests for subcircuits are given. experimental results show that the proposed method makes it possible to generate test patterns for extra large sequential circuits which previous approaches cannot deal with. toshinobu ono multiple instruction issue and single-chip processors in this paper we evaluate the performance of single-chip processors with multiple functional units.
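the partial-scan abstract above selects scan flip-flops so that the circuit partitions into small subcircuits for test generation. a common way to view scan selection is as breaking cycles in the flip-flop dependency graph; the greedy heuristic sketched below is a generic illustration of that view and is not the selection or scheduling algorithm of the paper.

    # generic greedy heuristic: repeatedly scan the flip-flop that lies on a
    # detected cycle and has the highest degree, until the dependency graph
    # is acyclic.  illustration only, not the paper's algorithm.

    def find_cycle(graph):
        """return one cycle as a list of nodes, or None if acyclic."""
        WHITE, GREY, BLACK = 0, 1, 2
        color = {v: WHITE for v in graph}
        stack = []

        def dfs(v):
            color[v] = GREY
            stack.append(v)
            for w in graph[v]:
                if color[w] == GREY:
                    return stack[stack.index(w):]      # cycle found
                if color[w] == WHITE:
                    cyc = dfs(w)
                    if cyc:
                        return cyc
            stack.pop()
            color[v] = BLACK
            return None

        for v in graph:
            if color[v] == WHITE:
                cyc = dfs(v)
                if cyc:
                    return cyc
        return None

    def select_scan_ffs(graph):
        g = {v: set(s) for v, s in graph.items()}
        scanned = set()
        while True:
            cycle = find_cycle(g)
            if cycle is None:
                return scanned
            # scan the ff on the cycle with the largest fanin + fanout
            victim = max(cycle, key=lambda v: len(g[v]) +
                         sum(v in s for s in g.values()))
            scanned.add(victim)
            g[victim] = set()                  # drop outgoing edges
            for s in g.values():
                s.discard(victim)              # and incoming edges

    # three ffs in a loop plus one feed-forward ff (hypothetical names):
    deps = {"f1": {"f2"}, "f2": {"f3"}, "f3": {"f1"}, "f4": {"f1"}}
    print(select_scan_ffs(deps))               # e.g. {'f1'}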
as a basis for our studies we use a processor model that is very similar to many of today's single-chip processors. using this basic machine model, we investigate the performance that can be achieved if some limited form of multiple instruction issue is supported. for these investigations, we use 4 variants of the basic machine that represent different memory access times and branch execution times. in particular, we evaluate issuing 2 instructions per cycle and find that by restricting multiple instruction issue to load or branch instructions much of the same performance gains can be achieved as in the unrestricted form. a. r. pleszkun g. s. sohi efficient partitioning and analysis of digital cmos-circuits u. hubner h. t. vierhaus integrating symbolic techniques in atpg-based sequential logic optimization enrique san millán luis entrena jose a. espejo silvia chiusano fulvio corno a comprehensive approach to logic synthesis and physical design for two-dimensional logic arrays andisheh sarabi ning song malgorzata chrzanowska-jeske marek a. perkowski algorithms for current monitor based diagnosis of bridging and leakage faults s. chakravarty m. liu event propagation conditions in circuit delay computation accurate and efficient computation of delays is a central problem in computer-aided design of complex vlsi circuits. delays are determined by events (signal transitions) propagated from the inputs of a circuit to its outputs, so precise characterization of event propagation is required for accurate delay computation. although many different propagation conditions (pcs) have been proposed for delay computation, their properties and relationships have been far from clear. we present a systematic analysis of delay computation based on a series of waveform models that capture signal behavior rigorously at different levels of detail. the most general model, called the exact or w0 model, specifies each event occurring in a circuit signal. a novel method is presented that generates approximate waveforms by progressively eliminating signal values from the exact model. for each waveform model, we derive the pcs that correctly capture the requirements under which an event propagates along a path. the waveform models and their pcs are shown to form a well-defined hierarchy, which provides a means to trade accuracy for computational effort. the relationships among the derived pcs and existing ones are analyzed in depth. it is proven that though many pcs, such as the popular floating mode condition, produce a correct upper bound on the circuit delay, they can fail to recognize event propagation in some instances. this analysis further enables us to derive new and useful pcs. we describe such a pc, called safe static. experimental results demonstrate that safe static provides an excellent accuracy/efficiency tradeoff. hakan yalcin john p. hayes virtual grid symbolic layout n. weste a coordinated approach to partitioning and test pattern generation for pseudoexhaustive testing in this work, we propose a circuit partitioning and test pattern generation algorithm for built-in pseudoexhaustive self-testing of vlsi circuits. the circuit partitioning process is to partition a given circuit into a set of subcircuits such that pseudoexhaustive self-testing will be possible, while the test pattern generation process is to generate the pseudoexhaustive test patterns for each subcircuit using a linear feedback shift register (lfsr).
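the pseudoexhaustive-testing abstract above partitions a circuit so that each subcircuit can be tested exhaustively over its own inputs. the sketch below enumerates such patterns directly, only to show why the pattern count drops from 2^N for the whole circuit to the sum of 2^k over the subcircuits; the paper itself generates the patterns with an lfsr, which is not modeled here, and the partition used is invented.

    # pseudoexhaustive patterns: every subcircuit sees all 2**k combinations
    # of its own k inputs, instead of all 2**N combinations of the whole
    # circuit.  direct enumeration shown for clarity; the paper above uses
    # an lfsr to generate these patterns.

    from itertools import product

    def pseudoexhaustive_patterns(partitions):
        """partitions: list of input-name lists, one per subcircuit."""
        for cone in partitions:
            for values in product((0, 1), repeat=len(cone)):
                yield dict(zip(cone, values))

    subcircuits = [["a", "b", "c"], ["c", "d"]]        # hypothetical partition
    patterns = list(pseudoexhaustive_patterns(subcircuits))
    print(len(patterns))       # 2**3 + 2**2 = 12 patterns instead of 2**4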
both problems are considered and solved in the same phase and lead to an efficient and well-coordinated solution. experiments using computer simulation have been conducted. the results demonstrate that the proposed method is very good, especially for circuits that are highly locally connected. w.-b. jone c. a. papachristou exhaustive simulation need not require an exponential number of tests daniel brand testing arithmetic coprocessor in system environment j. sosnowski t. bech is reconfigurable computing commercially viable (panel)? herman schmit configuration cloning: exploiting regularity in dynamic dsp architectures s. r. park w. burleson evaluating mmx technology using dsp and multimedia applications ravi bhargava lizy k. john brian l. evans ramesh radhakrishnan a unified framework for race analysis of asynchronous networks a unified framework is developed for the study of asynchronous circuits of both gate and mos type. a basic network model consisting of a directed graph and a set of vertex excitation functions is introduced. a race analysis model, using three values (0, 1, and x), is developed for studying state transitions in the network. it is shown that the results obtained using this model are equivalent to those using ternary simulation. it is also proved that the set of state variables can be reduced to a minimum size set of feedback variables, and the analysis still yields both the correct state transitions and output hazard information. finally, it is shown how the general results above are applicable to both gate and mos circuits. j. a. brzozowski c.-j. seger on-line testing and diagnosis of bus lines with respect to intermediate voltage values (poster paper) cecilia metra michele favalli bruno riccó a generic software system for drift reliability optimization of vlsi circuits min huang m. a. styblinski interconnect yield model for manufacturability prediction in synthesis of standard cell based designs hans t. heineken wojciech maly circuit recognition and verification based on layout information the mathematical and technical background information for our highly efficient procedure of circuit recognition and verification from layout information is presented. complete verification and extremely short computing times are the main goals. this procedure can be performed for bipolar as well as for mos technologies and is part of the whole layout-control system locate. i. ablasser u. jäger on testing wave pipelined circuits jui-ching shyur hung-pin chen tai-ming parng constrained register allocation in bus architectures elof frank salil raje majid sarrafzadeh computing with faulty shared objects yehuda afek david s. greenberg michael merritt gadi taubenfeld register windows david chase striping in large tape libraries a. l. drapeau r. h. katz automatic diagnosis may replace simulation for correcting simple design errors a. wahba d. borrione a bist scheme for on-chip adc and dac testing jiun-lang huang chee-kian ong kwang-ting cheng a methodology and algorithms for post-placement delay optimization lalgudi n. kannan peter r. suaris hong-gee fang improving the accuracy of circuit activity measurement bhanu kapoor timing models for high-level synthesis viraphol chaiyakul allen c.-h. wu daniel d. gajski combined spectral techniques for boolean matching e. schubert w. rosenstiel testing and debugging custom integrated circuits edward h. frank robert f. 
sproull power analysis and low-power scheduling techniques for embedded dsp software mike tien-chien lee vivek tiwari sharad malik masahiro fujita using bdds to design ulms for fpgas zeljko zilic zvonko g. vranesic switching activity analysis using boolean approximation method taku uchino fumihiro minami takashi mitsuhashi nobuyuki goto macromodeling of digital mos vlsi circuits this paper presents a method for modeling mos combinational logic gates. analyses are given for power consumption, output response delay, output response waveshape, and input capacitance. the models are both computationally efficient and accurate, typically lying within 5% of spice estimates. they are pertinent to simulation and optimization applications. a general macromodeling software support package is described. a related paper [1] discusses a circuit optimizer based on these models. mark d. matson a new approach for factorizing fsm's exact factors as defined in [2], if present in an fsm can result in most effective way of factorization. however, it has been found that most of the fsm's are not exact factorizable. in this paper, we have suggested a method of making fsm's exact factorizable by minor changes in the next state space while maintaining the functionality of the fsm. we have also developed a new combined state assignment algorithm for state encoding of factored and factoring fsm's. experimental results on mcnc benchmark examples, after running misii on the original fsm, factored fsm and factoring fsm have shown a reduction of 40% in the worst case signal delay through the circuit in a multilevel implementation. the total number of literals, on an average is the same after factorization as that obtained by running misii on the original fsm. for two-level implementation, our method has been able to factorize benchmark fsm's with a 14% average increase in overall areas, while the areas of combinational components of factored and factoring fsm's have been found to be significantly less than the area of the combinational component of the original fsm. c. rama mohan p. p. chakrabarti regular layout generation of logically optimized datapaths r. x. t. nijssen c. a. j. van eijk memory segmentation to exploit sleep mode operation amir h. farrahi gustavo e. tellez majid sarrafzadeh completion time multiple branch prediction for enhancing trace cache performance the need for multiple branch prediction is inherent to wide instruction fetching. this paper presents a completion time multiple branch predictor called the tree-based multiple branch predictor (tmp) that builds on previous single branch prediction techniques. it employs a tree structure of branch predictors, or tree-node predictors, and achieves accurate multiple branch prediction by leveraging the high accuracies of the individual branch predictors. a highly-efficient tmp design uses the 2-bit saturating counters for the tree-node predictors. to achieve higher prediction rate, the tmp employs two-level schemes for the tree-node predictors resulting in a three- level tmp design. placing the tmp at completion time reduces the critical latency in the front-end of the pipeline; the resultant longer update latency does not significantly impact the overall performance. in this paper the tmp is applied to a trace cache design and shown to be very effective in increasing its performance. results: a realistic-size tmp (72kb) can predict 1, 2, 3, and 4 consecutive blocks with compounded prediction accuracies of 96%, 93%, 87%, and 82%, respectively. 
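the tree-based multiple branch predictor described above builds its tree-node predictors from 2-bit saturating counters. a minimal model of one such counter is sketched below; the tree organization, the two-level variants, and the completion-time update policy of the paper are not modeled here.

    # minimal 2-bit saturating counter, the building block the tmp abstract
    # above uses for its tree-node predictors.  states 0..1 predict
    # not-taken, states 2..3 predict taken.

    class TwoBitCounter:
        def __init__(self, state=1):
            self.state = state                 # start weakly not-taken

        def predict(self) -> bool:
            return self.state >= 2             # True means "predict taken"

        def update(self, taken: bool) -> None:
            if taken:
                self.state = min(self.state + 1, 3)
            else:
                self.state = max(self.state - 1, 0)

    # predict, then train on the resolved outcome of each branch:
    ctr = TwoBitCounter()
    for outcome in (True, True, False, True):
        correct = ctr.predict() == outcome
        ctr.update(outcome)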
the block-based trace cache with this tmp achieves 4.75 ipc for specint95 on an idealized machine, which is a 20% performance improvement over the original design [1]. this improved performance is 8% above that of a conventional i-cache design with perfect single branch prediction. ryan rakvic bryan black john paul shen optimization of hierarchical designs using partitioning and resynthesis this paper explores the influence of optimization along the boundary between hierarchically described components. a novel technique called repartitioning combines partitioning and sequential resynthesis of the design under various quality measures. it is applied to various digital circuits which consist of a controller and a datapath. the outcome of this effort is a versatile, parametrizable resynthesis tool which preserves this hierarchy. due to the cost measures, an average improvement ranging between 5% and 15% was obtained. heinz-josef eikerling ralf hunstock raul camposano 12-b 125 msps cmos d/a designed for spectral performance d. mercer l. singer fpga-targeted development system for embedded applications v. sklyarov j. fonseca r. monteiro a. oliveira a. melo n. lau k. kondratjuk i. skliarova p. neves a. ferrari global microcode compaction under timing constraints existing global microcode compaction algorithms have all been based on the assumption that the parallelism exploitation is constrained only by data dependency and resource limitation. however, the timing constraint also has great impact upon microcode compaction, thus it is highly necessary to study global microcode compaction under timing constraints. this paper conducts an analysis on the global timing problem, modifies the mo motion rules and presents an algorithm based on trace scheduling for global microcode compaction under timing constraints. b. su j. wang j. xia error-correcting codes for semiconductor memories c. l. chen metrology for analog module testing using analog testability bus chauchin su yue-tsang chen shyh-jye jou yuan-tzu ting midas - microprogram description and analysis system this paper did not arrive in time to be included in the proceedings. igor hansen the effect of page allocation on caches william l. lynch brian k. bray m. j. flynn advantages of the xc6000 architecture for embedded system design (abstract) karlheinz weiß ronny kistner arno kunzmann wolfgang rosenstiel an innovative, segmented high performance fpga family with variable-grain architecture and wide-gating functions om agrawal herman chang brad sharpe-geisler nick schmitz bai nguyen jack wong giap tran fabiano fontana bill harding table lookup techniques for fast and flexible digital logic simulation a well-known fundamental computer technique consists of the "interpretation" of naturally available or artificially formed data items as addresses to perform table-lookups. although well-known, this technique is still not exploited to its fullest potential. the power and extent of this technique as applied to logic simulation is demonstrated. ernst ulrich on convex formulation of the floorplan area minimization problem it is shown that the floorplan area minimization problem can be formulated as a convex programming problem with the numbers of variables and constraints significantly less than those previously published. temo chen michael k. h. fan high-performance carry chains for fpgas carry chains are an important consideration for most computations, including fpgas.
current fpgas dedicate a portion of their logic to support these demands via a simple ripple carry scheme. in this paper we demonstrate how more advanced carry constructs can be embedded into fpgas, providing significantly higher performance carry computations. we redesign the standard ripple carry chain to reduce the number of logic levels in each cell. we also develop entirely new carry structures based on high performance adders such as carry select, carry lookahead, and brent-kung. overall, these optimizations achieve a speedup in carry performance of 3.8 times over current architectures. scott hauck matthew m. hosler thomas w. fry dynamical identification of critical paths for iterative gate sizing since only sensitizable paths contribute to the delay of a circuit, false paths must be excluded in optimizing the delay of the circuit. just identifying false paths in the first place is not sufficient since during the iterative optimization process, false paths may become sensitizable, and sensitizable paths false. in this paper, we examine cases in which false paths become sensitizable and sensitizable paths become false. based on these conditions, we adopt a so-called loose sensitization criterion which is used to develop an algorithm for dynamic identification of sensitizable paths. by combining gate sizing and dynamic identification of sensitizable paths, an efficient performance optimization tool is developed. results on a set of circuits from the iscas benchmark set demonstrate that our tool is indeed very effective in reducing circuit delay with fewer gates sized compared with other methods. how-rern lin tingting hwang performance estimation of multiple-cache ip-based systems: case study of an interdependency problem and application of an extended shared memory model in estimating the performance of multiple-cache ip-based systems, we face a problem of interdependency between cache configuration and system behavior. in this paper, we investigate the effects of the interdependency on system performance in a case study. we present a method that gives fast and accurate estimation of system performance by simulating ip cores at the behavioral level with annotated delays and by simulating the multiple-cache communication architecture with an extended shared memory model. experiments show the effectiveness of the proposed method. sungjoo yoo kyoungseok rha youngchul cho jinyong jung kiyoung choi revisiting floorplan representations floorplan representations are a fundamental issue in designing floorplan algorithms. in this paper, we first derive the exact number of configurations of mosaic floorplans and slicing floorplans. we then present two non-redundant representations: a twin binary tree structure for mosaic floorplans and a slicing ordered tree for slicing floorplans. finally, the relations between the state-of-the-art floorplan representations are discussed and their efficiency is explored. bo yao hongyu chen chung-kuan cheng ronald graham static timing analysis for self resetting circuits vinod narayanan barbara a. chappell bruce m. fleischer fsmd functional partitioning for low power enoch hwang frank vahid yu-chin hsu data dependency graph bracing the sunburst compiler refined at utah state university employs a powerful mechanism for management of data anti-dependencies in data dependency graphs, ddg's: the ddg bracer. the term bracing is used to mean the fastening of two or more parts together.
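the carry-chain abstract above embeds carry-select, carry-lookahead and brent-kung structures into fpgas. the sketch below is a plain bit-level carry-select addition, included only to make the carry-select idea concrete; it says nothing about the authors' fpga cell design, and the block size of 4 is an arbitrary choice.

    # carry-select addition: each block is added twice, once assuming
    # carry-in 0 and once assuming carry-in 1, and the real carry selects
    # between the two precomputed results.

    def ripple(a_bits, b_bits, cin):
        s, c = [], cin
        for a, b in zip(a_bits, b_bits):
            s.append(a ^ b ^ c)
            c = (a & b) | (c & (a ^ b))
        return s, c

    def carry_select_add(a_bits, b_bits, block=4):
        out, carry = [], 0
        for i in range(0, len(a_bits), block):
            a, b = a_bits[i:i + block], b_bits[i:i + block]
            s0, c0 = ripple(a, b, 0)       # precomputed for carry-in 0
            s1, c1 = ripple(a, b, 1)       # precomputed for carry-in 1
            out += s1 if carry else s0     # multiplexer controlled by carry
            carry = c1 if carry else c0
        return out, carry

    def to_bits(x, n):                     # lsb first
        return [(x >> i) & 1 for i in range(n)]

    a, b = 0xBEEF, 0x1234
    s, cout = carry_select_add(to_bits(a, 16), to_bits(b, 16))
    assert sum(bit << i for i, bit in enumerate(s)) + (cout << 16) == a + b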
there are two major goals in bracing: 1) semantic correctness, and 2) creation of an optimal ddg. bracing provides necessary joining of code fragments, produced by a divide and conquer code generation algorithm, while yielding multiple code sequences. since no anti-dependency arcs are present, the input ddg's are said to be in normal form. because anti-dependency arcs occur only when a resource must be reused, a ddg in normal form represents infinite resources. the output ddg is a merging of the two input ddg's such that data dependency arcs between the two ddg's are inserted and data anti-dependency arcs are added to sequentialize the use of common resources. vegdahl [veg82] was one of the first to recognize the importance of live track manipulation. a live track is an ordered pair: the first component is the microoperation node (mo) in which a resource is born, and the second component is the set of nodes in which the resource dies. v. h. allan state reduction using reversible rules c. norris ip david l. dill report on vlsi design '95: eighth international conference on vlsi design n. ranganathan sharad seth fast fault simulation in combinational circuits: an efficient data structure, dynamic dominators and refined check-up bernd becker ralf hahn rolf krieger fpga circuit optimization based on block integration (abstract) takenori kouda yahiko kambayashi programmable memory blocks supporting content-addressable memory the embedded system block (esb) of the apex e programmable logic device family from altera corporation includes the capability of implementing content addressable memory (cam) as well as product term macrocells, rom, and dual port ram. in cam mode each esb can implement a 32 word cam with 32 bits per word. in product term mode, each esb has 16 macrocells built out of 32 product terms with 32 literal inputs. the ability to reconfigure memory blocks in this way represents a new and innovative use of resources in a programmable logic device, requiring creative solutions in both the hardware and software domains. the architecture and features of this embedded system block are described. frank heile andrew leaver kerry veenstra a vhdl error simulator for functional test generation alessandro fin franco fummi a comprehensive fault macromodel for opamps in this paper, a comprehensive macromodel for transistor level faults in an operational amplifier is developed. with the observation that faulty behavior at output may result from interfacing error in addition to the faulty component, parameters associated with input and output characteristics are incorporated. test generation and fault classification are addressed for stand-alone opamps. a high fault coverage is achieved by a proposed testing strategy. transistor level short/bridging faults are analyzed and classified into catastrophic faults and parametric faults. based on the macromodels for parametric faults, fault simulation is performed for an active filter. we found that many parametric faults in the active filter cannot be detected by traditional functional testing. a dft scheme along with a current testing strategy to improve fault coverage is proposed. chen-yang pan kwang-ting cheng sandeep gupta flexible processors: a promising application-specific processor design approach a new approach to application specific processor design is presented in this paper. existing application specific processors are either based on existing general purpose processors or custom designed special purpose processors.
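the apex esb abstract above describes a 32-word by 32-bit cam mode. the behavioral sketch below shows what such a cam lookup computes (all addresses whose stored word matches the key); it is a software model only, with sizes borrowed from the abstract, and implies nothing about the esb circuit itself.

    # behavioral sketch of a small content-addressable memory: a lookup
    # compares the search key against every stored word and returns the
    # matching addresses.  sizes follow the 32x32 cam mode described above.

    class SmallCAM:
        WORDS, WIDTH = 32, 32

        def __init__(self):
            self.store = [None] * self.WORDS

        def write(self, addr: int, word: int) -> None:
            self.store[addr] = word & ((1 << self.WIDTH) - 1)

        def match(self, key: int):
            """return all addresses whose stored word equals the key."""
            return [a for a, w in enumerate(self.store) if w == key]

    cam = SmallCAM()
    cam.write(3, 0xDEADBEEF)
    cam.write(17, 0xDEADBEEF)
    cam.write(5, 0x0000CAFE)
    assert cam.match(0xDEADBEEF) == [3, 17]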
the availability of a new technology, the xilinx logic cell array, presents the opportunity for a new alternative. the flexible processor cell is a prototype of an extremely reconfigurable application specific processor. flexible processors can potentially provide the performance advantages of special purpose processors as well as the cost advantages of general purpose processors. the flexible processor concept opens many potential areas for future research in processor architecture and implementation. this paper presents the design, implementation, and preliminary performance evaluation of an experimental flexible processor. a. wolfe p. shen cost-effective generation of minimal test sets for stuck-at faults in combinational logic circuits seiji kajihara irith pomeranz kozo kinoshita sudhakar m. reddy low power digital design in fpgas (poster abstract) andres d. garc jean luc danger wayne burleson hierarchical optimization of asynchronous circuits bill lin gjalt de jong tilman kolks mossim: a switch-level simulator for mos lsi r. e. bryant closed form solutions to simultaneous buffer insertion/sizing and wire sizing in this paper, we consider the delay minimization problem of an interconnect wire by simultaneously considering buffer insertion, buffer sizing and wire sizing. we consider three cases, namely using no buffer (i.e., wire sizing alone), using a given number of buffers, and using the optimal number of buffers. we provide elegant closed form optimal solutions for all three problems. these closed form solutions are useful in early stages of the vlsi design flow such as logic synthesis and floorplanning. chris chu d. f. wong surveyors' forum - high-performance secondary memory alexander thomasian floating body effects in partially-depleted soi cmos circuits p. lu j. ji c. chuang l. wagner c. hsieh j. kuang l. hsu m. pelella s. chu c. anderson using mathematical logic and formal methods to write correct microcode david shepherd exact memory size estimation for array computations without loop unrolling ying zhao sharad malik the use of static column ram as a memory hierarchy the static column ram devices recently introduced offer the potential for implementing a direct-mapped cache on-chip with only a small increase in complexity over that needed for a conventional dynamic ram memory system. trace-driven simulation shows that such a cache can only be marginally effective if used in the obvious way. however it can be effective in satisfying the requests from a processor containing an on-chip cache. the scram cache is more effective if the processor cache handles both instructions and data. james r. goodman men-chow chiang analysis of glitch power dissipation in cmos ics m. favalli l. benini some aspects of high-level microprogramming subrata dasgupta freezeframe: compact test generation using a frozen clock strategy yanti santoso matthew merten elizabeth m. rudnick miron abramovici technology mapping for k/m-macrocell based fpgas in this paper, we study the technology mapping problem for a novel fpga architecture that is based on k-input single-output pla-like cells, or k/m-macrocells. each cell in this architecture can implement a single output function of up to k inputs and up to m product terms. we develop a very efficient technology mapping algorithm, k_m_flow, for this new type of architecture. the experimental results show that our algorithm can achieve depth-optimality in practically all cases.
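the buffer insertion/sizing abstract above reports closed-form optimal solutions but does not state them. for context only, the classical elmore-model expressions for uniformly inserting k identical buffers of size h into a line of length l (often attributed to bakoglu) are reproduced below; here r and c are the wire resistance and capacitance per unit length, and r_b and c_b are the output resistance and input capacitance of a unit-size buffer. the symbols are mine, and these textbook formulas are not claimed to be the paper's own results.

    \[
      k_{\text{opt}} \;=\; \sqrt{\frac{r\,c\,L^{2}}{2\,R_{b}\,C_{b}}},
      \qquad
      h_{\text{opt}} \;=\; \sqrt{\frac{R_{b}\,c}{r\,C_{b}}}
    \]

in words: more buffers pay off as the wire's distributed rc grows relative to the buffer's intrinsic rc, and each buffer is sized to balance its output resistance against the wire load it drives.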
furthermore, it is shown that the k/m-macrocell based fpgas are practically equivalent to the traditional k-lut based fpgas with only a relatively small number of product terms (m ≤ k + 3). we also investigate the total area and delay of k/m-macrocell based fpgas on various benchmarks to compare them with commonly used 4-lut based fpgas. the experimental results show that k/m-macrocell based fpgas can outperform 4-lut based fpgas in terms of both delay and area after placement and routing by vpr. jason cong hui huang xin yuan automated layout generation using gate matrix approach this paper presents a software system alsugma for automated gate matrix layout generation. its structured net-list and realization matrix models, which are different from the previous interval graph approach, are introduced. algorithms to minimize and realize the gate matrix layout are also presented with examples. empirical results showed good performance in terms of both speed and layout quality. a folding technique for such a layout style is also introduced. y.-c. chang s.-c. chang l.-h. hsu enhanced controllability for iddq test sets using partial scan tapan j. chakraborty sudipta bhawmik robert bencivenga c. j. lin formally verifying a microprocessor using a simulation methodology derek l. beatty randal e. bryant functional testing of semiconductor random access memories magdy s. abadir hassan k. reghbati the k2 parallel processor: architecture and hardware implementation k2 is a distributed-memory parallel processor designed to support a multi-user, multi-tasking, time-sharing operating system and an automatically parallelizing fortran compiler. this paper presents the architecture and the hardware implementation of k2, and focuses on the architectural features required by the operating system and the compiler. a prototype machine with 24 processors is currently being developed. marco annaratone marco fillo kiyoshi nakabayashi marc viredaz embedded systems and hardware-software (panel): co-design: panacea or pandora's box? wayne wolf reduction of latency and resource usage in bit-level pipelined data paths for fpgas p. kollig b. m. al-hashimi vhdl synthesis using structured modeling this paper describes the use of vhdl in a behavioral synthesis system. a structured modeling methodology is presented which suggests standard practices for writing vhdl descriptions which span a variety of design models. the vhdl synthesis system (vss) processes each of these input descriptions and produces a structural description of generic components. j. s. lis d. d. gajski bist plas, pass or fail - a case study numerous built-in self testing (bist) designs now exist for the testing of programmable logic arrays (pla), but their practical usefulness has not been studied. in this paper, we implement and compare several bist designs using a common methodology of implementation. we also perform a yield analysis to characterize the yield degradation due to the bist design methodology. our preliminary finding is that the bist approach results in considerable degradation of yield and therefore may not be suitable as a test vehicle for plas. shambhu j. upadhyaya john a. thodiyil test generation for scan design circuits with tri-state modules and bidirectional terminals this paper describes a program which generates test patterns for scan design circuits with tri-state modules and bidirectional terminals. the test generation procedure uses a path sensitization technique with 14 signal values. 
the principal features of this program are test generation with automatic decision of i/o mode of bidirectional terminals, generation of test sets for high impedance state, and generation of test sets for system clock control circuits of shift register latches(srls) by using shift-in function of srls. takuji ogihara shinichi murai yuzo takamatsu kozo kinoshita hideo fujiwara assessing the cost effectiveness of integrated passives michael scheffler gerhard tröster synthesis of vhdl concurrent processes petru eles marius minea krzysztof kuchcinski zebo peng placement of irregular circuit elements on non-uniform gate arrays a program is described which was designed primarily to automatically place 5000 gate circuits comprising irregular drop-in components onto the uk5000 type gate array. the architecture of this array is unique, having latch cells together with basic logic cells already predefined on the uncommitted die and so is not a uniform structure. the program uses levels of automatic partitioning and placement forming initial solutions constructively followed by iterative improvement techniques. the concept of function dependent targeting for partitions is introduced together with a novel constructive initial partitioning algorithm. the concept extends throughout most of the subsequent improvement and placement processes. a novel placement improvement algorithm which considers the distribution of unused cells following initial placement is also introduced. the program is entirely file-driven, and uses plug-in algorithms making it suitable for a wide range of placement problems. i. h. kirk p. d. crowhurst j. a. skingley j. d. bowman g. l. taylor a fresh look at retiming via clock skew optimization rahul b. deokar sachin s. sapatnekar integrating superscalar processor components to implement register caching a large logical register file is important to allow effective compiler transformations or to provide a windowed space of registers to allow fast function calls. unfortunately, a large logical register file can be slow, particularly in the context of a wide-issue processor which requires an even larger physical register file, and many read and write ports. previous work has suggested that a register cache can be used to address this problem. this paper proposes a new register caching mechanism in which a number of good features from previous approaches are combined with existing out-of-order processor hardware to implement a register cache for a large logical register file. it does so by separating the logical register file from the physical register file and using a modified form of register renaming to make the cache easy to implement. the physical register file in this configuration contains fewer entries than the logical register file and is designed so that the physical register file acts as a cache for the logical register file, which is the backing store. the tag information in this caching technique is kept in the register alias table and the physical register file. it is found that the caching mechanism improves ipc up to 20% over an un-cached large logical register file and has performance near to that of a logical register file that is both large and fast. matthew postiff david greene steven raasch trevor mudge an optimal clock period selection method based on slack minimization criteria an important decision in synthesizing a hardware implementation from a behavioral description is selecting the clock period to schedule the datapath operations into control steps. 
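as a loose illustration of the register-caching idea summarized in the entry above (a small physical register file acting as a cache for a large logical register file, which is the backing store), here is a toy model in python; the structure and the lru eviction policy are assumptions for illustration, not the paper's mechanism.

```python
# toy model of a physical register file used as a cache for a large
# logical register file (the backing store). structure and lru policy
# are illustrative assumptions, not the mechanism of the entry above.
from collections import OrderedDict

class RegisterCache:
    def __init__(self, physical_regs, logical_regs):
        self.backing = [0] * logical_regs          # logical register file
        self.cache = OrderedDict()                 # logical index -> cached value
        self.capacity = physical_regs
        self.hits = self.misses = 0

    def _touch(self, idx):
        if idx in self.cache:
            self.cache.move_to_end(idx)
            self.hits += 1
            return
        self.misses += 1
        if len(self.cache) >= self.capacity:       # evict lru entry to backing store
            victim, value = self.cache.popitem(last=False)
            self.backing[victim] = value
        self.cache[idx] = self.backing[idx]

    def read(self, idx):
        self._touch(idx)
        return self.cache[idx]

    def write(self, idx, value):
        self._touch(idx)
        self.cache[idx] = value

rc = RegisterCache(physical_regs=16, logical_regs=128)
for i in [1, 2, 3, 1, 2, 3, 40, 41, 1]:
    rc.read(i)
print(rc.hits, rc.misses)
```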
prior to scheduling, most existing behavioral synthesis systems either require the designer to specify the clock period explicitly or require that the delays of the operators used in the design be specified in multiples of the clock period. an unfavorable choice of clock period could result in operations being idle for a large portion of the clock period and, consequently, affect the performance of the synthesized design. in this article, we demonstrate the effect of clock slack on the performance of designs and present an algorithm to find a slack-minimal clock period. we prove the optimality of our method and apply it to several examples to demonstrate its effectiveness in maximizing design performance. en-shou chang daniel d. gajski sanjiv narayan tests for path delay faults vs. tests for gate delay faults: how different they are andrzej krasniewski leszek b. wronski determining the optimum extended instruction-set architecture for application specific reconfigurable vliw cpus (poster abstract) c. alippi w. fornaciari l. pozzi m. sami a fast approach to computing exact solutions to the resource-constrained scheduling problem this article presents an algorithm that substantially reduces the computational effort required to obtain the exact solution to the resource constrained scheduling (rcs) problem. the reduction is obtained by (a) using a branch-and-bound search technique, which computes both upper and lower bounds, and (b) using efficient techniques to accurately estimate the possible time-steps at which each operation can be scheduled and using this to prune the search space. results on several benchmarks with varying resource constraints indicate the clear superiority of the algorithm presented here over traditional approaches using integer linear programming, with speed-ups of several orders of magnitude. m. narasimhan j. ramanujam a hardware/software partitioning algorithm for pipelined instruction set processor binh ngoc nguyen masaharu imai nobuyuki hikichi hardware differences in the 9373 and 9375 processors lawrence curley james kuruts john myers insyn: integrated scheduling for dsp applications alok sharma rajiv jain information theoretic measures of energy consumption at register transfer level diana marculescu radu marculescu massoud pedram the role of vhdl as a description language for off-the-shelf integrated circuits j. morris average interconnection length and interconnection distribution based on rent's rule in this paper we show that it is necessary to utilize different partitioning coefficients in interconnection length analyses which are based on rent's rule, depending on whether one- or two-dimensional placement strategies are used. β, the partitioning coefficient in the power-law relationship t = αb^β, provides a measure of the number of interconnections which cross a boundary enclosing b blocks. the partitioning coefficients are β = p/2 and β = p for two- and one-dimensional arrays, respectively, where p is the experimental coefficient of the rent relationship t = αb^p. based on these separate partitioning coefficients, an average interconnection length prediction is presented for rectangular arrays that outperforms existing predictions. examples are given to support this theory. c. v. gura j. a. abraham an experience in the formal verification of industrial software m. g. 
staskauskas synchronous design in vhdl alain debreil philippe oddo futures for partitioning in physical design (tutorial) the context for partitioning in physical design is dominated by two concerns: top-down design and the focus on spatial embedding. the role of partitioning is exactly that of a facilitator of divide-and-conquer metaheuristics for floorplanning, timing and placement optimization. formulations or optimization objectives for partitioning follow from its context and role. finally, the available algorithm technology determines how effectively we can address a given partitioning formulation and optimize a given objective. this invited paper considers the future of partitioning for physical design in light of these factors, and proposes a list of technology needs. a living version of this paper can be found at vlsicad.cs.ucla.edu. andrew b. kahng the placement problem as viewed from the physics of classical mechanics n. r. quinn hierarchical strategy of model partitioning for vlsi-design using an improved mixture of experts approach k. hering r. haupt th. villmann an asynchronous matrix-vector multiplier for discrete cosine transform this paper proposes an efficient asynchronous hardwired matrix-vector multiplier for the two-dimensional discrete cosine transform and inverse discrete cosine transform (dct/idct). the design achieves low power and high performance by taking advantage of the typically large fraction of zero and small-valued data in dct and idct applications. in particular, it skips multiplication by zero and dynamically activates/deactivates required bit-slices of fine-grain bit-partitioned adders using simplified, static-logic-based speculative completion sensing. the results extracted by both bit-level analysis and hspice simulations indicate significant improvements compared to traditional designs. kyeounsoo kim peter a. beerel youpyo hong parallel logic simulation on a network of workstations using parallel virtual machine this paper explores parallel logic simulation on a network of workstations using a parallel virtual machine (pvm). a novel parallel implementation of the centralized-time event-driven logic simulation algorithm is carried out such that no global controlling workstation is needed to synchronize the advance of simulation time. further advantages of our new approach include a random partitioning of the circuit onto available workstations and a pipelined execution of the different phases of the simulation algorithm. to achieve a better load balance, we employ a semioptimistic scheme for gate evaluations (in conjunction with a centralized-time algorithm) such that no rollback is required. the performance of this implementation has been evaluated using the iscas benchmark circuits. speedups improve with the size of the circuit and the activity level in the circuit. analyses of the communication overhead show that the techniques developed here will yield even higher gains as newer networking technologies like atm are employed to connect workstations. maciek kormicki ausif mahmood bradley s. carlson a hardware switch level simulator for large mos circuits the hss is a hardware switch level simulator that has been designed and built to be a useful and cost effective addition to a mos circuit designer's tool set. the hss is based on the mossim software simulator, but has been further developed to include hardware for simulating pass transistor circuits and for doing timing simulation. 
by using dynamic ram for internal list storage, a single hss processor can accommodate a circuit of up to 262,144 mos devices. the hss can be interfaced to a variety of host computers via a general purpose parallel interface, and in its current form offers a 25 times speed improvement compared to mossim ii running on a vax 11-780. timing mode offers similar speed advantages, with delay calculations that are sufficiently accurate for many simulation tasks. m. t. smith generation of universal series-parallel boolean functions the structural tree-based mapping algorithm is an efficient and popular technique for technology mapping. in order to make good use of this mapping technique in fpga design, it is desirable to design fpga logic modules based on boolean functions which can be represented by a tree of gates (i.e., series-parallel or sp functions). thakur and wong [1996a; 1996b] studied this issue and they demonstrated the advantages of designing logic modules as universal sp functions, that is, sp functions which can implement all sp functions with a certain number of inputs. the number of variables in the universal function corresponds to the number of inputs to the fpga module, so it is desirable to have as few variables as possible in the constructed functions. the universal sp functions presented in thakur and wong [1996a; 1996b] were designed manually. more recently, an algorithm was given that can generate these functions automatically [young and wong 1997], but the number of variables in the generated functions grows exponentially. in this paper, we present an algorithm to generate, for each n > 0, a universal sp function f_n for implementing all sp functions with n inputs or less. the number of variables in f_n is less than n^2.376 and the constructions are the smallest possible when n is small (n ≤ 7). we also derived a nontrivial lower bound of Ω(n log n) on the sizes of the optimal universal sp functions. f. y. young chris c. n. chu d. f. wong a comparison of four two-dimensional gate matrix layout tools a comparison of four layout tools is presented. the layout style is a two-dimensional gate matrix. the first layout tool discussed uses "standard" simulated annealing. annealing on gate clusters instead of individual gates can be used to improve the layout results. two different ways of determining good gate clusters for use in the annealing process are compared. the first way uses clusters derived from user specified gate hierarchies, while the second determines clusters based on gate connectivity. the fourth layout tool uses a decomposition scheme based on quadrisection. layout results for a set of benchmark circuits are presented for each of the tools. m. j. irwin r. m. owens delay and power optimization in vlsi circuits the problem of optimally sizing the transistors in a digital mos vlsi circuit is examined. macromodels are developed and new theorems on the optimal sizing of the transistors in a critical path are presented. the results of a design automation procedure to perform the optimization are discussed. lance a. glasser lennox p.j. hoyte illiads: a new fast mos timing simulator using direct equation-solving approach y.-h. shih s. m. kang parallel pattern fault simulation of path delay faults this paper presents an accelerated fault simulation approach for path delay faults. 
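the gate-matrix layout comparison above centers on simulated annealing over gate orderings; the sketch below is a generic annealing pass that minimizes total net span in a one-dimensional ordering. the cost function, move set, and cooling schedule are standard textbook choices assumed for illustration, not those of the tools being compared.

```python
# generic simulated-annealing pass over a gate ordering, minimizing the sum
# of net spans in a one-dimensional gate-matrix-style layout. cost, moves,
# and schedule are standard assumptions, not the compared tools' settings.
import math, random

def net_span_cost(order, nets):
    pos = {g: i for i, g in enumerate(order)}
    return sum(max(pos[g] for g in net) - min(pos[g] for g in net) for net in nets)

def anneal(gates, nets, t0=10.0, cooling=0.995, steps=20000, seed=0):
    rng = random.Random(seed)
    order = list(gates)
    cost = net_span_cost(order, nets)
    t = t0
    for _ in range(steps):
        i, j = rng.randrange(len(order)), rng.randrange(len(order))
        order[i], order[j] = order[j], order[i]          # propose a swap of two gates
        new = net_span_cost(order, nets)
        if new <= cost or rng.random() < math.exp((cost - new) / t):
            cost = new                                   # accept (possibly uphill) move
        else:
            order[i], order[j] = order[j], order[i]      # reject: undo the swap
        t *= cooling
    return order, cost

gates = list("abcdefgh")
nets = [("a", "c"), ("b", "d", "f"), ("e", "g"), ("c", "h"), ("a", "f")]
print(anneal(gates, nets))
```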
the distinct features of the proposed fault simulation method consist in the application of parallel processing of patterns at all stages of the calculation procedure, its versatility to account for both robust and non- robust detection of path delay faults, and its capability of efficiently maintaining large numbers of path faults to be simulated. m. schulz f. fink k. fuchs estimation and bounding of energy consumption in burst-mode control circuits peter a. beerel kenneth y. yun steven m. nowick pei-chuan yeh working set prefetching for cache memories eric e. johnson excl this paper describes excl, an automated circuit extraction program that transforms an ic layout into a circuit representation suitable for detailed circuit simulation. the program has built-in, general extraction algorithms capable of accurate computations of interconnection resistance, internodal capacitance, ground capacitance, and transistor sizes. however, where possible, the general algorithms are replaced with simple techniques, thereby improving execution speed. a basic component of the extractor is a procedure that decomposes regions into domains appropriate for specialized or simple algorithms. the paper describes the decomposition algorithm, the extraction algorithms and discusses how they connect with the rest of excl. steven p. mccormick a unified approach to the extraction of realistic multiple bridging and break faults gerald spiegel albrecht p. stroele timing analysis for digital fault simulation using assignable delays e. w. thompson s. a. szygenda n. billawala r. pierce statistical estimation of the switching activity in digital circuits michael g. xakellis farid n. najm exploiting regularity for low-power design renu mehra jan rabaey nes: the behavioral model for the formal semantics of a hardware design language udl/i this paper describes a new behavioral model of hardware, named nes (nondeterministic event sequence) model, which was developed for the purpose of defining formal semantics of the gate level and the register transfer level hardware description languages. the nes model is a generalization of the event driven simulation, and can be a basis of synthesis and verification as well as simulation. we introduce basic concepts, formal definition, and a description method of the nes model. nagisa ishiura hiroto yasuura shuzo yajima gate matrix layout of random control logic in a 32-bit cmos cpu chip adaptable to evolving logic design previously the gate matrix technique was used to lay out the ralu section of a cmos 32-bit cpu chip. it took 1.2 engineer-years to complete the layout of the ralu that contained more than 20,000 transistors with multiple-bus structure. the average packing density was 840 μm2 per transistor in 2.5 μm design rules. recently we have applied the gate matrix technique to lay out the highly complex "random control logic" of the cpu. with a well-structured layout strategy, the gate matrix layout provided (1) adaptability to evolving logic design with short turnaround times, (2) compatibility with computer aids in layout verification, (3) high packing density competitive with hand layout, and (4) compatibility with a team approach to layout. it took 6.5 engineer- years to complete the error-free layout of over 7,000 transistors although the logic design was continuously evolving during the layout period. more than half of the layout efforts were due to logic changes. 
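the nes entry above describes its model as a generalization of event-driven simulation; for readers unfamiliar with that baseline, here is a minimal conventional event-driven gate evaluation loop. it illustrates only the conventional mechanism, with a unit gate delay assumed, and is not the nes model itself.

```python
# minimal conventional event-driven gate evaluation loop, included only as
# the baseline that the nes model above generalizes; it is not nes itself.
import heapq

def simulate(gates, fanout, values, events, until=100):
    """gates: name -> (function, input names); fanout: signal -> driven gates;
    values: current signal values; events: list of (time, signal, value)."""
    queue = list(events)
    heapq.heapify(queue)
    while queue:
        t, sig, val = heapq.heappop(queue)
        if t > until or values.get(sig) == val:
            continue                       # drop late or non-changing events
        values[sig] = val
        for g in fanout.get(sig, ()):      # re-evaluate gates driven by sig
            func, ins = gates[g]
            new = func(*(values[i] for i in ins))
            heapq.heappush(queue, (t + 1, g, new))   # unit gate delay assumed
    return values

gates  = {"n1": (lambda a, b: 1 - (a & b), ("a", "b"))}     # a single nand gate
fanout = {"a": ["n1"], "b": ["n1"]}
vals   = {"a": 0, "b": 0, "n1": 1}
print(simulate(gates, fanout, vals, [(0, "a", 1), (2, "b", 1)]))
```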
the average packing density in the final layout of the random control logic was about 1,500 μm2 per transistor with 2.5 μm design rules. s. m. kang r. h. krambeck h. f. s. law unified access to heterogeneous module generators andreas koch verifying imprecisely working arithmetic circuits m. huhn k. schneider th. kropf g. logothetis extension of the critical path tracing algorithm t. ramakrishnan l. kinney wrong-path instruction prefetching jim pierce trevor mudge single-layer fanout routing and routability analysis for ball grid arrays man- fai yu wayne wei-ming dai structured design implementation: a strategy for implementing regular datapaths on fpgas andreas koch an efficient heuristic approach to solve the unate covering problem roberto cordone fabrizio ferrandi donatella sciuto roberto wolfler calvo asic design with vhdl david b. alford digital cmos logic operation in the sub-threshold region numerous efforts in balancing the trade-off between power, area and performance have been carried out in the medium performance, medium power region of the design spectrum. however, not much study has been done at the two extreme ends of the design spectrum, namely, the ultra-low power with acceptable performance at one end, and high performance with power within limit at the other. in this paper, we focus on the ultra-low power end of the spectrum where performance is of secondary importance. one solution to achieve the ultra-low power requirement is to operate the digital logic gates in sub- threshold region. in this paper, we analyze both cmos and pseudo-nmos logic operating in sub- threshold region. we compare the results with cmos in normal strong inversion region and with other known low-power logic, namely, energy recovery logic. results show energy/switching reduction of two orders of magnitude from an 8×8 carry-save array multiplier when it is operated in the sub- threshold region. hendrawan soeleman kaushik roy universal logic gate for fpga design in this paper the problem of selecting an appropriate programmable cell structure for fpga architecture design is addressed. the cells studied here can be configured to the desired functionality by applying input permutation, negation, bridging or constant assignment, or output negation. a general methodology to determine logic description of such cells, which are capable of being configured to a given set of functions is described. experimental results suggest that the new cell behaves as well as the actel 2 cell in terms of logic power but requires substantially less area and wiring overhead. chih-chang lin malgorzata marek-sadowska duane gatlin rc interconnect optimization under the elmore delay model sachin s. sapatnekar universal switch-module design for symmetric-array-based fpgas yao-wen chang d. f. wong c. k. wong short circuit power consumption of glitches d. rabe w. nebel a design for testability scheme with applications to data path synthesis scott chiu christos a. papachristou practical applications of recursive vhdl components in fpga synthesis john mccluskey tutorial: microcode compaction is smaller always better? microcode compaction is a technique whose time has come. recent advances make likely the development of microprogramming language compilers that use compaction as an optimization step. this session explains and contrasts several important approaches to compaction. the listener need not have a prior knowledge of microprogramming. david landskov improved tool and data selection in task management john w. 
hagerman stephen w. director darwin: cmos opamp synthesis by means of a genetic algorithm wim kruiskamp domine leenaerts re-encoding sequential circuits to reduce power dissipation we present a fully implicit encoding algorithm for minimization of average power dissipation in sequential circuits, based on the reduction of the average number of bit changes per state transition. we have studied two novel schemes for this purpose, one based on recursive weighted non-bipartite matching, and one on recursive minicut bi-partitioning. we employ adds (algebraic decision diagrams) to compute the transition probabilities, to measure the potential area saving, and in the encoding algorithms themselves. our experiments show the effectiveness of our method in reducing power dissipation for large sequential designs. gary d. hachtel mariano hermida abelardo pardo massimo poncino fabio somenzi register minimization beyond sharing among variables tsung-yi wu youn-long lin retiming sequential circuits with multiple register classes klaus eckl christian legl will your bridge stand the load? (position paper) the successful exploitation of lsi for creating products with greater function at reduced cost per operation is critically dependent on the availability of design systems that comprehend the job to be done. as system and component technologies continue their rapid rate of development, there is an increasingly higher probability that our ability to capitalize on them will be severely impacted by the lack of appropriate design system technology development. these impacts will take the form of our inability to deliver an affordable product to the marketplace while the demand still exists and before the technology is completely obsolete. this position statement is not intended as an indictment of those who presently plan and implement "da" but rather it is a warning to both users and developers of da indicating where we will end up unless we look beyond the activities that engage us today and change our focus from "design automation tools" to "product design systems". a. e. fitch phase change recording henk van houten wouter leibbrandt compacting mimola microcode j. bhasker t. samad a report on the fifth high-level synthesis workshop, buehlerhoehe, germany, march 1991 elke a. rundensteiner optimal order of the vlsi ic testing sequence in this paper we introduce a technique which manipulates the order of tests to minimize the length of the testing sequence. probabilities of fault occurrences, which are analyzed in terms of random phenomena inherent in the vlsi manufacturing process, are used to determine this optimal testing order. simulation of the pla testing process indicates that there exist possibilities for significant improvement in testing efficiency of the actual vlsi circuit. wojciech maly a vlsi view of microprogrammed system design in this paper, the possible effects of vlsi technology on the design and development process of microprogrammed systems are explored. the function architectures of future microprogrammed vlsi systems are expected to be very complex, and most of them will be implemented as heterogeneous multiprocessors with each processor being microprogrammed to perform specific tasks. current microprogrammed system design methodologies are examined and are shown to be inadequate. a new design methodology employing a synthetic approach for developing microprogrammed systems is proposed. 
tientien li testing an integrated circuit for what it should not do (abstract only) integrated circuits are designed to perform some intended function. this is true not only of the circuit as a whole, but also of all of the functional parts that make up the circuit. the general philosophy of testing is to provide some sequence of inputs and test to see if the outputs reflect the intended function. little notice is given to the possibility that the circuit may have produced undesirable side effects. for example, a processor that is given a "load accumulator" instruction may indeed load data into the accumulator exactly as desired, but it may also incorrectly change the contents of some other register. the problem with testing for this condition is that the number of possibilities may be very large and the time required for exhaustive testing may become excessive. furthermore, it may become difficult to identify the meaningful possibilities to check for. this paper presents a case study for problems of this kind and shows how the problems encountered defy recognition by normal logic simulation and test generation techniques. extended methods of test generation are proposed for identifying problems of this type. charles kapps hardware microcontrol schemes using plas four new schemes for microprogram control design with programmable logic arrays (plas) are proposed. the general structure of the first three schemes consists of three units, namely the microcode memory (rom), the microsequencer pla, and a register-counter. the basic idea is to store only branching information, by means of control constructs or transactions, in the pla(s). these transactions have simple jump-type or continue-type formats with only the jump being embedded in pla(s). a more general structure, scheme 4, is also proposed with the objective of generating powerful transactions implementing complex control constructs, such as microsubroutines, nested microprogram loops, etc., in addition to multiway branch capability. these transactions contain horizontally formatted directive bits and, hence, they exhibit a measure of parallelism. the aim is to transform the sequencing structure of a microprogram into a "program" composed of these transactions. however, a directive-driven processor is required to execute each transaction in order to produce the desired address. christos a. papachristou on routing two-point nets across a channel many problems that arise in general channel routing manifest themselves in simpler situations. we consider connecting a set of n terminals on a line to another set on a parallel line across a rectangular channel. we show that in any solution to the problem that (almost) minimizes the width of the channel (i.e. the distance between the lines the terminals reside on), a net may require as many as Ω(√n) horizontal jogs, and no net routed from top to bottom need ever turn upward in the middle. we also present an efficient algorithm to obtain minimal jogging in river routing, and provide necessary and sufficient conditions for conflict cycle resolution. these and other results are presented in the context of a general survey on routing from a combinatorial complexity point of view. ron y. pinter footprints in the cache this paper develops an analytical model for a cache-reload transient. when an interrupt program or system program runs periodically in a cache-based computer, a short cache-reload transient occurs each time the interrupt program is invoked. 
that transient depends on the size of the cache, the fraction of the cache used by the interrupt program, and the fraction of the cache used by background programs that run between interrupts. we call the portion of a cache used by a program its footprint in the cache, and we show that the reload transient is related to the area in the tail of a normal distribution whose mean is a function of the footprints of the programs that compete for the cache. we believe that the model may be useful as well for predicting paging behavior in virtual-memory systems with round-robin scheduling. harold s. stone dominique thibaut a reconfigurable multi-function computing cache architecture a considerable portion of a chip is dedicated to a cache memory in a modern microprocessor chip. however, some applications may not actively need all the cache storage, especially the computing bandwidth limited applications. instead, such applications may be able to use some additional computing resources. if the unused portion of the cache could serve these computation needs, the on-chip resources would be utilized more efficiently. this presents an opportunity to explore the reconfiguration of a part of the cache memory for computing. in this paper, we present a cache architecture to convert a cache into a computing unit for either of the following two structured computations, fir and dct/idct. in order to convert a cache memory to a function unit, we include additional logic to embed multi-bit output luts into the cache structure. therefore, the cache can perform computations when it is reconfigured as a function unit. the experimental results show that the reconfigurable module improves the execution time of applications with a large number of data elements by a large factor (as high as 50 and 60). in addition, the area overhead of the reconfigurable cache module for fir and dct/idct is less than the core area of those functions. our simulations indicate that a reconfigurable cache does not take a significant delay penalty compared with a dedicated cache memory. the concept of reconfigurable cache modules can be applied at level-2 caches instead of level-1 caches to provide an active- level-2 cache similar to active memories. hue-sung kim arun k. somani akhilesh tyagi automatic partitioning for deterministic test d. crestani a. aguila m.-h. gentil p. chardon c. durante defect-oriented mixed-level fault simulation of digital systems-on-a-chip using hdl m. b. santos j. p. teixeira technology mapping and retargeting for field-programmable analog arrays sree ganesan ranga vemuri logic emulation (panel): a niche or a future standard for design verification? jonathan rose the performance potential of data dependence speculation & collapsing yiannakis sazeides stamatis vassiliadis james e. smith the memory gap and the future of high performance memories maurice v. wilkes lroute: a delay minimal router for hierarchical cplds this paper describes lroute, a novel router for the popular and scalable hierarchical complex programmable logic devices (cplds). cpld routing has constraints on routing topologies due to architectural limitations and performance considerations. these constraints make the problem quite different from fpga routing and render the routing problem more complicated. extensions of popular fpga routers like the maze router performs poorly on such cplds. there is also little published work on cpld routing. 
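to make the footprint and reload-transient notions of the cache entry above concrete, the toy simulation below counts the extra misses an interrupt routine's footprint inflicts on a background program in a small direct-mapped cache. the geometry and access streams are invented for illustration; this does not reproduce the paper's analytical normal-tail model.

```python
# toy direct-mapped cache used to observe a cache-reload transient; the
# geometry, access streams, and footprints are invented for illustration
# and do not reproduce the analytical model described in the entry above.
LINES, LINE = 256, 16            # 256 lines of 16 bytes: a 4 kb cache

def run(addresses, cache):
    misses = 0
    for a in addresses:
        idx, tag = (a // LINE) % LINES, a // (LINE * LINES)
        if cache.get(idx) != tag:
            cache[idx] = tag
            misses += 1
    return misses

background = [a for _ in range(20) for a in range(0, 3 * 1024, LINE)]   # 3 kb loop
interrupt  = [a for a in range(64 * 1024, 64 * 1024 + 2 * 1024, LINE)]  # 2 kb footprint

cache = {}
run(background, cache)                 # warm-up: background footprint becomes resident
steady = run(background, cache)        # steady state: no misses
run(interrupt, cache)                  # interrupt evicts the lines it overlaps
transient = run(background, cache)     # reload transient shows up here
print("steady misses:", steady, "misses after interrupt:", transient)
```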
lroute uses a different paradigm based on the lagrangian relaxation framework in the theory of mathematical programming. it respects the topology constraints imposed and routes a circuit with minimum delay. we tested this router on a set of industry problems that commercial software failed to route. our router was able to route all of them very quickly. k. k. lee martin d. f. wong integrating formal verification methods with a conventional project design flow Ásgeir th. eiríksson fast and accurate timing simulation with regionwise quadratic models of mos i-v characteristics this paper presents a technique called regionwise quadratic (rwq) modeling that allows highly accurate mos models, as well as measured i-v data, to be used in fast timing simulation. this technique significantly increases the accuracy of fast timing simulation while maintaining efficiency by permitting analytical solutions of node equations. a fast timing simulator using these rwq models has been implemented. several examples of rwq modeling are provided, and comparisons of simulation results with spice3 are shown to demonstrate accuracy and efficiency. speedups of two to three orders of magnitude for circuits containing up to 2000 transistors are observed. a. dharchoudhury s. m. kang k. h. kim s. h. lee representing circuits more efficiently in symbolic model checking j. r. burch e. m. clarke d. e. long scoap: sandia controllability/observability analysis program scoap is a program developed at sandia national laboratories for the analysis of digital circuit testability. testability is related to the difficulty of controlling and observing the logical values of internal nodes from circuit inputs and outputs, respectively. this paper reviews the testability analysis algorithms and describes their implementation in the scoap program. lawrence h. goldstein evelyn l. thigpen graph partitioning for concurrent test scheduling in vlsi circuit chien-in henry chen cregs: a new kind of memory for referencing arrays and pointers often, pointer and subscripted array references touch memory locations for which there are several possible aliases, hence these references cannot be made from registers. although conventional caches can increase performance somewhat, they do not provide many of the benefits of registers, and do not permit the compiler to perform many optimizations associated with register references. the creg (pronounced "c-reg") mechanism combines the hardware structures of cache and registers to create a new kind of memory structure, which can be used either as processor registers or as a replacement for conventional cache memory. by permitting aliased names to be grouped together, cregs resolve ambiguous alias problems in hardware, resulting in more efficient execution than even the combination of conventional registers and cache can provide. this paper discusses both the conceptual creg hardware structure and the compiler analysis and optimization techniques to manage that structure. h. dietz c. h. chi on-chip transient current monitor for testing of low-voltage cmos ic v. stopjaková h. manhaeve m. sidiropulos on the generation of small dictionaries for fault location irith pomeranz sudhakar m. reddy a reordering technique for efficient code motion luiz c. v. dos santos jochen a. g. jess delay test effectiveness evaluation of lssd-based vlsi logic circuits david m. wu charles e. radke computer architecture simulation using a register transfer language r. pittman l. 
bartel a low power high performance switched-current multiplier d. m. w. leenaerts g. h. m. joordens j. a. hegt efficient computation of quasi-periodic circuit operating conditions via a mixed frequency/time approach dan feng joel phillips keith nabors ken kundert jacob white vlsi design synthesis with testability a vlsi design synthesis approach with testability, area, and delay constraints is presented. this research differs from other synthesizers by implementing testability as part of the vlsi design solution. a binary tree data structure is used throughout the testable design search. its bottom up and top down tree algorithms provide datapath allocation, constraint estimation, and feedback for design exploration. the partitioning and two dimensional characteristics of the binary tree structure provide vlsi design floorplans and global information for test incorporation. an elliptical wave filter example was used to illustrate the design synthesis with testability constraints methodology. test methodologies such as multiple chain scan paths and bist with different test schedules were explored. design scores comprised of area, delay, fault coverage, and test length were computed and graphed. results show that the 'best' testable design solution is not always the same as that obtained from the 'best' design solution of an area and delay based synthesis search. catherine h. gebotys mohamed i. elmasry a partitioning-based logic optimization method for large scale circuits with boolean matrix yuichi nakamura takeshi yoshimura time-symbolic simulation for accurate timing verification of asynchronous behavior of logic circuits as a new approach for timing verification of logic circuits, we propose a new concept of time-symbolic simulation. while a conventional symbolic simulator treats signal values as logical expressions, a time-symbolic simulator treats time as algebraic expressions. in this paper, we describe algorithms for time- symbolic simulation, and its application to hazard detection and verification of asynchronous sequential circuits. n. ishiura m. takahashi s. yajima parity logging overcoming the small write problem in redundant disk arrays parity encoded redundant disk arrays provide highly reliable, cost effective secondary storage with high performance for read accesses and large write accesses. their performance on small writes, however, is much worse than mirrored disks---the traditional, highly reliable, but expensive organization for secondary storage. unfortunately, small writes are a substantial portion of the i/o workload of many important, demanding applications such as on-line transaction processing. this paper presents parity logging, a novel solution to the small write problem for redundant disk arrays. parity logging applies journalling techniques to substantially reduce the cost of small writes. we provide a detailed analysis of parity logging and competing schemes--- mirroring, floating storage, and raid level 5--- and verify these models by simulation. parity logging provides performance competitive with mirroring, the best of the alternative single failure tolerating disk array organizations. however, its overhead cost is close to the minimum offered by raid level 5. finally, parity logging can exploit data caching much more effectively than all three alternative approaches. 
daniel stodolsky garth gibson mark holland balance in architectural design we introduce a performance metric, normalized time, which is closely related to such measures as the area-time product of vlsi theory and the price / performance ratio of advertising literature. this metric captures the idea of a piece of hardware "pulling its own weight," i.e. contributing as much to performance as it costs in resources. we then prove general theorems for stating when the size of a given part is in balance with its utilization, and give specific formulas for commonly found linear and quadratic devices. we also apply these formulas to an analysis of a specific processor element, and discuss the implications for bit-serial vs word- parallel, risc vs cisc, and vliw designs. samuel ho lawrence snyder introspection: a low overhead binding technique during self-diagnosing microarchitecture synthesis balakrishnan iyer ramesh karri fast identification of robust dependent path delay faults u. sparmann d. luxenburger k.-t. cheng s. m. reddy fast functional simulation using branching programs pranav ashar sharad malik architectural potholes john mashey an approach for extracting rt timing information to annotate algorithmic vhdl specifications cordula hansen francisco nascimento wolfgang rosenstiel integrated test of interacting controllers and datapaths in systems consisting of interacting datapaths and controllers and utilizing built-in self test (bist), the datapaths and controllers are traditionally tested separately by isolating each component from the environment of the system during test. this work facilitates the testing of datapath/controller pairs in an integrated fashion. the key to the approach is the addition of logic to the system that interacts with the existing controller to push the effects of controller faults into the data flow, so that they can be observed at the datapath registers rather than directly at the controller outputs. the result is to reduce the bist overhead over what is needed if the datapath and controller are tested independently, and to allow a more complete test of the interface between datapath and controller, including the faults that do not manifest themselves in isolation. fault coverage and overhead results are given for four example circuits. mehrdad nourani joan carletta christos papachristou exploiting instruction level parallelism in geometry processing for three dimensional graphics applications chia-lin yang barton sano alvin r. lebeck algorithms for solving boolean satisfiability in combinational circuits luís guerra e silva l. miguel silveira joöa marques-silva delay minimization and technology mapping of two-level structures and implementation using clock-delayed domino logic jovanka ciric gin yee carl sechen april: a processor architecture for multiprocessing processors in large-scale multiprocessors must be able to tolerate large communication latencies and synchronization delays. this paper describes the architecture of a rapid-context-switching processor called april with support for fine-grain threads and synchronization. april achieves high single-thread performance and supports virtual dynamic threads. a commercial risc-based implementation of april and a run-time software system that can switch contexts in about 10 cycles is described. 
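the parity-logging entry above hinges on the small-write parity update of a single-failure-tolerant array; the sketch below shows that xor update with the parity deltas journalled to a log and applied later in bulk. it is a schematic illustration of the idea, not the paper's implementation or on-disk layout.

```python
# small-write parity update with the parity deltas journalled to a log and
# applied later in bulk. a schematic illustration of the parity-logging idea
# from the entry above, not the paper's implementation.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

stripe = [bytes([i] * 4) for i in range(4)]        # four data blocks of one stripe
parity = bytes(4)
for blk in stripe:
    parity = xor(parity, blk)

parity_log = []                                    # journal of pending parity deltas

def small_write(disk, new_data):
    """update one data block; log the parity delta instead of rewriting parity."""
    delta = xor(stripe[disk], new_data)            # old data xor new data
    stripe[disk] = new_data
    parity_log.append(delta)

def flush_log():
    global parity
    for delta in parity_log:                       # apply logged deltas in bulk
        parity = xor(parity, delta)
    parity_log.clear()

small_write(2, b"\xff\xff\xff\xff")
flush_log()
check = bytes(4)
for blk in stripe:
    check = xor(check, blk)
assert check == parity                             # parity still covers the stripe
```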
measurements taken for several parallel applications on an april simulator show that the overhead for supporting parallel tasks based on futures is reduced by a factor of two over a corresponding implementation on the encore multimax. the scalability of a multiprocessor based on april is explored using a performance model. we show that the sparc-based implementation of april can achieve close to 80% processor utilization with as few as three resident threads per processor in a large-scale cache-based machine with an average base network latency of 55 cycles. anant agarwal beng-hong lim david kranz john kubiatowicz noise estimation due to signal activity for capacitively coupled cmos logic gates the effect of interconnect coupling capacitance on neighboring cmos logic gates driving coupled interconnections strongly depends upon the signal activity. a transient analysis of two capacitively coupled cmos logic gates is presented in this paper for different combinations of signal activity. the uncertainty of the effective load capacitance and propagation delay due to the signal activity is addressed. analytical expressions characterizing the output voltage and propagation delay are also presented for different signal activity conditions. the propagation delay based on these analytical expressions is within 3% as compared to spice, while the estimated delay neglecting the difference between the load capacitances can exceed 45%. the logic gates should be properly sized to balance the load capacitances in order to minimize any uncertainty in the signal delay. the peak noise voltage on a quiet interconnection determined from the analytical expressions is within 4% of spice. kevin t. tang eby g. friedman functional testing of digital systems functional testing is testing aimed at validating the correct operation of a digital system with respect to its functional specification. we have designed and implemented a practical test generation methodology that can generate tests directly from a system's high-level specification. solutions adopted include multi-level fault models and multi-stage test generation. tests generated from the methodology were compared against test programs supplied by a computer manufacturer and were found to detect more faults with much better efficiency. the experiment demonstrated that functional testing can be both practical and efficient. automatic generation of design validation tests is now closer to reality. kwok-woon lai daniel p. siewiorek an efficient architecture for loop based data preloading william y. chen roger a. bringmann scott a. mahlke richard e. hank james e. sicolo simultaneous circuit partitioning/clustering with retiming for performance optimization jason cong honching li chang wu effective low power bist for datapaths (poster paper) d. gizopoulos n. kranitis a. paschalis m. psarakis y. zorian towards maximising the use of structural vhdl for synthesis k. o'brien s. maginot a. robert a low power unified cache architecture providing power and performance flexibility (poster session) advances in technology have allowed portable electronic devices to become smaller and more complex, placing stringent power and performance requirements on the devices' components. the m·core m3 architecture was developed specifically for these embedded applications. to address the growing need for longer battery life and higher performance, an 8-kbyte, 4-way set-associative, unified (instruction and data) cache with programmable features was added to the m3 core. 
these features allow the architecture to be optimized based on the application's requirements. in this paper, we focus on the features of the m340 cache sub-system and illustrate the effect on power and performance through benchmark analysis and actual silicon measurements. afzal malik bill moyer dan cermak synthesis of hazard-free customized cmos complex-gate networks under multiple-input changes prabhakar kudva ganesh gopalakrishnan hans jacobson steven m. nowick cas-bus: a scalable and reconfigurable test access mechanisms for systems on a chip mounir benabdenbi walid maroufi digital test generation and design for testability this paper is a tutorial intended primarily for individuals just getting started in digital testing. basic concepts of testing are described, and the steps in the test development process are discussed. a pragmatic approach to test sequence generation is presented, oriented towards ics interconnected on a board. finally, design for testability techniques are described, with an emphasis on solving problems that appeared during the test generation discussion. john grason andrew w. nagle iterative improvement based multi-way netlist partitioning for fpgas helena krupnova gabriele saucier sequence-pair based placement method for hard/soft/pre-placed modules this paper proposes a placement method for a mixed set of hard, soft, and pre-placed modules, based on a placement topology representation called sequence-pair. under one sequence-pair, a convex optimization problem is efficiently formulated and solved to optimize the aspect ratios of the soft modules. the method is used in two ways: i) directly applied in simulated annealing to present the most exact placement method, ii) applied as a post process in an approximate placement method for faster computation. the performance of these two methods is reported using mcnc benchmark examples. hiroshi murata ernest s. kuh a study of instruction cache organizations and replacement policies instruction caches are analyzed both theoretically and experimentally. the theoretical analysis begins with a new model for cache referencing behavior---the loop model. this model is used to study cache organizations and replacement policies. it is concluded theoretically that random replacement is better than lru and fifo, and that under certain circumstances, a direct-mapped or set-associative cache may perform better than a fully associative cache organization. experimental results using instruction trace data are then given. the experimental results are shown to support the theoretical conclusions. james e. smith james r. goodman a low-voltage cmos multiplier for rf applications (poster session) a low-voltage analog multiplier operating at 1.2v is presented. the multiplier core consists of four mos transistors operating in the saturation region. the circuit exploits the quadratic relation between current and voltage of the mos transistor in saturation. the circuit was designed using standard 0.6μm cmos technology. simulation results indicate an ip3 of 4.9dbm and a spur free dynamic range of 45db. carl james debono franco maloberti joseph micallef power efficient mediaprocessors: design space exploration johnson kin chunho lee william h. mangione-smith miodrag potkonjak dynajust: an efficient automatic routing technique optimizing delay conditions a new routing technique dynajust, dynamic wire length adjustment, is described. it accurately realizes specified wire lengths to fulfill delay conditions. 
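the instruction-cache study above concludes that random replacement can beat lru under the loop model; the toy fully associative cache below reproduces that effect for a loop one block larger than the cache. sizes and the reference pattern are arbitrary illustrative assumptions.

```python
# toy fully associative cache illustrating why random replacement can beat
# lru on a loop slightly larger than the cache (the loop-model effect noted
# in the instruction-cache study above). sizes are arbitrary assumptions.
import random

def misses(policy, cache_blocks=8, loop_blocks=9, iterations=200, seed=1):
    rng = random.Random(seed)
    cache, count = [], 0
    for _ in range(iterations):
        for b in range(loop_blocks):
            if b in cache:
                if policy == "lru":
                    cache.remove(b); cache.append(b)   # refresh recency on a hit
                continue
            count += 1
            if len(cache) >= cache_blocks:
                victim = 0 if policy == "lru" else rng.randrange(len(cache))
                cache.pop(victim)                      # front of list is the lru block
            cache.append(b)
    return count

print("lru:", misses("lru"), "random:", misses("random"))
# lru thrashes (every reference misses); random keeps a useful fraction resident
```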
the implementation, based on the combination of shortest path algorithms, is proposed to achieve a high completion ratio in a short processing time. the technique is useful in practical situations where high accuracy is required of many nets. y. fujihara y. sekiyama y. ishibashi m. yanaka a logic minimizer for vlsi pla design this paper describes logmin, a new, interactive computer aided logic design tool. logmin automates the increasingly complex problems of vlsi pla design which has made the specification, manipulation, minimization and generation of plas difficult to do by hand. logmin allows the specification of both combinational functions and sequential machines. combinational functions may be described using a variety of operators, intermediate variables or pla code. a state machine description language (smdl) was developed for the specification of sequential machines. this paper describes the background and motivation for logmin, the algorithms used and the grammar for smdl. several examples are provided. bill teel doran wilde guarded evaluation: pushing power management to logic synthesis/design vivek tiwari sharad malik pranav ashar the validity of retiming sequential circuits vigyan singhal carl pixley richard l. rudell robert k. brayton program path analysis to bound cache-related preemption delay in preemptive real-time systems unpredictable behavior of cache memory makes it difficult to statically analyze the worst-case performance of real-time systems. this problem is exacerbated in the case of preemptive multitask systems due to intertask cache interference, called cache-related preemption delay (crpd). this paper proposes an approach to analyzing the tight upper bound on crpd which a task might impose on lower-priority tasks. our method determines the program execution path of the task which requires the maximum number of cache blocks using an integer linear programming technique. experimental results show that our approach provides up to 69% tighter bounds on crpd than a previous approach. hiroyuki tomiyama nikil d. dutt automated phase assignment for the synthesis of low power domino circuits priyadarshan patra unni narayanan optimizations for a highly cost-efficient programmable logic architecture architects of programmable logic devices (plds) face several challenges when optimizing a new device family for low manufacturing cost. when given an aggressive die-size goal, functional blocks that seem otherwise insignificant become targets for area reduction. once low die cost is achieved, it is seen that testing and packaging costs must be considered. interactions among these three cost contributors pose trade-offs that prevent independent optimization. this paper discusses solutions discovered by the architects optimizing the altera flex 6000 architecture. kerry veenstra bruce pedersen jay schleicher chiakang sung hsis: a bdd-based environment for formal verification a. aziz f. balarin s.-t. cheng r. hojati t. kam s. c. krishnan r. k. ranjan t. r. shiple v. singhal s. tasiran h.-y. wang r. k. brayton a. l. sangiovanni-vincentelli an investigation of the performance of a distributed functional digital simulator p. chawla h. w. carter real-time, frame-rate face detection on a configurable hardware system (poster abstract) rob mccready jonathan rose efficient algorithms for computing the longest viable path in a combinational network we consider the elimination of false paths in combinational circuits. 
we give the single generic algorithm that is used to solve this problem, and demonstrate that it is parameterized by a boolean function called the sensitization condition. we give two criteria which we argue a valid sensitization condition must meet, and introduce four conditions that have appeared in the recent literature, of which two meet the criteria and two do not. we then introduce a dynamic programming procedure for the tightest of these conditions, the viability condition, and discuss the integration of all four sensitization conditions in the lllama timing environment. we give results on the iwls and iscas benchmark examples and on carry-bypass adders. p. c. mcgeer r. k. brayton fast printed circuit board routing this paper describes the algorithms in a printed circuit board router used for fully automatic routing of high-density circuit boards. completely automatic routing and running times of a few minutes have resulted from a new data structure for efficient representation of the routing grid, quick searches for optimal solutions, and generalizations of lee's algorithm. j. dion total stuck-at-fault testing by circuit transformation we present a new approach to the production testing of vlsi circuits. by using very structured design for testability, we achieve 100% single stuck-at fault coverage with under 20 test vectors and no search. the approach also detects most multiple faults. andrea s. lapaugh richard j. lipton hot cold optimization of large windows/nt applications robert cohn p. geoffrey lowney formal verification using parametric representations of boolean constraints mark d. aagaard robert b. jones carl-johan h. serger power exploration for data dominated video applications sven wuytack francky catthoor lode nachtergaele hugo de man functional verification of mos circuits this report describes the ideas behind silica pithecus, a program which verifies synchronous digital mos vlsi circuits. silica pithecus accepts the schematic of an mos vlsi circuit, declarations of the logical relationships between the input signals (e.g., which inputs are mutually exclusive), and a specification of the intended digital behavior of the circuit. if the circuit fails to meet its specification, silica pithecus returns to the designer the precise reason it fails to do so. unlike previous verification systems, silica pithecus employs a realistic electrical model. it also automatically generates the constraints on the inputs of a circuit which ensure the circuit will exhibit its intended digital behavior. these constraints are necessary for hierarchical verification. silica pithecus operates hierarchically, interactively, and incrementally. d. weise power minimization of functional units by partially guarded computation this paper deals with the power minimization problem for data-dominated applications based on a novel concept called partially guarded computation. we divide a functional unit into two parts - msp (most significant part) and lsp (least significant part) - and allow the functional unit to perform only the lsp computation if the range of output data can be covered by lsp. we dynamically disable msp computation to remove unnecessary transitions thereby reducing power consumption. we also propose a systematic approach for determining the optimal location of the boundary between the two parts during high-level synthesis. experimental results show about 10~44% power reduction with about 30~36% area overhead and less than 3% delay overhead in functional units. 
junghwan choi jinhwan jeon kiyoung choi system level design using c++ diederik verkest joachim kunkel frank schirrmeister transistor reordering for power minimization under delay constraint in this article we address the problem of optimization of vlsi circuits to minimize power consumption while meeting performance goals. we present a method of estimating power consumption of a basic or complex cmos gate which takes the internal capacitances of the gate into account. this method is used to select an ordering of series-connected transistors found in cmos gates to achieve lower power consumption. the method is very efficient when used by library-based design styles. we describe a multipass algorithm that makes use of transistor reordering to optimize performance and power consumption of circuits, has a linear time complexity per pass, and converges to a solution in a small number of passes. transformations in addition to transistor reordering can be used by the algorithm. the algorithm has been benchmarked on several large examples and the results are presented. s. c. prasad k. roy efficient trace-driven simulation methods for cache performance analysis wen- hann wang jean-loup baer the proposal of a computing model for prototypes of microprogrammed machines solving complex problems our research work concerns the definition of computational systems for the solution of complex scientific problems. in this work we present the analysis of a computing model for the definition and the implementation of microprogrammed prototypal machines. the computing model is expressed in terms of a system of equations. the structure and the control of a microprogrammed machine is directly implied by the formally defined properties of the system of equations. e. binaghi g. pasi g. r. sechi ku-band earth stations for data communications (panel session) (title only) j. field d. lyon e. parker e. yousefzadeh addressing high frequency effects in vlsi interconnects with full wave model and cfh ramachandra achar michel s. nakhla q. j. zhang low-power radix-4 divider a. nannarelli t. lang hardware compilation for fpga-based configurable computing machines xiaohan zhu bill lin hyhope: a fast fault simulator with efficient simulation of hypertrophic faults in sequential circuit fault simulation, the hypertrophic faults, which result from lengthened initialization sequence in the faulty circuits, usually produce a large number of fault events during simulation and require excessive gate evaluations. these faults degrade the performance of fault simulators attempting to simulate them exactly. in this paper, an exact simulation algorithm is developed to identify the hypertropic faults and to minimize their effects during the fault simulation. the simulator hyhope based on this algorithm shows that the average speedup ratio over hope 1.1 is 1.57 for iscas89 benchmark circuits. furthermore, the result indicates the performance of hyhope is close to the approximate simulator in which faults are simply dropped when they become potentially detected. chen-pin kung chen- shang lin timing uncertainty analysis for time-of-flight systems time-of-flight synchronization is a new digital design methodology that eliminates all latching devices, allowing higher clock rates than alternative timing schemes. synchronization is accomplished by precisely balancing connection delays. many effective pipeline stages are created by pipelining combinational logic, similar in concept to wave pipelining but differing in several respects. 
due to the unique flow-through nature of circuits and to the need for pulse-mode operation, time-of-flight design exposes interesting new areas for cad timing analysis. this paper discusses how static propagation delay uncertainty limits the clock period for time-of-flight circuits built with opto-electronic devices. we present algorithms for placing a minimum set of clock gates to restore timing in feedback loops that implement memory and for propagating delay uncertainty through a circuit graph. a mixed integer program determining the minimum feasible clock period subject to pulse width and arrival time constraints is discussed. algorithms are implemented in xhatch, a time-of- flight cad package. john r. feehrer harry f. jordan test generation for bridging faults in cmos ics based on current monitoring versus signal propagation bridge-type defects play a dominant role in state-of-the-art cmos technologies. this paper describes a combined functional and overcurrent-based test generation approach for cmos circuits, which is optionally based on layout information. comparative results for benchmark circuits are given to demonstrate the feasibility of voltage-based versus iddq-based testing. u. gläser h. t. vierhaus m. kley a. wiederhold testing of uncustomized segmented channel field programmable gate arrays this paper presents a methodology for production-time testing of (uncustomized) segmented channel field programmable gate arrays (fpgas) such as those manufactured by actel. the principles of this methodology are based on configuring the uncommitted modules (made of sequential and combinational logic circuits) of the fpga as a set of disjoint one-dimensional arrays similar to iterative logic arrays (ilas). these arrays can then be tested by establishing appropriate conditions such as constant testability (c-testability). a design approach is proposed. this approach is based on adding a small circuitry (consisting of two transistors) between each pair of uncustomized modules in a row for establishing the ila configuration as a one- dimensional unilateral array. it also requires the addition of a further primary pin. features such as number of test vectors and hardware requirements (measured by the number of additional transistors and primary input/output pins) are analyzed; it is shown that the proposed design approach requires a considerably smaller number of test vectors (a reduction of more than two orders of magnitude) and hardware overhead for the testing circuitry (a reduction of 13.6%) than the original fpga configuration of [1]. the proposed approach requires 8+2nf vectors for testing the uncommitted fpga of [1], where nf is the number of flip-flops (equal to the number of sequential modules for the fpga of [1]) in a row of the fpga. tong liu wei kang huang fabrizio lombardi milef: an efficient approach to mixed level automatic test pattern generation u. gläser h. t. vierhaus generation of high quality non-robust tests for path delay faults kwang-ting cheng hsi-chuan chen a logic design structure for lsi testability e. b. eichelberger t. w. williams universal logic modules for series-parallel functions shashidhar thakur d. f. 
wong digital mos circuit partitioning with symbolic modeling lluís ribas xirgo jordi carrabina bordoll a layout system for the random logic portion of mos lsi the random logic portion of an mos lsi chip intended mainly for a calculator is constructed of an array of mos complex gates, each composed of an mos ratioless circuit with a multi-phase clocking system, and occupies ordinarily a considerable part of chip area. in this paper, a layout system for this portion of an lsi is described, which is constructed on the basis of a set of optimization heuristics. experimental results of the layout system are also shown so as to reveal that the random logic portion can be realized in much the same area as can be done by manual layout. isao shirakawa noboru okuda takashi harada sadahiro tani hiroshi ozaki statistics on logic simulation the high costs associated with logic simulation of large vlsi based systems have led to the need for new computer architectures tailored to the simulation task. such architectures have the potential for significant speedups over standard software based logic simulators. several commercial simulation engines have been produced to satisfy needs in this area. to properly explore the space of alternative simulation architectures, data is required on the simulation process itself. this paper presents a framework for such data gathering activity by first examining possible sources of speedup in the logic simulation task, examining the sort of data needed in the design of simulation engines, and then presenting such data. the data contained in the paper includes information on subtask times found in standard discrete event simulation algorithms, event intensities, queue length distributions and simultaneous event distributions. k. f. wong m. a. franklin r. d. chamberlain b. l. shing algorithm 166: prettyprint ross bettinger software-fault detector for microprocessors for the realization of the means to develop more reliable software for the microcomputers especially in real time environments, we design a hardware tool called "software-fault detector" which detects software faults such as misaccess to an element beyond the range of an array. the implementation of the mechanism for such address range checks is generally difficult in microprocessor environment, since internal registers are not readily visible to external logics. we introduce an "incremental" key-lock protection scheme into the fixed microcomputer architecture of the intel 8080 because of its popularity and simplicity. in this scheme, a "lock" is a protection code associated with the storage cell, and a "key" is associated with access capability such as address. in each memory access to a cell, a check is made whether a key matched against the lock of the addressed cell. in this paper, we present the details of the scheme and its analysis. further, we present an actual hardware design of the software fault detector. our design methodology is to realize a dector by the use of identical microprocessor 8080s, as an independent one-board module which could be connected to the memory bus of the host system. kozo itano tetsuo ida gate-level current waveform simulation of cmos integrated circuits alessandro bogliolo luca benini giovanni de micheli bruno riccó false path exclusion in delay analysis of rtl-based datapath-controller designs c. papachristou m. 
nourani the quarter micron challenge: integrating physical and logic design raul camposano the design of a hardware recognizer for utilization in scanning operations this paper addresses the design issues and the performance evaluation of a special purpose hardware recognizer device capable of performing pattern matching and text retrieval operations. in addition, the vlsi design and the time and space complexities of the proposed organization are discussed. the structure of the system is based on the concept of the non-deterministic finite state model with a high degree of parallelism incorporated into the design. the system simulates a parallel finite state automaton by utilizing a number of identical units called "cells" which have associative processing capabilities. the proposed system improves the performance of pattern matching operations by matching several patterns in parallel. because of the similarities between the scanning process during compilation and the pattern matching operations, the proposed module can be used as a hardware scanner. the hardware scanner can be used as an interface between the user and the compiler in conventional general purpose systems as well as in language-oriented or high-level language computers. a. r. hurson s. shirazi an o(n log n) algorithm for boolean mask operations u. lauther prediction of wiring space requirements for lsi w. r. heller w. f. mikhail w. e. donath more wires and fewer luts: a design methodology for fpgas in designing fpgas, it is important to achieve a good balance between the number of logic blocks, such as look-up tables (luts), and wiring resources. it is difficult to find an optimal solution. in this paper, we present an fpga design methodology to efficiently find well-balanced fpga architectures. the method covers all aspects of fpga development, from the architecture-decision process to physical implementation. it has been used to develop a new fpga that can implement circuits that are twice as large as those implementable with the previous version but with half the number of logic blocks. this indicates that the methodology is effective in developing well-balanced fpgas. atsushi takahara toshiaki miyazaki takahiro murooka masaru katayama kazuhiro hayashi akihiro tsutsui takaki ichimori ken-nosuke fukami estimation of maximum transition counts at internal nodes in cmos vlsi circuits chin-chi teng anthony m. hill sung-mo kang a general state graph transformation framework for asynchronous synthesis bill lin chantal ykman-couvreur peter vanbekbergen a generalized channel router a "generalized" channel router operates on horizontal and vertical channels generated from an irregular cell structure, and is free of a routing grid. such a router can solve virtually any routing problem. it has two major phases: the global routing phase and the channel routing phase. this paper describes both phases as they have been implemented at ti. it concludes with a demonstration of the versatility of the router (it is used to solve the hampton court maze) and with applications of the router in ti's i2l (integrated injector logic) / stl (schottky transistor logic) automatic layout system. david w. hightower robert l. boyd new performance-driven fpga routing algorithms michael j. alexander gabriel robins memory coherence in shared virtual memory systems kai li paul hudak design of system interface modules jane s. sun robert w.
brodersen accurate estimation of combinational circuit activity huzefa mehta manjit borah robert michael owens mary jane irwin automatic vlsi layout verification xerox has instituted a set of software tools that close the loop between circuit design and mask generation of vlsi and provide checks and analysis along the way. the software includes circuit extraction, capacitance calaulation, nodal analysis and logic recognition as well as interfaces to graphic systems. the systematic method of capturing circuit designs and the software packages for analyzing mask data are described in this paper. the kinds of errors checked and the method of reporting errors are explained. this paper traces a single design from circuit description to mask making. it shows all the check points and verification tools presently available to a designer at xerox. the example used for demonstration is the design of a small component of a chip called a cell. it should be pointed out that the tools described are not restricted to cell verification. for the most part they will accommodate up to full chip designs. laurin williams storage optimization by replacing some flip-flops with latches y. lin t. wu detecting false timing paths: experiments on powerpc microprocessors richard raimi jacob abraham rectification of multiple logic design errors in multiple output circuits masahiro tomita tamotsu yamamoto fuminori sumikawa kotaro hirano a vhdl-based methodology for the design and verification of pipeline a/d converters eduardo peralías antonio j. acosta adoración rueda jose l. huertas a unified approach to multilayer over-the-cell routing sreekrishna madhwapathy naveed a. sherwani siddharth bhingarde anand panyam a hierarchical approach for the symbolic analysis of large analog circuits o. guierra e. roca f. fernandez a. rodriguez-vasquez constraints from hell; how to tell makes a good fpga (panel) the fpga development is an extraordinarily complex task, involving many people working in architecture, chip design, software, marketing and production. each tends to focus on their immediate task, and have their own measures of goodness for what they do, as well as very particular constraints relevant to their discipline. in this panel we will discuss these measures and constraints in an attempt to see how they succeed in transcending the artificial borders in each discipline. an fpga customer presents a set of impossible demands: create a chip that will do whatever he wants, and do it as cheaply as possible, run at the required speed, be available in the right package and have an easy to use software that does everything automatically with no manual intervention and no design iteration. an fpga architect tries to meet some of these demands. he must address the needs of the greatest number of customers, many of which may be conflicting. what are his metrics? early on in the architecture development, when tools may not be available, how does he make decisions? what are the constraints that force particular types of decisions? what about later? although software development and architecture go hand in hand, the software group brings additional constraints to the table: having to maintain large chunks of code in the face of major new architectures, interface and integration of external tools supplied by cad vendors (schematic capture, synthesis, simulation, etc.) what measures of goodness are applied in the software development and integration process? how do these affect architectural decisions? 
marketing often plays the role of representing the customer, by relaying his demands to the architects and other developers. what is marketing's true goal? depending on whether dilbert holds sway or not, this can be a nefarious or an inspired role. academics seeking to add knowledge to the community often create various metrics to measure the goodness of their own work such as track count, pin count, logic block count, area models, net delay, path delay, published paper count, happiness of funding agency representatives, etc. do any of these reflect reality that will convey true knowledge? operations also have their concerns with regard to yields, wafer sort, testing, packaging, and even inventory and shipping. how do these concerns define a measure of goodness for fpgas? finally, there is the long-term health of the fpga industry itself. have we made progress against gate arrays and standard cells? will the micro-processor sneak up and bite us from behind? jonathan rose sinan kaptanoglu clive mccarthy rob smith sandip vij steve taylor a design and validation system for asynchronous circuits peter vanbekbergen albert wang kurt keutzer synthesis of reusable dsp cores based on multiple behaviors wei zhao christos a. papachristou cost minimization of partitioned circuits with complex resource constraints in fpgas (poster abstract) in this paper, we formulated a new cost minimization partition problem with complex resource constraints in large fpgas and proposed a maximum matching and ilp based algorithm to solve it. in traditional partitioning methods, one starts with a random initial partition of the circuit. instead, we proposed a maximum matching based algorithm to generate a feasible initial partition efficiently. the proposed problem is formulated in ilp model. the ilp solver, lingo, is employed to find the number of fpga chips of each type to minimize the total cost. further, a new vertex ordering matching algorithm is proposed to get a smaller cut-size partition. experimental results on the mcnc lgsynth91 benchmark show that circuit partition with multiple resource types has 20% lower cost on average than that use simple resource type fpga. the proposed vertex ordering method reduces the cost by 19% compared with the method without vertex ordering considerations. yu-chung lin su-feng tseng tsai-ming hsieh paraspice: a parallel circuit simulator for shared-memory multiprocessors this paper presents a general approach to parallelizing direct method circuit simulation. the approach extracts parallel tasks at the algorithmic level for each compute-intensive module and therefore is suitable for a wide range of shared-memory multiprocessors. the implementation of the approach in spice2 resulted in a portable parallel direct circuit simulator, paraspice. the superior performance of paraspice is demonstrated on an 8-ce alliant fx/80 using a number of benchmark circuits. gung-chung yang a rate selection algorithm for quantized undithered dynamic supply voltage scaling (poster session) in this paper we propose a novel rate calculation algorithm called quantized rate selection (qrs) for quantized undithered dynamic supply voltage scaling (dsvs) systems. the algorithm monitors the total buffered workload, and where possible selects a ratevalue equal to a quantized rate value. at quantized rate values, energy dissipation of quantized dsvs systems approaches continuous voltage level dsvs systems. 
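the quantized rate selection idea above picks a rate equal to one of the available quantized levels based on the buffered workload. a minimal sketch of that selection step follows; the 4-level rate table, the workload numbers and the cycles-per-microsecond accounting are illustrative assumptions rather than the authors' experimental setup.

```python
# Sketch of quantized rate selection: given the total buffered workload (in
# cycles) and the time left before its deadline, pick the lowest available
# quantized rate that still finishes on time. Since energy grows roughly with
# the square of supply voltage, running at the lowest feasible rate saves
# energy. The 4-level rate table and the workload are illustrative assumptions.

QUANTIZED_RATES_MHZ = [50, 100, 150, 200]        # assumed 4-level system

def select_rate(buffered_cycles, time_left_us):
    """Return the lowest quantized rate (MHz) that meets the deadline, or None."""
    required_mhz = buffered_cycles / time_left_us   # cycles per microsecond = MHz
    for rate in QUANTIZED_RATES_MHZ:
        if rate >= required_mhz:
            return rate
    return None                                     # even the top rate misses

if __name__ == "__main__":
    print(select_rate(buffered_cycles=9000, time_left_us=100))    # 100 MHz
    print(select_rate(buffered_cycles=17000, time_left_us=100))   # 200 MHz
```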
our experimental work on fmidct computation using nine video sequences and a 4-level quantized undithered system shows that additional energy savings of 1.4% to 18.5% can be achieved from qrs, compared to the existing averaging technique. lama h. chandrasena michael j. liebelt iterative abstraction-based ctl model checking jae-young jang in-ho moon gary d. hachtel the fhdl pla tools peter m. maurer craig d. morency a survey of optimization techniques targeting low power vlsi circuits srinivas devadas sharad malik the automatic generation of functional test vectors for rambus designs k. d. jones j. p. privitera the ongoing evolution of scientific supercomputing john gehl design methodology for ip providers jurgen haase re-configurable computing richard swan anthony wyatt richard cant caroline langensiepen on approximation algorithms for microcode bit minimization the bit (or width) minimization problem for microprograms is known to be np- complete. motivated by its practical importance, we address the question of obtaining near-optimal solutions. two main results are presented. first, we establish a tight bound on the quality of solutions produced by algorithms which minimize the number of compatibility classes. second, we show that the bit minimization problem has a polynomial time relative approximation algorithm only if the vertex coloring problem for graphs with n nodes can be approximated to within a factor of (logn) in polynomial time. s. s. ravi d. gu bit-flipping bist hans-joachim wunderlich gundolf kiefer a hardware assisted design rule check architecture this paper describes an architecture for design rule checking that uses a small amount of special purpose hardware to achieve a significant speed improvement over conventional methods. a fixed grid raster scan algorithm is used that allows checking of 45 angled edges at a modest cost in performance. operations implemented directly in hardware include width checks, edge condition checks, boolean operations on masks, and shrinking and expansion of masks. hardware support for rasterization is also provided. software in a controlling processor handles all geometric data manipulation. this architecture should be able to check a simple set of design rules on a 300 mil square layout in one and one half minutes, if the controlling processor can provide data quickly enough. layouts have been completed for two of four custom chips used in this architecture, and one has been fabricated and proven functional. larry seiler a single phase latch for high speed gaas domino circuits (poster paper) s. nooshabadi j. a. montiel-nelson a. núñez r. sarmiento j. sosa leap frog multiplier s. mahant-shetti c. lemonds p. balsara a fault analysis method for synchronous sequential circuits in this paper we extend the use of the fault analysis method dealing with combinational circuits[1] to synchronous sequential circuits. using the iterative array model, extended forward propagation and backward implication are performed. based on the observed values at primary outputs, to deduce the actual values of each line to determine its fault status. any stuck fault can be identified, even in a circuit without any initialization sequence. a fault which is covered is tested unconditionally; thus the results obtained would not be invalidated in the presence of untested or untestable lines. examples will be given to demonstrate the ability of our method. t. y. kuo j. y. lee j. f. 
wang formally based static analysis of microcode algebraic methods have been widely used to find properties of programs, especially for use in compiler optimisation. this paper describes the use of this kind of method to prove the absence of particular errors in microcode, or to detect and locate such errors. in order to show the kind of error which may be found, we consider a number of examples. all of these have found errors in practical microcode, written for the perq computer. j. m. foster a layout verification system for analog bipolar integrated circuits a new layout verification system, called alas (a layout analysis system), is presented. its main intention is to tackle the particular verification problems of analog bipolar circuits. at present, the system comprises four main parts: a device recognition program produces a list of devices, a plot program converts these data to a layout-oriented circuit diagram, a connectivity analysis program yields device-oriented or net-oriented descriptions of the derived circuit, and a network comparison program tests the consistency of this actual circuit with the intended nominal one. a fifth program, which will calculate the parameters of the actual circuit, is under development. to derive the actual circuit from the layout, alas uses geometrical mask data only; no additional circuit information is needed. if not available from the design system, a description of the nominal circuit may be supplied manually in a spice-like input format. erich barke a comparative study of design for testability methods using high-level and gate-level descriptions vivek chickermane jaushin lee janak h. patel i/o potholes adrian cockcroft low power and high performance design challenges in future technologies we discuss key barriers to continued scaling of supply voltage and technology for microprocessors to achieve low power and high performance. in particular, we focus on short-channel effects, device parameter variations, and excessive subthreshold and gate oxide leakage as the main obstacles dictated by fundamental device physics. functionality of special circuits in the presence of high leakage, sram cell stability, bit line delay scaling, and power consumption in clocks & interconnects will be the primary design challenges in the future. soft error rate control and power delivery pose additional challenges. all of these problems are further compounded by the rapidly escalating complexity of microprocessor designs. the excessive leakage problem is particularly severe for battery-operated, high-performance microprocessors. vivek de shekhar borkar report on the 20th fault-tolerant computing symposium stanislaw j. plestrak fast and accurate estimation of floorplans in logic/high-level synthesis in many applications such as high-level synthesis (hls), logic synthesis, and possibly engineering change order (eco), we would like to get fast and accurate estimations of different performance measures of the chip, namely area, delay and power consumption. these measures cannot be estimated with high accuracy unless a fairly detailed layout of the chip, including the floorplan and routing, is available, which in turn are very costly processes in terms of running time. as we have entered the deep sub-micron era, we have to deal with designs which contain a million gates and up. not only should we consider the area occupied by the modules, but we also have to consider the wiring congestion.
in this paper we propose a cost function that is, in addition to other parameters, a function of the wiring area. we also propose a method to avoid running the floorplanning process after _every_ change in the design by considering the possible changes in advance and generating a floorplan which is _tolerant_ to these modifications, i.e., changes in the netlist do not dramatically change the performance measures of the chip. experiments are done in the high-level synthesis domain, but the method can be applied to logic synthesis and eco as well. we gain speedups of 184% on the average over the traditional estimation methods used in hls. kiarash bazargan abhishek ranjan majid sarrafzadeh low-power mapping of behavioral arrays to multiple memories p. panda n. dutt design of a low-power cmos baseband circuit for wideband cdma testbed (poster session) in this paper, the design and performance of a cmos baseband circuit for a wcdma direct conversion receiver are presented. consisting of one 5th-order anti-aliasing filter, one 4th-order tunable channel filter, and three variable gain amplifier (vga) stages, the baseband chain provides a 72db gain range with 2db gain steps and is tunable to select three different bandwidths (from 5mhz to 20mhz radio-frequency spacing). it dissipates only 18mw from a single 3v supply. the input ip3 is 10dbm, and the input-referred noise in the passband is 41nv/$\sqrt{hz}$. chunlei shi yue wu mohammed ismail a timing analysis algorithm for circuits with level-sensitive latches for a logic design with level-sensitive latches, we need to validate timing signal paths which may flush through several latches. we developed efficient algorithms based on the modified shortest and longest path method. the computational complexity of our algorithm is generally better than that of known algorithms in the literature. the implementation (cyclopss) has been applied to an industrial chip to verify the clock schedules. higher product complexity and shorter development time - continuous challenge to design and test environment jouko junkkari active memory: a new abstraction for memory-system simulation alvin r. lebeck david a. wood improving the efficiency of power simulators by input vector compaction chi-ying tsui radu marculescu diana marculescu massoud pedram microprogramming in multiprocessor data acquisition system s. d'angelo l. lisca a. proserpio g. r. sechi architecture considerations for mixed signals fpgas luigi carro identification of critical paths in circuits with level-sensitive latches timothy m. burks karem a. sakallah trevor n. mudge design verification considering manufacturing tolerances by using worst-case distances helmut e. graeb claudia u. wieser kurt j. antreich synthesis of concurrent system interface modules with automatic protocol conversion generation we describe a new high-level compiler called integral for designing system interface modules. the input is a high-level concurrent algorithmic specification that can model complex concurrent control flow, logical and arithmetic computations, abstract communication, and low-level behavior. for abstract communication between two communicating modules that obey different i/o protocols, the necessary protocol conversion behaviors are automatically synthesized using a petri net theoretic approach.
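the level-sensitive latch timing analysis above validates paths that may flush through several latches using a modified shortest and longest path method. the sketch below is a heavily simplified, single-phase fixed-point version of that idea (arrival and departure times propagated through transparent latches and checked against the closing edge); it is not the cyclopss algorithm, and the latch graph, delays and clock edges are assumptions.

```python
# Much-simplified latch timing check in the spirit of the modified
# longest-path method above: data may flush through a transparent latch, so a
# latch's departure time is max(arrival, opening edge), and latest arrivals
# are propagated to a fixed point, then checked against the closing edge.
# Single-phase clock, topology and delays are assumptions.

def latch_timing_ok(edges, latches, t_open, t_close, max_iters=50):
    """edges: list of (src, dst, delay); returns (ok, arrival_times)."""
    arrival = {l: t_open for l in latches}        # optimistic start at the opening edge
    for _ in range(max_iters):
        departure = {l: max(arrival[l], t_open) for l in latches}
        new_arrival = dict(arrival)
        for src, dst, delay in edges:
            new_arrival[dst] = max(new_arrival[dst], departure[src] + delay)
        if new_arrival == arrival:                # fixed point reached
            break
        arrival = new_arrival
    ok = all(arrival[l] <= t_close for l in latches)
    return ok, arrival

if __name__ == "__main__":
    latches = ["L1", "L2", "L3"]
    edges = [("L1", "L2", 4.0), ("L2", "L3", 3.0), ("L1", "L3", 6.5)]
    print(latch_timing_ok(edges, latches, t_open=0.0, t_close=7.0))
```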
we present a synthesis trajectory that can synthesize the necessary hardware resources, control circuitry, and protocol conversion behaviors for implementing system interface modules. bill lin steven vercauteren an adaptive timing-driven layout for high speed vlsi an adaptive timing-driven layout system, called june, has been developed. the constructive algorithm, which combines placement with the global routing, constructs a placement satisfying timing and routability constraints. the placement problem for each macro is solved hierarchically as a sequence of two optimization problems followed by an adaptive correction procedure. experimental results for industrial sea-of-gates chips confirmed effectiveness of this approach. suphachai sutanthavibul eugene shragowitz behavioral modeling of transmission gates in vhdl this paper presents a technique for describing the behavior of transmission gates (tgs) in vhdl. the concept of virtual signal is introduced into the tg's data structure to represent the nature of the connection. the model's semantics are coded in three parts: the state transition, the steady states, and the connecting protocol. simulation results indicate that the model is correct and robust. s. s. leung layla: a vlsi layout language this paper describes layla, a pascal- based hardware description language for the specification of vlsi layouts. the primary application of layla is the development of parameterized cell libraries. important features include pascal's computational power and file i/o facilities, hierarchical (cell- based) layout generation, a run-time design parameter file, an extendible layer set, external compilation, the ability to support different output formats, and the automatic generation of parameterized layla from artwork. warren e. cory accuracy management for mixed-mode digital vlsi simulation accuracy management (am) offers delay-oriented control of simulation accuracy in automated mixed-mode simulation of digital vlsi circuits. am heuristics, with little additional computation, can achieve a required level of accuracy in output timing predictions while insuring computational efficiency by using simulations with appropriate levels of accuracy throughout a circuit. these results are demonstrated through both mathematical predictions on some benchmark circuit topologies, and actual mixed-mode timing simulations on large combinational circuit benchmarks. simulation speedups of 5-10 over corresponding standard, wave-form relaxation-based circuit simulations are demonstrated. gary l. dare charles a. zukowski low power cmos design strategies matthias schoebinger tobias g. noll on static compaction of test sequences for synchronous sequential circuits irith pomeranz sudhakar m. reddy object-oriented reuse methodology for vhdl cristina barna wolfgang rosenstiel a test synthesis technique using redundant register transfers chris papachristou mikhail baklashov enhancing high-level control-flow for improved testability frank f. hsu elizabeth m. rudnick janak h. patel celtic - solving the problems of lsi design with an integrated polycell da system the major problems associated with lsi design include those of design complexity, documentation, mapping, design verification both functional and physical, physical implementation, and the manufacturing interface. 
the design aids group at burroughs, cumbernauld, have developed an integrated lsi design system, celtic, which solves these problems with a polycell design approach, direct graphics entry of logic diagrams, and automatic generation of lsi layouts. the system reduces the time required to lay out lsi's from months to weeks. g. martin j. berrie t. little d. mackay j. mcvean d. tomsett l. weston illegal state space identification for sequential circuit test generation m. h. konijnenburg j. th. van der linden a. j. van de goor designing gate arrays using a silicon compiler this paper describes a programming environment in which gate array designs can be developed. it allows the engineer to design for performance, wirability and testability by manipulating a textual description of a design. the principle features of this are a high-level language for design description, completely automatic layout, and an integrated simulator. the total package can be referred to as a silicon compiler in the gate array design style. john p. gray irene buchanan peter s. robertson functional test generation for delay faults in combinational circuits irith pomeranz sudhakar m. reddy selected aspects of component modeling adam pawlak a new ieee 1149.1 boundary scan design for the detection of delay defects sungju park taehyung kim noise-aware repeater insertion and wire-sizing for on-chip interconnect using hierarchical moment-matching chung-ping chen noel menezes methods for generalized deductive fault simulation in this paper, the authors describe methods for generalized deductive fault simulation of digital networks. by introducing the notion of unknown fault list, the propagation algorithm through gates modelized with rise and fall times are simplified with the same accuracy. n. giambiasi a. miara d. muriach technology mapping for fpgas with nonuniform pin delays and fast interconnections jason cong yean-yow hwang songjie xu parallel algorithms for the simulation of lossy transmission lines w. rissiek o. rethmeier h. holzheuer local microcode compaction techniques david landskov scott davidson bruce shriver patrick w. mallett a variable observation time method for testing delay faults test methodologies for delay faults usually observe output patterns at a single observation time, and the same observation time is used for all faults in the circuit under test. in this paper we show that use of a single observation time is not advantageous for testing delay faults, and we are able to show that the detection threshold can be dramatically improved by using a testing methodology that allows variable, fault-dependent and output-dependent observation times. a "waveform-type" simulation method is used for calculating detection thresholds for definitely detectable faults. statistical distributions of delay fault detection thresholds are presented for ten benchmark circuits. wei-wei mao michael d. ciletti do our low-power tools have enough horse power? (panel session) (title only) giovanni de micheli tony correale pietro erratico srini raghvendra hugo de man jerry frankil vivek tiwari efficient switching activity simulation under a real delay model using a bitparallel approach m. buhler m. papesch k. kapp u. g. baitinger reducing bdd size by exploiting functional dependencies alan j. hu david l. dill testing functional faults in vlsi functional testing has become increasingly important due to the advent of vlsi technology. 
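the generalized deductive fault simulation entry above propagates fault lists through gates with set algebra. the fragment below shows the classical single-stuck-at rule for a 2-input and gate, which is the core operation such simulators build on; the unknown fault list and the rise/fall-time modeling described in that entry are not reproduced, and the fault names are illustrative.

```python
# Sketch of deductive fault-list propagation for a 2-input AND gate under the
# single-stuck-at model. L_a and L_b are the sets of faults that flip inputs
# a and b; the rule returns the faults observable at the gate output, plus
# the output's own stuck-at fault.

def and_gate_fault_list(a_val, b_val, L_a, L_b, out_name):
    controlling = [L for v, L in ((a_val, L_a), (b_val, L_b)) if v == 0]
    noncontrolling = [L for v, L in ((a_val, L_a), (b_val, L_b)) if v == 1]
    if not controlling:                      # all inputs non-controlling, output is 1
        L_out = set().union(*noncontrolling)
        own_fault = f"{out_name} stuck-at-0"
    else:                                    # output is 0: a fault must flip every
        L_out = set.intersection(*controlling)   # controlling input without flipping
        for L in noncontrolling:                 # any non-controlling one
            L_out -= L
        own_fault = f"{out_name} stuck-at-1"
    return L_out | {own_fault}

if __name__ == "__main__":
    # a=1, b=0: only faults that flip b (and do not also flip a) propagate
    print(and_gate_fault_list(1, 0, {"f1", "f2"}, {"f2", "f3"}, "g"))
```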
this paper presents a systematic procedure for generating tests for detecting functional faults in digital systems described by the register transfer language. procedures for testing register decoding, instruction decoding, data transfer, data storage and data manipulation function faults in microprocessors are described step-by-step. examples are given to illustrate the procedures. yinghua min stephen y.h. su heuristic tradeoffs between latency and energy consumption in register assignment one of the challenging tasks in code generation for embedded systems is register allocation and assignment, wherein one decides on the placement and lifetimes of variables in registers. when there are more live variables than registers, some variables need to be spilled to memory and restored later. in this paper we propose a policy that minimizes the number of spills --- which is critical for portable embedded systems since it leads to a decrease in energy consumption. we argue however, that schedules with a minimal number of spills do not necessarily have minimum latency. accordingly, we propose a class of policies that explore tradeoffs between assignments leading to schedules with low latency versus those leading to low energy consumption and show how to tune them to particular datapath characteristics. based on experimental results we propose a criterion to select a register assignment policy that for 99% of the cases we considered minimizes both latency and energy consumption associated with spills to memory. r. anand m. jacome g. de veciana hierarchical physical design methodology for multi-million gate chips in this paper, a design methodology for the implementation of multi- million gate system-on-chip designs is described. wei-jin dai genac: an automatic cell synthesis tool we present a solution to the layout problem of cell synthesis, which achieves multiple optimization objectives. in particular, we propose a new hierarchical method for fast and optimal placement of the transistors in a cell. the method minimizes the number of diffusion breaks, and allows a further pursuit of a secondary optimization objective, such as routing channel density. for cells with non- uniform transistor widths, the transistors are folded in such a way as to optimize a cost function which is a good approximation to the area of the final(compacted) layout of the cell. we also analyze the characteristic nature of routing in cell generation problem, and design an algorithm for doing routing over the transistors; such routing reduces the routing channel density in the central region of the cell. the routing in the central region is completed by a new channel router at, or near, the channel density. the algorithms are implemented in a system call genac. the input to genac is a transistor net list, describing the connectivity as well as the size and type of each transistor. the output is a synthesized layout of the cell in symbolic language. c.-l. ong j.-t. li c.-y. lo control schemes for vlsi microprocessors as microprocessors move into the vlsi era, the number and complexity of the functions they are expected to perform increases beyond the capability of conventional control schemes. this paper discusses the fundamental requirements of such a control system, and explores the possibility of using external microcode control. gary r. burke two dimensional codes for low power m. stan w. burleson vhdl & verilog compared & contrasted - plus modeled example written in vhdl, verilog and c douglas j. 
smith data prefetch mechanisms the expanding gap between microprocessor and dram performance has necessitated the use of increasingly aggressive techniques designed to reduce or hide the latency of main memory access. although large cache hierarchies have proven to be effective in reducing this latency for the most frequently used data, it is still not uncommon for many programs to spend more than half their run times stalled on memory requests. data prefetching has been proposed as a technique for hiding the access latency of data referencing patterns that defeat caching strategies. rather than waiting for a cache miss to initiate a memory fetch, data prefetching anticipates such misses and issues a fetch to the memory system in advance of the actual memory reference. to be effective, prefetching must be implemented in such a way that prefetches are timely, useful, and introduce little overhead. secondary effects such as cache pollution and increased memory bandwidth requirements must also be taken into consideration. despite these obstacles, prefetching has the potential to significantly improve overall program execution time by overlapping computation with memory accesses. prefetching strategies are diverse, and no single strategy has yet been proposed that provides optimal performance. the following survey examines several alternative approaches, and discusses the design tradeoffs involved when implementing a data prefetch strategy. steven p. vanderwiel david j. lilja transistor reordering for low power cmos gates using an sp-bdd representation alexey l. glebov david blaauw larry g. jones the generalized boundary curve - a common method for automatic nominal design centering of analog circuits r. schwencker f. schenkel h. graeb k. antreich understanding the differences between value prediction and instruction reuse avinash sodani gurindar s. sohi a new method for verifying sequential circuits we present an algorithm for deciding whether two given synchronous, logic- level sequential circuits are functionally equivalent. our approach involves a formal symbolic comparison, as opposed to the (often very time-consuming) generation and simulation of numerous test vector sequences. the given circuits need not have the same number of states, nor must they have the same number of inputs --- for example, one circuit may be a parallel implementation and the other serial. although this is an intractable problem in general, we believe that the method is useful on a broad class of practical circuits; our computational experience thus far is encouraging. kenneth j. supowit steven j. friedman ic test using the energy consumption ratio wanli jiang bapiraju vinnakota synthesis for mixed cmos/ptl logic (poster paper) congguang yang maciej ciesielski the inversion algorithm for digital simulation the inversion algorithm is an event-driven algorithm, whose performance rivals or exceeds that of levelized compiled code simulation, even at activity rates of 50% or more. the inversion algorithm has several unique features, the most remarkable of which is the size of the run-time code. the basic algorithm can be implemented using no more than a page of run-time code, although in practice it is more efficient to provide several different variations of the basic algorithm. the run-time code is independent of the circuit under test, so the algorithm can be implemented either as a compiled code or an interpreted simulator with little variation in performance. 
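the data prefetching survey above stresses that prefetches must be timely and useful. one widely studied hardware scheme in that space is stride prefetching driven by a reference prediction table; the sketch below is a generic, simplified version of such a table (its organization, confidence rule and the address trace are assumptions, not a description of any particular proposal in the survey).

```python
# Minimal model of a stride prefetcher of the kind surveyed above: a
# reference prediction table keyed by the load's pc remembers the last
# address and stride; once the same stride has been confirmed, the next
# address is prefetched. Table organization and the trace are assumptions.

class StridePrefetcher:
    def __init__(self):
        self.table = {}                      # pc -> (last_addr, stride, confident)

    def access(self, pc, addr):
        """Record a load and return a prefetch address, or None."""
        last_addr, stride, confident = self.table.get(pc, (None, 0, False))
        prefetch = None
        if last_addr is not None:
            new_stride = addr - last_addr
            if new_stride == stride and confident:
                prefetch = addr + stride     # steady stride: fetch one block ahead
            self.table[pc] = (addr, new_stride, new_stride == stride)
        else:
            self.table[pc] = (addr, 0, False)
        return prefetch

if __name__ == "__main__":
    pf = StridePrefetcher()
    for addr in (1000, 1064, 1128, 1192):    # pc 0x40 striding by 64 bytes
        print(pf.access(0x40, addr))         # prints None, None, None, 1256
```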
because of the small size of the run-time code, the run-time portions of the inversion algorithm can be implemented in assembly language for peak efficiency, and still be retargeted for new platforms with little effort. peter m. maurer a model of shared dasd and multipathing this paper presents a model of an i/o subsystem in which devices can be accessed from multiple cpus and/or via alternative channel and control unit paths. the model estimates access response times, given access rates for all cpu-device combinations. the systems treated are those having the ibm system/370 architecture, with each path consisting of a cpu, channel, control unit, head of string, and device with rotational position sensing. the path selected for an access at seek initiation time remains in effect for the entire channel program. the computation proceeds in three stages: first, the feasibility of the prescribed access rates is determined by solving a linear programming problem. second, the splitting of access rates among the available paths is determined so as to satisfy the following principle: the probability of selecting a given path is proportional to the probability that the path is free. this condition leads to a set of nonlinear equations, which can be solved by means of the newton-raphson method. third, the rps hit probability, i.e. the probability that the path is free when the device is ready to transmit, is computed in the following manner: from the point of view of the selected path, the system may be viewed as being in one of 25 possible states. there are twelve different subsets of states whose aggregate probabilities can be computed from the (by now) known flow rates over the various paths. the maximum entropy principle is used to calculate the unknown state probabilities, with the known aggregate probabilities acting as constraints. the required rps hit probability can be computed easily once the state probabilities have been determined. explicit formulas are given for all these quantities. empirically derived formulas are used to compute the rps miss probability on subsequent revolutions, given the probability on the first revolution. the model is validated against a simulator, showing excellent agreement for systems with path utilizations up to 50 percent. the model is also validated against measurements from a real three- cpu system with 31 shared devices. in this validation, the i/o subsystem model acts as a common submodel to three copies of a system model, one for each cpu. estimated end-user transaction response times show excellent agreement with the live measurements. yonathan bard gpcad: a tool for cmos op-amp synthesis maria del mar hershenson stephen p. boyd thomas h. lee evaluation of a high performance code compression method compressing the instructions of an embedded program is important for cost- sensitive low- power control-oriented embedded computing. a number of compression schemes have been proposed to reduce program size. however, the increased instruction density has an accompanying performance cost because the instructions must be decompressed before execution. in this paper, we investigate the performance penalty of a hardware-managed code compression algorithm recently introduced in ibm's powerpc 405. this scheme is the first to combine many previously proposed code compression techniques, making it an ideal candidate for study. we find that code compression with appropriate hardware optimizations does not have to incur much performance loss. 
furthermore, our studies show this holds for architectures with a wide range of memory configurations and issue widths. surprisingly, we find that a performance increase over native code is achievable in many situations. charles lefurgy eva piccininni trevor mudge 1991 international workshop on formal methods in vlsi design (trip report) zheng zhu synchronous up/down binary counter for lut fpgas with counting frequency independent of counter size alexandre f. tenca miloš d. ercegovac logic simulation for lsi this paper describes the logic simulation system and the design verification method for logic design, timing analysis, and testing for vlsi. the integrity of test and network data on a logic design stage must be kept in lsi testing in the final verification stage. in dealing with consistency, emphasis is placed on the discrepancy between the real time domain on a simulator and a testing time domain on an lsi tester. the logic simulation system (block integrator and analyzer: binaly) handles a hierarchical structure, a detailed timing model, and a timing alignment method for a testing time domain. kazuyuki hirakawa noboru shiraki michiaki muraoka reconfigurable processing for robust navigation and control (abstract) jeanette f. arrigo kevin j. page paul m. chau n. c. tien design-flow and synthesis for asics: a case study massimo bombana patrizia cavalloro salvatore conigliaro roger b. hughes gerry musgrave giuseppe zaza a new look at logic synthesis j. a. darringer w. h. jr. joyner smap: heterogeneous technology mapping for area reduction in fpgas with embedded memory arrays it has become clear that large embedded configurable memory arrays will be essential in future fpgas. embedded arrays provide high-density high-speed implementations of the storage parts of circuits. unfortunately, they require the fpga vendor to partition the device into memory and logic resources at manufacture-time. this leads to a waste of chip area for customers that do not use all of the storage provided. this chip area need not be wasted, and can in fact be used very efficiently, if the arrays are configured as large multi- output roms, and used to implement logic. in order to efficiently use the embedded arrays in this way, a technology mapping algorithm that identifies parts of circuits that can be efficiently mapped to an embedded array is required. in this paper, we describe such an algorithm. the new tool, called smap, packs as much circuit information as possible into the available memory arrays, and maps the rest of the circuit into four-input lookup-tables. on a set of 29 sequential and combinational benchmarks, the tool is able to map, on average, 60 4-luts into a single 2-kbit memory array. if there are 16 arrays available, it can map, on average, 358 4-luts to the 16 arrays. steven j. e. wilton communication refinement in video systems on chip j.-y. brunel e. a. de kock w. m. kruijtzer h. j. h. n. kenter w. j. m. smits optimized rapid prototyping for real-time embedded heterogeneous multiprocessors t. grandpierre c. lavarenne y. sorel memory consistency and event ordering in scalable shared-memory multiprocessors scalable shared-memory multiprocessors distribute memory among the processors and use scalable interconnection networks to provide high bandwidth and low latency communication. in addition, memory accesses are cached, buffered, and pipelined to bridge the gap between the slow shared memory and the fast processors. 
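smap, described above, uses embedded memory arrays as large multi-output roms to implement logic. the fragment below shows the basic step that makes this possible: exhaustively evaluating a k-input, m-output cone and storing the results as rom contents indexed by the input pattern. the example cone is an assumption, and smap's cone-selection and packing heuristics are not shown.

```python
# Sketch of mapping a logic cone into an embedded memory array configured as
# a multi-output ROM: evaluate the cone for every input pattern and record
# the output word at the corresponding address. The example cone (a 3-input
# majority/parity pair) is an assumption.

from itertools import product

def cone_to_rom(cone_fn, num_inputs):
    """cone_fn maps a tuple of input bits to a tuple of output bits."""
    rom = []
    for bits in product((0, 1), repeat=num_inputs):
        rom.append(cone_fn(bits))            # address = input pattern
    return rom                               # 2**num_inputs words

def majority_and_parity(bits):
    a, b, c = bits
    return (int(a + b + c >= 2), a ^ b ^ c)

if __name__ == "__main__":
    rom = cone_to_rom(majority_and_parity, 3)
    for addr, word in enumerate(rom):
        print(f"addr {addr:03b} -> outputs {word}")
```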
unless carefully controlled, such architectural optimizations can cause memory accesses to be executed in an order different from what the programmer expects. the set of allowable memory access orderings forms the memory consistency model or event ordering model for an architecture. this paper introduces a new model of memory consistency, called release consistency, that allows for more buffering and pipelining than previously proposed models. a framework for classifying shared accesses and reasoning about event ordering is developed. the release consistency model is shown to be equivalent to the sequential consistency model for parallel programs with sufficient synchronization. possible performance gains from the less strict constraints of the release consistency model are explored. finally, practical implementation issues are discussed, concentrating on issues relevant to scalable architectures. kourosh gharachorloo daniel lenoski james laudon phillip gibbons anoop gupta john hennessy a benchmark suite for evaluating configurable computing systems - status, reflections, and future directions this paper presents a benchmark suite for evaluating a configurable computing system's infrastructure, both tools and architecture. a novel aspect of this work is the use of stressmarks, benchmarks that focus on a specific characteristic or property of interest. this is in contrast to traditional approaches that utilize functional benchmarks, benchmarks that emphasize measuring end-to-end execution time. this suite can be used to assess a broad range of configurable computing systems, including single configurable devices, multiple configurable devices, and mixed architectures, such as fixed-plus-variable devices and hybrid systems. in addition, aspects that are particularly relevant to the domain of configurable computing, such as run- time reconfiguration and variable precision arithmetic, are considered. the paper provides an overview of the benchmark suite, presents some implementation results on an annapolis micro systems wildforce board, reflects on the benchmark suite developed, and briefly describes future work. s. kumar l. pires s. ponnuswamy c. nanavati j. golusky m. vojta s. wadi d. pandalai h. spaanenberg path sensitization of combinational circuits and its impact on clocking of sequential systems r. peset llopis optimal design of synchronous circuits using software pipelining techniques we present a method to optimize clocked circuits by relocating and changing the time of activation of registers to maximize the throughput. our method is based on a modulo scheduling algorithm for software pipelining, instead of retiming. it optimizes the circuit without the constraint on the clock phases that retiming has, which permits to always achieve the optimal clock period. the two methods have the same overall time complexity, but we avoid the computation of all pair-shortest paths, which is a heavy burden regarding both space and time. from the optimal schedule found, registers are placed in the circuit without looking at where the original registers were. the resulting circuit is a multi-phase clocked circuit, where all the clocks have the same period and the phases are automatically determined by the algorithm. edge- triggered flip-flops are used where the combinational delays exactly match that period, whereas level-sensitive latches are used elsewhere, improving the area occupied by the circuit. 
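the software-pipelining formulation above always reaches the optimal clock period of a cyclic circuit, which is set by the maximum cycle ratio: the largest ratio of combinational delay to register count around any cycle. the sketch below computes that bound by binary search on the period with a negative-cycle test; the circuit graph is an assumption, and the modulo-scheduling and latch-placement steps of the paper are not modeled.

```python
# Compute the lower bound on the clock period of a cyclic circuit graph: the
# maximum over all cycles of (combinational delay) / (register count). A
# candidate period T is feasible iff no cycle has delay exceeding T times its
# register count, i.e. no negative cycle under edge weight T*regs - delay.

def has_negative_cycle(nodes, edges, weight):
    dist = {n: 0.0 for n in nodes}               # implicit source at distance 0
    for _ in range(len(nodes)):                  # Bellman-Ford relaxation rounds
        for u, v, e in edges:
            if dist[u] + weight(e) < dist[v]:
                dist[v] = dist[u] + weight(e)
    return any(dist[u] + weight(e) < dist[v] - 1e-9 for u, v, e in edges)

def min_clock_period(nodes, edges, lo=0.0, hi=100.0, iters=40):
    """edges: (u, v, (delay, registers)); returns the maximum cycle ratio."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        infeasible = has_negative_cycle(nodes, edges,
                                        lambda e: mid * e[1] - e[0])
        lo, hi = (mid, hi) if infeasible else (lo, mid)
    return hi

if __name__ == "__main__":
    nodes = ["a", "b", "c"]
    edges = [("a", "b", (3.0, 1)), ("b", "c", (4.0, 1)),
             ("c", "a", (5.0, 1)), ("b", "a", (2.0, 1))]
    print(round(min_clock_period(nodes, edges), 3))   # cycle a-b-c-a: 12/3 = 4.0
```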
experiments on existing and newly developed benchmarks show a substantial performance improvement compared to previously published work. françois r. boyer el mostapha aboulhamid yvon savaria michel boyer big science versus little science - do you have to build it? (panel session) research can be called big science if projects have numerous researchers, large funding, significant infrastructure, and plans to build complex tools or prototypes. most experimental physicists practice big science, as do computer architects who build prototype software- hardware systems. conversely, research can be called little science when projects have few researchers, modest funding, little special infrastructure, and no plans to build complex tools or prototypes. most mathematicians practice little science, as do computer architects who study aspects of a design and build confidence in their proposals with models or simulations. a very simple model contrasting the two approaches is illustrated below, where money flows from governments (gov) to academia (edu) which produce ideas for industry (com) to make better products for all (pop). a key difference is whether governments fund a few, large research projects or many, smaller ones. the goal of this session is explore whether, when and why universities should do big or little science. panelists may discuss why big science wastes money, exploits graduate students and makes research too short range. they may argue that little science produces results that are too deep and narrow, oblivious to global systems issues, not properly validated, and too out of touch with reality to ever be practical. panelists may also find some advantages to both kinds of science. panelists include members from government, academia and industry, who are also members of the general population. to keep the discussion lively, nothing said necessarily represents the opinion of any government agency, university or corporation with whom panelists are affiliated. david r. ditzel john l. hennessy bernie rudin alan jay smith stephen l. squires zeke zalcstein paras: system-level concurrent partitioning and scheduling wing hang wong rajiv jain multilevel k-way hypergraph partitioning george karypis vipin kumar vhdl quality: synthesizability, complexity and efficiency evaluation m. mastretti non-linear components for mixed circuits analog front-end luigi carro adão souza marcelo negreiros gabriel jahn denis franco buffer insertion and sizing under process variations for low power clock distribution joe g. xi wayne w. m. dai unidirectional error correction/detection for vlsi memory unidirectional error protecting codes for vlsi memories are described. after developing the theory of unidirectional error detection and correction, optimal systematic codes capable of detecting unidirectional errors are presented. efficient single error correcting and d(d≥2)-unidirectional error detecting codes are also discussed. bella bose vhdl 1076 - 1992 languages changes andrew guyler futurebus+ as an i/o bus: profile b the ieee futurebus+ is a very fast (3gb/sec.), industry standard backplane bus specification for computer systems. futurebus+ was designed independent of any cpu architecture so it is truly open. with this open architecture futurebus+ can be applied to many different computing applications. profile b is a subset of the ieee 896 futurebus+ standard and targets high performance, general purpose computer i/o applications. 
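the unidirectional-error entry above concerns codes that detect any number of errors occurring in a single direction. a classical systematic example of such a code is the berger code, whose check symbol is the count of zeros in the data word; the sketch below illustrates that one construction only, not the paper's optimal codes or its d-unidirectional-error-detecting variants.

```python
# Berger code illustration: the check symbol is the binary count of zeros in
# the data word. Any unidirectional error (all 1->0 or all 0->1) moves the
# data's zero count and the stored check value in opposite directions, so the
# error is always detected. This is a generic classical construction.

def berger_encode(data_bits):
    zeros = data_bits.count(0)
    width = max(1, len(data_bits)).bit_length()          # enough bits for the count
    check = [(zeros >> i) & 1 for i in reversed(range(width))]
    return data_bits + check

def berger_check(codeword, data_len):
    data, check = codeword[:data_len], codeword[data_len:]
    return data.count(0) == int("".join(map(str, check)), 2)

if __name__ == "__main__":
    word = berger_encode([1, 0, 1, 1, 0, 0, 1, 0])
    print(berger_check(word, 8))                          # True
    corrupted = [0 if i in (2, 6) else b for i, b in enumerate(word)]
    print(berger_check(corrupted, 8))                     # two 1->0 errors: False
```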
this paper describes how and why the functional, electrical, mechanical and environmental characteristics were chosen. barbara p. aichinger improving branch predictors by correlating on data values branch predictors typically use combinations of branch pc bits and branch histories to make predictions. recent improvements in branch predictors have come from reducing the effect of interference, i.e. multiple branches mapping to the same table entries. in contrast, the branch difference predictor (bdp) uses data values as additional information to improve the accuracy of conditional branch predictors. the bdp maintains a history of differences between branch source register operands, and feeds these into the prediction process. an important component of the bdp is a rare event predictor (rep) which reduces learning time and table interference. an rep is a cache-like structure designed to store patterns whose predictions differ from the norm. initially, ideal interference-free predictors are evaluated to determine how data values improve correlation. next, execution driven simulations of complete designs realize this potential. the bdp reduces the misprediction rate of five spec95 integer benchmarks by up to 33% compared to gshare and by up to 15% compared to bi-mode predictors. timothy h. heil zak smith j. e. smith high-throughput and low-power dsp using clocked-cmos circuitry manjit borah robert michael owens mary jane irwin libra - a library-independent framework for post-layout performance optimization in this paper we present a post-layout timing optimization framework which (1) is library-independent such that it can take the logic-optimized verilog file as its input netlist, (2) provides a prototype interface which can communicate with any vendor's physical design tools to obtain the accurate timing, topological and physical information, and perform eco placement and routing, and (3) has fast and powerful rewiring routines that offer an extra solution space beyond the existing physical-level optimization methodologies. we conduct the post-layout performance optimization experiments on some benchmark circuits which are originally optimized by synopsys's design compiler, (with high timing effort), followed by avant!'s timing-driven place-and-route tool, apollo. the optimization strategies we used include rewiring, buffer insertion, and cell sizing. to study the trade-offs between these transformations and the benefits of mixing them together, they are applied both separately and closely integrated by some heuristic cost functions. the result shows that by using all these strategies, post-layout timing optimization can further achieve up to 23.9% of improvement after global routing. we also discuss the pros and cons for our proposed procedures applied after global routing versus after detail routing. some factors that can affect the quality of rewiring such as level of recursive learning and type of rewiring will also be addressed. ric chung-yang huang yucheng wang kwang-ting chen allocation of fifo structures in rtl data paths along with functional units, storage and interconnects contribute significantly to data path costs. this paper addresses the issue of reducing the costs of storage and interconnect. in a post-datapath synthesis phase, one or more queues can be allocated and variables bound to it, with the goal of reducing storage and interconnect costs. further, in contrast to earlier work, we support "irregular" cdfgs and multicycle functional units for queue synthesis. 
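as an illustration of the branch difference predictor entry above: the idea of folding a history of operand-difference outcomes into the prediction index can be sketched as below. the table size, the hashing, and the use of difference signs are illustrative choices of this sketch, not the bdp or rep design from the paper.

```python
class BranchDiffPredictor:
    """toy predictor: per-branch history of operand-difference signs, hashed
    with the branch pc into a table of 2-bit saturating counters."""
    def __init__(self, table_bits=12, hist_len=8):
        self.table = [1] * (1 << table_bits)     # weakly not-taken
        self.mask = (1 << table_bits) - 1
        self.hist = {}                           # pc -> packed sign history
        self.hist_len = hist_len

    def _index(self, pc):
        return (pc ^ self.hist.get(pc, 0)) & self.mask

    def predict(self, pc):
        return self.table[self._index(pc)] >= 2  # predict taken?

    def update(self, pc, taken, src1, src2):
        i = self._index(pc)
        c = self.table[i]
        self.table[i] = min(3, c + 1) if taken else max(0, c - 1)
        sign = 1 if (src1 - src2) >= 0 else 0    # record sign of operand difference
        h = ((self.hist.get(pc, 0) << 1) | sign) & ((1 << self.hist_len) - 1)
        self.hist[pc] = h

# usage: a loop branch "i < n" becomes predictable once the difference history repeats
p = BranchDiffPredictor()
correct = 0
for trial in range(3):
    for i in range(100):
        taken = i < 99                           # back-edge taken except at loop exit
        correct += p.predict(0x4000) == taken
        p.update(0x4000, taken, i, 99)
print("accuracy:", correct / 300)
```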
initial results on hls benchmark examples have been encouraging, and show the potential of using queue synthesis to reduce datapath cost. a novel feature of our work is the formulation of the problem for a variety of fifo structures with their own "queueing" criteria. m. balakrishnan heman khanna multi-stack optimization for data-path chip (microprocessor) layout as data-path chips such as microprocessors and risc chips become more complex, multiple stacks of data-path macros are required to implement the entire data-path. the physical decomposition of a chip into a single data-path stack and control logic implemented as random logic, as was done in the past, is not always feasible. this paper describes a special multi-stack structure, optimization techniques and algorithms to partition, place and wire the data-path macros in the form of the multi-stack structure, taking into account the connectivity of the entire chip logic (data-path, control logic, chip drivers, on-chip memory). the overall objective is: (1) to fit the circuits within the chip, (2) to ensure data-path wireability, including stack to random logic wireability, and (3) to minimize wire lengths for wireability and timing. a tool for automatic multi-stack optimization has been implemented and applied successfully to lay out some high density data-path chips. w. k. luk a. a. dean circuit partitioning for dynamically reconfigurable fpgas huiqun liu d. f. wong vlsi, computer science, and synergetic research "the lsi and vlsi revolution" has probably become one of the most frequently mentioned phrases in discussions concerning technology trends. microprocessors have entered most of our homes, offices and laboratories. students in universities have begun to learn chip design, and prototype chips are routinely being designed and fabricated for various research projects. what will be the impacts of vlsi on us as computer scientists and on the discipline of computer science in general? i believe that the impacts will be profound and will be much greater than most of us can imagine today. in this talk i will present a simplified view of this matter. in particular, i will suggest a style of synergetic system research as a model to explore the potential of vlsi, an area where impressive advances have been made in recent years and are expected to continue in the foreseeable future. h. t. kung a comparative study of power efficient sram designs _this paper investigates the effectiveness of combinations of different low power sram circuit design techniques. the divided bit line (dbl), pulsed word line (pwl) and isolated bit line (ibl) strategies have been implemented in sram designs of various sizes and evaluated using 0.35-micron technology and 3.3v vdd at 100mhz. different decoder structures have been investigated for their power efficiency as well. it is observed that the power reduces by 29%, 32% and 52% over an unoptimized sram design when (pwl+ibl), (pwl+dbl) and (pwl+ibl+dbl) are implemented in a 256*2 size sram respectively._ jeyran hezavei n. vijaykrishnan m. j. irwin an evaluation of redundant arrays of disks using an amdahl 5890 recently we presented several disk array architectures designed to increase the data rate and i/o rate of supercomputing applications, transaction processing, and file systems [patterson 88]. in this paper we present a hardware performance measurement of two of these architectures, mirroring and rotated parity.
we see how throughput for these two architectures is affected by response time requirements, request sizes, and read to write ratios. we find that for applications with large accesses, such as many supercomputing applications, a rotated parity disk array far outperforms traditional mirroring architecture. for applications dominated by small accesses, such as transaction processing, mirroring architectures have higher performance per disk than rotated parity architectures. peter m. chen garth a. gibson randy h. katz david a. patterson the second generation motis mixed-mode simulator this paper describes the second generation motis mixed-mode simulator. in particular, it extends the current modeling capabilities to include resistors, floating capacitors, and bidirectional transmission gates. it employs a relaxation algorithm with local time-step control for timing simulation, and a switch level approach for unit delay simulation. it provides logic and timing verification for general mos circuits in a mixed-mode environment. the new simulator is being used for production chips, and it is more accurate, flexible, and efficient than the existing motis mixed-mode simulator. c. f. chen c-y lo h. n. nham prasad subramaniam an o(nlogm) algorithm for vlsi design rule checking this paper describes a new variant of the segment tree approach for vlsi design rule checking. the best known algorithms to date for flat vlsi design rule checking require o(nlogn) expected time and o(√n) expected space, where n is the total number of edges on a mask layer of the chip. we present a new algorithm that can run in o(nlogm) expected time, where m is the maximum feature size on a particular mask layer. since the maximum feature size must be bounded by the height of a chip, i.e. m ≤ o(√n), the new algorithm is adaptively more efficient than o(nlogn). for layers such as diffusion or contact windows where the maximum feature size is independent of chip size, i.e. m = o(1), the new algorithm runs in o(n) expected time, a definite improvement. the improved time efficiency is achieved without sacrificing o(√n) expected space complexity. c. r. bonapace c.-y. lo a data-flow driven resource allocation in a retargetable microcode compiler a method for global resource allocation is described, which minimizes data movements and optimizes the use of resources like special purpose registers and functional units in complicated bus structures. the algorithm can deal with arbitrary flow graphs and hierarchies of nonrecursive procedures. it is based on a thorough data flow analysis of the source program and a description of the target architecture. the method has been implemented in a retargetable compiler with front-ends for the system implementation languages c and cdl2. h. feuerhahn logic simulation using networks of state machines peter m. maurer synthesis of low-power selectively-clocked systems from high-level specification we propose a technique for synthesizing low-power systems from behavioral specifications. we analyze the control flow of the specification model to detect mutually exclusive sections of the computation. a selectively-clocked interconnection of interacting fsms is automatically generated and optimized, where each fsm controls the execution of one section of computation. only one of the interacting fsms is active for a high fraction of the operation time, while the others are idle and their clocks are stopped. periodically, the active machine releases the control of the system to another fsm and stops.
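the selectively-clocked scheme just described (one active fsm at a time, idle machines with stopped clocks) can be illustrated behaviourally. the sketch below only counts delivered clock edges against a monolithic design; it stands in for the synthesized gating logic, which the paper derives automatically, and all names and step counts are made up.

```python
class GatedFSM:
    """one mutually exclusive section of the computation, wrapped in its own
    fsm; its 'clock' is applied only while the controller marks it active
    (a behavioural stand-in for gating the clock of idle machines)."""
    def __init__(self, name, steps):
        self.name, self.steps, self.pc, self.ticks = name, steps, 0, 0

    def tick(self):
        self.ticks += 1                      # a clock edge actually delivered
        self.pc += 1
        return self.pc >= self.steps         # done -> hand control to the next fsm

def run(sections):
    """only the active fsm receives clock edges; the others sit with their
    clocks stopped, which is where the power saving comes from."""
    ungated_edges = 0
    active = 0
    while active < len(sections):
        done = sections[active].tick()
        ungated_edges += len(sections)       # a monolithic design clocks everything
        if done:
            active += 1
    gated_edges = sum(f.ticks for f in sections)
    return gated_edges, ungated_edges

fsms = [GatedFSM("init", 20), GatedFSM("filter", 100), GatedFSM("output", 30)]
gated, ungated = run(fsms)
print("clock edges delivered: gated=%d vs monolithic=%d" % (gated, ungated))
```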
our interacting fsm implementation achieves consistently lower power dissipation than the functionally equivalent monolithic implementation. on average, 37% power savings and 12% speedup are obtained, despite a 30% area overhead. l. benini g. de micheli an on chip adc test structure yun-che wen kuen-jong lee profile-driven code execution for low power dissipation (poster session) this paper proposes a novel technique for power-performance trade-off based on profile-driven code execution. specifically, we show that there is an optimal level of parallelism for energy consumption and propose a compiler-assisted technique for code annotation that can be used at run-time to adaptively trade-off power and performance. as shown by experimental results, our approach is up to 23% better than clock throttling and is as efficient as voltage scaling (up to 10% better in some cases). the technique proposed in this paper can be used by an acpi-compliant power manager for prolonging battery life or as a passive cooling feature for thermal management. diana marculescu a mixed nodal-mesh formulation for efficient extraction and passive reduced-order modeling of 3d interconnects as vlsi circuit speeds have increased, reliable chip and system design can no longer be performed without accurate three-dimensional interconnect models. in this paper, we describe an integral equation approach to modeling the impedance of interconnect structures accounting for both the charge accumulation on the surface of conductors and the current traveling in their interior. our formulation, based on a combination of nodal and mesh analysis, has the required properties to be combined with model order reduction techniques to generate accurate and guaranteed passive low order interconnect models for efficient inclusion in standard circuit simulators. furthermore, the formulation is shown to be more flexible and efficient than previously reported methods. nuno marques mattan kamon jacob white l. miguel silveira an improved switch-level simulator for mos circuits the notion of a well-designed circuit is introduced, and improvements are presented to the earlier switch-level simulators of bryant [b80] and lipton, sedgewick, and valdes [lsv81], so that race conditions can be detected, and undefined values can be handled in a clean and simple way in these circuits. vijaya ramachandran cache memories: a tutorial and survey of current research directions the tutorial presents a unified nomenclature for the description of cache memory systems. using this foundation, examples of existing cache memory systems are detailed and compared. the second presentation discusses a programmable cache memory architecture. in this architecture, intelligence is added to the cache to direct the activity between the cache and the main memory. also to be described are heuristics for programming the cache which allow the additional power to be exploited. the third presentation deals with innovations involving systems where the cache memory is not used as a simple high speed buffer for main memory. a straightforward example of this appears in ibm's translation lookaside buffer on 370s with dynamic address translation hardware. other examples to be described include a cache system for the activation stack of a block structured language, a cache system to store subexpressions for an expression oriented architecture, and a multiprocessor architecture that relies on two levels of cache. robert p. cook cathy j. linn joseph l. linn terry m.
walker resist: a recursive test pattern generation algorithm for path delay faults karl fuchs michael pabst torsten rössel analysis of placement procedures for vlsi standard cell layout this paper describes a study of placement procedures for vlsi standard cell layout. the procedures studied are simulated annealing, min cut placement, and a number of improvements to min cut placement including a technique called terminal propagation which allows min cut to include the effect of connections to external cells. the min cut procedures are coupled with a force directed pairwise interchange (fdpi) algorithm for placement improvement. for the same problem these techniques produce a range of solutions with a typical standard deviation of 4% for the total wire length and 3% to 4% for the routed area. the spread of results for simulated annealing is even larger. this distribution of results for a given algorithm implies that mean results of many placements should be used when comparing algorithms. we find that the min cut partitioning with simplified terminal propagation is the most efficient placement procedure studied. mark r. hartoog functional level simulation at raytheon raytheon has enhanced its gate level simulation system (grass) to include a functional level simulation capability. this paper describes the features of the functional description language (fdl), implementation features, recent results, and future plans for the system. dan nash keith russell paul silverman mary thiel area optimization of analog circuits considering matching constraints (poster paper) christian paulus ulrich kleine roland thewes device design for low power electronics with accurate deep submicrometer ldd-mosfet models k. chen y. cheng c. hu structural gate decomposition for depth-optimal technology mapping in lut-based fpga designs in this paper we study structural gate decomposition in general simple gate networks for depth-optimal technology mapping using k-input lookup-tables (k-luts). we show that (1) structural gate decomposition in any k-bounded network results in an optimal mapping depth smaller than or equal to that of the original network, regardless of the decomposition method used; and (2) the problem of structural gate decomposition for depth-optimal technology mapping is np-hard for k-unbounded networks when k≥3 and remains np-hard for k-bounded networks when k≥5. based on these results, we propose two new structural gate decomposition algorithms, named dogma and dogma-m, which combine the level-driven node-packing technique (used in flowmap) and the network flow-based labeling technique (used in chortle-d) for depth-optimal technology mapping. experimental results show that (1) among five structural gate decomposition algorithms, dogma-m results in the best mapping solutions; and (2) compared with speed_up (an algebraic algorithm) and tos (a boolean approach), dogma-m completes decomposition of all tested benchmarks in a short time while speed_up and tos fail in several cases. however, speed_up results in the smallest depth and area in the following technology mapping steps. jason cong yean-yow hwang at-speed delay testing of synchronous sequential circuits i. pomeranz s. m. reddy correctness verification of concurrent controller specifications m. t. l. schaefer w. u. klein an approximate timing analysis method for datapath circuits hakan yalcin john p. hayes karem a. sakallah an efficient non-quasi-static diode model for circuit simulation andrew t. yang yu liu jack t. yao r. r.
daniels new clock-gating techniques for low-power flip-flops two novel low power flip-flops are presented in the paper. the proposed flip-flops use new gating techniques that reduce power dissipation by deactivating the clock signal. the presented circuits overcome the clock duty-cycle limitation of previously reported gated flip-flops. circuit simulations with the inclusion of parasitics show that an appreciable power dissipation reduction is possible if the input signal has reduced switching activity. a 16-bit counter is presented as a simple low power application. a. g. m. strollo e. napoli d. de caro the use of inverse layout trees for hierarchical design rule checking the inverse layout tree concept is used to perform fully hierarchical drc without any constraints on the use of overlapping or incomplete cells that are completed at higher levels of hierarchy. hierarchy is preserved and design rule violations are displayed in the cell where they should be corrected. the drc is corner-based and processes 200-800 corners/second on a vax 11/750. n. hedenstierna k. o. jeppson hardware/software partitioning between microprocessor and reconfigurable hardware m. anand sanjiv kapoor m. balakrishnan high-performance bidirectional repeaters _in this paper, we present high-performance bidirectional repeaters that recondition the signal waveform and reduce the signal degradation. we also present the application of these repeaters to the design of high-performance bidirectional busses. spice simulation results for long bidirectional interconnects show an almost linear increase in delay with repeaters compared to a quadratic increase in delay without repeaters. these repeaters are also applied to improve the performance of long and domino gates. spice simulation results show a significant reduction in the delay of long and domino gates with repeaters._ s. bobba i. n. hajj analysis of rc interconnections under ramp input we give new methods for calculating the time-domain response for a finite-length distributed rc line that is stimulated by a ramp input. the following are our contributions. first, we obtain the solution of the diffusion equation for a semi-infinite distributed rc line with ramp input. we then present a general and, in the limit, exact approach to compute the time-domain response for finite-length rc lines under ramp input by summing distinct diffusions starting at either end of the line. next, we obtain analytical expressions for the finite time-domain voltage response for an open-ended finite rc line and for a finite rc line with capacitive load. the delay estimates using this method are very close to spice-computed delays. finally, we present a general recursive equation for computing the higher-order diffusion components due to reflections at the source and load ends. future work extends our method to response computations in general interconnection trees by modeling both reflection and transmission coefficients at discontinuities. andrew b. kahng sudhakar muddu minimizing sensitivity to delay variations in high-performance synchronous circuits xun liu marios c. papaefthymiou eby g. friedman array optimization for vlsi synthesis we present in this paper an algorithm that solves a general array optimization problem. the algorithm can be used for compacting gate matrix layouts, sla's, weinberger arrays, and for multiple folding of pla's. our approach is based on the technique of simulated annealing.
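for reference alongside the "analysis of rc interconnections under ramp input" entry above: the far-end step response of an open-ended distributed rc line follows from the standard series solution of the diffusion equation, and its 50% delay comes out near 0.38·rc, versus the 0.5·rc elmore estimate for the same line; a ramp response can then be viewed as a superposition of time-shifted step responses. the sketch below evaluates that textbook series; it is not the authors' code.

```python
import math

def rc_line_step_response(t_over_rc, terms=50):
    """voltage at the open end of a distributed rc line (total resistance R,
    total capacitance C) after a unit step at the driven end, from the
    standard series solution of the diffusion equation."""
    v = 1.0
    for n in range(1, terms + 1):
        k = 2 * n - 1
        v -= ((-1) ** (n + 1)) * (4.0 / (k * math.pi)) * \
             math.exp(-(k * math.pi / 2.0) ** 2 * t_over_rc)
    return v

def delay_to_threshold(threshold=0.5):
    """bisect for the time (in units of RC) at which the far end crosses `threshold`.
    a ramp response is a superposition of step responses shifted over the rise time."""
    lo, hi = 0.0, 5.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if rc_line_step_response(mid) < threshold:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print("50%% delay of an open-ended distributed rc line: %.3f * RC" % delay_to_threshold())
print("(the elmore estimate for the same distributed line is 0.5 * RC)")
```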
a major contribution of this paper is the formulation of the solution space which facilitates an effective search for an optimal solution. experimental results are very encouraging. d. f. wong c. l. liu code transformations to improve memory parallelism current microprocessors incorporate techniques to exploit instruction-level parallelism (ilp). however, previous work has shown that these ilp techniques are less effective in removing memory stall time than cpu time, making the memory system a greater bottleneck in ilp-based systems than previous- generation systems. these deficiencies arise largely because applications present limited opportunities for an out-of-order issue processor to overlap multiple read misses, the dominant source of memory stalls. this work proposes code transformations to increase parallelism in the memory system by overlapping multiple read misses within the same instruction window, while preserving cache locality. we present an analysis and transformation framework suitable for compiler implementation. our simulation experiments show substantial increases in memory parallelism, leading to execution time reductions averaging 23% in a multiprocessor and 30% in a uniprocessor. we see similar benefits on a convex exemplar. vijay s. pai sarita adve an experimental mos fault simulation program csasim a prototype version of a new switch-level fault simulator for digital mos ic's is described. the simulation program, which is called csasim, analyzes csa (connector-switch-attenuator) circuit models using multiple logic values. a novel method of signal evaluation is employed, based on the super-position of bidirectional static and dynamic signals. csasim also allows efficient simulation of many different fault types, including stuck-at-constant, open- circuit, short-circuit, and delay faults. the internal structure and fault- simulation mechanisms of the simulator are discussed in this paper. masato kawai john p. hayes using the stuck-at fault model: how well does it work? a. a. fennisi d. s. reeves optimal clock period fpga technology mapping for sequential circuits peichen pan c. l. liu abilbo: analog built-in block observer marcelo lubaszewski salvador mir leandro pulz an iterative improvement algorithm for low power data path synthesis anand raghunathan niraj k. jha optimization of custom mos circuits by transistor sizing andrew r. conn paula k. coulman ruud a. haring gregory l. morrill chandu visweswariah performance enhancement of cmos vlsi circuits by transistor reordering bradley s. carlson c. y. roger chen a multi level testability assistant for vlsi design m. bombana g. buonanno p. cavalloro d. sciuto g. zaza a module interchange placement machine the interchange of pairs of modules is used in a number of popular automatic placement routines in which it is the most time-consuming computation. a system for automatic placement based on iterative placement improvement algorithms which use module interchange is presented. the major attribute of this system is in the hardware implementation of the computation of the cost increment for the new placement resulting from the interchange of two modules. the system was constructed and its results indicate that a speed-up could be achieved of one order of magnitude or better in comparison with software implementations. alexander iosupovicz clarence king melvin a. breuer a fast wavelet collocation method for high-speed vlsi circuit simulation d. zhou n. chen w. 
cai a general methodology for synthesis and verification of register-transfer designs the general relationship between register-transfer synthesis and verification is discussed, and common mechanisms are shown to underlie both tasks. the paper proposes a framework for combined synthesis and verification of hardware that supports any combination of user-selectable synthesis techniques. the synthesis process can begin with any degree of completion of a partial design, and verification of the partial design can be achieved by completing its synthesis while subjecting it to constraints that can be generated from a "template" and user constraints. the driving force was the work done by hafer [3] on a synthesis model. the model was augmented by adding variables and constraints in order to verify interconnections. a multilevel, multidimensional design representation [6] is introduced which is shown to be equivalent to hafer's model. this equivalence relationship is exploited in deriving constraints from the design representation. these constraints can be manipulated in a variety of ways before being input to a linear program which completes the synthesis/verification process. an example is presented in which verification and synthesis occur simultaneously and the contribution of each automatically varies, depending on the number of previous design decisions. alice c. parker fadi kurdahi mitch mlinar contest: a concurrent test generator for sequential circuits this paper describes the application of a concurrent fault simulator to automatic test vector generation. as faults are simulated in the fault simulator a cost function is simultaneously computed. a simple cost function is the distance (in terms of the number of gates and flip-flops) of a fault effect from a primary output. the input vector is then modified to reduce the cost function until a test is found. the paper presents experimental results showing the effectiveness of this method in generating tests for combinational and sequential circuits. by defining suitable cost functions, we have been able to generate: 1) initialization sequences, 2) tests for a group of faults, and 3) a test for a given fault. even asynchronous sequential circuits can be handled by this approach. vishwani d. agrawal kwang-ting cheng prathima agrawal exploiting power-up delay for sequential optimization vigyan singhal carl pixley adnan aziz robert k. brayton new placement and global routing algorithms for standard cell layouts new placement and global routing algorithms are proposed for standard cell layouts. the placement algorithm, called the hierarchical clustering with min-cut exchange (hcme), is effective in avoiding being trapped in local optima. the global routing algorithm does not route the nets one by one and therefore the results are independent of the net order and channel order. in this algorithm, channel width is minimized under a cost function, in which the trade-off between the minimization of net lengths and the minimization of the number of tracks is considered. these algorithms are simple and highly efficient. this is confirmed by computational experiments. masato edahiro takeshi yoshimura power estimation in sequential circuits farid n. najm shashank goel ibrahim n. hajj a "non-restrictive" artwork verification program for printed circuit boards this paper describes a pcb artwork verification program which imposes virtually no restrictions on the layout designer.
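the contest entry above steers vector generation by a cost function such as the distance of a fault effect from a primary output, modifying the current vector until the cost drops to zero. the skeleton below shows only the shape of that search loop; the "circuit", the cost stub, the fault name, and all other identifiers are illustrative placeholders, not the paper's simulator.

```python
import random

def cost(vector, fault, simulate):
    """distance-style cost: 0 means the fault effect reaches a primary output
    (a test has been found); larger means the effect is still buried."""
    return simulate(vector, fault)

def contest_style_search(n_inputs, fault, simulate, max_trials=1000):
    vec = [random.randint(0, 1) for _ in range(n_inputs)]
    best = cost(vec, fault, simulate)
    trials = 0
    while best > 0 and trials < max_trials:
        i = random.randrange(n_inputs)          # try flipping one input bit
        cand = vec[:]
        cand[i] ^= 1
        c = cost(cand, fault, simulate)
        if c <= best:                           # keep moves that do not increase cost
            vec, best = cand, c
        trials += 1
    return (vec if best == 0 else None), trials

# toy stand-in for fault simulation: the fault is observable only when inputs
# 0 and 3 are 1 and input 5 is 0; the cost counts unmet conditions.
def toy_simulate(vec, fault):
    return (vec[0] != 1) + (vec[3] != 1) + (vec[5] != 0)

test, n = contest_style_search(8, "g7/stuck-at-0", toy_simulate)
print("test vector:", test, "found after", n, "trials")
```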
the program is capable of making fast and reliable verifications of layouts of any type and style. the concepts and techniques used to achieve the "non-restrictive" feature of the program are discussed. a unique characteristic of the program is the special treatment of nonelectrical elements. the program has been proven by continuous practical use in a dynamic production environment. david kaplan cat - caching address tags: a technique for reducing area cost of on-chip caches this paper presents a technique for minimizing the chip-area cost of implementing an on-chip cache memory for microprocessors. the main idea of the technique is _caching address tags_, or _cat cache_ for short. the _cat_ cache exploits the locality property that exists among addresses of memory references for the purpose of minimizing the chip-area cost of address tags. by keeping only a limited number of distinct tags of cached data rather than having as many tags as cache lines, the _cat_ cache can reduce the cost of implementing tag memory by an order of magnitude without noticeable performance difference from ordinary caches. therefore, _cat_ represents another level of caching for cache memories. simulation experiments are carried out to evaluate the performance of the _cat_ cache as compared to existing caches. performance results of spec92 programs show that the _cat_ cache with only a few tag entries performs as well as ordinary caches while the chip-area saving is significant. such area saving will increase as the address space of a processor increases. by allocating the saved chip area to larger cache capacity or more powerful functional units, _cat_ is expected to have a great impact on overall system performance. hong wang tong sun qing yang random current testing for cmos logic circuits by monitoring a dynamic power supply current hideo tamamoto hiroshi yokoyama yuichi narita lessons in language design: cost/benefit analysis of vhdl features oz levia serge maginot jacques rouillard wave steering in yadds: a novel non-iterative synthesis and layout technique arindam mukherjee ranganathan sudhakar malgorzata marek-sadowska stephen i. long an analytic performance model of disk arrays edward k. lee randy h. katz layout based frequency dependent inductance and resistance extraction for on-chip interconnect timing analysis it is well understood that frequency independent lumped-element circuits can be used to accurately model proximity and skin effects in transmission lines [7]. furthermore, it is also understood that these circuits can be synthesized knowing only the high and the low frequency resistances and inductances [4]. existing vlsi extraction tools, however, are not efficient enough to solve for the frequency dependent resistances and inductances on large vlsi layouts, nor do they synthesize circuits suitable for timing analysis. we propose a rules-based method that efficiently and accurately captures the high and low frequency characteristics directly from layout shapes, and subsequently synthesizes a simple frequency independent ladder circuit suitable for timing analysis. we compare our results to other simulation results. byron krauter sharad mehrotra an algorithm for synthesis of system-level interface circuits ki-seok chung rajesh k. gupta c. l. liu transparent repeaters the concept of a "transparent repeater," which is an amplifier circuit designed to minimize the delay introduced by highly resistive interconnect lines in high speed digital circuits, is introduced and described in this paper.
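the cat entry above keeps a small shared table of full address tags and lets each cache line store only a short pointer into it. a minimal direct-mapped sketch of that organization follows; the field widths, replacement policy, and invalidation rule are illustrative assumptions of this sketch, not the paper's design.

```python
class CATCache:
    """sketch of 'caching address tags': cache lines store a small pointer into
    a shared table of full tags instead of a full tag each, exploiting the
    locality of tag values. direct-mapped; all sizes are illustrative."""
    def __init__(self, n_lines=64, line_bytes=32, n_tags=4):
        self.line_bytes = line_bytes
        self.n_lines = n_lines
        self.tag_table = [None] * n_tags       # the few full tags actually stored
        self.tag_lru = list(range(n_tags))     # lru order of tag-table entries
        self.line_ptr = [None] * n_lines       # per cache line: pointer into tag_table

    def access(self, addr):
        line = (addr // self.line_bytes) % self.n_lines
        tag = addr // (self.line_bytes * self.n_lines)
        ptr = self.line_ptr[line]
        if ptr is not None and self.tag_table[ptr] == tag:
            return "hit"
        # miss: install the tag in the shared table if it is not already there
        if tag in self.tag_table:
            ptr = self.tag_table.index(tag)
        else:
            ptr = self.tag_lru[0]              # evict the least-recently-used tag entry
            for l in range(self.n_lines):      # lines pointing at it must be invalidated
                if self.line_ptr[l] == ptr:
                    self.line_ptr[l] = None
            self.tag_table[ptr] = tag
        self.tag_lru = [i for i in self.tag_lru if i != ptr] + [ptr]
        self.line_ptr[line] = ptr
        return "miss"

# a streaming pattern touching 64 lines but only one distinct tag value
c = CATCache()
refs = [0x1000 + 32 * i for i in range(64)] * 2
print(sum(c.access(a) == "hit" for a in refs), "hits out of", len(refs))
```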
an insertion methodology for this circuit is also discussed. defining characteristics of this circuit are: the input is connected to the output, the output generates the same sense transition as the corresponding input transition, the buffer output becomes high impedance after every transition, and the buffer may detect input transitions with low threshold voltages. radu m. secareanu eby g. friedman a formal basis for design process planning and management in this paper we present a design formalism that allows for a complete and general characterization of design disciplines and for a unified representation of arbitrarily complex design processes. this formalism has been used as the basis for the development of several prototype cad meta-tools that offer effective design process planning and management services. margarida f. jacome stephen w. director power estimation of cell-based cmos circuits alessandro bogiolo luca benini bruno riccò information base structure for an adaptive system of intellectual compilation alexander marchenco alexander sakada microcode compaction via microblock definition the paper describes a microprogram compaction technique based on a microoperation and microistruction modelling, applicable to different types of target machine. the model describes microoperation semantics by relating them to microcodes used in microinstruction fields, without any explicit description of machine timing. evaluation of the proposed technique is given in terms of efficiency of the automatically generated microcode. m. mezzalama p. prinetto g. filippi predicting coupled noise in rc circuits bernard n. sheehan pleasure: a computer program for simple/multiple constrained unconstrained folding of programmable logic arrays g. demichelli a. sangiovanni-vincentelli heterogeneous technology mapping for fpgas with dual-port embedded memory arrays it has become clear that on-chip storage is an essential component of high- density fpgas. these arrays were originally intended to implement storage, but recent work has shown that they can also be used to implement logic very efficiently. this previous work has only considered single-port arrays. many current fpgas, however, contain dual-port arrays. in this paper we present an algorithm that maps logic to these dual- port arrays. our algorithm can either optimize area with no regard for circuit speed, or optimize area under the constraint that the combinational depth of the circuit does not increase. experimental results show that, on average, our algorithm packs between 29% and 35% more logic than an algorithm that targets single-port arrays. we also show, however, that even with this algorithm, dual-port arrays are still not as area-efficient as single-port arrays when implementing logic. steven j. e. wilton microarchitectural synthesis of vlsi designs with high test concurrency ian g. harris alex orailoglu parallel algorithms for fpga placement _fast fpga cad tools that produce high quality results has been one of the most important research issues in the fpga domain. simulated annealing has been the method of choice for placement. however, simulated annealing is a very compute-intensive method. in our present work we investigate a range of parallelization strategies to speedup simulated annealing with application to placement for fpga. 
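the placement engine being parallelized in the entry above (vpr) is a simulated-annealing placer: propose random pairwise swaps of blocks, accept cost-increasing moves with probability exp(-Δcost/t), and cool t gradually, with bounding-box (half-perimeter) wirelength as the usual cost. a minimal serial sketch of that inner loop is given below; the netlist, cooling schedule, and parameters are illustrative, not vpr's.

```python
import math, random

# a toy netlist: each net is the list of blocks it connects (illustrative only)
nets = [(0, 1, 2), (1, 3), (2, 3, 4), (0, 4), (3, 5), (4, 5)]
n_blocks, grid = 6, 4

def wirelength(pos):
    """half-perimeter wirelength over all nets (the usual placement cost)."""
    total = 0
    for net in nets:
        xs = [pos[b][0] for b in net]
        ys = [pos[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal(moves_per_temp=200, t=10.0, cooling=0.9, t_min=0.01):
    # random initial placement of blocks onto distinct grid locations
    slots = random.sample([(x, y) for x in range(grid) for y in range(grid)], n_blocks)
    pos = {b: slots[b] for b in range(n_blocks)}
    cost = wirelength(pos)
    while t > t_min:
        for _ in range(moves_per_temp):
            a, b = random.sample(range(n_blocks), 2)      # propose a pairwise swap
            pos[a], pos[b] = pos[b], pos[a]
            new = wirelength(pos)
            if new <= cost or random.random() < math.exp((cost - new) / t):
                cost = new                                # accept the move
            else:
                pos[a], pos[b] = pos[b], pos[a]           # reject: undo the swap
        t *= cooling                                      # cool down
    return pos, cost

random.seed(1)
placement, cost = anneal()
print("final wirelength:", cost)
```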
we present experimental results obtained by applying the different parallelization strategies to the versatile place and route (vpr) tool, implemented on an sgi origin shared memory multi-processor and an ibm- sp2 distributed memory multi-processor. the results show the tradeoff between execution time and quality of result for the different parallelization strategies._ malay haldar anshuman nayak alok choudhary prith banerjee layout driven technology mapping massoud pedram narasimha bhat efficient boolean function matching jerry r. burch david e. long lyra: a new approach to geometric layout rule checking lyra is a layout rule checking program for manhattan vlsi circuits. in lyra, rules are specified in terms of constraints that must hold at certain corners in the design. the corner-based mechanism permits a wide variety of rules to be specified easily, including rules involving asymmetric constructs such as transistor overhangs. lyra's mechanism also has locality, which can be exploited to construct incremental and/or hierarchical checkers. a rule compiler translates symbolic rules into efficient code for checking those rules, and permits the system to be retargeted for different processes. michael h. arnold john k. ousterhout combinational logic synthesis from an hdl description hardware description languages are used to input the details of a digital system into an automatic design system. an algorithm to synthesize combinational logic from the description in one such language (ddl) is discussed. a sample implementation and the cost comparison are provided. sajjan g. shiva an efficient and flexible methodology for modelling and simulation of heterogeneous mechatronic systems stefan scherber christian muller-schloer externally hazard-free implementations of asynchronous circuits milton sawasaki chantal ykman-couvreur bill lin ultra-high-density data storage: introduction lambertus hesselink the concurrent simulation of nearly identical digital networks e. g. ulrich t. baker micropipelines the pipeline processor is a common paradigm for very high speed computing machinery. pipeline processors provide high speed because their separate stages can operate concurrently, much as different people on a manufacturing assembly line work concurrently on material passing down the line. although the concurrency of pipeline processors makes their design a demanding task, they can be found in graphics processors, in signal processing devices, in integrated circuit components for doing arithmetic, and in the instruction interpretation units and arithmetic operations of general purpose computing machinery. because i plan to describe a variety of pipeline processors, i will start by suggesting names for their various forms. pipeline processors, or more simply just pipelines, operate on data as it passes along them. the latency of a pipeline is a measure of how long it takes a single data value to pass through it. the throughput rate of a pipeline is a measure of how many data values can pass through it per unit time. pipelines both store and process data; the storage elements and processing logic in them alternate along their length. i will describe pipelines in their complete form later, but first i will focus on their storage elements alone, stripping away all processing logic. stripped of all processing logic, any pipeline acts like a series of storage elements through which data can pass. 
pipelines can be clocked or event-driven, depending on whether their parts act in response to some widely- distributed external clock, or act independently whenever local events permit. some pipelines are inelastic; the amount of data in them is fixed. the input rate and the output rate of an inelastic pipeline must match exactly. stripped of any processing logic, an inelastic pipeline acts like a shift register. other pipelines are elastic; the amount of data in them may vary. the input rate and the output rate of an elastic pipeline may differ momentarily because of internal buffering. stripped of all processing logic, an elastic pipeline becomes a flow-through first-in-first-out memory, or fifo. fifos may be clocked or event-driven; their important property is that they are elastic. i assign the name micropipeline to a particularly simple form of event-driven elastic pipeline with or without internal processing. the micro part of this name seems appropriate to me because micropipelines contain very simple circuitry, because micropipelines are useful in very short lengths, and because micropipelines are suitable for layout in microelectronic form. i have chosen micropipelines as the subject of this lecture for three reasons. first, micropipelines are simple and easy to understand. i believe that simple ideas are best, and i find beauty in the simplicity and symmetry of micropipelines. second, i see confusion surrounding the design of fifos. i offer this description of micropipelines in the hope of reducing some of that confusion. the third reason i have chosen my subject addresses the limitations imposed on us by the clocked-logic conceptual framework now commonly used in the design of digital systems. i believe that this conceptual framework or mind set masks simple and useful structures like micropipelines from our thoughts, structures that are easy to design and apply given a different conceptual framework. because micropipelines are event- driven, their simplicity is not available within the clocked-logic conceptual framework. i offer this description of micropipelines in the hope of focusing attention on an alternative transition- signalling conceptual framework. we need a new conceptual framework because the complexity of vlsi technology has now reached the point where design time and design cost often exceed fabrication time and fabrication cost. moreover, most systems designed today are monolithic and resist mid-life improvement. the transition-signalling conceptual framework offers the opportunity to build up complex systems by hierarchical composition from simpler pieces. the resulting systems are easily modified. i believe that the transition-signalling conceptual framework has much to offer in reducing the design time and cost of complex systems and increasing their useful lifetime. i offer this description of micropipelines as an example of the transition-signalling conceptual framework. until recently only a hardy few used the transition-signalling conceptual framework for design because it was too hard. it was nearly impossible to design the small circuits of 10 to 100 transistors that form the elemental building blocks from which complex systems are composed. moreover, it was difficult to prove anything about the resulting compositions. in the past five years, however, much progress has been made on both fronts. charles molnar and his colleagues at washington university have developed a simple way to design the small basic building blocks [9]. 
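to make the transition-signalling handshake in the micropipelines lecture concrete: in two-phase signalling a request is pending on a channel whenever its request and acknowledge wires differ, and a stage fires (captures data and toggles its single control wire) when its input channel has a pending request and its output channel does not. the behavioural sketch below models a small fifo control chain in that style; it is an illustration of the idea, not sutherland's circuits.

```python
class MicropipelineFifo:
    """behavioural model of a transition-signalled (2-phase) elastic fifo control
    chain: stage i fires -- capturing the data offered by stage i-1 and toggling
    its control wire -- when upstream has a pending request and downstream has
    acknowledged the previous transfer."""
    def __init__(self, stages=3):
        self.c = [0] * stages          # control (c-element) outputs, all spacers
        self.req_in = 0                # producer's request wire
        self.ack_out = 0               # consumer's acknowledge wire
        self.latch = [None] * stages   # data captured by each stage
        self.data_in = None

    def _inputs(self, i):
        prev = self.req_in if i == 0 else self.c[i - 1]
        nxt = self.ack_out if i == len(self.c) - 1 else self.c[i + 1]
        return prev, nxt

    def step(self):
        """fire every enabled stage once; returns True if any token moved."""
        moved = False
        for i in range(len(self.c)):
            prev, nxt = self._inputs(i)
            if prev != self.c[i] and self.c[i] == nxt:
                self.latch[i] = self.data_in if i == 0 else self.latch[i - 1]
                self.c[i] ^= 1                 # one transition = one event
                moved = True
        return moved

    def push(self, value):
        # a real producer would first wait until req_in == c[0] (previous push acked)
        self.data_in, self.req_in = value, self.req_in ^ 1

    def pop(self):
        # a real consumer would first check c[-1] != ack_out (a token is waiting)
        value = self.latch[-1]
        self.ack_out ^= 1
        return value

fifo = MicropipelineFifo()
fifo.push("a")
while fifo.step():
    pass                                       # the token ripples to the last stage
print(fifo.pop())                              # -> a
```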
martin rem's "vlsi club" at the technical university of eindhoven has been working effectively on the mathematics of event-driven systems [6, 10, 11, 19]. these emerging conceptual tools now make transition signalling a lively candidate for widespread use. i. e. sutherland a performance comparison of contemporary dram architectures in response to the growing gap between memory access time and processor speed, dram manufacturers have created several new dram architectures. this paper presents a simulation- based performance study of a representative group, each evaluated in a small system organization. these small-system organizations correspond to workstation-class computers and use on the order of 10 dram chips. the study covers fast page mode, extended data out, synchronous, enhanced synchronous, synchronous link, rambus, and direct rambus designs. our simulations reveal several things: (a) current advanced dram technologies are attacking the memory bandwidth problem but not the latency problem; (b) bus transmission speed will soon become a primary factor limiting memory-system performance; (c) the post-l2 address stream still contains significant locality, though it varies from application to application; and (d) as we move to wider buses, row access time becomes more prominent, making it important to investigate techniques to exploit the available locality to decrease access time. vinodh cuppu bruce jacob brian davis trevor mudge a practical online design rule checking system in this paper, we propose a practical online design rule checking system, which can provide the following features: an ability to cope with a complicated and wide variety of design rules with high accuracy, easy use, high speed incremental drc (simultaneous checking with pattern editing) and total drc (non-simultaneous checking), high speed pattern editing, and a small memory space. for the last three items, a field block data structure is developed. experimental results show this system is very effective for vlsi cell layout design. goro suzuki yoshio okamura microarchitecture modelling through adl adl is an architecture description language that has been developed to model computer architectures at different levels of detail, as for instance, at the microarchitecture level. target architectures described in adl are processed by the support system of the language which generates an interpreter program related to the description of the target machine. the interpreter reproduces the behavior of the architecture being modeled, including the interpretation of the target code. in addition to a brief review of the language and the implementation details of its support system, this paper also shows some methods to deal with target machine parallelism, and the modeling of two microprogrammable machines. e. s.t. fernandes ils - interactive logic simulator due to increasing vlsi complexity, logic level simulators have become necessary tools for design verification and test generation. logic simulators must respond to this increased demand by providing additional functionality in a user-friendly environment. ils (interactive logic simulator) currently under development in the cad lab of hewlett-packard co. provides these features. ils accepts a hierarchical, scoped description of the network topology. this description may consist of libraries, blocks, ils-defined primitives (transistors and gates), or user-defined primitives. 
these multiple levels simplify network description and allow ils to accurately simulate a wide variety of circuits. the simulator incorporates a new modeling scheme which allows functional, logic, and circuit level primitives to communicate efficiently. in addition, ils features a new concept in simulation control languages to facilitate generation of functional test programs. this paper will briefly review these significant benefits provided by the ils simulator. gregory d. jordan brij b. popli symbolic parasitic extractor for circuit simulation (specs) this paper describes the design, development and implementation of the program specs. the purpose of specs is to automatically extract from a rockwell microelectronic symbolic matrix description a netlist for circuit simulation. this program differs from others in that it uses a symbol layout matrix as an input, calculates both interelectrode and intrinsic capacitance, calculates conductor resistance, produces a schematic representation of the network and has a selective trace, i.e., traces only the circuit or network of interest. j. d. bastian m. ellement p. j. fowler c. e. huang l. p. mcnamee cooperative prefetching: compiler and hardware support for effective instruction prefetching in modern processors chi-keung luk todd c. mowry automatic tub region generation for symbolic layout compaction this paper describes a new algorithm that automatically generates tub regions for vlsi symbolic layouts with quality comparable to that of human designers. the algorithm supports an explicit modeling of enclosure rules in the layout compaction task with the benefit of robustness and reduced output database size. in addition, the algorithm runs at o(n2) time and o(n) space with the expected run time of o(nlogn). c.-y. lo identifying sequential redundancies without search mahesh a. iyer david e. long miron abramovici is sc + ilp = rc? sequential consistency (sc) is the simplest programming interface for shared-memory systems but imposes program order among all memory operations, possibly precluding high performance implementations. release consistency (rc), however, enables the highest performance implementations but puts the burden on the programmer to specify which memory operations need to be atomic and in program order. this paper shows, for the first time, that sc implementations can perform as well as rc implementations if the hardware provides enough support for speculation. both sc and rc implementations rely on reordering and overlapping memory operations for high performance. to enforce order when necessary, an rc implementation uses software guarantees, whereas an sc implementation relies on hardware speculation. our sc implementation, called sc++, closes the performance gap because: (1) the hardware allows not just loads, as some current sc implementations do, but also stores to bypass each other speculatively to hide remote latencies, (2) the hardware provides large speculative state for not just processor, as previously proposed, but also memory to allow out-of-order memory operations, (3) the support for hardware speculation does not add excessive overheads to processor pipeline critical paths, and (4) well-behaved applications incur infrequent rollbacks of speculative execution. using simulation, we show that sc++ achieves an rc implementation's performance in all the six applications we studied. chris gniady babak falsafi t. n. 
vijaykumar regarding a device to help battering the ram wall jean-louis lafitte a fault model for vhdl descriptions at the register transfer level t. riesgo j. uceda design of concurrently testable microprogrammed control units four schemes for the design of concurrently testable microprogrammed control units are presented. in schemes 1 and 2 the concept of path signatures is used for detection of malfunctions in the control unit. two different methods for computation of signatures are given. in schemes 3 and 4, a check-symbol is assigned to each microinstruction and the integrity of these check-symbols is checked concurrently. a deterministic approach is used for generation of check-symbols in scheme 4. a comparative study of these schemes is done with respect to storage and time overhead, error coverage, and implementation complexity. masood namjoo on average power dissipation and random pattern testability of cmos combinational logic networks amelia shen abhijit ghosh srinivas devadas kurt keutzer a study in coverage-driven test generation mike benjamin daniel geist alan hartman gerard mas ralph smeets yaron wolfsthal fault-tolerant wafer-scale architectures for vlsi the basic problem which limits both yields and chip sizes is the fact that circuits created using current design techniques will not function correctly in the presence of even a single flaw of sufficient size anywhere on the chip. in this work we examine the problem of constructing chips up to the size of a wafer which operate correctly despite the presence of such flaws. this can be accomplished by building on the wafer a nearest-neighbor network of small, independent, asynchronously communicating modules. a specific algorithm to be performed by the wafer is then mapped onto a fault-free subgraph of the network. we are interested in algorithms which map naturally onto a linear array of identical processors. construction of fault-tolerant implementations of these algorithms is addressed in two contexts. first we consider the general problem of finding a fault-free subgraph of the host network which is isomorphic to the linear array required to solve a problem. we then examine ways to tailor a specific, known algorithm to the fault-tolerant context. donald fussell peter varman the compilation of regular expressions into integrated circuits robert w. floyd jeffrey d. ullman physical design for fpgas fpgas have been growing at a rapid rate in the past few years. their ever- increasing gate densities and performance capabilities are making them very popular in the design of digital systems. in this paper we discuss the state- of-the-art in fpga physical design. compared to physical design in traditional asics, fpgas pose a different set of requirements and challenges. consequently the algorithms in fpga physical design have evolved differently from their asic counterparts. apart from allowing fpga users to implement their designs on fpgas, fpga physical design is also used extensively in developing and evaluating new fpga architectures. finally, the future of fpga physical design is discussed along with how it is interacting with the latest fpga technologies. rajeev jayaraman tradeoffs and design of an ultra low power uhf transceiver integrated in a standard digital cmos process a broad range of high-volume consumer applications require low-power, battery operated, wireless microsystems and sensors. these systems should conciliate a sufficient battery lifetime with reduced dimensions, low cost and versatility. 
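the fault-tolerant wafer-scale entry above maps a linear processor array onto a fault-free subgraph of a nearest-neighbour grid of modules. the toy sketch below simply threads the longest chain it can find through the working cells of a small grid by backtracking search, just to make the embedding problem concrete; this brute-force search is only an illustration of the problem, not the paper's construction, and the fault map is invented.

```python
def embed_linear_array(good):
    """backtracking search for the longest chain of adjacent fault-free cells
    (a linear processor array mapped onto the working modules of a wafer).
    fine for toy grids; real wafers need far cheaper constructions."""
    rows, cols = len(good), len(good[0])
    best = []

    def neighbours(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and good[nr][nc]:
                yield nr, nc

    def dfs(path, visited):
        nonlocal best
        if len(path) > len(best):
            best = path[:]
        r, c = path[-1]
        for nxt in neighbours(r, c):
            if nxt not in visited:
                visited.add(nxt)
                dfs(path + [nxt], visited)
                visited.remove(nxt)

    for r in range(rows):
        for c in range(cols):
            if good[r][c]:
                dfs([(r, c)], {(r, c)})
    return best

# 1 = working module, 0 = faulty module (illustrative fault map)
wafer = [[1, 1, 0, 1],
         [1, 0, 1, 1],
         [1, 1, 1, 0],
         [0, 1, 1, 1]]
chain = embed_linear_array(wafer)
print("linear array of length", len(chain), ":", chain)
```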
the design of such systems highlights many tradeoffs between performances, lifetime, cost and power consumption. also, special circuit and design techniques are needed to comply with the reduced supply voltage (down to 1v). these considerations are illustrated by design examples taken from a transceiver chip realized in a standard 0.5μm digital cmos process. the chip is dedicated to a distributed sensors network and is based on a direct- conversion architecture. the circuit prototype operates in the 434 mhz ism band and consumes only 1mw in receive mode. it achieves a -95dbm sensitivity for a data rate of 24kbit/s. the transmitter section is designed for 0dbm output power under the minimum 1v supply, with a global efficiency higher than 15%. alain-serge porret thierry melly e. a. vittoz c. c. enz a register file and scheduling model for application specific processor synthesis e. ercanli c. papachristou garbage collection with pointers to individuals cells b. pearlmutter emerald: an architecture-driven tool compiler for fpgas darren c. cronquist larry mcmurchie functional timing analysis for ip characterization hakan yalcin mohammad mortazavi robert palermo cyrus bamji karem sakallah design of a logic synthesis system (tutorial) richard rudell a new heuristic for single row routing problems in this paper, we present a new heuristic algorithm for the classical single row routing problem. the algorithm is based on a graph theoretic decomposition scheme and uses modified cut-numbers. the algorithm was implemented in c on vax 8200. the experimental results show that the quality of solutions generated by our algorithm could be up to 36% better as compared to the existing algorithms. n. a. sherwani j. s. deogun high-level synthesis for testability: a survey and perspective kenneth d. wagner sujit dey potentials of chip-package co-design for high-speed digital applications gerhard tröster exfi: a low-cost fault injection system for embedded microprocessor-based boards evaluating the faulty behavior of low-cost embedded microprocessor-based boards is an increasingly important issue, due to their adoption in many safety critical systems. the architecture of a complete fault injection environment is proposed, integrating a module for generating a collapsed list of faults, and another for performing their injection and gathering the results. to address this issue, the paper describes a software- implemented fault injection approach based on the trace exception mode available in most microprocessors. the authors describe exfi, a prototypical system implementing the approach, and provide data about some sample benchmark applications. the main advantages of exfi are the low cost, the good portability, and the high efficiency a. benso p. prinetto m. rebaudengo m. sonza reorda residue bdd and its application to the verification of arithmetic circuits shinji kimura a system for incremental synthesis to gate-level and reoptimization following rtl design changes s. c. prasad p. anirudhan p. bosshart parity logging disk arrays parity-encoded redundant disk arrays provide highly reliable, cost-effective secondary storage with high performance for reads and large writes. their performance on small writes, however, is much worse than mirrored disks---the traditional, highly reliable, but expensive organization for secondary storage. unfortunately, small writes are a substantial portion of the i/o workload of many important, demanding applications such as on-line transaction processing. 
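before the parity-logging solution described next, it helps to see the raid level 5 small-write penalty itself: updating a single data block requires reading the old data and the old parity and writing both back, since p_new = p_old xor d_old xor d_new, i.e. four disk accesses versus two for mirroring. a small sketch of that read-modify-write accounting follows; block sizes and stripe layout are illustrative.

```python
def raid5_small_write(old_data, new_data, old_parity):
    """read-modify-write parity update for a single-block write on raid level 5:
    p_new = p_old xor d_old xor d_new, costing 2 reads + 2 writes (4 disk i/os)
    versus 2 writes for mirroring -- the gap that parity logging narrows."""
    new_parity = bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))
    ios = {"reads": 2, "writes": 2}   # old data + old parity read, both rewritten
    return new_parity, ios

# sanity check on a 3-data-disk stripe: parity stays the xor of all data blocks
d = [bytes([1, 2, 3, 4]), bytes([5, 6, 7, 8]), bytes([9, 10, 11, 12])]
parity = bytes(a ^ b ^ c for a, b, c in zip(*d))
new_d1 = bytes([0xff, 0, 0xff, 0])
parity, ios = raid5_small_write(d[1], new_d1, parity)
d[1] = new_d1
assert parity == bytes(a ^ b ^ c for a, b, c in zip(*d))
print("parity update ok,", ios)
```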
this paper presents parity logging, a novel solution to the small- write problem for redundant disk arrays. parity logging applies journalling techniques to reduce substantially the cost of small writes. we provide detailed models of parity logging and competing schemes---mirroring, floating storage, and raid level 5---and verify these models by simulation. parity logging provides performance competitive with mirroring, but with capacity overhead close to the minimum offered by raid level 5. finally, parity logging can exploit data caching more effectively than all three alternative approaches. daniel stodolsky mark holland william v. courtright garth a. gibson hardware reuse at the behavioral level patrick schaumont radim cmar serge vernalde marc engels ivo bolsens codes to reduce switching transients across vlsi i/o pins arvin park ron maeder leakage control with efficient use of transistor stacks in single threshold cmos mark c. johnson dinesh somasekhar kaushik roy a design for testability technique for rtl circuits using control/data flow extraction indradeep ghosh anand raghunathan niraj k. jha cycle-true simulation of the st10 microcontroller (poster paper) lovic gauthier ahmed amine jerraya speed up of behavioral a.t.p.g. using a heuristic criterion jean françois santucci anne-lise courbis norbert giambiasi stay away from minimum design-rule values chris w. h. strolenberg plus: a distributed shared-memory system plus is a multiprocessor architecture tailored to the fast execution of a single multithreaded process; its goal is to accelerate the execution of cpu- bound applications. plus supports shared memory and efficient synchronization. memory access latency is reduced by non-demand replication of pages with hardware-supported coherence between replicated pages. the architecture has been simulated in detail and the paper presents some of the key measurements that have been used to substantiate our architectural decisions. the current implementation of plus is also described. roberto bisiani mosur ravishankar meld scheduling: relaxing scheduling constraints across region boundaries santosh g. abraham vinod kathail brian l. deitrich automatic layout algorithms for function blocks of cmos gate arrays automatic layout algorithms, placement and routing, for function blocks of cmos gate arrays are presented. the placement algorithm assigns transistors to basic cells so as to minimize the number of cells used and to minimize the number of interconnections crossing cut-lines. the former objective is achieved by finding a maximum matching and the latter is achieved by iterative interchanges of transistor pairs. a new routing technique based on channel routing methods is introduced to handle the internal cell area. it intends to route with the primary use of the first layer and with the least use of tracks. a program based on the algorithms has been developed and applied to many block designs for up to 200 transistors. the results show that the presented algorithms could realize as good a layout as manual. shigeo noda hitoshi yoshizawa etsuko fukuda haruo kato hiroshi kawanishi takashi fujii logic and conflict-free vector addresses tsong-chih hsu ling-yang kung buffer insertion for clock delay and skew minimization x. zeng d. 
zhou wei li a novel dimension reduction technique for the capacitance extraction of 3d vlsi interconnects wei hong weikai sun zhenhai zhu hao ji ben song wayne wei-ming dai cache memories alan jay smith cad directions for high performance asynchronous circuits ken stevens shai rotem steven m. burns jordi cortadella ran ginosar michael kishinevsky marly roncken terabytes >> teraflops or why work on processors when i/o is where the action is? (abstract) david patterson single step current driven routing of multiterminal signal nets for analog applications thorsten adler erich barke analysis of user requirements jacques rouillard co-synthesis of pipelined structures and instruction reordering constraints for instruction set processors this paper presents a hardware/software co-synthesis approach to pipelined isp (instruction set processor) design. the approach synthesizes the pipeline structure from a given instruction set architecture (behavioral) specification. in addition, it generates a set of reordering constraints that guides the compiler back-end (reorderer) to properly schedule instructions so that possible pipeline hazards are avoided and throughput is improved. co-synthesis takes place while resolving pipeline hazards, which can be attributed to inter-instruction dependencies (iids). an extended taxonomy of iids has been proposed for the systematic analysis of pipeline hazards. hardware/software methods are developed to resolve iids. algorithms based on the taxonomy and resolutions are constructed and integrated into the pipeline synthesis process to explore the hardware and software design space. application benchmarks are used to evaluate possible designs and guide the design decision. the power of the co-synthesis tool piper is demonstrated through pipeline synthesis of one illustrative example and two isps, including an industrial one (tdy-43). in comparison with other related approaches, our approach achieves higher throughput and provides a systematic way to explore the hardware/software trade-off. ing-jer huang provably correct high-level timing analysis without path sensitization this paper addresses the problem of true delay estimation during high level design. the existing delay estimation techniques either estimate the topological delay of the circuit, which may be pessimistic, or use gate-level timing analysis for calculating the true delay, which may be prohibitively expensive. we show that the paths in the implementation of a behavioral specification can be partitioned into two sets, sp and up. while the paths in sp can affect the delay of the circuit, the paths in up cannot. consequently, the true delay of the resulting circuit can be computed by just measuring the topological delay of the paths in sp, eliminating the need for the computationally intensive process of path sensitization. experimental results show that high-level true delay estimation can be done very fast, even when gate-level true delay estimation becomes computationally infeasible. the high-level delay estimates are verified by comparing with delay estimates obtained by gate-level timing analysis on the actual implementation. subhrajit bhattacharya sujit dey franc brglez expected current distributions for cmos circuits dennis j. ciplickas ronald a. rohrer delay minimal decomposition of multiplexers in technology mapping shashidhar thakur d. f. wong shankar krishnamoorthy early verification of prototype tooling for ic designs (tutorial) j. p.
simmons synthesis and optimization procedures for robustly delay-fault testable combinational logic circuits in this paper we apply recently developed necessary and sufficient conditions for robust path-delay-fault testability to develop synthesis procedures which produce two-level and multilevel circuits with high degrees of robust path delay fault testability. for circuits which can be flattened to two levels, we give a covering procedure which optimizes for robust path delay fault testability. these two-level circuits can then be algebraically factored to produce robustly path-delay-fault testable multilevel circuits. for regular structures which cannot be flattened to two levels, we give a composition procedure which allows for the construction of robustly path-delay-fault testable regular structures. finally, we show how these two techniques can be combined to produce cascaded combinational logic blocks that are robustly path-delay-fault testable. we demonstrate these techniques on a variety of examples. it is possible to produce entire chips that are fully path delay testable using these techniques. srinivas devadas kurt keutzer logic decomposition during technology mapping eric lehman yosinori watanabe joel grodstein heather harkness dynamic power management based on continuous-time markov decision processes qinru qiu massoud pedram parameterized schematics this paper presents a design capture system that allows parameterized schematics and code to be intermixed freely to produce annotated net lists. a key feature of the system is its extensibility. it provides a small set of powerful abstractions for design description that can easily be extended by users. the system also allows convenient graphical specification of layout generators, and has been used to produce several large vlsi chips. richard barth bertrand serlet pradeep sindhu low-power micromachined microsystems (invited talk) micromachined microsystems and micro electro mechanical systems (mems) have made possible the development of highly accurate and portable sensors and instruments for a variety of applications in the health care, industrial, consumer products, avionics, and defense areas. design of low-power circuits for these applications, and the use of micromachined sensors and actuators in combination with integrated circuits to implement even lower power microinstruments, has now become possible and the focus of attention. this paper reviews the state of the art in the development of micromachined microsystems and mems, discusses low-power design approaches for microsystems, and reviews some recent developments in power generation and energy harvesting from the environment. khalil najafi tradeoffs in processor/memory interfaces for superscalar processors thomas m. conte "cool low power" 1ghz multi-port register file and dynamic latch in 1.8 v, 0.25 μm soi and bulk technology (poster session) this paper describes power analysis at sub-zero temperatures for a high-performance dynamic multiport register file (6 read and 2 write ports, 32 wordlines x 64 bitlines) fabricated in 0.25 μm silicon on insulator (soi) and bulk technologies. based on the hardware, it is shown that the performance of both the register file and the latch improves by 2-3.5% per 10°c reduction in temperature. the standby power for soi reduces by 1.5% to 3% per 10°c temperature drop down to -30°c.
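as a rough aside on the figures just quoted, a back-of-the-envelope sketch; the compounding model below is an assumption made here for illustration, not something taken from the entry.

```python
# rough arithmetic using the quoted 2-3.5% performance gain per 10 deg c of
# cooling; compounding per 10-degree step is an assumption, not the paper's model.
def perf_multiplier(temp_drop_c: float, gain_pct_per_10c: float = 3.0) -> float:
    return (1.0 + gain_pct_per_10c / 100.0) ** (temp_drop_c / 10.0)

print(round(perf_multiplier(55.0), 3))  # e.g. cooling from 25 c down to -30 c
```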
the soi chip is shown to have a more significant performance improvement at low temperatures compared to the bulk chip, due to the floating body effect which partially offsets the increase in the threshold voltages (vt). the low temperature performance gain is attributed to a reduction in capacitance (around 7-8%), and the rest is due to dynamic threshold voltages. at -30°c the register file is capable of functioning close to 1.02 ghz for read and write operations in a single cycle. r. v. joshi w. hwang s. c. wilson c. t. chuang developments in verification of design correctness (a tutorial) this paper reviews recent developments in the verification of digital systems designs. the emphasis is on proof of functional correctness. some of the techniques reviewed are symbolic simulation (including parallel simulation of hdl descriptions), dataflow verification by grammar construction, comparison of manually generated design with automated design, and functional abstraction. w. e. cory w. m. vancleemput bias boosting technique for a 1.9ghz class ab rf amplifier a bias boosting technique for a 3.2v, 1.9ghz class ab rf amplifier designed in a 30ghz bicmos process is presented in this paper. in a class ab amplifier, the average current drawn from the supply depends on the input signal level. as the output power increases, so do the average currents in both the emitter and the base of the power transistor. the increased average current causes an increased voltage drop in the biasing circuitry and the ballast resistor. this reduces the conduction angle in the amplifier, pushing it deep into class b and even class c operation, reducing the maximum output power by 25%. to avoid the power reduction, the amplifier should have a larger bias, which inevitably leads to larger power dissipation at low output power levels. the proposed bias boosting circuitry dynamically increases the bias of the power transistor as the output power increases. the amplifier has less power dissipation at low power levels with an increased maximum output power. tirdad sowlati sifen luo a customized control store design in microprogrammed control units the paper reports on the control store cost minimization using an approach related to the bit reduction method. a methodology is presented for finding the microinstruction format which provides a minimum joint cost of the control store and microinstruction decoder circuitry for a given set of microprograms. the optimization criterion measures the area taken by the control store and decoders in a large scale integrated circuit. the methodology is based on the concepts of codable microoperation classes and microoperation class distributivity introduced in the paper. the codable classes are those that provide a length reduction of the microinstruction field used for binary encoding of microoperation combinations, compared to the single bit/microoperation encoding method. microoperation class properties and basic types of field assignments in microinstruction word formats for the codable class approach are also discussed. m. s. tudruj chip assembly in the playout vlsi design system klaus glasmacher gerhard zimmermann efficiently supporting fault-tolerance in fpgas while system reliability is conventionally achieved through component replication, we have developed a fault-tolerance approach for fpga-based systems that comes at a reduced cost in terms of design time, volume, and weight. we partition the physical design into a set of tiles.
in response to a component failure, we capitalize on the unique reconfiguration capabilities of fpgas and replace the affected tile with a functionally equivalent tile that does not rely on the faulty component. unlike fixed structure fault-tolerance techniques for asics and microprocessors, this approach allows a single physical component to provide redundant backup for several types of components. experimental results conducted on a subset of the mcnc benchmarks demonstrate a high level of reliability with low timing and hardware overhead. john lach william h. mangione-smith miodrag potkonjak a new partitioning method for parallel simulation of vlsi circuits on transistor level norbert fröhlich volker glöckel josef fleischmann issues in ic implementation of high level, abstract designs with the exponential explosion in chip complexity there is a growing need for high level design aids. a preliminary experiment was conducted in mating a hierarchical, top-down da system for data paths with an existing ic placement and routing system. nine designs ranging in complexity from 7 to 150 register transfers were synthesized. strong correlations were observed between high level, abstract measures and final placed and routed chip area. it was observed that use of logic primitives of a moderate level of abstraction yielded a 50% savings in placed and routed chip area. jin h. kim daniel p. siewiorek estimating testability and coverage distributions of a vlsi circuit from a mixture of discrete and continuous functions h. farhat m. zand h. saiedian using vhdl for datapath synthesis v. olive r. airiau j. m. berge a. robert fault emulation: a new approach to fault grading kwang-ting cheng shi-yu huang wei-jin dai two-level logic minimization for low power in this paper we present a complete boolean method for reducing the power consumption in two-level combinational circuits. the two-level logic optimizer performs the logic minimization for low power targeting static pla, general logic gates, and dynamic pla implementations. we modify the espresso algorithm by adding our heuristics, which bias logic minimization toward lowering power dissipation. in our heuristics, signal probabilities and transition densities are two important parameters. the experimental results are promising. jyh-mou tseng jing-yang jou minimization of chip size and power consumption of high-speed vlsi buffers d. zhou x. y. liu post-placement residual-overlap removal with minimal movement sudip nag kamal chaudhary tricap - a three dimensional capacitance solver for arbitrarily shaped conductors on printed circuit boards and vlsi interconnections matthias tröscher hans hartmann georg klein andreas plettner an over-cell gate array channel router a gate array router that utilizes horizontal and vertical over-cell routing channels to increase cell density is described. logic macros, with fixed intraconnect metal that may span several cell columns, are mapped onto the array producing partially filled routing channels. macro interconnects are loosely assigned to the partially filled horizontal and vertical routing channels during global routing. each loose horizontal channel segment is assigned to a channel track using a maze router. vertical channel segments are completed by a modified dogleg channel router. howard e. krohn a deterministic approach to adjacency testing for delay faults adjacency testing for delay faults is examined in both theory and implementation.
we shall show that the necessary and sufficient conditions for adjacency testability yield an efficient method of robust delay test generation. empirical results (including several different cost measurements) are presented which demonstrate that our technique: (1) achieves high fault coverages under both the robust and nonrobust delay fault models and (2) is cost effective. c. t. glover m. r. mercer enor: model order reduction of rlc circuits using nodal equations for efficient factorization bernard n. sheehan a study of 80x86/80x87 floating-point execution the evolution of a processor architecture must include enhancements to both the fixed and floating-point units. to increase overall performance, architectural and implementation performance improvements to the floating-point unit must keep pace with the performance enhancements made to the fixed-point unit. in this paper we evaluate the performance enhancements provided by the evolution of the floating-point co-processor for the intel 80x86 family. using a suite of traces, we show how code compiled for predecessor co-processors will perform on the current floating-point implementation. zoran miljanic david r. kaeli evaluating stream buffers as a secondary cache replacement s. palacharla r. e. kessler driving toward higher iddq test quality for sequential circuits: a generalized fault model and its atpg hisashi kondo kwang-ting cheng facet: a procedure for the automated synthesis of digital systems c. j. tseng d. p. siewiorek cache evaluation and the impact of workload choice alan jay smith performance driven resynthesis by exploiting retiming-induced state register equivalence priyank kalla maciej j. ciesielski performance verification of circuits this paper describes a multi-level simulation strategy for verifying and optimizing vlsi circuit performance. circuit simulation alone is insufficient for ensuring that vlsi designs meet performance targets. to meet vlsi needs, a tri-level family of simulation tools consisting of critical path analyzers, parasitic timing simulators, and circuit simulators is proposed. the relationship and interface between these tools, including how they combine "tops-down" and "bottoms-up" design methodologies, and some results from the initial implementation of this strategy in actual vlsi product designs are also discussed. jerry mar you-pang wei system-level hardware/software trade-offs samuel p. harbison abstract routing of logic networks for custom module generation this paper describes a switchbox-type router for custom vlsi module generation as performed by a module planner. a module is decomposed into abstract cells consisting of global routes and boolean functional specifications. each abstract cell is given to a cell synthesizer which generates the circuit layout and through-the-cell routing. abstract routing for a module planner is in some sense similar to switchbox routing to the degree that all of the routes are generated internally within a rectangular boundary (routes are coming from four sides). the principal difference with respect to standard switchbox routing is at the geometric level, where a cell synthesizer generates the routing conduction layers along with circuit devices for each abstract cell within this rectangular region.
the aspects of this paper which are thought to be novel contributions are 1) a relative pin assignment algorithm for the abstract cells; 2) a global routing penalty function which not only considers previous routes, but also considers gate complexity within the cells; 3) an efficient optimization algorithm for minimizing the number of tracks running through the module. s. t. healey w. j. kubitz hybrid floorplanning based on partial clustering and module restructuring takayuki yamanouchi kazuo tamakashi takashi kambe constrained via minimization with practical considerations for multi-layer vlsi/pcb routing problems sung-chuan fang kuo-en chang wu-shiung feng sao-jie chen proptest: a property based test pattern generator for sequential circuits using test compaction ruifeng guo sudhakar m. reddy irith pomeranz formally verified redundancy removal stefan hendricx luc claesen condition graphs for high-quality behavioral synthesis identifying mutual exclusiveness between operators during behavioral synthesis is important in order to reduce the required number of control steps or hardware resources. to improve the quality of the synthesis result, we propose a representation, the condition graph, and an algorithm for identification of mutually exclusive operators. previous research efforts have concentrated on identifying mutual exclusiveness by examining language constructs such as if-then-else statements. thus, their results heavily depend on the description styles. the proposed approach can produce results independent of description styles and identify more mutually exclusive operators than any previous approaches. the condition graph and the proposed algorithm can be used in any scheduling or binding algorithms. experimental results on several benchmarks have shown the efficiency of the proposed representation and algorithm. hsiao-ping juan viraphol chaiyakul daniel d. gajski analysis of superposition of streams into a cache buffer robert j. t. morris an over-the-cell router a program that produces single-layer planar routing over the cells for i2l and lst2l logic arrays is described. this router has been integrated into a layout system which was previously restricted to the layout of standard cell lsi chips. when used in conjunction with a channel router, the complete routing is produced automatically. this paper defines the over-the-cell routing problem, describes the algorithms for its solution, and presents typical routing results. david n. deutsch paul glick a "dogleg" channel router d. n. deutsch soft decision maximum likelihood decoders for binary linear block codes implemented on fpgas (abstract) hidehisa nagano takayuki suyama akira nagoya programs for verifying circuit connectivity of mos/lsi mask artwork this paper describes three programs which perform connectivity rule check, logic gate recognition for logic simulation and circuit connectivity comparison. these programs have been developed for verifying circuit connectivity extracted from mask artwork. powerful algorithms are used in these programs, including a heuristic graph comparison algorithm, to realize highly practical verification aids. through the combined use of these programs, more cost-effective verification is possible. makoto takashima takashi mitsuhashi toshiaki chiba kenji yoshida transition reduction in carry-save adder trees p. larsson c. nicol symmetric transparent bist for rams s. hellebrand h.-j. wunderlich v. n.
yarmolik ace: a circuit extractor this paper describes the design, implementation and performance of a flat edge-based circuit extractor for nmos circuits. the extractor is able to work on large and complex designs; it can handle arbitrary geometry and outputs a comprehensive wirelist. measurements show that the run time of the edge-based algorithm used is linear in the size of the circuit, with low implementation overheads. the extractor is capable of analyzing a circuit with 20,000 transistors in less than 30 minutes of cpu time on a vax 11/780. the high performance of the extractor has changed the role that a circuit extractor plays in the design process, as it is now possible to extract a chip a number of times during the same session. anoop gupta three competing design methodologies for asic's: architectural synthesis, logic synthesis, and module generation k. keutzer vhdl switch level fault simulation christopher a. ryan joseph g. tront nova: state assignment of finite state machines for optimal two-level logic implementations the problem of encoding the states of a synchronous finite state machine (fsm), so that the area of a two-level implementation of the combinational logic is minimized, is addressed. as in previous approaches, the problem is reduced to the solution of the combinatorial optimization problems defined by the translation of the cover obtained by a multiple-valued logic minimization or by a symbolic minimization into a compatible boolean representation. in this paper we present algorithms for their solution, based on a new theoretical framework that offers advantages over previous approaches to develop effective heuristics. the algorithms are part of nova, a program for optimal encoding of control logic. final areas averaging 20% less than other state assignment programs and 30% less than the best random solutions have been obtained. literal counts averaging 30% less than the best random solutions have been obtained. t. villa a. sangiovanni-vincentelli behavioral level transformation in the cmu-da system the carnegie-mellon university design automation system (cmu-da) [2] consists of a set of computer programs whose goal is to produce a complete design in a user-specified device technology, given as input a behavioral description of the piece of hardware to be designed and a set of constraints. this paper describes one of the tools in the cmu-da environment - a software package to perform optimizing transformations at the behavioral level. motivations for these transformations are given, and an example of their use is shown. robert a. walker donald e. thomas the standard transistor array (star) (part ii: automatic cell placement techniques) layout of a star device consists of the placement of standard cells (circuit elements) on the array and the routing of conductors between cells. cell placement must be such that routing is not hindered. also, placement procedures must be cost effective and easy to implement on a digital computer. a placement procedure for stars is described in this paper that satisfies these characteristics. the procedure attempts to optimize the placement with respect to several criteria including expected routing channel usage and routing via requirements. computer implementations of the procedure are discussed. experimental results are presented which indicate that the procedure yields near-optimum results in computationally convenient amounts of time. glenn w. cox b. d.
carroll synthesis of low-overhead interfaces for power-efficient communication over wide buses l. benini a. macii e. macii m. poncino r. scarsi timing driven floorplanning on programmable hierarchical targets the goal of this paper is to perform a timing optimization of a circuit described by a network of cells on a target structure whose connection delays have discrete values following its hierarchy. the circuit is modelled by a set of timed cones whose delay histograms allow their classification into critical, potentially critical and neutral cones according to predicted delays. the floorplanning is then guided by this cone structuring and has two innovative features: first, it is shown that the placement of the elements of the neutral cones has no impact on timing results, thus a significant reduction is obtained; second, despite a greedy approach, a near optimal floorplan is achieved in a large number of examples. s. a. senouci a. amoura h. krupnova g. saucier minimum power and area n-tier multilevel interconnect architectures using optimal repeater insertion minimum power cmos asic macrocells are designed by minimizing the macrocell area using a new methodology to optimally insert repeaters for n-tier multilevel interconnect architectures. the minimum macrocell area and power dissipation are projected for the 100, 70 and 50 nm technology generations and compared with an n-tier design without using repeaters. repeater insertion and a novel interconnect geometry scaling technique decrease the power dissipation by 58-68% corresponding to a macrocell area reduction of 70-78% for the global clock frequency designs of these three technology generations. raguraman venkatesan jeffrey a. davis keith a. bowman james d. meindl dotsplus - better than braille? john a. gardner software-controlled caches in the vmp multiprocessor vmp is an experimental multiprocessor that follows the familiar basic design of multiple processors, each with a cache, connected by a shared bus to global memory. each processor has a synchronous, virtually addressed, single master connection to its cache, providing very high memory bandwidth. an unusually large cache page size and fast sequential memory copy hardware make it feasible for cache misses to be handled in software, analogously to the handling of virtual memory page faults. hardware support for cache consistency is limited to a simple state machine that monitors the bus and interrupts the processor when a cache consistency action is required. in this paper, we show how the vmp design provides the high memory bandwidth required by modern high-performance processors with a minimum of hardware complexity and cost. we also describe simple solutions to the consistency problems associated with virtually addressed caches. simulation results indicate that the design achieves good performance providing data contention is not excessive. d. r. cheriton g. a. slavenburg p. d. boyle algorithms for routing and testing routability of planar vlsi layouts this paper studies the problem of routing wires in a grid among features on one layer of a vlsi chip, when a sketch of the layer is given. a sketch specifies the positions of features and the topology of the interconnecting wires. we give polynomial-time algorithms that (1) determine the routability of a sketch, and (2) produce a routing of a sketch that optimizes both individual and total wire length.
these algorithms subsume most of the polynomial-time algorithms in the literature for planar routing and routability testing in the rectilinear grid model. we also provide an explicit construction of a database, called the rubber-band equivalent, to support computation involving the layout topology. c e leiserson f m maley a prototype framework for knowledge-based analog circuit synthesis r. harjani r. a. rutenbar l. r. carley how to use knowledge in an analysis process heiko holzheuer read-after-read memory dependence prediction we identify that typical programs exhibit highly regular read-after-read (rar) memory dependence streams. we exploit this regularity by introducing read-after-read (rar) memory dependence prediction. we also present two rar memory dependence prediction-based memory latency reduction techniques. in the first technique, a load can obtain a value by simply naming a preceding load with which a rar dependence is predicted. the second technique speculatively converts a series of loadi-usei, …, loadn-usen chains into a single loadi-usei…usen producer/consumer graph. our techniques can be implemented as surgical extensions to the recently proposed read-after-write (raw) dependence prediction based speculative memory cloaking and speculative memory bypassing. on average, our techniques provide correct values for an additional 20% (integer codes) and 30% (floating-point codes) of all loads. moreover, a combined raw- and rar-based cloaking/bypassing mechanism improves performance by 6.44% (integer) and 4.66% (floating-point) even when naive memory dependence speculation is used. the original raw-based cloaking/bypassing mechanism yields improvements of 4.28% (integer) and 3.20% (floating-point). andreas moshovos gurindar s. sohi low-power design tools - where is the impact? (panel) jan m. rabaey nanette collins bill bell jerry frenkil vassilios gerousis massoud pedram deo singh jim sproch random generation of test instances for logic optimizers kazuo iwama kensuke hino on reordering instruction streams for pipelined computers this paper describes a method to reorder straight-line instruction streams for pipelined computers which have one instruction issue unit but may contain multiple function units. the objective is to make the most efficient usage of the pipelines within the computer system. the input to the scheduler is the intermediate code of a compiler, and is represented by a data dependence graph (ddg). the scheduler is a kind of list scheduler. the data dependences and the pipeline effects of the function units within the system are considered to find the most suitable time slot for each node during reordering. the scheduler has been implemented and several scientific application programs have been tested. the results show that in most of the cases the scheduler will achieve the optimal result. the average instruction issue rate is over 96%. as a comparison, the issue rate of an ordinary compiler is only 22%, and the issue rate of a compiler that accounts for the pipeline effect but does not reorder the instruction stream is about 45%. j.-j. shieh c. papachristou stochastic sequential machine synthesis targeting constrained sequence generation diana marculescu radu marculescu massoud pedram efficient use of large don't cares in high-level and logic synthesis r. a. bergamaschi d. brand l. stok m. berkelaar s.
prakash intrinsic response for analog module testing using an analog testability bus a parasitic effect removal methodology is proposed to handle the large parasitic effects in analog testability buses. the removal is done by an on- chip test generation technique and an intrinsic response extraction algorithm. on-chip test generation creates test signals on-chip to avoid the parasitic effects of the test application bus. the intrinsic response extraction cross- checks and cancels the parasitic effects of both test application and response observation paths. the tests using both spice simulation and mnabst-1 p1149.4 test chip reveal that the proposed algorthm can not only remove the parasitic effects of the test buses but also tolerate test signal variations. furthermore, it is robust enough to handle loud environmental noise and the nonlinearity of the switching devices. chauchin su yue-tsang chen shyh-jye jou partitioned register files for vliws: a preliminary analysis of tradeoffs andrea capitanio nikil dutt alexandru nicolau evaluating the impact of memory system performance on software prefetching and locality optimizations software prefetching and locality optimizations are techniques for overcoming the speed gap between processor and memory. in this paper, we evaluate the impact of memory trends on the effectiveness of software prefetching and locality optimizations for three types of applications: regular scientific codes, irregular scientific codes, and pointer-chasing codes. we find for many applications, software prefetching outperforms locality optimizations when there is sufficient memory bandwidth, but locality optimizations outperform software prefetching under bandwidth- limited conditions. the break-even point (for 1 ghz processors) occurs at roughly 2.5 gbytes/sec on today's memory systems, and will increase on future memory systems. we also study the interactions between software prefetching and locality optimizations when applied in concert. naively combining the techniques provides robustness to changes in memory bandwidth and latency, but does not yield additional performance gains. we propose and evaluate several algorithms to better integrate software prefetching and locality optimizations, including a modified tiling algorithm, padding for prefetching, and index prefetching. abdel-hameed a. badawy aneesh aggarwal donald yeung chau-wen tseng gala - an automatic layout system for high density cmos gate arrays this paper describes the automatic layout software system - gala (gate array layout automation) - developed at hughes aircraft company for a high density cmos gate array family with 3u design rules. the system layout and hierarchical decomposition schemes used in gala are presented. the particular design environment and style are discussed. the system has been used in production for the various sizes of the hughes hcmos gate array family for over 2 years. routing results for some designs produced during that period are presented. b. n. tien b. s. ting j. cheam k. chow s. c. evans an mpeg-2 decoder case study as a driver for a system level design methodology pieter van der wolf paul lieverse mudit goel david la hei kees vissers early load address resolution via register tracking higher microprocessor frequencies accentuate the performance cost of memory accesses. this is especially noticeable in the intel's ia32 architecture where lack of registers results in increased number of memory accesses. 
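before the continuation of this entry, which details the tracking schemes, here is a minimal, hypothetical sketch of the general idea of decode-time register tracking: keep the values of registers updated only by reg±immediate operations, so that load addresses of the form reg+displacement can be resolved early. the instruction names and fields below are invented for illustration and are not taken from the paper.

```python
# toy decode-time register tracker (hypothetical instruction format).
class RegisterTracker:
    def __init__(self, initial=None):
        self.known = dict(initial or {})   # register name -> known value, if any

    def on_decode(self, op, dst=None, src=None, imm=0):
        if op == "mov_imm":                # dst := imm
            self.known[dst] = imm
        elif op == "add_imm":              # dst := src + imm
            if src in self.known:
                self.known[dst] = self.known[src] + imm
            else:
                self.known.pop(dst, None)  # value no longer trackable
        else:                              # any other write kills the destination
            if dst is not None:
                self.known.pop(dst, None)

    def resolve_load(self, base, disp):
        """return the load address if the base register is tracked, else None."""
        return self.known[base] + disp if base in self.known else None

# e.g. stack-pointer style updates can be followed at decode time
t = RegisterTracker({"esp": 0x8000})
t.on_decode("add_imm", dst="esp", src="esp", imm=-4)
print(hex(t.resolve_load("esp", 0)))       # 0x7ffc
```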
this paper presents a novel, non-speculative technique that partially hides the increasing load-to-use latency by allowing the early issue of load instructions. early load address resolution relies on register tracking to safely compute the addresses of memory references in the front-end part of the processor pipeline. register tracking enables decode-time computation of register values by tracking simple operations of the form reg±immediate. register tracking may be performed in any pipeline stage following instruction decode and prior to execution. several tracking schemes are proposed in this paper: stack pointer tracking allows safe early resolution of stack references by keeping track of the value of the esp register (the stack pointer). about 25% of all loads are stack loads and 95% of these loads may be resolved in the front-end. absolute address tracking allows the early resolution of constant-address loads. displacement-based tracking tackles all loads with addresses of the form reg±immediate by tracking the values of all general-purpose registers. this class corresponds to 82% of all loads, and about 65% of these loads can be safely resolved in the front-end pipeline. the paper describes the tracking schemes, analyzes their performance potential in a deeply pipelined processor and discusses the integration of tracking with memory disambiguation. michael bekerman adi yoaz freddy gabbay stephan jourdan maxim kalaev ronny ronen a video driver system designed using a top-down, constraint-driven methodology iasson vassiliou henry chang alper demir edoardo charbon paolo miliozzi alberto sangiovanni-vincentelli bites: a bdd based test pattern generator for strong robust path delay faults rolf drechsler practical considerations of clock-powered logic recovering and reusing circuit energies that would otherwise be dissipated as heat can reduce the power dissipated by a vlsi chip. to accomplish this requires a power source that can efficiently inject and extract energy, and an efficient power delivery system to connect the power source to the circuit nodes. the additional circuitry and timing required to support this process can readily exceed the power-savings benefit. clock-powered logic is a circuit-level, energy-recovery approach that has been implemented in two generations of small-scale microprocessor experiments. the results have shown that it is possible and practical to extract useful amounts of power savings by leveraging the additional circuitry for other compatible purposes. the capabilities and limitations of clock-powered logic as a competitive low-power approach are presented and discussed in this paper. william athas crosshatch disk array for improved reliability and performance s. w. ng a loop optimization technique based on scheduling table loop optimization is an important aspect of microcode compaction to minimize execution time. in this paper a new loop optimization technique for horizontal microprograms is presented, which makes use of the cyclic regularity of loops. we have extended the concept of the reservation table, which is used to develop a pipeline control strategy, so that both data dependencies and resource conflicts are taken into account. based on the analysis of the extended reservation table, or scheduling table, an optimal schedule can be obtained. the iterations of a loop are then rearranged to form a new loop body, whose length may be greater than that of the original one. but the average initiation latency between iterations is minimal. d.
liu w. k. giloi reducing energy requirements for instruction issue and dispatch in superscalar microprocessors (poster session) recent studies [mgk 98, tiw 98] have confirmed that a significant amount of energy is dissipated in the process of instruction dispatching and issue in modern superscalar microprocessors. we propose a model for the energy dissipated by instruction dispatching and issuing logic in modern superscalar microprocessors and validate it through register level simulations and spice-measured dissipation coefficients from 0.5 micron cmos layouts of relevant circuits. alternative organizations are studied for instruction window buffers that result in energy savings of about 47% over traditional designs. kanad ghose exact width and height minimization of cmos cells robert l. maziasz john p. hayes inaccuracies in power estimation during logic synthesis daniel brand chandu visweswariah high-level power estimation with interconnect effects we extend earlier work on high-level average power estimation to include the power due to interconnect loading. the resulting technique is a combination of an rtl-level gate count prediction method and average interconnect estimation based on rent's rule. the method can be adapted to be used with different place and route engines and standard cell libraries. for a number of benchmark circuits, the method is verified by extracting wire lengths from a layout of each circuit and then comparing the predicted (at rtl) power against that measured using spice. an average error of 14.4% is obtained for the average interconnect length, and an average error of 25.8% is obtained for average power estimation including interconnect effects. kavel m. buyuksahin farid n. najm boolean techniques for low power driven re-synthesis r. iris bahar fabio somenzi test generation for gigahertz processors using an automatic functional constraint extractor raghuram s. tupuri arun krishnamachary jacob a. abraham design and implementation of a field programmable analogue array adrian bratt ian macbeth dynamic ipc/clock rate optimization current microprocessor designs set the functionality and clock rate of the chip at design time based on the configuration that achieves the best overall performance over a range of target applications. the result may be poor performance when running applications whose requirements are not well-matched to the particular hardware organization chosen. we present a new approach called complexity-adaptive processors (caps) in which the ipc/clock rate tradeoff can be altered at runtime to dynamically match the changing requirements of the instruction stream. by exploiting repeater methodologies used increasingly in deep sub-micron designs, caps achieve this flexibility with potentially no cycle time impact compared to a fixed architecture. our preliminary results in applying this approach to on-chip caches and instruction queues indicate that caps have the potential to significantly outperform conventional approaches on workloads containing both general-purpose and scientific applications. david h. albonesi extracting schematic-like information from cmos circuit net-lists a global circuit structure is established from the circuit net-list based on knowledge about cmos circuits. this approach first partitions the circuit network and then assigns signal flow directions accordingly. the information obtained is absent in the net-list but explicitly present in schematic diagrams.
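the following is a simplified illustration, not the entry's algorithm, of one common way such structure is recovered from a flat transistor net-list: grouping transistors into channel-connected components by merging source/drain nets while treating the supply nets as barriers. the net-list format and all names below are hypothetical.

```python
# toy channel-connected-component extraction from a flat transistor net-list.
class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def channel_connected_components(transistors, supplies=("vdd", "gnd")):
    """transistors: list of (name, gate_net, source_net, drain_net)."""
    uf = UnionFind()
    for _, _, s, d in transistors:
        if s not in supplies and d not in supplies:
            uf.union(s, d)              # a source/drain connection ties the nets
    groups = {}
    for name, _, s, d in transistors:
        key_net = s if s not in supplies else d
        groups.setdefault(uf.find(key_net), []).append(name)
    return list(groups.values())

# e.g. an inverter followed by a pass transistor forms two components
netlist = [("mp1", "a", "vdd", "y"), ("mn1", "a", "y", "gnd"),
           ("mn2", "sel", "y2", "out")]
print(channel_connected_components(netlist))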
such interpretation of a circuit network can provide extra heuristics to assist in various cmos circuit design tasks. w.-j. lue l. p. mcnamee efficient construction of binary moment diagrams for verifying arithmetic circuits kiyoharu hamaguchi akihito morita shuzo yajima a survey of high level microprogramming languages this paper surveys the current state of design and implementation of high level microprogramming languages. first, a number of important design issues are formulated. next, four microprogramming languages are considered in detail, to see how each of them has approached these issues. brief remarks are made about six other languages. finally, some concluding remarks are made. marleen sint evaluating the performance of four snooping cache coherency protocols write-invalidate and write-broadcast coherency protocols have been criticized for being unable to achieve good bus performance across all cache configurations. in particular, write-invalidate performance can suffer as block size increases, and large cache sizes will hurt write-broadcast. read-broadcast and competitive snooping extensions to the protocols have been proposed to solve each problem. our results indicate that the benefits of the extensions are limited. read-broadcast reduces the number of invalidation misses, but at a high cost in processor lockout from the cache. the net effect can be an increase in total execution cycles. competitive snooping benefits only those programs with high per-processor locality of reference to shared data. for programs characterized by inter-processor contention for shared addresses, competitive snooping can degrade performance by causing a slight increase in bus utilization and total execution time. s. j. eggers r. h. katz performance driven global routing and wiring rule generation for high speed pcbs and mcms sharad mehrotra paul franzon michael steer behavioral synthesis of combinational logic using spectral-based heuristics a prototype system developed to convert a behavioral representation of a boolean function in obdd form into an initial structural representation is described and experimental results are given. the system produces a multilevel circuit using heuristic rules based on properties of a subset of spectral coefficients. since the behavioral description is in obdd form, efficient methods are used to quickly compute the small subset of spectral coefficients needed for the application of the heuristics. the heuristics guide subsequent decompositions of the obdd, resulting in an iterative construction of the structural form. at each stage of the translation, the form of the decomposition is chosen in order to achieve optimization goals. m. a. thornton v. s. s. nair the yags branch prediction scheme a. n. eden t. mudge the aurora ram compiler ajay chandna c. david kibler richard b. brown mark roberts karem a. sakallah delay: an efficient tool for retiming with realistic delay modeling kumar n. lalgudi marios c. papaefthymiou a 2-dimensional placement algorithm for the layout of electrical circuits d. g. schweikert architectural support for the management of tightly-coupled fine-grain goals in flat concurrent prolog we propose architectural support for goal management as part of a special-purpose processor architecture for the efficient execution of flat concurrent prolog. the goal management operations halt, spawn, suspend and commit are decoupled from goal reduction and overlapped in the goal management unit. their efficient execution is enabled using a goal cache.
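purely as an illustration of the four operations named above decoupled from goal reduction, here is a toy model; the data structures and names are invented and are not the paper's design.

```python
# hypothetical goal-management toy: a goal queue plus a small "goal cache".
from collections import deque

class GoalManager:
    def __init__(self, cache_size=4):
        self.queue = deque()          # active goals awaiting reduction
        self.suspended = {}           # variable -> goals waiting on its binding
        self.cache_size = cache_size

    def spawn(self, goal):
        self.queue.append(goal)

    def suspend(self, goal, variable):
        self.suspended.setdefault(variable, []).append(goal)

    def commit(self, variable):
        # a committed binding wakes every goal suspended on that variable
        for goal in self.suspended.pop(variable, []):
            self.queue.append(goal)

    def halt(self, goal):
        try:
            self.queue.remove(goal)
        except ValueError:
            pass

    def goal_cache(self):
        # the goals a reduction unit would find already resident
        return list(self.queue)[: self.cache_size]
```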
we evaluate the performance of the goal management support using an analytic performance model and program parameters characteristic of the system's development workload. most goal management operations are completely overlapped, resulting in a speedup of 2. higher speedups are obtained for workloads that exhibit greater goal management complexity. leon alkalaj tomás lang miloš ercegovac symbolic optimization of fsm networks based on sequential atpg techniques fabrizio ferrandi franco fummi enrico macii massimo poncino donatella sciuto cris: a test cultivation program for sequential vlsi circuits daniel g. saab youssef g. saab jacob a. abraham re-engineering of timing constrained placements for regular architectures anmol mathur k. c. chen c. l. liu patege: an automatic dc parametric test generation system for series gated ecl circuits for ecl circuits, dc parametric tests such as input current (iil, iih), reference voltage (vbb), and power supply current (icc) tests are executed as well as functional tests. this paper describes an automatic dc parametric test generation system, patege, for the series gated ecl circuits. patege can automatically generate the test patterns and calculate the expected values for iil, iih, vbb and icc tests. takuji ogihara shuichi saruyama shinichi murai congestion estimation during top-down placement congestion is one of the fundamental issues in vlsi physical design. in this paper, we propose two congestion estimation approaches for early placement stages. first, we theoretically analyze the peak congestion value of the design and experimentally validate the estimation approach. second, we estimate regional congestion in the early top-down placement. this is done by combining the wirelength distribution model and inter-region wire estimation. both approaches are based on the well known rent's rule, which was previously used for wirelength estimation. this is the first attempt to predict congestion using rent's rule. the estimation results are compared with the layout after placement and global routing. experiments on large industry circuits show that the early congestion estimation based on rent's rule is a promising approach. xiaojian yang ryan kastner majid sarrafzadeh seesim - a fast synchronous sequential circuit fault simulator with single event equivalence ching ping wu chung len lee wen zen shen mixed-signal bist using correlation and reconfigurable hardware (poster paper) j. machado da silva j. s. duarte j. s. matos a method of automatic data path synthesis a method of automatically synthesizing data paths from a behavioral description has been developed. an initial implementation of this method, which is integrated into the carnegie-mellon university design automation system, is presented in this paper. its principles of operation are explained and an evaluation of its performance is given. this method of automatic synthesis surpasses the performance of previous cmu-da approaches. further, a designer can use it in a semi-automatic fashion and complement its abilities with his expert insight. charles y. hitchcock donald e. thomas automatic generation of fpga routing architectures from high-level descriptions in this paper we present a "high-level" fpga architecture description language which lets fpga architects succinctly and quickly describe an fpga routing architecture.
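for a sense of what such a high-level description might contain, here is a hypothetical, much-simplified example; the field names are invented and are not the actual file format of the tool described in this entry.

```python
# hypothetical, simplified "high-level routing architecture description".
ARCH = {
    "lut_size": 4,                 # inputs per lookup table
    "cluster_size": 8,             # luts per logic block
    "channel_width": 64,           # routing tracks per channel
    "segment_lengths": [1, 4],     # wire segments spanning 1 and 4 blocks
    "switch_block": "subset",      # switch-block connection pattern
    "fc_in": 0.5,                  # fraction of tracks each input pin connects to
    "fc_out": 0.25,                # fraction of tracks each output pin drives
}

def count_input_switches_per_tile(arch):
    """rough count of programmable input connections per logic-block tile."""
    input_pins = arch["lut_size"] * arch["cluster_size"]
    return int(input_pins * arch["fc_in"] * arch["channel_width"])

print(count_input_switches_per_tile(ARCH))
```

an "architecture generator" of the kind described next would expand a compact description like this into a fully specified flat architecture.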
we then present an "architecture generator" built into the vpr cad tool [1, 2] that converts this high-level architecture description into a detailed and completely specified flat fpga architecture. this flat architecture is the representation with which cad optimization and visualization modules typically work. by allowing fpga researchers to specify an architecture at a high level, an architecture generator enables quick and easy "what-if" experimentation with a wide range of fpga architectures. the net effect is a more fully optimized final fpga architecture. in contrast, when fpga architects are forced to use more traditional methods of describing an fpga (such as the manual specification of every switch in the basic tile of the fpga), far less experimentation can be performed in the same time, and the architectures experimented upon are likely to be highly similar, leaving important parts of the design space completely unexplored. this paper describes the automated routing architecture generation problem, and highlights the two key difficulties --- creating an fpga architecture that matches all of an fpga architect's specifications, while simultaneously determining good values for the many unspecified portions of an fpga so that a high quality fpga results. we describe the method by which we generate fpga routing architectures automatically, and present several examples. vaughn betz jonathan rose fast prototyping: a system design flow for fast design, prototyping and efficient ip reuse francois pogodalla richard hersemeule pierre coulomb the design space layer: supporting early design space exploration for core-based designs helvio p. peixoto margarida f. jacome ander royo juan c. lopez optimizations and oracle parallelism with dynamic translation we describe several optimizations which can be employed in a dynamic binary translation (dbt) system, where low compilation/translation overhead is essential. these optimizations achieve a high degree of ilp, sometimes even surpassing a static compiler employing more sophisticated and more time-consuming algorithms [9]. we present results in which we employ these optimizations in a dynamic binary translation system capable of computing oracle parallelism. kemal ebcioglu erik r. altman michael gschwind sumedh sathaye verifying sequential equivalence using atpg techniques in this paper we address the problem of verifying the equivalence of two sequential circuits. state-of-the-art sequential optimization techniques such as retiming and sequential redundancy removal can handle designs with up to hundreds or even thousands of flip-flops. however, the bdd-based approaches for verifying sequential equivalence can easily run into memory explosion for such designs. in an attempt to handle larger circuits, we modify test-pattern-generation techniques for verification. the suggested approach utilizes the popular and efficient backward-justification technique used in most sequential atpg programs. we present several techniques to enhance the efficiency of this approach by (1) identifying equivalent flip-flop pairs using an induction-based algorithm, and (2) generalizing the idea of exploring the structural similarity between circuits to perform verification in stages. this atpg-based framework is suitable for verifying circuits either with or without a reset state. in order to extend this approach to verify retimed circuits, we introduce a delay-compensation-based algorithm for preprocessing the circuits.
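a tiny sketch of the common first step behind point (1) above: pairing candidate equivalent flip-flops by random simulation. this is only the candidate-generation heuristic, not the entry's induction-based proof, and the circuit model used here is hypothetical.

```python
# candidate flip-flop pairing by random simulation (candidates still need proof).
import random

def candidate_ff_pairs(next_a, next_b, reset_a, reset_b, num_inputs,
                       vectors=1000, seed=0):
    """pair flip-flops of two machines whose simulated value traces agree.
    next_a/next_b map (state_dict, input_tuple) -> new state_dict."""
    rng = random.Random(seed)
    sa, sb = dict(reset_a), dict(reset_b)
    tr_a = {f: [v] for f, v in sa.items()}
    tr_b = {f: [v] for f, v in sb.items()}
    for _ in range(vectors):
        inp = tuple(rng.randint(0, 1) for _ in range(num_inputs))
        sa, sb = next_a(sa, inp), next_b(sb, inp)
        for f, v in sa.items():
            tr_a[f].append(v)
        for f, v in sb.items():
            tr_b[f].append(v)
    return [(fa, fb) for fa in tr_a for fb in tr_b if tr_a[fa] == tr_b[fb]]
```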
the experimental results of verifying the correctness of circuits after sequential redundancy removal and retiming with up to several hundred flip-flops are presented. layout-driven rtl binding techniques for high-level synthesis using accurate estimators the importance of effective and efficient accounting of layout effects is well established in high-level synthesis (hls), since it allows more realistic exploration of the design space and the generation of solutions with predictable metrics. this feature is highly desirable in order to avoid unnecessary iterations through the design process. in this article, we address the problem of layout-driven register-transfer-level (rtl) binding as this step has a direct relevance to the final performance of the design. by producing not only an rtl design but also an approximate physical topology of the chip-level implementation, we ensure that the solution will perform at the predicted metric once implemented, thus avoiding unnecessary delays in the design process. min xu fadi j. kurdahi speeding up pipelined circuits through a combination of gate sizing and clock skew optimization harsha sathyamurthy sachin s. sapatnekar john p. fishburn compiler synthesized dynamic branch prediction scott mahlke balas natarajan exploiting ilp in page-based intelligent memory this study compares the speed, area, and power of different implementations of active pages [ocs98], an intelligent memory system which helps bridge the growing gap between processor and memory performance by associating simple functions with each page of data. previous investigations have shown up to 1000x speedups using a block of reconfigurable logic to implement these functions next to each sub-array on a dram chip. in this study, we show that instruction-level parallelism, not hardware specialization, is the key to the previous success with reconfigurable logic. in order to demonstrate this fact, an active page implementation based upon a simplified vliw processor was developed. unlike conventional vliw processors, power and area constraints lead to a design which has a small number of pipeline stages. our results demonstrate that a four-wide vliw processor attains comparable performance to that of pure fpga logic but requires significantly less area and power. mark oskin justin hensley diana keen frederic t. chong matthew farrens aneet chopra new algorithms for gate sizing olivier coudert ramsey haddad srilatha manne experiments on the synthesis and testability of non-scan finite state machines michael pabst tiziano villa a. richard newton low power sequential circuit design by using priority encoding and clock gating this paper presents a state assignment technique called priority encoding, which uses multi-code assignment plus clock gating to reduce power dissipation in sequential circuits. the basic idea is to assign multiple codes to states so as to enable more effective clock gating in the sequential circuit. practical design examples are studied and simulated by pspice. experimental results demonstrate that the priority encoding technique can result in sizable power saving. xunwei wu massoud pedram computer-aided design of electrical circuits simulation techniques (a tutorial) one of the very first applications of digital computers was that of simulation. perhaps more computer time has been used over the years in this area than any other. many programs are responsible for the largest computers in existence grinding away, day in and day out, in this general area.
this paper will cover simulation as it applies to the design and development of very large scale integrated (vlsi) circuits. these techniques cover broadly the areas of process and circuit simulation, logic and timing simulation (to include faulted machine performance), and system simulation or simulation at the major block level, sometimes referred to as the register transfer level or rtl simulation. this paper will present numerous references so that the reader can pursue further study as needed. naturally only a summary of each of the major techniques can be presented. it is again noted that this paper covers only the restricted area of simulation as it applies to the development of vlsi, although many topics have application far beyond this. glenn r. case a high level synthesis tool for mos chip design this paper describes a design tool called the functional design system (fds) that supports high level mos lsi design. designers can build circuits at the register transfer level by using a set of high level fds primitives. fds then automatically produces in seconds an accurate and efficient polycell implementation for these primitives. therefore, the design cycle time can be reduced significantly. fds is an integral part of a larger cad system [1] which supports other aspects of the design cycle, namely, graphical design capture, simulation, test generation, and layout. the system has proved to be highly successful in helping designers to develop extremely reliable chips in a short time frame. jean dussault chi-chang liaw michael m. tong mies: a microarchitecture design tool this paper describes mies, a design tool for the modeling, visualization, and analysis of vlsi microarchitectures. mies combines a graphical data path model and a symbolic control model and provides a number of user interfaces which allow these models to be created, simulated, and evaluated. j. a. nestor b. soudan z. mayet verification techniques for cache coherence protocols in this article we present a comprehensive survey of various approaches for the verification of cache coherence protocols based on state enumeration, symbolic model checking, and symbolic state models. since these techniques search the state space of the protocol exhaustively, the amount of memory required to manipulate that state information and the verification time grow very fast with the number of processors and the complexity of the protocol mechanisms. to be successful for systems of arbitrary complexity, a verification technique must solve this so-called state space explosion problem. the emphasis of our discussion is on the underlying theory in each method of handling the state space explosion problem, and on formulating and checking the safety properties (e.g., data consistency) and the liveness properties (absence of deadlock and livelock). we compare the efficiency and discuss the limitations of each technique in terms of memory and computation time. also, we discuss issues of generality, applicability, automaticity, and amenability to existing tools in each class of methods. no method is truly superior because each method has its own strengths and weaknesses. finally, refinements that can further reduce the verification time and/or the memory requirement are also discussed. fong pong michel dubois gate-level simulation of digital circuits using multi-valued boolean algebras scott woods giorgio casinovi timing preserving interface transformations for the synthesis of behavioral vhdl p.
gutberlet wolfgang rosenstiel low power self-timed radix-2 division (poster session) a self-timed radix-2 division scheme for low power consumption is proposed. by replacing dual-rail dynamic circuits in non-critical data paths with single-rail static circuits, power dissipation is decreased, yet performance is maintained by speculative remainder computation. spice simulation results show that the proposed design can achieve 33.8-ns latency for 56-bit mantissa division and 47% energy reduction compared to a fully dual-rail version. jae-hee won kiyoung choi technology mapping issues for an fpga with lookup tables and pla-like blocks in this paper we present new technology mapping algorithms for use in a programmable logic device (pld) that contains both lookup tables (luts) and pla-like blocks. the technology mapping algorithms partially collapse circuits to reduce either area or depth, and pack the circuits into a minimum number of luts and pla-like blocks. since no other technology mapping algorithm for this problem has been previously published, we cannot compare our approach to others. instead, to illustrate the importance of this problem we use our algorithms to investigate the benefits provided by a pld architecture with both luts and pla-like blocks compared to a traditional lut-based fpga. the experimental results indicate that our mixed pld architecture is more area-efficient than lut-based fpgas by up to 29%, or more depth-efficient by up to 75%. alireza kaviani stephen brown how to write awk and perl scripts to enable your eda tools to work together robert c. hutchins shankar hemmady evaluation of a concurrent error detection method for microprogrammed control units a. bailas l. l. kinney power-optimal encoding for dram address bus (poster session) this paper presents pyramid code, an optimal code for transmitting sequential addresses over a dram bus. constructed by finding an eulerian cycle on a complete graph, this code is optimal for conventional dram in the sense that it minimizes the switching activity on the time-multiplexed address bus from cpu to dram. experimental results on a large number of testbenches with different characteristics (i.e., sequential vs. random memory access behaviors) are reported and demonstrate a reduction of bus activity by as much as 50%. wei-chung cheng massoud pedram advanced simulation and modeling techniques for hardware quality verification of digital systems s. forno stephen rochel an automated system for testing lsi memory chips this paper describes a software system for testing lsi memory chips. this system achieves complete automation by customizing test data for a given part number design and by creating an overall test program to be used by a computer-controlled tester in a manufacturing environment. this system encompasses dc testing of the memory product and test sites, ac testing under a variety of timing conditions, and generating a complete set of ac functional test patterns. h. d. schnurmann l. j. vidunas r. m. peters a new integrated system for pla testing and verification part is a system for pla testing and verification, intended to be properly interfaced with other existing tools to generate a comprehensive design environment. to this purpose, it provides several facilities, among which is the capability of generating the fault population on the basis of layout information. part aims at producing a very compact test set for all detectable crosspoint defects, using limited amounts of run time and storage.
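the pyramid-code entry above rests on a standard cost model: the energy spent on the address bus grows with the number of lines that toggle between consecutive words. the c sketch below computes that switching-activity metric for an address trace; it only illustrates the quantity being minimized (the function names are ours), not the eulerian-cycle construction used by the paper, and it ignores the row/column multiplexing of a real dram bus.

    #include <stdint.h>
    #include <stddef.h>

    /* number of bus lines that toggle between two consecutive words */
    static int toggles(uint32_t prev, uint32_t next)
    {
        uint32_t x = prev ^ next;          /* differing bits */
        int n = 0;
        while (x) { n += (int)(x & 1u); x >>= 1; }
        return n;
    }

    /* total switching activity of an address trace; an encoding is
       cheaper the smaller this count is for typical traces */
    long bus_activity(const uint32_t *addr, size_t len)
    {
        long total = 0;
        for (size_t i = 1; i < len; i++)
            total += toggles(addr[i - 1], addr[i]);
        return total;
    }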
this is achieved by means of an efficient partitioning algorithm together with powerful heuristics. test minimality is ensured by a simple procedure. in the present paper these are discussed, experimental results are given, and a comparison with competing strategies is made. fabio somenzi silvano gai marco mezzalama paolo prinetto stand-by power minimization through simultaneous threshold voltage selection and circuit sizing supamas sirichotiyakul tim edwards chanhee oh jingyan zuo abhijit dharchoudhury rajendran panda david blaauw adaptive software cache management for distributed shared memory architectures an adaptive cache coherence mechanism exploits semantic information about the expected or observed access behavior of particular data objects. we contend that, in distributed shared memory systems, adaptive cache coherence mechanisms will outperform static cache coherence mechanisms. we have examined the sharing and synchronization behavior of a variety of shared memory parallel programs. we have found that the access patterns of a large percentage of shared data objects fall in a small number of categories for which efficient software coherence mechanisms exist. in addition, we have performed a simulation study that provides two examples of how an adaptive caching mechanism can take advantage of semantic information. john k. bennett john b. carter willy zwaenepoel design issues for dynamic voltage scaling processors in portable electronic devices generally have a computational load which has time-varying performance requirements. dynamic voltage scaling is a method to vary the processor's supply voltage so that it consumes the minimal amount of energy by operating at the minimum performance level required by the active software processes. a dynamically varying supply voltage has implications for the processor circuit design and design flow, but with some minimal constraints it is straightforward to design a processor with this capability. thomas d. burd robert w. brodersen partitioning-based standard-cell global placement with an exact objective dennis j.-h. huang andrew b. kahng using complete-1-distinguishability for fsm equivalence checking this article introduces the notion of a complete-1-distinguishability (c-1-d) property for simplifying equivalence checking of finite state machines (fsms). when a specification machine has the c-1-d property, the traversal of the product machine can be eliminated. instead, a much simpler check suffices. the check consists of first obtaining a 1-equivalence mapping between the individually reachable states of the specification and the implementation machines, and then checking that it is a bisimulation relation. the c-1-d property can be used directly for specification machines on which it naturally holds---a condition that has not been exploited thus far in fsm verification. we also show how this property can be enforced on an arbitrary fsm by exposing some of its latch outputs as pseudo-primary outputs during synthesis and verification. in this sense, our synthesis/verification methodology provides another point in the trade-off curve between constraints-on-synthesis versus complexity-of-verification. practical experiences with this methodology have resulted in success with several examples for which it is not possible to complete verification using existing implicit state space traversal techniques.
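the dynamic voltage scaling entry above relies on the usual first-order cmos relations; as a reminder (generic textbook approximations, not formulas quoted from that paper):

$$E_{\mathrm{op}} \approx C_{\mathrm{eff}}\,V_{dd}^{2}, \qquad t_{d} \propto \frac{V_{dd}}{(V_{dd}-V_{t})^{\alpha}},$$

so lowering the supply voltage to the minimum level that still meets the active processes' performance requirement reduces the energy per operation roughly quadratically, at the cost of a longer gate delay and hence a lower clock frequency.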
pranav ashar aarti gupta sharad malik high-level bit-serial datapath synthesis for multi-fpga systems tsuyoshi isshiki wayne wei-ming dai behavioral synthesis methodology for hdl-based specification and validation d. knapp t. ly d. macmillen r. miller symbolic computation of logic implications for technology-dependent low-power synthesis r. bahar m. burns g. hachtel e. macii h. shin f. somenzi phantom redundancy: a high-level synthesis approach for manufacturability balakrishnan iyer ramesh karri israel koren systematic cycle budget versus system power trade-off: a new perspective on system exploration of real-time data-dominated applications in contrast to current design practice for (programmable) processor mapping, which mainly targets performance, we focus on a systematic trade-off between cycle budget and energy consumed in the background memory organization. the latter is a crucial component in many of today's designs, including multimedia, network protocols and telecom signal processing. we have a systematic way and tool to explore both freedoms and to arrive at pareto charts, in which for a given application the lowest cost implementation of the memory organization is plotted against the available cycle budget per submodule. this is done by making optimal use of a parallelized memory architecture. we indicate, with results on a digital audio broadcasting receiver and an image compression demonstrator, how to effectively use the pareto plot to gain significantly in overall system energy consumption within the global real-time constraints. erik brockmeyer arnout vandecappelle francky catthoor substrate noise influence on circuit performance in variable threshold-voltage scheme tadahiro kuroda tetsuya fujita shinji mita toshiaki mori kenji matsuo masakazu kakumu takayasu sakurai are multiport memories physically feasible? martti j. forsell multi-level logic minimization based on multi-signal implications masayuki yuguchi yuichi nakamura kazutoshi wakabayashi tomoyuki fujita an algorithm to reduce test application time in full scan designs soo y. lee kewal k. saluja vhsic hardware description (vhdl) development program the vhsic program has realized the importance of a standard hardware description language to facilitate the design and documentation of future military digital systems incorporating vhsic technology. to that end, the vhsic hardware description language development program has been organized to generate a language and an associated hierarchical simulator. the paper briefly covers some background information and then outlines the general structure of the program. the latter is explained with respect to the requirements of the vhsic hardware description language effort and how the tasks have been designed to meet the requirements. al dewey traces for hardware verification roger stokes low power mixed analog-digital signal processing the power consumption of mixed-signal systems featuring an analog front-end, a digital back-end, and signal processing tasks that can be computed with multiplications and accumulations is analyzed. an implementation is proposed, composed of switched-capacitor mixed analog/digital multiply accumulate units in the analog front-end, followed by an a/d converter. this implementation is shown to be superior in terms of power consumption to an equivalent implementation with a high-speed a/d converter in the front-end, to execute signal processing tasks that include decimation.
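the cycle-budget-versus-energy entry above organizes its design space around pareto charts; a minimal c sketch of the underlying dominance filter (illustrative code with our own names, not the authors' tool) follows.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct { double cycles; double energy; } design_point;

    /* b dominates a if b is no worse in both metrics and strictly better in one */
    static bool dominated(design_point a, design_point b)
    {
        return (b.cycles <= a.cycles && b.energy <= a.energy) &&
               (b.cycles <  a.cycles || b.energy <  a.energy);
    }

    /* copy the non-dominated (pareto-optimal) points of in[] to out[]; returns count */
    size_t pareto_filter(const design_point *in, size_t n, design_point *out)
    {
        size_t m = 0;
        for (size_t i = 0; i < n; i++) {
            bool keep = true;
            for (size_t j = 0; j < n && keep; j++)
                if (j != i && dominated(in[i], in[j]))
                    keep = false;
            if (keep)
                out[m++] = in[i];
        }
        return m;
    }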
the power savings are only due to the relaxed requirement on the a/d conversion rate, as a direct consequence of the decimation. in a case study of a narrowband fir filter, realized with four multiply accumulate units and with a decimation factor of 100, the power saving is a factor of 54. implementation details are given, and the power consumption and thermal noise are analyzed. mattias duppils christer svensson mixed-vth (mvt) cmos circuit design methodology for low power applications liqiong wei zhanping chen kaushik roy yibin ye vivek de power estimation approach for sram-based fpgas this paper presents the power consumption estimation for the novel virtex architecture. due to the fact that the xc4000 and the virtex core architecture are very similar, we used the basic approaches for the xc4000-fpgas power consumption estimation and extended that method for the new virtex family. we determined an appropriate technology-dependent power factor kp to calculate the power consumption on virtex-chips, and developed a special benchmark test design to conduct our investigations. additionally, the derived formulas are evaluated on two typical industrial designs. our own emulation environments called spyder-asic-x1 and spyder-virtex-x2 were used, which are best suited for the emulation of hardware designs for embedded systems. karlheinz weiß carsten oetker igor katchan thorsten steckstor wolfgang rosenstiel an automated design of minimum-area ic power/ground nets given tree topologies for routing power/ground (p/g) nets in integrated circuits, this paper formulates and solves the problem of determining the widths of the branches of the trees. constraints are developed in order to maintain proper logic levels and switching speed, to prevent electromigration, and to satisfy certain design rule and regularity requirements. the area required by the p/g distribution system is minimized subject to these constraints. some case studies are also presented. s. chowdhury retiming-based factorization for sequential logic optimization current sequential optimization techniques apply a variety of logic transformations that mainly target the combinational logic component of the circuit. retiming is typically applied as a postprocessing step to the gate-level implementation obtained after technology mapping. this paper introduces a new sequential logic transformation which integrates retiming with logic transformations at the technology-independent level. this transformation is based on implicit retiming across logic blocks and fanout stems during logic optimization. its application to sequential network synthesis results in the optimization of logic across register boundaries. it can be used in conjunction with any measure of circuit quality for which a fast and reliable gain estimation method can be obtained. we implemented our new technique within the sis framework and demonstrated its effectiveness in terms of cycle-time minimization on a set of sequential benchmark circuits. surendra bommu niall o'neill maciej ciesielski towards support for design description languages in eda framework we report on a new framework service for design tool encapsulation, based on an information model for design management. the new service uses generated language processors that perform import and export of design files to and from a design management database with the support of nested syntax specifications and extension language scripts.
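the fpga power-estimation entry above fits a technology-dependent factor into the usual dynamic-power model; a generic form of such an estimate (our notation, not necessarily the exact formula of that paper) is

$$P_{\mathrm{dyn}} \approx k_{p} \sum_{i \in \mathrm{nets}} \alpha_{i}\, C_{i}\, V_{dd}^{2}\, f,$$

where $\alpha_{i}$ is the switching activity of net $i$, $C_{i}$ its effective capacitance, $f$ the clock frequency, and $k_{p}$ the calibrated technology factor.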
our prototype design environment is based on the nelsis cad framework and several tools from the synopsys high-level synthesis and simulation tool suite. olav schettler susanne heymann cmos stuck-open fault detection using single test patterns cmos combinational circuits exhibit sequential behavior in the presence of open faults, thus making it necessary to use two-pattern tests. two- or multi-pattern sequences may fail to detect cmos stuck-open faults in the presence of glitches. the available methods for augmenting cmos gates to test cmos stuck-open faults are found to be inadequate in the presence of glitches. a new cmos testable design is presented. the scheme uses two additional mosfets, which convert a cmos gate to either a pseudo-nmos or a pseudo-pmos gate during testing. the proposed design ensures the detection of stuck-open faults using a single vector during testing. r. rajsuman a. p. jayasumana y. k. malaiya dual-vt sram cells with full-swing single-ended bit line sensing for high-performance on-chip cache in 0.13 µm technology generation comparisons among different dual-vt design choices for a large on-chip cache with single-ended sensing show that the design using a dual-vt cell and low-vt peripheral circuits is the best, and provides 10% performance gain with 1.2x larger active leakage power and 1.6% larger cell area compared to the best design using high-vt cells. fatih hamzaoglu yibin ye ali keshavarzi kevin zhang siva narendra shekhar borkar mircea stan vivek de assignment of storage values to sequential read-write memories s. gerez e. woutersen delay fault test generation for scan/hold circuits using boolean expressions d. bhattacharya p. agrawal v. d. agrawal entity overloading for mixed-signal abstraction in vhdl c.-j. richard shi an efficient approach for moment-matching simulation of linear subnetworks with measured or tabulated data guowu zheng qi-jun zhang michel nakhla ramachandra achar rapid prototyping of asics with oasis: an open architecture silicon implementation software franc brglez rectangle-packing-based module placement hiroshi murata kunihiro fujiyoshi shigetoshi nakatake yoji kajitani symbolic fault simulation for sequential circuits and the multiple observation time test strategy r. krieger b. becker m. keim power-profiler: optimizing asics power consumption at the behavioral level raul san martin john p. knight recent developments in high-level synthesis we survey recent developments in high level synthesis technology for vlsi design. the need for higher-level design automation tools is discussed first. we then describe some basic techniques for various subtasks of high-level synthesis. techniques that have been proposed in the past few years (since 1994) for various subtasks of high-level synthesis are surveyed. we also survey some new synthesis objectives including testability, power efficiency, and reliability. youn-long lin sharps: a hierarchical layout system for vlsi a hierarchical layout system for vlsi provided with placement and routing facilities is described, highlighting the routing scheme constructed on the basis of a channel router. several implementation results are also shown to reveal the system's potential to be of great use in the practice of layout design of full custom lsi's.
toru chiba noboru okuda takashi kambe ikuo nishioka tsuneo inufushi seiji kimura fast transient power and noise estimation for vlsi circuits today's digital design systems are running out of steam when it comes to meeting the challenges presented by simultaneous switching, power consumption and reliability constraints emerging in vlsi circuits. in this paper a new technique to accurately estimate the transient behavior of large cmos cell-based circuits in a reasonable amount of time is presented. gate-level simulations and a consistent modeling methodology are employed to compute the time-domain waveforms for signal voltages, supply currents, power consumption and Δ noise on power lines. this can be done for circuit blocks and complete designs by our new tool powtim, which adds spice-like capabilities to digital design standards. wolfgang t. eisenmann helmut e. graeb fitting formal methods into the design cycle k. l. mcmillan lib: a cell layout generator we present an automatic layout generation system, called lib, for the library cells used in cmos asic design. lib takes a transistor-level circuit schematic in spice format and outputs a symbolic layout. our layout style is similar to that proposed by uehara and van cleemput in [17]. we propose several heuristic algorithms to solve, respectively, the transistor-clustering, -pairing, -chaining, and -folding problems, and the chain placement, routing, and net assignment problems. experimental results are presented to show the capability of lib. yung-ching hsieh chi-yi hwang youn-long lin yu-chin hsu reducing the frequency of tag compares for low power i-cache design ramesh panwar david rennels automatic equivalence check of circuit descriptions at clocked algorithmic and register transfer level (poster paper) jens schönberr bernd straube chip layout optimization using critical path weighting a. e. dunlop v. d. agrawal d. n. deutsch m. f. jukl p. kazak hierarchical partitioning dirk behrens klaus harbich erich barke desensitization for power reduction in sequential circuits xiangfeng chen peicheng pen c. l. liu statistical delay modeling in logic design and synthesis horng-fei jyu sharad malik evaluation of fpga resources for built-in self-test of programmable logic blocks charles stroud ping chen srinivasa konala miron abramovici microprocessor based testing for core-based system on chip c. a. papachristou f. martin m. nourani interface timing verification with application to synthesis elizabeth a. walkup gaetano borriello a vlsi design methodology based on parametric macro cells a new methodology for designing vlsi circuits has been developed at harris gss. the methodology is based on the concept of parametric macro cells. a parametric macro cell is an msi-level circuit which can be modified by a computer program to meet the needs of a particular design. in this paper we discuss the design methodology, chip layout, the simulation techniques and other software tools used to ensure a valid design, plus the wafer testing approach. r. a. kriete r. k. nettleton timing verification using hdtv in this paper, we provide an overview of a system designed for verifying the consistency of timing specifications for digital circuits. the utility of the system comes from the need to verify that existing digital components will interact correctly when placed together in a system. the system can also be used in the case of verifying specifications of unimplemented components. alan r. martello steven p. levitan donald m.
chiarulli automating rt-level operand isolation to minimize power consumption in datapaths m. munch b. wurth r. mehra j. sproch n. wehn adolt - an adaptable online testing scheme for vlsi circuits a. maamar g. russell embedded pin assignment for top down system design thomas pförtner stefan kiefl reimund dachauer cache performance of fast-allocating programs marcelo j. r. gonçalves andrew w. appel self recovering controller and datapath codesign samuel n. hamilton andre hertwig alex orailoglu high-level power estimation and the area complexity of boolean functions m. nemani f. najm the design and implementation of fault insertion capabilities for isps fault tolerance is an important attribute of most computer systems, and to be effective it must be an explicit objective from the beginning of the design process. inserting faults into a simulation of the machine and observing its behavior is a thorough and economical technique for evaluating prospective fault detection, diagnosis, recovery, and repair mechanisms. as systems become larger due to rising semiconductor integration, the expense of these fault simulations increasingly necessitates that they be performed at higher levels of abstraction (such as the register transfer level) rather than lower (such as the gate level). this can achieve major cost savings without significantly compromising fault coverage. this paper describes the design and implementation of a high level fault insertion mechanism for the instruction set processor specification (isps) simulator. the isps simulator was chosen because it is an interactive, high level simulator which is capable, mature, and widely used and accepted. the faults which can be simulated include hard and transient, deterministic and probabilistic, stuck-at and bridged, data, control, and operation types. these facilities have been implemented and demonstrated to be sound in both concept and implementation. they have been incorporated as a standard feature in the latest release of the isps simulator j. duane northcutt measuring routing congestion for multi-layer global routing we propose an accurate measure of channel routing density and its application to global routing. our congestion metric calculation method considers the wire scenarios in a channel. tom chen alkan cengiz a new technique for exploiting regularity in data path synthesis c. y. roger chen mohammed aloqeely tidbits: speedup via time-delay bit-slicing in alu design for vlsi technology peter y. t. hsu joseph t. rahmeh edward s. davidson jacob a. abraham vvds: a verification/diagnosis system for vhdl in this paper, an interactive verification and diagnosis system for vhdl [vm88], vvds, is presented. in vvds, hybrid simulation, which simulates with both numerical and symbolic data, is implemented to achieve an effective compromise of the enormous quantity of input test data in the conventional simulation and the complexity of symbolic expression in the symbolic execution. to support efficient user interface in the verification and diagnosis process, both on- line programming of commands and micro-probing capability to passively and actively probe any level of design hierarchy are provided. h. t. liaw k.-t. tran c.-s. lin a multiple-dominance switch-level model for simulation of short faults peter dahlgren designing closer to the edge (tutorial) sani r. nassif desb, a functional abstractor for cmos vlsi circuits m. laurentin a. greiner r. 
marbot efficient reduced-order modeling for the transient simulation of three-dimensional interconnect mike chou jacob white perturb and simplify: multi-level boolean network optimizer in this paper, we discuss the problem of optimizing a multi-level logic combinational boolean network. our techniques apply a sequence of local perturbations and modifications of the network which are guided by automatic test pattern generation (atpg) based reasoning. in particular, we propose several new ways in which one or more redundant gates or wires can be added to a network. we show how to identify gates which are good candidates for local functionality change. furthermore, we discuss the problem of adding and removing two wires, none of which alone is redundant, but when jointly added/removed they do not affect the functionality of the network. we also address the problem of efficient redundancy computation which makes it possible to eliminate many unnecessary redundancy tests. we have performed experiments on mcnc benchmarks and compared the results to those of misii and rambo. experimental results are very encouraging. shih-chieh chang malgorzata marek-sadowska novel routing schemes for ic layout part i: two-layer channel routing deborah c. wang algorithm selection: a quantitative computation-intensive optimization approach given a set of specifications for a targeted application, algorithm selection refers to choosing the most suitable algorithm for a given goal, among several functionally equivalent algorithms. we demonstrate an extraordinary potential of algorithm selection for achieving high throughput, low cost, and low power implementations. we introduce an efficient technique for lower-bound evaluation of the throughput and cost during algorithm selection and propose a relaxation-based heuristic for throughput optimization. we also present an algorithm for cost optimization using algorithm selection. the effectiveness of the methodology and algorithms is illustrated using examples. miodrag potkonjak jan rabaey accelerating concurrent hardware design with behavioural modelling and system simulation allan silburt ian perryman janick bergeron stacy nichols mario dufresne greg ward transient sensitivity computation of mosfet circuits using iterated timing analysis and selective-tracing waveform relaxation chung-jung chen wu-shiung feng asynchronous and clocked control structures for vlsi based interconnection networks a central issue in the design of multiprocessor systems is the interconnection network which provides communications paths between the processors. for large systems, high bandwidth interconnection networks will require numerous 'network chips' with each chip implementing some subnetwork of the original larger network. modularity and growth are important properties for such networks since multiprocessor systems may vary in size. this paper is concerned with the question of timing control of such networks. two approaches, asynchronous and clocked, are used in the design of a basic network switching module. the modules and the approaches are then modelled and equations for network time delay are developed. these equations form the basis for a comparison between the two approaches. the importance of clock distribution strategies and clock skew is quantified, and a network clock distribution scheme which guarantees equal length clock paths is presented. mark a. franklin donald f.
wann chdstd - application support for reusable hierarchical interconnect timing views this paper describes an important new facility for timing-driven design applications within the new chdstd standard for a sematech design system for large complex chips. we first review eda requirements for chdstd hierarchy for large complex leading edge chips and current eda problems in accurately and efficiently handling complex interconnect. we then describe our approach for fully-reusable hierarchical interconnect timing views in support of timing driven design for 0.25 µm technologies and below. the result is a method which builds on sematech's new controlled error parasitic timing calculation capability for deep submicron, providing means for compactly storing and reusing accurate hierarchical timing views for 28m to 100m transistor chip designs. s. grout g. ledenbach r. g. bushroe p. fisher d. cottrell d. mallis s. dasgupta j. morrell amrich chokhavtia scheduling techniques for variable voltage low power designs this paper presents an integer linear programming (ilp) model and a heuristic for the variable voltage scheduling problem. we present the variable voltage scheduling techniques that consider in turn timing constraints alone, resource constraints alone, and timing and resource constraints together for design space exploration. experimental results show that our heuristic produces results competitive with those of the ilp method in a fraction of the run-time. the results also show that a wide range of design alternatives can be generated using our design space exploration method. using different cost/delay combinations, power consumption in a single design can differ by as much as a factor of 6 when using mixed 3.3v and 5v supply voltages. yann-rue lin cheng-tsung hwang allen c.-h. wu interoperability of verilog/vhdl procedural language interfaces to build a mixed language gui françoise martinolle charles dawson debra corlette mike floyd dynamic search-space pruning techniques in path sensitization joão p. marques silva karem a. sakallah a time optimal robust path-delay-fault self-testable adder bernd becker rolf drechsler achieving utility arbitrarily close to the optimal with limited energy energy is one of the limited resources for modern systems, especially battery-operated devices and personal digital assistants. the slow pace of new technologies for more powerful batteries is changing the traditional system design philosophies. for example, due to the limitation on battery life, it is more realistic to design for the optimal benefit from limited resources rather than to design to meet all the applications' requirements. we consider the following problem: a system achieves a certain amount of utility from a set of applications by providing them certain levels of quality of service (qos). we want to allocate the limited system resources to get the maximal system utility. we formulate this utility maximization problem, which is np-hard in general, and propose heuristic algorithms that are capable of finding solutions provably arbitrarily close to the optimal. we have also derived explicit formulae to guide the allocation of resources to actually achieve such solutions. simulation shows that our approach can use 99.9% of the given resource to achieve 25.6% and 32.17% more system utility than two other heuristics, while providing qos guarantees to the application program.
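the limited-energy utility entry above can be read as a constrained allocation problem; one generic formulation (our notation, not necessarily the authors' exact model) is

$$\max_{q_{1},\dots,q_{n}} \; \sum_{i=1}^{n} u_{i}(q_{i}) \quad \text{subject to} \quad \sum_{i=1}^{n} e_{i}(q_{i}) \le E, \qquad q_{i} \in Q_{i},$$

where $q_{i}$ is the qos level granted to application $i$, $u_{i}$ its utility, $e_{i}$ its energy demand, and $E$ the energy budget. with discrete qos levels this is a multiple-choice knapsack problem, np-hard in general but amenable to heuristics that come provably close to the optimum, which is consistent with the claims of the entry.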
gang qu miodrag potkonjak directional bias and non-uniformity in fpga global routing architectures vaughn betz jonathan rose three approaches to design fault tolerant programmable logic arrays sami a. al-arian hari p. kunamneni computation of cyclic redundancy checks via table look-up cyclic redundancy check (crc) codes provide a simple yet powerful method of error detection during digital data transmission. use of a table look-up in computing the crc bits will efficiently implement these codes in software. d. v. sarwate design and test space exploration of transport-triggered architectures v. a. zivkovic r. j. w. tangelder h. g. kerkhoff cost-free scan: a low-overhead scan path design methodology chih-chang lin mike tien-chien lee malgorzata marek-sadowska kuang-chien chen specification and management of timing constraints in behavioral vhdl f. curatelli m. chirico l. mangeruca basic tutorial layout tools - what really is there r. smith prove that a faulty multiplier is faulty!? formal verification of integer multipliers was an open problem for a long time as the size of any reduced ordered binary decision diagram (bdd) [1] which represents integer multiplication is exponential in the width of the operands [2]. in 1995, bryant and chen [4] introduced multiplicative binary moment diagrams (*bmd) which are a canonical data structure for pseudo-boolean functions allowing a linear representation of integer multipliers. based on this data structure, bryant/chen [4] and hamaguchi et al. [5] experimentally showed that integer multipliers up to a word size of 64 bits can be formally verified. however, all these results only handle the problem of proving a faultless integer multiplier to be correct. but, what happens if the multiplier is faulty? does the backward construction method of hamaguchi et al. stop after a short time? after what time can i be sure that the integer multiplier under consideration is faulty? in this paper, we show that these questions are relevant in practice. in particular, we investigate simple add-step multipliers and show that simple design errors can lead to exponential growth of the *bmds occurring during backward construction. this proves that the backward construction method can only be applied as a filter during formal logic combinational verification unless sharp upper bounds for the sizes of the *bmds occurring during the backward construction have been proven for the various circuit types, as keim et al. [6] did for wallace tree multipliers. sandro wefel paul molitor emerging opportunities for binary tools in recent years, binary instrumentation and optimization tools have been used effectively to understand and improve the performance of significant programs. however, new opportunities are emerging in the distributed computing model of the internet that has strong requirements for reliability and performance. these systems demand continuous operation in the presence of open-ended designs where some parts may be operated by third-party services. because we cannot reproduce the scale and complexity of this environment on our desktops or in our labs, our traditional testing-based approaches to correctness and performance optimization will prove insufficient; these tasks must extend beyond the traditional idea of product development. in the past, compilation has been about turning source code into executables, balancing compilation speed against code optimization.
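the crc entry above points at the classic table-driven software implementation; a minimal c sketch (an 8-bit crc with an illustrative generator polynomial, initial value 0, no bit reflection---real standards differ in these details) is:

    #include <stdint.h>
    #include <stddef.h>

    #define POLY 0x07u   /* illustrative generator: x^8 + x^2 + x + 1 */

    static uint8_t crc_table[256];

    /* build the 256-entry table once: entry i is the crc of the single byte i */
    static void crc_init(void)
    {
        for (int i = 0; i < 256; i++) {
            uint8_t r = (uint8_t)i;
            for (int b = 0; b < 8; b++)
                r = (r & 0x80u) ? (uint8_t)((r << 1) ^ POLY) : (uint8_t)(r << 1);
            crc_table[i] = r;
        }
    }

    /* process the message one byte at a time, one table look-up per byte */
    static uint8_t crc_update(uint8_t crc, const uint8_t *msg, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            crc = crc_table[crc ^ msg[i]];
        return crc;
    }

the table trades 256 bytes of storage for replacing eight shift-and-conditional-xor steps per byte with a single indexed look-up, which is the efficiency argument made in the entry.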
binary tools "hacked" their way into the compilation process by intercepting and transforming executables; the binary modification stage was not designed to be part of the compilation process. in the new environment, the definition of compilation must broaden: compilation will start very early---perhaps when we write the specification of a program---and continue very late---perhaps through all the program's various executions on the client machines. this dynamic and heterogeneous environment provides great challenges and opportunities to expand the role of binary tools. this talk discusses the new opportunities and requirements for binary tools. it also describes vulcan, a second generation technology that is designed to address some of these challenges. vulcan has both static and dynamic code modification capabilities. it can provide system level analysis with heterogeneous programs. vulcan works in the win32 environment and can process x86, ia64, and msil binaries. vulcan can process large commercial applications and has been used to improve performance and reliability of microsoft products in a production environment. amitabh srivastava simulation and sensitivity analysis of transmission line circuits by the characteristics method jun-fa mao janet meiling wang ernest s. kuh a versatile built-in self test scheme for delay fault testing (poster paper) y. tsiatouhas th. haniotakis d. nikolos a. arapoyanni partial scan design based on circuit state information dong xiang srikanth venkataraman w. kent fuchs janak h. patel logic fault simulation on a vector hypercube multiprocessor fault simulation is the process of simulating the response of a logic circuit to input patterns in the presence of all possible single faults and is an essential part of test generation for vlsi circuits. parallelization of the deductive and parallel simulation methods on a hypercube multiprocessor, and vectorization of the parallel simulation method, are described. experimental results are presented. f. ozguner c. aykanat o. khalid a hardware mechanism for dynamic extraction and relayout of program hot spots this paper presents a new mechanism for collecting and deploying runtime optimized code. the code-collecting component resides in the instruction retirement stage and lays out hot execution paths to improve instruction fetch rate as well as enable further code optimization. the code deployment component uses an extension to the branch target buffer to migrate execution into the new code without modifying the original code. no significant delay is added to the total execution of the program due to these components. the code collection scheme enables safe runtime optimization along paths that span function boundaries. this technique provides a better platform for runtime optimization than trace caches, because the traces are longer and persist in main memory across context switches. additionally, these traces are not as susceptible to transient behavior because they are restricted to frequently executed code. empirical results show that on average this mechanism can achieve better instruction fetch rates using only 12kb of hardware than a trace cache requiring 15kb of hardware, while producing long, persistent traces more suited to optimization. matthew c. merten andrew r. trick erik m. nystrom ronald d. barnes wen-mei w.
hwu critical path tracing - an alternative to fault simulation we present an alternative to fault simulation, referred to as critical path tracing, that determines the faults detected by a set of tests using a backtracing algorithm starting at the primary outputs of a circuit. critical path tracing is an approximate method, but the approximations introduced seldom occur and do not affect its usefulness. this method is more efficient than conventional fault simulation. m. abramovici p. r. menon d. t. miller system level design, a vhdl based approach joris van den hurk edwin dilling a technology mapping method based on perfect and semi-perfect matchings m. crastes k. sakouti g. saucier rtl emulation: the next leap in system verification sanjay sawant paul giordano a generic architecture for on-chip packet-switched interconnections pierre guerrier alain greiner design and implementation of an fpga based processor for compressed images (poster abstract) v. s. balakrishnan hardy pottinger fikret ercal mukesh agarwal extracting rtl models from transistor netlists k. j. singh p. a. subrahmanyam design for testability view on placement and routing derek feltham jitendra khare wojciech maly on using satisfiability-based pruning techniques in covering algorithms vasco m. manquinho joão marques-silva performance-driven soft-macro clustering and placement by preserving hdl design hierarchy in this paper, we present a performance-driven soft-macro clustering and placement method which preserves hdl design hierarchy to guide the soft-macro placement process. we also present a complete chip design methodology by integrating the proposed method and a set of commercial eda tools. experiments on three industrial designs ranging from 75k to 230k gates demonstrate that the proposed soft-macro clustering and placement method improves critical-path delay by an average of 24%. hsiao-pin su allen c.-h. wu youn-long lin a single-path-oriented fault-effect propagation in digital circuits considering multiple-path sensitization m. henftling h. c. wittmann k. j. antreich multiple operation memory structures this paper describes architectures based on a new memory structure. memory systems which can perform multiple transfers are described and issues in processor architecture are considered. a general model for memory operations is given, and the classical single transfer memory structures are described. based on the generalized model, new structures which allow multiple transfers to be performed as a single processor operation are developed. some architectural considerations at the processor level to support these kinds of memory systems are then discussed. the advantages and disadvantages of these new structures as compared to conventional memories are also discussed and a preliminary performance evaluation is done. this discussion generally refers to the random access, physical, main memory in the system, although many of the results are applicable to other storage devices. m. c. ertem a new rapid prototyping firmware (rpf) tool microprogramming has progressed from a method for systematically designing control units to its widespread application to the design, emulation, analysis, and implementation of instruction sets for general-purpose computers. the application was greatly enhanced through the family-oriented architectures of the ibm system 360 and on into the dec pdp-11 family, culminating in the vax family of machines.
today, systolic, vlsi, and vhsic microarchitectures are demanding increased centralized microprogrammable control capability. however, microprogramming has always been a tedious, time-consuming, difficult-to-verify, and therefore costly exercise. a new rapid prototyping firmware (rpf) tool is described which ameliorates most of the problems. m. andrews f. lam test pattern generation based on arithmetic operations existing built-in self test (bist) strategies require the use of specialized test pattern generation hardware which introduces significant area overhead and performance degradation. in this paper, we propose a novel method for implementing test pattern generators based on adders widely available in data-path architectures and digital signal processing circuits. test patterns are generated by continuously accumulating a constant value and their quality is evaluated in terms of the pseudo-exhaustive state coverage on subspaces of contiguous bits. this new test generation scheme, along with the recently introduced accumulator-based compaction scheme, facilitates a bist strategy for high performance datapath architectures that uses the functionality of existing hardware, is entirely integrated with the circuit under test, and results in at-speed testing with no performance degradation or area overhead. sanjay gupta janusz rajski jerzy tyszer auxiliary variables for bdd-based representation and manipulation of boolean functions bdds are the state-of-the-art technique for representing and manipulating boolean functions. their introduction caused a major leap forward in synthesis, verification, and testing. however, they are often unmanageable because of the large number of nodes. to attack this problem, we insert auxiliary variables that decompose monolithic bdds into smaller ones. this method works very well for boolean function representation. as far as combinational circuits are concerned, representing their functions is the main issue. going into the sequential domain, we focus on traversal techniques. we show that, once we have boolean functions in decomposed form, symbolic manipulations are viable and efficient. we investigate the relation between auxiliary variables and static and dynamic ordering strategies. experimental evidence shows that we achieve a certain degree of independence from variable ordering. thus, this approach can be an alternative to dynamic re-ordering. experimental results on boolean function representation, and exact and approximate forward symbolic traversal of fsms, demonstrate the benefits both in terms of memory requirements and of cpu time. gianpiero cabodi paolo camurati stefano quer a time and space efficient net extractor we develop an efficient algorithm for net extraction. this algorithm is able to efficiently handle very large layouts even when memory is limited. this is done by effectively using disk storage. the algorithm has been programmed in fortran and is superior to other existing net extractors. surendra nahar sartaj sahni an enhancement of lssd to reduce test pattern generation effort and increase fault coverage in this paper we propose designs of latches which can be used in level sensitive scan design (lssd). these new designs can use the existing software support for design rule checks but result in a reduction of effort in test pattern generation and provide a better fault coverage. the system performance is not degraded with the use of latches proposed in this paper. kewal k.
saluja an optimal probe testing algorithm for the connectivity verification of mcm substrates so-zen yao nan-chi chou chung-kuan cheng t. c. hu a hardware platform for vliw based emulation of digital design (poster paper) g. haug u. kebschull w. rosenstiel why a cad-verified fpga makes routing so simple and fast!: a result of co-designing fpgas and cad algorithms takahiro murooka atsushi takahara toshiaki miyazaki multi-node static logic implications for redundancy identification kabir gulrajani michael s. hsiao compiled-code-based simulation with timing verification winfried hahn andreas hagerer c. herrmann themis logic simulator - a mix mode, multi-level, hierarchical, interactive digital circuit simulator a new logic simulator called the themis (tm) logic simulator for the design of lsi, vlsi and pcbs is described. themis supports design verification and test development from initial specification in behavioral and rtl languages to analysis of the final layout at the gate and switch level. to allow the simulation of an entire system or check the correctness of a single circuit, the different modeling techniques can be easily intermixed. themis is a highly interactive simulator that minimizes a hardware engineer's time and effort to debug logic. this paper gives an overview of themis and its use by design engineers. mahesh h. doshi roderick b. sullivan donald m. schuler towards vlsi complexity: the da algorithm scaling problem: can special da hardware help? with the increasing scale of integration we need to employ da algorithms to assist us in managing the complexities involved. many of our current techniques are already costly and slow and yet scale by some power law as the gate count increases. an analysis of the strategies open to us leads to the possibility that in many cases specialised da hardware is cost effective. this paper then describes a series of trials to employ a 64 x 64 distributed array processor at da tasks. some unexpected results were obtained as algorithms evolved in the areas of tracking, simulation, placement, test generation, fault simulation, and layout rule checking. the paper concludes with a discussion of the overheads incurred in employing unconventional hardware. h. g. adshead supporting reference and dirty bits in spur's virtual address cache virtual address caches can provide faster access times than physical address caches, because translation is only required on cache misses. however, because we don't check the translation information on each cache access, maintaining reference and dirty bits is more difficult. in this paper we examine the trade-offs in supporting reference and dirty bits in a virtual address cache. we use measurements from a uniprocessor spur prototype to evaluate different alternatives. the prototype's built-in performance counters make it easy to determine the frequency of important events and to calculate performance metrics. our results indicate that dirty bits can be efficiently emulated with protection, and thus require no special hardware support. although this can lead to excess faults when previously cached blocks are written, these account for only 19% of the total faults, on average. for reference bits, a miss bit approximation, which checks the reference bits only on cache misses, leads to more page faults at smaller memory sizes. however, the additional overhead required to maintain true reference bits far exceeds the benefits of a lower fault rate. d. a. wood r. h.
katz chip assemblers: concepts and capabilities a chip assembler is a tool for managing design information. it encourages a structured design methodology, wherein a design is described by a collection of hierarchical design decompositions, one for each of its representations. it assists in the enforcement of consistency constraints up, down, and across the different hierarchies. we argue that a chip assembler, as an integrated design environment, requires an integrated approach for the management of design data. we describe what a chip assembler is, how it is used for design management, and what its desirable features are. randy h. katz shlomo weiss a representation for dynamic graphs in reconfigurable hardware and its application to fundamental graph algorithms this paper gives a representation for graph data structures as electronic circuits in reconfigurable hardware. graph properties, such as vertex reachability, are computed quickly by exploiting a graph's edge parallelism---signals propagate along many graph edges concurrently. this new representation admits arbitrary graphs in which vertices/edges may be inserted and deleted dynamically at low cost---graph modification does not entail any re-fitting of the graph's circuit. dynamic modification is achieved by rewriting cells in a reconfigurable hardware array. dynamic graph algorithms are given for vertex reachability, transitive closure, shortest unit path, cycle detection, and connected-component identification. on the task of computing a graph's transitive closure, for example, simulation of such a dynamic graph processor indicates possible speedups greater than three orders of magnitude compared to an efficient software algorithm running on a contemporaneously fast uniprocessor. implementation of a prototype in an fpga verifies the accuracy of the simulation and demonstrates that a practical and efficient (compact) mapping of the graph construction is possible in existing fpga architectures. in addition to speeding conventional graph computations with dynamic graph processors, we note their potential as parallel graph reducers implementing general (turing equivalent) computation. lorenz huelsbergen protocol generation for communication channels sanjiv narayan daniel d. gajski automatic synthesis of pipeline structures with variable data initiation intervals hong shin jun sun young hwang orthogonal greedy coupling: a new optimization approach to 2-d fpga routing yu-liang wu malgorzata marek-sadowska segmented channel routing routing channels in a field-programmable gate array contain predefined wiring segments of various lengths. these may be connected to the pins of the gates or joined end-to-end to form longer segments by programmable switches. the segmented channel routing problem is formulated, and polynomial time algorithms are given for certain special cases. the general problem is np-complete, but it can be adequately solved in practice. experiments indicate that a segmented channel with judiciously chosen segment lengths may approach the efficiency of a conventional channel. jonathan greene vwani roychowdhury sinan kaptanoglu abbas el gamal design and specification of microprogrammed computer architectures this paper presents a hierarchical firmware design method. it allows the design of a microprogrammed (level of a) computer architecture to be structured into independently verifiable modules. to specify the behaviour of the system we use the axiomatic architecture description language aadl.
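the dynamic-graph entry above computes properties such as reachability by letting signals propagate along all edges concurrently; a software analogue of that fixed-point propagation (our own illustrative code, with the hardware's parallel edge firing replaced by repeated sweeps over an edge list) is:

    #include <stdbool.h>

    typedef struct { int src, dst; } edge;

    /* mark every vertex reachable from source; settles in at most V-1 sweeps,
       which the hardware performs implicitly as combinational signal propagation */
    void reachable_from(int source, int V, const edge *edges, int E, bool *reach)
    {
        for (int v = 0; v < V; v++) reach[v] = false;
        reach[source] = true;

        bool changed = true;
        while (changed) {
            changed = false;
            for (int e = 0; e < E; e++) {
                if (reach[edges[e].src] && !reach[edges[e].dst]) {
                    reach[edges[e].dst] = true;
                    changed = true;
                }
            }
        }
    }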
we illustrate the design and specification style using an emulation example. w. damm reconfigurable machine and its application to logic diagnosis naoaki suganuma yukihiro murata satoru nakata shinichi nagata masahiro tomita kotaro hirano ibm 3081 system overview and technology the development of the ibm 3081 established the methodology for designing and manufacturing a high-performance computer from an lsi chip technology. the high density packaging of the lsi chip is used to minimize interconnections and to support a fast machine cycle time. this paper will describe the methods used and will highlight some of the design problems that were solved, to offer an understanding of the challenges that lsi brings to the design cycle. clive a. collins dynamic fault diagnosis on reconfigurable hardware fatih kocan daniel g. saab fast, small, and static combinatorial cmos circuits we present alps, a new way to generate layout from boolean equations. we use an original tree-structured representation of arbitrary boolean expressions, more compact than classic disjunctive form, allowing fast symbolic manipulation and natural mapping onto silicon. this implementation of alps produces static cmos layout using a cascode-switch style. we present measurements done on fabricated circuits. for a large class of functions, particularly semi-regular control logic, vlsi layout generated by alps compares favorably in speed and area to plas and standard-cell designs. b. p. serlet algorithms for behavioral test pattern generation from vhdl circuit descriptions containing loop language constructs loïc vandeventer jean-françois santucci cost reduction and evaluation of temporary faults detecting technique lorena anghel michael nicolaidis model generation of test logic for macrocell based designs e. de la torre j. calvo j. uceda a new fpga architecture for high-performance bit-serial pipeline datapath (abstract) tsuyoshi isshiki takenobu shimizugashira akihisa ohta imanuddin amril hiroaki kunieda an lpga with foldable pla-style logic blocks laser-programmed gate arrays (lpgas) represent a new approach to application specific integrated circuit prototyping and implementation. this paper proposes a new lpga logic block architecture called a foldable pla-style logic block. the proposed logic block architecture is similar to that found in commercially available cplds. the term foldable means that the granularity of the logic block can be varied. this is achieved using the lpga laser disconnect methodology. a custom cad tool has been developed to map circuits into the new logic block architecture. an experimental study shows that lpgas with foldable logic blocks are more area-efficient than those based on normal unfoldable logic blocks. jason helge anderson stephen dean brown a floorplan-based planning methodology for power and clock distribution in asics joon-seo yim seong-ok bae chong-min kyung the impact of vlsi on microprogramming there are four "cultures" of microprogramming: the bit-slice culture, the commercial processor culture, the microprogrammable processor culture, and the single-chip culture. the effect of trends in vlsi (very large scale integration) on microprogramming can be assessed by looking at the effect on each culture. the bit-slice culture will be affected because levels of integration in bipolar have reached 32-bit slices and the performance of cmos is improving to compete with the dominant bipolar technologies in these applications.
the commercial processor culture and the microprogrammable processor culture will be least affected by changes in technology and integration levels. the single-chip culture will be most affected. there are about two more microprocessor generations to go before chips contain an essentially complete cpu. after that, designs will diversify. microprogramming still has a good future. n. tredennick propagation delay calculation for interconnection nets on printed circuit boards by reflected waves heinz mattes wolfgang weisenseel gerhard bischof reimund dachauer a loosely coupled parallel algorithm for standard cell placement we present a loosely coupled parallel algorithm for the placement of standard cell integrated circuits. our algorithm is a derivative of simulated annealing. the implementation of our algorithm is targeted towards networks of unix workstations. this is the very first reported parallel algorithm for standard cell placement which yields placement results as good as or better than those of its serial version. in addition, it is the first parallel placement algorithm reported which offers nearly linear speedup, in terms of the number of processors (workstations) used, over the serial version. despite using the rather slow local area network as the only means of interprocessor communication, the processor utilization is quite high, up to 98% for 2 processors and 90% for 6 processors. the new parallel algorithm has yielded the best overall results ever reported for the set of mcnc standard cell benchmark circuits. wern-jieh sun carl sechen the minimization and decomposition of interface state machines ajay j. daga william p. birmingham optimal equivalent circuits for interconnect delay calculations using moments sudhakar muddu andrew b. kahng a bus router for ic layout. this paper describes a bus router that is part of a custom ic mask layout system called cipar. cipar works with rectangular building blocks of arbitrary dimensions. the router is designed specifically to handle power and ground buses. it can route these nets completely on one metal layer. the router also automatically calculates and tapers the bus path width based on current requirements specified in the input circuit description. margaret lie chi-song horng partially-dependent functional decomposition with applications in fpga synthesis and mapping jason cong yean-yow hwang constraint-driven system partitioning m. l. lopez-vallejo j. grajal j. c. lopez an fpga implementation and performance evaluation of the serpent block cipher with the expiration of the data encryption standard (des) in 1998, the advanced encryption standard (aes) development process is well underway. it is hoped that the result of the aes process will be the specification of a new non-classified encryption algorithm that will have the global acceptance achieved by des as well as the capability of long-term protection of sensitive information. the technical analysis used in determining which of the potential aes candidates will be selected as the advanced encryption algorithm includes efficiency testing of both hardware and software implementations of candidate algorithms. reprogrammable devices such as field programmable gate arrays (fpgas) are highly attractive options for hardware implementations of encryption algorithms as they provide cryptographic algorithm agility, physical security, and potentially much higher performance than software solutions.
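the parallel placement entry above is a derivative of simulated annealing; the serial kernel being parallelized accepts or rejects each candidate cell move with the standard metropolis test, sketched below in c (generic annealing code, not the authors' implementation).

    #include <math.h>
    #include <stdbool.h>
    #include <stdlib.h>

    /* metropolis acceptance: always take improving moves; take a worsening
       move with probability exp(-delta_cost / temperature) */
    bool accept_move(double delta_cost, double temperature)
    {
        if (delta_cost <= 0.0)
            return true;
        double u = (double)rand() / ((double)RAND_MAX + 1.0);
        return u < exp(-delta_cost / temperature);
    }

the temperature is lowered according to a cooling schedule, so late in the run only improving moves survive.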
this contribution investigates the significance of an fpga implementation of serpent, one of the advanced encryption standard candidate algorithms. multiple architecture options of the serpent algorithm will be explored with a strong focus being placed on a high speed implementation within an fpga in order to support security for current and future high bandwidth applications. one of the main findings is that serpent can be implemented with encryption rates beyond 4 gbit/s on current fpgas. a. j. elbirt c. paar synthesis of vhdl arrays on ram cells c. berthet j. rampon l. sponga signal delay in rc tree networks p. penfield jr. j. rubinstein the impact of synchronization and granularity on parallel systems in this paper, we study the impact of synchronization and granularity on the performance of parallel systems using an execution-driven simulation technique. we find that even though there can be a lot of parallelism at the fine grain level, synchronization and scheduling strategies determine the ultimate performance of the system. loop-iteration level parallelism seems to be a more appropriate level when those factors are considered. we also study barrier synchronization and data synchronization at the loop iteration level and found both schemes are needed for a better performance. ding-kai chen hong-men su pen-chung yew verification of electronic systems alberto l. sangiovanni-vincentelli patrick c. mcgeer alexander saldanha task response time optimization using cost-based operation motion we present a technique for task response time improvement based on the concept of code motion from the software domain. relaxed operation motion (rom) is a simple yet powerful approach for performing safe and useful operation motion from heavily executed portions of a design task to less visited segments. we introduce here our algorithm, how it differs from other code motion approaches, and its application to the embedded systems domain. results of our investigation indicate that cost-guided operation motion has the potential to improve task response time significantly. bassam tabbara abdallah tabbara alberto sangiovanni-vincentelli estimation of average switching activity in combinational and sequential circuits a. ghosh s. devadas k. keutzer j. white an efficient implementation of boolean functions and finite state machine as self-timed circuit ilana david ran ginosar michael yoeli a fast signature simulation tool for built-in self-testing circuits this paper describes a fast signature simulator (fss) tool for built-in self-testing (bist) circuits. the fss consists of a simulator generator and a compiled code simulator. the simulator generator comprises a controlling program called the executive and translation software called sim-gen. sim-gen accepts a hardware description language (hdl) representation of the circuit-under-test as its input and produces c code simulation modules comprising boolean relations that represent the structure of the circuit. these c code modules are then compiled and linked together to form the basis of the compiled code simulator. simulation is invoked by executing the compiled c code description of the circuit. the simulation time is minimised by the use of parallel simulation techniques in conjunction with efficient functional models and novel mapping techniques for the lfsrs. performances approaching 5 million gate evaluations per second (geps) have been achieved using the fss. s. b. tan k. totton k. baker p. varma r.
porter novel hardware-software architecture for the recursive merge filtering algorithm (poster abstract) piyush s. jamkhandi amar mukherjee kunal mukherjee robert franceschini modeling shared variables in vhdl jan madsen jens p. brage gate sizing using a statistical delay model e. t. a. f. jacobs m. r. c. m. berkelaar effective iterative techniques for fingerprinting design ip andrew e. caldwell hyun-jin choi andrew b. kahng stefanus mantik miodrag potkonjak gang qu jennifer l. wong serial fault emulation luc burgun frederic reblewski gerard fenelon jean berbier olivier lepape exploiting hierarchy for multiple error correction in combinational circuits (poster paper) dirk w. hoffman thomas kropf calculation of ramp response of lossy transmission lines using two-port network functions in this paper, we present a new analytical approach for computing the ramp response of an rlc interconnect line with a pure capacitive load. the approach is based on the two-port representation of the transmission line and accounts for the output resistance of the driver and the line inductance. the results of our analysis are compared with the results of hspice simulations demonstrating the high accuracy of our solution under various values of driver, interconnect, and load impedances. payam heydari massoud pedram the trap as a control flow mechanism in this paper we show how traditional hardware trap handlers can be generalized into an efficient vehicle for conditional branches. these ideas are being used in a vlsi processor under design. conditional branches are often a major bottleneck in scheduling microinstructions on a horizontally microcoded machine. several tests and conditional branches are frequently ready for scheduling simultaneously, but only one test and branch is possible in a given cycle. the trap facility is traditionally treated as an interrupt scheme for the notification of exceptional conditions. in this paper we study how the role of the trap mechanism may be expanded to include the parallel evaluation of arbitrary user-specified tests, and the concomitant performance benefits. j. a. chandross h. v. jagadish a. asthana the cmu design automation system: an example of automated data path design a. parker d. thomas d. siewiorek m. barbacci l. hafer automating the sizing of analog cmos circuits by consideration of structural constraints r. schwencker j. eckmueller h. graeb k. antreich an algebra for logic strength simulation to simulate tri-state logic in a non-pessimistic way, a six valued algebra is shown to be necessary. this is then extended to quin-state logic (strong 0, strong 1, weak 0, weak 1, high impedance) and a fifteen valued algebra. the improved accuracy is as important for fault simulation as for design verification. the requirements for non- pessimistic test generation algebras for tri-state and quin-state logic are also discussed. pessimism in test generation increases the search space and hence the run time. p. l. flake p. r. moorby g. musgrave bdd-based testability estimation of vhdl designs e. macii m. poncino f. ferrandi f. fummi d. sciuto testable synthesis of high complex control devices f. fummi u. rovati d. sciuto improving the error detection ability of concurrent checkers by observation point insertion in the circuit under check (poster paper) valery a. vardanian liana b. 
mirzoyan the effect of lut and cluster size on deep-submicron fpga performance and density we use a fully timing-driven experimental flow [4] [15] in which a set of benchmark circuits are synthesized into different cluster-based [2] [3] [15] logic block architectures, which contain groups of luts and flip-flops. we look across all architectures with lut sizes in the range of 2 inputs to 7 inputs, and cluster size from 1 to 10 luts. in order to judge the quality of the architecture we do both detailed circuit level design and measure the demand of routing resources for every circuit in each architecture. these experiments have resulted in several key contributions. first, we have experimentally determined the relationship between the number of inputs required for a cluster as a function of the lut size (k) and cluster size (n). second, contrary to previous results, we have shown that, when the cluster size is greater than four, smaller luts (size 2 and 3) are almost as area efficient as 4-input luts, as suggested in [11]. however, our results also show that the performance of fpgas with these small lut sizes is significantly worse (by almost a factor of 2) than larger luts. hence, as measured by area-delay product, or by performance, these would be a bad choice. also, we have discovered that lut sizes of 5 and 6 produce much better area results than were previously believed. finally, our results show that a lut size of 4 to 6 and cluster size of between 4 and 10 provides the best area-delay product for an fpga. elias ahmed jonathan rose static power driven voltage scaling and delay driven buffer sizing in mixed swing quadrail for sub-1v i/o swings r. krishnamurthy i. lys l. carley multiple instruction issue in the nonstop cyclone processor this paper describes the architecture for issuing multiple instructions per clock in the nonstop cyclone processor. pairs of instructions are fetched and decoded by a dual two-stage prefetch pipeline and passed to a dual six-stage pipeline for execution. dynamic branch prediction is used to reduce branch penalties. a unique microcode routine for each pair is stored in the large duplexed control store. the microcode controls parallel data paths optimized for executing the most frequent instruction pairs. other features of the architecture include cache support for unaligned double-precision accesses, a virtually-addressed main memory, and a novel precise exception mechanism. robert w. horst richard l. harris robert l. jardine is redundancy necessary to reduce delay logic optimization procedures principally attempt to optimize three criteria: performance, area and testability. the relationship between area optimization and testability has recently been explored. as to the relationship between performance and testability, experience has shown that performance optimizations can, and do in practice, introduce single stuck-at-fault redundancies into designs. are these redundancies necessary to increase performance or are they only an unnecessary byproduct of performance optimization? in this paper we give a constructive resolution of this question in the form of an algorithm that takes as input a combinational circuit and returns an irredundant circuit that is as fast. we demonstrate the utility of this algorithm on a well known circuit, the carry-skip adder, and present a novel irredundant design of that adder.
as our algorithm may either increase or decrease circuit area, we leave unresolved the question as to whether every circuit has an irredundant circuit that is at least as fast and is of equal or lesser area. kurt keutzer sharad malik alexander saldanha mesh arrays and logician: a tool for their efficient generation this paper introduces a standard structure for vlsi design which we call the mesh array and describes a design tool called logician which minimizes a set of functions for realization in cmos mesh arrays. logician features multi- level logic synthesis through recursive enumeration of each function. several techniques to speed-up the minimization process in logician are described. j. a. beekman r. m. owens m. j. irwin a knowledge based system for selecting a test methodology for a pla testability is a very important aspect of vlsi circuits. numerous design for testability (dft) methods exist. often designers face the complex problem of selecting the best dft techniques for a particular chip under a set of design constraints and goals. in order to aid in designing testable circuits, a prototype knowledge based system has been developed which simulates a human expert on design of testable plas. the system, described in this paper, has knowledge about testable pla design methodologies and is able to negotiate with the user so as to lead the user through the design space to find a satisfactory solution. a new search strategy, called reason analysis, is introduced. m. a. breuer xi-an zhu the directory-based cache coherence protocol for the dash multiprocessor dash is a scalable shared-memory multiprocessor currently being developed at stanford's computer systems laboratory. the architecture consists of powerful processing nodes, each with a portion of the shared-memory, connected to a scalable interconnection network. a key feature of dash is its distributed directory-based cache coherence protocol. unlike traditional snoopy coherence protocols, the dash protocol does not rely on broadcast; instead it uses point-to-point messages sent between the processors and memories to keep caches consistent. furthermore, the dash system does not contain any single serialization or control point. while these features provide the basis for scalability, they also force a reevaluation of many fundamental issues involved in the design of a protocol. these include the issues of correctness, performance and protocol complexity. in this paper, we present the design of the dash coherence protocol and discuss how it addresses the above issues. we also discuss our strategy for verifying the correctness of the protocol and briefly compare our protocol to the ieee scalable coherent interface protocol. daniel lenoski james laudon kourosh gharachorloo anoop gupta john hennessy design verification via simulation and automatic test pattern generation hussain al-asaad john p. hayes computing perspectives: the rise of the vlsi processor around 1970 intel discovered it could put 2,000 transistors---or perhaps a few more---on a single nmos chip. in retrospect, this may be said to mark the beginning of very large-scale integration (vlsi), an event which had been long heralded, but had been seemingly slow to come. at the time, it went almost unnoticed in the computer industry. this was partly because 2,000 transistors fell far short of what was needed to put a processor on a chip, but also because the industry was busy exploiting medium-scale integration (msi) in the logic family known as ttl. 
based on bipolar transistors, and with a wide range of parts containing a few logical elements---typically two flip-flops or up to 16 gates in various combinations---ttl was highly successful. it was fast and versatile, and established new standards for cost effectiveness and reliability. indeed, in an improved form and with better process technology, ttl is still widely used. in 1970, nmos seemed a step backward as far as speed was concerned. intel did, however, find a customer for its new process; it was a company that was interested in a pocket calculator chip. intel was able to show that a programmable device would be preferable on economic grounds to a special-purpose device. the outcome was the chip that was later put on the market as the intel 4004. steady progress continued, and led to further developments: in april 1972 came the intel 8008 which comprised 3,300 transistors, and then in april 1974 came the 8080 which had 4,500 transistors. the 8080 was the basis of the altair 8800 which some people regard as the ancestor of the modern personal computer. it was offered in the form of a kit in january 1975. other semiconductor manufacturers then entered the field: motorola introduced the 6800 and mos technology inc. introduced the 6502. microcomputers had great success in the personal computer market which grew up alongside the older industry, but was largely disconnected from it. minicomputers were based on ttl and were faster than microcomputers. with instruction sets of their own design and with proprietary software, manufacturers of minicomputers felt secure in their well-established markets. it was not until the mid-1980s that they began to come to terms with the idea that one day they might find themselves basing some of their products on microprocessors taken from the catalogs of semiconductor manufacturers, over whose instruction sets they had no control. they were even less prepared for the idea that personal computers, in an enhanced form known as workstations, would eventually come to challenge the traditional minicomputer. this is what has happened---a minicomputer has become nothing more than a workstation in a larger box and provided with a wider range of peripheral and communication equipment. as time has passed, the number of cmos transistors that can be put on a single chip has increased steadily and dramatically. while this has been primarily because improvements in process technology have enabled semiconductor manufacturers to make the transistors smaller, it has also been helped by the fact that chips have tended to become larger. it is a consequence of the laws of physics that scaling the transistors down in size makes them operate faster. as a result, processors have steadily increased in speed. it would not have been possible, however, to take full advantage of faster transistors if the increase in the number that could be put on a chip had not led to a reduction in the total number of chips required. this is because of the importance of signal propagation time and the need to reduce it as the transistors become faster. it takes much less time to send a signal from one part of the chip to another part than it does to send a signal from one chip to another. the progress that has been made during the last three or four years is well illustrated by comparing the mips r2000 processor developed in 1986 with two-micron technology, with the intel i860 developed in 1989. the former is based on a risc processor which takes up about half the available space.
this would not have left enough space for more than a very small amount of cache memory. instead the designer included the cache control circuits for off-chip instruction and data caches. the remaining space, amounting to about one-third of the whole was put to good use to accommodate a memory management unit (mmu) with a translation look aside buffer (tlb) of generous proportions. at this stage in the evolution of processor design, the importance of risc philosophy in making the processor as small as it was will be appreciated. a processor of the same power designed along pre-risc lines would have taken up the entire chip, leaving no space for anything else. when the intel i860 processor was developed three years later, it had become possible to accommodate on the chip, not only the units mentioned above, but also two caches---one for data and one for instructions---and a highly parallel floating point coprocessor. this was possible because the silicon area was greater by a factor of slightly more than 2, and the amount of space occupied by a transistor less by a factor of 2.5. this gave a five-fold effective increase in the space available. the space occupied by the basic risc processor itself is only 10% of the whole as compared with 50% on the r2000. about 35% is used for the floating point coprocessor and 20% for the memory management and bus control. this left about 35% to be used for cache memory. there are about one million transistors on the i860---that is 10 times as many as on the r2000, not 5 times as many as the above figures would imply. this is because much of the additional space is used for memory, and memory is very dense in transistors. when still more equivalent space on the silicon becomes available, designers who are primarily interested in high-speed operation will probably use the greater part of it for more memory, perhaps even providing two levels of cache on the chip. cmos microprocessors have now pushed up to what used to be regarded as the top end of the minicomputer range and will no doubt go further as the transistor size is further reduced. bipolar transistors have followed cmos transistors in becoming smaller, although there has been a lag. this is mainly because of the intrinsically more complex nature of the bipolar process; but it is also partly because the great success of cmos technology has led the semiconductor industry to concentrate its resources on it. bipolar technology will always suffer from the handicap that it takes twice as many transistors to make a gate as it does in cmos. the time to send a signal from one place to another depends on the amount of power available to charge the capacitance of the interconnecting wires. this capacitance is much greater for inter-chip wiring than for on-chip wiring. in the case of cmos, which is very low-power technology, it is difficult to provide enough power to drive inter-chip wiring at a high speed. the premium placed on putting everything on the same chip is, therefore, very great. much more power is available with bipolar circuits and the premium is not nearly so great. for this reason it has been possible to build multi-chip processors using gate arrays that take full advantage of the increasingly high speed of available bipolar technology. it is presently the case that all very fast computers on the market use multi- chip bipolar processors. 
nevertheless, as switching speeds have become higher it has become necessary to develop interconnect systems that are faster than traditional printed circuit boards. it is becoming more and more difficult to do this as switching speeds continue to increase. in consequence, bipolar technology is approaching the point---reached earlier with cmos---when further advance requires that all those units of a processor that need to communicate at high speed shall be on the same chip. fortunately, we are in sight of achieving this. it will soon be possible to implement, in custom bipolar technology on a single chip, a processor similar to the r2000. such a processor may be expected to show a spectacular increase of speed compared with multi-chip implementations based on similar technology, but using gate arrays. however, as it becomes possible to put even more transistors on a single chip, it may be that the balance of advantage will lie with cmos. this is because it takes at least four times as many transistors to implement a memory cell in bipolar as it does in cmos. since any processor, especially a cmos processor, gains greatly in performance by having a large amount of on-chip memory, this advantage could well tip the balance in favor of cmos. the advantage that would result from being able to put cmos transistors and bipolar transistors on the same chip has not gone unnoticed in the industry. active development is proceeding in this area, under the generic name bicmos. bicmos is also of interest for analogue integrated circuits. if the bicmos process were optimized for bipolar transistors it would be possible to have a very high- performance bipolar processor with cmos on-chip memory. if the bipolar transistors were of lower-performance levels they would still be of value for driving off-chip connections and also for driving long- distance connections on the chip itself. a pure bipolar chip, with a million transistors on it, will dissipate at least 50 watts, probably a good deal more. removing the heat presents problems, but these are far from being insuperable. more severe problems are encountered in supplying the power to the chip and distributing it without a serious voltage drop or without incurring unwanted coupling. design tools to help with these problems are lacking. a bicmos chip of similar size will dissipate much less power. on the other hand, bicmos will undoubtedly bring a spate of problems of its own, particularly as the noise characteristics of cmos and bipolar circuits are very different. cmos, bipolar, and bicmos technologies are all in a fluid state of evolution. it is possible to make projections about what may happen in the short term, but what will happen in the long term can only be a matter of guess work. moreover, designing a computer is an exercise in system design and the overall performance depends on the statistical properties of programs as much as on the performance of the individual components. it would be a bold person who would attempt any firm predictions. and then, finally, there is the challenge of gallium arsenide. a colleague, with whom i recently corresponded, put it very well when he described gallium arsenide as the wankel engine of the semiconductor industry! maurice v. wilkes high-level design verification of microprocessors via error modeling a design verification methodology for microprocessor hardware based on modeling design errors and generating simulation vectors for the modeled errors via physical fault testing techniques is presented. 
we have systematically collected design error data from a number of microprocessor design projects. the error data is used to derive error models suitable for design verification testing. a class of basic error models is identified and shown to yield tests that provide good coverage of common error types. to improve coverage for more complex errors, a new class of conditional error models is introduced. an experiment to evaluate the effectiveness of our methodology is presented. single actual design errors are injected into a correct design, and it is determined if the methodology will generate a test that detects the actual errors. the experiment has been conducted for two microprocessor designs and the results indicate that very high coverage of actual design errors can be obtained with test sets that are complete for a small number of synthetic error models. d. van campenhout h. al-asaad j. p. hayes t. mudge r. b. brown efficient resource arbitration in reconfigurable computing environments iyad ouaiss ranga vemuri low-cost branch folding for embedded applications with small tight loops many portable and embedded applications are characterized by spending a large fraction of execution time on small program loops. to improve performance, many embedded systems use special instructions to handle program loop executions. these special instructions, however, consume opcode space, which is valuable in the embedded computing environments. in this paper, we propose a hardware technique for folding out branches when executing these small loops. this technique does not require any special branch instructions. it is based on the detection and utilization of certain short backward branch instructions (sbb). an sbb is any pc-relative branch instruction with a limited backward branch distance. once an sbb is detected, its displacement field is used by the hardware to identify the actual program loop size. it does so by loading this negative displacement field into a counter and incrementing the counter for each instruction sequentially executed. as the count approaches zero, the hardware folds out the sbb by predicting that it is always taken. the hardware overhead for this technique is minimal. using a 5-bit increment counter, the performance improvement over a set of embedded applications is about 7.5%. lea hwang lee jeff scott bill moyer john arends an assigned probability technique to derive realistic worst-case timing models of digital standard cells alessandro dal fabbro bruno franzini luigi croce carlo guardiani logic design for low-voltage/low-power cmos circuits c. piguet j.-m. masgonty v. von kaenel t. schneider selective, accurate, and timely self-invalidation using last-touch prediction communication in cache-coherent distributed shared memory (dsm) often requires invalidating (or writing back) cached copies of a memory block, incurring high overheads. this paper proposes last-touch predictors (ltps) that learn and predict the "last touch" to a memory block by one processor before the block is accessed and subsequently invalidated by another. by predicting a last-touch and (self-)invalidating the block in advance, an ltp hides the invalidation time, significantly reducing the coherence overhead. the key behind accurate last-touch prediction is trace-based correlation, associating a last-touch with the sequence of instructions (i.e., a trace) touching the block from a coherence miss until the block is invalidated.
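a minimal c sketch of one way such a per-block trace signature could be kept is shown below; the table size, indexing, and signature function are illustrative assumptions, not the authors' design:

```c
/* hypothetical sketch of a per-block last-touch predictor; the table size,
 * indexing, and signature function are assumptions, not the authors' design */
#include <stdbool.h>
#include <stdint.h>

#define LTP_ENTRIES 4096u

typedef struct {
    uint32_t cur_sig;        /* signature accumulated since the last coherence miss */
    uint32_t last_touch_sig; /* signature learned at the previous last touch */
} ltp_entry_t;

static ltp_entry_t ltp[LTP_ENTRIES];

static inline uint32_t ltp_index(uint64_t block_addr) {
    return (uint32_t)(block_addr % LTP_ENTRIES);
}

/* called on every access to 'block_addr'; returns true when the running trace
 * matches the learned last-touch signature, i.e. the block can be
 * self-invalidated (written back) ahead of the remote request */
bool ltp_access(uint64_t block_addr, uint64_t pc) {
    ltp_entry_t *e = &ltp[ltp_index(block_addr)];
    e->cur_sig = (e->cur_sig << 1) ^ (uint32_t)(pc >> 2); /* fold pc into trace */
    return e->last_touch_sig != 0 && e->cur_sig == e->last_touch_sig;
}

/* called when the coherence protocol actually invalidates the block: the most
 * recent access was the true last touch, so learn its trace signature */
void ltp_invalidate(uint64_t block_addr) {
    ltp_entry_t *e = &ltp[ltp_index(block_addr)];
    e->last_touch_sig = e->cur_sig;
    e->cur_sig = 0; /* a new trace starts with the next coherence miss */
}
```

in this sketch the predictor learns the signature seen at the access that immediately preceded an invalidation and later fires when the same signature recurs, which is the trace-based correlation idea described above.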
correlating instructions enables an ltp to identify a last-touch to a memory block uniquely throughout an application's execution. in this paper, we use results from running shared-memory applications on a simulated dsm to evaluate ltps. the results indicate that: (1) our base case ltp design, maintaining trace signatures on a per-block basis, substantially improves prediction accuracy over previous self-invalidation schemes to an average of 79%; (2) our alternative ltp design, maintaining a global trace signature table, reduces storage overhead but only achieves an average accuracy of 58%; (3) last-touch prediction based on a single instruction only achieves an average accuracy of 41% due to instruction reuse within and across computation; and (4) ltp enables selective, accurate, and timely self- invalidation in dsm, speeding up program execution on average by 11%. an-chow lai babak falsafi simultaneous logic decomposition with technology mapping in fpga designs conventional technology mapping algorithms for sram-based field programmable gate arrays (fpgas) are normally carried out on a fixed logic decomposition of a circuit. the impact of logic decomposition on delay and area of the technology mapping solutions is not well understood. in this paper, we present an algorithm named sldmap that performs delay-minimized technology mapping on a large set of decompositions and simultaneously controls the mapping area under delay constraints. our study leads to two conclusions: (1) for depth minimization, the best algorithms in conventional flow (dmig + cutmap) produce satisfactory results with a short runtime, even with a fixed decomposition; (2) when all the structural decompositions of the 6-bounded boolean network are explored, sldmap consistently outperforms the state-of-the-art separate flow (dmig + cutmap) by 12% in depth and 10% in area on average; it also consistently outperforms the state-of-the-art combined approach dogma by 8% in depth and 6% in area on average. gang chen jason cong rom-based finite state machines with pla address modifiers t. luba k. gorski l. b. wronski mixed-signal switching noise analysis using voronoi-tessellated substrate macromodels ivan l. wemple andrew t. yang decoupled sectored caches: conciliating low tag implementation cost a. seznec an algorithm for incremental timing analysis jin-fuw lee donald t. tang a novel renaming scheme to exploit value temporal locality through physical register reuse and unification stephen jourdan ronny ronen michael bekerman bishara shomar adi yoaz multes/is: an effective and reliable test generation system for partial scan and non-scan synchronous circuits this paper describes an automatic test generation system which effectively generates test vectors by recognizing the circuit blocks for which vectors are automatically generated and the circuit blocks for which vectors have to be manually prepared. test vectors for full scan, partial scan and nonscan synchronous circuit blocks are automatically generated. test vectors for asynchronous circuit blocks have to be manually prepared. t. ogihara k. muroi g. yonemori s. murai integrated fault diagnosis targeting reduced simulation vamsi boppana w. kent fuchs a layout checking system for large scale integrated circuits k. yoshida m. takashi y. nakada t. chiva k. 
ogita design practices for better reliability and yield (tutorial) yervant zorian michael nicolaidis peter muhmenthaler david lepejian chris strolenberg kees veelenturf logic synthesis techniques for reduced area implementation of multilevel circuits with concurrent error detection this paper presents new logic synthesis techniques for generating multilevel circuits with concurrent error detection based on a parity-check code scheme that can detect all errors caused by single stuck-at faults. these synthesis techniques fully automate the design process and allow for a better quality result than previous methods thereby reducing the cost of concurrent error detection. an algorithm is described for selecting a good parity-check code for encoding the outputs of a circuit. once the code has been chosen, a new procedure called structure-constrained logic optimization is used to minimize the area of the circuit as much as possible while still using a circuit structure that ensures that single stuck-at faults cannot produce undetected errors. the implementation that is generated is path fault secure and when augmented by a checker forms a self-checking circuit. results indicate that self-checking multilevel circuits can be generated which require significantly less area than using duplication. nur a. touba edward j. mccluskey improving the performance and efficiency of an adaptive amplification operation using configurable hardware (poster abstract) michael j. wirthlin paul graham fpga implementation and analysis of image restoration f. s. ogrenci a. k. katsaggelos m. sarrafzadeh the conlan project: status and future plans conlan (consensus language) is a general formal language construction mechanism for the description of hard- and firmware at different levels of abstraction. it has been developed by the international conlan working group. members of the conlan language family are derived from a common root language called bcl (base conlan). this language provides the basic object types and operations to describe the behavior and the structure of digital systems in space and time. the paper is based on the conlan draft report (in print). the purpose of the paper is (1) to provide an informal introduction to the draft report version of bcl together with examples of its application (2) to outline some work on the derivation of languages from bcl and (3) to describe status and further plans for software tools supporting language derivation and implementation. robert piloty dominique borrione algorithmic transformations in the implementation of k-means clustering on reconfigurable hardware in mapping the k-means algorithm to fpga hardware, we examined algorithm level transforms that dramatically increased the achievable parallelism. we apply the k-means algorithm to multi-spectral and hyper-spectral images, which have tens to hundreds of channels per pixel of data. k-means is an iterative algorithm that assigns to each pixel a label indicating which of k clusters the pixel belongs to. k-means is a common solution to the segmentation of multi-dimensional data. the standard software implementation of k-means uses floating-point arithmetic and euclidean distances. floating point arithmetic and the multiplication-heavy euclidean distance calculation are fine on a general purpose processor, but they have large area and speed penalties when implemented on an fpga. in order to get the best performance of k-means on an fpga, the algorithm needs to be transformed to eliminate these operations.
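for example, replacing the euclidean distance with a manhattan distance removes the multipliers entirely, leaving only subtractions, absolute values, and additions; a minimal integer sketch of the assignment step (the channel count and number of clusters below are illustrative assumptions, not values from the paper):

```c
/* illustrative sketch of the transform discussed above: cluster assignment
 * with an integer manhattan distance instead of a floating-point euclidean
 * distance, so no multipliers are needed. channels and k are assumptions. */
#include <stdint.h>
#include <stdlib.h>

#define CHANNELS 8   /* spectral channels per pixel (assumed) */
#define K        4   /* number of clusters (assumed) */

/* manhattan distance: only subtractions, absolute values, and additions */
static uint32_t manhattan(const uint16_t *pixel, const uint16_t *center) {
    uint32_t d = 0;
    for (int c = 0; c < CHANNELS; c++)
        d += (uint32_t)abs((int)pixel[c] - (int)center[c]);
    return d;
}

/* label of the nearest cluster center for one pixel */
int assign_label(const uint16_t *pixel, uint16_t centers[K][CHANNELS]) {
    int best = 0;
    uint32_t best_d = manhattan(pixel, centers[0]);
    for (int k = 1; k < K; k++) {
        uint32_t d = manhattan(pixel, centers[k]);
        if (d < best_d) { best_d = d; best = k; }
    }
    return best;
}
```

the same assignment loop with a euclidean distance would need channels x k multiplications per pixel, which is exactly the cost the transformation avoids.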
we examined the effects of using two other distance measures, manhattan and max, that do not require multipliers. we also examined the effects of using fixed precision and truncated bit widths in the algorithm. it is important to explore algorithmic level transforms and tradeoffs when mapping an algorithm to reconfigurable hardware. a direct translation of the standard software implementation of k-means would result in a very inefficient use of fpga hardware resources. analysis of the algorithm and data is necessary for a more efficient implementation. our resulting implementation exhibits approximately a 200 times speed up over a software implementation. mike estlick miriam leeser james theiler john j. szymanski diagnostic simulation of stuck-at faults in sequential circuits using compact lists this article describes a diagnostic fault simulator for stuck-at faults in sequential circuits that is both time and space efficient. the simulator represents indistinguishable classes of faults as memory efficient lists. the use of lists reduces the number of output response comparisons between faults and hence speeds up the simulation process. the lists also make it easy to drop faults when they are fully distinguished from other faults. experimental results on the iscas89 circuits show that the simulator runs significantly faster than an earlier work based on distinguishability matrices, and for large circuits is faster and more memory efficient than a recent method based on lists of indistinguishable faults. the paper provides the first reports on pessimistic and optimistic diagnostic measures for all faults of the large iscas circuits with known deterministic tests. the diagnostic fault simulator has also been modified to diagnose defects, given the output responses of failing devices. results on simulated bridging defects show that the diagnosis time is comparable to the time for fault simulation with fault dropping. ismed hartanto srikanth venkataraman w. kent fuchs elizabeth m. rudnick janak h. patel sreejit chakravarty fault modeling of differential ecl udo jorczyk wilfried daehn oliver neumann diagnosing faults through responsibility robert milne exploiting positive equality and partial non-consistency in the formal verification of pipelined microprocessors miroslav n. velev randal e. bryant industrial evaluation of dram tests ad j. van de goor j. de neef jitter-tolerant clock routing in two-phase synchronous systems joe g. xi wayne w.-m. dai trip report - the european conference on design automation suresh rajgopal the memory wall and the cmos end-point maurice v. wilkes magneto-optical data storage terry mcdaniel a 100 mhz pll implemented on a 100k gate programmable logic device (abstract) david jefferson srinivas reddy christopher lane ninh ngo wanli chang manuel mijia ketan zaveri cameron mcclintock richard cliff a verification technique for hardware designs most existing hardware design verification techniques (logic simulation, symbolic simulation, etc.), as well as the design phase, are rather synthetic. this paper discusses an analytic verification technique with examples of its application. this technique employs backward symbolic simulation, or causality tracing, which is carried out from the negation of a proposition which should be verified. the analyticity of this technique not only makes verification powerful but also gives it another feature: design error diagnosis.
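a toy illustration of the refutation idea (not the authors' tool): the proposition "the implementation matches the specification for every input" is checked by searching for a satisfying assignment of its negation, and an empty search verifies the design; the two-bit ripple-carry adder below is a made-up example circuit:

```c
/* toy illustration (not the authors' tool) of verifying by refuting the
 * negation of a proposition: search for a counterexample to
 * "impl(a,b) == spec(a,b) for all inputs"; an empty search proves it. */
#include <stdio.h>

/* specification: 2-bit addition with a 3-bit result */
static unsigned spec(unsigned a, unsigned b) { return (a + b) & 7u; }

/* gate-level style implementation (ripple carry), the design under check */
static unsigned impl(unsigned a, unsigned b) {
    unsigned a0 = a & 1u, a1 = (a >> 1) & 1u, b0 = b & 1u, b1 = (b >> 1) & 1u;
    unsigned s0 = a0 ^ b0, c0 = a0 & b0;
    unsigned s1 = a1 ^ b1 ^ c0, c1 = (a1 & b1) | (c0 & (a1 ^ b1));
    return s0 | (s1 << 1) | (c1 << 2);
}

int main(void) {
    for (unsigned a = 0; a < 4; a++)       /* enumerate all assignments that */
        for (unsigned b = 0; b < 4; b++)   /* could satisfy the negation     */
            if (impl(a, b) != spec(a, b)) {
                printf("counterexample: a=%u b=%u\n", a, b);
                return 1;                  /* negation satisfiable: design error */
            }
    printf("negation unsatisfiable: proposition verified\n");
    return 0;
}
```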
fumihiro maruyama takao uehara nobuaki kawato takao saito locating functional errors in logic circuits in the verification phase of the design of logic circuits using the top-down approach, it is necessary not only to detect but also to locate the source of any inconsistencies that may exist between the functional-level description and its gate-level implementation. in this paper we present a method that determines the areas, within the gate-level circuit, that contain the functional errors. the indicated areas are shown to have sufficient resolution to allow the designer to quickly find the cause of the inconsistency and, therefore, reduce the time required for debugging. k. a. tamura digital detection of analog parametric faults in sc filters ramesh harjani bapiraju vinnakota graph algorithms for clock schedule optimization narendra shenoy robert k. brayton alberto l. sangiovanni-vincentelli exploiting intellectual properties in asip designs for embedded dsp software hoon choi ju hwan yi jong-yeol lee in-cheol park chong-min kyung laser correcting defects to create transparent routing for large area fpga's g. h. chapman benoit dufort circuit complexity reduction for symbolic analysis of analog integrated circuits walter daems georges gielen willy sansen increasing cache port efficiency for dynamic superscalar microprocessors the memory bandwidth demands of modern microprocessors require the use of a multi- ported cache to achieve peak performance. however, multi-ported caches are costly to implement. in this paper we propose techniques for improving the bandwidth of a single cache port by using additional buffering in the processor, and by taking maximum advantage of a wider cache port. we evaluate these techniques using realistic applications that include the operating system. our techniques using a single-ported cache achieve 91% of the performance of a dual-ported cache. kenneth m. wilson kunle olukotun mendel rosenblum saara fulvio corno paolo prinetto maurizio rebaudenpgo matteo sonza reorda optimal bipartite folding of pla the notion of a bipartite folding of a pla is introduced. an efficient branch and bound algorithm is presented which finds an optimal bipartite folding of a pla. the experimental results give additional justification to this folding technique. j. r. egan c. l. liu a graded-channel mos (gcmos) vlsi technology for low power dsp applications j. ma h. liang m. kaneshiro c. kyono r. pryor k. papworth s. cheng target architecture oriented high-level synthesis for multi-fpga based emulation oliver bringmann carsten menn wolfgang rosenstiel comparing structurally different views of a vlsi design in large design projects, it is desirable to compare alternate views that use different hierarchies. however, existing techniques either require essentially identical hierarchies (which is sometimes an unacceptable restriction) or must flatten to remove the differences (which may be very costly). a new technique, informed comparison, has neither shortcoming. first, hierarchy transformations are applied to reconcile the structures of the views; then a hierarchical base comparison finishes the task. the reconciliation is guided by a small amount of additional design information: the intended relationship between the hierarchies of the views. some qualities of informed comparison depend on the reconciliation repertoire and the base comparison. two examples are studied. 
mike spreitzer the role of vhdl within the tosca hardware/software codesign framework donatella sciuto stefano antoniazzi alessandro balboni william fornaciari automatic incorporation of on-chip testability circuits this paper presents a system which automatically incorporates testability circuits into ecl chips. this system incorporates three types of circuit: (1) random access scan circuit, (2) clock suppression circuit for delay fault testing, and (3) pin scan-out circuit for chip i/o pin observation in board testing. fanout destinations of each gate in the testability circuits are localized on a chip to keep the logical net length within the limit. this system was used to develop the new fujitsu vp-2000 supercomputer. noriyuki ito a low-power clock and data recovery circuit for 2.5 gb/s sdh receivers a low power monolithic clock and data recovery ic for 2.5 gb/s sdh stm-16 systems has been designed and fabricated using maxim gst-2 27 ghz-ft silicon bipolar technology. the circuit performs the following functions: signal amplification and limitation, clock recovery and decision; a single 3.3 v supply voltage is required, and power consumption is below 350 mw. this ic, together with a previously presented transimpedance amplifier, allows a receiver chip set to be composed with a total power dissipation below 0.5 w. preliminary measurements under a 2^23-1 prbs data stream have shown an input sensitivity below 20 mvpp and an rms jitter of 10 ps. andrea pallotta francesco centurelli alessandro trifiletti dynamic processor allocation in hypercube computers fully recognizing various subcubes in a hypercube computer efficiently is nontrivial due to the specific structure of the hypercube. we propose a method with much less complexity than the multiple-gc strategy in generating the search space, while achieving complete subcube recognition. this method is referred to as a dynamic processor allocation scheme because the search space generated is dependent upon the dimension of the requested subcube dynamically, rather than being predetermined and fixed. the basic idea of this strategy lies in collapsing the binary tree representations of a hypercube successively so that the nodes which form a subcube but are distant would be brought close to each other for recognition. the strategy can be implemented efficiently by using shuffle operations on the leaf node addresses of binary tree representations. extensive simulation runs are carried out to collect experimental performance measures of interest of different allocation strategies. it is shown from analytic and experimental results that this strategy compares favorably in many situations to any other known allocation scheme capable of achieving complete subcube recognition. po-jen chuang nian-feng tzeng diagnostic testing of embedded memories using bist timothy j. bergfeld dirk niggemeyer elizabeth m. rudnick properhitec: a portable, parallel, object-oriented approach to sequential test generation steven parkes prithviraj banerjee janak patel board-level multi-terminal net routing for fpga-based logic emulation wai-kei mak d. f. wong charge recovery on a databus kei-yong khoo alan n. wilson channel-based behavioral test synthesis for improved module reachability yiorgos makris alex orailoglu special section on microprogramming stanley habib a methodology for concurrent fabrication process/cell library optimization arun n. lokanathan jay b. brockman john e.
renaud supporting system-level power exploration for dsp applications system-level power exploration requires tools for estimation of the overall power consumed by a system, as well as a detailed breakdown of the consumption of its main functional blocks. we focus on power estimation for data-dominated systems specified as synchronous data-flows and implemented on a single-processor architecture. our estimator is integrated within the ptolemy design environment, and provides information to system designers on the power dissipated by every task in a given specification. power estimation is based on instruction-level power models. we demonstrate the applicability of our tool on a few design examples and target architectures. luca benini marco ferrero alberto macii enrico macii massimo poncino interest of a vhdl native environment j. l. giordana logic synthesis for programmable gate arrays the problem of combinational logic synthesis is addressed for two interesting and popular classes of programmable gate array architectures: table-look-up and multiplexor-based. the constraints imposed by some of these architectures require new algorithms for minimization of the number of basic blocks of the target architecture, taking into account the wiring resources. rajeev murgai yoshihito nishizaki narendra shenoy robert k. brayton alberto sangiovanni-vincentelli a distributed i/o architecture for harts the issue of i/o device access in harts --- a distributed real-time computer system under construction at the real-time computing laboratory (rtcl), the university of michigan --- is explicitly addressed. several candidate solutions are introduced, explored, and evaluated according to cost and complexity, reliability, and performance: (1) "node-direct" distribution with the intra-node bus and a local i/o bus, (2) use of dedicated i/o nodes which are placed in the hexagonal mesh as regular application nodes but which provide i/o services rather than computing services, and (3) use of a separate i/o network which has led to the proposal of an "interlaced" i/o network. the interlaced i/o network is intended to provide both high performance, without burdening node processors with i/o overhead, and a high degree of reliability. both static and dynamic multi-ownership protocols are developed for managing i/o device access in this i/o network. the relative merits of the two protocols are explored and the performance and accessibility which each provide are simulated. kang g. shin greg dykema an experimental analysis of the effectiveness of the circular self-test path technique paolo prinetto fulvio corno m. sonza reorda technology mapping of sequential circuits for lut-based fpgas for performance peichen pan c. l. liu recency-based tlb preloading caching and other latency tolerating techniques have been quite successful in maintaining high memory system performance for general purpose processors. however, tlb misses have become a serious bottleneck as working sets are growing beyond the capacity of tlbs. this work presents one of the first attempts to hide tlb miss latency by using preloading techniques. we present results for traditional next-page tlb miss preloading - an approach shown to cut some of the misses. however, a key contribution of this work is a novel tlb miss prediction algorithm based on the concept of "recency", and we show that it can predict over 55% of the tlb misses for the five commercial applications considered.
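the abstract does not spell out the predictor; one plausible reading of "recency" is a list of pages ordered by last use, where a miss also preloads the missing page's neighbours in that order, since pages referenced close together tend to miss close together again. a speculative c sketch under that assumption (the tlb model below is a toy stand-in, not the paper's):

```c
/* speculative sketch: pages kept on a doubly-linked list ordered by last use;
 * a tlb miss loads the missing page and also preloads its neighbours in that
 * recency order. everything here is an assumed, simplified model. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define TLB_ENTRIES 64

static uint64_t tlb[TLB_ENTRIES];     /* cached virtual page numbers */

static bool tlb_contains(uint64_t vpn) {
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i] == vpn) return true;
    return false;
}

static void tlb_insert(uint64_t vpn) {
    static int next;                  /* fifo replacement, purely illustrative */
    if (!tlb_contains(vpn))
        tlb[next++ % TLB_ENTRIES] = vpn;
}

typedef struct page {
    uint64_t vpn;                     /* virtual page number */
    struct page *prev, *next;         /* neighbours in the recency (lru) list */
} page_t;

/* move a page to the head of the recency list on every reference */
static void touch(page_t **head, page_t *p) {
    if (*head == p) return;
    if (p->prev) p->prev->next = p->next;
    if (p->next) p->next->prev = p->prev;
    p->prev = NULL;
    p->next = *head;
    if (*head) (*head)->prev = p;
    *head = p;
}

/* on a tlb miss: load the missing page and preload its recency neighbours,
 * which were referenced around the same time on the previous pass */
static void tlb_miss(page_t *missing) {
    tlb_insert(missing->vpn);
    if (missing->prev) tlb_insert(missing->prev->vpn);
    if (missing->next) tlb_insert(missing->next->vpn);
}
```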
ashley saulsbury fredrik dahlgren per stenström the transmogrifier-2: a 1 million gate rapid prototyping system david m. lewis david r. galloway marcus van ierssel jonathan rose paul chow optimization of critical paths in circuits with level-sensitive latches a simple extension of the critical path method is presented which allows more accurate optimization of circuits with level-sensitive latches. the extended formulation provides a sufficient set of constraints to ensure that, when all slacks are non-negative, the corresponding circuit will be free of late signal timing problems. cycle stealing is directly permitted by the formulation. however, moderate restrictions may be necessary to ensure that the timing constraint graph is acyclic. forcing the constraint graph to be acyclic allows a broad range of existing optimization algorithms to be easily extended to better optimize circuits with level-sensitive latches. we describe the extension of two such algorithms, both of which attempt to solve the problem of selecting parts from a library to minimize area subject to a cycle time constraint. timothy m. burks karem a. sakallah horizontal partitioning of pla-based finite state machines we present a new form of partitioning of pla-based fsms that combines the advantages of traditional vertical pla partitioning (i.e. via inputs and/or outputs) and counter embedding which consists of replacing the fsm state memories by a counter. like the former, horizontal partitioning allows the reduction of the number of input and/or output columns in the plas resulting from the partition. furthermore, the technique also reduces the total number of product terms, as in counter embedding techniques. this reduction is due to a decomposition of state transitions into two classes that are realized by two sets of logic. in this case however, a second pla-based fsm is used in place of the counter. this results in area reductions of 30% to 60% (with respect to regular two-level logic minimization) for the benchmark examples presented. p. g. paulin a placement technique based on minimization and even distribution of crossovers the component placement problem encountered in the design of cards pcb's, etc. is a complex one. although many solutions have been reported, they appear to share one common characteristic, i.e., the objective functions to be optimized are usually based on the classic minimum wiring distance. the time seems ripe for both industry and academia to realize that better and more effective objective functions are needed. a concept of producing placement by abandoning the traditional distance based criterion is presented. the emphasis is on a minimal number of crossovers and even distributions of these crossovers along some boundaries. since the concept has not been implemented, there is no numerical data available for comparison. a few points of concern are raised and hopefully they will stimulate further research in the area of component placement. pao-tsin wang verilat: verification using logic augmentation and transformations dhiraj k. pradhan debjyoti paul mitrajit chatterjee model checking in industrial hardware design jörg bormann jörg lohse michael payer gerd venzl test generation of stuck-open faults using stuck-at test sets in cmos combinational circuits in this paper we investigate two aspects regarding the detection of stuck-open (sop) faults using stuck-at test sets. first, we measure the sop fault coverage of stuck-at test sets for various cmos combinational circuits. 
the sop fault coverage is compared with that of random pattern test sets. second, we propose a method to improve the sop fault coverage of stuck-at test sets by organizing the test sequence of stuck-at test sets. the performance of the proposed method is compared with that of a competing method. experimental results show that the proposed method leads to smaller test sets and shorter processing time while achieving high sop fault coverage. h. k. lee d. s. ha k. kim representing the hardware design process by a common data schema maria brielmann elisabeth kupitz scan latch partitioning into multiple scan chains for power minimization in full scan sequential circuits nicola nicolici bashir m. al-hashimi a delay driven fpga placement algorithm srilata raman c. l. liu larry g. jones using architectural "families" to increase fpga speed and density in order to narrow the speed and density gap between fpgas and mpgas we propose the development of "families" of fpgas. each fpga family is targeted at a single maximum logic capacity, and consists of several "siblings", or fpgas of different yet complementary architectures. any given application circuit is implemented in the sibling with the most appropriate architecture. with properly chosen siblings, one can develop a family of fpgas which will have better speed and density than any single fpga. we apply this concept to create two different fpga families, one composed of architectures with different types of hard-wired logic blocks and the other created from architectures with different types of heterogeneous logic blocks. we found that a family composed of eight chips with different hard-wired logic block architectures simultaneously improves density by 12 to 14% and speed by 18 to 20% over the best single hard-wired fpga. vaughn betz jonathan rose a formal method for the specification, analysis, and design of register- transferlevel digital logic l. hafer a. c. parker testable path delay fault cover for sequential circuits a. krstic k. cheng s. chakradhar noise in deep submicron digital design kenneth l. shepard vinod narayanan an intermediate representation for behavioral synthesis this paper describes an intermediate representation for behavioral and structural designs that is based on annotated state tables. it facilitates user control of the synthesis process by allowing specification of partially design structures, and a mixture of behavior, structure and user specified bindings between the abstract behavior and the structure. the format's general model allows the capture of synchronous and asynchronous behavior, and permits hierarchical descriptions with concurrency. the format is easily translated to vhdl for simulation at each stage of the design process. it therefore complements a good simulation language (vhdl) by providing an excellent input path for behavioral and register-transfer synthesis. the format's simple and uniform syntax allows it to be used both as an intermediate exchange format for various behavioral synthesis tools, and as a graphical tabular interface for the user, thereby allowing a natural medium for automatic or manual refinement of the design. nikil d. dutt tedd hadley daniel d. gajski dfbt: a design-for-testability method based on balance testing krishnendu chakrabarty john p. hayes timed shared circuits: a power-efficient design style and synthesis tool luciano lavagno patrick c. mcgeer alexander saldanha alberto l. 
sangiovanni- vincentelli algorithms for address assignment in dsp code generation rainer leupers peter marwedel analysis and reliable design of ecl circuits with distributed rlc interconnections monjurul haque s. chowdhury vhdl development system and coding standard hans sahm claus mayer jörg pleickhardt johannes schuck stefan späth hot-carrier reliability enhancement via input reordering and transistor sizing aurobindo dasgupta ramesh karri modifying vm hardware to reduce address pin requirements matthew farrens arvin park gary tyson reliable low-power design in the presence of deep submicron noise (embedded tutorial session) scaling of feature size in semiconductor technology has been responsible for increasingly higher computational capacity of silicon. this has been the driver for the revolution in communications and computing. however, questions regarding the limits of scaling (and hence moore's law) have arisen in recent years due to the emergence of deep submicron noise. the tutorial describes noise in deep submicron cmos and their impact on digital as well as analog circuits. in particular, noise-tolerance is proposed as an effective means for achieving energy and performance efficiency in the presence of dsm noise. naresh shanbhag k. soumyanath samuel martin memory system energy (poster session): influence of hardware-software optimizations memory system usually consumes a significant amount of energy in many battery- operated devices. in this paper, we provide a quantitative comparison and evaluation of the interaction of two hardware cache optimization mechanisms (block buffering and sub-banking) and three widely used compiler optimization techniques (linear loop transformation, loop tiling, and loop unrolling). our results show that the pure hardware optimizations (eight block buffers and four sub-banks in a 4k, 2-way cache) provided up to 4% energy saving, with an average saving of 2% across all benchmarks. in contrast, the pure software optimization approach that uses all three compiler optimizations, provided at least 23% energy saving, with an average of 62%. however, a closer observation reveals that hardware optimization becomes more critical for on-chip cache energy reduction when executing optimized codes. g. esakkimuthu n. vijaykrishnan m. kandemir m. j. irwin minimization of memory traffic in high-level synthesis david j. kolson alexandru nicolau nikil dutt designing power efficient hypermedia processors chunho lee johnson kin miodrag potkonjak william h. mangione-smith mems-based integrated-circuit mass-storage systems l. richard carley gregory r. ganger david f. nagle rapid prototyping of asic based systems p. h. kelly k. j. page p. m. chau on generating compact test sequences for synchronous sequential circuits irith pomeranz sudhakar m. reddy soft error correction for increased densities in vlsi memories khaled abdel ghaffar robert j. mceliece multilevel logic synthesis for arithmetic functions chien-chung tsai malgorzata marek-sadowska design-for-debugging of application specific designs miodrag potkonjak sujit dey kazutoshi wakabayashi high-level library mapping for memories we present high-level library mapping, a technique that synthesizes a source memory module from a library of target memory modules. in this paper, we define the problem of high-level library mapping for memories, identify and solve the three subproblems associated with this task, and finally combine these solutions into a suite of two memory mapping algorithms. 
experimental results on a number of memory- intensive designs demonstrate that our memory mapping approach generates a wide variety of cost-effective designs, often counter-intuitive ones, based on a user-given cost function, the target library, and the mapping algorithm used. pradip k. jha nikil d. dutt concurrency-oriented optimization for low-power asynchronous systems l. plana s. nowick an 8ma, 3.8db nf, 40db gain cmos front-end for gps applications a fully differential 0.35μm cmos lna plus mixer, tailored to a double conversion architecture, for gps applications has been realized. the lna makes use of an inductively degenerated input stage and a resonant lc load, featuring 12% frequency tuning, accomplished by an mos varactor. the mixer is a gilbert cell like, in which an nmos and a pmos differential pair, shunted together, realize the input stage. this topology allows to save power, for given mixer gain and linearity. the front-end measured performances are: 40db gain, 3.8db nf, -25.5dbm iip3, 1.3ghz input frequency, 140mhz output frequency, with 8ma from a 2.8v voltage supply. f. svelto s. deantoni g. montagna r. castello efficient and realistic simulation of disk cache performance this paper describes an improved method for evaluating disk cache performance using trace driven simulation. this method differentiates between reads and writes in the trace data which results in higher miss ratios than when all traced events are treated alike. it allows for simulating various update policies and update intervals, physical blocks that are part of different files at different times, and the optimum replacement policy gopt. these methods are applied to traces from a vax 11/780 running 4.3 bsd unix to illustrate the importance of including writes in the traces and also to analyze the effects of choosing various update intervals. john f. cigas timing-driven placement for fpgas in this paper we introduce a new simulated annealing-based timing-driven placement algorithm for fpgas. this paper has three main contributions. first, our algorithm employs a novel method of determining source-sink connection delays during placement. second, we introduce a new cost function that trades off between wire-use and critical path delay, resulting in significant reductions in critical path delay without significant increases in wire-use. finally, we combine connection-based and path-based timing-analysis to obtain an algorithm that has the low time- complexity of connection-based timing- driven placement, while obtaining the quality of path-based timing-driven placement. a comparison of our new algorithm to a well known non-timing-driven placement algorithm demonstrates that our algorithm is able to increase the post-place- and-route speed (using a full path-based timing-driven router and a realistic routing architecture) of 20 mcnc benchmark circuits by an average of 42%, while only increasing the minimum wiring requirements by an average of 5%. alexander marquardt vaughn betz jonathan rose energy-driven integrated hardware-software optimizations using simplepower with the emergence of a plethora of embedded and portable applications, energy dissipation has joined throughput, area, and accuracy/precision as a major design constraint. thus, designers must be concerned with both optimizing and estimating the energy consumption of circuits, architectures, and software. 
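as a rough illustration of the kind of first-order estimate that sits underneath such energy frameworks - not simplepower's transition-sensitive model - the following c sketch sums the textbook dynamic-energy term per node, a * c * vdd^2, where a is the 0-to-1 transition probability per cycle; all capacitances, activities, supply, and clock values are assumptions.

#include <stdio.h>

/* first-order dynamic energy per clock: sum over nodes of a * c * vdd^2,
 * where a is the 0->1 transition probability per cycle.  a generic
 * textbook estimate with assumed values. */
struct node { double cap_f; double activity; };

int main(void)
{
    struct node nodes[] = {
        { 20e-15, 0.25 },              /* 20 ff node switching 25% of cycles */
        { 35e-15, 0.10 },
        { 12e-15, 0.50 },
    };
    double vdd = 2.5, fclk = 100e6;    /* assumed supply and clock */
    double e_cycle = 0.0;

    for (unsigned i = 0; i < sizeof nodes / sizeof nodes[0]; i++)
        e_cycle += nodes[i].activity * nodes[i].cap_f * vdd * vdd;

    printf("energy/cycle = %.3e j, average power = %.3e w\n",
           e_cycle, e_cycle * fclk);
    return 0;
}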
most of the research in energy optimization and/or estimation has focused on single components of the system and has not looked across the interacting spectrum of the hardware and software. the novelty of our new energy estimation framework, simplepower, is that it evaluates the energy considering the system as a whole rather than just as a sum of parts, and that it concurrently supports both compiler and architectural experimentation. we present the design and use of the simplepower framework that includes a transition- sensitive, cycle-accurate datapath energy model that interfaces with analytical and transition sensitive energy models for the memory and bus subsystems, respectively. we analyzed the energy consumption of ten codes from the multidimensional array domain, a domain that is important for embedded video and signal processing systems, after applying different compiler and architectural optimizations. our experiments demonstrate that early estimates from the simplepower energy estimation framework can help identify the system energy hotspots and enable architects and compiler designers to focus their efforts on these areas. n. vijaykrishnan m. kandemir m. j. irwin h. s. kim w. ye vhdl fault simulation for defect-oriented test and diagnosis of digital ics j. teixeira f. celeiro l. dias j. ferreira m. santos lazy data routing and greedy scheduling for application-specific signal processors k. rimey p. n. hilfinger linear time fault simulation algorithm using a content addressable memory nagisa ishiura shuzo yajima verification of timing constraints on large digital systems t. mcwilliams using sparse crossbars within lut in fpgas, the internal connections in a cluster of lookup tables (luts) are often fully-connected like a full crossbar. such a high degree of connectivity makes routing easier, but has significant area overhead. this paper explores the use of sparse crossbars as a switch matrix inside the clusters between the cluster inputs and the lut inputs. we have reduced the switch densities inside these matrices by 50% or more and saved from 10 to 18% in area with no degradation to critical-path delay. to compensate for the loss of routability, increased compute time and spare cluster inputs are required. further investigation may yield modest area and delay reductions. guy lemieux david lewis dataflow analysis of branch mispredictions and its application to early resolution of branch outcomes alexandre farcy olivier temam roger espasa toni juan a new high density and very low cost reprogrammable fpga architecture sinan kaptanoglu greg bakker arun kundu ivan corneillet ben ting aesthetic routing for transistor schematics tsung d. lee lawrence p. mcnamee a hypothetical computer to simulate microprogramming and conventional machine language jerry e. sayers david e. martin on test set preservation of retimed circuits aiman el-maleh thomas marchok janusz rajski wojciech maly a gate-delay model for high-speed cmos circuits florentin dartu noel menezes jessica qian lawrence t. pillage a computer aided design automation system for developing microprogrammed processors: a design approach through hdls w. j. chen g. n. reddy 1990 workshop on logic level modeling for asics final report mark glasser rob mathews john m. 
acken complex system verification (panel): the challenge ahead ron collett ken mcmillan alberto sangiovanni-vincentelli martin baynes naeem zafar steve sapiro johan van ginderdeuren stephen ricca a hardware implementation of gridless routing based on content addressable memory a new gridless router accelerated by content addressable memory (cam) is presented. a gridless version of the line-expansion algorithm is implemented, which always finds a path if one exists. the router runs in linear time by means of the cam-based accelerator. experimental results show that the more obstacles there are in the routing region, the more effective the cam-based approach is. masao sato kazuto kubota tatsuo ohtsuki energy-efficient 32 x 32-bit multiplier in tunable near-zero threshold cmos an 80,000 transistor, low swing, 32 x 32-bit multiplier was fabricated in a standard 0.35μm, vth=0.5 v cmos process and in a 0.35μm, back-bias tunable, near-zero vth process. while standard cmos at vdd=3.3 v runs at 136 mhz, the same performance can be achieved in the low-vth version at vdd=1.3 v, resulting in more than 5 times lower power. similar power reductions are obtained for frequencies down to 10 mhz. in addition, the low-vth version is able to run at 188 mhz, which is 38% faster than standard cmos. vjekoslav svilan masataka matsui james b. burr incremental synthesis a small change in the input to logic synthesis may cause a large change in the output implementation. this is undesirable if a designer has some investment in the old implementation and does not want it perturbed more than necessary. we describe a method that solves this problem by reusing gates from the old implementation, and restricting synthesis to the modified portions only. daniel brand anthony drumm sandip kundu prakash narain system partitioning to maximize sleep time amir h. farrahi majid sarrafzadeh edge-map: optimal performance driven technology mapping for iterative lut based fpga designs we consider the problem of performance driven lookup-table (lut) based technology mapping for fpgas using a general delay model. in the general delay model, each interconnection edge has a weight representing the delay of the interconnection. this model is particularly useful when combined with an iterative re-technology mapping process where the actual delays of the placed and routed circuit are fed back to the technology mapping phase to improve the mapping based on the more realistic delay estimation. well-known technology mappers such as flowmap and chortle-d only minimize the number of levels in the technology mapped circuit and hence are not suitable for such an iterative re-technology mapping process. recently, mather and liu in [ml94] studied the performance driven technology mapping problem using the general delay model and presented an effective heuristic algorithm for the problem. in this paper, we present an efficient technology mapping algorithm that achieves provably optimal delay in the technology mapped circuit using the general delay model. our algorithm is a non-trivial generalization of flowmap. a key problem in our algorithm is to compute a k-feasible network cut such that the circuit delay on every cut edge is upper-bounded by a specific value. we implemented our algorithm in a lut based fpga technology mapping package called edge-map, and tested edge-map on a set of benchmark circuits. honghua yang d. f.
wong a memory coherence technique for online transient error recovery of fpga configurations the partial reconfiguration feature of some of the current-generation field programmable gate arrays (fpgas) can improve dependability by detecting and correcting errors in on-chip configuration data. such an error recovery process can be executed online with minimal interference of user applications. however, because look-up tables (luts) in configurable logic blocks (clbs) of fpgas can also implement memory modules for user applications, a memory coherence issue arises such that memory contents in user applications may be altered by the online configuration data recovery process. in this paper, we investigate this memory coherence problem and propose a memory coherence technique that does not impose extra constraints on the placement of memory- configured luts. theoretical analyses and simulation results show that the proposed technique guarantees the memory coherence with a very small (on the order of 0.1%) execution time overhead in user applications. wei-je huang edward j. mccluskey trace-driven simulations for a two-level cache design in open bus systems two-level cache hierarchies will be a design issue in future high- performance cpus. in this paper we evaluate various metrics for data cache* designs. we discuss both one- and two- level cache hierarchies. our target is a new 100+ mips cpu, but the methods are applicable to any cache design. the basis of our work is a new trace-driven, multiprocess cache simulator. the simulator incorporates a simple priority-based scheduler which controls the execution of the processes. the scheduler blocks a process when a system call is executed. a workload consists of a total of 60 processes, distributed among seven unique programs with about nine instances each. we discuss two open bus systems supporting a coherent memory model, futurebus+ and sci, as the interconnect system for main memory. håkon o. bugge ernst h. kristiansen bjørn o. bakka feedback, correlation, and delay concerns in the power estimation of vlsi circuits farid n. najm directory services/ibm versus ccitt (panel session, title only) r. perlman e. stecher p. karp top: an algorithm for three-level optimization of plds e. dubrova p. ellervee d. m. miller j. c. muzio resolving signal correlations for estimating maximum currents in cmos combinational circuits harish kriplani farid najm ping yang ibrahim hajj sequential test generation at the register-transfer and logic levels the problem of test generation for non-scan sequential vlsi circuits is addressed. a novel method of test generation that efficiently generates test sequences for stuck-at faults in the logic circuit by exploiting register- transfer-level (rtl) design information is presented. our approach is targeted at chips with data-path like stg. the problem of sequential test generation is decomposed into three subproblems of combinational test generation, fault-free state justification and fault- free state differentiation. standard combinational test generation algorithms are used to generate test vectors for stuck-at faults in the logic-level implementation. the required state corresponding to the test vector is justified using a fault-free justification step that is performed using the rtl specification. 
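as a minimal sketch of what fault-free state justification means - not the paper's rtl algorithm - the following c program runs a breadth-first search over a tiny hypothetical machine (a 4-bit state register with an assumed next-state function) to find an input sequence that drives the reset state into the state required by a combinational test vector.

#include <stdio.h>
#include <string.h>

#define NSTATES 16   /* 4-bit state register in this toy example */

/* hypothetical fault-free next-state function (not from the paper) */
static unsigned next_state(unsigned s, unsigned in)
{
    return (in ? s + 1 : s + 3) & 0xF;
}

/* breadth-first search from the reset state to `target`, recording the
 * predecessor and the input used, then printing the justification
 * sequence: the essence of fault-free state justification on a small,
 * explicitly enumerated state space. */
static void justify(unsigned reset, unsigned target)
{
    int pred[NSTATES], via[NSTATES];
    unsigned queue[NSTATES], head = 0, tail = 0;

    memset(pred, -1, sizeof pred);
    pred[reset] = (int)reset;
    queue[tail++] = reset;

    while (head < tail && pred[target] == -1) {
        unsigned s = queue[head++];
        for (unsigned in = 0; in < 2; in++) {
            unsigned t = next_state(s, in);
            if (pred[t] == -1) {
                pred[t] = (int)s;
                via[t] = (int)in;
                queue[tail++] = t;
            }
        }
    }
    if (pred[target] == -1) { printf("state unreachable\n"); return; }

    /* unwind the path to print inputs in application order */
    unsigned seq[NSTATES], n = 0;
    for (unsigned s = target; s != reset; s = (unsigned)pred[s])
        seq[n++] = (unsigned)via[s];
    printf("inputs to justify state %u:", target);
    while (n) printf(" %u", seq[--n]);
    printf("\n");
}

int main(void) { justify(0, 11); return 0; }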
similarly, if the effect of the fault has been propagated by the test vector to the flip-flop inputs alone, the faulty state produced is differentiated from the true next state by a differentiation step that uses the rtl specification. new and efficient algorithms for fault-free state justification and differentiation on rtl descriptions that contain arithmetic as well as random logic modules are described. unlike previous approaches, this approach does not require the storage of covers or a partial stg and can be used to generate tests for entire chips without scan. exploiting rtl information, together with a new conflict resolution technique results in improvements of up to 100x in performance over sequential test generation techniques restricted to operate at the logic level. we have successfully generated tests for the viterbi speech processor chip [18]. abhijit ghosh srinivas devadas a. richard newton cross-fertilizing fsm verification techniques and sequential diagnosis g. cabodi p. camurati f. corno p. prinetto m. sonza reorda smart antenna receiver based on a single chip solution for gsm/dcs basehand processing u. girola a. picciriello d. vincenzoni multilevel generalization of relaxation algorithms for circuit simulation vladimir b. dmitriyev-zdorov the performance impact of block sizes and fetch strategies this paper explores the interactions between a cache's block size, fetch size and fetch policy from the perspective of maximizing system-level performance. it has been previously noted that given a simple fetch strategy the performance optimal block size is almost always four or eight words [10]. if there is even a small cycle time penalty associated with either longer blocks or fetches, then the performance-optimal size is noticeably reduced. in split cache organizations, where the fetch and block sizes of instruction and data caches are all independent design variables, instruction cache block size and fetch size should be the same. for the workload and write-back write policy used in this trace-driven simulation study, the instruction cache block size should be about a factor of two greater than the data cache fetch size, which in turn should equal to or double the data cache block size. the simplest fetch strategy of fetching only on a miss and stalling the cpu until the fetch is complete works well. complicated fetch strategies do not produce the performance improvements indicated by the accompanying reductions in miss ratios because of limited memory resources and a strong temporal clustering of cache misses. for the environments simulated here, the most effective fetch strategy improved performance by between 1.7% and 4.5% over the simplest strategy described above. steven przybylski the detection and elimination of useless misses in multiprocessors in this paper we introduce a new classification of misses in shared-memory multiprocessors based on interprocessor communication. we identify the set of essential misses, i.e., the smallest set of misses necessary for correct execution. essential misses include cold misses and true sharing misses. all other misses are useless misses and can be ignored without affecting the correctness of program execution. based on the new classification we compare the effectiveness of five different protocols which delay and combine invalidations leading to useless misses. in cache-based systems the protocols are very effective and have miss rates close to the essential miss rate. 
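the block-size and fetch-strategy study above is trace-driven; a minimal sketch of the underlying measurement is a direct-mapped cache simulator swept over block sizes at fixed capacity, shown below in c on a synthetic address trace (capacity, trace, and block sizes are assumptions; writes, prefetching, and multiprocessor sharing effects are ignored).

#include <stdio.h>
#include <stdlib.h>

/* miss ratio of a direct-mapped cache on one synthetic trace, for several
 * block sizes at a fixed total capacity.  purely illustrative. */
#define CACHE_BYTES 4096
#define TRACE_LEN   100000

static double miss_ratio(const unsigned *trace, int n, int block_bytes)
{
    int sets = CACHE_BYTES / block_bytes;
    unsigned *tags = calloc((size_t)sets, sizeof *tags);
    char *valid = calloc((size_t)sets, 1);
    int misses = 0;

    for (int i = 0; i < n; i++) {
        unsigned blk = trace[i] / (unsigned)block_bytes;
        int set = (int)(blk % (unsigned)sets);
        if (!valid[set] || tags[set] != blk) {   /* cold or conflict miss */
            misses++;
            valid[set] = 1;
            tags[set] = blk;
        }
    }
    free(tags); free(valid);
    return (double)misses / n;
}

int main(void)
{
    static unsigned trace[TRACE_LEN];
    unsigned addr = 0;

    /* crude synthetic trace: mostly sequential with occasional jumps */
    for (int i = 0; i < TRACE_LEN; i++) {
        addr = (rand() % 8 == 0) ? (unsigned)rand() % (1u << 20) : addr + 4;
        trace[i] = addr;
    }
    for (int b = 4; b <= 128; b *= 2)
        printf("block %3d bytes: miss ratio %.4f\n", b,
               miss_ratio(trace, TRACE_LEN, b));
    return 0;
}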
in virtual shared memory systems the techniques are also effective but leave room for improvements. michel dubois jonas skeppstedt livio ricciulli krishnan ramamurthy per stenström algorithms to compute bridging fault coverage of iddq test sets we present two algorithms, called list-based scheme and tree-based scheme, to compute bridging fault (bf) coverage of iddq tests. these algorithms use the novel idea of "indistinguishable pairs," which makes them more efficient and versatile than known fault simulation algorithms. unlike known algorithms, the two algorithms can be used for combinational as well as sequential circuits and for arbitrary sets of bfs. experiments show that the tree-based scheme is, in general, better than the list-based scheme. but the list-based scheme is better for some classes of faults. paul thadikaran sreejit chakravarty janak patel the difference-bit cache the difference-bit cache is a two-way set-associative cache with an access time that is smaller than that of a conventional one and close or equal to that of a direct-mapped cache. this is achieved by noticing that the two tags for a set have to differ at least by one bit and by using this bit to select the way. in contrast with previous approaches that predict the way and have two types of hits (primary of one cycle and secondary of two to four cycles), all hits of the difference-bit cache are of one cycle. the evaluation of the access time of our cache organization has been performed using a recently proposed on-chip cache access model. toni juan tomás lang juan j. navarro circuit techniques for low-power cmos gsi a. bhavnagarwala v. de b. austin j. meindl measures of syntactic complexity for modeling behavioral vhdl neal s. stollon john d. provence hardware evolution system adam tomofumi hikage hitoshi hemmi katsunori shimohara a symbolic design system for integrated circuits as integrated circuit design has become increasingly complex, the need for more effective data description techniques has become critical. design verification from mask artwork data alone can consume vast amounts of computer time for vlsi circuits, if it can be performed at all. the use of a symbolic design description, which allows the designer or synthesis program to express circuit structure as well as maintain full connectivity information, can reduce dramatically the burden placed on the verification tools: this paper describes a symbolic design system, its associated data manager, its color graphics viewport manager, and its application to a variety of design methods. the data manager can store a variety of representations of the design, including simulation data, geometric layout, symbolic layout, and schematic diagrams. the viewport manager can manage a number of viewports concurrently and the use of a model frame buffer allows it to function easily on a variety of graphics terminals and hard-copy devices. the system is designed with an engineering work station in mind. k. h. keller a. r. newton s. ellis at-speed boundary-scan interconnect testing in a board with multiple system clocks jongchul shin hyunjin kim sungho kang plowing: modifying cells and routing in 45° layouts this paper describes a plowing procedure which moves and modifies cells and routing. the plow pushes aside all obstacles one by one, thus penetrating into the layout. the process leads to a compaction at the front side of the plow. the compaction is performed in accordance with the minimum spacing rules and with automatic jog insertion.
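the plowing procedure above works in two dimensions with jog insertion; as a much smaller illustration of the spacing-rule constraint it enforces, the c sketch below compacts a single row of objects to the left while preserving their order and an assumed minimum spacing.

#include <stdio.h>

/* one-dimensional compaction sketch: rectangles on a row are pulled to the
 * left as far as their order and a minimum spacing rule allow.  this only
 * illustrates the spacing constraint, not the plowing algorithm itself. */
struct obj { const char *name; double x; double width; };

static void compact_left(struct obj *o, int n, double min_space)
{
    double frontier = 0.0;                    /* next free x position */
    for (int i = 0; i < n; i++) {             /* objects assumed sorted by x */
        o[i].x = frontier;                    /* pull left up to the rule */
        frontier = o[i].x + o[i].width + min_space;
    }
}

int main(void)
{
    struct obj row[] = {                      /* assumed example layout */
        { "cellA",  2.0, 4.0 },
        { "wireB",  9.5, 1.0 },
        { "cellC", 14.0, 3.0 },
    };
    compact_left(row, 3, 1.5);
    for (int i = 0; i < 3; i++)
        printf("%s -> x = %.1f\n", row[i].name, row[i].x);
    return 0;
}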
objects of the layout representation are cells, contacts, and wires. a wire is represented by a path of segments which lie in one of two layers, at angles of multiples of 45°. the generated blank space is used for adding new objects or resizing geometries. k. m. just w. l. schiele t. kruger speeding up symbolic model checking by accelerating dynamic variable reordering symbolic model checking is a widely used technique in sequential verification. as the size of the obdds and also the computation time depend on the order of the input variables, the verification may only succeed if a well suited variable order is chosen. since the characteristics of the represented functions are changing, the variable order has to be adapted dynamically. unfortunately, dynamic reordering strategies are often very time consuming and sometimes do not provide any improvement of the obdd representation. this paper presents adaptations of reordering techniques originally intended for combinatorial verification to the specific requirements of symbolic model checking. the techniques are orthogonal in the way that they use either structural information about the obdds or semantical information about the represented functions. the application of these techniques substantially accelerates the reordering process and makes it possible to finish computations that are otherwise too time consuming. christoph meinel christian stangier an exact algorithm for low power library-specific gate re-sizing de-sheng chen majid sarrafzadeh which asic technology will dominate the 1990's (panel) r. collet rice: rapid interconnect circuit evaluator curtis l. ratzlaff nanda gopal lawrence t. pillage implementation and use of spfds in optimizing boolean networks subarnarekha sinha robert k. brayton partitioning large designs by filling fpga devices with hierarchy blocks helena krupnova gabriele saucier correlation-reduced scan-path design to improve delay fault coverage weiwei mao michael d. ciletti a memory architecture with 4-address configurations for video signal processing (poster paper) sunho chang jong-sun kim lee-sup kim design aids for the simulation of bipolar gate arrays this paper describes a system of design aids which are used in the modeling and simulation of bipolar gate arrays for applications where the delays cannot be neglected. prewired function blocks composed of circuit elements such as transistors, resistors, diodes, etc. are automatically converted to logic gate descriptions. the transistor level model of a function block is analyzed with a circuit simulation program to obtain delay values for the gate level model. the gate equivalent circuits and the delay values produced by these methods provide accurate digital simulation of bipolar gate arrays. p. kozak a. k. bose a. gupta implementing a cache consistency protocol r. h. katz s. j. eggers d. a. wood c. l. perkins r. g. sheldon rapid diagnostic fault simulation of stuck-at faults in sequential circuits using compact lists srikanth venkataraman ismed hartanto w. kent fuchs elizabeth m. rudnick sreejit chakravarty janak h. patel low-cost single-layer clock trees with exact zero elmore delay skew we give the first single-layer clock tree construction with exact zero skew according to the elmore delay model. the previous linear-planar-dme method guarantees a planar solution under the linear delay model. in this paper, we use a linear-planar-dme variant connection topology to construct a low-cost zero skew tree (zst) according to the elmore delay model.
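the zero-skew construction above relies on balancing elmore delays at every merge; a minimal sketch of that balancing step (in the style of standard dme-type zero-skew merging, not the paper's planar algorithm) is the closed-form tapping point below, where r and c are assumed per-unit wire resistance and capacitance, t1 and t2 are the two subtree delays, and c1 and c2 their downstream capacitances; a tapping point outside [0, l] is the case where wire detouring becomes necessary.

#include <stdio.h>

/* zero-skew merge under the elmore delay model.
 * delay added toward subtree 1 through a wire of length x is
 *     r*x*(c*x/2 + c1)
 * setting t1 + r*x*(c*x/2 + c1) = t2 + r*(L-x)*(c*(L-x)/2 + c2)
 * and solving for x gives the closed form below. */
static double merge_point(double t1, double c1, double t2, double c2,
                          double L, double r, double c)
{
    return (t2 - t1 + r * L * (c2 + c * L / 2.0))
         / (r * (c1 + c2 + c * L));
}

int main(void)
{
    /* assumed per-unit parasitics and subtree values (consistent units) */
    double r = 0.1, c = 0.2;
    double t1 = 5.0, c1 = 30.0;       /* subtree 1: delay, load cap */
    double t2 = 9.0, c2 = 20.0;       /* subtree 2: delay, load cap */
    double L = 12.0;

    double x = merge_point(t1, c1, t2, c2, L, r, c);
    if (x < 0.0 || x > L)
        printf("x = %.3f outside [0,%.1f]: wire detouring needed\n", x, L);
    else
        printf("tap the connecting wire %.3f units from subtree 1\n", x);
    return 0;
}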
while a linear-delay zst is trivially converted to an elmore-delay zst by "detouring" wires, the key idea is to defer this detouring as much as possible to reduce tree cost. costs of our planar zst solutions are comparable to those of the best previous non-planar zst solutions, and substantially improve over previous planar clock routing methods. andrew b. kahng chung-wen albert tsao productivity issues in high-level design: are tools solving the real problems? reinaldo a. bergamaschi experiments on evolving software models of analog circuits jason d. lohn differential fault simulation - a fast method using minimal memory a new, fast fault simulator called differential fault simulator, dsim, for sequential circuits is described. unlike the concurrent fault simulation, dsim simulates each machine by simulating its machine differences from the other machine just simulated instead of simulating its input differences from the previous status of the same machine. in this manner, dsim simulates each machine (good or bad) separately for every test vector. therefore, dsim dramatically reduces the dynamic memory requirement and the overhead in the memory management in the concurrent fault simulation. also unlike the single fault propagation which simulates each bad machine by simulating its machine difference from the good machine, the overhead to restore the good machine status before each bad machine simulation is eliminated in dsim. our experiments show that dsim runs 3-12 times faster than an existing concurrent fault simulator and an experimental single fault propagation simulator. furthermore, owing to the straightforward operations, dsim is very easy to implement and maintain. implementation consists of less than 300 lines of "c" language statements added to the event-driven true- value simulator in an existing sequential test generation system, stg. currently dsim uses a zero- delay timing model, while inclusion of other delay models is under development. w.-t. cheng m.-l. yu tina: analog placement using enumerative techniques capable of optimizing both area and net length t. abthoff f. johannes new approach in gate-level glitch modelling d. rabe w. nebel synthesis of low power cmos vlsi circuits using dual supply voltages vijay sundararajan keshab k. parhi estimating architectural resources and performance for high-level synthesis applications alok sharma rajiv jain sequential circuit test generation using decision diagram models jaan raik raimund ubar spades: a simulator for path delay faults in sequential circuits irith pomeranz lakshmi n. reddy sudhakar m. reddy dynamic vectorization: a mechanism for exploiting far-flung ilp in ordinary programs several ilp limit studies indicate the presence of considerable ilp across dynamically far-apart instructions in program execution. this paper proposes a hardware mechanism, _dynamic vectorization (dv),_ as a tool for quickly building up a large logical instruction window. dynamic vectorization converts repetitive dynamic instruction sequences into vector form, enabling the processing of instructions from beyond the corresponding program loop to be overlapped with the loop. this enables vector-like execution of programs with relatively complex static control flow that may not be amenable to static, compile time vectorization. experimental evaluation shows that a large fraction of the dynamic instructions of four of the six specint92 programs can be captured in vector form. 
three of these programs exhibit significant potential for ilp improvements from dynamic vectorization, with speedups of more than a factor of 2 in a scenario of realistic branch prediction and perfect memory disambiguation. under perfect branch prediction conditions, a fourth program also shows well over a factor of 2 speedup from dv. the speedups are due to the overlap of post-loop processing with loop processing. sriram vajapeyam p. j. joseph tulika mitra a controller-based design-for-testability technique for controller-data path circuits sujit dey vijay gangaram miodrag potkonjak on thermal effects in deep sub-micron vlsi interconnects kaustav banerjee amit mehrotra alberto sangiovanni-vincentelli chenming hu a crosstalk-aware timing-driven router for fpgas as integrated circuits are migrated to more advanced technologies, it has become clear that crosstalk is an important physical phenomenon that must be taken into account. crosstalk has primarily been a concern for asics, multi- chip modules, and custom chips, however, it will soon become a concern in fpgas. in this paper, we describe the first published crosstalk-aware router that targets fpgas. we show that, in a representative fpga architecture implemented in a 0.18mm technology, the average routing delay in the presence of crosstalk can be reduced by 7.1% compared to a router with no knowledge of crosstalk. about half of this improvement is due to a tighter delay estimator, and half is due to an improved routing algorithm. steven j. e. wilton logic synthesis for reliability - an early start to controlling electromigration and hot carrier effects kaushik roy sharat prasad static analysis for vhdl model evaluation mario stefanoni the hughes automated layout system - automated lsi/vlsi layout based on channel routing the hughes automated layout system (hal) is intended to provide fast, accurate, and efficient layout of lsi/vlsi circuits. the hal development plan calls for an evolutionary development in three phases, with each phase providing a usable design system. hal(i) is limited to standard cell layout and is now operational. hal(ii), which will permit more complex geometries, and hal(iii), which will add hierarchical capabilities, are in initial development. this paper discusses the features of hal(i), and the concepts of decomposition and ordering of routing domains that underlie the more advanced systems. g. persky c. enger d. m. selove optimum head separation in a disk system with two read/write heads a. r. calderbank e. g. coffman l. flatto transmission gate modeling in an existing three-value simulator existing three value (0, 1, x,) logic simulators cannot support the use of mos transmission gates, but this deficiency can be easily eliminated by the addition of one logic value - the high impedance (z) state. this paper demonstrates that complete transmission gate modeling, including bi- directional operation and ratio logic, can be accomplished with this single z state and explicit node models; and, since four states require the same internal storage as three states, such an enhancement would not require a major software rewrite. robert m. mcdermott multiway fpga partitioning by fully exploiting design hierarchy in this paper, we present a new integrated synthesis and partitioning method for multiple-fpga applications. our approach bridges the gap between hdl synthesis and physical partitioning by fully exploiting the design hierarchy. 
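as a crude baseline for the capacity-constrained packing that any multiple-fpga flow must finish with - a generic greedy first-fit, not the hierarchical set-covering partitioner proposed here - the c sketch below assigns pre-formed clusters to devices under assumed clb and i/o-pin limits, counting pins per cluster without net sharing.

#include <stdio.h>

/* greedy first-fit packing of functional clusters into fpga devices subject
 * to clb and i/o-pin capacities.  all device limits and cluster sizes are
 * assumed example values. */
#define MAX_DEVICES 8

struct cluster { const char *name; int clbs; int pins; };

int main(void)
{
    const int clb_cap = 400, pin_cap = 120;       /* assumed device limits */
    struct cluster cl[] = {
        { "ctrl",   180, 60 }, { "dpath", 350, 70 },
        { "fifo",   120, 40 }, { "mult",  260, 50 },
        { "iface",   90, 80 },
    };
    int used_clb[MAX_DEVICES] = {0}, used_pin[MAX_DEVICES] = {0};
    int ndev = 0;

    for (unsigned i = 0; i < sizeof cl / sizeof cl[0]; i++) {
        int d;
        for (d = 0; d < ndev; d++)                /* first device that fits */
            if (used_clb[d] + cl[i].clbs <= clb_cap &&
                used_pin[d] + cl[i].pins <= pin_cap)
                break;
        if (d == ndev) ndev++;                    /* open a new device */
        used_clb[d] += cl[i].clbs;
        used_pin[d] += cl[i].pins;
        printf("%-6s -> fpga %d\n", cl[i].name, d);
    }
    printf("devices used: %d\n", ndev);
    return 0;
}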
we propose a novel multiple-fpga synthesis and partitioning method which is performed in three phases: (1) fine-grained synthesis, (2) functional-based clustering, and (3) hierarchical set-covering partitioning. this method first synthesizes a design specification in a fine-grained way so that functional clusters can be preserved based on the structural nature of the design specification. then, it applies a hierarchical set-covering partitioning method to form the final fpga partitions. experimental results on a number of benchmarks and industrial designs demonstrate that i o limits are the bottleneck for clb utilization when applying a traditional multiple-fpga synthesis method on flattened netlists. in contrast, by fully exploiting the design structural hierarchy during the multiple-fpga partitioning, our proposed method produces fewer fpga partitions with higher clb and lower i o-pin utilizations. wen-jong fang allen c.-h. wu a performance and routablity driven router for fpgas considering path delays yuh-sheng lee allen c.-h. wu circuit extraction on a message-based multiprocessor this paper discusses the use of a general purpose, message based multiprocessor to speed up the task of vlsi circuit extraction. the parallel algorithm incorporates the use of secondary storage to allow complete vlsi circuits to be extracted with small scale multiprocessors. the paper presents experimental results for the detection and labelling of transistor regions. bruce a. tonkin automatic synthesis of extended burst-mode circuits using generalized c-elements k. yun fault diagnosis based on effect-cause analysis: an introduction this paper presents the basic concepts of a new fault diagnosis technique which has the following features: 1) is applicable to both single and multiple faults, 2) does not require fault enumeration, 3) can identify faults which prevent initialization, 4) can indicate the presence of nonstuck faults in the d.u.t., 5) can identify fault-free lines in the d.u.t. our technique, referred to as effect-cause analysis, does not require a fault dictionary and it is not based on comparing the obtained response of the d.u.t. with the expected response, which is not assumed to be known. effect- cause analysis directly processes the actual response of the d.u.t. to the applied test (the effect) to determine the possible fault situations (the causes) which can generate that response. miron abramovici melvin a. breuer optimum functional decomposition using encoding rajeev murgai robert k. brayton alberto sangiovanni-vincentelli efficient 3d modelling for extraction of interconnect capacitances in deep submicron dense layouts a. toulouse d. bernard c. landrault p. nouet heterogeneous built-in resiliency of application specific programmable processors kyosun kim ramesh karri miodrag potkonjak a vhdl reuse workbench g. lehmann k. muller-glaser b. wunder an integrated design environment for early stage conceptual design (poster paper) j. zuo s. director fast timing simulation of transient faults in digital circuits transient fault simulation is an important verification activity for circuits used in critical applications since such faults account for over 80% of all system failures. this paper presents a timing level transient fault simulator that bridges the gap between electrical and gate-level transient fault simulators. a generic mos circuit primitive and analytical solutions of node differential equations are used to perform transistor level simulation with accurate mos-fet models. 
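as a minimal numerical stand-in for solving one node differential equation under an injected transient current - the paper's piecewise-quadratic waveform and mosfet models are not reproduced - the c sketch below integrates a single rc node hit by a double-exponential current pulse with an explicit euler step; all element values and pulse parameters are assumptions.

#include <stdio.h>
#include <math.h>

/* one node differential equation with an injected transient current:
 *     c * dv/dt = (v0 - v)/r - iinj(t)
 * a double-exponential pulse stands in for the particle strike. */
static double iinj(double t)
{
    const double q = 30e-15;                   /* assumed collected charge */
    const double ta = 50e-12, tb = 5e-12;      /* assumed time constants   */
    return q / (ta - tb) * (exp(-t / ta) - exp(-t / tb));
}

int main(void)
{
    const double r = 5e3, c = 20e-15, v0 = 2.5;   /* assumed node values */
    const double dt = 1e-12;                      /* 1 ps euler step     */
    double v = v0, vmin = v0;

    for (double t = 0.0; t < 2e-9; t += dt) {
        v += ((v0 - v) / r - iinj(t)) * dt / c;   /* explicit euler update */
        if (v < vmin)
            vmin = v;
    }
    printf("worst-case dip: %.3f v (quiescent %.1f v)\n", vmin, v0);
    return 0;
}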
the transient fault is modeled by a piecewise quadratic injected current waveform; this retains the electrical nature of the transient fault and provides spice-like accuracy. detailed comparisons with spice3 show the accuracy of this technique and speedups of two orders of magnitude are observed for circuits containing up to 2000 transistors. latched error distributions of the benchmark circuits are also provided. a. dharchoudhury s. m. kang h. cha j. h. patel kahlua: a hierarchical circuit disassembler a new tool called a circuit disassembler has been developed to transform a mask level layout into an equivalent symbolic layout. this technique has been implemented in the program called kahlua that can handle mask layout containing arbitrary manhattan geometry and is independent of the circuit technology. circuits designed using physical layout systems can be automatically disassembled into a symbolic environment. once converted, the disassembled cells can be manipulated further by any existing symbolic design or verification tools. in particular, these cells can be automatically remapped for a new technology. our formulation of the problem consists of two major stages: device extraction, and net decomposition. in the first stage the transistors and contacts are extracted from the layout to form leaf cells. in the second stage a set of symbolic wires is derived from the remaining interconnect geometry. kahlua has been tested on a wide range of physical cells and has produced high quality results with modest execution times. an additional feature of the technique include the ability to disassemble hierarchically, which makes disassembling large layouts feasible. b. lin a. r. newton hardware acceleration of gate array layout in this paper we describe the hardware and software of a system which we have implemented to accelerate the physical design of gate arrays. in contrast to nearly all other reported approaches, our approach to hardware acceleration is to augment a single-user host workstation with a general-purpose microprogrammable slave processor having a large private memory. one or more such slaves can be attached. we have implemented placement improvement on the system, achieving a 20 x speedup vs. a high-level host implementation. we give performance results, which are comparable to those reported elsewhere for mainframe implementations. philip m. spira carl hage a top down mixed-signal design methodology using a mixed-signal simulator and analog hdl t. murayama y. gendai an analytical method for compacting routing area in integrated circuits an analytical method is proposed for solving a routing area compaction problem in building block integrated circuits. related minimization is performed with a linear programming technique. minimum channel dimensions are calculated for a preliminary routing; these dimensions are used to construct routing constraints. placement constraints are added for the interrelations between placement and routing. this combined set of constraints leads to a least overestimation of routing area and under certain conditions guarantees routing feasibility. computational complexity and existence of a solution are discussed. m. j. ciesielski e. kinnen on the bounded-skew clock and steiner routing problems dennis j. h. huang andrew b. kahng chung-wen albert tsao incredyble-tg: incremental dynamic test generation based on learning irith pomeranz sudhakar m. 
reddy a methodology for hardware verification based on logic simulation a logic simulator can prove the correctness of a digital circuit if it can be shown that only circuits fulfilling the system specification will produce a particular response to a sequence of simulation commands.this style of verification has advantages over the other proof methods in being readily automated and requiring less attention on the part of the user to the low- level details of the design. it has advantages over other approaches to simulation in providing more reliable results, often at a comparable cost. this paper presents the theoretical foundations of several related approaches to circuit verification based on logic simulation. these approaches exploit the three-valued modeling capability found in most logic simulators, where the third-value x indicates a signal with unknown digital value. although the circuit verification problem is np-hard as measured in the size of the circuit description, several techniques can reduce the simulation complexity to a manageable level for many practical circuits. randal e. bryant vhdl for high speed desktop video ics: experience with replacement of other simulator michael jacobsen wolfgang nebel balance scheduling: weighting branch tradeoffs in superblocks since there is generally insufficient instruction level parallelism within a single basic block, higher performance is achieved by speculatively scheduling operations in superblocks. this is difficult in general because each branch competes for the processor's limited resources. previous work manages the performance tradeoffs that exist between branches only indirectly. we show here that dependence and resource constraints can be used to gather explicit knowledge about scheduling tradeoffs between branches. the first contribution of this paper is a set of new, tighter lower bounds on the execution times of superblocks that specifically accounts for the dependence and resource conflicts between pairs of branches. the second contribution of this paper is a novel superblock scheduling heuristic that finds high performance schedules by determining the operations that each branch needs to be scheduled early and selecting branches with compatible needs that favor beneficial branch tradeoffs. performance evaluations for superblocks from specint95 indicate that our bounds are very tight and that our scheduling heuristic outperforms well known superblock scheduling algorithms. alexandre e. eichenberger waleed m. meleis a simultaneous technology mapping, placement, and global routing algorithm for field-programmable gate arrays technology mapping algorithms for lut (look up table) based fpgas have been proposed to transfer a boolean network into logic-blocks. however, since those algorithms take no layout information into account, they do not always lead to excellent results. in this paper, a simultaneous technology mapping, placement and global routing algorithm for fpgas, maple, is presented. mapleis an extended version of a simultaneous placement and global routing algorithm for fpgas, which is based on recursive partition of layout regions and block sets. maple inherits its basic processes and executes the technology mapping simultaneously in each recursive process. therefore, the mapping can be done with the placement and global routing information. experimental results for some benchmark circuits demonstrate its efficiency and effectiveness. 
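the simulation-based verification entry above leans on the three-valued (0, 1, x) modeling found in most logic simulators; the c sketch below shows the usual evaluation rules, where a controlling value forces a gate output even when the other input is unknown.

#include <stdio.h>

/* three-valued logic: 0, 1, and x (unknown).  a controlling 0 forces an
 * and gate to 0 even if the other input is x; otherwise x propagates. */
typedef enum { V0, V1, VX } val;

static val t_not(val a) { return a == VX ? VX : (a == V0 ? V1 : V0); }

static val t_and(val a, val b)
{
    if (a == V0 || b == V0) return V0;      /* controlling value wins */
    if (a == V1 && b == V1) return V1;
    return VX;                              /* otherwise unknown */
}

static const char *name(val v) { return v == V0 ? "0" : v == V1 ? "1" : "X"; }

int main(void)
{
    val x = VX;
    /* x and 0 is 0: the unknown cannot affect the result */
    printf("X AND 0 = %s\n", name(t_and(x, V0)));
    /* x and 1 stays unknown, and not x stays unknown */
    printf("X AND 1 = %s, NOT X = %s\n", name(t_and(x, V1)), name(t_not(x)));
    return 0;
}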
nozomu togawa masao sato tatsuo ohtsuki the tlb slice - a low-cost high-speed address translation mechanism the mips r6000 microprocessor relies on a new type of translation lookaside buffer --- called a tlb slice --- which is less than one-tenth the size of a conventional tlb and as fast as one multiplexer delay, yet has a high enough hit rate to be practical. the fast translation makes it possible to use a physical cache without adding a translation stage to the processor's pipeline. the small size makes it possible to include address translation on-chip, even in a technology with a limited number of devices. the key idea behind the tlb slice is to have both a virtual tag and a physical tag on a physically-indexed cache. because of the virtual tag, the tlb slice needs to hold only enough physical page number bits --- typically 4 to 8 --- to complete the physical cache index, in contrast with a conventional tlb, which needs to hold both a virtual page number and a physical page number. the virtual page number is unnecessary because the tlb slice needs to provide only a hint for the translated physical address rather than a guarantee. the full physical page number is unnecessary because the cache hit logic is based on the virtual tag. furthermore, if the cache is multi-level and references to the tlb slice are "shielded" by hits in a virtually indexed primary cache, the slice can get by with very few entries, once again lowering its cost and increasing its speed. with this mechanism, the simplicity of a physical cache can be combined with the speed of a virtual cache. george taylor peter davies michael farmwald a 1-v 1-mb sram for portable equipment h. morimura n. shibata a preprocessor for the via minimization problem the objective of via minimization is to assign wire segments into different layers to minimize the number of vias required. several algorithms have been proposed for the constrained via minimization (cvm) problem where the topology of the given layout is fixed. in a cvm problem, some vias may be "essential" to the given layout. that is, they have to be selected and cannot be replaced by other vias. in this paper we present a procedure to find most of the essential vias. this procedure can be used as a preprocessor for the algorithms for cvm problems. experimental results show that the procedure is efficient and can identify most of the essential vias. k. c. chang h. c. du parametric built-in self-test of vlsi systems d. niggemeyer m. ruffer block and ip wrapping for efficient design on fpgas (abstract) h. krupnova b. behnam g. saucier managing pipeline-reconfigurable fpgas while reconfigurable computing promises to deliver incomparable performance, it is still a marginal technology due to the high cost of developing and upgrading applications. hardware virtualization can be used to significantly reduce both these costs. in this paper we describe the benefits of hardware virtualization, and show how it can be achieved using a combination of pipeline reconfiguration and run-time scheduling of both configuration streams and data streams. the result is piperench, an architecture that supports robust compilation and provides forward compatibility. our preliminary performance analysis predicts that piperench will outperform commercial fpgas and dsps in both overall performance and in performance per mm2. srihari cadambi jeffrey weener seth copen goldstein herman schmit donald e.
thomas memory access scheduling the bandwidth and latency of a memory system are strongly dependent on the manner in which accesses interact with the "3-d" structure of banks, rows, and columns characteristic of contemporary dram chips. there is nearly an order of magnitude difference in bandwidth between successive references to different columns within a row and different rows within a bank. this paper introduces memory access scheduling, a technique that improves the performance of a memory system by reordering memory references to exploit locality within the 3-d memory structure. conservative reordering, in which the first ready reference in a sequence is performed, improves bandwidth by 40% for traces from five media benchmarks. aggressive reordering, in which operations are scheduled to optimize memory bandwidth, improves bandwidth by 93% for the same set of applications. memory access scheduling is particularly important for media processors where it enables the processor to make the most efficient use of scarce memory bandwidth. scott rixner william j. dally ujval j. kapasi peter mattson john d. owens unit delay simulation with the inversion algorithm william j. schilp peter m. maurer a case for redundant arrays of inexpensive disks (raid) increasing performance of cpus and memories will be squandered if not matched by a similar performance increase in i/o. while the capacity of single large expensive disks (sled) has grown rapidly, the performance improvement of sled has been modest. redundant arrays of inexpensive disks (raid), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to sled, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. this paper introduces five levels of raids, giving their relative cost/performance, and compares raid to an ibm 3380 and a fujitsu super eagle. david a. patterson garth gibson randy h. katz optimal hardware pattern generation for functional bist silvia cataldo silvia chiusano paolo prinetto transformational placement and synthesis wilm donath prabhakar kudva leon stok lakshmi reddy andrew sullivan paul villarrubia a partnership in domestication of rapid prototyping technologies andrzej rucinski frank hludik john l. pokoski low power synthesis of dual threshold voltage cmos vlsi circuits vijay sundararajan keshab k. parhi an analysis of the information content of address and data reference streams jeffrey c. becker arvin park a practical approach to multiple-class retiming klaus eckl jean christophe madre peter zepter christian legl media spaces: bringing people together in a video, audio, and computing environment sara a. bly steve r. harrison susan irwin a new method to express functional permissibilities for lut based fpgas and its applications shigeru yamashita hiroshi sawada akira nagoya on primitive fault test generation in non-scan sequential circuits ramesh c. tekumalla prem r. menon a practical gate resizing technique considering glitch reduction for low power design masanori hashimoto hidetoshi onodera keikichi tamaru activity-sensitive architectural power analysis for the control path paul e. landman jan m. rabaey the silc silicon compiler: language and features we describe the language and features of silctm, a new silicon compiler. silctm takes an algorithmic description of a circuit, performs logic synthesis, optimization, and physical layout synthesis, and produces a mask- level description. 
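as a minimal sketch of the reordering idea in memory access scheduling above - a plain open-row-first pick from the pending queue, not the paper's full scheduler or its conservative and aggressive policies - the c program below prefers a request that hits its bank's open row and otherwise falls back to the oldest request; the dram geometry and queue contents are assumptions.

#include <stdio.h>

/* pick the next dram request from a pending queue: prefer a request that
 * hits the currently open row of its bank (cheap column access); fall back
 * to the oldest request otherwise. */
#define NBANKS 4

struct req { unsigned bank, row, col; };

static int pick(const struct req *q, int n, const int *open_row)
{
    for (int i = 0; i < n; i++)                 /* row hit, oldest first */
        if (open_row[q[i].bank] == (int)q[i].row)
            return i;
    return 0;                                   /* no hit: oldest request */
}

int main(void)
{
    int open_row[NBANKS] = { 7, -1, 3, -1 };    /* -1 = bank precharged */
    struct req queue[] = {
        { 1, 12, 5 }, { 0, 9, 2 }, { 2, 3, 40 }, { 0, 7, 11 },
    };
    int n = sizeof queue / sizeof queue[0];

    while (n > 0) {
        int i = pick(queue, n, open_row);
        printf("issue bank %u row %u col %u%s\n",
               queue[i].bank, queue[i].row, queue[i].col,
               open_row[queue[i].bank] == (int)queue[i].row ? " (row hit)" : "");
        open_row[queue[i].bank] = (int)queue[i].row;   /* row left open */
        for (int j = i; j < n - 1; j++) queue[j] = queue[j + 1];
        n--;
    }
    return 0;
}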
special features of the compiler include multiple data path lengths, logic minimization, and asynchronously operating blocks. the focus of this paper is the input language. timothy blackman jeffrey fox christopher rosebrugh virtual grid symbolic layout free form or "stick" type symbolic layout provides a means of simplifying the design of ic subcircuits. to successfully utilize this style of layout, a complete design approach and the necessary tools to support this methodology are required. in particular, one of the requirements of such a design method is the ability to "compact" the loosely specified topology to create a set of valid mask data. this paper presents a new compaction strategy which uses the concept of a virtual grid. the compaction algorithm using the virtual grid is both simple and fast, and the designer can conveniently interact with the algorithm to optimize a layout. in addition to the compaction algorithm, methods used to create large building blocks will be described. the work described here is part of a complete symbolic layout system called mulga which is written in the c programming language and resides on the unix operating system. neil weste an investigation of static versus dynamic scheduling carl e. love harry f. jordan tree restructuring approach to mapping problem in cellular-architecture fpgas n. ramineni m. chrzanowska-jeske n. buddi a 10 mbit/s upstream cable modem with automatic equalization patrick schaumont radim cmar serge vernalde marc engels operation scheduling in reconfigurable, multifunction pipelines jack walicki john d. laughlin simple vector microprocessors for multimedia applications corinna g. lee mark g. stoodley an optimal channel pin assignment with multiple intervals for building block layout tetsushi koide shin'ichi wakabayashi noriyoshi yoshida mossim: a switch-level simulator for mos lsi the logic simulator mossim is designed specifically to serve the needs of the mos lsi designer. it models a mos circuit as a network of field-effect transistor "switches", with node states 0, 1, and x (unknown) and transistor states "open", "closed", and "unknown". mossim has proved quite versatile and accurate in simulating a variety of mos designs including ones for which the network was extracted automatically from the mask specifications. because it models the network at a logical level, it has a performance comparable to conventional logic gate simulators. randal e. bryant p.size: a sizing aid for optimized designs n. azemard v. bonzom d. auvergne automatic synthesis and optimization of partially specified asynchronous systems alex kondratyev jordi cortadella michael kishinevsky luciano lavagno alexander yakovlev shared memory implementations of synchronous dataflow specifications praveen k. murthy shuvra s. bhattacharyya using cluster-based logic blocks and timing-driven packing to improve fpga speed and density alexander (sandy) marquardt vaughn betz jonathan rose register-transfer level estimation techniques for switching activity and power consumption anand raghunathan sujit dey niraj k. jha global scheduling for high-level synthesis applications yaw fann minjoong rim rajiv jain custom-fit processors: letting applications define architectures joseph a. fisher paolo faraboschi giuseppe desoli diagnosis of realistic bridging faults with single stuck-at information brian chess david b. lavo f.
joel ferguson tracy larrabee automatic test bench generation for simulation-based validation in current design practice synthesis tools play a key role, letting designers to concentrate on the specification of the system being designed by carrying out repetitive tasks such as architecture synthesis and technology mapping. however, in the new design flow, validation still remains a challenge: while new technologies based on formal verification are only marginally accepted for large designs, standard techniques based on simulation are beginning to fall behind the increased system complexity. this paper proposes an approach to simulation-based validation, in which an evolutionary algorithm computes useful input sequences to be included in the test bench. the feasibility of the proposed approach is assessed with a preliminary implementation of the proposed algorithm. m. lajolo l. lavagno m. rebaudengo m. sonza reorda m. violante area efficient pipelined pseudo-exhaustive testing with retiming huoy-yu liou ting-ting y. lin meeting delay constraints in dsm by minimal repeater insertion i-min liu adnan aziz d. f. wong fault dictionary compaction by output sequence removal fault dictionary compaction has been accomplished in the past by removing responses on individual output pins for specific test vectors. in contrast to the previous work, we present techniques for eliminating entire sequences of outputs and for efficiently storing the remaining output sequences. experimental results on the iscas 85 and iscas 89 benchmark circuits show that the sizes of dictionaries proposed are substantially smaller than the full fault dictionary, while the dictionaries retain most or all of the diagnostic capability of the full fault dictionary. vamsi boppana w. kent fuchs design and algorithms for parallel testing of random access and content addressable memories this paper presents a design strategy for efficient and comprehensive parallel testing of both random access memory (ram) and content addressable memory (cam). based on this design for testability approach, parallel testing algorithms for cams and rams are developed for a broad class of pattern sensitive faults. the resulting test procedures are significantly more efficient than previous approaches. for example, the design for testability strategy allows an entire w word cam to be read in just one operation with a resulting speed up in testing as high as w. in the case of an n bit ram, the improvement in test efficiency is by a factor of ( n). the overall reduction in testing time is considerable for large size memories. p. mazumder j. h. patel w. k. fuchs fast field solver-programs for thermal and electrostatic analysis of microsystem elements vladimir szekely márta rencz efficient bist hardware insertion with low test application time for synthesized data paths nicola nicolici bashir m. al-hashimi an integrated mask artwork analysis system a new lsi artwork analysis and processing system, called emap, is described with algorithms, a database schema and applications. emap provides the designer with the artwork verification and processing tools which include mask artwork processing, geometrical design rule checking, connectivity analysis and electrical circuit parameter calculation. the circuit connectivity data derived from the mask artwork data is used for input to a logic simulator, a timing simulator, a circuit simulator and a circuit schematic generator. 
takashi mitsuhashi toshiaki chiba makoto takashima kenji yoshida algorithms for library-specific sizing of combinational logic we examine the problem of choosing the proper sizes from a cell library for the logic elements of a boolean network to meet timing constraints on the propagation delay along every path from the primary input to the primary output. if the boolean network has a tree topology, we show that there exists a pseudo-polynomial time algorithm for finding the optimal solution to this problem. a backtracking-based algorithm for finding feasible solutions for networks that are not trees is also suggested and evaluated. pak k. chan toward zero-cost branches using instruction registers kent d. wilken david w. goodwin built-in test generation for synchronous sequential circuits irith pomeranz sudhakar m. reddy robust techniques for watermarking sequential circuit designs arlindo l. oliveira fpga clock management for low power applications (poster abstract) ian brynjolfson zeljko zilic run-time parameterizable cores steve guccione delon levi a fault simulator for mos lsi circuits a. k. bose p. kozak c.-y. lo h. n. nham e. pacas-skewes on driving many long wires in a vlsi layout it is assumed that long wires represent large capacitive loads, and the effect on the area of a vlsi layout when drivers are introduced along many long wires in the layout is investigated. a layout is presented for which the introduction of standard drivers along long wires squares the area of the layout; it is shown, however, that the increase in area is never greater than the layout's area squared if the driver can be laid out in a square region. this paper also shows an area-time trade-off for the driver of a single long wire of length / by which the area of the driver from (l), to (lq), q < l, can be reduced if a delay of (ll-q) rather than (log l) can be tolerated. tight bounds are also obtained on the worst-case area increase in general layouts having these drivers. vijaya ramachandran fast implementations of secret-key block ciphers using mixed inner- and outer- round pipelining the new design methodology for secret-key block ciphers, based on introducing an optimum number of pipeline stages inside of a cipher round is presented and evaluated. this methodology is applied to five well-known modern ciphers, triple des, rijndael, rc6, serpent, and twofish, with the goal to first obtain the architecture with the optimum throughput to area ratio, and then the architecture with the highest possible throughput. all ciphers are modeled in vhdl, and implemented using xilinx virtex fpga devices. it is demonstrated that all investigated ciphers can operate with similar maximum clock frequencies, in the range from 95 to 131 mhz, limited only by the delay of a single clb layer and delays of interconnects. rijndael, rc6, twofish, and serpent achieve throughputs in the range from 12.1 gbit/s to 16.8 gbit/s; and triple des achieves the throughput of 7.5 gbit/s. because of the optimum speed to cost ratio, the proposed architecture seems to be very well suited for practical implementations of secret-key block ciphers using both fpgas and custom asics. we also show that using this architecture for comparing hardware performance of secret-key block ciphers, such as aes candidates, operating in non-feedback cipher modes, leads to the more prudent and fairer analysis than comparisons based on other types of pipelined architectures. 
pawel chodowiec po khuon kris gaj slim - the translation of symbolic layouts into mask data a. e. dunlop a fast algorithm for minimizing fpga combinational and sequential modules we present a quadratic-time algorithm for minimizing the number of modules in an fpga with combinational and sequential modules (like the c-modules and s-modules of the act2 and act3 architectures). the constraint is that a combinational module can be combined with one flip-flop in a single sequential module, only if the combinational module drives no other combinational modules. our algorithm uses a minimum-cost flow formulation to solve the problem with a significant time improvement over a previous approach that used a general linear program. dimitrios kagaris spyros tragoudas design and verification of the rollback chip using hop: a case study of formal methods applied to hardware design the use of formal methods in hardware design improves the quality of designs in many ways: it promotes better understanding of the design; it permits systematic design refinement through the discovery of invariants; and it allows design verification (informal or formal). in this paper we illustrate the use of formal methods in the design of a custom hardware system called the "rollback chip" (rbc), conducted using a simple hardware design description language called "hop". an informal specification of the requirements of the rbc is first given, followed by a behavioral description of the rbc stating its desired behavior. the behavioral description is refined into progressively more efficient designs, terminating in a structural description. key refinement steps are based on system invariants that are discovered during the design, and proved correct during design verification. the first step in design verification is to apply a program called parcomp to derive a behavioral description from the structural description of the rbc. the derived behavior is then compared against the desired behavior using equational verification techniques. this work demonstrates that formal methods can be fruitfully applied to a nontrivial hardware design. it also illustrates the particular advantages of our approach based on hop and parcomp. last, but not the least, it formally verifies the rbc mechanism itself . ganesh gopalakrishnan richard fujimoto functional design for testability of control-dominated architectures control-dominated architectures are usually described in a hardware description language (hdl) by means of interacting fsms. a vhdl or verilog specification can be translated into an interacting fsm (ifsm) representation as described here. the ifsm model allows us to approach the testable synthesis problem at the level of each fsm. the functionality is modified by the addition of transparency to data flow. the complete testability of the ifsm implementation is thus achieved by connecting fully testable implementations of each modified fsm. in this way, test sequences separately generated for each fsm are directly applied to the ifsm to achieve complete fault coverage. the addition of test functionality to each fsm description, and its simultaneous synthesis with the fsm functionality, produces a lower area overhead than that necessary for the application of a partial-scan technique. moreover, the test generation problem is highly simplified since it is reduced to the test generation for each separate fsm. f. fummi u. rovati d. sciuto test generation cost analysis and projections p. 
goel test generation costs analysis and projections empirical observations are used to derive analytic formulae for test volumes, parallel fault simulation costs, deductive fault simulation costs, and minimum test pattern generation costs for lssd logic structures. the formulae are significant in projecting growth trends for test volumes and various test generation costs with increasing gate count g. empirical data is presented to support the thesis that test volume grows linearly with g for lssd structures that cannot be partitioned into disjoint substructures. such lssd structures are referred to as "coupled" structures. based on the empirical observation that the number of latches in an lssd logic structure is proportional to the gate count g, it is shown that the logic test time for coupled structures grows as g^2. it is also shown that (i) parallel fault simulation costs grow as g^3, (ii) deductive fault simulation costs grow as g^2, and (iii) the minimum test pattern generation costs grow as g^2. based on these projections, some future testing problems become apparent. prabhakar goel computing optimal clock schedules t. g. szymanski clock skew optimization for peak current reduction p. vuillod l. benini a. bogliolo g. de micheli design for testability this presentation will discuss the basics of design for testability. a short review of testing is given along with some reasons why one should test. the different techniques of design for testability are discussed in detail. these include techniques which can be applied to today's technologies and techniques which have been recently introduced and will soon appear in new designs. these techniques cover the three main areas of design for testability: 1) ad hoc approaches; 2) structured approaches; and 3) self-test/built-in test approaches. t. w. williams a systematic technique for verifying critical path delays in a 300mhz alpha cpu design using circuit simulation madhav p. desai power-delay optimizations in gate sizing the problem of power-delay tradeoffs in transistor sizing is examined using a nonlinear optimization formulation. both the dynamic and the short-circuit power are considered, and a new modeling technique is used to calculate the short-circuit power. the notion of transition density is used, with an enhancement that considers the effect of gate delays on the transition density. when the short-circuit power is neglected, the minimum power circuit is identical to the minimum area circuit. however, under our more realistic models, our experimental results on several circuits show that the minimum power circuit is not necessarily the same as the minimum area circuit. sachin s. sapatnekar weitong chuang comparison of high speed voltage-scaled conventional and adiabatic circuits d. frank a fast physical constraint generator for timing driven layout w. k. luk addressable wsi: a non-redundant approach gilman chesley a vhdl-based design methodology: the design experience of a high performance asic chip maurizio valle daniele caviglia marco cornero giovanni nateri luciano briozzo functional verification methodology of chameleon processor françoise casaubieilh anthony mcisaac mike benjamin mike bartley françois pogodalla frederic rocheteau mohamed belhadj jeremy eggleton gerard mas geoff barrett christian berthet a concept for test and reconfiguration of a fault-tolerant vlsi processor system the following paper presents a test and reconfiguration strategy for fault-tolerant vlsi processor systems.
this is accomplished with respect to the requirements imposed by the vlsi technology. the proposed concept is exemplified by a model composed of four microprogrammable processors each with a local memory. the test strategy of a gradually expanding hardcore is applied where the central hardcore consists of a small test unit of low complexity. this test unit enables each processor to diagnose autonomously its data path structure and its associated local memory. a particular reconfiguration scheme is proposed for these components. it is implemented at microprogram level. as a result, compared with other fault- tolerance techniques, system reliability is considerably improved. k. e. grosspietsch j. kaiser e. nett an fpga-based genetic algorithm machine (poster abstract) barry shackleford etsuko okushi mitsuhiro yasuda hisao koizumi katsuhiko seo takashi iwamoto hiroto yasuura synthesis of hazard-free multi-level logic under multiple-input changes from binary decision diagrams we describe a new method for directly synthesizing a hazard-free multilevel logic implementation from a given logic specification. the method is based on free/ordered binary decision diagrams (bdd's), and is naturally applicable to multiple-output logic functions. given an incompletely-specified (multiple- output) boolean function, the method produces a multilevel logic network that is hazard-free for a specified set of multiple-input changes. we assume an arbitrary (unbounded) gate and wire delay model under a pure delay (pd) assumption, we permit multiple-input changes, and we consider both static and dynamic hazards. this problem is generally regarded as a difficult problem and it has important applications in the field of asynchronous design. the method has been automated and applied to a number of examples. the results we have obtained are very promising. bill lin srinivas devadas reducing power dissipation after technology mapping by structural transformations bernhard rohfleisch alfred kölbl bernd wurth automatic test generation for linear digital systems with bi-level search using matrix transform methods r. k. roy a. chatterjee j. h. patel j. a. abraham m. a. d'abreu optimization and resynthesis of complex data-paths hans eveking stefan höreth johann wolfgang goethe optimization of inductor circuits via geometric programming maria del mar hershenson sunderarajan s. mohan stephen p. boyd thomas h. lee a new efficient approach to statistical delay modeling of cmos digital combinational circuits this paper presents one of the first attempts to statistically characterize signal delays of basic cmos digital combinatorial circuits using the transistor level approach. hybrid analytical/iterative delay expressions in terms of the transistor geometries and technological process variations are created for basic building blocks. local delays of blocks along specific signal paths are combined together for the analysis of complex combinational vlsi circuits. the speed of analysis is increased by 2 to 4 orders of magnitude relative to spice, with about 5--10% accuracy. the proposed approach shows good accuracy in modeling the influence of the "noise" parameters on circuit delay relative to direct spice-based monte carlo analysis. examples of statistical delay characterization are shown. the important impact of the proposed approach is that statistical evaluation and optimization of delays in much larger vlsi circuits will become possible. syed a. aftab m. a. 
styblinski a resonant clock generator for single-phase adiabatic systems conrad h. ziesler suhwan kim marios papaefthymiou java consistency: nonoperational characterizations for java memory behavior the java language specification (jls) [gosling et al. 1996] provides an operational definition for the consistency of shared variables. the definition, which remains unchanged in the jls 2nd edition (currently under peer review), relies on a specific abstract machine as its underlying model and is very complicated. several subsequent works have tried to simplify and formalize it. however, these revised definitions are also operational, and thus have failed to highlight the intuition behind the original specification. in this work we provide a complete nonoperational specification for java and for the jvm, excluding synchronized operations. we provide a simpler definition, in which we clearly distinguish the consistency model that is promised to the programmer from that which should be implemented in the jvm. this distinction, which was implicit in the original definition, is crucial for building the jvm. we find that the programmer model is strictly weaker than that of the jvm, and precisely define their discrepancy. moreover, our definition is independent of any specific (or even abstract) machine, and can thus be used to verify jvm implementations and compiler optimizations on any platform. finally, we show the precise range of consistency relaxations obtainable for the java memory model when a certain compiler optimization (called prescient stores in the jls) is applicable. alex gontmakher assaf schuster lowering power consumption in clock by using globally asynchronous locally synchronous design style a. hemani t. meincke s. kumar a. postula t. olsson p. nilsson j. oberg p. ellervee d. lundqvist fast prototyping: a system design flow applied to a complex system-on-chip multiprocessor design benoit clement richard hersemeule etienne lantreibecq bernard ramanadin pierre coulomb francois pogodalla a methodology for accurate performance evaluation in architecture exploration george hadjiyiannis pietro russo srinivas devadas otter: optimal termination of transmission lines excluding radiation rohini gupta lawrence t. pillage trace scheduling optimization in a retargetable microcode compiler michael a. howland robert a. mueller philip h. sweany the architecture and operational characteristics of the vmx host machine the vmx host machine is a hardware-firmware environment for the implementation of actual computer systems with a favorable price/performance ratio. the article presents the host framework architecture, together with all units which are used to build actual systems. the advantages and future potential of the architecture are briefly discussed. the notion of memory directives within the framework of an active (or "intelligent") memory unit is introduced. this notion serves as one example of the possibilities that are built into the architecture of the vmx host. gideon frieder reducing cross-coupling among interconnect wires in deep-submicron datapath design joon-seo yim chong-min kyung predicting the usefulness of a block result: a micro-architectural technique for high-performance low-power processors this paper proposes a micro-architectural technique in which a prediction is made for some power-hungry units of a processor. the prediction consists of whether the result of a particular unit or block of logic will be useful in order to execute the current instruction.
if it is predicted useless, then that block is disabled. it would be ideal if the predictions were totally accurate, thus not decreasing the instruction- per-cycle (ipc) performance metric. however, this is not the case: the ipc might be degraded which in turn may offset the power savings obtained with the predictors due to the extra cycles to complete the execution of the application being run on the processor. in general, some logic may determine which of the block(s) that have a predictor associated will be disabled based on the outcome of the predictors and possibly some other signals from the processor. the overall processor power consumption reduction is a function of how accurate the predictors are, what percentage of the total processor power consumption corresponds to the blocks being predicted, and how sensitive to the ipc the different blocks are. a case example is presented where two blocks are predicted for low power: the on-chip l2 cache for instruction fetches, and the branch target buffer. the ipc vs power- consumption design space is explored for a particular micro- processor architecture. both the average and the peak power consumption are targeted. although the power analysis is beyond the scope of this paper, high- level estimations are done to show that it is plausible that the ideas described might produce a significant reduction in useless block accesses. clearly, this reduction may be exploited to reduce the power consumption demands of high- performance processors. enric musoll switch-level delay models for digital mos vlsi j. k. ousterhout communication based logic partitioning mark beardslee bill lin alberto sangiovanni-vincentelli graph based retargetable microcode compilation in the mimola design system lothar nowak swami: a flexible logic implementation system a new system for logic synthesis and vlsi layout, the stanford weinberger array minimizer and implementor (swami), has been developed. this paper describes the system's algorithms and presents preliminary results. the minimization of logic expressions uses local sum-of-products minimization, kernel factorization and common subexpression recognition and generates improved expressions of arbitrary depth. logic expressions are realized in nmos technology using one dimensional weinberger array structures or a new extension of weinberger arrays to two dimensions. the placement phase uses heuristic techniques, including simulated annealing. min-cut linear arrangement and local constructive clustering. christopher rowen john l. hennessey a formal semantics for verilog-vhdl simulation interoperability by abstract state machine hisashi sasaki timing verification and the timing analysis program r. b. hitchcock parallel logic simulation of vlsi systems roger d. chamberlain apss: an automatic pla synthesis system an integrated, fully automatic software capability that combines boolean logic translation, boolean minimization, pla folding, pla topology generation, and automatic pla subchip interfacing to the mp2d standard cell automatic placement and routing program in a single, modular software package is described. written in ansi standard fortran, apss permits the designer to input either arbitrarily formed boolean equations or a truth table, and to receive a complete mp2d-compatible pla subchip layout with automatically personalized mp2d subchip interfacing data, as output. 
as with mp2d, this capability is largely independent of technology and circuit implementation, requiring only an appropriate technology file and cell library consistent with the chosen pla layout style or "floor plan." m. w. stebnisky m. j. mcginnis j. c. werbickas r. n. putatunda a. feller architectural power optimization by bus splitting cheng-ta hsieh massoud pedram dynatapp: dynamic timing analysis with partial path activation in sequential circuits prathima agrawal vishwani d. agrawal sharad c. seth module compaction in fpga-based regular datapaths andreas koch an optimum channel-routing algorithm for polycell layouts of integrated circuits b. w. kernighan d. g. schweikert g. persky optimal test access architectures for system-on-a-chip test access is a major problem for core-based system-on-a-chip (soc) designs. since embedded cores in an soc are not directly accessible via chip inputs and outputs, special access mechanisms are required to test them at the system level. an efficient test access architecture should also reduce test cost by minimizing test application time. we address several issues related to the design of optimal test access architectures that minimize testing time., including the assignment of cores to test buses, distribution of test data width between multiple test buses, and analysis of test data width required to satisfy an upper bound on the testing time. even though the decision versions of all these problems are shown to be np-complete, they can be solved exactly for practical instances using integer linear programming (ilp). as a case study, the ilp models for two hypothetical but nontrivial systems are solved using a public-domain ilp software package. krishnendu chakrabarty logic synthesis for engineering change chih-chang lin kuang-chien chen shih- chieh chang malgorzata marek-sadowska kwang-ting cheng incorporating speculative execution in exact control-dependent scheduling ivan radivojevic forrest brewer impulse response fault model and fault extraction for functional level analog circuit diagnosis chauchin su shenshung chiang shyh-jye jou software accelerated functional fault simulation for data-path architectures m. kassab n. mukherjee j. rajski j. tyszer symbolic model checking using sat procedures instead of bdds a. biere a. cimatti e. m. clarke m. fujita y. zhu designing digital video systems: modeling and scheduling h. j. h. n. kenter c. passerone w. j. m. smits y. watanabe a. l. sangiovanni-vincentelli efficient validity checking for processor verification robert b. jones david l. dill jerry r. burch interconnect analysis: from 3-d structures to circuit models m. kamon n. marques y. massoud l. silveira j. white the design cube: a new model for vhdl designflow representation w. ecker m. hofmeister using bottom-up design techniques in the synthesis of digital hardware from abstract behavioral descriptions m. c. mcfarland on the use of the linear assignment algorithm in module placement s. b. akers analyzing multiple register sets charles y. hitchcock h. m. brinkley sprunt the future of flexible hw platform architectures (panel session) rolf ernst grand martin oz levia pierre paulin vassiliadis stamatis kees vissers scheduling and binding bounds for rt-level symbolic execution chuck monahan forrest brewer design and implementation of a hierarchical exception handling extension to systemc prashant arora rajesh k. 
gupta unified complete mosfet model for analysis of digital and analog circuits in this paper, we describe the complete mosfet model developed for circuit simulations. the model describes all transistor characteristics as functions of surface potentials, which are calculated iteratively at each applied voltage under the charge-sheet approximation. the key idea of this development is to put as much physics as possible into the equations describing the surface potentials. since the model includes both the drift and the diffusion contributions, a single equation is valid from the subthreshold to the saturation regions. the unified treatment of our model allows all transistor characteristics to be calculated without any nonphysical fitting parameters. additionally, the calculation time is drastically reduced in comparison with a conventional piecewise model. m. miura-mattausch u. feldmann a. rahm m. bollu d. savignac test quality and fault risk in digital filter datagraph bist laurence goodby alex orailoglu simultaneous functional-unit binding and floorplanning as device feature size decreases, interconnection delay becomes the dominating factor of system performance. thus it is important that accurate physical information is used during high-level synthesis. in this paper, we consider the problem of simultaneously performing functional-unit binding and floorplanning. experimental results indicate that our approach of combining binding and floorplanning is superior to the traditional approach of separating the two tasks. yung-ming fang d. f. wong model and analysis for combined package and on-chip power grid simulation we present new modeling and simulation techniques to improve the accuracy and efficiency of transient analysis of large power distribution grids. these include an accurate model for the inherent decoupling capacitance of non-switching devices, as well as a statistical switching current model for the switching devices. moreover, three new simulation techniques are presented for problem size reduction and speed-up. results of the application of these techniques on three powerpc microprocessors are also presented. rajendran panda david blaauw rajat chaudhry vladimir zolotov brian young ravi ramaraju scheduling designs into a time-multiplexed fpga an algorithm is presented for partitioning a design in time. the algorithm divides a large, technology-mapped design into multiple configurations of a time-multiplexed fpga. these configurations are rapidly executed in the fpga to emulate the large design. the tool includes facilities for optimizing the partitioning to improve routability, for fitting the design into more configurations than the depth of the critical path, and for compressing the critical path of the design into fewer configurations, both to fit the design into the device and to improve performance. scheduling results are shown for mapping designs into an 8-configuration time-multiplexed fpga and for architecture investigation for a time-multiplexed fpga. steve trimberger on applying incremental satisfiability to delay fault testing joonyoung kim jesse whittemore joão p. marques-silva karem sakallah memory-to-memory connection structures in fpgas with embedded memory arrays steven j. e. wilton jonathan rose zvonko g. vranesic the application of genetic algorithms to the design of reconfigurable reasoning vlsi chips in this paper, we present a new genetic-algorithm-based design methodology for reasoning vlsi chips, called lodett (logic design with the evolved truth table).
in lodett, each task's case database is transformed into truth tables, which are evolved to obtain generalization capability (i.e. rules behind the past cases) through genetic algorithms. digital circuits are synthesized from the evolved truth-tables. parallelism in each task can be embedded directly in the circuits by the direct hardware implementation of the case database. we applied lodett to the english pronunciation reasoning (epr) problem, resulting in a gatalk chip. a gatalk chip has been designed with about 270k gates, and its reasoning time is about 100 ns for each phoneme. a prototype of the gatalk chip has been implemented using xilinx xc4010 fpga chips. it achieved a reasoning accuracy of 81.9% which is almost the same accuracy as nettalk in neural networks and mbrtalk in parallel ai. moritoshi yasunaga jung hwan kim ikuo yoshihara simulator-oriented fault test generator t. j. snethen robust fpga intellectual property protection through multiple small watermarks john lach william h. mangione-smith miodrag potkonjak direct performance-driven placement of mismatch-sensitive analog circuits k. lampaert g. gielen w. sansen customized instruction-sets for embedded processors joseph a. fisher allende: a procedural language for the hierarchical specification of vlsi layouts allende is a simple and powerful procedural language for vlsi layout. in allende the layout is described hierarchically as a composition of cells; absolute sizes or positions are never specified. the layout description is translated into linear constraints, which express design rules and relative position of the layout elements. by solving these constraints we obtain the absolute layout, which is guaranteed to be free of design rule violations. errors in the layout description are immediately detected and easily located. allende consists of five procedures to be called from a pascal or c program. a lot of parameterization is possible when specifying layout elements, besides the ability to make use of the full power of pascal or c. the allende system has been implemented for the nmos technology. jose monteiro de mata factoring large numbers with programmable hardware most advanced forms of security for electronic transactions rely on the public-key cryptosystems developed by rivest, shamir and adleman. unfortunately, these systems are only secure while it remains difficult to factor large integers. the fastest published algorithms for factoring large numbers have a common sieving step. these sieves collect numbers that are completely factored by a set of prime numbers that are known in advance. furthermore, the time required to execute these sieves currently dominates the runtime of the factoring algorithms. we show how the sieving process can be mapped to the mojave configurable computing architecture. the mapping exploits unique properties of the sieving algorithms to fully utilize the bandwidth of a multiple bank interleaved memory system. the sieve has been mapped to a single programmable hardware unit on the mojave computer, and achieves a clock frequency of 16 mhz. the full system implementation sieves over 28 times faster than an ultrasparc workstation. a simple upgrade to 8ns srams will result in a speedup factor of 160. hea joung kim william h. mangione-smith energy estimation for 32-bit microprocessors estimation of software power consumption is becoming one of the major problems for many embedded applications. 
the paper presents a novel approach to compute the energy of an instruction set, through a suitable functional decomposition of the activities involved during instruction execution. one of the main advantages of this approach is the capability to predict the power figures of the overall instruction-set starting from a small subset. a formal discussion on the statistical properties of the model is included, together with its application on five commercial 32-bit microprocessors. c. brandolese w. fornaciari f. salice d. sciuto field programmable port extender (fpx) for distributed routing and queuing field programmable gate arrays (fpgas) are being used to provide fast internet protocol (ip) packet routing and advanced queuing in a highly scalable network switch. a new module, called the field-programmable port extender (fpx), is being built to augment the washington university gigabit switch (wugs) with reprogrammable logic. fpx modules reside at the edge of the wugs switching fabric. physically, the module is inserted between an optical line card and the wugs gigabit switch back-plane. the hardware used for this project allows ports of the switch populated with an fpx to operate at rates up to 2.4 gigabits/second. the aggregate throughput of the system scales with the number of switch ports. logic on the fpx module is implemented with two fpga devices. the first device is used to interface between the switch and the line card, while the second is used to prototype new networking functions and protocols. the logic on the second fpga can be reprogrammed dynamically via control cells sent over the network. the flexibility of the fpx has made the card of interest for several networking applications. this year, fifty fpx hardware modules will be fabricated and distributed to researchers at eight universities around the country who are interested in experimenting with reprogrammable networks and per-flow queuing mechanisms. the fpx hardware will first be used to implement fast ip lookup algorithms and distributed input queueing. john w. lockwood jon s. turner david e. taylor simultaneous driver and wire sizing for performance and power optimization in this paper, we study the simultaneous driver and wire sizing (sdws) problem under two objective functions: (i) delay minimization only, or (ii) combined delay and power dissipation minimization. we present general formulations of the sdws problem under these two objectives based on the distributed elmore delay model with consideration of both capacitive power dissipation and short- circuit power dissipation. we show several interesting properties of the optimal sdws solutions under the two objectives, including an important result (theorem 3) which reveals the relationship between driver sizing and optimal wire sizing. these results lead to polynomial time algorithms for computing the lower and upper bounds of optimal sdws solutions under the two objectives, and efficient algorithms for computing optimal sdws solutions under the two objectives. we have implemented these algorithms and compared them with existing design methods for driver sizing only or independent driver and wire sizing. accurate spice simulations shows that our methods reduce the delay by up to 11%--47% and power dissipation by 26%--63% compared with existing design methods. jason cong cheng-kok koh rule-based vlsi verification system constrained by layout parasitics this paper addresses a rule-based method for vlsi design review, constrained by parasitics. 
using the new ideas discussed in this paper, extraction from layout is not limited anymore to conventional electrical data, but additionally allows modelling of functional and timing behaviour. an extendable rule based validation algorithm operates on extracted models, decorated with parasitic effects, to formally prove most aspects of design correctness. j. wenin j. verhasselt m. van camp j. leonard p. guebels on modeling top-down vlsi design we present an improved data model that reflects the whole vlsi design process including bottom-up and top-down design phases. the kernel of the model is a static version concept that describes the convergence of a design. the design history which makes the semantics of most other version concepts, is modeled explicitly by additional object classes (entities types) but not by the version graph itself. top-down steps are modeled by splitting a design object into requirements and realizations. the composition hierarchy is expressed by a simple but powerful configuration method. design data of iterative refinement processes are managed efficiently by storing incremental data only. bernd schurmann joachim altmeyer martin schutze timing driven placement in interaction with netlist transformations guenter stenz bernhard m. riess bernhard rohfleisch frank m. johannes low power state assignment targeting two-and multi-level logic implementations the problem of minimizing power consumption during the state encoding of a finite state machine is considered. a new power cost model for state encoding is proposed and encoding techniques that minimize this power cost for two- and multi-level logic implementations are described. these techniques are compared with those which minimize area or the switching activity at the present state bits. experimental results show significant improvements. chi-ying tsui massoud pedram chih-ang chen alvin m. despain transistor level micro-placement and routing for two-dimensional digital vlsi cell synthesis michael a. riepe karem a. sakallah ibm fsd vlsi chip design methodology the ibm corporation's federal systems division (fsd) has designed, developed and implemented a vlsi design methodology selected specifically for the dod's design environment. the military industry vlsi requirements and ibm fsd's design system goals for developing chip design methodology are described. substantial reduction in computer costs and design schedule using the methodology and the design system components is demonstrated. the rationale behind selecting the specific combination of tools, from the many design systems in ibm, for forming the fsd chip design system and the methodology is explained. k. ahdoot r. alvarodiaz l. crawley optimal layout to avoid cmos stuck-open faults a set of layout rules is presented to cope with cmos stuck-open faults by a design for testability at the layout-level. in applying these rules, open connections may either be avoided or their effects can be described by an easily detectable type of open faults known from cmos inverters and nmos logic. hence, remaining open faults are usually covered by a complete stuck-at test pattern set. s. koeppe address generation for memories containing multiple arrays herman schmit donald e. thomas d-c and transient analysis of networks using a digital computer f. h. 
branin high-level synthesis of fault-secure microarchitectures ramesh karri alex orailoglu lower bounds on the power consumption in scheduled data flow graphs with resource constraints (poster paper) lars kruse eike schmidt gerd jochens ansgar stammermann wolfgang nebel cloning techniques for hierarchical compaction ravi varadarajan cyrus s. bamji cycle-based simulation with decision diagrams raimund ubar adam morawiec jaan raik an efficient technique for device and interconnect optimization in deep submicron designs in this paper, we formulate a new class of optimization problem, named the general ch-posynomial program, and reveal the general dominance property. we propose an efficient algorithm based on the extended local refinement operation to compute lower and upper bounds of the exact solution to the general ch-posynomial program. we apply the algorithm to solve the simultaneous transistor and interconnect sizing (stis) problem under the table-based device model, and the global interconnect sizing and spacing (giss) problem with consideration of the crosstalk capacitance. experimental results show that our algorithm can handle many device and interconnect modeling issues in deep submicron designs and is very efficient. jason cong lei he plasma: an fpga for million gate systems r. amerson r. carter w. culbertson p. kuekes g. snider lyle albertson effects of resource sharing on circuit delay: an assignment algorithm for clock period optimization this paper analyzes the effect of resource sharing and assignment on the clock period of the synthesized circuit. the assignment phase assigns or binds operations of the scheduled behavioral description to a set of allocated resources. we focus on control-flow-intensive descriptions, characterized by mutually exclusive paths due to the presence of nested conditional branches and loops. we show that clustering multiple operations in the same state of the schedule, possibly leading to chaining of functional units (fus) in the rtl circuit, is an effective way to minimize the total number of clock cycles, and hence total execution time. we present an assignment algorithm that is particularly effective for such design styles by minimizing data chaining and hence the clock period of the circuit, thereby leading to a further reduction in total execution time. existing resource sharing and assignment approaches for reducing the clock period of the resulting circuit either increase the resource allocation or use faster modules, both leading to larger area requirements. in this paper we show that even when the type of available resource units and the number of resource units of each type is fixed, different assignments may lead to circuits with significant differences in clock period. we provide a comprehensive analysis of how resource sharing and assignment introduce long paths in the circuit. based on the analysis, we develop an assignment algorithm that uses a high-level delay estimator to assign operations to a fixed set of available resources so as to minimize the clock period of the resultant circuit, with no or minimal effect on the area of the circuit. experimental results on several conditional-intensive designs demonstrate the effectiveness of the assignment algorithm. subhrajit bhattacharya sujit dey franc breglez volume holographic data storage sergei s.
orlov decomposition of logic networks into silicon this paper describes a module compiler for decomposing arbitrary functional units of any complexity into abstract cells for customized vlsi layouts. the compiler takes the description of a functional unit as input and builds a dependence graph representation. the graph is then partitioned and the nodes are packed into abstract cell output descriptions. the algorithm will tailor the design to a given area and aspect ratio. routing is done automatically through the cells. steven t. healey daniel d. gajski the genealogical approach to the layout problem the concept of the genealogical approach to the layout problem is presented. the system pursues the idea of flexible modules and is capable of dealing with arbitrarily complex tasks. the genealogical tree of the system provides a mainframe for organizing the information flow. results of the system routines are described in terms of transitions between flexibility classes of modules. antoni a. szepieniec ralph h.j.m. otten using lower bounds during dynamic bdd minimization rolf drechsler wolfgang gunther mild - a cell-based layout system for mos-lsi a standard-cell-based layout system for mos-lsi termed mild is reported. mild has several features which seem to be profitable from the practical viewpoint of lsi developers; one of them is that macro blocks such as memory cells can be contained in the chip laid out. koji sato takao nagai mikio tachibana hiroyoshi shimoyama masaru ozaki toshihiko yahara high speed gaas subsystem design using feed through logic j. a. montiel-nelson v. de armas s. nooshabadi compact and complete test set generation for multiple stuck-faults alok agrawal alexander saldanha luciano lavagno alberto l. sangiovanni-vincentelli adding content-based searching to a traditional music library catalogue server most online music library catalogues can only be searched by textual metadata. whilst highly effective - since the rules for maintaining consistency have been refined over many years - this does not allow searching by musical content. many music librarians are familiar with users humming their enquiries. most systems providing a "query by humming" interface tend to run independently of music library catalogue systems and do not offer similar textual metadata searching. this paper discusses the ongoing investigative work on integrating these two types of system conducted as part of the nsf/jisc funded omras project (http://www.omras.org). matthew j. dovey stack search - a graphical search model ted skolnick orienteering in an information landscape: how information seekers get from here to there vicki l. o'day robin jeffries design of a one to many collaborative product jean c. scholtz data modeling: order out of chaos data modeling, in a sense, is an attempt to bring order out of chaos. the data requirements of a system normally consist of an unstructured collection of various types of data descriptions. the aim is to produce from these an understandable, precise, and complete representation of the data requirements. the field of data modeling itself, however, is not so orderly. even a satisfactory definition of the term is hard to find. the different data modeling methodologies seem as numerous and diverse as the situations to be modeled. categorizing them is difficult as well. the paper will present a sampling of the data modeling methodologies currently in use and suggest some possible categorizations. carolyn budinger seaman
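the "query by humming" style of content-based searching mentioned in the music-library entry above (dovey) is commonly approximated by matching melodic contours rather than absolute pitches. the sketch below uses the well-known parsons-code idea (up/down/repeat between successive notes) with an edit distance; it is one plausible scheme under those assumptions, not the method used in the omras project, and the tune data is made up for illustration.

```python
# hypothetical contour-based melody matcher (parsons code + edit distance);
# illustrative only -- not the omras algorithm.

def contour(pitches):
    # map a pitch sequence to 'U' (up), 'D' (down), 'R' (repeat)
    return "".join("R" if b == a else ("U" if b > a else "D")
                   for a, b in zip(pitches, pitches[1:]))

def edit_distance(s, t):
    # classic dynamic-programming levenshtein distance
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (cs != ct)))
        prev = cur
    return prev[-1]

def rank(query_pitches, catalogue):
    # lower distance = better contour match, independent of key
    q = contour(query_pitches)
    return sorted((edit_distance(q, contour(p)), title) for title, p in catalogue)

if __name__ == "__main__":
    catalogue = [("ode to joy", [64, 64, 65, 67, 67, 65, 64, 62]),
                 ("twinkle twinkle", [60, 60, 67, 67, 69, 69, 67])]
    hummed = [50, 50, 52, 55, 55, 52, 50, 48]   # same shape as ode to joy, wrong key
    print(rank(hummed, catalogue)[0])           # -> (0, 'ode to joy')
```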
query-based navigation in semantically indexed hypermedia daniel cunliffe carl taylor douglas tudhope multiple access and retrieval information with annotations (abstract) edward a. fox industry briefs: cambridge technology partners joris groen josine van de ven measures of relative relevance and ranked half-life: performance indicators for interactive ir pia borlund peter ingwersen design and technology for collaborage: collaborative collages of information on physical walls a collaborage is a collaborative collage of physically represented information on a surface that is connected with electronic information, such as a physical in/out board connected to a people-locator database. the physical surface (board) contains items that are tracked by camera and computer vision technology. events on the board trigger electronic services. this paper motivates this concept, presents three different applications, describes the system architecture and component technologies, and discusses several design issues. thomas p. moran eric saund william van melle anuj u. gujar kenneth p. fishkin beverly l. harrison data mining (invited talk. abstract only): crossing the chasm data mining has attracted tremendous interest in the research community as well as the commercial marketplace. the last few years have witnessed a flurry of technical innovations and the introduction of commercial products. the next major challenge facing data mining is to make the transition from a niche technology to a mainstream technology. i will present key technical and environmental issues that must be addressed for a successful transition. rakesh agrawal connecting citizens to the national spatial data infrastructure via local libraries (poster) derek thompson jeffrey burka gary marchionini hypermedia production (abstract): hand-craft or witchcraft mark bernstein methods of cognitive analysis for hci douglas j. gillan nancy j. cooke accessing multimedia through concept clustering john kominek rick kazman cumulating and sharing end users' knowledge to improve video indexing in a video digital library in this paper, we focus on a user-driven approach to improve video indexing. it consists in cumulating the large number of small, individual efforts made by the users who access information, and in providing a community management mechanism to let users share the elicited knowledge. this technique is currently being developed in the "opales" environment and tuned up at the "institut national de l'audiovisuel" (ina), a national video library in paris, to increase the value of its patrimonial video archive collections. it relies on a portal providing private workspaces to end users, so that a large part of their work can be shared between them. the effort of interpreting documents is done directly by the expert users who work on the archives for their own purposes. opales provides an original notion of "point of view" to enable the elicitation and the sharing of knowledge between communities of users, without leading to messy structures. the overall result consists in linking exportable private metadata to archive documents and managing the sharing of the elicited knowledge between user communities. this panel brings together the leaders developing the national smete digital library to provide a brief background and broad overview of the nsdl program. panelists will discuss the overall vision and broad steps underway to develop the national smete digital library.
building the national smete digital library presents many challenges: developing a shared vision for the form and function of the nsdl; meeting the needs of diverse learners and of the many disciplines encompassed by the nsdl; acquiring input from the community of users to ensure that the nsdl is both used and useable; evaluating progress and impacts; integrating technologies that already exist, and the development of new technologies; and providing mechanisms for sharing and cooperation of knowledge and resources among nsdl collaborators. marc nanard jocelyne nanard learning and using the cognitive walkthrough method: a case study approach bonnie e. john hilary packer an adaptive real-time web search engine the internet provides a wealth of information scattered all over the world. the fact that the information may be located anywhere makes it both convenient for placing information on the web and difficult for others to find. conventional search engines can only locate information that is in their search index and users do not have much choice in limiting or expanding the search parameters. some web pages like those for news services change frequently and will not work well with index based search engines because the indexed information may become obsolete at any moment. we are proposing an efficient algorithm for finding information on the web that gives the user greater control over the search path and what to search for. unlike the conventional techniques, our algorithm does not use an index and works in real time. we save on space, we can search any part of the internet indicated by the user, and since the search is in real time, the result will be current. augustine chidi ikeji farshad fotouhi the changing face of information retrieval systems information retrieval is a fundamental capability of computers, of whatever size and type. what distinguishes one computer's retrieval performance from another's, from the standpoint of the user, are such factors as: the number and sizes of files that can be handled; the number and types of access points available to the user; the number, types, and sophistication of the search features provided; the number and kinds of error-correction, tutorial, and other user-help features available; the speed with which the system performs all of its functions; and the knowledge and skill required for successful use of the system. all of these, of course, are very dependent on the retrieval software, as well as on the computer per se. since their beginnings in the 1950s, information retrieval systems have improved steadily---and often dramatically---in capacity, responsiveness, cost-effectiveness, and user- friendliness. multi-user online access, which was just beginning to come into view about 20 years ago, has almost completely supplanted batch processing as the standard means of searching machine- readable files, a change that has been accelerated by the establishment and success of many commercial online retrieval service bureaus. according to the most recent issue of the directory of online databases, more than 150 computer service bureaus are providing online access to over 870 databases, at the present time. some of these databases contain millions of records and require billions of bytes of online storage. carlos a. cuadra strategic outlook: "daddy, won't you buy me a mobile?" 
andrea moed event tracking based on domain dependency this paper proposes a method for event tracking on broadcast news stories based on the distinction between a topic and an event. a topic and an event are identified using a simple criterion called domain dependency of words: how strongly a word characterizes a given set of data. the method was tested on the tdt corpus developed by the tdt pilot study, and the results are promising, suggesting the usefulness of the method. fumiyo fukumoto yoshimi suzuki implementing an interface to networked services this paper highlights the general problems and difficulties in using networked services. a prototype has been developed to help users interact with networked services. general design principles which arise in implementing a prototype user interface to networked services are discussed. the construction of the prototype is based on an object-oriented approach. the way it communicates with networked services and a help facility are also described. abdul hanan abdullah brian gay set oriented retrieval the broad way in which we look at how an irs functions influences the types of questions we ask about it and the ways we try to improve performance. in the recent past, retrieval methodologies have been based on retrieving documents one at a time. in this paper we introduce a set-oriented view. we observe that this view is quite consistent with the single-document or sequential methods, and define a precise model to capture the set-oriented approach. we then examine a number of consequences of the model, such as the limitations implied by a finite index vocabulary. finally, we discuss various ways in which the set orientation can influence our thinking about ir. a. bookstein design through matchmaking: technology in search of users sara bly elizabeth f. churchill algorithms for creating indexes for very large tables without quiescing updates as relational dbmss become more and more popular and as organizations grow, the sizes of individual tables are increasing dramatically. unfortunately, current dbmss do not allow updates to be performed on a table while an index (e.g., a b+-tree) is being built for that table, thereby decreasing the systems' availability. this paper describes two algorithms that relax this restriction. our emphasis has been to maximize concurrency, minimize overheads and cover all aspects of the problem. builds of both unique and nonunique indexes are handled correctly. we also describe techniques for making the index-build operations restartable, without loss of all work, in case a system failure were to interrupt the completion of the creation of the index. in this connection, we also present algorithms for making a long sort operation restartable. these include algorithms for the sort and merge phases of sorting. c. mohan inderpal narang pick-and-drop: a direct manipulation technique for multiple computer environments jun rekimoto some inconsistencies and misidentified modeling assumptions in probabilistic information retrieval william s. cooper an approach to gathering undocumented information purvis m jackson sara e moss comparison of two information retrieval methods on videotex: tree-structure versus alphabetical directory videotex systems are two-way communication systems intended to give users access to large amounts of stored information from their homes and offices. these systems link computer databases to modified television sets over the telephone network.
a central computer is used as a large information storage device, monitoring requests for information from several users at one time, finding and sending the information to the person who requested it. it is possible however, that the addition of an inexpensive supplement to the hierarchical tree search will improve user search performance without substantially altering the cost-benefits of the menu selection approach. one such method of retrieval is the on-line alphabetical directory approach where users look for a search term from a list of alphabetically organized items by menu selection. an example will indicate the difference between this method and a typical hierarchical search: jo w. tombaugh scott a. mcewen beyond work relations: sigir '97 workshop beth hetzler summary of allen newell's chi'85 address, "the prospects for science in human- computer interaction" thomas p. moran methods & tools: playacting and focus troupes:: theater techniques for creating quick, intense, immersive, and engaging focus group sessions steve sato tony salvador computer communication system design affects group decision making the impact of computer-based communication on group performance depends upon the structure enforced by the communication system. while the ability to introduce structures which enhance human communication processes has been applauded, research to evaluate the impact of various design features is lacking. this research has explored the impact of two synchronous systems which vary in the role of immediacy of interaction and feedback on group decision making. one system is message-oriented, requiring a conferee to complete a message before interacting with others. the other displays what each group member is typing in a separate window on the screens of all participants. in this system, comments can be made as ideas are expressed. groups were asked to solve a problem first individually and then cooperatively using one of the two systems. all groups produced decisions superior to the average initial individual solutions. window system groups both improved more and produced significantly higher quality decisions. these groups focused on fewer topics at one time while spending less time discussing how to organize both system and task efforts. by influencing the group's ability to organize and focus its attention, the design of the communication system influenced decision quality. sharon murrel exploring multimedia applications locality to improve cache performance andreas prati supporting awareness of others in groupware carl gutwin saul greenberg mark roseman baroque: a browser for relational databases the standard, most efficient method to retrieve information from databases can be described as systematic retrieval: the needs of the user are described in a formal query, and the database management system retrieves the data promptly. there are several situations, however, in which systematic retrieval is difficult or even impossible. in such situations exploratory search (browsing) is a helpful alternative. this paper describes a new user interface, called baroque, that implements exploratory searches in relational databases. baroque requires few formal skills from its users. it does not assume knowledge of the principles of the relational data model or familiarity with the organization of the particular database being accessed. it is especially helpful when retrieval targets are vague or cannot be specified satisfactorily. 
baroque establishes a view of the relational database that resembles a semantic network, and provides several intuitive functions for scanning it. the network integrates both schema and data, and supports access by value. baroque can be implemented on top of any basic relational database management system but can be modified to take advantage of additional capabilities and enhancements often present in relational systems. amihai motro online reading and offline tradition: adapting online help facilities to offline reading strategies alfons maes sandra goutier erik-jan van der linden timely and fault-tolerant data access from broadcast disks: a pinwheel-based approach sanjoy baruah azer bestavros authoring animated web pages using "contact points" pete faraday alistair sutcliffe the third manifesto we present a manifesto for the future direction of data and database management systems. the manifesto consists of a series of prescriptions, proscriptions, and "very strong suggestions." hugh darwen c. j. date a secure dynamic copy protocol in real-time secure database systems sungyoung lee byeong-soo jeong hyon-woo seung on the performance of object clustering techniques we investigate the performance of some of the best-known object clustering algorithms on four different workloads based upon the tektronix benchmark. for all four workloads, stochastic clustering gave the best performance for a variety of performance metrics. since stochastic clustering is computationally expensive, it is interesting that for every workload there was at least one cheaper clustering algorithm that matched or almost matched stochastic clustering. unfortunately, for each workload, the algorithm that approximated stochastic clustering was different. our experiments also demonstrated that even when the workload and object graph are fixed, the choice of the clustering algorithm depends upon the goals of the system. for example, if the goal is to perform well on traversals of small portions of the database starting with a cold cache, the important metric is the per-traversal expansion factor, and a well- chosen placement tree will be nearly optimal; if the goal is to achieve a high steady-state performance with a reasonably large cache, the appropriate metric is the number of pages to which the clustering algorithm maps the active portion of the database. for this metric, the prp clustering algorithm, which only uses access probabilities achieves nearly optimal performance. manolis m. tsangaris jeffrey f. naughton a brief survey of tertiary storage systems and research s. prabhakar d. agrawal a. el abbadi a. singh a new ranking principle for multimedia information retrieval martin wechsler peter schäuble mutual harmony and temporal continuity: a perspective from the japanese garden viewing meeting captured by an omni-directional camera one vision of future technology is the ability to easily and inexpensively capture any group meeting that occurs, store it, and make it available for people to view anytime and anywhere on the network. one barrier to achieving this vision has been the design of low-cost camera systems that can capture important aspects of the meeting without needing a human camera operator. a promising solution that has emerged recently is omni-directional cameras that can capture a 360-degree video of the entire meeting. 
the panoramic capability provided by these cameras raises both new opportunities and new issues for the interfaces provided for post-meeting viewers --- for example, do we show all meeting participants all the time or do we just show the person who is speaking? how much control do we provide to the end-user in selecting the view, and will providing this control distract them from their task? these are not just user interface issues; they also raise tradeoffs for the client-server systems used to deliver such content. they impact how much data needs to be stored on the disk, what computation can be done on the server vs. the client, and how much bandwidth is needed. we report on a prototype system built using an omni-directional camera and results from user studies of interface preferences expressed by viewers. yong rui anoop gupta j. j. cadiz response to "a close look at the ifo data model" serge abiteboul richard hull incremental clustering for dynamic information processing clustering of very large document databases is useful for both searching and browsing. the periodic updating of clusters is required due to the dynamic nature of databases. an algorithm for incremental clustering is introduced. the complexity and cost analysis of the algorithm together with an investigation of its expected behavior are presented. through empirical testing it is shown that the algorithm achieves cost effectiveness and generates statistically valid clusters that are compatible with those of reclustering. the experimental evidence shows that the algorithm creates an effective and efficient retrieval environment. fazli can keystroke level analysis of email message organization organization of email messages takes an increasing amount of time for many email users. research has demonstrated that users develop very different strategies to handle this organization. in this paper, the relationship between the different organization strategies and the time necessary to use a certain strategy is illustrated by a mathematical model based on keystroke-level analysis. the model estimates time usage for archiving and retrieving email messages for individual users. besides explaining why users develop different strategies to organize email messages, the model can also be used to advise users individually when to start using folders, clean messages, learn the search functionality, and use filters to store messages. similar models could assist evaluation of different interface designs where the number of items increases with time. olle bälter strategic outlook: double exposure janet abrams inferring web communities from link topology david gibson jon kleinberg prabhakar raghavan imprecise schema: a rationale for relations with embedded subrelations exceptional conditions are anomalous data which meet the intent of a schema but not the schema definition, represent a small proportion of the database extension, and may become known only after the schema is in use. admission of exceptional conditions is argued to suggest a representation that locally stretches the schema definition by use of relations with embedded subrelations. attempted normalization of these relations to 1nf does not yield the static schema typically associated with such transformations. a class of relations, termed exceptional condition nested form (ecnf), is defined which allows the necessary representation of exceptional conditions while containing sufficient restrictions to prevent arbitrary and chaotic inclusion of embedded subrelations.
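as a concrete illustration of the embedded-subrelation idea just described, the following sketch unnests a hypothetical employee record carrying one exceptional condition into flat 1nf rows; the field names are invented for illustration, and a single None placeholder stands in where the full transformation would distinguish different kinds of null values:

```python
# illustrative only: the "exceptions" field is an embedded subrelation holding
# anomalous data that meets the intent of the schema but not its flat definition
nested = {
    "emp_id": 17,
    "name": "smith",
    "salary": 52000,
    "exceptions": [
        {"attribute": "salary", "note": "paid in two currencies"},
    ],
}

def unnest_to_1nf(record, sub_field="exceptions"):
    """flatten one nested record into 1nf rows; None marks subrelation
    attributes that do not apply to a given row."""
    subs = record.get(sub_field) or [{}]  # keep one base row even without exceptions
    base = {k: v for k, v in record.items() if k != sub_field}
    rows = []
    for sub in subs:
        row = dict(base)
        row["exc_attribute"] = sub.get("attribute")
        row["exc_note"] = sub.get("note")
        rows.append(row)
    return rows

for r in unnest_to_1nf(nested):
    print(r)
```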
queries on a subset of exceptional conditions, the exceptional constraints, are provided an interpretation via an algorithm that transforms ecnf relations into 1nf relations containing two types of null values. extensions of relational algebraic operators, suitable for interactive query navigation, are defined for use with ecnf relations containing all forms of exceptional conditions. howard m. dreizen shi-kuo chang dynamically distributed query evaluation trevor jim dan suciu a spatial approach to organizing and locating digital libraries and their content jason orendorf charles kacmar the alphaslider: a compact and rapid selector christopher ahlberg ben shneiderman microsoft terraserver: a spatial data warehouse microsoft® terraserver stores aerial, satellite, and topographic images of the earth in a sql database available via the internet. it is the world's largest online atlas, combining eight terabytes of image data from the united states geological survey (usgs) and spin-2. internet browsers provide intuitive spatial and text interfaces to the data. users need no special hardware, software, or knowledge to locate and browse imagery. this paper describes how terabytes of "internet unfriendly" geo-spatial images were scrubbed and edited into hundreds of millions of "internet friendly" image tiles and loaded into a sql data warehouse. all meta-data and imagery are stored in the sql database. terraserver demonstrates that general-purpose relational database technology can manage large scale image repositories, and shows that web browsers can be a good geo-spatial image presentation system. tom barclay jim gray don slutz notepals: lightweight note sharing by the group, for the group richard c. davis james a. landay victor chen jonathan huang rebecca b. lee frances c. li james lin charles b. morrey ben schleimer morgan n. price bill n. schilit qoc in action: using design rationale to support design diane mckerlie allan maclean query processing for knowledge bases using join indices adel shrufi thodoros topaloglou preference structure, inference and set-oriented retrieval y. y. yao s. k. m. wong control: continuous output and navigation technology with refinement on-line the control project at u.c. berkeley has developed technologies to provide online behavior for data-intensive applications. using new query processing algorithms, these technologies continuously improve estimates and confidence statistics. in addition, they react to user feedback, thereby giving the user control over the behavior of long-running operations. this demonstration displays the modifications to a database system and the resulting impact on aggregation queries, data visualization, and gui widgets. we then compare this interactive behavior to batch-processing alternatives. ron avnur joseph m. hellerstein bruce lo chris olston bhaskaran raman vijayshankar raman tali roth kirk wylie persival demo: categorizing hidden-web resources panagiotis g. ipeirotis luis gravano mehran sahami disembodied conduct: communication through video in a multi-media office environment christian heath paul luff fish tank virtual reality colin ware kevin arthur kellogg s. booth iterative methodology and designer training in human-computer interface design gregg (skip) bailey the motivational nature of the critical systems operator job: expanding the job characteristics model with relationships d. harrison mcknight norman l. 
chervany ultra-summarization (poster abstract): a statistical approach to generating highly condensed non-extractive summaries michael j. witbrock vibhu o. mittal neighborhood systems and relational databases queries in a database can be classified roughly into two types: specific targets and fuzzy targets. many queries are in effect fuzzy targets; however, lacking such support, the user has been emulating them with specific targets by retrying a query repeatedly with minor changes. in this paper, we augment the relational database with neighborhood systems, so the database can answer a fuzzy query. there have been many efforts to combine relational databases and fuzzy theory. buckles and petry replaced attribute values by sets of values. zemankova-leech, kandel, and zviell used fuzzy logic. the formalism of the present work is quite general; it allows numerical or nonnumerical measurements of fuzziness in relational databases. the fuzzy theory presented here is quite different from the usual theory. our basic assumption here is that the data are not fuzzy, the queries are. motro [motr86] introduced the notion of distance into relational databases. from that he can then define the notion of "close-ness" and develop goal queries. though "distance" is a useful concept, very often its quantification is meaningless or extremely difficult. for example, "very close", "very far" are meaningful concepts of distance, yet there is no practical way to quantify them for all occasions. our approach here is more direct: we define directly the meaning of "very close neighborhood". using the concept of neighborhoods is not very original; in fact, in the theory of topological spaces [dugu66], mathematicians have been using the "neighborhood system" to study the phenomenon of "close-ness". in the territory of fuzzy queries, the notion of "neighborhood" captures the essence of the qualitative information of "close-ness" better than the brute-force-quantified information (distance). a "fuzzy" neighborhood is a qualitative measure of fuzziness. on the surface, it seems a very complicated procedure to define a neighborhood for each value in the attribute. in fact, if we use the characteristic function (membership function) to define a subset, then the defining procedure is merely another type of distance function (non-measure distance or symbolic distance). now, to define the neighborhood system one can simply re-enter the third column of the relation with linguistic values: "very close", "close", "far". note that there is a "greater than" relation among these linguistic values. in mathematical terms, they form a lattice [jaco60]. for technical reasons, we require the values in the third column to be elements of a lattice. note that the real numbers form a lattice, so we get motro's results back. t. y. lin the logical data model we propose an object-oriented data model that generalizes the relational, hierarchical, and network models. a database scheme in this model is a directed graph, whose leaves represent data and whose internal nodes represent connections among the data. instances are constructed from objects, which have separate names and values. we define a logic for the model, and describe a nonprocedural query language that is based on the logic. we also describe an algebraic query language and show that it is equivalent to the logical language. gabriel m. kuper moshe y. vardi a demonstration of whirl (demonstration abstract) william w.
cohen obsm: a notation to integrate different levels of user interface design birgit kneer gerd szwillus a locking protocol for resource coordination in distributed databases a locking protocol to coordinate access to a distributed database and to maintain system consistency throughout normal and abnormal conditions is presented. the proposed protocol is robust in the face of crashes of any participating site, as well as communication failures. recovery from any number of failures during normal operation or any of the recovery stages is supported. recovery is done in such a way that maximum forward progress is achieved by the recovery procedures. integration of virtually any locking discipline including predicate lock methods is permitted by this protocol. the locking algorithm operates, and operates correctly, when the network is partitioned, either intentionally or by failure of communication lines. each partition is able to continue with work local to it, and operation merges gracefully when the partitions are reconnected. a subroutine of the protocol, that assures reliable communication among sites, is shown to have better performance than two-phase commit methods. for many topologies of interest, the delay introduced by the overall protocol is not a direct function of the size of the network. the communications cost is shown to grow in a relatively slow, linear fashion with the number of sites participating in the transaction. an informal proof of the correctness of the algorithm is also presented in this paper. the algorithm has as its core a centralized locking protocol with distributed recovery procedures. a centralized controller with local appendages at each site coordinates all resource control, with requests initiated by application programs at any site. however, no site experiences undue load. recovery is broken down into three disjoint mechanisms: for single node recovery, merge of partitions, and reconstruction of the centralized controller and tables. the disjointness of the mechanisms contributes to comprehensibility and ease of proof. the paper concludes with a proposal for an extension aimed at optimizing operation of the algorithm to adapt to highly skewed distributions of activity. the extension applies nicely to interconnected computer networks. daniel a. menasce gerald j. popek richard r. muntz partial-match retrieval using hashing and descriptors this paper studies a partial-match retrieval scheme based on hash functions and descriptors. the emphasis is placed on showing how the use of a descriptor file can improve the performance of the scheme. records in the file are given addresses according to hash functions for each field in the record. furthermore, each page of the file has associated with it a descriptor, which is a fixed-length bit string, determined by the records actually present in the page. before a page is accessed to see if it contains records in the answer to a query, the descriptor for the page is checked. this check may show that no relevant records are on the page and, hence, that the page does not have to be accessed. the method is shown to have a very substantial performance advantage over pure hashing schemes, when some fields in the records have large key spaces. a mathematical model of the scheme, plus an algorithm for optimizing performance, is given. k. ramamohanarao james a. thom john w. 
lloyd data modeling of time-based media many aspects of time-based media--- complex data encoding, compression, "quality factors," timing---appear problematic from a data modeling standpoint. this paper proposes timed streams as the basic abstraction for modeling time-based media. several media-independent structuring mechanisms are introduced and a data model is presented which, rather than leaving the interpretation of multimedia data to applications, addresses the complex organization and relationships present in multimedia. simon gibbs christian breiteneder dennis tsichritzis scheduling issues in multimedia query optimization minos n. garofalakis yannis e. ioannidis on databases with incomplete information witold lipski on-demand regional television over the internet haakon bryhni hilde lovett erling maartmann-moe dag solvoll tryggve sørensen information exchanges patterns in a computer-supported cooperative work environment gary j. cook cheryl l. dunn severin v. grabski towards an efficient management of objects in a distributed environment a. el habbash j. grimson c. horn automatic generation of "hyper-paths" in information retrieval systems: a stochastic and an incremental algorithms alain lelu integrating automatic genre analysis into digital libraries with the number and types of documents in digital library systems increasing, tools for automatically organizing and presenting the content have to be found. while many approaches focus on topic-based organization and structuring, hardly any system incorporates automatic structural analysis and representation. yet, genre information (unconsciously) forms one of the most distinguishing features in conventional libraries and in information searches. in this paper we present an approach to automatically analyze the structure of documents and to integrate this information into an automatically created content-based organization. in the resulting visualization, documents on similar topics, yet representing different genres, are depicted as books in differing colors. this representation supports users intuitively in locating relevant information presented in a relevant form. andreas rauber alexander muller-kogler corrigenda: "concepts and notations for concurrent programs" gregory andrews fred b. schneider the mlpq/gis constraint database system mlpq/gis [4,6] is a constraint database [5] system like ccube [1] and dedale [3] but with a special emphasis on spatio-temporal data. features include data entry tools (first four icons in fig. 1), icon-based queries such as intersection, union, area, buffer, max and min, which optimize linear objective functions, and an icon for datalog queries. for example, in fig. 1 we loaded and displayed a constraint database that represents the midwest united states and loaded two constraint relations describing the movements of two persons. the query icon opened a dialog box into which we entered the query which finds (t, i) pairs such that the two people are in the same state i at the same time t. mlpq/gis can animate [2] spatio-temporal objects that are linear constraint relations over x, y, and t. users can also display in discrete color zones (isometric maps) any spatially distributed variable z that is a linear function of x and y; for example, fig. 2 shows the mean annual air temperature of nebraska. animation and isometric map display can be combined. peter revesz rui chen pradip kanjamala yiming li yuguo liu yonghui wang is information system a science?
an inquiry into the nature of the information systems discipline the information systems (is) discipline is apparently undergoing an identity crisis. academicians question the need for is departments in colleges, stating the absence of a core for the field and its integration within other business functions as a basis for its elimination. at the same time, many practitioners, as reflected in the u.s. government's recent it labor shortage report, continue to ignore is as a distinct field of study. this article briefly outlines these and other challenges and argues that notwithstanding underlying philosophical differences, it can be concluded that is is an emerging scientific discipline. this conclusion is reached through an assessment of the debate surrounding the issue of whether is should be a discipline and an analysis of the is discipline using some key characteristics of "science." the arguments put forth in this paper have four key implications for the is community: a continuing emphasis on adopting scientific principles and practices for conducting inquiry into is phenomena; an enhancement of the self-concept of is academics and professionals through a common identity; it enhances the ability of supporters of the is field to defend against criticisms, integration with other disciplines, and resource rivalry; and it creates the potential of being well-situated to building a cumulative tradition in the field. deepak khazanchi bjørn erik munkvold image mining in iris: integrated retinal information system there is an increasing demand for systems that can automatically analyze images and extract semantically meaningful information. iris, an integrated retinal information system, has been developed to provide medical professionals easy and unified access to the screening, trend and progression of diabetic-related eye diseases in a diabetic patient database. this paper shows how mining techniques can be used to accurately extract features in the retinal images. in particular, we apply a classification approach to determine the conditions for tortuousity in retinal blood vessels. wynne hsu mong li lee kheng guan goh birch: an efficient data clustering method for very large databases tian zhang raghu ramakrishnan miron livny a method for scoring correlated features in query expansion martin franz salim roukos vertical partitioning algorithms for database design this paper addresses the vertical partitioning of a set of logical records or a relation into fragments. the rationale behind vertical partitioning is to produce fragments, groups of attribute columns, that "closely match" the requirements of transactions. vertical partitioning is applied in three contexts: a database stored on devices of a single type, a database stored in different memory levels, and a distributed database. in a two-level memory hierarchy, most transactions should be processed using the fragments in primary memory. in distributed databases, fragment allocation should maximize the amount of local transaction processing. fragments may be nonoverlapping or overlapping. a two-phase approach for the determination of fragments is proposed; in the first phase, the design is driven by empirical objective functions which do not require specific cost information. the second phase performs cost optimization by incorporating the knowledge of a specific application environment. the algorithms presented in this paper have been implemented, and examples of their actual use are shown. 
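the two-phase design just outlined is often driven, in its first, empirically motivated phase, by an attribute affinity measure: how frequently pairs of attributes are requested together by the transaction workload. the sketch below builds such an affinity matrix and groups attributes with a simple greedy heuristic; the workload, threshold, and grouping rule are invented for illustration and stand in for the paper's actual partitioning algorithms, and a real design would typically also replicate the key attribute into every fragment before the cost-optimizing second phase:

```python
from itertools import combinations

# hypothetical workload: each transaction lists the attributes it uses and its frequency
transactions = [
    ({"emp_id", "name"}, 40),
    ({"emp_id", "salary", "bonus"}, 25),
    ({"emp_id", "name", "dept"}, 15),
]
attributes = ["emp_id", "name", "salary", "bonus", "dept"]

# attribute affinity: how often two attributes are accessed together across the workload
affinity = {tuple(sorted(p)): 0 for p in combinations(attributes, 2)}
for used, freq in transactions:
    for pair in combinations(sorted(used), 2):
        affinity[pair] += freq

def greedy_fragments(threshold=20):
    """group attributes whose pairwise affinity exceeds a threshold
    (a toy stand-in for the paper's partitioning algorithms)."""
    fragments = [{a} for a in attributes]
    for pair, score in sorted(affinity.items(), key=lambda kv: -kv[1]):
        if score < threshold:
            break
        a, b = pair
        fa = next(f for f in fragments if a in f)
        fb = next(f for f in fragments if b in f)
        if fa is not fb:
            fa |= fb
            fragments.remove(fb)
    return fragments

print(greedy_fragments())  # e.g. [{'emp_id', 'name', 'salary', 'bonus'}, {'dept'}]
```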
shamkant navathe stefano ceri gio wiederhold jinglie dou workshop 1: visual interfaces to digital libraries - its past, present, and future the design of easy-to-use and informative visual interfaces to digital libraries is an integral part to the advances of digital libraries. a wide range of approaches have been developed from a diverse spectrum of perspectives that focus on users and tasks to be supported, data to be modeled, and the efficiency of algorithms. information visualization aims to exploit the human visual information processing system, especially with non- spatial data (such as documents and images typically found in digital libraries). generally, information visualization examines semantic relationships intrinsic to an abstract information space and how they can be spatially navigated and memorized using similar cognitive processes to those that would apply during interactions with the real world. this workshop promotes the convergence of information visualization and digital libraries. it brings together researchers and practitioners in the areas of information visualization, digital libraries, human-computer interaction, library and information science, and computer science to identify the most important issues in the past and the present, and what should be done in the future. katy börner chaomei chen ultra-lightweight constraints scott e. hudson ian smith building information systems for mobile environments it is expected that in the near future, tens of millions of users will have access to distributed information systems through wireless connections. the technical characteristics of the wireless medium and the resulting mobility of both data resources and data consumers raise new challenging questions regarding the development of information systems appropriate for mobile environments. in this paper, we report on the development of such a system. first, we describe the general architecture of the information system and the main considerations of our design. then, based on these considerations, we present our system support for maintaining the consistency of replicated data and for providing transaction schemas that account for the frequent but predictable disconnections, the mobility, and the vulnerability of the wireless environment. evaggelia pitoura bharat bhargava evaluating adaptive navigation support kristina höök martin svensson challenges in automating declarative business rules to enable rapid business response val huber courtyard: integrating shared overview on a large screen and per-user detail on individual screens masayuki tani masato horita kimiya yamaashi koichiro tanikoshi masayasu futakawa interactive techniques jurgen ziegler nested historical relations the paper extends nested relations for managing temporal variation of complex objects. it combines the research in temporal databases and nested relations for nontraditional database applications. the basic modelling construct is a temporal atom as an attribute value. a temporal atom consists of two components, a value and temporal set which is a set of times denoting the validity period of the value. we define algebra operations for nested historical relations. data redundancy in nested historical relations is also discussed and criteria for well-structured nested relations are established. a. u. tansel l. garnett indexing inheritance and aggregation karen c. davis unmi tina kang shobha ravishankar editorial jay blickstein grins lloyd rutledge lynda hardman dick c. a. 
bulterman strategies for enhancing user access and understanding of digital library structure and content r. b. allen evaluating multimedia presentations for comprehension peter faraday a novel checkpointing scheme for distributed database systems we present a new checkpointing scheme for a distributed database system. our scheme records the states of some selected data items and can be executed at any time without stopping other activities in the database system. it makes use of "shadows" of data items to make sure that the collected data item values are "transaction-consistent". storage overhead is low, since at most one shadow is needed for each data item. slawomir pilarski tiko kameda three implications of spatial access to online information t. r. girill constant-time-maintainable bcnf database schemes hector j. hernandez edward p. f. chan the cscw implementation process: an interpretative model and case study of the implementation of a videoconference system duncan sanderson multiple source information analysis, gis and starlight bruce rex john risch scott dowson brian moon multilingual information discovery and access (midas) douglas w. oard carol peters generation of user profiles for information filtering - research agenda (poster session) in information filtering (if) systems, user long-term needs are expressed as user profiles. the quality of a user profile has a major impact on the performance of if systems. the focus of the proposed research is on the study of user profile generation and update. the paper introduces methods for user profile generation, and proposes a research agenda for their comparison and evaluation. tsvi kuflik peretz shoval participatory design practices (abstract): a special interest group elizabeth b.-n. sanders elizabeth h. nutter multimedia document presentation, information extraction, and document formation in minos: a model and a system minos is an object-oriented multimedia information system that provides integrated facilities for creating and managing complex multimedia objects. in this paper the model for multimedia documents supported by minos and its implementation is described. described in particular are functions provided in minos that exploit the capabilities of a modern workstation equipped with image and voice input-output devices to accomplish an active multimedia document presentation and browsing within documents. these functions are powerful enough to support a variety of office applications. also described are functions provided for the extraction of information from multimedia documents that exist in a large repository of information (multimedia document archiver) and functions that select and transform this information. facilities for information sharing among objects of the archiver are described; an interactive multimedia editor that is used for the extraction and interactive creation of new information is outlined; finally, a multimedia document formatter that is used to synthesize a new multimedia document from extracted and interactively generated information is presented. this prototype system runs on a sun-3 workstation running unix. an instavox, directly addressable, analog device is used to store voice segments. s. christodoulakis m. theodoridou f. ho m. papa a. pathria manipulation of music for melody matching alexandra l.
uitdenbogerd justin zobel research issues in moving objects databases (tutorial session) ouri wolfson the multimedia library: the center of an information rich community gerard jorna mirjam wouters paul gardien hans kemp jack mama irene mavromati ian mcclelland linda vodegel matzen inquiry with imagery: historical archive retrieval with digital cameras this paper describes an integration of geographic information systems (gis) and multimedia technologies to transform the ways k-12 students learn about their local communities. we have augmented a digital camera with a global positioning system (gps) and a digital compass to record its position and orientation when pictures are taken. the metadata are used to retrieve and present historical images of the photographed locations to students. another set of tools allows them to annotate and compare these historical images to develop explanations of how and why their communities have changed over time. we describe the camera architecture and learning outcomes that we expect to see in classroom use. brian k. smith erik blankinship alfred ashford michael baker timothy hirzel bringing order to the web: automatically categorizing search results we developed a user interface that organizes web search results into hierarchical categories. text classification algorithms were used to automatically classify arbitrary search results into an existing category structure on-the-fly. a user study compared our new category interface with the typical ranked list interface of search results. the study showed that the category interface is superior both in objective and subjective measures. subjects liked the category interface much better than the list interface, and they were 50% faster at finding information that was organized into categories. organizing search results allows users to focus on items in categories of interest rather than having to browse through all the results sequentially. hao chen susan dumais delays and temporal incoherence due to the mediated status-status mappings the paper describes how the identification of 'status-status mappings' early in the specification and design of an interactive system can highlight potential temporal problems in the interface. these problems arise because without infinitely fast computation and communication, any constraints between status in the interface are bound to be violated some of the time. this violation will at best be a slight lag between the source of a change and its display and at worst may lead to inconsistency between parts of the interface. we identify the ways in which status-status mappings are violated and the way in which they are mediated by events in the implementation of a system. this enables the designer to control the eventual behaviour of the system and avoid the worst pitfalls. alan dix gregory abowd properties of thinking and feeling transferred from human computer interaction to social interaction ethel h. hanson distributed stream control for self-managing media processing graphs lisa amini jorge lepre martin kienzle visual mesh xia lin datasplash database visualization is an area of growing importance as database systems become larger and more accessible. datasplash is an easy-to- use, integrated environment for navigating, creating, and querying visual representations of data. 
we will demonstrate the three main components which make up the datasplash environment: a navigation system, a direct-manipulation interface for creating and modifying visualizations, and a direct-manipulation visual query system. chris olston allison woodruff alexander aiken michael chu vuk ercegovac mark lin mybrid spalding michael stonebraker relief: combining expressiveness and rapidity into a single system iadh ounis marius pasca post-optimization and incremental refinement of r-trees yváan j. garcia mario a. lopez scott t. leutenegger a performance analysis of view materialization strategies the conventional way to process commands for relational views is to use query modification to translate the commands into ones on the base relations. an alternative approach has been proposed recently, whereby materialized copies of views are kept, and incrementally updated immediately after each modification of the database. a related scheme exists, in which update of materialized views is deferred until just before data is retrieved from the view. a performance analysis is presented comparing the cost of query modification, immediate view maintenance, and deferred view maintenance. three different models of the structure of views are given a simple selection and projection of one relation, the natural join of two relations, and an aggregate (e.g. the sum of values in a column) over a selection-projection view. the results show that the choice of the most efficient view maintenance method depends heavily on the structure of the database, the view definition, and the type of query and update activity present. eric n. hanson a comparative study of log-only and in-place update based temporal object database systems kjetil nørvåg the time index+: an incremental access structure for temporal databases vram kouramajian ibrahim kamel ramez elmasri syed waheed impact of video frame rate on communicative behaviour in two and four party groups there has been relatively little research on the impact of different levels of video quality on users of multimedia communication systems. this paper describes a study examining the impact of two levels of video frame rate on pairs and groups of four engaged on a design task, looking at one particular aspect of communication, namely reference. it was found that a low frame rate made speakers more communicatively cautious, using longer descriptions and more elaborations to refer to pictures used in the task, possibly as a result of being less certain that they had been understood. this only occurred in the two party groups despite a prediction that groups of four would be affected most by the frame rate manipulation. this study shows that video quality can have subtle effects on communication and that identical levels of quality may have different effects depending on the situation. matthew jackson anne h. anderson rachel mcewan jim mullin confronting the assumptions underlying the management of knowledge: an agenda for understanding and investigating knowledge management knowledge and knowledge management are receiving tremendous interest from both practitioners and academics. although knowledge management is often accepted as a very useful organizational activity, a number of the assumptions underlying knowledge management have not been investigated. this paper examines four knowledge management assumptions: knowledge is worth managing, organizations benefit from managing knowledge, knowledge can be managed, and little risk is associated with managing knowledge. 
the assumptions are analyzed at strategic and operational levels, and both negating and supporting evidence is presented. based on this analysis, a framework for research in knowledge management is proposed. the framework is used to generate a number of key questions that should be addressed in knowledge management research. particular attention is given to goals and rewards as well as to the role of information technology in knowledge management. kathy a. stewart richard baskerville veda c. storey james a. senn arjan raven cherie long automatic, object-based indexing for assisted analysis of video data jonathan d. courtney alternatives: exploring information appliances through conceptual design proposals as a way of mapping a design space for a project on information appliances, we produced a workbook describing about twenty conceptual design proposals. on the one hand, they serve as suggestions that digital devices might embody values apart from those traditionally associated with functionality and usefulness. on the other, they are examples of research through design, balancing concreteness with openness to spur the imagination, and using multiplicity to allow the emergence of a new design space. here we describe them both in terms of content and process, discussing first the values they address and then how they were crafted to encourage a broad discussion with our partners that could inform future stages of design. bill gaver heather martin object-oriented approach to interconnecting trusted database management systems bhavani thuraisingham harvey rubinovitz efficient and tunable similar set retrieval set value attributes are a concise and natural way to model complex data sets. modern object relational systems support set value attributes and allow various query capabilities on them. in this paper we initiate a formal study of indexing techniques for set value attributes based on similarity, for suitably defined notions of similarity between sets. such techniques are necessary in modern applications such as recommendations through collaborative filtering and automated advertising. our techniques are probabilistic and approximate in nature. as a design principle we create structures that make use of well known and widely used data structuring techniques, as a means to ease integration with existing infrastructure. we show how the problem of indexing a collection of sets based on similarity can be reduced to the problem of indexing suitably encoded (in a way that preserves similarity) binary vectors in hamming space, thus reducing the problem to one of similarity query processing in hamming space. then, we introduce and analyze two data structure primitives that we use in cooperation to perform similarity query processing in a hamming space. we show how the resulting indexing technique can be optimized for properties of interest by formulating constraint optimization problems based on the space one is willing to devote for indexing. finally, we present experimental results from a prototype implementation of our techniques using real life datasets exploring the accuracy and efficiency of our overall approach as well as the quality of our solutions to problems related to the optimization of the indexing scheme. aristides gionis dimitrios gunopulos nick koudas on estimating block accesses in database organizations w. s. luk selectivity estimation in spatial databases selectivity estimation of queries is an important and well-studied problem in relational database systems.
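returning to the similar-set retrieval abstract above: one widely used family of similarity-preserving encodings for sets is the minhash signature, sketched below. this is an illustrative stand-in rather than the specific encoding of that paper, and the sample sets, seed count, and use of python's built-in hash are all assumptions:

```python
import random

def minhash_signature(s, seeds):
    """per seed, keep the minimum hash value over the set's elements;
    two signatures agree on a coordinate with probability roughly equal
    to the jaccard similarity of the underlying sets."""
    return [min(hash((seed, x)) for x in s) for seed in seeds]

def estimated_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

random.seed(7)
seeds = [random.randrange(2**32) for _ in range(256)]

a = {"db", "index", "similarity", "hamming", "set"}
b = {"db", "index", "similarity", "query", "set", "vector"}

true_jaccard = len(a & b) / len(a | b)
est = estimated_jaccard(minhash_signature(a, seeds), minhash_signature(b, seeds))
print(round(true_jaccard, 3), round(est, 3))
```

signatures like these can in turn be reduced to short binary codes, which is one way a collection of sets can end up as vectors indexed in hamming space.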
in this paper, we examine selectivity estimation in the context of geographic information systems, which manage spatial data such as points, lines, poly-lines and polygons. in particular, we focus on point and range queries over two-dimensional rectangular data. we propose several techniques based on using spatial indices, histograms, binary space partitionings (bsps), and the novel notion of spatial skew. our techniques carefully partition the input rectangles into subsets and approximate each partition accurately. we present a detailed experimental study comparing the proposed techniques and the best known sampling and parametric techniques. we evaluate them using synthetic as well as real-life tiger datasets. based on our experiments, we identify a bsp-based partitioning that we call min-skew which consistently provides the most accurate selectivity estimates for spatial queries. the min-skew partitioning can be constructed efficiently, occupies very little space, and provides accurate selectivity estimates over a broad range of spatial queries. swarup acharya viswanath poosala sridhar ramaswamy automatic filing and retrieval of official documents using global mail attributes and a viewdata system with symbolically named pages a. o. amadi browsing the structure of multimedia stories stories may be analyzed as sequences of causally-related events and reactions to those events by the characters. we employ a notation of plot elements, similar to one developed by lehnert, and we extend that by forming higher level "story threads". we apply the browser to corduroy, a children's short feature which was analyzed in detail. we provide additional illustrations with analysis of kiss of death, a film noir classic. effectively, the browser provides a framework for interactive summaries, video of the narrative. robert b. allen jane acheson integrating co-operative work supports from chaos perspective (abstract) carla simone the spatial metaphor for user interfaces: experimental tests of reference by location versus name the enduring dichotomy between spatial and symbolic modes of representation and retrieval acquires an added pragmatic dimension through recent developments in computer-based information retrieval. the standard name-based approach to object reference is now supplemented on some systems by a spatial alternative, often driven by an office or desktop metaphor. little rigorous evidence is available, however, to support the supposition that spatial memory in itself is more effective than symbolic memory. the accuracy of spatial versus symbolic reference was assessed in three experiments. in experiment 1 accuracy of location reference in a location-only filing condition was initially comparable to that in a name-only condition, but deteriorated much more rapidly with increases in the number of objects filed. in experiment 2 subjects placed objects in a two-dimensional space containing landmarks (drawings of a desk, table, filing cabinets, etc.) designed to evoke an office metaphor, and in experiment 3 subjects placed objects in an actual, three-dimensional mock office. neither of these enhancements served to improve significantly the accuracy of location reference, and performance remained below that of a name-only condition in experiment 1.
the results raise questions about the utility of spatial metaphor over symbolic filing and highlight the need for continuing research in which considerations of technological and economic feasibility are balanced by considerations of psychological utility. william p. jones susan t. dumais automatic linking of thesauri s. amba n. narasimhamurthi kevin c. o'kane philip m. turner a categorization method for french legal documents on the web guiraude lame 3c or not 3c, that is the question daniel g. bobrow a framework for expressing and combining preferences the advent of the world wide web has created an explosion in the available on- line information. as the range of potential choices expand, the time and effort required to sort through them also expands. we propose a formal framework for expressing and combining user preferences to address this problem. preferences can be used to focus search queries and to order the search results. a preference is expressed by the user for an entity which is described by a set of named fields; each field can take on values from a certain type. the * symbol may be used to match any element of that type. a set of preferences can be combined using a generic combine operator which is instantiated with a value function, thus providing a great deal of flexibility. same preferences can be combined in more than one way and a combination of preferences yields another preference thus providing the closure property. we demonstrate the power of our framework by illustrating how a currently popular personalization system and a real-life application can be realized as special cases of our framework. we also discuss implementation of the framework in a relational setting. rakesh agrawal edward l. wimmers architecting personalized delivery of multimedia information shoshana loeb sorting out searching: a user-interface framework for text searches ben shneiderman donald byrd w. bruce croft extracting usability information from user interface events modern window-based user interface systems generate user interface events as natural products of their normal operation. because such events can be automatically captured and because they indicate user behavior with respect to an application's user interface, they have long been regarded as a potentially fruitful source of information regarding application usage and usability. however, because user interface events are typically voluminos and rich in detail, automated support is generally required to extract information at a level of abstraction that is useful to investigators interested in analyzing application usage or evaluating usability. this survey examines computer-aided techniques used by hci practitioners and researchers to extract usability- related information from user interface events. a framework is presented to help hci practitioners and researchers categorize and compare the approaches that have been, or might fruitfully be, applied to this problem. because many of the techniques in the research literature have not been evaluated in practice, this survey provides a conceptual evaluation to help identify some of the relative merits and drawbacks of the various classes of approaches. ideas for future research in this area are also presented. this survey addresses the following questions: how might user interface events be used in evaluating usability? how are user interface events related to other forms of usability data? what are the key challenges faced by investigators wishing to exploit this data? 
what approaches have been brought to bear on this problem and how do they compare to one another? what are some of the important open research questions in this area? david m. hilbert david f. redmiles a theory of term weighting based on exploratory data analysis warren r. greiff lyberworld: a 3d graphical user interface for fulltext retrieval matthias hemmje mixing oil and water?: ethnography versus experimental psychology in the study of computer-mediated communication andrew monk bonnie nardi nigel gilbert marilyn mantei john mccarthy using star clusters for filtering javed aslam katya pelekhov daniela rus new digital broadcasting services for use with tv sets containing digital storage devices rina hayasaka hiroshi matoba kazutoshi maeno probabilistic document indexing from relevance feedback data based on the binary independence indexing model, we apply three new concepts for probabilistic document indexing from relevance feedback data: abstraction from specific terms and documents, which overcomes the restriction of limited relevance information for parameter estimation. flexibility of the representation, which allows the integration of new text analysis and knowledge-based methods in our approach as well as the consideration of more complex document structures or different types of terms (e.g. single words and noun phrases). probabilistic learning or classification methods for the estimation of the indexing weights making better use of the available relevance information. we give experimental results for five test collections which show improvements over other indexing methods. n. fuhr c. buckley towards robust distributed systems (abstract) current distributed systems, even the ones that work, tend to be very fragile: they are hard to keep up, hard to manage, hard to grow, hard to evolve, and hard to program. in this talk, i look at several issues in an attempt to clean up the way we think about these systems. these issues include the fault model, high availability, graceful degradation, data consistency, evolution, composition, and autonomy. these are not (yet) provable principles, but merely ways to think about the issues that simplify design in practice. they draw on experience at berkeley and with giant-scale systems built at inktomi, including the system that handles 50% of all web searches. eric a. brewer online services as distributed meeting support software harold emanuel fred niederman stewart shapiro using query mediators for distributed searching in federated digital libraries naomi dushay james c. french carl lagoze open cscw systems for distributed environments yasushi nakauchi yuichiro anzai uc berkeley's digital library project for digital libraries to succeed, we must abandon the traditional notion of "library" altogether. the reason is as follows: the digital "library" will be a collection of distributed information services; producers of material will make it available, and consumers will find it and use it, perhaps through the help of automated agents. libraries in the traditional sense are nowhere to be found in this model (i. e., the notion of a limited intermediary containing some small fraction of preselected material available only to local patrons is replaced by a system providing to users everywhere direct access to the full contents of all available material). robert wilensky a performance comparison of quadtree-based access methods for thematic maps eleni tousidou yannis manolopoulos word sense disambiguation using machine-readable dictionaries r. 
krovetz w. b. croft video scouting demonstration: smart content selection and recording smart video content selection and recording is the best selling feature of the current personal tv receivers like tivo. these devices operate at the tv program level in that they use electronic program guides and user's program personal preferences to help consumers record and watch programs that match their interests. in this video scouting demonstration, we present a system that allows for the filtering and retrieving of tv sub-programs based on user's content preferences. the filtering process is realized via real-time video, audio, and transcript analysis. the demonstrator personalizes the tv experience in the areas of celebrity and financial information. the technology can translate into differentiating storage and set-top box product features for finding your favorite actors, most interesting personalized financial news of the day, commercial compaction and enhancement, and content augmentation with other sources of information such as web pages and encyclopedia. the demonstrator also reflects our active involvement in the mpeg-7 standard (content description interface). nevenka dimitrova lalitha agnihotri george marmaropoulos thomas mcgee serhan dagtas electronic conferencing - issues beyond software selection electronic conferencing is rapidly becoming a popular medium for communication. we have witnessed wide-spread installations of electronic conferencing systems on college campuses and in academic settings that provide information and resource sharing in a variety of ways. probably you have already made the decision to purchase or develop an electronic conferencing package for your site. you may already be using it to meet some of the demands of your user community. but once the conferencing software has been chosen and installed, there still exists the potential for a myriad of problems to arise. anticipating these issues and planning for them before-hand could determine whether electronic conferencing is considered a success at your installation. this paper examines a variety of issues on a multi-level basis. it focuses primarily on problems that are dealt with at the administrative level and at the individual conference-coordinator level. issues such as qualifications of conference coordinators, selection procedures of coordinators, proposal guidelines for creation of new conferences, accessibility, and establishment of conference etiquette are discussed. the paper also looks at issues at the conference-user level, such as how first-time users react to the new medium and the degree of involvement of the users in the planning process. every institution is unique in its resources, its climate, and its purpose. the types of issues raised by this paper are the kind that have unique answers for each community posing the questions. this paper discusses scenarios that can be resolved to satisfaction in many ways, depending on your own situation. it also presents a checklist of administrative-level considerations, and describes an example of a successful implementation of electronic conferencing. joan o'bryan virtual reality on five dollars a day randy pausch viewing stemming as recall enhancement wessel kraaij renee pohlmann an introduction to the internet and how it can be used for collaboration for k-12 teachers (tutorial session)(abstract only) goals and content: this tutorial will provide a simple overview of the internet for k-12 teachers with no internet experience. 
it will demonstrate many useful resources that teachers can find on the internet to use directly in their classrooms or in working with other teachers. it will explain how to get started using the internet and will provide free admission to the boston computer museum. nicole yankelovich chimera richard taylor david redmiles webcq-detecting and delivering information changes on the web ling liu calton pu wei tang we can make forgetting impossible, but should we? edwin bos high performance infrastructure for visually-intensive cscw applications we describe a scalable cscw infrastructure designed to handle heavy- weight data sets, such as extremely large images and video. scalability is achieved through exclusive use of reliable and unreliable multicast protocols. the infrastructure uses a replicated architecture rather than a centralized architecture, both to reduce latency and to improve responsiveness. use of 1) reliable (multicast) transport of absolute, rather than relative, information sets, 2) time stamps, and 3) a last-in-wins policy provide coherency often lacking in replicated architectures. the infrastructure allows users to toggle between wysiwis and non-wysiwis modes. that, coupled with effective use of multicast groups, allows greatly improved responsiveness and performance for managing heavy-weight data. stephen zabele steven l. rohall ralph l. vinciguerra a critical assessment of the minimalist approach to documentation carroll's (1991) minimal manual has been considered an important advance in teaching first-time users the basics of computer programs. unfortunately, it is not very clear what minimalism really means. practitioners, for example, will find it difficult to create their own minimal manual because the principles of minimalism have not been described in enough detail (see horn, 1992; tripp, 1990). it is also not yet settled that a minimalist approach is the most effective one because critical experiments have hardly been conducted. this study therefore closely examines the minimalist principles and claims. this paper describes the basic ideas of minimalism, its design principles and how they can be operationalized. a parallel is drawn between a minimalist and constructivist perspective on learning and instruction. like minimalism, constructivism places a high value on experience-based learning in context- rich environments. like minimalism, it stresses the need to capitalize on the learner's prior knowledge as much as possible. and like minimalism, constructivists urge learners to follow their own plans and goals, to make inferences, and to abstract principles from what they experience (see duffy & jonassen, 1991, 1992). an experiment is reported that examines the claims of minimalism. strong and significant gains on several factors were found, all favoring the minimal manual over a control (conventional) manual. the discussion points to several issues that minimalism has yet to address. hans van der meij on the importance of refereeing carl smith communication and information retrieval with a pen-based meeting support tool catherine g. wolf james r. rhyne laura k. briggs applying electric field sensing to human-computer interfaces thomas g. zimmerman joshua r. smith joseph a. paradiso david allport neil gershenfeld "gazetotalk" we propose a new human interface (hi) system named "gazetotalk" that is implemented by vision based gaze detection, acoustic speech recognition (asr), and animated human-like agent cg with facial expressions and gestures. 
the "gazetotalk" system demonstrates that eye-tracking technologies can be utilized to improve hi effectively by working with other non-verbal messages such as facial expressions and gestures. conventional voice interface system have the following serious drawbacks. (1) they cannot distinct between input voice and other noise, and (2) cannot understand who is the intended hearer of each utterance. a "push-to-wk" mechanism can be used to ease these problems, but it spoils the advantages of voice interfaces (e.g. contact-less, suitability in hand-busy situation). in real human dialogues, besides exchanging content messages, people use non- verbal messages such as gaze, facial expressions and gestures to establish or maintain conversations, or recover from problems that arise in the conversation. the "gazetotalk" system simulates this kind of "meta-communication" facility by utilizing vision based gaze detection, asr, and human-like agent cg. when the user intends to input voice commands, he gazes on the agent on the display in order to request to talk, just as in daily human-human dialogues. this gaze is recognized by the gaze detection module and the agent shows a particular facial expression and gestures as a feedback to establish an "eye- contact." then the system accepts or rejects speech input from the user depending on the state of the "eye- contact." this mechanism allows the "gazetotalk" system to accept only intended voice input and ignore another voices and environmental noises successfully, without forcing any arbitrary operation to the user. we also demonstrate an extended mechanism to treat more flexible "eye contact" variations. the preliminary experiments suggest that in the context of meta-communication, nonverbal messages can be utilized to improve hi in terms of naturalness, friendliness and tactfulness. tetsuro chino kazuhiro fukui kaoru suzuki in reply to domains, relations and religious wars hugh darwen "interactive poem system" naoko tosa ryohei nakatsu issues in multimedia databases multimedia is a popular term these days, and the database community, naturally, is talking about multimedia databases. the reason multimedia is getting so much attention is clear: technology trends are now beginning to make it possible to store and display, at a reasonable price, audio and still images through a computer. it is expected that video storage will also be affordable in the near future. the purpose of this panel is to explore what new challenges this multimedia explosion brings to the database community. h. v. jagadish integrating organizational memory and performance support christopher johnson larry birnbaum ray bareiss tom hinrichs computerization and managerial control in large offices there is considerable debate over whether current practices of office computerization represent a continuation or reversal of "scientific management" principles. this issue will be discussed in light of case study research being conducted at the head office of a large insurance firm. the company is in the process of introducing on-line systems at the clerical level in the policy administration area. as part of this process and the concomitant reorganization of the company along territorial lines, the new jobs being created show a marked reduction in task fragmentation and some lessening of hierarchy, both regarded as characteristics of taylorian organization. 
however, other features of scientific management, including skill dissociation, separation of conception from execution and control by monopoly of knowledge, are still very much in evidence, with computer systems playing a central role. taylor's ghost may be dressed in modern garb, but it is certainly recognizable in computerized offices. andrew clement zypher: browsing frameworks with an open hypermedia system (abstract) dorothy buchanan stacie hibino on completeness of historical relational query languages numerous proposals for extending the relational data model to incorporate the temporal dimension of data have appeared in the past several years. these proposals have differed considerably in the way that the temporal dimension has been incorporated both into the structure of the extended relations of these temporal models and into the extended relational algebra or calculus that they define. because of these differences, it has been difficult to compare the proposed models and to make judgments as to which of them might in some sense be equivalent or even better. in this paper we define temporally grouped and temporally ungrouped historical data models and propose two notions of historical relational completeness, analogous to codd's notion of relational completeness, one for each type of model. we show that the temporally ungrouped models are less expressive than the grouped models, but demonstrate a technique for extending the ungrouped models with a grouping mechanism to capture the additional semantic power of temporal grouping. for the ungrouped models, we define three different languages, a logic with explicit reference to time, a temporal logic, and a temporal algebra, and motivate our choice for the first of these as the basis for completeness for these models. for the grouped models, we define a many-sorted logic with variables over ordinary values, historical values, and times. finally, we demonstrate the equivalence of this grouped calculus and the ungrouped calculus extended with a grouping mechanism. we believe the classification of historical data models into grouped and ungrouped models provides a useful framework for the comparison of models in the literature, and furthermore, the exposition of equivalent languages for each type provides reasonable standards for common, and minimal, notions of historical relational completeness. james clifford albert croker alexander tuzhilin a spatial data mining method by clustering analysis eun-jeong son in-soo kang tae-wan kim ki-joune li strudel: a web site management system mary fernandez daniela florescu jaewoo kang alon levy dan suciu cscw as form of organizational memory: implications for organizational learning joann brooks prairie (video program) (abstract only): a conceptual framework for a virtual organization prairie is a simulation prototype or vision, demonstrating how individuals may work together in a virtual work environment designed for a whole enterprise. prairie addresses various organizational and social issues exacerbated by distance and time. by using the concept of communities and by extending physical interaction cues to others across distance and time, we demonstrate possible solutions to these issues. in prairie, people and information are organized into mission-based (organizational units), goal-based (project teams) and interest-based (special interest groups) hierarchies for ease of navigation. a worker may alternately navigate to communities by using personal links from their private virtual desktops.
each community has two areas. one area contains the information germane to a community, that is pushed or pulled depending on the nature of the information. each community also has an area with a shared view where community members can meet or congregate. presence in these community areas range from seeing thumbnail photos to holding a video- conference. the shared view facilitates ad hoc, informal interactions which are important for maintaining and building social networks and organizational culture. we believe the framework for prairie is flexible, integrated, and scaleable so it can be adapted to model other organizations, communities, and processes. stephen h. sato anatole v. gershman kishore s. swaminathan asserting beliefs in mls relational models multilevel relations, based on the current multilevel secure (mls) relational data models, can present a user with information that is difficult to interpret and may display an inconsistent outlook about the views of other users. such ambiguity is due to the lack of a comprehensive method for asserting and interpreting beliefs about lower level information. in this paper we identify different beliefs that can be held by higher level users about lower level information, and we introduce the new concept of a mirage tuple. we present a mechanism for asserting beliefs about all accessible tuples, including lower level tuples. this mechanism provides every user of an mls database with an unambiguous interpretation of all viewable information and presents a consistent account of the views at all levels below the user's level. nenad a. jukic susan v. vrbsky unisql's next-generation object-relational database management system object-relational dbmss have been receiving a great deal of attention from industry analysts and press as the next generation of database management systems. the motivation for a next generation dbms is driven by the reality of shortened business cycles. this dynamic environment demands fast, cost- effective, time-to-market of new or modified business processes, services, and products. to support this important business need, the next generation dbms must: 1. leverage the large investments made in existing relational technology, both in data and skill set; 2. take advantage of the flexibility, productivity, and performance benefits of oo modeling; and 3. integrate robust dbms services for production quality systems. the objective of this article is to provide a brief overview of unisql's commercial object-relational database management system. albert d'andrea phil janus properties of extended boolean models in information retrieval joon ho lee physical design equivalencies in database conversion as relational technology becomes increasingly accepted in commercial data processing, conversion of some of the huge number of existing navigational databases to relational databases is inevitable. it is thus important to understand how to recognize physical design modifications and enhancements in the navigational databases and how to convert them to equivalent relational terms as applicable. mark l. gillenson estimating nested selectivity in object-oriented databases wan-sup cho wook- shin han ki-hyung hong kyu-young whang principles of mixed-initiative user interfaces eric horvitz video as a technology for informal communication robert s. fish robert e. kraut robert w. root ronald e. 
rice deadlock freedom using edge locks we define a series of locking protocols for database systems that all have three main features: freedom from deadlock, multiple granularity, and support for general collections of locking primitives. a rooted directed acyclic graph is used to represent multiple granularities, as in system r. deadlock freedom is guaranteed by extending the system r protocol to require locks on edges of the graph in addition to the locks required on nodes. henry f. korth relevance feedback with too much data james allan precedental data bases: how and why they are worked out and used the concept of a "precedental data base" is introduced. it is a linguistic data base consisting of a dictionary of lexical patterns (cliches) and a dictionary of discourses. some algorithms for textual information processing using precedental data bases are discussed in detail. these systems are installed on mainframe and minicomputers for test runs. b. pevzner data mining on an oltp system (nearly) for free this paper proposes a scheme for scheduling disk requests that takes advantage of the ability of high-level functions to operate directly at individual disk drives. we show that such a scheme makes it possible to support a data mining workload on an oltp system almost for free: there is only a small impact on the throughput and response time of the existing workload. specifically, we show that an oltp system has the disk resources to consistently provide one third of its sequential bandwidth to a background data mining task with close to zero impact on oltp throughput and response time at high transaction loads. at low transaction loads, we show much lower impact than observed in previous work. this means that a production oltp system can be used for data mining tasks without the expense of a second dedicated system. our scheme takes advantage of close interaction with the on-disk scheduler by reading blocks for the data mining workload as the disk head "passes over" them while satisfying demand blocks from the oltp request stream. we show that this scheme provides a consistent level of throughput for the background workload even at very high foreground loads. such a scheme is of most benefit in combination with an active disk environment that allows the background data mining application to also take advantage of the processing power and memory available directly on the disk drives. erik riedel christos faloutsos gregory r. ganger david f. nagle user interface evaluation of a direct manipulation temporal visual query language stacie hibino elke a. rundensteiner experiments in automatic statistical thesaurus construction a well constructed thesaurus has long been recognized as a valuable tool in the effective operation of an information retrieval system. this paper reports the results of experiments designed to determine the validity of an approach to the automatic construction of global thesauri (described originally by crouch in [1] and [2]) based on a clustering of the document collection. the authors validate the approach by showing that the use of thesauri generated by this method results in substantial improvements in retrieval effectiveness in four test collections. the term discrimination value theory, used in the thesaurus generation algorithm to determine a term's membership in a particular thesaurus class, is found not to be useful in distinguishing a "good" from an "indifferent" or "poor" thesaurus class.
in conclusion, the authors suggest an alternate approach to automatic thesaurus construction which greatly simplifies the work of producing viable thesaurus classes. experimental results show that the alternate approach described herein in some cases produces thesauri which are comparable in retrieval effectiveness to those produced by the first method at much lower cost. carolyn j. crouch bokyung yang the design of 1nf relational databases into nested normal form we develop new algorithms for the design of non first normal form relational databases that are in nested normal form. previously, a set of given multivalued dependencies and those multivalued dependencies implied by given functional dependencies were used to obtain a nested normal form decomposition of a scheme. this method ignored the semantic distinction between functional and multivalued dependencies and utilized only full multivalued dependencies in the design process. we propose new algorithms which take advantage of this distinction, and use embedded multivalued dependencies to enhance the decomposition. this results in further elimination of redundancy due to functional dependencies in nested normal form designs. mark a. roth henry f. korth some considerations for using approximate optimal queries an optimal query has been defined as one which will recover all the known relevant documents of a query in their best probability of relevance ranking. we have slightly modified the definition so that it also allows one to trace its evolution from the original to the optimal via the various feedback stages. such a query can be constructed by modifying the original query with terms from the known relevant documents. it is pointed out that such a term addition strategy differs materially from other approaches that add terms based on term association with all query terms, and calculated from the whole document collection. the effect of viewing a document as constituted of components, and hence affecting the weighting and retrieval results of the optimal query, is also discussed. k. l. kwok providing better support for a class of decision support queries sudhir g. rao antonio badia dirk van gucht digital library use in social context rob kling building an internet resource for a specialized online community spie is a nonprofit organization with 11,000 members worldwide. the society's constituency comprises optical and optoelectronic scientists and engineers in communications, biomedical, manufacturing, aerospace, and other applications. the perceived need to link our technical community electronically is both a response to a future scenario of pervasive interconnectivity among the scientific community and a need to address the issues raised by a changing paradigm for technical publishing, wherein the rise of electronic communication may obviate the need for the traditional publisher (and its capital investment). through a growing but still modest effort over the last two years, spie has created an array of online services that are essentially paving the way for our organization's future offerings in the electronic publishing world. many of our experiences and observations may apply to any group involved in setting up such a resource, and we hope this case study will provide some assistance to others embarking on that process. rich donnelly rick hermann learnable visual keywords for image classification joo-hwee lim implementing data cubes efficiently venky harinarayan anand rajaraman jeffrey d.
ullman collaborative multimedia: getting beyond the obvious bonnie nardi sara bly ellen isaacs sha xin wei steve whittaker linguistic instruments and qualitative reasoning for schema integration two major problems in schema integration are to identify correspondences between different conceptual schemas and to verify that the proposed correspondences are consistent with the semantics of the schemas. we propose a heuristic method, based on the use of galois lattices, for identifying schema correspondences. we show how the results of this method can be checked for correctness by introducing a number of necessary conditions for schema mergeability. these conditions are formulated in the context of a semantically rich modelling formalism, the distinguishing feature of which is the use of case grammar. paul johannesson mdm: a multiple-data model tool for the management of heterogeneous database schemes mdm is a tool that enables the users to define schemes of different data models and to perform translations of schemes from one model to another. these functionalities can be at the basis of a customizable and integrated case environment supporting the analysis and design of information systems. mdm has two main components: the model manager and the schema manager. the model manager supports a specialized user, the model engineer, in the definition of a variety of models, on the basis of a limited set of metaconstructs covering almost all known conceptual models. the schema manager allows designers to create and modify schemes over the defined models, and to generate at each time a translation of a scheme into any of the data models currently available. translations between models are automatically derived, at definition time, by combining a predefined set of elementary transformations, which implement the standard translations between simple combinations of constructs. paolo atzeni riccardo torlone the panq tool and emf sql for complex data management damianos chatziantoniou state of the art issues in distributed databases (acm 81 panel session): transaction processing issues in the distributed database testbed system distributed database management systems provide architectures which include multiple independent processors. such systems are capable of supporting multi-process distributed programs. this talk discusses how these features are used to increase inter- and intra-transaction parallelism and discusses the effects of parallelism upon consistency, termination, and recovery of transactions. c. devor honeywell indexing for data models with constraints and classes (extended abstract) we examine i/o-efficient data structures that provide indexing support for new data models. the database languages of these models include concepts from constraint programming (e.g., relational tuples are generalized to conjunctions of constraints) and from object-oriented programming (e.g., objects are organized in class hierarchies). let n be the size of the database, c the number of classes, b the secondary storage page size, and t the size of the output of a query. indexing by one attribute in the constraint data model (for a fairly general type of constraints) is equivalent to external dynamic interval management, which is a special case of external dynamic 2-dimensional range searching. we present a semi-dynamic data structure for this problem which has optimal worst-case space o(n/b) pages and optimal query i/o time o(logbn + t/b) and has o(logbn + (log2bn)/b) amortized insert i/o time.
if the order of the insertions is random then the expected number of i/o operations needed to perform insertions is reduced to o(logbn). indexing by one attribute and by class name in an object-oriented model, where objects are organized as a forest hierarchy of classes, is also a special case of external dynamic 2-dimensional range searching. based on this observation we first identify a simple algorithm with good worst-case performance for the class indexing problem. using the forest structure of the class hierarchy and techniques from the constraint indexing problem, we improve its query i/o time from o(log2c logbn + t/b) to o(logbn + log2b + t/b). paris c. kanellakis sridhar ramaswamy darren e. vengroff jeffrey s. vitter the unfinished revolution and xanadu theodor holm nelson a new look at the art of seeing betty edwards cscw and organizational learning (workshop session) (abstract only) this workshop aims to bring together people engaged in the study of the relationships between organizational learning and cscw to present and discuss their ideas and findings. issues will include conceptual frameworks; the role of organizational learning in getting the work done; empirical studies of the relation between organizational learning and cscw; methods for developing applications that support organizational learning; and the relationship between studies of organizational learning and studies of organizational memory. liam bannon giorgio de michelis paal soergaard design and execution of adaptive multimedia applications in the internet ana carolina hermann luciano paschoal gaspary janilce b. almeida interviews: fast consulting susan fowler victor stanwick multikey access methods based on term discrimination and signature clustering in order to improve the two-level signature file method designed by sacks-davis et al. [20], we propose new multikey access methods based on term discrimination and signature clustering. by term discrimination, we create separate, efficient access methods for the terms frequently used in user queries. we in addition cluster similar signatures by means of these terms so that we may achieve good performance on retrieval. meanwhile we provide the space-time analysis of the proposed methods and compare them with the two-level signature file method. we show that the proposed methods achieve 15-30% savings in retrieval time and require 3-9% more storage overhead. j. w. chang j. h. lee y. j. lee generating hypermedia from specifications by sketching multimedia templates: s. fraïsse j. nanard m. nanard flexible search functions for multimedia data with text and other auxiliary data yahiko kambayashi kaoru katayama toshihiro kakimoto hajime iwamoto object lens: letting end-users create cooperative work applications kum-yeq lai thomas w. malone process systems and data bases alfs t. berztiss how you tell your computer what you mean: ostension in interactive systems an important part of communication is being able to point to an object without referring to its components or to the area surrounding it. how to do this is the problem of ostension. we observed many ostension errors in novices learning to use a full-screen text editor.
specifically, the novices erroneously tried to use keys that are appropriate for pointing when using a typewriter but incorrect in screen editors (e.g., space bar, backspace key, etc.), they frequently missed the location they intended by one character, they inadvertently pointed to the wrong occurrence of a string using a find command, they incorrectly specified boundaries by forgetting about "invisible" characters (e.g., formatting characters), and they mistakenly attempted to point to non-typing areas of the screen that were off-limits. james a. galambos eloise s. wikler john b. black marc m. sebrechts a reference architecture for multi-author world-wide web servers designing publicly accessible and distributed information structures and coordinating distributed authors is a major challenge for most organizations. this paper presents a scaleable reference architecture for multi-author world-wide web (w3) servers, one important type of a distributed and publicly accessible information system. we introduce the paradigm of "lean production of information" and discuss some problems of data quality regarding w3-servers. experiences gained with a departmental w3-server designed and maintained according to the principles presented are appended as a case study. louis perrochon a case for parameterized views and relational unification hasan m. jamil hyperlink: visual navigation of the world wide web (abstract) t. alan keahey sue mniszewski human computer interaction laboratory queen mary and westfield college university of london peter johnson a personalized television listings service barry smyth paul cotter the algres testbed of chimera: an active object-oriented database system stefano ceri piero fraternali stefano paraboschi giuseppe psaila query expansion using local and global document analysis jinxi xu w. bruce croft experience with an adaptive indexing scheme previous work has shown that there is a major vocabulary barrier for new or intermittent users of computer systems. the barrier can be substantially lowered with a rich, empirically defined, frequency weighted index. this paper discusses experience with an adaptive technique for constructing such an index. in addition to being an easy way for system designers to collect the necessary data, an adaptive system has the additional advantage that data is collected from real users in real situations, not in some laboratory approximation. implementation considerations, preliminary results and future theoretical directions are discussed. george w. furnas remembering past, present and future - articulating dimensions of "organizational memory" for organizational learning kari kuutti liam bannon space optimization in deductive databases in the bottom-up evaluation of logic programs and recursively defined views on databases, all generated facts are usually assumed to be stored until the end of the evaluation. discarding facts during the evaluation, however, can considerably improve the efficiency of the evaluation: the space needed to evaluate the program, the i/o costs, the costs of maintaining and accessing indices, and the cost of eliminating duplicates may all be reduced. given an evaluation method that is sound, complete, and does not repeat derivation steps, we consider how facts can be discarded during the evaluation without compromising these properties.
we show that every such space optimization method has certain components, the first to ensure soundness and completeness, the second to avoid redundancy (i.e., repetition of derivations), and the third to reduce "fact lifetimes" (i.e., the time period for which each fact must be retained during evaluation). we present new techniques based on providing bounds on the number of derivations and uses of facts, and using monotonicity constraints for each of the first two components, and provide novel synchronization techniques for the third component of a space optimization method. we describe how techniques for each of the three components can be combined in practice to obtain a space optimization method for a program. our results are also of importance in applications such as sequence querying, and in active databases where triggers are defined over multiple "events." divesh srivastava s. sudarshan raghu ramakrishnan jeffrey f. naughton ldc-1: a transportable, knowledge-based natural language processor for office environments bruce w. ballard john c. lusth nancy l. tinkham an application oriented approach to view updates views are an indispensable mechanism for providing flexible database access in a workstation environment. on the other hand, views created from more than one base relation have complex and in some cases contradicting update semantics. in this paper we suggest to distinguish between object types as the units for data manipulation and views as the data structures materializing these objects. hence, the same view can represent different object types, and depending on which of them the user is granted access, we can infer simple and coherent semantics, even for complex view definitions. johannes klein andreas reuter workshop on multimedia applications (abstract only) borko furht dragutin petkovic arturo pizano introduction to special section on contextual design karen holtzblatt excentric labeling: dynamic neighborhood labeling for data visualization jean- daniel fekete catherine plaisant constant density visualizations of non-uniform distributions of data allison woodruff james landay michael stonebraker architecture of a metasearch engine that supports user information needs when a query is submitted to a metasearch engine, decisions are made with respect to the underlying search engines to be used, what modifications will be made to the query, and how to score the results. these decisions are typically made by considering only the user's keyword query, neglecting the larger information need. users with specific needs, such as "research papers" or "homepages," are not able to express these needs in a way that affects the decisions made by the metasearch engine. in this paper, we describe a metasearch engine architecture that considers the user's information need for each decision. users with different needs, but the same keyword query, may search different sub-search engines, have different modifications made to their query, and have results ordered differently. our architecture combines several powerful approaches together in a single general purpose metasearch engine. eric j. glover steve lawrence william p. birmingham c. lee giles erc++: a model based on object and logic paradigms zahir tari the multig research programme - distributed multimedia applications on gigabit networks björn pehrson yngve sundblad communication technology and the conference of the future (panel) munir mandviwalla richard a. light ifay f. 
chang jeff zadeh lorne olfman two-&-two, a high level system for retrieving pairs of documents stephen j. wiesner the new middleware using middleware, customers can deploy cost- effective and highly functional client/server applications --- once they work out the kinks. rich finkelstein the audible web: auditory enhancements for mosaic michael c. albers eric bergman stickychats: remote conversations over digital documents elizabeth churchill jonathan trevor sara bly les nelson on local heuristics to speed up polygon-polygon intersection tests wael m. badawy walid g. aref coefficients of combining concept classes in a collection this report considers combining information to improve retrieval. the vector space model has been extended so different classes of data are associated with distinct concept types and their respective subvectors. two collections with multiple concept types are described, isi-1460 and cacm-3204. experiments indicate that regression methods can help predict relevance, given query- document similarity values for each concept type. after sampling and transformation of data, the coefficient of determination for the best model was .48 (.66) for isi (cacm). average precision for the two collections was 11% (31%) better for probabilistic feedback with all types versus with terms only. these findings may be of particular interest to designers of document retrieval or hypertext systems since the role of links is shown to be especially beneficial. e. a. fox g. l. nunn w. c. lee charting the course of a user survey that will rock the boat what shall we do with the user survey? what shall we do with the user survey? what shall we do with the user survey? early in the morning? thoughts of an upcoming annual user survey should produce neither shudders nor (worse yet) yawns from your computer center staff. granted, most service organizations toss responsibility for the user survey around like a hot potato because the potential to "get burned" is real. but there are strategies for increasing the likelihood that you'll not just survive the survey experience in an academic computing environment, but actually benefit from it. whether done poorly or proficiently, the user survey is a monumental task. it's just common sense, therefore, that if having a greater understanding of your services won't make a difference in how you deliver those services or if you don't have any burning questions about services that need answering, then you shouldn't bother with a user survey. you'll be doing both the campus and yourself a favor. i've put together a list of simple rules for computer user surveys that will hopefully, as the title claims, chart a course that leads to a survey that will produce changes in the way you deliver services. in an attempt to adhere to the nautical theme of the conference, i've couched these user survey rules in terms of safety rules for boating and canoeing that i found in my girl scout handbook (copyright 1955). if that sounds particularly boring to you, consider that most writers couch guidelines for user surveys in the language of statistical inference and hypothesis testing, so anything is an improvement to my way of thinking. actually, it really took very little imagination on my part to translate the simple girl scout boating rules into guidelines for computer center user surveys. 
for instance, the gem "stay ashore in bad weather" translates very nicely into this user survey guideline: "never administer your user survey immediately after a thunderstorm that resulted in a two-week power failure in which all systems were down and a massive loss of user files occurred." sue stager multimedia application sharing in a heterogeneous environment klaus h. wolf konrad froitzheim peter schulthess constructing community in cyberspace mary b. williamson andrew glassner margaret mclaughlin cheryl chase marc smith designing user interfaces for collaborative web-based open hypermedia niels olof bouvin pc note eugene styer two-handed input in a compound task paul kabbash william buxton abigail sellen a sound and sometimes complete query evaluation algorithm for relational databases with null values a sound and, in certain cases, complete method is described for evaluating queries in relational databases with null values where these nulls represent existing but unknown individuals. the soundness and completeness results are proved relative to a formalization of such databases as suitable theories of first-order logic. because the algorithm conforms to the relational algebra, it may easily be incorporated into existing relational systems. raymond reiter information retrieval from hypertext using dynamically planned guided tours catherine guinan alan f. smeaton socially grounded engineering for digital libraries william l. anderson susan l. anderson from dss to dsp: a taxonomic retrospective arun sen the csnet information server: automatic document distribution using electronic mail c. partridge c. mooers m. laubach sirog: a responsive hypertext manual power plant operation and control in modern screen-based control rooms takes place using computer displays which are directly coupled to the plant state. however, operators are provided with operational instructions and background information by means of paper manuals or at best hypertext manuals with fixed structure and contents. thus, information presentation is independent of the current situation. to improve information accessibility we developed a situation-dependent information medium: responsive manuals. a responsive manual consists of a "standard" hypertext-based operational manual and a task description. it monitors the changing situation and, based on this, is able to point to relevant information. to show the advantages of the responsive manual approach in the domain of power plant operation we implemented the sirog (situation-related operational guidance) system in close cooperation with siemens. it covers all parts of an operational manual for accidents in a siemens nuclear power plant, and is coupled directly to the plant state. the article discusses the basics of the responsive manuals approach and the role of "responsiveness" in sirog. lothar simon jochen erdmann outlier detection for high dimensional data the outlier detection problem has important applications in the field of fraud detection, network robustness analysis, and intrusion detection. most such applications are high dimensional domains in which the data can contain hundreds of dimensions. many recent algorithms use concepts of proximity in order to find outliers based on their relationship to the rest of the data. however, in high dimensional space, the data is sparse and the notion of proximity fails to retain its meaningfulness.
in fact, the sparsity of high dimensional data implies that every point is an almost equally good outlier from the perspective of proximity-based definitions. consequently, for high dimensional data, the notion of finding meaningful outliers becomes substantially more complex and non-obvious. in this paper, we discuss new techniques for outlier detection which find the outliers by studying the behavior of projections from the data set. charu c. aggarwal philip s. yu determining relationships among names in heterogeneous databases clement yu biao jia wei sun son dao coordination aspects in a spatial group decision support collaborative system sergio p. j. medeiros jane m. de souza julia celia m. strauch gustavo r. b. pinto extracting entity profiles from semistructured information spaces a semistructured information space consists of multiple collections of textual documents containing fielded or tagged sections. the space can be highly heterogeneous, because each collection has its own schema, and there are no enforced keys or formats for data items across collections. thus, structured methods like sql cannot be easily employed, and users often must make do with only full-text search. in this paper, we describe an approach that provides structured querying for particular types of entities, such as companies and people. entity- based retrieval is enabled by normalizing entity references in a heuristic, type-dependent manner. the approach can be used to retrieve documents and can also be used to construct entity profiles --- summaries of commonly sought information about an entity based on the documents' content. the approach requires only a modest amount of meta- information about the source collections, much of which is derived automatically. robert a. nado scott b. huffman versioning a full-text information retrieval system in this paper, we present an approach to the incorporation of object versioning into a distributed full-text information retrieval system. we propose an implementation based on "partially versioned" index sets, arguing that its space overhead and query-time performance make it suitable for full- text ir, with its heavy dependence on inverted indexing. we develop algorithms for computing both historical queries and time range queries and show how these algorithms can be applied to a number of problems in distributed information management, such as data replication, caching, transactional consistency, and hybrid media repositories. peter g. anick rex a. flynn look who's talking: the gaze groupware system roel vertegaal harro vons robert slagter storage reclamation in object oriented database systems when providing data management for nontraditional data, database systems encounter storage reclamation problems similar to those encountered by virtual memory managers. the paging behavior of existing automatic storage reclamation schemes as applied to objects stored in a database management system is one indicator of the performance cost of various features of storage reclamation algorithms. the results of modeling the paging behavior suggest that mark and sweep causes many more input/output operations than copy-compact. a contributing factor to the expense of mark and sweep is that it does not recluster memory as does copy-compact. if memory is not reclustered, the average cost of accessing data can go up tremendously. other algorithms that do not recluster memory also suffer performance problems, namely all reference counting schemes. 
the main advantage of a reference count scheme is that it does not force a running program to pause for a long period of time while reclamation takes place, it amortizes the cost of reclamation across all accesses. the reclustering of copy-compact and the cost amortization of reference count are combined to great advantage in baker's algorithm. this algorithm proves to be the least prohibitive for operating on disk-based data. margaret h. butler on the properties and characterization of connection-trap-free schemes e p f chan paolo atzeni working through meetings (tutorial session)(abstract only): a framework for designing meeting support goals and content: through this tutorial, participants will: understand distinctions among various types of meetings and the role of various types of conversations in successful meetings; understand the importance of partnership for achieving team results in meetings; formulate plans for successful technological support for meetings. participants will experience, through a series of connected exercises, an ad hoc meeting designed to highlight what is important about meetings. out of this experience, various theories that apply to meetings will become relevant. from an integration of experience and theory, we will explore how technology can be used innovatively and effectively to support meetings. john bennett john karat a multicast scheme for parallel software-only video effects processing we have developed a parallel software-only processing system for creating real-time video effects such as titling and compositing (e.g., picture-in- picture) using compressed internet video sources. the system organizes processors into a hierarchy of levels. processes at each level of the hierarchy can exploit different types of parallelism and coordinate the actions of lower levels. to control the effect, control messages must be distributed to processors in the hierarchy while preserving the independence of each level. this requires a control mechanism that supports efficient delivery of messages to groups of processors, tunable reliability semantics, and recoverable state information. we describe a mechanism that meets these requirements that uses ip-multicast, the scalable reliable multicast protocol, and the scalable naming and announcement protocol. we also describe an optimization that provides a flexible framework for linking the control of different aspects of one or more related video effects. ketan mayer- patel lawrence a. rowe interpersonal trust and common ground in electronically mediated communication communication and commerce by web or phone creates benefits and challenges for both buyer and seller. websites provide convenience and visualization; telephones provide voice and real-time interaction. to combine key elements of these experiences, we developed phonechannel. using phonechannel, a pc user while talking on the telephone can display visuals on the other person's television. how do these different media affect the consumer experience? in a recent laboratory study, prospective homebuyers selected houses of interest using web, telephone, or phonechannel. using the telephone or phonechannel led to higher trust; but using web or phonechannel led to higher ratings on convenience, enjoyment, and 'good method' scales. 
steve greenspan david goldberg david weimer andrea basso on hypertext this panel will employ two different interpretations of the phrase "growing up" to address areas of common interest between hypertext and information retrieval researchers. first, the panelists will question whether or not hypertext is "growing up" as a scientific discipline; they will discuss characteristics that separate hypertext research from other related disciplines. second, the panelists will discuss the problems encountered when a hypertext system "grows up" in size and complexity; they will discuss the very real problems expected when representing and integrating large knowledge bases, accommodating multiple users, and distributing single logical hypertexts across multiple physical sites. the panelists will not lecture, but they will advance a number of themes including "the myth of modularity" (frisse), "new architectures employing hyperconcept databases" (agosti), "hypertext in software engineering" (bruandet), "automatic hypertext generation" (hahn), and "large-scale hypertexts" (weiss). m. frisse m. agosti m. f. bruandet u. hahn s. weiss electronic document addressing: dealing with change the management of electronic document collections is fundamentally different from the management of paper documents. the ephemeral nature of some electronic documents means that the document address (i.e., reference details of the document) can become incorrect some time after coming into use, resulting in references, such as index entries and hypertext links, failing to correctly address the document they describe. a classic case of invalidated references is on the world wide web---links that point to a named resource fail when the domain name, file name, or any other aspect of the addressed resource is changed, resulting in the well-known error 404. additionally, there are other errors which arise from changes to document collections. this paper surveys the strategies used both in world wide web software and other hypertext systems for managing the integrity of references and hence the integrity of links. some strategies are preventative, not permitting errors to occur; others are corrective, discovering reference errors and sometimes attempting to correct them; while the last strategy is adaptive, because references are calculated on a just-in-time basis, according to the current state of the document collection. helen ashman extending and evaluating visual information seeking for video data stacie hibino performance evaluation of attribute-based tree organization a modified version of the multiple attribute tree (mat) database organization, which uses a compact directory, is discussed. an efficient algorithm to process the directory for carrying out the node searches is presented. statistical procedures are developed to estimate the number of nodes searched and the number of data blocks retrieved for most general and complex queries. the performance of inverted file and modified mat organizations is compared using six real-life databases and four types of query complexities. careful tradeoffs are established in terms of storage and access times for directory and data, query complexities, and database characteristics. v. gopalakrishna c. e.
veni madhavan freeflow: mediating between representation and action in workflow systems paul dourish jim holmes allan maclean pernille marqvardsen alex zbyslaw time-compression: systems concerns, usage, and benefits nosa omoigui liwei he anoop gupta jonathan grudin elizabeth sanocki multi-modal natural dialogue kristinn r. thorisson david b. koons richard a. bolt report on ngits'99: the fourth international workshop on next generation information technologies and systems opher etzion practical extensions of point labeling in the slider model tycho strijk marc van kreveld pruning and summarizing the discovered associations bing liu wynne hsu yiming ma participatory analysis: shared development of requirements from scenarios george chin mary beth rosson john m. carroll visualizing search results with envision lucy terry nowell robert k. france edward a. fox performance evaluation of new adaptive object replacement techniques for vod systems b. sonah m. r. ito opportunistic exploration of large consumer product spaces doug bryan anatole gershman an user adaptive navigation metaphor to connect and rate the coherence of terms and complex objects holger husemann jörg petersen peter hase christian kanty hans-dieter kochs object-relational database systems - the road ahead ramakanth subrahmanya devarakonda system support for computer mediated multimedia collaborations harrick m. vin p. venkat rangan mon-song chen on correctness of non-serializable executions rajeev rastogi sharad mehrotra yuri breitbart henry f. korth avi silberschatz designing and mining multi-terabyte astronomy archives: the sloan digital sky survey the next-generation astronomy digital archives will cover most of the sky at fine resolution in many wavelengths, from x-rays, through ultraviolet, optical, and infrared. the archives will be stored at diverse geographical locations. one of the first of these projects, the sloan digital sky survey (sdss) is creating a 5-wavelength catalog over 10,000 square degrees of the sky (see http://www.sdss.org/). the 200 million objects in the multi-terabyte database will have mostly numerical attributes in a 100+ dimensional space. points in this space have highly correlated distributions. the archive will enable astronomers to explore the data interactively. data access will be aided by multidimensional spatial and attribute indices. the data will be partitioned in many ways. small _tag_ objects consisting of the most popular attributes will accelerate frequent searches. splitting the data among multiple servers will allow parallel, scalable i/o and parallel data analysis. hashing techniques will allow efficient clustering, and pair-wise comparison algorithms that should parallelize nicely. randomly sampled subsets will allow de-bugging otherwise large queries at the desktop. central servers will operate a data pump to support sweep searches touching most of the data. the anticipated queries will require special operators related to angular distances and complex similarity tests of object properties, like shapes, colors, velocity vectors, or temporal behaviors. these issues pose interesting data management challenges. alexander s. szalay peter z. kunszt ani thakar jim gray don slutz robert j. brunner automatic scene separation and tree structure gui for video editing: hirotada ueda takafumi miyatake the effect of query type on subject searching behavior of image databases (poster session): an exploratory study efthimis n. 
efthimiadis raya fidel videotex systems deb ghosh the magic of visual interaction design frank m. marchak promoting the organization-wide learning of application software frank linton merz: personal and shared information spaces on the world wide web (abstract) sören lenman henry see michael century integrating geographic information systems, spatial digital libraries and information spaces for conducting humanitarian assistance and disaster relief operations in urban environments vished kumar alejandro bugacov murilo coutinho robert neches sql language summary jim melton temporally threaded workspace: a model for providing activity-based perspectives on document spaces koichi hayashi takahiko nomura tan hazama makoto takeoka sunao hashimoto stephan gumundson data models avi silberschatz henry f. korth s. sudarshan fedstats promotes statistical literacy cathryn s. dippo spatial hypertext: an alternative to navigational and semantic links frank m. shipman catherine c. marshall co-ordinating activity: an analysis of interaction in computer-supported co- operative work steve whittaker susan e. brennan herbert h. clark the impact of electronic mail on managerial and organizational communications the primary objectives of this study were to determine how an electronic mail system was being used by manager and professionals in a business setting and to describe its cognitive, affective and behavioral impacts. the organizational impacts reported by the respondents were compared with research-based evidence reported by experts in an earlier study by kerr and hiltz. the results showed that electronic mail was used extensively to displace phone calls and memos particularly for "organizing" activities, such as scheduling events, asking questions, and providing feedback. the experiences of the users showed that electronic mail reduced lag times in distributing information, created more flexible working hours, and provided lateral linkages throughout the organization. more pervasive social impacts of electronic mail, such as changes in social structure, expansion in group size, and increase in span of control, were not experienced to a marked degree. mary sumner extending a relational database with deferred referential integrity checking and intelligent joins interactive use of relational database management systems (dbms) requires a user to be knowledgeable about the semantics of the application represented in the database. in many cases, however, users are not trained in the application field and are not dbms experts. two categories of functionality are problematic for such users: (1) updating a database without violating integrity constraints imposed by the domain and (2) using join operations to retrieve data from more than one relation. we have been conducting research to help an uninformed or casual user interact with a relational dbms. this paper describes two capabilities to aid an interactive database user who is neither an application specialist nor a dbms expert. we have developed deferred referential integrity checking (ric) and intelligent join (ij) which extend the operations of a relational dbms. these facilities are made possible by explicit representation of database semantics combined with a relational schema. deferred ric is a static validation procedure that checks uniqueness of tuples, non-null keys, uniqueness of keys, and inclusion dependencies. ij allows a user to identify only the "target" data which is to be retrieved without the need to additionally specify "join clauses". 
in this paper we present the motivation for these facilities, describe the features of each, and present examples of their use. stephanie cammarata prasadram ramachandra darrell shane mining the web for acronyms using the duality of patterns and relations the web is a rich source of information, but this information is scattered and hidden in the diversity of web pages. search engines are windows to the web. however, the current search engines, designed to identify pages with specified phrases, have very limited power. for example, they cannot search for phrases related in a particular way (e.g. books and their authors). in this paper we present a solution for identifying a set of inter-related information on the web using the duality concept. duality problems arise when one tries to identify a pair of inter-related phrases such as (book, author), (name, email) or (acronym, expansion) relations. we propose a solution to this problem that iteratively refines mutually dependent approximations to their identifications. specifically, we iteratively refine i) pairs of phrases related in a specific way, and ii) the patterns of their occurrences in web pages, i.e. the ways in which the related phrases are marked in the pages. we cast light on the general solution of duality problems in the web by concentrating on one paradigmatic duality problem, i.e. identifying (acronym, expansion) pairs in terms of the patterns of their occurrences in the web pages. the solution to this problem involves two mutually dependent duality problems: 1) the duality between the related pairs and their patterns, and 2) the duality between the related pairs and the acronym formulation rules. jeonghee yi neel sundaresan a device able to get and play music processing music might appear easy, but exchanging musical information, in real time, between a computer and a device is more difficult. our paper presents a realization of a musical peripheral built from an electronic organ that allows a computer to get the information that comes from the keyboard or play music on the organ. the system is not too complex because our first preoccupation was to encode music in a digital form, and then to create the encoding automatically. we present algorithms, realizations and perspectives. c. aperghis-tramoni tables, trees and formulas in decision analysis shailendra c. palvia steven r. gordon situated facial displays: towards social interaction akikazu takeuchi taketo naito a dynamic load balancing strategy for parallel datacube computation in recent years, olap technologies have become one of the important applications in the database industry. in particular, the datacube operation proposed in [5] receives strong attention among researchers as a fundamental research topic in the olap technologies. the datacube operation requires computation of aggregations on all possible combinations of the dimension attributes. as the number of dimensions increases, it becomes very expensive to compute datacubes, because the required computation cost grows exponentially with the number of dimensions. parallelization is a very important factor for fast datacube computation. however, we cannot obtain sufficient performance gain in the presence of data skew even if the computation is parallelized. in this paper, we present a dynamic load balancing strategy, which enables us to fully exploit the effectiveness of parallelizing datacube computation. we perform experiments based on simulations and show that our strategy performs well.
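to make the datacube operation concrete, the following python sketch enumerates every group-by over a set of dimension attributes and sums a measure for each cell; it is only an illustrative single-process sketch under assumed names (rows, dims, measure), not the parallel, load-balanced algorithm described in the abstract above.

```python
# illustrative sketch only: a sequential datacube, not the parallel,
# load-balanced algorithm of the abstract above. rows/dims/measure are
# invented example names.
from itertools import combinations
from collections import defaultdict

def datacube(rows, dims, measure):
    """map each group-by (a tuple of dimension names) to its aggregated sums."""
    cube = {}
    for k in range(len(dims) + 1):              # all 2^d subsets of the dimensions
        for group in combinations(dims, k):
            agg = defaultdict(float)
            for row in rows:
                key = tuple(row[d] for d in group)
                agg[key] += row[measure]        # sum the measure within each cell
            cube[group] = dict(agg)
    return cube

if __name__ == "__main__":
    sales = [
        {"region": "east", "product": "a", "year": 1999, "amount": 10.0},
        {"region": "east", "product": "b", "year": 1999, "amount": 5.0},
        {"region": "west", "product": "a", "year": 2000, "amount": 7.0},
    ]
    cube = datacube(sales, ["region", "product", "year"], "amount")
    print(len(cube))      # 8 group-bys for 3 dimensions
    print(cube[()])       # the single grand-total cell: {(): 22.0}
```

the exponential number of group-bys visible here (2^d for d dimensions) is exactly why the cost grows so quickly with dimensionality and why parallel, skew-aware scheduling matters.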
seigo muto masaru kitsuregawa research directions for distributed databases hector garcia-molina bruce lindsay toward active, extensible, networked documents: multivalent architecture and applications thomas a. phelps robert wilensky issues in designing an information model for application development gary h. sockut helen p. arzu robert w. matthews david e. shough incomplete information in object-oriented databases we present a way to handle incomplete information both at schema and object instance level in an object-oriented database. incremental schema design becomes possible with the introduction of generic classes. incomplete data in an object instance is handled with the introduction of explicit null values in a similar way as in the relational and nested relations data models. roberto zicari mediacaptain - an interface for browsing streaming media florian mueller the first noble truth of cyberspace: people are people (even when they moo) diane j. schiano sean white infinite detail and emulation in an ontologically minimized hci by default, we attempt to define practical areas of technological endeavor as "applications." for example, the applied psychology of human-computer interaction has characteristically been defined in terms of the methods and concepts basic psychology can provide. this has not worked well. an alternative approach is to begin from a characterization of current practice, to take seriously the requirements of the domain of endeavor, and to define areas of "science" and "application" as possible and appropriate in that context. john m. carroll automatic construction of personalized tv news programs in this paper, we study the automatic construction of personalized tv news programs, where we want to build a program with predefined duration and maximum content value for a specific user. we combine video indexing techniques to parse tv news recordings into stories, and information filtering techniques to select stories which are most adequate given the user profile. we formalize the selection process as an optimization problem, and we study how to take into account duration in the selection of stories. experiments show that a simple heuristic can provide high quality selection with little computation. we also describe two prototypes, which implement two different mechanisms for the construction of user profiles: explicit specification, using a category-based model, implicit specification, using a keyword-based model. bernard merialdo kyung tak lee dario luparello jeremie roudaire semantics and implementation of schema evolution in object-oriented databases object-oriented programming is well-suited to such data-intensive application domains as cad/cam, ai, and ois (office information systems) with multimedia documents. at mcc we have built a prototype object-oriented database system, called orion. it adds persistence and sharability to objects created and manipulated in applications implemented in an object-oriented programming environment. one of the important requirements of these applications is schema evolution, that is, the ability to dynamically make a wide variety of changes to the database schema. in this paper, following a brief review of the object- oriented data model that we support in orion, we establish a framework for supporting schema evolution, define the semantics of schema evolution, and discuss its implementation. jay banerjee won kim hyoung-joo kim henry f. korth unraveling the semantics of conceptual schemas m. p. 
papazoglou the evaluator effect in usability tests niels ebbe jacobsen morten hertzum bonnie e. john haystack: per-user information environments traditional information retrieval (ir) systems are designed to provide uniform access to centralized corpora by large numbers of people. the haystack project emphasizes the relationship between a particular individual and his corpus. an individual's own haystack privileges information with which that user interacts, gathers data about those interactions, and uses this metadata to further personalize the retrieval process. this paper describes the prototype haystack system. eytan adar david kargar lynn andrea stein joined normal form: a storage encoding for relational databases a new on-line query language and storage structure for a database machine is presented. by including a mathematical model in the interpreter, the query language has been substantially simplified so that no reference to relation names is necessary. by storing the model as a single joined normal form (jnf) file, it has been possible to exploit the powerful search capability of the content addressable file store (cafs®; cafs is a registered trademark of international computers limited) database machine. e. babb extraction of a word list from an existing dictionary to be used in communication-aid software brigitte le pevedic freewalk: supporting casual meetings in a network hideyuki nakanishi chikara yoshida toshikazu nishimura toru ishida on the expressive power of the extended relational algebra for the unnormalized relational model d. van gucht video manga: generating semantically meaningful video summaries this paper presents methods for automatically creating pictorial video summaries that resemble comic books. the relative importance of video segments is computed from their length and novelty. image and audio analysis is used to automatically detect and emphasize meaningful events. based on this importance measure, we choose relevant keyframes. selected keyframes are sized by importance, and then efficiently packed into a pictorial summary. we present a quantitative measure of how well a summary captures the salient events in a video, and show how it can be used to improve our summaries. the result is a compact and visually pleasing summary that captures semantically important events, and is suitable for printing or web access. such a summary can be further enhanced by including text captions derived from ocr or other methods. we describe how the automatically generated summaries are used to simplify access to a large collection of videos. shingo uchihashi jonathan foote andreas girgensohn john boreczky video storage and retrieval in microcosm (abstract) sazilah salam interaction design at ideo product development peter spreenberg gitta salomon phillip joe menu stacking - help or hindrance? john s. gray improving database design through the analysis of relationships much of the work on conceptual modeling involves the use of an entity-relationship model in which binary relationships appear as associations between two entities. relationships involving more than two entities are considered rare and, therefore, have not received adequate attention. this research provides a general framework for the analysis of relationships in which binary relationships simply become a special case. the framework helps a designer to identify ternary and other higher-degree relationships that are commonly represented, often inappropriately, as either entities or binary relationships.
generalized rules are also provided for representing higher-degree relationships in the relational model. this uniform treatment of relationships should significantly ease the burden on a designer by enabling him or her to extract more information from a real-world situation and represent it properly in a conceptual design. debabrata dey veda c. storey terence m. barron what's happening jennifer bruer showing the context of nodes in the world-wide web sougata mukherjea james d. foley informedia experience-on-demand: capturing, integrating and communicating experiences across people, time and space the informedia experience-on-demand system uses speech, image, and natural language processing combined with gps information to capture, integrate, and communicate personal multimedia experiences. this paper discusses an initial prototype of the eod system. howard d. wactlar michael g. christel alexander g. hauptmann yihong gong the lotus notes storage system kenneth moore using examples to describe categories the successful use of menu-based information retrieval systems depends critically on users understanding the category names and partitions used by system designers. some of the problems in this endeavor are psychological and have to do with naming large and ill-defined categories so that users can understand their contents, and effectively partitioning large sets of objects. systems of interest (like home information systems) often consist of new and frequently changing content in large and varied domains, and are particularly prone to these problems. we explored several ways in which one might name categories in one such domain (yellow page category headings) - category names, category names plus examples, and examples alone. we found that three examples alone were essentially as good a way to name these categories as either an expertly chosen name or a name plus examples. examples provide a promising possibility both as a means of flexibly naming menu categories and as a methodological tool to study certain categorization problems. susan t. dumais thomas k. landauer semantic information retrieval annelise mark pejtersen building flexible groupware through open protocols mark roseman saul greenberg document ranking on weight-partitioned signature files a signature file organization, called the weight-partitioned signature file, for supporting document ranking is proposed. it employs multiple signature files, each of which corresponds to one term frequency, to represent terms with different term frequencies. words with the same term frequency in a document are grouped together and hashed into the signature file corresponding to that term frequency. this eliminates the need to record the term frequency explicitly for each word. we investigate the effect of false drops on retrieval effectiveness if they are not eliminated in the search process. we have shown that false drops introduce insignificant degradation in precision and recall when the false-drop probability is below a certain threshold. this is an important result since false-drop elimination could become the bottleneck in systems using fast signature file search techniques. we perform an analytical study on the performance of the weight-partitioned signature file under different search strategies and configurations. an optimal formula is obtained to determine, for a fixed total storage overhead, the storage to be allocated to each partition in order to minimize the effect of false drops on document ranks.
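to make the weight-partitioned organization described above concrete, here is a minimal superimposed-coding sketch; the signature width, the number of bits set per word, and the function names are illustrative assumptions, not the authors' implementation:

    from collections import Counter

    SIG_BITS = 64          # signature width (illustrative)
    BITS_PER_WORD = 3      # bits set per word (illustrative)

    def word_mask(word):
        # superimposed coding: set a few pseudo-random bit positions per word;
        # python's built-in hash is per-run salted, so a real system would use a fixed hash
        mask = 0
        for i in range(BITS_PER_WORD):
            mask |= 1 << (hash((word, i)) % SIG_BITS)
        return mask

    def index_document(doc_id, text, partitions):
        # partitions maps a term frequency to {doc_id: signature}
        for word, tf in Counter(text.split()).items():
            part = partitions.setdefault(tf, {})
            part[doc_id] = part.get(doc_id, 0) | word_mask(word)

    def query(word, partitions):
        # candidates only; false drops are possible and must be checked or tolerated
        m = word_mask(word)
        return [(doc_id, tf) for tf, part in partitions.items()
                for doc_id, sig in part.items() if sig & m == m]

the partition a matching signature comes from implicitly supplies the term frequency used for ranking, which is why no per-word frequency has to be stored explicitly.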
experiments were performed using a document collection to support the analytical results. dik kun lee liming ren the management of human errors in user-centered design user-centered design puts the users at the center of the design activity by involving them from the very beginning in the process and by iteratively testing and re-designing the product. in every testing and evaluation phase human error analysis plays an important role. although it is not possible to design systems in which people do not make errors, much can be done to minimize the incidence of error, to maximize error detection, and to make error recovery easier. however, the qualitative analysis of human error has not received the attention that it deserves. in the paper the main features of the user-centered approach are sketched and a set of guidelines for handling human error is presented. an example drawn from our design experience is reported for each guideline. a. rizzo o. parlangeli e. marchigiani s. bagnara database theory column serge abiteboul paris kanellakis editorial pointers diane crawford handjive: a device for interpersonal haptic entertainment bj fogg lawrence d. cutler perry arnold chris eisbach handling infinite temporal data in this paper, we present a powerful framework for describing, storing, and reasoning about infinite temporal information. this framework is an extension of classical relational databases. it represents infinite temporal information by generalized tuples defined by linear repeating points and constraints on these points. we prove that relations formed from generalized tuples are closed under the operations of relational algebra. a characterization of the expressiveness of generalized relations is given in terms of predicates definable in presburger arithmetic. finally, we provide some complexity results. f. kabanza j.-m. stevenne p. wolper constructing the next 100 database management systems: like the handyman or like the engineer? andreas geppert klaus r. dittrich spatial querying for image retrieval: a user-oriented evaluation joemon m. jose jonathan furner david j. harper effects of annotations on student readers and writers recent research on annotations has focused on how readers annotate texts, ignoring the question of how reading annotations might affect subsequent readers of a text. this paper reports on a study of persuasive essays written by 123 undergraduates receiving primary source materials annotated in various ways. findings indicate that annotations improve recall of emphasized items, influence how specific arguments in the source materials are perceived, and decrease students' tendencies to unnecessarily summarize. of particular interest is that students' perceptions of the annotator appeared to greatly influence how they responded to the annotated material. using this study as a basis, i discuss implications for the design and implementation of digitally annotated materials. joanna l. wolfe information dependencies this paper uses the tools of information theory to examine and reason about the information content of the attributes within a relation instance. for two sets of attributes x and y, an information dependency measure (ind measure) characterizes the uncertainty remaining about the values for the set y when the values for the set x are known. a variety of arithmetic inequalities (ind inequalities) are shown to hold among ind measures; ind inequalities hold in any relation instance.
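as an editorial aside, one concrete way to read the ind measure just summarized is in terms of entropies computed over the relation instance; the notation below is ours and is offered as a hedged illustration rather than the authors' exact definitions. with p(x) the fraction of tuples of the instance whose x-value is x,

    \[
      \mathcal{H}(X) \;=\; -\sum_{x} p(x)\,\log_2 p(x),
      \qquad
      \mathcal{H}_{X \rightarrow Y} \;=\; \mathcal{H}(XY) \;-\; \mathcal{H}(X).
    \]

under this reading the functional dependency x -> y holds in the instance exactly when the measure is zero, and submodularity of entropy yields inequalities of the promised kind, for example h(x -> yz) <= h(x -> y) + h(x -> z), which holds in every instance.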
numeric constraints (ind constraints) on ind measures, consistent with the ind inequalities, can be applied to relation instances. remarkably, functional and multivalued dependencies correspond to setting certain constraints to zero, with armstrong's axioms shown to be consequences of the arithmetic inequalities applied to constraints. as an analog of completeness, for any set of constraints consistent with the inequalities, we may construct a relation instance that approximates these constraints within any positive ε. ind measures suggest many valuable applications in areas such as data mining. mehmet m. dalkilic edward l. roberston collaborative multimedia scientific design in shastra vinod anupam chandrajit l. bajaj assessed relevance and stylistic variation jussi karlgren multimodal people id for a multimedia meeting browser a meeting browser is a system that allows users to review a multimedia meeting record using a variety of indexing methods. identification of meeting participants is essential for creating such a multimedia meeting record. moreover, knowing who is speaking can enhance the performance of speech recognition and the indexing of meeting transcriptions. in this paper, we present an approach that identifies meeting participants by fusing multimodal inputs. we use face id, speaker id, color appearance id, and sound source directional id to identify and track meeting participants. after describing the different modules in detail, we will discuss a framework for combining the information sources. integration of the multimodal people id into the multimedia meeting browser is in its preliminary stage. jie yang xiaojin zhu ralph gross john kominek yue pan alex waibel a framework for open hypermedia systems (abstract) chao-min chiu rong-shyan chen extending relational algebra and relational calculus with set-valued attributes and aggregate functions in commercial network database management systems, set-valued fields and aggregate functions are commonly supported. however, the relational database model, as defined by codd, does not include set-valued attributes or aggregate functions. recently, klug extended the relational model by incorporating aggregate functions and by defining relational algebra and calculus languages. in this paper, relational algebra and relational calculus database query languages (as defined by klug) are extended to manipulate set-valued attributes and to utilize aggregate functions. the expressive power of the extended languages is shown to be equivalent. we extend the relational algebra with three new operators, namely, pack, unpack, and aggregation-by-template. the extended languages form a theoretical framework for statistical database query languages. g. ozsoyoglu z. m. ozsoyoglu v. matos moving database theory into database practice jeff ullman participatory design michael j. muller sarah kuhn repository system engineering pillip a. bernstein form and room: metaphors for groupware hekki hamalainen chris condon t2: a customizable parallel database for multi-dimensional data chialin chang anurag acharya alan sussman joel saltz towards a model of user perception of computer systems response time the foundational structure of a new model of user perception of computer system response time is proposed. it is suggested that the development of such a model is now of central importance to the computer system configuration design effort.
the new model is seen to explain the success of an earlier measure, designed for the non-interactive environment, in predicting user estimates of response time for interactive systems. the results of new empirical studies, designed to delineate specific components of the model, are also discussed. robert geist robert allen ronald nowaczyk from theory to practice or how not to fail in developing information systems the development of information systems consists of a wide variety of activities and processes, which come together to create a product designed for a specific purpose. we often encounter products that do not serve the purpose they were intended for, and the question asked is: how does it happen that a process that has been defined and structured in principle has such a high failure ratio? the purpose of this article is to attempt to clarify the problematic points or subjects that exist in the process of developing information systems and to supply the tools to help reduce the problem. offer drori pva: a self-adaptive personal view agent system in this paper, we present pva, an adaptive personal view information agent system to track, learn, and manage a user's interests in internet documents. when the user's interests change, pva modifies not only the contents but also the structure of the user profile to adapt to the changes. experimental results show that modulating the structure of the user profile does increase the accuracy of personalization systems. chien chin chen meng chang chen yeali sun hypertext versus boolean access to biomedical information: a comparison of effectiveness, efficiency, and user preferences this study compared two modes of access to a biomedical database, in terms of their effectiveness and efficiency in supporting clinical problem solving and in terms of user preferences. boolean access, which allowed subjects to frame their queries as combinations of keywords, was compared to hypertext access, which allowed subjects to navigate from one database node to another. the accessible biomedical data were identical across system versions. performance data were collected from two cohorts of first-year medical students, each student randomly assigned to either the boolean or the hypertext system. additional attitudinal data were collected from the second cohort. at each of two research sessions (one just before and one just after their bacteriology course), subjects worked eight clinical case problems, first using only their personal knowledge and, subsequently, with aid from the database. database retrievals enabled students to answer questions they could not answer based on personal knowledge alone. this effect was greater when personal knowledge of bacteriology was lower. there were no statistically significant differences between the two forms of access, in terms of problem-solving effectiveness or efficiency. students preferred boolean access over hypertext access. barbara m. wildemuth charles p. friedman stephen m. downs security of statistical databases: multidimensional transformation the concept of multidimensional transformation of statistical databases is described. a given set of statistical output may be compatible with more than one statistical database. a transformed database d' is a database which (1) differs from the original database d in its record content, but (2) produces, within certain limits, the same statistical output as the original database.
for a transformable database d there are two options: one may physically transform d into a suitable database d', or one may release only that output which will not permit the users to decide whether it comes from d or d'. the second way is, of course, the easier one. basic structural requirements for transformable statistical databases are investigated. the closing section discusses advantages, drawbacks, and open questions. jan schlörer hytime: a discussion with steve newcomb philip c. murray computational implications of human navigation in multiscale electronic worlds susanne jul touchcounters: designing interactive electronic labels for physical containers paul yarin hiroshi ishii toolkits for multimedia awareness ian smith deconstructing the internet paradox joseph m. newcomer an approach to eliminate transaction blocking in locking protocols d. agrawal a. el abbadi r. jeffers database systems: achievements and opportunities avi silberschatz michael stonebraker jeff ullman decompiling codasyl dml into relational queries a "decompilation" algorithm is developed to transform a program written with the procedural operations of codasyl dml into one which interacts with a relational system via a nonprocedural query specification. an access path model is introduced to interpret the semantic accesses performed by the program. data flow analysis is used to determine how find operations implement semantic accesses. a sequence of these is mapped into a relational query and embedded into the original program. the class of programs for which the algorithm succeeds is characterized. r. h. katz e. wong conference preview: siggraph 2000: ideas that inspire the 21st century's digital visions marisa campbell new user interface strategies for public telephones lisa fast ibm's relational dbms products: features and technologies this paper very briefly summarizes the features and technologies implemented in the ibm relational dbms products. the topics covered include record and index management, concurrency control and recovery methods, commit protocols, query optimization and execution techniques, high availability and support for parallelism and distributed data. some indications of likely future product directions are also given. c. mohan text input methods for eye trackers using off-screen targets text input with eye trackers can be implemented in many ways such as on-screen keyboards or context sensitive menu-selection techniques. we propose the use of off-screen targets and various schemes for decoding target hit sequences into text. off-screen targets help to avoid the midas' touch problem and conserve display area. however, the number and location of the off-screen targets are a major usability issue. we discuss the use of morse code, our minimal device independent text input method (mditim), quikwriting, and cirrin-like target arrangements. furthermore, we describe our experience with an experimental system that implements eye tracker controlled mditim for the windows environment. poika isokoski simple conditions for guaranteeing higher normal forms in relational databases a key is simple if it consists of a single attribute. it is shown that if a relation schema is in third normal form and every key is simple, then it is in projection-join normal form (sometimes called fifth normal form), the ultimate normal form with respect to projections and joins.
furthermore, it is shown that if a relation schema is in boyce-codd normal form and some key is simple, then it is in fourth normal form (but not necessarily projection-join normal form). these results give the database designer simple sufficient conditions, defined in terms of functional dependencies alone, that guarantee that the schema being designed is automatically in higher normal forms. c. j. date ronald fagin beware of the qwerty mike milne design: dutch design day austin henderson kate ehrlich a novel client-server protocol for the demanding opac user e. j. yannakoudakis combining optimism and pessimism to produce high availability in distributed transaction processing joel m. crichlow dealing with slow-evolving fact: a case study on inventory data warehousing data warehousing for inventory management (dwin) is a production project at telcordia aimed at providing telecommunications service providers with decision support functions for inventory control and monitoring. in this paper, we report some interesting issues related to the design of the data warehouse. specifically, we will discuss the issues of slow-evolving fact, transaction-oriented fact table, and large dimensions. we also propose the concept of virtual data cubes and show its usefulness. we address these issues through a data mart case study and present benchmarking results. finally, based on the experiences learned, we discuss potential research issues that may benefit the data warehousing and olap practice. chung-min chen munir cochinwala elsa yueh a streaming ensemble algorithm (sea) for large-scale classification ensemble methods have recently garnered a great deal of attention in the machine learning community. techniques such as boosting and bagging have proven to be highly effective but require repeated resampling of the training data, making them inappropriate in a data mining context. the methods presented in this paper take advantage of plentiful data, building separate classifiers on sequential chunks of training points. these classifiers are combined into a fixed-size ensemble using a heuristic replacement strategy. the result is a fast algorithm for large-scale or streaming data that classifies as well as a single decision tree built on all the data, requires approximately constant memory, and adjusts quickly to concept drift. w. nick street yongseog kim how can we make groupware practical? (panel) bob ensor "i'll get that off the audio": a case study of salvaging multimedia meeting records thomas p. moran leysia palen steve harrison patrick chiu don kimber scott minneman william van melle polle zellweger learning a word processing system with training wheels and guided exploration a training wheels interface creates a reduced functionality system intended to prevent new users from suffering the consequences of certain types of common errors when they exercise system functions and procedures. this has been shown to be an effective training system design for learning basic text editing functions [4]. we extend this result by examining the extent to which training wheels learners can transfer their skills to interaction with the full- function system. the experiment reported here indicates that training wheels subjects were better able to perform advanced full-system editing functions than subjects who were trained on the full system itself. richard catrambone john m. carroll hypermedia structures and the division of labor in meeting room collaboration gloria mark jörg m. haake norbert a. 
streitz a comparative analysis of reactions from multicultural and culturally homogeneous teams to decision making with and without gdss technology cultural diversity in the u.s. workforce is increasing, in addition, organizations are requiring greater worker involvement and teamwork. while cultural diversity provides unique opportunities to stimulate the work environment, it can also create problems in individual interaction that often hinders group performance. therefore, it is imperative to determine how multicultural team performance is similar and different from culturally homogeneous team performance. in the information age a number of organizations are using technology such as group decision support systems (gdss) to promote participation and improve team interaction. this article provides the results of a pilot study where survey responses were collected from two categories of teams, multicultural teams and culturally homogeneous teams. participants were surveyed on their preferences for team decision making, in reaction to team exercises, within a gdss environment and within a traditional, non-gdss environment. an analysis of the responses showed that a significantly higher percent of multicultural team members, in comparison to culturally homogeneous team members, responded more favorably for using a gdss in certain aspects of team decision making. for example, multicultural team members to a greater extent preferred the computer environment for "discussion of issues" and "expressing ideas." implications for using gdss technology for team building within multicultural organizations are discussed. bonnie f. daily john loveland robert steiner design methodology and formal validation of hypermedia documents c. a. s. santos l. f. g. soares g. l. de souza j.-p. courtiat holowall: designing a finger, hand, body, and object sensitive wall nobuyuki matsushita jun rekimoto web-based data collection for the analysis of hidden relationships (web mining of hypertext links) edna reid digital music libraries - research and development digital music libraries provide enhanced access and functionality that facilitates scholarly research and education. this panel will present a report on the progress of several major research and development projects in digital music libraries. david bainbridge gerry bernbom mary wallace andrew p. dillon matthew dovey jon w. dunn michael fingerhut ichiro fujinaga eric j. isaacson video portals for the next century (panel session) nevenka dimitrova rob koenen heather yu avideh zakhor francis galliano charles bouman chi 98 basic research symposium joseph a. konstan jane siegel towards supporting hard schema changes in tse young-gook ra elke a. rundensteiner incomplete object - a data model for design and planning applications tomasz imielinski shamim naqvi kumar vadaparty tangible progress: less is more in somewire audio spaces andrew singer debby hindus lisa stifelman sean white taming complexity at maya design peter lucas susan salis cscw and the internet (workshop session) (abstract only) this full- day workshop will focus on understanding the range of ways in which the internet and the web are being used for collaboration, on the communities using it, and on how (and what) cscw tools are appearing in this domain. the workshop will strive to characterize current on-line collaborations and their underlying technologies and to outline the implications of these for cscw and distributed groups more generally. 
sara bly susan anderson efficient content-based indexing of large image databases large image databases have emerged in various applications in recent years. a prime requisite of these databases is the means by which their contents can be indexed and retrieved. a multilevel signature file called the two signature multi-level signature file (2smlsf) is introduced as an efficient access structure for large image databases. the 2smlsf encodes image information into binary signatures and creates a tree structure that can be efficiently searched to satisfy a user's query. two types of signatures are generated. type i signatures are used at all tree levels except the leaf level and are based only on the domain objects included in the image. type ii signatures, on the other hand, are stored at the leaf level and are based on the included domain objects and their spatial relationships. the 2smlsf was compared analytically to existing signature file techniques. the 2smlsf significantly reduces the storage requirements; the index structure can answer more queries; and the 2smlsf performance significantly improves over current techniques. both storage reduction and performance improvement increase with the number of objects per image and the number of images in the database. for an example large image database, a storage reduction of 78% may be achieved while the performance improvement may reach 98%. essam a. el-kwae mansur r. kabuka the automatic indexing system air/phys - from research to applications since october 1985, the automatic indexing system air/phys has been used in the input production of the physics data base of the fachinformationsentrum karlsruhe/west germany. the texts to be indexed are abstracts written in english. the system of descriptors is prescribed. for the application of the air/phys system a large-scale dictionary containing more than 600 000 word-descriptor relations, resp. phrase-descriptor relations, has been developed. most of these relations have been obtained by means of statistical and heuristic methods. in consequence, the relation system is rather imperfect. therefore, the indexing system needs some fault-tolerating features. an appropriate indexing approach and the corresponding structure of the air/phys system are described. finally, the conditions of the application as well as problems of further development are discussed. p. biebricher n. fuhr g. lustig m. schwantner g. knorz negotiating user-initiated cancellation and interruption requests manuel a. perez-quinones john l. sibert annotating answers with their properties when responding to queries, humans often volunteer additional information about their answers. among other things, they may qualify the answer as to its reliability, and they may provide some abstract characterization of the answer. this paper describes a user interface to relational databases that similarly annotates its answers with their properties. the process assumes that various assertions about properties of the data have been stored in the database (meta-information). these assertions are then used to infer properties of each answer provided by the system (meta-answers). meta-answers are offered to users along with each answer issued, and help them to assess the value and meaning of the information that they receive. amihai motro editorial steven cherry the aqua approximate query answering system aqua is a system for providing fast, approximate answers to aggregate queries, which are very common in olap applications.
it has been designed to run on top of any commercial relational dbms. aqua precomputes synopses (special statistical summaries) of the original data and stores them in the dbms. it provides approximate answers along with quality guarantees by rewriting the queries to run on these synopses. finally, aqua keeps the synopses up-to-date as the database changes, using fast incremental maintenance techniques. swarup acharya phillip b. gibbons viswanath poosala sridhar ramaswamy library/computing center mergers: the shape of the future, or an evil fad? bernard hecker john bucher tim foley david lewis john supra carolyn walters salton award lecture on theoretical argument in information retrieval stephen robertson design and implementation of a shared workspace by integrating individual workspaces this paper proposes "teamworkstation" (tws) as an approach to an effective shared workspace for the support of remote collaboration. there are three key design objectives in tws: integration of virtual and actual workspaces, a simultaneously-accessible shared drawing surface, and smooth transition between individual workspaces and shared workspace. to achieve these objectives, images of computers and/or paper are overlaid so that information and images from both are effectively combined and distributed to the group members. m. ohkubo h. ishii towards data mining benchmarking: a test bed for performance study of frequent pattern mining performance benchmarking has played an important role in the research and development of relational dbms, object-relational dbms, data warehouse systems, etc. we believe that benchmarking data mining algorithms is a long overdue task, and it will play an important role in the research and development of data mining systems as well. frequent pattern mining forms a core component in mining associations, correlations, sequential patterns, partial periodicity, etc., which are of great potential value in applications. there have been many methods proposed and developed for efficient frequent pattern mining in various kinds of databases, including transaction databases, time-series databases, etc. however, so far there is no serious performance benchmarking study of different frequent pattern mining methods. to facilitate an analytical comparison of different frequent pattern mining methods, we have constructed an open test bed for performance study of a set of recently developed, widely used methods for mining frequent patterns in transaction databases and mining sequential patterns in sequence databases, with different data characteristics. the testbed consists of the following components. a synthetic data generator, which can generate large sets of synthetic data with various kinds of data distributions. a few large data sets from real world applications will also be provided. a good set of typical frequent pattern mining methods, ranging from classical algorithms to recent studies. the methods are grouped into three classes: frequent pattern mining, max-pattern mining, and sequential pattern mining. for frequent pattern mining, we will demonstrate apriori, hashing, partitioning, sampling, treeprojection, and fp-growth. for maximal pattern mining, we will demonstrate maxminer, treeprojection, and fp-growth-max. for sequential pattern mining, we will demonstrate gsp and freespan. a set of performance curves. the running speeds, scalabilities, bottlenecks, and performance of these algorithms on different data distributions will be compared and demonstrated upon request.
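as an editorial illustration of what the frequent pattern mining algorithms named above actually compute, here is a minimal apriori-style levelwise sketch over a toy transaction database; the data and the min_support threshold are invented, and this is not the test bed's code:

    from itertools import combinations

    def frequent_itemsets(transactions, min_support):
        # levelwise search: a (k+1)-itemset can only be frequent if it is the
        # union of frequent k-itemsets, so candidates are built from the last level
        transactions = [frozenset(t) for t in transactions]
        items = {i for t in transactions for i in t}
        frequent, k = {}, 1
        current = [frozenset([i]) for i in sorted(items)]
        while current:
            counts = {c: sum(1 for t in transactions if c <= t) for c in current}
            level = {c: n for c, n in counts.items() if n >= min_support}
            frequent.update(level)
            keys = list(level)
            current = {a | b for a, b in combinations(keys, 2) if len(a | b) == k + 1}
            k += 1
        return frequent

    print(frequent_itemsets([{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c"}], 2))

methods such as fp-growth compute the same answer set while avoiding repeated candidate generation, which is where much of the performance difference being benchmarked comes from.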
some performance curves from our pre-conference experimental evaluations will also be shown. an open testbed. our goal is to construct an extensible test bed which integrates the above components and supports an open-ended testing service. researchers can upload the object codes of their mining algorithms, and run them in the test bed using these data sets. the architecture is shown in figure 1. this testbed is our first step towards benchmarking data mining algorithms. by doing so, performance of different algorithms can be reported consistently, on the same platform, and in the same environment. after the demo, we plan to make the testbed available on the www so that it may, hopefully, benefit further research and development of efficient data mining methods. jian pei runying mao kan hu hua zhu from generation to generation: multimedia, community and personal stories abbe don laura teodosio joe lambert dana atchley scope of the ois/oa project why is xerox computer services working on office automation? first, because our national financial picture has seen ever increasing inflation. the cost of almost everything has increased dramatically, including the cost of doing business. a large part of the cost of doing business is administrative costs. office automation concerns itself with administrative costs of labor (clerical and professional) and time (timely delivery of information). when we look at the future, we know that we have to find better ways of getting information from our heads, pens, and typewriters to those who need it. we're not going to be able to rely on once plentiful secretarial or clerical support staff, because we are already feeling a shortfall of these administrative personnel, and in the future, we will see a decreasing population. the ability of a company to stay price-competitive in any marketplace hinges on its recognition of the need to automate certain functions within the office, so that employee talent can be applied to successfully achieving the overall goals of the company and meeting new challenges. d. d. lombardi can the field of mis be disciplined? preoccupations about the present and future evolution of mis as a scientific field seem to be gaining popularity among researchers. the authors contend that most models used by the investigators of the mis field have been based on an inappropriate monistic view of science. claude banville maurice landry toward an ethos of rationale: commentary on "the role of balloon help" by david k. farkas john m. carroll user-centered video: transmitting video images based on the user's interest kimiya yamaashi yukihiro kawamata masayuki tani hidekazu matsumoto methods & tools: the activity checklist: a tool for representing the "space" of context victor kaptelinin bonnie a. nardi catriona macaulay is information systems a science? an inquiry into the nature of the information systems discipline deepak khazanchi bj relational database: a practical foundation for productivity e. f. codd normalization in oodb design byung s. lee continual computation policies for utility-directed prefetching eric horvitz annette adler communities and cscw computer performance evaluation (sigmetrics): brownian motion? over the past decade, computer performance evaluation (cpe) has gained increasing importance and attention. evaluation was originally performed at the level of the hardware components and the focus has since migrated out towards the perspective of the user. 
in between, there are methods of analysis that are directed at the evaluation of system software performance and application program performance. the data collection tools used for the analyses range from hardware and software monitors to questionnaires and accounting logs. for every data collection tool, there exist numerous data reduction methods. as a result, several theories of computer performance evaluation have evolved, not all of which are necessarily consistent. the choice of data collection tools, data reduction tools, and cpe theory does not appear to abide by any generally agreed to criteria. an organization that undertakes a cpe effort has no real guidance as to its choice of method. what are the objectives of cpe? in what way can each of the many cpe tools and theories help to achieve these objectives and, just as importantly, in what respects are these same tools merely avenues to the collection of useless data? as cpe proliferates, management becomes increasingly aware of the costs of undertaking cpe efforts. is there a unified theory of cpe; an unstated objective toward which all cpe activity is presumed to be striving? if there is such an unstated objective, to what extent has the last decade of cpe activity seen progress toward that objective? alternatively, to what extent has all this activity simply been the irregular (brownian) motion of cpe practitioners? wayne douglas bennett research alerts marisa campbell oh what a tangled web we weave: metaphor and mapping in graphical interfaces william w. gaver sybase replication server alex gorelik yongdong wang mark deppe methods, models and architectures for graphical user interface design: ifip working groups 13.2/2.7 joint workshop, loughborough, uk, september 1994 alistair sutcliffe len bass gilbert cockton andrew monk ian newman an experimental sound-based hierarchical menu navigation system for visually handicapped use of graphical user interfaces the use of modern computers by the visually handicapped has become more difficult over the past few years. in earlier systems the user interface was a simple character based environment. in those systems, simple devices like screen readers, braille output and speech synthesizers were effective. current systems now run graphical user interfaces (guis) which have rendered these simple aids almost useless. in the current work we are developing a tonally based mechanism that allows the visually handicapped user to navigate through the same complex hierarchical menu structures used in the gui. the software can be easily, and cheaply, incorporated in modern user interfaces, making them available for use by the visually handicapped. in the remainder of this paper we present a description of the sound-based interfaces as well as the techniques we have developed to test them. a. i. karshmer p. brawner g. reiswig algebraic equivalences among nested relational expressions algebraic optimization is both theoretically and practically important for query processing in (nested) relational databases. in this paper, we consider this issue and investigate some algebraic properties concerning the nested relational operators. we also outline a heuristic optimization algorithm for nested relational expressions by adopting algebraic transformation rules developed in this paper and previous related work. hong-chen liu k. 
ramamohanarao the role of "help networks" in facilitating use of cscw tools the pattern of cscw system users helping other users to resolve problems and make more effective use of such tools has been observed in a variety of settings, but little is known about how help patterns develop or their effects. results from a pre-post study of the implementation of cscw tools among university faculty, staff and administration indicate that the network of helping relationships is largely disaggregated and generally follows work group alignments rather than technical specialization. a relatively small group of "high providers" is responsible for most help to users, and tends to act as a liaison between central support staff and work group members. these providers are not systematically different from other personnel except in terms of their expertise. implications of these findings for the development and cultivation of help relationships in support of cscw are developed. j. d. eveland anita blanchard william brown jennifer mattocks extracting semi-structured data through examples in this paper, we describe an innovative approach to extracting semi-structured data from web sources. the idea is to collect a couple of example objects from the user and to use this information to extract new objects from new pages or texts. to perform the extraction of new objects, we introduce a bottom-up extraction strategy and, through experimentation, demonstrate that it works quite effectively with distinct web sources, even if only a few examples are provided by the user. berthier ribeiro-neto alberto h. f. laender altigran s. da silva relief from the audio interface blues: expanding the spectrum of menu, list, and form styles menus, lists, and forms are the workhorse dialogue structures in telephone-based interactive voice response applications. despite diversity in applications, there is a surprising homogeneity in the menu, list, and form styles commonly employed. there are, however, many alternatives, and no single style fits every prospective application and user population. a design space for each dialogue structure organizes the alternatives and provides a framework for analyzing their benefits and drawbacks. in addition to phone-based interactions, the design spaces apply to any limited-bandwidth, temporally constrained display device, including small-screen devices such as personal digital assistants (pdas) and screen phones. paul resnick robert a. virzi the hci bibliography project: a hypertext research perspective keith instone semantic indexing for a complete subject discipline yi-ming chung qin he kevin powell bruce schatz tilepic: a file format for tiled hierarchical data tilepic is a method for storing tiled data of arbitrary type in a hierarchical, indexed format for fast retrieval. it is useful for storing moderately large, static, spatial datasets in a manner that is suitable for panning and zooming over the data, especially in distributed applications. because different data types may be stored in the same object, tilepic can support semantic zooming as well. it has proven suitable for a wide variety of applications involving the networked access and presentation of images, geographic data, and text. the tilepic format and its supporting tools are unencumbered, and available to all. jeff anderson-lee robert wilensky network communities, community networks john m. carroll mary beth rosson the lapidary graphical interface design tool brad vander zanden brad a.
myers marquee: a tool for real-time video logging karon weher alex poon towards actor/actress identification in drama videos shiníchi satoh timemine (demonstration session): visualizing automatically constructed timelines russell swan james allan information delivery systems: an exploration of web pull and push technologies julie e. kendall kenneth e. kendall efficient algorithms for mining outliers from large data sets in this paper, we propose a novel formulation for distance-based _outliers_ that is based on the distance of a point from its _kth_ nearest neighbor. we rank each point on the basis of its distance to its _kth_ nearest neighbor and declare the top _n_ points in this ranking to be outliers. in addition to developing relatively straightforward solutions to finding such outliers based on the classical nested-loop join and index join algorithms, we develop a highly efficient _partition-based_ algorithm for mining outliers. this algorithm first partitions the input data set into disjoint subsets, and then prunes entire partitions as soon as it is determined that they cannot contain outliers. this results in substantial savings in computation. we present the results of an extensive experimental study on real-life and synthetic data sets. the results from a real-life nba database highlight and reveal several expected and unexpected aspects of the database. the results from a study on synthetic data sets demonstrate that the partition-based algorithm scales well with respect to both data set size and data set dimensionality. sridhar ramaswamy rajeev rastogi kyuseok shim sensor data management in manufacturing systems hiren parikh kang shin nandit soparkar practically accomplishing immersion john bowers jon o'brien james pycock visual who: animating the affinities and activities of an electronic community judith s. donath interfaces in organizations (panel session): supporting group work research on human factors in computer systems has emphasized supporting individuals. this panel will discuss new issues that emerge when computer systems support groups of people and whole organizations. malone (see following paper) will suggest a broadening of the definition of user interfaces to include "organizational interfaces" and will indicate how a theoretical base for such an endeavor might be developed. then cashman will describe a "coordinator tool" in use at dec for tracking the assignment of tasks to people in activities such as software maintenance. finally, brown will suggest how computer systems can be designed to radically increase the bandwidth of cooperation in groups by, for example, exploiting linguistic notions of context. irene greif john seely brown paul m. cashman thomas malone the role of integral displays in decision making a common approach to designing human-computer decision systems is to divide decision tasks between the person and the computer. the success of this approach depends on knowledge of the specific task components and their interactions, information important for allocating tasks to man and machine. such knowledge is often unavailable for complex, realistic decision situations. also, people are reluctant to relinquish part of their decision- making responsibilities. one way to circumvent these problems is to provide general assistance to the decision maker that is independent of any particular decision situation. we propose to use the computer to reduce the decision maker's cognitive load rather than his task load. 
specifically, we hope to show that human decision processes can be aided by displaying decision-relevant information in ways that capitalize on certain characteristics of the human perceptual system. timothy e. goldsmith roger w. schvaneveldt empirically-based re-design of a hypertext encyclopedia keith instone barbee mynatt teasley laura marie leventhal automatic generation of help from interface design models roberto moriyon pedro szekely robert neches videomap and videospaceicon: tools for anatomizing video content a new approach to interacting with stored video is proposed. the approach utilizes videomap and videospaceicon. videomap is the interface that shows the essential video features in an easy-to-perceive manner. videospaceicon represents the temporal and spatial characteristics of a video shot as an intuitive icon. a video indexing method supports both tools. these tools allow the user's creativity to directly interact with the essential features of each video by offering spatial and temporal clues. this paper introduces the basic concept and describes prototype versions of the tools as implemented in a video handling system. videomap and videospaceicon are effective for video handling functions such as video content analysis, video editing, and various video applications which need an intuitive visual interface. yoshinobu tonomura akihito akutsu kiyotaka otsuji toru sadakata ht96: a newbie's view billy bly multilingual communication systems milam aiken theoretical foundations of schema restructuring in heterogeneous multidatabase systems joseph albert integrating system design and organizational learning stefanie n. lindstaedt interactivity and ritual: body and dialogues with artificial systems diana domingues beyond space (abstract) licia calvi algorithm and performance evaluation of adaptive multidimensional clustering technique shinya fushimi masaru kitsuregawa masaya nakayama hidehiko tanaka tohru moto-oka dynamic frame rate control for video streams a mechanism for dynamically varying the frame rate of pre-encoded video clips is described. an off-line encoder creates a high quality bitstream encoded at 30 fps, as well as separate files containing motion vectors for the same clip at lower frame rates. an on-line encoder decodes the bitstream (if necessary) and re-encodes it at lower frame rates in real time using the pre-computed, stored motion information. dynamic frame rate control, used in conjunction with dynamic bit-rate control, allows clients to solve the rate mismatch between the bandwidth available to them and the bit-rate of the pre-encoded bitstream. it also provides a means for implementing fast forward control for video streaming without increasing bandwidth consumption. sassan pejhan ti-hao chiang ya-qin zhang chorochronos: a research network for spatiotemporal database systems andrew frank stephane grumbach ralf hartmut guting christian s. jensen manolis koubarakis nikos lorentzos yannis manolopoulos enrico nardelli barbara pernici hans-jörg schek michel scholl timos sellis babis theodoulidis peter widmayer database-friendly random projections a classic result of johnson and lindenstrauss asserts that any set of n points in d-dimensional euclidean space can be embedded into k-dimensional euclidean space, where k is logarithmic in n and independent of d, so that all pairwise distances are maintained within an arbitrarily small factor.
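as an illustrative aside, a minimal sketch in the spirit of this entry: projecting rows onto k random +/-1 directions scaled by 1/sqrt(k), which is one standard database-friendly construction; the data and dimensions below are made up, and this is not claimed to be the paper's exact scheme:

    import math
    import random

    def random_projection(rows, k, seed=0):
        # multiply the n x d data matrix by a d x k matrix of +/-1 entries,
        # scaled by 1/sqrt(k); pairwise distances are approximately preserved
        rng = random.Random(seed)
        d = len(rows[0])
        r = [[rng.choice((-1.0, 1.0)) for _ in range(k)] for _ in range(d)]
        scale = 1.0 / math.sqrt(k)
        return [[scale * sum(row[i] * r[i][j] for i in range(d)) for j in range(k)]
                for row in rows]

    # hypothetical: four points in 100 dimensions squeezed into 10 dimensions
    points = [[random.random() for _ in range(100)] for _ in range(4)]
    low = random_projection(points, k=10)

for such constructions k needs only to grow roughly like log n divided by the square of the tolerated distortion, independent of d, which is what makes the embedding attractive for high-dimensional database workloads.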
all known constructions of such embeddings involve projecting the _n_ points onto a random _k_-dimensional hyperplane. we give a novel construction of the embedding, suitable for database applications, which amounts to computing a simple aggregate over _k_ random attribute partitions. dimitris achlioptas liveboard: a large interactive display supporting group meetings, presentations, and remote collaboration this paper describes the liveboard, a large interactive display system. with nearly one million pixels and an accurate, multi-state, cordless pen, the liveboard provides a basis for research on user interfaces for group meetings, presentations and remote collaboration. we describe the underlying hardware and software of the liveboard, along with several software applications that have been developed. in describing the system, we point out the design rationale that was used to make various choices. we present the results of an informal survey of liveboard users, and describe some of the improvements that have been made in response to user feedback. we conclude with several general observations about the use of large public interactive displays. scott elrod richard bruce rich gold david goldberg frank halasz william janssen david lee kim mccall elin pedersen ken pier john tang brent welch ptool: a scalable persistent object manager r. l. grossman x. qin options in physical database design a cornerstone of modern database systems is physical data independence, i.e., the separation of a type and its associated operations from its physical representation in memory and on storage media. users manipulate and query data at the logical level; the dbms translates these logical operations to operations on files, indices, records, and disks. the efficiency of these physical operations depends very much on the choice of data representations. choosing a physical representation for a logical database is called physical database design. the number of possible choices in physical database design is very large; moreover, they very often interact with each other. we attempt to list and classify these choices and to explore their interactions. the purpose of this paper is to provide an overview of possible options to the dbms developer and some guidance to the dbms administrator and user. while much of our discussion will draw on the relational data model, physical database design is of even more importance for object-oriented and extensible systems. the reasons are simple: first, the number of logical data types and their operations is larger, requiring and permitting more choices for their representation. second, the state of the art in query optimization for these systems is much less developed than for relational systems, making careful physical database design even more imperative for object-oriented database systems. goetz graefe editorial david b. johnson christopher rose database model for web-based cooperative applications in this paper we propose a model of a database that could become a kernel of cooperative database applications. first, we propose a new data model cdm (collaborative data model) that is oriented for the specificity of multiuser environments, in particular: cooperation scenarios, cooperation techniques and cooperation management. second, we propose to apply to databases supporting collaboration so called multiuser transactions. multiuser transactions are flat transactions in which, in comparison to classical acid transactions, the isolation property is relaxed. 
waldemar wieczerzycki mmvis: a multimedia visual information seeking environment for video analysis stacie hibino elke a. rundensteiner managing large publications/communications projects and surviving to tell the tale teresa m. craighead efficient and flexible methods for transient versioning of records to avoid locking by read-only transactions we present efficient and flexible methods which permit read-only transactions that do not mind reading a possibly slightly old, but still consistent, version of the data base to execute without acquiring locks. this approach avoids the undesirable interferences between such queries and the typically shorter update transactions that cause unnecessary and costly delays. indexed access by such queries is also supported, unlike by the earlier methods. old versions of records are maintained only in a transient fashion. our methods are characterized by their flexibility (number of versions maintained and the timing of version switches, supporting partial rollbacks, and different recovery and buffering methods) and their efficiency (logging, garbage collection, version selection, and incremental, record-level versioning). distributed data base environments are also supported, including commit protocols with the read-only optimization. we also describe efficient methods for garbage collecting unneeded older versions. c. mohan hamid pirahesh raymond lorie fast-start: quick fault recovery in oracle availability requirements for database systems are more stringent than ever before with the widespread use of databases as the foundation for ebusiness. this paper highlights _fast-start_ _fault recovery_, an important availability feature in oracle, designed to expedite recovery from unplanned outages. fast- start allows the administrator to configure a running system to impose predictable bounds on the time required for crash recovery. for instance, fast-start allows fine-grained control over the duration of the roll-forward phase of crash recovery by adaptively varying the rate of checkpointing with minimal impact on online performance. persistent transaction locking in oracle allows normal online processing to be resumed while the rollback phase of recovery is still in progress, and fast-start allows quick and transparent rollback of changes made by uncommitted transactions prior to a crash. tirthankar lahiri amit ganesh ron weiss ashok joshi a transaction-based approach to relational database specification an operational approach to database specification is proposed and investigated. valid database states are described as the states resulting from the application of admissible transactions, specified by a transactional schema. the approach is similar in spirit to the modeling of behavior by methods and encapsulation in object- oriented systems. the transactions considered are line programs consisting of insertions, deletions, and modifications, using simple selection conditions. the results concern basic properties of transactional schemas, as well as the connection with traditional constraint schemas. in particular, the expressive power of transactional schemas is characterized. although it is shown that transaction- based specification and constraint-based specification are incomparable, constraints of practical interest that have corresponding transactional schemas are identified. the preservation of constraints by transactions is also studied. serge abiteboul victor vianu nsf-eu multilingual information access judith l. 
klavans peter schauble editor's introduction john o. limb ascw: an assistant for cooperative work thomas kreifelts wolfgang prinz eiu's viewswire: new wine in a new bottle peter lovelock ali f. farhoomand to see, or not to see - is that the query? robert r. korfhage retrieval performance versus disc space utilization on worm optical discs steady progress in the development of optical disc technology over the past decade has brought it to the point where it is beginning to compete directly with magnetic disc technology. worm optical discs in particular, which permanently register information on the disc surface, have significant advantages over magnetic technology for applications that are mainly archival in nature but require the ability to do frequent on-line insertions. in this paper, we propose a class of access methods that use rewritable storage for the temporary buffering of insertions to data sets stored on worm optical discs and we examine the relationship between the retrieval performance from worm optical discs and the utilization of disc storage space when one of these organizations is employed. we describe the performance trade-off as one of fast sequential retrieval of the contents of a block versus wasted space owing to data replication. a model of a specific instance of such an organization (a buffered hash file scheme) is described that allows for the specification of retrieval performance objectives. alternative strategies for managing data replication that allow trade-offs between higher consumption rates and better average retrieval performance are also described. we then provide an expected value analysis of the amount of disc space that must be consumed on a worm disc to meet specified performance limits. the analysis is general enough to allow easy extension to other types of buffered file systems for worm optical discs. stavros christodoulakis daniel alexander ford concept mapping: an innovative approach to digital library design and evaluation june p. mead geri gay replication: db2, oracle, or sybase? is replication salvation or the devil in disguise? here's what three implementations tell us doug stacey at the forge: integrating sql with cgi, part 2 reuven lerner sagas long lived transactions (llts) hold on to database resources for relatively long periods of time, significantly delaying the termination of shorter and more common transactions. to alleviate these problems we propose the notion of a saga. an llt is a saga if it can be written as a sequence of transactions that can be interleaved with other transactions. the database management system guarantees that either all the transactions in a saga are successfully completed or compensating transactions are run to amend a partial execution. both the concept of saga and its implementation are relatively simple, but they have the potential to improve performance significantly. we analyze the various implementation issues related to sagas, including how they can be run on an existing system that does not directly support them. we also discuss techniques for database and llt design that make it feasible to break up llts into sagas. hector garcia-molina kenneth salem
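a minimal sketch of the saga execution pattern described in the preceding abstract, with hypothetical step and compensation callables standing in for the component transactions (an illustration of the idea, not the paper's implementation): each step commits as an ordinary short transaction, and if a later step fails, the compensations of the already-committed steps are run in reverse order to amend the partial execution.

    def run_saga(steps):
        """steps: list of (do_step, compensate) pairs of callables."""
        completed = []
        try:
            for do_step, compensate in steps:
                do_step()                     # each step commits as its own short transaction
                completed.append(compensate)  # remember how to undo it semantically
        except Exception:
            # amend the partial execution: run compensations in reverse order
            for compensate in reversed(completed):
                compensate()
            raise

because each step releases its resources at its own commit, the long-lived transaction never holds database resources across the whole sequence.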
cscw research in germany rainer unland evaluating interactive retrieval systems nicholas belkin christine l. borgman susan dumais micheline hancock-beaulieu visual information retrieval from large distributed online repositories shih-fu chang john r. smith mandis beigi ana benitez the distributed information search component (disco) and the world wide web the distributed information search component (disco) is a prototype heterogeneous distributed database that accesses underlying data sources. the disco prototype currently focuses on three central research problems in the context of these systems. first, since the capabilities of each data source are different, transforming queries into subqueries on data sources is difficult. we call this problem the weak data source problem. second, since each data source performs operations in a generally unique way, the cost for performing an operation may vary radically from one wrapper to another. we call this problem the radical cost problem. finally, existing systems behave rudely when attempting to access an unavailable data source. we call this problem the ungraceful failure problem. disco copes with these problems. for the weak data source problem, the database implementor defines precisely the capabilities of each data source. for the radical cost problem, the database implementor (optionally) defines cost information for some of the operations of a data source. the mediator uses this cost information to improve its cost model. to deal with ungraceful failures, queries return partial answers. a partial answer contains the part of the final answer to the query that was produced by the available data sources. the current working prototype of disco contains implementations of these solutions and operations over a collection of wrappers that access information both in files and on the world wide web. anthony tomasic remy amouroux philippe bonnet olga kapitskaia hubert naacke louiqa raschid corr: a computing research repository this paper describes the decisions by which the association for computing machinery integrated good features from the los alamos e-print (physics) archive and from cornell university's networked computer science technical reference library to form their own open, permanent, online "computing research repository" (corr). submitted papers are not refereed and anyone can browse and extract corr material for free, so corr's eventual success could revolutionize computer science publishing. but several serious challenges remain: some journals forbid online preprints, the corr user interface is cumbersome, submissions are only self-indexed (no professional library staff manages the archive), and long-term funding is uncertain. joseph y. halpern representation in virtual space: visual convention in the graphical user interface the graphical user interface (gui) typically provides a multi-windowed environment within a flat workspace or "desktop." simultaneously, however, controls for executing commands within this interface are increasingly being rendered three-dimensionally. this paper explores ways in which the space of the gui desktop might be literally and figuratively deepened through the incorporation of visual devices that have emerged during the history of art---specifically, perspective and light effects. by enriching the visual vocabulary of the gui, greater semantic complexity becomes sustainable. loretta staples interactive journal-title searching via a network of concept-atoms yin wei k. mulliner zin lin xml, the extensible markup language xml has been attracting a lot of attention recently.
this article provides a five-minute overview of xml and explains why it matters to you andrew kuchling symmetric bimanual interaction we present experimental work that explores the factors governing symmetric bimanual interaction in a two-handed task that requires the user to track a pair of targets, one target with each hand. a symmetric bimanual task is a two-handed task in which each hand is assigned an identical role. in this context, we explore three main experimental factors. we vary the _distance_ between the pair of targets to track: as the targets become further apart, visual diversion increases, forcing the user to divide attention between the two targets. we also vary the demands of the task by using both a slow and a fast _tracking speed_. finally, we explore _visual integration_ of sub-tasks: in one condition, the two targets to track are connected by a line segment which visually links the targets, while in the other condition there is no connecting line. our results indicate that all three experimental factors affect the degree of parallelism, which we quantify using a new metric of bimanual parallelism. however, differences in tracking error between the two hands are affected only by the visual integration factor. ravin balakrishnan ken hinckley differences in movement microstructure of the mouse and the finger-controlled isometric joystick anant kartik mithal sarah a. douglas synthetic interviews: the art of creating a "dyad" between humans and machine-based characters donald marinelli scott stevens a suitable relational model for temporal databases (abstract only) the relational approach has been applied very literally to temporal databases. for example, the salary and dept of an employee do not necessarily change at the same time, resulting in a split of information into two tables: name salary start end and name dept start end, where start and end are instants of time, treated as special attributes. a unit of information, such as salary of an employee, is further split into several tuples. normal forms, dependencies and other nice concepts of relational databases remain inapplicable to the temporal case. we propose that a tuple, the fundamental molecular unit of information, be viewed as a function from a union of finitely many intervals of time into domains of attributes. this eliminates the problems stated above. it gives rise to elegant relational algebra, tuple calculus and other query languages and brings temporal databases within the classical framework. it is hoped that this model will draw the attention of researchers to temporal databases. shashi gadia instrumental interaction: an interaction model for designing post-wimp user interfaces this article introduces a new interaction model called instrumental interaction that extends and generalizes the principles of direct manipulation. it covers existing interaction styles, including traditional wimp interfaces, as well as new interaction styles such as two-handed input and augmented reality. it defines a design space for new interaction techniques and a set of properties for comparing them. instrumental interaction describes graphical user interfaces in terms of _domain objects_ and _interaction instruments_. interaction between users and domain objects is mediated by interaction instruments, similar to the tools and instruments we use in the real world to interact with physical objects.
the article presents the model, applies it to describe and compare a number of interaction techniques, and shows how it was used to create a new interface for searching and replacing text. michel beaudouin-lafon optimum probability estimation based on expectations probability estimation is important for the application of probabilistic models as well as for any evaluation in ir. we discuss the interdependencies between parameter estimation and other properties of probabilistic models. then we define an optimum estimate which can be applied to various typical estimation problems in ir. a method for the computation of this estimate is described which uses expectations from empirical distributions. some experiments show the applicability of our method, whereas comparable approaches are partially based on false assumptions or yield estimates with systematic errors. n. fuhr h. huther chi 99 sig: universal web access: delivering services to everyone gary perlman voicenotes: a speech interface for a hand-held voice notetaker lisa j. stifelman barry arons chris schmandt eric a. hulteen integrating ir and rdbms using cooperative indexing samuel defazio amjad daoud lisa ann smith jagannathan srinivasan human computer interaction: introduction sofia c. defernandez remote usability evaluation: can users report their own critical incidents? jose c. castillo h. rex hartson deborah hix recovery architectures for multiprocessor database machines rakesh agrawal david j. dewitt a comparison of relational database management systems unify and idb one of the most important qualities of a database management system running on a small system is its ability to provide data independence. a relational database management system provides data independence through its use of tables for data storage. use of a standard operating system means that a database management system can be run on a wide variety of systems, allowing data to be transported not only between small systems but between small and large systems. assuming that a database management system provides data independence, what other features are needed? a comparison of the features provided by two such database management systems and how easy they are to use would be helpful in answering this question. this paper presents a comparison of unify and intel database management system (idb). both are relational, multi-user systems that run under the unix operating system. lindsay mcdermid physical spaces, virtual places and social worlds: a study of work in the virtual geraldine fitzpatrick simon kaplan tim mansfield human-computer interaction research at georgia institute of technology hci research at georgia tech is found in three cooperating groups: the engineering psychology and experimental psychology programs in the school of psychology, the center for human-machine systems research in the school of industrial and systems engineering, and the interdisciplinary graphics, visualization and usability (gvu) center. we cooperate via cross-listed courses, having students in one area take a minor in another area, collaborative research projects, serving on ph. d. committees, joint colloquia and brown bag lunches, and joint appointments. the gvu center (housed in the college of computing) and cognitive science program (sponsored by psychology, industrial and systems engineering, and the college of computing) involves a number of the same faculty, further enhancing our collaborations. james d. foley christine m. 
mitchell neff walker crash recovery in client-server exodus in this paper, we address the correctness and performance issues that arise when implementing logging and crash recovery in a page-server environment. the issues result from two characteristics of page-server systems: 1) the fact that data is modified and cached in client database buffers that are not accessible by the server, and 2) the performance and cost trade-offs that are inherent in a client-server environment. we describe a recovery system that we have implemented for the client-server version of the exodus storage manager. the implementation supports efficient buffer management policies, allows flexibility in the interaction between clients and the server, and reduces the server load by generating log records at clients. we also present a preliminary performance analysis of the implementation. michael j. franklin michael j. zwilling c. k. tan michael j. carey david j. dewitt addressing the it skills crisis (panel session): gender and the it profession denis m. s. lee sue nielsen eileen m. trauth viswanath venkatesh interactive narrative: stepping into our own stories chuck clanton harry marks janet murray mary flanagan francine arble the smooth video db - demonstration of an integrated generic indexing approach the smooth video db is a distributed system proposing an integral query, browsing, and annotation software framework in common with an index database for video media material. alexander bachlechner lazlo boszormenyi bernhard dorfinger christian hofbauer harald kosch carmen riedler roland tusch buckets: smart objects for digital libraries michael l. nelson kurt maly enhancing data warehouse performance through query caching aditya n. saharia yair m. babad polynesian navigation: locomotion and previewing aspects kent wittenburg wissam ali-ahmad daniel laliberte tom lanning `prabha' - a distributed concurrency control algorithm we propose a non-preemptive, deadlock free concurrency control mechanism for distributed database systems. the algorithm uses a combination of transaction blocking and roll-back to achieve serialization. unlike other locking mechanisms presented in the past, the algorithm proposed here uses dynamic attributes of transactions to resolve conflicts to achieve serialization. we argue that using the dynamic attributes of transactions economizes memory use and reduces conflict resolution time. albert burger vijay kumar coordination, overload and team performance: effects of team communication strategies susan r. fussell robert e. kraut f. javier lerch william l. scherlis matthew m. mcnally jonathan j. cadiz multigranularity locking in multiple job classes transaction processing system the conditions of when to apply fine and coarse granularity to different kinds of transaction are well understood. however, it is not very clear how multiple job classes using different lock granularities affect each other. this study aims at exploring the impact of multigranularity locking on the performance of multiple job classes transaction processing system which is common in multiuser database system. there are two key findings in the study. firstly, lock granularity adopted by identical job classes should not differ from each other by a factor of more than 20; otherwise, serious data contention may result. 
secondly, short job class transactions are generally benefited when their level of granularity is similar to that of the long job class since this will reduce the additional lock overhead and data contention which are induced by multigranularity locking. shan-hoi ng sheung-lun hung a temporally oriented data model the research into time and data models has so far focused on the identification of extensions to the classical relational model that would provide it with "adequate" semantic capacity to deal with time. the temporally oriented data model (todm) presented in this paper is a result of a different approach, namely, it directly operationalizes the pervasive three-dimensional metaphor for time. one of the main results is thus the development of the notion of the data cube: a three-dimensional and inherently temporal data construct where time, objects, and attributes are the primary dimensions of stored data. todm's cube adds historical depth to the tabular notions of data and provides a framework for storing and retrieving data within their temporal context. the basic operations in the model allow the formation of new cubic views from existing ones, or viewing data as one moves up and down in time within cubes. this paper introduces todm, a consistent set of temporally oriented data constructs, operations, and constraints, and then presents tosql, a corresponding end-user's sql-like query syntax. the model is a restricted but consistent superset of the relational model, and the query syntax incorporates temporal notions in a manner that likewise avoids penalizing users who are interested solely in the current view of data (rather than in a temporal perspective). the naturalness of the spatial reference to time and the added semantic capacity of todm come with a price---the definitions of the cubic constructs and basic operations are relatively cumbersome. as rudimentary as it is, todm nonetheless provides a comprehensive basis for formulating an external data model for a temporally oriented database. gad ariav transactional publish/subscribe: the proactive multicast of database changes (abstract) for many years, tibco (the information bus company) has pioneered the use of publish/subscribe---a form of push technology---to build flexible, real-time loosely-coupled distributed applications. today, publish/subscribe is used by 300 of the world's largest financial institutions, deployed in 6 of the top 10 semiconductor manufacturers' factory floors, utilized in the implementation of large-scale internet services like yahoo, intuit, and etrade, and chosen by many of the world's leading corporations as the enterprise infrastructure for integrating disparate applications. in this paper, we will: contrast the publish/subscribe event-driven interaction paradigm against the traditional demand-driven request-reply interaction paradigm; explain the concepts of subject-based addressing and self-describing messages, the cornerstones of publish/subscribe; describe the scalable implementation of publish/subscribe via multicast and broadcast, and the proposed pragmatic general multicast internet standard; and categorize the qualities of service needed by different kinds of event-driven applications.
today, tibco products support: reliable delivery for front-office applications which require update notifications only while they are online; guaranteed delivery for back-office applications that cannot afford to lose messages despite network and application failures; and transactionally guaranteed delivery for those applications that must update databases, consume messages on one set of subjects, and publish messages on another set of subjects, all within properly bracketed atomic transactions. three different implementations of transactional publish/subscribe can be found in: a generic, database independent implementation embodied in tibco's enterprise transaction express (etx) product. etx optimizes two-phase commit for those applications that span a single database and the messaging system by using the last resource manager optimization. it also supports more complicated transactions by playing the role of an xa-compliant resource manager, leaving the transaction coordination to standard-based transaction monitors. an informix universal server specific extension package called the tibco message blade. this extends the sql language with tibco-provided user defined routines (udrs) for synchronous publish/subscribe operations. in general, udrs can be used inside stored procedures and triggers to publish and consume (potentially complex) structured messages. the need for two-phase commit is finessed by storing messages in the same database that houses application tables. a bidirectional bridge between oracle 8's advanced queueing (aq) facility and tibco's tib/rendezvous guaranteed message delivery implementation. oracle aq supports enqueue and dequeue operations to queues (actually implemented as oracle tables) that can be performed as part of database transactions. the bridge dequeues from oracle queues and republishes on the information bus. conversely, the bridge subscribes to tib/rendezvous messages and enqueues them to oracle queues for consumption by oracle applications. multiple bridges can be used to route aq messages from one oracle database to another. arvola chan the effect of reducing homing time on the speed of a finger-controlled isometric pointing device sarah a. douglas anant kartik mithal how effective are 3d display modes? sabine volbracht gitta domik khatoun shahrbabaki gregor fels present and future directions in data warehousing many large organizations have developed data warehouses to support decision making. the data in a warehouse are subject oriented, integrated, time variant, and nonvolatile. a data warehouse contains five types of data: current detail data, older detail data, lightly summarized data, highly summarized data, and metadata. the architecture of a data warehouse includes a backend process (the extraction of data from source systems), the warehouse, and the front-end use (the accessing of data from the warehouse). a data mart is a smaller version of a data warehouse that supports the narrower set of requirements of a single business unit. data marts should be developed in an integrated manner in order to avoid repeating the "silos of information" problem. an operational data store is a database for transaction processing systems that uses the data warehouse approach to provide clean data. data warehousing is constantly changing, with the associated opportunities for practice and research, such as the potential for knowledge management using the warehouse. paul gray hugh j.
watson panel: building and using test collections donna harman espace 2: an experimental hyperaudio environment nitin sawhney arthur murphy logical design of relational database schemes we define extended conflict free dependencies in the context of functional and multivalued dependencies, and prove that there exists an acyclic, dependency preserving, 4nf database scheme if and only if the given set of dependencies has an extended conflict free cover. this condition can be checked in polynomial time. a polynomial time algorithm to obtain such a scheme for a given extended conflict free set of dependencies is also presented. the result is also applicable when the data dependencies consist of only functional dependencies, giving the necessary and sufficient condition for an acyclic, dependency preserving bcnf database scheme. l. y. yuan z. m. ozsoyoglu incremental conceptual clustering from existing databases conceptual clustering enhances the value of existing databases by revealing patterns in the data. these patterns may be useful for understanding trends, making predictions of future events from historical data, or synthesizing data records into meaningful clusters. lode (learning on database environments) is an incremental conceptual clustering program. the premise of the lode system is that the task of discovering patterns in a large set of potentially noisy examples can be accomplished in a generate and test paradigm using generalization techniques to generate hypotheses describing similar examples and then testing the accuracy of these hypotheses by comparing them to examples. the lode system is an implementation of this premise. lode was used to analyze keystroke data collected from novices learning to use the vi editor. the analysis shows that lode discovered descriptions of recurring patterns of errors made by the novices that are known as mode errors. james r. rowland gregg t. vesonder trust breaks down in electronic contexts but can be repaired by some initial face-to-face contact: elena rocco document filtering with inference networks jamie callan mars - machine automated response system michael robertson experiments in inhabited tv steve benford chris greenhalgh chris brown graham walker tim regan jason morphett john wyver paul rea from "model world" to "magic world": making graphical objects the medium for intelligent design assistance (abstract) loren terveen markus stolze will hill dynamic functional dependencies and database aging a simple extension of the relational model is introduced to study the effects of dynamic constraints on database evolution. both static and dynamic constraints are used in conjunction with the model. the static constraints considered here are functional dependencies (fds). the dynamic constraints involve global updates and are restricted to certain analogs of fds, called "dynamic" fds. the results concern the effect of the dynamic constraints on the static constraints satisfied by the database in the course of time. the effect of the past history of the database on the static constraints is investigated using the notions of age and age closure. the connection between the static constraints and the potential future evolution of the database is briefly discussed using the notions of survivability and survivability closure.
victor vianu technology matters in virtual communities erik stolterman an information retrieval perspective on fuzzy database systems (acm 82 panel session) databases in which domain values are not crisp and precise exhibit properties normally associated with information retrieval systems. for instance, a boolean query induces a membership value for each tuple (i.e., record) that is analogous in function to a similarity measure. thus, precision and recall measures are legitimate areas of interest that pertain to fuzzy databases but not ordinary databases. these ideas will be expounded in the context of a database for expert advice on national energy policies. bill p. buckles pointing in entertainment-oriented environments: appreciation versus performance multimedia applications for consumer entertainment often employ a point-and-select interaction style, borrowed from more task-oriented computer applications. the environment of use and the pointing devices involved are so different, however, that a higher importance should be attributed to the users' appreciation of the pointing device than to its efficiency or any other objective performance measure. we set up an experiment to investigate how appreciation and performance measures relate. the experiment involved six different input devices, two cd-i titles and 16 subjects making both voluntary and prescribed cursor control movements. for the mouse-like pointing devices we obtained a fitts' law-like dependence on target width and target distance. this was, however, not replicated for any of the other input devices, mainly owing to a positive influence of cursor constraints. concerning the relation of the performance measures with the users' appreciation, we found that neither time-to-target nor relative-path-length on its own is a reliable indicator of the users' appreciation. together, however, they might explain the appreciation scores to a considerable extent. j. h. d. m. westerink k. van den reek automatic feedback using past queries: social searching? larry fitzpatrick mei dent programming constraint system by demonstration takashi hattori introducing users to network resources users lost: reflections on the past, future, and limits of information science tefko saracevic collaborating on an online information system for metacenter dan dwyer practical gui screen design: making it usable cliff wilding selectively materializing data in mediators by analyzing source structure, query distribution and maintenance cost we present an approach to selecting data to materialize in web based information mediators by analyzing multiple factors. an issue in building web based information mediators is how to improve the query response time given the high response time for retrieving data from remote web sources. we had earlier presented a framework for optimizing the performance of information mediators by selectively materializing data. in this paper we describe our approach for automatically selecting the portion of data that must be materialized by analyzing a combination of several factors, namely the distribution of user queries, the structure of sources and the update cost. naveen ashish craig a. knoblock cyrus shahabi visualizing information retrieval results: a demonstration of the tilebar interface marti a. hearst jan o. pedersen take cover: exploiting version support in cooperative systems anja haake jörg m. haake simplifying conformance pat billingsley abc: a hypermedia system for artifact-based collaboration john b. smith f.
donelson smith introduction to object-oriented design: a minimalist approach mary beth rosson john m. carroll an empirical study of collaborative wearable computer systems jane siegel robert e. kraut bonnie e. john kathleen m. carley modeling users' interests in information filters irene stadnyk robert kass apl programming: a psychological model this paper seeks to provide insights into the psychological dynamics of programming in apl. these insights should allow the practitioner to better approach the apl programming environment and may stimulate more in-depth, empirical research into these psychological dynamics. the paper approaches apl both as a language and as a system which maps problems into solutions. the paper is analytical versus empirical in nature. the author draws on the plethora of work related to the topic of the paper in the development of a model of programming in apl. raymond c. hooker a simple reference string sampling method the performance of a process executing in a virtual memory environment is largely determined by the interaction between the process's memory referencing behavior and the system's memory management policies. the structure of the memory reference string, reflected in such quantities as the page fault rate, the stack depth distribution, and the working set size, has traditionally been expensive to measure because of the overhead in capturing every memory reference. this paper reports on the development and testing of a sampling technique designed to extract accurate measurements of reference string characteristics while recording only a part of the complete reference string. the cost of the measurements is controlled by the sampling rate. results in this paper are based on experiments using synthetic reference strings from an lru generative model. james wittneben dennis kafura a future for e-mail stacey l. ashlund steven pemberton efficient and extensible algorithms for multi query optimization complex queries are becoming commonplace, with the growing use of decision support systems. these complex queries often have a lot of common sub-expressions, either within a single query, or across multiple such queries run as a batch. multiquery optimization aims at exploiting common sub-expressions to reduce evaluation cost. multi-query optimization has hitherto been viewed as impractical, since earlier algorithms were exhaustive, and explore a doubly exponential search space. in this paper we demonstrate that multi-query optimization using heuristics is practical, and provides significant benefits. we propose three cost-based heuristic algorithms: volcano-sh and volcano-ru, which are based on simple modifications to the volcano search strategy, and a greedy heuristic. our greedy heuristic incorporates novel optimizations that improve efficiency greatly. our algorithms are designed to be easily added to existing optimizers. we present a performance study comparing the algorithms, using workloads consisting of queries from the tpc-d benchmark. the study shows that our algorithms provide significant benefits over traditional optimization, at a very acceptable overhead in optimization time. prasan roy s. seshadri s. sudarshan siddhesh bhobe
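as a rough illustration of the greedy idea mentioned in the preceding abstract (the paper's volcano-based algorithms and the specific optimizations that make its greedy heuristic efficient are not reproduced here; the candidate set and cost function below are hypothetical abstractions): repeatedly materialize the shared sub-expression that lowers the estimated cost of the whole query batch the most, and stop when no remaining candidate helps.

    def greedy_materialization(candidates, total_cost):
        """candidates: shared sub-expressions; total_cost(mat) -> estimated cost
        of evaluating the whole query batch when the set `mat` is materialized."""
        materialized = set()
        best = total_cost(materialized)
        while True:
            pick = None
            for sub in candidates - materialized:
                cost = total_cost(materialized | {sub})
                if cost < best:
                    best, pick = cost, sub
            if pick is None:          # no candidate lowers the estimated cost any further
                return materialized
            materialized.add(pick)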
using clustering and visualization for refining the results of a www image search engine sougata mukherjea kyoji hirata yoshinori hara remarks on two new theorems of date and fagin h. w. buff joyce+: model and language for multi-site distributed systems the models and languages most widely used for distributed systems programming (such as ada, csp, and occam) do not explicitly consider the process execution site; they assume a synchronous communication scheme of "rendezvous" style, and require that each process recognize the names of their interlocutors. joyce+ is a modification of the joyce language defined by brinch hansen [brinch 87, brinch 89]. in the joyce+ model each process is executed in a specific site, it is assumed that the communication between processes at the same site is much faster than between processes at different sites, and it is also assumed that the communication between processes is not always reliable. for this multi-site environment an asynchronous communication scheme is considered, eliminating waiting states for a process that sends a message to another process. furthermore, modelling communication between processes through "channel" objects, the spaces for process names become independent. in this paper the joyce+ model for multi-site distributed systems is presented and the operational semantics of the asynchronous communication between processes is illustrated with petri nets. the syntax of the joyce+ language is presented in terms of the guarded commands language [dijkstra 75]. the expressive power of the joyce+ language in distributed synchronization problems with timeout handling is illustrated through examples. finally, we discuss software development environments (based on joyce+) for distributed systems over multi- and mono-process computer networks. index terms - languages for distributed systems, tools and methodologies for distributed systems. maría consuelo franky an efficient video segmentation scheme for mpeg video stream using macroblock information in this paper we propose and implement an efficient scheme for automatically detecting the abrupt shot changes in a video stream compressed in mpeg video format. in the proposed scheme, the type of each macroblock in a b-frame is compared with the type of the corresponding macroblock (i.e. the macroblock in the same position) of the previous b-frame. the results of comparisons are accumulated and compared to a threshold in order to decide if a shot change occurs. since the proposed scheme uses not only information about the type of each macroblock but also its location, it can provide more robust detection capability. moreover, since the proposed scheme can also detect shot changes in both i- and p-frames based on the information in b-frames, it can detect a changing point more precisely, that is, the granularity of detection is the frame in the proposed scheme. johngho nang seungwook hong youngin ihm
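a rough sketch of the macroblock-comparison step described in the preceding abstract, assuming a hypothetical data layout (one list of macroblock types per b-frame, in the same positional order); the published scheme additionally uses the accumulated information to localize cuts at i- and p-frames, which is not shown here.

    def detect_shot_changes(b_frames, threshold):
        """b_frames: list of lists of macroblock types, one list per b-frame."""
        changes = []
        for i in range(1, len(b_frames)):
            prev, cur = b_frames[i - 1], b_frames[i]
            # count macroblocks whose coding type differs from the same position
            # in the previous b-frame
            differing = sum(1 for a, b in zip(prev, cur) if a != b)
            if differing > threshold:
                changes.append(i)   # declare an abrupt shot change at this b-frame
        return changes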
a new basis for the weak instance model a new definition of the weak instance model is presented, which does not consider the missing values as existent though unknown, but just assumes that no information is available about them. it is possible to associate with the new definition logical theories that do not contain universally quantified variables. the new model enjoys various desirable properties of the old weak instance model, with respect to dependency satisfaction, query answering, and associated logical theories. p. atzeni m. c. de bernardis database research at columbia university shih-fu chang luis gravano gail e. kaiser kenneth a. ross salvatore j. stolfo design and implementation of cb lite dan kogan situated evaluation for cooperative systems this paper discusses an evaluation of the mead prototype, a multi-user interface generator tool particularly for use in the context of air traffic control (atc). the procedures we adopted took the form of opportunistic and informal evaluation sessions with small user groups, including air traffic controllers (atcos). we argue that informal procedures are a powerful and cost effective method for dealing with specific evaluation issues in the context of cscw but that wider issues are more problematic. most notably, identifying the "validity" or otherwise of cscw systems requires that the context of use be taken seriously, necessitating a fundamental re-appraisal of the concept of evaluation. michael twidale david randall richard bentley the psychology of multimedia databases multimedia information retrieval in digital libraries is a difficult task for computers in general. humans on the other hand are experts in perception, concept representation, knowledge organization and memory retrieval. cognitive psychology and science describe how cognition works in humans, but can offer valuable clues to information retrieval researchers as well. cognitive psychologists view the human mind as a general-purpose symbol-processing system that interacts with the world. a multimedia information retrieval system can also be regarded as a symbol-processing system that interacts with the environment. its underlying information retrieval model can be seen as a cognitive framework. we describe the design and implementation of a combined text/image retrieval system (as an example of a multimedia retrieval system) that is inspired by cognitive theories such as paivio's dual coding theory and marr's theory of perception. user interaction and an automatically created thesaurus that maps text concepts and internal image concept representations, generated by various feature extraction algorithms, improve the query formulation process of the image retrieval system. unlike most "multimedia databases" found in the literature, this image retrieval system uses the functionality provided by an extensible multimedia dbms that itself is part of an open distributed environment. mark g. l. m. van doorn arjen p. de vries predator: a resource for database research praveen seshadri avoiding cultural false positives deborah mrazek cynthia baldaccini detection, analysis and rendering of audience reactions in distributed multimedia performance recent advances in distributed multimedia technologies encourage the development of interactive performance systems. these systems allow multiple users to take part in a performance either as players or spectators and influence its development in real time. however, in order for these applications to become effective they have to provide meaningful interaction capabilities and adequate means of expression to all the participants. this research provides an interaction framework that seeks to address these requirements. in particular, the system allows players and spectators to exchange messages during the performance. these messages describe player actions and audience reactions. furthermore, the framework monitors the development of the event and analyzes the behavior of the participants in order to: (i) detect and render shared audience reactions, and (ii) achieve a high degree of audience engagement.
finally, the system synchronizes the presentation of audience reactions with performance developments at each site. this method has been applied in mission, a multi-player game on the web. nikitas m. sgouros methodologies for evaluation of collaborative systems jeff sokolov resources section: conferences jay blickstein collaborative virtual workspace peter j. spellman jane n. mosier lucy m. deus jay a. carlson conference preview: chi 99 conference on human factors in computing systems jennifer bruer t-cube: a fast, self-disclosing pen-based alphabet dan venolia forrest neiberg nfql: the natural forms query language a means by which ordinary forms can be exploited to provide a basis for nonprocedural specification of information processing is discussed. the natural forms query language (nfql) is defined. in nfql data retrieval requests and computation specifications are formulated by sketching ordinary forms to show what data are desired and update operations are specified by altering data on filled-in forms. the meaning of a form depends on a store of knowledge that includes extended abstract data types for defining elementary data items, a database scheme defined by an entity-relationship model, and a conceptual model of an ordinary form. based on this store of knowledge, several issues are addressed and resolved in the context of nfql. these issues include automatic generation of query expressions from weak specifications, the view update problem, power and completeness, and a heuristic approach to resolving computational relationships. a brief status report of an implementation of nfql is also given. david w. embley materialized views and data warehouses nick roussopoulos pagejokey, an object-oriented hypermedia design environment (abstract) sylvain fraisse jocelyne nanard marc nanard design principles for the virtual workplace charles e. grantham an optimality proof of the lru-k page replacement algorithm this paper analyzes a recently published algorithm for page replacement in hierarchical paged memory systems [o'neil et al. 1993]. the algorithm is called the lru-k method, and reduces to the well-known lru (least recently used) method for k = 1. previous work [o'neil et al. 1993; weikum et al. 1994; johnson and shasha 1994] has shown the effectiveness for k > 1 by simulation, especially in the most common case of k = 2. the basic idea in lru-k is to keep track of the times of the last k references to memory pages, and to use this statistical information to rank-order the pages as to their expected future behavior. based on this, the page replacement policy decision is made: which memory-resident page to replace when a newly accessed page must be read into memory. in the current paper, we prove, under the assumptions of the independent reference model, that lru-k is optimal. specifically we show: given the times of the (up to) k most recent references to each disk page, no other algorithm a making decisions to keep pages in a memory buffer holding n - 1 pages based on this information can improve on the expected number of i/os to access pages over the lru-k algorithm using a memory buffer holding n pages. the proof uses the bayesian formula to relate the space of actual page probabilities of the model to the space of observable page numbers on which the replacement decision is actually made. elizabeth j. o'neil patrick e. o'neil gerhard weikum
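a small sketch of the k = 2 replacement rule analysed in the preceding abstract (the abstract's contribution is the optimality proof, not this code; the buffer interface here is hypothetical): keep the last two reference times of each resident page and evict the page whose second-most-recent reference is oldest, treating pages referenced only once as having an infinitely old penultimate reference.

    import math

    class LRU2Buffer:
        def __init__(self, capacity):
            self.capacity = capacity
            self.history = {}        # page -> (last_ref_time, second_last_ref_time)
            self.clock = 0

        def reference(self, page):
            self.clock += 1
            if page in self.history:
                last, _ = self.history[page]
                self.history[page] = (self.clock, last)
                return
            if len(self.history) >= self.capacity:
                # evict the page with the oldest second-most-recent reference
                victim = min(self.history, key=lambda p: self.history[p][1])
                del self.history[victim]
            self.history[page] = (self.clock, -math.inf)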
the push technology rage…so what's next? kate gerwig surveyor's forum: technical transactions philip bernstein nathan goodman uc berkeley's nsf/arpa/nasa digital libraries project nancy van house on wrapping query languages and efficient xml integration modern applications (web portals, digital libraries, etc.) require integrated access to various information sources (from traditional dbms to semistructured web repositories), fast deployment and low maintenance cost in a rapidly evolving environment. because of its flexibility, there is an increasing interest in using xml as a middleware model for such applications. xml enables fast wrapping and declarative integration. however, query processing in xml-based integration systems is still penalized by the lack of an algebra with adequate optimization properties and the difficulty of understanding source query capabilities. in this paper, we propose an algebraic approach to support efficient xml query evaluation. we define a general purpose algebra suitable for semistructured or xml query languages. we show how this algebra can be used, with appropriate type information, to also wrap more structured query languages such as oql or sql. finally, we develop new optimization techniques for xml-based integration systems. vassilis christophides sophie cluet jerome simeon database systems (acm 82 panel session): user interfaces we want to look at general ideas governing user interfaces to database systems as well as specific interfaces. the session starts with the more general presentations and ends with the more specific. these short presentations (approximately 10-15 minutes each) will be followed by questions from the audience. charles welty research problems in data warehousing jennifer widom design issues involving entertainment click-ons douglas super marvin westrom maria klawe a hierarchical access control scheme for digital libraries chaitanya baru arcot rajasekar demonstration of the cinema system: k. rothermel i. barth g. dermler w. fiederer t. helbig t. leopold w. sinz an extensible constructor tool for the rapid, interactive design of query synthesizers michelle baldonado seth katz andreas paepcke chen-chuan chang hector garcia-molina terry winograd a magnifier tool for video data we describe an interface prototype, the hierarchical video magnifier, which allows users to work with a video source at fine levels of detail while maintaining an awareness of temporal context. the technique allows the user to recursively magnify the temporal resolution of a video source while preserving the levels of magnification in a spatial hierarchy. we discuss how the ability to inspect and manipulate hierarchical views of temporal magnification affords a powerful tool for navigating, analyzing and editing video streams. michael mills jonathan cohen yin yin wong join processing in database systems with large main memories we study algorithms for computing the equijoin of two relations in a system with a standard architecture but with large amounts of main memory. our algorithms are especially efficient when the main memory available is a significant fraction of the size of one of the relations to be joined; but they can be applied whenever there is memory equal to approximately the square root of the size of one relation. we present a new algorithm which is a hybrid of two hash-based algorithms and which dominates the other algorithms we present, including sort-merge. even in a virtual memory environment, the hybrid algorithm dominates all the others we study. finally, we describe how three popular tools to increase the efficiency of joins, namely filters, babb arrays, and semijoins, can be grafted onto any of our algorithms. leonard d. shapiro
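a simplified sketch of the partition-and-probe structure that hash-based join algorithms of this kind share (the paper's hybrid algorithm additionally keeps one partition's hash table memory-resident while partitioning; the relation layout and key extractors below are hypothetical).

    def partitioned_hash_join(r, s, key_r, key_s, num_partitions):
        # phase 1: split both relations by hashing the join key, so matching
        # tuples always fall into the same partition pair
        r_parts = [[] for _ in range(num_partitions)]
        s_parts = [[] for _ in range(num_partitions)]
        for t in r:
            r_parts[hash(key_r(t)) % num_partitions].append(t)
        for t in s:
            s_parts[hash(key_s(t)) % num_partitions].append(t)

        # phase 2: join each partition pair with an in-memory hash table
        result = []
        for rp, sp in zip(r_parts, s_parts):
            table = {}
            for t in rp:
                table.setdefault(key_r(t), []).append(t)
            for t in sp:
                for match in table.get(key_s(t), []):
                    result.append((match, t))
        return result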
user preferences for task-specific vs. generic application software bonnie a. nardi jeff a. johnson inside risks: toward trustworthy networked information systems fred b. schneider user centred design principles - how far have they been industrialised? ian mcclelland bronwen taylor bill hefley introducing groupware into organizations (workshop session) (abstract only): what leads to successes and failures? this full-day workshop is intended for designers, researchers, and decision-makers to discuss and compare their experiences with designing and introducing groupware in an organizational context. considering the impact that groupware has had on collaboration in recent years, there are relatively few published studies on experiences with introducing groupware. with so few comparisons, it is difficult to develop an appropriate framework which could guide its introduction. yet it is important not only to understand successes and failures with methods, but also design and methodology compromises that groupware implementers must live with. workshop participants shall present and discuss their experiences with requirement analysis, design and realization, training, user support/mediation, roles in the design team, and user acceptance. one goal of the workshop is to identify commonalities between different methods associated with successes and problems in order to move in the direction of developing approaches that will benefit users in system adaptation. an important issue here will be to view the introduction of groupware as an integrated organizational and technological development, i.e., a design of technology, work, and organizations. gloria mark wolfgang prinz volker wulf vidar hepsoe the wasa2 object-oriented workflow management system gottfried vossen mathias weske enabling dynamic content caching for database-driven web sites web performance is a key differentiation among content providers. snafus and slowdowns at major web sites demonstrate the difficulty that companies face trying to scale to a large amount of web traffic. one solution to this problem is to store web content at server-side and edge-caches for fast delivery to the end users. however, for many e-commerce sites, web pages are created dynamically based on the current state of business processes, represented in application servers and _databases_. since application servers, databases, web servers, and caches are independent components, there is no efficient mechanism for changes in the database content to be reflected in the cached web pages. as a result, most application servers have to mark dynamically generated web pages as non-cacheable. in this paper, we describe the architectural framework of the cacheportal system for enabling dynamic content caching for database-driven e-commerce sites. we describe techniques for intelligently invalidating dynamically generated web pages in the caches, thereby enabling _caching_ of web pages generated based on database contents. we use some of the most popular components in the industry to illustrate the deployment and applicability of the proposed architecture. k. selçuk candan wen-syan li qiong luo wang-pin hsiung divyakant agrawal
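a hedged illustration of the invalidation idea described in the preceding abstract (the actual cacheportal mechanism, which relates queries, urls, and database changes, is considerably more elaborate; the mapping structure below is hypothetical): remember which base tables each cached page was derived from, and drop the affected pages when one of those tables is updated.

    class PageInvalidator:
        def __init__(self):
            self.pages_by_table = {}    # table name -> set of cached page urls

        def register(self, url, tables):
            """record that the cached page at `url` was generated from `tables`."""
            for table in tables:
                self.pages_by_table.setdefault(table, set()).add(url)

        def on_update(self, table, cache):
            """invalidate every cached page that depends on the updated table."""
            for url in self.pages_by_table.pop(table, set()):
                cache.pop(url, None)    # `cache` is any dict-like store of page bodies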
towards interoperable heterogeneous information systems: an experiment using the diom approach yooshin lee ling liu calton pu remote evaluation: the network as an extension of the usability laboratory h. rex hartson jose c. castillo john kelso wayne c. neale pragmatic solutions for better integration of the visually impaired in virtual communities this article introduces and discusses issues in the design of user interfaces for visually impaired people in the domain of virtual communities. we begin by pointing out that collaborative virtual environments provide additional means for visually impaired people which may help to accomplish a better integration into existing communities and social activities. we give a short introduction to the way visually impaired people usually work with a pc and show how their method of information access differs from that of sighted people. we then take a look at the advantages and disadvantages of existing adaptations to operating systems. based on this analysis we describe some requirements for user interfaces that improve usability for visually impaired people without losing the attractiveness and intuitiveness for the sighted. we finally describe a prototype of a special irc-client, called birc, and discuss its advantages and limitations. thorsten hampel reinhard keil-slawik bastian ginger claassen frank plohmann christian reimann type-syntax and token-syntax in diagrammatic systems while it is crucial to understand the formal structure of the semantic domain of an information system, in this paper we raise an ontological issue about the syntactic aspect of a representation system through a case study on a diagrammatic system. the uptake in the software industry of notations for designing systems visually has been accelerated with the standardization of the unified modeling language (uml). the formalization of diagrammatic notations is important for the development of essential tool support and to allow reasoning to take place at the diagrammatic level. focusing on an extended version of venn and euler diagrams (which was developed to complement uml in the specification of software systems), this paper presents two levels of syntax for this system: type-syntax and token-syntax. token-syntax is about particular diagrams instantiated on some physical medium, and type-syntax provides a formal definition with which a concrete representation of a diagram must comply. while these two levels of syntax are closely related, the domains of type-syntax and token-syntax are ontologically independent, that is, one is abstract and the other concrete. we discuss the roles of type-syntax and token-syntax in diagrammatic systems and show that it is important to consider both levels of syntax in diagrammatic reasoning systems and in developing software tools to support such systems. john howse fernando molina john taylor sun-joo shin associative hardware and software techniques for integrity control this paper presents the integrity control mechanism of the associative processing system, cassm.
the mechanism takes advantage of the associative techniques, such as content and context addressing, tagging and marking data, parallel processing, automatic triggering of integrity control procedures, etc., for integrity control and as a result offers three significant advantages: (1) the problem of staging data in a main memory for integrity checking can be eliminated because database storage operations are verified at the place where the data are stored. (2) the backout or merging procedures are relatively easy and inexpensive in the associative system because modified copies can be substituted for the originals or may be discarded by merely changing their associated tags. (3) the database management system software is simplified because database integrity functions are handled by the associative processing system to which a mainframe computer is a front-end computer. y. c. hong stanley y. w. su the king is dead; long live the king (keynote) john b. smith expanded design procedures for learnable, usable interfaces (panel session) designers of interactive computer systems have begun to incorporate a number of good techniques in the design process to insure that the system will be easy to learn and easy to use. though not all design projects use all the steps recommended, the steps are well known: define the tasks the user has to perform, know the capabilities of the user, gather relevant hardware/software constraints, from guidelines, design a first prototype, test the prototype with users, iterate changes in the design and repeat the tests until the deadline is reached. in our experience designing a new interface, steps 1 and 4 were the ones that were the most difficult and step 5 was the one that took extra time to plan well. we had difficulty defining what would go into a new task, and from broad guidelines, we had to develop one specific implementation for our tasks. furthermore, so that in each test we would learn something of value for future designs, we knew that we wanted to test pairs of prototypes that differed in only one feature. choosing which single feature to alter in each pair required careful planning. in what follows, i describe each of these difficulties more fully and show how we approached each in our environment. normally, a task is defined as a computer-based analog of an existing task, such as wordprocessing being the computer-based analog of typing. since we had to build an interface for an entirely new task, we had to invent how the user would think about the task. we had to invent the objects on which the user would operate and then the actions that would be performed on those objects. we had to specify the mental representation in the absence of previous similar tasks. in our case, we were designing the interface for a communications manager to designate the path to be taken for routing 800-calls to their final destination as a function of time of day, day of week, holidays, percentage distribution, etc. from the large set of known formal representations of data, e.g. lists, pictures, tables, hierarchies, and networks, we found three that seemed to capture the information sufficient for our task. we found that a hierarchy (tree structure), a restricted programming language in which there were only if-then-elses and definitions, and a long form to be filled out with all possible ordered combinations of the desired features, were all sufficient representations. we then asked potential users in casual interviews which format they found easiest to understand. 
it was immediately clear even from a relatively small number of subjects that the tree representation was preferred. the second aspect of defining the task involved specifying what actions the user would take on this representation. since in all interfaces, users have to move about, select an item to work on, enter information, delete information, and change modes (from data entry to command, typically), we looked for these kinds of actions in our task. the actions immediately fell into place, with commands being generated for moving about a tree, entering nodes and branches, etc. after gathering information on who the end users were and what hardware constraints we had, we designed our first prototype. this was our next most involved chore. our broad guidelines said that we should: present information on the computer in a representation as close as possible to the user's mental representation. minimize the long- term and short-term memory loads (e.g. make retrieval of commands and codes easy, give the user clues about where he or she is in a complicated procedure or data structure). construct a dialog that holds to natural conversational conventions (e.g., make pauses predictable, acknowledge long delays, use english imperative structure in the command syntax). our initial design on paper was fairly easy to construct. we followed that, however, with an important analysis step before we built our first prototype. for each part of the design, we constructed an alternative design that seemed to fit within the same constraints and within the guidelines. that is, we identified the essential components of our interface: the representation of the data, the organization of the command sector, the reminders, and the specific command implementations such as how to move around the data representation. for example, in the command sector there are alternative ways to arrange the commands for display: they could be grouped by similar function so that all "move" commands were clustered and all "entry" commands were clustered, etc, or they could be grouped into common sequences, such as those that people naturally follow in initially entering the nodes and branches of the tree structures. once each component had an alternative, we debated the merits of each. our first prototype, then, was the result of this first paper design plus the alterations that were generated by this analysis procedure. the next step entailed testing our design with real users. since we wanted to test our prototypes so that we learned something useful for our next assignment, we chose to test two prototypes at a time. if we were to learn something from the test, then only one component could differ between the two prototypes. the difficulty arose in deciding which component was to be tested in each pair. for this task, we went back to our initial component-by- component debate about the prototype. for each of the components and its alternative, we scored the choice on three dimensions: that is, first, for some alternatives, the better choice was predictable. for example, displaying command names was known to be more helpful than not displaying them. testing this alternative would not teach us very much. second, we needed to choose some alternatives early, so that the developers could begin immediately with some preliminary work. for example, our developers needed to know early whether the data would be displayed as a form or a tree so they could set up appropriate data structures. 
and third, some alternatives would appear again in future design projects. for example, all projects require some way of moving about the data but few deal directly with trees. knowledge gained now about the movement function would pay off in the future whereas how to display trees may not. once we prioritized our alternatives on these dimensions, we were able to choose the alternative for the first prototype test. after the test, we found other ideas to incorporate in the next iteration, but went through the same analysis procedure, listing the components, debating alternatives, and prioritizing those to be tested in the next iteration. in summary, the procedure we followed in designing and testing our prototypes was standard in overall form, flowing from defining the task, user, and constraints; building prototypes; and testing them with users. we differed, however, in three of our steps. we spent important initial time considering the best task representation to display to the user. we analyzed the individual components of our first prototype, generating a design for actual implementation that was more defensibly good than our first paper design. and, in our iterative testing procedure, we selected pairs of prototypes for test, the pairs differing on only one component of the design. the component for testing was selected according to whether the test would teach us something, whether it was important to decide early in the development process, and whether the component would appear again in designs we encountered in the future. these expanded steps in the design process not only added to our confidence that our early design was reasonably good, but also gave us the data and theory with which to convince others, notably developers and project managers, of the merit of our design. and, the process taught us something of use for our next design project. judith reitman olson compensation-based on-line query processing it is well known that using conventional concurrency control techniques for obtaining serializable answers to long-running queries leads to an unacceptable drop in system performance. as a result, most current dbmss execute such queries under a reduced degree of consistency, thus providing non-serializable answers. in this paper, we present a new and highly concurrent approach for processing large decision support queries in relational databases. in this new approach, called compensation-based query processing, concurrent updates to any data participating in a query are communicated to the query's on-line query processor, which then compensates for these updates so that the final answer reflects changes caused by the updates. very high concurrency is achieved by locking data only briefly, while still delivering transaction-consistent answers to queries. v. srinivasan michael j. carey a retrospective on the development of star star, officially known as the xerox 8010 information system, is a workstation for professionals, providing a comprehensive set of capabilities for the office environment. the star software consists of just over 250,000 lines of code. its development required 93 work years over a 3.5 year period. the development of star depended heavily on the use of powerful personal computers connected to a local-area network and on the use of the mesa language and development environment. an integration service was introduced to speed up the building of star and to relieve the programmers of many complex, but repetitive, tasks. eric harslem leroy e. 
nelson working the net: time to consider a web search specialist jill h. ellsworth hybrid inferential security methods for statistical databases memoryless inference control methods have been shown to provide effective means of reducing the amount of sensitive information released from a statistical database while maximizing the release of non-sensitive information. early memoryless inference controls have the additional benefit of providing control at a low computation and storage cost. two recent extensions to memoryless inference control allow the controls to release more non-sensitive information but do so at a greater cost in terms of computation and storage. this paper describes a proposed hybrid inference control method that can potentially maximize the release of information while holding down computation costs. steven c. hansen online help: a part of documentation susan d. goodall learning user's preferences by analyzing web-browsing behaviors young-woo seo byoung-tak zhang security for oodbms (or systems) ravi sandhu real world performance of association rule algorithms this study compares five well-known association rule algorithms using three real-world datasets and an artificial dataset. the experimental results confirm the performance improvements previously claimed by the authors on the artificial data, but some of these gains do not carry over to the real datasets, indicating overfitting of the algorithms to the ibm artificial dataset. more importantly, we found that the choice of algorithm only matters at support levels that generate more rules than would be useful in practice. for support levels that generate less than 1,000,000 rules, which is much more than humans can handle and is sufficient for prediction purposes where data is loaded into ram, apriori finishes processing in less than 10 minutes. on our datasets, we observed super-exponential growth in the number of rules. on one of our datasets, a 0.02% change in the support increased the number of rules from less than a million to over a billion, implying that outside a very narrow range of support values, the choice of algorithm is irrelevant. zijian zheng ron kohavi llew mason using schematic scenarios to understand user needs colin potts murax: a robust linguistic approach for question answering using an on-line encyclopedia robust linguistic methods are applied to the task of answering closed-class questions using a corpus of natural language. the methods are illustrated in a broad domain: answering general-knowledge questions using an on-line encyclopedia. a closed-class question is a question stated in natural language, which assumes some definite answer typified by a noun phrase rather than a procedural answer. the methods hypothesize noun phrases that are likely to be the answer, and present the user with relevant text in which they are marked, focussing the user's attention appropriately. furthermore, the sentences of matching text that are shown to the user are selected to confirm phrase relations implied by the question, rather than being selected solely on the basis of word frequency. the corpus is accessed via an information retrieval (ir) system that supports boolean search with proximity constraints. queries are automatically constructed from the phrasal content of the question, and passed to the ir system to find relevant text. then the relevant text is itself analyzed; noun phrase hypotheses are extracted and new queries are independently made to confirm phrase relations for the various hypotheses. 
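to make the query-construction step just described a little more concrete, here is a minimal sketch of turning a question's content words into a boolean query with a proximity constraint; the crude phrase extraction and the query syntax (the WITHIN operator) are simplified assumptions for illustration, not murax's actual implementation or the ir system's real syntax:

```python
# hypothetical sketch: build a boolean/proximity query from a question's content words.
# the stopword list and the "WITHIN" syntax are assumptions, not murax's own format.
import re
from typing import List


def content_phrases(question: str) -> List[str]:
    """stand-in for a real noun-phrase extractor: drop question words and stopwords."""
    words = re.findall(r"[a-z]+", question.lower())
    stop = {"what", "who", "which", "is", "the", "a", "an", "of", "was", "did", "to"}
    return [w for w in words if w not in stop]


def proximity_query(phrases: List[str], window: int = 10) -> str:
    """require all phrases, and ask the ir system to keep them within `window` words."""
    anded = " AND ".join(phrases)
    return f"({anded}) WITHIN {window}"


q = "who was the first woman to climb mount everest"
print(proximity_query(content_phrases(q)))
# (first AND woman AND climb AND mount AND everest) WITHIN 10
```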
the methods are currently being implemented in a system called murax and although this process is not complete, it is sufficiently advanced for an interim evaluation to be presented. julian kupiec tolerating bounded inconsistency for increasing concurrency in database systems recently, the scope of databases has been extended to many non-standard applications, and serializability is found to be too restrictive for such applications. in general, two approaches are adopted to address this problem. the first approach considers placing more structure on data objects to exploit type-specific properties while keeping serializability as the correctness criterion. the other approach uses explicit semantics of transactions and databases to permit interleaved executions of transactions that are non-serializable. in this paper, we attempt to bridge the gap between the two approaches by using the notion of serializability with bounded inconsistency. users are free to specify the maximum level of inconsistency that can be allowed in the executions of operations dynamically. in particular, if no inconsistency is allowed in the execution of any operation, the protocol reduces to a standard strict two-phase locking protocol based on type-specific semantics of data objects. bounded inconsistency can be applied to many areas which do not require exact values of the data, such as gathering information for statistical purposes, making high-level decisions, and reasoning in expert systems which can tolerate uncertainty in input data. m. h. wong d. agrawal dynamic information visualization dynamic queries constitute a very powerful mechanism for information visualization; some universe of data is visualized, and this visualization is modified on-the-fly as users modify the range of interest within the domains of the various attributes of the visualized information. in this paper, we analyze dynamic queries and offer some natural generalizations of the original concept by establishing a connection to sql. we also discuss some implementation ideas that should make these generalizations efficient as well. yannis e. ioannidis the formal specification of adaptive user interfaces using command language grammar the design and implementation of adaptive systems as opposed to non-adaptive systems creates new demands on user interface designers. this paper discusses a few of these demands as encountered by the authors while utilising a formal notation for the design of an adaptive user interface to an electronic mail system. recommendations for the extension of this formal notation are proposed and discussed. d. p. browne b. sharratt m. norman c2 secure database management systems - a comparative study ramzi a. haraty efficient commit protocols for the tree of processes model of distributed transactions this paper describes two efficient distributed transaction commit protocols, the presumed abort (pa) and presumed commit (pc) protocols, which have been implemented in the distributed data base system r* [dshlm82, lhmwy83]. pa and pc are extensions of the well-known two-phase (2p) commit protocol [gray78, lamp80, lsggl80]. pa is optimized for read-only transactions and a class of multi-site update transactions, and pc is optimized for other classes of multi-site update transactions. the optimizations result in reduced inter-site message traffic and log writes, and, consequently, a better response time for such transactions. we derive the new protocols in a step-wise fashion by modifying the 2p protocol. c.
mohan b. lindsay integrating infrastructure: enabling large-scale client integration kenneth m. anderson christian och roger king richard m. osborne query processing in a multimedia document system query processing in a multimedia document system is described. multimedia documents are information objects containing formatted data, text, image, graphics, and voice. the query language is based on a conceptual document model that allows the users to formulate queries on both document content and structure. the architecture of the system is outlined, with focus on the storage organization in which both optical and magnetic devices can coexist. query processing and the different strategies evaluated by our optimization algorithm are discussed. elisa bertino fausto rabbiti simon gibbs testing pointing device performance and user assessment with the iso 9241, part 9 standard sarah a. douglas arthur e. kirkpatrick i. scott mackenzie posters: abstracts elizabeth d. liddy parametric databases: seamless integration of spatial, temporal, belief and ordinary data our model, algebra and sql-like query language for temporal databases extend naturally to parametric data, of which spatial, temporal, spatio-temporal, belief and ordinary data are special cases. shashi k. gadia re-coupling tailored user interfaces gareth smith jon o'brien a paradigm shift in the distribution of multimedia gerard parr kevin curran converting nested algebra expressions into flat algebra expressions nested relations generalize ordinary flat relations by allowing tuple values to be either atomic or set valued. the nested algebra is a generalization of the flat relational algebra to manipulate nested relations. in this paper we study the expressive power of the nested algebra relative to its operation on flat relational databases. we show that the flat relational algebra is rich enough to extract the same "flat information" from a flat database as the nested algebra does. theoretically, this result implies that recursive queries such as the transitive closure of a binary relation cannot be expressed in the nested algebra. practically, this result is relevant to (flat) relational query optimization. jan paredaens dirk van gucht performance enhancements to a relational database system in this paper we examine four performance enhancements to a database management system: dynamic compilation, microcoded routines, a special-purpose file system, and a special-purpose operating system. all were examined in the context of the ingres database management system. benchmark timings that are included suggest the attractiveness of dynamic compilation and a special-purpose file system. microcode and a special-purpose operating system are analyzed and appear to be of more limited utility in the ingres context. michael stonebraker john woodfill jeff ranstrom marguerite murphy marc meyer eric allman concepts and methods for the optimization of distributed data processing in this paper we introduce and discuss a model of distributed data processing. for this purpose, a typical application system is analyzed and divided into sub-applications. to fulfill the task of the global application, the sub-applications have to communicate in an appropriate manner by exchanging data and information. in our model the communication between sub-applications is split up into two steps: the offering of information by sending sub-applications, and its acceptance by receiving sub-applications.
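the two-step offer/accept style of communication between sub-applications just described can be pictured roughly as follows; this is a minimal, hypothetical sketch built on a shared queue, and the names and queue-based transport are assumptions rather than the paper's implementation:

```python
# hypothetical sketch of two-step communication between sub-applications:
# a sending sub-application "offers" information, and a receiving one later "accepts" it.
import queue
from typing import Optional


class Channel:
    """decouples offering from accepting so each side proceeds at its own pace."""

    def __init__(self) -> None:
        self._offers: "queue.Queue[dict]" = queue.Queue()

    def offer(self, payload: dict) -> None:
        # step 1: the sending sub-application publishes data without waiting.
        self._offers.put(payload)

    def accept(self, block: bool = True, timeout: Optional[float] = None) -> Optional[dict]:
        # step 2: the receiving sub-application takes the data when it is ready.
        try:
            return self._offers.get(block=block, timeout=timeout)
        except queue.Empty:
            return None


ch = Channel()
ch.offer({"order_id": 42, "status": "shipped"})   # sending sub-application
print(ch.accept(block=False))                     # receiving sub-application
```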
for both communication steps synchronous and asynchronous processing modes are defined. supporting those different communication modes the cooperation between sub- applications can be defined very closely to the specific demands of the application system. this optimizes distributed data processing. at last we demonstrate the prototype implementation of a distributed data management system, which is based on the flexible communication mechanism described in the paper. s. jablonski t. ruf h. wedekind working the net: life on the bleeding edge jill h. ellsworth panel on multimedia computer mail - technical issues and future standards a computer-based message system (cbms) consists of computer facilities that expedite the creation, management, and nonreal time distribution of messages. these systems are increasingly being used for formal and informal communication, supporting the distribution of textual messages. however, nontext media such as voice and graphics are very important for human interaction, and are becoming more and more important in computer system applications. as a result, there is a growing need for the support of media other than text in cbms. before standards for multimedia cbms can be implemented, there are many technical issues that must be addressed. these range from the design of new mechanisms for the distribution of multimedia mail, to the design of tools needed for editing nontext media. this panel will be discussing issues encountered in the development of multimedia cbms and an outlook for standards for multimedia computer mail. franklin f. kuo debra p. deutsch harry c. forsdick j. j. garcia luna aceves najah naffah andrew poggio jonathan b. postel james e. white automatic creation of hypervideo news libraries for the world wide web guillaume boissière user interface correctness ian maccoll david carrington computing surveys' electronic symposium on hypertext and hypermedia: editorial helen ashman rosemary michelle simpson another approach to the data base computer data base computers have been identified as one means of using hardware to improve the performance of current data base management (software) systems while offering increased functionality. this paper describes a data base computer architecture and its operation that was developed during one of sperry univac's research efforts in this area. parallel transfer of large blocks of data which are then processed in parallel on a content-addressable basis by a series of microprocessors is the key to this design. the data base computer described in this paper is indicative of the special-purpose devices that will be available to meet the information storage and processing needs of the 1980s. harvey a. freeman john r. jordan towards estimation error guarantees for distinct values we consider the problem of estimating the number of distinct values in a column of a table. for large tables without an index on the column, random sampling appears to be the only scalable approach for estimating the number of distinct values. we establish a powerful negative result stating that no estimator can guarantee small error across all input distributions, unless it examines a large fraction of the input data. in fact, any estimator must incur a significant error on at least some of a natural class of distributions. we then provide a new estimator which is provably optimal, in that its error is guaranteed to essentially match our negative result. 
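for intuition about the sampling-based setting described above, here is a small illustrative sketch of a scale-based distinct-value estimator computed from a uniform random sample; this is a generic, textbook-style choice used only for illustration and is not necessarily the provably optimal estimator proposed in the paper:

```python
# illustrative sketch: estimate the number of distinct values in a column from a
# uniform random sample. scaling up the sampled singletons by sqrt(N/n) is a
# generic scale-based choice shown for intuition only; estimates are rough.
import math
import random
from collections import Counter
from typing import Hashable, Sequence


def estimate_distinct(sample: Sequence[Hashable], table_rows: int) -> float:
    n = len(sample)
    freq = Counter(sample)
    singletons = sum(1 for c in freq.values() if c == 1)   # values seen exactly once
    others = len(freq) - singletons                        # values seen 2+ times
    # values appearing only once in the sample are the ones most likely to have
    # unseen occurrences of other values "hiding" behind them, so they get scaled up.
    return math.sqrt(table_rows / n) * singletons + others


random.seed(0)
column = [random.randint(1, 5000) for _ in range(100_000)]   # synthetic column
sample = random.sample(column, 1_000)
print(len(set(column)), round(estimate_distinct(sample, len(column))))
```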
a drawback of the provably optimal estimator is that while its worst-case error is reasonable, it does not necessarily give the best possible error bound on any given distribution. therefore, we develop heuristic estimators that are optimized for a class of typical input distributions. while these estimators lack strong guarantees on distribution-independent worst-case error, our extensive empirical comparisons indicate their effectiveness both on real data sets and on synthetic data sets. moses charikar surajit chaudhuri rajeev motwani vivek narasayya independence-reducible database schemes edward p. f. chan hector j. hernandez office automation: new arena for old struggle office work accounts for one-half of the total employment in the united states. in 1978 there were 15.6 million professional and technical workers, 8.8 million managers and administrators, 6.4 million salesworkers, and 17.8 million clerical workers, for a grand total of 48.6 million white-collar employees constituting 49.8 percent of all workers. the bureau of labor statistics projects an increase in this category of at least 24.9 percent, compared with 18.6 percent for blue-collar workers, 31.4 percent for service workers, and a decline of about 21 percent for farm workers. although service occupations are expected to be the fastest growing occupational group during 1978-1990, the largest number of new jobs will occur in the white- and blue-collar categories. the former is expected to increase by 12.1 to 16.1 million jobs; the latter by 5.9 to 8.9 million. abbe mowshowitz a synchronization model for recorded presentations and its relevance for information retrieval in order to improve the acceptance of recorded presentations, we introduce a new open document type covering a wide range of different media classes typically appearing in this scenario. instances of this document type can be replayed using our time-based synchronization model. random access in combination with the realized stream/media-layered synchronization mechanism results in essential features such as random visible scrolling and unrestricted cross-referencing. the proposed synchronization model has been implemented and is routinely used by a variety of institutions. furthermore we show how the properties of the considered documents and the functionality of the proposed synchronization model affect cross-modal retrieval techniques. w. hurst r. muller a loosely-coupled integration of a text retrieval system and an object-oriented database system document management systems are needed for many business applications. this type of system would combine the functionality of a database system (for describing, storing and maintaining documents with complex structure and relationships) with a text retrieval system (for effective retrieval based on full text). the retrieval model for a document management system is complicated by the variety and complexity of the objects that are represented. in this paper, we describe an approach to complex object retrieval using a probabilistic inference net model, and an implementation of this approach using a loose coupling of an object-oriented database system (iris) and a text retrieval system based on inference nets (inquery). the resulting system is used to store long, structured documents and can retrieve document components (sections, figures, etc.) based on their contents or the contents of related components. the lessons learnt from the implementation are discussed. w. bruce croft lisa a. smith howard r.
turtle extending complex ad-hoc olap large-scale data analysis and mining activities require sophisticated information extraction queries. many queries require complex aggregation, and many of these aggregates are non-distributive. conventional solutions to this problem involve defining user-defined aggregate functions (udafs). however, the use of udafs entails several problems. defining a new udaf can be a significant burden for the user, and optimizing queries involving udafs is difficult because of the "black box" nature of the udaf. in this paper, we present a method for expressing nested aggregates in a declarative way. a nested aggregate, which is a rollup of another aggregated value, expresses a wide range of useful non-distributive aggregation. for example, most-frequent-type aggregation can be naturally expressed using nested aggregation, e.g. "for each product, report its total sales during the month with the largest total sales of the product". by expressing complex aggregates declaratively, we relieve the user of the burden of defining udafs, and allow the evaluation of the complex aggregates to be optimized. we use the extended multi-feature (emf) syntax as the basis for expressing nested aggregation. an advantage of this approach is that emf sql can already express a wide range of complex aggregation in a succinct way, and emf sql is easily optimized into efficient query plans. we show that nested aggregation queries can be evaluated efficiently by using a small extension to the emf sql query evaluation algorithm. a side effect of this extension is to extend emf sql to permit complex aggregation of data from multiple sources. theodore johnson damianos chatziantoniou prima - a database system supporting dynamically defined composite objects michael gesmann andreas grasnickel theo härder christoph hubel wolfgang käfer bernhard mitschang harald schöning thinksheet: a tool for tailoring complex documents peter piatko roman yangarber daoi lin dennis shasha internet television: net makeover? dan schiller page and link classifications: connecting diverse resources stephanie w. haas erika s. grams hypertext data mining (tutorial am-1) soumen chakrabarti how do experienced information lens users use rules? the information lens provides electronic mail users with the ability to write rules that automatically sort, select, and filter their messages. this paper describes preliminary results from an eighteen-month investigation of the use of this system at a corporate test site. we report the experiences of 13 voluntary users who have each had at least three months' experience with the most recent version of the system. we found that: people without significant computer experience are able to create and use rules effectively. useful rules can be created based on the fields present in all messages (e.g., searching for distribution lists or one's own name in the address fields or for character strings in the subject field), even without any special message templates. people use rules both to prioritize messages before reading them and to sort messages into folders for storage after reading them. people use delete rules primarily to filter out messages from low-priority distribution lists, not to delete personal messages to themselves. w. e. mackay t. w. malone k. crowston r. rao d. rosenblitt s. k.
card arc: an oai service provider for cross-archive searching the usefulness of the many on-line journals and scientific digital libraries that exist today is limited by the lack of a service that can federate them through a unified interface. the open archive initiative (oai) is one major effort to address technical interoperability among distributed archives. the objective of oai is to develop a framework to facilitate the discovery of content in distributed archives. in this paper, we describe our experience and lessons learned in building arc, the first federated searching service based on the oai protocol. arc harvests metadata from several oai-compliant archives, normalizes them, and stores them in a search service based on a relational database (mysql or oracle). at present we have over 165k metadata records from 16 data providers from various domains. xiaoming liu kurt maly mohammad zubair michael l. nelson formanager: an office forms management system s. bing yao alan r. hevner zhongzhi shi dawei luo user population and user contributions to virtual publics: a systems model this paper provides a comprehensive review of empirical research into user contributions to computer-mediated discourse in public cyber-spaces, referred to here as virtual publics. this review is used to build a systems model of such discourse. the major components of the model are i) critical mass, ii) social loafing, and iii) the collective impact of individual cognitive constraints on the processing of group messages. by drawing these three components into a single model it becomes possible to describe the shape of a "user-contributions/user-population function" after controlling for context. virtual publics can be created with the support of various technologies including email, newsgroups, web-based bulletin boards, etc. traditionally the choice of technology platform and the way it is used has largely depended on arbitrary factors. this paper suggests that choices of this nature can be based on knowledge about required segmentation points for discourse as they relate to a particular type of technology. this is because the "user-contributions/user-population function" will map differently to different classes of technology. similarly the different classes of technologies used to enable virtual publics will each have different stress zones at which users will experience information overload resulting from computer-mediated discourse. quentin jones sheizaf rafaeli a theory of stimulus-response compatibility applied to human-computer interaction a goms theory of stimulus-response compatibility is presented and applied to remembering computer command abbreviations. two abbreviation techniques, vowel-deletion and special-character-plus-first-letter, are compared in an encoding task. significant differences are found in the time to type the first letter of the abbreviation, and in the time to complete the typing of the abbreviation. these differences are analyzed using the theory, which produces an excellent quantitative fit to the data (r2 = 0.97). bonnie e. john paul s. rosenbloom allen newell experiments with a component theory of probabilistic information retrieval based on single terms as document components a component theory of information retrieval using single content terms as components for queries and documents was reviewed and experimented with.
the theory has the advantages of being able to (1) bootstrap itself, that is, define initial term weights naturally based on the fact that items are self-relevant; (2) make use of within-item term frequencies; (3) account for query-focused and document-focused indexing and retrieval strategies cooperatively; and (4) allow for component-specific feedback if such information is available. retrieval results with four collections support the effectiveness of all the first three aspects, except for predictive retrieval. at the initial indexing stage, the retrieval theory performed much more consistently across collections than croft's model and provided results comparable to salton's tf*idf approach. an inverse collection term frequency (ictf) formula was also tested that performed much better than the inverse document frequency (idf). with full feedback retrospective retrieval, the component theory performed substantially better than croft's, because of the highly specific nature of document-focused feedback. repetitive retrieval results with partial relevance feedback mirrored those for the retrospective. however, for the important case of predictive retrieval using residual ranking, results were not unequivocal. k. l. kwok value propagation in object-oriented database part hierarchies michael halper james geller yehoshua perl automating the assignment of submitted manuscripts to reviewers the 117 manuscripts submitted for the hypertext '91 conference were assigned to members of the review committee, using a variety of automated methods based on information retrieval principles and latent semantic indexing. fifteen reviewers provided exhaustive ratings for the submitted abstracts, indicating how well each abstract matched their interests. the automated methods do a fairly good job of assigning relevant papers for review, but they are still somewhat poorer than assignments made manually by human experts and substantially poorer than an assignment perfectly matching the reviewers' own ranking of the papers. a new automated assignment method called "n of 2n" achieves better performance than human experts by sending reviewers more papers than they actually have to review and then allowing them to choose part of their review load themselves. susan t. dumais jakob nielsen the orchestration age kim vonder haar synglish - a high level query language for the rap database machine this paper describes a high-level query language developed and implemented for the rap database machine. the language, called synglish, is based on the semantic structure of english sentences. the software system developed accepts synglish queries and produces rap assembler code which is then executed by the rap software emulator. tamer m. ozso esen a. ozkarahan an evaluation of earcons for use in auditory human-computer interfaces an evaluation of earcons was carried out to see whether they are an effective means of communicating information in sound. an initial experiment showed that earcons were better than unstructured bursts of sound and that musical timbres were more effective than simple tones. a second experiment was then carried out which improved upon some of the weaknesses shown up in experiment 1 to give a significant improvement in recognition. from the results of these experiments some guidelines were drawn up for use in the creation of earcons. earcons have been shown to be an effective method for communicating information in a human-computer interface. stephen a. brewster peter c. wright alistair d. n.
edwards computerized writing tools for the 80's and 90's r. hanson mbase lynn wilcox shingo uchihashi andreas girgensohn jonathan foote john boreczky an integrated solution for managing replicated data in distributed systems panduranga rao adusumilli lawrence j. osborne mocha: a database middleware system featuring automatic deployment of application-specific functionality manuel rodríguez-martinez nick roussopoulos john m. mcgann stephen kelley vadim katz zhexuan song joseph jájá agents that reduce work and information overload pattie maes visualizing large trees using the hyperbolic browser john lamping ramana rao tourist: the application of a description logic based semantic hypermedia system for tourism joe bullock carole goble object-relational database systems (tutorial): principles, products and challenges object-relational database systems, a.k.a. "universal servers," are emerging as the next major generation of commercial database system technology. products from relational dbms vendors including ibm, informix, oracle, unisql, and others, include object-relational features today, and all of the major vendors appear to be on course to delivering full object-relational support in their products over the next few years. in addition, the sql3 standard is rapidly solidifying in this area. the goal of this tutorial is to explain the key features of object-relational database systems, review what today's products provide, and then look ahead to where these systems are heading. the presentation will be aimed at a general sigmod audience, and should therefore be appropriate for users, practitioners, and/or researchers who want to learn about object-relational database systems. michael j. carey nelson m. mattos anil k. nori understanding designers' approaches to design (abstract) trond knudsen classic: a structural data model for objects classic is a data model that encourages the description of objects not only in terms of their relations to other known objects, but in terms of a level of intensional structure as well. the classic language of structured descriptions permits i) partial descriptions of individuals, under an 'open world' assumption, ii) answers to queries either as extensional lists of values or as descriptions that necessarily hold of all possible answers, and iii) an easily extensible schema, which can be accessed uniformly with the data. one of the strengths of the approach is that the same language plays multiple roles in the processes of defining and populating the db, as well as querying and answering. classic (for which we have a prototype main-memory implementation) can actively discover new information about objects from several sources: it can recognize new classes under which an object falls based on a description of the object, it can propagate some deductive consequences of db updates, it has simple procedural recognizers, and it supports a limited form of forward-chaining rules to derive new conclusions about known objects. the kind of language of descriptions and queries presented here provides a new arena for the search for languages that are more expressive than conventional dbms languages, but for which query processing is still tractable. this space of languages differs from the subsets of predicate calculus hitherto explored by deductive databases. alexander borgida ronald j. brachman deborah l.
mcguinness lori alperin resnick jester 2.0 (poster abstract): evaluation of a new linear time collaborative filtering algorithm dhruv gupta mark digiovanni hiro narita ken goldberg miyabi: a hypermedia database with media-based navigation (abstract) kyoji hirata hajime takano yoshinori hara on bi-level conceptual schemas the advent of the entity-relationship (e-r) model has revolutionized the database design process. the overwhelming success of the entity-relationship approach to database design partly lies in the simplicity and semantic clarity of the e-r model. however, two seemingly conflicting measurements of database design, logical-clarity and physical-efficiency, force database designers to choose either a logically-clear design or a physically-efficient design. as a result, the resulting design is either logically-clear or physically-efficient, but not both. this paper aims to provide a remedy for this situation. in this paper, we first discuss the concept of logical-clarity and physical-efficiency of logical database design. then the concept of bi-level conceptual schema is introduced. the bi-level conceptual schema concept allows us to create a logical database design that is both logically-clear and physically-efficient. finally, a database management system prototype (dbmsb) that supports bi-level conceptual schema is described. eugene y. sheng ear tracking: visualizing auditory localization strategies william joseph king suzanne j. weghorst kdd-cup 2000: question 2 winner's report salford systems dan steinberg n. scott cardell mykhaylo golovnya the olap market: state of the art and research issues barbara dinter carsten sapia gabriele höfling markus blaschka user interfaces for creativity support tools ben shneiderman synergies: a vision of information products working together steve anderson shiz kobara barry mathis dustin rosing eviatar shafrir a fifth generation approach to intelligent information retrieval this paper briefly examines certain of the intelligent information retrieval (iir) mechanisms used in the reseda system, a system equipped with "reasoning" capabilities in the field of complex biographical data management. particular attention is paid to a description of the different "levels" of inference procedure which can be executed by the system. the intention is to show that the technical solutions to iir problems implemented in reseda are of an equivalent level to those now proposed in the same field by the japanese project for fifth generation computer systems. gian piero zarri hypercafe and the "hypervideo engine": a generalized approach for hypervideo authoring and navigation (abstract) nitin sawhney david balcom a system for effective content-based image retrieval: y. alp aslandogan chuck thier clement yu integrity = validity + completeness database integrity has two complementary components: validity, which guarantees that all false information is excluded from the database, and completeness, which guarantees that all true information is included in the database. this article describes a uniform model of integrity for relational databases that considers both validity and completeness. to a large degree, this model subsumes the prevailing model of integrity (i.e., integrity constraints). one of the features of the new model is the determination of the integrity of answers issued by the database system in response to user queries. to users, answers that are accompanied with such detailed certifications of their integrity are more meaningful.
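as a toy illustration of the two components just described (validity: nothing false in the answer; completeness: nothing true missing from it), consider the following sketch; the relation and the "real world" reference set are invented purely for illustration:

```python
# toy sketch: an answer is *valid* if it contains no tuples outside the true set,
# and *complete* if it misses no tuples of the true set. both sets are invented.
from typing import Set, Tuple

TRUE_WORLD: Set[Tuple[str, str]] = {          # what is actually true
    ("alice", "paris"), ("bob", "rome"), ("carol", "oslo"),
}

answer: Set[Tuple[str, str]] = {              # what the database returned
    ("alice", "paris"), ("bob", "rome"), ("dave", "lima"),
}

invalid = answer - TRUE_WORLD        # false tuples reported -> violates validity
missing = TRUE_WORLD - answer        # true tuples omitted   -> violates completeness

print("valid:", not invalid, "| spurious:", invalid)
print("complete:", not missing, "| missing:", missing)
# a certification attached to the answer could report exactly these two sets.
```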
first, the model is defined and discussed. then, a specific mechanism is described that implements this model. with this mechanism, the determination of the integrity of an answer is a process analogous to the determination of the answer itself. amihai motro serf: odmg-based generic re-structuring facility the age of information management, and with it the advent of increasingly sophisticated technologies, has kindled a need in the database community and others to re-structure existing systems and move forward to make use of these new technologies. legacy application systems are being transformed to newer state-of-the-art systems, information sources are being mapped from one data model to another, and a diversity of data sources are being transformed to load, cleanse and consolidate data into modern data-warehouses [cr99]. re-structuring is thus a critical task for a variety of applications. for this reason, most object-oriented database systems (oodb) today support some form of re-structuring [tec94, obj93, bkkk87]. this existing support of current oodbs [bkkk87, tec94, obj93] is limited to a pre-defined taxonomy of simple fixed-semantic schema evolution operations. however, such simple changes, typically to individual types only, are not sufficient for many advanced applications [bre96]. more radical changes, such as combining two types or redefining the relationship between two types, are either very difficult or even impossible to achieve with current commercial database technology [tec94, obj93]. in fact, most oodbs would typically require the user to write ad-hoc programs to accomplish such transformations. research that has begun to look into the issue of complex changes [bre96, ler96] is still limited by providing a fixed set of some selected (even if now more complex) operations. to address these limitations of the current restructuring technology, we have proposed the serf framework, which aims at providing a rich environment for doing complex user-defined transformations flexibly, easily and correctly [cjr98b]. the goal of our work is to increase the usability and utility of the serf framework and its applicability to re-structuring problems beyond oodb evolution. towards that end, we provide re-usable transformations via the notion of serf templates that can be packaged into libraries, thereby increasing the portability of these transformations. we also now have a first cut at providing an assurance of consistency for the users of this system, and a semantic optimizer that provides some performance improvements via enhanced query optimization techniques with emphasis on the re-structuring primitives [cnr99]. in this demo we give an overview of the serf framework, its current status and the enhancements that are planned for the future. we also present an example of the application of serf to a domain other than schema evolution, namely web restructuring. e. a. rundensteiner k. claypool m. li l. chen z. zhang c. natarajan j. jin s. de lima s. weiner bibliographic searching: now within your reach barbara lawrence detection and resolution of deadlocks in distributed database systems kia makki niki pissinou towards on-line analytical mining in large databases jiawei han calculating constraints on relational expression this paper deals with the problem of determining which of a certain class of constraints hold on a given relational algebra expression where the base relations come from a given schema.
the class of constraints includes functional dependencies, equality of domains, and constancy of domains. the relational algebra consists of projection, selection, restriction, cross product, union, and difference. the problem as given is undecidable, but if set difference is removed from the algebra, there is a solution. operators specifying a closure function (similar to functional dependency closure on one relation) are defined; these will generate exactly the set of constraints valid on the given relational algebra expression. we prove that the operators are sound and complete. a. klug document filtering for fast ranking michael persin reexamining the cluster hypothesis: scatter/gather on retrieval results marti a. hearst jan o. pedersen a multiple device approach for supporting whiteboard-based interactions: jun rekimoto sketching storyboards to illustrate interface behaviors james a. landay brad a. myers database systems management and oracle8 oracle's corporate mission is to enable the information age through network computing, a vision of broader access to information for all and the empowerment and increased productivity that can result. the technology implications of the network computing vision are ubiquitous access via low- cost appliances to smaller numbers of larger databases, accessed via professionally managed networks compliant with open internetworking protocols. the latest release of the oracle data server, oracle8, provides new technology for management of very large databases containing rich and user-defined data types, and is continuing to evolve to make it economically beneficial to store all forms of digital information in a database. c. gregory doherty theme-based retrieval of web news (poster session) we present our framework for classification of web news, based on support vector machines, and some of the initial measurements of its accuracy. nuno maria mario j. silva multiple-view approach for smooth information retrieval toshiyuki masui mitsuru minakuchi george r. borden kouichi kashiwagi constant interaction-time scatter/gather browsing of very large document collections the scatter/gather document browsing method uses fast document clustering to produce table-of-contents-like outlines of large document collections. previous work [1] developed linear-time document clustering algorithms to establish the feasibility of this method over moderately large collections. however, even linear-time algorithms are too slow to support interactive browsing of very large collections such as tipster, the darpa standard text retrieval evaluation collection. we present a scheme that supports constant interaction-time scatter/gather of arbitrarily large collections after near- linear time preprocessing. this involves the construction of a cluster hierarchy. a modification of scatter/gather employing this scheme, and an example of its use over the tipster collection are presented. douglass r. cutting david r. karger jan o. pedersen a comparison of three user interfaces to relational microcomputer data bases payoff idea. different styles of user interfaces can dramatically affect data base capabilities. in an environment comprising many different data bases, the goal is to select one data base management system (dbms) that provides the best selection of design tools, minimizes development times, and enforces relational rules. 
this article presents a case study performed at the hospital of the university of pennsylvania, in which a test data base was developed for implementation with three dbmss, each with a distinctly different user and programmer interface. carl medsker margaret christensen il-yeol song group decision support systems: the cultural factor t. h. ho k. s. raman r. t. watson mmm: a user interface architecture for shared editors on a single screen eric a. bier steven freeman corrigenda: "the logical record access approach to database design" toby j. teorey james p. fry on design of network scheduling in parallel video-on-demand systems chow-sing lin an open agent architecture for integrating multimedia services p. charlton y. chen e. mamdani o. olsson j. pitt f. somers a. wearn tutorial database mining we view database mining as the efficient construction and verification of models of patterns embedded in large databases. many of the database mining problems have been motivated by the practical decision support problems faced by most large retail organizations. in the quest project at the ibm almaden research center, we have focussed on three classes of database mining problems involving classification, associations, and sequences. in this tutorial, i will draw upon my quest experience to present my perspective of database mining, describe current work, and present some open problems. rakesh agrawal impact of timing constraints on real-time database recovery jing huang le gruenwald the voodoo experience jin li a collaborative model of feedback in human-computer interaction manuel a. perez-quinones john l. sibert design and management of data warehouses report on the dmdw'99 workshop stella gatziu manfred jeusfeld martin staudt yannis vassiliou searching distributed collections with inference networks james p. callan zhihong lu w. bruce croft self-adaptive, on-line reclustering of complex object data a likely trend in the development of future cad, case and office information systems will be the use of object-oriented database systems to manage their internal data stores. the entities that these applications will retrieve, such as electronic parts and their connections or customer service records, are typically large complex objects composed of many interconnected heterogeneous objects, not thousands of tuples. these applications may exhibit widely shifting usage patterns due to their interactive mode of operation. such a class of applications would demand clustering methods that are appropriate for clustering large complex objects and that can adapt on-line to the shifting usage patterns. while most object-oriented clustering methods allow grouping of heterogeneous objects, they are usually static and can only be changed off- line. we present one possible architecture for performing complex object reclustering in an on-line manner that is adaptive to changing usage patterns. our architecture involves the decomposition of a clustering method into concurrently operating components that each handle one of the fundamental tasks involved in reclustering, namely statistics collection, cluster analysis, and reorganization. we present the results of an experiment performed to evaluate its behavior. these results show that the average miss rate for object accesses can be effectively reduced using a combination of rules that we have developed for deciding when cluster analyses and reorganizations should be performed. william j. 
mciver roger king the incinerate data model in this article, we present an extended relational algebra with universally or existentially quantified classes as attribute values. the proposed extension can greatly enhance the expressive power of relational systems, and significantly reduce the size of a database, at small additional computational cost. we also show how the proposed extensions can be built on top of a standard relational database system. h. v. jagadish transaction-oriented work-flow concepts in inter-organizational environments jian tang jari veijalainen building non-visual interaction through the development of the rooms metaphor anthony savidis constantine stephanidis the human aspects of computing: a note on this collection what makes computer users happy? can systems help humans to use them? does programming sharpen other thinking skills? is computer anxiety important? will programmers use ada packages? how do students learn programming concepts? are spelling correctors at their limits? when does the work load on a terminal get too heavy? are instruction sets too large? these nine questions define some important issues for those of us who study human factors---the ways hardware and software affect, and are affected by, their users. although the major accomplishments in the field will always rest on careful inspiration, to some degree these questions can be resolved empirically. generally, empirical work falls into three broad categories: with experiments the goal is to determine whether some design or principle a is better than some design or principle b. ideally, experimenters should make some tentative hypothesis first, or at least, decide what needs to be known. in data collection the objective is to observe users, give questionnaires, examine listings, count errors, or otherwise record data. by examining such data, we are able to draw conclusions and discover trouble spots. general observation is a method that involves complete systems or prototypes. these systems are observed, measured in broad ways, compared, or iteratively redesigned to complement the behavior of typical users. the goal is to understand the needs of users and to optimize system behavior. although not at all by choice, only the first two of these categories are represented in this collection. as with most papers that are accepted and appear in print, the reviewers or editors have some special reason for liking a paper in the first place. this reason may not appear in the reviews of the paper, but it does capture some underlying point or principle deemed to be important. each of the articles in this special section is a short description of a study addressing one of the nine questions. the results of these studies do not always reinforce conventional wisdom. some of the conclusions are provocative. i invite you to read the articles and to see whether you agree with their conclusions. henry ledgard combining multiple evidence from different types of thesaurus for query expansion rila mandala takenobu tokunaga hozumi tanaka automatic audio content analysis silvia pfeiffer stephan fischer wolfgang effelsberg systematic biasing of negative feedback amplifiers c. j. m. verhoeven a. 
van staveren multi-resolution indexing for shape images tzi-cker chiueh allen ballman kevin kreeger xel: extended ephemeral logging for log storage management extended ephemeral logging (xel) is a more general variation of the ephemeral logging (el) technique for managing a log of database activity on disk; it does not require a timestamp to be maintained with each object in the database. xel does not require periodic checkpoints and does not abort lengthy transactions as frequently as traditional firewall logging for the same amount of disk space. therefore, it is well suited for concurrent databases and applications which have a wide distribution of transaction lifetimes. simulation results indicate that xel can offer significant savings in disk space, at the expense of slightly higher bandwidth for logging and more main memory. the reduced size of the log permits much faster recovery after a crash as well as cost savings. john s. keen william j. dally analysis of recovery in a database system using a write-ahead log protocol in this paper we examine the recovery time in a database system using a write-ahead log protocol, such as aries [9], under the assumption that the buffer replacement policy is strict lru. in particular, analytical equations for log read time, data i/o, log application, and undo processing time are presented. our initial model assumes a read/write ratio of one, and a uniform access pattern. this is later generalized to include different read/write ratios, as well as a "hot set" model (i.e., x% of the accesses go to y% of the data). we show that in the uniform access model, recovery is dominated by data i/o costs, but under extreme hot-set conditions, this may no longer be true. furthermore, since we derive analytical equations, recovery can be analyzed for any set of parameter conditions not discussed here. anant jhingran pratap khedkar a theory of relaxed atomicity (extended abstract) eliezer levy henry f. korth abraham silberschatz webview materialization a _webview_ is a web page automatically created from base data typically stored in a dbms. given the multi-tiered architecture behind database-backed web servers, we have the option of materializing a webview inside the dbms, at the web server, or not at all, always computing it on the fly (virtual). since webviews must be up to date, materialized webviews are immediately refreshed with every update on the base data. in this paper we compare the three materialization policies (materialized inside the dbms, materialized at the web server, and virtual) analytically, through a detailed cost model, and quantitatively, through extensive experiments on an implemented system. our results indicate that materializing at the web server is a more scalable solution and can support an order of magnitude more users than the virtual and materialized-inside-the-dbms policies, even under high update workloads. alexandros labrinidis nick roussopoulos comparison of empirical testing and walkthrough methods in user interface evaluation we investigated the relative effectiveness of empirical usability testing and individual and team walkthrough methods in identifying usability problems in two graphical user interface office systems. the findings were replicated across the two systems and show that the empirical testing condition identified the largest number of problems, and identified a significant number of relatively severe problems that were missed by the walkthrough conditions.
team walkthroughs achieved better results than individual walkthroughs in some areas. about a third of the significant usability problems identified were common across all methods. cost-effectiveness data show that empirical testing required the same or less time to identify each problem when compared to walkthroughs. claire-marie karat robert campbell tarra fiegel on the content of materialized aggregate views we consider the problem of answering queries using only materialized views. we first show that if the views subsume the query from the point of view of the information content, then the query can be answered using only the views, but the resulting query might be extremely inefficient. we then focus on aggregate views and queries over a single relation, which are fundamental in many applications such as data warehousing. we show that in this case, it is possible to guarantee that as soon as the views subsume the query, it can be completely rewritten in terms of the views in a simple query language. our main contribution is the conception of various rewriting algorithms which run in polynomial time, and the proof of their completeness which relies on combinatorial arguments. finally, we discuss the choice of whether or not to materialize ratio views such as average and percentage, which is important for the design of materialized views. we show that it has an impact on the information content, which can be used to protect data, as well as on the maintenance of views. stephane grumbach leonardo tininini using a theoretical multimedia taxonomy framework multimedia (mm) is a polysemous term, a term with many definitions, and in this case, many roots. in this paper, multimedia is defined as the seamless integration of two or more media. each ancestor brings another requirement, muddying the field and making it difficult to work through. a multimedia taxonomy based on a previous media taxonomy is proposed to help organize the discipline. the taxonomy helps to classify the space called multimedia and to draw attention to difficult issues. the paper outlines the forms contributing to multimedia---text, sound, graphics, and motion---and aligns them with probable formats---elaboration, representation, and abstraction---and sets them within a context---audience, discipline, interactivity, quality, usefulness, and aesthetics. the contexts are more clearly defined in two areas: interactivity and the information basis for a discipline. examples are presented describing the use of the taxonomy in the design and evaluation of student projects in a computer science-based multimedia course. rachelle s. heller c. dianne martin nuzi haneef sonja gievska-krliu the goms family of user interface analysis techniques: comparison and contrast since the publication of the psychology of human-computer interaction, the goms model has been one of the most widely known theoretical concepts in hci. this concept has produced several goms analysis techniques that differ in appearance and form, underlying architectural assumptions, and predictive power. this article compares and contrasts four popular variants of the goms family (the keystroke-level model, the original goms formulation, ngomsl, and cpm-goms) by applying them to a single task example. bonnie e. john david e. kieras examples (solution session): the good, the bad, and the ugly and how to identify them at a glance lori e.
kaplan searching for unity among diversity: exploring the "interface" concept despite widespread interest in the human-computer interaction (hci) field, there remains much debate as to appropriate conceptual frameworks for the field, and even confusion surrounding the meaning of basic terms in the field. hci is seen by many as focusing on the design of interfaces to computer systems, yet exactly what is implied by this focus on "interfaces" is unclear. in this paper we show how a better understanding of what is meant by the interface is possible via the concept of abstraction levels. we show how this levels approach can clarify some ambiguities, and also how it can be related to different phases in the evolution of the human-computer interaction field itself. in this context, we are able to account for the recent interest in activity theory as a possible alternative framework for hci work, while stressing the need for hci research and design to consider each of the separate, but related, levels. kari kuutti liam j. bannon creating a networked computer science technical report library (poster) james r. davis important issues in hypertext documentation usability florence m. fillion craig d. b. boyle dynamic digital libraries for children the majority of current digital libraries (dls) are not designed forchildren. for dls to be popular with children, they need to be fun, easy-to-use and empower them, whether as readers or authors. this paper describes a new childrens dl emphasizing its design and evaluation, working with the children (11-14 year olds) as design partners and testers. a truly participatory process was used, and observational study was used as a means of refinement to the initial design of the dl prototype. in contrast with current dls, the childrens dl provides both a static as well as a dynamic environment to encourage active engagement of children in using it. design, implementation and security issues are also raised. yin leng theng norliza mohd-nasir george buchanan bob fields harold thimbleby noel cassidy query processing in the objectstore database system objectstore is an object-oriented database system supporting persistence orthogonal to type, transaction management, and associative queries. collections are provided as objects. the data model is non-1nf, as objects may have embedded collections. queries are integrated with the host language in the form of query operators whose operands are a collection and a predicate. the predicate may itself contain a (nested) query operating on an embedded collection. indexes on paths may be added and removed dynamically. collections, being treated as objects, may be referred to indirectly, e.g., through a by-reference argument. for this reason and others, multiple execution strategies are generated, and a final selection is made just prior to query execution. nested queries can result in interleaved execution and strategy selection. jack orenstein sam haradhvala benson margulies don sakahara towards an international information interface alison popowicz-toon eviatar shafrir hci, natural science and design: a framework for triangulation across disciplines wendy e. mackay anne-laure fayard distributed cooperative control for application sharing based on multiparty and multimedia desktop conferencing system: mermaid t. ohmori k. maeno s. sakata h. fukuoka k. 
watabe carat: a testbed for the performance evaluation of distributed database systems walt kohler bao-chyuan jenq predicting document access in large multimedia repositories network-accessible multimedia databases, repositories, and libraries are proliferating at a rapid rate. a crucial problem for these repositories remains timely and appropriate document access. in this article, we borrow a model from psychological research on human memory, which has long studied retrieval of memory items based on frequency and recency rates of past item occurrences. specifically, the model uses frequency and recency rates of prior document accesses to predict future document requests. the model is illustrated by analyzing the log file of document accesses to the georgia institute of technology world wide web (www) repository, a large multimedia repository exhibiting high access rates. results show that the model predicts document access rates with a reliable degree of accuracy. we describe extensions to the basic approach that combine the recency and frequency analyses and which incorporate repository structure and document type. these results have implications for the formulation of descriptive user models of information access in large repositories. in addition, we sketch applications in the areas of design of information systems and interfaces and their document-caching algorithms. margaret m. recker james e. pitkow empirical results on locality in database referencing database referencing behaviour is analyzed with respect to locality features. the analysis is based on database reference strings collected from several runs of typical batch programs accessing a real database. locality of reference is measured by the stack distance probability distribution, the number of block faults, and a locality measure based on the memory reservation size. in all the experiments, locality of reference is observed, but it is found to be weaker than in code referencing or even in some previous studies on database referencing. the phase/transition concept used in virtual memory systems is not well applicable to database referencing, since a large part of the locality set is constantly changing. the disruption of the phases is predominantly due to random referencing of data blocks. the references to index blocks show stronger locality. in some special cases, sequentiality is observed in the use of the data blocks. in general, neither replacement strategies developed for virtual memory systems nor prefetching techniques seem adequate for performance improvement of database referencing. a. inkeri verkamo hierarchies and relative operators in the olap environment in the last few years, numerous proposals for modelling and querying multidimensional databases (mddb) have been made. a rigorous classification of the different types of hierarchies is still an open problem. in this paper we propose and discuss some different types of hierarchies within a single dimension of a cube. these hierarchies divide a single dimension into different levels of aggregation. depending on them, we discuss the characterization of some olap operators that refer to hierarchies in order to maintain data cube consistency. moreover, we propose a set of operators for changing the hierarchy structure. the issues discussed provide modelling flexibility during the scheme design phase and correct data analysis.
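as a concrete, purely hypothetical illustration of levels within a single dimension and of a roll-up along them, the following python sketch models one dimension as an ordered list of levels; the time hierarchy, member names, and operator are invented for the example and are not taken from the paper above.

```python
# illustrative sketch: one cube dimension with an ordered hierarchy of levels,
# plus a roll-up operator that re-aggregates measures at a coarser level.
from collections import defaultdict

# hypothetical hierarchy for a "time" dimension: day -> month -> year
LEVELS = ["day", "month", "year"]          # finest to coarsest
PARENT = {                                 # maps a member to its parent member
    "2024-01-15": "2024-01", "2024-01-20": "2024-01", "2024-02-03": "2024-02",
    "2024-01": "2024", "2024-02": "2024",
}

def ancestor(member: str, from_level: str, to_level: str) -> str:
    """walk up the hierarchy from from_level to the coarser to_level."""
    steps = LEVELS.index(to_level) - LEVELS.index(from_level)
    for _ in range(steps):
        member = PARENT[member]
    return member

def roll_up(cells: dict, from_level: str, to_level: str) -> dict:
    """re-aggregate (sum) cell values when moving to a coarser level."""
    out = defaultdict(float)
    for member, value in cells.items():
        out[ancestor(member, from_level, to_level)] += value
    return dict(out)

day_sales = {"2024-01-15": 10.0, "2024-01-20": 5.0, "2024-02-03": 7.5}
print(roll_up(day_sales, "day", "month"))   # {'2024-01': 15.0, '2024-02': 7.5}
print(roll_up(day_sales, "day", "year"))    # {'2024': 22.5}
```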
elaheh pourabbas maurizio rafanelli assessing the climate for change: a methodology for managing human factors in a computerized information system implementation it is now well established that computers and other information technologies are instruments of organizational change. that is, successful implementation of an information system will change the way in which organizational members do their work. there is an extensive literature on social change which characterizes the change process and how to manage it. in particular, there is a subset of this literature which concerns the implementation of computerized information systems david g. hopelain linear approximation of planar spatial databases using transitive-closure logic we consider spatial databases in the plane that can be defined by polynomial constraint formulas. motivated by applications in geographic information systems, we investigate linear approximations of spatial databases and study in which language they can be expressed effectively. specifically, we show that they cannot be expressed in the standard first-order query language for polynomial constraint databases but that an extension of this first-order language with transitive closure suffices to express the approximation query in an effective manner. furthermore, we introduce an extension of transitive- closure logic and show that this logic is complete for the computable queries on linear spatial databases. this result together with our first result implies that this extension of transitive-closure logic can express all computable topological queries on arbitrary spatial databases in the plane. floris geerts bart kuijpers teraphim: an engine for distributed information retrieval owen de kretser alistair moffat justin zobel just-in-case linking vs. just-in-time-linking - the library without walls experience miriam blake herbert van de sompel hci education in sweden jan gulliksen lars oestreicher graph-based object-oriented approach for structural and behavioral representation of multimedia data the management of multimedia information poses special requirements for multimedia information systems. both representation and retrieval of the complex and multifaceted multimedia data are not easily handled with the flat relational model and require new data models. in the last several years, object-oriented and graph-based data models are actively pursued approaches for handling the multimedia information. in this paper the characteristics of the novel graph-based object-oriented data model are presented. this model represents the structural and behavioral aspects of data that form multimedia information systems. it also provides for handling the continuously changing user requirements and the complexity of the schema and data representation in multimedia information systems using the schema versioning approach and perspective version abstraction. ivan radev niki pissinou kia makki e. k. park "it's infrastructure all the way down" (keynote address) what is infrastructure and how shall we know it? as libraries move partly to desktops, one of the challenges facing the digital library community becomes designing for distributed use across many kinds of local circumstance. these circumstances vary widely in terms of people, resources, support, and technical configurations. designing for this variety means reconceptualizing "user meets screen" as "user meets infrastructure." 
this requires scaling up traditional design and evaluation methods, as well as a richer knowledge of the organizational and historical contexts of use. this talk addresses some of the methodological challenges involved in such work. susan leigh star (leigh) is professor of communication at the university of california, san diego. she received her ph.d in sociology of science and medicine from uc san francisco. before coming to ucsd in 1999, she was professor of information science at the university of illinois, urbana- champaign. she has also taught at uc irvine and keele university, in england, and several universities in scandinavia as guest professor. much of her research has been on the social implications and design of large-scale technology, especially information technology. among her publications are "the cultures of computing" (ed) (blackwell, 1995), "regions of the mind: brain research and the quest for scientific certainty" (stanford 1989), and (with geoffrey bowker), "sorting things out: classification and its consequences" (mit, 1999). she is volume editor for science and technology for the women's studies international encyclopedia (edited by cheris kramarae and dale spender), forthcoming from routledge in 2000. her current research concerns ethical and methodological dilemmas in on-line research with human subjects. susan leigh star facile 3d direct manipulation dan venolia security-control methods for statistical databases: a comparative study this paper considers the problem of providing security to statistical databases against disclosure of confidential information. security-control methods suggested in the literature are classified into four general approaches: conceptual, query restriction, data perturbation, and output perturbation. criteria for evaluating the performance of the various security-control methods are identified. security-control methods that are based on each of the four approaches are discussed, together with their performance with respect to the identified evaluation criteria. a detailed comparative analysis of the most promising methods for protecting dynamic- online statistical databases is also presented. to date no single security-control method prevents both exact and partial disclosures. there are, however, a few perturbation-based methods that prevent exact disclosure and enable the database administrator to exercise "statistical disclosure control." some of these methods, however introduce bias into query responses or suffer from the 0/1 query-set-size problem (i.e., partial disclosure is possible in case of null query set or a query set of size 1). we recommend directing future research efforts toward developing new methods that prevent exact disclosure and provide statistical- disclosure control, while at the same time do not suffer from the bias problem and the 0/1 query- set-size problem. furthermore, efforts directed toward developing a bias- correction mechanism and solving the general problem of small query-set-size would help salvage a few of the current perturbation- based methods. nabil r. adam john c. 
worthmann spatial memory and design: a conceptual approach to the creation of navigable space in multimedia design jean trumbo research alerts jennifer bruer a dual copy method for transaction separation with multiversion control for read-only transactions baojing lu qinghua zou william perrizo knowledgebase transformations we propose a language that expresses uniformly queries and updates on knowledgebases consisting of finite sets of relational structures. the language contains an operator that "inserts" arbitrary first-order sentences into knowledgebase. the semantics of the insertion is based on the notion of update formalized by katsuno and mendelzon in the context of belief revision theory. our language can express, among other things, hypothetical queries and queries on recursively indefinite databases. the expressive power of our language lies between existential second-order and general second-order queries. the data complexity is in general within exponential time, although it can be lowered to co-np and to polynomial time by restricting the form of queries and updates. gösta grahne alberto o. mendelzon peter z. revesz precision locking for nested transaction systems john kyu lee content-based browsing of video sequences a novel methodology to represent the contents of a video sequence is presented. the representation is used to allow the user to rapidly view a video sequence in order to find a particular point within the sequence and/or to decide whether the contents of the sequence are relevant to his or her needs. this system, referred to as content-based browsing, forms an abstraction to represent each shot of the sequence by using a representative frame, or an rframe, and it includes management techniques to allow the user to easily navigate the rframes. this methodology is superior to the current techniques of fast forward and rewind because rather than using every frame to view and judge the contents, only a few abstractions are used. therefore, the need to retrieve the video from a storage system and to transmit every frame over the network in its entirety no longer exists, saving time, expenses, and bandwidth. f. arman r. depommier a. hsu m.-y. chiu on the development of a site selection optimizer for distributed and parallel database systems fotis barlos ophir frieder controlled natural language interfaces (extended abstract): the best of three worlds this paper will discuss the problem of designing user-friendly interfaces for computer applications. in particular, we will describe an interface that is based on mapping formal into natural languages in a controlled and structured way. the basic approaches for designing interfaces range from formal or natural language to menu driven ones. formal language interfaces such as query or programming languages are typically powerful in terms of their manipulative capabilities, safe in terms of their side effects, and optimized in terms of their execution. however, they often are not especially user-friendly with respect to the formal detail they require users to specify or feedback such as error messages or interactive help when mistakes are made. the necessary semantics for execution are embedded in compilers and not accessible to the user in an understandable way. designers of natural language interfaces are generally concerned with anticipating how humans communicate within certain applications: what vocabulary and syntactic constructs need to be handled and what the range of variations or synonyms must be. 
natural language query systems are a case in point. they are created to allow users to submit their data base requests in more or less natural language, absorbing the burden of mapping different variations of the natural language expressions into a formal or computer interpretable form, thereby also resolving ambiguities. even though natural language processing has progressed in the last decade, available natural language interfaces are still disappointing with respect to both their natural language coverage on the one hand and their formal capabilities on the other. moreover, the customizing process for specific applications is necessarily tedious and requires linguistic expertise. menu driven systems guide users through prepared screens to their desired state of affairs. they replace the burden of typing and having to formulate problems by predefined menus with choices that have to be followed. while such an approach clearly defines the capabilities of the system and prevents common mistakes, the implementations of such systems tend to lack flexibility in maneuvering back and forth between states, skipping screens, and undoing decisions without losing the progress up to that point. the result is that users have to cope with too many screens that have to be visited to achieve their goals, particularly once they are familiar with the domain. our work, represented by the interpret system, is an attempt to combine the best of all three worlds by taking advantage of the power of the formal languages, the ease and friendliness of natural language, and the convenience and error security of menu driven systems. to that end, we "interpret" a well-established formal interface for the user in natural language. this means that rather than anticipating what and how users might express their goal in natural language, we try to semantically describe an existing formal interface using natural language as the target. the formal interface is thereby enriched through natural language modes: for those users who want to take advantage of the formal interface we provide feedback in the form of paraphrases of the formal commands and warnings and error messages of possible problems in their commands (perform); for those users who would rather use natural language, we provide a controlled mechanism in natural language to construct their commands in a flexible selection process (conform), where the type of language presented here is almost identical to the language used in the feedback mechanism. for our prototype we chose sql, a database query language for relational databases. in designing the feedback or interpretation mechanism (perform), our goal was to preserve as much of the sql structure as necessary to reflect the internal logic to the user, and at the same time produce english paraphrases that are as natural as possible. consider the following sql query: select dept, avg(comm), max(comm), min(comm) from staff, org where dept = deptno and sal > 25000 group by dept, which is translated by perform into: display - for each department - the average, largest, and smallest commission for employees with a salary of more than 25000 dollars. in order to produce such a paraphrase, perform has a semantic model of the sql language which is database or application independent. for example, it knows the relationship between the term "dept" in the "select"-clause and its occurrence in the "group"-clause.
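a minimal sketch of the clause-to-phrase mapping just illustrated; the function, lexicon entries, and condition tables below are hypothetical stand-ins rather than the actual perform lexicon or grammar, but they reproduce the paraphrase of the example query above.

```python
# toy paraphrase of the example query from the abstract:
#   select dept, avg(comm), max(comm), min(comm) from staff, org
#   where dept = deptno and sal > 25000 group by dept
# covers only this one pattern; perform's real semantic model is far richer.
LEXICON = {"dept": "department", "comm": "commission", "sal": "salary"}  # hypothetical entries
AGG_WORDS = {"avg": "average", "max": "largest", "min": "smallest"}
JOIN_PAIRS = {("dept", "deptno")}        # column pairs known to express a join
UNITS = {"sal": "dollars"}               # database-specific unit knowledge

def paraphrase(select, group_by, aggregates, conditions):
    parts = ["display"]
    if group_by in select:                                 # grouped column also selected:
        parts.append(f"- for each {LEXICON[group_by]} -")  # say "for each", drop the group-by clause
    agg_words = [AGG_WORDS[f] for f, _ in aggregates]
    agg_col = LEXICON[aggregates[0][1]]
    parts.append("the " + ", ".join(agg_words[:-1]) + ", and " + agg_words[-1] + " " + agg_col)
    for left, op, right in conditions:
        if (left, right) in JOIN_PAIRS:                    # join predicates are suppressed
            continue
        comp = {">": "more than", "<": "less than", "=": "equal to"}[op]
        unit = UNITS.get(left, "")
        parts.append(f"for employees with a {LEXICON[left]} of {comp} {right} {unit}".rstrip())
    return " ".join(parts) + "."

print(paraphrase(["dept"], "dept",
                 [("avg", "comm"), ("max", "comm"), ("min", "comm")],
                 [("dept", "=", "deptno"), ("sal", ">", "25000")]))
# display - for each department - the average, largest, and smallest commission
# for employees with a salary of more than 25000 dollars.
```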
based on that select/group-by relationship, perform inserts "for each" and suppresses the entire translation of the "group"-clause, since it is not modified any further. in addition to such application-independent knowledge, it also has to model relationships between terms such as "dept" and "deptno". in order to recognize the "where"-clause as a join phrase, which is also suppressed in this case, and to realize the semantics of "sal", resulting in the insertion of "dollars", perform has to gather database-specific information. this is accomplished in an interactive knowledge acquisition or customizing process (customizer), which asks users on average three questions about each term in their database, including a natural language phrase. with this information we build a lexicon that contains syntactic and semantic records for each item in a database. the same information is also used by conform, which allows users to construct their queries in natural language through selections from the screen: a skeleton of the entire query, with windows for selections, appears on the screen. users can scroll within windows and go back and forth between those windows to create and edit their query without having to scroll through many different menus. after selecting a general topic, which in database terms corresponds to one or more relations, the following query skeleton appears on the screen: find the [department | id number | last name | manager | salary] for employees with the following restrictions: with a [commission | first name | job description | phone number | zip code] [equal to | ending in | containing | greater than | smaller than]. the window on line 1 moves horizontally after each selection, and the windows of line 3 are repeated below the restriction after each completed selection. the user makes selections at every point before submitting the query, which is then translated into sql by the conform system. since our natural language scope is rather restrictive, the syntax that interpret recognizes is easily described to and learned by users. our technical approach to all three aspects (paraphrase, error message, and controlled natural language input) is based on a syntactic analysis of the input, which is then semantically evaluated with an attribute grammar, resulting in the appropriate output. the semantic information required is mainly of a referential and datatyping nature and has to be made available to different parts of the syntactic structure of the input string. the attribute grammar formalism has been very useful for these specific tasks and furthermore offers a consistent and transparent way to incrementally increase the semantic coverage. the perform and customizer modules of interpret are implemented for sql; its conform counterpart, i.e., the natural language front end, is in the implementation stage. the techniques and philosophy of interpreting formal interfaces through natural language also seem promising for the programming environment in general. the challenge in the programming context is to gain a conceptual understanding of the contents of code that would permit, for example, the generation of more insightful error messages and automatic documentation on the perform side and a controlled natural language environment for the programming task itself on the conform side.
eva-martin mueckstein six readings of a single text (panel): a videoanalytic session timothy koschmann modified field studies for cscw systems michelle potts steves jean scholtz information organization using rufus computer system users today are inundated with a flood of semi-structured information, such as documents, electronic mail, programs, and images. today, this information is typically stored in filesystems that provide limited support for organizing, searching, and operating upon this data, all operations that are vital to the ability of users to effectively use this data. database systems provide good function for organizing, searching, managing and writing applications on structured data. current database systems are inappropriate for semi-structured information because moving the data into the database breaks all existing applications that use the data. the rufus system attacks the problems of semi-structured information by using database function to help users manage semi-structured information without requiring that the user's information reside in the database. allen luniewski peter schwarz kurt shoens jim stamos john thomas a federated architecture for information management an approach to the coordinated sharing and interchange of computerized information is described emphasizing partial, controlled sharing among autonomous databases. office information systems provide a particularly appropriate context for this type of information sharing and exchange. a federated database architecture is described in which a collection of independent database systems are united into a loosely coupled federation in order to share and exchange information. a federation consists of components (of which there may be any number) and a single federal dictionary. the components represent individual users, applications, workstations, or other components in an office information system. the federal dictionary is a specialized component that maintains the topology of the federation and oversees the entry of new components. each component in the federation controls its interactions with other components by means of an export schema and an import schema. the export schema specifies the information that a component will share with other components, while the import schema specifies the nonlocal information that a component wishes to manipulate. the federated architecture provides mechanisms for sharing data, for sharing transactions (via message types) for combining information from several components, and for coordinating activities among autonomous components (via negotiation). a prototype implementation of the federated database mechanism is currently operational on an experimental basis. dennis heimbigner dennis mcleod the montage extensible datablade architecture michael ubell user friendly interfaces for data base systems as the number of non- computer scientist users increases, the importance of friendly operating systems does likewise. data base systems will serve one of the largest groups of such users. for these casual users, "friendly" takes on stronger meanings than in the world of computer scientists. systems must be easy to use even for the user who may only use a computer once a month or even less. greg w. scragg text databases and information retrieval ellen riloff lee hollaar remote usability testing monty hammontree paul weiler nandini nayak data base processor mage in this paper, we present the design of data base processor mage. 
this dbp is based on a hierarchical db access method. it is composed of two microprocessors, a disk processor and a moving head disk. the processor requirements, design decisions, and architecture are discussed. we start with an overview of dbp framework. the mage db access method is then summarized. finally, we present the design and implementation of the dbp, with a particular emphasis on the disk processor. g. berger sabbatel a video parsing, indexing and retrieval system h. j. zhang j. h. wu c. y. low s. w. smoliar the interface of the future s. levialdi a. n. badre m. chalmers p. copeland p. mussio c. solomon visual information management ramesh jain techniques for structuring database records salvatore t. march information design considerations for improving situation awareness in complex problem-solving the conventional techniques for task analysis derive the basic tasks that make up user actions. however, in the complex-problem solving environment, attempts to describe step-by-step actions breakdown because no single route to a solution exists. although individual tasks can be defined, task-analysis normally results in the tasks being divorced from context. however, to support complex problem-solving, the design must place the information within the situation context and allow users to develop and maintain situation awareness. michael j. albers the interagency digital library for science and engineering: a federated digital library pilot for the u.s. government scientist blaine baker john salerno open decdtm: constraint based transaction management open decdtm offers portable transaction management services layered on osf dce which support the application (tx), resource manager (xa), and transactional dce rpc (txrpc) interfaces specified by x/open. open decdtm also provides interoperability with osi transaction processing (osi tp) and openvms systems using the decdtm openvms protocol. protocols executed by open decdtm are specified by constraints. this simplifies the development of transactional gateways between different data transfer protocols and transaction models. johannes klein francis upton a computational model and classification framework for social navigation social navigation is the process of making navigational decisions in real or virtual environments based on social and communicative interaction with others. a computational model for social navigation is presented as an extension to an existing framework for general navigation, reducing decision- making to the minimization of cognitive costs. consideration for social navigation gives rise to a classification framework based on the synchronicity, directness, and social presence during social interaction, each of which has direct effect on the cognitive costs of navigational tasks. finally, a new recommender system, trailguide, is presented as a tool that facilitates social navigation by allowing authors to explicitly publish "trails" within and between world wide web pages. mark o. riedl dynamic map synthesis utilizing extended thesauruses and reuse of query generation process ken'ichi horikawa masatoshi arikawa hiroki takakura yahiko kambayashi bags and viewers: a metaphor for structuring a database browser robert inder jussi stader ocelot: a system for summarizing web pages we introduce ocelot, a prototype system for automatically generating the "gist" of a web page by summarizing it. 
although most text summarization research to date has focused on the task of news articles, web pages are quite different in both structure and content. instead of coherent text with a well- defined discourse structure, they are more often likely to be a chaotic jumble of phrases, links, graphics and formatting commands. such text provides little foothold for extractive summarization techniques, which attempt to generate a summary of a document by excerpting a contiguous, coherent span of text from it. this paper builds upon recent work in _non-extractive_ summarization, producing the gist of a web page by "translating" it into a more concise representation rather than attempting to extract a text span verbatim. ocelot uses probabilistic models to guide it in selecting and ordering words into a gist. this paper describes a technique for learning these models automatically from a collection of human-summarized web pages. adam l. berger vibhu o. mittal negative inertia: a dynamic pointing function r. c. barrett e. j. selker j. d. rutledge r. s. olyha conflicts and correspondence assertions in interoperable databases stefano spaccapietra christine parent e-mail as habitat: an exploration of embedded personal information management nicolas ducheneaut victoria bellotti a research prototype image retrieval system s. nepal m. v. ramakrishna j. a. thom integrating gps data within embedded internet gis arunas stockus alain bouju fred patrice boursier the "super" project m. andersson a-m. auddino y. dupont e. fontana m. gentile s. spaccapietra auto-summarization of audio-video presentations as streaming audio- video technology becomes widespread, there is a dramatic increase in the amount of multimedia content available on the net. users face a new challenge: how to examine large amounts of multimedia content quickly. one technique that can enable quick overview of multimedia is video summaries; that is, a shorter version assembled by picking important segments from the original. we evaluate three techniques for automatic creation of summaries for online audio-video presentations. these techniques exploit information in the audio signal (e.g., pitch and pause information), knowledge of slide transition points in the presentation, and information about access patterns of previous users. we report a user study that compares automatically generated summaries that are 20%-25% the length of full presentations to author generated summaries. users learn from the computer- generated summaries, although less than from authors' summaries. they initially find computer-generated summaries less coherent, but quickly grow accustomed to them. liwei he elizabeth sanocki anoop gupta jonathan grudin an access control model for video database systems elisa bertino moustafa a. hammad walid g. aref ahmed k. elmagarmid steerable media: interactive television via video synthesis chris marrin rob myers jim kent peter broadwell automatic generation of interactively consistent search dialogs dan r. olsen walter holladay viewpoint: information in the information age wei-lung wang finding relevant passages using noun-noun compounds (poster session): coherence vs. proximity intuitively, words forming phrases are a more precise description of content than words as a sequence of keywords. yet, evidence that phrases would be more effective for information retrieval is inconclusive. 
this paper isolates a neglected class of phrases that is abundant in communication, has an established theoretical foundation, and shows promise for an effective expression of the user's information need: the noun-noun compound (_nnc_). in an experiment, a variety of meaningful _nnc_s were used to isolate relevant passages in a large and varied corpus. in a first pass, passages were retrieved based on textual proximity of the words or their semantic peers. a second pass retained only passages containing a syntactically coherent structure equivalent to the original _nnc_. this second pass showed a dramatic increase in precision. preliminary results show the validity of our intuition about phrases in the special but very productive case of _nnc_s. eduard hoenkamp rob de groot gesture at the user interface: a chi '95 workshop alan wexelblat learning to use word processors: problems and prospects robert l. mack clayton h. lewis john m. carroll expected usability and product preference turkka keinonen a layered architecture for querying dynamic web content the design of webbases, database systems for supporting web-based applications, is currently an active area of research. in this paper, we propose a 3-layer architecture for designing and implementing webbases for querying dynamic web content (i.e., data that can only be extracted by filling out multiple forms). the lowest layer, the virtual physical layer, provides navigation independence by shielding the user from the complexities associated with retrieving data from raw web sources. next, the traditional logical layer supports site independence. the top layer is analogous to the external schema layer in traditional databases. within this architectural framework we address two problems unique to webbases --- retrieving dynamic web content in the virtual physical layer and querying of the external schema by the end user. the layered architecture makes it possible to automate data extraction to a much greater degree than in existing proposals. wrappers for the virtual physical schema can be created semi-automatically, by asking the webbase designer to navigate through the sites of interest --- we call this approach mapping by example. thus, the webbase designer need not have expertise in the language that maps the physical schema to the raw web (this should be contrasted with other approaches, which require expertise in various web-enabled flavors of sql). for the external schema layer, we propose a semantic extension of the universal relation interface. this interface provides powerful, yet reasonably simple, ad hoc querying capabilities for the end user compared to the currently prevailing "canned" form-based interfaces on the one hand or complex web-enabling extensions of sql on the other. finally, we discuss the implementation of the proposed architecture. hasan davulcu juliana freire michael kifer i. v. ramakrishnan conivas: content-based image and video access system mohamed abdel-mottaleb nevenka dimitrova ranjit desai jacquelyn martino software video production switcher david simpson richard fromm tina wong lawrence a. rowe conversation-based mail a new message communication paradigm based on conversations that provides an alternative to memo- and conference-based mail is described. a conversation-based message system groups messages into conversations, and orders messages within a conversation according to the context in which they were written.
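a minimal sketch of the grouping and ordering just described, assuming only that each message records which messages it was written in response to; the field names, data, and topological ordering below are illustrative, not the paper's actual context relation or implementation.

```python
# illustrative threading: group messages into conversations and order each
# conversation so every message follows the messages it was written in response to.
from collections import defaultdict

messages = [
    {"id": "m1", "conv": "c1", "context": []},            # context = ids this message responds to
    {"id": "m2", "conv": "c1", "context": ["m1"]},
    {"id": "m3", "conv": "c1", "context": ["m1"]},
    {"id": "m4", "conv": "c1", "context": ["m2", "m3"]},
]

def order_conversation(msgs):
    """topological order induced by the message context relation."""
    pending = {m["id"]: set(m["context"]) for m in msgs}
    by_id = {m["id"]: m for m in msgs}
    ordered = []
    while pending:
        ready = sorted(mid for mid, ctx in pending.items() if not ctx)
        if not ready:
            raise ValueError("cyclic context relation")
        for mid in ready:
            ordered.append(by_id[mid])
            del pending[mid]
        for ctx in pending.values():
            ctx.difference_update(ready)
    return ordered

conversations = defaultdict(list)
for m in messages:
    conversations[m["conv"]].append(m)        # grouping step

for conv, msgs in conversations.items():
    print(conv, [m["id"] for m in order_conversation(msgs)])   # c1 ['m1', 'm2', 'm3', 'm4']
```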
the message context relation leads to an efficient implementation of conversations in a distributed environment and supports a natural ordering of messages when viewed by the user. experience with a prototype demonstrates the workability of conversation-based mail and suggests that conversations provide a powerful tool for message communication. douglas e. comer larry l. peterson integration of interpersonal space and shared workspace: clearboard design and experiments we describe the evolution of the novel shared drawing medium clearboard which was designed to seamlessly integrate an interpersonal space and a shared workspace. clearboard permits coworkers in two locations to draw with color markers or with electronic pens and software tools while maintaining direct eye contact and the ability to employ natural gestures. the clearboard design is based on the key metaphor of "talking through and drawing on a transparent glass window." we describe the evolution from clearboard-1 (which enables shared video drawing) to clearboard-2 (which incorporates teampaint, a multiuser paint editor). initial observations and findings gained through the experimental use of the prototype, including the feature of "gaze awareness," are discussed. further experiments are conducted with clearboard-0 (a simple mockup), clearboard-1, and an actual desktop as a control. in the settings we examined, the clearboard environment led to more eye contact and potential awareness of the collaborator's gaze direction than the traditional desktop environment. hiroshi ishii minoru kobayashi jonathan grudin farming the web for systematic business intelligence (invited talk, abstract only) the technologies of data warehousing, data mining, hypertext analysis, information visualization, and web information resources are rapidly converging. the challenge is to architect these technologies into a system for systematic business intelligence for a corporation. we need to move from an information refining process that is often haphazard and narrow to one that is reliable and continuous. web farming is a new area that suggests a methodology and architecture for accomplishing this. richard hackathorn interfaces for cooperative work: an eclectic look at cscw '88 t. erickson any algorithm in the complex object algebra with powerset needs exponential space to compute transitive closure the abiteboul and beeri algebra for complex objects can express a query whose meaning is transitive closure, but the algorithm naturally associated with this query needs exponential space. we show that any other query in the algebra which expresses transitive closure needs exponential space. this proves that in general the powerset is an intractable operator for implementing fixpoint queries. dan suciu jan paredaens adding a collaborative agent to graphical user interfaces charles rich candace l. sidner a sophisticated microcomputer user interface the design and implementation of a menu-oriented interface for personal computers is discussed. factors pertaining to the cognitive limitations of users are examined and their impact on the design of the system is described. the major attributes of the system are (1) all communication between the operator and the computer is through menus or forms (which are analogous to hard copy documents); (2) extensive help is available at all times; (3) the interface can adapt to the experience of the user; (4) the display processing time is short; and (5) an external data format exists that completely defines the interface.
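attribute (5) above, an external data format that completely defines the interface, can be pictured with a purely hypothetical example: the structure below is not from the paper; it merely shows the general idea of a menu and form definition that an interpreter could render without changing application code.

```python
# hypothetical external definition of a tiny menu-plus-form interface.
# an interpreter would read a structure like this and render the screens;
# none of the screen, field, or help texts come from the paper.
INTERFACE = {
    "main_menu": {
        "title": "main menu",
        "help": "choose an item and press return",
        "items": [
            {"label": "enter a new order", "goto": "order_form"},
            {"label": "quit", "action": "exit"},
        ],
    },
    "order_form": {
        "title": "new order",
        "help": "fill in every field; press f1 for help on a field",
        "fields": [
            {"name": "customer", "type": "text", "required": True},
            {"name": "quantity", "type": "int", "min": 1},
        ],
        "on_submit": "save_order",
    },
}

def describe(screen_name: str) -> None:
    """print what a renderer would show for one screen definition."""
    screen = INTERFACE[screen_name]
    print(screen["title"], "-", screen["help"])
    for item in screen.get("items", []):
        print("  *", item["label"])
    for field in screen.get("fields", []):
        print("  field:", field["name"], "(" + field["type"] + ")")

describe("main_menu")
describe("order_form")
```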
the various components of the interface are discussed in detail, followed by a discussion of the implementation. richard snodgrass the miro dbms this short paper explains the key object-relational (or) dbms technology used by the miro dbms. michael stonebraker seven experiences with contextual field research m. good automatic generation of textual, audio, and animated help in uide: the user interface design research on automatic help generation fails to match the advance in user interface technology. with users and interfaces becoming increasingly sophisticated, generating help information must be presented with a close tie to the current work context. help research also needs to utilize the media technology to become effective in conveying information to users. our work on automatic generation of help from user interface specifications attempts to bridge the gaps, both between help and user interface making help truly sensitive to the interface context, and between the help media and the interface media making communication more direct and more effective. our work previously reported emphasized a shared knowledge representation for both user interface and help, and an architecture for automatic generation of context- sensitive animated help in smalltalk-80. this paper presents a new integrated architecture in c++ which not only generates animation, but also audio as procedural help. the architecture also uses the knowledge representation to automatically provide textual help of why an object in an interface is disabled. piyawadee noi sukaviriya jeyakumar muthukumarasamy anton spaans hans j. j. de graaff a transaction model supporting complex applications in integrated information systems p. klahold g. schlageter r. unland w. wilkes a direct manipulation interface for boolean information retrieval via natural language query this paper describes the design of a direct manipulation user interface for boolean information retrieval. intended to overcome the difficulties of manipulating explicit boolean queries as well as the "black box" drawbacks of so-called natural language query systems, the interface presents a two- dimensional graphical representation of a user's natural language query which not only exposes heuristic query transformations performed by the system, but also supports query reformulation by the user via direct manipulation of the representation. the paper illustrates the operation of the interface as implemented in the ai- stars full-text information retrieval system. p. g. anick j. d. brennan r. a. flynn d. r. hanssen b. alvey j. m. robbins knowledge-based support for the user interface design process (abstract) uwe malinowski kumiyo nakakoji jonas löwgren natural command names and initial learning: a study of text-editing terms in the first of two studies of "naturalness" in command names, computer-naive typists composed instructions to "someone else" for correcting a sample text. there was great variety in their task-descriptive lexicon and a lack of correspondence between both their vocabulary and their underlying conceptions of the editing operations and those of some computerized text editors. in the second study, computer-naive typists spent two hours learning minimal text- editing systems that varied in several ways. lexical naturalness (frequency of use in study 1) made little difference in their performance. by contrast, having different, rather than the same names for operations requiring different syntax greatly reduced difficulty. 
it is concluded that the design of user-compatible commands involves deeper issues than are captured by the slogan "naturalness." however, there are limitations to our observations. only initial learning of a small set of commands was at issue, and generalizations to other situations will require further testing. t. k. landauer k. m. galotti s. hartwell a test-bed for user interface designs most presently available interactive computer interfaces treat their users in an unfriendly, uncooperative, and inflexible way, resulting in feelings of frustration and a consequent loss of productivity for the users. these problems have led to attempts (e.g. [6, 8, 12, 13]) to make interfaces appear more friendly and cooperative through the addition of advanced interface features such as spelling correction, on-line help, personalized defaults, etc. while common sense suggests such features may be helpful, there is little hard evidence about how helpful they are or whether they are worth the overheads they entail. a primary reason for this lack of information is the practical difficulty of experimentation. many of these features are time-consuming to implement, are usually implemented without adequate instrumentation, and are implemented in different and difficult-to-compare ways from system to system (see [10], for example). these problems in evaluation suggest the need for a test-bed interface in which various advanced features could be tried out in a consistent and adequately instrumented way with a variety of application systems. in this paper, we present a detailed rationale and a partially implemented design for a test-bed of this kind. eugene ball phil hayes online help: exploring static information or constructing personal and collaborative solutions using hypertext dickie selfe stuart selber dan mcgavin johndan johnson-eilola carol brown implications for a gesture design tool allan christian long james a. landay lawrence a. rowe concerning sigdoc 92: text transformation and the world of multimedia documentation brad mehlenbacher security of statistical databases: multidimensional transformation statistical evaluation of databases which contain personal records may entail risks for the confidentiality of the individual records. the risk has increased with the availability of flexible interactive evaluation programs which permit the use of trackers, the most dangerous class of snooping tools known. a class of trackers, called union trackers, is described. they permit reconstruction of the entire database without supplementary knowledge and include the general tracker recently described as a special case. for many real statistical databases the overwhelming majority of definable sets of records will form trackers. for such databases a random search for a tracker is likely to succeed rapidly. individual trackers are redefined and counted and their cardinalities are investigated. if there are n records in the database, then most individual trackers employ innocent cardinalities near n/3, making them difficult to detect. disclosure with trackers usually requires little effort per retrieved data element. jan schlöer diogenes: a web search agent for person images yuksel alp aslandogan clement t. yu a response to the commentaries on corr this paper responds to specific comments on, suggestions about, and analysis of acm's computing research repository (corr), arguing that corr is both viable and suitably placed amid current online publishing alternatives. joseph y.
halpern chickens and eggs - the interrelationship of systems and theory this paper describes a personal perspective of the kinds of contributions that systems research and theoretical research make to one another, particularly in the database area. examples of each kind of contribution are given, and then several case studies from the author's personal experience are presented. the case studies illustrate database systems research where theoretical work contributed to systems results and vice versa. areas of database systems which need more contributions from the theoretical community will also be presented. p. selinger seave: a mechanism for verifying user presuppositions in query systems every information system incorporates a database component, and a frequent activity of users of information systems is to present it with queries. these queries reflect the presuppositions of their authors about the system and the information it contains. with most query processors, queries that are based on erroneous presuppositions often result in null answers. these fake nulls are misleading, since they do not point out the user's erroneous presuppositions (and can even be interpreted as their affirmation). this article describes the seave mechanism for extracting presuppositions from queries and verifying their correctness. the verification is done against three repositories of information: the actual data, their integrity constraints, and their completeness assertions. consequently, queries that reflect erroneous presuppositions are answered with informative messages instead of null answers, and user-system communication is thus improved (an aspect that is particularly important in systems that are often accessed by naive users). first, the principles of seave are described abstractly. then, specific algorithms for implementing it with relational databases are presented, including a new method for storing knowledge and an efficient algorithm for processing queries against the knowledge. amihai motro why goms? bonnie john partitioned signature files: design issues and performance evaluation a signature file acts as a filtering mechanism to reduce the amount of text that needs to be searched for a query. unfortunately, the signature file itself must be exhaustively searched, resulting in degraded performance for a large file size. we propose to use a deterministic algorithm to divide a signature file into partitions, each of which contains signatures with the same "key." the signature keys in a partition can be extracted and represented as the partition's key. the search can then be confined to the subset of partitions whose keys match the query key. our main concern here is to study methods for obtaining the keys and their performance in terms of their ability to reduce the search space. owing to the reduction of search space, partitioning a signature file has a direct benefit in a sequential search (single-processor) environment. in a parallel environment, search can be conducted in parallel effectively by allocating one or more partitions to a processor. partitioning the signature file with a deterministic method (as opposed to a random partitioning scheme) provides intraquery parallelism as well as interquery parallelism. in this paper, we outline the criteria for evaluating partitioning schemes. three algorithms are described and studied. an analytical study of the performance of the algorithms is provided and the results are verified with simulation.
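one simple way to picture such deterministic partitioning is a fixed-prefix key: the first few bits of each signature name its partition, and a query only visits partitions whose key covers the query key. the python sketch below illustrates that general idea; it is an assumption-laden example, not one of the three algorithms studied in the paper, and the documents and key width are invented.

```python
# illustrative fixed-prefix partitioning of superimposed-coding signatures.
from collections import defaultdict

SIG_BITS, KEY_BITS = 16, 4          # signature width and key (prefix) width

def signature(words, bits=SIG_BITS):
    """superimposed coding: or together one hashed bit per word."""
    sig = 0
    for w in words:
        sig |= 1 << (hash(w) % bits)
    return sig

def key(sig):
    """deterministic partition key: the high-order KEY_BITS of the signature."""
    return sig >> (SIG_BITS - KEY_BITS)

# build the partitioned signature file
partitions = defaultdict(list)
docs = {1: ["database", "query"], 2: ["image", "retrieval"], 3: ["query", "optimizer"]}
for doc_id, words in docs.items():
    sig = signature(words)
    partitions[key(sig)].append((doc_id, sig))

def search(query_words):
    qsig = signature(query_words)
    qkey = key(qsig)
    hits = []
    for pkey, entries in partitions.items():
        if pkey & qkey != qkey:      # partition key cannot cover the query key: skip it
            continue
        for doc_id, sig in entries:  # usual signature test inside surviving partitions
            if sig & qsig == qsig:
                hits.append(doc_id)
    return hits                      # candidate list; false drops are resolved against the text

print(search(["query"]))             # candidates include documents 1 and 3
```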
dik lun lee chun-wu leng object-oriented database systems: in transition françois bancilhon won kim sierra: an interactive system for ergonomic realization of applications this short paper discusses some early work on the sierra (systeme interactif pour l'ergonomie de realisation d'applications - an interactive system for ergonomic realization of applications) tool for managing guidelines. the goal of this tool is to develop a hypermedia system managing guidelines and human-computer principles. jean vanderdonckt experiences with workflow management: issues for the next generation workflow management is a technology that is considered strategically important by many businesses, and its market growth shows no signs of abating. it is, however, often viewed with skepticism by the research community, conjuring up visions of oppressed workers performing rigidly-defined tasks on an assembly line. although the potential for abuse no doubt exists, workflow management can instead be used to help individuals manage their work and to provide a clear context for performing that work. a key challenge in the realization of this ideal is the reconciliation of workflow process models and software with the rich variety of activities and behaviors that comprise "real" work. our experiences with the inconcert workflow management system are used as a basis for outlining several issues that will need to be addressed in meeting this challenge. this is intended as an invitation to cscw researchers to influence this important technology in a constructive manner by drawing on research and experience. kenneth r. abbott sunil k. sarin power to the people: end-user building of digital library collections naturally, digital library systems focus principally on the reader: the consumer of the material that constitutes the library. in contrast, this paper describes an interface that makes it easy for people to build their own library collections. collections may be built and served locally from the user's own web server, or (given appropriate permissions) remotely on a shared digital library host. end users can easily build new collections styled after existing ones from material on the web or from their local files---or both, and collections can be updated and new ones brought on-line at any time. the interface, which is intended for non-professional end users, is modeled after widely used commercial software installation packages. lest one quail at the prospect of end users building their own collections on a shared system, we also describe an interface for the administrative user who is responsible for maintaining a digital library installation. ian h. witten david bainbridge stefan j. boddie the organizational implementation of an electronic meeting system: an analysis of the innovation process electronic meeting systems (ems) are slowly moving out of university environments into work organizations. they constitute an innovative method of supporting group meetings. this paper reports on the innovation process in one organization that has recently adopted and implemented an ems. the paper traces the innovation process through four stages: conception of an idea; proposal; decision to adopt; and implementation. important factors from the innovation literature are considered as explanators of the innovation process involving ems in this particular organization. joey f. george joseph s. valacich j. f.
nunamaker comparing representations with relational and eer models the diffusion of technology to end users who can now develop their own information systems raises issues concerning the cost, quality, efficiency, and accuracy of such systems. d. batra j. a. hoffler r. p. bostrom the ores temporal database management system babis theodoulidis aziz ait-braham george andrianopoulos jayant chaudhary george karvelis simon sou the action workflow approach to workflow management technology raul medina-mora terry winograd rodrigo flores fernando flores plans for the trec-9 web track david hawking the state of practice of data administration - 1981 mark l. gillenson architecture of a networked image search and retrieval system large scale networked image retrieval systems face a number of problems that are not fully satisfied by current systems. on one hand, integrated solutions that store all image data centrally are often limited in terms of scalability and autonomy of data providers. on the other hand, www-based search engines proved to be fairly scalable, and data providers retain their autonomy. however, such engines often confront users with links to servers that are not available or to images that no longer exist, i.e., they are unable to keep their meta-database consistent with the repositories' contents. furthermore, existing solutions often neglect the cost of image delivery. the considerable variations in the effective bandwidth in today's internet lead to highly unpredictable response times, which are often intolerable from the user's point of view. this paper presents the architecture of chariot, a networked image search and retrieval system that tackles these concerns. with respect to scalability and autonomy, chariot follows the approach of www-based search engines by maintaining only the meta-data in a central database. various specialized components (feature extraction, indexes, image servers) are coordinated by a middleware component that employs transactional process management to enforce consistency between the meta-data and all components. moreover, chariot incorporates mechanisms to provide more predictable response times for the image delivery over the internet by employing network-aware image servers. these servers trade off the quality of the images to be delivered with the bandwidth required to transmit the images. r. weber j. bollinger t. gross h.-j. schek 101 spots, or how do users read menus?: antti aaltonen aulikki hyrskykari kari-jouko räihä instruction sets for evaluating arithmetic expressions e. g. coffman ravi sethi evaluation francisco v. cipolla ficarra punyashloke mishra kim nguyen blair nonnecke jenny preece gary marchionini the database research group at eth zurich moira c. norrie stephen m. blott hans-jörg schek gerhard weikum versioning hypermedia fabio vitali variations in relevance judgments and the measurement of retrieval effectiveness ellen m. voorhees a goms analysis of the advanced automated cockpit sharon irving peter polson j. e. irving reports of the workshop on interdisciplinary theory for cscw design william hunt data mining case study: modeling the behavior of offenders who commit serious sexual assaults this paper looks at the use of a self-organizing map (som) to link records of crimes of serious sexual attacks. once linked, a profile of the offender(s) responsible can be derived. the data was drawn from the major crimes database at the national crime faculty of the national police staff college bramshill uk. the data was encoded from text by a small team of specialists working to a well-defined protocol. the encoded data was analyzed using soms. two exercises were conducted. these resulted in the linking of several offences into clusters, each of which was sufficiently similar to have possibly been committed by the same offender(s). a number of clusters were used to form profiles of offenders. some of these profiles were confirmed by independent analysts as belonging to known offenders, or appeared sufficiently interesting to warrant further investigation. the prototype was developed over 10 weeks. this contrasts with an in-house study using a conventional approach, which took 2 years to reach similar results. as a consequence of this study, the ncf intends to pursue an in-depth follow-up study. richard adderley peter b. musgrove
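as an illustration of the som-based clustering described in the preceding abstract, a minimal sketch follows. it is not the authors' system: the grid size, training schedule, and the binary behavioural feature encoding are assumptions made for the example.

```python
# illustrative sketch only -- not the authors' system. it shows the general
# mechanics of using a self-organizing map (som) to group encoded offence
# records; records mapping to the same cell become a candidate cluster.
# feature encoding, grid size and training schedule are hypothetical.
import numpy as np

def train_som(records, grid=(10, 10), iters=5000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.random((grid[0], grid[1], records.shape[1]))
    ys, xs = np.indices(grid)
    for t in range(iters):
        x = records[rng.integers(len(records))]
        # best-matching unit (bmu): the cell whose weight vector is closest to x
        d = np.linalg.norm(weights - x, axis=2)
        by, bx = np.unravel_index(np.argmin(d), grid)
        # decaying learning rate and neighbourhood radius
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        h = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)
    return weights

def assign_clusters(records, weights):
    # each record is assigned to its nearest som cell
    d = np.linalg.norm(weights[None] - records[:, None, None, :], axis=3)
    return d.reshape(len(records), -1).argmin(axis=1)

# usage: rows are encoded behavioural features of each offence record
encoded = np.random.default_rng(1).integers(0, 2, size=(200, 40)).astype(float)
cells = assign_clusters(encoded, train_som(encoded))
```

records that land in the same cell can then be reviewed by analysts as candidate linked offences, which is the step the abstract reports being confirmed independently.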
evaluation of semantic hypermedia links for reading of scholarly writing (abstract) ajaz r. rana eduardo morales the dedale system for complex spatial queries this paper presents dedale, a spatial database system intended to overcome some limitations of current systems by providing an abstract and non-specialized data model and query language for the representation and manipulation of spatial objects. dedale relies on a logical model based on linear constraints, which generalizes the constraint database model of [kkr90]. while in the classical constraint model, spatial data is always decomposed into its convex components, in dedale holes are allowed to fit the needs of practical applications. the logical representation of spatial data, although slightly more costly in memory, has the advantage of simplifying the algorithms. dedale relies on nested relations, in which all sorts of data (thematic, spatial, etc.) are stored in a uniform fashion. this new data model supports declarative query languages, which allow an intuitive and efficient manipulation of spatial objects. their formal foundation constitutes a basis for practical query optimization. we describe several evaluation rules tailored for geometric data and give the specification of an optimizer module for spatial queries. except for the latter module, the system has been fully implemented upon the o2 dbms, thus proving the effectiveness of a constraint-based approach for the design of spatial database systems. stephane grumbach philippe rigaux luc segoufin yin and yang in computer science a. c. sodan methodologies for evaluation of collaborative systems workshop jill drury an integrated information system on the web for catchment management kenny taylor mark cameron jason haines analysis of a very large web search engine query log craig silverstein hannes marais monika henzinger michael moricz bounded ignorance in replicated systems narayanan krishnakumar arthur j. bernstein managing metaphors for advanced user interfaces user interface design includes designing metaphors, the essential terms, concepts, and images representing data, functions, tasks, roles, organizations, and people. advanced user interfaces require consideration of new metaphors and repurposing of older ones. awareness of semiotics principles can assist researchers in developing more efficient and effective ways to communicate to more diverse user groups. aaron marcus automatic generation of overview timelines we present a statistical model of feature occurrence over time, and develop tests based on classical hypothesis testing for significance of term appearance on a given date. using additional classical hypothesis testing we are able to combine these terms to generate "topics" as defined by the topic detection and tracking study. the groupings of terms obtained can be used to automatically generate an interactive timeline displaying the major events and topics covered by the corpus. to test the validity of our technique, we extracted a large number of these topics from a test corpus and had human evaluators judge how well the selected features captured the gist of the topics, and how they overlapped with a set of known topics from the corpus. the resulting topics were highly rated by evaluators who compared them to known topics. russell swan james allan
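the per-term, per-date significance testing that the overview-timelines abstract relies on can be sketched as follows. the statistic used here (a chi-square test on a 2x2 contingency table of term-on-date versus term-elsewhere counts), the corpus format, and the threshold are assumptions for illustration, not necessarily the authors' exact model.

```python
# a minimal sketch, not swan and allan's exact model: flag (term, date) pairs
# whose observed frequency on a date departs significantly from the term's
# overall corpus rate, using a chi-square test on a 2x2 contingency table.
from collections import Counter, defaultdict
from scipy.stats import chi2_contingency

def significant_terms(docs, alpha=0.001):
    """docs: iterable of (date, list_of_terms). returns {date: [terms]}."""
    on_date = defaultdict(Counter)   # term counts per date
    total = Counter()                # term counts over the whole corpus
    tokens_per_date = Counter()
    for date, terms in docs:
        on_date[date].update(terms)
        total.update(terms)
        tokens_per_date[date] += len(terms)
    n_tokens = sum(total.values())

    bursts = defaultdict(list)
    for date, counts in on_date.items():
        for term, a in counts.items():
            b = tokens_per_date[date] - a   # other terms, this date
            c = total[term] - a             # this term, other dates
            d = n_tokens - a - b - c        # everything else
            _, p, _, _ = chi2_contingency([[a, b], [c, d]])
            if p < alpha and a / tokens_per_date[date] > total[term] / n_tokens:
                bursts[date].append(term)
    return bursts
```

terms flagged for the same date can then be grouped into candidate topics, the step the abstract performs with further hypothesis testing.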
locking expressions for increased database concurrency anthony klug effects of interfaces for annotation on communication in a collaborative task: patricia g. wojahn christine m. neuwirth barbara bullock using information scent to model user information needs and actions and the web on the web, users typically forage for information by navigating from page to page along web links. their surfing patterns or actions are guided by their information needs. researchers need tools to explore the complex interactions between user needs, user actions, and the structures and contents of the web. in this paper, we describe two computational methods for understanding the relationship between user needs and user actions. first, for a particular pattern of surfing, we seek to infer the associated information need. second, given an information need and some pages as starting points, we attempt to predict the expected surfing patterns. the algorithms use a concept called "information scent", which is the subjective sense of value and cost of accessing a page based on perceptual cues. we present an empirical evaluation of these two algorithms, and show their effectiveness. ed h. chi peter pirolli kim chen james pitkow realizing a video environment: europarc's rave system at europarc, we have been exploring ways to allow physically separated colleagues to work together effectively and naturally. in this paper, we briefly discuss several examples of our work in the context of three themes that have emerged: the need to support the full range of shared work; the desire to ensure privacy without giving up unobtrusive awareness; and the possibility of creating systems which blur the boundaries between people, technologies and the everyday world. william gaver thomas moran allan maclean lennart lövstrand paul dourish kathleen carter william buxton editorial steven pemberton a simple guide to five normal forms in relational database theory the concepts behind the five principal normal forms in relational database theory are presented in simple terms. william kent the paper model for computer-based writing ann fatton staffan romberger kerstin severinson eklundh temporal database modeling: an object-oriented approach ramez elmasri vram kouramajian shian fernando flexible support for business processes: extending cooperative hypermedia with process support jörg m. haake weigang wang the user-centred iterative design of collaborative writing software ronald m. baecker dimitrios nastos ilona r. posner kelly l. mawby the data-document distinction in information retrieval david c. blair cyberdesk: automated integration of desktop and network services andrew wood anind dey gregory d.
abowd using spatial cues to improve videoconferencing abigail sellen bill buxton john arnott transactions and consistency in distributed database systems the concepts of transaction and of data consistency are defined for a distributed system. the cases of partitioned data, where fragments of a file are stored at multiple nodes, and replicated data, where a file is replicated at several nodes, are discussed. it is argued that the distribution and replication of data should be transparent to the programs which use the data. that is, the programming interface should provide location transparency, replica transparency, concurrency transparency, and failure transparency. techniques for providing such transparencies are abstracted and discussed. by extending the notions of system schedule and system clock to handle multiple nodes, it is shown that a distributed system can be modeled as a single sequential execution sequence. this model is then used to discuss simple techniques for implementing the various forms of transparency. irving l. traiger jim gray cesare a. galtieri bruce g. lindsay learning to extract hierarchical information from semi-structured documents wai-yip lin wai lam dolphin: integrated meeting support across local and remote desktop environments and liveboards this paper describes dolphin, a fully group aware application designed to provide computer support for different types of meetings: face-to-face meetings with a large interactive electronic whiteboard with or without networked computers provided for the participants, extensions of these meetings with remote participants at their desktop computers connected via computer and audio/video networks, and/or participants in a second meeting room also provided with an electronic whiteboard as well as networked computers. dolphin supports the creation and manipulation of informal structures (e.g., freehand drawings, handwritten scribbles), formal structures (e.g., hypermedia documents with typed nodes and links), their coexistence, and their transformation. norbert a. streitz jorg geibler jorg m. haake jeroen hol "user revealment" - a comparison of initial queries and ensuing question development in online searching and in human reference interactions ragnar nordlie hypertext functionality michael bieber harri oinas-kukkonen v. balasubramanian the statistical security of a statistical database this note proposes a statistical perturbation scheme to protect a statistical database against compromise. the proposed scheme can handle the security of numerical as well as nonnumerical sensitive fields. furthermore, knowledge of some records in a database does not help to compromise unknown records. we use chebyshev's inequality to analyze the trade-offs among the magnitude of the perturbations, the error incurred by statistical queries, and the size of the query set to which they apply. we show that if the statistician is given absolute error guarantees, then a compromise is possible, but the cost is made exponential in the size of the database. j. f. traub y. yemini h. wozniakowski the garlic project m. tork roth m. arya l. haas m. carey w. cody r. fagin p. schwarz j. thomas e. wimmers integrated search tools for newspaper digital libraries (demonstration session) s. l. mantzaris b. gatos n. gouraros p. 
tzavelis web navigation: resolving conflicts between the desktop and the web: a chi98 workshop carola fellenz jarmo parkkinen hal shubin management of information technology innovation: a heuristic contingency paradigm research perspective this paper presents a contingency model which causally relates information technology acquisition and diffusion (it/ad) to organization culture (heterogeneous versus homogeneous), organization learning (innovative versus adaptive), and knowledge sharing (networked versus hierarchical). the model integrates multiple and well-grounded theoretical streams of research. these three primary driving forces interact in a recursive dynamic, expressed in both rational driving forces and political driving forces. this paper focuses on the political driving forces, operationalizing them with five categories of measurement variables. a preliminary set of research propositions associated with the five categories of political driving forces is presented. future research is suggested, addressing moderating variables, and information technology acquisition and diffusion patterns of s-curves. mathew j. klempa support for collaborative design: agents and emergence ernest a. edmonds linda candy rachel jones bassel soufi characterization of hierarchies and some operators in olap environment numerous proposals for modelling and querying multidimensional databases (mddb) have recently been made. among the still open problems there is a rigorous classification of the different types of hierarchies. in this paper we propose and discuss some different types of hierarchies within a single dimension of a cube. these hierarchies divide a single dimension into different levels of aggregation. depending on them, we discuss the characterization of some olap operators which refer to hierarchies in order to maintain data cube consistency. moreover, we propose a set of operators for changing the hierarchy structure. the issues discussed provide modelling flexibility during the scheme design phase and correct data analysis. elaheh pourabbas maurizio rafanelli the is effectiveness matrix: the importance of stakeholder and system in measuring is success peter b. seddon d. sandy staples ravi patnayakuni matthew j. bowtell the araneus web-based management system g. mecca p. atzeni a. masci g. sindoni p. merialdo application of the entity-relationship model to picture representation (abstract only) the entity-relationship diagram technique has been applied to picture representation. the advantages of this approach together with illustrative examples are presented. the applications of the concepts and techniques developed in fuzzy sets, fuzzy languages, similarity retrieval techniques, and similarity retrieval for pictorial databases, to picture representation are also presented. edward t. lee caching and database scaling in distributed shared-nothing information retrieval systems a common class of existing information retrieval systems provides access to abstracts. for example, stanford university, through its folio system, provides access to the inspec database of abstracts of the literature on physics, computer science, electrical engineering, etc. in this paper this database is studied using a trace-driven simulation. we focus on physical index design, inverted index caching, and database scaling in a distributed shared-nothing system. all three issues are shown to have a strong effect on response time and throughput. database scaling is explored in two ways. one way assumes an "optimal" configuration for a single host and then linearly scales the database by duplicating the host architecture as needed. the second way determines the optimal number of hosts given a fixed database size. anthony tomasic hector garcia-molina
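the inverted index caching studied in the abstract above can be illustrated with a toy cache. the lru policy, the byte accounting, and the fetch interface are placeholders chosen for the example; they are not details taken from the paper's simulator.

```python
# a toy version of inverted-list caching: cache whole posting lists under an
# lru policy and measure the hit rate over a query trace. the fetch function,
# size model, and trace format are assumptions, not the paper's design.
from collections import OrderedDict

class PostingListCache:
    def __init__(self, capacity_bytes, fetch_from_disk):
        self.capacity = capacity_bytes
        self.used = 0
        self.fetch = fetch_from_disk          # term -> list of doc ids
        self.cache = OrderedDict()            # term -> postings, lru order
        self.hits = self.misses = 0

    def get(self, term):
        if term in self.cache:
            self.hits += 1
            self.cache.move_to_end(term)      # mark as most recently used
            return self.cache[term]
        self.misses += 1
        postings = self.fetch(term)
        size = 4 * len(postings)              # assume 4 bytes per doc id
        while self.used + size > self.capacity and self.cache:
            _, evicted = self.cache.popitem(last=False)
            self.used -= 4 * len(evicted)
        self.cache[term] = postings
        self.used += size
        return postings

def run_trace(cache, queries):
    # replay a query trace: every query term goes through the cache
    for q in queries:
        for term in q:
            cache.get(term)
    return cache.hits / (cache.hits + cache.misses)
```

replaying a query trace through such a cache is enough to see the kind of hit-rate versus memory trade-off that the trace-driven simulation in the paper measures at much greater fidelity.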
directory service for groupwork (abstract) r. kanman c. l. chen michael packer hawa singh the database group at national technical university of athens (ntua) corporate national technical univ. of athens predicting the performance of linearly combined ir systems christopher c. vogt garrison w. cottrell visualizing the evolution of web ecologies: ed h. chi james pitkow jock mackinlay peter pirolli rich gossweiler stuart k. card improving the effectiveness of information retrieval with local context analysis techniques for automatic query expansion have been extensively studied in information retrieval research as a means of addressing the word mismatch between queries and documents. these techniques can be categorized as either global or local. while global techniques rely on analysis of a whole collection to discover word relationships, local techniques emphasize analysis of the top-ranked documents retrieved for a query. while local techniques have been shown to be more effective than global techniques in general, existing local techniques are not robust and can seriously hurt retrieval when few of the retrieved documents are relevant. we propose a new technique, called local context analysis, which selects expansion terms based on cooccurrence with the query terms within the top-ranked documents. experiments on a number of collections, both english and non-english, show that local context analysis offers more effective and consistent retrieval results. jinxi xu w. bruce croft
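a simplified stand-in for the expansion-term selection described in the local context analysis abstract above: score candidate terms by their co-occurrence with the query terms inside the top-ranked documents, weight by a rough idf, and add the best few to the query. the scoring formula here is illustrative and is not xu and croft's exact formulation.

```python
# simplified stand-in for local context analysis: rank candidate expansion
# terms by co-occurrence with the query terms in the top-ranked documents.
# the idf weighting and the scoring formula are illustrative assumptions.
import math
from collections import Counter

def expansion_terms(query_terms, top_docs, n_docs_in_collection, df, k=10):
    """top_docs: list of token lists for the top-ranked documents.
    df: dict mapping term -> document frequency in the whole collection."""
    co = Counter()                    # co-occurrence credit per candidate term
    for doc in top_docs:
        counts = Counter(doc)
        for cand in counts:
            if cand in query_terms:
                continue
            # credit the candidate for each query term it appears with
            co[cand] += sum(min(counts[cand], counts[q])
                            for q in query_terms if q in counts)
    scored = {c: co[c] * math.log(n_docs_in_collection / (1 + df.get(c, 0)))
              for c in co}
    return sorted(scored, key=scored.get, reverse=True)[:k]

# usage: expand the query with the k best co-occurring terms, then re-retrieve
# expanded = set(query_terms) | set(expansion_terms(query_terms, top_docs, N, df))
```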
automatic abstracting of magazine articles: the creation of 'highlight' abstracts marie-francine moens jos dumortier finding linking opportunities through relationship-based analysis joonhee yoo michael bieber query expansion using lexical-semantic relations ellen m. voorhees magic conditions much recent work has focused on the bottom-up evaluation of datalog programs [bancilhon and ramakrishnan 1988]. one approach, called magic-sets, is based on rewriting a logic program so that bottom-up fixpoint evaluation of the program avoids generation of irrelevant facts [bancilhon et al. 1986; beeri and ramakrishnan 1987; ramakrishnan 1991]. it was widely believed for some time that the principal application of the magic-sets technique is to restrict computation in recursive queries using equijoin predicates. we extend the magic-sets transformation to use predicates other than equality (x>10, for example) in restricting computation. the resulting ground magic-sets transformation is an important step in developing an extended magic-sets transformation that has practical utility in "real" relational databases, not only for recursive queries, but for nonrecursive queries as well [mumick et al. 1990b; mumick 1991]. inderpal singh mumick sheldon j. finkelstein hamid pirahesh raghu ramakrishnan the rhetoric of an online document retrieval system the purpose of this paper is to establish a working definition of the term rhetoric, and to explore four tools that can be used to understand and analyze the rhetoric of online systems. it is meant to persuade designers and writers of online systems that all information presented online is rhetorical, and as such its impact is better managed than ignored. working definition: rhetoric means creating or using an "orientation to action" to persuade someone to act in a certain way. tools: the rhetoric of the following are addressed: graphics, discourse communities, readership, and story emplotment. carol m. l. lee concall: edited and adaptive information filtering annika waern mark tierney åsa rudsström jarmo laaksolahti the cyberarium dave warner an iterative design methodology for user-friendly natural language office information applications j. f. kelley hyperspeech barry arons smart ideas as a tool for user participation in product development tom fukushima david martin "the virtual theatre" immersive participatory drama research at the centre for communications systems research, cambridge university sharon springel function materialization in object bases alfons kemper christoph kilger guido moerkotte sql open heterogeneous data access we describe the open, extensible architecture of sql for accessing data stored in external data sources not managed by the sql engine. in this scenario, sql engines act as middleware servers providing access to external data using sql dml statements and joining external data with sql tables in heterogeneous queries. we describe the state of the art in object-relational systems and their companion products, and provide an outlook on future directions. berthold reinwald hamid pirahesh voicefax: a shared workspace for voicemail partners david frohlich owen daly-jones structured answers for a large structured document collection there is a simple method for integrating information retrieval and hypertext. this consists of treating nodes as isolated documents and retrieving them in order of similarity. if the nodes are structured, in particular, if sets of nodes collectively constitute documents, we can do better. this paper shows how the formation of the hypertext, the retrieval of nodes in response to content based queries, and the presentation of the nodes can be achieved in a way that exploits the knowledge encoded as the structure of the documents. the ideas are then exemplified in an sgml based hypertext information retrieval system. michael fuller eric mackie ron sacks-davis ross wilkinson object normal forms and dependency constraints for object-oriented schemata we address the development of a normalization theory for object-oriented data models that have common features to support objects. we first provide an extension of functional dependencies to cope with the richer semantics of relationships between objects, called path dependency, local dependency, and global dependency constraints. using these dependency constraints, we provide normal forms for object-oriented data models based on the notions of user interpretation (user-specified dependency constraints) and object model. in contrast to conventional data models in which a normalized object has a unique interpretation, in object-oriented data models an object may have multiple interpretations that form the model for that object. an object will then be in a normal form if and only if the user's interpretation is derivable from the model of the object. our normalization process is by nature iterative, in which objects are restructured until their models reflect the user's interpretation. zahir tari john stokes stefano spaccapietra 2001 (invited talk.
abstract only): a statistical odyssey this talk is an interim report on the 5 year plan launched in 1996 to provide a theoretical and computational foundation of statistics for massive data sets. the plan coincided with the formation of at&t; labs and the proposed research agenda of the infolab, which is both a physical laboratory and an interdisciplinary collection of information researchers in cs, mathematics, and statistics. at the halfway point of this odyssey we can identify some success stories but more importantly it is an opportune time to re-calibrate the challenges and the milestones. daryl pregibon affordances, motivation, and the design of user interfaces john karat clare- marie karat jacob ukelson an evaluation of earcons for use in auditory human-computer interfaces stephen a. brewster peter c. wright alistair d. n. edwards are standards the panacea for heterogeneous distributed dbmss? glenn thompson an efficient multiversion algorithm for secure servicing of transaction reads we propose an efficient multiversion algorithm for servicing read requests in secure multilevel databases. rather than keep an arbitrary number of versions of a datum, as standard multiversion algorithms do, the algorithm presented here maintains only a small fixed number of versions---up to three \---for a modified datum. each version corresponds to the state of the datum at the end of an externally defined version period. the algorithm avoids both covert channels and starvation of high transactions, and applies to security structures that are arbitrary partial orders. the algorithm also offers long- read transactions at any security level conflict-free access to a consistent, though slightly dated, view of any authorized portion of the database. we derive constraints sufficient to guarantee one-copy serializability of executions histories, and then exhibit an algorithm that satisfies these constraints. paul ammann sushil jajodia automatically summarising web sites: is there a way around it? einat amitay cecile paris xmill: an efficient compressor for xml data we describe a tool for compressing xml data, with applications in data exchange and archiving, which usually achieves about twice the compression ratio of gzip at roughly the same speed. the compressor, called xmill, incorporates and combines existing compressors in order to apply them to heterogeneous xml data: it uses zlib, the library function for gzip, a collection of datatype specific compressors for simple data types, and, possibly, user defined compressors for application specific data types. hartmut liefke dan suciu vgrep: a graphical tool for the exploration of textural documents jeffrey d. mcwhirter assessing the quality of hypertext views p. d. bruza th. p. van der weide altruistic locking long-lived transactions (llts) hold on to database resources for relatively long periods of time, significantly delaying the completion of shorter and more common transactions. to alleviate this problem we propose an extension to two-phase locking, called altruistic locking, whereby llts can release their locks early. transactions that access this released data are said to run in the wake of the llt and must follow special locking rules. like two-phase locking, altruistic locking is easy to implement and guarantees serializability. kenneth salem hector garcia-molina jeannie shands rtpmon: a third-party rtcp monitor david bacher andrew swan lawrence a. rowe an extensible notation for spatiotemporal index queries vassilis j. tsotras christian s. 
jensen richard t. snodgrass dealing with difficult customers - the most fun you can have at work leslie barden beyond query by example simone santini ramesh jain object identity as a query language primitive we demonstrate the power of object identities (oids) as a database query language primitive. we develop an object-based data model, whose structural part generalizes most of the known complex-object data models: cyclicity is allowed in both its schemas and instances. our main contribution is the operational part of the data model, the query language iql, which uses oids for three critical purposes: (1) to represent data-structures with sharing and cycles, (2) to manipulate sets, and (3) to express any computable database query. iql can be type checked, can be evaluated bottom-up, and naturally generalizes most popular rule-based languages. the model can also be extended to incorporate type inheritance, without changes to iql. finally, we investigate an analogous value-based data model, whose structural part is founded on regular infinte trees and whose operational part is iql. serge abiteboul paris c. kanellakis site outlining koichi takeda hiroshi nomiyama a note on the new screen reader/2 for os/2 jim thatcher file service for sigdoc 93: function, structure, content paul beam travis capener sanjay singh distributed logging for transaction processing increased interest in using workstations and small processors for distributed transaction processing raises the question of how to implement the logs needed for transaction recovery. although logs can be implemented with data written to duplexed disks on each processing node, this paper argues there are advantages if log data is written to multiple log server nodes. a simple analysis of expected logging loads leads to the conclusion that a high performance, microprocessor based processing node can support a log server if it uses efficient communication protocols and low latency, non volatile storage to buffer log data. the buffer is needed to reduce the processing time per log record and to increase throughput to the logging disk. an interface to the log servers using simple, robust, and efficient protocols is presented. also described are the disk data structures that the log servers use. this paper concludes with a brief discussion of remaining design issues, the status of a prototype implementation, and plans for its completion. dean s. daniels alfred z. spector dean s. thompson ags: introducing agents as services provided by digital libraries j. alfredo sanchez john j. leggett john l. schnase integrating theoreticians' and practitioners' perspectives with design rationale qoc design rationale represents argumentation about design alternatives and assessments. it can be used to generate design spaces which capture and integrate information from design discussions and diverse kinds of theoretical analyses. such design spaces highlight how different theoretical approaches can work together to help solve design problems. this paper describes an example of the generation of a multi-disciplinary qoc design space which shows how designers' deliberations can be augmented with design contributions from a combination of different theoretical hci approaches. victoria bellotti on the equivalence of an egd to a set of fd's the question "is a given join dependency equivalent to some set of multivalued dependencies?" led to the development of acyclicity theory [1]. 
the central question of this paper is: "is a given equality-generating dependency equivalent to a set of functional dependencies?" an algorithm is presented that answers that question in polynomial time without using the chase process and, in the case of a "yes" answer, can be used to find (a cover of) the set of functional dependencies involved. this question is also related to the similar question about join dependencies and multivalued dependencies by proving a result about the hypergraph representation of an egd. it is interesting to note that a minimal representation of an egd must be β-acyclic for the egd to be equivalent to a set of fd's, in contrast to the jd/mvd case, in which only α-acyclicity is needed. the β-acyclicity of an egd not necessarily minimal is always sufficient for the egd to be equivalent to a set of fd's as shown. finally, the algorithm is extended for a single egd to answer the question whether a set of egd's with the same right-hand-side column is equivalent to a set of fd's. marc h. graham ke wang niupepa: a historical newspaper collection mark apperley sally jo cunningham te taka keegan ian h. witten objects with roles the use of object-oriented conceptual models for modeling office applications and information systems is discussed. a model for describing object behavior based on the concept of role is presented. roles allow one to describe different perspectives for object evolution. for each role, relevant characteristics such as role properties, role states, messages, and role-state transition rules and constraints are defined. the implications of considering several roles in parallel for an object are discussed, and a classification of possible role interactions is given. b. pernici creation of interactive media content by the reuse of images tsutomu miyasato computer human factors in computer interface design (panel session) human factors psychologists contribute in many ways to improving human- computer interaction. one contribution involves evaluating existing or prototype systems, in order to assess usability and identify problems. another involves contributing more directly to the design of systems in the first place: that is, not only evaluating systems but bringing to bear empirical methods and theoretical considerations that help specify what are plausible designs in the first place. the goal of this panel is to discuss four case studies emphasizing this role of cognitive human factors, and identify relevant methods and theoretical considerations. the panelists will present examples of prototypes or products to whose design they contributed, with the aim of characterizing the problem (or problems) they tried to solve, the approach to identifying a design solution for that problem, and evidence that the approach was useful. robert mack will discuss an editor prototype designed to get novices started doing meaningful work quickly and helping them to continue acquiring new skills, with virtually no explicit instruction. the prototype is being designed in large part by identifying key novice problems and expectations, and trying to design the interface to better accommodate these expectations. the first goal of getting novices started relatively quickly has been achieved but problems remain as novices try to acquire further text-editing skill. these problems \--- and solutions to them --- are being identified through a process of iterative design and evaluation. 
dennis wixon will discuss implications for designing usable interfaces of the user-derived-interface project (good, m., whiteside, j., wixon, d. and jones, s., 1984). the project involved a simulation of a restricted natural language interface for an electronic mail system. the design process was driven by the behavioral goal of getting users started relatively quickly with little or no instruction or interface aids. actual user interaction with the simulation coupled with iterative design and evaluation provided interface specifications. this prototype illustrates a number of techniques for bringing usability into the software engineering process. these presentations will discuss the role of empirical methods such as verbal protocol techniques for identifying user problems with existing computer systems (e.g., lewis, 1982; mack, lewis & carroll, 1983; douglas & moran, 1983), including variations aimed at identifying user expectations that may be able to guide design (e.g., mack, 1984); interface simulations for studying user interactions again with the aim of letting user behavior guide interface design (e.g., kelley, 1984; good, whiteside, wixon & jones, 1984), and iterative design and evaluation of interfaces, aimed at achieving behavioral goals (e.g., carroll & rosson, 1984; gould & lewis, 1983). robert mack thomas moran judith reitman olson dennis wixon john whiteside aid - access to informal documentation roger b. chaffee information filtering and information retrieval: two sides of the same coin? nicholas j. belkin w. bruce croft ajax: an extensible data cleaning tool @@@@ groups together matching pairs with a high similarity value by applying a given grouping criteria (e.g. by transitive closure). finally, ging collapses each individual cluster into a tuple of the resulting data source. ajax provides @@@@ for specifying data cleaning programs, which consists of sql statements enriched with a set of specific primitives to express these transformations. ajax also @@@@. it allows the user to interact with an executing data cleaning program to handle exceptional cases and to inspect intermediate results. finally, ajax provides @@@@ @@@@ that permits users to determine the source and processing of data for debugging purposes. we will present the ajax system applied to two real world problems: the consolidation of a telecommunication database, and the conversion of a dirty database of bibliographic references into a set of clean, normalized, and redundancy free relational tables maintaining the same data. helena galhardas daniela florescu dennis shasha eric simon an adaptive hypertext system for reference manuals (abstract) nathalie mathe rich keller typed query languages for databases containing queries frank neven dirk van gucht jan van den bussche gottfried vossen a componential model of human interaction with graphical display douglas j. gillan index structures for selective dissemination of information under the boolean model the number, size, and user population of bibliographic and full-text document databases are rapidly growing. with a high document arrival rate, it becomes essential for users of such databases to have access to the very latest documents; yet the high document arrival rate also makes it difficult for users to keep themselves updated. it is desirable to allow users to submit profiles, i.e., queries that are constantly evaluated, so that they will be automatically informed of new additions that may be of interest. 
such service is traditionally called selective dissemination of information (sdi). the high document arrival rate, the huge number of users, and the timeliness requirement of the service pose a challenge in achieving efficient sdl. in this article, we propose several index structures for indexing profiles and algorithms that efficiently match documents against large number of profiles. we also present analysis and simulation results to compare their performance under different scenarios. tak w. yan hector garcia-molina webcompass: an agent-based metasearch and metadata discovery tool for the web brad allen john jensen jay nelson brian ulicny kristina lerman linda rudell- betts hello users: this is control: or cd-rom access for all sylvia m. berta performance differences in the fingers, wrist, and forearm in computer input control ravin balakrishnan i. scott mackenzie querying across languages: a dictionary-based approach to multilingual information retrieval david a. hull gregory grefenstette automatic information retrieval (tutorial and panel discussion) information files of all kinds are now in common use---personnel records, parts inventories, customer account information, business correspondence, document holdings in libraries, patient records in hospitals, and so on. information retrieval systems are designed to help analyze and describe the items stored in a file, to organize them and search among them, and finally to retrieve them in response to a user's query. designing and using a retrieval system involves four major activities: information analysis, information organization and search, query formulation, and information retrieval and dissemination. the tutorial and panel discussion are designed to examine the current developments in the field and to point the way to the future. the emphasis in the discussion will be placed on modern on- line retrieval services, automatic indexing and abstracting and novel text matching systems. gerard salton evaluation of electronic work: research on collaboratories at the university of michigan thomas a. finholt browsing graphs using a fisheye view (abstract) marc h. brown james r. meehan manojit sarkar social activity indicators: interface components for cscw systems mark s. ackerman brian starr metu interoperable database system asuman dogac ugur halici ebru kilic gokhan ozhan fatma ozcan sena nural cevdet dengi sema mancuhan budak arpinar pinar koksal cem evrendilek backtracking in a multiple-window hypertext environment multi-window interfaces allow users to work on logically independent taks simultaneously in different sets of windows and to move among these logical tasks at will (e.g., through selecting a window in a different task). hypertext backtracking should be able to treat each logical task separately. combining all traversals in a single chronological history log would violate the user's mental model and cause disorientation. in this paper we introduce task-based backtracking, a technique for backtracking within the various logical tasks a user may be working on at any given time. we present a preliminary algorithm for its implementation. we also discuss several ramifications of multi-window backtracking including the types of events history logs must record, deleting nodes from history logs that appear in multiple logical tasks, and in general the choices hypermedia designers face in multi-window environments. 
michael bieber jiangling wan explaining collaborative filtering recommendations automated collaborative filtering (acf) systems predict a person's affinity for items or information by connecting that person's recorded interests with the recorded interests of a community of people and sharing ratings between like-minded persons. however, current recommender systems are black boxes, providing no transparency into the working of the recommendation. explanations provide that transparency, exposing the reasoning and data behind a recommendation. in this paper, we address explanation interfaces for acf systems - how they should be implemented and why they should be implemented. to explore how, we present a model for explanations based on the user's conceptual model of the recommendation process. we then present experimental results demonstrating what components of an explanation are the most compelling. to address why, we present experimental evidence that shows that providing explanations can improve the acceptance of acf systems. we also describe some initial explorations into measuring how explanations can improve the filtering performance of users. jonathan l. herlocker joseph a. konstan john riedl tight bounds for 2-dimensional indexing schemes elias koutsoupias d. s. taylor cu-seeme vr immersive desktop teleconferencing jefferson han brian smith hypertext servers for team environments position paper and project description kasper Østerbye database research at the queensland univ. of technology m. p. papazoglou m. mcloughlin e. lindsay s. willie database theory - past and future we briefly sketch the development of the various branches of database theory. one important branch is the theory of relational databases, including such areas as dependency theory, universal- relation theory, and hypergraph theory. a second important branch is the theory of concurrency control and distributed databases. two other branches have not in the past been given the attention they deserve. one of these is "logic and databases," and the second is "object-oriented database systems," which to my thinking includes systems based on the network or hierarchical data models. both these areas are going to be more influential in the future. j. d. ullman elastic windows: evaluation of multi-window operations eser kandogan ben shneiderman interface ecology andruid kerne dblearn: a system prototype for knowledge discovery in relational databases a prototyped data mining system, dblearn, has been developed, which efficiently and effectively extracts different kinds of knowledge rules from relational databases. it has the following features: high level learning interfaces, tightly integrated with commercial relational database systems, automatic refinement of concept hierarchies, efficient discovery algorithms and good performance. substantial extensions of its knowledge discovery power towards knowledge mining in object-oriented, deductive and spatial databases are under research and development. jiawei han yongjian fu yue huang yandong cai nick cercone workshop on formal specification of user interfaces (abstract) christopher rouff a response to r. camps' article "domains, relations and religious wars" since it quotes extensively from writings of my own, i feel obliged to respond to the article "domains, relations and religious wars," by r. camps (sigmod record 25, no. 3, september 1996). in that article, camps is clearly suggesting (among other things) that my definition of the term "domain" has changed over the years. 
i agree, it has! but camps goes on to say: "… considering that [date's book an introduction to database systems] was the bible [camps' italics] where most university graduates all over the world learnt, i believe that date can be held partly responsible for the lack of implementation of domains [in today's sql dbmss]." c. j. date the making of the sigchi identity design suzanne watzman visual information foraging in a focus + context visualization eye tracking studies of the hyperbolic tree browser [10] suggest that visual search in focus+context displays is highly affected by information scent (i.e., local cues, such as text summaries, used to assess and navigate toward distal information sources). when users detected a strong information scent, they were able to reach their goal faster with the hyperbolic tree browser than with a conventional browser. when users detected a weak scent or no scent, users exhibited less efficient search of areas with a high density of visual items. in order to interpret these results we present an integration of the code theory of visual attention (ctva) with information foraging theory. development of the ctva-foraging theory could lead to deeper analysis of interaction with visual displays of content, such as the world wide web or information visualizations. peter pirolli stuart k. card mija m. van der wege hypertext '87: keynote address andries van dam multimedia data mining (workshop session) (title only) simeon j. simoff osmar r. zaïane facilitating orientation in shared hypermedia workspaces shared workspaces are an important means for supporting long-term synchronous and asynchronous collaboration. shared workspaces themselves become difficult to manage due to increasing size and constant change. this is especially true for shared hypermedia workspaces. thus means for managing the shared hypermedia workspace in terms of keeping an overview of the group's work and coordinating changes become necessary. in this paper we propose a shared hypermedia workspace model representing not only shared content but also team and process related information. four complementary tools facilitate orientation and coordination in the shared workspace: a group aware content browser, a group aware overview browser, a shared workspace search tool, and a shared process space browser. together, these tools should enable groups to stay aware of each other's activities and to control the level of awareness according to their needs. jörg m. haake a schema-less spatio-temporal database system michael bodolay martha l. escobar-molano quasi-dynamic two-phase locking among the plethora of concurrency control algorithms that have been proposed and analyzed, two-phase locking (2pl) has been adapted as the industry de facto standard concurrency control. in accord, current research in concurrency control is focusing on enhancing the scalability of 2pl performance in highly concurrent and contentious environments. this is especially needed in future on-line transaction processing systems, where thousand transaction per second performance will be required. static locking (sl) and dynamic locking (dl) are two famous adaptations of 2pl that are used under different degrees of data contention. in this paper, we offer our observation that 2pl is indeed a family of methods, of which sl and dl are extreme case members. further, we argue for and verify the existence of other 2pl member methods that, under variable conditions, outperform sl and dl. 
we propose two novel schemes which we categorize as quasi-dynamic two- phase locking on account of their behavior in comparison with dynamic/static two-phase locking. we present a simulation study of the performance of the proposed schemes and their comparison to dynamic and static locking methods. abdelsalam helal tung-hui ku jud fortner method for distributed transaction commit and recovery using byzantine agreement within clusters of processors this paper describes an application of byzantine agreement [dost82a, dost82c, lyff82] to distributed transaction commit. we replace the second phase of one of the commit algorithms of [moli83] with byzantine agreement, providing certain trade-offs and advantages at the time of commit and providing speed advantages at the time of recovery from failure. the present work differs from that presented in [dost82b] by increasing the scope (handling a general tree of processes, and multi-cluster transactions) and by providing an explicit set of recovery algorithms. we also provide a model for classifying failures that allows comparisons to be made among various proposed distributed commit algorithms. the context for our work is the highly available systems project at the ibm san jose research laboratory [aafkm83]. c. mohan r. strong s. finkelstein warming up to computers: a study of cognitive and affective interaction over time this experiment studies how people learn to use computers. four computer-naive persons performed six computer tasks at each of 20 task sessions over a one month period. participants were allowed to choose a menu- driven or command- driven dialogue at any point during the study. cognitive, affective, and performance variables were closely monitored. results generally support the appropriateness of a menu-driven dialogue for novice users and the transition to a command-driven dialogue after approximately 16 - 20 hours of task experience. with experience, users were shown to a) choose b) perform better, and c) be more satisfied with a command driven dialogue. results are explained within the context of a "cognitive schema" theory. david m. gilfoil an introduction to distributed cognition (tutorial session)(abstract only): analyzing the organizational, the social and the cognitive for designing and implementing cscw applications goals and content: this tutorial will give a detailed overview of the theoretical and methodoligical framework of distributed cognition. detailed case studies will be presented to demonstrate how it can be applied to the design and implementation of cscw systems. participants will then put into practice the theory and methodology through hands-on group exercises using video material of actual and hypothetical work settings. christine halverson yvonne rogers interspace project - cybercampus (video program) (abstract only) interspace is a revolutionary communication environment that allows users the flexibility of multi-modal interaction. people in interspace communicate using audio as well as video interaction in a three dimensional world. remote terimals are connected to a central server via networks. facial image, audio, and proximity, are processed and sent out to the remote terminals to enable multi-modal communication in a virtual world. interspace technology comes a step closer to bridging the gap between virtual reality and world experiences. we conducted a trial service, cybercampus, based on the interspace platform. 
individual actions can now be shared with other users as you explore, talk, shop, learn, and experience the many facets of cybercampus. environments related to entertainment, distance learning, on-line shopping, and advertisement are currently being explored in cybercampus with unlimited expansion capabilities. cybercampus debuted in september 1995 and has been hosted by several universities and businesses in the san francisco area. preliminary usage suggests that multi-user, multi-modal interaction has a prominent role in the future of telecommunications. shohei sugawara norihiko matsuura yoichi kato keiichi sasaki michita imai takashi yamana yasuyuki kiyosue kazunori shimamura tamoaki tanaka takashi nishimura carol leick tim takeuchi gen suzuki dynamic timelines: visualizing the history of photography robin l. kullberg adoption and utilization of voice mail (abstract) harry c. benham bruce raymond a 3d audio only interactive web browser: using spatialization to convey hypermedia document structure interactive audio browsers provide both sighted and visually impaired users with access to the www. in addition to the desktop pc, audio browsing technology can be deployed that enable users to browse the www using a telephone or while driving a car. this paper describes a new conceptual model of the html document structure and its mapping to a 3d audio space. novel features are discussed that provide information such as: an audio structural survey of the html document; accurate positional audio feedback of the source and destination anchors when traversing both inter-and intra-document links; a linguistic progress indicator; the announcement of destination document meta- information as new links are encountered. these new features can improve both the user's comprehension of the html document structure and their orientation within it. these factors, in turn, can improve the effectiveness of the browsing experience. stuart goose carsten möller prompted reflections: a technique for understanding complex work finn kensing devise: integrated querying and visual exploration of large datasets devise is a data exploration system that allows users to easily develop, browse, and share visual presentation of large tabular datasets (possibly containing or referencing multimedia objects) from several sources. the devise framework is being implemented in a tool that has been already successfully applied to a variety of real applications by a number of user groups. our emphasis is on developing an intuitive yet powerful set of querying and visualization primitives that can be easily combined to develop a rich set of visual presentations that integrate data from a wide range of application domains. while devise is a powerful visualization tool, its greatest strengths are the ability to interactively explore a visual presentation of the data at any level of detail (including retrieving individual data records), and the ability to seamlessly query and combine data from a variety of local and remote sources. in this paper, we present the devise framework, describe the current tool, and report on our experience in applying it to several real applications. m. livny r. ramakrishnan k. beyer g. chen d. donjerkovic s. lawande j. myllymaki k. wenger flexible coordination with cooperative hypertext weigang wang jörg m. 
haake book review: world wide web journal danny yee crystal (demonstration abstract): a content-based music retrieval system yuen- hsien tseng uexk ll (demonstration session): an interactive visual user interface for document retrieval in vector space michael preminger sandor daranyi supporting collaborative process with conversation builder simon m. kaplan alan m. carroll kenneth j. macgregor navigational correlates of comprehension in hypertext john e. mceneaney the rockin'mouse: integral 3d manipulation on a plane ravin balakrishnan thomas baudel gordon kurtenbach george fitzmaurice technological capacitation in customer service work: a sociotechnical approach stephen corea never mind the ethno' stuff, what does all this mean and what do we do now: ethnography in the commercial world steve blythin mark rouncefield john a. hughes the trigs active object-oriented database system - an overview the active object-oriented database system trigs has been developed as part of a larger ec esprit project aiming at the development of next-generation production scheduling and control systems [huem93]. the goal of this paper is to summarize the work on trigs which comprises both aspects concerning the development of the active system itself, and guidelines concerning the design of active databases. g. kappel w. retschitzegger computer image retrieval by features: suspect identification eric lee thom whalen dynamic queries: database searching by direct manipulation ben shneiderman christopher williamson christopher ahlberg usage analysis of a digital library steve jones sally jo cunningham rodger mcnab a new character-based indexing method using frequency data for japanese documents ogawa yasushi iwasaki masajirou adaptive methods for distributed video presentation crispin cowan shanwei cen jonathan walpole calton pu a generalized content-based image retrieval system nina l. ma jesse s. jin the whiteboard: seven great myths of usability marc chrusch film, form, and culture: a hypermedia analysis of cinema (abstract) robert kolker improving electronic guidebook interfaces using a task-oriented design approach item selection is a key problem in electronic guidebook design. many systems do not apply so-called context-awareness technologies to infer user interest, placing the entire burden of selection on the user. conversely, to make selection easier, many systems automatically eliminate information that they infer is not of interest to the user. however, such systems often eliminate too much information, preventing the user from finding what they want. to realize the full potential of electronic guidebooks, designers must strike the right balance between automatic context- based inference and manual selection. in this paper, we introduce a task- oriented model of item selection for electronic guidebooks to help designers explore this continuum. we argue that item selection contains three sub-tasks and that these sub-tasks should be considered explicitly in system design. we apply our model to existing systems, demonstrating pitfalls of combining sub- tasks, and discuss how our model has improved the design of our own guidebook prototype. paul m. aoki allison woodruff a model for data reallocation in a distributed database system taher a. m. al- rashahi term weighting in information retrieval using the term precision model c. t. yu k. lam g. 
salton sort sets in the relational model the notion of sort set is introduced here to formalize the fact that certain database relations can be sorted so that two or more columns are simultaneously listed in order. this notion is shown to be applicable in several ways to enhance the efficiency of an implemented database. a characterization of when order dependency implies the existence of sort sets in a database is presented, along with several corollaries concerning complexity, armstrong relations, and cliques of certain graphs. sort-set dependencies are then introduced. a (finite) sound and complete set of inference rules for sort-set dependencies is presented, as well as a proof that there is no such set for functional and sort-set dependencies taken together. deciding logical implication for sort-set dependencies is proved to be polynomial, but if functional dependencies are included the problem is co- np-complete. each set of sort-set and functional dependencies is shown to have an armstrong relation. a natural generalization of armstrong relation, here called separator, is given and then used to study the relationship between order and sort-set dependencies. seymour ginsburg richard hull fundamental properties of aboutness (poster abstract) peter bruza dawei song kam-fai wong phrase recognition and expansion for short, precision-biased queries based on a query log erika f. de lima jan o. pedersen what mix of video and audio is useful for small groups doing remote real-time design work? judith s. olson gary m. olson david k. meader high speed on-line backup when using logical log operations media recovery protects a database from failures of the stable medium by maintaining an extra copy of the database, called the backup, and a media recovery log. when a failure occurs, the database is "restored" from the backup, and the media recovery log is used to roll forward the database to the desired time, usually the current time. backup must be both fast and "on- line", i.e. concurrent with on-going update activity. conventional online backup sequentially copies from the stable database, almost independent of the database cache manager, but requires page-oriented log operations. but results of logical operations must be flushed to a stable database (a backup is a stable database) in a constrained order to guarantee recovery. this order is not naturally achieved for the backup by a cache manager concerned only with crash recovery. we describe a "full speed" backup, only loosely coupled to the cache manager, and hence similar to current online backups, but effective for general logical log operations. this requires additional logging of cached objects to guarantee media recoverability. we then show how logging can be greatly reduced when log operations have a constrained form which nonetheless provides very useful additional logging efficiency for database systems. david b. lomet a general language model for information retrieval statistical language modeling has been successfully used for speech recognition, part-of- speech tagging, and syntactic parsing. recently, it has also been applied to information retrieval. according to this new paradigm, each document is viewed as a language sample, and a query as a generation process. the retrieved documents are ranked based on the probabilities of producing a query from the corresponding language models of these documents. 
in this paper, we will present a new language model for information retrieval, which is based on a range of data smoothing techniques, including the good-turing estimate, curve-fitting functions, and model combinations. our model is conceptually simple and intuitive, and can be easily extended to incorporate probabilities of phrases such as word pairs and word triples. the experiments with the wall street journal and trec4 data sets showed that the performance of our model is comparable to that of inquery and better than that of another language model for information retrieval. in particular, word pairs are shown to be useful in improving the retrieval performance. fei song w. bruce croft the effect of windows on man-machine interfaces (or opening doors with windows) a recent development in human-machine interfaces is the partitioning of a computer terminal screen into distinct "windows" of information. this paper defines the concept of "windows" and describes its most common features and applicable operations. it then investigates the utility and application of windowing and compares its features with those of existing user interfaces. the emphasis is on improving the human-machine interface. richard holcomb alan l. tharp studying long-term system use judy kay richard c. thomas the impact of database selection on distributed searching the proliferation of online information resources increases the importance of effective and efficient distributed searching. distributed searching is cast in three parts --- database selection, query processing, and results merging. in this paper we examine the effect of database selection on retrieval performance. we look at retrieval performance in three different distributed retrieval testbeds and distill some general results. first we find that good database selection can result in better retrieval effectiveness than can be achieved in a centralized database. second we find that good performance can be achieved when only a few sites are selected and that the performance generally increases as more sites are selected. finally we find that when database selection is employed, it is not necessary to maintain collection-wide information (cwi), e.g. global idf. local information can be used to achieve superior performance. this means that distributed systems can be engineered with more autonomy and less cooperation. this work suggests that improvements in database selection can lead to broader improvements in retrieval performance, even in centralized (i.e. single database) systems. given a centralized database and a good selection mechanism, retrieval performance can be improved by decomposing that database conceptually and employing a selection step. allison l. powell james c. french jamie callan margaret connell charles l. viles integrating communication, cooperation, and awareness: the diva virtual office environment diva, a novel environment for group work, is presented. this prototype virtual office environment provides support for communication, cooperation, and awareness in both the synchronous and asynchronous modes, smoothly integrated into a simple and intuitive interface which may be viewed as a replacement for the standard graphical user interface desktop. in order to utilize the skills that people have acquired through years of shared work in real offices, diva is modeled after the standard office, abstracting elements of physical offices required to support collaborative work: people, rooms, desks, and documents.
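the language-modeling retrieval entry above (song and croft) ranks each document by the probability that the document's language model would generate the query. the sketch below is only a minimal illustration of that query-likelihood ranking: it assumes simple jelinek-mercer (linear interpolation) smoothing with an arbitrary weight lam, a toy collection, and hypothetical helper names, rather than the good-turing and curve-fitting estimates used in that entry.

import math
from collections import Counter

def lm_score(query_terms, doc_terms, collection_counts, collection_len, lam=0.5):
    # log p(query | document), with the document model linearly interpolated
    # against the collection model ("lam" is an assumed weight, not from the entry)
    doc_counts = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for t in query_terms:
        p_doc = doc_counts[t] / doc_len if doc_len else 0.0
        p_col = collection_counts[t] / collection_len if collection_len else 0.0
        p = (1.0 - lam) * p_doc + lam * p_col
        if p == 0.0:
            continue  # term unseen in document and collection; skip instead of -inf
        score += math.log(p)
    return score

# toy usage: rank two tiny documents for a two-word query
docs = {"d1": "wall street journal article about markets".split(),
        "d2": "a language model for information retrieval".split()}
collection_terms = [t for d in docs.values() for t in d]
col_counts, col_len = Counter(collection_terms), len(collection_terms)
query = "language model".split()
ranking = sorted(docs, key=lambda d: lm_score(query, docs[d], col_counts, col_len), reverse=True)
print(ranking)  # d2 ranks first: it actually contains the query terms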
markus sohlenkamp greg chwelos visual who: a demonstration judith s. donath some effects of angle of approach on icon selection thomas g. whisenand henry h. emurian distribution, parallelism, and availability in nonstop sql pedro celis user interface evaluation in an iterative design process: a comparison of three techniques pamela savage interface design when you don't know how marc rettig articulating the experience of transparency: an example of field research techniques karen a. holtzblatt sandy jones michael good considering an organization's memory mark s. ackerman christine halverson tables as a paradigm for querying and restructuring (extended abstract) marc gyssens laks v. s. lakshmanan iyer n. subramanian nonconsensual negotiation in distributed collaboration roberto evaristo the "homeopathic fallacy" in learning from hypertext jean mckendree will reader nick hammon the precedence-assignment model for distributed databases concurrency control algorithms we have developed a unified model, called the precedence- assignment model (pam), of concurrency control algorithms in distributed database. it is shown that two-phase locking timestamp-ordering and other existing concurrency control algorithms may be modeled by pam. we have also developed a new concurrency control algorithm under the pam modeling framework, which is free from deadlocks and transaction restarts. finally, a unified concurrency control subsystem for precedence-assignment algorithms is developed. by using this subsystem, different transactions may be executed under different concurrency control algorithms simultaneously. c. p. wang v. o. k. li concurrency control in a system for distributed databases (sdd-1) this paper presents the concurrency control strategy of sdd-1. sdd-1, a system for distributed databases, is a prototype distributed database system being developed by computer corporation of america. in sdd-1, portions of data distributed throughout a network may be replicated at multiple sites. the sdd-1 concurrency control guarantees database consistency in the face of such distribution and replication. this paper is one of a series of companion papers on sdd-1 [4, 10, 12, 21]. philip a. bernstein david w. shipman james b. rothnie respecting diversity: designing from a feminine perspective sheila kieran- greenbush the bookmark and the compass: orientation tools for hypertext users mark bernstein adjusting the performance of an information retrieval system j. nie f. paradis j. vaucher some non-technical issues in the implementation of corporate e-mail: lessons from case studies kai jakobs rob procter robin williams martina fichtner a contribution to the design process klaus b. bærentsen henning slavensky an interactive integrated system to design and use data bases recent works on languages for modeling complex data base application environments show overlapping issues with other research areas such as artificial intelligence and programming languages. moreover, a lot of attention is nowadays given to another important field, the overall data base design process, which, as it will be shown, furthermore extends the above connections. antonio albano renzo orsini an agenda for human-computer interaction: science and engineering serving human needs gary marchionini john sibert kdd-cup 2000: question 3 winner's report salford systems dan steinberg richard carson deepak agarwal junyan andre rupp problem solving performance and display preference for information displays depicting numerical functions mary j. 
lalomia michael d. coovert eduardo salas survival of the fittest: the evolution of multimedia user interfaces jenny preece ben shneiderman surfing the movie space: advanced navigation in movie-only hypermedia jorg geibler creating a cd-rom from scratch: a case study technology has moved us to the point where creating a cd-rom as an alternative to paper volumes is not only cost effective, but also provides an opportunity to add significant value to the information presented, both in terms of quantity and usefulness. the past year has been a pivotal one in terms of access to simple and cost- effective tools and technologies that push "personal publishing" of cd-roms closer to reality for a whole range of publishers and information providers. this paper will review the development cycle of spie's first cd-rom product, the electronic imaging '93 proceedings on cd-rom, a hybrid windows/macintosh disc that was created without the use of any of the proprietary (and often expensive) software royalty-based contracts that have been the established turn-key solution until now. the intent of this paper is to provide a first-hand look at the developmental, technical, and financial issues involved in creating a cd-rom publication. brian j. thomas relational expressive power of constraint query languages michael benedikt guozhu dong leonid libkin limsoon wong an efficient nearest-neighbour search while varying euclidean metrics r. kurniawati j. s. jin j. a. shepherd mapa: a system for inducing and visualizing hierarchy in websites david durand paul kahn commitment in a partitioned distributed database network partition is among the hardest failure types in a distributed system even if all processors and links are of fail-stop type. we address the transaction commitment problem in a partitioned distributed database. it is assumed that partitions are detectable. the approach taken is conservative - that is, the same transaction cannot be committed by one site and aborted by another. a new and very general formal model of protocols operating in a partitioned system is introduced and protocols more efficient than the existing ones are constructed. k. v. s. ramarao software profiling for hot path prediction evelyn duesterwald vasanth bala multiple uses of scenarios: a reply to campbell richard m. young philip j. barnard a complete axiomatization of full join dependencies edward sciore hign interaction data visualization using seesoft to visualize program change history joseph l. steffen stephen g. eick the aleph: a cartographer for www (abstract) fernando das neves some remarks on variable independence, closure, and orthographic dimension in constraint databases the notion of variable independence was introduced by chomicki, goldin, and kuper in their pods'96 paper as a means of adding a limited form of aggregation to constraint query languages while retaining the closure property. later, grumbach, rigoux and segoufin showed in their icdt'99 paper that variable independence and a related notion of orthographic dimension are useful tools for optimizing constraint queries. however, several results in those papers are incorrect as stated. as the notions of variable independence and orthographic dimension appear to be important for implementing constraint database prototypes, i explain in this short note the problems with the above mentioned papers and outline a solution for aggregate closure. 
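the entry above on commitment in a partitioned distributed database takes a conservative view: the same transaction must never be committed at one site and aborted at another. the sketch below shows one standard conservative rule in that spirit, a strict-majority (quorum) decision, under the added assumption that all votes were gathered before the network split; it is not the formal model or the specific protocols constructed in that entry, and the site names and helper function are hypothetical.

def partition_decision(partition_sites, all_sites, votes):
    # decide commit/abort for the sites reachable in one partition.
    # "votes" is assumed to hold every site's yes/no vote, gathered before the split.
    if len(partition_sites) * 2 <= len(all_sites):
        return "blocked"  # minority (or tie) partition: stay undecided until reconnection
    if all(votes.get(s) == "yes" for s in all_sites):
        return "commit"   # strict majority held and every site voted yes
    return "abort"        # some site voted no (or never voted)

# toy usage: five sites, all voted yes, network splits into {a,b,c} and {d,e}
all_sites = {"a", "b", "c", "d", "e"}
votes = {s: "yes" for s in all_sites}
print(partition_decision({"a", "b", "c"}, all_sites, votes))  # commit (majority side)
print(partition_decision({"d", "e"}, all_sites, votes))       # blocked (minority side)

at most one partition can hold a strict majority, so under this rule no two partitions can ever reach conflicting decisions, which is the conservative property the entry requires.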
leonid libkin scalable multimedia delivery for pervasive computing growing numbers of pervasive devices are gaining access to the internet and other information sources. however, much of the rich multimedia content cannot be easily handled by the client devices with limited communication, processing, storage and display capabilities. in order to improve access, we are developing a system for scalable delivery of multimedia. the system uses an infopyramid for managing and manipulating multimedia content composed of video, images, audio and text. the infopyramid manages the different variations of media objects with different fidelities and modalities and generates and selects among the alternatives in order to adapt the delivery to different client devices. we describe a system for scalable multimedia delivery for a variety of client devices, including pdas, hhcs, smart phones, tv browsers and color pcs. john r. smith rakesh mohan chung-sheng li the diversity of usability practices kim halskov madsen walking the walk is doing the work - flexible interaction management in video- supported cooperative work steinar kristoffersen tom rodden rules in database systems stefano ceri raghu ramakrishnan query optimization yannis e. ioannidis mad: a movie authoring and design system naomi friedlander ronald baecker alan j. rosenthal eric smith a comparison of input devices in element pointing and dragging tasks i. scott mackenzie abigail sellen william a. s. buxton multimedia documents as user interfaces m. cecelia buchanan polle t. zellweger ken pier storing hytime documents in an object-oriented databases an open hypermedia-document storage system has to meet requirements that are not satisfied by existing systems: it has to support non-generic hypermedia document types, i.e. document types enriched with application-specific semantics. it has to provide hypermedia-document access methods. finally, it has to allow the exchange of hypermedia documents with other systems. on a technical level, an object-oriented database-management system, on a logical level, a well established iso standard, namely hytime, is used to satisfy the requirements mentioned above. by means of the example of documents incorporating hypertext structures we discuss the impact of taking such an approach on representation and processing within the database system. klemens böhm karl aberer the application of spatialization and spatial metaphor to augmentative and alternative communication the university of delaware and the university of dundee are collaborating on a project that is investigating the application of spatialization and spatial metaphors to interfaces for augmentative and alternative communication. this paper outlines the project's motivation, goals, and methodological considerations. it presents a number of design principles obtained from a review of the hci literature. finally, it describes progress on the demonstration of this approach. this application called val provides a computer-based word board that retains spatial equivalence to the user's paper-based system. it also allows the user to access an extended lexicon through an interface to the wordnet lexical database. p. demasco a. f. newell j. l. arnott affordance, conventions, and design donald a. norman human factors in programming and software development mary beth rosson managing the design of the user interface deborah j. mayhew road crew saveen reddy web schemas in whoweda saurav s. 
bhowmick wee keong ng sanjay madria world wide web applications track (track introduction only) robert inder what does cooperation need to create knowledge? giorgio de michelis talking in circles: designing a spatially-grounded audioconferencing environment this paper presents _talking in circles_, a multimodal audioconferencing environment whose novel design emphasizes spatial grounding with the aim of supporting naturalistic group interaction behaviors. participants communicate primarily by speech and are represented as colored circles in a two- dimensional space. behaviors such as subgroup conversations and social navigation are supported through circle mobility as mediated by the environment and the crowd and distance-based attenuation of the audio. the circles serve as platforms for the display of identity, presence and activity: graphics are synchronized to participants' speech to aid in speech-source identification and participants can sketch in their circle, allowing a pictorial and gestural channel to complement the audio. we note user experiences through informal studies as well as design challenges we have faced in the creation of a rich environment for computer- mediated communication. roy rodenstein judith s. donath toward a dexter-based model for open hypermedia: unifying embedded references and link objects kaj randall h. trigg improving personal efficiency: time management in today's changing university computing environment with escalating technological advances and increased computing demands, the director of a university based computing facility finds greater professional responsibilities to perform with somewhat diminishing resources. to partially resolve this imbalance of task and resources, the leaders of computing organizations must seek to utilize their own time as efficaciously as possible. planning to achieve maximum efficiency in a given time frame is a complex and individualized process. even though people react differently to time constraints, each can seek to improve individual productivity within a constant time parameter. practical ways to manage time to improve performance in a changing university computing environment is the theme of this paper. darleen pigford hci and the inadequacies of direct manipulation systems bill buxton personalized communication networks doug riecken informed knowledge discovery (invited talk) (abstract only): using prior knowledge in discovery programs bruce buchanan journal review: world wide web journal, issue one corporate linux journal staff graphs and tables: a four-factor experiment richard a. coll joan h. coll ganesh thakur increasing ease of use karel vredenburg an approach to natural gesture in virtual environments this article presents research---an experiment and the resulting prototype--- on a method for treating gestural input so that it can be used for multimodal applications, such as interacting with virtual environments. this method involves the capture and use of natural , empty-hand gestures that are made during conventional descriptive utterances. users are allowed to gesture in a normal continuous manner, rather than being restricted to a small set of discrete gestural commands as in most other systems. the gestures are captured and analyzed into a higher-level description. this description can be used by an application-specific interpreter to understand the gestural input in its proper context. having a gesture analyzer of this sort enables natural gesture input to any appropriate application. 
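the talking in circles entry above mediates subgroup conversation through crowd- and distance-based attenuation of each speaker's audio. the sketch below is only a guess at what such attenuation can look like, a linear fade to silence at a hearing radius; the radius value, the falloff shape, and the function name are assumptions, since the entry does not give the actual curve.

import math

def attenuation(listener_xy, speaker_xy, hearing_radius=300.0):
    # gain in [0, 1]: full volume when the circles coincide, silence at or
    # beyond hearing_radius (the radius and linear falloff are assumptions)
    dx = speaker_xy[0] - listener_xy[0]
    dy = speaker_xy[1] - listener_xy[1]
    distance = math.hypot(dx, dy)
    return max(0.0, 1.0 - distance / hearing_radius)

print(attenuation((0, 0), (0, 0)))    # 1.0  same spot
print(attenuation((0, 0), (150, 0)))  # 0.5  halfway to the hearing radius
print(attenuation((0, 0), (400, 0)))  # 0.0  out of earshot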
alan wexelblat reflective programming in the relational algebra in reflective programming languages it is possible for a program to generate code that is integrated into the program's own execution. we introduce a reflective version of the relational algebra. reflection is achieved by storing and manipulating relational algebra programs as relations in the database. we then study the expressibility and complexity of the reflective algebra thus obtained. it turns out that there is a close correspondence between reflection and bounded looping. we also discuss the applicability of the reflective algebra. jan van den bussche dirk van gucht gottfried vossen an extension of the database language sql to capture more relational concepts an extension of the database language sql is described which introduces several new concepts into the language that are standard in the relational model, but surprisingly not present in available sql-based systems such as sql/ds and db2. gottfried vossen jim yacabucci statistical profile estimation in database systems a statistical profile summarizes the instances of a database. it describes aspects such as the number of tuples, the number of values, the distribution of values, the correlation between value sets, and the distribution of tuples among secondary storage units. estimation of database profiles is critical in the problems of query optimization, physical database design, and database performance prediction. this paper describes a model of a database of profile, relates this model to estimating the cost of database operations, and surveys methods of estimating profiles. the operators and objects in the model include build profile, estimate profile, and update profile. the estimate operator is classified by the relational algebra operator (select, project, join), the property to be estimated (cardinality, distribution of values, and other parameters), and the underlying method (parametric, nonparametric, and ad- hoc). the accuracy, overhead, and assumptions of methods are discussed in detail. relevant research in both the database and the statistics disciplines is incorporated in the detailed discussion. michael v. mannino paicheng chu thomas sager what's happening jennifer bruer protocols for integrated audio and shared windows in collaborative systems this paper describes the architecture and protocols for integrating real-time audio and shared windows in computer-supported cooperative work (cscw) environments. such applications require that actions on shared windows be synchronized with accompanying audio. we give a characterization of this synchronization problem and propose an architecture for handling audio- enhanced cooperative work. we present a protocol for synchronizing the audio stream and window-event streams and evaluate its performance. a. mathur a. prakash improving relevance feedback in the vector space model carol lundquist david a. grossman ophir frieder functionality and architecture of a cooperative database system: a vision a database system fostering the cooperative usage and modification of a common data pool should provide standard database functionality (e.g. application- independent correctness criteria and data modelling) plus means for a step- wise, cooperative refinement of data over a long period of time. key ingredients are a hierarchical organization of work, a sound data model covering cooperative uncertainly, and support for long-living cooperative processes. 
furthermore, mechanisms for data passing and hiding, negotiation means, and notification are of prominent importance. in the paper, the rationale behind such a functionality is described. furthermore, a proposal is made for the software architecture of what is called a cooperative database system (cdbms). thomas kirsche richard lenz hans schuster touchscreen usability in microgravity jurine a. adolf kritina l. holden query refinement for multimedia similarity retrieval in mars kriengkrai porkaew kaushik chakrabarti conceptual multidimensional data model based on object-oriented metacube nguyen thanh binh a. min tjoa who exactly is trying to help us? the ethos of help systems in popular computer applications neil randall isabel pedersen datablitz storage manager: main-memory database performance for critical applications j. baulier p. bohannon s. gogate c. gupta s. haldar issues affecting the success/failure of remote group decision support systems (panel) - gdss (how to insure that it works) james frost corey schou w. v. maconachy elliott masie a formal approach to the definition and the design of conceptual schemata for databased systems a formal approach is proposed to the definition and the design of conceptual database diagrams to be used as conceptual schemata in a system featuring a multilevel schema architecture, and as an aid for the design of other forms of schemata. we consider e-r (entity-relationship) diagrams, and we introduce a new representation called caz-graphs. a rigorous connection is established between these diagrams and some formal constraints used to describe relationships in the framework of the relational data model. these include functional and multivalued dependencies of database relations. the basis for our schemata is a combined representation for two fundamental structures underlying every relation: the first defined by its minimal atomic decompositions, the second by its elementary functional dependencies. the interaction between these two structures is explored, and we show that, jointly, they can represent a wide spectrum of database relationships, of which the well-known one-to-one, one-to-many, and many-to-many associations constitute only a small subset. it is suggested that a main objective in conceptual schema design is to ensure a complete representation of these two structures. a procedure is presented to design schemata which obtain this objective while eliminating redundancy. a simple correspondence between the topological properties of these schemata and the structure of multivalued dependencies of the original relation is established. various applications are discussed and a number of illustrative examples are given. carlo zaniolo michel a. melkaoff lost worlds: micro/macro world, river world, city, dwelling valerie sullivan- fuchs branch libraries for multimedia repositories (poster) michael kozuch wayne wolf andrew wolfe don mckay analyses of multiple evidence combination joon ho lee improving the usability of programming publications f. j. bethke w. m. dean p. h. kaiser e. ort f. h. pessin a cache filtering optimisation for queries to massive datasets on tertiary storage we consider a system in which many users run queries to examine subsets of a large object set. the object set is partitioned into files on tape. a single subset of objects will be visited by multiple queries in the workload. this locality of access creates the opportunity for caching on disk. 
we introduce and evaluate a novel optimisation, cache filtering, in which the 'hot' objects are automatically extracted from the files that are staged on disk, and then cached separately in new files on disk. cache filtering can lead to complex situations in the disk cache. we show that these do not prevent effective caching and we introduce a special cache replacement algorithm to maximise efficiency. through simulations we evaluate the system over a broad range of likely workloads. depending on workload and system parameters, the cache filtering optimisation yields speedup factors up to 6. koen holtman peter van der stok ian willers where will object technology drive data administration? arnon rosenthal transmitting mpeg-4 video streams over the internet gerald ku christoph kuhmu spatial user interface metaphors in hypermedia systems andreas dieberger keith andrews an investigation of social loafing and social compensation in computer- supported cooperative work the effects of computer-mediated communication on social loafing in brainstorming tasks and social compensation in decision-making tasks are examined. in the first experiment, subjects performed a brainstorming task in either nominal, face-to-face or computer- mediated brainstorming group conditions. production blocking, in which brainstorming group members interfere with each other's output, was minimised, but the nominal group still out-performed the other groups. in the second experiment, subjects performed a group decision task in face-to-face and computer mediated communication conditions. social compensation in the presence of social loafing was seen to occur in the first condition, but not in the second. the paper concludes by discussing some of the consequences of both experiments for the future role of computer-mediated communication in group work. andy mckinlay rob procter anne dunnett optimization of object queries containing encapsulated methods zhaohui xie secure distribution of watermarked images for a digital library of ancientpapers christian rauber joe Ã" ruanaidh thierry pun a system for semantic query optimization this paper describes a scheme to utilize semantic integrity constraints in optimizing a user specified query. the scheme uses a graph theoretic approach to identify redundant join clauses and redundant restriction clauses specified in a user query. an algorithm is suggested to eliminate such redundant joins and avoid unnecessary restrictions. in addition to these eliminations, the algorithm aims to introduce as many restrictions on indexed attributes as possible, thus yielding an equivalent, but potentially more profitable, form of the original query. sreekumar t. shenoy z. meral ozsoyoglu multimedia networks: fundamentals and future directions nalin sharda designing an interactive tool for video object segmentation and annotation huitao luo alexandros eleftheriadis perceptual vs. hardware performance in advanced acoustic interface design (panel) elizabeth m. wenzel william w. gaver scott h. foster haim levkowitz roger powell the express web server: a user interface for standards development david a. sauder joshua lubell script-free scenario authoring in mediadesc andrea caloini daigo taguchi kazuo yanoo eiichiro tanaka conceptual prototyping requirements engineering involves three processes: (1) problem recognition; (2) problem understanding; (3) solution space specification. 
[5] this paper focuses on problem recognition and problem understanding---the needs determination component of requirements engineering. during needs determination effective communication between information system users and information system designers is critical---creative exploration of the problem environment is imperative. anne c. steele barbara j. nowell sentinel: an object-oriented dbms with event-based rules s. chakravarthy embedding computer-based critics in the contexts of design gerhard fischer kumiyo nakakoji jonathan ostwald gerry stahl tamara sumner the cubic mouse: a new device for three-dimensional input we have developed a new input device that allows users to intuitively specify three- dimensional coordinates in graphics applications. the device consists of a cube-shaped box with three perpendicular rods passing through the center and buttons on the top for additional control. the rods represent the x, y, and z axes of a given coordinate system. pushing and pulling the rods specifies constrained motion along the corresponding axes. embedded within the device is a six degree of freedom tracking sensor, which allows the rods to be continually aligned with a coordinate system located in a virtual world. we have integrated the device into two visualization prototypes for crash engineers and geologists from oil and gas companies. in these systems the cubic mouse controls the position and orientation of a virtual model and the rods move three orthogonal cutting or slicing planes through the model. we have evaluated the device with experts from these domains, who were enthusiastic about its ease of use. bernd fröhlich john plate interfacing realities to the human body (abstract) almost 25 years ago, the first glimmerings of a revolution in the ways people interact with computers began to appear. however, it was not until the mid-1980s that radically new human interfaces started to be taken seriously. the ultimate computer interface is the human body and the human senses. this idea is now termed virtual reality. the idea works in that it has already changed how people think. the technology is moving rapidly into research labs, but much more slowly into the realm of practical applications. when the ultimate expression of this concept is achieved, any fantasy that can be imagined could be experienced. long before that time, preliminary implementations of the idea will radically change how we interact with computers. this new technology changes the focus of computer science from the computer to the human. instead of receiving input from a sedentary user, the computer now perceives the behavior of a mobile participant. it allows conceptual information to be presented perceptually. similarly, by immersing the participant in a graphic world, this technology suggests a new approach to a host of old problems. by creating a telecommunication environment in which remote participants can share information as if they are together, it will change how we relate to each other. the advent of virtual reality can be seen as a major cultural event. indeed, it is a medium in which content and aesthetics will be as important as technology. current virtual reality orthodoxy assumes a goggles-and-gloves approach and is willing to encumber users with as much electronics as necessary to accomplish its purely technical ends. it is not likely that this attitude will succeed in the marketplace. 
alternative approaches that instrument the environment instead of the body, but provide some of the functionality of virtual reality, are more compatible with the existing workplace. it is likely that such approximations of virtual reality will be the first techniques that are widely used. myron w. krueger wayfinding strategies and behaviors in large virtual worlds rudolph p. darken john l. sibert toward integrated support of synchronous and asynchronous communication in cooperative work: an empirical study of real group communication yasuhisa sakamoto eiji kuwana implementing interface attachments based on surface representations dan r. olsen scott e. hudson thom verratti jeremy m. heiner matt phelps an experimental performance study of a pipelined recursive query processing strategy in [16] a pipelined strategy is presented for processing recursive queries in deductive database systems. as a follow-up, this paper studies the run-time performance of the proposed strategy. the algorithm, introduced informally by examples in this paper, is coded in occam2 and runs on a network of transputers. a wide range of recursive queries and database structures are used as benchmarks in this study. both the speedup factors achieved and the elapsed time spent by the strategy in answering recursive queries are analysed. experimental results show that it is possible to achieve significant performance improvement when queries are evaluated in parallel, and provide insights into the success of this strategy in meeting the primary objective of focusing on relevant data. j. shao d. a. bell m. e. c. hull human factors guidelines for terminal interface design this paper provides a set of guidelines for the design of software interfaces for video terminals. it describes how to optimize screen layouts, interactive data entry, and error handling, as well as many practical techniques for improving man-machine interaction. emphasis is placed on factors relating to perceptual and cognitive psychology rather than on gross physiological concerns. ways in which interfaces can be evaluated to improve their user friendliness are also suggested. the author summarizes many ideas that can be found in other, more comprehensive texts on the subject. these guidelines will provide practicing software designers with useful insights into some of today's principal terminal interface design considerations. d. verne morland adaptive linear information retrieval models missing, non-applicable and imprecise values arise frequently in office information systems. there is a need to treat them in a consistent and useful manner. this paper proposes a method and gives the precise semantics of the retrieval operations in a system where imprecision is allowed. it also suggests a way to handle the uncertainty introduced by imprecise data values. p. bollmann s. k. m. wong unblocking brainstorming through the use of a simple group editor charles mclaughlin hymes gary m. olson arranging to do things with others herbert h. clark a taxonomy of time databases richard snodgrass ilsoo ahn role-based access control in oracle7 and trusted oracle7 louanna notargiacomo digital libraries and knowledge disaggregation: the use of journal article components ann peterson bishop daytona and the fourth-generation language cymbal the daytona data management system is used by at&t to solve a wide spectrum of data management problems. for example, daytona is managing a 4 terabyte data warehouse whose largest table contains over 10 billion rows.
daytona's architecture is based on translating its high-level query language cymbal (which includes sql as a subset) completely into c and then compiling that c into object code. the system resulting from this architecture is fast, powerful, easy to use and administer, reliable and open to unix tools. in particular, two forms of data compression plus robust horizontal partitioning enable daytona to handle terabytes with ease. rick greer estimating block accesses and number of records in file management we consider the problems of estimating the number of secondary storage blocks and the number of distinct records accessed when a transaction consisting of possibly duplicate requested records is presented to a file management system. our main results include (1) a new formula for block access estimation for the case where the requested records may have duplications and their ordering is immaterial and (2) a simple formula for estimating the number of distinct records in the transaction. to-yat cheung the breakdown of the information model in multi-database systems william kent the context interchange mediator prototype the context interchange strategy presents a novel approach for mediated data access in which semantic conflicts among heterogeneous systems are not identified a priori, but are detected and reconciled by a context mediator through comparison of contexts. this paper reports on the implementation of a context interchange prototype which provides a concrete demonstration of the features and benefits of this integration strategy. s. bressan c. h. goh k. fynn m. jakobisiak k. hussein h. kon t. lee s. madnick t. pena j. qu a. shum m. siegel distributed and parallel database systems m. tamer özsu patrick valduriez the maximum entropy approach and probabilistic ir models this paper takes a fresh look at modeling approaches to information retrieval that have been the basis of much of the probabilistically motivated ir research over the last 20 years. we shall adopt a subjectivist bayesian view of probabilities and argue that classical work on probabilistic retrieval is best understood from this perspective. the main focus of the paper will be the ranking formulas corresponding to the binary independence model (bim), presented originally by robertson and sparck jones [1977] and the combination match model (cmm), developed shortly thereafter by croft and harper [1979]. we will show how these same ranking formulas can result from a probabilistic methodology commonly known as maximum entropy (maxent). warren r. greiff jay m. ponte template dependencies: a large class of dependencies in relational databases and its complete axiomatization fereidoon sadri jeffrey d. ullman mapping the discourse of hci researchers with citation analysis this paper observes the development of human-computer interaction as a research discipline from 1991 to 1993. from a citation analysis of three volumes of three journals, the field of human computer interaction is identified as emerging from a supporting base of four fields: computer science, information systems, psychology, and human factors/ergonomics. results of this analysis support the proposition that human-computer interaction is emerging as a distinct field of study. thomas w. dillon the v3 video server - managing analog and digital video clips the v3 video server is a demonstration showing a multimedia application developed on top of the vodak database management system.
vodak is a prototype of an object-oriented and distributed database management system (dbms) developed at gmd-ipsi. the v3 video server allows a user to interactively store, retrieve, manipulate, and present analog and short digital video clips. a video clip consists of a sequence of pictures and corresponding sound. several attributes like author, title, and a set of keywords are annotated. the highlights of the demonstration are as follows. (1) it is shown that an object-oriented database management systems is very useful for the development of multimedia applications. (2) the video server gives valuable hints for the development of an object-oriented database management system in direction to a multimedia database management system. thomas c. rakow peter muth voting class - an approach to achieving high availability for replicated data in a distributed system, data are often replicated to increase the availability in the face of node and communication failures. however, updates to replicated data must be properly controlled to avoid data inconsistency. this can adversely affect the availability. in this paper, we propose an approach to the design of replica control schemes which can provide higher availability than currently existing voting schemes. the approach is based on the observation that the existing voting schemes actually belong to a general class, called voting class. this class of voting schemes can be represented in a very simple and uniform way. thus a designer can choose the optimal scheme within the voting class by evaluating each of them and choose the one which maximizes the availability. jian tang a transient hypergraph-based model for data access two major methods of accessing data in current database systems are querying and browsing. the more traditional query method returns an answer set that may consist of data values (dbms), items containing the answer (full text), or items referring the user to items containing the answer (bibliographic). browsing within a database, as best exemplified by hypertext systems, consists of viewing a database item and linking to related items on the basis of some attribute or attribute value. a model of data access has been developed that supports both query and browse access methods. the model is based on hypergraph representation of data instances. the hyperedges and nodes are manipulated through a set of operators to compose new nodes and to instantiate new links dynamically, resulting in transient hypergraphs. these transient hypergraphs are virtual structures created in response to user queries, and lasting only as long as the query session. the model provides a framework for general data access that accommodates user-directed browsing and querying, as well as traditional models of information and data retrieval, such as the boolean, vector space, and probabilistic models. finally, the relational database model is shown to provide a reasonable platform for the implementation of this transient hypergraph-based model of data access. carolyn watters michael a. shepherd cluster-based text categorization: a comparison of category search strategies makoto iwayama takenobu tokunaga systems, interactions, and macrotheory a significant proportion of early hci research was guided by one very clear vision: that the existing theory base in psychology and cognitive science could be developed to yield engineering tools for use in the interdisciplinary context of hci design. 
while interface technologies and heuristic methods for behavioral evaluation have rapidly advanced in both capability and breadth of application, progress toward deeper theory has been modest, and some now believe it to be unnecessary. a case is presented for developing new forms of theory, based around generic "systems of interactors." an overlapping, layered structure of macro- and microtheories could then serve an explanatory role, and could also bind together contributions from the different disciplines. novel routes to formalizing and applying such theories provide a host of interesting and tractable problems for future basic research in hci. philip barnard jon may david duke david duce web navigation: resolving conflicts between the desktop and the web hal shubin ron perkins converting to graphical user interfaces: design guidelines for success arlene f. aucella to influence time perception erik geelhoed peter toft suzanne roberts patrick hyland new techniques for open-vocabulary spoken document retrieval martin wechsler eugen munteanu peter schäuble scaling question answering to the web the wealth of information on the web makes it an attractive resource for seeking quick answers to simple, factual questions such as who was the first american in space? or what is the second tallest mountain in the world? yet today's most advanced web search services (e.g., google and askjeeves) make it surprisingly tedious to locate answers to such questions. in this paper, we extend question-answering techniques, first studied in the information retrieval literature, to the web and experimentally evaluate their performance.first we introduce mulder, which we believe to be the first general-purpose, fully-automated question-answering system available on the web. second, we describe mulder's architecture, which relies on multiple search-engine queries, natural-language parsing, and a novel voting procedure to yield reliable answers coupled with high recall. finally, we compare mulder's performance to that of google and askjeeves on questions drawn from the trec-8 question answering track. we find that mulder's recall is more than a factor of three higher than that of askjeeves. in addition, we find that google requires 6.6 times as much user effort to achieve the same level of recall as mulder. cody kwok oren etzioni daniel s. weld the virtual video browser in mosaic (demonstration) dinesh venkatesh t. d. c. little data abstraction tools: design, specification and application current research in data modeling is motivated by the following dilemma: \\- at the application level - being confronted with "slices of reality" \- details are perceived that, in general, cannot be represented. \\- at the representation level - being confronted with "levels of machines" \- details are represented that, in general, cannot be perceived. abstraction methods cope with that problem by suppressing unnecessary details and by formalizing and structuring the relevant information. joachim w. schmidt role conflict and ambiguity: critical variables in the mis user-designer relationship one way to make progress in improving the probability of successful management information systems (mis) and management science and operation research (ms/or)1 projects is to take the view that the design process is a politically-based, planned change process. the political dimension emphasizes that mis design is inevitable imbedded in an organization with a political order which acts to shape and constrain the design and use of an mis. 
the planned change dimension emphasizes the interaction process and relationships among the participants in the design process. recent articles in the mis/ms literature have suggested that the politically-based change approach is the most viable for understanding and managing the design process and enhancing the chances for a successful system (for example, see: bostrom and heinen, 1977; keen and gerson, 1977; and ginzberg, 1978). but to date little research, other than case studies, has been conducted with this focus. robert p. bostrom quality of experience: defining the criteria for effective interaction design lauralee alben o2, an object-oriented data model the altair group is currently designing an object-oriented data base system called o2. this paper presents a formal description of the object-oriented data model of this system. it proposes a type system defined in the framework of a set-and-tuple data model. it models the well known inheritance mechanism and enforces strong typing. c. lecluse p. richard f. velez optimal determination of user-oriented clusters user-oriented clustering schemes enable the classification of documents based upon the user perception of the similarity between documents, rather than on some similarity function presumed by the designer to represent the user criteria. in this paper, an enhancement of such a clustering scheme is presented. this is accomplished by the formulation of the user-oriented clustering as a function- optimization problem. the problem formulated is termed the boundary selection problem (bsp). heuristic approaches to solve the bsp are proposed and a preliminary for evaluation of these approaches is provided. j. deogun v. raghavan ensuring consistency in multidatabases by preserving two-level serializability the concept of serializability has been the traditionally accepted correctness criterion in database systems. however in multidatabase systems (mdbss), ensuring global serializability is a difficult task. the difficulty arises due to the heterogeneity of the concurrency control protocols used by the participating local database management systems (dbmss), and the desire to preserve the autonomy of the local dbmss. in general, solutions to the global serializability problem result in executions with a low degree of concurrency. the alternative, relaxed serializability, may result in data inconsistency. in this article, we introduce a systematic approach to relaxing the serializability requirement in mdbs environments. our approach exploits the structure of the integrity constraints and the nature of transaction programs to ensure consistency without requiring executions to be serializable. we develop a simple yet powerful classification of mdbss based on the nature of integrity constraints and transaction programs. for each of the identified models we show how consistency can be preserved by ensuring that executions are two-level serializable (2lsr). 2lsr is a correctness criterion for mdbs environments weaker than serializability. what makes our approach interesting is that unlike global serializability, ensuring 2lsr in mdbs environments is relatively simple and protocols to ensure 2lsr permit a high degree of concurrency. furthermore, we believe the range of models we consider cover many practical mdbs environments to which the results of this article can be applied to preserve database consistency. sharad mehrotra rajeev rastogi henry f. 
korth abraham silberschatz data bases: a logical perspective my work in data base theory is a natural outgrowth of my longstanding concern with the problem of representing and reasoning with domain specific knowledge, a problem of major concern in artificial intelligence. in data base terminology this is the conceptual modelling issue. my own methodological bias favours logic as a representation language for conceptual modelling, a bias which historically arose within ai in response to ai's emphasis on the ability to reason deductively with representations. in this position paper i shall argue that logic has other advantages for data base theory. specifically my objective is to provide the outline of a logical reconstruction of certain aspects of conventional data base theory. raymond reiter query-based sampling of text databases the proliferation of searchable text databases on corporate networks and the internet causes a database selection problem for many people. algorithms such as ggloss and cori can automatically select which text databases to search for a given information need, but only if given a set of resource descriptions that accurately represent the contents of each database. the existing techniques for acquiring resource descriptions have significant limitations when used in wide-area networks controlled by many parties. this paper presents query-based sampling, a new technique for acquiring accurate resource descriptions. query-based sampling does not require the cooperation of resource providers, nor does it require that resource providers use a particular search engine or representation technique. an extensive set of experimental results demonstrates that accurate resource descriptions are created, that computation and communication costs are reasonable, and that the resource descriptions do in fact enable accurate automatic database selection. jamie callan margaret connell the prototype of the dare system tiziana catarci giuseppe santucci a testbed for characterizing dynamic response of virtual environment spatial sensors this paper describes a testbed and method for characterizing the dynamic response of the type of spatial displacement transducers commonly used in virtual environment (ve) applications. the testbed consists of a motorized rotary swing arm that imparts known displacement inputs to the ve sensor. the experimental method involves a series of tests in which the sensor is displaced back and forth at a number of controlled frequencies that span the bandwidth of volitional human movement. during the tests, actual swing arm angle and reported ve sensor displacements are collected and time stamped. because of the time stamping technique, the response time of the sensor can be measured directly, independent of latencies in data transmission from the sensor unit and any processing by the interface applications running on the host computer. analysis of these experimental results allows sensor time delay and gain characteristics to be determined as a function of input frequency. results from tests of several different ve spatial sensors (ascension, logitech, and polhemus) are presented here to demonstrate use of the testbed and method. bernard d. adelstein eric r. johnston stephen r. ellis object manipulation in virtual environments: relative size matters yanqing wang christine l. mackenzie enhancing database correctness: a statistical approach wen-chi hou zhongyang zhang perceptual user interfaces: haptic interfaces hong z.
tan an approach to the recursive retrieval problem in the relational database the host query language often impairs data retrieval in recursive database structures. functional extensions to quel are explored in order to simplify the user interface. f.-y. kuo j. tillquist spatial operators eliseo clementini paolino di felice personal end-user tools doug riecken specifying user interfaces in disco kari systä awareness in collaborative systems: a chi 97 workshop susan e. mcdaniel tom brinck research issues in real-time dbms in the context of electronic commerce prabhudev konana alok gupta andrew b. whinston using information murals in visualization applications dean f. jerding john t. stasko the integrated dictionary/directory system frank w. allen mary e. s. loomis michael v. mannino user information processing strategies and online visual structure elizabeth keyes robert krull visualization is a state of mind maarten van dantzich remembering while mousing: the cognitive costs of mouse clicks patricia wright ann lickorish robert milroy efficient recompression techniques for dynamic full-text retrieval systems shmuel t. klein starer: a conceptual model for data warehouse design modeling data warehouses is a complex task focusing, very often, on internal structures and implementation issues. in this paper we argue that, in order to accurately reflect the users' requirements into an error-free, understandable, and easily extendable data warehouse schema, special attention should be paid at the conceptual modeling phase. based on a real mortgage business warehouse environment, we present a set of user modeling requirements and we discuss the involved concepts. understanding the semantics of these concepts allows us to build a conceptual model---namely, the starer model---for their efficient handling. more specifically, the starer model combines the star structure, which is dominant in data warehouses, with the semantically rich constructs of the er model; special types of relationships have been further added to support hierarchies. we present an evaluation of the starer model as well as a comparison of the proposed model with other existing models, pointing out differences and similarities. examples from a mortgage data warehouse environment, in which starer is tested, reveal the ease of understanding of the model, as well as the efficiency in representing complex information at the semantic level. nectaria tryfona frank busborg jens g. borch christiansen research issues in multimedia storage servers banu özden rajeev rastogi avi silberschatz metadata in video databases ramesh jain arun hampapur about this issue… anthony i. waserman spot: distance based join indices for spatial data tsin shu yeh variations: a digital music library system at indiana university jon w. dunn constance a. mayer preparing for the digital media monsoons andy hopper industry briefs: cadence scott joaquim text and image retrieval in cheshire ii (demonstration abstract) ray r. larson high performance multidimensional analysis of large datasets sanjay goil alok choudhary focusing search in hierarchical structures with directory sets guy jacobson balachander krishnamurthy divesh srivastava dan suciu empirical evaluation of explicit versus implicit acquisition of user profiles in information filtering systems luz marina quiroga javed mostafa group formation mechanisms for transactions in isis distributed toolkits like isis provide means of replicating data but not means for making it persistent.
this makes the use of transactions desirable, even in non-database applications. using isis can alleviate the programming cost of distributed transaction processing, such as the multi-phase commit protocols. using the isis transaction tool, however, imposes additional cost, and we examine the effect of group formation strategies on the overhead. the paper presents three different group formation mechanisms in isis and compares the costs associated with them. neel k. jain the technology of word processing the word processing industry offers over 160 model lines which are provided and marketed by 60 vendors. that gives you a lot of things to choose from. to make matters even worse, one of the leaders in this industry announces a new feature, either hardware or software, every 3 days! it is a jungle out there. so, how can you keep up? the answer varies because it depends on the person and range of interests. if you are involved in this business as a full-time responsibility, then it is possible to keep up. but, it will probably take 6 to 9 months to reach the basic level of understanding. for the rest of us who can only afford a modest amount of time to learn the basics and then maybe a few hours a week to keep up, it is an impossible task and leads to much frustration. this paper is intended to reduce the frustration level a bit and put you on the trail of learning more about word processing. gene t. sherron visual profiles: a critical component of universal access julie a. jacko max a. dixon robert h. rosa ingrid u. scott charles j. pappas designing the human-computer interface there is a growing awareness in the academic and industrial computing communities of the need to introduce human factors considerations into the design of computer systems. at the school of information and computer science of the georgia institute of technology, this need is being met through a well-funded graduate research program in the human factors of computer systems as well as the introduction of courses that emphasize usability in designing the human-computer interface. the course detailed in this paper has the title "human-computer interface." albert n. badre establishing a foundation for collaborative scenario elicitation eliciting and integrating requirements from large groups of diverse users remains a major challenge for the software engineering community. scenarios are becoming recognized as valuable means of identifying actions taken by users when executing a business process and interacting with an information system, and therefore have great potential for addressing requirements elicitation problems. a review of the scenario literature indicates that, although there is widespread agreement on the usefulness of scenarios, there are many unanswered questions about how to elicit scenario definitions from individual users and user groups efficiently. this research examines how increasing the structure of scenario definitions affects scenario quality and the efficiency of scenario definition by individual users. during a laboratory experiment, subjects defined scenarios using a general-purpose gss, groupsystems group outliner, with one of three textual scenario formats that ranged from unstructured to very structured. scenario quality and the efficiency of scenario definition by users were compared across the formats. results highlighted the efficiency of the unstructured format but revealed that all formats produced incomplete scenario definitions.
recommendations are made for an iterative collaborative scenario process and a special-purpose gss scenario tool that may overcome some of these problems. ann m. hickey douglas l. dean jay f. nunamaker term clustering of syntactic phrases term clustering and syntactic phrase formation are methods for transforming natural language text. both have had only mixed success as strategies for improving the quality of text representations for document retrieval. since the strengths of these methods are complementary, we have explored combining them to produce superior representations. in this paper we discuss our implementation of a syntactic phrase generator, as well as our preliminary experiments with producing phrase clusters. these experiments show small improvements in retrieval effectiveness resulting from the use of phrase clusters, but it is clear that corpora much larger than standard information retrieval test collections will be required to thoroughly evaluate the use of this technique. d. d. lewis w. b. croft flowback: providing backward recovery for workflow management systems the distributed systems technology centre (dstc) framework for workflow specification, verification and management captures workflows' transaction-like behavior for long-lasting processes. flowback is an advanced prototype functionally enhancing an existing workflow management system by providing process backward recovery. it is based on extensive theoretical research ([3],[4],[5],[6],[8],[9]), and its architecture and construction assumptions are product independent. flowback clearly demonstrates the extent to which generic backward recovery can be automated and system supported. the provision of a solution for handling exceptional business process behavior requiring backward recovery makes workflow solutions more suitable for a large class of applications, therefore opening up new dimensions within the market. for demonstration purposes, flowback operates with ibm flowmark, one of the leading workflow products. bartek kiepuszewski ralf muhlberger maria e. orlowska a high performance multiversion concurrency control protocol for object databases craig harris madhu reddy carl woolf application-layer broker for scalable internet services with resource reservation ping bai b. prabhakaran aravind srinivasan understanding gss use by executive groups jaime f. serida-nishimura d. harrison mcknight facilitating teamwork with computer technology (panel): supporting group task or group process? catherine beise bob bostrom gigi kelly a. lynn daniel owen kingman pak yoong research in communication services for collaborative systems (doctoral colloquium) robert w. hall search technology, inc. r. m. hunt metaphorically speaking steven pemberton clustering methods for large databases: from the past to the future alexander hinneburg daniel a. keim a web information organization and management system (wioms) tynan d. grayson ralph a. grayson g. e. hedrick guidebook: design guidelines database for assisting the interface design task katsuhiko ogawa kaori ueno building concept hierarchies for schema integration in hddbs using incremental concept formation cyrus azarbod william perrizo data management: lasting impact on wild, wild, web this paper describes some of the ways the internet and world wide web have affected databases and data warehousing and the lasting impact in these areas. reed m.
meseck groupware: coining and defining it peter johnson-lenz trudy johnson-lenz from document to knowledge base: intelligent hypertext as minimalist instruction patricia a. carlson maintaining views incrementally we present incremental evaluation algorithms to compute changes to materialized views in relational and deductive database systems, in response to changes (insertions, deletions, and updates) to the relations. the view definitions can be in sql or datalog, and may use union, negation, aggregation (e.g. sum, min), linear recursion, and general recursion. we first present a counting algorithm that tracks the number of alternative derivations (counts) for each derived tuple in a view. the algorithm works with both set and duplicate semantics. we present the algorithm for nonrecursive views (with negation and aggregation), and show that the count for a tuple can be computed at little or no cost above the cost of deriving the tuple. the algorithm is optimal in that it computes exactly those view tuples that are inserted or deleted. note that we store only the number of derivations, not the derivations themselves. we then present the delete and rederive algorithm, dred, for incremental maintenance of recursive views (negation and aggregation are permitted). the algorithm works by first deleting a superset of the tuples that need to be deleted, and then rederiving some of them. the algorithm can also be used when the view definition is itself altered. ashish gupta inderpal singh mumick v. s. subrahmanian viewing morphology as an inference process morphology is the area of linguistics concerned with the internal structure of words. information retrieval has generally not paid much attention to word structure, other than to account for some of the variability in word forms via the use of stemmers. this paper will describe our experiments to determine the importance of morphology, and the effect that it has on performance. we will also describe the role of morphological analysis in word sense disambiguation, and in identifying lexical semantic relationships in a machine-readable dictionary. we will first provide a brief overview of morphological phenomena, and then describe the experiments themselves. robert krovetz representation in virtual space: visual convention in the graphical user interface loretta staples models for reader interaction systems daniel berleant editorial pointers diane crawford detecting change in categorical data: mining contrast sets stephen d. bay michael j. pazzani all talk and all action: strategies for managing voicemail messages steve whittaker julia hirschberg christine h. nakatani collection development in the electronic library kristin antelman david langenberg state of the art issues in distributed databases (panel session): site autonomy issues in the r* distributed database system it is desirable to have a distributed database management system (ddbms) whose behavior and control are as identical as possible to that used in single site database management systems. we call this notion site autonomy. preserving the autonomy of sites which join a ddbms network is essential to the peace of mind of its managers and users, and more technically, is essential in an environment where sites and communication lines fail. to achieve resilience to failures of sites and communication lines (these are indistinguishable from one site's viewpoint), there can be no reliance on centralized functions or services.
this means no global dictionary, no global information collector for deadlock detection, and no broadcast of local events such as file creation or addition of a new site. aside from these prohibitions, this goal of site autonomy also implies that sites should perform their own compilation, their own binding of print names to internal names, their own decomposition of compound objects (such as relational views), and their own authorization checking. this talk discusses the concept of site autonomy in further detail and discusses several specific issues on which the site autonomy philosophy has an impact. p. selinger audiostreamer: exploiting simultaneity for listening chris schmandt atty mullins incremental maintenance of views with duplicates timothy griffin leonid libkin decision support systems: a rule-based approach man-kuen s. chen chui-fat c. chau waldo c. kabat the information periscope "i-steer" junko misawa junichi osada benchmarking queries over trees: learning the hard truth the hard way fanny wattez sophie cluet veronique benzaken guy ferran christian fiegel locating passages using a case-base of excerpts jody j. daniels edwina l. rissland teaching experienced developers to design graphical user interfaces five groups of developers with experience in the design of character- based user interfaces were taught graphical user interface design through a short workshop with a focus on practical design exercises using low-tech tools derived from the pictive method. several usability problems were found in the designs by applying the heuristic evaluation method, and feedback on these problems constituted a way to make the otherwise abstract usability principles concrete for the designers at the workshop. based on these usability problems and on observations of the design process, we conclude that object-oriented interactions are especially hard to design and that the developers were influenced by the graphical interfaces of personal computers with which they had interacted as regular users. jakob nielsen rita m. bush tom dayton nancy e. mond michael j. muller robert w. root on the issue of valid time(s) in temporal databases stavros kokkotos efstathios v. ioannidis themis panayiotopoulos constantine d. spyropoulos project envision (abstract) edward a. fox notification servers for synchronous groupware john f. patterson mark day jakov kucan networked knowledge organization systems (nkos): nkos workshop at acm dl'99, berkeley, saturday august 14, 1999 l. l. hill g. hodge j. busch uniform resource names: handles, purls, and digital object identifiers william y. arms security of random data perturbation methods statistical databases often use random data perturbation (rdp) methods to protect against disclosure of confidential numerical attributes. one of the key requirements of rdp methods is that they provide the appropriate level of security against snoopers who attempt to obtain information on confidential attributes through statistical inference. in this study, we evaluate the security provided by three methods of perturbation. the results of this study allow the database administrator to select the most effective rdp method that assures adequate protection against disclosure of confidential information. krishnamurty muralidhar rathindra sarathy capturing human intelligence in the net paul b. kantor endre boros benjamin melamed vladimir meñkov bracha shapira david j. neu hypertext by link-resolving components frank wm. tompa g. elizabeth blake darrell r. 
raymond reaching through analogy: a design rationale perspective on roles of analogy allan maclean victoria bellotti richard young thomas moran where have you been from here? trials in hypertext systems siegfried reich leslie carr david de roure wendy hall explaining ambiguity in a formal query language the problem of generating reasonable natural language-like responses to queries formulated in nonnavigational query languages with logical data independence is addressed. an extended er model, the entity-relationship-involvement model, is defined which assists in providing a greater degree of logical data independence and the generation of natural language explanations of a query processor's interpretation of a query. these are accomplished with the addition of the concept of an involvement to the model. based on involvement definitions in a formally defined data definition language, ddl, an innovative strategy for generating explanations is outlined and exemplified. in the conclusion, possible extensions to the approach are given. joseph a. wald paul g. sorenson constructive abstract data types (cad) 1. distinction from conventional approaches the motivation for cad is to extend the idea of data abstraction to application programming. conventional approaches confine themselves to operational concepts of system programming like stack, queue etc. and do not consider concepts like contract, invoice etc. of the application world (aw) with an arbitrary number of operations applicable to them, including those of the ad hoc type. hartmut h. wedekind revisiting the concept of hypermedia document consistency c. a. s. santos p. n. m. sampaio j. p. courtiat parallel index building in informix online 6.0 wayne davison the effects of emotional icons on remote communication krisela rivera nancy j. cooke jeff a. bauhs a basic set of abstract object classes for representation of complex documents randy s. weinberg michael j. bozonie videowhiteboard: video shadows to support remote collaboration john c. tang scott minneman the hci bibliography project gary perlman dhm: an open dexter-based hypermedia service (abstract) kaj lennert sloth semantic search on internet tabular information extraction for answering queries h. l. wang s. h. wu i. c. wang c. l. sung w. l. hsu w. k. shih multimedia authoring tools michael d. rabin michael j. burns a formal approach to the assessment and improvement of terminological models used in information systems engineering in the design and implementation of any information system identifiers are used to designate concepts. typical examples are names of classes, variables, modules, database fields, etc. a terminological model is a set of identifiers together with a set of abstractions and a set of links between identifiers and abstractions. naturally, terminological models embody important knowledge of a system, and therefore they play an important role during the development of information systems. in this paper we propose a metamodel for terminological models that is based on category theory as a conceptual and notational framework. peter wendorff towards self-tuning data placement in parallel database systems parallel database systems are increasingly being deployed to support the performance demands of end-users. while declustering data across multiple nodes facilitates parallelism, initial data placement may not be optimal due to skewed workloads and changing access patterns.
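the abstract continues below with the authors' index-based reorganization method; purely as an illustration of the underlying idea, and not of that method, a naive load-driven migration heuristic over hypothetical partition and node names could be sketched as follows:

    # deliberately naive sketch of dynamic data placement in a shared-nothing
    # system: when node loads drift apart, move the hottest partition from the
    # busiest node to the least loaded one. (hypothetical names; not the
    # index-based method described in the abstract.)
    def rebalance(placement, load, imbalance=1.5):
        """placement: node -> list of partitions; load: partition -> access count."""
        node_load = {n: sum(load[p] for p in parts) for n, parts in placement.items()}
        busiest = max(node_load, key=node_load.get)
        idlest = min(node_load, key=node_load.get)
        if node_load[busiest] > imbalance * max(node_load[idlest], 1):
            victim = max(placement[busiest], key=lambda p: load[p])
            placement[busiest].remove(victim)
            placement[idlest].append(victim)
            return victim, busiest, idlest      # partition moved, source, target
        return None

    placement = {"node0": ["p0", "p1"], "node1": ["p2"], "node2": ["p3"]}
    load = {"p0": 900, "p1": 300, "p2": 100, "p3": 50}
    print(rebalance(placement, load))           # ('p0', 'node0', 'node2')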
to prevent performance degradation, the placement of data must be reorganized, and this must be done on-line to minimize disruption to the system. in this paper, we consider a dynamic self-tuning approach to reorganization in a shared nothing system. we introduce a new index-based method that facilitates fast and efficient migration of data. our solution incorporates a globally height-balanced structure and load tracking at different levels of granularity. we conducted an extensive performance study, and implemented the methods on the fujitsu ap3000 machine. both the simulation and empirical results demonstrate that our proposed method is indeed scalable and effective in correcting any deterioration in system throughput. mong li lee masaru kitsuregawa beng chin ooi kian-lee tan anirban mondal object management in distributed information systems peter lyngbaek dennis mcleod salticus: guided crawling for personal digital libraries in this paper, we describe salticus, a web crawler that learns from users' web browsing activity. salticus enables users to build a personal digital library by collecting documents and generalizing over the user's choices. robin burke sql/se: a query language extension for databases supporting schema evolution the incorporation of a knowledge of time within database systems allows for temporally related information to be modelled more naturally and consistently. adding this support to the metadatabase further enhances its semantic capability and allows elaborate interrogation of data. this paper presents sql/se, an sql extension capable of handling schema evolution in relational database systems. john f. roddick a distributed information handling system lee a. hollaar interactive multimedia conference proceedings samuel a. rebelsky james ford kenneth harker fillia makedon p. takis metaxas charles owen a comparison of rule-based and positionally constant arrangements of computer menu items an experiment was conducted to evaluate user performance under four different menu item arrangements: alphabetic, probability of selection (most popular choices are positioned near the beginning of the list), random, and positionally constant (consistent assignment of individual items to screen positions). during the initial stages of practice, the rule-based approaches produced faster mean search times, but after moderate amounts of practice, the positionally constant arrangement appeared to be most efficient. people seem to remember quite easily the location of items on a display, indicating that positional constancy can be an important factor in increasing the efficiency of the search of computer menus and other displays. benjamin l. somberg dynamic query interpretation in relational databases a new dynamic approach to the problem of determining the correct interpretation of a logically independent query to a relational database is described. the proposed disambiguating process is based on a simple user-system dialogue that consists of a sequence of decisions about the relevance (or not) of an attribute with respect to the user interpretation. a. d'atri p. di felice m. moscarini preserving electronic documents douglas a.
kranch computational aspects of resilient data extraction from semistructured sources (extended abstract) automatic data extraction from semistructured sources such as html pages is rapidly growing into a problem of significant importance, spurred by the growing popularity of the so called "shopbots" that enable end users to compare prices of goods and other services at various web sites without having to manually browse and fill out forms at each one of these sites. the main problem one has to contend with when designing data extraction techniques is that the contents of a web page change frequently, either because its data is generated dynamically, in response to filling out a form, or because of changes to its presentation format. this makes the problem of data extraction particularly challenging, since a desirable requirement of any data extraction technique is that it be "resilient", i.e., using it we should always be able to locate the object of interest in a page (such as a form or an element in a table generated by a form fill-out) in spite of changes to the page's content and layout. in this paper we propose a formal computation model for developing resilient data extraction techniques from semistructured sources. specifically we formalize the problem of data extraction as one of generating unambiguous extraction expressions, which are regular expressions with some additional structure. the problem of resilience is then formalized as one of generating a maximal extraction expression of this kind. we present characterization theorems for maximal extraction expressions, complexity results for testing them, and algorithms for synthesizing them. hasan davulcu guizhen yang michael kifer i. v. ramakrishnan non-deterministic queue operations hector garcia-molina kenneth salem friend21 project: a construction of 21st century human interface hajime nonogaki hirotada ueda spiritual life and information technology michael j. muller ellen christiansen bonnie nardi susan dray enhanced nearest neighbour search on the r-tree multimedia databases usually deal with huge amounts of data and it is necessary to have an indexing structure such that efficient retrieval of data can be provided. the r-tree, with its variations, is a commonly cited indexing method. in this paper we propose an improved nearest neighbor search algorithm on the r-tree and its variants. the improvement lies in the removal of two heuristics that have been used in previous r*-tree work, which we prove cannot improve on the pruning power during a search. king lum cheung ada wai-chee fu behavioral situations and active database systems agnès front claudia roncancio jean-pierre giraudin user effort in query construction and interface selection this study was designed to examine user beliefs and behavior on the selection and use of search features and search interfaces. five weeks of user logs were taken from a user-targeted collection and surveys were administered immediately before and after this time period. survey results indicate a significant correlation between a user's level of effort and their perceived benefit from that effort. reported search feature use increased by more than 35% over the five weeks. this raises the question of how the behavior of an internet user changes over time. results from the log files were inconclusive but suggest a reluctance to use the advanced search interface. paul gerwe charles l.
viles methods & tools: constructive interaction and collaborative work: introducing a method for testing collaborative systems helge kahler finn kensing michael muller reflections john rheinfrank bill hefley is the web really different from everything else? ben shneiderman jakob nielsen scott butler michael levi frederick conrad methods and tools: a method for evaluating the communicability of user interfaces raquel o. prates clarisse s. de souza simone d. j. barbosa rough sets and information retrieval the theory of rough sets was introduced [pawlak82]. it allows us to classify objects into sets of equivalent members based on their attributes. we may then examine any combination of the same objects (or even their attributes) using the resultant classification. the theory has direct applications in the design and evaluation of classification schemes and the selection of discriminating attributes. pawlak's papers discuss its application in the domain of medical diagnostic systems. here we apply it to the design of information retrieval systems accessing collections of documents. advantages offered by the theory are: the implicit inclusion of boolean logic; term weighting; and the ability to rank retrieved documents. in the first section we describe the theory. this is derived from the work by [pawlak84, pawlak82] and includes only the most relevant aspects of the theory. in the second we apply it to information retrieval. specifically, we design the approximation space, search strategies as well as illustrate the application of relevance feedback to improve document indexing. following this in section three we compare the rough set formalism to the boolean, vector and fuzzy models of information retrieval. finally we present a small scale evaluation of rough sets which indicates its potential in information retrieval. p. das-gupta object-focused interaction in collaborative virtual environments this paper explores and evaluates the support for object-focused interaction provided by a desktop collaborative virtual environment. an experimental "design" task was conducted, and video recordings of the participants' activities facilitated an observational analysis of interaction in, and through, the virtual world. observations include: problems due to "fragmented" views of embodiments in relation to shared objects; participants compensating with spoken accounts of their actions; and difficulties in understanding others' perspectives. implications and proposals for the design of cves drawn from these observations are: the use of semidistorted views to support peripheral awareness; more explicit or exaggerated representations of actions than are provided by pseudohumanoid avatars; and navigation techniques that are sensitive to the actions of others. the paper also presents some examples of the ways in which these proposals might be realized. jon hindmarsh mike fraser christian heath steve benford chris greenhalgh documentation writers as screen designers for interactive computer systems for interactive computer systems, most of the communication between users and the computer occurs via the written and graphic information shown on the video display terminal. let's call that information "the screen," whether it shows text or graphics. effective communication between the user and the system depends on how well the screen has been designed. one aspect of that design is how the screens, individually and as a whole, have been worded and organized for overall readability. karen van dusen nichole j. 
vick thwarting the web users' expectations helia vannucchi de almeida santos database management systems past and present the theme of this note is the following two points: 1. many of the ideas discussed in both data base and the other two areas were well known in the sixties---some go back to the fifties. 2. too much attention was placed on trivia. e. h. sibley participatory design of a portable torque-feedback device customer-driven design processes such as participatory design can be used to develop new presence, or virtual reality, technology. chemists worked together with computer company engineers to develop scenarios for how present technology could be used to support future molecular modeling work in drug design. these scenarios led to the development of a portable torque-feedback device which can be used with either workstation or virtual reality technology. this paper discusses both the experience with the participatory design process and the novel features of the portable torque-feedback device. michael good experiments on incorporating syntactic processing of user queries into a document retrieval strategy traditional information retrieval has relied on the extensive use of statistical parameters in the implementation of retrieval strategies. this paper sets out to investigate whether linguistic processes can be used as part of a document retrieval strategy. this is done by predefining a level of syntactic analysis of user queries only, to be used as part of the retrieval process. a large series of experiments on an experimental test collection are reported which use a parser for noun phrases as part of the retrieval strategy. the results obtained from the experiments do yield improvements in the level of retrieval effectiveness, which, given the crude linguistic process used and the fact that it was applied to queries and not to document texts, suggests that the approach of using linguistic processing in retrieval is valid. a. f. smeaton c. j. van rijsbergen text in context: writing online documentation for the workplace roger d. theodos an organic user interface for searching citation links jock d. mackinlay ramana rao stuart k. card transforming relational schemes with foreign keys into object-oriented anna rozeva galaxy of news and information landscapes (demonstration): dynamic visualization and access of information in a multidimensional space earl rennison monitoring database performance - a control issue carl s. guynes on the development of user interface systems for object-oriented database in this paper we present a user interface system which provides a complete environment for improved browsing in object-oriented database management systems (oodbmss). the system uses only the basic features of the object-oriented data model, so it can be associated with different oodbmss. the design of our system is based on features that, in our opinion, should be present in any database user interface system. we present and discuss such features. j. lopes de oliveira network communities: supporting distributed field organizations john c. tang nicole yankelovich searching on the web (poster abstract): two types of expertise christoph hoelscher gerhard strube the myth of semantic video retrieval nevenka dimitrova excerpts from: an information systems manifesto james martin talks with leonard kleinrock about the state of the art in data processing---how corporate and mis managers should manage the new technology, and how computer science education should relate to it.
james martin leonard kleinrock usability revisited frederick j. bethke aries/im: an efficient and high concurrency index management method using write-ahead logging this paper provides a comprehensive treatment of index management in transaction systems. we present a method, called aries/im (algorithm for recovery and isolation exploiting semantics for index management), for concurrency control and recovery of b+-trees. aries/im guarantees serializability and uses write-ahead logging for recovery. it supports very high concurrency and good performance by (1) treating as the lock of a key the same lock as the one on the corresponding record data in a data page (e.g., at the record level), (2) not acquiring, in the interest of permitting very high concurrency, commit duration locks on index pages even during index structure modification operations (smos) like page splits and page deletions, and (3) allowing retrievals, inserts, and deletes to go on concurrently with smos. during restart recovery, any necessary redos of index changes are always performed in a page-oriented fashion (i.e., without traversing the index tree) and, during normal processing and restart recovery, whenever possible undos are performed in a page-oriented fashion. aries/im permits different granularities of locking to be supported in a flexible manner. a subset of aries/im has been implemented in the os/2 extended edition database manager. since the locking ideas of aries/im have general applicability, some of them have also been implemented in sql/ds and the vm shared file system, even though those systems use the shadow-page technique for recovery. c. mohan frank levine when everything is searchable eric a. brewer teamrooms: network places for collaboration mark roseman saul greenberg the effects of bargaining orientation and communication medium on negotiations in the bilateral monopoly task: a comparison of decision room and computer conferencing communication media pairs of subjects with either a competitive or an integrative bargaining orientation completed the bilateral monopoly task in one of four communication media (text-only, text-plus-visual-access, audio-only, and audio-plus-visual-access). as hypothesized, an integrative bargaining orientation and/or an audio mode of communication led to a higher joint outcome. in addition, visual access resulted in higher joint outcomes for subjects with integrative bargaining orientations, but lower joint outcomes for those with competitive orientations. the support for negotiation offered by decision room and computer conferencing technologies is compared based on the efficiency and richness of the communication media available in each. j. sheffield classifying users: a hard look at some controversial issues it has become a common recommendation to computer interface designers that they should "know the user" (e.g., rubinstein and hersh, 1984). this panel discussion will examine the issues that arise from this advice. k. potosnak p. j. hayes m. b. rosson m. l. schneider j. a. whiteside a framework for effective retrieval the aim of an effective retrieval system is to yield high recall and precision (retrieval effectiveness). the nonbinary independence model, which takes into consideration the number of occurrences of terms in documents, is introduced. it is shown to be optimal under the assumption that terms are independent. it is verified by experiments to yield significant improvement over the binary independence model.
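as an illustration only (not the authors' estimation procedure, which the abstract describes below), a nonbinary independence ranking can be sketched as a sum of per-term log-likelihood ratios driven by within-document term frequency; the probability tables here are hypothetical:

    import math

    # hypothetical probabilities of observing a given term frequency in
    # relevant vs. non-relevant documents (these would be estimated from data).
    p_rel    = {"database": {0: 0.2, 1: 0.5, 2: 0.3}, "design": {0: 0.5, 1: 0.4, 2: 0.1}}
    p_nonrel = {"database": {0: 0.7, 1: 0.2, 2: 0.1}, "design": {0: 0.8, 1: 0.15, 2: 0.05}}

    def score(doc_tf):
        """rank value of a document's term-frequency vector, assuming terms
        occur independently (a nonbinary independence sketch)."""
        s = 0.0
        for term, tf in doc_tf.items():
            if term in p_rel:
                s += math.log(p_rel[term].get(tf, 1e-9) / p_nonrel[term].get(tf, 1e-9))
        return s

    docs = {"d1": {"database": 2, "design": 1}, "d2": {"database": 0, "design": 2}}
    print(sorted(docs, key=lambda d: score(docs[d]), reverse=True))   # ['d1', 'd2']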
the nonbinary model is extended to normalized vectors and is applicable to more general queries. various ways to alleviate the consequences of the term independence assumption are discussed. estimation of parameters required for the nonbinary independence model is provided, taking into consideration that a term may have different meanings. c. t. yu w. meng s. park reversing the charges frank montaniz george v. kissel index organization for multimedia database systems alistair moffat justin zobel archival perspectives on the emerging digital library helen r. tibbo helping users navigate in multimedia documents: the affective domain marcia peoples halio a language and a physical organization technique for summary tables gultekin ozsoyoglu z. meral ozsoyoglu francisco mata interface issues in text based chat rooms brian j. roddy hernan epelman-wang pipelining in multi-query optimization database systems frequently have to execute a set of related queries, which share several common subexpressions. multi-query optimization exploits this by finding evaluation plans that share common results. current approaches to multi-query optimization assume that common subexpressions are materialized. significant performance benefits can be had if common subexpressions are pipelined to their uses, without being materialized. however, plans with pipelining may not always be realizable with limited buffer space, as we show. we present a general model for schedules with pipelining, and present a necessary and sufficient condition for determining validity of a schedule under our model. we show that finding a valid schedule with minimum cost is _np_-hard. we present a greedy heuristic for finding good schedules. finally, we present a performance study that shows the benefit of our algorithms on batches of queries from the tpcd benchmark. nilesh n. dalvi sumit k. sanghai prasan roy s. sudarshan testing satisfaction of functional dependencies peter honeyman a theory of intersection anomalies in relational database schemes the desirability of acyclic database schemes is well argued in [8] and [13]. for schemas described by multivalued dependencies, acyclicity means that the dependencies do not split each other's left-hand sides and do not form intersection anomalies. in a recent work [4] it is argued that real-world database schemes always meet the former requirement, and in [5] it is shown that any given real-world scheme can be made to satisfy also the latter requirement, after being properly extended. however, the method of elimination of intersection anomalies proposed in [5] is intrinsically nondeterministic---an undesirable property for a design tool. in the present work it is shown that this nondeterminism does not, however, affect the final result of the design process. in addition, we present an efficient deterministic algorithm, which is equivalent to the nondeterministic process of [5]. along the way a study of intersection anomalies, which is interesting in its own right, is performed. catriel beeri michael kifer the extension of data abstraction to database management the long-term goal of the user software engineering (use) project at the university of california, san francisco, is to provide an integrated homogeneous programming environment for the design and development of interactive information systems.
realization of this goal involves the development of new software tools, their integration with existing tools, and the creation of an information system development methodology in which these tools are systematically used [1,2]. the successful construction of interactive information systems requires the utilization of principles of user-centered design [3,4,5], combined with features traditionally associated with the separate areas of programming languages, operating systems, and data base management [6]. it has become increasingly clear that the key to being able to provide such a unified view lies in providing a unified view of data [7]. the potential benefits of such a unification are considerable, including: 1) conceptual simplification of the system structure permitting, for example, joint design of data structures and data bases; 2) the elimination of duplication or inconsistencies among diverse software components; and 3) the ability to achieve greater reliability in systems because of reduced dependence upon multiple software systems. anthony i. wasserman culture vultures: considering culture and communication in virtual environments elizabeth f. churchill sara bly semantically indexed hypermedia: linking information disciplines douglas tudhope daniel cunliffe chasing constrained tuple-generating dependencies michael j. maher divesh srivastava introduction to the electronic symposium on computer-supported cooperative work computer-supported cooperative work (cscw) holds great importance and promise for modern society. this paper provides an overview of seventeen papers comprising a symposium on cscw. the overview also discusses some relationships among the contributions made by each paper, and places those contributions into a larger context by identifying some of the key challenges faced by computer science researchers who aim to help us work effectively as teams mediated through networks of computers. the paper also describes why the promise of cscw holds particular salience for the u.s. military. in the context of a military setting, the paper describes five particular challenges for cscw researchers. while most of these challenges might seem specific to military environments, many others probably already face similar challenges, or soon will, when attempting to collaborate through networks of computers. to support this claim, the paper includes a military scenario that might hit fairly close to home for many, and certainly for civilian emergency response personnel. after discussing the military needs for collaboration technology, the paper briefly outlines the motivation for a recent darpa research program along these lines. that program, called intelligent collaboration and visualization, sponsored the work reported in this symposium. kevin l. mills hierarchical indexing and document matching in bow bow is an on-line bibliographical repository based on a hierarchical concept index to which entries are linked. searching in the repository should therefore return matching topics from the hierarchy, rather than just a list of entries. likewise, when new entries are inserted, a search for relevant topics to which they should be linked is required. we develop a vector-based algorithm that creates keyword vectors for the set of competing topics at each node in the hierarchy, and show how its performance improves when domain-specific features are added (such as special handling of topic titles and author names).
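a minimal sketch of that vector-based idea (the tiny hierarchy, vectors, and weighting here are hypothetical, not bow's actual features) is to descend the topic tree by choosing, at each node, the child whose keyword vector is most similar to the entry:

    import math

    def cosine(u, v):
        dot = sum(u.get(t, 0) * w for t, w in v.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    # hypothetical concept hierarchy: node -> {child topic: keyword vector}
    hierarchy = {
        "root": {"databases": {"query": 2, "index": 1}, "retrieval": {"ranking": 2, "query": 1}},
        "databases": {"transactions": {"locking": 2}, "indexing": {"index": 2, "tree": 1}},
    }

    def classify(entry_vec, node="root"):
        """walk down the hierarchy, picking the most similar child at each level."""
        path = []
        while node in hierarchy:
            node = max(hierarchy[node], key=lambda c: cosine(entry_vec, hierarchy[node][c]))
            path.append(node)
        return path

    print(classify({"index": 3, "tree": 1}))   # ['databases', 'indexing']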
the results of a 7-fold cross validation on a corpus of some 3,500 entries with a 5-level index are hit ratios in the range of 89-95%, and most of the misclassifications are indeed ambiguous to begin with. maayan geffet dror g. feitelson greenstone: open-source dl software ian h. witten david bainbridge stefan boddie integrating information by outerjoins and full disjunctions (extended abstract) anand rajaraman jeffrey d. ullman signature file generation techniques for query processing in object-oriented databases hwan-seung young sukho lee individual performance in distributed design groups: an empirical study manju k. ahuja kathleen carley dennis f. galletta using multiversioning to improve performance without loss of consistency (abstract only) abstract oracle first implemented multiversioning, which was called read consistency, in version 4 (ca. 1983) of the kernel. it supported the old master/new master programming paradigm for both application builders and query-processor implementors while avoiding the reader/writer serialization that results from using automatic share locks. the resulting transaction consistency model was arcane, not exactly serializable, but pragmatic in actual use. in this session, mr. bamford will cover read consistency from a number of viewpoints. first, the model will be described and a little detail about oracle's current implementation will be given. next, the consequences of having read consistency as part of the database's core architecture will be discussed. in particular, the effects on the designs for row locking, query processing, distributed query, distributed cache coherence, referential integrity, and set updates will be described. this will be followed by a comparison of oracle's read consistency model to other common concurrency models, looking at cost and throughput under a variety of workloads. finally, mr. bamford will propose how oracle intends to carry read consistency into the future as support is added for complex objects, distributed computations, and long transactions. roger bamford personal ontologies for web navigation jason chaffee susan gauch purpose and usability of digital libraries a preliminary study was conducted to help understand the purpose of digital libraries (dls) and to investigate whether meaningful results could be obtained from small user studies of digital libraries. results stress the importance of mental models, and of "traditional" library support. yin leng theng norliza mohd-nasir harold thimbleby the failure and recovery problem for replicated databases a replicated database is a distributed database in which some data items are stored redundantly at multiple sites. the main goal is to improve system reliability. by storing critical data at multiple sites, the system can operate even though some sites have failed. however, few distributed database systems support replicated data, because it is difficult to manage as sites fail and recover. a replicated data algorithm has two parts. one is a discipline for reading and writing data item copies. the other is a concurrency control algorithm for synchronizing those operations. the read-write discipline ensures that if one transaction writes logical data item x, and another transaction reads or writes x, there is some physical manifestation of that logical conflict. the concurrency control algorithm synchronizes physical conflicts; it knows nothing about logical conflicts.
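as a concrete illustration of one read-write discipline over copies, here is a rough sketch in the style of gifford's quorum consensus, which the abstract goes on to analyze below; the class, field names, and parameters are a hypothetical sketch, not the paper's formulation:

    import random

    # gifford-style quorum discipline over n copies: any read quorum overlaps
    # any write quorum because r + w > n, so a read always sees the most
    # recently written version. (hypothetical sketch, not the paper's theory.)
    class ReplicatedItem:
        def __init__(self, n=5, r=3, w=3):
            assert r + w > n
            self.copies = [(0, None)] * n          # (version, value) per copy
            self.n, self.r, self.w = n, r, w

        def read(self):
            quorum = random.sample(range(self.n), self.r)
            return max(self.copies[i] for i in quorum)[1]   # value with highest version

        def write(self, value):
            quorum = random.sample(range(self.n), self.w)
            version = max(self.copies[i][0] for i in quorum) + 1
            for i in quorum:
                self.copies[i] = (version, value)

    x = ReplicatedItem()
    x.write("a"); x.write("b")
    print(x.read())   # 'b'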
in a correct replicated data algorithm, the physical manifestation of conflicts must be strong enough so that synchronizing physical conflicts is sufficient for correctness. this paper presents a theory for proving the correctness of algorithms that manage replicated data. the theory is an extension of serializability theory. we apply it to three replicated data algorithms: gifford's "quorum consensus" algorithm, eager and sevcik's "missing writes" algorithm, and computer corporation of america's "available copies" algorithm. philip a. bernstein nathan goodman on the contributions of different empirical data in usability testing many sources of empirical data can be used to evaluate an interface (e.g., time to learn, time to perform benchmark tasks, number of errors on benchmark tasks, answers on questionnaires, comments made in verbal protocols). this paper examines the relative contributions of both quantitative and qualitative data gathered during a usability study. for each usability problem uncovered by this study, we trace each contributing piece of evidence back to its empirical source. for this usability study, the verbal protocol provided the sole source of evidence for more than one third of the most severe problems and more than two thirds of the less severe problems. thus, although the verbal protocol provided the bulk of the evidence, other sources of data contributed disproportionately to the more critical problems. this work suggests that further research is required to determine the relative value of different forms of empirical evidence. maria r. ebling bonnie e. john generalized fitts' law model builder r. william soukoreff i. scott mackenzie escalations in workflow management systems e. panagos m. rabinovich partitioning digital worlds: focal and peripheral awareness in multiple monitor use software today does not help us partition our digital worlds effectively. we must organize them ourselves. this field study of users of multiple monitors examines how people with a lot of display space arrange information. second monitors are generally used for secondary activities related to principal tasks, for peripheral awareness of information that is not the main focus, and for easy access to resources. a second monitor improves efficiency in ways that are difficult to measure yet can have substantial subjective benefit. the study concludes with illustrations of shortcomings of today's systems and applications: the way we work could be improved at relatively low cost. jonathan grudin mmm: the multi-device multi-user multi-editor eric a. bier steve freeman ken pier lots of data, lots of evaluation - lots of findings? saila ovaska frame-axis model for automatic information organizing and spatial navigation in taxonomic reasoning tasks, such as scientific research or decision making, people gain insight and find new ideas through analysis of large numbers of factual data or material documents, which are generally disorganized and unstructured. hypermedia technology provides effective means of organizing and browsing information of this nature. however, for large amounts of information, the conventional node-link model makes linking and browsing operations complicated because relationships have to be represented as binary relations. in this paper, we propose a hypermedia data model called the frame-axis model, which represents relationships between information as n-ary relations on a mapped space.
also, the automatic information organizing mechanism based on this data model and the hypercharts browsing interface, which employs spatial layout, are provided. finally, we show some browsing examples on our working prototype system, castingnet. yoshihiro masuda yasuhiro ishitobi manabu ueda a parallel data base machine for query translation in a distributed database system a special purpose data base machine (dbm) designed to translate queries between data models is examined. the dbm will provide a means of direct communication between different dbmss in a distributed database system. the design utilizes the concept of parallelism to handle simultaneous queries and to improve the performance of the translation with respect to cost and time. the components of the dbm are described and an example is provided to illustrate the stages of the query translation utilizing the proposed dbm. m. mehdi owrang o. massoud omidvar a progress report from the sigchi committee on publications/communications peter g. polson the limits of expert performance using hierarchic marking menus a marking menu allows a user to perform a menu selection by either popping up a radial (or pie) menu, or by making a straight mark in the direction of the desired menu item without popping up the menu. a hierarchic marking menu uses hierarchic radial menus and "zig-zag" marks to select from the hierarchy. this paper experimentally investigates the bounds on how many items can be in each level, and how deep the hierarchy can be, before using a marking to select an item becomes too slow or prone to errors. gordon kurtenbach william buxton information retrieval library (irlib) (demonstration abstract) carolyn schmidt donna harman interactive visualization of video metadata much current research on digital libraries focuses on named entity extraction and transformation into structured information. examples include entities like events, people, and places, and attributes like birth date or latitude. this video demonstration illustrates the potential for finding relationships among entities extracted from 50,000 news segments from cmu's informedia digital video library. a visual query language is used to specify relationships among entities. data populate the query structure, which becomes an interface for exploration that gives continuous feedback in the form of visualizations of summary statistics. the target user is a data analyst familiar with the domain from which the entities come, but not a computer scientist. mark derthick an experimental evaluation of video support for shared work-space interaction mark apperley masood masoodian considerations in the design of office dbms's "office automation" has become a hot topic in recent years. there is considerable interest in developing database management systems (dbms's) specifically for office applications. before jumping on the bandwagon of developing office dbms's the following factors should be considered: peter pin-shan chen scratchpad: mechanisms for better navigation in directed web searching dale newfield bhupinder singh sethi kathy ryall hierarchical conferencing architectures for inter-group multimedia collaboration harrick m. vin p. venkat rangan srinivas ramanathan spatial join selectivity using power laws we discovered a surprising law governing the spatial join selectivity across two sets of points. an example of such a spatial join is "_find the libraries that are within 10 miles of schools_".
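purely as an illustration of counting such qualifying pairs (a brute-force sketch on synthetic points, not the paper's bops estimator described in the continuation below), the pair count at several distances and a log-log slope can be computed directly:

    import math, random

    random.seed(0)
    libraries = [(random.random(), random.random()) for _ in range(300)]
    schools   = [(random.random(), random.random()) for _ in range(300)]

    def pair_count(a, b, eps):
        """number of (p, q) pairs with euclidean distance <= eps (brute force)."""
        return sum(1 for p in a for q in b if math.dist(p, q) <= eps)

    # fit log(count) against log(eps) by least squares to estimate the exponent
    epsilons = [0.02, 0.04, 0.08, 0.16]
    xs = [math.log(e) for e in epsilons]
    ys = [math.log(pair_count(libraries, schools, e)) for e in epsilons]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    print(round(slope, 2))   # roughly 2 for uniformly scattered 2-d points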
our law dictates that the number of such qualifying pairs follows a power law, whose exponent we call "pair-count exponent" (pc). we show that this law also holds for self-spatial-joins ("_find schools within 5 miles of other schools_") in addition to the general case that the two point-sets are distinct. our law holds for many real datasets, including diverse environments (geographic datasets, feature vectors from biology data, galaxy data from astronomy). in addition, we introduce the concept of the box-occupancy-product-sum (bops) plot, and we show that it can compute the pair-count exponent in a timely manner, reducing the run time by orders of magnitude, from quadratic to linear. due to the pair-count exponent and our analysis (law 1), we can achieve accurate selectivity estimates in constant time (o(1)) without the need for sampling or other expensive operations. the relative error in selectivity is about 30% with our fast bops method, and even better (about 10%), if we use the slower, quadratic method. christos faloutsos bernhard seeger agma traina caetano traina the role of kinesthetic reference frames in two-handed input performance we present experimental work which explores how the match (or mismatch) between the input space of the hands and the output space of a graphical display influences two-handed input performance. during interaction with computers, a direct correspondence between the input and output spaces is often lacking. not only are the hands disjoint from the display space, but the reference frames of the hands may in fact be disjoint from one another if two separate input devices (e.g. two mice) are used for two-handed input. in general, we refer to the workspace and origin within which the hands operate as kinesthetic reference frames. our goal is to better understand how an interface designer's choice of kinesthetic reference frames influences a user's ability to coordinate two-handed movements, and to explore how the answer to this question may depend on the availability of visual feedback. understanding this issue has implications for the design of two-handed interaction techniques and input devices, as well as for the reference principle of guiard's kinematic chain model of human bimanual action. our results suggest that the guiard reference principle is robust with respect to variances in the kinesthetic reference frames as long as appropriate visual feedback is present. ravin balakrishnan ken hinckley an investigation of the roles of individual differences and user interface on database usability this research seeks to understand to what extent leveraging the graphical user interface's ability to convey spatial information can improve a user's ability to write effective database queries. this capability is believed to be especially important when nontechnical individuals, with diverse backgrounds and cognitive abilities, are expected to interact directly with these systems in the query formulation process.this study makes use of recent developments in graphical user interface technology to manipulate the level of spatial visualization support provided by the interface. a laboratory experiment was conducted to explore the influence of interface style and the spatial visualization ability of the user on the performance of the query development process. the application used in the experiment was a visual database query system developed for this study. one hundred sixty-two volunteers participated in the experiment. 
spatial visualization ability was assessed using a paper-folding test. the results indicate that both spatial visualization support of the system and spatial visualization ability of the user are important components of database usability. steven s. curl lorne olfman john w. satzinger type hierarchies and semantic data models the basic abstraction mechanisms of semantic data models - aggregation, classification and generalization - are considered the essential features to overcome the limitations of traditional data models in terms of semantic expressiveness. an important issue in database programming language design is which features should a programming language have to support the abstraction mechanisms of semantic data models. this paper shows that when using a strongly typed programming language, that language should support the notion of type hierarchies to achieve a full integration of semantic data models' abstraction mechanisms within the language's type system. the solution is presented using the language galileo, a strongly typed, interactive programming language specifically designed for database applications. antonio albano functions in databases we discuss the objectives of including functional dependencies in the definition of a relational database. we find two distinct objectives. the appearance of a dependency in the definition of a database indicates that the states of the database are to encode a function. a method, based on the chase, for calculating the function encoded by a particular state is given and compared to methods utilizing derivations of the dependency. a test for deciding whether the states of a schema may encode a nonempty function is presented as is a characterization of the class of schemas which are capable of encoding nonempty functions for all the dependencies in the definition. this class is the class of dependency preserving schemas as defined by beeri et al. and is strictly larger than the class presented by bernstein. the second objective of including a functional dependency in the definition of a database is that the dependency be capable of constraining the states of the database; that is, capable of uncovering input errors made by the users. we show that this capability is weaker than the first objective; thus, even dependencies whose functions are everywhere empty may still act as constraints. bounds on the requirements for a dependency to act as a constraint are derived. these results are founded on the notion of a weak instance for a database state, which replaces the universal relation instance assumption and is both intuitively and computationally more nearly acceptable. marc h. graham cautious transaction schedulers with admission control we propose a new class of schedulers, called cautious schedulers, that grant an input request if it will not necessitate any rollback in the future. in particular, we investigate cautious wrw-schedulers that output schedules in class wrw only. class wrw consists of all schedules that are serializable, while preserving write-read and read-write conflicts, and is the largest polynomially recognizable subclass of serializable schedules currently known. it is shown in this paper, however, that cautious wrw-scheduling is, in general, np-complete. therefore, we introduce a special type (type 1r) of transaction, which consists of no more than one read step (an indivisible set of read operations) followed by multiple write steps.
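as a simplified stand-in for the cautious idea (this sketch tests plain conflict serializability rather than the paper's wrw class; the transactions and operations are hypothetical), a scheduler can grant an operation only if the precedence graph over the already-granted operations stays acyclic:

    # grant an operation only if the precedence (conflict) graph would remain
    # acyclic; a cycle would eventually force a rollback.
    def conflicts(op1, op2):
        (t1, a1, x1), (t2, a2, x2) = op1, op2
        return t1 != t2 and x1 == x2 and "w" in (a1, a2)

    def has_cycle(edges):
        def reachable(src, dst, seen=()):
            return src == dst or any(reachable(v, dst, seen + (src,))
                                     for u, v in edges if u == src and v not in seen)
        return any(reachable(v, u) for u, v in edges)

    def cautious_grant(granted, op):
        ops = granted + [op]
        edges = {(a[0], b[0]) for i, a in enumerate(ops) for b in ops[i + 1:] if conflicts(a, b)}
        return not has_cycle(edges)

    # t1 read x, then t2 wrote x; letting t1 now write x would close a cycle
    granted = [("t1", "r", "x"), ("t2", "w", "x")]
    print(cautious_grant(granted, ("t1", "w", "x")))   # False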
it is shown that cautious wrw-scheduling can be performed efficiently if all transactions are of type 1r and if admission control can be exercised. admission control rejects a transaction unless its first request is immediately grantable. naoki katoh toshihide ibaraki tiko kameda uist'007 (panel): where will we be ten years from now? robert j. k. jacob steven k. feiner james d. foley jock d. mackinlay dan r. olsen human performance using computer input devices in the preferred and non-preferred hands paul kabbash i. scott mackenzie william buxton evaluation of signature files as set access facilities in oodbs object-oriented database systems (oodbs) need efficient support for manipulation of complex objects. in particular, support of queries involving evaluations of set predicates is often required in handling complex objects. in this paper, we propose a scheme to apply signature file techniques, which were originally invented for text retrieval, to the support of set value accesses, and quantitatively evaluate their potential capabilities. two signature file organizations, the sequential signature file and the bit-sliced signature file, are considered and their performance is compared with that of the nested index for queries involving the set inclusion operator. we develop a detailed cost model and present analytical results clarifying their retrieval, storage, and update costs. our analysis shows that the bit-sliced signature file is a very promising set access facility in oodbs. yoshiharu ishikawa hiroyuki kitagawa nobuo ohbo secure interoperability of trusted database management systems bhavani thuraisingham merging multiple perspectives in groupware use: intra- and intergroup conventions gloria mark coupling field studies with laboratory experiments for the evaluation of computer languages jon a. turner matthias jarke edward a. stohr yannis vassiliou norman h. white the onion technique: indexing for linear optimization queries _this paper describes the onion technique, a special indexing structure for linear optimization queries. linear optimization queries ask for top-n records subject to the maximization or minimization of a linearly weighted sum of record attribute values. such queries appear in many applications employing linear models and are an effective way to summarize representative cases, such as the top-50 ranked colleges. the onion indexing is based on a geometric property of the convex hull, which guarantees that the optimal value can always be found at one or more of its vertices. the onion indexing makes use of this property to construct convex hulls in layers with outer layers enclosing inner layers geometrically. a data record is indexed by its layer number or equivalently its depth in the layered convex hull. queries with linear weightings issued at run time are evaluated from the outermost layer inwards. we show experimentally that the onion indexing achieves orders of magnitude speedup against sequential linear scan when n is small compared to the cardinality of the set. the onion technique also enables progressive retrieval, which processes and returns ranked results in a progressive manner. furthermore, the proposed indexing can be extended into a hierarchical organization of data to accommodate both global and local queries._ yuan-chi chang lawrence bergman vittorio castelli chung-sheng li ming-ling lo john r. smith view maintenance in a warehousing environment yue zhuge hector garcia-molina joachim hammer jennifer widom linear vs.
order constraint queries over rational databases (extended abstract) alexei p. stolboushkin michael a. taitslin visual interaction design special interest area: notes from interchi maria g. wadlow interactive systems for supporting the emergence of concepts and ideas: a chi 97 workshop ernest edmonds thomas moran ellen do when worlds collide: reconciling the research, marketplace, and application views of hypertext robert glushko david gunning ken kershner catherine marshall louis reynolds minimum-drift digital video down-conversion this paper presents a new technique for decoding a full-resolution video bitstream at low memory cost and displaying the signal at a lower resolution. existing techniques solve the problem by storing the down-converted blocks into memory instead of the full-resolution blocks. while the memory is reduced, these techniques introduce drift errors because the decoder does not have the same pixels as the encoder in performing motion-compensated prediction. the approach proposed here alleviates the problem by tracking the drift at the decoder. it improves the video quality without any increase in decoder complexity. the effectiveness of the approach is evaluated using both objective and subjective tests. this minimum-drift approach is very simple to implement and can also be applied for memory reduction of a full-resolution hdtv decoder. osama alshaykh homer chen towards an information logic 'probability is expectation founded upon partial knowledge.' (boole, 1854) information retrieval based on stored-program electronic computers has been an active area of research since the time these machines were invented. it is therefore somewhat surprising that even now no formal computational model for ir exists. there is no well-defined logic to describe information retrieval, and there is no proof or model theory to talk about the truths of ir. this paper argues that much of the research work in the past has been steps in the direction of a logic for ir. these steps have been taken by developing formal models for information retrieval, but to date none of these is complete, nor could any claim to be a computational model for ir. to appreciate this development i shall present a picture of ir, describing bits of a puzzle which may fit together to point to a new framework within which a computational model or logic could be described. c. j. van rijsbergen the center for people and systems interaction (cpsi) jenny preece judith ramsay richard jacques alessandro barabesi database design: composing fully normalized tables from a rigorous dependency diagram a new simplified methodology for relational-database design overcomes the difficulties associated with nonloss decomposition. it states dependencies between data fields on a dependency list and then depicts them unambiguously as interlinked bubbles and double-bubbles on a dependency diagram. from the dependency diagram, a set of fully normalized tables is derived. henry c. smith on the design of relational database schemata the purpose of this paper is to present a new approach to the conceptual design of relational databases based on the complete relatability conditions (crcs). it is shown that current database design methodology based upon the elimination of anomalies is not adequate. in contradistinction, the crcs are shown to provide a powerful criterion for decomposition.
a decomposition algorithm is presented which (1) permits decomposition of complex relations into simple, well-defined primitives, (2) preserves all the original information, and (3) minimizes redundancy. the paper gives a complete derivation of the crcs, beginning with a unified treatment of functional and multivalued dependencies, and introduces the concept of elementary functional dependencies and multiple elementary multivalued dependencies. admissibility of covers and validation of results are also discussed, and it is shown how these concepts may be used to improve the design of 3nf schemata. finally, a convenient graphical representation is proposed, and several examples are described in detail to illustrate the method. carlo zaniolo michel a. melkanoff transferring database contents from a conventional information system to a corresponding existing object-oriented information system reda alhajj faruk polat multimedia database management systems arif ghafoor using statecharts to model hypertext yi zheng man-chi pong efficient mining of weighted association rules (war) wei wang jiong yang philip s. yu cpm-goms: an analysis method for tasks with parallel activities bonnie e. john wayne d. gray the future of very large data base systems (panel session) robert j. tufts tom lucas randy lee jon baake mediate: video as a first-order datatype steinar kristoffersen multi-vendor interoperability through sql access scott newmann lag as a determinant of human performance in interactive systems the sources of lag (the delay between input action and output response) and its effects on human performance are discussed. we measured the effects in a study of target acquisition using the classic fitts' law paradigm with the addition of four lag conditions. at the highest lag tested (225 ms), movement times and error rates increased by 64% and 214%, respectively, compared to the zero lag condition. we propose a model according to which lag should have a multiplicative effect on fitts' index of difficulty. the model accounts for 94% of the variance and is better than alternative models which propose only an additive effect for lag. the implications for the design of virtual reality systems are discussed. i. scott mackenzie colin ware toward a more precise definition of task-sufficient information: reconsidering bethke et alii on task orientation and usability barbara mirel jester 2.0 (demonstration abstract): collaborative filtering to retrieve jokes dhruv gupta mark digiovanni hiro narita ken goldberg a comparison of the use of text and multimedia interfaces to provide information to the elderly virginia z. ogozalek building a community website: sigchi.nl goes online peter boersma involving the "user": blurred roles and co-design over time karen ruhleder user-oriented smart-cache for the web: what you seek is what you get! standard database approaches to querying information on the web focus on the source(s) and provide a query language based on a given predefined organization (schema) of the data: this is the source-driven approach. however, can the web be seen as a standard database? there is no super-user in charge of monitoring the source(s) (the data is constantly updated), there is no homogeneous structure (thus no common explicit structure), the web itself never stops growing, etc. for these reasons, we believe that the source-driven standard approach is not suitable for the web.
as an alternative, we propose a user-oriented approach based on the idea that the schema is a posteriori expressed by the user's needs when asking a query. given a user query, akira (agentive knowledge-based information retrieval architecture) [6] extracts a target structure (structure expressed in the query) and uses standard information retrieval and filtering techniques to access potentially relevant documents. the user-oriented paradigm means that the structure through which the data is viewed does not come from the source but is extracted from the user query. when a user asks a query, the relevant information is retrieved from the web and stored as is in a cache. then the information is extracted from the raw data using computational linguistic techniques. the akira cache (smart-cache) represents these extracted layers of meta-information on top of the raw data. the smart-cache is an object-oriented database whose schema is inferred from the user's target structure. it is designed on demand through a library of concepts that can be assembled together to match concepts and meta-concepts required in the user's query. the smart-cache can be seen as a view of the web. to the best of our knowledge, akira is the only system that uses information retrieval and extraction integrated with database techniques to provide maximum flexibility to the user and offer transparent access to the content of web documents. zoe lacroix arnaud sahuguet raman chandrasekar a user-centered interface for querying distributed multimedia databases facilitating information retrieval in the vastly growing realm of digital media has become increasingly difficult. delaunaymm seeks to assist all users in finding relevant information through an interactive interface that supports pre- and post-query refinement, and a customizable multimedia information display. this project leverages the strengths of visual query languages with a resourceful framework to provide users with a single intuitive interface. the interface and its supporting framework are described in this paper. isabel f. cruz kimberly m. james finitely specifiable implicational dependency families richard hull research alerts jennifer bruer architectures for object data management jack orenstein aqua: query visualization for the ncstrl digital library laszlo kovacs andrás micsik balázs pataki archival storage for digital libraries arturo crespo hector garcia-molina a framework for shared applications with a replicated architecture thomas berlage andreas genau querying spatial databases via topological invariants luc segoufin victor vianu adaptive foveation of mpeg video t. h. reeves j. a. robinson evaluating user interfaces to information retrieval systems: a case study on user support giorgio brajnik stefano mizzaro carlo tasso user-centered methods for library interface design gary marchionini a decompositional approach to database constraint enforcement this paper presents a decompositional approach to integrity constraint enforcement for very large database systems. the central theme of the paper is the development of the constraint decomposition theorem, which can be used to decompose any complex constraint formula into a set of simpler constraint sub-formulas, each of which is a sufficient condition for the original constraint formula and can be enforced independently of the others.
by utilizing the decomposition theorem and the fact that both the cost of checking each sub-formula and the satisfiability of each sub-formula may differ from one another, we developed a new constraint enforcement strategy which is much more efficient than previous approaches. eugene y. sheng optical disks (panel session): effecting successful integration d'ellen bardes patrick call michael s. theis taroon c. kamdar the notification collage: posting information to public and personal displays the notification collage (nc) is a groupware system where distributed and co-located colleagues comprising a small community post media elements onto a real-time collaborative surface that all members can see. akin to collages of information found on public bulletin boards, nc randomly places incoming elements onto this surface. people can post assorted media: live video from desktop cameras; editable sticky notes; activity indicators; slide shows displaying a series of digital photos, snapshots of a person's digital desktop, and web page thumbnails. user experiences show that nc becomes a rich resource for awareness and collaboration. community members indicate their presence to others by posting live video. they regularly act on this information by engaging in text and video conversations. because all people can overhear conversations, these become active opportunities to join in. people also post items they believe will be interesting to others, such as desktop snapshots and vacation photos. finally, people use nc somewhat differently when it is displayed on a large public screen than when it appears on a personal computer. saul greenberg michael rounding the whips prototype for data warehouse creation and maintenance a data warehouse is a repository of integrated information from distributed, autonomous, and possibly heterogeneous sources. in effect, the warehouse stores one or more materialized views of the source data. the data is then readily available to user applications for querying and analysis. figure 1 shows the basic architecture of a warehouse: data is collected from each source, integrated with data from other sources, and stored at the warehouse. users then access the data directly from the warehouse. as suggested by figure 1, there are two major components in a warehouse system: the integration component, responsible for collecting and maintaining the materialized views, and the query and analysis component, responsible for fulfilling the information needs of specific end users. note that the two components are not independent. for example, which views the integration component materializes depends on the expected needs of end users. most current commercial warehousing systems (e.g., redbrick, sybase, arbor) focus on the query and analysis component, providing specialized index structures at the warehouse and extensive querying facilities for the end user. in the whips (warehousing information project at stanford) project, on the other hand, we focus on the integration component. in particular, we have developed an architecture and implemented a prototype for identifying data changes at heterogeneous sources, transforming them and summarizing them in accordance with warehouse specifications, and incrementally integrating them into the warehouse. we propose to demonstrate our prototype at sigmod, illustrating the main features of our architecture.
our architecture is modular and we designed it specifically to fulfill several important and interrelated goals: data sources and warehouse views can be added and removed dynamically; it is scalable by adding more internal modules; changes at the sources are detected automatically; the warehouse may be updated continuously as the sources change, without requiring "down time;" and the warehouse is always kept consistent with the source data by the integration algorithms. more details on these goals and how we achieve them are provided in [wgl+96]. wilburt j. labio yue zhuge janet l. wiener himanshu gupta hector garcia-molina jennifer widom structuring and querying personalized audio using ontologies latifur khan graphical table of contents xia lin first story detection in tdt is hard james allan victor lavrenko hubert jin solving queries by tree projections suppose a database schema d is extended to d' by adding new relation schemas, and states for d are extended to states for d' by applying joins and projections to existing relations. it is shown that the existence of a tree projection of d' with respect to d is equivalent to certain desirable properties that d' has with respect to d. these properties amount to the ability to compute efficiently the join of all relations in a state for d from an extension of this state over d'. the equivalence is proved for unrestricted (i.e., both finite and infinite) databases. if d' is obtained from d by adding a set of new relation schemas that form a tree schema, then the equivalence also holds for finite databases. in this case there is also a polynomial time algorithm for testing the existence of a tree projection of d' with respect to d. yehoshua sagiv oded shmueli organization of clustered files for consecutive retrieval this paper studies the problem of storing single-level and multilevel clustered files. necessary and sufficient conditions for a single-level clustered file to have the consecutive retrieval property (crp) are developed. a linear time algorithm to test the crp for a given clustered file and to identify the proper arrangement of objects, if crp exists, is presented. for the single-level clustered files that do not have crp, it is shown that the problem of identifying a storage organization with minimum redundancy is np-complete. consequently, an efficient heuristic algorithm to generate a good storage organization for such files is developed. furthermore, it is shown that, for certain types of multilevel clustered files, there exists a storage organization such that the objects in each cluster, for all clusters in each level of the clustering, appear in consecutive locations. j s. deogun v v. raghavan t k.w. tsou toward developing global is specialists thomas w. ferratt lucette fogel jpeg transcompressor and video networks tse-hua lan ahmed h. tewfik transient cooperating communities catherine wolf reflections: the accidental death of reviewing steven pemberton a deductive rules processor for sql databases rajshekhar sunderraman rajaraman sunderraman evaluation of remote backup algorithms for transaction processing systems a remote backup is a copy of a primary database maintained at a geographically separate location and is used to increase data availability. remote backup systems are typically log-based and can be classified into 2-safe and 1-safe, depending on whether transactions commit at both sites simultaneously or first commit at the primary and are later propagated to the backup.
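a rough sketch of the 2-safe versus 1-safe commit rules just described, assuming hypothetical primary and backup log interfaces; this is an illustration only, not the epoch algorithm evaluated below:

    # sketch only: 2-safe waits for the backup before acknowledging the commit;
    # 1-safe commits locally and ships log records to the backup later, so a
    # primary failure can lose the tail of recently committed transactions.
    class Backup:
        def __init__(self):
            self.log = []
        def apply(self, records):
            self.log.extend(records)

    class Primary:
        def __init__(self, backup):
            self.backup = backup
            self.log = []
            self.ship_queue = []              # records awaiting propagation (1-safe)

        def commit_2safe(self, records):
            self.log.extend(records)
            self.backup.apply(records)        # synchronous round trip to the backup
            return "committed at primary and backup"

        def commit_1safe(self, records):
            self.log.extend(records)
            self.ship_queue.extend(records)   # propagated asynchronously, e.g. in batches
            return "committed at primary; backup will catch up"

        def propagate(self):
            self.backup.apply(self.ship_queue)
            self.ship_queue = []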
we have built an experimental database system on which we evaluated the performance of the epoch algorithm, a 1-safe algorithm we have developed, and compared it with the 2-safe approach under various conditions. we also report on the use of multiple log streams to propagate information from the primary to the backup. christos a. polyzois hector garcia-molina parallel processing and trusted database management systems this paper applies parallel processing technology to database security technology and vice versa. we first describe the issues involved in incorporating multilevel security into parallel database management systems. in particular, we describe how multilevel security could be incorporated into the gamma architecture. then we describe the use of parallel architectures to perform trusted database management system functions. in particular, we describe security constraint processing in trusted database management systems and show how parallel processing could enhance the performance of this function. bhavani thuraisingham william ford bibliographic integration in digital document libraries (poster) atsuhiro takasu a visual retrieval environment for hypermedia information systems we present a graph-based object model that may be used as a uniform framework for direct manipulation of multimedia information. after an introduction motivating the need for abstraction and structuring mechanisms in hypermedia systems, we introduce the data model and the notion of perspective, a form of data abstraction that acts as a user interface to the system, providing control over the visibility of the objects and their properties. a perspective is defined to include an intension and an extension. the intension is defined in terms of a pattern, a subgraph of the schema graph, and the extension is the set of pattern-matching instances. perspectives, as well as database schema and instances, are graph structures that can be manipulated in various ways. the resulting uniform approach is well suited to a visual interface. a visual interface for complex information systems provides high semantic power, thus exploiting the semantic expressibility of the underlying data model, while maintaining ease of interaction with the system. in this way, we reach the goal of decreasing cognitive load on the user, with the additional advantage of always maintaining the same interaction style. we present a visual retrieval environment that effectively combines filtering, browsing, and navigation to provide an integrated view of the retrieval problem. design and implementation issues are outlined for more (multimedia object retrieval environment), a prototype system relying on the proposed model. the focus is on the main user interface functionalities, and actual interaction sessions are presented including schema creation, information loading, and information retrieval. dario lucarella antonella zanzi sustaining mentoring relationships on-line d. kevin o'neill louis m. gomez generating mixed-initiative hypertexts: a reactive approach berardina de carolis towards practical constraint databases (extended abstract) stephane grumbach jianwen su access methods for multiversion data we present an access method designed to provide a single integrated index structure for a versioned timestamped database with a non-deletion policy. historical data (superseded versions) is stored separately from current data. our access method is called the time-split b-tree.
it is an index structure based on malcolm easton's write-once b-tree. the write-once b-tree was developed for data stored entirely on a write-once read-many or worm optical disk. the time-split b-tree differs from the write-once b-tree in the following ways: current data must be stored on an erasable random-access device. historical data may be stored on any random-access device, including worms, erasable optical disks, and magnetic disks. the point is to use a faster and more expensive device for the current data and a slower, cheaper device for the historical data. the splitting policies have been changed to reduce redundancy in the structure---the option of pure key splits as in b+-trees and a choice of split times for time-based splits enable this performance enhancement. when data is migrated from the current to the historical database, it is consolidated and appended to the end of the historical database, allowing for high space utilization in worm disk sectors. david lomet betty salzberg visual information retrieval amarnath gupta ramesh jain office of the future: using the structure of the human communication system to build the office of the future (i) the basic concept of the office of the future is a natural outgrowth of the expansion of computers helping people in business do what they could not, or would not, do. the presentation will be confined to describing how computers will be used to provide a technological linkage between the three parts of the human communication process and to clearly communicate complex financial information to all levels of management. communication, or lack thereof, is one of the most serious problems constraining efficient business operations. the human communication system uses three basic input and output channels to pass information. these three channels---voice, the written word, and symbols---provide a complete, sensitive and effective process that permits humans to communicate one with another. the system consists of a sender sending a message over one or more of the channels at a time, and the receiver receiving the message through one or more of the channels. the channels can and most often do have "static" in them caused by either the sender, the receiver or the channel. the static is more prevalent in business because different professional cultures have different vocabularies and customs. the result of the static is three levels of communication problems in business organizations: irwin m. jarett lof: identifying density-based local outliers for many kdd applications, such as detecting criminal activities in e-commerce, finding the rare instances or the outliers can be more interesting than finding the common patterns. existing work in outlier detection regards being an outlier as a binary property. in this paper, we contend that for many scenarios, it is more meaningful to assign to each object a _degree_ of being an outlier. this degree is called the _local outlier factor_ (lof) of an object. it is _local_ in that the degree depends on how isolated the object is with respect to the surrounding neighborhood. we give a detailed formal analysis showing that lof enjoys many desirable properties. using real-world datasets, we demonstrate that lof can be used to find outliers which appear to be meaningful, but can otherwise not be identified with existing approaches. finally, a careful performance evaluation of our algorithm confirms that our approach of finding local outliers can be practical. markus m.
breunig hans-peter kriegel raymond t. ng jörg sander assistant for an information database michael fruchtl jurgen kreuziger michael beigl exploring novel ways of interaction in musical performance bert bongers towards a framework for integrating multilevel secure models and temporal data models within many organizations the number of databases containing classified or otherwise sensitive data is increasing rapidly. access to these databases must be restricted and controlled to limit the unauthorized disclosure or malicious modification of data contained in them. however, the conventional models of authorization that have been designed for database systems supporting the hierarchical, network and relational models of data do not provide adequate mechanisms to support controlled access of temporal objects and context-based temporal information. in this paper we extend the multilevel secure relational model to capture the functionality required of a temporal database, i.e. a database that supports some aspect of time, not counting user-defined time. in particular we assign class access to bitemporal timestamped attributes, and give explicit security classifications to temporal elements. niki pissinou kia makki e. k. park i/s attitudes: toward theoretical and definitional clarity dale goodhue beyond fitts' law: models for trajectory-based hci tasks johnny accot shumin zhai communication, action and history alan dix roberta mancini stefano levialdi open cscw systems henrik lewe volker barent collaborative customer services using synchronous web browser sharing makoto kobayashi masahide shinozaki takashi sakairi maroun touma shahrokh daijavad catherine wolf approximate ad-hoc query engine for simulation data in this paper, we describe aqsim, an ongoing effort to design and implement a system to manage terabytes of scientific simulation data. the goal of this project is to reduce data storage requirements and access times while permitting ad-hoc queries using statistical and mathematical models of the data. in order to facilitate data exchange between models based on different representations, we are evaluating using the asci common data model, which comprises several layers of increasing semantic complexity. to support queries over the spatial-temporal mesh-structured data, we are in the process of defining and implementing a grammar for meshsql. ghaleb abdulla chuck baldwin terence critchlow roy kamimura ida lozares ron musick nu ai tang byung s. lee robert snapp a process model and system for supporting collaborative work sunil k. sarin kenneth r. abbott dennis r. mccarthy efficient spatial data transmission in web-based gis this paper proposes a new method of efficient spatial data transmission in the client-side web-based gis (geographic information system) which handles large-size spatial geographic information on the internet. the basic idea is that, firstly, a large-size map is divided into several parts, where each part is called a "tile", according to appropriate division granularity. secondly, when a user requires a certain region in the map at the client side, the gis server only transmits spatial data in the tiles which overlap with the requested region. the received data are stored in the client's local machine for reuse. a method of tile division and tile-query processing is provided. compared with a traditional client-side web-based gis, a performance improvement is achieved by using the proposed method.
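a small sketch of the tile-division idea described in the preceding web-based gis abstract, assuming a fixed square tile size in map units; the names and caching policy are illustrative, not taken from the paper:

    # sketch only: divide the map into fixed-size tiles, compute which tiles
    # overlap the requested region, and transmit only the tiles that are not
    # already in the client-side cache.
    import math

    TILE_SIZE = 1000.0   # assumed width/height of one tile, in map units

    def tiles_for_region(xmin, ymin, xmax, ymax):
        c0, c1 = math.floor(xmin / TILE_SIZE), math.floor(xmax / TILE_SIZE)
        r0, r1 = math.floor(ymin / TILE_SIZE), math.floor(ymax / TILE_SIZE)
        return {(c, r) for c in range(c0, c1 + 1) for r in range(r0, r1 + 1)}

    def fetch_region(region, cache, server_fetch):
        needed = tiles_for_region(*region)
        for tile_id in needed - cache.keys():
            cache[tile_id] = server_fetch(tile_id)   # only these cross the network
        return [cache[tile_id] for tile_id in needed]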
zu-kuan wei young-hwan oh jae-dong lee jae-hong kim dong-sun park young-geol lee hae-young bae hci in the global knowledge-based economy: designing to support worker adaptation increasingly, people are being required to perform open-ended intellectual tasks that require discretionary decision making. these demands require a relatively unique approach to the design of computer-based support tools. a review of the characteristics associated with the global knowledge-based economy strongly suggests that there will be an increasing need for workers, managers, and organizations to adapt to change and novelty. this is equivalent to a call for designing computer tools that foster continuous learning. there are reasons to believe that the need to support adaptation and continuous learning will only increase. thus, in the new millennium hci should be concerned with explicitly designing for worker adaptation. the cognitive work analysis framework is briefly described as a potential programmatic approach to this practical design challenge. kim j. vicente paperbuttons: expanding a tangible user interface expanding the functionality of a successful system is always a challenge; the initial simplicity and ease-of-use is easily lost in the process. experience indicates that this problem is worsened in systems with tangible interfaces: while it might be relatively easy to suggest a single successful tangible interaction component, it is notoriously hard to preserve the success when expanding with more components or more manipulation using the same component. this paper describes our approach to creating and expanding tangible interfaces. the approach consists of adherence to a set of guidelines for tangible interfaces, derived from practical tangible design and general object-oriented design, and solicitation of user requirements for the particular interaction method in question. finally, the paper describes a prototype of paperbuttons built in response to these requirements and designed in accordance with the guidelines for tangible interfaces. elin rønby pedersen tomas sokoler les nelson the children's challenge: new technologies to support co-located and distributed collaboration allison druin steve benford amy bruckman kori inkpen shelia o'rouke criteria for effective groupware andrew f. monk jean scholtz bill buxton sarah bly david frohlich steve whitaker ripple joins for online aggregation we present a new family of join algorithms, called ripple joins, for online processing of multi-table aggregation queries in a relational database management system (dbms). such queries arise naturally in interactive exploratory decision-support applications. traditional offline join algorithms are designed to minimize the time to completion of the query. in contrast, ripple joins are designed to minimize the time until an acceptably precise estimate of the query result is available, as measured by the length of a confidence interval. ripple joins are adaptive, adjusting their behavior during processing in accordance with the statistical properties of the data. ripple joins also permit the user to dynamically trade off the two key performance factors of on-line aggregation: the time between successive updates of the running aggregate, and the amount by which the confidence-interval length decreases at each update.
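a toy sketch of the "square" ripple-join sampling pattern just described, reporting a running estimate of a count aggregate; this is an illustration over in-memory lists under assumed names, not the iterator-based postgres implementation discussed next:

    # sketch only: each step pulls one more tuple from each input and joins it
    # against all tuples of the other input seen so far, so the sampled region
    # grows as an expanding square; the running count is scaled up to the full
    # cross product, and the estimate converges to the exact answer.
    def ripple_join_count(r, s, pred):
        seen_r, seen_s, matches = [], [], 0
        for i in range(max(len(r), len(s))):
            if i < len(r):
                matches += sum(pred(r[i], u) for u in seen_s)
                seen_r.append(r[i])
            if i < len(s):
                matches += sum(pred(t, s[i]) for t in seen_r)
                seen_s.append(s[i])
            sampled = max(len(seen_r), 1) * max(len(seen_s), 1)
            yield matches * len(r) * len(s) / sampled   # running estimate

    # once both inputs are exhausted the estimate equals the exact join count:
    # list(ripple_join_count(r, s, lambda a, b: a == b))[-1]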
we show how ripple joins can be implemented in an existing dbms using iterators, and we give an overview of the methods used to compute confidence intervals and to adaptively optimize the ripple join "aspect-ratio" parameters. in experiments with an initial implementation of our algorithms in the postgres dbms, the time required to produce reasonably precise online estimates was up to two orders of magnitude smaller than the time required for the best offline join algorithms to produce exact answers. peter j. haas joseph m. hellerstein history-rich tools for social navigation alan wexelblat an algebraic approach to static analysis of active database rules rules in active database systems can be very difficult to program due to the unstructured and unpredictable nature of rule processing. we provide static analysis techniques for predicting whether a given rule set is guaranteed to terminate and whether rule execution is confluent (guaranteed to have a unique final state). our methods are based on previous techniques for analyzing rules in active database systems. we improve considerably on the previous techniques by providing analysis criteria that are much less conservative: our methods often determine that a rule set will terminate or is confluent when previous methods could not make this determination. our improved analysis is based on a "propagation" algorithm, which uses an extended relational algebra to accurately determine when the action of one rule can affect the condition of another, and determine when rule actions commute. we consider both condition-action rules and event-condition-action rules, making our approach widely applicable to relational active database rule languages and to the trigger language in the sql:1999 standard. elena baralis jennifer widom modern classical document indexing: a linguistic contribution to knowledge-based ir bas van bakel critical success factors of decision support systems: an experimental study ting peng liang use of mouse buttons two experimental tasks were designed to test use of multiple-button mice. in the first, number of errors made and time to complete subtasks were measured as subjects attempted to depress one, two, or three buttons under three sets of conditions. in the second, subjects were asked to indicate true or false either by pressing one of two different buttons or by clicking a single button one or two times. people tended to be faster and more accurate using different buttons than different numbers of clicks. lynne a. price carlos a. cordova if vr is so great, why are vr entertainment systems so poor? roy w. latham a software system for automatic albuming of consumer pictures alexander c. loui mark d. wood do-i-care: a collaborative web agent brian starr mark s. ackerman michael pazzani action assignable graphics: a flexible human-computer interface design process matthew d. russell howard xu lingtao wang the structured information manager (sim) ron sacks-davis alan kent hyped-media to hyper-media: toward theoretical foundations of design, use and evaluation n. hari narayanan sigchi: the later years steven pemberton interactive simulation in a multi-person virtual world a multi-user virtual world has been implemented combining a flexible-object simulator with a multisensory user interface, including hand motion and gestures, speech input and output, sound output, and 3-d stereoscopic graphics with head-motion parallax.
the implementation is based on a distributed client/server architecture with a centralized dialogue manager. the simulator is inserted into the virtual world as a server. a discipline for writing interaction dialogues provides a clear conceptual hierarchy and the encapsulation of state. this hierarchy facilitates the creation of alternative interaction scenarios and a shared multiuser environment. christopher codella reza jalili lawrence koved j. bryan lewis daniel t. ling james s. lipscomb david a. rabenhorst chu p. wang alan norton paula sweeney greg turk flexibility and control for dynamic workflows in the worlds environment this paper presents a model and prototype implementation, called obligations, for handling flexible, dynamic changes to workflows. the model uses multiple inheritance and an overhead transparency metaphor to construct a network of activities. each 'sheet' holds portions of the network to be constructed. some of these sheets contain local modifications that are not shared among other similar activities, and others hold general specifications that all instances should follow, assuming that they have not been locally modified. when all the sheets are stacked together, they create a composite view of the network. individual sheets can be removed and replaced with newer, presumably compatible, sheets that change the network. this type of replacement can be encoded into surrogates which automatically carry out the replacements to keep the obligation up-to-date. the obligation system has a built-in error detection scheme that determines if network construction is invalid and, if so, disallows execution of the portions of the network that are in error. douglas p. bogia simon m. kaplan user needs assessment and evaluation (working session) nancy a. van house david levy barriers to the effective search and exploration of large document databases philip j. smith apple computer's human interface group: advanced technology group kathleen m. gomoll peirce: a relational dbms for small systems peirce is an interactive relational database management system intended for mini and microcomputers, with an extensible command facility, query, based on codd's relational algebra and supporting a clerk-oriented interface. peirce contains three major independent on-line modules: dba, query, and csu. dba provides the data definition and data dictionary facilities and is discussed in section 2. query provides both a relational algebra data manipulation language and a clerk-oriented interface. query is also discussed in section 2. command sequences are discussed in section 3. we discuss the virtual file facilities of peirce (which include the updating of virtual files) in section 4. the underlying storage structure, namely entry order sequence with secondary indexes, was chosen primarily with simplicity of recovery in mind and is discussed in section 5. in section 6 we discuss the control commands if and for, which help to conveniently avoid having a host language facility. we conclude in section 7 by describing further work planned on peirce. martin k. solomon carl d. kirshen the information transfer specialist in successful implementation of decision support systems gemma m welsch model management and structured modeling: the role of an information resource dictionary system models have historically occupied an ambiguous position within organizations.
management acceptance of management science and operations research models for decision-making has lagged far behind technical advances in these areas. structured modeling has emerged as a unifying approach to the modeling process with potential to reduce this ambiguity. structured modeling is primarily oriented to the individual, however. a way of incorporating structured modeling into the organizational framework via existing information resource channels is discussed. a relational model is presented for an information resource dictionary system (irds). this irds model is then extended to accommodate representation of structured models. this extension of the irds can answer queries about structured modeling as well as model instances. the concept of an active irds is introduced and an example presented of how an active irds can be linked with an optimization algorithm. the conclusion is that the irds is a suitable vehicle for incorporating model management and structured modeling as part of an organization's information resource management environment. d. r. dolk the use of anaphoric resolution for document description in information retrieval this study investigated two hypotheses concerning the use of anaphors in information retrieval. the first hypothesis, that anaphors tend to refer to integral concepts rather than to peripheral concepts, was well supported. two samples of documents, one in psychology and the other in computer science, were examined by subject experts who judged the centrality of phrases which were referred to anaphorically. the second hypothesis, that various term weighting schemes are affected differently by anaphoric resolution, was also well supported. it was found that schemes which incorporate document length into the calculations produce much smaller increases in term weights for terms occurring in anaphoric resolutions than do those which do not consider document length. it is concluded that although anaphoric resolution has potential for better representing the "aboutness" of a document, care must be taken in choosing both the anaphoric classes to be resolved and the term weighting schemes to be used in measuring a document's topicality. s. bonzi e. liddy physical design of a menu relational database system package (abstract only) this paper outlines the design, implementation, and capabilities of a menu-driven, transportable relational database package. after identifying the goals of the system, a top-down strategy, via data flow diagrams, was used to outline the physical design of the package. during the physical design phase, the detailed functions and design procedures of the ddl, dml, system support, and control routines were identified. the package is written in ansi fortran 66 and implemented on a decsystem-10. since ansi fortran 66 compilers are available for the majority of mini and micro computer systems, this system can be implemented on different systems with a minimum of conversion effort. asad khailany arnold gasper initial design and evaluation of an interface to hypermedia systems for blind users helen petrie sarah morley peter mcnally anne-marie o'neill dennis majoe view updates in relational databases with an independent scheme a view on a database is a mapping that provides a user or application with a suitable way of looking at the data. updates specified on a view have to be translated into updates on the underlying database.
we study the view update translation problem for a relational data model in which the base relations may contain (indexed) nulls. the representative instance is considered to be the correct representation of all data in the database; the class of views that is studied consists of total projections of the representative instance. only independent database schemes are considered, that is, schemes for which global consistency is implied by local consistency. a view update can be an insertion, a deletion, or a modification of a single view tuple. it is proven that the constant complement method of bancilhon and spyratos is too restrictive to be useful in this context. structural properties of extension joins are derived that are important for understanding views. on the basis of these properties, minimal algorithms for translating a single view-tuple update are given. rom langerak design: cultural probes bill gaver tony dunne elena pacenti workload models for dbms performance evaluation a hierarchy of workload models to support database management system performance measurement and evaluation studies is defined herein. the hierarchy is structured so that each successive layer represents database design, dbms functionality and workload characterization in progressively greater detail. the design of the hierarchy was influenced by the ansi/x3/sparc (3) model for dbms architecture, by senko's diam (20), by scheuermann's dbms model (19) and by sevcik's layered model for database system performance evaluation (21). the hierarchical layers clearly isolate dbms functionality from organizational activities and underlying machine characteristics. specific modeling techniques for deriving both database design and workload models at each level of the hierarchy are identified. the database design models form the basis for defining certain critical components of the workload models. the hierarchy of workload models facilitates a clear separation of the relevant performance measurement and evaluation issues and objectives based upon an increasingly detailed view of the total computing environment. figure 1 illustrates the major layers of the hierarchy. the remainder of this paper defines the workload models developed at each layer and the measurement parameters and performance metrics derived therein. evans j. adams a framework for providing consistent and recoverable agent-based access to heterogeneous mobile databases evaggelia pitoura bharat bhargava incremental and interactive sequence mining the discovery of frequent sequences in temporal databases is an important data mining problem. most current work assumes that the database is static, and a database update requires rediscovering all the patterns by scanning the entire old and new database. in this paper, we propose novel techniques for maintaining sequences in the presence of a) database updates, and b) user interaction (e.g. modifying mining parameters). this is a very challenging task, since such updates can invalidate existing sequences or introduce new ones. in both the above scenarios, we avoid re-executing the algorithm on the entire dataset, thereby reducing execution time. experimental results confirm that our approach results in execution time improvements of up to several orders of magnitude in practice. s. parthasarathy m. j. zaki m. ogihara s.
dwarkadas mining association rules with multiple minimum supports bing liu wynne hsu yiming ma web page design: implications of memory, structure and scent for information retrieval kevin larson mary czerwinski editorial steven pemberton on optimizing an sql-like nested query sql is a high-level nonprocedural data language which has received wide recognition in relational databases. one of the most interesting features of sql is the nesting of query blocks to an arbitrary depth. an sql-like query nested to an arbitrary depth is shown to be composed of five basic types of nesting. four of them have not been well understood and more work needs to be done to improve their execution efficiency. algorithms are developed that transform queries involving these basic types of nesting into semantically equivalent queries that are amenable to efficient processing by existing query-processing subsystems. these algorithms are then combined into a coherent strategy for processing a general nested query of arbitrary complexity. won kim a vision for management of complex models phillip a. bernstein alon y. halevy rachel a. pottinger argos: a display system for augmenting reality david drascic julius j. grodski paul milgram ken ruffo peter wong shumin zhai critical zones in desert fog: aids to multiscale navigation susanne jul george w. furnas data mining techniques jiawei han error-constrained count query evaluation in relational databases wen-chi hou gultekin ozsoyoglu erdogan dogdu efficient tools for power annotation of visual contents: a lexicographical approach youngchoon park projecting demand for electronic communications in automated offices stephen a. smith robert i. benjamin webcluster, a tool for mediated information access (demonstration abstract) gheorghe muresan david j. harper mourad mechkour constraint programming and database languages: a tutorial paris kanellakis an analysis of cardinality constraints in redundant relationships james dullea il-yeol song changing to stay itself helge kahler markus rohde packet radio under linux jeff tranter tracing the lineage of view data in a warehousing environment we consider the view data lineage problem in a warehousing environment: for a given data item in a materialized warehouse view, we want to identify the set of source data items that produced the view item. we formally define the lineage problem, develop lineage tracing algorithms for relational views with aggregation, and propose mechanisms for performing consistent lineage tracing in a multisource data warehousing environment. our result can form the basis of a tool that allows analysts to browse warehouse data, select view tuples of interest, and then "drill-through" to examine the exact source tuples that produced the view tuples of interest. yingwei cui jennifer widom janet l. wiener spatio-temporal conceptual models: data structures + space + time christine parent stefano spaccapietra esteban zimányi scalable data mining with model constraints minos garofalakis rajeev rastogi real world requirements for decision support - implications for rdbms sanju k. bansal resource section: web sites michele tepper the "virtual hospital" project g. franco u. gaggeri s. bisio e. minisci r. fonte a. barbieri general research issues in multimedia database systems hanan samet error propagation in distributed databases o. haase a.
henrich active mail - a framework for implementing groupware yaron goldberg marilyn safran ehud shapiro a homogeneous relational model and query languages for temporal databases in a temporal database, time values are associated with data items to indicate their periods of validity. we propose a model for temporal databases within the framework of the classical database theory. our model is realized as a temporal parameterization of static relations. we do not impose any restrictions upon the schemes of temporal relations. the classical concepts of normal forms and dependencies are easily extended to our model, allowing a suitable design for a database scheme. we present a relational algebra and a tuple calculus for our model and prove their equivalence. our data model is homogeneous in the sense that the periods of validity of all the attributes in a given tuple of a temporal relation are identical. we discuss how to relax the homogeneity requirement to extend the application domain of our approach. shashi k. gadia a computing research repository: why not solve the problems first? the computing research repository (corr) described by halpern is potentially a powerful tool for researchers in computing science. in its current form, however, shortcomings exist that restrict its value and that, in the long term, might strongly undermine its usefulness. important aspects that have insufficiently been taken care of are (1) the quality and consequently the reliability of the material stored, (2) the still restricted submission of material, which implies that other sources have to be consulted by researchers as well, (3) the still unsound financial basis of the project, and (4) the confusion that may easily arise when a preliminary version is stored in the corr, while a different final version is published in a journal. a. j. van loon systems training for the user executive this paper describes an approach to systems training for user executives who have little or no direct experience in the management information systems (mis) field. jack stone living web: supporting internet-based user-centered design jeffrey d. smith kenji takahashi eugene liang new paradigms for using computers ted selker estimating the relative usability of two interfaces: heuristic, formal, and empirical methods compared two alternative user interface designs were subjected to user testing to measure user performance in a database query task. user performance was also estimated heuristically in three different ways and by use of formal goms modelling. the estimated values for absolute user performance had very high variability, but estimates of the relative advantage of the fastest interface were less variable. choosing the fastest of the two designs would have a net present value more than 1,000 times the cost of getting the estimates. a software manager would make the correct choice every time in our case study if decisions were based on at least three independent estimates. user testing was 4.9 times as expensive as the cheapest heuristic method but provided better performance estimates. jakob nielsen victoria l. phillips a new normal form for nested relations we consider nested relations whose schemes are structured as trees, called scheme trees, and introduce a normal form for such relations, called the nested normal form. given a set of attributes u, and a set of multivalued dependencies (mvds) m over these attributes, we present an algorithm to obtain a nested normal form decomposition of u with respect to m.
such a decomposition has several desirable properties, such as explicitly representing a set of full and embedded mvds implied by m, and being a faithful and nonredundant representation of u. moreover, if the given set of mvds is conflict-free, then the nested normal form decomposition is also dependency-preserving. finally, we show that if m is conflict-free, then the set of root-to-leaf paths of scheme trees in the nested normal form decomposition is precisely the unique 4nf decomposition [9, 16] of u with respect to m. z. meral ozsoyoglu li-yan yuan cscw, groupware and workflow: experiences, state of art, and future trends jonathan grudin steven poltrock a taxonomy of user interface terminology mark h. chignell performance evaluation of input devices in trajectory-based tasks: an application of the steering law johnny accot shumin zhai natural interaction mark kucente information systems, knowledge work and the is professional: implications for human resources management paul j. ambrose arkalgud ramaprasad arun rai meeting the challenge: application of communication technologies to group interactions roger j. volkema fred niederman majic and desktopmajic conferencing system (video program) (abstract only) this video shows a multiparty videoconferencing system "majic" and a multiparty desktop conferencing system "desktopmajic". majic is composed of 2 video cameras, 2 video projectors, a one-way transparent screen, and a tilted workstation forming a desk. life-size video images of participants are projected without boundaries onto a large curved screen as if users in remote locations are sitting around a table attending a meeting together. majic supports gaze awareness and multiple eye-contact among the participants. moreover, a shared work space is provided at the center, enabling users to carry on a discussion in a manner comparable to face-to-face meetings. although majic is very effective, it needs a high-speed network and special facilities. desktopmajic is implemented on a conventional computer workstation, and supports pseudo gaze awareness and pseudo hand action. still-picture portraits of the user in 9 different gaze directions are sent to every desktopmajic in advance, and an appropriate one is dynamically selected during the conference to reflect where the user is paying attention. moreover, other participants' mouse cursors on the shared application window are linked to their portrait window, allowing each user to intuitively see which cursor belongs to whom. since desktopmajic does not need a high-speed network, it may work smoothly even in a telephone or wireless network environment. ken-ichi okada shunsuke tanaka yutaka matsushita harp: a distributed query system for legacy public libraries and structured databases the main purpose of a digital library is to facilitate users' easy access to enormous amounts of globally networked information. typically, this information includes preexisting public library catalog data, digitized document collections, and other databases. in this article, we describe the distributed query system of a digital library prototype system known as harp. in the harp project, we have designed and implemented a distributed query processor and its query front-end to support integrated queries to preexisting public library catalogs and structured databases. this article describes our experiences in the design of an extended sequel (sql) query language known as harpsql. it also presents the design and implementation of the distributed query system.
our experience in distributed query processor and user interface design and development will be highlighted. we believe that our prototyping effort will provide useful lessons to the development of a complete digital library infrastructure. ee-peng lim ying lu computer science research in office automation whether an "office" is situated in academia, business, industry or government, the potential application of computer and communication technologies toward its more effective functioning presents particularly complex and challenging problems to researchers in computer science. it is no longer satisfactory to merely have a separate, supportive computer center configured on a centralized basis with its hardware/software/database constituents designed for and situated in a tightly controlled, remote computer room, accessible only to well-trained staff. instead a modern-day office demands distributed processing and access for potentially all of its employees. it is forcing us to consider the networking of a number of heterogeneous types of computers and terminals with various kinds of nonspecialist users who need or want suitable access from their personal workstations. this precipitates not only "distribution" of hardware, but also systems software, applications software, databases, network control, access security, and more. indeed, if we define an automated office (or a computer-based office system) to be an organizational/technological entity which encompasses a set of resources (computer hardware, software, databases, communications equipment, people, etc.) working in some specified pattern(s) of interconnection, interdependency or relation toward the accomplishment of one or more organizational objectives, we have quite a job on our hands in terms of applied research which can support the design of such a system. not only must its low-level functioning with respect to its component resources be addressed; attention must also be paid to putting those pieces together into an effectively operating complete system. the latter requires research with a global view and with abilities to conceptualize, model, analyze, synthesize and integrate the various component parts and related considerations toward creating a unified, harmonious whole. siegfried treu a research agenda for highly effective human-computer interaction: useful, usable, and universal jean scholtz michael muller david novick dan r. olsen ben shneiderman cathleen wharton tools for view generation in object-oriented databases elke a. rundensteiner the fate of indexes in an online world mary jane northrop a use of drawing surfaces in different collaborative settings two-person design sessions were studied in three different settings: face-to- face, geographically separated with an audio/video link, and a telephone-only connection. in all settings, the designers' uses of a drawing surface were noted. many similar drawing surface activities occurred in all design settings even though the settings did not each allow for the same sharing and interaction with the drawing surfaces. observations suggest that the process of creating drawings may be as important to the design process as the drawings themselves. these preliminary results raise issues for further study, particularly with respect to computer support for collaborative drawing surface use. sara a. 
bly devise (demo abstract): integrated querying and visual exploration of large datasets devise is a data exploration system that allows users to easily develop, browse, and share visual presentations of large tabular datasets (possibly containing or referencing multimedia objects) from several sources. the devise framework, implemented in a tool that has been already successfully applied to a variety of real applications by a number of user groups, makes several contributions. in particular, it combines support for extended relational queries with powerful data visualization features. datasets much larger than available main memory can be handled---devise is currently being used to visualize datasets well in excess of 100mb---and data can be interactively examined at several levels of detail: all the way from meta-data summarizing the entire dataset, to large subsets of the actual data, to individual data records. combining querying (in general, data processing) with visualizations gives us a very versatile tool, and presents several novel challenges. our emphasis is on developing an intuitive yet powerful set of querying and visualization primitives that can be easily combined to develop a rich set of visual presentations that integrate data from a wide range of application domains. in this demo, we will present a number of examples of the use of the devise tool for visualizing and interactively exploring very large datasets, and report on our experience in applying it to several real applications. m. livny r. ramakrishnan k. beyer g. chen d. donjerkovic s. lawande j. myllymaki k. wenger support for end user participation using replicated versions & group communication g. canals p. molli c. godart inference rules for multivalued dependencies william j. pervin personalized spiders for web search and analysis searching for useful information on the world wide web has become increasingly difficult. while internet search engines have been helping people to search on the web, low recall rate and outdated indexes have become more and more problematic as the web grows. in addition, search tools usually present to the user only a list of search results, failing to provide further personalized analysis which could help users identify useful information and comprehend these results. to alleviate these problems, we propose a client-based architecture that incorporates noun phrasing and self-organizing map techniques. two systems, namely ci spider and meta spider, have been built based on this architecture. user evaluation studies have been conducted and the findings suggest that the proposed architecture can effectively facilitate web search and analysis. michael chau daniel zeng hsinchun chen implementation of a dynamic web database: interface using cold fusion gary hutchinson greg baur darleen pigford tuple sequences and lexicographic indexes the concept of a tuple sequence is introduced in order to investigate structure connected with relational model implementation. analogs are presented for the relational operations of projection, join, and selection, and the decomposition problem for tuple sequences is considered. the lexicographical ordering of tuple sequences is studied via the notion of (lexicographic) index. a sound and complete set of inference rules for indexes is exhibited, and two algorithmic questions related to indexes are examined. finally, indexes and functional dependencies in combination are studied.
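as a small, informal illustration of the lexicographic machinery just described (a sketch only; the paper's formal definitions of tuple sequences and indexes are richer than this): an index can be read as an ordered list of attribute positions, and ordering a tuple sequence by that index is plain lexicographic comparison on the projected values.

    # hypothetical sketch: a "lexicographic index" as an ordered list of
    # attribute positions used as a sort key over a tuple sequence.
    rows = [
        ("smith", "db", 1998),
        ("jones", "ir", 1996),
        ("smith", "ai", 1996),
    ]

    def order_by_index(seq, index):
        # python compares tuples lexicographically, so projecting the indexed
        # attributes in order gives the desired lexicographic ordering directly.
        return sorted(seq, key=lambda t: tuple(t[i] for i in index))

    print(order_by_index(rows, (0, 2)))   # order by attribute 0, then attribute 2
    # [('jones', 'ir', 1996), ('smith', 'ai', 1996), ('smith', 'db', 1998)]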
serge abiteboul seymour ginsburg the dynamic homefinder: evaluating dynamic queries in a real-estate information exploration system we designed, implemented, and evaluated a new concept for visualizing and searching databases utilizing direct manipulation called dynamic queries. dynamic queries allow users to formulate queries by adjusting graphical widgets, such as sliders, and see the results immediately. by providing a graphical visualization of the database and search results, users can find trends and exceptions easily. user testing was done with eighteen undergraduate students who performed significantly faster using a dynamic queries interface compared to both a natural language system and paper printouts. the interfaces were used to explore a real-estate database and find homes meeting specific search criteria. christopher williamson ben shneiderman remote assistance: a view of the work and a view of the face? leon watts andrew f. monk on an algebra for historical relational databases: two views james clifford abdullah uz tansel accessing hyperdocuments through interactive dynamic maps we propose a new navigation paradigm based on a spatial metaphor to help users access and navigate within large sets of documents. this metaphor is implemented by a computer artifact called an interactive dynamic map (idm). an idm plays a role similar to the role of a real map with respect to physical space. two types of idms are computed from the documents: topic idms represent the semantic contents of a set of documents while document idms visualize a subset of documents such as those resulting from a query. idms can be used for navigating, browsing, and querying. they can be made active, they can be customized and they can be shared among users. the article presents the shadocs document retrieval system and describes the role, use and generation of idms in shadocs. mountaz zizi michel beaudouin-lafon human-computer interaction: input devices robert j. k. jacob gesturecam: a video communication system for sympathetic remote collaboration an approach supporting spatial workspace collaboration via a video- mediated communication system is described. based on experimental results, the following were determined to be the system requirements to support spatial workspace collaboration: independency of a field of view, predictability, confidence in transmission and sympathy toward the system. additionally, a newly developed camera system, the gesturecam system, is introduced. a camera is mounted on an actuator with three degrees of freedom. it is controlled by master-slave method or by a touch-sensitive crt. also, a laser pointer is mounted to assist with remote pointing. preliminary experiments were conducted and the results are described herein. hideaki kuzuoka toshio kosuge masatomo tanaka assisting the "virtual" user the development of online assistance for remote users of computer-based systems depends upon the assumptions made about the barriers users face and about the available strategies for addressing them. drawing from the domain of digital libraries and information discovery and retrieval, this commentary speculates about missed opportunities hidden by those assumptions. 
critical questions about strategies to remove user barriers, to guide users past barriers, and to help educate users to anticipate and independently overcome barriers suggest transitions for online user assistance technologies that could take the form of "confusion recognizers," "assistance avatars," and "pedagogically aware" user interfaces. john ober open object database management systems jose a. blakeley human factors in is: symposium report on the fifth hfis symposium, 14 - 15 oct. 1993, cleveland, ohio, usa dov te'eni katrine kirk ulrike schultze donald day database techniques for the world-wide web: a survey daniela florescu alon levy alberto mendelzon constraint-based query optimization for spatial databases richard helm kim marriott martin odersky how machine delays change user strategies as machine response delays vary, the most important effect on users may be not their annoyance but that they change the way they use an interface. even the very simple task of copytyping three digit numbers gives rise to at least three different user strategies (i.e. procedures). however the effect seems not be a simple function of delay length, contrary to earlier reported work. instead users are probably shifting between strategies more fluidly. paddy o'donnell stephen w. draper conferencing and collaboration project sun microsystems laboratories, inc. david gedye greg mclaughlin amy pearl john tang aesthetics and apparent usability: empirically assessing cultural and methodological issues noam tractinsky a supplement to sampling-based methods for query size estimation in a database system sampling-based methods for estimating relation sizes after relational operators such as selections, joins and projections have been intensively studied in recent years. methods of this type can achieve high estimation accuracy and efficiency. since the dominating overhead involved in a sampling- based method is the sampling cost, different variants of sampling methods are proposed so as to minimize the sampling percentage (thus reducing the sampling cost) while maintaining the estimation accuracy in terms of the confidence level and relative error (to be precisely defined later in section 2). in order to determine the minimal sampling percentage, the overall characteristics of the data such as the mean and variance are needed. currently, the representative sampling-based methods in literature are based on the assumption that overall characteristics of data are unavailable, and thus a significant amount of effort is dedicated to estimating these characteristics so as to approach the optimal (minimal) sampling percentage. the estimation for these characteristics incurs cost as well as suffers the estimation error. in this short essay, we point out that the exact values of these characteristics of data can be kept track of in a database system at a negligible overhead. as a result, the minimal sampling percentage while ensuring the specified relative error and confidence level can be precisely determined. yibei ling wei sun kdd-cup 2000: question 4 rafal kustra jorge a. picazo bogdan e. popescu conceptual analysis of lexical taxonomies: the case of wordnet top-level in this paper we propose an analysis and an upgrade of wordnet's top-level synset taxonomy. we briefly review wordnet and identify its main semantic limitations. some principles from a forthcoming _ontoclean_ methodology are applied to the ontological analysis of wordnet. 
a revised top-level taxonomy is proposed, which is meant to be more conceptually rigorous, cognitively transparent, and efficiently exploitable in several applications. aldo gangemi nicola guarino alessandro oltramari index structures for structured documents yong kyu lee seong-joon yoo kyoungro yoon p. bruce berra transforming user-centered analysis into concrete design larry e. wood ron zeno sql+d: extended display capabilities for multimedia database queries chitta baral graciela gonzalez amarendra nandigam florida international university high performance database research center naphtali rishe wei sun david barton yi deng cyril orji michael alexopoulos leonardo loureiro carlos ordonez mario sanchez artyom shaposhnikov synchronizing a database to improve freshness in this paper we study how to refresh a local copy of an autonomous data source to maintain the copy up-to-date. as the size of the data grows, it becomes more difficult to maintain the copy "fresh," making it crucial to synchronize the copy effectively. we define two freshness metrics, change models of the underlying data, and synchronization policies. we analytically study how effective the various policies are. we also experimentally verify our analysis, based on data collected from 270 web sites for more than 4 months, and we show that our new policy improves the "freshness" very significantly compared to current policies in use. junghoo cho hector garcia-molina malm: a framework for mining sequence database at multiple abstraction levels chung-sheng li philip s. yu vittorio castelli indexing a transaction-decision time database mario a. nascimento margaret h. dunham gui users have trouble using graphic conventions on novel tasks catherine a. ashworth an efficient and flexible method for archiving a data base we describe an efficient method for supporting incremental and full archiving of data bases (e.g., individual files). customers archive their data bases quite frequently to minimize the duration of data outage. because of the growing sizes of data bases and the ever increasing need for high availability of data, the efficiency of the archive copy utility is very important. the method presented here minimizes interferences with concurrent transactions by not acquiring any locks on the data being copied. it significantly reduces disk i/os by not keeping on data pages any extra tracking information in connection with archiving. these features make the archive copy operation be more efficient in terms of resource consumption compared to other methods. the method is also flexible in that it optionally supports direct copying of data from disks, bypassing the dbms's buffer pool. this reduces buffer pool pollution and processing overheads, and allows the utility to take advantage of device geometries for efficiently retrieving data. we also describe extensions to the method to accommodate the multisystem shared disks transaction environment. the method tolerates gracefully system failures during the archive copy operation. c. mohan inderpal narang optimising web queries using document type definitions a document type definition (dtd) d defines the structure of elements permitted in any web document valid with respect to d. from a given dtd d we show how to derive a number of simple structural constraints which are implied by d. using a relational abstraction of web databases, we consider a class of conjunctive queries which retrieve elements from web documents stored in a database d.
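to make the freshness discussion above (the cho and garcia-molina abstract) concrete, here is one simple way such metrics can be formalized; this is a generic sketch under assumed definitions, not necessarily the authors' exact metrics: freshness is the fraction of cached elements still identical to the source, and age measures how long the stale ones have been out of date.

    # generic sketch: freshness and age of a local copy.  each element records
    # when we last refreshed it and when it last changed at the source; all
    # field names and numbers are illustrative, not from the paper.
    def freshness(elements):
        up_to_date = sum(1 for e in elements if e["changed_at"] <= e["refreshed_at"])
        return up_to_date / len(elements)

    def average_age(elements, now):
        # a stale element's age is the time elapsed since it went stale;
        # fresh elements contribute zero.
        ages = [now - e["changed_at"]
                for e in elements if e["changed_at"] > e["refreshed_at"]]
        return sum(ages) / len(elements)

    copy = [
        {"refreshed_at": 100, "changed_at": 90},    # still fresh
        {"refreshed_at": 100, "changed_at": 130},   # stale since t = 130
    ]
    print(freshness(copy), average_age(copy, now=150))   # 0.5 10.0

a synchronization policy can then be compared by the freshness and age it achieves for a fixed refresh budget.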
for simplicity, we assume that all documents in d are valid with respect to the same dtd d. the main contribution of the paper is the use of the constraints derived from d to optimise conjunctive queries on d by removing redundant conjuncts. the relational abstraction allows us to show that the constraints derived from a dtd are equivalent to tuple-generating and equality-generating dependencies which hold on d. having done so, we can use the chase algorithm to show equivalence between a query and its reduced form. peter t. wood its and user interface consistency: a response to grudin charles wiecha the universal relation revisited (technical correspondence) william kent a data definition and mapping language for numerical data bases numerical data bases arise in many scientific applications to keep track of large sparse and dense matrices. unlike the many matrix data storage techniques available for incore manipulation, very large matrices are currently limited to a few compact storage schemes on secondary devices, due to the complex underlying data management facilities. this paper proposes an approach for generalized numerical database management that would promote physical data independence by relieving users from the need for knowledge of the physical data organization on the secondary devices. our approach is to describe each of the storage techniques for dense and sparse matrices by a physical schema, which encompasses the corresponding access path, the encoding to storage structures, and the file access method. a generalized facility for describing any kind of numerical database and its mapping to storage is provided via nonprocedural stored-data description and mapping languages (sddl and sdml). the languages are processed by a generalized syntax-directed translation scheme (gsdts) to automatically generate fortran conversion programs for creating or translating numerical databases from one compact storage scheme to another. the feasibility of the generalized approach with regard to our current implementation is also discussed. ola-olu a. daini peter scheuermann global user interface design tony fernandes the forum andrew mcgrath extending a database system with procedures this paper suggests that more powerful database systems (dbms) can be built by supporting database procedures as full-fledged database objects. in particular, allowing fields of a database to be a collection of queries in the query language of the system is shown to allow the natural expression of complex data relationships. moreover, many of the features present in object-oriented systems and semantic data models can be supported by this facility. in order to implement this construct, extensions to a typical relational query language must be made, and considerable work on the execution engine of the underlying dbms must be accomplished. this paper reports on the extensions for one particular query language and data manager and then gives performance figures for a prototype implementation. even though the performance of the prototype is competitive with that of a conventional system, suggestions for improvement are presented.
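the idea of a field whose value is a collection of queries can be sketched in a few lines (a hypothetical illustration using sqlite, not the actual system or query language extensions described above): the projects of an employee are stored not as data but as a query string that is executed when the field is dereferenced.

    # hypothetical sketch of a query-valued field: each employee row stores a
    # query string that is run on demand to materialize the field's value.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        create table project(name text, dept text);
        insert into project values ('pegasus', 'db'), ('orion', 'db'), ('lyra', 'hci');
        create table employee(name text, dept text, projects_query text);
        insert into employee values
            ('alice', 'db',  'select name from project where dept = ''db'''),
            ('bob',   'hci', 'select name from project where dept = ''hci''');
    """)

    def dereference(emp_name):
        # fetch the stored query and execute it to obtain the field's value.
        (q,) = con.execute(
            "select projects_query from employee where name = ?", (emp_name,)).fetchone()
        return [row[0] for row in con.execute(q)]

    print(dereference("alice"))   # ['pegasus', 'orion']

questions such as caching the materialized value or optimizing across the stored query are the kind of engine extensions the abstract alludes to.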
michael stonebraker jeff anton eric hanson optical storage of page images and pictorial data - opportunities and needed advances in information retrieval we describe two current development projects at the library of congress using high-density optical storage, both of which require more advanced and improved computer-based information retrieval methodologies than existing bibliographic retrieval systems. a much greater emphasis will be placed on the information content of the articles rather than on the broad subject categories in general use for computer retrieval citations to book materials. needed approaches include the linking of selected external reference sources and the extraction of character-encoded index information from the page images by ocr techniques. our position is that extremely high density storage of articles in a single subject area requires a finer resolution in the retrieval system accessing the material. a single side of a 12-inch digital optical disk, for example, can hold 15 years of the journal of the acm. william r. nugent jessica r. harding using words and phonetic strings for efficient information retrieval from imperfectly transcribed spoken documents michael j. witbrock alexander g. hauptmann introduction to the special issue on video information retrieval scott stevens thomas little node autonomy in distributed systems the goal of this paper is to explore the notion of node autonomy in distributed computer systems. some motivations for autonomy are exposed. different facets of autonomy as well as relationships among them are discussed. finally, we look into how autonomy affects other aspects of distributed computing, including timeliness, correctness, load sharing, data sharing, and data replication. h. garcia molina b. kogan using email and www in a distributed participatory design project babak a. farshchian monica divitini predicate rewriting for translating boolean queries in a heterogeneous information system searching over heterogeneous information sources is difficult in part because of the nonuniform query languages. our approach is to allow users to compose boolean queries in one rich front-end language. for each user query and target source, we transform the user query into a subsuming query that can be supported by the source but that may return extra documents. the results are then processed by a filter query to yield the correct final results. in this article we introduce the architecture and associated mechanism for query translation. in particular, we discuss techniques for rewriting predicates in boolean queries into native subsuming forms, which is a basis of translating complex queries. in addition, we present experimental results for evaluating the cost of postfiltering. we also discuss the drawbacks of this approach and cases when it may not be effective. we have implemented prototype versions of these mechanisms and demonstrated them on heterogeneous boolean systems. chen-chuan k. chang hector garcia-molina andreas paepcke a pilotcard-based shared hypermedia system supporting shared and private databases satoshi ichimura takeshi kamita yutaka matsushita evolution of the talking dinosaur: the (not so) natural history of a new interface for children kirsten alexander erik strommen automatic migration of files into relational databases in order to provide database-like features for files, particularly for searching in web data, one solution is to migrate file data into a relational database. 
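as a toy illustration of such a migration step (a hypothetical sketch; the paper's specification language and generated adapters are far more general): a small declarative description of a line-oriented file is interpreted to turn file contents into typed relational tuples ready for insertion.

    # hypothetical sketch: a tiny specification of a line-oriented file
    # (separator plus typed field list) driving a generic file-to-tuple adapter.
    import io

    spec = {
        "separator": ";",
        "fields": [("title", str), ("year", int), ("pages", int)],
    }

    def adapt(fileobj, spec):
        # turn each non-empty line into a typed tuple according to the spec.
        for line in fileobj:
            line = line.strip()
            if not line:
                continue
            raw = line.split(spec["separator"])
            yield tuple(cast(value) for (_, cast), value in zip(spec["fields"], raw))

    sample = io.StringIO("first report;1995;12\nsecond report;1999;34\n")
    print(list(adapt(sample, spec)))
    # [('first report', 1995, 12), ('second report', 1999, 34)]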
having stored the data, the capabilities of sql can be used for querying, provided the data has been given some structure. to this end, an adapter must be implemented that converts data from files into the database. this paper proposes a specification-based automation for this procedure: given some descriptive specification of file contents, those file adapters are generated. an adequate specification language provides powerful concepts to describe the contents of files. in contrast to similar work, directory structures are taken into account because they often contain useful semantics. uwe hohenstein andreas ebert engineering for usability (panel session): lessons from the user derived interface the focus here is on the lessons learned from the udi project for building usability into the software development process. in the udi project we attempted to engineer a usable system. that process involved: defining an appropriate metric for measuring usability, setting explicit levels of usability to be achieved, determining an appropriate methodology for building usability into the system, delivering a seemingly functional system with an easily changed interface very early in the development cycle, and recognizing the tentative nature of the initial design. using the udi project as an example, each of the above principles will be discussed in detail. dennis wixon john whiteside displaying search results in textual databases this article presents the preliminary results of a study currently in progress on the display of search results from textual databases. the study deals with information that should be displayed in a list of documents that match the user's search criteria. offer drori an empirical evaluation of graspable user interfaces: towards specialized, space-multiplexed input george w. fitzmaurice william buxton the metadatabase project at rensselaer the metadatabase project is a multi-year research effort at rensselaer polytechnic institute. sponsored by industry (alcoa, dec, ge, cm, ibm and others) through rensselaer's computer integrated manufacturing program, this project seeks to develop novel concepts, methods and techniques for achieving information integration across major functional systems pertaining to computerized manufacturing enterprises. thus, the metadatabase model encompasses the generic tasks of heterogeneous, distributed, and autonomous database administration, but also includes information resources management and integration of concurrent (functional) systems. the model entails (1) an integrated data and knowledge modeling and representation method; (2) an online kernel (the metadatabase) for information modeling and management; (3) metadatabase assisted global query formulation and processing; (4) a concurrent architecture whereby global synergies are achieved through (distributed) metadata management rather than synchronization of (distributed) database processing; and (5) a theory of information requirements for integration. a metadatabase prototype was recently demonstrated to the industrial sponsors. the basic concept of the metadatabase model is discussed in this paper. cheng hsu what to do when there's too much information hypertext systems with small units of text are likely to drown the user with information, in the same way that online catalogs or bibliographic retrieval systems often do. experiments with a catalog of 800,000 book citations have shown two useful ways of dealing with the "too many hits" problem.
one is a display of phrases containing the excessively frequent words; another is a display of titles by hierarchical category. the same techniques should apply to other text-based retrieval systems. in general, interactive solutions seem more promising than attempts to do detailed query analysis and get things right the first time. m. lesk launching the usability approach: experience at helsinki university of technology this article describes how the usability approach was introduced into finnish companies and to computer science students by combining teaching and real cases from companies. two courses were developed where usability study projects were part of the usability education. the courses not only taught the evaluation methods but also radically changed the students' attitudes towards usability. marja-riitta koivunen marko nieminen sirpa riihiaho kms: a distributed hypermedia system for managing knowledge in organizations developers of hypermedia systems face many design issues. the design for kms, a large-scale hypermedia system for collaborative work, seeks improved user productivity through simplicity of the conceptual data model. robert m. akscyn donald l. mccracken elise a. yoder approximate spatio-temporal retrieval this paper proposes a framework for the handling of spatio-temporal queries with inexact matches, using the concept of relation similarity. we initially describe a binary string encoding for 1d relations that permits the automatic derivation of similarity measures. we then extend this model to various granularity levels and many dimensions, and show that reasoning on spatio-temporal structure is significantly facilitated in the new framework. finally, we provide algorithms and optimization methods for four types of queries: (i) object retrieval based on some spatio-temporal relations with respect to a reference object, (ii) spatial joins, i.e., retrieval of object pairs that satisfy some input relation, (iii) structural queries, which retrieve configurations matching a particular spatio-temporal structure, and (iv) special cases of motion queries. considering the current large availability of multidimensional data and the increasing need for flexible query-answering mechanisms, our techniques can be used as the core of spatio-temporal query processors. dimitris papadias nikos mamoulis vasilis delis answering queries in categorical databases a compatible categorical data base can be viewed as a single (contingency) table by taking the maximum-entropy extension of the component tables. such a view, here called universal table model, is needed to answer a user who wishes "cross-classified" categorical data, that is, categorical data resulting from the combination of the information contents of two or more base tables. in order to implement a universal table interface we make use of a query-optimization procedure, which is able to generate an appropriate answer both in the case that the asked data are present in the data base and in the case that they are not and, then, have to be estimated. f. m. malvestuto multimedia searches in internet vania atanasova quasi-cubes: exploiting approximations in multidimensional databases a data cube is a popular organization for summary data. a cube is simply a multidimensional structure that contains at each point an aggregate value, i.e., the result of applying an aggregate function to an underlying relation. in practical situations, cubes can require a large amount of storage.
the typical approach to reducing storage cost is to materialize parts of the cube on demand. unfortunately, this lazy evaluation can be a time- consuming operation. in this paper, we describe an approximation technique that reduces the storage cost of the cube without incurring the run time cost of lazy evaluation. the idea is to provide an incomplete description of the cube and a method of estimating the missing entries with a certain level of accuracy. the description, of course, should take a fraction of the space of the full cube and the estimation procedure should be faster than computing the data from the underlying relations. since cubes are used to support data analysis and analysts are rarely interested in the precise values of the aggregates (but rather in trends), providing approximate answers is, in most cases, a satisfactory compromise. alternatively, the technique can be used to implement a multiresolution system in which a tradeoff is established between the execution time of queries and the errors the user is willing to tolerate. by only going to the disk when it is necessary (to reduce the errors), the query can be executed faster. this idea can be extended to produce a system that incrementally increases the accuracy of the answer while the user is looking at it, supporting on-line aggregation. daniel barbará mark sullivan on the propagation of errors in the size of join results yannis e. ioannidis stavros christodoulakis the knowledge weasel hypermedia annotation system daryl t. lawton ian e. smith a case for interaction: a study of interactive information retrieval behavior and effectiveness jurgen koenemann nicholas j. belkin empirically-based re-design of a hypertext encyclopedia this paper reports on the processes used and guidelines discovered in re- designing the user interface of the hypertext encyclopedia, hyperholmes. the re-design was based on the outcomes of a previous experiment and was evaluated experimentally. results showed that the new system resulted in superior performance and somewhat different styles of navigation compared to the old system and to paper. the study provides empirical support for design guidelines relating to tiled windows, navigation tools, graphics and hierarchical navigation. keith instone barbee mynatt teasley laura marie leventhal technology in use (panel session) lucy a. suchman sharon traweek michael lynch richard frankel brigitte jordon behavioral sampling as a data-gathering method for gss research michael parent r. brent gallupe jim sheffield using quantitative information for efficient association rule generation b. pôssas m. carvalho r. resende w. meita view maintenance issues for the chronicle data model (extended abstract) h. v. jagadish inderpal singh mumick abraham silberschatz database security teresa f. lunt eduardo b. fernandez organizing usability work to fit the full product range michael muller mary p. czerwinski smart telepointers: maintaining telepointer consistency in the presence of user interface customization conventional methods for maintaining telepointer consistency in shared windows do not work in the presence of per- user window customizations. this article presents the notion of a "smart telepointer," which is a telepointer that works correctly in spite of such customizations. methods for smart-telepointer implementation are discussed. kenneth j. rodham dan r. 
olsen diaries at work mikko kovalainen mike robinson esa auramäki incremental update to aggregated information for data warehouses over internet miranda chan hong va leong antonio si point vs. interval-based query languages for temporal databases (extended abstract) david toman framework for extending object-oriented applications with hypermedia functionality (abstract) alejandra garrido francisco vives pablo zanetti exploiting constraint-like data characterizations in query optimization query optimizers nowadays draw upon many sources of information about the database to optimize queries. they employ runtime statistics in cost-based estimation of query plans. they employ integrity constraints in the query rewrite process. primary and foreign key constraints have long played a role in the optimizer, both for rewrite opportunities and for providing more accurate cost predictions. more recently, other types of integrity constraints are being exploited by optimizers in commercial systems, for which certain semantic query optimization techniques have now been implemented. these new optimization strategies that exploit constraints hold the promise for good improvement. their weakness, however, is that often the "constraints" that would be useful for optimization for a given database and workload are not explicitly available for the optimizer. data mining tools can find such "constraints" that are true of the database, but then there is the question of how this information can be kept by the database system, and how to make this information available to, and effectively usable by, the optimizer. we present our work on _soft constraints_ in db2. a soft constraint is a syntactic statement equivalent to an integrity constraint declaration. a soft constraint is not really a constraint, per se, since future updates may undermine it. while a soft constraint is valid, however, it can be used by the optimizer in the same way integrity constraints are. we present two forms of soft constraint: _absolute_ and _statistical_. an absolute soft constraint is consistent with respect to the _current_ state of the database, just in the same way an integrity constraint must be. they can be used in rewrite, as well as in cost estimation. a statistical soft constraint differs in that it may have some degree of violation with respect to the state of the database. thus, statistical soft constraints cannot be used in rewrite, but they can still be used in cost estimation. we are working long-term on absolute soft constraints. we discuss the issues involved in implementing a facility for absolute soft constraints in a database system (and in db2), and the strategies that we are researching. the current db2 optimizer is more amenable to adding facilities for statistical soft constraints. in the short-term, we have been implementing pathways in the optimizer for statistical soft constraints. we discuss this implementation. parke godfrey jarek gryz calisto zuzarte a conversation with ted selker kate ehrlich integrity constraints for xml integrity constraints are useful for semantic specification, query optimization and data integration. the id/idref mechanism provided by xml dtds relics on a simple form of constraint to describe references. yet, this mechanism is not sufficient to express semantic constraints, such as keys or inverse relationships, or stronger, object-style references. in this paper, we investigate integrity constraints for xml, both for semantic purposes and to improve its current reference mechanism. 
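to see why something stronger than untyped id/idref is often wanted, here is a toy check of a key and a foreign key over attribute values of specific element types (an illustrative sketch of the general idea, not the constraint language or results of the paper).

    # hypothetical sketch: checking a per-element-type key and foreign key,
    # which dtds' global, untyped id/idref mechanism cannot express.
    import xml.etree.ElementTree as ET

    doc = ET.fromstring(
        "<db>"
        "<dept dno='d1'/><dept dno='d2'/>"
        "<emp eno='e1' dept='d1'/><emp eno='e2' dept='d3'/>"
        "</db>")

    def is_key(root, tag, attr):
        values = [e.get(attr) for e in root.iter(tag)]
        return len(values) == len(set(values))        # attr uniquely identifies tag

    def is_foreign_key(root, tag, attr, ref_tag, ref_attr):
        targets = {e.get(ref_attr) for e in root.iter(ref_tag)}
        return all(e.get(attr) in targets for e in root.iter(tag))

    print(is_key(doc, "dept", "dno"))                          # True
    print(is_foreign_key(doc, "emp", "dept", "dept", "dno"))   # False: 'd3' has no target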
we extend dtds with several families of constraints, including key, foreign key, inverse constraints and constraints specifying the semantics of object identities. these constraints are useful both for native xml documents and to preserve the semantics of data originating in relational or object databases. complexity and axiomatization results are established for the (finite) implication problems associated with these constraints. these results also extend relational dependency theory on the interaction between (primary) keys and foreign keys. in addition, we investigate implication of more general constraints, such as functional, inclusion and inverse constraints defined in terms of navigation paths. wenfei fan jerome simeon implementation of extended indexes in postgres paul m. aoki an extension of the relational data model to incorporate ordered domains we extend the relational data model to incorporate partial orderings into data domains, which we call the ordered relational model. within the extended model, we define the partially ordered relational algebra (the pora) by allowing the ordering predicate ⊑ to be used in formulae of the selection operator (σ). the pora expresses exactly the set of all possible relations that are invariant under order-preserving automorphism of databases. this result characterizes the expressiveness of the pora and justifies the development of ordered sql (osql) as a query language for ordered databases. osql provides users with the capability of capturing the semantics of ordered data in many advanced applications, such as those having temporal or incomplete information. ordered functional dependencies (ofds) on ordered databases are studied, based on two possible extensions of domain orderings: pointwise ordering and lexicographical ordering. we present a sound and complete axiom system for ofds in the first case and establish a set of sound and complete chase rules for ofds in the second. our results suggest that the implication problems for both cases of ofds are decidable and that the enforcement of ofds in ordered relations is practically feasible. in a wider perspective, the proposed model explores an important area of object-relational databases, since ordered domains can be viewed as a general kind of data type. database research at hp labs marie-anne neimat ming-chien shan a homunculus in the computer? r. john brockmann industry briefs: yahoo! irene au on-line documentation for application systems a critical factor in application development is delivering timely documentation to the end-user. an interactive solution to the development, maintenance, and distribution of application documentation meets this need. a table-driven software package provides control for three levels of user documentation, forming an immediately available description of application systems in an on-line environment. users may select any or all descriptive information for a system. at the detailed level the choice is menu driven. documentation levels are: summary/overview---provides scope and purpose of system. detailed operating level---lists terminal prompts, examples of appropriate user responses, and explanatory material to guide the user. news---brief descriptions of any changes which affect the end-user. designed for ease of maintenance, the modular file structure facilitates addition, replacement, insertion, and deletion of user documentation.
the system allows programmer interaction to install user documentation and provides the system librarian with central control. features exist which provide documentation status reports and track usage by end-user, giving the librarian tools to determine the priorities with which systems are addressed. remote high speed printing and terminal display are available for all documentation. the paper will describe system specifications, design criteria, and give examples of the implementation. kimberly hourigan carolyn stockdale modelling and measuring end user sophistication sid l. huff malcolm c. munro barbara marcolin lore: a database management system for semistructured data lore (for lightweight object repository) is a dbms designed specifically for managing semistructured information. implementing lore has required rethinking all aspects of a dbms, including storage management, indexing, query processing and optimization, and user interfaces. this paper provides an overview of these aspects of the lore system, as well as other novel features such as dynamic structural summaries and seamless access to data from external sources. jason mchugh serge abiteboul roy goldman dallas quass jennifer widom fine-grained sharing in a page server oodbms for reasons of simplicity and communication efficiency, a number of existing object-oriented database management systems are based on page server architectures; data pages are their minimum unit of transfer and client caching. despite their efficiency, page servers are often criticized as being too restrictive when it comes to concurrency, as existing systems use pages as the minimum locking unit as well. in this paper we show how to support object- level locking in a page server context. several approaches are described, including an adaptive granularity approach that uses page-level locking for most pages but switches to object-level locking when finer-grained sharing is demanded. we study the performance of these approaches, comparing them to both a pure page server and a pure object server. for the range of workloads that we have examined, our results indicate that a page server is clearly preferable to an object server. moreover, the adaptive page server is shown to provide very good performance, generally outperforming the pure page server, the pure object server, and the other alternatives as well. michael j. carey michael j. franklin markos zaharioudakis voice-messaging standard completes its journey pat billingsley the third-generation/oodbms manifesto, commercial version frank manola hierarchy-based mining of association rules in data warehouses giuseppe psaila pier luca lanzi databases marjorie richardson considerations for information environments and the navique workspace george w. furnas samuel j. rauch sap r/3 (tutorial): a database application system many database applications in the real world are no longer built on top of a stand-alone database system. rather, generic (standard) application systems are employed in which the database system is one integrated component. sap is the market leader for integrated business administration systems, and its sap r/3 product is a comprehensive software system which integrates modules for finance, material management, sales and distribution, etc. from an architectural point of view, sap r/3 is a client/server application system with a relational database system as back-end. sap supports a choice between a variety of commercial relational database products. 
alfons kemper donald kossmann florian matthes the lyric language: querying constraint objects alexander brodsky yoram kornatzky review of information appliances and beyond larry wood creative orientations for interface design at do while studio jennifer hall a journey into web usability: what an information architect learned on his summer vacation steve toub informativeness as an ordinal utility function for information retrieval j. tague champagne training on a beer budget edmond p. fitzgerald aileen cater-steel world-wide computer tim berners-lee guiding automation with pixels: a technique for programming in the user interface richard potter normative standards for is research detmar w. straub soon ang roberto evaristo optimal and approximate computation of summary statistics for range aggregates fast estimates for aggregate queries are useful in database query optimization, approximate query answering and online query processing. hence, there has been a lot of focus on "selectivity estimation", that is, computing summary statistics on the underlying data and using that to answer aggregate queries fast and to a reasonable approximation. we present two sets of results for range aggregate queries, which are amongst the most common queries. first, we focus on a histogram as summary statistics and present algorithms for constructing histograms that are provably optimal (or provably approximate) for range queries; these algorithms take (pseudo-)polynomial time. these are the first known optimality or approximation results for arbitrary range queries; previously known results were optimal only for restricted range queries (such as equality queries, hierarchical or prefix range queries). second, we focus on wavelet-based representations as summary statistics and present fast algorithms for picking wavelet statistics that are provably optimal for range queries. no previously-known wavelet-based methods have this property. we perform an experimental study of the various summary representations and show the benefits of our algorithms over the known methods. anna c. gilbert yannis kotidis s. muthukrishnan martin j. strauss an actor/actress annotation system for drama videos shin'ichi satoh projection-propagation in complex-object query languages yatin saraiya semantics for null extended nested relations the nested relational model extends the flat relational model by relaxing the first normal form assumption in order to allow the modeling of complex objects. much of the previous work on the nested relational model has concentrated on defining the data structures and query language for the model. the work done on integrity constraints in nested relations has mainly focused on characterizing subclasses of nested relations and defining normal forms for nested relations with certain desirable properties. in this paper we define the semantics of nested relations, which may contain null values, in terms of integrity constraints, called null extended data dependencies, which extend functional dependencies and join dependencies encountered in flat relational database theory. we formalize incomplete information in nested relations by allowing only one unmarked generic null value, whose semantics we do not further specify. the motivation for the choice of a generic null is our desire to investigate only fundamental semantics which are common to all unmarked null types. this led us to define a preorder on nested relations, which allows us to measure the relative information content of nested relations.
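one very simple instance of such an information ordering, restricted to flat tuples with a single generic null (an illustrative sketch, not the paper's preorder on nested relations): a tuple is at most as informative as another if it agrees with it wherever it is non-null, and the ordering lifts to relations by subsumption.

    # hypothetical sketch: "less or equally informative" with None as the null.
    def less_informative(t1, t2):
        # t1 is subsumed by t2 if every non-null component of t1 matches t2.
        return all(a is None or a == b for a, b in zip(t1, t2))

    def relation_less_informative(r1, r2):
        # one common lifting to relations: every tuple of r1 is subsumed by
        # some tuple of r2.
        return all(any(less_informative(t1, t2) for t2 in r2) for t1 in r1)

    print(less_informative(("smith", None), ("smith", "db")))                 # True
    print(relation_less_informative([("smith", None)],
                                    [("smith", "db"), ("jones", "ir")]))      # True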
we also define a procedure, called the extended chase procedure, for testing satisfaction of null extended data dependencies and for making inferences by using these null extended data dependencies. the extended chase procedure is shown to generalize the classical chase procedure, which is of major importance in flat relational database theory. as a consequence of our approach we are able to capture the novel notion of losslessness in nested relations, called herein null extended lossless decomposition. finally, we show that the semantics of nested relations are a natural extension of the semantics of flat relations. mark levene george loizou projected realities: conceptual design for cultural effect william gaver anthony dunne video redundancy yanlin liu mark claypool a unified environment for fusion of information retrieval approaches prior work has shown that combining results of various retrieval approaches and query representations can improve search effectiveness. today, many meta- search engines exist which combine the results of various search engines in the hopes of improving overall effectiveness. however, the combination of results from different search engines masks variations in parsers, and other indexing techniques (stemming, stop words, etc.) this makes it difficult to assess the utility of the fusion technique. we have implemented the two most prevalent retrieval strategies: probabilistic and vector space using the same parser and the same relational retrieval engine. first, we identified a model that enables the fusion of an arbitrary number of sources. next, we tested various linear combinations of these two methods as well as various thresholds for identifying retrieved documents. our results show some improvement of effectiveness, but they also provide us for a baseline from which we can continue with other retrieval strategies and test the effect of fusing these strategies. m. catherine mccabe abdur chowdhury david a. grossman ophir frieder the movable filter as a user interface tool: the video eric a. bier ken fishkin ken pier maureen c. stone evaluating emergent collaboration on the web loren terveen will hill on chinese text retrieval jian-yun nie martin brisebois xiaobo ren towards tractable algebras for bags bags, i.e. sets with duplicates, are often used to implement relations in database systems. in this paper we study the expressive power of algebras for manipulating bags. the algebra we present is a simple extension of the nested relation algebra. our aim is to investigate how the use of bags in the language extends its expressive power, and increases its complexity. we consider two main issues, namely (i) the relationship between the depth of bag nesting and the expressive power, and (ii) the relationship between the algebraic operations, and their complexity and expressive power. we show that the bag algebra is more expressive than the nested relation algebra (at all levels of nesting), and that the difference may be subtle. we establish a hierarchy based on the structure of algebra expressions. this hierarchy is shown to be highly related to the properties of the powerset operator. stephane grumbach tova milo heuristic algorithms for distributed query processing this paper examines heuristic algorithms for processing distributed queries using generalized joins. as this optimization problem is np-hard heuristic algorithms are deemed to be justified. a heuristic algorithm to form/formulate strategies to process queries is presented. 
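for a feel of what a cheap greedy heuristic of this general flavour looks like (a generic sketch under assumed cardinalities and selectivities, not the algorithm or cost model of the paper): repeatedly perform the join whose estimated result is smallest, which tends to keep partial results, and hence transmission costs, small.

    # generic sketch of a greedy join-ordering heuristic: at each step join the
    # pair of relations with the smallest estimated result size.  the sizes and
    # selectivities below are illustrative inputs only.
    from itertools import combinations

    def greedy_strategy(sizes, selectivity):
        # sizes: {relation: cardinality}; selectivity: {(r, s): join selectivity}
        rels, plan = dict(sizes), []

        def estimate(a, b):
            return rels[a] * rels[b] * selectivity.get(tuple(sorted((a, b))), 1.0)

        while len(rels) > 1:
            a, b = min(combinations(rels, 2), key=lambda p: estimate(*p))
            est = estimate(a, b)
            plan.append((a, b, est))
            rels[a + b] = est            # intermediate result replaces a and b
            del rels[a], rels[b]
        return plan

    sizes = {"r": 1000, "s": 200, "t": 5000}
    sel = {("r", "s"): 0.01, ("r", "t"): 0.001, ("s", "t"): 0.05}
    print(greedy_strategy(sizes, sel))
    # [('r', 's', 2000.0), ('t', 'rs', 10000000.0)]

looking further ahead before committing to a join is one natural way to spend more strategy-formulation overhead in exchange for better strategies.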
it has a special property in that its overhead can be "controlled": the higher its overhead the better the strategies it produces. modeling on a test-bed of queries is used to demonstrate that there is a trade-off between the strategy's execution and formulation delays. the modeling results also support the notion that simple greedy heuristic algorithms such as are proposed by many researchers are sufficient in that they are likely to lead to near-optimal strategies and that increasing the overhead in forming strategies is only marginally beneficial. both the strategy formulation and execution delays are examined in relation to the number of operations specified by the strategy and the total size of partial results. p. bodorik j. s. riordon computation-tuple sequences and object histories a record-based, algebraically-oriented model is introduced for describing data for "object histories" (with computation), such as checking accounts, credit card accounts, taxes, schedules, and so on. the model consists of sequences of computation tuples defined by a computation-tuple sequence scheme (css). the css has three major features (in addition to input data): computation (involving previous computation tuples), "uniform" constraints (whose satisfaction by a computation-tuple sequence u implies satisfaction by every interval of u), and specific sequences with which to start the valid computation-tuple sequences. a special type of css, called "local," is singled out for its relative simplicity in maintaining the validity of a computation- tuple sequence. a necessary and sufficient condition for a css to be equivalent to at least one local css is given. finally, the notion of "local bisimulatability" is introduced for regarding two css as conveying the same information, and two results on local bisimulatability in connection with local css are established. seymour ginsburg katsumi tanaka a telewriting system on a lan using a pen-based computer as the terminal higaki seiichi taninaka hiroshi moriya shinji webbed documents malcolm graham andrew surray the evolution of cscw: past, present and future development david crow sara parsowith g. bowden wise algebraic change propagation for semijoin and outerjoin queries many interesting examples in view maintenance involve semijoin and outerjoin queries. in this paper we develop algebraic change propagation algorithms for the following operators: semijoin, anti-semijoin, left outerjoin, right outerjoin, and full outerjoin. timothy griffin bharat kumar information retrieval using a singular value decomposition model of latent semantic structure in a new method for automatic indexing and retrieval, implicit higher-order structure in the association of terms with documents is modeled to improve estimates of term-document association, and therefore the detection of relevant documents on the basis of terms found in queries. singular-value decomposition is used to decompose a large term by document matrix into 50 to 150 orthogonal factors from which the original matrix can be approximated by linear combination; both documents and terms are represented as vectors in a 50- to 150- dimensional space. queries are represented as pseudo-documents vectors formed from weighted combinations of terms, and documents are ordered by their similarity to the query. initial tests find this automatic method very promising. g. w. furnas s. deerwester s. t. dumais t. k. landauer r. a. harshman l. a. streeter k. e. 
lochbaum innovation and evaluation in information exploration interfaces gene golovchinsky nicholas j. belkin the amulet user interface development environment brad a. myers split menus: effectively using selection frequency to organize menus when some items in a menu are selected more frequently than others, as is often the case, designers or individual users may be able to speed performance and improve preference ratings by placing several high-frequency items at the top of the menu. design guidelines for split menus were developed and applied. split menus were implemented and tested in two in situ usability studies and a controlled experiment. in the usability studies performance times were reduced by 17 to 58% depending on the site and menus. in the controlled experiment split menus were significantly faster than alphabetic menus and yielded significantly higher subjective preferences. a possible resolution to the continuing debate among cognitive theorists about predicting menu selection times is offered. we conjecture and offer evidence that, at least when selecting items from pull- down menus, a logarithmic model applies to familiar (high-frequency) items, and a linear model to unfamiliar (low-frequency) items. andrew sears ben shneiderman incremental data structures and algorithms for dynamic query interfaces dynamic query interfaces (dqis) form a recently developed method of database access that provides continuous realtime feedback to the user during the query formulation process. previous work shows that dqis are elegant and powerful interfaces to small databases. unfortunately, when applied to large databases, previous dqi algorithms slow to a crawl. we present a new approach to dqi algorithms that works well with large databases. egemen tanin richard beigel ben shneiderman a database perspective on lotus domino/notes in this one-page summary, i introduce the database aspects of lotus domino/notes. these database features are covered in detail in the corresponding sigmod99 tutorial available at www.almaden.ibm.com/u/mohan/domino_sigmod99.ps. c. mohan computational properties of metaquerying problems metaquerying is a datamining technology by which hidden dependencies among several database relations can be discovered. this tool has already been successfully applied to several real-world applications. recent papers provide only very preliminary results about the complexity of metaquerying. in this paper we define several variants of metaquerying that encompass, as far as we know, all variants defined in the literature. we study both the combined complexity and the data complexity of these variants. we show that, under the combined complexity measure, metaquerying is generally intractable (unless p=np), but we are able to single out some tractable interesting metaquerying cases (whose combined complexity is logcfl-complete). as for the data complexity of metaquerying, we prove that, in general, this is in p, but lies within ac0 in some interesting cases. finally, we discuss the issue of equivalence between metaqueries, which is useful for optimization purposes. f. angiulli r. ben-eliyahu-zohary l. palopoli g. b. ianni elsa (abstract): an electronic library search assistant bekka denning philip j. smith optimization of queries with user-defined predicates relational databases provide the ability to store user-defined functions and predicates which can be invoked in sql queries. 
when evaluation of a user- defined predicate is relatively expensive, the traditional method of evaluating predicates as early as possible is no longer a sound heuristic. there are two previous approaches for optimizing such queries. however, neither is able to guarantee the optimal plan over the desired execution space. we present efficient techniques that are able to guarantee the choice of an optimal plan over the desired execution space. the optimization algorithm with complete rank-ordering improves upon the naive optimization algorithm by exploiting the nature of the cost formulas for join methods and is polynomial in the number of user-defined predicates (for a given number of relations.) we also propose pruning rules that significantly reduce the cost of searching the execution space for both the naive algorithm as well as for the optimization algorithm with complete rank-ordering, without compromising optimality. we also propose a conservative local heuristic that is simpler and has low optimization overhead. although it is not always guaranteed to find the optimal plans, it produces close to optimal plans in most cases. we discuss how, depending on application requirements, to determine the algorithm of choice. it should be emphasized that our optimization algorithms handle user-defined selections as well as user-defined join predicates uniformly. we present complexity analysis and experimental comparison of the algorithms. surajit chaudhuri kyuseok shim a note on estimating the cardinality of the projection of a database relation the paper by ahad et al. [1] derives an analytical expression to estimate the cardinality of the projection of a database relation. in this note, we propose to show that this expression is in error even when all the parameters are assumed to be constant. we derive the correct formula for this expression. ravi mukkamala sushil jajodia constant time maintenance or the triumph of the fd. marc h. graham ke wang maximizing the value of internet-based corporate travel reservation systems alina m. chircu robert j. kauffman doug keskey using petri nets for rule termination analysis detlef zimmer axel meckenstock rainer unland salton award lecture: on theoretical argument in information retrieval (summary only): on theoretical argument in information retrieval the last winner of the salton award, tefko saracevic, gave an acceptance address at sigir in philadelphia in 1997. previous winners were william cooper (1994), cyril cleverdon (1991), karen sparck jones (1988) and gerard salton himself (1985). in this talk, i plan to follow the tradition of acceptance addresses, and present a personal view of and retrospective on some of the areas in which i work. however, i will not be saying much about what are perhaps the two most obvious parts of my work: the probabilistic approach to retrieval and evaluation of retrieval systems. rather i will attempt to get under the skin of my take on ir, by discussing the nature of theoretical argument in the field, partly through examples. this talk is about the place of theory in the study of information retrieval (in some sense following bill cooper's 1994 topic), but not so much theory with a capital t --- rather what might be described as small-t theory. the field has a very strong pragmatic orientation, reflected both in the attitudes of the commercial participants and in the emphasis on formal evaluation in the academic environment. 
nevertheless, there are many theoretical ideas buried in, or implied by, the ways we talk about the field --- the language we use to discuss it. i will be discussing two areas to illustrate these low-level theoretical ideas: precision devices, and the apparent symmetry between retrieval and filtering. the phrase `precision device' used to have a rather clear meaning in ir, in the days of set-based retrieval systems. in that context, a precision device was a device to enable the restriction of the retrieved set to those most likely (out of the documents originally included) to be useful. these days, with the ubiquitous scoring and ranking methods largely replacing set-based retrieval, the idea has lost its meaning; it is worth exploring the formal relationships involved to understand the change a little better. my second area is to do with the relation between filtering and the more traditional type of ad hoc information retrieval. there is a tendency and a temptation to see these as the same kind of thing, sometimes with a more specific assumption of duality, based on the inversion of the roles of documents and queries. it is important to see how far this parallel extends, and where it breaks down. i explore the nature of the duality and the kinds of reasons why it does break down. these examples reflect my interest in the basic logical structure of information retrieval systems and the situations in which such systems may be found. i argue for a certain level of logical argument in information retrieval, which might be taken as small-t theory, though not as capital-t theory. i believe there are reasons to think a grand theory of ir to be an unattainable goal --- such a theory would have to encompass so many different aspects of retrieval, having to do for example with human cognition and behaviour and the structure of knowledge, as well as with the statistical concepts that inform the probabilistic approach. however, accepting the unattainability of a grand theory does not preclude the development of further and more useful models based on particular aspects and lower-level logic. the low-level logic is important not only in its own right, but as the basis for linking together more sophisticated theories concerned with more restricted domains. the most elaborate and complete theory of (say) user behaviour is of no use at all without a strong linkage between the parts of that theory and the entities relevant to ir that fall outside its scope. the glue that provides that linkage has to be low-level logic. stephen robertson utilizing first-order logic in query processing the purpose of this research is to improve on the current membership algorithms and their applications in the relational database model and information retrieval. a new technique is proposed which makes up for deficiencies in existing methods. this membership algorithm is based on full first-order logic, extending sagiv's propositional calculus approach. our technique does not assign one relation name (association) to all database dependencies like the representation of nicolas. finally, it may be used to improve relation schema design. furthermore, we can utilize our technique to derive data retrieval programs for answering queries, an idea that has not previously appeared in the literature. djamshid asgari lawrence henschen industrial-strength data warehousing arun sen varghese s. jacob database technology (track introduction only) ramzi a.
haraty constructing olap cubes based on queries an on-line analytical processing (olap) user often follows a train of thought, posing a sequence of related queries against the data warehouse. although their details are not known in advance, the general form of those queries is apparent beforehand. thus, the user can outline the relevant portion of the data by posing generalised queries against a cube representing the data warehouse. since existing olap design methods are not suitable for non-professionals, we present a technique that automates cube design given the data warehouse, functional dependency information, and sample olap queries expressed in the general form. the method constructs complete but minimal cubes with low risks related to sparsity and incorrect aggregations. after the user has given queries, the system will suggest a cube design. the user can accept it or improve it by giving more queries. the method is also suitable for improving existing cubes using respective real mdx queries. tapio niemi jyrki nummenmaa peter thanisch beyond interface builders: model-based interface tools pedro szekely ping luo robert neches social influence and end-user training dennis f. galletta manju ahuja amir hartman thompson teo a. graham peace a virtual oval keyboard and a vector input method for pen-based character input minako hashimoto masatomo togasi genesis: a project to develop an extensible database management system existing database management systems are tailored to traditional business applications. since 1980, a variety of new database applications have arisen: vlsi cad, graphics, and statistical databases are examples. these new applications cannot be handled easily or efficiently by available dbmss because their requirements are so unusual. in particular, they demand special-purpose data types and operators, and require novel storage structures and new query processing techniques. the idea that a single dbms can handle all database tasks equally well is no longer practical. customized database systems are needed. two approaches are currently taken to customize a dbms. either a system is built from scratch, or an existing system is modified. however, neither approach is cost effective as it takes many man-years of effort and hundreds of thousands of dollars to produce an operational system. recently, a number of researchers have independently conceived the idea of extensible or reconfigurable dbmss ([car85], [day85], [sto86], [bat86a]). these systems are modular; they are essentially software busses where modules with desired dbms capabilities can be plugged or unplugged in a matter of hours or days at virtually no cost to produce a customized dbms. the idea of extensible dbmss is a powerful concept, and is certainly a more attractive approach to dbms customization than current methods. features that can be customized in dbmss can be broadly categorized as being logical front-end or physical back-end. at the logical front-end of a dbms, it should be possible to introduce new object types and new operators. for example, conventional relational dbmss provide only integer, float, and string data types. nontraditional applications require novel data types, such as polygons and time-series. it should be possible to support new abstraction concepts, such as molecular objects and versions which are useful in the context of vlsi cad data ([bat85a]). also, it should be possible to express and process recursive queries and rules.
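the kind of extensibility described above is easiest to picture as a registration interface: a customized dbms accepts descriptors for new data types and operators and records them in a catalog. the c fragment below is a purely illustrative sketch of that idea, written for this compilation rather than taken from genesis or any of the systems cited here; every name and signature in it is hypothetical.

```c
/* illustrative sketch only -- not the genesis interface -- of how an extensible
   front-end might let applications plug in new data types and operators. */
#include <stddef.h>

typedef struct {                          /* descriptor for a user-defined type */
    const char *name;                     /* e.g. "polygon" or "time_series"    */
    size_t size;                          /* bytes needed to store one value    */
    int (*input)(const char *text, void *out);              /* parse external form */
    int (*output)(const void *val, char *buf, size_t len);  /* format for display  */
} udt_descriptor;

typedef struct {                          /* descriptor for a user-defined operator */
    const char *name;                     /* e.g. "overlaps"                        */
    const char *left_type, *right_type;   /* operand type names                     */
    int (*eval)(const void *left, const void *right);       /* boolean operator body */
} udop_descriptor;

#define MAX_EXTENSIONS 32
static udt_descriptor registered_types[MAX_EXTENSIONS];
static udop_descriptor registered_ops[MAX_EXTENSIONS];
static int n_types, n_ops;

/* registration entry points a customized dbms build might expose */
int register_type(const udt_descriptor *t)
{
    if (n_types == MAX_EXTENSIONS) return -1;   /* catalog full */
    registered_types[n_types++] = *t;
    return 0;
}

int register_operator(const udop_descriptor *op)
{
    if (n_ops == MAX_EXTENSIONS) return -1;
    registered_ops[n_ops++] = *op;
    return 0;
}
```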
at the physical back-end of a dbms, it should be possible to admit new storage structures, new concurrency control and recovery algorithms, and new query optimization algorithms. these are just some of the more important examples. genesis is an extensible database management system that is being developed to address the needs of nontraditional applications. our approach to developing genesis software is based on three principles. first, we are using theoretical models of logical and physical database organization to help identify the software atoms of database management systems. that is, we are taking a building blocks approach to dbms construction: by composing different atoms (modules) in different ways, we can quickly construct different (i.e., customized) dbmss. there are several important advantages to this approach. software atoms (modules) can be written and debugged separately, rather than following the common approach of writing and debugging compositions of many atoms simultaneously. in this way, software development is simplified. also, our approach provides the means for fast technology transfer. if a new file structure or database algorithm is invented, it is a matter of writing the software atom (module) that encodes this file structure or algorithm. the atom could then be used immediately in the construction of dbmss. second, all software atoms are plug compatible. that is, all modules that implement dbms primitives have the same interface. this makes module composition trivial. third, all software atoms are reusable. many dbmss use exactly the same software atoms, which means that many algorithms have been reinvented over and over again at an enormous cost. our approach attempts to minimize technology reinvention by writing atoms once and reusing them. it turns out that the techniques that are needed to achieve logical database extensibility and physical database extensibility are rather different. we are using a functional data model and data language as the logical front-end to genesis ([shi81], [bun82]). the functional approach is well-suited for extensible database systems as it is open-ended: new object types and operators can be added easily. a description of the genesis data model and data language, the techniques that are being used for its implementation, and how the data language is being used to express and process queries on non-1nf relational databases is given in [bat86b]. the physical back-end of genesis is based on the unifying model ([bat82]) and transformation model ([bat85b]). these models explain how complicated storage structures that are used in commercial database management systems are actually combinations of primitive file structures, linkset structures, and conceptual-to-internal mappings. a description of the implementation of the physical back-end of genesis is given in [bat86c]. we are using object-based models as a means for unifying database research with techniques in software engineering. d. s. batory participatory design techniques for cscw konrad tollmar the metadesk: models and prototypes for tangible user interfaces brygg ullmer hiroshi ishii decision-supportive modeling techniques for production planning (abstract only) proper analysis of production systems becomes quite complex when considering large, multiple-product systems. production becomes limited when products compete for resources that are shared among products and production lines. these limitations can cause collisions, backlogging, and underutilization of these resources.
within the scope of industrial planning, several goals should be met as a result of the application of production analysis techniques. these include the determination of: production levels for each product, utilization of production resources, and costs of production. discussion will be directed towards the problem situation, the problem needs, and applicable solution approaches. corey d. schou expressive power and data complexity of nonrecursive query languages for lists and trees (extended abstract) we extend the traditional query languages by primitives for handling lists and trees. our main results characterize the expressive power and data complexity of the following extended languages: (1) relational algebra with lists and trees, (2) nonrecursive datalog with negation, with lists and trees, (3) nonrecursive prolog with lists and trees, (4) first-order logic over lists and trees. languages (2)-(4) turn out to have the same expressive power; their range-restricted fragments have the same expressive power as (1). every query in these languages is a boolean combination of range-restricted queries. we also prove that these query languages have polynomial data complexity under any "reasonable" encoding of inputs. furthermore, under a natural encoding of inputs, languages (2)-(4) have the same expressive power as first-order logic over finite structures, therefore their data complexity is in ac0. thus, the use of lists and trees in nonrecursive query languages gives no gain in expressiveness. this contrasts with a huge difference between the nonelementary program complexity of extended languages (2)-(4) and the pspace program complexity of their relational counterparts. our results partly explain why lists and trees are not so widely used in nonrecursive query languages as other collection types. evgeny dantsin andrei voronkov fixed-point query languages for linear constraint databases we introduce a family of query languages for linear constraint databases over the reals. the languages are defined over two-sorted structures, the first sort being the real numbers and the second sort consisting of a decomposition of the input relation into regions. the languages are defined as extensions of first-order logic by transitive closure or fixed-point operators, where the fixed-point operators are defined over the set of regions only. it is shown that the query languages capture precisely the queries definable in various standard complexity classes including ptime. stephan kreutzer complete logical routings in computer mail systems the logical routing of a message in a computer mail system involves the identification and location of the set of intended recipients for that message. this function is carried out by the naming and addressing mechanism of the mail system. an important property of that mechanism is that it should be able to identify and locate all the intended recipients of a message, so that, once submitted, a message will not become lost or stuck in the system. we first discuss message addressing schemes, which are a framework for dealing with the naming and addressing problem. message addressing schemes can also serve as a basis for the analysis of some of the properties of logical message routing within a system. we examine the conditions necessary for a complete message addressing scheme, that is, one that guarantees to deliver all possible messages. p. martin d.
tsichritzis impact: interactive motion picture authoring system for creative talent hirotada ueda takafumi miyatake satoshi yoshizawa sagebook: searching data-graphics by content mei c. chuah steven f. roth john kolojejchick joe mattis octavio juarez browsing and querying in object-oriented databases juliano lopes de oliveira ricardo de oliveira anido whiteboard: usability design and testing elizabeth buie managerial considerations hugh j. watson barbara j. haley "it/i": a theater play featuring an autonomous computer graphics character claudio s. pinhanez aaron f. bobick the road less traveled: internet collaboration: good, bad, and downright ugly lorrie faith cranor specification and implementation of exceptions in workflow management systems although workflow management systems are most applicable when an organization follows standard business processes and routines, any of these processes faces the need for handling exceptions, i.e., asynchronous and anomalous situations that fall outside the normal control flow. in this paper we concentrate upon anomalous situations that, although unusual, are part of the semantics of workflow applications, and should be specified and monitored coherently; in most real-life applications, such exceptions affect a significant fraction of workflow cases. however, very few workflow management systems are integrated with a highly expressive language for specifying this kind of exception and with a system component capable of handling it. we present chimera-exc, a language for the specification of exceptions for workflows based on detached active rules, and then describe the architecture of a system, called far, that implements chimera-exc and integrates it with a commercial workflow management system and database server. we discuss the main issues that were solved by our implementation, and report on the performance of far. we also discuss design criteria for exceptions in light of the formal properties of their execution. finally, we focus on the portability of far and its unbundling to a generic architecture with detached active rules. fabio casati stefano ceri stefano paraboschi guiseppe pozzi towards an interaction level for object-oriented geographic database systems geographic database systems are large and complex systems characterized by a high degree of interactivity. though the object-oriented approach is powerful enough for the representation of geographic information, the availability of a flexible and easy-to-use interaction level is essential for most users. in this paper, we focus on three major issues concerning any interaction level to be built on top of an object-oriented geographic database system, namely: spatial data modeling, user views, and browsing through spatial information. despite what really is represented at the logical level, the spatial data model we refer to in this paper explicitly represents all kinds of spatial and non-spatial relationships of interest in the geographic context. this results in an effective conceptual tool for the definition of two different categories of views (namely, frame-based and map-based views). both categories of views feature a high degree of flexibility; in fact, users are allowed to specialize the contents of a view according to their specific needs. a way of combining browsing and querying during the activity of exploring large spatial geographic databases through views is finally sketched in the paper.
eliseo clementini paolino di felice combining structure search and content search for the world-wide web hermann kaindl stefan kramer luis miguel afonso structured observation: techniques for gathering information about users in their own world susan m. dray caught them, but how to hold them? r. raghuraman implementing smart for minicomputers via relational processing with abstract data types designed during the 1960's as a research tool for the field of information retrieval, the smart system has been operating on an ibm 370 since 1974. smart is now being enhanced, redesigned, and programmed under the unix operating system [28] on a dec vax 11/780. the techniques used should allow real-time operation on smaller minicomputers in the pdp 11 family. the implementation provides for a combination of database and information retrieval operations which make it applicable to office automation, personal information system management, and research studies. the smart vector space model, which treats information requests and stored information records as vectors in an n-space (of terms), is integrated into the relational database model using the concepts of abstract data types (adts). domains of relations are allowed to be any adt; an extended relational algebra is described with operators that manipulate many complex data structures. after illustrating the application of these concepts to typical smart tasks, a prototype implementation is outlined. also included is a discussion of techniques to be employed in a more efficient version. edward a. fox querying websites using compact skeletons several commercial applications, such as online comparison shopping and process automation, require integrating information that is scattered across multiple websites or xml documents. much research has been devoted to this problem, resulting in several research prototypes and commercial implementations. such systems rely on wrappers that provide relational or other structured interfaces to websites. traditionally, wrappers have been constructed by hand on a per- website basis, constraining the scalability of the system. we introduce a website structure inference mechanism called _compact skeletons_ that is a step in the direction of automated wrapper generation. compact skeletons provide a transformation from websites or other hierarchical data, such as xml documents, to relational tables. we study several classes of compact skeletons and provide polynomial- time algorithms and heuristics for automated construction of compact skeletons from websites. experimental results show that our heuristics work well in practice. we also argue that compact skeletons are a natural extension of commercially deployed techniques for wrapper construction. anand rajaraman jeffrey d. ullmann reflections: the culture of uncertainty steven pemberton an approach to integration of web information source search and web information retrieval yuichi lizuka mitsuaki tsunakawa shin-ichiro seo tetsuo ikeda persistence of information on the web: analyzing citations contained in research articles steve lawrence frans coetzee eric glover gary flake david pennock bob krovetz finn nielsen andries kruger lee giles a generic information retrieval system to support interoperability debajyoti mukhopadhyay (almost) optimal parallel block access to range queries this guarantee is true for any number of dimensions. subsequent to this work, bhatia et al. 
[4] have proved that such a performance bound is essentially optimal for this kind of scheme, and have also extended our results to the case where the number of disks is a product of the form n1 * n2 * … * nt, where the ni need not all be 2. mikhail j. atallah sunil prabhakar sonar: system for optimized numeric association rules takeshi fukuda yasuhiko morimoto shinichi morishita takeshi tokuyama gloss: text-source discovery over the internet the dramatic growth of the internet has created a new problem for users: location of the relevant sources of documents. this article presents a framework for (and experimentally analyzes a solution to) this problem, which we call the text-source discovery problem. our approach consists of two phases. first, each text source exports its contents to a centralized service. second, users present queries to the service, which returns an ordered list of promising text sources. this article describes gloss, glossary of servers server, with two versions: bgloss, which provides a boolean query retrieval model, and vgloss, which provides a vector-space retrieval model. we also present hgloss, which provides a decentralized version of the system. we extensively describe the methodology for measuring the retrieval effectiveness of these systems and provide experimental evidence, based on actual data, that all three systems are highly effective in determining promising text sources for a given query. luis gravano hector garcia-molina anthony tomasic exploring the boundaries of successful gss application: supporting inter-organizational policy networks technologies used to support group work are based on, and contain underlying assumptions regarding, how people work together. the appropriateness of such assumptions is an important factor in determining the successful employment of the technology. this paper uses an action research approach to explore the boundaries of effective gss application by challenging the basic assumptions built into gss. this exploration is carried out in the context of a particular arena in which groups have to interact to reach a certain goal: inter-organizational policy networks. from nine cases it appears that gss are most effective in the orientation phase of inter-organizational policy making, while gss should be avoided during the separation phase where winners and losers can be identified. during the package deal phase of an inter-organizational policy making process, gss may have added value to offer, but the technology should be employed with caution. these findings are consistent with various experimental studies that found that gss application is more successful for creativity tasks than for preference tasks and mixed motive tasks. gert-jan de vreede hans de bruijn a poor quality video link affects speech but not gaze andrew f. monk leon watts designing for demanding users michael sellers semantics of query languages for network databases semantics determines the meaning of language constructs; hence it says much more than syntax does about implementing the language. the main purpose of this paper is a formal presentation of the meaning of basic language constructs employed in many database languages (sublanguages). therefore, stylized query languages ssl (sample selection language) and j (joins) are introduced, wherein most of the typical entries present in other query languages are collected. the semantics of ssl and j are defined by means of the denotational method and explained informally.
in ssl and j, four types of expressions are introduced: a selector (denotes a set of addresses), a term (denotes a set of values), a formula (denotes a truth value), and a join (denotes a set of n-tuples of addresses or values). in many cases alternative semantics are given and discussed. in order to obtain more general properties of the proposed languages, a new database access model is introduced, intended to be a tool for the description of the logical access paths to data. in particular, the access paths of the network and relational models can be described. ssl and j expressions may be addressed to both data structures. in the case of the relational model, expressions of j are similar to sql or quel statements. thus j may be considered a generalization of relational query languages for the network model. finally, a programming language, based on ssl and j, is outlined, and the issues of ssl and j implementation are considered. kazimierz subieta database research at wisconsin corporate univ. of wisconsin approximating multi-dimensional aggregate range queries over real attributes finding approximate answers to multi-dimensional range queries over real valued attributes has significant applications in data exploration and database query optimization. in this paper we consider the following problem: given a table of _d_ attributes whose domain is the real numbers, and a query that specifies a range in each dimension, find a good approximation of the number of records in the table that satisfy the query. we present a new histogram technique that is designed to approximate the density of multi-dimensional datasets with real attributes. our technique finds buckets of variable size, and allows the buckets to overlap. overlapping buckets allow more efficient approximation of the density. the size of the cells is based on the local density of the data. this technique leads to a faster and more compact approximation of the data distribution. we also show how to generalize kernel density estimators, and how to apply them on the multi-dimensional query approximation problem. finally, we compare the accuracy of the proposed techniques with existing techniques using real and synthetic datasets. dimitrios gunopulos george kollios vassilis j. tsotras carlotta domeniconi efficiently supporting procedures in relational database systems we examine an extended relational database system which supports database procedures as full fledged objects. in particular, we focus on the problems of query processing and efficient support for database procedures. first, a variation to the original ingres decomposition algorithm is presented. then, we examine the idea of storing results of previously processed procedures in secondary storage (caching). using a cache, the cost of processing a query can be reduced by preventing multiple evaluations of the same procedure. problems associated with cache organizations, such as replacement policies and validation schemes are examined. another means for reducing the execution cost of queries is indexing. a new indexing scheme for cached results, partial indexing, is proposed and analyzed. timos k. sellis augmenting the organizational memory: a field study of answer garden a growing concern for organizations and groups has been to augment their knowledge and expertise. one such augmentation is to provide an organizational memory, some record of the organization's knowledge. 
however, relatively little is known about how computer systems might enhance organizational, group, or community memory. this paper presents findings from a field study of one such organizational memory system, the answer garden. the paper discusses the usage data and qualitative evaluations from the field study, and then draws a set of lessons for next- generation organizational memory systems. mark s. ackerman a study of retrospective and on-line event detection yiming yang tom pierce jaime carbonell human-computer interaction and standardization lin brown designing multimedia for learning: narrative guidance and narrative construction lydia plowman rosemary luckin diana laurillard matthew stratfold josie taylor extensions of world wide web aiming at a construction of a "virtual personal library" yusuke mishina a multimedia system for authoring motion pictures ronald baecker alan j. rosenthal naomi friedlander eric smith andrew cohen understanding complex information environments: a social analysis of watershed planning lisa r. schiff nancy a. van house mark h. butler the diary study: a workplace-oriented research tool to guide laboratory efforts john rieman tailorable domain objects as meeting tools for an electronic whiteboard thomas p. moran william van melle patrick chiu intermail: a prototype hypermedia mail system shari jackson nicole yankelovich systematic design of spoken prompts brian hansen david g. novick stephen sutton fast evaluation of structured queries for information retrieval eric w. brown a multimodal framework for music inputs (poster session) the growth of digital music databases imposes new content-based methods of interfacing with stored data; although indexing and retrieval techniques are deeply investigated, an integrated view of querying mechanism has never been established before. moreover, the multimodal nature of music should be exploited to match the users' expectations as well as their skills. in this paper, we propose a hierarchy of music-interfaces that is suitable for existent prototypes of music information retrieval systems; according to this framework, human/computer interaction should be improved by singing, playing or notating music. dealing with multiple inputs poses many challenging problems for both their combination and the low-level translation needed to transform an acoustic signal into a symbolic representation. this paper addresses the latter problem in some details, aiming to develop music- interfaces available not only to trained-musician. goffredo haus emanuele pollastri the dataindex: a structure for smaller, faster data warehouses in this paper we present dataindexes, a family of design strategies for data warehouses to support online analytical processing (olap). as the name implies, dataindexes are both a storage structure for the warehoused relational data and an indexing scheme to provide fast access to that data. we present two simple dataindexes: the basic dataindex (bdi), which can be used for any attribute and the join dataindex (jdi), which is used for foreign-key attributes. either structure can be shown to significantly improve query response times in most cases. and, since they serve as indexes as well as the store of base data, dataindexes actually define a physical design strategy for a data warehouse where the indexing, for all intents and purposes, comes for "free." 
anindya datta igor viguier automatic impact sound generation for using in nonvisual interfaces this paper describes work in progress on automatic generation of "impact sounds" based on purely physical modelling. these sounds can be used as non- speech audio presentation of objects and as interaction mechanisms to non visual interfaces. different approaches for synthesizing impact sounds, the process of recording impact sounds and the analysis of impact sounds are introduced. a physical model for describing impact sounds "spherical objects hitting flat plates or beams" is presented. some examples of impact sounds generated by mentioned physical model and comparison of spectra of real recorded sounds and model generated impact sounds (generated via physical modelling) are discussed. the objective of this research project (joint project university of zurich and swiss federal institute of technology) is to develop a concept, methods and a prototype for an audio framework. this audio framework shall describe sounds on a highly abstract semantic level. every sound is to be described as the result of one or several interactions between one or several objects at a certain place and in a certain environment. a. darvishi e. munteanu v. guggiana h. schauer m. motavalli m. rauterberg ten myths of multimodal interaction sharon oviatt an execution model for limited ambiguity rules and its application to derived data update a novel execution model for rule application in active databases is developed and applied to the problem of updating derived data in a database represented using a semantic, object-based database model. the execution model is based on the use of "limited ambiguity rules" (lars), which permit disjunction in rule actions. the execution model essentially performs a breadth-first exploration of alternative extensions of a user-requested update. given an object-based database schema, both integrity constraints and specifications of derived classes and attributes are compiled into a family of limited ambiguity rules. a theoretical analysis shows that the approach is sound: the execution model returns all valid "completions" of a user-requested update, or terminates with an appropriate error notification. the complexity of the approach in connection with derived data update is considered. i.-min a. chen richard hull dennis mcleod perspectives: trialogue on design (of) elizabeth dykstra-erickson wendy mackay jonathan arnowitz human factors challenges in creating a principal support office system - the speech filing system approach john d. gould stephen j. boies integration of text retrieval technology into formatted (conventional) information systems conventional information systems are characterized by data management, which is formatted according to the characterization, prepared by a systems analyst, in conjunction with the user of the new system. the operating principles of these systems enable efficient data management and the resolution of a broad range of problems. at times, systems of this nature do not meet the complex needs of an organization, such as the management of data that is difficult to characterize precisely, or the management of occasional activities that do not justify a separate system, etc.text retrieval technology provides management of "open" data, as well as a wide range of other data management forms. this paper presents the advantages of integrating text retrieval technology into formatted information systems in order to solve the above problems. 
offer drori extending fitts' law to two-dimensional tasks fitts' law, a one-dimensional model of human movement, is commonly applied to two-dimensional target acquisition tasks on interactive computing systems. for rectangular targets, such as words, it is demonstrated that the model can break down and yield unrealistically low (even negative!) ratings for a task's index of difficulty (id). the shannon formulation is shown to partially correct this problem, since id is always ≥ 0 bits. as well, two alternative interpretations of "target width" are introduced that accommodate the two-dimensional nature of tasks. results of an experiment are presented that show a significant improvement in the model's performance using the suggested changes. i. scott mackenzie william buxton queries-r-links: graphical markup for text navigation gene golovchinsky mark chignell conference preview: iui '99: 1999 intelligent user interfaces, redondo beach, california, usa, january 5 - 8, 1999 jennifer bruer on interpretations of relational languages and solutions to the implied constraint problem the interconnection between conceptual and external levels of a relational database is made precise in terms of the notion of "interpretation" between first-order languages. this is then used to obtain a methodology for discovering constraints at the external level that are "implied" by constraints at the conceptual level and by conceptual-to-external mappings. it is also seen that these concepts are important in other database issues, namely, automatic program conversion, database design, and compile-time error checking of embedded database languages. although this paper deals exclusively with the relational approach, it also discusses how these ideas can be extended to hierarchical and network databases. barry e. jacobs anthony c. klug alan r. aronson image retrieval by appearance s. ravela r. manmatha good web design: essential ingredient! nahum gershon jacob nielsen mary czerwinski nick ragouzis david siegel wayne neale development of a windowing manager for a single process on a large screen (abstract only) current technology provides multi-windowing capabilities for several microcomputer systems. each window is a subset of the information presented for display by a process which is executing concurrently with other processes on the micro or on an up-line host. window sizes are necessarily small since several windows are in contention for space on the limited area provided by a standard sized screen. currently available gas panels provide several times more area for character display than was previously possible. a screen manager for the ibm 3290 gas panel allows a single process to simultaneously display multiple windows on this large capacity terminal. display principles and disciplines and management standards were developed. a prototype screen manager, being implemented to demonstrate the feasibility of the formulated protocol, should inspire and facilitate additional research in this area. mark williard leroy roquemore the role of a judge in a user based retrieval experiment (poster session) mingfang wu michael fuller ross wilkinson voir: visualization of information retrieval (abstract) gene golovchinsky mark h.
chignell lothar rostek the equivalence of solving queries and producing tree projections (extended abstract) yehoshua sagiv oded shmueli presenting information in sound in the past few years, the increase in interactive use of computers has led to an emphasis on human factors and the ways in which digital information can best be presented to users. computer graphics has been at the forefront of this growth involving vision as an active aid in interpreting data. bar charts, psuedo-color image processing, and 3-dimensional figures are but a few means of providing the viewer with data information. as the use of computers increases, the need for a variety of alternatives of interacting with computers also increases. computer-generated sound is one capability not being fully utilized in the computer/human interface. just as an x-y plot reveals relationships in data, sounds might also reveal relationships in data. this report focuses on the potential for using computer generated sounds to present data information. the first section addresses multivariate data problems which might be aided by sound output. the second describes experiments performed to determine whether listeners can discriminate among data sets based on sound. the final section discusses ongoing work and future directions. sara bly eno: synthesizing structured sound spaces eno is an audio server designed to make it easy for applications in the unix environment to incorporate non-speech audio cues. at the physical level, eno manages a shared resource, namely the audio hardware. at the logical level, it manages a sound space that is shared by various client applications. instead of dealing with sound in terms of its physical description (i.e., sampled sounds), eno allows sounds to be presented and controlled in terms of higher- level descriptions of sources, interactions, attributes, and sound space. using this structure, eno can facilitate the creation of consistent, rich systems of audio cues. in this paper, we discuss the justification, design, and implementation of eno. michel beaudouin-lafon william w. gaver managing multimedia information in database systems william i. grosky on kent's "consequences of assuming a universal relation" (technical correspondance) jeffrey d. ullman response to "remarks on two new theorems of date and fagin" in [df92], we present simple conditions, which we now describe, for guaranteeing higher normal forms for relational databases. a key is simple if it consists of a single attribute. we show in [df92] that if a relation schema is in third normal form (3nf) and every key is simple, then it is in projection-join normal form (sometimes called fifth normal form), the ultimate normal form with respect to projections and joins. we also show in [df92] that if a relation schema is in boyce-codd normal form (bcnf) and some key is simple, then it is in fourth normal form (4nf). these results give the database designer simple sufficient conditions, defined in terms of functional dependencies alone, that guarantee that the schema being designed is automatically in higher normal forms. c. j. date ronald fagin server selection on the world wide web significant efforts are being made to digitize rare and valuable library materials, with the goal of providing patrons and historians digital facsimiles that capture the "look and feel" of the original materials. this is often done by digitally photographing the materials and making high resolution 2d images available. 
the underlying assumption is that the objects are flat. however, older materials may not be flat in practice, being warped and crinkled due to decay, neglect, accident and the passing of time. in such cases, 2d imaging is insufficient to capture the "look and feel" of the original. for these materials, 3d acquisition is necessary to create a realistic facsimile. this paper outlines a technique for capturing an accurate 3d representation of library materials which can be integrated directly into current digitization setups. this will allow digitization efforts to provide patrons with more realistic digital facsimiles of library materials. nick craswell peter bailey david hawking obsolescent materialized views in query processing of enterprise information systems in recent years, query processing has become more complex as data sources are frequently replicated and data are periodically processed and embedded within several data sources simultaneously. these trends have necessitated the optimization of techniques for query processing in order to exploit these new alternatives. accordingly, this paper introduces an improved query optimization technique, which is capable of assessing query plans that use both current and obsolescent data. in particular, we provide a cost model by which the trade-offs of using obsolescent materialized views can be evaluated and we also discuss the method's applicability to contemporary query optimization techniques. avigdor gal session services and synchronous groupware (abstract) john f. patterson putting it together: rdf: weaving the web of discovery ralph r. swick alexandria digital library the goal of the alexandria digital library project is to develop a distributed system that provides a comprehensive range of library services for collections of spatially indexed and graphical information. while such collections include digitized maps and images as important special components, the alexandria digital library will involve a very wide range of graphical materials and will include textual materials. users of the alexandria digital library will range from school children to academic researchers to members of the general public. they will be able to retrieve materials from the library on the basis of information content as well as by reference to spatial location. terence r. smith james frew wais heikki lehvaslaiho robert harper pegasus architecture and design principles ming-chien shan business: making an e-business conceptualization and design process more "user"-centered richard i. anderson text databases: a survey of text models and systems arjan loeffen on the expressive power of datalog: tools and a case study we study here the language datalog( ), which is the query language obtained from datalog by allowing equalities and inequalities in the bodies of the rules. we view datalog( ) as a fragment of an infinitary logic lω and show that lω can be characterized in terms of certain two-person pebble games. this characterization provides us with tools for investigating the expressive power of datalog( ). as a case study, we classify the expressibility of fixed subgraph homeomorphism queries on directed graphs. fortune et al. [fhw80] classified the computational complexity of these queries by establishing two dichotomies, which are proper only if p ≠ np. without using any complexity-theoretic assumptions, we show here that the two dichotomies are indeed proper in terms of expressibility in datalog( ). phokion g. kolaitis moshe y.
vardi evaluating the influence of interface styles and multiple access paths in hypertext pawan r. vora martin g. helander valerie l. shalin context interchange: overcoming the challenges of large-scale interoperable database systems in a dynamic environment research in database interoperability has primarily focused on circumventing schematic and semantic incompatibility arising from autonomy of the underlying databases. we argue that, while existing integration strategies might provide satisfactory support for small or static systems, their inadequacies rapidly become evident in large-scale interoperable database systems operating in a dynamic environment. this paper highlights the problem of receiver heterogeneity, scalability, and evolution which have received little attention in the literature, provides an overview of the context interchange approach to interoperability, illustrates why this is able to better circumvent the problems identified, and forges the connections to other works by suggesting how the context interchange framework differs from other integration approaches in the literature. cheng hian goh stuart e. madnick michael d. siegel exploring the design space for notification servers devina ramduny alan dix tom rodden touchpad-based remote control devices neil r. n. enns i. scott mackenzie the changing roles of educators: using e-mail, cd-rom, and online documentation in the technical writing classroom lynnette r. porter an automatic news video parsing, indexing and browsing system: chien yong low qi tian hongjiang zhang data mining and the web: past, present and future minos n. garofalakis rajeev rastogi s. seshadri kyuseok shim breaking the barrier of transactions: mining inter-transaction association rules anthony k.h. tung hongjun lu jiawei han ling feng technology transfer: so much research, so few good products ellen a. isaacs john c. tang jim foley jeff johnson allan kuchinsky jean scholtz john bennett global virtual teams recent interviews with gvt leaders and members offer critical advice from the trenches regarding the challenges and coping strategies for collaborating on a global scale. line dube guy pare an efficient algorithm to update large itemsets with early pruning necip fazil ayan abdullah uz tansel erol arkun integration of the viewpoint mechanism in federated databases fouzia benchikha mahmoud boufaida lionel seinturier the naos system c. collet t. coupaye we-met (window environment-meeting enhancement tools) catherine g. wolf james r. rhyne lorna a. zorman harold l. ossher documenting virtual communities scott r. tilley dennis b. smith databases and visualization daniel a. keim getting the model right for video-mediated communication (abstract) sylvia wilbur garry beirne jon crowcroft j. robert ensor john tang responding to subtle, fleeting changes in the user's internal state in human-to-human interaction, people sometimes are able to pick up and respond sensitively to the other's internal state as it shifts moment by moment over the course of an exchange. to find out whether such an ability is worthwhile for computer human interfaces, we built a semi-automated tutoring-type spoken dialog system. the system inferred information about the user's "ephemeral emotions", such as confidence, confusion, pleasure, and dependency, from the prosody of his utterances and the context. it used this information to select the most appropriate acknowledgement form at each moment.
in doing so the system was following some of the basic social conventions for real-time interaction. users rated the system with this ability more highly than a version without. wataru tsukahara nigel ward cscw challenges in large-scale technical projects - a case study kaj morten kyng preben mogensen using local context to support hypermedia authoring (abstract) mylene melly materialized view maintenance and integrity constraint checking: trading space for time kenneth a. ross divesh srivastava s. sudarshan a general, yet useful theory of information systems steven alter a multiagent system for content based navigation of music david de roure samhaa el-beltagy steven blackburn wendy hall "let's see your search-tool!" - collaborative use of tailored artifacts in groupware groupware applications should be tailorable to fit the requirements of dynamically evolving and differentiated fields of application. to encourage individual and collaborative tailoring activities, applications should be tailorable on different levels of complexity. a search tool has been developed which offers different levels of tailoring complexity by means of hierarchically organized component languages. users can create alternative search tools and compound components by themselves. search tool alternatives and compound components can also be shared among the users. when introducing this tool into an organization of the political administration, it turned out that the users had considerable problems in understanding the functioning of artifacts created by someone else. to ease cooperative tailoring activities, we have implemented features, which allow users to structure, describe, and explore shared components and search tool alternatives. also we provided means to store and exchange examples for components' use. volker wulf xlibris: an automated library research assistant while recent work has focused on providing tools and infrastructure for users to access electronic information over the internet, the relationship between the physical world and information available online has been relatively unexplored. information about a user's location, and the objects she interacts with, can be sufficient to recognize enough of the user's task to drive retrieval of online information relevant to the task at hand. the xlibris system automatically retrieves, aggregates, and delivers information about books to users as they are checked out of the library, using information about the books themselves and the user's task. xlibris locates books in the dewey decimal subject hierarchy to automatically search for the most relevant information about the book for the user, tailoring both the sources queried and the information returned based on the book's position in the hierarchy. andrew crossen jay budzik mason warner larry birnbaum kristian j. hammond the new media roy rada george s. carson speech patterns in video-mediated conversations this paper reports on the first of a series of analyses aimed at comparing same room and video- mediated conversations for multiparty meetings. this study compared patterns of spontaneous speech for same room versus two video- mediated conversations. one video system used a single camera, monitor and speaker, and a picture-in- a-picture device to display multiple people on one screen. the other system used multiple cameras, monitors, and speakers in order to support directional gaze cues and selective listening. 
differences were found between same room and video-mediated conversations in terms of floor control and amount of simultaneous speech. while no differences were found between the video systems in terms of objective speech measures, other important differences are suggested and discussed. abigail j. sellen process restricted ast: an assessment of group support systems appropriation and meeting outcomes using participant perceptions william david salisbury matthew j. stollak query optimization for repository-based applications martin staudt rene soiron christoph quix matthias jarke multiversion divergence control of time fuzziness epsilon serializability (esr) has been proposed to manage and control inconsistency in extending the classic transaction processing. esr increases system concurrency by tolerating a bounded amount of inconsistency. in this paper, we present multiversion divergence control (mvdc) algorithms that support esr with not only value but also time fuzziness in multiversion databases. unlike value fuzziness, accumulating time fuzziness is semantically different. a simple summation of the length of two time intervals may either underestimate the total time fuzziness, resulting in incorrect execution, or overestimate the total time fuzziness, unnecessarily degrading the effectiveness of mvesr. we present a new operation, called timeunion, to accurately accumulate the total time fuzziness. because of the accurate control of time and value fuzziness by the mvdc algorithm, mvesr is very suitable for the use of multiversion databases for real-time applications that may tolerate a limited degree of data inconsistency but prefer more data recency. calton pu miu k. tsang kun-lung wu philip s. yu learning to personalize haym hirsh chumki basu brian d. davison the development and coaching of visok: a web-based project in statoil vidar hepsø merete juul kristin mauseth redundancy in spatial databases spatial objects other than points and boxes can be stored in spatial indexes, but the techniques usually require the use of approximations that can be arbitrarily bad. this leads to poor performance and highly inaccurate responses to spatial queries. the situation can be improved by storing some objects in the index redundantly. most spatial indexes permit no flexibility in adjusting the amount of redundancy. spatial indexes based on z-order permit this flexibility. accuracy of the query response increases with redundancy (there is a "diminishing return" effect). search time, as measured by disk accesses, first decreases and then increases with redundancy. there is, therefore, an optimal amount of redundancy (for a given data set). the optimal use of redundancy for z-order is explored through analysis of the z-order search algorithm and through experiments. j. a. orenstein navigation guided by artificial force fields: dongbo xiao roger hubbold formal model of correctness without serializability in the classical approach to transaction processing, a concurrent execution is considered to be correct if it is equivalent to a non-concurrent schedule. this notion of correctness is called serializability. serializability has proven to be a highly useful concept for transaction systems for data-processing style applications. recent interest in applying database concepts to applications in computer-aided design, office information systems, etc. has resulted in transactions of relatively long duration.
for such transactions, there are serious consequences to requiring serializability as the notion of correctness. specifically, such systems either impose long-duration waits or require the abortion of long transactions. in this paper, we define a transaction model that allows for several alternative notions of correctness without the requirement of serializability. after introducing the model, we investigate classes of schedules for transactions. we show that these classes are richer than analogous classes under the classical model. finally, we show the potential practicality of our model by describing protocols that permit a transaction manager to allow correct non-serializable executions. h. k. korth g. speegle a flexible image search engine panrit tosukhowong frederic andres kinji ono nicolas dessaigne jose martinez nouredine mouaddib douglas c. schmidt a model based reasoning approach to network monitoring anoop singhal gary weiss johannes p. ros classification and regression: money *can* grow on trees with over 800 million pages covering most areas of human endeavor, the world-wide web is a fertile ground for data mining research to make a difference to the effectiveness of information search. today, web surfers access the web through two dominant interfaces: clicking on hyperlinks and searching via keyword queries. this process is often tentative and unsatisfactory. better support is needed for expressing one's information need and dealing with a search result in more structured ways than available now. data mining and machine learning have significant roles to play towards this end. in this paper we will survey recent advances in learning and mining problems related to hypertext in general and the web in particular. we will review the continuum of supervised to semi-supervised to unsupervised learning problems, highlight the specific challenges which distinguish data mining in the hypertext domain from data mining in the context of data warehouses, and summarize the key areas of recent and ongoing research. johannes gehrke wie-yin loh raghu ramakrishnan where were we: making and using near-synchronous, pre-narrative video scott l. minneman steven r. harrison a synergistic approach to specifying simple number independent layouts by example scott e. hudson cheng-ning hsi latent semantic indexing is an optimal special case of multidimensional scaling latent semantic indexing (lsi) is a technique for representing documents, queries, and terms as vectors in a multidimensional real-valued space. the representations are approximations to the original term space encoding, and are found using the matrix technique of singular value decomposition. in comparison, multidimensional scaling (mds) is a class of data analysis techniques for representing data points as points in a multidimensional real-valued space. the objects are represented so that inter-point similarities in the space match inter-object similarity information provided by the researcher. we illustrate how the document representations given by lsi are equivalent to the optimal representations found when solving a particular mds problem in which the given inter-object similarity information is provided by the inner product similarities between the documents themselves. we further analyze a more general mds problem in which the interdocument similarity information, although still in inner product form, is arbitrary with respect to the vector space encoding of the documents. brian t. bartell garrison w. cottrell richard k.
belew editorial pointers diane crawford information visualization advanced interface and web design ben shneiderman catherine plaisant mobile: user-centered interface building angel r. puerta eric cheng tunhow ou justin min a relational information resource dictionary system a relational implementation of irds using sql demonstrates how the flexibility of the relational environment enhances the extensibility of the irds while at the same time providing more powerful dictionary capabilities than are typically found in relational systems. daniel r. dolk robert a. kirsch email - the good, the bad, and the ugly hal berghel rapid ethnography: time deepening strategies for hci field research field research methods are useful in many aspects of human-computer interaction research, including gathering user requirements, understanding and developing user models, and new product evaluation and iterative design. due to increasingly short product realization cycles, there has been growing interest in more time-efficient methods, including rapid prototyping methods and various usability inspection techniques. this paper will introduce "rapid ethnography," which is a collection of field methods intended to provide a reasonable understanding of users and their activities given significant time pressures and limited time in the field. the core elements include limiting or constraining the research focus and scope, using key informants, capturing rich field data by using multiple observers and interactive observation techniques, and collaborative qualitative data analysis. a short case study illustrating the important characteristics of rapid ethnography will also be presented. david r. millen issues in design data management (panel session) design data management is an emerging sub-discipline of database systems that is concerned with the management of data quite unlike that found in conventional banking, personnel, or order-entry databases. they contain information describing the design of complicated "engineered" objects, such as ships, buildings, mechanical devices, and integrated circuits. these objects are designed by teams of engineers who need concurrent, yet controlled access to the data. very often the view of an object as seen by one group will be different from that of others (e.g., the plumber's view of a building is very different from the electrician's view). the members of the panel include mr. steven hoffman, calma corporation, mr. douglas gilbert, sperry-univac, and dr. roger haskin, ibm. after a brief presentation by each panelist describing his experience with design data and his assessment of how it should be managed, the panel will discuss the following questions. audience participation is invited, of course. randy h. kat semantics and expressiveness issues in active databases (extended abstract) phillippe picouet victor vianu the formalism of probability theory in ir: a foundation or an encumbrance? william s. cooper a usage based analysis of corr based on an empirical analysis of author usage of corr, and of its predecessor in the los alamos eprint archives, it is shown that corr has not yet been able to match the early growth of the los alamos physics archives. some of the reasons are implicit in halpern's paper, and we explore them further here. in particular, we refer to the need to promote corr more effectively for its intended community: computer scientists in universities, industrial research labs, and in government.
we take up some points of detail on this new world of open archiving concerning central versus distributed self-archiving, publication, the restructuring of journal publishers' review, and copyright. les carr steve hitchcock wendy hall stevan harnad heidi ii: a test-bed for interactive multimedia delivery and communication max ott georg michelitsch john hearn dan reininger vivek bansal pen-based interaction techniques for organizing material on an electronic whiteboard thomas p. moran patrick chiu william van melle information systems research at rwth aachen with about 8,000 researchers and 40,000 students, rwth aachen is the largest technical university in europe. the science and engineering departments and their industrial collaborators offer many challenges for database research. the chair informatik v (information systems) focuses on the theoretical analysis, prototypical development, and practical evaluation of meta information systems. meta information systems, also called repositories, document and coordinate the distributed processes of producing, integrating, operating, and evolving database-intensive applications. our research approaches these problems from a technological and from an application perspective. on the one hand, we pursue theory and system aspects of the integration of deductive and object-oriented technologies. one outcome of this work is a deductive object manager called conceptbase which has been developed over the past eight years and is currently used by many research groups and industrial teams throughout the world. on the other hand, a wide range of application-driven projects aims at building a sound basis of empirical knowledge about the demands on meta information systems, and about the quality of proposed solutions. they address application domains as diverse as requirements engineering, telecommunications, cooperative engineering, organization-wide quality management, evolution of chemical production processes, and medical knowledge management. they share the vision of supporting wide-area distributed cooperation not just by low-level interoperation technology but by exploiting conceptual product and process modeling. under the direction of m. jarke, informatik v comprises three research groups with a total of twenty senior researchers and doctoral students: distributed information systems (leader: dr. manfred jeusfeld), process information systems (dr. klaus pohl), and knowledge-based systems (prof. wolfgang nejdl). database-related activities also exist in the software engineering and applied mathematics groups. matthias jarke topical locality in the web _most web pages are linked to others with related content_. this idea, combined with another that says that _text in, and possibly around, html anchors describe the pages to which they point_, is the foundation for a usable world-wide web. in this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the web. in particular, we find the likelihood of linked pages having similar textual content to be high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and anchor text may be a useful discriminator among unseen child pages.
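a minimal sketch of the kind of page-to-page similarity measurement underlying such a test: compare a page's term vector with the vectors of the pages it links to. the tokenizer and the cosine measure here are assumptions for illustration, not necessarily the study's exact setup.

```python
# compare a source page's text with the text of its linked pages using cosine
# similarity over simple term-frequency vectors.

import math
import re
from collections import Counter

def term_vector(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(v1, v2):
    dot = sum(v1[t] * v2[t] for t in v1.keys() & v2.keys())
    norm = math.sqrt(sum(c * c for c in v1.values())) * \
           math.sqrt(sum(c * c for c in v2.values()))
    return dot / norm if norm else 0.0

def linked_page_similarity(source_text, linked_texts):
    # average similarity between a source page and the pages it links to
    src = term_vector(source_text)
    sims = [cosine(src, term_vector(t)) for t in linked_texts]
    return sum(sims) / len(sims) if sims else 0.0
```

comparing this average against the similarity of randomly paired pages gives a simple way to test whether links carry a topical signal.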
these results show the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents. brian d. davison manual and gaze input cascaded (magic) pointing shumin zhai carlos morimoto steven ihde learning subjective relevance to facilitate information access james r. chen nathalie mathe a web site design model for financial information michael ettredge vernon j. richardson susan scholz multimedia abstractions for a digital video library michael g. christel david b. winkler c. roy taylor what makes web sites credible?: a report on a large quantitative study the credibility of web sites is becoming an increasingly important area to understand. to expand knowledge in this domain, we conducted an online study that investigated how different elements of web sites affect people's perception of credibility. over 1400 people participated in this study, both from the u.s. and europe, evaluating 51 different web site elements. the data showed which elements boost and which elements hurt perceptions of web credibility. through analysis we found these elements fell into one of seven factors. in order of impact, the five types of elements that increased credibility perceptions were "real-world feel", "ease of use", "expertise", "trustworthiness", and "tailoring". the two types of elements that hurt credibility were "commercial implications" and "amateurism". this large-scale study lays the groundwork for further research into the elements that affect web credibility. the results also suggest implications for designing credible web sites. b. j. fogg jonathan marshall othman laraki alex osipovich chris varma nicholas fang jyoti paul akshay rangnekar john shon preeti swani marissa treinen the oodb path-method generator (pmg) using precomputed access relevance ashish mehta james geller yehoshua perl erich neuhold collaborative design with use case scenarios digital libraries, particularly those with a community-based governance structure, are best designed in a collaborative setting. in this paper, we compare our experience using two design methods: a task-centered method that draws upon a group's strength for eliciting and formulating tasks, and a use case method that tends to require a focus on defining an explicit process for tasks. we discuss how these methods did and did not work well in a collaborative setting. lynne davis melissa dawe a framework for object-oriented on-line analytic processing jan w. buzydlowski il-yeol song lewis hassell experience with personalization of yahoo! udi manber ash patel john robison supporting display generation for complex database objects belinda b. flynn david maier shuffle, throw or take it! working efficiently with an interactive wall jorg geibler an insider's view of interface design elizabeth dykstra-erickson path constraints on semistructured and structured data peter buneman wenfei fan scott weinstein noncollaborative telepresentations come of age d. james gemmell c. gordon bell realizing a temporal complex-object data model support for temporal data continues to be a requirement posed by many applications such as vlsi design and cad, but also by conventional applications like banking and sales. furthermore, the strong demand for complex-object support is an inherent fact in design applications, and also emerges for advanced "conventional" applications. thus, new advanced database management systems should include both features, i.e.
should support temporal complex-objects. in this paper, we present such a temporal complex- object data model. the central notion of our temporal complex-object data model is a time slice, representing one state of a complex object. we explain the mapping of time slices onto the complex objects supported by the mad model (which we use for an example of a non-temporal complex-object data model) as well as the transformation process of operations on temporal complex-objects into mad model operations. thereby, the basic properties of the mad model are a prerequisite for our approach. for example, time slices can only be directly stored, if non-disjunct (i.e. over-lapping) complex objects are easily handled in the underlying complex-object data model. wolfgang käfer harald schöning designing and implementing collaborative applications (tutorial session)(abstract only) goals and content: this tutorial will address the design and implementation of collaborative applications. the design space will be described using the dimensions of session management, coupling, user awareness, and undo/redo. we will examine tools for building collaborative applications including shared window systems, toolkits, and object-oriented frameworks. then we will examine the implementation space of collaborative applications using the dimensions of layering, replication, distribution, concurrency, collaboration awareness, and algorithms for supporting consistency. at the end of the tutorial, the audience will be able to understand the motivation for collaborative applications, summarize important parts of the collaboration design and implementation space, and identify and compare collaborative architectures and tools. prasun dewan efficient construction of large test collections gordon v. cormack christopher r. palmer charles l. a. clarke designing a desktop information system: observations and issues thomas erickson gitta salomon the design requirements of office systems giampio bracchi barbara pernici collaborative document production using quilt quilt is a computer-based tool for collaborative document production. it provides annotation, messaging, computer conferencing, and notification facilities to support communication and information sharing among the collaborators on a document. views of a document tailored to individual collaborators or to other of the document's users are provided by quilt based on the user's position in a permission hierarchy that reflects an extensible set of social roles and communication types. this paper illustrates how quilt could be used by collaborators to produce a document. mary d. p. leland robert s. fish robert e. kraut comparing single- and two-handed 3d input for a 3d object assembly task maarten w. gribnau james m. hennessey information delivery systems in the 1990's: the local integrated knowledge network (linknet) detmar w. straub cynthia mathis beath what's happening jennifer bruer defining the ergonomic buzzwords current software and computer systems are being advertised and sold today as friendly systems that are easy to use. most also claim to be easy to learn as well. but what is it that makes a system friendly, easy to use, or easy to learn? is it sufficient to enshrine a group of interactive features in a video box connected to a single user microcomputer? if a system is easy to use does that make it friendly? or does easy to learn mean that it is easy to use? does your friendly, easy-going personal computer salesman differentiate between these terms! 
although the physical aspects of human factors have been studied for several decades and have an established scientific base, there is very little published material on the software aspects of ergonomics that is based upon well-controlled experimental investigation. however, there is a fair amount of literature in which individuals describe their experiences in various ad hoc situations. the similarities among these experiences have resulted in a fair amount of "folklore" which may be beneficially applied. the terms easy to learn, easy to use, and friendly are defined below according to this experiential base. jon a. meads structural distinctions between hypermedia storage and presentation lloyd rutledge lynda hardman jacco van ossenbruggen dick c. a. bulterman collaborative wearable systems research and evaluation (video program)(abstract only) an interdisciplinary research group at carnegie mellon university (cmu) is investigating the design and usefulness of mobile cscw systems for the support of distributed diagnosis, repair, and redesign of large vehicles, such as aircraft and trains. these systems incorporate diagnostic aids, on-line maintenance manuals, schematic drawings, and telecommunications that allow workers to access both stored information and interactive help from remote experts. this videotape illustrates the problem area and some wearable computer prototypes. it describes some of the field work we have done documenting the value of collaboration when workers are diagnosing and repairing complex equipment. our laboratory experiments investigate whether wireless video capabilities are useful. one prototype incorporates both shared computer-based information (an on-line repair manual) and a shared view of the non-computerized work space (a video feed from a head-mounted camera). experiments so far show that communication with a remote expert improves the speed and quality of repairs, but that shared video does not. video does, however, affect how collaborators coordinate their behavior, for example by allowing a pair to be less verbally explicit. the videotape illustrates how a collaborative pair can exploit both shared data sources to communicate more effectively. jane siegel robert e. kraut mark d. miller david j. kaplan malcolm bauer building a distributed full-text index for the web we identify crucial design issues in building a distributed inverted index for a large collection of web pages. we introduce a novel pipelining technique for structuring the core index-building system that substantially reduces the index construction time. we also propose a storage scheme for creating and managing inverted files using an embedded database system. we suggest and compare different strategies for collecting global statistics from distributed inverted indexes. finally, we present performance results from experiments on a testbed distributed web indexing system that we have implemented. people-oriented computer systems, revisited (acm 82 panel session) this panel will look at the importance of the interaction process and the user interface in the design, development and use of computer systems. it is a follow-up to a panel session titled "people-oriented computer systems; when, and how?" which was presented at acm 78 [1]. the panelists will look at questions raised in 1978 and review what has been done in the intervening four years.
and they will look at one of the most important happenings of the past four years --- the widespread introduction of the personal computer into the office, the lab, and the schools, and how it has affected the design of all computer systems, small and large. new computers, new users, new ways of designing computer systems for people? lorraine borman the sound of your stuff: designing a complex auditory display for an interactive museum exhibit maribeth back jonathan cohen finding usability problems through heuristic evaluation usability specialists were better than non-specialists at performing heuristic evaluation, and "double experts" with specific expertise in the kind of interface being evaluated performed even better. major usability problems have a higher probability than minor problems of being found in a heuristic evaluation, but more minor problems are found in absolute numbers. usability heuristics relating to exits and user errors were more difficult to apply than the rest, and additional measures should be taken to find problems relating to these heuristics. usability problems that relate to missing interface elements that ought to be introduced were more difficult to find by heuristic evaluation in interfaces implemented as paper prototypes but were as easy as other problems to find in running systems. jakob nielsen automatic text representation, classification and labeling in european law the huge text archives and retrieval systems of legal information have not yet achieved the representation in the well-known subject-oriented structure of legal commentaries. content-based classification and text analysis remain a high-priority research topic. in the joint konterm, som and labelsom projects, learning techniques of neural networks are used to achieve compression rates of classification and analysis similar to those of manual legal indexing. the produced maps of legal text corpora cluster related documents in units that are described with automatically selected descriptors. extensive tests with text corpora in european case law have shown the feasibility of this approach. classification and labeling proved very helpful for legal research. the _growing hierarchical self-organizing map_ represents very interesting generalities and specialties of legal text corpora. the segmentation into document parts greatly improved the quality of labeling. the next challenge would be a change from _tf_ × _idf_ vector representation to a modified vector representation taking into account thesauri or ontologies considering learned properties of legal text corpora. erich schweighofer andreas rauber michael dittenbach hyperstorm: an extensible object-oriented hypermedia engine ajit bapat jurgen wasch karl aberer jorg m. haake developing hypermedia (panel) david lowe mark bernstein paolo paolini daniel schwabe signature files: an access method for documents and its analytical performance evaluation chris faloutsos stavros christodoulakis enabling technology for users with special needs alan edwards alistair d. n. edwards elizabeth d. mynatt prefer: a system for the efficient execution of multi-parametric ranked queries users often need to optimize the selection of objects by appropriately weighting the importance of multiple object attributes. such optimization problems appear often in operations research and applied mathematics as well as everyday life; e.g., a buyer may select a home as a weighted function of a number of attributes like its distance from the office, its price, its area, etc.
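a minimal sketch of that kind of weighted scoring: each tuple gets a score from a linear weight function and the top few are returned. the attribute names and weights below are invented; the naive full scan is exactly what the system described next avoids by using materialized ranked views.

```python
# score(t) = sum_i w_i * t[attr_i]; larger is better. top-k via a heap.

import heapq

def preference_top_k(tuples, weights, k):
    def score(t):
        return sum(w * t[attr] for attr, w in weights.items())
    return heapq.nlargest(k, tuples, key=score)

homes = [
    {'id': 1, 'distance': -12.0, 'price': -350.0, 'area': 90.0},
    {'id': 2, 'distance': -3.0,  'price': -420.0, 'area': 75.0},
    {'id': 3, 'distance': -8.0,  'price': -300.0, 'area': 110.0},
]
# distance and price are negated so that "smaller is better" fits a maximizing score
weights = {'distance': 0.5, 'price': 0.3, 'area': 0.2}
print(preference_top_k(homes, weights, k=2))
```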
we capture such queries in our definition of preference queries that use a weight function over a relation's attributes to derive a score for each tuple. database systems cannot efficiently produce the top results of a preference query because they need to evaluate the weight function over all tuples of the relation. prefer answers preference queries efficiently by using materialized views that have been pre- processed and stored. we first show how the result of a preference query can be produced in a pipelined fashion using a materialized view. then we show that excellent performance can be delivered given a reasonable number of materialized views and we provide an algorithm that selects a number of views to precompute and materialize given space constraints. we have implemented the algorithms proposed in this paper in a prototype system called prefer, which operates on top of a commercial database management system. we present the results of a performance comparison, comparing our algorithms with prior approaches using synthetic datasets. our results indicate that the proposed algorithms are superior in performance compared to other approaches, both in preprocessing (preparation of materialized views) as well as execution time. vagelis hristidis nick koudas yannis papakonstantinou powerbookmarks: a system for personalizable web information organization, sharing, and management wen-syan li quoc vu edward chang divyakant agrawal kyoji hirata sougata mukherjea yi-leh wu corey bufi chen-chuan kevin chang yoshinori hara reiko ito yutaka kimura kezuyuki shimazu yukiyoshi saito user embodiment in collaborative virtual environments steve benford john bowers lennart e. fahlen chris greenhalgh dave snowdon automatic hypermedia generation (abstract) jean-louis vuldy managing a trois: a study of a multi-user drawing tool in distributed design work scott l. minneman sara a. bly quality of service transferred to information retrieval: the adaptive information retrieval system users often quit an information retrieval system (ir system) very frustrated, because they cannot find the information matching their information needs. we have identified the following two main reasons: too high expectations and wrong use of the system. our approach which addresses both issues is based on the transfer of the concept of quality of service to ir systems: the user first negotiates the retrieval success rates with the ir system, so he knows what to expect from the system in advance. second, by dynamic adaptation to the retrieval context, the ir system tries to improve the user's queries and thereby tries to exploit the underlying information source as best as possible. claudia rolker ralf kramer stepwise specification of dynamic database behaviour this paper presents a methodology for the stepwise specification of dynamic database behaviour. a conceptual schema is described in three levels: data, objects and transactions. to determine which sequences of database states are "admissible", integrity constraints on objects are given in temporal logic. transactions are specified by pre/postconditions to produce "executable" state sequences. in order to guarantee that executable state sequences already become admissible, integrity constraints are completely transformed into additional pre/postconditions. we introduce general rules for these transformations. thus, schema specifications can be refined and simplified systematically. udo w. 
lipeck sims: retrieving and integrating information from multiple sources yigal arens craig knoblock collaborative suites for experiment-oriented scientific research anne schur kelly a. keating deborah a. payne tom valdez kenneth r. yates james d. myers a graphical model of procedures for an automated manager's assistant a model of office procedures based upon directed graphs is introduced. this model is being used as a basis for work on an office assistant which integrates office procedures performed by individuals and by the assistant. it also provides a framework for an interface to an on-line database system. this paper introduces the basic features of the model, including a set of primitive actions which determine the semantics for the office procedures within the model. michael bauer sylvia osborn hci theory on trial alistair sutcliffe john carroll richard young john long information design methods and the applications of virtual worlds technology at worldesign, inc. information design is a new professional practice that systematically applies the lessons of human-computer interaction and human factors studies, communication theory, and information science to the presentation of complex data. worldesign, inc., is an information design firm with an emphasis on virtual worlds technology in the service of its corporate, mostly industrial customers. robert jacobson a functional model for macro-databases recently there have been numerous proposals aimed at correcting the deficiency in existing database models to manipulate macro data (such as summary tables). the authors propose a new functional model, mefisto, based on the definition of a new data structure, the "statistical entity", and on a set of operations capable of manipulating this data structure by operating at the metadata level. m. rafanelli f. l. ricci multi-level recovery multi-level transactions have received considerable attention as a framework for high-performance concurrency control methods. an inherent property of multi-level transactions is the need for compensating actions, since state-based recovery methods no longer work correctly for transaction undo. the resulting requirement of operation logging adds to the complexity of crash recovery. in addition, multi-level recovery algorithms have to take into account that high-level actions are not necessarily atomic, e.g., if multiple pages are updated in a single action. in this paper, we present a recovery algorithm for multi-level transactions. unlike typical commercial database systems, we have striven for simplicity rather than employing special tricks. it is important to note, though, that simplicity is not achieved at the expense of performance. we show how a high-performance multi-level recovery algorithm can be systematically developed based on a few fundamental principles. the presented algorithm has been implemented in the dasdbs database kernel system. gerhard weikum christof hasse peter broessler peter muth efficient concurrency control in multidimensional access methods the importance of multidimensional index structures to numerous emerging database applications is well established. however, before these index structures can be supported as access methods (ams) in a "commercial-strength" database management system (dbms), efficient techniques to provide transactional access to data via the index structure must be developed.
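a toy sketch of the range-protection issue at stake (discussed as the phantom problem just below): a range scan registers the interval it has read, and a later insert into that interval is detected as a conflict. this is a single-threaded illustration with invented names, not the granular-locking protocol the paper develops for gists.

```python
# single-threaded illustration: detect inserts that would be "phantoms" for a
# previously scanned range.

class RangeLockTable:
    def __init__(self):
        self.read_ranges = []                  # list of (txn, low, high)

    def lock_range(self, txn, low, high):
        self.read_ranges.append((txn, low, high))

    def insert_conflicts(self, txn, key):
        # an insert conflicts if another transaction has scanned a range
        # containing the new key
        return [t for t, lo, hi in self.read_ranges
                if t != txn and lo <= key <= hi]

locks = RangeLockTable()
locks.lock_range('t1', low=10, high=20)        # t1 scanned keys in [10, 20]
print(locks.insert_conflicts('t2', key=15))    # ['t1']: t2's insert would be a phantom
print(locks.insert_conflicts('t2', key=42))    # []: outside any scanned range
```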
concurrent accesses to data via index structures introduce the problem of protecting ranges specified in the retrieval from phantom insertions and deletions (the phantom problem). this paper presents a dynamic granular locking approach to phantom protection in generalized search trees(gists), an index structure supporting an extensible set of queries and data types. the granular locking technique offers a high degree of concurrency and has a low lock overhead. our experiments show that the granular locking technique (1) scales well under various system loads and (2) similar to the b-tree case, provides a significantly more efficient implementation compared to predicate locking for multidimensional ams as well. since a wide variety of multidimensional index structures can be implemented using gist, the developed algorithms provide a general solution to concurrency control in multidimensional ams. to the best of our knowledge, this paper provides the first such solution based on granular locking. kaushik chakrabarti sharad mehrotra serializability with constraints this paper deals with the serializability theory for single-version and multiversion database systems. we first introduce the concept of disjoint- interval topological sort (dits, for short) of an arc- labeled directed acyclic graph. it is shown that a history is serializable if and only if its transaction io graph has a dits. we then define several subclasses of serializable histories, based on the constraints imposed by write-write, write-read, read-write, or read-read conflicts, and investigate inclusion relationships among them. in terms of dits, we give a sufficient condition for a class of serializable histories to be polynomially recognizable, which is then used to show that a new class of histories, named wrw, can be recognized in polynomial time. we also present np-completeness results for the problem of testing membership in some other classes. in the second half of this paper, we extend these results to multiversion database systems. the inclusion relationships among multiversion classes defined by constraints, such as write-write and write- read, are investigated. one such class coincides with class dmvsr, introduced by papadimitriou and kanellakis, and gives a simple characterization of this class. it is shown that for most constraints, multiversion classes properly contain the corresponding single-version classes. complexity results for the membership testing are also discussed. toshihide ibaraki tiko kameda toshimi minoura issues and approaches for migration/cohabitation between legacy and new systems corporate subject data bases (csdb) are being introduced to reduce data redundancy, maintain the integrity of the data, provide a uniform data access interface, and have data readily available to make business decisions. during the transition phase, there is a need to maintain legacy systems (ls), csdb, and to synchronize between them. choosing the right granularity for migration of data and functionality is essential to the success of the migration strategy. technologies being used to support the transition to csdb include relational systems supporting stored procedures, remote procedures, expert systems, object-oriented approach, reengineering tools, and data transition tools. for our customer csdb to be deployed in 1993, cleanup of data occurs during initial load of the csdb. nightly updates are needed during the transition phase to account for operations executed through ls. 
there is a lack of an integrated set of tools to help in the transition phase. rodolphe nassif don mitchusson a framework for supporting data integration using the materialized and virtual approaches richard hull gang zhou integrating relational databases and internet mail rob cutlip a user-centered design approach to personalization joseph kramer sunil noronha john vergo user interfaces for young and old maddy d. brouwer-janse jane fulton suri mitchell yawitz govert de vries james l. fozard roger coleman science watch by knowledge discovery in bibliographic databases t. dkaki office automation and the management of change a period of change in an organization puts our management talents and skills to a true test. when the change is the introduction of new technology or equipment into an existing operation, we need to pay special attention to the way in which we manage the introduction, implementation, training, and feedback of the new system, to ensure that the staff is motivated and informed, positive and cooperative. this paper describes the introduction of information processing systems into the operation of an organization and the special management skills that were brought to bear in introducing and implementing the new technology successfully. of all the things we at xerox computer services learned during the implementation of information processing systems throughout the company, three words stand out: effective management, and communications. these are the critical elements in the introduction of new systems and procedures, and, if used properly, they can lead to a slow, smooth, well-thought-out, and exciting new operation in technology. ann w. luke electronic mail previews using non-speech audio scott e. hudson ian smith enabling technologies for the future of voice-based web access steve woods an automated office communications study in an operational setting in recent years there has been a proliferation of electronic office products developed under the umbrella label of "office of the future". the justification for office automation has been the promise of increased office productivity and significant cost-benefits to the user. in most cases these systems have been driven by the rapid evolution of technological advances in the computer industry. as a result, these technologically leading-edge solutions often must then search for a set of office problems to solve. the present study evolved out of the identification of one such set of office problems. office communications have increasingly been identified as a major source of inefficiency and frustration. the purpose of the present study was to investigate ways of solving the variety of problems associated with the use of the telephone. the telephone has been labelled as probably the most intrusive device on work flow in the office (bair, farber, & uhlig, 1980). typically office principals, particularly those without a personal secretary, don't have any means of screening themselves from telephone interruptions. unsuccessful attempts to call someone are another source of inefficiency. randall r. harris touch-typing with a stylus david goldberg cate richardson database in crisis and transition: a technical agenda for the year 2001 the current paper outlines a number of important changes that face the database community and presents an agenda for how some of these challenges can be met. this database agenda is currently being addressed in the enterprise group at microsoft corporation.
the paper concludes with a scenario for 2001 which reflects the microsoft vision of "information at your fingertips." david vaskevitch multimedia systems: introduction erika dawn gernand social navigation: techniques for building more usable systems a. dieberger p. dourish k. höök p. resnick a. wexelblat cuevideo dulce ponceleon arnon amir savitha srinivasan tanveer syeda-mahmood dragutin petkovic an information-theoretic approach to automatic query expansion techniques for automatic query expansion from top retrieved documents have shown promise for improving retrieval effectiveness on large collections; however, they often rely on empirical grounds, and there is a shortage of cross-system comparisons. using ideas from information theory, we present a computationally simple and theoretically justified method for assigning scores to candidate expansion terms. such scores are used to select and weight expansion terms within rocchio's framework for query reweighting. we compare ranking with information-theoretic query expansion versus ranking with other query expansion techniques, showing that the former achieves better retrieval effectiveness on several performance measures. we also discuss the effect on retrieval effectiveness of the main parameters involved in automatic query expansion, such as data sparseness, query difficulty, number of selected documents, and number of selected terms, pointing out interesting relationships. claudio carpineto renato de mori giovanni romano brigitte bigi knowledge-based systems for idea processing lawrence f. young developing hypermedia applications with methods and patterns gustavo rossi fernando daniel lyardet daniel schwabe eddies: continuously adaptive query processing in large federated and shared-nothing databases, resources can exhibit widely fluctuating characteristics. assumptions made at the time a query is submitted will rarely hold throughout the duration of query processing. as a result, traditional static query optimization and execution techniques are ineffective in these environments. in this paper we introduce a query processing mechanism called an _eddy_, which continuously reorders operators in a query plan as it runs. we characterize the _moments of symmetry_ during which pipelined joins can be easily reordered, and the _synchronization barriers_ that require inputs from different sources to be coordinated. by combining eddies with appropriate join algorithms, we merge the optimization and execution phases of query processing, allowing each tuple to have a flexible ordering of the query operators. this flexibility is controlled by a combination of fluid dynamics and a simple learning algorithm. our initial implementation demonstrates promising results, with eddies performing nearly as well as a static optimizer/executor in static scenarios, and providing dramatic improvements in dynamic execution environments. ron avnur joseph m. hellerstein interacting in chaos dan r. olsen summary of database research activities at the university of massachusetts, amherst at the university of massachusetts, we have been conducting research in the following database-related areas: theoretical support for database system development, database programming languages, flexible concurrency control and transaction management, real-time databases, and information retrieval. the following is a summary of our research in each area. k. ramamritham e. moss j. a. stankovic d. stemple b. croft d.
towsley a deductive and object-oriented database system: why and how? this talk will outline the principles, the architecture and the potential target applications of a deductive and object-oriented database system (dood). such systems combine the novel functionalities (relying on the associated technology) developed in deductive database projects, the ability to manipulate the complex objects appearing in many applications and the architectural advances achieved by object-oriented dbms's. laurent vieille supporting informal communication via ephemeral interest groups laurence brothers jim hollan jakob neilsen scott stornetta steve abney george furnas michael littman using the baby-babble-blanket for infants with motor problems: an empirical study children with motor problems often develop to be passive, presumably because of an inability to communicate and to control the environment. the baby-babble-blanket (bbb), a pad with pressure switches linked to a macintosh computer, was developed to meet this need. lying on the pad, infants use head-rolling, leg-lifting and kicking to produce digitized sound. data is collected by the bbb software on the infant's switch activations. an empirical study was carried out on a five-month-old infant with club feet, hydrocephaly and poor muscle tone to determine what movements the infant could use to access the pad, whether movements would increase over a baseline in response to sound, and what level of cause and effect the infant would demonstrate. videotapes and switch activation data suggest that the infant: 1) could activate the device by rolling his head and raising his legs. 2) increased switch activations, over a no-sound baseline, in response to the sound of his mother's voice. 3) was able to change from using his head to raising his legs in response to the reinforcer. h. j. fell h. delta r. peterson l. j. ferrier z. mooraj m. valleau efficient transaction support for dynamic information retrieval systems mohan kamath krithi ramamritham an extensible query model and its languages for a uniform behavioral object management system randal j. peters anna lipka m. tamer özsu duane szafron storhouse metanoia - new applications for database, storage & data warehousing this paper describes the storhouse/relational manager (rm) database system that uses and exploits an _active storage hierarchy_. by active storage hierarchy, we mean that storhouse/rm executes sql queries _directly_ against data stored on all hierarchical storage (i.e. disk, optical, and tape) without post-processing a file or a dba having to manage a data set. we describe and analyze storhouse/rm features and internals. we also describe how storhouse/rm differs from traditional hsm (hierarchical storage management) systems. for commercial applications we describe an evolution to the data warehouse concept, called _atomic data store_, whereby atomic data is stored in the database system. atomic data is defined as storing _all_ the historic data values and executing queries against them. we also describe a _hub-and-spoke data warehouse architecture_, which is used to feed or fuel data into data marts. furthermore, we provide an analysis of how storhouse/rm can be _federated_ with db2, oracle and microsoft sql server 7 (ss7) and thus provide these databases an active storage hierarchy (i.e. tape).
we then show two federated data modeling techniques (a) logical horizontal partitioning (lhp) of tuples and (b) logical vertical partitioning (lvp) of columns to demonstrate our database extension capabilities. we conclude with a tpc-like performance analysis of data stored on tape and disk. felipe cariño pekka kostamaa art kaufmann john burgess ariel: augmenting paper engineering drawings w. e. mackay d. s. pagani l. faber b. inwood p. launiainen l. brenta v. pouzol video anywhere: a system for searching and managing distributed heterogeneous video assets visual information, especially videos, plays an increasing role in our society for both work and entertainment as more sources become available to the user. set-top boxes are poised to give home users access to videos that come not only from tv channels and personal recordings, but also from the internet in the form of downloaded and streaming videos of various types. current approaches such as electronic program guides and video search engines search for video assets of one type or from one source. the capability to conveniently search through many types of video assets from a large number of video sources with easy-to-use user profiles cannot be found anywhere yet. videoanywhere has developed such a capability in the form of an extensible architecture as well as a specific implementation using the latest in internet programming (java, agents, xml, etc.) and applicable standards. it automatically extracts and manages an extensible set of metadata of major types of videos that can be queried using either attribute-based or keyword- based search. it also provides user profiling that can be combined with the query processing for filtering. a user-friendly interface provides management of all system functions and capabilities. videoanywhere can also be used as a video search engine for the web, and a servlet-based version has also been implemented. amit sheth clemens bertram kshitij shah an adaptive concurrency control algorithm (abstract) correctness of concurrent executions of multiple database transactions is assured by concurrency control techniques. performance analyses, e.g., [agr85], show that optimistic concurrency control techniques perform more efficiently than pessimistic techniques when there is low contention for the data and low resource utilization. otherwise pessimistic techniques perform better. because the designer of a database system can rarely predict data contention and resource utilization, it is desirable that a database system adaptively select a concurrency control technique at runtime. we present an algorithm for partitioning the collection of active database transactions into a set of clusters. the partitioning algorithm guarantees that there are no data conflicts between transactions to distinct clusters. different concurrency control techniques can be applied in distinct clusters. our algorithm uses the number of transactions in a cluster as a measure of data contention. transactions in newly formed clusters are handled by an optimistic scheduler. when the number of transactions in a cluster exceeds a certain threshold, the cluster becomes pessimistic. such a cluster's current transactions complete, and then newly entering transactions can begin, managed by a pessimistic scheduler. we have designed data structures to support the efficient maintenance of clusters. we have also begun to develop a theory of correct cluster configurations. 
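a sketch of the conflict-based clustering just described: transactions that share any data item are merged into one cluster (union-find), and cluster size then drives the optimistic/pessimistic choice. the input format and the threshold are assumptions for illustration, not the paper's data structures.

```python
# partition active transactions into clusters with no cross-cluster data conflicts.

def cluster_transactions(access_sets, threshold=5):
    parent = {t: t for t in access_sets}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]      # path halving
            t = parent[t]
        return t

    def union(a, b):
        parent[find(a)] = find(b)

    owner = {}                                  # data item -> some txn touching it
    for txn, items in access_sets.items():
        for item in items:
            if item in owner:
                union(txn, owner[item])         # shared item => same cluster
            else:
                owner[item] = txn

    clusters = {}
    for txn in access_sets:
        clusters.setdefault(find(txn), []).append(txn)

    # small clusters run under an optimistic scheduler, large ones pessimistic
    return {root: ('pessimistic' if len(members) > threshold else 'optimistic',
                   members)
            for root, members in clusters.items()}

print(cluster_transactions({'t1': {'x'}, 't2': {'x', 'y'}, 't3': {'z'}}, threshold=1))
```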
we are undertaking a simulation study to compare the performances of pessimistic techniques, optimistic techniques, and our adaptive approach. james canning p. muthuvelraj john sieg information retrieval and comprehension in humans and computers david h. jonassen context-sensitive vocabulary mapping with a spreading activation network jonghoon lee david dubin visual decision-making: using treemaps for the analytic hierarchy process toshiyuki asahi david turo ben shneiderman a cooperative mission development environment for crossplatform integration mary ann robert christopher rouff christian burkhardt defining the requirements for hci design methods: an ifip 13.2 workshop in association with interchi '93, amsterdam 23 - 24 april 1993 alistair sutcliffe combining multiple evidence from different properties of weighting schemes joon ho lee ldap directory services- just another database application? (tutorial session) the key driving force behind general-purpose enterprise directory services is for providing a central repository for commonly and widely used information such as users, groups, network service access information and profiles, security information, etc. acceptance of the lightweight directory access protocol (ldap) as an access protocol has facilitated widespread integration of these directory services into the network infrastructure and applications. both directory and relational databases are data repositories sharing the characteristic that they have mechanisms for dealing with schema and structure of information and are suitable for systematically organized data. this tutorial describes characteristics of directories such as schema information, query language and support, storage mechanisms required, typical requirements imposed by applications, etc. we then explain the differences between a directory and relational database, and show how the two are required to co- exist in a typical enterprise. an essential characteristic assumed for information stored in directories is that it is relatively static and that the queries are mostly read only. we describe typical directory applications to validate this assumption and project the requirements imposed on them as these applications evolve. we then describe areas of overlap between traditional databases and directories, describe some database and directory integration solutions adopted in the market, and identify areas in which directory deployment can benefit from the experience gathered by the database community. shridhar shukla anand deshpande simple rational guidance for chopping up transactions chopping transactions into pieces is good for performance but may lead to non- serializable executions. many researchers have reacted to this fact by either inventing new concurrency control mechanisms, weakening serializability, or both. we adopt a different approach. we assume a user who • has only the degree 2 and degree 3 consistency options offered by the vast majority of conventional database systems; and •knows the set of transactions that may run during a certain interval (users are likely to have such knowledge for online or real-time transactional applications). given this information, our algorithm finds the finest partitioning of a set of transactions transet with the following property; if the partitioned transactions execute serializably, then transet executes serializably. this permits users to obtain more concurrency while preserving correctness. 
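a simplified illustration of checking one transaction's chopping: the pieces of the chopped transaction and the other (unchopped) transactions are nodes, conflict edges join nodes that access a common item, and the chopping is rejected if two pieces of the chopped transaction become connected through conflicts. this is a sketch of the underlying idea, not necessarily the exact published algorithm, and the input format is invented; for the sketch, any shared item counts as a conflict.

```python
from itertools import combinations

def chopping_unsafe(pieces, others):
    # pieces: {piece_name: items the piece reads or writes} for the chopped transaction
    # others: {txn_name: items accessed by other, unchopped transactions}
    nodes = {**pieces, **others}
    adj = {n: set() for n in nodes}
    for a, b in combinations(nodes, 2):
        if a in pieces and b in pieces:
            continue                            # sibling pieces are not conflict edges
        if nodes[a] & nodes[b]:                 # shared item => conflict edge (simplified)
            adj[a].add(b)
            adj[b].add(a)

    def component(start):
        seen, stack = {start}, [start]
        while stack:
            for nxt in adj[stack.pop()] - seen:
                seen.add(nxt)
                stack.append(nxt)
        return seen

    # unsafe if any two pieces of the chopped transaction are conflict-connected
    return any(q in component(p) for p, q in combinations(pieces, 2))

# t1 is chopped into p1 and p2; t2 touches items of both pieces, so the
# chopping is unsafe (t2 could observe or interleave between the pieces)
print(chopping_unsafe({'p1': {'x'}, 'p2': {'y'}}, {'t2': {'x', 'y'}}))   # True
print(chopping_unsafe({'p1': {'x'}, 'p2': {'y'}}, {'t2': {'x'}}))        # False
```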
besides obtaining more inter-transaction concurrency, chopping transactions in this way can enhance intra-transaction parallelism. the algorithm is inexpensive, running in o(n x (e + m)) time using a naive implementation, where n is the number of transactions, e is the number of edges in the conflict graph among the transactions, and m is the maximum number of accesses of any transaction. this makes it feasible to add as a tuning knob to practical systems. dennis shasha eric simon patrick valduriez abstracting home video automatically rainer lienhart building temporal structures in a layered multimedia data model the layered multimedia data model (lmdm) aids in the specification of multimedia compositions by dividing the problem into smaller, more manageable pieces. in this paper we describe the lower two layers of the lmdm, the data definition layer, which allows the specification of multimedia objects in a database, and the data manipulation layer, which allows the specification of temporal structures built from those objects. several examples demonstrate the advantages of the layered paradigm: simple specifications, and modular, reusable components. g. schloss m. wynblatt microcomputer user interface toolkits (panel session): the commercial state-of-the-art a well-designed user interface is a very valuable asset: the best available today are based on hundreds of man-years of work combining results of research in human factors, tasteful design reviewed and modified through extensive end-user testing, and many rounds of implementation effort. as a result, the user interface "toolkit" is emerging as the hottest new software item. a toolkit can provide software developers with a programming environment in which the user interface coding is already done so that new applications programs can automatically be integrated with other workstation functions. the panel will evaluate this new trend. tesler and macgregor will present the designs of the leading toolkit products from apple and microsoft, respectively. reed will analyze the choices from the point of view of the third-party software vendors' requirements. noting that the effort going into these products may well result in de facto standard setting, buxton will question the appropriateness of making this commitment based on microcomputer hardware. irene greif william a. s. buxton david r. reed larry tesler scott macgregor digital library collaborations in a world community digital libraries and their user communities are increasingly international in nature. however - though technological progress and global education have brought american and european communities closer - cross-cultural and other crosscutting issues impede the formation of world community on larger scales. the pertinent issues include: collaboration in the presence of language and cultural barriers, international copyrights, international revenue streams, and universal access. this panel will examine notions of "community" from a variety of theoretical and practical perspectives, and discuss lessons that can be gleaned from applications of the community concept. topics are expected to include scalability, sustainability, regenerative cycles in healthy communities, and examples of digital-library efforts that have international potential or implications. david fulker sharon dawes leonid kalinichenko tamara sumner constantino thanos alex ushakov using goms for user interface design and evaluation: which technique?
since the seminal book, the psychology of human-computer interaction, the goms model has been one of the few widely known theoretical concepts in human-computer interaction. this concept has spawned much research to verify and extend the original work and has been used in real-world design and evaluation situations. this article synthesizes the previous work on goms to provide an integrated view of goms models and how they can be used in design. we briefly describe the major variants of goms that have matured sufficiently to be used in actual design. we then provide guidance to practitioners about which goms variant to use for different design situations. finally, we present examples of the application of goms to practical design problems and then summarize the lessons learned. bonnie e. john david e. kieras equi-depth multidimensional histograms m. muralikrishna david j. dewitt digital libraries: a brief introduction william l. anderson greenstone: a comprehensive open-source digital library software system this paper describes the greenstone digital library software, a comprehensive, open-source system for the construction and presentation of information collections. collections built with greenstone offer effective full-text searching and metadata-based browsing facilities that are attractive and easy to use. moreover, they are easily maintainable and can be augmented and rebuilt entirely automatically. the system is extensible: software "plugins" accommodate different document and metadata types. ian h. witten stefan j. boddie david bainbridge rodger j. mcnab interaction design for shared world-wide web annotations martin röscheisen christian mogensen towards a standard for sql-based spatial query languages paolino di felice eliseo clementine a cautionary tale robert r. korfhage jing-jye yang rich interaction in the digital library effective information access involves rich interactions between users and information residing in diverse locations. users seek and retrieve information from the sources---for example, file servers, databases, and digital libraries---and use various tools to browse, manipulate, reuse, and generally process the information. we have developed a number of techniques that support various aspects of the process of user/information interaction. these techniques can be considered attempts to increase the bandwidth and quality of the interactions between users and information in an information workspace---an environment designed to support information work (see figure 1). ramana rao jan o. pedersen marti a. hearst jock d. mackinlay stuart k. card larry masinter per-kristian halvorsen george c. robertson end-user training and learning deborah compeau lorne olfman maung sei jane webster information mining platforms: an infrastructure for kdd rapid deployment corinna cortes daryl pregibon database research at the ibm almaden research center laura m. haas patricia g. selinger exploiting concurrency in a dbms implementation for production systems an important aspect of integration of ai and dbms technology is identifying functional similarities in database processing and reasoning with rules. in this paper, we focus on applying concurrency techniques to rule-based production systems. we tailor dbms concurrent execution to a production system environment and investigate the resulting concurrent execution strategies for productions. this research is in conjunction with a novel dbms mechanism for testing if the left-hand-side conditions of productions are satisfied.
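a tiny illustration of what set-oriented condition testing over relations can look like: working memory is held as relations, and a production's left-hand side is evaluated with selections and a join rather than tuple-at-a-time matching. the relations and the rule below are invented; the paper's actual mechanism, described next, uses its own specialized data structure.

```python
# working memory as two relations (lists of dicts), matched set-at-a-time.

orders = [
    {'order_id': 1, 'customer': 'acme', 'status': 'open'},
    {'order_id': 2, 'customer': 'zenith', 'status': 'open'},
]
customers = [
    {'customer': 'acme', 'credit': 'poor'},
    {'customer': 'zenith', 'credit': 'good'},
]

def select(rel, pred):
    return [t for t in rel if pred(t)]

def join(r1, r2, attr):
    return [{**a, **b} for a in r1 for b in r2 if a[attr] == b[attr]]

# lhs of a rule "flag open orders from customers with poor credit":
# all satisfying combinations are produced in one set-oriented pass
matches = join(select(orders, lambda t: t['status'] == 'open'),
               select(customers, lambda t: t['credit'] == 'poor'),
               'customer')
print(matches)   # [{'order_id': 1, 'customer': 'acme', 'status': 'open', 'credit': 'poor'}]
```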
this set-oriented mechanism uses a special data structure implemented using relational tables. to demonstrate the equivalence of a serial and a concurrent (interleaved) execution strategy, for a set of productions, we assume a locking scheme to enforce serializability and specify what locks must be obtained. we define a logical commit point for a production. after this point the execution of a production is independent of all other (executing) productions. we compare the number of possible serial and parallel execution schedules. l. raschid t. sellis c. c. lin the pointcast network (abstract) pointcast inc., the inventor and leader in broadcast news via the internet and corporate intranets, was founded in 1992 to deliver news as it happens from leading sources such as cnn, the new york times, wall street journal interactive edition and more, directly to a viewer's computer screen. the pointcast network is an integrated client/server system. the system gives users control over the kinds of information the client retrieves and, within limits, the frequency of those retrievals. the system is divided into client and server segments, referred to as "pointcast client" and "the datacenter" respectively. pointcast client is a program that runs on the user's internet-connected computer, and performs a number of functions in addition to retrieving information from the datacenter. the server side of the system, known as the pointcast datacenter, supports the client by providing compelling content in a timely fashion. the data center is composed of multiple sites geographically distributed all over the us, each carrying a number of industrial-strength web servers called "pointservers". the pointservers are highly customized and "infinitely" scalable to serve the close to 200 million requests handled by the pointcast network in a day. the pointcast network receives content from over 100 different sources via satellite links or over the internet. a cluster of servers in the data center run customized processes round the clock which assimilate data from various sources to index, format and store it in multiple content databases. this presentation will describe the basic plumbing of the pointcast network and how some of the challenges of establishing one of the busiest data centers in the world were addressed and implemented. it will focus on the following issues: fault tolerance, load balancing, achieving scalability through pre-caching on servers, packaging information to optimize internet bandwidth usage, and minimizing data latency. satish ramakrishnan vibha dayal persuasive computers: perspectives and research directions bj fogg a design space for multimodal systems: concurrent processing and data fusion laurence nigay joëlle coutaz hypermedia conversation recording to preserve informal artifacts in realtime collaboration we present a hypermedia-based conversation recording method to preserve informal artifacts which are obtained in a realtime collaboration. informal artifacts are not final goals but important results such as a process of a decision, an individual opinion, a rejected idea, or a nuance. conversation in a realtime collaborative session is recorded and linked with a participant's handwritten scripts so that the relevant portion of the conversation can be selectively played back in the subsequent individual session. a participant can preserve informal artifacts and structure the decision-making process by playing back the conversation after the collaboration.
the important feature is that a participant can insert meaningful cues in any portion of the conversation without disturbing the current way of collaboration. through the use of a prototype, we have confirmed the effectiveness of this method and identified areas for further improvement. t. imai k. yamaguchi t. muranaga industry briefs: informaat nico ten hoor effective user interfaces: some common sense guidelines edward j. see douglas c. woestendiek i-land: an interactive landscape for creativity and innovation norbert a. streitz jorg geibler torsten holmer shin'ichi konomi christian muller-tomfelde wolfgang reischl petra rexroth peter seitz ralf steinmetz state of the art issues in distributed databases (panel session): form-flow application development the development of inexpensive, powerful microcomputers with bit-mapped graphics displays and local area networks makes it possible to provide each person in an office with a personal computer which can communicate with other computers and shared resources (e.g. printers and data). these personal computers can be used to automate many of the office tasks which currently use paper forms, telephone communication, and filing cabinets and thereby improve office worker productivity. unfortunately, existing programming languages and application development tools are not well-matched to the full-screen interactive user interfaces and the multiuser form-flow applications that these office systems require. as a result, application programs to automate offices are complex, unreliable, expensive, and time-consuming to produce. examples of these systems are keeping track of papers submitted to a journal, bug reports sent to a software organization, and purchase orders sent to a company. we have developed an interactive, form-flow application development system (fads) which will make it easier to build these applications. the system has built-in forms (e.g. data entry and report forms), tools for combining many forms into an application (e.g. using data entered into a data entry form as arguments to a database query that produces data for a report form), facilities to send data (i.e. forms) to another user, and triggering mechanisms to allow a user's attention to be directed to important information (e.g. that a bug fix is overdue). moreover, applications are developed interactively using the same form-based user interface as the applications being created. l. rowe alternatives for on-line help systems this paper reviews some existing possibilities for help systems and proposes a series of steps for improving the interactive interface to users. we consider a hypothetical environment of a predominantly time-sharing facility providing services to multiple campuses over a large geographical area with a small staff for training and user services. t. p. kehler m. barnes the dragmag image magnifier colin ware marlon lewis an experimental evaluation of transparent menu usage beverly l. harrison kim j. vicente an incremental approach to schema integration by refining extensional relationships ingo schmitt can turker user centered design: quality or quackery? john karat michael e. atwood susan m. dray martin rantzer dennis r. wixon an rtp-based synchronized hypermedia live lecture system for distance education in this article, we have introduced a "live synchronized hypermedia live lecture (shll) system" using rtp to synchronize the live presentation of streaming video lecture, html-based lecture notes, and html page navigation events.
the shll framework consists of three major modules: (1) the shll recorder, for recording the temporal information of the av lecture and the html-based lecture notes navigation processes; (2) the shll event server, for receiving, depositing, and multicasting shll events; and (3) the shll browser, for presentation of the synchronized av lecture and html-based lecture notes navigation. to manage the synchronized presentation of different media, we have proposed an rtp-based multi-sync synchronization model, which accounts for human perception factors. to evaluate the performance of the proposed shll framework and synchronization model, a realsystem-based prototype synchronized html-av distance lecture system has been implemented using java/javascript and c. the prototype system demonstrates the feasibility of the proposed framework for synchronized hypermedia live multicasting. herng-yow chen yen-tsung chia gin-yi chen jen-shin hong providing explicit support for social constraints: in search of the social computer ben anderson 3-d displays for real-time monitoring of air traffic dick steinberg charles deplachett kacheshwar pathak dennis strickland common elements in today's graphical user interfaces (panel): the good, the bad, and the ugly a. brady farrand marc rochkind jean-marie chauvet bruce “tog” tognazzini david c. smith multimedia developers can learn from the history of human communication robert s. tannenbaum a system for adding content-based searching to a traditional music library catalogue server most online music library catalogues can only be searched by textual metadata. whilst highly effective - since the rules for maintaining consistency have been refined over many years - this does not allow searching by musical content. many music librarians are familiar with users humming their enquiries. most systems providing a query by humming interface tend to run independently of music library catalogue systems and do not offer similar textual metadata searching. this demonstration shows how we can integrate these two types of system based on work conducted as part of the nsf/jisc funded omras project (http://www.omras.org). matthew j. dovey signature files: design and performance comparison of some signature extraction methods chris faloutsos the design of the triton nested relational database system unique database requirements of applications such as computer-aided design (cad), computer-aided software engineering (case), and office information systems (ois) have driven the development of new data models and database systems based on these new models. in particular, the goal of these new database systems is to exploit the advantages of complex data models that are more efficient (in terms of time and space) than their relational counterparts. in this paper, we describe the design and implementation of the triton nested relational database system, a prototype system based on the nested relational data model. triton is intended to be used as the backend storage and access component of the aforementioned applications. this paper describes the architecture of the triton system, and compares the performance of the nested relational model versus the relational model using triton. in addition, this paper evaluates the exodus extensible database toolkit used in the development of the triton system, including key features of the persistent programming language e and the exodus storage manager. tina m. harvey craig w. schnepf mark a.
roth design principles for data-intensive web sites stefano ceri piero fraternali stefano paraboschi putting it together: the glue that binds the net win treese evaluating database selection techniques: a testbed and experiment james c. french allison l. powell charles l. viles travis emmitt kevin j. prey privacy, anonymity and interpersonal competition issues identified during participatory design of project management groupware! michael j. muller john g. smith j. zachary shoher harry goldberg retrieving and visualizing video boon-lock yeo minerva m. yeung cooperative work environment using virtual workspace haruo takemura fumio kishino future (hyper)spaces kathryn cramer sam epstein cathy marshall tom meyer mark pesce hypersam: a management tool for large user interface guideline sets renato iannella typographic design for interfaces of information systems principles of information-oriented graphic design have been utilized in redesigning the interface for a large information management system. these principles are explained and examples of typical screen formats are shown to indicate the nature of improvements. aaron marcus the term retrieval abstract machine scans through large collections of complex objects often cannot be avoided. even if sohphisticated indexing mechanisms are provided, it may be necessary to evaluate simple predicates against data stored on disk for filtering. for traditional record oriented data models i/o and buffer management are the main bottlenecks for this operation, the interpretation of data structures is straightforward and usually not an important cost factor. for heterogeneously shaped complex objects it may become a dominant cost factor. in this paper we demonstrate a technique to make data structure traversal inside of complex objects much cheaper than naive interpretation. we compile navigation necessary to evaluate condition predicates and physical schema information into a program to be executed by a specialized abstract machine. our approach is demonstrated for the feature term data model (ftdm), but the technique is applicable to many other complex data models. main parts of this paper are dedicated to the method we used to design the term retrieval abstract machine (tram) architecture by partial evaluation of a tuned interpreter. michael ley a proposal for an open dss protocol dawn g. gregg michael goul issues and opportunities l. david van over ira r. weiss selection of search terms based on user profile sanjiv k. bhatia supporting the writing of reports in a hierarchical organization in many hierarchical companies, reports from several independent groups must be merged to form a single, company-wide report. this paper describes a process and system for creating and structuring such reports and for propagating contributions up the organization. the system has been in regular use, in- house, by about 30 users for over a year to create monthly status reports. our experiences indicate that it is possible to change a monthly reporting practice so that the system is easy to use, improves the quality of the written report, fosters collaboration across projects and creates a corporate memory for the company. these results were achieved as a consequence of our design effort to directly support the hierarchical and collaborative process of creating and assembling the report within the organization. user feedback has led to many improvements in the usability and functionality of the system. 
further enhancements using information retrieval and text summarization techniques are in progress. andreas girgensohn on site wearable computer system len bass dan siewiorek asim smailagic john stivoric browsing graphs using a fisheye view marc h. brown james r. meehan manojit sarkar visualizing efficiency: a technique to help designers judge interface efficiency andrew sears experimenting with temporal relational databases iqbal a. goralwalla abdullah u. tansel m. tamer özsu tactile-based direct manipulation in guis for blind users helen petrie sarah morley gerhard weber multimedia databases and information systems dragutin petrovic farshid arman charlie judice alex pentland james normile a general purpose data base design one of the major problems facing data base designers is how to develop a logical data base design for a proposed application. usually, for each new application, a new data base design is produced. for a data base task group (dbtg) data base management system (dbms) this means creating a new schema and subschemas. this paper describes a non-volatile data base design that allows the dbtg structure of the data base to remain constant, regardless of changes in the applications it is portraying. robert p. brazile multimedia after a decade of research (panel discussion): multimedia after a decade of research kevin c. almeroth the crystal ball of research: how to use it to learn about the user community with computing becoming more diverse and ubiquitous, it will be increasingly important in the future for user services departments to learn about the characteristics and needs of the people we serve. as a research psychologist working at yale's computer center, i have spent much of my time during the past year developing a research program to do just that. this paper will detail the development of the research agenda, describing the process involved, the program itself, and results we've obtained so far. i will end by offering suggestions on how other universities can develop their own user research programs. susan grajek comparing presentation summaries: slides vs. reading vs. listening as more audio and video technical presentations go online, it becomes imperative to give users effective summarization and skimming tools so that they can find the presentation they want and browse through it quickly. in a previous study, we reported three automated methods for generating audio-video summaries and a user evaluation of those methods. an open question remained about how well various text/image-only techniques would compare to the audio-video summarizations. this study attempts to fill that gap. this paper reports a user study that compares four possible ways of allowing a user to skim a presentation: 1) powerpoint slides used by the speaker during the presentation, 2) the text transcript created by professional transcribers from the presentation, 3) the transcript with important points highlighted by the speaker, and 4) an audio-video summary created by the speaker. results show that although some text-only conditions can match the audio-video summary, users have a marginal preference for audio-video (anova f=3.067, p=0.087). furthermore, different styles of slide-authoring (e.g., detailed vs. big-points only) can have a big impact on their effectiveness as summaries, raising a dilemma for some speakers in authoring for on-demand previewing versus that for live audiences.
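as an illustration of the one-way anova comparison reported in the preceding abstract, the short sketch below shows how an f statistic and p-value of that kind are computed. the ratings are hypothetical, invented only for the example, and scipy's f_oneway is assumed to be available; this is not the study's data or analysis code.

```python
# illustrative only: hypothetical 1-7 preference ratings for three summary
# conditions; these are NOT the data from the study summarized above.
from scipy.stats import f_oneway

slides      = [4, 5, 3, 4, 4, 5, 3, 4]
transcript  = [3, 4, 3, 5, 4, 3, 4, 3]
audio_video = [5, 5, 4, 6, 5, 4, 5, 6]

# one-way anova across the three conditions
f_stat, p_value = f_oneway(slides, transcript, audio_video)
print(f"f = {f_stat:.3f}, p = {p_value:.3f}")
# a p-value a little below 0.10, like the one quoted in the abstract,
# indicates only a marginal (not conventionally significant) difference.
```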
liwei he elizabeth sanocki anoop gupta jonathan grudin the parameterized complexity of database queries this paper gives a short introduction into parameterized complexity theory, aimed towards database theorists interested in this area. the main results presented here classify the evaluation of first-order queries and conjunctive queries as hard parameterized problems. martin grohe overview of the virtual data center project and software in this paper, we present an overview of the virtual data center (vdc) software, an open-source digital library system for the management and dissemination of distributed collections of quantitative data. (see ). the vdc functionality provides everything necessary to maintain and disseminate an individual collection of research studies, including facilities for the storage, archiving, cataloging, translation, and on-line analysis of a particular collection. moreover, the system provides extensive support for distributed and federated collections including: location-independent naming of objects, distributed authentication and access control, federated metadata harvesting, remote repository caching, and distributed virtual collections of remote objects. micah altman l. andreev m. diggory g. king e. kolster a. sone s. verba daniel kiskis m. krot web-ccat: a collaborative learning environment for geographically distributed information technology students and working professionals donna dufner ojoung kwon rassule hadidi visual metaphors for interacting with databases the need for better human- computer interaction (hci) has been widely recognized and discussed, even in the database area. it is generally accepted that the quality of the interaction mainly depends on the interface characteristics. how does one recognize "good" interfaces? the use of a suitable metaphor is crucial. unfortunately, to put it as a metaphor, speaking about metaphors in hci is like walking on a slippery floor.we would like to come up with a definition of metaphor that is specific to the particular needs of the database area, and, of course, of the database users. thus, we will highlight some peculiarities of the database interaction. the following considerations should constitute a basis towards a formal approach for the construction of effective metaphors for interacting with databases. tiziana catarci maria f. costabile maristella matera density-based indexing for approximate nearest-neighbor queries kristin p. bennett usama fayyad dan geiger hypursuit: a hierarchical network search engine that exploits content-link hypertext clustering ron weiss bienvenido velez mark a. sheldon conceptual design: from user requirements to user interface kathy potosnak cscw as a basis for interactive design semantics the paper describes a method for visual systems synthesis. it stresses the importance of meaningful objects at the interface and suggests that semantics be used to derive such meaningful objects. the paper makes a distinction between the semantics of the design process and the problem domain and suggests that problem domain semantic concepts become the operands of design concepts. the paper notes the dual role of cscw both as a problem domain in its own right and as providing semantics to describe the design process itself. the paper defines a set of formal semantics for cscw and then describes synthesis of cscw systems. it then describes the design process and shows how cscw semantics can be used to define and implement a general model of the design process. igor t. 
hawryszkiewycz cscw, groupware and workflow: experiences, state of art, and future trends steven poltrock jonathan grudin what is coordination theory and how can it help design cooperative work systems? it is possible to design cooperative work tools based only on "common sense" and good intuitions. but the history of technology is replete with examples of good theories greatly aiding the development of useful technology. where, then, might we look for theories to help us design computer-supported cooperative work tools? in this paper, we will describe one possible perspective---the interdisciplinary study of coordination---that focuses, in part, on how people work together now and how they might do so differently with new information technologies. in one sense, there is little that is new about the study of coordination. many different disciplines---including computer science, sociology, political science, management science, systems theory, economics, linguistics, and psychology---have all dealt, in one way or another, with fundamental questions about coordination. furthermore, several previous writers have suggested that theories about coordination are likely to be important for designing cooperative work tools (e.g., [holt88], [wino86]). we hope to suggest here, however, that the potential for fruitful interdisciplinary connections concerning coordination is much greater than has as yet been widely appreciated. for instance, we believe that fundamentally similar coordination phenomena arise---unrecognized as such---in many of the fields listed above. though a single coherent body of theory about coordination does not yet exist, many different disciplines could both contribute to and benefit from more general theories of coordination. of particular interest to researchers in the field of computer-supported cooperative work is the prospect of drawing on a much richer body of existing and future work in these fields than has previously been suggested. in this paper, we will first describe what we mean by "coordination theory" and give examples of how previous research on computer-supported cooperative work can be interpreted from this perspective. we will then suggest one way of developing this perspective further by proposing tentative definitions of coordination and analyzing its components in more detail. thomas w. malone kevin crowston managing conflicts between rules (extended abstract) h. v. jagadish alberto o. mendelzon inderpal singh mumick at the forge: speaking sql reuven lerner psychometric evaluation of an after-scenario questionnaire for computer usability studies: the asq james r. lewis starts: stanford proposal for internet meta-searching document sources are available everywhere, both within the internal networks of organizations and on the internet. even individual organizations use search engines from different vendors to index their internal document collections. these search engines are typically incompatible in that they support different query models and interfaces, they do not return enough information with the query results for adequate merging of the results, and finally, in that they do not export metadata about the collections that they index (e.g., to assist in resource discovery). this paper describes starts, an emerging protocol for internet retrieval and search that facilitates the task of querying multiple document sources. starts has been developed in a unique way. 
it is not a standard, but a group effort coordinated by stanford's digital library project, and involving over 11 companies and organizations. the objective of this paper is not only to give an overview of the starts protocol proposal, but also to discuss the process that led to its definition. luis gravano chen-chuan k. chang hector garcia-molina andreas paepcke a multimedia synchronization model described by boolean expressions in this paper, a multimedia synchronization model is proposed. the purpose of this model is the description of temporal relationships (i.e., time ordering and synchronization for multimedia presentation) among various media. in the model, user interactions, such as conditional branching and menu selection, are also considered. for the description of the temporal relationships, boolean expressions are used. the concept of media objects is a great help in manipulating multimedia data. the functions of the media objects are defined in this paper. a media object is an entity that contains associated data and a media variable representing its presentation state. various types of media are classified into two types: output and input media objects. output media objects have data perceived by the user such as graphics, image, drawings, animation, audio and video. input media objects such as buttons and menus can handle input from the user. temporal relationships among media objects can be represented by a set of presentation expressions, each consisting of a boolean expression over the three logical operators (and, or, and not) and media variables. the value of the media variable indicates the state of the media (e.g., a video object has two states: playing and not playing). furthermore, by describing temporal relationships between input media objects and output media objects, user interaction can be successfully described in our model. a multimedia animal guide system, a prototype system based on the model, was implemented. the system can synchronously present media such as video, audio, image, and text, can explain the features and characteristics of animals, and lets the user control the sequence of the presentation. tatsuo sato joung-hoon lim ken-ichi okada yutaka matsushita grassroots: providing a uniform framework for communicating, sharing information, and organizing people kenichi kamiya martin röscheisen terry winograd hypervoice: a phone-based cscw platform paul resnick how to roll a join: asynchronous incremental view maintenance incremental refresh of a materialized join view is often less expensive than a full, non-incremental refresh. however, it is still a potentially costly atomic operation. this paper presents an algorithm that performs incremental view maintenance as a series of small, asynchronous steps. the size of each step can be controlled to limit contention between the refresh process and concurrent operations that access the materialized view or the underlying relations. the algorithm supports point-in-time refresh, which allows a materialized view to be refreshed to any time between the last refresh and the present. kenneth salem kevin beyer bruce lindsay roberta cochrane design and functions of duo: the first italian academic opac maristella agosti maurizio masotti in-memory data management for consumer transactions the timesten approach corporate timesten team business: trends in future web designs: what's next for the hci professional? mary czerwinski kevin larson relaxed transaction processing munindar p.
singh christine tomlinson darrell woelk distributed facilitation: a concept whose time has come? shelli dubs stephen c. hayne apparent usability vs. inherent usability: experimental analysis on the determinants of the apparent usability masaaki kurosu kaori kashimura performance evaluation of ephemeral logging ephemeral logging (el) is a new technique for managing a log of database activity on disk. it does not require periodic checkpoints and does not abort lengthy transactions as frequently as traditional firewall logging for the same amount of disk space. therefore, it is well suited for highly concurrent databases and applications which have a wide distribution of transaction lifetimes. this paper briefly explains el and then analyzes its performance. simulation studies indicate that it can offer significant savings in disk space, at the expense of slightly higher bandwidth for logging and more main memory. the reduced size of the log implies much faster recovery after a crash as well as cost savings. el is the method of choice in some but not all situations. we assess the limitations of our current knowledge about el and suggest promising directions for further research. john s. keen william j. dally multilingual "worldtrek" for authoring and comprehension marie-luce picard eric boudaillier local tools: an alternative to tool palettes benjamin b. bederson james d. hollan allison druin jason stewart david rogers david proft progressive quantized projection watermarking scheme in this paper we present a new watermarking technique for digital images. our approach modifies blocks of the image after projecting them onto certain directions. by quantizing the projected blocks to even and odd values we can represent the hidden information properly. the proposed algorithm does the modification progressively to ensure successful data extraction without any prior information being sent to the receiver side. in order to increase the robustness of our watermark to scaling and rotation attacks we also present a solution to recover the original size and orientation based on a training sequence which is inserted as part of the watermark. masoud alghoniemy ahmed h. tewfik tessa - an image testbed for evaluating 2-d spatial similarity algorithms venkat n. gudivada random and best-first document selection models most document retrieval systems based on probabilistic models of feature distributions assume random selection of documents for retrieval. the assumptions of these models are met when documents are randomly selected from the database or when retrieving all available documents. a more suitable model for retrieval of a single document assumes that the best document available is to be retrieved first. models of document retrieval systems assuming random selection and best-first selection are developed and compared under binary independence and two poisson independence feature distribution models. under the best-first model, feature discrimination varies with the number of documents in each relevance class in the database. a weight similar to the inverse document frequency weight and consistent with the best-first model is suggested which does not depend on knowledge of the characteristics of relevant documents. r. losee editorial steven pemberton nested transactions and read-write locking a. fekete n. lynch m. merrit w. weihl nulls in relational databases: revised this paper discusses the semantic issues related to null values problem in relational databases. 
we argue that the proposed set of maybe operations with the three-valued logic is still not adequate and needs further enhancements. raju kocharekar pardes - an enhanced active database system (abstract) traditional databases are passive. they do only what is explicitly requested in the user's query or update operation. the active database paradigm states that a database may react in an intelligent way to an external input by creating and executing database operations, which, though not explicitly requested in the input, are required to preserve invariants associated with the database. this paradigm replaces many operations traditionally implemented by application programs with descriptive definitions that are part of the database schema. models and research prototypes based on this paradigm are postgres, hipac, sapiens, and ariel. our model extends existing active database models in the following ways: * it increases the expressive power of the language used to define the database schema by allowing the specification of a richer class of invariants than is currently supported. example: (salary := base-salary + bonus + 1000 * count(subordinates)). the proposed language is compact yet powerful. it reduces the cost of development and maintenance of database update applications and reduces the problems of validation and verification. * it provides an efficient algorithm for generating the auxiliary operations required to preserve the invariants after an update (in other models these operations have to be explicitly coded, either in the application program or in the database schema). * our model introduces the notion of exception-handling mode, proposes a number of such modes, and specifies how modes can be defined in the data base schema. the modes we propose eliminate a large portion of the exception handling code that currently exists in application programs. opher etzion how can cooperative work tools support dynamic group process? bridging the specificity frontier in the past, most collaboration support systems have focused on either automating fixed work processes or simply supporting communication in ad-hoc processes. this results in systems that are usually inflexible and difficult to change or that provide no specific support to help users decide what to do next. this paper describes a new kind of tool that bridges the gap between these two approaches by flexibly supporting processes at many points along the spectrum: from highly specified to highly unspecified. the development of this approach was strongly based on social science theory about collaborative work. abraham bernstein a progressive view materialization algorithm a data warehouse stores materialized views of aggregate data derived from a fact table in order to minimize the query response time. one of the most important decisions in designing the data warehouse is the selection of materialized views. this paper presents an algorithm which provides appropriate views to be materialized, where the goal is to minimize the query response time and maintenance cost. we use a data cube lattice, frequency of queries and updates on views, and view size to select views to be materialized using greedy algorithms. in spite of its simplicity, our algorithm selects views which give us better performance than the views selected by existing algorithms. hidetoshi uchiyama kanda runapongsa toby j.
teorey rock and roll will never die: text of a talk given at the 1986 conference on computer supported collaborative work (cscw), with updated notes following john leslie king messageworld: a new approach to facilitating asynchronous group communication daniel e. rose jeremy j. bornstein kevin tiene processor allocation strategies for multiprocessor database machines in this paper four alternative strategies for assigning processors to queries in multiprocessor database machines are described and evaluated. the results demonstrate that simd database machines are indeed a poor design when their performance is compared with that of the three mimd strategies presented. also introduced is the application of data-flow machine techniques to the processing of relational algebra queries. a strategy that employs data-flow techniques is shown to be superior to the other strategies described by several experiments. furthermore, if the data-flow query processing strategy is employed, the results indicate that a two-level storage hierarchy (in which relations are paged between a shared data cache and mass storage) does not have a significant impact on performance. haran boral david j. dewitt cache consistency and concurrency control in a client/server dbms architecture yongdong wang lawrence a. rowe consequences of assuming a universal relation although central to the current direction of dependency theory, the assumption of a universal relation is incompatible with some aspects of relational database theory and practice. furthermore, the universal relation is itself ill defined in some important ways. and, under the universal relation assumption, the decomposition approach to database design becomes virtually indistinguishable from the synthetic approach. w. kent empirically validated web page design metrics a quantitative analysis of a large collection of expert-rated web sites reveals that page-level metrics can accurately predict if a site will be highly rated. the analysis also provides empirical evidence that important metrics, including page composition, page formatting, and overall page characteristics, differ among web site categories such as education, community, living, and finance. these results provide an empirical foundation for web site design guidelines and also suggest which metrics can be most important for evaluation via user studies. melody y. ivory rashmi r. sinha marti a. hearst unary inclusion dependencies have polynomial time inference problems we study the interaction between unary inclusion dependencies (uind's) and other known classes of dependencies, in the context of both unrestricted and finite implication. we provide complete axiomatizations for unrestricted and finite implication of uind's and functional dependencies, and polynomial-time algorithms for the inference problems. the inference problem becomes, however, np-hard, if we require that some attribute have a bounded domain. we show that for unrestricted implication, the interaction between uind's and unary functional dependencies completely characterizes the interaction between uind's and embedded implicational dependencies. also, for finite implication, the interaction between uind's and unary functional dependencies completely characterizes the interaction between uind's and full implicational dependencies (but not uind's and embedded implicational dependencies). paris c. kanellakis stavros s. cosmadakis moshe y. vardi the zen of interface design maria g. 
wadlow versant replication: supporting fault-tolerant object databases yuh-ming shyy h. stephen au-yeung c. p. chou constraints and triggers ludger becker hendrik ditt klaus hinrichs andreas voigtmann categorizing scenarios: a quixotic quest? robert l. campbell designing dexter-based cooperative hypermedia systems kaj jens a. hem ole l. madsen lennert sloth and nothing to watch: bad protocols, good users: in praise of evolvable systems clay shirky the diamond security policy for object-oriented databases a formal data model and integrated multilevel security policy for object- oriented database systems are presented in this paper. the security policy addresses mandatory as well as discretionary security controls. classes derive security classification constraints from their instances and logical instances. linda m. null johnny wong application of domain vector perfect hash join for multimedia data mining venkata n. rao goli william perrizo the cafe constructionkit: a toolkit for sociality mark s. ackerman generating user interfaces from data models and dialogue net specifications christian janssen anette weisbecker jurgen ziegler the attributistical understanding of information: its evaluation and its consequences for the soft redesign of user interface screens gunter dubrau patching: a multicast technique for true video- on-demand services kien a. hua ying cai simon sheu inferring graphical constraints with rockit solange karsenty chris weikart james a. landay special section on management of information systems gordon b. davis data warehousing and olap for decision support on-line analytical processing (olap) and data warehousing are decision support technologies. their goal is to enable enterprises to gain competitive advantage by exploiting the ever-growing amount of data that is collected and stored in corporate databases and files for better and faster decision making. over the past few years, these technologies have experienced explosive growth, both in the number of products and services offered, and in the extent of coverage in the trade press. vendors, including all database companies, are paying increasing attention to all aspects of decision support. surajit chaudhuri umeshwar dayal usability support inside and out randolph g. bias peter b. reitmeyer strengthening the focus on users' working practices julia gardner linking by interacting: a paradigm for authoring hypertext maria da graça pimentel gregory d. abowd yoshihide ishiguro open-vocabulary speech indexing for voice and video mail retrieval m. g. brown j. t. foote g. j. f. jones k. spärck jones s. j. young presenting to local and remote audiences: design and use of the telep system the current generation of desktop computers and networks are bringing streaming audio and video into widespread use. a small investment allows presentations or lectures to be multicast, enabling passive viewing from offices or rooms. we surveyed experienced viewers of multicast presentations and designed a lightweight system that creates greater awareness in the presentation room of remote viewers and allows remote viewers to interact with each other and the speaker. we report on the design, use, and modification of the system, and discuss design tradeoffs. 
gavin jancke jonathan grudin anoop gupta organizing information spatially kevin mullet individual characteristics associated with world wide web use: an empirical study of playfulness and motivation this study examines the influence of the individual characteristic of playfulness on the use of the world wide web (www). previous research suggests that microcomputer playfulness has an effect on computer usage in general, and we found support for a similar relationship in www use. two samples of students were surveyed in this study; one consisting of undergraduate students and the other comprised of graduate students. our findings also suggest that both intrinsic and extrinsic factors affect www use differentially for entertainment purposes and for course work purposes. our study confirms previous research in that we found that ability to use the computer has a positive effect on www usage. maryanne atkinson christine kydd a new approach to text searching we introduce a family of simple and fast algorithms for solving the classical string matching problem, string matching with don't care symbols and complement symbols, and multiple patterns. in addition we solve the same problems allowing up to k mismatches. among the features of these algorithms are that they are real time algorithms, they don't need to buffer the input, and they are suitable to be implemented in hardware. r. a. baeza-yates g. h. gonnet de-constructing the workstation: window systems, distribution and cscw (abstract) steve freeman stories & scenarios: pix pals barbara kiser ascw: an assistant for cooperative work (abstract): architectural and technological issues thomas kreifelts wolfgang prinz an information retrieval application for simulated annealing (poster) bernard j. jansen the jungle database search engine information spread in in databases cannot be found by current search engines. a database search engine is capable to access and advertise database on the www. jungle is a database search engine prototype developed at aalborg university. operating through jdbc connections to remote databases, jungle extracts and indexes database data and meta-data, building a data store of database information. this information is used to evaluate and optimize queries in the aqua query language. aqua is a natural and intuitive database query language that helps users to search for information without knowing how that information is structured. this paper gives an overview of aqua and describes the implementation of jungle. michael böhlen linas bukauskas curtis dyreson video-based hypermedia for education-on-demand: wei-hsiu ma yen-jen lee david h. c. du mark p. mccahill interactive multidimensional document visualization josiane mothe taoufiq dkaki involving the "user": blurred roles and co-design over time lucy suchman details of command-language keystrokes r. b. allen m. w. scerbo metricviews: design of multiple spreadsheets into a single dynamic view david small yin yin wong sergio canetti pql elaheh pourabbas maurizio rafanelli designing presentations for on-demand viewing increasingly often, presentations are given before a live audience, while simultaneously being viewed remotely and recorded for subsequent viewing on- demand over the web. how should video presentations be designed for web access? how is video accessed and used online? does optimal design for live and on-demand audiences conflict? 
we examined detailed behavior patterns of more than 9000 on-demand users of a large corpus of professionally prepared presentations. we find that as many people access these talks on-demand as attend live. online access patterns differ markedly from live attendance. people watch less overall and skip to different parts of a talk. speakers designing presentations for viewing on-demand should emphasize key points early in the talk and early within each slide, use slide titles that reveal the talk structure and are meaningful outside the flow of the talk. in some cases the recommendations conflict with optimal design for live audiences. the results also provide guidance in developing tools for on-demand multimedia authoring and use. liwei he jonathan grudin anoop gupta empirical bayes screening for multi-item associations this paper considers the framework of the so-called "market basket problem", in which a database of transactions is mined for the occurrence of unusually frequent item sets. in our case, "unusually frequent" involves estimates of the frequency of each item set divided by a baseline frequency computed as if items occurred independently. the focus is on obtaining reliable estimates of this measure of interestingness for all item sets, even item sets with relatively low frequencies. for example, in a medical database of patient histories, unusual item sets including the item "patient death" (or other serious adverse event) might hopefully be flagged with as few as 5 or 10 occurrences of the item set, it being unacceptable to require that item sets occur in as many as 0.1% of millions of patient reports before the data mining algorithm detects a signal. similar considerations apply in fraud detection applications. thus we abandon the requirement that interesting item sets must contain a relatively large fixed minimal support, and adopt a criterion based on the results of fitting an empirical bayes model to the item set counts. the model allows us to define a 95% bayesian lower confidence limit for the "interestingness" measure of every item set, whereupon the item sets can be ranked according to their empirical bayes confidence limits. for item sets of size _j_ > 2, we also distinguish between multi-item associations that can be explained by the observed _j_(_j_-1)/2 pairwise associations, and item sets that are significantly more frequent than their pairwise associations would suggest. such item sets can uncover complex or synergistic mechanisms generating multi- item associations. this methodology has been applied within the u.s. food and drug administration (fda) to databases of adverse drug reaction reports and within at&t; to customer international calling histories. we also present graphical techniques for exploring and understanding the modeling results. william dumouchel daryl pregibon a hybrid indoor navigation system we describe a hybrid building navigation system consisting of stationary information booths and a mobile communication infrastructure feeding small portable devices. the graphical presentations for both the booths and the mobile devices are generated from a common source and for the common task of way finding, but they use different techniques to convey possibly different subsets of the relevant information. the form of the presentations is depending on technical limitations of the output media, accuracy of location information, and cognitive restrictions of the user. 
we analyze what information needs to be conveyed, how limited resources influence the presentation of this information, and argue, that by generating all different presentations in a common framework, a consistent appearance across devices can be achieved and that the different device classes can complement each other in facilitating the navigation task. andreas butz jörg baus antonio kruger marco lohse strategies for database schema acquisition and design consideration of the database design process highlights specific problems in requirements determination, development of the overall conceptual schema, and the data base administrator bottleneck. an alternative design strategy is described which first acquires the separate user views and then systematically merges these views to derive the overall conceptual schema. the user views may be specified independently and are not limited to a single data model formalism. matthew morgenstern industry briefs: viant john armitage posties: a webdav application for collaborative work joachim feise linking and messaging from real paper in the paper pda it is well known that paper is a very fluid, natural, and easy to use medium for manipulating some kinds of information. it is familiar, portable, flexible, inexpensive, and offers good readability properties. paper also has well known limitations when compared with electronic media. work in hybrid paper electronic interfaces seeks to bring electronic capabilities to real paper in order to obtain the best properties of each. this paper describes a hybrid paper electronic system --- the paper pda \\--- which is designed to allow electronic capabilities to be employed within a conventional paper notebook, calendar, or organizer. the paper pda is based on a simple observation: a paper notebook can be synchronized with a body of electronic information much like an electronic pda can be synchronized with information hosted on a personal computer. this can be accomplished by scanning, recognizing and processing its contents, then printing a new copy. this paper introduces the paper pda concept and considers interaction techniques and applications designed to work within the paper pda. the stickerlink technique supports on-paper hyperlinking using removable paper stickers. two applications are also considered which look at aspects of electronic communications via the paper pda. jeremy m. heiner scott e. hudson kenichiro tanaka an experimental system for transactional messaging allen e. milewski thomas m. smith design evolution in a multimedia tutorial on user-centered design tom carey slade mitchell dan peerenboom mary lytwyn empowering users in a task-based approach to design stephanie wilson peter johnson a synthesis algorithm for a class of 4nf e. a. unger cleopas o. angaye ramonamap - an example of graphical groupware ramonamap is an iterative map for database and communication services within our workgroup. resources are represented as icons on the map, which preserves their actual (or implied) physical location and capitalizes on a user's understanding of maps. the map is interactive, giving the user control over the level of detail visible, allowing more information and services to appear than could be placed on a static map. the interactivity also allows users to change the map and add icon annotations. since the map is continuously derived from an on-line database, changes and annotations are immediately shared by all users. 
as the database contains a wealth of information about the group, it also serves as a source for static maps for other purposes. joel f. bartlett a method for evaluating web page design concepts thomas s. tullis searching the web (keynote address): can you find what you want? the world wide web has revolutionized communication and information distribution, storage, and access. its impact has been felt everywhere - e.g. science and technology, commerce and business, education, government, religion, law, entertainment, health care. even so, there are many ways the web can be improved. we discuss what the web consists of and how it has changed, what is the size of the web, and what is covered. results for the publicly indexable web show that the web though terabytes in size and growing is still less than large commercial databases and the library of congress. though the web started out as an academic-government endeavor, it is now primarily commerce. furthermore, the major web search engines cover only a fraction of the publicly indexable web and appear to base their indexing strategy on the popularity of information. since current search on the web is primarily done with the search engines, what would be the economic, political and scientific implications of these results? c. lee giles designing presentation in multimedia interfaces alistair sutcliffe peter faraday bottom-up design of active object-oriented databases g. kappel s. rausch- schott w. retschitzegger m. sakkinen toward a multilevel secure relational data model sushil jajodia ravi sandhu starburst ii: the extender strikes back! guy m. lohman george lapis tobin lehman rakesh agrawal roberta cochrane john mcpherson c. mohan hamid pirahesh jennifer widom ercim workshop on "user interfaces for all" constantine stephanidis event-entity-relationship modeling in data warehouse environments we use the event-entity-relationship model (ever) to illustrate the use of entity-based modeling languages for conceptual schema design in data warehouse environments. ever is a general-purpose information modeling language that supports the specification of both general schema structures and multi- dimensional schemes that are customized to serve specific information needs. ever is based on an event concept that is very well suited for multi- dimensional modeling because measurement data often represent events in multi- dimensional databases. lars bækgaard experiences in developing collaborative applications using the world wide web "shell" andreas girgensohn alison lee kevin schueter retrospective on the mcc human interface laboratory bill curtis roy kuntz jim hollan s. joy mountford george collier applying an update method to a set of receivers (extended abstract) marc andries luca cabibbo jan paredaens jan van den bussche real time groupware as a distributed system: concurrency control and its effect on the interface this paper exposes the concurrency control problem in groupware when it is implemented as a distributed system. traditional concurrency control methods cannot be applied directly to groupware because system interactions include people as well as computers. methods, such as locking, serialization, and their degree of optimism, are shown to have quite different impacts on the interface and how operations are displayed and perceived by group members. the paper considers both human and technical considerations that designers should ponder before choosing a particular concurrency control method. 
it also reviews our work-in-progress designing and implementing a library of concurrency schemes in groupkit, a groupware toolkit. saul greenberg david marwood a data model for flexible hypertext database systems hypertext and other page-oriented databases cannot be schematized in the same manner as record-oriented databases. as a result, most hypertext databases implicitly employ a data model based on a simple, unrestricted graph. this paper presents a hypergraph model for maintaining page-oriented databases in such a way that some of the functionality traditionally provided by database schemes can be available to hypertext databases. in particular, the model formalizes identification of commonality in the structure, set-at-a-time database access, and definition of user-specific views. an efficient implementation of the model is also discussed. frank wm. tompa a case for dynamic view management materialized aggregate views represent a set of redundant entities in a data warehouse that are frequently used to accelerate on-line analytical processing (olap). due to the complex structure of the data warehouse and the different profiles of the users who submit queries, there is need for tools that will automate and ease the view selection and management processes. in this article we present dynamat, a system that manages dynamic collections of materialized aggregate views in a data warehouse. at query time, dynamat utilizes a dedicated disk space for storing computed aggregates that are further engaged for answering new queries. queries are executed independently or can be bundled within a multiquery expression. in the latter case, we present an execution mechanism that exploits dependencies among the queries and the materialized set to further optimize their execution. during updates, dynamat reconciles the current materialized view selection and refreshes the most beneficial subset of it within a given maintenance window. we show how to derive an efficient update plan with respect to the available maintenance window, the different update policies for the views and the dependencies that exist among them. yannis kotidis nick roussopoulos a generic dynamic-mapping wrapper for open hypertext system support of analytical applications chao-min chiu michael bieber a graphical notation editing system (agnes) - support for database modeling using the extended e-r model hsiao ying liu research alerts marisa campbell university briefs: drexel university kathi martin performances of clustering policies in object bases in this paper, we address the problem of clustering graphs in object-oriented databases. unlike previous studies which focused only on a workload consisting of a single operation, this study tackles the problem when the workload is a set of operations (method and queries) that occur with a certain probability. thus, the goal is to minimize the expected cost of an operation in the workload, while maintaining a similarly low cost for each individual operation class. to this end, we present a new clustering policy based on the nearest-neighbor graph partitioning algorithm. we then demonstrate that this policy provides considerable gains when compared to a suite of well-known clustering policies proposed in the literature. our results are based on two widely referenced object-oriented database benchmarks; namely, the tektronix hypermodel and oo7. 
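to make the graph-based placement idea in the preceding clustering-policy abstract concrete, here is a small, generic sketch of greedy, affinity-driven assignment of objects to fixed-size pages. the object graph, edge weights, page capacity, and function names are invented for illustration; this is a sketch of the general nearest-neighbor flavor of such policies under those assumptions, not the algorithm evaluated in that paper.

```python
# minimal sketch of greedy, affinity-driven object placement into fixed-size
# pages. real policies (as in the abstract above) derive edge weights from
# the frequencies of the operations in the workload.
from collections import defaultdict

def cluster_objects(edges, page_capacity):
    """edges: dict {(a, b): weight} of co-access affinities between objects."""
    affinity = defaultdict(dict)
    for (a, b), w in edges.items():
        affinity[a][b] = w
        affinity[b][a] = w

    placed, pages = set(), []
    # seed each page from the heaviest remaining edge, then greedily pull in
    # the strongest unplaced neighbors of the page (nearest-neighbor style).
    for (a, b), _ in sorted(edges.items(), key=lambda kv: -kv[1]):
        for seed in (a, b):
            if seed in placed:
                continue
            page = [seed]
            placed.add(seed)
            while len(page) < page_capacity:
                candidates = {
                    n: w for o in page for n, w in affinity[o].items() if n not in placed
                }
                if not candidates:
                    break
                best = max(candidates, key=candidates.get)
                page.append(best)
                placed.add(best)
            pages.append(page)
    return pages

# example: objects a..e with pairwise co-access weights, 3 objects per page.
print(cluster_objects({("a", "b"): 9, ("b", "c"): 7, ("c", "d"): 2, ("d", "e"): 8}, 3))
```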
adel shrufi experiments in multilingual information retrieval using the spider system páraic sheridan jean paul ballerini the role of gender in high school computer mediated communication (abstract) matthew a. ford elise n. cassidente j. suzanne rothrock david w. brown daniel miller maintaining data warehouses over changing information sources elke a. rundensteiner andreas koeller xin zhang easily adding animations to interfaces using constraints brad a. myers robert c. miller rich mcdaniel alan ferrency on the complexity of database queries (extended abstract) christos h. papadimitriou mihalis yannakakis ingraph: graphical interface for a fully object-oriented database system xuequn wu guido dinkhoff information visualization for utilizing distributed information resources masanori sugimoto norio katayama perf join: an alternative to two-way semijoin and bloomjoin zhe li kenneth a. ross interface issues and interaction strategies for information retrieval systems scott henninger nicholas j. belkin bridging the communication gap in the workplace with usability engineering desiree sy nbdl: a cis framework for nsdl in this paper, we describe the nbdl (national biology digital library) project, one of the six cis (core integration system) projects of the nsf nsdl (national smete digital library) program. joe futrelle su-shing chen kevin c. chang research alerts marisa campbell disaggregations in databases an algebraic foundation of database schema design is presented. a new database operator, namely, disaggregation, is introduced. beginning with "free" families, repeated applications of disaggregation and three other operators (matching function, cartesian product, and selection) yield families of increasingly elaborate structure. in particular, families defined by one join dependency and several "embedded" functional dependencies can be obtained in this manner. serge abiteboul exploratory mining via constrained frequent set queries although there have been many studies on data mining, to date there have been few research prototypes or commercial systems supporting comprehensive query- driven mining, which encourages interactive exploration of the data. our thesis is that constraint constructs and the optimization they induce play a pivotal role in mining queries, thus substantially enhancing the usefulness and performance of the mining system. this is based on the analogy of declarative query languages like sql and query optimization which have made relational databases so successful. to this end, our proposed demo is not yet another data mining system, but of a new paradigm in data mining - mining with constraints, as the important first step towards supporting ad-hoc mining in dbms. in this demo, we will show a prototype exploratory mining system that implements constraint-based mining query optimization methods proposed in [5]. we will demonstrate how a user can interact with the system for exploratory data mining and how efficiently the system may execute optimized data mining queries. the prototype system will include all the constraint pushing techniques for mining association rules outlined in [5], and will include additional capabilities for mining other kinds of rules for which the computation of constrained frequent sets forms the core first step. raymond ng laks v. s. 
lakshmanan jiawei han teresa mah integrating theoreticians' and practitioners' perspectives with design rationale victoria bellotti flexible queries over semistructured data flexible queries facilitate, in a novel way, easy and concise querying of databases that have varying structures. two different semantics, flexible and semiflexible, are introduced and investigated. the complexity of evaluating queries under the two semantics is analyzed. query evaluation is polynomial in the size of the query, the database and the result in the following two cases. first, a semiflexible dag query and a tree database. second, a flexible tree query and a database that is any graph. query containment and equivalence are also investigated. for the flexible semantics, query equivalence is always polynomial. for the semiflexible semantics, query equivalence is polynomial for dag queries and exponential when the queries have cycles. under the semiflexible and flexible semantics, two databases could be equivalent even when they are not isomorphic. database equivalence is formally defined and characterized. the complexity of deciding equivalences among databases is analyzed. the implications of database equivalence on query evaluation are explained. yaron kanza yehoshua sagiv synchronous and asynchronous flaviu cristian identifying and analyzing multiple threads in computer-mediated and face-to- face conversations susan e. mcdaniel gary m. olson joseph c. magee amdb: an access method debugging tool marcel kornacker mehul shah joseph m. hellerstein evaluating the user interface: the candid camera approach in the development of a new interactive graphics application, considerable effort was spent on designing a user interface which would be easy to use. when a portion of the application was completed, typical potential users were brought in to help evaluate the interface. they were given a sample task and a short introduction to the application; then their efforts to complete the task were observed and videotaped. this method of evaluating the user interface provided the development staff with quite a bit of valuable information. changes were made, and more testing was done, including using some subjects for a second time. this paper describes how this evaluation method was used for two purposes: to point out problem areas in the interface, and to verify that changes made have improved the user interface. michelle a. lund video: a design medium steve harrison scott minneman bob stults karon weber on the design and administration of secure database transactions e. v. krishnamurthy a. mcguffin database design: methodologies, tools, and environments (panel session) carlo batini stefano ceri al hershey george gardarin david reiner collecting user information on a limited budget: a chi '95 workshop alison popowicz the role of sigois in cscw marilyn mantei discount or disservice?: discount usability analysis--evaluation at a bargain price or simply damaged merchandise? (panel session) wayne d. gray re-evaluating indexing schemes for nested objects performance is a major issue in the acceptance of object-oriented database management systems (oodbms). the nested index and path index schemes have been criticized for their heavy costs and poor handling of update operations. this paper re- evaluates three index schemes (nested index, path index, and multi- index) applicable to queries on nested attributes. 
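to make the multi-index idea above concrete, here is a hedged sketch with a hypothetical schema (not the paper's design): one single-attribute index is kept per link of a nested path and the indexes are composed at query time, which is what lets each index be maintained independently when objects are updated.

```python
# a hedged sketch of a multi-index over the nested path company.division.location:
# one inverted index per link, composed backwards at query time instead of a
# single nested or path index over the whole path.

from collections import defaultdict

def build_index(pairs):
    """pairs of (key, owner_oid) -> inverted index {key: {owner_oid, ...}}."""
    idx = defaultdict(set)
    for key, owner in pairs:
        idx[key].add(owner)
    return idx

# hypothetical data: companies reference divisions, divisions reference locations
division_of_company = [("d1", "c1"), ("d2", "c1"), ("d3", "c2")]
location_of_division = [("paris", "d1"), ("rome", "d2"), ("paris", "d3")]

idx_div_to_company = build_index(division_of_company)      # division -> companies
idx_loc_to_division = build_index(location_of_division)    # location -> divisions

def companies_with_location(city):
    """query on the nested attribute: compose the two indexes, last link first."""
    divisions = idx_loc_to_division.get(city, set())
    result = set()
    for d in divisions:
        result |= idx_div_to_company.get(d, set())
    return result

print(companies_with_location("paris"))   # companies with a division in paris
```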
among these, we found that a multi-index scheme is best supported in the object-oriented or extended relational dbms environment. multi-index schemes not only provide a better balance between retrieval and update costs than do the nested or path indices, but they also scale well for update when the number of indices increases. in this paper, we propose a multi-index design that reuses the single-table index structures already present in a dbms. our performance study extends previous models by permitting attributes to be multi-valued as well as single-valued. we also suggest that a combination of nested index and multi-index schemes offers a feasible solution to the support of queries on nested objects. yin-he jiang xiangning liu bharat bhargava improved bulk-loading algorithms for quadtrees gísli r. hjaltason hanan samet dtd inference for views of xml data we study the inference of data type definitions (dtds) for views of xml data, using an abstraction that focuses on document content structure. the views are defined by a query language that produces a list of documents selected from one or more input sources. the selection conditions involve vertical and horizontal navigation, thus querying explicitly the order present in input documents. we point several strong limitations in the descriptive ability of current dtds and the need for extending them with (i) a subtyping mechanism and (ii) a more powerful specification mechanism than regular languages, such as context-free languages. with these extensions, we show that one can always infer tight dtds, that precisely characterize a selection view on sources satisfying given dtds. we also show important special cases where one can infer a tight dtd without requiring extension (ii). finally we consider related problems such as verifying conformance of a view definition with a predefined dtd. extensions to more powerful views that construct complex documents are also briefly discussed. yannis papakonstantinou victor vianu object-oriented and database concepts for the design of networked information retrieval systems norbert fuhr an ois model for internal accounting control evaluation andrew d. bailey james h. gerlach r. preston mcafee andrew b. whinston nested effects testing: a multidimensional approach for evaluating cscw systems j. h. erik andriessen nic: interaction on the world wide web dan olsen ken rodham doug kohlert jeff jensen brett ahlstrom mike bastian darren davis exploding the interface: experiences of a cscw network john bowers tom rodden using a knowledge cache for interactive discovery of association rules biswadeep nag prasad m. deshpande david j. dewitt insite: introduction to a generic paradigm for interpreting user-web space interaction insite is a heuristic-based implementation to provide consistent tracking, analysis and visualization of users' interactions with a generic web site. our research has immediate applicability in such disparate fields as business, e-commerce, distance education, entertainment and management for capturing individual and collective profiles of customers, learners and employees. insite can identify trends and changes in user(s) behavior (interests) by monitoring their online interactions. it has a three- tier architecture for tracking, analysis and visualization. first, a remote agent transparently tracks user-navigation-paths within a site. second, a unique connectivity matrix (cm) model (a set of connectivity matrices) represents each path (and cluster of paths). 
third, the user-web site interaction, thus translated to a finite number of cm-models, is readily visualized by graphically representing the member matrices of the models. each member matrix of a representative cm- model captures a single navigational attribute. our dimensionally static approach to path and cluster representation by the connectivity matrices can reduce the complexity of analysis by several orders. consequently, we employ a new paradigm for dynamic clustering that leverages on the unique cm-model of representation. adil faisal cyrus shahabi margaret mclaughlin frederick betz filochat: handwritten notes provide access to recorded conversations steve whittaker patrick hyland myrtle wiley the use of scenarios in human-computer interaction research: turbocharging the tortoise of cumulative science a scenario is an idealised but detailed description of a specific instance of human-computer interaction (hci). a set of scenarios can be used as a "filter bank" to weed out theories whose scope is too narrow for them to apply to many real hci situations. by helping redress the balance between generality and accuracy in theories derived from cognitive psychology, this use of scenarios (1) allows the researcher to build on empirical findings already established while avoiding the tar-pits associated with the experimental methodology, (2) enables the researcher to consider a range of phenomena in a single study, thereby directly addressing the question of the scope of the theory, and (3) ensures that the resulting theory will be applicable in hci contexts. richard m. young phil barnard selected papers from the sixth international conference on information systems there are a number of alternative tools and methods for building and designing information systems for organizational use. each tool or design alternative has its advocates. benefits and advantages are proposed or claimed; little empirical evidence is presented. the two selected papers from the 1985 international conference on information systems (indianapolis, december 15-18) present empirical laboratory experiments to provide evidence as to tools and design alternatives. the first paper by dickson, desanctis, and mcbride describes three experiments to compare traditional tabular presentation with graphic presentation. the experiments are designed to build cumulative research results around the issue of the task the reader of the information is to perform upon receiving the information. the second paper by vessey and weber provides experimental evidence comparing methods for documenting a problem: decision tree, decision table, and structured english. the three methods are frequently presented as alternatives; the experiments compare them. two studies do not settle an issue as complex as comparison of alternative tools and methods: they begin to provide the evidence needed. they also illustrate one well- established research approach---the laboratory experiment. the advantage of laboratory experiments is the control that can be obtained; field studies and experience of practitioners can be understood more fully in the context of such laboratory results. gordon b. davis rule based database access control - a practical approach tor didriksen a dynamic framework for object projection views user views in a relational database obtained through a single projection ("projection views") are considered in a new framework. 
specifically, such views, where each tuple in the view represents an object ("object-projection views"), are studied using the dynamic relational model, which captures the evolution of the database through consecutive updates. attribute sets that yield object- projection views are characterized using the static and dynamic functional dependencies satisfied by the database. object-projection views are then described using the static and dynamic functional dependencies "inherited" from the original database. finally, the impact of dynamic constraints on the view update problem is studied in a limited context. this paper demonstrates that new, useful information about views can be obtained by looking at the evolution of the database as captured by the dynamic relational model. victor vianu teaching software psychology experimentation through team projects this paper describes an undergraduate human factors course which emphasizes psychologically oriented experimentation on the human use of computers. the reductionist principles of the scientific method are emphasized throughout the course: lucid statement of testable hypotheses, alteration of independent variables, measurement of dependent variables, selection and assignment of subjects, control for biasing, and statistical testing. term- length team projects are highly motivating for students and have led to worthwhile research contributions. ben shneiderman is the east really different from the west: a cross-cultural study on information technology and decision making james t. c. teng kenneth j. calhoun myun joong cheon scott raeburn willy wong schema analysis for database restructuring the problem of generalized restructuring of databases has been addressed with two limitations: first, it is assumed that the restructuring user is able to describe the source and target databases in terms of the implicit data model of a particular methodology; second, the restructuring user is faced with the task of judging the scope and applicability of the defined types of restructuring to his database implementation and then of actually specifying his restructuring needs by translating them into the restructuring operations on a foreign data model. a certain amount of analysis of the logical and physical structure of databases must be performed, and the basic ingredients for such an analysis are developed here. the distinction between hierarchical and nonhierarchical data relationships is discussed, and a classification for database schemata is proposed. examples are given to illustrate how these schemata arise in the conventional hierarchical and network systems. application of the schema analysis methodology to restructuring specification is also discussed. an example is presented to illustrate the different implications of restructuring three seemingly identical database structures. shamkant b. navathe reformulating query plans for multidatabase systems chun-nan hsu craig a. knoblock guidelines for designing usable world wide web pages jose a. borges israel morales nestor j. rodriguez change detection in hierarchically structured information sudarshan s. chawathe anand rajaraman hector garcia-molina jennifer widom a comparison of three selection techniques for touchpads: i. 
scott mackenzie aleks oniszczak the whiteboard: the joy of ~~sex~~ psychology daryle gardner-bonneau gesturecam (video program) (abstract only): a video communication system to support spatial workspace collaboration in this paper, the collaboration in the real three-dimensional environment is defined as spatial workspace collaboration, and an experimental system, gesturecam, is presented which supports spatial workspace collaboration via a video-mediated communication. the gesturecam system has an ability to look around a remote site, an ability of remote pointing, and an ability to support gaze awareness, all of which are the essential system requirements for spatial workspace collaboration. the gesturecam consists of an actuator with three degrees of freedom and a video camera mounted on the actuator. the actuators can be controlled by a master-slave method, by a touch-sensitive crt, or by a gyro sensor. also, a laser pointer is mounted on an actuator to assist remote pointing. the experiments with human subjects are also shown in the video. hideaki kuzuoka gen ishimoda yushi nishimura yoshihiro nakada an almost linear-time algorithm for computing a dependency basis in a relational database zvi galil reflections on awards criteria marc rettig a methodological framework for data warehouse design matteo golfarelli stefano rizzi comment on some recent comments on information retrieval charles t. meadow collaboration media: the problem of design by use and the use of design p. t. hughes t. a. plant m. e. morris n. r. seel demonstration of hierarchical document clustering of digital library retrieval results as digital libraries grow in size, querying their contents will become as frustrating as querying the web is now. one remedy is to hierarchically cluster the results that are returned by searching a digital library. we demonstrate the clustering of search results from carnegie mellons informedia database, a large video library that supports indexing and retrieval with automatically generated descriptors. c. r. palmer j. pesenti r. e. valdes-perez m. g. christel a. g. hauptmann d. ng h. d. wactlar ready for prime time: pre-generation of web pages in tiscover in large data- and access-intensive web sites, efficient and reliable access is hard to achieve. this situation gets even worse for web sites providing precise structured query facilities and requiring topicality of the presented information even in face of a highly dynamic content. the achievement of these partly conflicting goals is strongly influenced by the approach chosen for page generation, ranging from composing a web page upon a user's request to its generation in advance. the official austrian web-based tourism information and booking system tiscover tries to reconcile these goals by employing a hybrid approach of page generation. in tiscover, web pages are not only generated on request in order to support precise structured queries on the content managed by a database system. rather, the whole web site is also pre- generated out of the extremely dynamic content and synchronized with the database on the basis of metadata. thus, topicality of information is guaranteed, while ensuring efficient and reliable access. 
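a hedged, much-simplified sketch of the hybrid pre-generation idea in the tiscover entry above (hypothetical page and content names, not the production system): pages are rendered ahead of time from database rows, and a metadata dependency map drives selective re-rendering after an update so the static pages stay synchronized with the database.

```python
# a hedged sketch of pre-generating web pages from database content, with
# metadata that records which pages depend on which rows.

content = {"hotel:42": {"name": "alpenhof", "beds": 30}}          # the "database"
page_dependencies = {"/hotels/42.html": ["hotel:42"]}             # metadata

rendered = {}                                                     # the web space

def render(page):
    rows = [content[k] for k in page_dependencies[page]]
    rendered[page] = "<html>" + ", ".join(r["name"] for r in rows) + "</html>"

def pregenerate_all():
    for page in page_dependencies:
        render(page)

def on_update(key, new_row):
    """called after a database update: refresh only the affected pages."""
    content[key] = new_row
    for page, keys in page_dependencies.items():
        if key in keys:
            render(page)

pregenerate_all()
on_update("hotel:42", {"name": "alpenhof neu", "beds": 32})
print(rendered["/hotels/42.html"])   # reflects the updated row
```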
this paper discusses the hybrid approach as realized in tiscover, focussing in particular on the concepts used for pre-generation. birgit pröll heinrich starck werner retschitzegger harald sighart a digital video authoring and publishing system designed for the internet ronald baecker checkpointing strategies for database systems in this paper we consider a transaction oriented database system where checkpointing is done after a certain number of transactions are processed. the design objective is to maximize the system availability given the failure rate, service rate and other system parameters. several checkpointing strategies have been proposed and analyzed. we have also considered the effect of error latency. raj sekhar pamula pradip k. srimani developing an image viewer for content-based retrieval and navigation (abstract) stephen perry mark dobie paul lewis smily, a smil authoring environment muriel jourdan laurent tardif lionel villard after the gold rush (invited talk) (abstract only): data mining in the new economy david stodder phenomenal data mining john mccarthy tree queries: a simple class of relational queries one can partition the class of relational database schemas into tree schemas and cyclic schemas. (these are called acyclic hypergraphs and cyclic hypergraphs elsewhere in the literature.) this partition has interesting implications in query processing, dependency theory, and graph theory. the tree/cyclic partitioning of database schemas originated with a similar partition of equijoin queries. given an arbitrary equijoin query one can obtain an equivalent query that calculates the natural join of all relations in (an efficiently) derived database; such a query is called a natural join (nj) query. if the derived database is a tree schema the original query is said to be a tree query, and otherwise a cyclic query. in this paper we analyze query processing consequences of the tree/cyclic partitioning. we are able to argue, qualitatively, that queries which imply a tree schema are easier to process than those implying a cyclic schema. our results also extend the study of the semijoin operator. nathan goodman oded shmueli using multiversion data for non-interfering execution of write-only transactions d. agrawal v. krishnaswamy an algorithm for concurrency control and recovery in replicated distributed databases in a one-copy distributed database, each data item is stored at exactly one site. in a replicated database, some data items may be stored at multiple sites. the main motivation is improved reliability: by storing important data at multiple sites, the dbs can operate even though some sites have failed. this paper describes an algorithm for handling replicated data, which allows users to operate on data so long as one copy is "available." a copy is "available" when (i) its site is up, and (ii) the copy is not out-of-date because of an earlier crash. the algorithm handles clean, detectable site failures, but not byzantine failures or network partitions. philip a. bernstein nathan goodman discourse analysis of user requests sanna talja rami heinisuo eija-liisa kasesniemi harri kemppainen sinikka luukkainen kirsi pispa kalervo järvelin decompositions and functional dependencies in relations a general study is made of two basic integrity constraints on relations: functional and multivalued dependencies. the latter are studied via an equivalent concept: decompositions. a model is constructed for any possible combination of functional dependencies and decompositions.
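as a small aside to the decompositions entry above, the sketch below uses the standard textbook attribute-closure test rather than the paper's union-based machinery; it checks whether a binary decomposition of a relation schema is lossless under a given set of functional dependencies.

```python
# a hedged aside: the classic attribute-closure routine and the resulting
# lossless-join test for a binary decomposition.

def closure(attrs, fds):
    """smallest superset of attrs closed under the fds [(lhs, rhs), ...]."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

def lossless_binary_decomposition(r1, r2, fds):
    """r1 join r2 reconstructs r iff the shared attributes determine r1 or r2."""
    common = set(r1) & set(r2)
    c = closure(common, fds)
    return set(r1) <= c or set(r2) <= c

# classic example: r(a, b, c) with a -> b, decomposed into (a, b) and (a, c)
fds = [("a", "b")]
print(lossless_binary_decomposition("ab", "ac", fds))   # True
print(lossless_binary_decomposition("ab", "bc", fds))   # False (common = {b})
```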
the model embodies some decompositions as unions of relations having different schemata of functional dependencies. this suggests a new, stronger integrity constraint, the degenerate decomposition. more generally, the theory demonstrates the importance of using the union operation in database design and of allowing different schemata on the operands of a union. techniques based on the union lead to a method for solving the problem of membership of a decomposition in the closure of a given set of functional dependencies and decompositions. the concept of antiroot is introduced as a tool for describing families of decompositions, and its fundamental importance for database design is indicated. w. w. armstrong c. deobel automatic chunk detection in human-computer interaction this paper describes an algorithm to detect user's mental chunks by analysis of pause lengths in goal-directed human-computer interaction. identifying and characterizing users' chunks can help in gauging the users' level of expertise. the algorithm described in this paper works with information collected by an automatic logging mechanism. therefore, it is applicable to situations in which no human intervention is required to perform the analysis, such as adaptive interfaces. an empirical study was conducted to validate the algorithm, showing that mental chunks and their characteristics can indeed be inferred from analysis of human-computer interaction logs. users performing a variety of goal-directed tasks were monitored. using an automated logging tool, every command invoked, every operation performed with the input devices, as well as all system responses were recorded. analysis of the interaction logs was performed by a program that implements a chunk detection algorithm that looks at command sequences and timings. the results support the hypothesis that a significant number of user mental chunks can be detected by our algorithm. paulo j. santos albert n. badre the kim query system: an iconic interface for the unified access to distributed multimedia databases fabrizio massimo ferrara talking to machines christopher k. cowley dylan m jones using latent semantic indexing for information filtering latent semantic indexing (lsi) is an information retrieval method that organizes information into a semantic structure that takes advantage of some of the implicit higher-order associations of words with text objects. the resulting structure reflects the major associative patterns in the data while ignoring some of the smaller variations that may be due to idiosyncrasies in the word usage of individual documents. this permits retrieval based on the "latent" semantic content of the documents rather than just on keyword matches. this paper evaluates using lsi for filtering information such as netnews articles based on a model of user preferences for articles. users judged articles on how interesting they were and based on these judgements, lsi predicted whether new articles would be judged interesting. lsi improved prediction performance over keyword matching an average of 13% and showed a 26% improvement in precision over presenting articles in the order received. the results indicate that user preferences for articles tend to cluster based on the semantic similarities between articles. p. w. foltz navigating nowhere/hypertext infrawhere jim rosenberg a concurrency control theory for nested transactions (preliminary report) concurrency control is the activity of synchronizing transactions that access shared data. 
a concurrency control algorithm is regarded as correct if it ensures that any interleaved execution of transactions is equivalent to a serial one. such executions are called serializable. serializability theory provides a method for modelling and analyzing the correctness of concurrency control algorithms [bsw, pa]. the concept of nested transaction has recently received much attention [gr], [mo]. in a nested transaction model, each transaction can invoke sub- transactions, which can invoke sub- subtransactions, and so on. the natural modelling concept is the tree log. the leaves of a tree log are atomic operations executed by the underlying system. internal nodes are operations (as seen by their parents) implemented as transactions (as seen by their children). nodes are related by a partial order <, where x c. beeri p. a. bernstein n. goodman m. y. lai d. e. shasha the user interface in text retrieval systems: a letter to the editor jef raskin optimal file designs and reorganization points a model for studying the combined problems of file design and file reorganization is presented. new modeling techniques for predicting the performance evolution of files and for finding optimal reorganization points for files are introduced. applications of the model to hash-based and indexed- sequential files reveal important relationships between initial loading factors and reorganization frequency. a practical file design strategy, based on these relationships, is proposed. d. s. batory letters from the editors mark keil ephraim r. mclean book preview jennifer bruer parallelism in relational data base systems: architectural issues and design approaches with current systems, some important complex queries may take days to complete because of: (1) the volume of data to be processed, (2) limited aggregate resources. introducing parallelism addresses the first problem. cheaper, but powerful computing resources solve the second problem. according to a survey by brodie,1 only 10% of computerized data is in data bases. this is an argument for both more variety and volume of data to be moved into data base systems. we conjecture that the primary reasons for this low percentage are that data base management systems (dbmss) still need to provide far greater functionality and improved performance compared to a combination of application programs and file systems. this paper addresses the issues and solutions relating to intraquery parallelism in a relational dbms supporting sql. instead of focussing only on a few algorithms for a subset of the problems, we provide a broad framework for the study of the numerous issues that need to be addressed in supporting parallelism efficiently and flexibly. we also discuss the impact that parallelization of complex queries has on short transactions which have stringent response time constraints. the pros and cons of the shared nothing, shared disks and shared everything architectures for parallelism are enumerated. the impact of parallelism on a number of components of an industrial-strength dbms are pointed out. the different stages of query processing during which parallelism may be gainfully employed are identified. the interactions between parallelism and the traditional systems' pipelining technique are analyzed. finally, the performance implications of parallelizing a specific complex query are studied. 
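one concrete form of the intraquery parallelism discussed in the entry above is a partitioned hash join; the hedged sketch below (a generic textbook scheme, not the paper's system) hash-partitions both inputs on the join key so that each partition pair can be joined by a separate processor with no further communication.

```python
# a hedged sketch of a hash-partitioned parallel equijoin: partitions are
# independent tasks that could be farmed out to separate processors or sites.

def partition(rows, key, n):
    parts = [[] for _ in range(n)]
    for row in rows:
        parts[hash(row[key]) % n].append(row)
    return parts

def join_partition(left, right, lkey, rkey):
    by_key = {}
    for row in left:                       # build phase
        by_key.setdefault(row[lkey], []).append(row)
    out = []
    for row in right:                      # probe phase
        for match in by_key.get(row[rkey], []):
            out.append({**match, **row})
    return out

def parallel_join(left, right, lkey, rkey, n_workers=4):
    lparts = partition(left, lkey, n_workers)
    rparts = partition(right, rkey, n_workers)
    # each (lparts[i], rparts[i]) pair is an independent task; here we run them
    # in a loop, but they could execute on n_workers processors in parallel.
    result = []
    for lp, rp in zip(lparts, rparts):
        result.extend(join_partition(lp, rp, lkey, rkey))
    return result

orders = [{"cust": 1, "item": "book"}, {"cust": 2, "item": "pen"}]
custs = [{"cust": 1, "name": "ana"}, {"cust": 2, "name": "bo"}]
print(parallel_join(custs, orders, "cust", "cust"))   # joined rows
```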
this gives us a range of sample points for different parameters of a parallel system architecture, namely, i/o and communication bandwidth as a function of aggregate mips. hamid pirahesh c. mohan josephine cheng t. s. liu pat selinger phrasier: a system for interactive document retrieval using keyphrases steve jones mark s. staveley pivoted document length normalization amit singhal chris buckley mandar mitra evaluating the cost of boolean query mapping chen-chuan k. chang hector garcia-molina mediadoc: automated generation of multimedia explanatory presentations lewis johnson stacy marsella a visual calendar for scheduling group meetings scheduling group meetings requires access to participants' calendars, typically located in scattered pockets or desks. placing participants' calendars on-line and using a rule-based scheduler to find a time slot would alleviate the problem to some extent, but it often is difficult to trust the results, because correct scheduling rules are elusive, varying with the participants and the agenda of a particular meeting. what's needed is a comprehensive scheduling system that summarizes the available information for quick, flexible, and reliable scheduling. we have developed a prototype of a priority-based, graphical scheduling system called visual scheduler (vs). a controlled experiment comparing automatic scheduling with vs to manual scheduling demonstrated the former to be faster and less error prone. a field study conducted over six weeks at the unc-ch computer science department showed vs to be a generally useful system and provided valuable feedback on ways to enhance the functionality of the system to increase its value as a groupwork tool. in particular, users found priority-based time-slots and access to scheduling decision reasoning advantageous. vs has been in use by more than 75 faculty, staff, and graduate students since fall 1987. david beard murugappan palaniappan alan humm david banks anil nair yen-ping shan data management in ecommerce (tutorial session): the good, the bad, and the ugly avigdor gal personalized information delivery: an analysis of information filtering methods peter w. foltz susan t. dumais the sigchi bulletin: interviews with the editors steven pemberton phonet: telephone call database 3d exploration applet christian ghezzi discerning behavioral properties by analyzing transaction logs (extended abstract) srinath srinivasa myra spiliopoulou jotmail: a voicemail interface that enables you to see what was said voicemail is a pervasive, but under-researched tool for workplace communication. despite potential advantages of voicemail over email, current phone-based voicemail uis are highly problematic for users. we present a novel, web-based, voicemail interface, jotmail. the design was based on data from several studies of voicemail tasks and user strategies. the gui has two main elements: (a) _personal annotations_ that serve as a visual analogue to underlying speech; (b) automatically derived message header information. we evaluated jotmail in an 8-week field trial, where people used it as their only means for accessing voicemail. jotmail was successful in supporting most key voicemail tasks, although users' electronic annotation and archiving behaviors were different from our initial predictions. our results argue for the utility of a combination of annotation based indexing and automatically derived information, as a general technique for accessing speech archives. 
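a hedged sketch in the spirit of the jotmail entry above (a hypothetical structure, not the actual interface): annotations are stamped with the audio offset at which they were written, so a note can later act as an index into the recorded message and drive playback to the relevant moment.

```python
# a hedged sketch of annotation-based indexing into a voicemail recording.

class AnnotatedMessage:
    def __init__(self, duration_s):
        self.duration_s = duration_s
        self.notes = []                      # list of (offset_s, text)

    def add_note(self, offset_s, text):
        """record a note together with the audio offset it was written at."""
        self.notes.append((offset_s, text))

    def seek_for(self, text):
        """return the audio offset of the first note containing the text."""
        for offset_s, note in self.notes:
            if text.lower() in note.lower():
                return offset_s
        return 0.0                           # fall back to the start

msg = AnnotatedMessage(duration_s=95.0)
msg.add_note(12.5, "call back about the budget")
msg.add_note(70.0, "flight lands at 6pm")
print(msg.seek_for("budget"))    # 12.5 -> replay just that part of the message
```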
steve whittaker richard davis julia hirschberg urs muller conference preview: siggraph 99 jennifer bruer performance evaluation of functional disk system with nonuniform data distribution in this paper, we analyze the performance of a functional disk system with relational database engine (fds-rii) for a nonuniform data distribution. fds- rii is a relational storage system, designed to accelerate relational algebraic operations, which employs a hash-based algorithm to process relational operations. basically, in the hash-based algorithm, a relation is first partitioned into several clusters by a split function. then each cluster is staged onto the main memory and, further, a hash function is applied to each cluster to perform a relational operation. thus, the nonuniformity of split and hash functions is considered to be resulting from a nonuniform data distribution on the hash-based algorithm. we clarify the effect of nonuniformity of the hash and split functions on the join performance. it is possible to attenuate the effect of the hash function nonuniformity by increasing the number of processors and processing the buckets in parallel. furthermore, in order to tackle the nonuniformity of split function, we introduce the combined hash algorithm. this algorithm combines the grace hash algorithm with the nested loop algorithm in order to handle the overflown bucket efficiently. using the combined hash algorithm, we find that the execution time of the nonuniform data distribution is almost equal to that of the uniform data distribution. thus we can get sufficiently high performance on fds-rii also for nonuniformly distributed data. masaru kitsuregawa miyuki nakano lilian harada mikio takagi pricing considerations in video-on-demand systems (poster session) video-on-demand (vod) has been an active area of research for the past few years in the multimedia research community. however, there have not been many significant commercial deployments of vod owing to the inadequacy of _per user_ bandwidth and the lack of a good business model.1 significant research efforts have been directed towards reduction of network bandwidth requirements, improvement of server utilization, and minimization of start-up latency. in this paper, we investigate another aspect of vod systems which has been largely neglected by the research community, namely, pricing models for vod systems. we believe that the price charged to a user for an on-demand video stream should influence the rate of user arrivals into the vod system and in turn should depend upon quality-of-service (qos) factors such as initial start-up latency. we briefly describe some simple pricing models and analyze the tradeoffs involved in such scenarios from a profit maximization point of view. we further explore secondary content insertion (ad-insertion) which was proposed elsewhere [1] not only as a technique for reducing the resource requirements at the server and the network, but also as a means of subsidizing vod content to the end user. we treat the rate of ad insertion as another qos factor and demonstrate how it can influence the price of movie delivery. prithwish basu thomas d. c. little towards new measures of information retrieval evaluation william r. hersh diane l. elliot david h. hickam stephanie l. 
wolf anna molnar pda-based observation logging monty hammontree paul weiler bob hendrich computing for users with special needs and models of computer-human interaction models of human-computer interaction (hci) can provide a degree of theoretical unity for diverse work in computing for users with special needs. example adaptations for special users are described in the context of both implementation-oriented and linguistic models of hci. it is suggested that the language of hci be used to define standards for special adaptations. this would enhance reusability, modifiability, and compatibility of adaptations, inspire new innovations, and make it easier for developers of standard interfaces to incorporate adaptations. the creation of user models for subgroups of users with special needs would support semantic and conceptual adaptations. william w. mcmillan evaluating a class of distance-mapping algorithms for data mining and clustering jason tsong-li wang xiong wang king-ip lin dennis shasha bruce a. shapiro kaizhong zhang xanalogical structure, needed now more than ever: parallel documents, deep links to content, deep versioning, and deep re-use theodor holm nelson a system for classification and control of information in the computer aided cooperative work place m. carl drott predicting query times rodger mcnab yong wang ian h. witten carl gutwin a history and evaluation of system r system r, an experimental database system, was constructed to demonstrate that the usability advantages of the relational data model can be realized in a system with the complete function and high performance required for everyday production use. this paper describes the three principal phases of the system r project and discusses some of the lessons learned from system r about the design of relational systems and database systems in general. donald d. chamberlin morton m. astrahan michael w. blasgen james n. gray w. frank king bruce g. lindsay raymond lorie james w. mehl thomas g. price franco putzolu patricia griffiths selinger mario schkolnick donald r. slutz irving l. traiger basics of integrated information and physical spaces: the state of the art norbert a. streitz daniel m. russell conversations with clement mok and jakob nielsen, and with bill buxton and clifford nass richard i. anderson cscw, groupware and workflow (tutorial session)(abstract only): experiences, state of art and future trends goals and content: this tutorial draws on the experiences of the participants and instructors with groupware and workflow technologies, and with cscw issues and methods, to construct an informed picture of what is happening and possible. to lectures and video-taped illustrations of commercial systems and research prototypes we have added structured subgroup activity by participants. we cover the multi-disciplinary nature of cscw; emerging groupware products and research that support communication, collaboration, and coordination; and behavioral, social, and organizational challenges to developing, acquiring, or using these technologies, and approaches that can lead to success. steven e. 
poltrock jonathan grudin approaches to an integrated office enviroment makoto yoshida makoto kotera kyoko yokoyama sadayuki hikita placeholder: technology and the senses rob tow infomaster: an information integration system infomaster is an information integration system that provides integrated access to multiple distributed heterogeneous information sources on the internet, thus giving the illusion of a centralized, homogeneous information system. we say that infomaster creates a virtual data warehouse. the core of infomaster is a facilitator that dynamically determines an efficient way to answer the user's query using as few sources as necessary and harmonizes the heterogeneities among these sources. infomaster handles both structural and content translation to resolve differences between multiple data sources and the multiple applications for the collected data. infomaster connects to a variety of databases using wrappers, such as for z39.50, sql databases through odbc, edi transactions, and other world wide web (www) sources. there are several www user interfaces to infomaster, including forms based and textual. infomaster also includes a programmatic interface and it can download results in structured form onto a client computer. infomaster has been in production use for integrating rental housing advertisements from several newspapers (since fall 1995), and for meeting room scheduling (since winter 1996). infomaster is also being used to integrate heterogeneous electronic product catalogs. michael r. genesereth arthur m. keller oliver m. duschka bootstrapping our collective intelligence douglas engelbart jeff ruilifson how reliable are the results of large-scale information retrieval experiments? justin zobel hyper mochi sheet: a predictive focusing interface for navigating and editing nested networks through a multi-focus distortion-oriented view masashi toyoda etsuya shibayama the ahi: an audio and haptic interface for contact interactions derek difilippo dinesh k. pai auditory illusions for audio feedback michel beaudouin-lafon stephane conversy where did you put it? issues in the design and use of a group memory lucy m. berlin robin jeffries vicki l. o'day andreas paepcke cathleen wharton what's that character doing in your interface? abbe don mining optimized association rules for numeric attributes takeshi fukuda yasuhido morimoto shinichi morishita takeshi tokuyama information ecologies and system design: a developmental perspective on mass multimedia networks menahem blondheim conference preview: uist 2001: the 14th annual acm symposium on user interface software and technology marisa campbell normalizing and optimising documentation jean e. tardy view-based query processing for regular path queries with inverse view-based query processing is the problem of computing the answer to a query based on a set of materialized views, rather than on the raw data in the database. the problem comes in two different forms, called query rewriting and query answering, respectively. in the first form, we are given a query and a set of view definitions, and the goal is to reformulate the query into an expression that refers only to the views. in the second form, besides the query and the view definitions, we are also given the extensions of the views and a tuple, and the goal is to check whether the knowledge on the view extensions logically implies that the tuple satisfies the query. 
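a hedged toy version of the query-rewriting problem defined above, restricted to concatenation-only path queries rather than the regular-path-with-inverse setting of the entry: it searches for a sequence of view names whose definitions concatenate to the query, so the query can be answered from the view extensions alone.

```python
# a hedged sketch of rewriting a concatenation-only path query over views,
# with simple backtracking over the candidate views at each step.

def rewrite(query, views):
    """query: tuple of edge labels; views: {name: tuple of edge labels}."""
    if not query:
        return []
    for name, definition in views.items():
        n = len(definition)
        if query[:n] == definition:
            rest = rewrite(query[n:], views)
            if rest is not None:
                return [name] + rest
    return None                              # no rewriting over these views

views = {"v1": ("works_in", "located_in"), "v2": ("twinned_with",)}
print(rewrite(("works_in", "located_in", "twinned_with"), views))  # ['v1', 'v2']
print(rewrite(("located_in",), views))                             # None
```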
in this paper we address the problem of view-based query processing in the context of semistructured data, in particular for the case of regular-path queries extended with the inverse operator. several authors point out that the inverse operator is one of the fundamental extensions for making regular-path queries useful in real settings. we present a novel technique based on the use of two-way finite-state automata. our approach demonstrates the power of this kind of automata in dealing with the inverse operator, allowing us to show that both query rewriting and query answering with the inverse operator has the same computational complexity as for the case of standard regular-path queries. diego calvanese moshe y. vardi giuseppe de giacomo maurizio lenzerini an introduction to remy's fast polymorphic record projection limsoon wong ultra-structure: a design theory for complex systems and processes the physicist and nobel laureate ilya prigogine states that "our understanding of nature is undergoing a radical change toward the multiple, the temporal, and the complex. curiously, the unexpected complexity found in nature has not led to a slowdown in the progress of science, but on the contrary to the emergence of new conceptual structures that now appear as essential to our understanding of the physical world" [11]. we believe the challenges posed by complex systems arise primarily from the use of conceptual structures that worked well for static systems but do not work as well for more dynamic systems. we therefore propose new conceptual structures based on a different metaphysical view of the nature of complex systems. jeffrey g. long dorothy e. denning regression testing for wrapper maintenance nicholas kushmerick data sharing in group work data sharing is fundamental to computer- supported cooperative work: people share information through explicit communication channels and through their coordinated use of shared databases. this paper examines the data management requirements of group work applications on the basis of experience with three prototype systems and on observations from the literature. database and object management technologies that support these requirements are briefly surveyed, and unresolved issues in the particular areas of access control and concurrency control are identified for future research. irene greif sunil sarin an effective algorithm for mining interesting quantitative association rules keith c. c. chan wai-ho au database patchwork on the internet naturally, data processing requires three kinds of resources: the data itself, the functionality (i.e. database operations) and the machines on which to run the operations. because of the internet we believe that in the long run there will be alternative providers for all of these three resources for any given application. data providers will bring more and more data and more and more different kinds of data to the net. likewise, function providers will develop new methods to process and work with the data; e.g., function providers might develop new algorithms to compress data or to produce thumbnails out of large images and try to sell these on the internet. 
it is also conceivable, that some people allow other people to use spare cycles of their idle machines in the internet (as in the condor system of the university of wisconsin) or that some companies (cycle providers) even specialize on selling computing time to businesses that occasionally need to carry out very complex operations for which regular hardware is not sufficient. at the university of passau, we are currently developing a distributed database system to be used in the internet. the goal is to ultimately have a system which is able to run on any machine, manage any kind of data, import any kind of data from other systems and import any kind of database operations. the system is entirely written in java. one of the most important features of the system is that it is capable of dynamically loading (external) query operators, written in java and supplied by any function provider, and executing these query operators in concert with pre-defined and other external operators in order to evaluate a query. compared to object-relational database systems, which allow to integrate external data and functionality by the means of extensions (datablades, extenders or cartridges) or heterogeneous database systems such as garlic [ms97] or tsimmis [gmpq+97], our approach makes it possible to place external query operators anywhere in a query evaluation plan as opposed to restricting the placement of external operations to the "access level" of plans. it would, for example, be possible to make our system execute a completely new relational join method, if somebody finds a new join method which is worth-while implementing. because our system is written in java, it is highly portable and could be used by data, function and cycle providers with almost no effort. furthermore, our query engine is, of course, completely distributed providing all the required infrastructure for server-server communication, name services, etc. reinhard braumandl alfons kemper donald kossmann a new theoretical framework for information retrieval c j van rijsbergen infomod: a knowledge-based moderator for electronic mail help lists robert j. hall clusterbook, a tool for dual information access (demonstration session) gheorghe muresan david j. harper ayse goker peter lowit frustrating the user on purpose: using biosignals in a pilot study to detect the user's emotional state jocelyn riseberg jonathan klein raul fernandez rosalind w. picard an investigation of a 3-poisson model as a basis for automatic index term selection amos o. olagunju vip-mdbs: a logic multidatabase system we present a multidatabase management system built in vienna integrated prolog (vip) for cooperative management of autonomous databases. data in different databases may differ with respect to naming, structures and value types. vip- mdbs (vip multidatabase system) allows the ability to manipulate them jointly and in a non-procedural way. its features are similar to those of the relational multidatabase language msql, but adapted to logic programming. we introduce the concept of so-called semantic relations, a concept which stems from the extension of global views by deductiveness. vip-mdbs allows for representation of intentional data and formulation of recursive multiple queries. e. kuhn t. ludwig storytelling with digital photographs photographs play a central role in many types of informal storytelling. this paper describes an easy-to-use device that enables digital photos to be used in a manner similar to print photos for sharing personal stories. 
a portable form factor combined with a novel interface supports local sharing like a conventional photo album as well as recording of stories that can be sent to distant friends and relatives. user tests validate the design and reveal that people alternate between "photo-driven" and "story-driven" strategies when telling stories about their photos. marko balabanovic lonny l. chu gregory j. wolff studying document usability: grasping the nettles judith ramey natural language vs. boolean query evaluation: a comparison of retrieval performance howard turtle sibling clustering of tree-based spatial indexes for efficient spatial query processing kihong kim sang k. cha worp (a public resource and repository for workflow and process management) and upcoming events in workflow amit sheth the human-computer technology group at bellcore rita m. bush the design, simulation, and evaluation of a menu driven user interface as the number of system users increases, the degree of formal training of the typical user declines. techniques such as menu selection, which can best accommodate the novice user, almost necessarily must be included in a strategy for person-computer communication. yet care must be taken that the experienced or sophisticated user is not encumbered with an interface that involves frustratingly slow entry of commands or procedures. this paper details the process and techniques required to develop and test an interface that would satisfy the needs of a broad spectrum of users. two design and evaluation iterations are described. ricky e. savage james k. habinek thomas w. barnhart design practice and interface usability: evidence from interviews with designers research into human-computer interaction (hci) is mainly conducted by engineering psychologists, cognitive psychologists and computer scientists. the principal consumers of applied hci research, on the other hand, are human factors practitioners and system designers and developers. the hci researcher who believes his or her findings to be of practical relevance has therefore to consider the interface between researcher and practitioner as well as that between system and user: the products of hci research must not only be relevant but also "user-friendly" to the practitioner. this problem is not merely one of communication between different professional communities, as the optimal route for the translation of research findings into terms that will be of practical use in the design process is itself a matter of considerable uncertainty and debate. thus there are many instances in the research literature where apparently contradictory recommendations can all too easily be drawn from findings based on sound but, by its very nature, limited experimentation (e.g., compare the findings of landauer et al., in press, ledgard et al., 1980, and scapin, 1981, on naming text-editing operations). one of the prerequisites for tackling both the communication problem and the translation problem is an understanding of relevant aspects of decision-making in design which influence the usability of the end-user interface. this is so for three reasons. first, an appreciation of the nature of design practice will at least help identify those areas where research input might have the greatest impact and allow researchers to direct their efforts towards them. second, it may identify possible modifications to existing design practice which would allow research input to be used more effectively. 
finally, it would be somewhat surprising if current design practice were not to furnish researchers with any insights into the underlying processes of users. the experience and skills of the practitioner should be a valuable source of information for the hci researcher. for these reasons, we have been documenting some of the relationships between design practice and the usability of systems for use by non-experts. while there is considerable literature on programming behaviour (e.g. mayer, 1981), reports of design behaviour are rare, other than occasional descriptions by practitioners of the interface design of their own products (e.g., botterill, 1982; smith et al., 1982). this paper focusses on the influence of the individual designer's decision-making. evidence is taken from interviews with experienced system designers concerning design issues influencing the nature of the user interface which had arisen with systems they had recently worked on. for two of the systems usability investigations had been performed (see lewis & mack, 1982 and hammond et al., 1983). n. hammond a. jørgensen a. maclean p. barnard j. long comments on "the role of balloon help" by david farkas john r. talburt internet-based information management technology gail e. kaiser atomic incremental garbage collection and recovery for a large stable heap a stable heap is storage that is managed automatically using garbage collection, manipulated using atomic transactions, and accessed using a uniform storage model. these features enhance reliability and simplify programming by preventing errors due to explicit deallocation, by masking failures and concurrency using transactions, and by eliminating the distinction between accessing temporary storage and permanent storage. stable heap management is useful for programming languages for reliable distributed computing, programming languages with persistent storage, and object-oriented database systems. many applications that could benefit from a stable heap (e.g., computer-aided design, computer-aided software engineering, and office information systems) require large amounts of storage, timely responses for transactions, and high availability. we present garbage collection and recovery algorithms for a stable heap implementation that meet these goals and are appropriate for stock hardware. the collector is incremental: it does not attempt to collect the whole heap at once. the collector is also atomic: it is coordinated with the recovery system to prevent problems when it moves and modifies objects. the time for recovery is independent of heap size, even if a failure occurs during garbage collection. elliot k. kolodner william e. weihl an architecture for automatic relational database sytem conversion changes in requirements for database systems necessitate schema restructuring, database translation, and application or query program conversion. an alternative to the lengthy manual revision process is proposed by offering a set of 15 transformations keyed to the relational model of data and the relational algebra. motivations, examples, and detailed descriptions are provided. ben shneiderman glenn thomas transformation of data traversals and operations in application programs to account for semantic changes of databases this paper addresses the problem of application program conversion to account for changes in database semantics that result in changes in the schema and database contents. 
with the observation that the existing data models can be viewed as alternative ways of modeling the same database semantics, a methodology of application program analysis and conversion based on an existing-dbms-model- and schema-independent representation of both the database and programs is presented. in this methodology, the source and target databases are described in terms of the association types of a semantic association model. the structural properties, the integrity constraints, and the operational characteristics (storage operation behaviors) of the association types are more explicitly defined to reveal the semantics that is generally hidden in application programs. the explicit descriptions of the source and target databases are used as the basis for program analysis and conversion. application programs are described in terms of a small number of "access patterns" which define the data traversals and operations of the programs. in addition to the methodology, this paper (1) describes a model of a generalized application program conversion system that serves as a framework for research, (2) presents an analysis of access patterns that serve as the primitives for program description, (3) delineates some meaningful semantic changes to databases and their corresponding transformation rules for program conversion, (4) illustrates the application of these rules to two different approaches to program conversion problems, and (5) reports on the development effort undertaken at the university of florida. stanley y. w. su herman lam der her lo the fischlar digital video system: a digital library of broadcast tv programmes fischlár is a system for recording, indexing, browsing and playback of broadcast tv programmes which has been operational on our university campus for almost 18 months. in this paper we give a brief overview of how the system operates, how tv programmes are organised for browse/playback and a short report on the system usage by over 900 users in our university. a. f. smeaton n. murphy n. e. o'connor s. marlow h. lee k. mcdonald p. browne j. ye partial collection replication versus caching for information retrieval systems the explosion of content in distributed information retrieval (ir) systems requires new mechanisms to attain timely and accurate retrieval of unstructured text. in this paper, we compare two mechanisms to improve ir system performance: partial collection replication and caching. when queries have locality, both mechanisms return results more quickly than sending queries to the original collection(s). caches return results when queries exactly match a previous one. partial replicas are a form of caching that return results when the ir technology determines the query is a good match. caches are simpler and faster, but replicas can increase locality by detecting _similarity_ between queries that are not exactly the same. we use real traces from thomas and excite to measure query locality and similarity. with a very restrictive definition of query similarity, similarity improves query locality up to 15% over exact match. we use a validated simulator to compare their performance, and find that even if the partial replica hit rate increases only 3 to 6%, it will outperform simple caching under a variety of configurations. a combined approach will probably yield the best performance. zhihong lu kathryn s.
mckinley locking objects and classes in multiversion object-oriented databases wojciech cellary waldemar wieczerzycki on open cscw systems kurt keller the specification and enforcement of authorization constraints in workflow management systems in recent years, workflow management systems (wfmss) have gained popularity in both research and commercial sectors. wfmss are used to coordinate and streamline business processes. very large wfmss are often used in organizations with users in the range of several thousands and process instances in the range of tens and thousands. to simplify the complexity of security administration, it is common practice in many businesses to allocate a role for each activity in the process and then assign one or more users to each role---granting an authorization to roles rather than to users. typically, security policies are expressed as constraints (or rules) on users and roles; separation of duties is a well-known constraint. unfortunately, current role-based access control models are not adequate to model such constraints. to address this issue we (1) present a language to express both static and dynamic authorization constraints as clauses in a logic program; (2) provide formal notions of constraint consistency; and (3) propose algorithms to check the consistency of constraints and assign users and roles to tasks that constitute the workflow in such a way that no constraints are violated. elisa bertino elena ferrari vijay atluri learning the shape of information: a longitudinal study of web-news reading a concept called shape is proposed to experimentally examine the development of users' mental representations of information spaces over time. twenty five novice users are exposed to two differently designed news web sites over five sessions. the longitudinal impacts on users' comprehension, usability, and navigation are examined. misha w. vaughan andrew dillon interface techniques for minimizing disfluent input to spoken language systems sharon oviatt a configuration management approach for large workflow management systems scalability to large, heterogeneous, and distributed environments is an important requirement for workflow management systems (wfms). as a consequence, the management of the configuration of a wfms installation becomes a key issue. this paper proposes an approach for managing the configuration of wfms together with an assignment strategy for workflow instances. separating the logical issues of the workflow model from the physical configuration of a wfms is the basis of our strategy. a formalization of physical organizational requirements in a wfms configuration covering access rights, usage policies, and costs for the access to wfms servers is presented and used in the assignment strategy for workflow instances. the results of our approach fit well for many existing wfms and also for the reference architecture of the workflow management coalition. hans schuster jens neeb ralf schamburger the organizational contexts of development and use jonathan grudin integrating personal and community recommendations in collaborative filtering (workshop session)(abstract only) this full-day workshop will bring together researchers and practitioners to explore techniques for integrating personal and community recommendations into cscw systems. personal recommendations are tailored to an individual user, while community recommendations reflect the values or tastes of a broader community of users. joseph a. 
konstan krishna bharat prototyping an instructible interface: moctec moctec is a set of interactive mockups of an interface for programming search and replace tasks by example. the user guides inference by pointing at relevant features of data. david l. maulsby touchscreen toggle design catherine plaisant daniel wallace future directions in user-computer interface software james d. foley finding redundant paths in hypermedia web page prerequisites can be used to constrain how a user can explore a web site. this can be used in a number of ways, but plays an especially important role in educational sites. if disjunctive and conjunctive constraints are used to describe the preferences, some pages may become redundant given the user's previous path and the user may not want to visit them. the redundancy of a page is not a local property and may depend on many preferences. this paper describes an efficient algorithm that finds these redundant pages and discusses some applications of the approach. roland hubscher binary relationship imposition rules on ternary relationships in er modeling il-yeol song trevor h. jones e. k. park supporting content retrieval from www via "basic level categories" (poster abstract) eduard hoenkamp onno stegeman lambert schomaker networking on the network phil agre data mining and knowledge discovery in databases usama fayyad ramasamy uthurusamy applying user research directly to information system design (panel session) marcia j. bates raya fidel efthimis efthimiadis annelise mark pejtersen object imperatives! fintan culwin performance evaluation of a temporal database management system a prototype of a temporal database management system was built by extending ingres. it supports the temporal query language tquel, a superset of quel, handling four types of database static, rollback, historical and temporal. a benchmark set of queries was run to study the performance of the prototype on the four types of databases. we analyze the results of the benchmark, and identify major factors that have the greatest impact on the performance of the system. we also discuss several mechanisms to address the performance bottlenecks we encountered. ilsoo ahn richard snodgrass a methodology for the design and implementation of virtual interfaces verlynda dobbs sandra a. mamrak third generation tp monitors: a database challenge in a 1976 book, "algorithms + data structures = programs" [15], niklaus wirth defined programs to be algorithms and data structures. of course, by now we know that man does not live from programs alone, and that there is a second fundamental computer science equation: "programs + databases = information systems." database researchers have traditionally focused on the database component of the equation, providing shared and persistent repositories for the data that programs need and produce. as a matter of fact, a lot of us have worked hard to hide or ignore the programs component. for instance, non-procedural languages like sql and relational algebra have been the holy grail of the database field, letting us describe the data in the way we want without need to write messy programs. the magic wand of transactions makes programs that execute concurrently with our non- procedural statements suddenly disappear: these other programs appear as atomic actions that are either executed before we started looking at our data, or will be executed after we are all done with our work. 
the wonders of fault tolerance and automatic recovery guarantee that we never have to concern ourselves with our statements failing or being interrupted. the data we need will always be there for us, and our statements will always run to completion. unfortunately, the real programs that operate on databases are in many cases more complex than the classical ones like "withdraw 100 dollars from my account" or "find me all my blue eyed great-grandfathers." for one, programs may be much longer, requiring many database interactions. furthermore, programs need to interact with other concurrent programs, getting results from and to them. they may also need to be aware of their environment, perhaps monitoring the execution of another program, or taking corrective action when some system components fail. of course, this is not to say that transactions and non-procedural query languages have not been great contributions. in many cases, they are all that is needed to program one's application. but beyond that there are many cases when one must deal with multiple concurrent applications. indeed, a critical problem facing complex enterprises is the automation of complex business processes. enterprises today are drowning in an ocean of data, with a few isolated islands of automation consisting of heterogeneous databases and legacy application programs, each of which automates some point function (e.g., order entry, inventory, accounting, billing) within the enterprise. as the enterprise attempts to automate its business processes, these isolated islands have to be bridged: complex information systems must be developed that need to span many of these databases and application programs. traditional database systems do not provide the supporting environment for this. our programming languages colleagues have been working on the programs component of our fundamental equation, but the database component has traditionally been ignored or hidden. there has been a lot of recent interest on languages that support persistent objects, but often the goal is to make the database that holds the objects look as little as possible like a database. that is, the persistent objects are to be handled just as if they were volatile objects, even though they are not. also, the programming languages researchers have borrowed the notions of transactions and serializable schedules to hide as much as possible concurrent execution and failures of programs. finally, traditional programming languages (there are exceptions[4, 13]) have focused on "programming in the small," as opposed to "programming in the large." the goal of the former is to program single applications or to solve single problems, as opposed to programming an entire enterprise and all of its interacting applications. researchers from both camps have recently been addressing both components of the "programs + databases" equation. for example, database researchers have been adding triggers and procedures to database objects[2], resulting in so called active databases. these are important steps in the right direction (other related steps are listed below), but still do not address the full programming in the large problem. in our opinion, the only software providers that have tackled both components of the "programs + databases" equation, and have a proven track record with real applications, are the transaction processing monitor (tpm) builders[9]. umesh dayal hector garcia-molina mei hsu ben kao ming- chien shan bulletin board systems patrick r. 
dewey model-based user interface design by example and by interview martin r. frank james d. foley qbi: query by icons antonio massari stefano pavani lorenzo saladini panos k. chrysanthis the datacycle architecture t. f. bowen g. gopal g. herman t. hickey k. c. lee w. h. mansfield j. raitz a. weinrib supporting distributed groups with a montage of lightweight interactions the montage prototype provides lightweight audio-video glances among distributed collaborators and integrates other applications for coordinating future contact. we studied a distributed group across three conditions: before installing montage, with montage, and after removing montage. we collected quantitative measures of usage as well as video-tape and user perception data. we found that the group used montage glances for short, lightweight interactions that were like face-to-face conversations in many respects. yet like the phone, montage offered convenient access to other people without leaving the office. most glances revealed that the person was not available, so it was important to integrate other tools for coordinating future interaction. montage did not appear to displace the use of e-mail, voice-mail, or scheduled meetings. john c. tang ellen a. isaacs monica rua context-based synchronization: an approach beyond semantics for concurrency control the expressiveness of various object-oriented languages is investigated with respect to their ability to create new objects. we focus on database method schemas (dms), a model capturing the data manipulation capabilities of a large class of deterministic methods in object-oriented databases. the results clarify the impact of various language constructs on object creation. several new constructs based on expanded notions of deep equality are introduced. in particular, we provide a tractable construct which yields a language complete with respect to object creation. the new construct is also relevant to query complexity. for example, it allows expressing in polynomial time some queries, like counting, requiring exponential space in dms alone. man h. wong divyakant agrawal navigational plans for data integration marc friedman alon levy todd millstein document filtering method using non-relevant information profile document filtering is a task to retrieve documents relevant to a user's profile from a flow of documents. generally, filtering systems calculate the similarity between the profile and each incoming document, and retrieve documents with similarity higher than a threshold. however, many systems set a relatively high threshold to reduce retrieval of non-relevant documents, which results in the ignorance of many relevant documents. in this paper, we propose the use of a non-relevant information profile to reduce the mistaken retrieval of non-relevant documents. results from experiments show that this filter has successfully rejected a sufficient number of non-relevant documents, resulting in an improvement of filtering performance. keiichiro hoashi kazunori matsumoto naomi inoue kazuo hashimoto optimization of extended database query languages timos k. sellis leonard shapiro multiview (abstract): a www interface for globe visualizations j.-f. de la beaujardiere horace mitchell a. fritz hasler quikwriting: continuous stylus-based text entry ken perlin natural language systems: how are they meeting human needs? 
one goal of natural language research is to make systems more accessible to their users by allowing them to interact with machines in a language as close as possible to the language people use among themselves. developing systems that answer natural language questions is one way to allow users to interact easily with computers. in the following sections, i briefly describe ways in which researchers have been successful in this attempt, limitations inherent in existing natural language systems, and current efforts to bring systems closer to meeting the needs of their users. to illustrate these issues, i focus on natural language database systems as one example of question-answering systems. kathleen r. mckeown automated test plan generator for database application systems as database applications become larger and more complex, the development of suitable test plans to assure robust and reliable software becomes essential. the testing procedure must establish the application's reliability from the human expectation viewpoint as well as its correctness and robustness. the predetermined goals of this testing procedure are to assure high reliability of the application and ascertain that the software performed according to specifications, while remaining within the project's time and budgetary constraints. errors identified during development can be corrected with minimal cost when compared to errors uncovered by users after the software release. by overlapping testing with development, and automating the production of a standardized set of test plans, the incremental production time can be minimized. by generating an application-specific test plan, syntactic and semantic specifications of the actual system applications can be tested. a side effect of this specific, interleaved correctness testing is the uncovering of user interface inconsistencies before exposure of the application to the real end user. analysis of the results of our experimental testing showed the keystone to be the test plan. since the completeness and veracity of this document is a prime determining factor in the validity of the testing, we have developed a program using c++ on a sun workstation to automatically generate test plans from the design schema. this paper presents an automated/manual testing scheme suitable for assuring the validity of large database applications. the initial premises, design, execution and consequences of the test procedure are detailed, followed by a description of the automation of strategic components. maryann robbert fred j. maryanski a user interface for statistical databases many statistical database applications require the use of aggregations to perform queries. typically, the aggregation part of a query is the most difficult for a user to formulate. this work deals with the automation of the aggregation formulation process. to this end, a restricted type of aggregation, called summarization, is introduced. it is shown that summarization is sufficient in most cases. essentially, summarization encourages the user to formulate a query without conceptualizing it in terms of aggregation. instead, the user simply expresses the tabular form of the result. another problem found in many statistical database applications arises when there are different measurements based on different partitionings of the same entity. for example, consider census data based on census tracts and pollution data based on pollution districts.
furthermore, the boundaries of census tracts and pollution districts do not coincide. meaningful correlations between the census and pollution data can be obtained by the use of a statistical interpretation such as a weighted average or cubic spline contouring. the expression of a particular statistical interpretation to be used is the most difficult part of query formulation. accordingly, the concept of complex summarization is introduced which allows the pre-specification of statistical interpretations. the process of choosing a statistical interpretation is easy and straightforward. here again, a query is formulated by expressing the tabular form of the result and not conceptualizing in terms of aggregation. rowland r. johnson quest: a project on database mining r. agrawal m. carey c. faloutsos s. ghosh m. houtsma t. imielinski b. iyer a. mahboob h. miranda r. srikant a. swami a graph-oriented object database model a simple, graph-oriented database model, supporting object-identity, is presented. for this model, a transformation language based on elementary graph operations is defined. this transformation language is suitable for both querying and updates. it is shown that the transformation language supports both set-operations (except for the powerset operator) and recursive functions. marc gyssens jan paredaens dirk van gucht dense-order constraint databases (extended abstract) stephane grumbach jianwen su "finding and reminding" reconsidered scott fertig eric freeman david gelernter corpus-based stemming using cooccurrence of word variants stemming is used in many information retrieval (ir) systems to reduce variant word forms to common roots. it is one of the simplest applications of natural- language processing to ir and is one of the most effective in terms of user acceptance and consistency, though small retrieval improvements. current stemming techniques do not, however, reflect the language use in specific corpora, and this can lead to occasional serious retrieval failures. we propose a technique for using corpus-based word variant cooccurrence statistics to modify or create a stemmer. the experimental results generated using english newspaper and legal text and spanish text demonstrate the viability of this technique and its advantages relative to conventional approaches that only employ morphological rules. jinxi xu w. bruce croft pbir - perception-based image retrieval we demonstrate a system that we have built on our proposed perception-based image retrieval (pbir) paradigm. this pbir system achieves accurate similarity measurements by rooting image characterization in human perception and by learning user's query concept through an intelligent sampling process. we show that our system can usually grasp a user's query concept with a small number of labeled instances. edward chang kwang-ting cheng lihyuarn l. chang isolation of transaction aborts in object-oriented database systems we address the problem of recovery in object-oriented data-bases from considerations of the semantics of committed transactions and the efficiency of recovery procedures. the recovery strategy used is update-in-place. we show that efficient recovery procedures should allow transactions to abort independently of one another by executing inverse operations. we refer to these requirements collectively as the recovery isolation property. we present a formal definition of recovery isolation and prove that schedules possessing this property do not suffer from rollback dependencies. 
recovery isolation is useful in constructing concurrency control protocols that go beyond commutativity, and in parallel and high performance database systems. we define the notion of strict schedules for object-oriented databases as an extension of the analogous definition for read and write operations. we show that strict schedules possess the recovery isolation property. shankar pal sitaram lanka rbac support in object-oriented role databases raymond k. wong three perspectives (panel session): if markus' 1983 classic study, "power, politics, and mis implementation," were being reviewed today allen s. lee michael myers guy pare cathy urquhart m. lynne markus exploring remote images: a telepathology workstation catherine plaisant david a. carr hiroaki hasegawa claims, observations and inventions: analysing the artifact andrew f. monk peter c. wright philly sound park: using animation as an architectural design/presentation tool christopher janney integration of synchronous and asynchronous collaboration activities the integrated synchronous and asynchronous collaboration (isaac) project [1] is constructing a communication and collaboration system to bridge traditional workgroup barriers of time and space. possible applications include military command and control, corporate real-time collaboration, and distributed teams of research scientists. thus, this system must host the widest possible range of applications, and must run on heterogeneous hardware. isaac incorporates real-time (synchronous) collaboration technologies developed by the habanero® project [2,3] at the national center for supercomputing applications at the university of illinois urbana-champaign, with asynchronous extensions. isaac research is aimed at moving information between synchronous and asynchronous modes. isaac's session capture conceptually transforms a real-time multiple tool collaboration into a multimedia document, which can be analyzed and reused by other programs. automated segmentation and indexing of captured audio and videoteleconference traffic adds further information. larry s. jackson ed grossman whisper: a wristwatch style wearable handset masaaki fukumoto yoshinobu tonomura composing interactive virtual operas alain bonardi francis rousseaux object database evolution using separation of concerns awais rashid peter sawyer parallel algorithms for the execution of relational database operations this paper presents and analyzes algorithms for parallel processing of relational database operations in a general multiprocessor framework. to analyze alternative algorithms, we introduce an analysis methodology which incorporates i/o, cpu, and message costs and which can be adjusted to fit different multiprocessor architectures. algorithms are presented and analyzed for sorting, projection, and join operations. while some of these algorithms have been presented and analyzed previously, we have generalized each in order to handle the case where the number of pages is significantly larger than the number of processors. in addition, we present and analyze algorithms for the parallel execution of update and aggregate operations. dina bitton haran boral david j. dewitt w. kevin wilkinson music-notation searching and digital libraries almost all work on music information retrieval to date has concentrated on music in the audio and event (normally midi) domains.
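the bitton, boral, dewitt, and wilkinson abstract above analyzes parallel algorithms for relational operations such as join. purely as a rough, hypothetical illustration of the hash-partitioning idea that underlies many parallel join algorithms, and not code from that paper, a python sketch might look like this (here the partitions are processed in a simple loop; in a real system each partition pair would be handled by a separate processor):

from collections import defaultdict

def partition(rows, key_index, n_parts):
    # hash-partition the rows of a relation on the join attribute
    parts = [[] for _ in range(n_parts)]
    for row in rows:
        parts[hash(row[key_index]) % n_parts].append(row)
    return parts

def parallel_hash_join(r, s, r_key, s_key, n_parts=4):
    # join relations r and s on r[r_key] == s[s_key];
    # matching tuples can only fall in matching partitions
    r_parts = partition(r, r_key, n_parts)
    s_parts = partition(s, s_key, n_parts)
    result = []
    for rp, sp in zip(r_parts, s_parts):
        # build a hash table on one partition of r
        table = defaultdict(list)
        for row in rp:
            table[row[r_key]].append(row)
        # probe it with the matching partition of s
        for row in sp:
            for match in table[row[s_key]]:
                result.append(match + row)
    return result

# example: join employees (id, name) with salaries (id, amount)
emps = [(1, "ann"), (2, "bo"), (3, "cy")]
sals = [(1, 100), (3, 300)]
print(parallel_hash_join(emps, sals, 0, 0))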
however, music in the form of notation, especially conventional music notation (cmn), is of much interest to musically-trained persons, both amateurs and professionals, and searching cmn has great value for digital music libraries. one obvious reason little has been done on music retrieval in cmn form is the overwhelming complexity of cmn, which requires a very substantial investment in programming before one can even begin studying music ir. this paper reports on work adding music-retrieval capabilities to nightingale, an existing professional-level music-notation editor. donald byrd optimization of nested sql queries revisited current methods of evaluating nested queries in the sql language can be inefficient in a variety of query and data base contexts. previous research in the area of nested query optimization which sought methods of reducing evaluation costs is summarized, including a classification scheme for nested queries, algorithms designed to transform each type of query to a logically equivalent form which may then be evaluated more efficiently, and a description of a major bug in one of these algorithms. further examination reveals another bug in the same algorithm. solutions to these bugs are proposed and incorporated into a new transformation algorithm, and extensions are proposed which will allow the transformation algorithms to handle a larger class of predicates. a recursive algorithm for processing a general nested query is presented and the action of this algorithm is demonstrated. this algorithm can be used to transform any nested query. richard a. ganski harry k. t. wong building mosaics from video using mpeg motion vectors ryan c. jones daniel dementhon david s. doermann multi-table joins through bitmapped join indices patrick o'neil goetz graefe tvdbms shakeel a. khoja wendy hall bringing object-relational technology to the mainstream over the last few years, oracle has evolved its flagship relational database system into an object-relational system by adding an extensible type system, object storage, an object cache, an extensible query and indexing framework, support for multimedia datatypes, a server-based scalable java virtual machine, as well as enhancing its sql ddl and dml language. these extensions were done with the practical goal of bringing objects to mainstream use. vishu krishnamurthy sandeepan banerjee anil nori focus+context views of world-wide web nodes sougata mukherjea yoshinori hara backtracking: the eternal flame chris welty the work to make a network work: studying cscw in action this paper reports on a field study of the procurement, implementation and use of a local area network devoted to running cscw-related applications in an organization within the u.k.'s central government. in this particular case, the network ran into a number of difficulties, was resisted by its potential users for a variety of reasons, was faced with being withdrawn from service on a number of occasions and (at the time of writing) remains only partly used. the study points to the kinds of problems that a project to introduce computer support for cooperative work to an actual organization is likely to face and a series of concepts are offered to help manage the complexity of these problems. in so doing, this paper adds to and extends previous studies of cscw tools in action but also argues that experience from the field should be used to re-organise the research agenda of cscw.
john bowers media streams: representing video for retrieval and repurposing marc davis distributed algorithms for dynamic replication of data we present two distributed algorithms for dynamic replication of a data-item in communication networks. the algorithms are adaptive in the sense that they change the replication scheme of the item (i.e. the set of processors at which the data- item is replicated), as the read-write pattern of the processors in the network changes. each algorithm continuously moves the replication scheme towards an optimal one, where optimality is defined with respect to different objective functions. one algorithm optimizes the communication cost objective function, and the other optimizes the communication time. we also provide a lower bound on the performance of any dynamic replication algorithm. ouri wolfson sushil jajodia accessing a digital library collection through multiple hypertextual information spaces (abstract) eric h. johnson using n-grams for korean text retrieval joo ho lee jeong soo ahn personification of the computer: a pathological metaphor in is semantic and syntactic aspects of metaphor are explored and explained as a preface to an exposition of the impact on information systems (is) personnel of a popular root metaphor: the personification of the computer. it is suggested that since action metaphors determine attitudes and future directions, this personification may be responsible for confusing both end- user and researcher: that if the potential of computers is to be more fully realised and utilised, our perceptions of the language in which we describe them should be illuminated. there is a danger, it is argued, that if human attributes are ascribed to the computer, personnel in is begin to act out the metaphor pathologically. nanette monin d. john monin optimization of sequence queries in database systems the need to search for complex and recurring patterns in database sequences is shared by many applications. in this paper, we discuss how to express and support efficiently sophisticated sequential pattern queries in databases. thus, we first introduce sql-ts, an extension of sql, to express these patterns, and then we study how to optimize search queries for this language. we take the optimal text search algorithm of knuth, morris and pratt, and generalize it to handle complex queries on sequences. our algorithm exploits the inter- dependencies between the elements of a sequential pattern to minimize repeated passes over the same data. experimental results on typical sequence queries, such as double bottom queries, confirm that substantial speedups are achieved by our new optimization techniques. reza sadri carlo zaniolo amir zarkesh jafar adibi targeting audiences on the internet tanya l. cheyne frank e. ritter translation with optimization from relational calculus to relational algebra having aggregate functions most of the previous translations of relational calculus to relational algebra aimed at proving that the two languages have the equivalent expressive power, thereby generating very complicated relational algebra expressions, especially when aggregate functions are introduced. this paper presents a rule-based translation method from relational calculus expressions having both aggregate functions and null values to optimized relational algebra expressions. thus, logical optimization is carried out through translation. 
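the sadri, zaniolo, zarkesh, and adibi abstract above builds on the knuth, morris and pratt (kmp) text-search algorithm and generalizes it to sequence queries. purely as a point of reference, here is a minimal python sketch of the classic kmp algorithm on plain strings; the sql-ts generalization described in that abstract is not shown, and all names are illustrative.

def kmp_failure(pattern):
    # for each prefix of the pattern, the length of its longest
    # proper prefix that is also a suffix
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def kmp_search(text, pattern):
    # return start offsets of pattern in text without re-scanning text
    if not pattern:
        return []
    fail = kmp_failure(pattern)
    hits, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]
    return hits

print(kmp_search("abababca", "abab"))   # [0, 2]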
the translation method comprises two parts: the translation of the relational calculus kernel and the translation of aggregate functions. the former uses the familiar step-wise rewriting strategy, while the latter adopts a two-phase rewriting strategy via standard aggregate expressions. each translation proceeds by applying a heuristic rewriting rule in preference to a basic rewriting rule. after introducing sql-type null values, their impact on the translation is thoroughly investigated, resulting in several extensions of the translation. a translation experiment with many queries shows that the proposed translation method generates optimized relational algebra expressions. it is shown that heuristic rewriting rules play an essential role in the optimization. the correctness of the present translation is also shown. ryohei nakano on the reuse of past optimal queries vijay v. raghavan hayri sever hci education: past, present and future? jean b. gasen editorial steven pemberton using semantic values to facilitate interoperability among heterogeneous information systems large organizations need to exchange information among many separately developed systems. in order for this exchange to be useful, the individual systems must agree on the meaning of their exchanged data. that is, the organization must ensure semantic interoperability. this paper provides a theory of semantic values as a unit of exchange that facilitates semantic interoperability between heterogeneous information systems. we show how semantic values can either be stored explicitly or be defined by environments. a system architecture is presented that allows autonomous components to share semantic values. the key component in this architecture is called the context mediator, whose job is to identify and construct the semantic values being sent, to determine when the exchange is meaningful, and to convert the semantic values to the form required by the receiver. our theory is then applied to the relational model. we provide an interpretation of standard sql queries in which context conversions and manipulations are transparent to the user. we also introduce an extension of sql, called context-sql (c-sql), in which the context of a semantic value can be explicitly accessed and updated. finally, we describe the implementation of a prototype context mediator for a relational c-sql system. edward sciore michael siegel arnon rosenthal local verification of global integrity constraints in distributed databases we present an optimization for integrity constraint verification in distributed databases. the optimization allows a global constraint, i.e. a constraint spanning multiple databases, to be verified by accessing data at a single database, eliminating the cost of accessing remote data. the optimization is based on an algorithm that takes as input a global constraint and data to be inserted into a local database. the algorithm produces a local condition such that if the local data satisfies this condition then, based on the previous satisfaction of the global constraint, the global constraint is still satisfied.
if the local data does not satisfy the condition, then a conventional global verification procedure is required. ashish gupta jennifer widom information design methods and the applications of virtual worlds technology at worldesign, inc. robert jacobson book preview jennifer bruer handling missing data by using stored truth values this paper proposes a method for handling inapplicable and unknown missing data. the method is based on: (1) storing default values (instead of null values) in place of missing data, (2) storing truth values that describe the logical status of the default values in corresponding fields of corresponding tables. four-valued logic is used so that the logical status of the default data values can be described as not just true or false, but also as inapplicable or unknown. this method, in contrast to the "hidden byte" approach, has two important advantages: (1) because the logical status of all data is represented explicitly in tables, all 4-valued operations can be handled via a 2-valued data manipulation language, such as sql. language extensions for handling missing data (e.g., "is null") are not necessary. (2) because data fields always contain a default value (as opposed to a null value or mark), it is possible to do arithmetic across missing data and to interpret the logical status of the result by means of logical operations on the corresponding stored truth values. g. h. gessert minimum covers in relational database model david maier technical opinion: deconstructing the "any" key munindar p. singh mona singh an overview of semistructured data dan suciu achieving practical development-merging skill bases (panel session) david lowe deena larsen bill bly robert kendall les carr peter nurnberg lawrence clark an experimental evaluation of on-line help for non-programmers an interactive computer system was made easier to learn for non-programmers by modifying the on-line help and error messages of a system designed primarily for programmers. the modifications included supplementing the existing help command with a help key, making the content of help and error messages more concrete, responding to command synonyms, and more. the systems were evaluated in a between-groups experiment in which office workers with no programming experience were asked to perform a typical office task using one of the unfamiliar interactive computer systems. the results of the experiment supported the inclusion of the modifications. non-programmers using the modified system completed the computer task in less time, with greater accuracy, and with better resulting attitudes than those who used the system designed primarily for programmers. celeste s. magers electronic mail in an expanding universe (abstract) anita borg online association rule mining we present a novel algorithm to compute large itemsets online. the user is free to change the support threshold any time during the first scan of the transaction sequence. the algorithm maintains a superset of all large itemsets and for each itemset a shrinking, deterministic interval on its support. after at most 2 scans the algorithm terminates with the precise support for each large itemset. typically our algorithm is by an order of magnitude more memory efficient than apriori or dic. christian hidber information product evaluation as asynchronous communication in context: a model for organizational research lisa d.
murphy customizing multimedia information access daniela rus devika subramanian on effective multi-dimensional indexing for strings as databases have expanded in scope from storing purely business data to include xml documents, product catalogs, e-mail messages, and directory data, it has become increasingly important to search databases based on wild-card string matching: prefix matching, for example, is more common (and useful) than exact matching, for such data. in many cases, matches need to be on multiple attributes/dimensions, with correlations between the dimensions. traditional multi-dimensional index structures, designed with (fixed length) numeric data in mind, are not suitable for matching unbounded length string data. in this paper, we describe a general technique for adapting a multi- dimensional index structure for wild-card indexing of unbounded length string data. the key ideas are (a) a carefully developed mapping function from strings to rational numbers, (b) representing an unbounded length string in an index leaf page by a fixed length offset to an external key, and (c) storing multiple elided tries, one per dimension, in an index page to prune search during traversal of index pages. these basic ideas affect all index algorithms. in this paper, we present efficient algorithms for different types of string matching. while our technique is applicable to a wide range of multi-dimensional index structures, we instantiate our generic techniques by adapting the 2-dimensional r-tree to string data. we demonstrate the space effectiveness and time benefits of using the string r-tree both analytically and experimentally. h. v. jagadish nick koudas divesh srivastava hypertext databases and data mining the volume of unstructured text and hypertext data far exceeds that of structured data. text and hypertext are used for digital libraries, product catalogs, reviews, newsgroups, medical reports, customer service reports, and the like. currently measured in billions of dollars, the worldwide internet activity is expected to reach a trillion dollars by 2002. database researchers have kept some cautious distance from this action. the goal of this tutorial is to expose database researchers to text and hypertext information retrieval (ir) and mining systems, and to discuss emerging issues in the overlapping areas of databases, hypertext, and data mining. soumen chakrabarti a model of data distribution based on texture analysis nabil kamel roger king hilites: the information service for the world hci community brian shackel james l. alty peter reid the effect of accessing nonmatching documents on relevance feedback traditional information retrieval (ir) systems only allow users access to documents that match their current query, and therefore, users can only give relevance feedback on matching documents (or those with a matching strength greater than a set threshold. this article shows that, in systems that allow access to nonmatching documents (e.g., hybrid hypertext and information retrieval systems), the strength of the effect of giving relevance feedback varies between matching and nonmatching documents. for positive feedback the results shown here are encouraging, as they can be justified by an intuitive view of the process. however, for negative feedback the results show behavior that cannot easily be justified and that varies greatly depending on the model of feedback used. mark d. 
dunlop high performance and availability through data distribution jay kasi on optimistic methods for concurrency control most current approaches to concurrency control in database systems rely on locking of data objects as a control mechanism. in this paper, two families of nonlocking concurrency controls are presented. the methods used are "optimistic" in the sense that they rely mainly on transaction backup as a control mechanism, "hoping" that conflicts between transactions will not occur. applications for which these methods should be more efficient than locking are discussed. h. t. kung john t. robinson reasoning on gestural interfaces through syndetic modelling recent advances in user interface development have been mainly driven by technical innovation, based either on new interaction devices and paradigms, or on algorithms for achieving realistic audio/visual effects [connor92] [robertson91a] [robertson91b] [berners92]. in such a rich environment, the user potentially interacts with the computer by addressing concurrently different modalities. while the technology-driven approach has made possible the implementation of systems in specific application areas, it largely misses an underlying theory. this makes it difficult to assess whether this technology will be effective for users, with the consequence that cognitive ergonomics is becoming an urgent requirement for the design of new interactive systems. attention has been paid to the psychology of terminal users from the very beginning of human-computer interface research [martin73]. however, existing established design techniques do not readily accommodate issues such as concurrency and parallelism, or the potential for the interaction with multiple interface techniques [coutaz93]. recently, work has taken place investigating models and techniques for the analysis and design of interactionally rich systems from a variety of disciplinary perspectives, as can be found in the dsv-is book series edited by [paterno94] and [bastide95]. formal methods have been one of a number of approaches; others include cognitive user models, design space representations, and software architecture models. applications for formal methods are well known, see for example [bowen95], and [gaudel94]. however, none of the cited applications use formal methods to examine the user interface. one reason is that the factors that influence the design of interactive systems depend mostly on psychological and social properties of cognition and work, rather than on abstract mathematical models of programming semantics. for this reason, claims made through formal methods about the properties of interactive systems must be grounded in some psychological or social theory. this paper builds on previous work carried out within the esprit amodeus project and shows how a new approach to human-computer interaction, called _syndetic_ modeling, can be used to gain insight into user-oriented properties of interactive systems. the word syndesis comes from the ancient greek and means _conjunction._ it is used to emphasize the key point of this approach: user and system models are described within a common framework that enables one to reason about how cognitive resources are mapped onto the functionality of the system. g. p. faconti database research at the university of queensland maria e. orlowska experiences with selecting search engines using metasearch search engines are among the most useful and high-profile resources on the internet.
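the kung and robinson abstract above relies on transaction backup and commit-time validation rather than locking. purely as a hypothetical toy sketch of that validation idea in python, and not code from the paper, a deferred-write transaction with backward validation might look like this (all class and field names are illustrative):

class Store:
    def __init__(self):
        self.data = {}
        self.committed_tn = 0
        self.history = []           # (transaction number, write set) pairs

class Transaction:
    # work on a private copy of updates, then validate at commit
    def __init__(self, store):
        self.store = store          # shared store acting as the database
        self.read_set = set()
        self.writes = {}            # deferred updates (write set)
        self.start_tn = store.committed_tn

    def read(self, key):
        self.read_set.add(key)
        return self.writes.get(key, self.store.data.get(key))

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        # validation: any transaction that committed after we started
        # must not have written anything we read
        for tn, write_set in self.store.history:
            if tn > self.start_tn and write_set & self.read_set:
                return False        # conflict: back up (abort) the transaction
        self.store.committed_tn += 1
        self.store.history.append((self.store.committed_tn, set(self.writes)))
        self.store.data.update(self.writes)
        return True

s = Store()
t1, t2 = Transaction(s), Transaction(s)
t1.write("x", 1); print(t1.commit())    # True
print(t2.read("x"), t2.commit())        # 1 False: t1 wrote x after t2 started, so t2 backs up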
the problem of finding information on the internet has been replaced with the problem of knowing where search engines are, what they are designed to retrieve, and how to use them. this article describes and evaluates savvysearch, a metasearch engine designed to intelligently select and interface with multiple remote search engines. the primary metasearch issue examined is the importance of carefully selecting and ranking remote search engines for user queries. we studied the efficacy of savvysearch's incrementally acquired metaindex approach to selecting search engines by analyzing the effect of time and experience on performance. we also compared the metaindex approach to the simpler categorical approach and showed how much experience is required to surpass the simple scheme. daniel dreilinger adele e. howe the aggregate data problem: a system for their definition and management in this paper we describe the fundamental components of a database management system for the definition, storage, manipulation and query of aggregate data, i.e. data which are obtained by applying statistical aggregations and statistical analysis functions over raw data. in particular, the attention has been focused on: (1) a data structure for the efficient storage and manipulation of aggregate data, called adas; (2) the graphical structures of the aggregate data model adamo for a more user-friendly definition and query of aggregate data; (3) a graphical user interface which enables a straightforward specification of the adamo structures; (4) a textual declarative query language to retrieve data from the aggregate database, called adquel. m. rafanelli a. bezenchek l. tininini extended commitment ordering, or guaranteeing global serializability by applying commitment order selectively to global transactions the extended commitment ordering (eco) property of transaction histories (schedules) generalizes the commitment ordering (co) property defined in [raz 90]. in a multi resource manager (rm) environment eco guarantees global serializability when supported locally by each rm that participates in global transactions (i.e., transactions that span more than a single rm) and provides local serializability (by any mechanism). eco assumes that a rm has the knowledge to distinguish local transactions (i.e., transactions confined to that rm) from global transactions. eco imposes an order condition, similar to the co condition, on the commit events of global transactions only, and thus, it is less constraining than co. like co, eco provides a fully distributed solution to the long standing problem of guaranteeing global serializability across rms with different concurrency control mechanisms. also, like co, no communication beyond atomic commitment (ac) protocol messages is required to enforce eco. when rms are provided with the information about transactions being local, and are coordinated solely via ac protocols (have the extended knowledge autonomy property), eco, applied locally together with local serializability in each rm involved with global transactions, is a necessary condition for guaranteeing global serializability. eco reduces to co when all the transactions are assumed to be global (e.g. if no knowledge about the transactions being local is available). yoav raz implementation of magic-sets in a relational database system we describe the implementation of the magic-sets transformation in the starburst extensible relational database system. 
to our knowledge this is the first implementation of the magic-sets transformation in a relational database system. the starburst implementation has many novel features that make our implementation especially interesting to database practitioners (in addition to database researchers). (1) we use a cost-based heuristic for determining join orders (sips) before applying magic. (2) we push all equality and non- equality predicates using magic, replacing traditional predicate pushdown optimizations. (3) we apply magic to full sql with duplicates, aggregation, null values, and subqueries. (4) we integrate magic with other relational optimization techniques. (5) the implementation is extensible. our implementation demonstrates the feasibility of the magic-sets transformation for commercial relational systems, and provides a mechanism to implement magic as an integral part of a new database system, or as an add-on to an existing database system. inderpal singh mumick hamid pirahesh generating requirements in a courier despatch management system jocelyn keep hilary johnson the effects of narrow-band width multipoint videoconferencing on group decision making and turn distribution this study reports an experiment that examines the effects that face-to-face meetings (ff) and two modes of videoconferencing, switching video (sv), which shows only the current speaker, and mixing video (mv), which shows each group member simultaneously, have on turn distribution and the quality of small group decision making. the subjects were 200 undergraduate students and the task was the nasa moon survival problem. multiple comparison tests indicated that mv yielded significantly higher group decision quality than ff. the other pairs, ff-sv and sv-mv, showed no significant differences. with regard to turn taking, there was almost no difference between sv-mv. shinji takao dynamic resource brokering for multi-user query execution diane l. davison goetz graefe speeding up bulk-loading of quadtrees gísli r. hjaltason hanan samet yoram j. sussmann that vision thing frank m. marchak shannon ford using domain knowledge in knowledge discovery with the explosive growth of the size of databases, many knowledge discovery applications deal with large quantities of data. there is an urgent need to develop methodologies which will allow the applications to focus search to a potentially interesting and relevant portion of the data, which can reduce the computational complexity of the knowledge discovery process and improve the interestingness of discovered knowledge. previous work on semantic query optimization, which is an approach to take advantage of domain knowledge for query optimization, has demonstrated that significant cost reduction can be achieved by reformulating a query into a less expensive yet equivalent query which produces the same answer as the original one. in this paper, we introduce a method to utilize three types of domain knowledge in reducing the cost of finding a potentially interesting and relevant portion of the data while improving the quality of discovered knowledge. in addition, we propose a method to select relevant domain knowledge without an exhaustive search of all domain knowledge. the contribution of this paper is that we lay out a general framework for using domain knowledge in the knowledge discovery process effectively by providing guidelines. suk-chung yoon lawrence j. henschen e. k. 
park sam makki experiments in retrieval of mineral information dusan cakmakov danco davcev an agent-oriented modeling approach this paper proposes an agent-oriented modeling method for information processing. first of all, it presents an information processing model for the real world; then the hierarchical organization of agents and the structure of an agent are given. furthermore, the logical representation of transactions and events is described. finally, an agent-oriented modeling process is put forward. modeling starts by describing the agents that participate in the process, then gathers the other information centered on the agents and creates the model. the approach can explicitly present the organizational structure of agents and the correlations among transactions, reflect dynamic process change by describing state changes of the system, and illustrate the dynamic character of the information process. hong liu g. zeng zongkai lin the language of coordination (doctoral colloquium) george m. wyner abstract specification of user interfaces ole lauridsen personalizing the capture of public experiences in this paper, we describe our work on developing a system to support the personalization of a captured public experience. specifically, we are interested in providing students with the ability to personalize the capture of the lecture experiences as part of the classroom 2000 project. we discuss the issues and challenges involved in designing a system that performs live integration of personal streams of information with multiple other streams of information made available to it through an environment designed to capture public information. khai n. truong gregory d. abowd jason a. brotherton naming objects in the digital library (working session) william y. arms help is but a phone call away... james c. sweeton jill tuer christine wendt updating derived relations: detecting irrelevant and autonomously computable updates consider a database containing not only base relations but also stored derived relations (also called materialized or concrete views). when a base relation is updated, it may also be necessary to update some of the derived relations. this paper gives sufficient and necessary conditions for detecting when an update of a base relation cannot affect a derived relation (an irrelevant update), and for detecting when a derived relation can be correctly updated using no data other than the derived relation itself and the given update operation (an autonomously computable update). the class of derived relations considered is restricted to those defined by psj-expressions, that is, any relational algebra expressions constructed from an arbitrary number of project, select and join operations (but containing no self-joins). the class of update operations consists of insertions, deletions, and modifications, where the set of tuples to be deleted or modified is specified by a selection condition on attributes of the relation being updated. jose a. blakeley neil coburn per-åke larson synchronized continuous media playback through the world wide web ketan mayer-patel david simpson david wu lawrence a. rowe support concept-based multimedia information retrieval: a knowledge management approach bin zhu marshall ramsey hsinchun chen rosie v. hauck tobun d. ng bruce schatz extending cscw into domestic environments (workshop session) (abstract only) this half-day workshop will aim to build a community of interest and research agenda around extending cscw methods and technologies to home settings.
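the blakeley, coburn, and larson abstract above characterizes irrelevant updates for views defined by project-select-join expressions. as a deliberately simplified python illustration of the idea for a selection-only view, which is a much weaker setting than the psj-expressions treated in the paper, and with entirely hypothetical relation and predicate names:

def is_irrelevant_insert(new_tuple, view_predicate):
    # an inserted base tuple cannot affect a selection view
    # if it fails the view's selection condition
    return not view_predicate(new_tuple)

# hypothetical view: employees in the sales department earning over 50000
sales_view = lambda t: t["dept"] == "sales" and t["salary"] > 50000

print(is_irrelevant_insert({"dept": "hr", "salary": 90000}, sales_view))     # True: view unaffected
print(is_irrelevant_insert({"dept": "sales", "salary": 60000}, sales_view))  # False: view must be updated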
relevant issues include the coordination of activities in public and private spaces; shared resource technologies; distributed coordination in and between households and the role of technology in everyday life. jon o'brien john hughes mark ackerman debby hindus an extended memoryless inference control model: accounting for dependence in table-level controls s. c. hansen e. a. unger a desk supporting computer-based interaction with paper documents before the advent of the personal workstation, office work practice revolved around the paper document. today the electronic medium offers a number of advantages over paper, but it has not eradicated paper from the office. a growing problem for those who work primarily with paper is lack of direct access to the wide variety of interactive functions available on personal workstations. this paper describes a desk with a computer-controlled projector and camera above it. the result is a system that enables people to interact with ordinary paper documents in ways normally possible only with electronic documents on workstation screens. after discussing the motivation for this work, this paper describes the system and two sample applications that can benefit from this style of interaction: a desk calculator and a french to english translation system. we describe the design and implementation of the system, report on some user tests, and conclude with some general reflections on interacting with computers in this way. william newman pierre wellner a computer network is a social network barry wellman videx: an integrated generic video indexing approach this paper presents an integrated generic technique for low- and high-level video indexing. the proposed approach tries to integrate the advantages of existing low- and high-level video indexing approaches by reducing their shortcomings. furthermore, the model introduces concepts for a detailed structuring of video streams, and for correlations of low- and high-level video objects. the proposed model is called generic, as it only defines a framework of classes for implementing a prototype of a distributed multimedia information system supporting content-based video retrieval. roland tusch harald kosch lazlo boszormenyi designing a trans-pacific virtual space lia adams lori toomey mediablocks: physical containers, transports, and controls for online media brygg ullmer hiroshi ishii dylan glas arabia online: answering the call of the holy land r. w. burniske netscape communicator's collapsible toolbars irene au shuang li presenting web site search results in context: a demonstration michael chen marti a. hearst a database perspective on knowledge discovery tomasz imielinski heikki mannila towards the design of a hypermedia journal anita sundaram an efficient scheme for providing high availability replication at the partition level is a promising approach for increasing availability in a shared nothing architecture. we propose an algorithm for maintaining replicas with little overhead during normal failure-free processing. our mechanism updates the secondary replica in an asynchronous manner: entire dirty pages are sent to the secondary at some time before they are discarded from the primary's buffer. a log server node (hardened against failures) maintains the log for each node. if a primary node fails, the secondary fetches the log from the log server, applies it to its replica, and brings itself to the primary's last transaction-consistent state.
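as an aside, a minimal sketch of the catch-up step just described: page images reach the secondary asynchronously, and after a primary failure the secondary redoes the log tail obtained from the log server. the class and method names below are my own illustration under those assumptions, not the authors' algorithm.

```python
class LogServer:
    """hardened node that keeps the log for a primary (illustrative)."""
    def __init__(self):
        self.records = []                      # (lsn, page_id, redo_fn)
    def append(self, record):
        self.records.append(record)
    def tail(self, from_lsn):
        return [r for r in self.records if r[0] > from_lsn]

class Secondary:
    """replica that receives whole dirty pages asynchronously."""
    def __init__(self):
        self.pages = {}                        # page_id -> (page_lsn, data)
    def install_page(self, page_id, page_lsn, data):
        self.pages[page_id] = (page_lsn, data)
    def take_over(self, log_server):
        # redo only the log records newer than each shipped page image,
        # bringing the replica to the primary's last consistent state.
        for lsn, page_id, redo in log_server.tail(self._oldest_page_lsn()):
            page_lsn, data = self.pages.get(page_id, (0, {}))
            if lsn > page_lsn:
                self.pages[page_id] = (lsn, redo(data))
    def _oldest_page_lsn(self):
        return min((lsn for lsn, _ in self.pages.values()), default=0)
```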
we study the performance of various policies for sending pages to the secondary and the corresponding trade-offs between recovery time and overhead during failure-free processing. anupam bhide ambuj goyal hui-i hsiao anant jhingran information finding in a digital library: the stanford perspective tak w. yan hector garcia-molina user interface design for the www jakob nielsen annette wagner human computer interaction division logica cambridge ltd., uk rod rivers dynamic soundscape: mapping time to space for audio browsing minoru kobayashi chris schmandt open issues in parallel query optimization we provide an overview of query processing in parallel database systems and discuss several open issues in the optimization of queries for parallel machines. waqar hasan daniela florescu patrick valduriez low-cost audio/visual presentation enabler robert a. pascoe the webspace method: on the integration of database technology with multimedia retrieval roelof van zwol peter m. g. apers application of object-oriented technology for integrating heterogeneous database systems bhavani thuraisingham in search of a user interface reference model gene lynch jon meads mmvis: design and implementation of a multimedia visual information seeking environment stacie hibino elke a. rundensteiner separating semantics from representation in a temporal object database domain niki pissinou kia makki erp project dynamics and enacted dialogue: perceived understanding, perceived leeway, and the nature of task-related conflicts different views on change and it-related outcomes have been proposed in the literature. most privilege the technological deterministic and organizational imperative positions. this article introduces two types of process views on change arising from designers' inability to forecast the impacts of erp on work and governance: • a dialectical process due to the lack of perceived leeway by the actors, and • a teleological process view, where actors feel they have more leeway and where they try to take advantage of technological effects that they feel they can control. building on the concept of enactment and on the nature of conflicts, this work demonstrates the necessity to articulate these views in a theoretical framework describing the dynamics of erp projects. this framework is employed to interpret problems arising from erp choice and implementation in the french context. during the "chartering phase," the deterministic vision dominates the perceptions of designers. during the "project phase," the designers come closer to the organizational imperative view when they customize the system and make integration/differentiation choices. during the "shakedown" and subsequent phases, organizational outcomes are often not realized because of job and governance conflicts with end users. patrick besson frantz rowe querying network directories hierarchically structured directories have recently proliferated with the growth of the internet, and are being used to store not only address books and contact information for people, but also personal profiles, network resource information, and network and service policies. these systems provide a means for managing scale and heterogeneity, while allowing for conceptual unity and autonomy across multiple directory servers in the network, in a way far superior to what conventional relational or object-oriented databases offer.
yet, in deployed systems today, much of the data is modeled in an ad hoc manner, and many of the more sophisticated "queries" involve navigational access. in this paper, we develop the core of a formal data model for network directories, and propose a sequence of efficiently computable query languages with increasing expressive power. the directory data model can naturally represent rich forms of heterogeneity exhibited in the real world. answers to queries expressible in our query languages can exhibit the same kinds of heterogeneity. we present external memory algorithms for the evaluation of queries posed in our directory query languages, and prove the efficiency of each algorithm in terms of its i/o complexity. our data model and query languages share the flexibility and utility of the recent proposals for semi-structured data models, while at the same time effectively addressing the specific needs of network directory applications, which we demonstrate by means of a representative real-life example. h. v. jagadish laks v. s. lakshmanan tova milo divesh srivastava dimitra vista a model of integrity for object-oriented database systems james m. slack elizabeth a. unger comparing information without leaking it ronald fagin moni naor peter winkler to table or not to table: a hypertabular answer suitable data set organizers are necessary to help users assimilate information retrieved from a database. in this paper we present (1) a general hypertextual framework for the interaction with tables, and (2) a specialization of the framework in order to present in hypertextual format the results of queries expressed in terms of a visual semantic query language. giuseppe santucci laura tarantino the berkom multimedia collaboration service michael altenhofen jurgen dittrich rainer hammerschmidt thomas käppner carsten kruschel ansgar kuckes thomas steinig the study of user behavior on information retrieval systems christian borgman an implementation of hybrid approach to indexing image databases chaman l. sabharwal some observations on retrieval from a large technical document database r marcus a practical query-by-humming system for a large music database a music retrieval system that accepts hummed tunes as queries is described in this paper. this system uses similarity retrieval because a hummed tune may contain errors. the retrieval result is a list of song names ranked according to the closeness of the match. our ultimate goal is that the correct song should be first on the list. this means that eventually our system's similarity retrieval should allow for only one correct answer. the most significant improvement our system has over general query-by-humming systems is that all processing of musical information is done based on beats instead of notes. this type of query processing is robust against queries generated from erroneous input. in addition, acoustic information is transcribed and converted into relative intervals and is used for making feature vectors. this increases the resolution of the retrieval system compared with other general systems, which use only pitch direction information. the database currently holds over 10,000 songs, and the retrieval time is at most one second. this level of performance is mainly achieved through the use of indices for retrieval. in this paper, we also report on the results of music analyses of the songs in the database.
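as an aside to the query-by-humming description above, here is a toy sketch of the general idea of beat-aligned, key-invariant interval features; the function names and the simple distance are my own illustration, not the authors' implementation.

```python
# illustrative sketch: turn a transcription sampled once per beat into a
# relative-interval feature vector, which makes the query invariant to the
# key the user hums in.

def beat_relative_intervals(pitches_per_beat):
    """pitches_per_beat: one semitone value per beat (repeats allowed).
    returns pitch differences between consecutive beats."""
    return [b - a for a, b in zip(pitches_per_beat, pitches_per_beat[1:])]

def similarity(query_vec, song_vec):
    """toy distance over the overlapping prefix of the two interval vectors."""
    n = min(len(query_vec), len(song_vec))
    if n == 0:
        return float("inf")
    return sum(abs(q - s) for q, s in zip(query_vec[:n], song_vec[:n])) / n

# a hummed query and a stored song, both sampled once per beat (midi-like numbers);
# the song is the same tune hummed in a lower key, so the distance is 0.0.
query = beat_relative_intervals([60, 60, 67, 67, 69, 69, 67])
song  = beat_relative_intervals([55, 55, 62, 62, 64, 64, 62])
print(similarity(query, song))
```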
based on these results, new technologies for improving retrieval accuracy, such as partial feature vectors and or'ed retrieval among multiple search keys, are proposed. the effectiveness of these technologies is evaluated quantitatively, and it is found that the retrieval accuracy increases by more than 20% compared with the previous system [9]. practical user interfaces for the system are also described. naoko kosugi yuichi nishihara tetsuo sakata masashi yamamuro kazuhiko kushima experience from a real life query optimizer yun wang the elements of computer credibility b. j. fogg hsiang tseng a window-based help, tutorial and documentation system jean-marie comeau peter r. milton logical design for temporal databases with multiple granularities the purpose of good database logical design is to eliminate data redundancy and insertion and deletion anomalies. in order to achieve this objective for temporal databases, the notions of temporal types, which formalize time granularities, and temporal functional dependencies (tfds) are introduced. a temporal type is a monotonic mapping from ticks of time (represented by positive integers) to time sets (represented by subsets of reals) and is used to capture various standard and user-defined calendars. a tfd is a proper extension of the traditional functional dependency and takes the form x ->μ y, meaning that there is a unique value for y during one tick of the temporal type μ for one particular x value. an axiomatization for tfds is given. because a finite set of tfds usually implies an infinite number of tfds, we introduce the notion of and give an axiomatization for a finite closure to effectively capture a finite set of implied tfds that are essential to the logical design. temporal normalization procedures with respect to tfds are given. specifically, temporal boyce-codd normal form (tbcnf) that avoids all data redundancies due to tfds, and temporal third normal form (t3nf) that allows dependency preservation, are defined. both normal forms are proper extensions of their traditional counterparts, bcnf and 3nf. decomposition algorithms are presented that give lossless tbcnf decompositions and lossless, dependency-preserving, t3nf decompositions. x. sean wang claudio bettini alexander brodsky sushil jajodia clare - a prolog database machine the clause retrieval engine (clare) is a coprocessor system, based on two-stage filtering, for handling large sets of disc resident clauses in prolog database applications. the overall architecture and the timing measurements of the first stage clare hardware are reported. the timing measurements show that the first stage clare hardware can perform search operations on data transferring at a rate up to 4.5 million double bytes per second. kam-fai wong howard m. williams finding the cut of the wrong trousers: fast video search using automatic storyboard generation peter j. macer peter j. thomas nouhman chalabi john f. meech active database systems active database systems support mechanisms that enable them to respond automatically to events that are taking place either inside or outside the database system itself. considerable effort has been directed towards improving understanding of such systems in recent years, and many different proposals have been made and applications suggested.
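as a worked aside on the tfd notation defined in the temporal-design abstract above, consider a small employee schema of my own (not taken from the paper):

```latex
% illustration only: a tfd  X \rightarrow_{\mu} Y  requires a unique Y-value
% per X-value within any one tick of the temporal type \mu, e.g.
\mathit{empid} \;\rightarrow_{\text{month}}\; \mathit{salary}
% the salary may differ from month to month, but is unique inside any single
% tick of the granularity "month"; when \mu has one tick covering all of time,
% the tfd reduces to the ordinary functional dependency
% \mathit{empid} \rightarrow \mathit{salary}.
```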
this high level of activity has not yielded a single agreed-upon standard approach to the integration of active functionality with conventional database systems, but has led to improved understanding of active behavior description languages, execution models, and architectures. this survey presents the fundamental characteristics of active database systems, describes a collection of representative systems within a common framework, considers the consequences for implementations of certain design decisions, and discusses tools for developing active applications. norman w. paton oscar díaz computer support for distributed collaborative writing: defining parameters of interaction this paper reports research to define a set of interaction parameters that collaborative writers will find useful. our approach is to provide parameters of interaction and to locate the decision of how to set the parameters with the users. what is new in this paper is the progress we have made outlining task management parameters, notification, scenarios of use, as well as some implementation architectures. christine m. neuwirth david s. kaufer ravinder chandhok james h. morris consistency, standards, and formal approaches to interface development and evaluation: a note on wiecha, bennett, boies, gould, and greene jonathan grudin breaking the metadata generation bottleneck: preliminary findings elizabeth d. liddy stuart sutton woojin paik eileen allen sarah harwell michelle monsour anne turner jennifer liddy beyond videotex: the library of congress pilot project in page image retrieval and transmission digital optical disk this paper arrived late and can be found published in full on pages 257-260 of this proceedings. w. nugent j. harding a forum for supporting interactive presentations to distributed audiences computer technology is available to build video-based tools for supporting presentations to distributed audiences, but it is unclear how such an environment affects participants' ability to interact and to learn. we built and tested a tool called forum that broadcasts live audio, video and slides from a speaker, and enables audiences to interact with the speaker and other audience members in a variety of ways. the challenge was to enable effective interactions while overcoming obstacles introduced by the distributed nature of the environment, the large size of the group, and the asymmetric roles of the participants. forum was most successful in enabling effective presentations in cases when the topic sparked a great deal of audience participation or when the purpose of the talk was mostly informational and did not require a great deal of interaction. we are exploring ways to enhance forum to expand the effectiveness of this technology. ellen a. isaacs trevor morris thomas k. rodriguez the importance of laboratory experimentation in is research (technical correspondence) sirkka l. jarvenpaa automatic phrase indexing for document retrieval an automatic phrase indexing method based on the term discrimination model is described, and the results of retrieval experiments on five document collections are presented. problems related to this non-syntactic phrase construction method are discussed, and some possible solutions are proposed that make use of information about the syntactic structure of document and query texts. j. 
fagan exploring functionalities in the compressed image/video domain shin-fu chang dbsim: a simulation tool for predicting database performance mark lefler mark stokrp craig wong putting it together: it's ten o'clock - do you know where your data are win treese new paradigms in information visualization (poster session) _we present three new visualization front-ends that aid navigation through the set of documents returned by a search engine (hit documents). we cluster the hit documents to visually group these documents and label the groups with related words. the different front-ends cater for different user needs, but all can browse cluster information as well as drilling up or down in one or more clusters and refining the search using one or more of the suggested related keywords._ peter au matthew carey shalini sewraz yike guo stefan m. ruger the rasdaman approach to multidimensional database management peter baumann paula furtado roland ritsch norbert widmann object identity and dimension alignment in parametric databases tsz s. cheng shashi k. gadia sunil s. nair designing object-oriented synchronous groupware with coast christian schuckmann lutz kirchner jan schummer jörg m. haake an architecture for transforming graphical interfaces while graphical user interfaces have gained much popularity in recent years, there are situations when the need to use existing applications in a nonvisual modality is clear. examples of such situations include the use of applications on hand- held devices with limited screen space (or even no screen space, as in the case of telephones), or users with visual impairments. we have developed an architecture capable of transforming the graphical interfaces of existing applications into powerful intuitive nonvisual interfaces. our system, called mercator, provides new input and output techniques for working in the nonvisual domain. navigation is accomplished by traversing a hierarchical tree representation of the interface structure. output is primarily auditory, although other output modalities (such as tactile) can be used as well. the mouse, an inherently visually-oriented device, is replaced by keyboard and voice interaction. our system is currently in its third major revision. we have gained insight into both the nonvisual interfaces presented by our system and the architecture necessary to construct such interfaces. this architecture uses several novel techniques to efficiently and flexibly map graphical interfaces into new modalities. w. keith edwards elizabeth d. mynatt anecdote: a multimedia storyboarding system with seamless authoring support komei harada eiichiro tanaka ryuichi ogawa yoshinori hara indexing multimedia databases christos faloutsos why object-oriented databases can succeed where others have failed why are database systems an infrequent component of cad systems? most cad systems perform their own data management on top of the os file system. those that are built atop relational databases use them almost exclusively for selection, sometimes for projection, but perform joins in the private program memory of the design tools. cad systems are using a dbms as an index package \---to support associative access---when they are using one at all. the ubiquitous reason for foregoing the rest of the data manipulation capabilities is "performance! commercial database systems aren't fast enough to support simulators and interactive design tools." 
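as an aside to the information-visualization poster above, here is a minimal sketch of one way to cluster hit documents and label each cluster with related words; generic tf-idf plus k-means is my assumption for illustration, not necessarily the poster's technique.

```python
# cluster search-engine hits and label each cluster with its heaviest terms
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_hits(hit_docs, n_clusters=3, n_labels=5):
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(hit_docs)                       # tf-idf vectors for the hits
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    terms = vec.get_feature_names_out()
    labels = []
    for center in km.cluster_centers_:
        top = center.argsort()[::-1][:n_labels]           # highest-weight terms per cluster
        labels.append([terms[i] for i in top])
    return km.labels_, labels                             # cluster id per document, label words per cluster
```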
the following paragraphs give my opinion on what conventional database systems are too slow at, why they are slow at it, and why object-oriented database systems could be faster at it. david maier query processing in a heterogeneous retrieval network the concept of a large-scale information retrieval network incorporating heterogeneous retrieval systems and users is introduced, and the necessary components for enabling term-based searching of any database by untrained end- users are outlined. we define a normal form for expression of queries, show that such queries can be automatically produced, if necessary, from a natural- language request for information, and give algorithms for translating such queries, with little or no loss of expressiveness, into equivalent queries on both boolean and term-vector type retrieval systems. we conclude with a proposal for extending this approach to arbitrary database models. p. simpson interface agents pattie maes alan wexelblat a unified analysis of batched searching of sequential and tree-structured files a direct and unified approach is used to analyze the efficiency of batched searching of sequential and tree-structured files. the analysis is applicable to arbitrary search distributions, and closed-form expressions are obtained for the expected batched searching cost and savings. in particular, we consider a search distribution satisfying zipf's law for sequential files and four types of uniform (random) search distribution for sequential and tree- structured files. these results unify and extend earlier research on batched searching and estimating block accesses for database systems. sheau-dong lang james r. driscoll jiann h. jou electronic mail usage analysis an electronic mail system (ems) has been in operation within digital equipment corporation now for several years. the function of the system is to help serve the internal communication needs of this geographically distributed corporation. in particular, the system was intended to increase the effectiveness of managers, although secretaries, engineers, and other employees also use the system. as the mail system grew from a pilot operation to a global communications utility, the system's managers continued to receive a wide range of informal feedback concerning users' perceptions and utilization of the system. in order to assess reliably how users were reacting to this new communication mode, usage aspects of the electronic mail system were investigated by a random survey of the user population and by an analysis of ems learning behavior. the survey portion of the study will be discussed in this paper. harry m. hersh shopping models: a flexible architecture for information commerce steven p. ketchpel hector garcia-molina andreas paepcke tractable query languages for complex object databases stephane grumbach victor vianu moving motion metaphors colin ware cost-based query optimization for metadata repositories query optimization strategies for repository systems must take into account the rich and often unpredictable structure of metadata, as well as supporting complex analysis of relationships between those structures. this paper describes rationale, design, and system integration of a cost-based query optimizer offered in conceptbase, a metadata manager that supports these capabilities by a deductive object-oriented data model. 
in contrast to most implemented dbms, the optimizer is not based on the concept of join selectivity, but on detailed distribution information about object and literal fan-out, with heavy exploitation of materialized views and end-biased histograms. experiences from two real-world applications in software configuration management and cooperative design demonstrate the practical advantages but also point to some lessons for further improvement. martin staudt rene soiron christoph quix matthias jarke vb2: an architecture for interaction in synthetic worlds enrico gobbetti jean-francis balaguer daniel thalmann the primacy of the user - from user's perspective dan conway cat-a-cone: an interactive interface for specifying searches and viewing retrieval results using a large category hierarchy marti a. hearst chandu karadi annotation: from paper books to the digital library catherine c. marshall video nodes and video webs: use of video in hypermedia simon gibbs a visual medium for programmatic control of interactive applications luke s. zettlemoyer robert st. amant hardware, software, and infoware tim o'reilly pointing and visualization william c. hill james d. hollan what is hypermedia? helen ashman correctness conditions for highly available replicated databases nancy lynch barbara blaustein michael siegel what's happening steven cherry using icons to find documents: simplicity is critical michael d. byrne right sizing and international view points: the chi '95 research symposium janni nielsen cathleen wharton implementing the spirit of sql-99 this paper describes the current informix ids/ud release (9.2 or centaur) and compares and contrasts its functionality with the features of the sql-99 language standard. informix and illustra have been shipping dbmss implementing the spirit of the sql-99 standard for five years. in this paper, we review our experience working with ordbms technology, and argue that while sql-99 is a huge improvement over sql-92, substantial further work is necessary to make object-relational dbmss truly useful. specifically, we describe several interesting pieces of functionality unique to ids/ud, and several dilemmas our customers have encountered that the standard does not address. paul brown automatic generation of task-oriented help s. pangoli f. paternò group-authoring in concord - a db-based approach norbert ritter darpa interdomain addressing (panel session, title only) v. cerf d. clark d. comer l. peterson d. terry the system work group computer science department aarhus university morten kyng research perspectives for time series management systems werner dreyer angelika kotz dittrich duri schmidt filtered suggestions (abstract) joris verrips what's happening marisa campbell music video analysis and context tools for usability measurement miles macleod nigel bevan translating description logics to information server queries premkumar t. devanbu meeting the needs (and preferences) of a diverse world wide web audience debbie hysell a dynamic data model for a video database management system qing li liu sheng huang multimedia electronic mail: will the dream become a reality? nathaniel s. borenstein extended tasks elicit complex eye movement patterns visual perception is an inherently complex task, yet the bulk of studies in the past were undertaken with subjects performing relatively simple tasks under reduced laboratory conditions. in the research reported here, we examined subjects' oculomotor performance as they performed two complex, extended tasks.
in the first task, subjects built a model rocket from a kit. in the second task, a wearable eyetracker was used to monitor subjects as they walked to a restroom, washed their hands, and returned to the starting point. for the purposes of analysis, both tasks can be broken down into smaller sub-tasks that are performed in sequence. differences in eye movement patterns and high-level strategies were observed in the model building and hand-washing tasks. fixation durations recorded in the model building tasks were significantly shorter than those reported in simpler tasks. performance in the hand-washing task revealed _look-ahead_ eye movements made to objects well in advance of a subject's interaction with the object. often occurring in the middle of another task, they provide overlapping temporal information about the environment, providing a mechanism to produce our conscious visual experience. jeff b. pelz roxanne canosa jason babcock using texture for content-based retrieval and navigation in mavis (abstract) joseph kuan paul lewis magic is relevant we define the magic-sets transformation for traditional relational systems (with duplicates, aggregation and grouping), as well as for relational systems extended with recursion. we compare the magic-sets rewriting to traditional optimization techniques for nonrecursive queries, and use performance experiments to argue that the magic-sets transformation is often a better optimization technique. i. s. mumick s. j. finkelstein hamid pirahesh raghu ramakrishnan the effect of communication modality on cooperation in online environments one of the most robust findings in the sociological literature is the positive effect of communication on cooperation and trust. when individuals are able to communicate, cooperation increases significantly. how does the choice of communication modality influence this effect? we adapt the social dilemma research paradigm to quantitatively analyze different modes of communication. using this method, we compare four forms of communication: no communication, text-chat, text-to-speech, and voice. we found statistically significant differences between different forms of communication, with the voice condition resulting in the highest levels of cooperation. our results highlight the importance of striving towards the use of more immediate forms of communication in online environments, especially where trust and cooperation are essential. in addition, our research demonstrates the applicability of the social dilemma paradigm in testing the extent to which communication modalities promote the development of trust and cooperation. carlos jensen shelly d. farnham steven m. drucker peter kollock on risk, convenience, and internet shopping behavior amit bhatnagar sanjog misra h. raghav rao a pilot card-based hypermedia integrated with a layered architecture-based oodb and an object-forwarding mail system the goal of our project is to develop a pilot card-based hypermedia which allows users to create the connecting network of links through the use of an electronic mail system. based on the goal, we developed an underlying object-oriented database management system with a layered architecture to combine personal work environments and a team cooperative work environment effectively. this integration provides the basis of groupware applications. satoshi ichimura yutaka matsushita method engineering: from data to model to practice this paper explores the behavior of experts choosing among various methods to accomplish tasks.
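as an aside to the magic-sets abstract above, the rewriting is easiest to see on the standard textbook ancestor example; this generic example is mine and is not taken from the paper.

```latex
\begin{array}{l}
\text{original program and query (generic example):}\\
\mathit{anc}(X,Y) \leftarrow \mathit{par}(X,Y)\\
\mathit{anc}(X,Y) \leftarrow \mathit{par}(X,Z),\ \mathit{anc}(Z,Y)\\
\text{query: } \mathit{anc}(\mathrm{john},\,Y)\\[4pt]
\text{after the magic-sets rewriting, a magic predicate restricts bottom-up}\\
\text{evaluation to ancestors reachable from the query constant:}\\
\mathit{m\_anc}(\mathrm{john})\\
\mathit{m\_anc}(Z) \leftarrow \mathit{m\_anc}(X),\ \mathit{par}(X,Z)\\
\mathit{anc}(X,Y) \leftarrow \mathit{m\_anc}(X),\ \mathit{par}(X,Y)\\
\mathit{anc}(X,Y) \leftarrow \mathit{m\_anc}(X),\ \mathit{par}(X,Z),\ \mathit{anc}(Z,Y)
\end{array}
```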
given the results showing that methods are not chosen solely on the basis of keystroke efficiency, we recommend a technique to help designers assess whether they should offer multiple methods for some tasks, and if they should, how to make them so that they are chosen appropriately. erik nilsen hee sen jong judith s. olson peter g. polson the relation of agenda creation and use to group support system experience the meeting agenda is often recommended as a critical element for conducting successful meetings. however, assertions about the value of meeting agendas are rarely explained or empirically tested. this study surveyed 238 group facilitators, including 113 who reported facilitating at least one meeting using group support systems (gss), to examine their views on the use of agendas in meetings. analyses are based on overall responses by group facilitators and by responses of the subset of facilitators who have used gss. in addition, correlation analyses between the amount of gss experience and the responses of the full group of facilitators and the subset with gss experience are reported. frequency of agenda use and of evaluation of the "goodness of the agenda" both correlated with gss experience. pre-meeting sessions with the group sponsors correlated significantly with gss experience as did the frequency of varying from the agenda. finally, quality of meeting deliverables, satisfaction with outcomes, and satisfaction with process were viewed as the most important benefits of using an agenda; but they did not correlate with gss experience. meeting efficiency as a benefit of agenda use was negatively correlated with gss experience. the implications of these findings for gss facilitation and for future research are discussed. fred niederman roger j. volkema the mitre map navigation experiment jeffrey l. kurtz laurie e. damianos robyn kozierok lynette hirschman non-isomorphic 3d rotational techniques this paper demonstrates how non-isomorphic rotational mappings and interaction techniques can be designed and used to build effective spatial 3d user interfaces. in this paper, we develop a mathematical framework allowing us to design non-isomorphic 3d rotational mappings and techniques, investigate their usability properties, and evaluate their user performance characteristics. the results suggest that non-isomorphic rotational mappings can be an effective tool in building high-quality manipulation dialogs in 3d interfaces, allowing our subjects to accomplish experimental tasks 13% faster without a statistically detectable loss in accuracy. the current paper will help interface designers to use non-isomorphic rotational mappings effectively. ivan poupyrev suzanne weghorst sidney fels intelligent integration of information this paper describes and classifies methods to transform data to information in a three-layer, mediated architecture. the layers can be characterized from the top down as information-consuming applications, mediators which perform intelligent integration of information (i3), and data, knowledge and simulation resources. the objective of modules in the i3 architecture is to provide end users' applications with information obtained through selection, abstraction, fusion, caching, extrapolation, and pruning of data. the data is obtained from many diverse and heterogeneous sources. the i3 objective requires the establishment of a consensual information system architecture, so that many participants and technologies can contribute.
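as an aside to the non-isomorphic 3d rotation abstract above, one common way to build such a mapping is to amplify a tracked rotation by scaling its angle about the same axis; the sketch below is a generic construction of my own, not the specific mappings evaluated in that paper.

```python
# amplify a rotation: same axis, k times the angle (quaternions as (w, x, y, z))
import math

def amplify_rotation(q, k):
    """q: unit quaternion of the tracked device; k: amplification factor."""
    w, x, y, z = q
    angle = 2.0 * math.acos(max(-1.0, min(1.0, w)))
    s = math.sqrt(max(0.0, 1.0 - w * w))
    if s < 1e-9:                          # no rotation: axis is arbitrary
        return (1.0, 0.0, 0.0, 0.0)
    axis = (x / s, y / s, z / s)
    half = 0.5 * k * angle
    return (math.cos(half),
            axis[0] * math.sin(half),
            axis[1] * math.sin(half),
            axis[2] * math.sin(half))

# 30 degrees about the y axis, amplified 2x, becomes 60 degrees about y.
q30 = (math.cos(math.radians(15)), 0.0, math.sin(math.radians(15)), 0.0)
print(amplify_rotation(q30, 2.0))
```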
an attempt to provide such a range of services within a single, tightly integrated system is unlikely to survive technological or environmental change. this paper focuses on the computational models needed to support the mediating functions in this architecture and introduces initial applications. the architecture has been motivated in [wied:92c]. gio wiederhold aries/csa: a method for database recovery in client-server architectures this paper presents an algorithm, called aries/csa (algorithm for recovery and isolation exploiting semantics for client-server architectures), for performing recovery correctly in client-server (cs) architectures. in cs, the server manages the disk version of the database. the clients, after obtaining database pages from the server, cache them in their buffer pools. clients perform their updates on the cached pages and produce log records. the log records are buffered locally in virtual storage and later sent to the single log at the server. aries/csa supports write-ahead logging (wal), fine-granularity (e.g., record) locking, partial rollbacks and flexible buffer management policies like steal and no-force. it does not require that the clocks on the clients and the server be synchronized. checkpointing by the server and the clients allows for flexible and easier recovery. c. mohan inderpal narang k: a high-level knowledge base programming language for advanced database applications yuh-ming shyy stanley y. w. su increasing efficiency of electronic meetings by autonomous software agents freimut bodendorf ralph seitz an error-based conceptual clustering method for providing approximate query answers w. w. chu k. chiang c. hsu h. yau the integrity problem, and what can be done about it using today's dbmss pentti a. honkanen one visual designer's perspective on interchi'93 andy cargile computer support for knowledge workers: a review of laboratory experiments robert l leitheiser methods & tools: the rich picture: a tool for reasoning about work context andrew monk steve howard visualizing queries and querying visualizations mariano p. consens isabel f. cruz alberto o. mendelzon embodied user interfaces for really direct manipulation kenneth p. fishkin anuj gujar beverly l. harrison thomas p. moran roy want graphical query specification and dynamic result previews for a digital library steve jones the role of balloon help david k. farkas friend21 project: two-tiered architecture for 21st-century human interfaces hajime nonogaki hirotada ueda sisco: providing a cooperation filter for a shared information space john a. mariani groupware in the wild judith s. olson stephanie teasley video based human animation technique human animation is a challenging domain in computer animation. to address many shortcomings of conventional techniques, this paper proposes a new video based human animation technique. given a clip of video, first, human joints are tracked with the support of a kalman filter and morph-block based matching in the image sequence. then the corresponding sequence of three-dimensional (3d) human motion skeletons is constructed under the perspective projection using camera calibration and human anatomy knowledge. finally a motion library is established automatically by annotating multiform motion attributes, which can be browsed and queried by the animator. this approach has the characteristics of rich source material, low computing cost, efficient production, and realistic animation results.
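as an aside to the aries/csa abstract above, a minimal sketch of the client-side write-ahead-logging rule it describes: locally buffered log records must reach the single server log before a dirty page they cover is written back. the class and field names are my own illustration, not the published algorithm.

```python
class ClientCache:
    """client buffer pool that ships its log to the server (illustrative)."""
    def __init__(self, server_log):
        self.server_log = server_log
        self.local_log = []                   # buffered records: (lsn, page_id, redo, undo)
        self.flushed_lsn = 0

    def log_update(self, lsn, page_id, redo, undo):
        self.local_log.append((lsn, page_id, redo, undo))

    def flush_log_upto(self, lsn):
        ship = [r for r in self.local_log if r[0] <= lsn]
        self.server_log.extend(ship)          # the single log lives at the server
        self.local_log = [r for r in self.local_log if r[0] > lsn]
        self.flushed_lsn = max(self.flushed_lsn, lsn)

    def write_page_back(self, page_id, page_lsn, page):
        # wal: every log record up to the page's lsn must be at the server first
        if page_lsn > self.flushed_lsn:
            self.flush_log_upto(page_lsn)
        return ("page", page_id, page_lsn, page)   # would be sent to the server
```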
we demonstrate it on several video clips of people doing full body movements, and visualize the results by re-animating a 3d human skeleton model. xiaoming liu yueting zhuang yunhe pan cybercoaster takashi satou haruhiko kojima akihito akutsu yoshinobu tonomura motives, hurdles, and dropouts james katz philip aspden resources section: web sites michele tepper structural query optimization - a uniform framework for semantic query optimization in deductive databases laks v. s. lakshmanan hector j. hernandez the high-tech toolbelt: a study of designers in the workplace tamara sumner an evaluation of retrieval effectiveness for a full-text document-retrieval system an evaluation of a large, operational full-text document-retrieval system (containing roughly 350,000 pages of text) shows the system to be retrieving less than 20 percent of the documents relevant to a particular search. the findings are discussed in terms of the theory and practice of full-text document retrieval. david c. blair m. e. maron introducing usability into smaller organizations carola b. fellenz high-level synthesis and codesign methods: an application to a videophone codec pierre paulin jean frehel michel harrand elisabeth berrebi clifford liem françois naçabal jean-claude herluison efficient mining of association rules in text databases in this paper, we propose two new algorithms for mining association rules between words in text databases. the characteristics of text databases are quite different from those of retail transaction databases, and existing mining algorithms cannot handle text databases efficiently because of the large number of itemsets (i.e., words) that need to be counted. two well-known mining algorithms, the apriori algorithm and the direct hashing and pruning (dhp) algorithm, are evaluated in the context of mining text databases, and are compared with the new proposed algorithms named multipass-apriori (m-apriori) and multipass-dhp (m-dhp). it has been shown that the proposed algorithms have better performance for large text databases. john d. holt soon m. chung using active server pages and simulation techniques to create virtual m&m's the m&m problem is an excellent example of a nontrivial yet simple method for summarizing sampled data that are then used in making decisions. simulating the creation of virtual m&m's that can be copied and pasted into a spreadsheet as the raw data for the m&m problem provides a way for students to get a unique sample of m&m's, thus modeling how, in an information systems environment, sampled raw data can often be captured electronically rather than being entered by hand as would be the case with a physical package of m&m's. previously presented using client-side javascript, this paper presents a solution to the same problem using server-side active server pages, thus avoiding compatibility problems of client-side scripting. the paper serves as a short introduction to both programming in active server pages and to simulating discrete events. robin m. snyder information archiving with bookmarks: personal web space construction and organization david abrams ron baecker mark chignell middleware for distributed multimedia (panel): need a new direction? guru parulkar lawrence a. rowe david hutchison jonathan walpole raj yavatkar a case study for distributed query processing p. agrawal d. bitton k. guh c. liu c.
yu inhabited virtual worlds: a new frontier for interaction design bruce damer a model of mental model construction learning to control a computer system from limited experience with it seems to require constructing a mental model adequate to indicate the causal connections between user actions, system responses, and user goals. while many kinds of knowledge could be used in building such a model, a small number of simple, low-level heuristics is adequate to interpret some common computer interaction patterns. designing interactions so that they fall within the scope of these heuristics may lead to easier mastery by learners. c. lewis infoweaver: dynamic and tailor-made integration of structured documents, web, and databases atsuyuki morishima hiroyuki kitagawa emvis - a visual e-mail analysis tool bjoern heckel bernd hamann the cultural heritage information online project (poster): demonstrating access to distributed cultural heritage museum information william e. moen john perkins a space based model for user interaction in shared synthetic environments lennart e. fahlen olov ståhl christer carlsson charles grant brown security problems on inference control for sum, max, and min queries the basic inference problem is defined as follows: for a finite set _x_ = {_x_1, ..., _x_n}, we wish to infer properties of elements of _x_ on the basis of sets of "queries" regarding subsets of _x_. by restricting these queries to statistical queries, the statistical database (sdb) security problem is obtained. the security problem for the sdb is to limit the use of the sdb so that only statistical information is available and no sequence of queries is sufficient to infer protected information about any individual. when such information is obtained the sdb is said to be compromised. in this paper, two applications concerning the security of the sdb are considered: * _on-line application_. the queries are answered one by one in sequence and it is necessary to determine whether the sdb is compromised if a new query is answered. * _off-line application_. all queries are available at the same time and it is necessary to determine the maximum subset of queries to be answered without compromising the sdb. the complexity of these two applications, when the set of queries consists of (a) a single type of sum query, (b) a single type of max/min query, (c) mixed types of max and min queries, (d) mixed types of sum and max/min queries, and (e) mixed types of sum, max, and min queries, is studied. efficient algorithms are designed for some of these situations while others are shown to be np-hard. francis chin confessions of a gardener: a review of information ecologies this review of information ecologies places the text in the mediating tradition that seeks a middle ground between rigid technological determinism and indifferent value neutrality. the biological metaphors for situated technology use make interesting reading, but the stories may not be compelling evidence that users really can shape technological change from the local level. william hart-davidson distributed error correction steve lawrence kurt bollacker c. lee giles storyspace as a hypertext system for writers and readers of varying ability michael joyce kea: practical automatic keyphrase extraction ian h. witten gordon w. paynter eibe frank carl gutwin craig g.
nevill-manning database selection techniques for routing bibliographic queries jian xu yinyan cao ee-peng lim wee-keong ng a threshold mechanism for distributed query processing a strategy to process a distributed query is formed using estimates for partial result sizes and delays/costs associated with the network data transfer and cpu processing. to guard against inaccurate estimates the strategy execution is monitored and if estimates are observed to be "inaccurate" the strategy is corrected. this paper presents and compares two methods which can be used to decide whether or not to correct a strategy. in the reformulation method a new strategy is formulated after executing each relational operation. the thresholds method is based on the fact that some partial results formed by the strategy's execution are more "critical" than others. the query processing strategy is represented as a network of activities and the critical path method is used to determine threshold values for partial results. if a partial result is delayed beyond its threshold value the strategy is corrected. the reformulation and thresholds methods are evaluated on a test-bed of queries for a modeled application of a ddb. p. bodorik j. s. riordon the effect of domain knowledge on elementary school children's search behavior on an information retrieval system: the science library catalog sandra goldstein hirsh conversational hypertext: information access through natural language dialogues with computers one need not create a natural language understanding system in order to create a hypertext data base that can be traversed with unconstrained natural language. the task is simplified because the computer creates a constrained context, imposes a non-negotiable topic, and elicits simple questions. two small hypertext data bases describing the authors' organization and the terms and rules of baseball were implemented on an ibm pc. when ten untrained people were allowed to search through these data bases, 59 per cent of their queries were answered correctly by the first data base and 64 per cent by the second. t. whalen a. patrick query languages with arithmetic and constraint databases leonid libkin maintaining database consistency in presence of value dependencies in multidatabase systems claire morpain michele cart jean ferrie jean-françois pons pingpongplus: design of an athletic-tangible interface for computer-supported cooperative play hiroshi ishii craig wisneski julian orbanes ben chun joe paradiso tivoli: an electronic whiteboard for informal workgroup meetings elin rønby pedersen kim mccall thomas p. moran frank g. halasz a concurrency model for transaction management marc descollonges end-user computing abilities and the use of information systems kunsoo suh sanghoon kim jinjoo lee time series similarity measures (tutorial pm-2) dimitrios gunopulos gautam das foundations of advanced information visualization for information retrieval systems mark e. rorvig matthias hemmje a case study of calendar use in an organization jonathan grudin mining frequent patterns by pattern-growth: methodology and implications jiawei han jian pei book preview jennifer bruer graphlog: a visual formalism for real life recursion we present a query language called graphlog, based on a graph representation of both data and queries. queries are graph patterns. edges in queries represent edges or paths in the database. regular expressions are used to qualify these paths.
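as an aside to the thresholds abstract above, a toy sketch of the monitoring idea: each partial result gets a threshold derived from a critical-path analysis of the strategy, and the strategy is corrected only when an observed delay exceeds it. reading the threshold as the operation's slack is my assumption for illustration, not the paper's exact formula, and the operation names are invented.

```python
def thresholds(estimated_finish, critical_path_length):
    """threshold for each partial result = its slack w.r.t. the critical path."""
    return {op: critical_path_length - t for op, t in estimated_finish.items()}

def needs_correction(observed_delay, threshold):
    """correct (re-plan) the strategy only when a delay exceeds the slack."""
    return observed_delay > threshold

# hypothetical strategy: estimated finish times of three partial results
est = {"r1 join r2": 12.0, "select r3": 7.0, "final join": 20.0}
th = thresholds(est, critical_path_length=20.0)
print(th)                                                                 # slack per partial result
print(needs_correction(observed_delay=15.0, threshold=th["select r3"]))   # True -> correct the strategy
```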
we characterize the expressive power of the language and show that it is equivalent to stratified linear datalog, first order logic with transitive closure, and non-deterministic logarithmic space (assuming ordering on the domain). the fact that the latter three classes coincide was not previously known. we show how graphlog can be extended to incorporate aggregates and path summarization, and describe briefly our current prototype implementation. mariano p. consens alberto o. mendelzon spatio-temporal data handling with constraints stephane grumbach philippe rigaux luc segoufin managing photos with at&t shoebox (demonstration session) timothy j. mills david pye david sinclair kenneth r. wood design issues and an architecture for a heterogeneous multidatabase system many sophisticated computer applications could be significantly simplified if they are built on top of a general-purpose distributed database management system. in spite of much research on distributed database management systems there are only a few homogeneous distributed database system architectures that have reached the development stage. the situation with heterogeneous multidatabase systems, which connect a number of possibly incompatible pre-existing database systems, is even less satisfactory. to understand the complexity of designing a heterogeneous multidatabase system, we have presented some issues that have been a topic of research in this area. we propose a heterogeneous multidatabase system architecture which solves some of the inherent problems of heterogeneity such as reliability, semantic integrity, and protection. the heterogeneous multidatabase system environment that has been considered involves a connection of pre-existing database systems, with a possibility of adding new database systems at any time during system lifetime. sushil v. pillai ramanatham gudipati leszek lilien experience with a learning personal assistant tom m. mitchell rich caruana dayne freitag john mcdermott david zabowski playback: a method for evaluating the usability of software and its documentation a methodology is described for obtaining objective measures of product usability. the playback program developed at the ibm human factors center in san jose collects performance data of the user interface without impact upon the user or the system being evaluated. while a user is working with the system, keyboard activity is timed and recorded by a second computer. this log of stored activity is later played back through the host system for analysis. an observer watching television monitors enters time-stamped codes and comments concerning the users' employment of system publications. the advantages of this approach are: (1) data-collection programs are external to the product being evaluated, (2) no modifications of the playback program are required for testing different software applications, (3) the data-collection process does not intrude on the user's thoughts or activities, (4) problem determination is performed at an accelerated rate during playback analysis, and (5) all data collection is performed on line. alan s. neal roger m. simons an approximate search engine for structural databases when a person interested in a topic enters a keyword into a web search engine, the response is nearly instantaneous (and sometimes overwhelming). the impressive speed is due to clever inverted index structures, caching, and a domain-independent knowledge of strings.
our project seeks to construct algorithms, data structures, and software that approach the speed of keyword- based search engines for queries on structural databases. a structural database is one whose data objects include trees, graphs, or a set of interrelated labeled points in two, three, or higher dimensional space. examples include databases holding (i) protein secondary and tertiary structure, (ii) phylogenetic trees, (iii) neuroanatomical networks, (iv) parse trees, (v) molecular diagrams, and (vi) xml documents. comparison queries on such databases require solving variants of the graph isomorphism or subisomorphism problems (for which all known algorithms are exponential), so we have explored a large heuristic space. jason t. l. wang xiong wang dennis shasha bruce a. shapiro kaizhong zhang qicheng ma zasha weinberg people presence or room activity supporting peripheral awareness over distance elin rønby pedersen integrating relational databases with support for updates most work on database integration has considered only support for data retrieval, not support for updates, and often the use of a special semantically rich data model has been required. in this paper we present an approach to database integration which supports updates and which uses only the standard relational data model. many of the ideas used in this approach are applicable to database integration in the context of other data models as well. m. samy gamal-eldin g. thomas r. elmasri associating types with domains of relational data bases in the db field, there are two interpretations (or perceptions or schools of thought) concerning the relational model: (a) one interpretation considers that the relational model contribution to the db field essentially consists in the presentation of a clear and simple notion of (flat) file, and that many users will be very satisfied to describe their data structures as tables; (b) another interpretation considers the relational model as a (reasonable) support for a (weak) entity relationship model. michel lacroix alain pirotte representing extended entity-relationship structures in relational databases: a modular approach a common approach to database design is to describe the structures and constraints of the database application in terms of a semantic data model, and then represent the resulting schema using the data model of a commercial database management system. often, in practice, extended entity-relationship (eer) schemas are translated into equivalent relational schemas. this translation involves different aspects: representing the eer schema using relational constructs, assigning names to relational attributes, normalization, and merging relations. considering these aspects together, as is usually done in the design methodologies proposed in the literature, is confusing and leads to inaccurate results. we propose to treat separately these aspects and split the translation into four stages (modules) corresponding to the four aspects mentioned above. we define criteria for both evaluating the correctness of and characterizing the relationship between alternative relational representations of eer schemas. victor m. markowitz arie shoshani the whiteboard elizabeth buie concurrency control in advanced database applications naser s. barghouti gail e. 
kaiser gss for cooperative policymaking: no trivial matter kees wim van den herik gert-jan de vreede a scheme-driven natural language query translator an approach to natural language query translation is presented that is driven mainly by the semantics contained in an extended database scheme. this approach has the advantage of ease of implementation and thus portability since the scheme can easily be extended to interface with the translation system's natural language understanding modules. the required extensions consist of adding domain specific routines to recognize and classify literals and a lexicon to recognize context keywords. the results from these recognizers are then presented to a domain independent translator for further analysis. a prototype system has been implemented and some initial experimentation has been done. observations about the effectiveness of the translator and its efficiency are reported. david w. embley roy e. kimbrell replica control in distributed systems: an asynchronous approach calton pu avraham leff the task gallery: a 3d window manager the task gallery is a window manager that uses interactive 3d graphics to provide direct support for task management and document comparison, lacking from many systems implementing the desktop metaphor. user tasks appear as artwork hung on the walls of a virtual art gallery, with the selected task on a stage. multiple documents can be selected and displayed side-by-side using 3d space to provide uniform and intuitive scaling. the task gallery hosts any windows application, using a novel _redirection mechanism_ that routes input and output between the 3d environment and unmodified 2d windows applications. user studies suggest that the task gallery helps with task management, is enjoyable to use, and that the 3d metaphor evokes spatial memory and cognition. george robertson maarten van dantzich daniel robbins mary czerwinski ken hinckley kirsten risden david thiel vadim gorokhovsky reconciling top-down and bottom-up design approaches in rmm the proliferation of intranets and extranets as well as the vast expansion of the world wide web (www) and electronic commerce indicate the need for a structured hypermedia design methodology that will guide the design, development, and maintenance of large multimedia and hypermedia information systems and collaborative systems. the relationship management methodology (rmm) is a well-known hypermedia design methodology. in this paper we provide an extension to it that enhances the design process. we present an iterative process of application design that incorporates the design of the entire application as well as its components. the process includes the design of an application diagram in a top-down fashion, the design of the components or building blocks using the construct of an m-slice, and the regeneration of the application diagram in a bottom-up fashion. an iterative comparison and refinement of the two versions of the application diagram ensures a better final application. tomás isakowitz arnold kamis marios koufaris cyber-surfing: the state-of-the-art in client server browsing and navigation hal berghel starzoom - an interactive visual database interface per bruno viktor ehrenberg lars erik holmquist a task-oriented interface to a digital library steve b. cousins textual bloopers: an excerpt from gui bloopers jeff johnson data access (tutorial session) with an explosion of data on the web, consistent data access to diverse data sources has become a challenging task.
in this tutorial will present topics of interest to database researchers and developers building: interoperable middle-ware, gateways, distributed heterogeneous query processors, federated databases, data source wrappers, mediators, and dbms extensions. all of these require access to diverse information through common data access abstractions, powerful apis, and common data exchange formats. with the emergence of the web, database applications are being run over the intranet and the extranet. this tutorial presents an overview of existing and emerging data access technologies. we will concentrate on some of the technical challenges that have to be addressed to enable uniform data access across various platforms and some of the issues that went into the design of these data access strategies. jose a. blakeley anand deshpande editorial pointers diane crawford evaluating distributed environments based on communicative efficacy eckehard doerry searchers and searchers: differences between the most and least consistent searches mirja iivonen the garnet and amulet user interface development environments brad a. myers the context toolkit: aiding the development of context-enabled applications daniel salber anind k. dey gregory d. abowd mmdb reload algorithms le gruenwald margaret h. eich defining data types in a database language the question of defining data types in a database language is examined. in order to illustrate the general ideas and make them more concrete, the specific case of adding support for dates and times to the database language sql is considered in detail. c. j. date more commentary on missing information in relational databases (applicable and inapplicable information) e f codd workflow history management pinar koksal sena nural arpinar asuman dogac useless actions make a difference: strict serializability of database updates ravi sethi the myriad federated database prototype s.-y. hwang e.-p. lim h.-r. yang s. musukula k. mediratta m. ganesh d. clements j. stenoien j. srivastava hci challenges in government contracting (abstract) ira s. winkler elizabeth a. buie estc - a computer file for the scholarly community m j crump an architecture for an integrated active help system graeme knight danny kilis perry c. cheng a common query interface for multilingual document retrieval from databases of the european community institutions (abstract) a. steven pollitt geoffrey p. ellis martin p. smith mark r. gregory chun sheng li introduction to the book commentaries bob johnson visualisation of entrenched user preferences judy kay richard c. thomas a data modeling approach for office information systems simon gibbs dionysis tsichritzis a flexible approach to third-party usability william r. dolan joseph s. dumas embodiment in conversational interfaces: rea j. cassell t. bickmore m. billinghurst l. campbell k. chang h. vilhjálmsson h. yan term-ordered query evaluation versus document-ordered query evaluation for large document databases marcin kaszkiel justin zobel evaluation for collaborative systems laurie damianos lynette hirschman robyn kozierok jeffrey kurtz andrew greenberg kimberley walls sharon laskowski jean scholtz the sybase open server paul melmon world wide web: a hypercharged view of the internet presenting a mosaic of information choices tim fitzgerald an experimental framework for email categorization and management many problems are difficult to adequately explore until a prototype exists in order to elicit user feedback. 
one such problem is a system that automatically categorizes and manages email. due to a myriad of user interface issues, a prototype is necessary to determine what techniques and technologies are effective in the email domain. this paper describes the implementation of an add-in for microsoft outlook 2000 tm that intends to address two problems with email: 1) help manage the inbox by automatically classifying email based on user folders, and 2) to aid in search and retrieval by providing a list of email relevant to the selected item. this add-in represents a first step in an experimental system for the study of other issues related to information management. the system has been set up to allow experimentation with other classification algorithms and the source code is available online in an effort to promote further experimentation. kenricj mock the technical committee for human interface (sice-japan) - an introduction masaaki kurosu developing a focus in unsupervised database mining lawrence j. mazlack one flavor assumption and gamma-acyclicity for universal relation views h biskup l schnetgoke synthesizing auditory icons auditory icons add valuable functionality to computer interfaces, particularly when they are parameterized to convey dimensional information. they are difficult to create and manipulate, however, because they usually rely on digital sampling techniques. this paper suggests that new synthesis algorithms, controlled along dimensions of events rather than those of the sounds themselves, may solve this problem. several algorithms, developed from research on auditory event perception, are described in enough detail here to permit their implementation. they produce a variety of impact, bouncing, breaking, scraping, and machine sounds. by controlling them with attributes of relevant computer events, a wide range of parameterized auditory icons may be created. william w. gaver video graphic query facility database design the purpose of this paper is to describe the video graphic query facility's database design in terms of the query capabilities it orchestrates. the next section of the paper discusses the background and capabilities to be provided by the video graphic query facility. section iii describes the multi-media database underlying the facility and section iv summarizes what we believe to be a new database application. nancy h. mcdonald john p. mcnally human factors testing in the design of xerox's 8010 "star" office workstation integral to the design process of the xerox 8010 "star" workstation was constant concern for the user interface. the design was driven by principles of human cognition. prototyping of ideas, paper-and-pencil analyses, and human-factors experiments with potential users all aided in making design decisions. three of the human- factors experiments are described in this paper: a selection schemes test determined the number of buttons on the mouse pointing device and the meanings of these buttons for doing text selection. an icon test showed us the significant parameters in the shapes of objects on the display screen. a graphics test evaluated the user interface for making line drawings, and resulted in a redesign of that interface. william l. bewley teresa l. roberts david schroit william l. verplank funbase: a function-based information management system wafik m. farag toby j. teorey video-based gesture interface to interactive movies jakub segen senthil kumar designing & testing groupware user interfaces (abstract) jean c. scholtz anthony c. 
salvador james a. larson business: a narrative approach to user requirements for web design pär carlshamre martin rantzer information visualization information visualization, an increasingly important subdiscipline within the field of human computer interaction (hci) [13], focuses on visual mechanisms designed to communicate clearly to the user the structure of information and improve on the cost of access to large data repositories. in printed form, information visualization has included the display of numerical data (e.g., bar charts, plot charts, pie charts), combinatorial relations (e.g., drawings of graphs), and geographic data (e.g., encoded maps) [1, 9, 16]. in addition to these "static" displays, computer- based systems, such as the information visualizer [2] and dynamic queries [15] have coupled powerful visualization techniques (e.g., constraints, 3d, animation) with near real-time interactivity, i.e., the ability of the system to respond quickly to the user's direct manipulation commands. another important aspect of computer- based information systems concerns the dual communication with user and machine, which motivates the concept of visual formalism that has been introduced by harel: "the intricate nature of a variety of computer-related systems and situations can, and in our opinion should, be represented via visual formalisms; visual because they are to be generated, comprehended, and communicated by humans; and formal, because they are to be manipulated, maintained, and analyzed by computers" [12]. while historically hci and database research were kept separate, the interests of both research communities have been converging, mainly in what concerns the topic of information visualization. in the database community, the focus on information visualization started with research in visual query languages, where the visualization of schema and/or database instances is common (for a survey, see [4]). recently, a new generation of database systems is emerging, which tightly combine querying capabilities with visualization techniques and are information visualization systems in their own right [3, 8, 11]. database applications that access large data repositories, such as data mining and data warehousing, and the enormous quantity of information sources on the www available to users with diverse capabilities also provide hci researchers with new opportunities for information visualization (see, for instance, the acm report on strategic directions in hci [13], the reports of the "fadiva" working group [10], and the work presented in [14]). the objective of this special issue is two-fold: to compile some of the most recent research on information visualization from both communities, and to make it available to a large readership whose main interests lie in the management of data. the eight papers in this issue cover fundamental topics in information visualization, including tailorable multi- visualizations (i.e., the ability of the system to provide the user with alternative visualizations, depending on their suitability to different data, tasks, and users' preferences); near real-time interactivity when dealing with very large data sizes; effective display and color usage, and multidimensionality. a short overview of each paper follows. tiziana catarci isabel f. cruz to take arms against a sea of email the rapid growth of the internet means there is more email traffic now than ever before, and there will be still more in years to come. 
there was a time when only hard-core hackers had to deal with significant amounts of email. as email becomes a standard medium of communication, and more and more nonprogrammers join mailing lists, an increasing number of people find themselves receiving large amounts of email and must cope with it. andrew arensburger azriel rosenfeld envision: a user-centered database of computer science literature project envision is an early nsf-funded digital library effort to develop a multimedia collection of computer science literature with full-text searching and full-content retrieval capabilities. envision was launched in 1991 in accordance with the acm publications board's plans for encouraging research studies to develop an electronic archive for computer science. lenwood s. heath deborah hix lucy t. nowell william c. wake guillermo a. averboch eric labow scott a. guyer dennis j. brueni robert k. france kaushai dalal edward a. fox the functional data model and the data languages daplex daplex is a database language which incorporates: a formulation of data in terms of entities; a functional representation for both actual and virtual data relationships; a rich collection of language constructs for expressing entity selection criteria; a notion of subtype/supertype relationships among entity types. this paper presents and motivates the daplex language and the underlying data model on which it is based. david w. shipman personalized galaxies of information earl rennison inhabited virtual worlds: a new frontier for interaction design bruce damer mediating the views of databases and database users the natural- language/deduction group at sri international has undertaken several large projects integrating knowledge representation, the modeling and use of distributed conventional databases, logical deduction, and natural- language processing. one of the largest projects, ladder [2], involved accessing data distributed over a computer network by using queries expressed in english. work with ladder (and several similar systems) has revealed that: (1) users wish to talk about data in terms of the enterprise in which the data are to be used. users do not confine their questions to concepts and terminology covered by the database per se. (2) users are seldom satisfied with access only to the data in a database. they need to know the kind of data available (i.e., they want to ask questions about the db schema), and they expect systems to include information that can be computed from "common knowledge" and information stored explicitly in the database (e.g., if a database records where two ships are, users expect the system to know the distance between them). (3) users are not satisfied with access to an existing database. they want to tell the system new facts. some of these are not suitable for storage in conventional databases (e.g., statements involving quantification), and some involve counter factuals (e.g., "suppose the ship were 100 miles south of its current location..."). (4) given natural- language access to a dbms, users expect to interact in natural language with other types of software, too. moreover, they expect the various underlying software packages to understand one another's results (e.g., user: "who is the commander of the ship?" system: "admiral brown." user: "send him a copy of smith's memo." the mailer is expected to understand the output from the database). gary g. hendrix what should be modelled? 
balzer: this discussion is intended to set the stage for further discussions by soliciting ai, db and pl views of "what should be modelled?" and the nature of models. [this discussion suffered two problems. first, it was the initial attempt at the workshop by people in different areas to communicate on the same subject. second, the issue of "what should be modelled?" was frequently confused with "how should it be modelled?" to aid the reader in understanding the issues that were confused, comments by deutsch, hayes, levesque, rich and sibley have been extracted and placed at the beginning, ed.] who are the 'users' in a user-centered virtual library? dudee chiang phidias: informed design through use of intelligent hypercad (abstract) raymond j. mccall analysis of locking policies in database management systems quantitative analysis of locking mechanisms and of their impact on the performance of transactional systems has yet received relatively little attention. although numerous concurrency mechanisms have been proposed and implemented, there is an obvious lack of experimental as well as analytical studies of their behaviour and their influence on system performance. we present in this paper an analytical framework for the performance analysis of locking mechanisms in transactional systems based on hierarchical analytical modelling. three levels of modelling are considered: at level 1, the different stages (lock request, execution, blocking) transactions go through during their life-time are described; the organization and operations of the cpu and i/o resources are analysed at level 2; transactions' behaviour during their lock request phase is analysed at modelling level 3. this hierarchical approach is applied to the analysis of a physical locking scheme involving a static lock acquisition policy. a simple probabilistic model of the transaction behaviour is used to derive the probability that a new transaction is granted the locks it requests given the number of transactions already active as a function of the granularity of the database. on the other hand, the multiprogramming effect due to the sharing of cpu and i/o resources by transactions is analysed using the standard queueing network approaches and the solution package qnap. in a final step, the results on the blocking probabilities and the multiprogramming effect are used as input of a global performance model of the transactional system. markovian analysis is used to solve this model and to obtain the throughput of the system as a function of the database granularity and other parameters. the results obtained provide a clear understanding of the various factors which determine the global performance, of their role and importance. they also raise many new issues which can only be solved by further extensive experimental and analytical studies and show that two particular topics deserve special attention: the modelling of transaction behaviour and the modelling of locking overheads. d. potier ph. leblanc tangible bits: towards seamless interfaces between people, bits and atoms hiroshi ishii brygg ullmer yahoo! as an ontology: using yahoo! categories to describe documents we suggest that one (or a collection) of names of yahoo! (or any other www indexer's) categories can be used to describe the content of a document.
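the analysis-of-locking-policies entry above derives, from a simple probabilistic model, the probability that a new transaction is granted the locks it requests given the number of already-active transactions and the database granularity. the closed form below is only a minimal sketch of that kind of model; the uniform-choice and disjoint-lock-set assumptions, and the function and parameter names, are ours rather than the paper's.

```python
from math import comb

def grant_probability(granules, locks_per_txn, active_txns):
    """probability that a new transaction obtains all of its locks,
    under illustrative assumptions (not the paper's exact model):
    - the database is split into `granules` lockable units,
    - every transaction locks `locks_per_txn` granules chosen uniformly,
    - the `active_txns` running transactions hold disjoint lock sets.
    the new transaction succeeds iff its random lock set avoids every
    currently held granule."""
    held = active_txns * locks_per_txn
    free = granules - held
    if free < locks_per_txn:
        return 0.0
    # ways to pick the request entirely from free granules / all ways
    return comb(free, locks_per_txn) / comb(granules, locks_per_txn)

# coarser granularity (fewer, larger granules) lowers the grant probability
print(grant_probability(granules=1000, locks_per_txn=5, active_txns=20))
print(grant_probability(granules=100, locks_per_txn=5, active_txns=10))
```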
such categories offer a standardized and universal way for referring to or describing the nature of real world objects, activities, documents and so on, and may be used (we suggest) to semantically characterize the content of documents. www indices, like yahoo! provide a huge hierarchy of categories (topics) that touch every aspect of human endeavors. such topics can be used as descriptors, similarly to the way librarians use for example, the library of congress cataloging system to annotate and categorize books. in the course of investigating this idea, we address the problem of automatic categorization of webpages in the yahoo! directory. we use telltale as our classifier; telltale uses n-grams to compute the similarity between documents. we experiment with various types of descriptions for the yahoo! categories and the webpages to be categorized. our findings suggest that the best results occur when using the very brief descriptions of the yahoo! categorized entries; these brief descriptions are provided either by the entries' submitters or by the yahoo! human indexers and accompany most yahoo!-indexed entries. yannis labrou tim finin cooperative information systems (tutorial session)(abstract only): a research agenda goals and content: the tutorial proposes a generic architecture for cooperative information systems which consists of four layers: the system layer which includes legacy systems, a system integration layer, a human cooperation layer, and an organizational layer. for each, the tutorial will review fundamental concepts, promising research directions, and open questions. the concepts will be illustrated with detailed case studies from production and service industries and with results from ongoing research efforts. matthais jarke john mylopoulos cdam - compound document access and management: an object-oriented approach wolfgang herzner erwin hocevar models, prototypes, and evaluations for hci design: making the structured approach practical george casaday cynthia rainis the effects of positional constancy on searching menus for information one of the more popular methods today for instructing software designers on how to structure man-display interfaces is with guidelines. numerous design guidelines have been promulgated in the last several years (engel and granda, 1975; ramsey and atwood, 1980; smith, 1980; kennedy, 1974; pew and rollins, 1975) and there is still much current activity in collecting and expanding screen guidelines (smith, 1981; smith & aucella, 1982) in the past few years an increased number of empirical investigations quantifying directly the behavioral impacts of individual design guidelines have appeared in the literature. issues such as the depth of menu hierarchies (miller, 1981), eye movements during menu viewing (card, 1982; kolers, duchnicky, and ferguson, 1981), or location of screen entry areas (granda, teitelbaum, and dunlap, 1982) have been experimentally studied. richard c. teitelbaum richard e. granda beating the limitations of camera-monitor mediated telepresence with extra eyes kimiya yamaashi jeremy r. cooperstock tracy narine william buxton usability testing in the field: bringing the laboratory to the user david e. rowley supporting the virtual organization through information technology in a new venture: the retex experience jonathan w. 
palmer query optimization and processing in federated database systems ee-peng lim jaideep srivastava object management and sharing in autonomous, distributed databases an important current trend in information management is from a record-based to an object-based orientation. in particular, existing record-oriented database management systems fulfill many of the requirements of traditional database application domains, but they fall short of providing facilities well-suited to applications in office information systems, design engineering databases, and artificial intelligence systems. in an object-oriented system: information units of various modalities, levels of granularity, and levels of abstraction have individual identity; semantic primitives for object classification and inter-relation are explicitly part of the system; and objects can be active as well as passive. the purpose of the research project described here is to devise and experimentally test concepts, techniques and mechanisms to support a distributed object management system, termed the distributed personal knowledge manager (dpkm). dpkm is an adaptive tool for the non-computer expert; it is intended to allow end-users to define, manipulate, and evolve collections of information. dpkm handles various forms of information/knowledge in an integrated manner; this includes symbolic data, meta-data, derived data (rules), behavioral information (procedures), constraints, and mixed modality information. an individual dpkm also serves as an access port to other (external) information resources. this research specifically focuses on the following issues. an information model to support the integrated specification of various forms of data/knowledge [1]; an end-user interface providing a layered view of knowledge, multi-media information input and output, and prescriptive user guidance [1, 2]; an efficient mechanism for internally organizing and evolving dpkm databases [3]; a multi-level networking/communication mechanism to support inter-dpkm information exchange, sharing, coordination, and access control [4, 5]. in the approach taken in this research, each data/knowledge base is a logical context (node) in a logical network. associated with each context is a collection of information objects and mappings. the objects model units of potentially shareable information at different levels of abstraction; these are: symbolic (or atomic) objects, abstract objects, object classifications via enumeration (enumerated sets) or via selection predicate (predicate sets), constraints on inter-object relationships (generic mappings), and behavioral objects (procedures). mappings specify relations among instances of objects. it is significant to note that information objects classically distinguished in database terms as "schema" and "data" are treated here within a uniform framework. an extensible set of operations supports the manipulation and sharing of objects. an experimental prototype implementation of dpkm is under development, based on an interconnected network of personal workstations and computers. it is intended that the dpkm project will examine applications in a variety of domains, but it will principally focus on the researcher and design engineer for the purposes of initial experimental application and testing. dennis mcleod strategies for a better user interface leonel morales díaz relational queries computable in polynomial time (extended abstract) query languages for relational databases have received considerable attention.
in 1972 codd [cod72] showed that two natural mathematical languages for queries, one algebraic and the other a version of first order predicate calculus, had identical powers of expressibility. query languages which are as expressive as codd's relational calculus are sometimes called complete. this term is misleading, however, because many interesting queries are not expressible in "complete" languages. in this paper we show: theorem 2: the fixpoint hierarchy collapses at the first fixpoint level. that is, any query expressible with several applications of least fixpoint can already be expressed with one. we also show: theorem 1: let l be a query language consisting of relational calculus plus the least fixpoint operator. suppose that l contains a relation symbol for a total ordering relation on the domain (e.g. lexicographic ordering). then the queries expressible in l are exactly the queries computable in polynomial time. theorem 1 was discovered independently by m. vardi [var82]. it gives a simple syntactic categorization of those queries which can be answered in polynomial time. of course queries requiring polynomial time in the size of the database are usually prohibitively expensive. we also consider weaker languages for expressing less complex queries. neil immerman tuning databases for high performance dennis shasha design alternatives for user interface management systems based on experience with cousin user interface management systems (uimss) provide user interfaces to application systems based on an abstract definition of the interface required. this approach can provide higher-quality interfaces at a lower construction cost. in this paper we consider three design choices for uimss which critically affect the quality of the user interfaces built with a uims, and the cost of constructing the interfaces. the choices are examined in terms of a general model of a uims. they concern the sharing of control between the uims and the applications it provides interfaces to, the level of abstraction in the definition of the information exchanged between user and application, and the level of abstraction in the definition of the sequencing of the dialogue. for each choice, we argue for a specific alternative. we go on to present cousin, a uims that provides graphical interfaces for a variety of applications based on highly abstracted interface definitions. cousin's design corresponds to the alternatives we argued for in two out of three cases, and partially satisfies the third. an interface developed through, and run by, cousin is described in some detail. philip j. hayes pedro a. szekely richard a. lerner selection from alphabetic and numeric menu trees using a touch screen: breadth, depth, and width goal items were selected by a series of touch-menu choices among sequentially subdivided ranges of integers or alphabetically ordered words. the number of alternatives at each step, b, was varied, and, inversely, the size of the target area for the touch. mean response time for each screen was well described by t = k + c log b, in agreement with the hick-hyman and fitts' laws for decision and movement components in series. it is shown that this function favors breadth over depth in menus, whereas others might not. speculations are offered as to when various functions could be expected. t. k. landauer d. w.
nachbar eyes at the interface there is little dispute that the main channels of intercommunication of people with the world at large are: sight, sound, and touch; and for people with other people: eye-contact, speech, gesture. advanced human-computer interfaces increasingly implicate speech i/o, and touch or some form of manual input. richard a. bolt a relation-based language interpreter for a content addressable file store the combination of the content addressable file store (cafs®; cafs is a registered trademark of international computers limited) and an extension of relational analysis is described. this combination allows a simple and compact implementation of a database query and update language (fidl). the language has one of the important properties of a "natural" language interface by using a "world model" derived from the relational analysis. the interpreter (flin) takes full advantage of the cafs by employing a unique database storage technique which results in a fast response to both queries and updates. t. r. addis hyperstorm - administering structured documents using object-oriented database technology klemens böhm karl aberer of maps and scripts - the status of formal constructs in cooperative work kjeld schmidt the human guidance of automated design lynne colgan robert spence paul rankin aggregation everywhere: data reduction and transformation in the phoenix data warehouse this paper describes the phoenix system, which loads a data warehouse and then reports against it. between the raw atomic data of the source system and the business measures presented to users there are many computing environments. aggregation occurs everywhere: initial bucketing by the natural keys on the mainframe, loading the fact table using a mapping table, maintaining aggregate tables and reporting tables in the data base, in the gui, in sql queries issued on behalf of client tools by the web server, and inside the client tools themselves. producing the desired business measures required writing a complex sql query, and post-processing an excel pivot table. steven tolkin a cross-media adaptation strategy for multimedia presentations adaptation techniques for multimedia presentations are mainly concerned with switching between different qualities of single media elements to reduce the data volume and by this to adapt to limited presentation resources. this kind of adaptation, however, is limited to an inherent lower bound, i.e., the lowest acceptable technical quality of the respective media type. to overcome this limitation, we propose cross- media adaptation in which the presentation alternatives can be media elements of different media type, even different fragments. thereby, the alternatives can extremely vary in media type and data volume and this enormously widens the possibilities to efficiently adapt to the current presentation resources. however, the adapted presentation must still convey the same content as the original one, hence, the substitution of media elements and fragments must preserve the presentation semantics. therefore, our cross-media adaptation strategy provides models for the automatic augmentation of multimedia documents by semantically equivalent presentation alternatives. additionally, during presentation, substitution models enforce a semantically correct information flow in case of dynamic adaptation to varying presentation resources. 
the cross-media adaptation strategy allows for flexible reuse of multimedia content in many different environments and, at the same time, maintains a semantically correct information flow of the presentation. susanne boll wolfgang klas jochen wandel a prefetching scheme based on the analysis of user access patterns in news-on- demand system the nod article makes a difference to vod data in terms of media type, size, creation interval and user interactivity. because of these intrinsic characteristics, user access patterns of the nod article can be different from that of vod data. in this paper, we analyze the log file of one electronic newspaper to show the short-term popularity and long-term popularity patterns. based on these patterns, we propose llbf (largest life- cycle based frequency) prefetching scheme that uses the two popularity patterns to cache a set of popular articles. in simulation, we show that the proposed llbf prefetching scheme increases hit ratio, and reduces the number of replacements more than other replacement algorithms as a small number of articles such as headline news is prefetched in main memory. tae-uk choi young-ju kim ki-dong chung hypertext ii jakob nielsen using psychomotor models of movement in the analysis and design of computer pointing devices anant kartik mithal task oriented or task disoriented: designing a usable help web michael priestley multimedia access and retrieval (panel session): the state of the art and future directions shih-fu chang gwendal auffret jonathan foote chung-shen li behzad shahraray tanveer syeda-mahmood hongjiang zhang reliability mechanisms for sdd-1: a system for distributed databases this paper presents the reliability mechanisms of sdd-1, a prototype distributed database system being developed by the computer corporation of america. reliability algorithms in sdd-1 center around the concept of the reliable network (relnet). the relnet is a communications medium incorporating facilities for site status monitoring, event timestamping, multiply buffered message delivery, and the atomic control of distributed transactions. this paper is one of a series of companion papers on sdd-1 [3, 4, 6, 13]. micael hammer david shipman multiparty videoconferencing at virtual social distance: majic design this paper describes the design and implementation of majic, a multi- party videoconferencing system that projects life-size video images of participants onto a large curved screen as if users in various locations are attending a meeting together and sitting around a table. majic also supports multiple eye contact among the participants and awareness of the direction of the participants' gaze. hence, users can carry on a discussion in a manner comparable to face-to-face meetings. we made video-tape recordings of about twenty visitors who used the prototype of majic at the nikkei collaboration fair in tokyo. our initial observations based on this experiment are also reported in this paper. ken-ichi okada fumihiko maeda yusuke ichikawaa yutaka matsushita storage structures in digital libraries (poster): jstor amy j. kirchhoff mark ratliff dimensionality reduction for similarity searching in dynamic databases databases are increasingly being used to store multi-media objects such as maps, images, audio and video. storage and retrieval of these objects is accomplished using multi-dimensional index structures such as r*-trees and ss- trees. as dimensionality increases, query performance in these index structures degrades. 
this phenomenon, generally referred to as the dimensionality curse, can be circumvented by reducing the dimensionality of the data. such a reduction is however accompanied by a loss of precision of query results. current techniques such as qbic use svd transform-based dimensionality reduction to ensure high query precision. the drawback of this approach is that svd is expensive to compute, and therefore not readily applicable to dynamic databases. in this paper, we propose novel techniques for performing svd-based dimensionality reduction in dynamic databases. when the data distribution changes considerably so as to degrade query precision, we recompute the svd transform and incorporate it in the existing index structure. for recomputing the svd-transform, we propose a novel technique that uses aggregate data from the existing index rather than the entire data. this technique reduces the svd-computation time without compromising query precision. we then explore efficient ways to incorporate the recomputed svd- transform in the existing index structure without degrading subsequent query response times. these techniques reduce the computation time by a factor of 20 in experiments on color and texture image vectors. the error due to approximate computation of svd is less than 10%. k. v. ravi kanth divyakant agrawal ambuj singh new technological windows into mind: there is more in eyes and brains for human-computer interaction boris m. velichkovsky john paulin hansen ir evaluation methods for retrieving highly relevant documents this paper proposes evaluation methods based on the use of non-dichotomous relevance judgements in ir experiments. it is argued that evaluation methods should credit ir methods for their ability to retrieve highly relevant documents. this is desirable from the user point of view in modern large ir environments. the proposed methods are (1) a novel application of p-r curves and average precision computations based on separate recall bases for documents of different degrees of relevance, and (2) two novel measures computing the cumulative gain the user obtains by examining the retrieval result up to a given ranked position. we then demonstrate the use of these evaluation methods in a case study on the effectiveness of query types, based on combinations of query structures and expansion, in retrieving documents of various degrees of relevance. the test was run with a best match retrieval system (in-query1) in a text database consisting of newspaper articles. the results indicate that the tested strong query structures are most effective in retrieving highly relevant documents. the differences between the query types are practically essential and statistically significant. more generally, the novel evaluation methods and the case demonstrate that non-dichotomous relevance assessments are applicable in ir experiments, may reveal interesting phenomena, and allow harder testing of ir methods. kalervo jarvelin jaana kekalainen collaborative solid modeling on the www stephen chan martin wong vincent ng subjective usability feedback from the field over a network bruce elgin design of an external schema facility to define and process recursive structures the role of the external schema is to support user views of data and thus to provide programmers with easier data access. this author believes that an external schema facility is best based on hierarchies, both simple and recursive. 
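the ir-evaluation entry above introduces measures that accumulate the gain a user obtains by examining the ranked result up to a given position. the sketch below illustrates that idea with a plain and a discounted variant; the example relevance grades and the log-based discount are assumptions chosen for illustration, not the published definitions.

```python
from math import log2

def cumulated_gain(relevances, k):
    """plain cumulated gain at rank k: the sum of the graded relevance
    scores (e.g. 0..3) of the first k retrieved documents."""
    return sum(relevances[:k])

def discounted_cumulated_gain(relevances, k, base=2):
    """discounted variant: later ranks contribute less. the log-based
    discount used here is one common choice, assumed for illustration."""
    total = 0.0
    for i, rel in enumerate(relevances[:k], start=1):
        total += rel if i < base else rel / log2(i)
    return total

ranked_relevances = [3, 2, 3, 0, 1, 2]   # hypothetical graded judgements
print(cumulated_gain(ranked_relevances, k=5))
print(discounted_cumulated_gain(ranked_relevances, k=5))
```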
after a brief introduction to an external schema facility to support simple hierarchical user views, the requirements for a facility for recursive hierarchies are listed and the necessary extensions to the external schema definition language are offered. functions that must be provided for generality in definition are node specification and node control. tree traversal functions must be provided for processing. definitions of each and examples of use are presented. eric k. clemons lifestreams: an alternative to the desktop metaphor scott fertig eric freeman david gelernter advances in information retrieval(acm 82 panel session) in conventional information retrieval boolean combinations of index terms are used to formulate the users' information request. boolean queries are difficult to generate and the retrieved items are not presented to the user in any useful order. a new flexible retrieval system is described which makes it possible to relax the strict conditions of boolean query logic thereby retrieving useful items that are rejected in a conventional retrieval situation. the query structure inherent in the boolean system is preserved, while at the same time weighted terms may be incorporated into both queries and stored documents; the retrieved output can also be ranked in strict similarity order with the user queries. a conventional retrieval system can be modified to make use of the flexible metric system. laboratory tests indicate that the extended system produces better retrieval output than conventional boolean or vector processing systems. donald h. kraft a statistical approach to incomplete information in database systems there are numerous situations in which a database cannot provide a precise answer to some of the questions that are posed. sources of imprecision vary and include examples such as recording errors, incompatible scaling, and obsolete data. in many such situations, considerable prior information concerning the imprecision exists and can be exploited to provide valuable information for queries to which no exact answer can be given. the objective of this paper is to provide a framework for doing so. eugene wong estimating precision by random sampling (poster abstract) gordon v. cormack ondrej lhotak christopher r. palmer what's happening steven cherry role-based security, object oriented databases and separation of duty in this paper we combined concepts of role-based protection and object oriented (o-o) databases to specify and enforce separation of duty as required for commercial database integrity [5, 23, 24]. roles essentially partition database information into access contexts. methods (from the o-o world) associated with a database object, also partition the object interface to provide windowed access to object information. by specifying that all database information is held in database objects and authorizing methods to roles, we achieve object interface distribution across roles. for processing in the commercial world we can design objects and distribute their associated methods to different roles. by authorizing different users to the different roles, we can enforce both the order of execution on the objects and separation of duty constraints on method execution. matunda nyanchama sylvia osborn charles welty query size estimation by adaptive sampling (extended abstract) we present an adaptive, random sampling algorithm for estimating the size of general queries. 
the algorithm can be used for any query q over a database d such that 1) for some n, the answer to q can be partitioned into n disjoint subsets q1, q2, …, qn, and 2) for 1 ≤ i ≤ n, the size of qi is bounded by some function b(d, q), and 3) there is some algorithm by which we can compute the size of qi, where i is chosen randomly. we consider the performance of the algorithm on three special cases of the algorithm: join queries, transitive closure queries, and general recursive datalog queries. richard j. lipton jeffrey f. naughton guaranteeing rights for the user clare-marie karat multiple presentations of www documents using style sheets philip m. marden ethan v. munson optimizing recall/precision scores in ir over the www matthew montebello active memory for text information retrieval a symbolic associative processor (sap), capable of supporting parallel keyword match and record match functions, is proposed to select and streamline textual data for information retrieval. consequently, high volume text data could be analysed on-the-fly before being channelled to cpu, and thus, cushion the impact of von neumann bottleneck commonly experienced in applications requiring high i/o bandwidth. this paper identifies some of the system requirements to support text information retrieval using sap with the aid of simplified examples. y. h. ng s. p.v. barros knowledge management systems: issues, challenges, and benefits maryam alavi dorothy e. leidner pardes: a data-driven oriented active database model most active database models adopted an event-driven approach in which whenever a given event occurs the database triggers some actions. many derivations are data-driven by nature, deriving the values of data-elements as a function of the values of other derived data-elements. the handling of such rules by current active databases suffers from semantic and pragmatic fallacies. this paper explores these fallacies and reports about the pardes language and supporting architecture, aiming at the support of data-driven rules, in an active database framework. opher etzion newt's cape (abstract): an html environment for the newton pda steve weyer high tech or high touch (panel session): automation and human mediation in libraries there are those who now think that traditional library services, such as cataloging and reference, will no longer be needed in the future, or at least will be fully automated. others are equally adamant that human intervention is not only important but essential. underlying such positions are a host of assumptions - about the continued existence and place of paper, the role of human intelligence and interpretation, the nature of research, and the significance of the human element. this panel brings together experts in libraries and digital technology to uncover such issues and assumptions and to discuss and debate the place of people and machines in cataloging and reference work. david levy william arms oren etzioni diane nester barbara tillett are newsgroups virtual communities?: teresa l. roberts groupware and workflow: a survey of systems and behavioral issues steven poltrock jonathan grudin using earcons to improve the usability of tool palettes stephen a. brewster design and implementation of a digital library james richvalsky david watkins wsq/dsq: a practical approach for combined querying of databases and the web we present wsq/dsq (pronounced "wisk-disk"), a new approach for combining the query facilities of traditional databases with existing search engines on the web. 
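the query-size-estimation entry above partitions the answer to a query into n disjoint subsets whose individual sizes can be computed for a randomly chosen index. a minimal monte-carlo sketch of that idea follows; the fixed sample budget stands in for the paper's adaptive stopping rule, and the toy partition-size function is hypothetical.

```python
import random

def estimate_query_size(n, partition_size, samples=100):
    """estimate |q| = |q_1| + ... + |q_n| by sampling partition indices
    uniformly. `partition_size(i)` returns |q_i| for a randomly chosen i
    (condition 3 of the entry above). a fixed sample budget replaces the
    paper's adaptive stopping rule (illustrative only)."""
    total = 0
    for _ in range(samples):
        i = random.randrange(1, n + 1)
        total += partition_size(i)
    # mean observed partition size scaled by the number of partitions
    return n * total / samples

# toy example: a query whose i-th partition holds (i % 7) matching tuples
true_sizes = {i: i % 7 for i in range(1, 10001)}
estimate = estimate_query_size(10000, lambda i: true_sizes[i], samples=500)
print(estimate, sum(true_sizes.values()))
```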
wsq, for _web-supported (database) queries_, leverages results from web searches to enhance sql queries over a relational database. dsq, for _database-supported (web) queries_, uses information stored in the database to enhance and explain web searches. this paper focuses primarily on wsq, describing a simple, low-overhead way to support wsq in a relational dbms, and demonstrating the utility of wsq with a number of interesting queries and results. the queries supported by wsq are enabled by two _virtual tables_, whose tuples represent web search results generated dynamically during query execution. wsq query execution may involve many high- latency calls to one or more search engines, during which the query processor is idle. we present a lightweight technique called _asynchronous iteration_ that can be integrated easily into a standard sequential query processor to enable concurrency between query processing and multiple web search requests. asynchronous iteration has broader applications than wsq alone, and it opens up many interesting query optimization issues. we have developed a prototype implementation of wsq by extending a dbms with virtual tables and asynchronous iteration; performance results are reported. roy goldman jennifer widom explicitly modal interfaces for business professionals alan wexelblat integrated data capture and analysis tools for research and testing on graphical user interfaces our on-line data capture and analysis tools include an event capture program, event data filtering programs, a multimedia data analyzer, and a retrospective verbal protocal recorder for use with the multimedia data analyzer. off-line observation logging is also supported. additional plans for development include the integration of an online time- synchronized observation logger, and time-synchronized eyetracking data recording. the tool set provides an integrated multi-source data collection, processing, and analysis system for: 1) comparing and evaluating software applications and prototypes; 2) evaluating software documentation and instructional materials; and 3) evaluating on-line training. the tools currently run on macintosh computers and under microsoft windows. plans are to port the tools to run under presentation manager and motif. monty l. hammontree jeffrey j. hendrickson billy w. hensley exploring bimanual camera control and object manipulation in 3d graphics interfaces ravin balakrishnan gordon kurtenbach the document management component of a multimedia data model we describe estrella a multimedia object oriented data model developed by matra. this model is based upon objects, classes (organized in a lattice) and functions (allow to dynamically implement operations on data and new data types). the valid states of the data base are described by a set of integrity constraints. we propose a document model capable to manage structured documents and to index them with a superimposed codes method. we present as well the associated data manipulation language with a navigational interface and content search operators c. damier b. defude the webbook and the web forager: video use scenarios for a world-wide web information workspace stuart k. card george g. robertson william york a multiparadigmatic environment for interacting with databases we present a prototype system to be used for visually accessing heterogeneous databases. 
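the wsq/dsq entry above overlaps query processing with multiple high-latency web search calls through asynchronous iteration. the sketch below only mimics that overlap with a thread pool; the web_search stand-in, its latency, and the table of states are hypothetical and not the paper's system.

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def web_search(term):
    """stand-in for a high-latency search-engine call (hypothetical)."""
    time.sleep(1.0)                      # simulated network latency
    return {"term": term, "hits": len(term) * 1000}

def join_with_searches(rows, key):
    """issue one search per row concurrently and combine each result with
    its originating tuple as results arrive, instead of blocking on one
    search at a time (the spirit of asynchronous iteration)."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = {pool.submit(web_search, row[key]): row for row in rows}
        for fut in as_completed(futures):
            yield {**futures[fut], **fut.result()}

states = [{"state": "california"}, {"state": "nevada"}, {"state": "oregon"}]
for combined in join_with_searches(states, "state"):
    print(combined)
```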
the basic idea is to provide the user with several visual representations of data as well as multiple interaction mechanisms for both querying databases and visualizing the query results. since some visual representations better fit certain user classes, the system adapts to the user's needs by switching to the most appropriate visual representation and interaction mechanism, according to a suitable user model. the data and query representations are consistent in every paradigm. such a notion of consistency stems from formal basis, i.e. a twofold data/representation model, namely the graph model, and a set of basic graphical primitives. this formal approach allows the user to switch from one interaction paradigm to another, always saving the query state. t. catarci m. f. costabile a. massari l. saladini g. santucci authentication in office system internetworks jay e. israel theodore a. linden cscw overview (tutorial session)(abstract only) goals and content: to provide an organized and entertaining overview of the world fo cscw for newcomers to the field. we will offer a framework for understanding cscw as a research domain, a management opportunity, and a business challenge. we will analyze some of the great successes and great disasters in cscw. we will provide an overview of the cscw conference, incuding sunday's tutorial program, and will suggest how to learn more about cscw. we will conclude with refreshments and an opportunity to meet many of the conference participants. jonathan grudin steven e. poltrock john patterson stretch button scrollbar daniel j. smith robert a. henning editorial jay blickstein assessing data quality for information products amir parssian sumit sarkar varghese s. jacob web site redesign follies jan boucher marion smith siteseer: personalized navigation for the web james rucker marcos j. polanco about 23 million documents match your query… kerry rodden pesce: a visual generator for software understanding rogelio adobbati w. lewis johnson stacy marsella data base design principles for striping and placement of delay-sensitive data on disks stavros christodoulakis fenia a. zioga cogenthelp: a tool for authoring dynamically generated help for java guis david e. caldwell michael white anonymous collaboration: an alternative technique for working together andrew lee tools and trade-offs: making wise choices for user-centered design stephanie rosenbaum judith ramey judee humburg anne seeley adept - advanced design environment for prototyping with task models peter johnson stephanie wilson panos markopoulos james pycock dynamic wais book: an electronic book to publish and consult information distributed across a wide-area network the aim of this paper is to present the results of a research in the design and development of a "network user interface" application which provides capabilities to locally organise information resulting from searches conducted on a collection of data distributed across a wide area network. a strong requirement which had to be satisfied was to define a model for the user interface characterised by easy and intuitive use- ability in order to provide a system that could be easily used by people not necessarily expert in computing systems. s. bizzozero a. 
rana interactive internet search: keyword, directory and query reformulation mechanisms compared this article compares search effectiveness when using query-based internet search (via the google search engine), directory- based search (via yahoo) and phrase-based query reformulation assisted search (via the hyperindex browser) by means of a controlled, user-based experimental study. the focus was to evaluate aspects of the search process. cognitive load was measured using a secondary digit-monitoring task to quantify the effort of the user in various search states; independent relevance judgements were employed to gauge the quality of the documents accessed during the search process. time was monitored in various search states. results indicated the directory-based search does not offer increased relevance over the query-based search (with or without query formulation assistance), and also takes longer. query reformulation does significantly improve the relevance of the documents through which the user must trawl versus standard query-based internet search. however, the improvement in document relevance comes at the cost of increased search time and increased cognitive load. peter bruza robert mcarthur simon dennis what is the sound of a headache? using digital media to represent inner experiences: louise penberthy jay bolter the zephyr help instance: promoting ongoing activity in a cscw system mark s. ackerman leysia palen accessing relational databases from the world wide web tam nguyen v. srinivasan integrating the natural environment into a gis for decision support glenn s. iwerks hanan samet a narrative approach to user requirements for web design stefana broadbent francesco cara perfomance study of synchronization schemes on parallel cbr video servers chow-sing lin wei shu min-you wu task scheduling using intertask dependencies in carnot the carnot project at mcc is addressing the problem of logically unifying physically-distributed, enterprise-wide, heterogeneous information. carnot will provide a user with the means to navigate information efficiently and transparently, to update that information consistently, and to write applications easily for large, heterogeneous, distributed information systems. a prototype has been implemented which provides services for (a) enterprise modeling and model integration to create an enterprise-wide view, (b) semantic expansion of queries on the view to queries on individual resources, and (c) inter-resource consistency management. this paper describes the carnot approach to transaction processing in environments where heterogeneous, distributed, and autonomous systems are required to coordinate the update of the local information under their control. in this approach, subtransactions are represented as a set of tasks and a set of intertask dependencies that capture the semantics of a particular relaxed transaction model. a scheduler has been implemented which schedules the execution of these tasks in the carnot environment so that all intertask dependencies are satisfied. darrell woelk paul attie phil cannata greg meredith amit sheth munindar singh christine tomlinson pic matrices: a computationally tractable class of probabilistic query operators the inference network model of information retrieval allows a probabilistic interpretation of query operators. in particular, boolean query operators are conveniently modeled as link matrices of the bayesian network. 
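the pic-matrices entry that begins above models boolean query operators as link matrices over parent nodes in the inference network. the sketch below shows the usual independent-parents and/or combinations, each evaluable in time linear in the number of parents; these textbook forms are assumptions for illustration and are not the paper's pic matrices.

```python
from functools import reduce

def prob_and(parent_probs):
    """p(operator true) for an and-like link matrix, assuming the
    parent beliefs are independent: the product of the beliefs."""
    return reduce(lambda acc, p: acc * p, parent_probs, 1.0)

def prob_or(parent_probs):
    """or-like combination under the same independence assumption:
    1 minus the probability that every parent is false."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), parent_probs, 1.0)

# both combinations avoid enumerating the 2**n parent configurations
beliefs = [0.8, 0.6, 0.9]
print(prob_and(beliefs), prob_or(beliefs))
```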
prior work has shown, however, that these operators do not perform as well as the pnorm operators used for modeling query operators in the context of the vector space model. this motivates the search for alternative probabilistic formulations for these operators. the design of such alternatives must contend with the issue of computational tractability, since the evaluation of an arbitrary operator requires exponential time. we define a flexible class of link matrices that are natural candidates for the implementation of query operators and an o(n^2) algorithm (n = the number of parent nodes) for the computation of probabilities involving link matrices of this class. we present experimental results indicating that boolean operators implemented in terms of link matrices from this class perform as well as pnorm operators in the context of the inquery inference network. warren r. greiff w. bruce croft howard turtle style and substance in communication: implications for message structuring systems murli nagasundaram sigchi: the early years lorraine borman recommending and evaluating choices in a virtual community of use will hill larry stead mark rosenstein george furnas editorial steven pemberton a decision-theoretic approach to database selection in networked ir in networked ir, a client submits a query to a broker, which is in contact with a large number of databases. in order to yield a maximum number of documents at minimum cost, the broker has to make estimates about the retrieval cost of each database, and then decide for each database whether or not to use it for the current query, and if so, how many documents to retrieve from it. for this purpose, we develop a general decision-theoretic model and discuss different cost structures. besides cost for retrieving relevant versus nonrelevant documents, we consider the following parameters for each database: expected retrieval quality, expected number of relevant documents in the database and cost factors for query processing and document delivery. for computing the overall optimum, a divide-and-conquer algorithm is given. if there are several brokers knowing different databases, a preselection of brokers can only be performed heuristically, but the computation of the optimum can be done similarly to the single-broker case. in addition, we derive a formula which estimates the number of relevant documents in a database based on dictionary information. norbert fuhr from undo to multi-user applications - the demo michael spenke using thumbnails to search the web we introduce a technique for creating novel, textually-enhanced thumbnails of web pages. these thumbnails combine the advantages of image thumbnails and text summaries to provide consistent performance on a variety of tasks. we conducted a study in which participants used three different types of summaries (enhanced thumbnails, plain thumbnails, and text summaries) to search web pages to find several different types of information. participants took an average of 67, 86, and 95 seconds to find the answer with enhanced thumbnails, plain thumbnails, and text summaries, respectively. we found a strong effect of question category. for some questions, text outperformed plain thumbnails, while for other questions, plain thumbnails outperformed text.
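the decision-theoretic database-selection entry above has the broker combine, per database, estimates of retrieval quality, expected relevant documents, and cost factors in order to decide how many documents to fetch from each. the greedy allocation below is only a rough sketch of such a decision rule under a crude made-up benefit model; the paper's divide-and-conquer optimum is not reproduced.

```python
def allocate_documents(databases, budget, benefit_per_relevant=1.0):
    """greedy sketch of the broker's decision: at each step retrieve one
    more document from the database whose next document has the highest
    expected net benefit. each database is described by assumed estimates:
    (expected relevant documents, collection size, per-document cost)."""
    taken = {name: 0 for name in databases}
    for _ in range(budget):
        best_name, best_net = None, 0.0
        for name, (exp_relevant, size, doc_cost) in databases.items():
            # crude model: the share of relevant documents shrinks as more
            # documents are retrieved from that database
            remaining_relevant = max(exp_relevant - taken[name], 0)
            remaining_docs = max(size - taken[name], 1)
            p_next_relevant = remaining_relevant / remaining_docs
            net = benefit_per_relevant * p_next_relevant - doc_cost
            if net > best_net:
                best_name, best_net = name, net
        if best_name is None:        # no database is worth another document
            break
        taken[best_name] += 1
    return taken

dbs = {"news": (40, 1000, 0.01), "patents": (5, 500, 0.005), "web": (120, 10000, 0.002)}
print(allocate_documents(dbs, budget=50))
```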
enhanced thumbnails (which combine the features of text summaries and plain thumbnails) were more consistent than either text summaries or plain thumbnails, having for all categories the best performance or performance that was statistically indistinguishable from the best. allison woodruff andrew faulring ruth rosenholtz julie morrison peter pirolli speechskimmer: interactively skimming recorded speech barry arons object-oriented features of db2 client/server hamid pirahesh open (source)ing the doors for contributor-run digital libraries paul jones using approximations to scale exploratory data analysis in datacubes daniel barbará xintao wu chi '95 basic research symposium on human-computer interaction (abstract) cathleen wharton janni nielsen operations on sets in an oodb in this article, we express some ideas on how to select an arbitrary set of objects, or to combine objects and information in the objects, in an arbitrary way. in the following, an object-oriented database will be thought of as a collection of sets. the sets are collections of objects which all share a common message protocol. operations originating from relational algebra are defined on these sets. to make the retrieval of objects efficient, a storage strategy for objects is developed. combination of objects and changes in the objects' message protocol as a result of a retrieval request are solved by a filtering mechanism. jorn andersen trygve reenskaug agents2go: an infrastructure for location-dependent service discovery in the mobile electronic commerce environment in recent years, the growth of electronic commerce and mobile computing has created a new concept of mobile electronic commerce. in this paper we describe the agents2go system that attempts to solve problems related to location dependence that arise in a mobile electronic commerce environment. agents2go is a distributed system that provides mobile users with the ability to obtain location dependent services and information. our system also automatically obtains a user's current geographical location in cdpd (cellular digital packet data) based systems without relying on external aids such as gps (global positioning system). olga ratsimor vladimir korolev anupam joshi timothy finin a process-oriented scientific database model a database model is proposed for organizing data that describes natural processes studied experimentally. adapting concepts from object-oriented and temporal databases, this process-oriented scientific database model (posdbm) identifies two data object types (independent and dependent variables) and two types of relationships (becomes-a and affects-a) between data objects. successive versions of dependent variable objects are associated by the becomes-a relationship, while independent and dependent variable objects are associated by the affects-a relationship. thus, a process can be viewed as a sequence of states (versions) of a dependent variable object whose attributes are affected over time by independent variable objects. j. michael pratt maxine cohen fractals for secondary key retrieval in this paper we propose the use of fractals and especially the hilbert curve, in order to design good distance-preserving mappings. such mappings improve the performance of secondary-key and spatial-access methods, where multi-dimensional points have to be stored on a 1-dimensional medium (e.g., disk). good clustering reduces the number of disk accesses on retrieval, improving the response time. our experiments on range queries and nearest neighbor queries showed that the proposed hilbert curve achieves better clustering than older methods ("bit-shuffling", or peano curve), for every situation we tried. c. faloutsos s. roseman
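the fractals entry above leans on a distance-preserving mapping from 2-d grid points to 1-d positions. purely as an illustration of that idea (not the authors' implementation), the sketch below computes the standard hilbert-curve index of a grid cell; the function name, grid size and example coordinates are hypothetical.

```python
def hilbert_index(n, x, y):
    """map grid cell (x, y), with 0 <= x, y < n and n a power of two,
    to its position along a hilbert curve covering an n x n grid.

    nearby cells tend to receive nearby indices, which is the clustering
    property that makes the curve attractive for secondary-key access.
    """
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/reflect the quadrant so the recursion keeps the curve's orientation
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

# two vertically adjacent cells; a hilbert ordering usually keeps such
# neighbours close together on the curve
print(hilbert_index(16, 3, 4), hilbert_index(16, 3, 5))
```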
building user interfaces for database applications: the o2 experience p. borras j. c. mamou d. plateau b. poyet d. tallot webquilt: a proxy-based approach to remote web usability testing webquilt is a web logging and visualization system that helps web design teams run usability tests (both local and remote) and analyze the collected data. logging is done through a proxy, overcoming many of the problems with server-side and client-side logging. captured usage traces can be aggregated and visualized in a zooming interface that shows the web pages people viewed. the visualization also shows the most common paths taken through the web site for a given task, as well as the optimal path for that task, as designated by the designer. this paper discusses the architecture of webquilt and describes how it can be extended for new kinds of analyses and visualizations. normal forms and conservative properties for query languages over collection types strong normalization results are obtained for a general language for collection types. an induced normal form for sets and bags is then used to show that the class of functions whose input has height (that is, the maximal depth of nestings of sets/bags/lists in the complex object) at most i and output has height at most o definable in a nested relational query language without powerset operator is independent of the height of intermediate expressions used. our proof holds regardless of whether the language is used for querying sets, bags, or lists, even in the presence of variant types. moreover, the normal forms are useful in a general approach to query optimization. paredaens and van gucht proved a similar result for the special case when i = o = 1. their result is complemented by hull and su who demonstrated the failure of independence when powerset operator is present and i = o = 1. the theorem of hull and su was generalized to all i and o by grumbach and vianu. our result generalizes paredaens and van gucht's to all i and o, providing a counterpart to the theorem of grumbach and vianu. limsoon wong backtracking: the demise of "!" christopher welty word processing in an m.i.s. environment a funny thing happened on the way to the "office of the future" - wp type products (the step-children of the technology) have taken the lead in "extending the human resource" by the use of computing machinery for the support of office processes. one might mistake the previous statement for an extravagant claim. however, a quick review of some recent history will sustain my case: one of the most awesome m.i.s. environments in existence is ibm. for a long time their office products operation was treated with disdain. the group made money - but, the place to be was dp! every cliche applicable to the aloftness and mystery of dp processes was fostered by this group. users, in terms of an organization's dp people, ate it up and enjoyed cloaking themselves in the mystique of dp. then came word processing. and quiet as it was kept - word processing machines were and are computing machines. the software developed for a word processor was and is simply another program - generally better written than most - designed to permit a user to manipulate text.
at first, the wp product was very rigid and limited with the idea being "those wp folks can't handle anything else". the side effect in this kiss approach was the production of user - friendly software before it became fashionable. jean green dorsey building a user-defined interface a measurably easy-to-use interface has been built using a novel technique. novices attempted an electronic mail task using a command-line interface containing no help, no menus, no documentation, and no instruction. a hidden operator intercepted commands when necessary, creating the illusion of a true interactive session. the software was repeatedly revised to recognize users' new commands; in essence, the users defined the interface. this procedure was used on 67 subjects. the first version of the software could recognize only 7% of all the subjects' spontaneously generated commands; the final version could recognize 76% of those commands. this experience contradicts the idea that people are not good at designing their own command languages. through careful observation and analysis of user behavior, a mail interface unusable by novices evolved into one that let novices do useful work within minutes. dennis wixon john whiteside michael good sandra jones hypertext and pen computing norman meyrowitz when tvs are computers are tvs (panel) this panel brings together experts from tv production with those in the computer multimedia business. they will discuss what is likely to happen when the two media coexist. an exciting opportunity exists in merging the strengths of both media together synergistically to create pervasive and powerful interactive television. s. joy mountford peter mitchell pat o'hara joe sparks max whitby database research faces the information explosion henry f. korth abraham silberschatz state of the art in workflow management research and products c. mohan communication chairs: examples of mobile roomware components christian muller- tomfelde wolfgang reischl incremental database systems: databases from the ground up this paper discusses a new approach to database management systems that is better suited to a wide class of new applications such as scientific, hypermedia, and financial applications. these applications are characterized by their need to store large amounts of raw, unstructured data. our premise is that, in these situations, database systems need a way to store data without imposing a schema, and a way to provide a schema incrementally as we process the data. this requires that the raw data be mapped in complex ways to an evolving schema. stanley b. zdonik reno: a component-based user interface randy kerr mike markley martin sonntag tandy trower informing the design of collaborative virtual environments steve benford dave snowdon andy colebourne jon o'brien tom rodden 1-800-hypertext: browsing hypertext with a telephone stuart goose michael wynblatt hans mollenhauer adaptive information filtering: detecting changes in text streams the task of information filtering is to classify documents from a stream as either relevant or non-relevant according to a particular user interest with the objective to reduce information load. when using an information filter in an environment that is changing with time, methods for adapting the filter should be considered in order to retain classification accuracy. we favor a methodology that attempts to detect changes and adapts the information filter only if inevitable in order to minimize the amount of user feedback for providing new training data. 
yet, detecting changes may require costly user feedback as well. this paper describes two methods for detecting changes without user feedback. the first method is based on evaluating an expected error rate, while the second one observes the fraction of classification decisions made with a confidence below a given threshold. further, a heuristic for automatically determining this threshold is suggested and the performance of this approach is experimentally explored as a function of the threshold parameter. some empirical results show that both methods work well in a simulated change scenario with real world data. carsten lanquillon ingrid renz human factors in information filing and retrieval session overview: the development of, and increased reliance on, advanced workstations by a variety of users is creating the need for more user oriented filing and retrieving systems. with manuscripts, reports, personal mail, and data being stored in computers, the need for simplifying the storage and access of this information is increasing in importance. part of this simplifying process has to address the problem of remembering file structure and file names, keywords, passwords, and other retrieval cues, all of which can be very taxing to the user. the purpose of this session is to focus on those cognitive issues which affect user performance and satisfaction in the filing and retrieving of information from computer systems. raoul n. smith computing on a shoestring: initial data entry for service organizations this paper addresses the feasibility of computerized record-keeping for low-budget volunteer organizations, and presents results of an experiment designed to determine a fast, reliable, and comfortable data entry technique for enabling non-computer-users to enter large amounts of manually recorded data into a computer file. martha r. horton interactive sketching for the early stages of user interface design james a. landay brad a. myers defining and designing the performance-centered interface: moving beyond the user-centered interface karen l. mcgraw improving collaborative filtering with multimedia indexing techniques to create user-adapting web sites the internet is evolving from a static collection of hypertext, to a rich assortment of dynamic services and products targeted at millions of internet users. for most sites it is a crucial matter to keep a close tie between the users and the site. more and more web sites build close relationships with their users by adapting to their needs and therefore providing a personal experience. one aspect of personalization is the recommendation and presentation of information and products so that users can access the site more efficiently. however, powerful filtering technology is required in order to identify relevant items for each user. in this paper we describe how collaborative filtering and content-based filtering can be combined to provide better performance for filtering information. filtering techniques of various kinds are integrated in a weighted mix to achieve more robust results and to profit from automatic multimedia indexing technologies. the combined approach is evaluated in a prototype user-adapting web site, the active webmuseum. arnd kohrs bernard merialdo
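the collaborative-filtering entry above describes blending collaborative and content-based evidence in a weighted mix. the sketch below shows one minimal way such a blend could look, assuming both filters return scores in [0, 1]; the function names, the weight and the linear combination are illustrative assumptions, not the active webmuseum's actual formula.

```python
def blended_score(user, item, collab, content, weight=0.6):
    """linear blend of a collaborative score and a content-based score.

    collab(user, item) and content(user, item) are assumed to return
    relevance estimates in [0, 1]; weight controls how much the
    collaborative evidence counts. all names and the 0.6 default are
    illustrative only.
    """
    return weight * collab(user, item) + (1.0 - weight) * content(user, item)


def recommend(user, candidates, collab, content, k=10):
    """rank candidate items by the blended score and keep the top k."""
    ranked = sorted(candidates,
                    key=lambda item: blended_score(user, item, collab, content),
                    reverse=True)
    return ranked[:k]
```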
chi 99 sig: automated data collection for evaluating collaborative systems jill drury tari fanderclai frank linton key frame preview techniques for video browsing anita komlodi gary marchionini extending the spreadsheet interface to handle approximate quantities and relationships conventional spreadsheet programs offer a very convenient user interface for many quantitative tasks, but they are restricted to handling precisely-specified quantities and calculations. asp is a generalized spreadsheet that extends the basic spreadsheet paradigm to encompass quantities which are not known exactly, and functions which are not known well enough to permit calculation. asp works by propagating assertions about quantities and functions through the network of relationships that the spreadsheet defines. clayton lewis combining supervised learning with color correlograms for content-based image retrieval jing huang s. ravi kumar mandar mitra surrogate users: mediating between social and technical interaction deborah lawrence michael e. atwood shelly dews distributed collaborative writing: a comparison of spoken and written modalities for reviewing and revising documents christine m. neuwirth ravinder chandhok david charney patricia wojahn loel kim situating conversations within the language/action perspective: the milan conversation model the debate on the language/action perspective has been receiving attention in the cscw field for almost ten years. in this paper, we recall the most relevant issues raised during this debate, and propose a new exploitation of the language/action perspective by considering it from the viewpoint of understanding the complexity of communication within work processes and the situatedness of work practices. on this basis, we have defined a new conversation model, the milan conversation model, and we are designing a new conversation handler to implement it. giorgio de michelis m. antonietta grasso specifying dynamic support for collaborative work within worlds in this paper, we present a specification language developed for worlds, a next generation computer-supported collaborative work system. our specification language, called introspect, employs a meta-level architecture to allow run-time modifications to specifications. we believe such an architecture is essential to worlds' ability to provide dynamic support for collaborative work in an elegant fashion. william j. tolone simon m. kaplan geraldine fitzpatrick the recovery manager of the system r database manager jim gray paul mcjones mike blasgen bruce lindsay raymond lorie tom price franco putzolu irving traiger mind your vocabulary: query mapping across heterogeneous information sources in this paper we present a mechanism for translating constraint queries, i.e., boolean expressions of constraints, across heterogeneous information sources. integrating such systems is difficult in part because they use a wide range of constraints as the vocabulary for formulating queries. we describe algorithms that apply user-provided mapping rules to translate query constraints into ones that are understood and supported in another context, e.g., that use the proper operators and value formats. we show that the translated queries minimally subsume the original ones. furthermore, the translated queries are also the most compact possible. unlike other query mapping work, we effectively consider inter-dependencies among constraints, i.e., we handle constraints that cannot be translated independently.
furthermore, when constraints are not fully supported, our framework explores relaxations (semantic rewritings) into the closest supported version. our most sophisticated algorithm (algorithm tdqm) does not blindly convert queries to dnf (which would be easier to translate, but expensive); instead it performs a top-down mapping of a query tree, and does local query structure conversion only when necessary. chen-chuan k. chang hector garcia-molina compressed data cubes for olap aggregate query approximation on continuous dimensions jayavel shanmugasundaram usama fayyad p. s. bradley the infosleuth project r. j. bayardo w. bohrer r. brice a. cichocki j. fowler a. halal v. kashyap t. ksiezyk g. martin m. nodine m. rashid m. rusinkiewicz r. shea c. unnikrishnan a. unruh d. woelk information technology and transaction processing jobs: a cognitive approach this project investigates the job of a particular group of computer users, here termed "transaction processors", using a cognitive psychological perspective. the project involves a blend of two approaches. one approach to analyzing computer use is represented in the human-computer interaction research literature, well exemplified by the acm sigchi bulletin. typically, this research is driven by a specific phenomenon (e.g. the choice of command names, screen layout, menu structures) and investigates this aspect of an interface, often in a laboratory setting, using quantitative measures of user performance and drawing on the techniques of cognitive (information processing) psychology. a second approach involves the analysis of characteristics of computer-mediated jobs and the development of profiles of different user groups (e.g. [1,2]). this research implicitly discusses some cognitive features (e.g. the repetitiveness of information processing required of the user), but in a qualitative way. the present project attempts to combine these approaches by examining explicitly the cognitive factors involved in the jobs of a particular class of computer users. the project examines an increasingly important class of computer users who perform transactions using the system as a tool, and actively input instructions and retrieve information to achieve the desired result. obvious examples include bank tellers, travel agents, money markets and users of various computer-based ordering systems. these users have been chosen for two reasons. firstly, more and more jobs involving transactions are being computerized, and it is therefore likely that there will be increasing pressure to design efficient and effective systems for transaction processors. secondly, the tools provided by cognitive psychology seem particularly applicable here. these users' jobs are relatively tangible and have moderate structure, compared with the less structured jobs of managers. at the same time, they are less stereotyped than jobs of other groups (notably, data entry clerks). the research will initially focus on two examples of transaction processors: bank tellers and travel agents. it will attempt to identify any common cognitive factors involved in these jobs. for example, are these processing rate limitations such that users become less able to cope if they attempt to work too fast? if so, can such limitations be overcome using other design features? are there typical cognitive strategies or "shortcuts" developed by users to navigate through their system? 
answers to such questions will be very important in attempting to build a comprehensive research framework encompassing the user's job as a whole, in addition to the detailed moment-to-moment user-computer interaction, usually emphasized in the human-computer interaction literature. a. bodi c. lees view indexing in relational databases the design and maintenance of a useful database system require efficient optimization of the logical access paths which demonstrate repetitive usage patterns. views (classes of queries given by a query model) are an appropriate intermediate logical representation for databases. frequently accessed views of databases need to be supported by indexing to enhance retrieval. this paper investigates the problem of selecting an optimal index set of views and describes an efficient algorithm for this selection. nicholas roussopoulos on saying "enough already!" in sql in this paper, we study a simple sql extension that enables query writers to explicitly limit the cardinality of a query result. we examine its impact on the query optimization and run-time execution components of a relational dbms, presenting two approaches---a conservative approach and an aggressive approach---to exploiting cardinality limits in relational query plans. results obtained from an empirical study conducted using db2 demonstrate the benefits of the sql extension and illustrate the tradeoffs between our two approaches to implementing it. michael j. carey donald kossmann usability: the final frontier lon barfield spatio-temporal representation and retrieval using moving object's trajectories in this paper, we propose a new spatio-temporal representation scheme using moving objects' trajectories in video data. in order to support content-based retrieval on video data very well, our representation scheme considers the moving distance of an object during a given time interval as well as its temporal and spatial relations. based on our representation scheme, we present a new similarity measure algorithm for the trajectory of moving objects, which provides ranking for the retrieved video results. finally, we show from our experiment that our representation scheme achieves about 20% higher precision while holding about the same recall, compared with li's and shan's schemes. choon-bo shim jae-woo chang stories & scenarios: hot shots pauline bax providing database-like access to the web using queries based on textual similarity most databases contain "name constants" like course numbers, personal names, and place names that correspond to entities in the real world. previous work in integration of heterogeneous databases has assumed that local name constants can be mapped into an appropriate global domain by normalization. here we assume instead that the names are given in natural language text. we then propose a logic for database integration called whirl which reasons explicitly about the similarity of local names, as measured using the vector-space model commonly adopted in statistical information retrieval. an implemented data integration system based on whirl has been used to successfully integrate information from several dozen web sites in two domains. william w. cohen creation and performance analysis of user representations in collaborative virtual environments in distributed collaborative virtual environments, participants are often embodied or represented in some form within a virtual world.
the representations take many different forms and are often driven by limitations in the available technology. desktop web based environments typically use textual or two dimensional representations, while high end environments use motion trackers to embody a participant and their actions in an avatar or human form. this paper describes this wide range of virtual user representations and their creation and performance issues investigated as part of the human-computer symbiotes project within darpa's intelligent collaboration and visualization (ic&v) program. kevin martin automatic structure visualization for video editing we developed intelligent functions for the automatic description of video structure, and visualization methods for temporal-spatial video structures obtained by these functions as well as for the functions. the functions offer descriptions of cut separations, motion of the camera and filmed objects, tracks and contour lines of objects, existence of objects, and periods of existence. furthermore, identical objects are automatically linked. thus the visualization methods supported by object-links allow users to freely browse and directly manipulate the structure including descriptions and raw video data. hirotada ueda takafumi miyatake shigeo sumino akio nagasaka i didn't even know it was user services we decided to write this paper in the third person since we are merging two co-workers' views of the evolution of user services. marlene r. pratto judy h. martin updating relational databases through weak instance interfaces the problem of updating databases through interfaces based on the weak instance model is studied, thus extending previous proposals that considered them only from the query point of view. insertions and deletions of tuples are considered. as a preliminary tool, a lattice on states is defined, based on the information content of the various states. potential results of an insertion are states that contain at least the information in the original state and that in the new tuple. sometimes there is no potential result, and in the other cases there may be many of them. we argue that the insertion is deterministic if the state that contains the information common to all the potential results (the greatest lower bound, in the lattice framework) is a potential result itself. effective characterizations for the various cases exist. a symmetric approach is followed for deletions, with fewer cases, since there are always potential results; determinism is characterized as a consequence. paola atzeni riccardo torlone let's search for songs by humming! naoko kosugi yuichi nishihara seiichi kon'ya masashi yamamuro kazuhiko kushima fostering interdepartmental knowledge communication through groupware: a process improvement perspective ned kock concept based query expansion query expansion methods have been studied for a long time - with debatable success in many instances. in this paper we present a probabilistic query expansion model based on a similarity thesaurus which was constructed automatically. a similarity thesaurus reflects domain knowledge about the particular collection from which it is constructed. we address the two important issues with query expansion: the selection and the weighting of additional search terms. in contrast to earlier methods, our queries are expanded by adding those terms that are most similar to the concept of the query, rather than selecting terms that are similar to the query terms. our experiments show that this kind of query expansion results in a notable improvement in the retrieval effectiveness when measured using both recall-precision and usefulness. yonggang qiu hans-peter frei
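the concept based query expansion entry describes scoring candidate expansion terms against the query as a whole, using similarities drawn from an automatically built similarity thesaurus. the sketch below is a rough illustration of that selection step under those assumptions; the plain weighted-sum scoring and all names are illustrative and not claimed to be qiu and frei's exact model.

```python
def expand_query(query_weights, similarity, vocabulary, k=5):
    """choose k expansion terms most similar to the query as a whole.

    query_weights: dict mapping query terms to their weights.
    similarity(t, q): similarity between a candidate term t and a query
    term q, e.g. looked up in an automatically constructed similarity
    thesaurus. a candidate is scored against the whole query (a weighted
    sum over all query terms), not against any single query term.
    """
    def concept_score(term):
        return sum(w * similarity(term, q) for q, w in query_weights.items())

    candidates = [t for t in vocabulary if t not in query_weights]
    return sorted(candidates, key=concept_score, reverse=True)[:k]
```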
building a home-grown knowledge base: don't wait for the resources - build a prototype susan b. jones carol wood mediacaptain - a demo florian mueller comparative design review: an exercise in parallel design (panel) jakob nielsen randy kerr dan rosenberg gitta salomon heather desurvire rolf molich tom stewart human computer interaction group, university of york, u.k. (lab review) staff in the departments of computer science and psychology at the university of york have been cooperating in interdisciplinary research since 1983. the mainstream of york's approach is to apply theory developed in these parent disciplines to hci design. our goal is to integrate formal and empirical methods. by formal methods we mean mathematical models that are capable of capturing properties of a user interface. by empirical methods we mean the observation and measurement of user behaviour. integration of these two approaches is achieved by an iterative design process in which formal models are successively refined by testing their predictions against the results of user trials. michael harrison andrew monk extended object oriented model to design relational databases (abstract) the recent trend in designing relational database systems has been toward using an object-oriented model at a high level. the next step is to convert the high level model into the relational diagrams from which relations are extracted. finally the relations are examined to make sure that they are fully normalized. in the current object oriented model a number of important concepts of the relational databases such as generalization, subset hierarchy, the relation of a record type to itself, etc., either cannot be represented or are hard to represent. in this presentation the object oriented model is extended to have a better representation of concepts such as optional or mandatory relationship, aggregation objects, association objects, generalization, subset hierarchy and others. asad khailany wafa khorshid the newspaper image database: empirically supported analysis of users' typology and word association clusters susanne ornager comparing interfaces based on what users watch and do with the development of novel interfaces controlled through multiple modalities, new approaches are needed to analyze the process of interaction with such interfaces and evaluate them at a fine grain of detail. in order to evaluate the usability and usefulness of such interfaces, one needs tools to collect and analyze richly detailed data pertaining to both the process and outcomes of user interaction. eye tracking is a technology that can provide detailed data on the allocation and shifts of users' visual attention across interface entities. eye movement data, when combined with data from other input modalities (such as spoken commands, haptic actions with the keyboard and the mouse, etc.), results in just such a rich data set. however, integrating, analyzing and visualizing multimodal data on user interactions remains a difficult task. in this paper we report on a first step toward developing a suite of tools to facilitate this task. we designed and implemented an eye tracking analysis system that generates combined gaze and action visualizations from eye movement data and interaction logs.
this new visualization allows an experimenter to see the visual attention shifts of users interleaved with their actions on each screen of a multi-screen interface. a pilot experiment on comparing two interfaces --- a traditional interface and a speech-controlled one --- to an educational multimedia application was carried out to test the utility of our tool. eric c. crowe n. hari narayanan a knowledge-based approach to organizing retrieved documents wanda pratt marti a. hearst lawrence m. fagan an evaluation of query processing strategies using the tipster collection the tipster collection is unusual because of both its size and detail. in particular, it describes a set of information needs, as opposed to traditional queries. these detailed representations of information need are an opportunity for research on different methods of formulating queries. this paper describes several methods of constructing queries for the inquery information retrieval system, and then evaluates those methods on the tipster document collection. both adhoc and routing query processing methods are evaluated. james p. callan w. bruce croft the cscw column: the quadrant model of groupware simon kaplan designing workscape: an interdisciplinary experience joseph m. ballay efficient optimistic concurrency control using loosely synchronized clocks atul adya robert gruber barbara liskov umesh maheshwari database description with sdm: a semantic database model sdm is a high-level semantics-based database description and structuring formalism (database model) for databases. this database model is designed to capture more of the meaning of an application environment than is possible with contemporary database models. an sdm specification describes a database in terms of the kinds of entities that exist in the application environment, the classifications and groupings of those entities, and the structural interconnections among them. sdm provides a collection of high-level modeling primitives to capture the semantics of an application environment. by accommodating derived information in a database structural specification, sdm allows the same information to be viewed in several ways; this makes it possible to directly accommodate the variety of needs and processing requirements typically present in database applications. the design of the present sdm is based on our experience in using a preliminary version of it. sdm is designed to enhance the effectiveness and usability of database systems. an sdm database description can serve as a formal specification and documentation tool for a database; it can provide a basis for supporting a variety of powerful user interface facilities, it can serve as a conceptual database model in the database design process; and, it can be used as the database model for a new kind of database management system. michael hammer dennis mc leod graphical styles for building interfaces by demonstration conventional interface builders allow the user interface designer to select widgets such as menus, buttons and scroll bars, and lay them out using a mouse. although these are conceptually simple to use, in practice there are a number of problems. first, a typical widget will have dozens of properties which the designer might change. insuring that these properties are consistent across multiple widgets in a dialog box and multiple dialog boxes in an application can be very difficult. second, if the designer wants to change the properties, each widget must be edited individually. 
third, getting the widgets laid out appropriately in a dialog box can be tedious. grids and alignment commands are not sufficient. this paper describes graphical tabs and graphical styles in the gild interface builder which solve all of these problems. a "graphical tab" is an absolute position in a window. a "graphical style" incorporates both property and layout information, and can be defined by example, named, applied to other widgets, edited, saved to a file, and read from a file. if a graphical style is edited, then all widgets defined using that style are modified. in addition, because appropriate styles are inferred, they do not have to be explicitly applied. osamu hashimoto brad a. myers guidelines for presentation and comparison of indexing techniques descriptions of new indexing techniques are a common outcome of database research, but these descriptions are sometimes marred by poor methodology and a lack of comparison to other schemes. in this paper we describe a framework for presentation and comparison of indexing schemes that we believe sets a minimum standard for development and dissemination of research results in this area. justin zobel alistair moffat kotagiri ramamohanarao field research in product development karen h. kvavik danielle fafchamps sandra jones shifteh karimi evaluating queries with generalized path expressions vassilis christophides sophie cluet guido moerkotte fact: a learning based web query processing system though the query is posted in key words, the returned results contain exactly the information that the user is querying for, which may not be explicitly specified in the input query. the required information is often not contained in the web pages whose urls are returned by a search engine. fact is capable of navigating in the neighborhood of these pages to find those that really contain the queried segments. the system does not require a prior knowledge about users such as user profiles [1] or preprocessing of web pages such as wrapper generation [2]. a prototype system has been implemented using the approach. it learns and applies two types of knowledge, navigation knowledge for following hyperlinks and classification knowledge for queried segment identification. for learning, it supports three training strategies, namely sequential training, random training and interleaved training. yahoo! is currently the external search engine. the urls of web pages returned by the external search engine are used in processing. a set of experiments that are designed to evaluate the system, and compare different implementations, such as knowledge representations and training strategies. songting chen yanlei diao hongjun lu zengping tian quorum consensus in nested-transaction systems gifford's quorum consensus algorithm for data replication is studied in the context of nested transactions and transaction failures (aborts), and a fully developed reconfiguration strategy is presented. a formal description of the algorithm is presented using the input/output automaton model for nested- transaction systems due to lynch and merritt. in this description, the algorithm itself is described in terms of nested transactions. the formal description is used to construct a complete proof of correctness that uses standard assertional techniques, is based on a natural correctness condition, and takes advantage of modularity that arises from describing the algorithm as nested transactions. 
the proof is accomplished hierarchically, showing that a fully replicated reconfigurable system "simulates" an intermediate replicated system, and that the intermediate system simulates an unreplicated system. the presentation and proof treat issues of data replication entirely separately from issues of concurrency control and recovery. kenneth j. goldman nancy lynch a world lit by flame peter j. denning an exploratory evaluation of three interfaces for browsing large hierarchical tables of contents three different interfaces were used to browse a large (1296 items) table of contents. a fully expanded stable interface, expand/contract interface, and multipane interface were studied in a between-groups experiment with 41 novice participants. nine timed fact retrieval tasks were performed; each task is analyzed and discussed separately. we found that both the expand/contract and multipane interfaces produced significantly faster times than the stable interface for many tasks using this large hierarchy; other advantages of the expand/contract and multipane interfaces over the stable interface are discussed. the animation characteristics of the expand/contract interface appear to play a major role. refinements to the multipane and expand/contract interfaces are suggested. a predictive model for measuring navigation effort of each interface is presented. richard chimera ben shneiderman incremental view maintenance in object-oriented databases a database management system should support views to facilitate filtering of information in order to have only necessary and required information available to users with minimal delay. although a lot of research effort has concentrated on views within the conventional relational model, much more effort is required when object-oriented models are considered. however, supporting views is only a step forward in achieving the purpose that requires improving the performance of the system by considering incremental maintenance of views instead of recomputing a view from scratch each time it is accessed. in this paper, we introduce a model that facilitates incremental maintenance of single-class-based object-oriented views by employing the deferred update mode that has proved to be more suitable for object-oriented databases in general. for that purpose, we categorize classes into base and brother classes corresponding to classes originally present in the database and those introduced as views, respectively. to each class, we add a modification list that keeps related modifications during different intervals. an interval starts with the creation or update of a view and ends with the creation or update of another view. a modification list is empty as long as no views depend on its class. further, we introduce some algorithms that locate modifications done on related classes while trying to compute a given view incrementally. finally, we give a theoretical justification showing that, in general, the introduced algorithms perform much better than doing computation from scratch each time a view is accessed. reda alhajj faruk polat
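the incremental view maintenance entry above keeps per-class modification lists and refreshes a view by replaying only the changes logged since its last refresh, rather than recomputing it from scratch. the sketch below is a minimal, simplified rendering of that deferred-update idea (a single base class, a single selection view); the class and field names are invented for illustration and do not reproduce the authors' algorithms.

```python
class BaseClass:
    """holds objects plus a modification list (an append-only change log)."""

    def __init__(self):
        self.objects = {}      # oid -> object state
        self.log = []          # (seq, op, oid, state), op in {"add", "del"}
        self._seq = 0

    def apply(self, op, oid, state=None):
        self._seq += 1
        if op == "add":
            self.objects[oid] = state   # an update is modelled as a re-add
        else:
            self.objects.pop(oid, None)
        self.log.append((self._seq, op, oid, state))


class SelectionView:
    """a single-class view defined by a predicate, refreshed on demand."""

    def __init__(self, base, predicate):
        self.base, self.predicate = base, predicate
        self.members = {oid: s for oid, s in base.objects.items() if predicate(s)}
        self.last_seen = base.log[-1][0] if base.log else 0

    def refresh(self):
        # deferred update: replay only the modifications logged after last_seen
        for seq, op, oid, state in self.base.log:
            if seq <= self.last_seen:
                continue
            if op == "add" and self.predicate(state):
                self.members[oid] = state
            else:
                self.members.pop(oid, None)
            self.last_seen = seq
        return self.members
```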
the contents of resource decisions in the diffusion of internet strategies in less-developed countries: lessons from bolsa de valores de guayaquil ramiro montealegre developing recommendation services for a digital library with uncertain and changing data in developing recommendation services for a new digital library called ilumina (www.ilumina-project.org), we are faced with several challenges related to the nature of the data we have available. the availability and consistency of data associated with ilumina is likely to be highly variable. any recommendation strategy we develop must be able to cope with this fact, while also being robust enough to adapt to additional types of data available over time as the digital library develops. in this paper we describe the challenges we are faced with in developing a system that can provide our users with good, consistent recommendations under changing and uncertain conditions. gary geisler david mcarthur sarah giersch discovering quasi-equivalence relationships from database systems association rule mining has recently attracted strong attention and proven to be a highly successful technique for extracting useful information from very large databases. in this paper, we explore a generalized affinity-based association mining which discovers quasi-equivalent media objects in a distributed information-providing environment consisting of a network of heterogeneous databases which could be relational databases, hierarchical databases, object-oriented databases, multimedia databases, etc. online databases, consisting of millions of media objects, have been used in business management, government administration, scientific and engineering data management, and many other applications owing to the recent advances in high-speed communication networks and large-capacity storage devices. because of the navigational characteristic, queries in such an information-providing environment tend to traverse equivalent media objects residing in different databases for the related data records. as the number of databases increases, query processing efficiency depends heavily on the capability to discover the equivalence relationships of the media objects from the network of databases. theoretical terms along with an empirical study of real databases are presented. mei-ling shyu shu-ching chen r. l. kashyap the c-oda project: online access to electronic journals peter kirstein goli montasser-kohsari personal space in a virtual community phillip jeffrey the use of guidelines in interface design linda tetzlaff david r. schwartz resources section: conferences jay blickstein the retrieval power of nfql forms are common and well understood in our modern society, especially in the office. they organize and structure communication according to well established and long standing convention. the natural forms query language (nfql) takes advantage of these features to provide a "natural" communication language between computers and humans. various facets of nfql have been discussed elsewhere. in this paper we explore the retrieval power of nfql. we explain why basic nfql forms (which are essentially ordinary business forms) do not by themselves have enough retrieval power to be relationally complete. we then explain how to augment the notation to increase the retrieval power, and we provide an inductive proof to show that nfql, as augmented, is relationally complete.
because additional notation may negatively affect usability, we discuss the pragmatics of adding new features. we explain how these features can be improved notationally, and argue that, with the improvements, we maintain the objective of being able to interpret standard forms, while increasing their retrieval power. y. k. ng d. w. embley design rules based on analyses of human error by analyzing the classes of errors that people make with systems, it is possible to develop principles of system design that minimize both the occurrence of error and the effects. this paper demonstrates some of these principles through the analysis of one class of errors: slips of action. slips are defined to be situations in which the user's intention was proper, but the results did not conform to that intention. many properties of existing systems are conducive to slips; from the classification of these errors, some procedures to minimize the occurrence of slips are developed. donald a. norman interaction design and human factors support in the development of a personal communicator for children ron oosterholt mieko kusano govert de vries media-based navigation with generic links paul h. lewis hugh c. davis steve r. griffiths wendy hall rob j. wilkins diluting acid several dbms vendors have implemented the ansi standard sql isolation levels for transaction processing. this has created a gap between database practice and textbook accounts of transaction processing which simply equate isolation with serializability. we extend the notion of conflict to cover lower isolation levels and we present improved characterisations of classes of schedules achieving these levels. tim kempster colin stirling peter thanisch a public library based on full-text retrieval ian h. witten craig nevill-manning rodger mcnab sally jo cunningham a pegging method for decomposing relations in databases the anomalies in normal forms generally can be reduced by decomposing relations. unfortunately many current decomposition procedures fail to give lossless decomposition and still preserve all functional dependencies. a pegging method is proposed to solve this decomposition problem. pegging is also helpful in decomposing relations that have the characteristics of "three-decomposability" and some fifth normal form relations. c. alec chang michael s. leonard h. brian hwarng tzong-huie shiau a tool for the rapid evaluation of input devices using fitts' law models i. scott mackenzie william buxton satellite: a visualization and navigation tool for hypermedia satellite is a visualization and navigation tool for a hypermedia system. it is based on the concept of affinity between objects; that is, a relationship with an associated intensity. the user is presented with a two dimensional map that provides a view of the hypermedia environment where objects lying close together have a greater affinity than those lying further apart. the system provides different views by allowing modification of the underlying measure of affinity. the system is also able to track dynamically the evolution of the objects' relationships. based on the affinity concept, we develop new dynamic presentation techniques that do not depend on the explicit display of links between the nodes of the graph. the dynamic layout algorithm that we present at the end of the paper is based on these techniques and it allows for the display of rapidly changing relationships between objects. x. pintado d. tsichritzis
the microsoft database research group david lomet roger barga surajit chaudhuri paul larson vivek narasayya metaphor mayhem: mismanaging expectation and surprise aaron marcus estimating alphanumeric selectivity in the presence of wildcards p. krishnan jeffrey scott vitter bala iyer a model of optimal exploration and decision making in novel interfaces bob rehder clayton lewis bob terwilliger peter polson john rieman an algorithmic framework for performing collaborative filtering jonathan l. herlocker joseph a. konstan al borchers john riedl document overlap detection system for distributed digital libraries in this paper we introduce the matchdetectreveal(mdr) system, which is capable of identifying overlapping and plagiarised documents. each component of the system is briefly described. the matching-engine component uses a modified suffix tree representation, which is able to identify the exact overlapping chunks and its performance is also presented. krisztián monostori arkady zaslavsky heinz schmidt bmir-j2: a test collection for evaluation of japanese information retrieval systems tetsuya sakai tsuyoshi kitani yasushi ogawa tetsuya ishikawa haruo kimoto ikuo keshi jun toyoura toshikazu fukushima kunio matsui yoshihiro ueda takenobu tokunaga hiroshi tsuruoka hidekazu nakawatase teru agata noriko kando usage patterns in an integrated voice and data communications system recently, office communication systems have begun to integrate voice recordings into their mail and data communications facilities. the study of usage patterns on one such system shows that voice is used for informal, person-to-person communications, as opposed to the formal content of typed messages. voice messages are generally sent to fewer recipients (often only one), and sometimes replace face-to-face meetings. robert t. nicholson different cultures meet (panel session): lessons learned in global digital library development this panel is organized to share the experience gained and lessons learned in developing cutting-edge technology applications and digital libraries when different cultures meet together. "culture" is interpreted in different ways and in different contexts. this includes the interdisciplinary collaboration among professionals from different fields with their own cultures -- such as library/information science, computer science, humanities, social sciences, science and technology, etc; to more globally as experienced in major international collaborative projects involving r&d professionals from two or more different cultures -- the east and the west, or the north and the south. the moderator will share her own personal perspective on the true meaning of global and interdisciplinary collaboration, drawing upon experiences gained in conducting numerous technology related r&d activities throughout the years, starting from her award winning multimedia project on the first emperor of china and his 7000 magnificent terracotta warriors and horses, supported by the national endowment for the humanities in the mid-1980s to her recent (since may 2000) and challenging nsf international digital library project (idlp) called chinese memory net (cmnet): us-sino collaborative research toward global digital library, culminating the community building experiences at nit conferences with many participants of jcdl from over 15 countries at the nit 2001 in beijing, china during may 29-31, 2001.
cmnets us affiliates include academic researchers from several universities, including drexel university, kent state university, syracuse university, university of kentucky, and university of wisconsin-milwaukee. its collaborators in beijing include peking university and tsinghua university, in shanghai include the shanghai jiaotong university, and in taipei including national taiwan university, national tsinghua university, and the academia sinica. several collaborato ching chen wen gao hsueh-hua chen li-zhu zhou von-wun soo website design from the trenches tom brinck darren gergle scott wood the digital physics of data mining usama fayyad information visualization nahum gershon stephen g. eick stuart card spire: a progressive content-based spatial image retrieval engine in this demo, we will show the implementation of a content-based spatial image retrieval engine (spire) for multimodal unstructured data. this architecture provides a framework for retrieving multi-modal data including image, image sequence, time series and parametric data from large archives. dramatic speedup (from a factor of 4 to 35) has been achieved for many search operations such as template matching, texture feature extraction. this framework has been applied and validated in solar flares and petroleum exploration in which spatial and spatial-temporal phenomena are located. chung-sheng li lawrence d. bergman yuan-chi chang vittorio castelli john r. smith metaphors for web navigation (abstract) andreas dieberger the chi conference: interviews with conference chairs steven pemberton a document retrieval model based on term frequency ranks ijsbrand jan aalbersberg no justice, no peace brock n. meeks using the web instead of a window system james rice adam farquhar philippe piernot thomas gruber adaptive precision setting for cached approximate values caching approximate values instead of exact values presents an opportunity for performance gains in exchange for decreased precision. to maximize the performance improvement, cached approximations must be of appropriate precision: approximations that are too precise easily become invalid, requiring frequent refreshing, while overly imprecise approximations are likely to be useless to applications, which must then bypass the cache. we present a parameterized algorithm for adjusting the precision of cached approximations adaptively to achieve the best performance as data values, precision requirements, or workload vary. we consider interval approximations to numeric values but our ideas can be extended to other kinds of data and approximations. our algorithm strictly generalizes previous adaptive caching algorithms for exact copies: we can set parameters to require that all approximations be exact, in which case our algorithm dynamically chooses whether or not to cache each data value. we have implemented our algorithm and tested it on synthetic and real-world data. a number of experimental results are reported, showing the effectiveness of our algorithm at maximizing performance, and also showing that in the special case of exact caching our algorithm performs as well as previous algorithms. in cases where bounded imprecision is acceptable, our algorithm easily outperforms previous algorithms for exact caching. chris olston boon thau loo jennifer widom the medusa project: autonomous data management in a shared-nothing parallel database machine g. m. bryan w. e. moore b. j. curry k. w. lodge j. 
geyer the sift information dissemination system information dissemination is a powerful mechanism for finding information in wide-area environments. an information dissemination server accepts long-term user queries, collects new documents from information sources, matches the documents against the queries, and continuously updates the users with relevant information. this paper is a retrospective of the stanford information filtering service (sift), a system that as of april 1996 was processing over 40,000 worldwide subscriptions and over 80,000 daily documents. the paper describes some of the indexing mechanisms that were developed for sift, as well as the evaluations that were conducted to select a scheme to implement. it also describes the implementation of sift, and experimental results for the actual system. finally, it also discusses and experimentally evaluates techniques for distributing a service such as sift for added performance and availability. tak w. yan hector garcia-molina reflections: abusus non tollit usum steven pemberton a multi-view intelligent editor for digital video libraries silver is an authoring tool that aims to allow novice users to edit digital video. the goal is to make editing of digital video as easy as text editing. silver provides multiple coordinated views, including project, source, outline, subject, storyboard, textual transcript and timeline views. selections and edits in any view are synchronized with all other views. a variety of recognition algorithms are applied to the video and audio content and then are used to aid in the editing tasks. the informedia digital library supplies the recognition algorithms and metadata used to support intelligent editing, and informedia also provides search and a repository. the metadata includes shot boundaries and a time-synchronized transcript, which are used to support intelligent selection and intelligent cut/copy/paste. brad a. myers juan p. casares scott stevens laura dabbish dan yocum albert corbett introduction to the special issue on human-computer interaction and collaborative virtual environments steve benford paul dourish tom rodden the effectiveness of computer graphics for decision support: meta-analytical integration of research findings mark i. h. hwang bruce j. p. wu on global multidatabase query optimization hongjun lu beng-chin ooi cheng-hian goh project feelex: adding haptic surface to graphics this paper presents work carried out for a project to develop a new interactive technique that combines haptic sensation with computer graphics. the project has two goals. the first is to provide users with a spatially continuous surface on which they can effectively touch an image using any part of their bare hand, including the palm. the second goal is to present visual and haptic sensation simultaneously by using a single device that doesn't oblige the user to wear any extra equipment. in order to achieve these goals, we designed a new interface device comprising a flexible screen, an actuator array and a projector. the actuator deforms the flexible screen onto which the image is projected. the user can then touch the image directly and feel its shape and rigidity. initially we fabricated two prototypes, and their effectiveness is examined by studying the observations made by anonymous users and a performance evaluation test for spatial resolution. hiroo iwata hiroaki yano fumitaka nakaizumi ryo kawamura
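the project feelex entry couples a projected image with an actuator array that pushes a flexible screen into shape. as a loose illustration only, driving such an array amounts to sampling a desired surface onto the actuator grid; the grid size, stroke value and height function below are made up and are not the prototype's real geometry.

```python
import math

def actuator_strokes(surface, rows, cols, max_stroke):
    """sample a desired surface-height function onto an actuator grid.

    surface(u, v) returns a height in [0, 1] for normalised coordinates
    u, v in [0, 1]; max_stroke is the available actuator travel. returns
    a rows x cols list of target strokes, one per actuator.
    """
    return [[max_stroke * surface(c / (cols - 1), r / (rows - 1))
             for c in range(cols)]
            for r in range(rows)]

# example: a gentle bump in the middle of the screen (all numbers invented)
bump = lambda u, v: math.exp(-((u - 0.5) ** 2 + (v - 0.5) ** 2) / 0.05)
targets = actuator_strokes(bump, rows=6, cols=6, max_stroke=18.0)
```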
hiroo iwata hiroaki yano fumitaka nakaizumi ryo kawamura integration and synchronization of input modes during multimodal human-computer interaction sharon oviatt antonella deangeli karen kuhn partial orders for document representation: a new methodology for combining document features steven finch from reading to retrieval: freeform ink annotations as queries gene golovchinsky morgan n. price bill n. schilit collection oriented match anurag acharya milind tambe a hierarchical approach to concurrency control for multidatabases a multidatabase system is a facility that allows access to data stored in multiple autonomous and possibly heterogeneous database systems. in order to support atomic updates across multiple database systems, a global concurrency control algorithm is required. hierarchical concurrency control has been proposed as one possible approach for multidatabase systems. however, to apply this approach, some restrictions have to be imposed on the local concurrency control algorithms. in this paper, we identify this restriction. based on this restriction, we formalize the hierarchical concurrency control approach and prove its correctness. a new global concurrency control algorithm based on this hierarchical approach is also presented. yungho leu ahmed k. elmagarmid a condensed representation to find frequent patterns given a large set of data, a common data mining problem is to extract the frequent patterns occurring in this set. the idea presented in this paper is to extract a condensed representation of the frequent patterns called disjunction-free sets, instead of extracting the whole frequent pattern collection. we show that this condensed representation can be used to regenerate all frequent patterns and their exact frequencies. moreover, this regeneration can be performed without any access to the original data. practical experiments show that this representation can be extracted very efficiently even in difficult cases. we compared it with another representation of frequent patterns previously investigated in the literature called frequent closed sets. in nearly all experiments we have run, the disjunction-free sets have been extracted much more efficiently than frequent closed sets. artur bykowski christophe rigotti schemma: a new approach to hypermedia design (abstract) n. gouraros w. hall b. sparkes uniform generation in spatial constraint databases and applications (extended abstract) we study the efficient approximation of queries in linear constraint databases using sampling techniques. we define the notion of an almost uniform generator for a generalized relation and extend the classical generator of dyer, frieze and kannan for convex sets to the union and the projection of relations. for the intersection and the difference, we give sufficient conditions for the existence of such generators. we show how such generators give relative estimations of the volume and approximations of generalized relations as the composition of convex hulls obtained from the samples. david gross michel de rougemont abstraction mechanisms for database programming databases contain vast amounts of highly related data accessed by programs of considerable size and complexity. therefore, database programming has a particular need for high level constructs that abstract from details of data access, data manipulation, and data control.
the paper investigates the suitability of several well-known abstraction mechanisms for database programming (e.g., control abstraction and functional abstraction). in addition, it presents some new abstraction mechanisms (access abstraction and transactional abstraction) particularly designed to manage typical database problems like integrity and concurrency control. joachim w. schmidt manuel mall a flexible reference architecture for distributed database management james a. larson deductive databases in action shalom tsur ikem: a company-wide open hypermedia system for discovering and validating metallurgical knowledge (abstract) hans c. arents the effects of gaze awareness on dialogue in a video-based collaborative manipulative task caroline gale instant messaging: products meet workplace users john c. tang austina de bonte mary beth raven ellen isaacs query transformation in an instructional database management system a database management system designed for instructional use should offer facilities usually not required in a commercial environment. one of the most important features desirable in such a system is its ability to perform query transformation. the use of a universal symbol and tree manipulation system to perform query translation, decomposition and optimization is described in the paper. examples of transformation rules required to translate sql expressions into equivalent quel expressions, decompose sql expressions into parse trees and perform optimization of expressions based on relational algebra are shown. an experimental relational dbms using the above approach is currently under development at the university of houston. it supports various nonprocedural query languages within a single system, using a unified database dictionary. cross-translation between various query languages is allowed. the results of every important phase of the query transformation during its execution are interactively available to the system user. bogdan czejdo marek rusinkiewicz the duality of database structures and design techniques attempting to pair database structures with database design techniques that seem incompatible yields some fascinating concepts about the world of databases, including foreign keys and multiple relationships. mark l. gillenson a survey and critique of advanced transaction models c. mohan usability improvements in lotus cc:mail for windows stacey l. ashlund karen j. horwitz the rational model contra entity relationship? h. w. buff searching program source code with a structured text retrieval system (poster abstract) charles clarke anthony cox susan sim detecting content-bearing words by serial clustering - extended abstract a. bookstein s. t. klein t. raita the importance of laboratory experimentation in is research (authors' response) robert d. galliers frank f. land object oriented approach to mls database application design peter j. sell technology and pedagogy for collaborative problem solving as a context for learning timothy koschmann denis newman earl woodruff roy pea peter rowley logic and databases: a response h gallaire j minker j nicolas why decision support fails and how to fix it ralph kimball kevin strehlo indexing field values in field oriented systems: interval quadtree with the extension of spatial database applications, field oriented systems have emerged as an important research issue during the last years for dealing with continuous natural phenomena.
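as a hedged illustration of quadtree-based field-value indexing of this general kind, the sketch below keeps per-region value bounds so that a query such as "value above a threshold" can prune whole regions; the per-node min/max bookkeeping is an assumption for illustration, not necessarily the paper's subfield structure.

class Node:
    def __init__(self, grid, x, y, size):
        # each node covers a square region of the field and records the
        # min/max value found there, so queries can prune whole regions
        self.x, self.y, self.size = x, y, size
        cells = [grid[y + j][x + i] for j in range(size) for i in range(size)]
        self.lo, self.hi = min(cells), max(cells)
        self.children = []
        if size > 1 and self.lo != self.hi:          # subdivide heterogeneous regions
            h = size // 2
            self.children = [Node(grid, x + dx, y + dy, h)
                             for dy in (0, h) for dx in (0, h)]

def regions_above(node, threshold, out):
    if node.hi <= threshold:                         # whole region fails the predicate
        return
    if not node.children:                            # homogeneous or unit cell: report it
        out.append((node.x, node.y, node.size))
        return
    for child in node.children:
        regions_above(child, threshold, out)

grid = [[10, 12, 25, 30],
        [11, 13, 28, 31],
        [ 9,  8, 22, 24],
        [ 7,  6, 21, 23]]
root = Node(grid, 0, 0, 4)
hits = []
regions_above(root, 20, hits)
print(hits)   # regions (x, y, size) where the field value exceeds 20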
such systems, however, handle a large volume of data, and efficient indexing methods for field data are necessary to overcome the performance obstacle. in particular, we introduce indexing methods for field value queries (e.g., searching for regions where the temperature is more than 20 degrees). we introduce the concept of subfield and show how we make use of this concept to index field values in field oriented systems. we present two implementation methods based on quadtree space subdivision. we modify the traditional linear quadtree implementation method for field value query processing using subfields. we analyze the performance of our methods. experimentation with real terrain data shows that the proposed indexing methods improve the query processing time of field value queries in comparison with the case of no indexing method. myoung-ah kang sylvie servigne ki-joune li robert laurini netserf: using semantic knowledge to find internet information archives anil s. chakravarthy kenneth b. haase group relations psychology and computer supported work some new directions for research and development william l. anderson database research at ipsi at the integrated publication and information systems institute (ipsi) of the gmd (gesellschaft fur mathematik und datenverarbeitung) database research is focused towards distributed database management systems to support the integration of heterogeneous multi-media information bases needed in an integrated publishing environment. the objective is to investigate advanced object-oriented and active data modelling concepts together with the principles of distributed data stores and data management. the unifying basis for the database research is the object-oriented data model vml developed in the project vodak over the last four years. the data model is based on recursively defined meta classes, classes and instance hierarchies paired with a strict separation of structural and operational definitions in a polymorphic type system. this report describes the highlights of the work on the vodak database system, the research efforts in heterogeneous database integration and the research plans of the multimedia database project as well as applications of our system in different environments such as hypertext and office automation. erich neuhold volker turau tutorial on data base machines the cost of the hardware components that make up modern day digital computers continues to drop at a rapid rate. along with this cost reduction there exists a simultaneous increase in the number of components that one can place on an integrated circuit chip. this leads to what seems to be an ever decreasing cost of hardware combined with a continued increase in functional capability. when these improvements are coupled with the ever increasing costs of software, primarily caused by increased personnel costs, it becomes imperative to consider moving functions that are primarily executed in software, to hardware. one field that has received increased attention vis-a-vis movement of software functions to hardware is the database management field. owing to size and complexity, managing databases becomes increasingly difficult. over the past few years, increasing efforts have been expended in the study of new types of hardware called database machines which are designed solely for the purpose of managing data. this research has been driven by the decreasing cost of hardware and by the need for performance improvements due to increased database size and complexity and a maturing database theory. p.
bruce berra image classification and retrieval on the world wide web n. abbadeni d. ziou s. wang robust color indexing in content based image retrieval, color indexing is one of the most prevalent retrieval methods. in literature, most of the attention has been focussed on the color model with little or no consideration of the noise models. in this paper we investigate the problem of color indexing from a maximum likelihood perspective. we take into account the color model, the noise distribution, and the quantization of the color features. furthermore, from the real noise distribution we derive a distortion measure, which consistently provides improved accuracy. our investigation concludes with results on a real stock photography database, consisting of 11,000 color images. nicu sebe michael s. lew flexible user interface coupling in a collaborative system prasun dewan rajiv choudhard exploring information with visage peter lucas steven f. roth paradise: a database system for gis applications corporate the paradise team a partial test of the task-medium fit proposition in a group support system environment a laboratory experiment was carried out to partially test the task-medium fit proposition in a gss environment. communication medium was varied using a face-to-face gss and a dispersed gss setting. task type was varied using an intellective and a preference task. group decision outcome variables of interest were (actual and perceived) decision quality, decision time, decision satisfaction, and decision process satisfaction. with the intellective task, there were no significant differences between face-to-face gss and dispersed gss groups for all group decision outcome variables. with the preference task, face-to-face gss groups performed significantly better than dispersed gss groups for all group decision outcome variables. these findings suggest that group decision outcomes in a gss environment tend to be adversely affected when the communication medium is too lean for the task but not when the communication medium is too rich for the task. consequences of providing groups with too rich and too lean a communication medium for their task are discussed. implications of these findings, and other related results, for practice and for future revisions of media richness theory are explored. bernard c. y. tan kwok-kee wei choon-ling sia krishnamurthy s. raman querying virtual videos using path and temporal expressions rafael lazano herve martin a hypermedia surfing/authoring system for computer users with basic it skills (abstract) giuseppe simonetti bridging the paper and electronic worlds: the paper user interface walter johnson herbert jellinek leigh klotz ramana rao stuart card a bucket architecture for the open video project the open video project is a collection of public domain digital video available for research and other purposes. the open video collection currently consists of approximately 350 video segments, ranging in duration from 10 seconds to 1 hour. rapid growth for the collection is planned through agreements with other video repository projects and provision for user contribution of video. to handle the increased accession, we are experimenting with "buckets", aggregative intelligent publishing constructs for use in digital libraries. michael l. nelson gary marchionini gary geisler meng yang the institute for perception research ipo, a joint venture of philips electronics and eindhoven university of technology f. l. van nes h. bouma m. d. 
brouwer-janse the web as a graph the pages and hyperlinks of the world-wide web may be viewed as nodes and edges in a directed graph. this graph has about a billion nodes today, several billion links, and appears to grow exponentially with time. there are many reasons---mathematical, sociological, and commercial---for studying the evolution of this graph. we first review a set of algorithms that operate on the web graph, addressing problems from web search, automatic community discovery, and classification. we then recall a number of measurements and properties of the web graph. noting that traditional random graph models do not explain these observations, we propose a new family of random graph models. ravi kumar prabhakar raghavan sridhar rajagopalan d. sivakumar andrew tomkins eli upfal editorial saveen reddy soft machines: a philosophy of user-computer interface design machines and computer systems differ in many characteristics that have important consequences for the user. machines are special-purpose, have forms suggestive of their functions, are operated with controls in obvious one-to-one correspondence with their actions, and the consequences of the actions on visible objects are immediately and readily apparent. by contrast, computer systems are general-purpose, have inscrutable form, are operated symbolically via a keyboard with no obvious correspondence between keys and actions, and typically operate on invisible objects with consequences that are not immediately or readily apparent. the characteristics possessed by machines, but typically absent in computer systems, aid learning, use and transfer among machines. but "hard," physical machines have limitations: they are inflexible, and their complexity can overwhelm us. we have built in our laboratory "soft machine" interfaces for computer systems to capitalize on the good characteristics of machines and overcome their limitations. a soft machine is implemented using the synergistic combination of real-time computer graphics to display "soft controls," and a touch screen to make soft controls operable like conventional hard controls. lloyd h. nakatani john a. rohrlich "lostness" and digital libraries yin leng theng visual search and mouse-pointing in labeled versus unlabeled two-dimensional visual hierarchies an experiment investigates (1) how the physical structure of a computer screen layout affects visual search and (2) how people select a found target object with a mouse. two structures are examined---labeled visual hierarchies (groups of objects with one label per group) and unlabeled visual hierarchies (groups without labels). search and selection times were separated by imposing a point-completion deadline that discouraged participants from moving the mouse until they found the target. the observed search times indicate that labeled visual hierarchies can be searched much more efficiently than unlabeled visual hierarchies, and suggest that people use a fundamentally different strategy for each of the two structures. the results have implications for screen layout design and cognitive modeling of visual search. the observed mouse-pointing times suggest that people use a slower and more accurate speed-accuracy operating characteristic to select a target with a mouse when visual distractors are present, which suggests that fitts' law coefficients derived from standard mouse-pointing experiments may under-predict mouse-pointing times for typical human-computer interactions.
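as a hedged illustration of the fitts' law prediction discussed in this entry, the short sketch below uses the common shannon formulation with the target width measured along the line of approach; the coefficients a and b are placeholder assumptions, not values reported in the study.

import math

def fitts_mt(distance, width_along_approach, a=0.1, b=0.15):
    # predicted movement time (seconds); a and b are illustrative coefficients
    return a + b * math.log2(distance / width_along_approach + 1.0)

print(fitts_mt(distance=300, width_along_approach=20))   # farther or narrower targets take longer
print(fitts_mt(distance=300, width_along_approach=60))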
the observed mouse-pointing times also demonstrate that mouse movement times for a two-dimensional pointing task can be most accurately predicted by setting the _w_ in fitts' law to the width of the target along the line of approach. anthony j. hornof beyond the graphical user interface (abstract) the wildly popular graphical user interfaces are a dramatic improvement over earlier interaction styles, but the next generation of user interfaces is already being fashioned. the future will be dynamic, spatial, gestural, colorful, ubiquitous, often auditory, and sometimes virtual. the dominance of visual information with hi-res images and full-motion video will push the hardware requirements, absorb network capacity, and challenge the algorithm designers. as human-computer interaction researchers accumulate experimental data and theory matures, novel input devices, information visualization displays, and interaction strategies will emerge. ben shneiderman getting to know your users: usability roundtables at lotus development mary beth butler to click or not to click: a comparison of two target-selection methods for hci michael bohan alex chaparro argo: a system for distributed collaboration the goal of the argo system is to allow medium-sized groups of users to collaborate remotely from their desktops in a way that approaches as closely as possible the effectiveness of face-to-face meetings. in support of this goal, argo combines high quality multi-party digital video and full-duplex audio with telepointers, shared applications, and whiteboards in a uniform and familiar environment. the shared applications can be unmodified x programs shared via a proxy server, unmodified groupware applications, and applications written using our toolkit. workers can contact each other as easily as making a phone call, and can easily bring into a conference any material they are working on. they do so by interacting with an object-oriented, client/server conference control system. the same conference control system is used to support teleporting, i.e. moving the desktop environment from one workstation's display to another (for example, from office to home). this paper describes the system we have built to test the hypothesis that the effectiveness of remote collaboration can be substantially impacted by the responsiveness of the interaction media. h. gajewska j. kistler m. manasse d. redell term relevance feedback and query expansion: relation to design amanda spink on the expressive power of the logical data model: preliminary report gabriel m. kuper moshe y. vardi personal browser yi-shiou chen schy chiou yuan-kai wang wen-lian hsu designing and supporting a web-based electronic reserves project in a university library john j. small the impact of fluid documents on reading and browsing: an observational study fluid documents incorporate additional information into a page by adjusting typography using interactive animation. one application is to support hypertext browsing by providing glosses for link anchors. this paper describes an observational study of the impact of fluid documents on reading and browsing. the study involved six conditions that differ along several dimensions, including the degree of typographic adjustment and the distance glosses are placed from anchors. six subjects read and answered questions about two hypertext corpora while being monitored by an eyetracker. the eyetracking data revealed no substantial differences in eye behavior between conditions.
gloss placement was significant: subjects required less time to use nearby glosses. finally, the reaction to the conditions was highly varied, with several conditions receiving both a best and worst rating on the subjective questionnaires. these results suggest implications for the design of dynamic reading environments. polle t. zellweger susan harkness regli jock d. mackinlay bay-wei chang intent: an integrated environment for distributed heterogeneous databases distributed database technology evolved from the need to integrate large volumes of corporate information to lower production and maintenance costs. most of the contemporary distributed database systems usually lack the pervasive component furnishing the entire system with the appropriate structural and semantical capabilities. it is rather likely that these systems comprise a number of disconnected subsystems patched together to provide an ad hoc and temporal functionality. such systems are poorly engineered and thus unreliable and expensive to maintain, increment, or modify. by contrast, the architectural framework envisaged tries to remedy this situation by suggesting that database management facilities should be the converging point in a data-intensive application environment, no matter if it is centralized or distributed. intent is an ongoing project which reconsiders several of the long standing assumptions and perspectives that are pervading the field of distributed database management. intent proposes alternatives and solutions leading towards the development of an integrated architectural framework supporting independence from the physical distribution of the component databases while providing the users with a transparent view of the information that is scattered across the nodes of a common network. this can be achieved by introducing new software components which provide an assembly of well-defined logical interfaces for integration with the already existing data management systems. intent relieves the users from the serious problem of integrated retrieval by providing them with a single integrated unifying view of the heterogeneous data stored in the diverse local databases. the intent distributed conceptual schema is a highly-logical view of the information content of the integrated system which does not require that the individual databases are physically integrated: rather, all global database access and manipulation operations are mediated through this new form of conceptual schema. as individual schemas may contain redundant or possibly conflicting information, the distributed conceptual schema will have to be aware of the logical relationships among seemingly disjoint components of the local schemas. this approach attempts to embed the advantages of logical centralization in a system presenting a semi-decentralized nature [1]. the initial distributed schema comprises a collection of global data properties and a set of appropriate transformation-translation rules between global and local data properties. moreover, it contains the appropriate amount of metaknowledge which enables it to specify consistency and integrity constraints as well as to establish useful assertions concerning the locally defined properties. the distributed conceptual schema entails a dynamic nature, and by recording automatically any changes at the local levels it increases its amount of metaknowledge concerning the description of the entire database complex.
furthermore, it incorporates such properties that guarantee its potential self-adjustment to changes effected by its environment to meet the ever changing information requirements of its users [2], and provides the basis for the materialization of the unifying query language of the system [3]. during our research on heterogeneous distributed database management systems (hd-dbms), we have postulated a number of advanced features to assist the designer in the formidable task of retrieving and manipulating information in such an environment. these features mainly emanate from the object-oriented paradigm, functional programming, and from knowledge and rule based approaches. in general, we envisage a new generation of dbmss that should provide, amongst others, support for: objects as advanced abstraction mechanisms. functional operations. uniform data and meta-data representation. rule-based consistency checks. the approach for achieving this is by advocating the use of a higher-level object oriented data model to serve as the unifying data model of the system. one major benefit of this approach is that the common behaviors and descriptions of both object types and instances can be shared. in addition, the internal behavior of objects is hidden by external communication protocols. as processing in an object oriented programming environment emphasizes behavior, application code can be shared by similar object types. more specifically, one can write application code to manipulate particular data, but specify special handlers for each specialization of the data type that needs to perform different processing. the overall impact of this work is that multiple users can transparently and concurrently manipulate heterogeneous databases distributed over several communicating nodes in a common network. the aforementioned features can be thought of as the main structural components of a modeling facility coping with the predicament of distributed modeling. under these assumptions, we currently explore the adaptation of a higher-level object oriented data model called the extended semantic data model (esdm model) [3] to serve as the mapping front-end between diametrically different data representations. the purpose of the esdm is to support the richer semantic structures and dynamic aspects of database applications whose substance tends to evolve during their life-span. esdm encompasses concepts and primitives underlying functional data models [4], [5] as well as fundamentals of object oriented programming languages. to provide a conceptually natural data manipulation language we have borrowed concepts loosely coupled with functional languages which provide the environment for syntactically more readable and concise forms of queries. the esdm language constructs combine many important features such as rich typing facilities, increased expressive power, optimization facilities and finally interactive user interfaces that support high-level set-based query and update languages. the innovative aspect of this research is to extend the expedience of a distributed data representation substrate, capable of supporting the coexistence of diverse data models and incorporating knowledge representation techniques, to manage distributed data integration and global information processing. m. p. papazoglou j. bubenko m.
norrie evaluating smil lloyd rutledge lynda hardman jacco van ossenbruggen virtual slots: increasing power and reusability for user interface development languages francisco saiz javier contreras roberto moriyon chroma (demonstration abstract): a content-based image retrieval system ting-sheng lai john tait patterns of contact and communication in scientific research collaboration in this paper, we describe the influence of physical proximity on the development of collaborative relationships between scientific researchers and on the execution of their work. our evidence is drawn from our own studies of scientific collaborators, as well as from observations of research and development activities collected by other investigators. these descriptions provide the foundation for a discussion of the actual and potential role of communications technology in professional work, especially for collaborations carried out at a distance. robert kraut carmen egido jolene galegher user experience of clive/mbanx solution shahrokh daijavad tong-haing fin tom frauenhofer tetsu fujisaki alison lee maroun touma catherine g. wolf workflow management systems: still few operational systems peter kueng data-valued partitioning and virtual messages (extended abstract) network partition failures in traditional distributed databases cause severe problems for transaction processing. the only way to overcome the problems of "blocking" behavior for transaction processing in the event of such failures is, effectively, to execute them at single sites. a new approach to data representation and distribution is proposed and it is shown to be suitable for failure-prone environments. we propose techniques for transaction processing, concurrency control and recovery for the new representation. several properties that arise as a result of these methods, such as non-blocking behavior, independent recovery and high availability, suggest that the techniques could be profitably implemented in a distributed environment. nandit soparkar abraham silberschatz dynamic bookmarks for the www hajime takano terry winograd the active badge system (abstract) andy hopper andy harter tom blackie join processing in relational databases the join operation is one of the fundamental relational database query operations. it facilitates the retrieval of information from two different relations based on a cartesian product of the two relations. the join is one of the most difficult operations to implement efficiently, as no predefined links between relations are required to exist (as they are with network and hierarchical systems). the join is the only relational algebra operation that allows the combining of related tuples from relations on different attribute schemes. since it is executed frequently and is expensive, much research effort has been applied to the optimization of join processing. in this paper, the different kinds of joins and the various implementation techniques are surveyed. these different methods are classified based on how they partition tuples from different relations. some require that all tuples from one be compared to all tuples from another; other algorithms only compare some tuples from each. in addition, some techniques perform an explicit partitioning, whereas others are implicit. priti mishra margaret h.
eich an algorithm for inferring multivalued dependencies with an application to propositional logic an algorithm is given for deciding whether a functional or a multivalued dependency (with a right-hand side y) is implied by a given set of functional and multivalued dependencies. the running time of the algorithm is linear in the product of the number of attributes in y and the size of the description of the given dependency set. the problem of constructing the dependency basis of a set of attributes x is also investigated. it is shown that the dependency basis can be found in time proportional to s times the size of the dependency description, where s is the number of sets in the dependency basis. since functional and multivalued dependencies correspond to a subclass of propositional logic (that can be viewed as a generalization of horn clauses), the algorithm given is also an efficient inference procedure for this subclass of propositional logic. yehoshua sagiv an overview of the international symposium on wearable computers 1998 mark billinghurst thad starner virtual organizations: two choice problems daniel e. o'leary the flag taxonomy of open hypermedia systems kasper Østerbye uffe kock wiil building a distributed application using visual obliq krishna bharat marc h. brown musical content-based retrieval: an overview of the melodiscov approach and system pierre-yves rolland gailius raškinis jean-gabriel ganascia interface and data architecture for query preview in networked information systems there are numerous problems associated with formulating queries on networked information systems. these include increased data volume and complexity, accompanied by slow network access. this article proposes a new approach to network query user interfaces that consists of two phases: query preview and query refinement. this new approach is based on the concepts of dynamic queries and query previews, which guides users in rapidly and dynamically eliminating undesired records, reducing the data volume to a manageable size, and refining queries locally before submission over a network. examples of two applications are given: a restaurant finder and a prototype for nasa's earth observing systems data information systems (eosdis). data architecture is discussed, and user feedback is presented. catherine plaisant ben shneiderman khoa doan tom bruns experimental evaluation of pfs continuous media file system wonjun lee difu su duminda wijesekera jaideep srivastava deepak kenchammana-hosekote mark foresti cognitive factors in design: basic phenomena in human memory and problem solving thomas t. hewett fast supervised dimensionality reduction algorithm with applications to document categorization & retrieval george karypis eui-hong (sam) han how a group-editor changes the character of a design meeting as well as its outcome judith s. olson gary m. olson marianne storrøsten mark carter creating and sharing web notes via a standard browser ng s. t. chong masao sakauchi the graspin db - a syntax directed, language independent software engineering database c. zaroliagis p. soupos s. goutas d. christodoulakis providing awareness information in remote computer-mediated collaboration (doctoral colloquium) susan e. mcdaniel formal process ontology the paper reports some results of the author's recent work on a process-ontological framework called apt. apt is based on the notion of a free process, a new category in ontology, with broad's/sellars' "subjectless" or "pure" processes as their closest cognates.
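as a hedged illustration of the query-preview approach described above, the sketch below keeps a small table of counts on the client so the interface can report, before any network query is issued, how many records the current selections would match; the data and attribute names are assumptions, not the eosdis prototype.

from collections import Counter
from itertools import product

records = [                      # hypothetical catalog summary shipped to the client
    {"year": 1992, "region": "africa"},
    {"year": 1992, "region": "europe"},
    {"year": 1993, "region": "africa"},
    {"year": 1994, "region": "asia"},
]
preview = Counter((r["year"], r["region"]) for r in records)

def preview_count(years, regions):
    # number of matching records for the selected attribute values
    return sum(preview[(y, g)] for y, g in product(years, regions))

print(preview_count({1992, 1993}, {"africa"}))   # -> 2, shown instantly in the preview bar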
with a pointer to verbal and nominal aspect shift as background motivation i argue that basic classificatory terms of common sense reasoning (activity, event, thing, stuff, property, etc.) can be treated as different types of free processes. after introducing essential elements of the formal framework of apt (a non-standard extensional mereology with a non-transitive part-relation) i sketch apt-classifications of basic and complex processes, based on five main descriptive parameters: homomerity pattern, participant structure, dynamic composition, dynamic shape, and dynamic context. johanna seibt answering queries with useful bindings in information-integration systems, sources may have diverse and limited query capabilities. to obtain maximum information from these restrictive sources to answer a query, one can access sources that are not specified in the query (i.e., off-query sources). in this article, we propose a query-planning framework to answer queries in the presence of limited access patterns. in the framework, a query and source descriptions are translated to a recursive datalog program. we then solve optimization problems in this framework, including how to decide whether accessing off-query sources is necessary, how to choose useful sources for a query, and how to test query containment. we develop algorithms to solve these problems, and thus construct an efficient program to answer a query. chen li edward chang logical design of a reliable transaction management in a distributed multiple processor system the logical design of a reliable transaction management in a distributed multiple processor system is presented. a main characteristic of the system is that it realizes autonomous nodes. this has implications for the construction of the distributed data base management software. we discuss some construction problems and we provide the concept of guardian transaction managers for the implementation of transactions as atomic units of consistency and recovery. herbert kuss hci research at the institute of systems science john a. waterworth juzaar motiwalla physical versus virtual pointing evan d. graham christine l. mackenzie deriving functional dependencies from the entity-relationship model sudha ram mlr: a recovery method for multi-level systems to achieve high concurrency in a database system has meant building a system that copes well with important special cases. recent work on multi-level systems suggests a systematic path to high concurrency. a multi-level system using locks permits restrictive low level locks of a subtransaction to be replaced with less restrictive high level locks when sub-transactions commit, enhancing concurrency. this is possible because sub-transactions can be undone via high level compensation actions rather than by restoring a prior lower level state. we describe a recovery scheme, called multi-level recovery (mlr), that logs this high level undo operation with the commit record for the subtransaction that it compensates, posting log records to only a single log. a variant of the method copes with nested transactions, and both nested and multi-level transactions can be treated in a unified fashion. david b. lomet development of meta databases for geospatial data in the www stefan göbel karen lutze relevance and contributing information types of searched documents in task performance end-users base the relevance judgements of the searched documents on the expected contribution to their task of the information contained in the documents.
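as a hedged, highly simplified sketch of the multi-level recovery idea described above: when a subtransaction commits, a logical undo action is logged alongside its commit record, so a parent abort can run compensations instead of restoring low-level state; the names and data structures below are illustrative assumptions, not the paper's scheme.

log = []

def do_increment(counter, key, amount):
    counter[key] = counter.get(key, 0) + amount          # low-level update
    # subtransaction commit: record the compensating high-level action
    log.append(("subcommit", lambda: do_decrement(counter, key, amount)))

def do_decrement(counter, key, amount):
    counter[key] -= amount

def abort_parent():
    # undo committed subtransactions by running their logged compensations in reverse
    while log:
        kind, compensate = log.pop()
        if kind == "subcommit":
            compensate()

counter = {}
do_increment(counter, "a", 5)
do_increment(counter, "a", 3)
abort_parent()
print(counter)   # {'a': 0} -- restored via compensations, not by restoring prior state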
there is a shortage of studies analyzing the relationships between the experienced contribution, relevance assessments and type of information initially sought. this study categorizes the types of information in documents being used in writing a research proposal for a master's thesis by eleven students throughout the various stages of the proposal writing process. the role of the specificity of the searched information in influencing its contribution is analyzed. the results demonstrate that different types of information are sought at different stages of the writing process and thus the contribution of the information also differs at the different stages. the categories of the contributing information can be understood in terms of topicality. pertti vakkari lab report special section: information retrieval research in the university of sheffield peter willett a voice in the silence: a constructive hypermedia application (abstract) dennis fukai interactive information retrieval systems: from user centered interface design to software design p. mulhem l. nigay office monitor nicole yankelovich cynthia d. mclain windowing vs scrolling on a visual display terminal to study a different star the astronomer moves his telescope. to study a different bacterium the biologist moves his microscope slide. in the case of the astronomer, it is the viewing instrument that is being moved, while in the case of the biologist it is the viewed object that is being moved. these scientists have no choice; the nature of their equipment requires that they operate in a pre-defined way. the user of a video display terminal (vdt), however, can be given a choice. the vdt user views a representation of an area of computer memory. in most cases the portion of memory the user wishes to examine is much larger than that which will fit on the screen at one time. for this reason almost all vdt's are equipped with some sort of "scroll function" that allows the user to display data that is located beyond the limits of the screen. kevin f. bury james m. boyle r. james evey alan s. neal a blueprint for automatic indexing g. salton serializability by locking mihalis yannakakis a visual environment for multimedia object retrieval we present a graph-based object model that has been used as a uniform framework for direct manipulation of multimedia information. after a brief introduction motivating the need for abstraction and structuring mechanisms in hypermedia systems, we introduce the data model and the visual retrieval environment which, combining filtering, browsing and navigation techniques, provides an integrated view of the retrieval problem. dario lucarella antonella zanzi mauro zanzi a review and taxonomy of distortion-oriented presentation techniques one of the common problems associated with large computer-based information systems is the relatively small window through which an information space can be viewed. increasing interest in recent years has been focused on the development of distortion-oriented presentation techniques to address this problem. however, the growing number of new terminologies and techniques developed have caused considerable confusion to the graphical user interface designer, consequently making the comparison of these presentation techniques and generalization of empirical results of experiments with them very difficult, if not impossible. this article provides a taxonomy of distortion-oriented techniques which demonstrates clearly their underlying relationships.
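as a hedged illustration of the kind of distortion function such taxonomies analyze, the sketch below uses a simple one-dimensional fisheye transform that magnifies the region near the focus and compresses the periphery; the specific formula is a commonly used one, not necessarily one drawn from this article.

def fisheye(x, d=3.0):
    # map normalized distance x in [0, 1] from the focus to its displayed position;
    # d > 0 controls the amount of distortion (d = 0 leaves the layout undistorted)
    return (d + 1) * x / (d * x + 1)

for x in (0.0, 0.1, 0.25, 0.5, 1.0):
    print(x, round(fisheye(x), 3))   # points near the focus spread out, distant ones bunch up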
a unified theory is presented to reveal their roots and origins. issues relating to the implementation and performance of these techniques are also discussed. y. k. leung m. d. apperley learning, memory and technology: some initial considerations monica divitini carla simone a french text recognition model for information retrieval system g. antoniadis g. lallich-boidin y. polity j. rouault technology survey: future perfect stephan somogyi towards the prediction of development effort for web applications emilia mendes wendy hall xtract: a system for extracting document type descriptors from xml documents xml is rapidly emerging as the new standard for data representation and exchange on the web. an xml document can be accompanied by a _document type descriptor_ (dtd) which plays the role of a schema for an xml data collection. dtds contain valuable information on the structure of documents and thus have a crucial role in the efficient storage of xml data, as well as the effective formulation and optimization of xml queries. in this paper, we propose xtract, a novel system for inferring a dtd schema for a database of xml documents. since the dtd syntax incorporates the full expressive power of _regular expressions_, naive approaches typically fail to produce concise and intuitive dtds. instead, the xtract inference algorithms employ a sequence of sophisticated steps that involve: (1) finding patterns in the input sequences and replacing them with regular expressions to generate "general" candidate dtds, (2) factoring candidate dtds using adaptations of algorithms from the logic optimization literature, and (3) applying the minimum description length (mdl) principle to find the best dtd among the candidates. the results of our experiments with real-life and synthetic dtds demonstrate the effectiveness of xtract's approach in inferring concise and semantically meaningful dtd schemas for xml databases. minos garofalakis aristides gionis rajeev rastogi s. seshadri kyuseok shim the character, value, and management of personal paper archives we explored general issues concerning personal information management by investigating the characteristics of office workers' paper-based information, in an industrial research environment. we examined the reasons people collect paper, types of data they collect, problems encountered in handling paper, and strategies used for processing it. we tested three specific hypotheses in the course of an office move. the greater availability of public digital data along with changes in people's jobs or interests should lead to wholesale discarding of paper data, while preparing for the move. instead we found workers kept large, highly valued paper archives. we also expected that the major part of people's personal archives would be unique documents. however, only 49% of people's archives were unique documents, the remainder being copies of publicly available data and unread information, and we explore reasons for this. we examined the effects of paper-processing strategies on archive structure. we discovered different paper-processing strategies (filing and piling) that were relatively independent of job type. we predicted that filers' attempts to evaluate and categorize incoming documents would produce smaller archives that were accessed frequently. contrary to our predictions, filers amassed more information, and accessed it less frequently than pilers.
we argue that filers may engage in premature filing: to clear their workspace, they archive information that later turns out to be of low value. given the effort involved in organizing data, they are also loath to discard filed information, even when its value is uncertain. we discuss the implications of this research for digital personal information management. steve whittaker julia hirschberg experiences with hyperbase: a hypertext database supporting collaborative work this paper describes the architecture and experiences with a hyperbase (hypertext database). hyperbase is based on the client-server model and has been designed especially to support collaboration. hyperbase has been used in a number of (hypertext) applications in our lab and is currently being used in research projects around the world to provide database support to all kinds of applications. one application from our lab is a multiuser hypertext system for collaboration which deals with three fundamental issues in simultaneous sharing: access contention, real-time monitoring and real-time communication. major experiences with hyperbase (collaboration support, data modeling and performance) gained from use both in our lab and in different projects at other research sites are reported. one major lesson learned is that hyperbase can provide powerful support for data sharing among multiple users simultaneously sharing the same environment. uffe kock wiil efficient search server assignment in a disproportionate system environment toru takaki tsuyoshi kitani improved histograms for selectivity estimation of range predicates viswanath poosala peter j. haas yannis e. ioannidis eugene j. shekita an empirical study of human web assistants: implications for user support in web information systems user support is an important element in reaching the goal of universal usability for web information systems. recent developments indicate that human involvement in user support is a step towards this goal. however, most such efforts are currently being pursued on a purely intuitive basis. thus, empirical findings about the role of human assistants are important. in this paper we present the findings from a field study of a general user support model for web information systems. we show that integrating human assistance into web systems is a way to provide efficient user support. further, this integration makes a web site more fun to use and increases the user's trust in the site. the support also improves the site atmosphere. our findings are summarised as recommendations and design guidelines for decision-makers and developers of web systems. johan aberg nahid shahmehri supporting semantic information retrieval in communication networks by multimedia techniques annelise mark pejtersen performance analysis of file replication schemes in distributed systems in distributed systems the efficiency of the network file system is a key performance issue. replication of files and directories can enhance file system efficiency, but the choice of replication techniques is crucial. this paper studies a number of replication techniques, including remote access, prereplication, weighted voting, and two demand replication schemes: polling and staling. it develops a markov chain model, which is capable of characterizing properties of file access sequences, including access locality and access bias. the paper compares the replication techniques under three different network file system architectures.
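as a hedged toy simulation of why demand replication benefits from access locality: once a file has been copied on first access, repeated local references need no further transfers, while pure remote access pays a transfer on every reference. the single cached copy, the parameters, and the reference stream below are simplifying assumptions, not the paper's markov model.

import random

def simulate(locality=0.9, accesses=10_000, files=50, seed=1):
    random.seed(seed)
    current = 0
    remote_transfers = demand_transfers = 1   # the first access fetches the file either way
    for _ in range(1, accesses):
        remote_transfers += 1                 # remote access: one transfer per reference
        if random.random() > locality:        # the reference stream moves to a different file
            current = (current + random.randrange(1, files)) % files
            demand_transfers += 1             # demand replication: fetch only on a local miss
    return remote_transfers, demand_transfers

print(simulate(locality=0.9))    # demand replication needs far fewer transfers
print(simulate(locality=0.2))    # the advantage shrinks as locality drops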
the results show that, under reasonable assumptions, demand replication requires fewer file transfers than remote access, especially for files that have a high degree of access locality. among the demand replication schemes, staling requires fewer auxiliary messages than polling. zuwang ruan walter f. tichy hierarchical materialisation of methods in object-oriented views: design, maintenance, and experimental evaluation the application of materialised object-oriented views in object-relational data warehousing systems is promising. in this paper we propose a novel technique for the materialisation of method results in object-oriented views, called _hierarchical materialisation_. when an object used to materialise the result of method _m_ is updated, then _m_ has to be recomputed. this recomputation can use unaffected intermediate materialised results of methods called from _m_, thus reducing recomputation time. the hierarchical materialisation technique was implemented and evaluated by a number of experiments concerning methods without input arguments as well as methods with input arguments. the results showed that hierarchical materialisation reduces method recomputation time. moreover, materialising methods with input arguments of narrow discrete domains introduces only a small time overhead. bartosz bebel robert wrembel from usability lab to "design collaboratorium": reframing usability practice this paper presents an exploratory process in which three industrial usability groups, in cooperation with hci researchers, worked to reframe their own work practice. the usability groups moved beyond a classical usability setting towards a new way of working which we have coined the design collaboratorium. this design collaboratorium is a design approach that creates an open physical and organizational space where designers, engineers, users and usability professionals meet and work alongside each other. at the same time the design collaboratorium makes use of event-driven ways of working known from participatory design. some of these working methods are well documented in the literature but adapted to the needs of the particular project; others are new. this paper illustrates how it is possible to reframe usability work and it discusses the new usability competence required. jacob buur susanne bødker a comparison of still, animated, or nonillustrated on-line help with written or spoken instructions in a graphical user interface susan m. harrison everyone is talking about knowledge management (panel) irene greif deriving initial data warehouse structures from the conceptual data models of the underlying operational information systems in recent years the construction of large scale data schemes for operational systems has been the major problem of conceptual data modeling for business needs. multidimensional data structures used for decision support applications in data warehouses place rather different requirements on data modeling techniques. in the case of operational systems the data models are created from application specific requirements. the data models in data warehouses are based on the analytical requirements of the users. furthermore, the development of data warehouse structures requires consideration of user-defined information requirements as well as of the underlying operational source systems. in this paper we show that the conceptual data models of the underlying operational information systems can support the construction of multidimensional structures.
we would like to point out that the special features of the structured entity relationship model (serm) are not only useful for the development of big operational systems but can also help with the derivation of data warehouse structures. the serm is an extension of the conventional entity relationship model (erm) and the conceptual basis of the data modeling technique used by the sap corporation. to illustrate the usefulness of this approach we explain the derivation of the warehouse structures from the conceptual data model of a flight reservation system. michael boehnlein achim ulbrich-vom ende graphical multiscale web histories: a study of padprints ron r. hightower laura t. ring jonathan i. helfman benjamin b. bederson james d. hollan use of a component architecture in integrating relational and non-relational robert atkinson recovery for transaction failures in object-based databases man hon wong frame-sliced partitioned parallel signature files the retrieval capabilities of the signature file access method have become very attractive for many data processing applications dealing with both formatted and unformatted data. however, performance is still a problem, mainly when large files are used and fast response is required. in this paper, a high performance signature file organization is proposed, integrating the latest developments both in storage structure and parallel computing architectures. it combines horizontal and vertical approaches to the signature file fragmentation. in this way, a new, mixed decomposition scheme, particularly suitable for parallel implementation, is achieved. the organization, based on this fragmentation scheme, is called the fragmented signature file. performance analysis shows that this organization provides very good and relatively stable performance, covering the full range of possible queries. for the same degree of parallelism, it outperforms any other parallel signature file organization that has been defined so far. the proposed method also has other important advantages concerning processing of dynamic files, adaptability to the number of available processors, load balancing, and, to some extent, fault-tolerant query processing. fabio grandi paolo tiberio pavel zezula tourist - conceptual hypermedia tourist information j. c. bullock c. a. goble supporting organizational problem solving with a work station gerald barber visual querying and explanation of recommendations from collaborative filtering systems junichi tatemura a close look at the ifo data model magdy s. hanna providing assurances in a multimedia interactive environment doree duncan seligmann rebecca t. mercuri john t. edmark an incremental access method for viewcache: concept, algorithms, and cost analysis a viewcache is a stored collection of pointers pointing to records of underlying relations needed to materialize a view. this paper presents an incremental access method (iam) that amortizes the maintenance cost of viewcaches over a long time period or indefinitely. amortization is based on deferred and other update propagation strategies. a deferred update strategy allows a viewcache to remain outdated until a query needs to selectively or exhaustively materialize the view. at that point, an incremental update of the viewcache is performed. this paper defines a set of conditions under which incremental access to the viewcache is cost effective.
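as a hedged sketch of the kind of cost-based decision such an incremental method needs: refresh the viewcache incrementally only while the pending update log is small relative to recomputing the view from scratch. the cost model and parameters below are illustrative assumptions, not the paper's criteria.

def choose_strategy(pending_log_records, base_relation_pages,
                    cost_per_log_record=1.0, cost_per_page_scan=0.5):
    # compare the estimated cost of applying the deferred update log against
    # the estimated cost of rematerializing the view from its base relations
    incremental_cost = pending_log_records * cost_per_log_record
    recompute_cost = base_relation_pages * cost_per_page_scan
    return "incremental" if incremental_cost < recompute_cost else "recompute"

print(choose_strategy(pending_log_records=200, base_relation_pages=10_000))     # incremental
print(choose_strategy(pending_log_records=80_000, base_relation_pages=10_000))  # recompute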
the decision criteria are based on some dynamically maintained cost parameters, which provide accurate information but require inexpensive bookkeeping. the iam capitalizes on the viewcache storage organization for performing the update and the materialization of the viewcaches in an interleaved mode using one-pass algorithms. compared to the standard technique for supporting views that requires reexecution of the definition of the view, the iam offers significant performance advantages. we will show that under favorable conditions, most of which depend on the size of the incremental update logs between consecutive accesses of the views, the incremental access method outperforms query modification. performance gains are higher for multilevel viewcaches because all the i/o and cpu for handling intermediate results are avoided. nicholas roussopoulos towards usability guidelines for multimedia systems the advent of technology which supports the concurrent presentation of information through a range of different media has raised new issues relating to the design of usable systems. while previous work in the areas of both human-computer interaction (hci) and hypermedia system usability can contribute a considerable amount to the development of such guidelines, we believe that the use of multiple output media demands an understanding of particular characteristics and limitations of users' attentional capabilities. this paper presents some initial guidelines for the design of usable multimedia systems. these guidelines are based on empirical findings regarding the nature of human attention derived from the field of experimental psychology. we believe that the provision of such guidelines for multimedia interface design will support designers in achieving the dual goals of maximising a user's flexibility in controlling the presentation of multiple concurrent media, while keeping cognitive load to an acceptable level. m. bearne s. jones j. sapsford-francis turning away from talking heads: the use of video-as-data in neurosurgery bonnie a. nardi heinrich schwarz allan kuchinsky robert leichner steve whittaker robert sclabassi on site: an "out-of-box" experience jason w. fouts implementing catalog clearinghouses with xml and xsl andrew v. royappa a collectivity in an electronic social space wayne g. lutters mark s. ackerman upa93: usability professionals association annual meeting, 21 - 23 july 1993, redmond, wa, usa jakob nielsen cultural perceptions of task-technology fit acknowledging cultural differences helps companies build the strongest global virtual teams and determine the strongest tools they need. anne p. massey mitzi montoya-weiss caisy hung v. ramesh learning disabled students' difficulties in learning to use a word processor: implications for design charles a macarthur ben shneiderman the effects of information accuracy on user trust and compliance jean e. fox capturing and playing multimedia events with streams (demonstration) gil cruz ralph hill on the design of a learning crawler for topical resource discovery in recent years, the world wide web has shown enormous growth in size. vast repositories of information are available on practically every possible topic. in such cases, it is valuable to perform topical resource discovery effectively. consequently, several new ideas have been proposed in recent years; among them a key technique is focused crawling which is able to crawl particular topical portions of the world wide web quickly, without having to explore all web pages. 
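as a hedged sketch of a learning-based crawler of the kind this entry goes on to propose: candidate urls are prioritized by a score learned online from the pages crawled so far. the token-ratio scoring, and the fetch_page, extract_links and is_relevant helpers passed in by the caller, are assumptions for illustration, not the paper's model.

import heapq

def crawl(seeds, fetch_page, extract_links, is_relevant, budget=100):
    token_seen, token_useful = {}, {}
    def score(url):
        tokens = url.lower().replace("/", ".").split(".")
        ratios = [token_useful.get(t, 1) / token_seen.get(t, 2) for t in tokens]
        return sum(ratios) / len(ratios)              # crude estimate of usefulness
    frontier = [(-1.0, url) for url in seeds]         # max-priority queue via negated scores
    heapq.heapify(frontier)
    visited, relevant = set(seeds), []
    while frontier and budget > 0:
        _, url = heapq.heappop(frontier)
        budget -= 1
        page = fetch_page(url)
        useful = is_relevant(page)
        if useful:
            relevant.append(url)
        for t in url.lower().replace("/", ".").split("."):   # update the learned statistics
            token_seen[t] = token_seen.get(t, 2) + 1
            token_useful[t] = token_useful.get(t, 1) + (1 if useful else 0)
        for link in extract_links(page):
            if link not in visited:
                visited.add(link)
                heapq.heappush(frontier, (-score(link), link))
    return relevant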
in this paper, we propose the novel concept of intelligent crawling which actually _learns_ characteristics of the linkage structure of the world wide web while performing the crawling. specifically, the intelligent crawler uses the inlinking web page content, candidate url structure, or other behaviors of the inlinking web pages or siblings in order to estimate the probability that a candidate is useful for a given crawl. this is a much more general framework than the focused crawling technique which is based on a pre-defined understanding of the topical structure of the web. the techniques discussed in this paper are applicable for crawling web pages which satisfy arbitrary user-defined predicates such as topical queries, keyword queries, or any combinations of the above. unlike focused crawling, it is not necessary to provide representative topical examples, since the crawler can learn its way into the appropriate topic. we refer to this technique as _intelligent crawling_ because of its adaptive nature in adjusting to the web page linkage structure. we discuss how to intelligently select features which are most useful for a given crawl. the learning crawler is capable of reusing the knowledge gained in a given crawl in order to provide more efficient crawling for closely related predicates. charu c. aggarwal fatima al-garawi philip s. yu a test of task-technology fit theory for group support systems group support systems (gss) provide both promise and puzzlement. experimental studies of different systems over the years have resulted in conflicting findings --- sometimes enhancing group performance, at other times having no effect, and occasionally even resulting in worse performance for gss-supported groups than for traditional groups. researchers have speculated that the mixed results are due to a poor fit of the gss with the group's task. a recent model of task-technology fit has provided a theoretical perspective from which to test this issue. in this paper, a theory of task-technology fit is tested by applying it to a selected set of published gss experiments. key constructs in the theory are operationalized via coding instruments, and the application of the coding scheme provides support for the theory. ilze zigurs bonnie k. buckland james r. connolly e. vance wilson collaborative multimedia in shastra c. bajaj s. cutchin creating trading networks of digital archives digital archives can best survive failures if they have made several copies of their collections at remote sites. in this paper, we discuss how autonomous sites can cooperate to provide preservation by trading data. we examine the decisions that an archive must make when forming trading networks, such as the amount of storage space to provide and the best number of partner sites. we also deal with the fact that some sites may be more reliable than others. experimental results from a data trading simulator illustrate which policies are most reliable. our techniques focus on preserving the ``bits'' of digital collections; other services that focus on other archiving concerns (such as preserving meaningful metadata) can be built on top of the system we describe here.
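a back-of-the-envelope companion to the trading-networks entry above: the monte carlo estimate below shows how the chance that at least one copy of a collection survives grows with the number of trading partners. the per-site failure probabilities are invented for illustration, and the sketch is not the paper's simulator.

    # monte carlo estimate of collection survival when copies are traded to k
    # partner sites, each of which fails independently with its own probability.
    import random

    def survival_probability(site_failure_probs, trials=100_000, seed=7):
        rng = random.Random(seed)
        survived = 0
        for _ in range(trials):
            # the collection survives if any site holding a copy stays up
            if any(rng.random() > p for p in site_failure_probs):
                survived += 1
        return survived / trials

    if __name__ == "__main__":
        partners = [0.05, 0.10, 0.20, 0.30]   # hypothetical per-site failure rates
        for k in range(1, len(partners) + 1):
            print(k, "copies:", round(survival_probability(partners[:k]), 4))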
brian cooper hector garcia focus troupe: using drama to create common context for new product concept end-user evaluations tony salvador karen howells integrated video archive tools rune hjelsvold stein langorgen roger midstraum olav sandstå accurate user directed summarization from existing tools mark sanderson using local optimality criteria for efficient information retrieval with redundant information filters we consider information retrieval when the data---for instance, multimedia---is computationally expensive to fetch. our approach uses "information filters" to considerably narrow the universe of possibilities before retrieval. we are especially interested in redundant information filters that save time over more general but more costly filters. efficient retrieval requires that decisions must be made about the necessity, order, and concurrent processing of proposed filters (an "execution plan"). we develop simple polynomial-time local criteria for optimal execution plans and show that most forms of concurrency are suboptimal with information filters. although the general problem of finding an optimal execution plan is likely to be exponential in the number of filters, we show experimentally that our local optimality criteria, used in a polynomial-time algorithm, nearly always find the global optimum with 15 filters or less, a sufficient number of filters for most applications. our methods require no special hardware and avoid the high processor idleness that is characteristic of massive-parallelism solutions to this problem. we apply our ideas to an important application, information retrieval of captioned data using natural-language understanding, a problem for which the natural-language processing can be the bottleneck if not implemented well. neil c. rowe automated name authority control this paper describes a system for the automated assignment of authorized names. a collaboration between a computer scientist and a librarian, the system provides for enhanced end-user searching of digital libraries without increasing drastically the cost and effort of creating a digital library. it is a part of the workflow management system of the levy sheet music project. james w. warnner elizabeth w. brown compiling document collections from the internet v. kluev equivalence of relational algebra and relational calculus query languages having aggregate functions anthony klug exploratory sequential data analysis: exploring continuous observational data carolanne fisher penelope sanderson queries with incomplete answers over semistructured data yaron kanza werner nutt yehoshua sagiv a tour through tapestry douglas b. terry applying database visualization to the world wide web in this paper, we present visualizations of parts of the network of documents comprising the world wide web. we describe how we are using the hy+ visualization system to visualize the portion of the world wide web explored during a browsing session. as the user browses, the web browser communicates the url and title of each document fetched as well as all the anchors contained in the document. hy+ displays graphically the history of the navigation and multiple views of the structure of that portion of the web. masum z. hasan alberto o. mendelzon dimitra vista strategies and interfaces while it is difficult to devise human-computer interfaces which work well, it is even more difficult to determine why a good interface is good.
in programming and database query languages, many human factors experiments have been performed. these experiments have led to little hard knowledge about the nature of learning or using computer languages. the paucity of results suggests that we may be examining the wrong factors; we are looking at syntactic features of the interface when perhaps we should be looking at its cognitive features. one of the cognitive features of an interface language is the strategy employed when the language is used to solve a problem. the relationship of a language to the problem-solving strategies it encourages should be the focus of more human factors experiments. there are a number of problems associated with determining cognitive strategies. the first has to do with the "thinking-out-loud interview approach. while beginning students can sometimes articulate their thoughts and confusions while solving problems, they tend to internalize strategies as they progress, becoming unaware of some of their more powerful cognitive techniques. learning a computer language, like learning most things, is a progression of "chunkings", each level of which may lead to a different kind of hidden strategy. if we are to discover what strategies a language encourages, we must find a way of monitoring this iceberg-like hierarchy of strategies. error analysis may be the best way. we need to construct cognitive models of language users, predict what kinds of errors would occur if the models were correct, and run experiments to test the predictions. david w. stemple what is the shape of information?: human factors in the development and use of digital libraries andrew dillon modification operations in data base machines: where are they? advances in technology coupled with the inefficiency of the conventional von- neumann type architecture in handling database systems have motivated the design and implementation of the so called database machines. unfortunately, a majority of the proposed database machines have neglected to address proper procedures for modification operations. this negligence is partially due to the complexity of the operations and low performance of the database architectures in handling modification operations. first, the implementation of the modification operations in some database machines is overviewed. second, we will address the mechanisms in which the relational database machine aslm handles the modification operations. finally, our approach is compared against the other proposed approaches in the literature. a. r. hurson l. l. miller a user-friendly algorithm the interface between a person and a computer can be looked at from either side. programmers tend to view it from the inside; they consider it their job to defend the machine against errors made by its users. from the outside, the user sees his/her problems as paramount. he/she is often at odds with this complex, inflexible, albeit powerful tool. the needs of both people and machines can be reconciled; users will respond more efficiently and intelligently if they receive meaningful feedback. a "user-friendly" algorithm that covers a wide range of interactive environments and is typical of most operating systems and many application programs is presented. barry dwyer a telewriting system on a lan using a pen-based computer as the terminal seiichi higaki hiroshi taninaka shinji moriya dynamic query result previews for a digital library steve jones introduction and overview to human-computer interaction keith a. butler robert j. k. jacob bonnie e. 
john information and context: lessons from the study of two shared information systems paul dourish victoria bellotti wendy mackay chao-ying ma efficient searching in distributed digital libraries james c. french allison l. powell walter r. creighton the active database management system manifesto: a rulebase of adbms features active database systems have been a hot research topic for quite some years now. however, while "active functionality" has been claimed for many systems, and notions such as "active objects" or "events" are used in many research areas (even beyond database technology), it is not yet clear which functionality a database management system must support in order to be legitimately considered as an active system. in this paper, we attempt to clarify the notion of "active database management system" as well as the functionality it has to support. we thereby distinguish mandatory features that are needed to qualify as an active database system, and desired features which are nice to have. finally, we perform a classification of applications of active database systems and identify the requirements for an active database management system in order to be applicable in these application areas. corporate act-net consortium decision support systems - new approaches for realization georgi penchev margarita todorova augmenting reality: adding computational dimensions to paper wendy mackay gilles velay kathy carter chaoying ma daniele pagani kdd-cup 2000: question 4 winner's report e-steam, inc. rafal kustra jorge a. picazo bogdan e. popescu towards comprehensive database support for geoscientific raster data norbert widmann peter baumann principles of delay-sensitive multimedia data storage retrieval this paper establishes some fundamental principles for the retrieval and storage of delay-sensitive multimedia data. delay-sensitive data include digital audio, animations, and video. retrieval of these data types from secondary storage has to satisfy certain time constraints in order to be acceptable to the user. the presentation is based on digital audio in order to provide intuition to the reader, although the results are applicable to all delay-sensitive data. a theoretical framework is developed for the real-time requirements of digital audio playback. we show how to describe these requirements in terms of the consumption rate of the audio data and the nature of the data-retrieval rate from secondary storage. making use of this framework, bounds are derived for buffer space requirements for certain common retrieval scenarios. storage placement strategies for multichannel synchronized data are then categorized and examined. the results presented in this paper are basic to any playback of delay-sensitive data and should assist the multimedia system designer in estimating hardware requirements and in evaluating possible design choices. jim gemmell stavros christodoulakis self maintenance of multiple views in data warehousing materialized views (mv) at the data warehouse (dw) can be kept up to date in response to changes in data sources without accessing data sources for additional information. this process is usually referred to as "self maintenance of views". a number of algorithms have been proposed for self maintenance of views, which use auxiliary views (av) to keep some additional information in dw. in this paper we propose an algorithm for self maintainability of multiple mvs using the above approach.
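before the self-maintenance entry above continues, a toy illustration of the auxiliary-view idea: the warehouse keeps a materialized join view together with an auxiliary copy of the joining rows of the other source, so a batch of inserts can be folded in without contacting any source. the layout and names below are assumptions, not the algorithm proposed in the paper.

    # self-maintenance sketch: the warehouse stores the materialized join view MV
    # and an auxiliary view AV_S holding the rows of S keyed by join value, so
    # inserts into R can be propagated without querying the source that owns S.

    def initialize(r_rows, s_rows):
        mv = [(a, b, c) for (a, b) in r_rows for (b2, c) in s_rows if b == b2]
        av_s = {}                       # auxiliary view: join value -> rows of S
        for (b, c) in s_rows:
            av_s.setdefault(b, []).append((b, c))
        return mv, av_s

    def propagate_r_inserts(mv, av_s, delta_r):
        # join the delta against the locally kept auxiliary view only
        for (a, b) in delta_r:
            for (_, c) in av_s.get(b, []):
                mv.append((a, b, c))
        return mv

    if __name__ == "__main__":
        r = [("a1", 1), ("a2", 2)]
        s = [(1, "x"), (3, "y")]
        mv, av_s = initialize(r, s)
        propagate_r_inserts(mv, av_s, [("a3", 3), ("a4", 9)])
        print(mv)   # -> [('a1', 1, 'x'), ('a3', 3, 'y')]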
our algorithm generates a simple maintenance query to incrementally maintain an mv along with its av at dw. the algorithm maintains these views by minimizing the number and the size of the avs. our approach provides better insight into view maintenance issues by exploiting the dependencies and constraints that might exist in the data sources and multiple mvs at dw. s. samtani v. kumar m. mohania some recommendations for designing effective dss (abstract only) choosing the proper hardware and software tools is only one phase of implementing and managing decision support systems. generally, organizations pass through three phases in adopting dss. the first phase is concerned with learning and experimenting with the system. in the second phase management assesses the benefits and rewards of end-user computing. in the third phase management tries to regain the control which was loosened because of the unrestrained growth of dss. many organizations put great emphasis on the first and third phases of the dss and do not put much attention to the second phase which is concerned with gaining value from the dss systems. to optimize the benefits of dss, management must develop a well-balanced approach that ensures both control of the system and getting rewards from the system. decision support systems must be the mind-support for management to identify the high-impact areas within the organization which lead to the development of a well-balanced approach. this paper presents some recommendations for designing such dss systems. asad khailany wafa khorshid query optimization by using derivability in a data warehouse environment j. albrecht w. hummer w. lehner l. schlesinger the webcluster project. using clustering for mediating access to the world wide web mourad mechkour david j. harper gheorghe muresan scalable algorithms for mining large databases rajeev rastogi kyuseok shim group hci design: problems and prospects bernard j. catterall susan harker gary klein mark notess john c. tang build-it: a planning tool for construction and design matthias rauterberg morten fjeld helmut krueger martin bichsel uwe leonhardt markus meier space-scale diagrams: understanding multiscale interfaces george w. furnas benjamin b. bederson a logical design methodology for relational databases using the extended entity-relationship model a database design methodology is defined for the design of large relational databases. first, the data requirements are conceptualized using an extended entity-relationship model, with the extensions being additional semantics such as ternary relationships, optional relationships, and the generalization abstraction. the extended entity-relationship model is then decomposed according to a set of basic entity-relationship constructs, and these are transformed into candidate relations. a set of basic transformations has been developed for the three types of relations: entity relations, extended entity relations, and relationship relations. candidate relations are further analyzed and modified to attain the highest degree of normalization desired. the methodology produces database designs that are not only accurate representations of reality, but flexible enough to accommodate future processing requirements. it also reduces the number of data dependencies that must be analyzed, using the extended er model conceptualization, and maintains data integrity through normalization.
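purely to illustrate the kind of transformation step the methodology above relies on (these are not the authors' transformation rules), the sketch below maps a tiny entity-relationship description to candidate relations: a 1:n relationship contributes a foreign key on the "many" side, and an m:n relationship becomes a relationship relation of its own.

    # toy er -> relational mapping: every entity becomes an entity relation (its
    # first attribute is taken to be the key); a 1:N relationship adds a foreign
    # key on the N side; an M:N relationship becomes a relationship relation.

    def er_to_relations(entities, relationships):
        relations = {name: list(attrs) for name, attrs in entities.items()}
        for rel in relationships:
            a, b, card = rel["between"][0], rel["between"][1], rel["cardinality"]
            if card == "1:N":
                # b is the "many" side: it receives a's key as a foreign key
                relations[b].append(entities[a][0] + "_fk")
            elif card == "M:N":
                relations[rel["name"]] = [entities[a][0] + "_fk",
                                          entities[b][0] + "_fk"] + rel.get("attrs", [])
        return relations

    if __name__ == "__main__":
        entities = {"dept": ["dept_id", "dname"], "emp": ["emp_id", "ename"],
                    "project": ["proj_id", "title"]}
        relationships = [
            {"name": "works_in", "between": ("dept", "emp"), "cardinality": "1:N"},
            {"name": "assigned", "between": ("emp", "project"),
             "cardinality": "M:N", "attrs": ["hours"]},
        ]
        for name, attrs in er_to_relations(entities, relationships).items():
            print(name, attrs)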
this approach can be implemented manually or in a simple software package as long as a "good" solution is acceptable and absolute optimality is not required. toby j. teorey dongqing yang james p. fry hypermedia research directions: an infrastructure perspective uffe k. wiil peter j. nurnberg john j. leggett what video can and can't do for collaboration: a case study ellen a. isaacs john c. tang communication control in computer supported cooperative work systems this paper presents alphadeltaphi-groups (adp-group) as a communication tool for connection level management in distributed cscw systems. in order to accurately model cscw communication patterns, an adp- group is a related set of cooperating processes whose communication is supported by allowing a spectrum of quality-of-service, message delivery reliability, atomicity and causal ordering options to co-exist within the same group. adp-group communication provides appropriate connection management support and network control within distributed cscw environments characterized by a heterogeneous mixture of equipment types, network performance and user activity levels. this efficiency is achieved by defining a small set of canonical group communication operations, by automatically making appropriate connections between data sources and sinks, and by using a receiver-based method of connection specification, monitoring and modification. robert simon robert sclabassi taieb znati learner-centered design: addressing, finally, the unique needs of learners sherry hsi elliot soloway a standard for multimedia middleware d. j. duke i. herman efficient frequency domain video scrambling for content access control multimedia data security is very important for multimedia commerce on the internet such as video-on-demand and real-time video multicast. traditional cryptographic algorithms for data security are often not fast enough to process the vast amount of data generated by the multimedia applications to meet the real-time constraints. this paper presents a joint encryption and compression framework in which video data are scrambled efficiently in the frequency domain by employing selective bit scrambling, block shuffling and block rotation of the transform coefficients and motion vectors. the new approach is very simple to implement, yet provides considerable level of security, has minimum adverse impact on the compression efficiency, and allows transparency, transcodability, and other content processing functionalities without accessing the cryptographic key. wenjun zeng shawmin lei inter-relational information and incompleteness in relational databases (abstract only) we augment every relation scheme in the relational data base (rdb) with a special attribute f ( the flag attribute). the values for attribute f may only be non-negative integers. values of f in the set (n:n>1, n is odd) are used for capturing inter-relational disjunctive information. values of f in the set (n:n >1, n is even) are used for capturing intra-relational disjunctive information. tuples of relations with flag value 1 represent information known to be true. tuples of relations with flag value 0 represent information neither known to be true nor known to be false. we shall denote the predicate represented in a relation scheme by the name of the relation scheme. 
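before the worked example that follows, a small sketch of the flag convention just described (the grouping rules are inferred from that example, not quoted from the paper): flag 1 marks a fact known to be true, flag 0 a fact of unknown truth, an even flag greater than 1 groups tuples of one relation into an intra-relational disjunction, and an odd flag greater than 1 groups tuples across relations into an inter-relational disjunction. the usage example reuses the p and q instances tabulated in the next entry.

    # expand flag-annotated tuples into the disjunctive formulas they stand for.
    from collections import defaultdict

    def atom(rel, tup):
        return "%s(%s)" % (rel, ",".join(tup))

    def formulas(db):
        # db: relation name -> list of (tuple_of_values, flag)
        out = []
        intra = defaultdict(list)   # (relation, even flag > 1) -> atoms of that relation
        inter = defaultdict(list)   # odd flag > 1 -> atoms drawn from any relation
        for rel, rows in db.items():
            for tup, flag in rows:
                a = atom(rel, tup)
                if flag == 1:
                    out.append(a)                      # known to be true
                elif flag == 0:
                    out.append("%s v ~%s" % (a, a))    # truth value unknown
                elif flag % 2 == 0:
                    intra[(rel, flag)].append(a)       # intra-relational disjunction
                else:
                    inter[flag].append(a)              # inter-relational disjunction
        for key in sorted(intra):
            out.append(" v ".join(intra[key]))
        for key in sorted(inter):
            out.append(" v ".join(inter[key]))
        return out

    if __name__ == "__main__":
        db = {"p": [(("x1", "y1"), 1), (("x2", "y2"), 2), (("x3", "y3"), 2), (("x4", "y4"), 3)],
              "q": [(("y1", "z1"), 0), (("y2", "z2"), 2), (("y3", "z3"), 2), (("y4", "z4"), 3)]}
        for f in formulas(db):
            print(f)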
below we show instances p and q of relation schemes p(x,y,f) and q(y,z,f) respectively, and the logical expressions representing the information contained in them:

p(x y f)      q(y z f)
x1 y1 1       y1 z1 0
x2 y2 2       y2 z2 2
x3 y3 2       y3 z3 2
x4 y4 3       y4 z4 3

the logical expressions are:
p(x1,y1)
q(y1,z1) v ~q(y1,z1)
p(x2,y2) v p(x3,y3)
q(y2,z2) v q(y3,z3)
p(x4,y4) v q(y4,z4)

under the domain closure assumption [1], a tuple (x1,@@@@) in an instance of the conventional relation scheme r(x,y) represents the disjunction: r(x1,y1) v r(x1,y2) v r(x1,y3), when @@@@ denotes the null value and the domain of attribute y is the set (y1,y2,y3). our scheme thus includes null values as a special case. in addition, our scheme captures many more kinds of incomplete information. we introduce the notion of an "alternative". under our definition, the relation u shown below, which contains only intra-relational incomplete information,

u(a b c f)
a1 b1 c1 1
a1 b2 c2 2
a1 b1 c3 2

has the following conventional relations as alternatives:

u1(a b c)     u2(a b c)     u3(a b c)
a1 b1 c1      a1 b1 c1      a1 b1 c1
a1 b2 c2      a1 b2 c2      a1 b1 c3
              a1 b1 c3

also, in our treatment, the following rdb containing inter-relational incomplete information

r(a b f)      s(b c f)
a1 b1 2       b2 c2 3
a2 b2 2       b1 c1 5
a1 b1 3       b1 c1 2
a2 b2 5       b2 c2 2
a3 b3 4       b3 c3 1
a4 b4 4       b4 c4 4
              b4 c5 4

has seven alternatives; here each alternative is a pair (ri,si), i lying between 1 and 7, and each of the alternatives contains only intra-relational incompleteness. for this example, the alternatives are as follows:

r1 = r0 u {(a1,b1,1),(a2,b2,1)},  s1 = s0 u {(b2,c2,1),(b1,c1,1)}
r2 = r1,                          s2 = s0 u {(b2,c2,1)}
r3 = r1,                          s3 = s0 u {(b1,c1,1)}
r4 = r0 u {(a1,b1,1)},            s4 = s1
r5 = r4,                          s5 = s3
r6 = r0 u {(a2,b2,1)},            s6 = s1
r7 = r6,                          s7 = s2

where r0 and s0 are as shown below:

r0(a b f)     s0(b c f)
a3 b3 4       b3 c3 1
a4 b4 4       b4 c4 4
              b4 c5 4

we also introduce the notion of "minimality" of relation instances and, based on the idea of an alternative, we give a specification and realization of an extended join operator; under our treatment, the extended join of r and s above is t, shown below:

t(a b c f)
a1 b1 c1 2
a2 b2 c2 2
a3 b3 c3 4
a4 b4 c4 4
a4 b4 c5 4

in the context of our scheme, we also introduce an extended closed world assumption, weak functional dependencies and inference rules and conditions for the lossless decomposition of a relation containing incomplete information. r. b. abhyankar r. l. kashyap thunderwire: a field study of an audio-only media space debby hindus mark s. ackerman scott mainwaring brian starr hci through creative plagiarization michael j. sinclair manageability, availability and performance in porcupine: a highly scalable, cluster-based mail service (summary only) yasushi saito brian n. bershad henry m. levy virginia tech's center for human-computer interaction john m. carroll a forward error recovery technique for mpeg-ii video transport r. radhakrishna pillai b. prabhakaran qui qiang metu interoperable database system a. dogac c. dengi e. kilic g. ozhan f. ozcan s. nural c. evrendilek u. halici b. arpinar p. koksal n. kesim s. mancuhan the millennium bug and asia pacific tung x. bui layering an open hypermedia system above a distributed communication architecture (abstract) stuart goose jonathan dale on-line reorganization in object databases reorganization of objects in an object database is an important component of several operations like compaction, clustering, and schema evolution.
the high availability requirements (24 × 7 operation) of certain application domains require reorganization to be performed on-line with minimal interference to concurrently executing transactions. in this paper, we address the problem of on-line reorganization in object databases, where a set of objects has to be migrated from one location to another. specifically, we consider the case where objects in the database may contain physical references to other objects. relocating an object in this case involves finding the set of objects (parents) that refer to it, and modifying the references in each parent. we propose an algorithm called the incremental reorganization algorithm (ira) that achieves the above task with minimal interference to concurrently executing transactions. the ira algorithm holds locks on at most two distinct objects at any point of time. we have implemented ira on brahma, a storage manager developed at iit bombay, and conducted an extensive performance study. our experiments reveal that ira makes on-line reorganization feasible, with very little impact on the response times of concurrently executing transactions and on overall system throughput. we also describe how the ira algorithm can handle system failures. mohana k. lakhamraju rajeev rastogi s. seshadri s. sudarshan distributed chinese bibliographic searching m. k. leong l. cao y. lu flexible conflict detection and management in collaborative applications w. keith edwards adaptive agents and personality change: complementarity versus similarity as forms of adaptation youngme moon clifford i. nass knowledge-based support for the user-interface design process kumiyo nakakoji uwe malinowski jonas löwgren where is information visualization technology going? mountaz hascoët-zizi chris ahlberg robert korfhage catherine plaisant matthew chalmers ramana rao specifying analysis patterns for geographic databases on the basis of a conceptual framework jugurta lisboa filho cirano iochpe a usability study of awareness widgets in a shared workspace groupware system carl gutwin mark roseman saul greenberg international usability standards wolfgang dzida policies and roles in collaborative applications w. keith edwards creation and automatic classification of a robot-generated subject index anders ardö traugott koch a formal approach to the evaluation of interactive systems fabio paterno designing effective multimedia presentations peter faraday alistair sutcliffe an evaluation of the augment system for a period of nine months the author used augment, an electronic office support system. this paper is an evaluation of that system. augment is a commercially available, time-shared system marketed by tymshare. in an integrated manner, augment supports functions needed in any electronic office support system. these functions include word processing, electronic mail, computer conferencing and file sharing. it also supports a powerful and unique method of structuring and viewing a file. the author was a member of a group of six people who used the system in their daily activities. many of the major functions of augment were regularly used during that period. the author found the system to be extremely supportive in his daily activities and believes that it allowed him to materially increase his productivity. the functions that were supported are reviewed and the strengths and weaknesses of augment as seen by the author are examined. e. d.
callender equivalence and optimization of relational transactions a large class of relational database update transactions is investigated with respect to equivalence and optimization. the transactions are straight-line programs with inserts, deletes, and modifications using simple selection conditions. several basic results are obtained. it is shown that transaction equivalence can be decided in polynomial time. a number of optimality criteria for transactions are then proposed, as well as two normal forms. polynomial-time algorithms for transaction optimization and normalization are exhibited. also, an intuitively appealing system of axioms for proving transaction equivalence is introduced. finally, a simple, natural subclass of transactions, called strongly acyclic, is shown to have particularly desirable properties. serge abiteboul victor vianu how good is that data in the warehouse? a data warehouse is an analytical database used for decision support. data are copied from production databases, cleaned up, and possibly renormalized (i.e., denormalized for performance or normalized to create correct record structures). if the resulting records are normalized incorrectly or if the users do not understand how the records have been denormalized, then a phenomenon called semantic disintegrity may occur. semantic disintegrity occurs when a user submits a query and receives an answer, but the answer is not the answer to the question they believe that they asked. thus an understanding of normalization is critically important for both database designers and database users. unfortunately, the process of normalization relies on a series of heuristics that, in turn, assume the existence of an innate mental logic in the mind of the database designer or user for understanding data dependencies and their implications. the quality of answers derived from the data warehouse, in turn, relies on the existence of this innate mental logic. in order to determine if this is a reasonable assumption, an empirical test was constructed for the purpose of determining if subjects have an innate mental logic for understanding data dependencies and their implications. john m. artz improving automatic query expansion mandar mitra amit singhal chris buckley supporting general queries in an object management system xuequn wu integrating object-oriented data modelling with a rule-based programming paradigm logres is a new project for the development of extended database systems which is based on the integration of the object-oriented data modelling paradigm and of the rule-based approach for the specification of queries and updates. the data model supports generalization hierarchies and object sharing, the rule-based language extends datalog to support generalized type constructors (sets, multisets, and sequences), and rule-based integrity constraints are automatically produced by analyzing schema definitions. modularization is a fundamental feature, as modules encapsulate queries and updates; when modules are applied to a logres database, their side effects can be controlled. the logres project is a follow-up of the algres project, and takes advantage of the algres programming environment for the development of a fast prototype. f. cacace s. ceri s. crespi-reghizzi l. tanca r.
zicari the scent of a site: a system for analyzing and predicting information scent, usage, and usability of a web site designers and researchers of users' interactions with the world wide web need tools that permit the rapid exploration of hypotheses about complex interactions of user goals, user behaviors, and web site designs. we present an architecture and system for the analysis and prediction of user behavior and web site usability. the system integrates research on human information foraging theory, a reference model of information visualization, and web data-mining techniques. the system also incorporates new methods of web site visualization (dome tree, usage based layouts), a new predictive modeling technique for web site use (web user flow by information scent, wufis), and new web usability metrics. ed h. chi peter pirolli james pitkow a tool for content based navigation of music steven blackburn david deroure 3d visualization technique for retrieval of documents and images haruo kimoto developing dual user interfaces for integrating blind and sighted users: the homer uims anthony savidis constantine stephanidis collective phenomena in hypertext networks valery m. chelnokov victoria l. zephyrova assessing the value of information n. ahituv from latent semantics to spatial hypertext - an integrated approach chaomei chen mary czerwinski implementing ranking strategies using text signatures signature files provide an efficient access method for text in documents, but retrieval is usually limited to finding documents that contain a specified boolean pattern of words. effective retrieval requires that documents with similar meanings be found through a process of plausible inference. the simplest way of implementing this retrieval process is to rank documents in order of their probability of relevance. in this paper techniques are described for implementing probabilistic ranking strategies with sequential and bit-sliced signature files and the limitations of these implementations with regard to their effectiveness are pointed out. a detailed comparison is made between signature-based ranking techniques and ranking using term-based document representatives and inverted files. the comparison shows that term-based representations are at least competitive (in terms of efficiency) with signature files and, in some situations, superior. w. bruce croft pasquale savino accessing information from globally distributed knowledge repositories (extended abstract) alfred v. aho transaction management for object-oriented systems recently there has been a lot of interest in object-oriented databases, which appears to be driven by application demands in areas such as cad-cam and software development environments. a number of projects are developing data models for object-oriented databases, but, while data models are obviously necessary, less attention has been paid to the more difficult problems involved in supporting such models reliably. my research aims to extend database transaction processing concepts to the object-oriented world. there is general agreement that traditional transaction processing will not be effective. i am investigating two ideas for extension: using the semantics of the operations being performed (as opposed to read/write semantics), and taking advantage of multiple layers of abstraction in the application. both ideas lead me towards a per-abstraction transaction management design, which is natural for object-oriented programming languages.
i hope that inheritance can be used to prevent applications programmers from having to construct transaction management code for each abstraction. my goal is to build a distributed object-oriented system that is resilient to failures, appropriately controls concurrency, and provides the illusion of a single global object space to all its users, roughly in the sense of a smalltalk or lisp workspace (i.e., small objects, not large objects). this is in contrast to, for example, building an object-oriented database server, or just a reliable centralized system. in working towards such a system, i will first define what the ideal behavior should be. this is not so obvious as one might think, since multiple user workspaces have not been previously developed, to my knowledge. once i have a conceptual model, i will deal with the performance problems. once begun, conceptual design and performance improvement will be an iterative process. i believe that i will learn more about the real problems by taking this approach, rather than by starting with a server or centralized system design. however, the conceptual model will tend to downplay distribution per se, concentrating on the concurrency problems posed by multiple users in the same logical address space. while this is an ambitious line of research, it should produce a number of valuable and interesting results even if my initial goals are not met. j. elliot b. moss odmg-93: a standard for object-oriented dbmss r. g. g. cattell a horizontal fragmentation algorithm for the fact relation in a distributed data warehouse data warehousing is one of the major research topics of applied-side database investigators. most of the work to date has focused on building large centralized systems that are integrated repositories founded on pre-existing systems upon which all corporate-wide data are based. unfortunately, this approach is very expensive and tends to ignore the advantages realized during the past decade in the area of distribution and support for data localization in a geographically dispersed corporate structure. this research investigates building distributed data warehouses with particular emphasis placed on distribution design for the data warehouse environment. the article provides an architectural model for a distributed data warehouse, the formal definition of the relational data model for data warehouse and a methodology for distributed data warehouse design along with a "horizontal" fragmentation algorithm for the fact relation. amin y. noaman ken barker user involvement in the design process: why, when & how? c. dennis allen don ballman vivienne begg harold h. miller-jacobs jakob nielsen jared spool michael muller auto-adaptative multimedia composition (abstract) michel crampen second summit on international cooperation in digital libraries robert m. akscyn ian h. witten dbis-toolkit: adaptable middleware for large scale data delivery mehmet altinel demet aksoy thomas baby michael franklin william shapiro stan zdonik whither (or wither) uims? the subject of user interface management systems (uims) has been a topic of research and debate for the last several years. the goal of such systems has been to automate the production of user interface software. the problem of building quality user interfaces within available resources is a very important one as the demand for new interactive programs grows. prototype uimss have been built and some software packages are presently being marketed as such. many papers have been published on the topic.
there still, however, remain a number of unanswered questions. is a uims an effective tool for building high quality user interfaces or is the run-time cost of abstracting out the user interface too high? why are there not more uimss available and why are they not more frequently used? is simple programmer productivity alone sufficient motivation for learning and adopting yet another programming tool? what is the difference, if any, between a "user interface toolbox", a windowing system and a uims? what are the differences between a uims and the screen and editor generators found in fourth generation languages? in fact, exactly what is a uims? in order to discuss these questions and to reassess the state of the uims art, siggraph sponsored a workshop on these issues (proceedings will be published in computer graphics in 1987). the panelists represent four subgroups who have each addressed these problems from different points of view. dan r. olsen mark green keith a. lantz andrew schulert john l. sibert schemasql: an extension to sql for multidatabase interoperability we provide a principled extension of sql, called _schemasql_, that offers the capability of uniform manipulation of data and schema in relational multidatabase systems. we develop a precise syntax and semantics of _schemasql_ in a manner that extends traditional sql syntax and semantics, and demonstrate the following. (1) _schemasql_ retains the flavor of sql while supporting querying of both data and schema. (2) it can be used to transform data in a database in a structure substantially different from original database, in which data and schema may be interchanged. (3) it also permits the creation of views whose schema is dynamically dependent on the contents of the input instance. (4) while aggregation in sql is restricted to values occurring in one column at a time, _schemasql_ permits "horizontal" aggregation and even aggregation over more general "blocks" of information. (5) _schemasql_ provides a useful facility for interoperability and data/schema manipulation in relational multidatabase systems. we provide many examples to illustrate our claims. we clearly spell out the formal semantics of _schemasql_ that accounts for all these features. we describe an architecture for the implementation of _schemasql_ and develop implementation algorithms based on available database technology that allows for powerful integration of sql based relational dbms. we also discuss the applicability of _schemasql_ for handling semantic heterogeneity arising in a multidatabase system. laks v. s. lakshmanan fereidoon sadri subbu n. subramanian nonparametric methods for automatic classification of documents and transactions (abstract) the question of how to classify documents is a central problem in document retrieval. the classification problem can be stated as follows. there exists a large document collection, each of which contains a set of terms. how should the documents be clustered to allow the selection of index terms so that the collection can be searched to the maximal collective benefit of the retrieval system customers? traditionally, transaction functionalities are manually scheduled into deferred and immediate queues for processing without any special consideration given to the interwoven functionalities invoked by the different user groups. the question of how to classify transactions for the concurrency controller in a distributed system is a major problem in transaction scheduling. 
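the classification problem posed above amounts to grouping documents (or transaction functionalities) whose term profiles are similar; the sketch below does this with cosine similarity over term-frequency vectors and a single-pass threshold rule, which is a generic illustration and not the nonparametric procedure of the paper.

    # group documents whose term-frequency vectors have cosine similarity above a threshold.
    import math
    from collections import Counter

    def cosine(a, b):
        dot = sum(a[t] * b.get(t, 0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def threshold_cluster(docs, threshold=0.3):
        vectors = [Counter(d.lower().split()) for d in docs]
        clusters = []                      # each cluster: list of document indices
        for i, v in enumerate(vectors):
            for cluster in clusters:
                # join the first cluster whose representative is similar enough
                if cosine(v, vectors[cluster[0]]) >= threshold:
                    cluster.append(i)
                    break
            else:
                clusters.append([i])
        return clusters

    if __name__ == "__main__":
        docs = ["signature files for text retrieval",
                "text retrieval with inverted files",
                "concurrency control for distributed transactions",
                "concurrency control in distributed databases"]
        print(threshold_cluster(docs))     # -> [[0, 1], [2, 3]]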
the problem here is, how should transaction functionalities be scheduled for processing to satisfy the requirements of the different user groups? that is, how should transaction functionalities be organized on disk to minimize disk access time, in the hope of fulfilling the requirements of individual user groups? this paper presents nonparametric algorithms and heuristic for automatic classification of documents according to the similarity in their keywords; the words likely to be useful as index terms for document set. the normal approximation to the binomial distribution was explored as an index for automatic classification of documents and transactions. a nonparametric measure of association consistent with the cramer statistic was used in the examination of similarities among documents. a nonparametric analysis of variance procedure was developed for comparing the profiles of term frequencies between documents or transaction functionalities invoked between users. the usefulness of the heuristic in the automatic classification of user groups according to the transaction functionalities that they invoke in a distributed system is discussed. amos o. olagunju a minimalist approach to the development of a word processor supporting group writing activities nicholas malcolm brian r. gaines automatic acquisition of terminological relations from a corpus for query expansion jean-david sta graphical user interfaces and ease of use: some myths examined graphical user interfaces (guis) are on their way to becoming the most pervasive interface for desktop systems at least partly because of conventional wisdom about their ease of use. such an assumption may have been kindled by vendors' claims about the inherent usability of such interfaces although previous research on the productivity gains from guis has yielded mixed results. this paper reports the results of a field study of 230 users of a popular gui, microsoft corporation's windows. the study examined windows' ease of use---an important factor contributing to eventual productivity. the results indicate that contrary to popular belief guis are not universally easy to use---certain types of individuals are likely to find them easier to use than others. organizational roles and management initiatives can also influence perceptions of ease of use. the findings also suggest that ease of use is enhanced through opportunities for self training rather than traditional, formal training. michael c. zanino ritu agarwal jayesh prasad querying structured web resources ee-peng lim cheng-hai tan boon-wan lim wee- keong ng developing a high traffic, read-only web site in this paper, we describe some of the considerations for designing highly trafficked web sites with read-only or read mostly characteristics. john nauman ray suorsa experiments with query acquisition and use in document retrieval systems in some recent experimental document retrieval systems, emphasis has been placed on the acquisition of a detailed model of the information need through interaction with the user. it has been argued that these "enhanced" queries, in combination with relevance feedback, will improve retrieval performance. in this paper, we describe a study with the aim of evaluating how easily enhanced queries can be acquired from users and how effectively this additional knowledge can be used in retrieval. 
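as background for the query-acquisition study above, one textbook way to fold acquired concepts and relevance feedback into a query is rocchio-style expansion, sketched below; the weights and the tiny corpus are illustrative assumptions, and this is not the mechanism evaluated in the paper.

    # rocchio-style query expansion: move the query weights toward relevant documents
    # and away from non-relevant ones, then keep the highest-weighted terms.
    from collections import Counter

    def rocchio(query_terms, relevant_docs, nonrelevant_docs,
                alpha=1.0, beta=0.75, gamma=0.15, keep=8):
        weights = Counter({t: alpha for t in query_terms})
        for doc in relevant_docs:
            for t, f in Counter(doc.lower().split()).items():
                weights[t] += beta * f / max(1, len(relevant_docs))
        for doc in nonrelevant_docs:
            for t, f in Counter(doc.lower().split()).items():
                weights[t] -= gamma * f / max(1, len(nonrelevant_docs))
        return [t for t, w in weights.most_common() if w > 0][:keep]

    if __name__ == "__main__":
        q = ["query", "expansion"]
        rel = ["relevance feedback improves query expansion",
               "thesaurus based expansion of user queries"]
        nonrel = ["expansion joints in bridge construction"]
        print(rocchio(q, rel, nonrel))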
the results indicate that significant effectiveness benefits can be obtained through the acquisition of domain concepts related to query concepts, together with their level of importance to the information need. w. b. croft r. das an exploration into supporting artwork orientation in the user interface george w. fitzmaurice ravin balakrishnan gordon kurtenbach bill buxton querying object-oriented databases michael kifer won kim yehoshua sagiv scatter/gather browsing communicates the topic structure of a very large text collection peter pirolli patricia schank marti hearst christine diehl the representation of agents: anthropomorphism, agency, and intelligence william joseph king jun ohya achieving robustness in distributed database systems the problem of concurrency control in distributed database systems in which site and communication link failures may occur is considered. the possible range of failures is not restricted; in particular, failures may induce an arbitrary network partitioning. it is desirable to attain a high "level of robustness" in such a system; that is, these failures should have only a small impact on system operation. a level of robustness termed maximal partial operability is identified. under our models of concurrency control and robustness, this robustness level is the highest level attainable without significantly degrading performance. a basis for the implementation of maximal partial operability is presented. to illustrate its use, it is applied to a distributed locking concurrency control method and to a method that utilizes timestamps. when no failures are present, the robustness modifications for these methods induce no significant additional overhead. derek l. eager kenneth c. sevcik software development snapshots: a preliminary investigation laura marie leventhal recent applications of ibm's query by image content (qbic) d. petkovic w. niblack m. flickner d. steele d. lee j. yin j. hafner f. tung h. treat r. dow m. gee m. vo p. vo b. holt j. hethorn k. weiss p. elliott c. bird fragmented interaction: establishing mutual orientation in virtual environments jon hindmarsh mike fraser christian heath steve benford chris greenhalgh backend database systems fred j. maryanski data mining techniques to improve forecast accuracy in airline business predictive models developed by applying data mining techniques are used to improve forecasting accuracy in the airline business. in order to maximize the revenue on a flight, the number of seats available for sale is typically higher than the physical seat capacity (overbooking). to optimize the overbooking rate, an accurate estimation of the number of no-show passengers (passengers who hold a valid booking but do not appear at the gate to board for the flight) is essential. currently, no-shows on future flights are estimated from the number of no-shows on historical flights averaged on booking class level. in this work, classification trees and logistic regression models are applied to estimate the probability that an individual passenger turns out to be a no-show. passenger information stored in the reservation system of the airline is either directly used as explanatory variable or used to create attributes that have an impact on the probability of a passenger to be a no-show. the total number of no-shows in each booking class or on the total flight is then obtained by accumulating the individual no-show probabilities over the entity of interest. 
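the accumulation step just described can be written down directly; in the sketch below a hand-wired logistic scoring function stands in for the fitted classification tree or regression model, so the feature names and coefficients are purely illustrative.

    # expected number of no-shows on a flight = sum of per-passenger no-show probabilities.
    import math

    def no_show_probability(passenger, coef, intercept=-2.0):
        # toy logistic model over a few booking attributes; coefficients are made up
        z = intercept + sum(coef[k] * passenger.get(k, 0.0) for k in coef)
        return 1.0 / (1.0 + math.exp(-z))

    def expected_no_shows(passengers, coef):
        return sum(no_show_probability(p, coef) for p in passengers)

    if __name__ == "__main__":
        coef = {"days_booked_ahead": 0.02, "is_flexible_fare": 1.2, "has_checked_in_online": -1.5}
        flight = [
            {"days_booked_ahead": 30, "is_flexible_fare": 1, "has_checked_in_online": 0},
            {"days_booked_ahead": 2,  "is_flexible_fare": 0, "has_checked_in_online": 1},
            {"days_booked_ahead": 60, "is_flexible_fare": 1, "has_checked_in_online": 0},
        ]
        print(round(expected_no_shows(flight, coef), 2))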
we show that this forecasting approach is more accurate than the currently used method. in addition, the selected models lead to a deepened insight into passenger behavior. christoph hueglin francesco vannotti entropy-based subspace clustering for mining numerical data chun-hung cheng ada waichee fu yi zhang increasing the resilience of atomic commit, at no additional cost idit keidar danny dolev hypermedia-based structured modeling (abstract) yongming tang hua hua data visualization sliders computer sliders are a generic user input mechanism for specifying a numeric value from a range. for data visualization, the effectiveness of sliders may be increased by using the space inside the slider as • an interactive color scale, • a barplot for discrete data, and • a density plot for continuous data. the idea is to show the selected values in relation to the data and its distribution. furthermore, the selection mechanism may be generalized using a painting metaphor to specify arbitrary, disconnected intervals while maintaining an intuitive user-interface. stephen g. eick timestamp based certification schemes for transactions in distributed database systems mukul k. sinha p. d. nandikar s. l. mehndiratta the smooth video db - demonstration of an integrated generic indexing approach the smooth video db is a distributed system proposing an integral query, browsing, and annotation software framework in common with an index database for video media material. alexander bachlechner lazlo boszormenyi bernhard doflinger christian hofbauer harald kosch carmen riedler roland tusch capability based mediation in tsimmis chen li ramana yerneni vasilis vassalos hector garcia-molina yannis papakonstantinou jeffrey ullman murty valiveti interactivedesk: a computer-augmented desk which responds to operations on real objects toshifumi arai kimiyoshi machii soshiro kuzunuki hiroshi shojima a digital video library on the world wide web: wayne wolf yiqing liang michael kozuch heathery yu michael phillips marcel weekes andrew debruyne a comparison of face-to-face and distributed presentations ellen a. isaacs trevor morris thomas k. rodriguez john c. tang intelligent help in a one-shot dialog: a protocol study a database of 150 interactions conducted via electronic mail was analyzed. the database had been constructed as an on-line tool for users and advisors, but the interactions can also be regarded as modelling intelligent help dialog in which posing a query and providing a response are each accomplished in "one-shot". the types of questions users ask and the advisory strategies employed for incomplete queries without follow-up questioning are described. the goal is to understand this new on-line tool for advising and its implications as a model of one-shot intelligent help dialogs. amy aaronson john m. carroll supporting cooperative and personal surfing with a desktop assistant hannes marais krishna bharat initiatives that center on scientific dissemination marcos andre goncalves claudia bauzer medeiros supporting situated actions in high volume conversational data situations: christopher lueg d-lib working group on metrics for digital libraries barry m. leiner preservation of local sound periodicity with variable-rate video a method of allowing pitch preservation of sound with variable-rate video playback is suggested. this is an important factor in monitoring of audio content for cueing purposes.
methods of separately considering a signal's frequency and time representations are considered with a view to performing time-scale modification with preservation of local periodicity (pitch). particular emphasis is placed upon granulation in time of a sampled source --- a technique based upon dennis gabor's landmark papers in 1946 and 1947 and developed in the field of computer music. this relatively simple method requires no prior signal analysis and is therefore a less computationally expensive method of achieving the goals stated above. this is an important point considering the need for real-time implementation. the process does however introduce some distortion, and investigation into how this may be minimised is necessary to produce acceptable results. d. knox t. itagaki i. stewart a. nesbitt i. j. kemp agents and creativity margaret a. boden adaptive commitment for distributed real-time transactions distributed real-time transaction systems are useful for both real- time and high-performance database applications. standard transaction management approaches that use the two-phase commit protocol suffer from its high costs and blocking behavior which is problematic in real-time computing environments. our approach in this paper is to identify ways in which a commit protocol can be made adaptive in the sense that under situations that demand it, such as a transient local overload, the system can dynamically change to a different commitment strategy. the decision to do so can be taken autonomously at any site. the different commitment strategies exploit a trade-off between the cost of commitment and the obtained degree of atomicity. our protocols are based on optimistic commitment strategies, and they rely on local compensatory actions to recover from non-atomic executions. we provide the necessary framework to study the logical and temporal correctness criteria, and we describe examples to illustrate the use of our strategies. nandit soparkar eliezer levy henry f. korth avi silberschatz graphical editing by example david kurlander human-computer interaction research at massey university, new zealand mark apperley chris phillips from undo to multi-user applications: the demo the object-oriented history mechanism of the gina application framework and its relevance for multi-user applications are demonstrated. the interaction history of a document is represented as a tree of command objects. synchronous cooperation is supported by replicating the document state and exchanging command objects. asynchronous cooperation leads to different branches of the history tree which can later be merged. michael spenke hypertext and hypermedia related issues at chi '94 laurence nigay the concurrency control problem in multidatabases: characteristics and solutions a multidatabase system (mdbs) is a collection of local database management systems, each of which may follow a different concurrency control protocol. this heterogeneity makes the task of ensuring global serializability in an mdbs environment difficult. in this paper, we reduce the problem of ensuring global serializability to the problem of ensuring serializability in a centralized database system. we identify characteristics of the concurrency control problem in an mdbs environment, and additional requirements on concurrency control schemes for ensuring global serializability. we then develop a range of concurrency control schemes that ensure global serializability in an mdbs environment, and at the same time meet the requirements. 
finally, we study the tradeoffs between the complexities of the various schemes and the degree of concurrency provided by each of them. sharad mehrotra rajeev rastogi yuri breitbart henry f. korth avi silberschatz hdm - a model for the design of hypertext applications franca garzotto paolo paolini daniel schwabe collaborative filtering and the generalized vector space model (poster session) collaborative filtering is a technique for recommending documents to users based on how similar their tastes are to other users. if two users tend to agree on what they like, the system will recommend the same documents to them. the generalized vector space model of information retrieval represents a document by a vector of its similarities to all other documents. the process of collaborative filtering is nearly identical to the process of retrieval using gvsm in a matrix of user ratings. using this observation, a model for filtering collaboratively using document content is possible. ian soboroff charles nicholas timewarp: techniques for autonomous collaboration w. keith edwards elizabeth d. mynatt partial replica selection based on relevance for information retrieval zhihong lu kathryn s. mckinley the microcosm link service (abstract): an integrating technology wendy hall hugh davis adrian pickering gerard hutchings book preview marisa campbell olympic records for data at the 1998 nagano games the 1998 nagano olympic games had more intensive demands on data management than any previous olympics in history. this talk will take you behind the scenes to talk about the technical challenges and the architectures that made it possible to handle 4.5 terabytes of data and sustain a total of almost 650 million web requests, reaching a peak of over 103k per minute. we will discuss the overall structure of the most comprehensive and heavily used internet technology application in history. many products were involved, both hardware and software, but this talk will focus in on the database and web challenges, the technology that made it possible to support this tremendous workload. high availability, data integrity, high performance, support of both smps and clustered architectures were among the features and functions that were critical. we will cover the olympic results system, the commentator information system, info '98, games management, and the olympic web site that made this information available to the internet community. the speaker will be ed lassettre, ibm fellow, and a key member of ibm's olympic team. edwin r. lassettre performance enhancement through replication in an object-oriented dbms in this paper we describe how replicated data can be used to speedup query processing in an object-oriented database system. the general idea is to use replicated data to eliminate some of the functional joins that would otherwise be required for query processing. we refer to our technique for replicating data as field replication because it allows individual data fields to be selectively replicated. in the paper we describe how field replication can be specified at the data model level and we present storage-level mechanisms to efficiently support it. we also present an analytical cost model to give some feel for how beneficial field replication can be and the circumstances under which it breaks down. while field replication is a relatively simple notion, the analysis shows that it can provide significant performance gains in many situations. eugene j. shekita michael j. 
carey complete geometrical query languages (extended abstract) marc gyssens jan van den bussche dirk van gucht declarative updates of relational databases this article presents a declarative language, called update calculus, of relational database updates. a formula in update calculus involves conditions for the current database, as well as assertions about a new database. logical connectives and quantifiers become constructors of complex updates, offering flexible specifications of database transformations. update calculus can express all nondeterministic database transformations that are polynomial time. for set-at-a-time evaluation of updates, we present a corresponding update algebra. existing techniques of query processing can be incorporated into update evaluation. we show that updates in update calculus can be translated into expressions in update algebra and vice versa. weidong chen from browsing to interacting: dbms support for responsive websites internet websites increasingly rely on database management systems. there are several reasons for this trend: * as sites grow larger, managing the content becomes impossible without the use of a dbms to keep track of the nature, origin, authorship, and modification history of each article. * as sites become more interactive, tracking and logging user activity and user contributions creates valuable new data, which again is best managed using a dbms. the emerging paradigm of _customer-centric e-business_ places a premium on engaging users, building a relationship with them across visits, and leveraging their expertise and feedback. supporting this paradigm means that we not only have to track what users visit on a site, we also have to enable them to offer opinions and contribute to the content of the website in various ways; naturally, this requires us to use a dbms. * in order to personalize a user's experience, a site must dynamically construct (or at least fine-tune) each page as it is delivered, taking into account information about the user's past activity and the nature of the content on the current page. in other words, personalization is made possible by utilizing the information (about content and user activity) that we already indicated is best managed using a dbms. in summary, as websites go beyond a passive collection of pages to be browsed and seek to present users with a personalized, interactive experience, the role of database management systems becomes central. in this talk, i will present an overview of these issues, including a discussion of related techniques such as _cookies_ and _web server logs_ for tracking user activity. raghu ramakrishnan a survey on usage of sql relational database systems have been on the market for more than a decade. sql has been accepted as the standard query language of relational systems. to further understand the usage of relational systems and the relational query language sql, we conducted a survey recently that covers various aspects of the usage of sql in industrial organizations. in this paper, we present those results that may interest dbms researchers and developers, including the profiles of sql users, the application areas where sql is used, the usage of different features of sql, and difficulties encountered by sql users. hongjun lu hock chuan chan kwok kee wei communicative facial displays as a new conversational modality akikazu takeuchi katashi nagao rapid controlled movement through virtual 3d workspaces (videotape) jock d. mackinlay george g. robertson stuart k.
card braque (abstract): an interface to support browsing and interactive query formulation in information retrieval systems p. g. marchetti s. vazzana r. panero n. j. belkin integrating dynamically-fetched external information into a dbms for semistructured data we describe the external data manager component of the lore database system for semistructured data. lore's external data manager enables dynamic retrieval and integration of data from arbitrary, heterogeneous external sources during query processing. the distinction between lore-resident and external data is invisible to the user. we introduce a flexible notion of arguments that limits the amount of data fetched from an external source, and we have incorporated optimizations to reduce the number of calls to an external source. jason mchugh jennifer widom invited talk it2: an information technology initiative for the twenty-first century - nsf plans for implementation ruzena bajesy spatial index based on multiple interval segments xiao weiqi feng yucai an interface for navigating clustered document sets returned by queries robert b. allen pascal obry michael littman the indiana center for database systems judith copler database prototyping and implementation (abstract only) in previous work (beuerman, proc. csc '85 and proc. csc '86), we have used lisp to express relational data base operations. this work is, perhaps, best viewed as prototyping. a number of different systems of relational data base operations have been implemented in pascal. these pascal implementations are based on their lisp prototypes. these various pascal systems all provide for such standard database operations as deletion, insertion, selection and updating. other operations such as set-theoretic and i/o are also available. because of pascal's strong typing (cf. lisp) an early version required all attribute values to be integers. later versions allow a full range of possibilities. to provide for a user-friendly interface, menu-driven variants have been developed. these are evolving in the direction of natural language processing. this work was begun at albright college, reading, pa. d. r. beuerman working with audio: integrating personal tape recorders and desktop computers audio data is rarely used on desktop computers today, although audio is otherwise widely used for communication tasks. this paper describes early work aimed at creating computer tools that support the ways users may want to work with audio data. user needs for the system were determined by interviewing people already working with audio data, using existing devices such as portable tape recorders. a preliminary prototype system -- consisting of a personal tape recorder for recording and simultaneously marking audio and a macintosh application for browsing these recordings -- was built. informal field user tests of this prototype system have indicated areas for improvement and directions for future work. leo degen richard mander gitta salomon closest pair queries in spatial databases this paper addresses the problem of finding the k closest pairs between two spatial data sets, where each set is stored in a structure belonging to the r-tree family. five different algorithms (four recursive and one iterative) are presented for solving this problem. the case of 1 closest pair is treated as a special case. an extensive study, based on experiments performed with synthetic as well as with real point data sets, is presented.
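the closest-pair abstract just above defines the k-closest-pairs problem between two point sets stored in r-trees. the sketch below only restates the problem with a plain nested loop and a heap, computing the same result that the paper's r-tree algorithms obtain far more efficiently through pruning; the function name and the use of euclidean distance on 2d tuples are illustrative assumptions.

```python
import heapq
import math

def k_closest_pairs(points_a, points_b, k):
    """Return the k closest (distance, a, b) pairs between two 2D point sets.
    Brute force over the cross product; fine for small inputs only."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    candidates = ((dist(a, b), a, b) for a in points_a for b in points_b)
    return heapq.nsmallest(k, candidates)

# example: k_closest_pairs([(0, 0), (2, 1)], [(1, 1), (5, 5)], k=2)
# -> [(1.0, (2, 1), (1, 1)), (1.414..., (0, 0), (1, 1))]
```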
the study explores a wide range of values for the basic parameters affecting the performance of these algorithms, especially the effect of overlap between the two data sets. moreover, an algorithmic as well as an experimental comparison with existing incremental algorithms addressing the same problem is presented. in most settings, the new algorithms proposed clearly outperform the existing ones. antonio corral yannis manolopoulos yannis theodoridis michael vassilakopoulos the dynamics of mass interaction steve whittaker loren terveen will hill lynn cherny interface issues and interaction strategies for information retrieval systems scott henninger nick belkin design: no here, now where? bill hill austin henderson kate ehrlich automatic link generation ross wilkinson alan f. smeaton on the membership problem for functional and multivalued dependencies in relational databases the problem of whether a given dependency in a database relation can be derived from a given set of dependencies is investigated. we show that the problem can be decided in polynomial time when the given set consists of either multivalued dependencies only or of both functional and multivalued dependencies and the given dependency is also either a functional or a multivalued dependency. these results hold when the derivations are restricted not to use the complementation rule. catriel beeri toward an open shared workspace: computer and video fusion approach of teamworkstation hiroshi ishii advcharts: a visual formalism for interactive systems l. m. f. carneiro d. d. cowan c. j. p. lucena mocha: a self-extensible database middleware system for distributed data sources we present mocha, a new self-extensible database middleware system designed to interconnect distributed data sources. mocha is designed to scale to large environments and is based on the idea that some of the user-defined functionality in the system should be deployed by the middleware system itself. this is realized by shipping java code implementing either advanced data types or tailored query operators to remote data sources and having it executed remotely. optimized query plans push the evaluation of powerful _data-reducing_ operators to the data source sites while executing _data-inflating_ operators near the client's site. the volume reduction factor is a new and more explicit metric introduced in this paper to select the best site to execute query operators and is shown to be more accurate than the standard selectivity factor alone. mocha has been implemented in java and runs on top of informix and oracle. we present the architecture of mocha, the ideas behind it, and a performance study using scientific data and queries. the results of this study demonstrate that mocha provides a more flexible, scalable and efficient framework for distributed query processing compared to those in existing middleware solutions. manuel rodríguez-martinez nick roussopoulos designing groupware for congruency in use wolfgang prinz gloria mark uta pankoke-babatz enabling distributed collaborative science tom hudson diane sonnenwald kelly maglaughlin mary whitton ronald bergquist cognitive approach for building user model in an information retrieval context (poster session) the recent development of communication networks and multimedia systems provides users with a huge amount of information, worsening the problem of information overload [9]. system design must therefore evolve, becoming more user-centred and more personally involving.
a review of survey studies on internet users since 1993 confirms that a greater percentage of people are becoming online citizens, and professionals are integrating more online components into their work process. a review of the experimental literature on internet users reveals that there is intense interest in humanising the online environment by integrating affective and cognitive components [8]. we are especially concerned with the effects of this evolution on information retrieval. we can notice significant changes in the information retrieval world over the past five or so years due to the emergence of the internet and one of its most important and widely used services, the world wide web (www) or simply the web. while reviewing the progress of research in information retrieval and user modelling, we can observe that many systems and prototypes have been created [5], [10], [11], but all of them share some basic limitations: the techniques used to represent knowledge in the user model are based on simple lists of keywords; the type of knowledge considered is very limited, usually restricted to single words or to (some) structural characteristics; and the learning capabilities are very poor. we aim to propose a cognitive approach for building a user model in an information retrieval context. a cognitive approach is based on identifying how users process information and what constitutes an appropriate model to represent this process, and ir under the cognitive paradigm takes the user into account in a high-priority way [1]. however, within the cognitive paradigm, there is no general model valid for our documentary approach that describes satisfactorily how user knowledge is represented for the purpose of processing information. the lack of such a model does not allow one to identify a user's cognitive state with regard to his or her information needs and requirements. methodologies adopting the cognitive viewpoint in ir are synthesised by daniel [4] into three groups, which comprise the representation of: * users and their problems, which stems from the hypothesis proposed by belkin on the `anomalous states of knowledge' (ask), according to which the user searches for information * search strategies, which compile the different ways search strategies and processes are carried out, depending on the variables involved - user, intermediary, ir systems [6],[7] * documents and information, which is considered a major goal of current ir research, since it embraces the whole corpus of studies about user models intended to eliminate the intermediary's role in the retrieval system. the aim of this approach is to allow users direct access to the system by means of the representation of documents and intelligent interfaces. the user-centred paradigm now dominates studies of information needs and information retrieval. our goal is to develop new approaches to information retrieval which are based on user modelling techniques for building and managing the representation of the user's preferences. in this paper, we describe two complementary approaches which are necessary for building a user model and integrating it into an information retrieval system: * a conceptual approach based on the description of the knowledge needed by the user in an information retrieval context, * a functional approach which deals with the dynamic aspects of the model. within this approach we aim to determine the role played by the model in an information retrieval context.
in many studies of ir interested in user modelling we find different kinds of knowledge trying to describe the user's needs. our conceptual approach has therefore consisted in enumerating these kinds of knowledge and integrating them into the appropriate components of an information retrieval architecture. almost all of these studies that identified cognitive characteristics have used quantitative methods to measure them. what is needed is a qualitative study and an appropriate method to ascertain these cognitive characteristics [12]. our main objective is the development of techniques for modelling the user as an interactive part of ir, so we propose our functional approach, which deals with identifying cognitive characteristics within the role played by the user model in an information retrieval architecture. we therefore begin by presenting the conceptual approach and then the functional one. amina sayeb belhassen nabil ben abdallah henda hadjami ben ghezala the design of an interactive online help desk in the alexandria digital library in large software systems such as digital libraries, electronic commerce applications, and customer support systems, the user interface and system are often complex and difficult to navigate. it is necessary to provide users with interactive online support to help them learn how to use these applications effectively. such online help facilities can include providing tutorials and animated demonstrations, synchronized activities between users and system support staff for real time instruction and guidance, multimedia communication with support staff such as chat, voice, and shared whiteboards, and tools for quick identification of user problems. in this paper, we investigate how such interactive online help support can be developed and provided in the context of a working system, the alexandria digital library (adl) for geospatially-referenced data. we developed an online help system, alexhelp!. alexhelp! supports collaborative sessions between the user and the librarian (support staff) that include activities such as map browsing and region selection, recorded demonstration sessions for the user, primitive tools for analyzing user sessions, and channels for voice and text based communications. the design of alexhelp! is based on user activity logs, and the system is a light-weight software component that can be easily integrated into the adl user interface client. a prototype of alexhelp! is developed and integrated into the adl client; both the adl client and alexhelp! are written in java. robert prince jianwen su hong tang yonggang zhao gss for presentation support robert m. davison robert o. briggs data modelling in the large martin bertram inherent complexity of recursive queries stavros cosmadakis aqui: dynamic links on the web (abstract) michael dockter joel farber semi-determinism (extended abstract) we investigate under which conditions a non-deterministic query is semi-deterministic, meaning that two different results of the query to a database are isomorphic. we also consider uniform semi-determinism, meaning that all intermediate results of the computation are isomorphic. semi-determinism is a concept bridging the new trends of non-determinism and object generation in database query languages. our results concern decidability, both at compile time and at run time; expressibility of the infamous counting queries; and completeness, which is related to the issue of copy elimination raised by abiteboul and kannelakis.
jan van den bussche dirk van gucht groupweb (video program) (abstract only): a groupware web browser groupweb is a prototype browser that allows group members to visually share and navigate world wide web pages in real time. its groupware features include document and view slaving for synchronizing information sharing, telepointers for enacting gestures, and relaxed "what you see is what i see" views to handle display differences. a groupware text editor lets groups create and attach annotations to pages. an immediate application of groupweb is as a presentation tool for real time distance education and conferencing. the video illustrates groupweb and all its features. saul greenberg mark roseman relative knowledge in a distributed database let db be a database and let u1, ..., um be a collection of users, each having at his or her disposal a query sublanguage lui generated by some view predicate. each of these users knows only as much as he or she can learn from the database using his or her query sublanguage. such knowledge is called relative knowledge in the paper, and its various properties, including the model and proof theory, are investigated. the applications of relative knowledge in database security and integrity are also discussed. t. imielinski designing a language for interactions terry winograd estimating the relative usability of two interfaces: heuristic, formal, and empirical methods compared jakob nielsen victoria l. phillips elicitations during information retrieval: implications for ir system design amanda spink abby goodrum david robins mei mei wu microcosm: an open hypermedia system hugh davis wendy hall adrian pickering rob wilkins a shared command line in a virtual space: the working man's moo mark guzdial visual metaphor and the problem of complexity in the design of web sites: techniques for generating, recognizing and visualizing structure michael joyce robert kolker stuart moulthrop ben shneiderman john merritt unsworth beyond the traditional query operators (poster session) chen ding chi-hung chi geominer: a system prototype for spatial data mining spatial data mining aims to mine high-level spatial information and knowledge from large spatial databases. a spatial data mining system prototype, geominer, has been designed and developed based on our years of experience in the research and development of the relational data mining system dbminer, and our research into spatial data mining. the data mining power of geominer includes mining three kinds of rules: characteristic rules, comparison rules, and association rules, in geo-spatial databases, with a planned extension to include mining classification rules and clustering rules. the sand (spatial and nonspatial data) architecture is applied in the modeling of spatial databases, whereas geominer includes the spatial data cube construction module, spatial on-line analytical processing (olap) module, and spatial data mining modules. a spatial data mining language, gmql (geo-mining query language), is designed and implemented as an extension to spatial sql [3], for spatial data mining. moreover, an interactive, user-friendly data mining interface is constructed and tools are implemented for visualization of discovered spatial knowledge. jiawei han krzysztof koperski nebojsa stefanovic learning about hidden events in system interactions understanding how to use a computer system often requires knowledge of hidden events: things which happen as a result of user actions but which produce no immediate perceptible effect.
how do users learn about these events? will learners explain the mechanism in detail or only at the level at which they are able to use it? we extend lewis' expl model of causal analysis, incorporating ideas from miyake, draper, and dietterich, to give an account of learning about hidden events from examples. we present experimental evidence suggesting that violations of user expectations trigger a process in which hidden events are hypothesized and subsequently linked to user actions using schemata for general classes of situations which violate user expectations. stephen casner clayton lewis road crew: students at work lynellen d. s. perry retrieving descriptive phrases from large amounts of free text hideo joho mark sanderson mitre information discovery system (poster) raymond j. d'amore daniel j. helm puck-fai yan stephen a. glanowski the information grid: a framework for information retrieval and retrieval-centered applications the information grid (infogrid) is a framework for building information access applications that provides a user interface design and an interaction model. it focuses on retrieval of application objects as its top level mechanism for accessing user information, documents, or services. we have embodied the infogrid design in an object-oriented application framework that supports rapid construction of applications. this application framework has been used to build a number of applications, some that are classically characterized as information retrieval applications, others that are more typically viewed as personal work tools. ramana rao stuart k. card herbert d. jellinek jock d. mackinlay george g. robertson building hypertext from newspaper articles using lexical chaining (abstract) stephen j. green finding generalized projected clusters in high dimensional spaces high dimensional data has always been a challenge for clustering algorithms because of the inherent sparsity of the points. recent research results indicate that in high dimensional data, even the concept of proximity or clustering may not be meaningful. we discuss very general techniques for projected clustering which are able to construct clusters in arbitrarily aligned subspaces of lower dimensionality. the subspaces are specific to the clusters themselves. this definition is substantially more general and realistic than currently available techniques which limit the method to only projections from the original set of attributes. the generalized projected clustering technique may also be viewed as a way of trying to redefine clustering for high dimensional applications by searching for hidden subspaces with clusters which are created by inter-attribute correlations. we provide a new concept of using extended cluster feature vectors in order to make the algorithm scalable for very large databases. the running time and space requirements of the algorithm are adjustable, and are likely to trade off with better accuracy. charu c. aggarwal philip s. yu interoperating and integrating multidatabases and systems (extended abstract) david k. hsiao safe stratified datalog with integer order does not have syntax stratified datalog with integer (gap)-order ... the above three inter-related techniques offer an integrated framework for modeling, browsing, and searching large video databases. our experimental results indicate that they have many advantages over existing methods. junghwan oh kien a. hua the segment support map: scalable mining of frequent itemsets laks v. s.
lakshmanan carson kai-sang leung raymond t. ng on designing database schemes bounded or constant-time maintainable with respect to functional dependencies under the weak instance model, to determine if a class of database schemes is bounded with respect to dependencies is fundamental for the analysis of the behavior of the class of database schemes with respect to query processing and updates. however, proving that a class of database schemes is bounded with respect to dependencies seems to be very difficult even for restricted cases. to resolve this problem, we need to develop techniques for characterizing bounded database schemes. in this paper, we give a formal methodology for designing database schemes bounded with respect to functional dependencies using a new technique called extensibility. this methodology can also be used to design constant-time-maintainable database schemes. e. p. f. chan h. j. hernandez aroma: abstract representation of presence supporting mutual awareness elin rønby pedersen tomas sokoler automatically linking multimedia meeting documents by image matching patrick chiu jonathan foote andreas girgensohn john boreczky comment on bancilhon and spyratos' "update semantics and relational views" arthur m. keller tailorable groupware (workshop): issues, methods, and architectures anders mørch oliver stiemerling volker wulf database design for incomplete relations although there has been a vast amount of research in the area of relational database design, to our knowledge, there has been very little work that considers whether this theory is still valid when relations in the database may be incomplete. when relations are incomplete and thus contain null values, the problem of whether satisfaction is additive arises. additivity is the property of the equivalence of the satisfaction of a set of functional dependencies (fds) f with the individual satisfaction of each member of f in an incomplete relation. it is well known that in general, satisfaction of fds is not additive. previously we have shown that satisfaction is additive if and only if the set of fds is monodependent. we conclude that monodependence is a fundamental desirable property of a set of fds when considering incomplete information in relational database design. we show that, when the set of fds f either satisfies the intersection property or the split-freeness property, then the problem of finding an optimum cover of f can be solved in polynomial time in the size of f; in general, this problem is known to be np-complete. we also show that when f satisfies the split-freeness property then deciding whether there is a superkey of cardinality k or less can be solved in polynomial time in the size of f, since all the keys have the same cardinality. if f only satisfies the intersection property then this problem is np-complete, as in the general case. moreover, we show that when f either satisfies the intersection property or the split-freeness property then deciding whether an attribute is prime can be solved in polynomial time in the size of f; in general, this problem is known to be np-complete. assume that a relation schema r is in an appropriate normal form with respect to a set of fds f. we show that when f satisfies the intersection property then the notions of second normal form and third normal form are equivalent.
we also show that when r is in boyce-codd normal form (bcnf), then f is monodependent if and only if either there is a unique key for r, or for all keys x for r, the cardinality of x is one less than the number of attributes associated with r. finally, we tackle a long-standing problem in relational database theory by showing that when a set of fds f over r satisfies the intersection property, it also satisfies the split-freeness property (i.e., is monodependent), if and only if every lossless join decomposition of r with respect to f is also dependency preserving. as a corollary of this result we are able to show that when f satisfies the intersection property, it also satisfies the split-freeness property (i.e., is monodependent), if and only if every lossless join decomposition of r, which is in bcnf, is also dependency preserving. our final result is that when f is monodependent, then there exists a unique optimum lossless join decomposition of r, which is in bcnf, and is also dependency preserving. furthermore, this ultimate decomposition can be attained in polynomial time in the size of f. mark levene george loizou automatic structure visualization for video editing hirotada ueda takafumi miyatake shigeo sumino akio nagasaka safe constraint queries michael benedikt leonid libkin the effect of adding relevance information in a relevance feedback environment chris buckley gerard salton james allan on the decidability of query containment under constraints diego calvanese giuseppe de giacomo maurizio lenzerini geomed for urban planning - first user experiences barbara schmidt-belz claus rinner thomas f. gordon the transfer function bake-off (panel session) hanspeter pfister bill lorensen will schroeder chandrajit bajaj gordon kindlmann the data warehouse and data mining w. h. inmon apple guide: a case study in user-aided design of online help kevin knabe iterative design of video communication systems c. cool r. s. fish r. e. kraut c. m. lowery the state of the art in automating usability evaluation of user interfaces usability evaluation is an increasingly important part of the user interface design process. however, usability evaluation can be expensive in terms of time and human resources, and automation is therefore a promising way to augment existing approaches. this article presents an extensive survey of usability evaluation methods, organized according to a new taxonomy that emphasizes the role of automation. the survey analyzes existing techniques, identifies which aspects of usability evaluation automation are likely to be of use in future research, and suggests new ways to expand existing approaches to better support usability evaluation. melody y. ivory marti a. hearst practical methods for automatically generating typed links chip cleary ray bareiss towards an intelligent and personalized retrieval system development of an information retrieval system that can be personalized to each user requires maintaining and continually updating an interest profile for each individual user. since people tend to be poor at self-description, it is suggested that profile development and maintenance is an area in which machine learning and knowledge based techniques can be profitably employed. this paper presents a model for such an application of ai techniques. s h myaeng r r korfhage introducing creativity to cognition linda candy ernest edmonds making metadata: a study of metadata creation for a mixed physical-digital collection catherine c.
marshall a practitioner's view of techniques used in data warehousing for sifting through data to provide information (keynote address) over the past 10 years data warehousing evolved from providing 'nice to know' data to 'need to know' information. the decision support systems providing summarized reports to executives have advanced to integrated information factories providing vital information to the desktops of knowledge workers. data warehousing has benefited from the use of advanced techniques for sifting through data to produce information. this discussion will cover data mining techniques used in specific business cases as well as attempt to describe problems that still exist (and could be researched) in the business intelligence arena. jim scoggins on the potential utility of negative relevance feedback in interactive information retrieval colleen cool nicholas j. belkin jurgen koenemann semantic optimization: what are disjunctive residues useful for? residues have been proved to be a very important means for doing semantic optimization. in this paper we will discuss a new kind of residues--- the disjunctive residues. it will be shown that they are very useful to perform subformula elimination, if, in addition, a powerful reduction algorithm is available. wolfgang l. j. kowarschick comparing interactive information retrieval systems across sites: the trec-6 interactive track matrix experiment eric lagergren paul over query expansion using domain-adapted, weighted thesaurus in an extended boolean model in this paper, we address three important issues with query expansion using a thesaurus; how to give weights to the terms in expanded queries, how to select additional search terms in the thesaurus, and how to enrich the terms in the manual thesaurus (namely, thesaurus reconstruction). to weight the terms in expanded queries, we construct the weighted thesaurus that has a similarity value between the terms in the thesaurus, using statistical co-occurrence in a corpus. to enrich the terms in the manual thesaurus, domain dependent terms which occur in a corpus are inserted into the weighted thesaurus using the co-occurrence information. in this paper, the reconstructed thesaurus with weights is defined as a domain-adapted, weighted thesaurus. then we explain query expansion using the domain-adapted, weighted thesaurus in an extended boolean retrieval model. to select additional search terms during query expansion, our model uses semi-automatic query expansion and a restriction method. in the experiments, our system had almost twice the recall of the boolean retrieval system not using the thesaurus or the query expansion retrieval system using the original thesaurus. the precision of our system was almost the same as that of the other systems. oh-woog kwon myoung-cheol kim key-sun choi osam*.kbms: an object-oriented knowledge base management system for supporting advanced applications stanley y. w. su herman x. lam srinivasa eddula javier arroyo neeta prasad ronghao zhuang infoharness: a system for search and retrieval of heterogeneous information leon shklar amit sheth vipul kashyap satish thatte mining frequent patterns without candidate generation mining frequent patterns in transaction databases, time-series databases, and many other kinds of databases has been studied popularly in data mining research. most of the previous studies adopt an apriori-like candidate set generation-and-test approach.
however, candidate set generation is still costly, especially when there exist prolific patterns and/or long patterns. in this study, we propose a novel frequent pattern tree (fp-tree) structure, which is an extended prefix-tree structure for storing compressed, crucial information about frequent patterns, and develop an efficient fp-tree-based mining method, fp-growth, for mining _the complete set of frequent patterns_ by pattern fragment growth. efficiency of mining is achieved with three techniques: (1) a large database is compressed into a highly condensed, much smaller data structure, which avoids costly, repeated database scans, (2) our fp-tree-based mining adopts a pattern fragment growth method to avoid the costly generation of a large number of candidate sets, and (3) a partitioning- based, divide-and-conquer method is used to decompose the mining task into a set of smaller tasks for mining confined patterns in conditional databases, which dramatically reduces the search space. our performance study shows that the fp-growth method is efficient and scalable for mining both long and short frequent patterns, and is about an order of magnitude faster than the apriori algorithm and also faster than some recently reported new frequent pattern mining methods. jiawei han jian pei yiwen yin identity in virtual communities anri kivimäki kaisa kauppinen mike robinson parallelism and recovery in database systems in this paper a new method to increase parallelism in database systems is described. use is made of the fact that for recovery reasons, we often have two values for one object in the database---the new one and the old one. introduced and discussed in detail is a certain scheme by which readers and writers may work simultaneously on the same object. it is proved that transactions executed according to this scheme have the correct effect; i.e., consistency is preserved. several variations of the basic scheme which are suitable depending on the degree of parallelism required, are described. r. bayer h. heller a. reiser multimedia document architecture (panel session) stephen bulick terry crowley lester ludwig jonathan rosenberg the art of the interface: visual ideas, principles and inspiration for interface designers suzanne watzman cone trees: animated 3d visualizations of hierarchical information george g. robertson jock d. mackinlay stuart k. card compressed inverted files with reduced decoding overheads anh ngoc vo alistair moffat the integrated user-support environment (in-use) group at usc/isi robert neches peter aberg david benjamin brian harp liyi hu ping luo roberto moriyón pedro szekely popup vernier: a tool for sub-pixel-pitch dragging with smooth mode transition yuji ayatsuka jun rekimoto satoshi matsuoka answer garden: a tool for growing organizational memory answer garden allows organizations to develop databases of commonly asked questions that grow "organically" as new questions arise and are answered. it is designed to help in situations (such as field service organizations and customer "hot lines") where there is a continuing stream of questions, many of which occur over and over, but some of which the organization has never seen before. the system includes a branching network of diagnostic questions that helps users find the answers they want. if the answer is not present, the system automatically sends the question to the appropriate expert, and the answer is returned to the user as well as inserted into the branching network. 
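the fp-growth abstract above describes compressing a transaction database into a frequent pattern tree and mining by recursive pattern-fragment growth over conditional databases. the following is a simplified sketch of that idea, not the authors' implementation: for clarity it rebuilds each conditional tree from an explicit list of prefix paths instead of projecting the tree in place, and the data layout and names are assumptions.

```python
from collections import defaultdict

class FPNode:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}

def build_fptree(transactions, min_support):
    """One pass to count items, a second to insert frequency-ordered transactions."""
    counts = defaultdict(int)
    for t in transactions:
        for item in set(t):
            counts[item] += 1
    frequent = {i: c for i, c in counts.items() if c >= min_support}
    root = FPNode(None, None)
    header = defaultdict(list)        # item -> every tree node carrying it
    for t in transactions:
        items = sorted((i for i in set(t) if i in frequent),
                       key=lambda i: (-frequent[i], i))
        node = root
        for item in items:
            if item not in node.children:
                child = FPNode(item, node)
                node.children[item] = child
                header[item].append(child)
            node = node.children[item]
            node.count += 1
    return header, frequent

def fp_growth(transactions, min_support, suffix=()):
    """Yield (frequent itemset, support) pairs by pattern-fragment growth."""
    header, frequent = build_fptree(transactions, min_support)
    for item in sorted(frequent, key=lambda i: frequent[i]):
        pattern = (item,) + suffix
        yield pattern, frequent[item]
        # Conditional pattern base: the prefix path above each node for 'item',
        # repeated once per transaction that passed through that node.
        conditional = []
        for node in header[item]:
            path, parent = [], node.parent
            while parent.item is not None:
                path.append(parent.item)
                parent = parent.parent
            conditional.extend([path] * node.count)
        # Recurse on the conditional database of this pattern fragment.
        yield from fp_growth(conditional, min_support, pattern)

# example:
# list(fp_growth([["a", "b"], ["b", "c", "d"], ["a", "c", "d", "e"],
#                 ["a", "d", "e"], ["a", "b", "c"]], min_support=2))
```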
experts can also modify the answer garden branching network in response to users' problems. our initial answer garden database contains questions and answers about how to use the x window system. m. s. ackerman t. w. malone organization overviews and role management: inspiration for future desktop environments catherine plaisant ben shneiderman design: cuu: bridging the digital divide with universal usability ben shneiderman dtv: a client-based interactive dtv architecture edward y. chang a better mythology for system design jed harris austin henderson communication-nets for the specification of operator dialogs the model of a man-machine system is the basis of the proposed notation. it consists of the components man and machine. with these (static) components a (dynamic) process, called man-machine communication or man-machine dialog, is realized. the (three) possible dynamic interfaces are described. according to /car;69/ an evaluation, decision and control module for the exchange of information belongs to an active bidirectional communication. an operator dialog is such a communication. w. k. epple u. rembold diades-ii: a multi-agent user interface design approach with an integrated assessment component iris dilli hans-jurgen hoffmann future directions and research problems in the world wide web udi manber four generic communication tasks which must be supported in electronic conferencing john c. mccarthy victoria c. miles andrew f. monk michael d. harrison alan j. dix peter c. wright maintaining views in object-relational databases jixue liu millist vincent mukesh mohania a user interface using fingerprint recognition: holding commands and data objects on fingers atsushi sugiura yoshiyuki koseki sensuality in product design: a structured approach g. h. hofmeester j. a. m. kemp a. c. m. blankendaal web usability: a review of the research alfred t. lee accuracy measures for evaluating computer pointing devices in view of the difficulties in evaluating computer pointing devices across different tasks within dynamic and complex systems, new performance measures are needed. this paper proposes seven new accuracy measures to elicit (sometimes subtle) differences among devices in precision pointing tasks. the measures are target re-entry, task axis crossing, movement direction change, orthogonal direction change, movement variability, movement error, and movement offset. unlike movement time, error rate, and throughput, which are based on a single measurement per trial, the new measures capture aspects of movement behaviour during a trial. the theoretical basis and computational techniques for the measures are described, with examples given. an evaluation with four pointing devices was conducted to validate the measures. a causal relationship to pointing device efficiency (viz. throughput) was found, as was an ability to discriminate among devices in situations where differences did not otherwise appear. implications for pointing device research are discussed. i. scott mackenzie tatu kauppinen miika silfverberg evaluation of alternate data base machine designs the purpose of this paper is to point out the need for performance evaluation measures and techniques suitable for the evaluation of specialized architectural features in nonnumeric applications. toward this end, problems associated with the use of data base machines are examined at three levels of detail: the user level, the system level and the device level. v. vemuri r. a. liuzzi j. p. cavano p. b.
berra retrieving documents by plausible inference: a preliminary study choosing an appropriate document representation and search strategy for document retrieval has been largely guided by achieving good average performance instead of optimizing the results for each individual query. a model of retrieval based on plausible inference gives us a different perspective and suggests that techniques should be found for combining multiple sources of evidence (or search strategies) into an overall assessment of a document's relevance, rather than attempting to pick a single strategy. in this paper, we explain our approach to plausible inference for retrieval and describe some preliminary experiments designed to test this approach. the experiments use a spreading activation search to implement the plausible inference process. the results show that significant effectiveness improvements are possible using this approach. w. b. croft t. j. lucia p. r. cohen the update of index structures in object-oriented dbms andreas henrich generalized model for linear referencing paul scarponcini application-embedded retrieval from distributed free-text collections vladimir a. kulyukin synchronization of speech and hand gestures during multimodal human-computer interaction marie-luce bourguet akio ando clues: dynamic personalized message filtering matthew marx chris schmandt the freedom to work from an arbitrary position britt jönsson anna schömer konrad tollmar evaluation of eye gaze interaction eye gaze interaction can provide a convenient and natural addition to user-computer dialogues. we have previously reported on our interaction techniques using eye gaze [10]. while our techniques seemed useful in demonstration, we now investigate their strengths and weaknesses in a controlled setting. in this paper, we present two experiments that compare an interaction technique we developed for object selection based on where a person is looking with the most commonly used selection method using a mouse. we find that our eye gaze interaction technique is faster than selection with a mouse. the results show that our algorithm, which makes use of knowledge about how the eyes behave, preserves the natural quickness of the eye. eye gaze interaction is a reasonable addition to computer interaction and is convenient in situations where it is important to use the hands for other tasks. it is particularly beneficial for the larger screen workspaces and virtual environments of the future, and it will become increasingly practical as eye tracker technology matures. linda e. sibert robert j. k. jacob managing large systems with db2 udb in this talk, we will describe the usability challenges facing large distributed corporations. as well, we will discuss what ibm's db2 universal database is doing to address these complex issues. chris eaton collaboration in distributed document processing johann schlichter strategic outlook: from ears to eyes gert staal hypertext for the electronic library?: core sample results dennis e. egan michael e. lesk r. daniel ketchum carol c. lochbaum joel r. remde michael littman thomas k. landauer searching the web we offer an overview of current web search engine design. after introducing a generic search engine architecture, we examine each engine component in turn. we cover crawling, local web page storage, indexing, and the use of link analysis for boosting search performance. the most common design and implementation techniques for each of these components are presented.
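the 'searching the web' overview above names link analysis as one of the components used to boost search performance. as a concrete illustration of that component (a generic pagerank-style power iteration, not code from the authors' experimental testbed), here is a minimal sketch; the damping factor and iteration count are conventional but arbitrary choices.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration link analysis over a dict: page -> list of pages it links to."""
    pages = set(links) | {q for outs in links.values() for q in outs}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for p in pages:
            outs = links.get(p, [])
            targets = outs if outs else pages   # dangling page: spread evenly
            share = damping * rank[p] / len(targets)
            for q in targets:
                new_rank[q] += share
        rank = new_rank
    return rank

# example: pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```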
for this presentation we draw from the literature and from our own experimental search engine testbed. emphasis is on introducing the fundamental concepts and the results of several performance analyses we conducted to compare different designs. planning and implementing user-centred design nigel bevan ian curson comparative logical and physical modeling in two oodbmss nancy k. wiegand teresa m. adams aperture based selection for immersive virtual environments andrew forsberg kenneth herndon robert zeleznik requirements for a virtual collocation environment steven e. poltrock george engelbeck conventional and convenient in entity-relationship modeling haim kilov the diamond security policy for object-oriented databases linda m. null johnny wong touch-typing with a stylus (abstract) david goldberg cate richardson getting it across: layout issues for kiosk systems a clear and appealing screen layout is crucial to the success of on-line kiosk systems, public terminals that are connected to a network. this paper addresses the problem of developing such a layout, and provides several guidelines, drawn from traditional typography and gestalt psychology as well as from hypertext authoring, and human-computer interaction. to identify how a kiosk system's primary task influences optimal layout, kiosk systems are classified into four basic types. the usability of html (hypertext markup language) 2.0 and 3.0 to write documents for these systems is discussed, and some alternative existing environments are presented. jan borchers oliver deussen clemens knörzer optimizing object queries using an effective calculus object-oriented databases (oodbs) provide powerful data abstractions and modeling facilities, but they generally lack a suitable framework for query processing and optimization. the development of an effective query optimizer is one of the key factors for oodb systems to successfully compete with relational systems, as well as to meet the performance requirements of many nontraditional applications. we propose an effective framework with a solid theoretical basis for optimizing oodb query languages. our calculus, called the monoid comprehension calculus, captures most features of odmg oql, and is a good basis for expressing various optimization algorithms concisely. this article concentrates on query unnesting (also known as query decorrelation), an optimization that, even though it improves performance considerably, is not treated properly (if at all) by most oodb systems. our framework generalizes many unnesting techniques proposed recently in the literature, and is capable of removing any form of query nesting using a very simple and efficient algorithm. the simplicity of our method is due to the use of the monoid comprehension calculus as an intermediate form for oodb queries. the monoid comprehension calculus treats operations over multiple collection types, aggregates, and quantifiers in a similar way, resulting in a uniform method of unnesting queries, regardless of their type of nesting. cidp - on workflow-based client integration in complex client oriented design projects w. dangelmaier h. hamoudia r. f. klahold helping cscw applications succeed: the role of mediators in the context of use this study found that the use of a computer conferencing system in an r&d; lab was significantly shaped by a set of intervening actors--- mediators---who actively guided and manipulated the technology and its use over time. 
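the object-query optimization abstract a little earlier uses monoid comprehensions as a uniform intermediate form in which nested (correlated) queries can be flattened. the toy python comprehensions below only illustrate the shape of that transformation on invented data; note that the flat form drops departments whose inner query is empty, which is exactly the outer-join and grouping subtlety a real unnesting algorithm has to handle.

```python
# Invented data: departments with nested employee collections.
departments = [
    {"name": "toys",  "employees": [{"name": "ann", "salary": 120},
                                    {"name": "bob", "salary": 80}]},
    {"name": "books", "employees": [{"name": "eve", "salary": 150}]},
]

# Nested (correlated) form: an inner query is re-evaluated per department.
nested = [
    {"dept": d["name"],
     "well_paid": [e["name"] for e in d["employees"] if e["salary"] > 100]}
    for d in departments
]

# Unnested (decorrelated) form: one flat comprehension producing
# (dept, employee) pairs that an engine can evaluate join-style and regroup.
flat = [
    (d["name"], e["name"])
    for d in departments
    for e in d["employees"]
    if e["salary"] > 100
]
```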
the mediators adapted the technology to its initial context and shaped user interaction with it; over time, they continued to modify the technology and influence use patterns to respond to changing circumstances. we argue that well-managed mediation may be a useful mechanism for shaping technologies to evolving contexts of use, and that it extends our understanding of the powerful role that intervenors can play in helping cscw applications succeed. kazuo okamura wanda j. orlikowski masayo fujimoto joanne yates interactive sketching for user interface design james a. landay tasks-in-interaction: paper and screen based documentation in collaborative activity paul luff christian heath david greatbatch of crawlers, portals, mice, and men: is there more to mining the web? the world wide web is rapidly emerging as an important medium for transacting commerce as well as for the dissemination of information related to a wide range of topics (e.g., business, government, recreation). according to most predictions, the majority of human information will be available on the web in ten years. these huge amounts of data raise a grand challenge for the database community, namely, how to turn the web into a more useful information utility. this is exactly the subject that will be addressed by this panel. minos n. garofalakis sridhar ramaswamy rajeev rastogi kyuseok shim architecture issues in multimedia storage systems banu özden rajeev rastogi avi silberschatz the cutting edge darnell gadberry a usability study of workspace awareness widgets carl gutwin mark roseman internet-based groupware for user participation in product development the workshop on internet-based groupware for user participation in product development (igroup) was held on november 14, 1998, in conjunction with the conferences cscw 98 and pdc 98 in seattle, wa, usa. the workshop gathered 16 participants from both academia and industry. the topic discussed was how we can use groupware technology to involve end-users in the development of software and non-software products. the context for product development has traditionally been separated from the users' work context (grudin 1991). in a product development process the user group is often unknown. products are often purchased off-the-shelf, and the developers can in most cases only guess which user community and what types of users will be using their products. the only source of user feedback in these situations is from laboratory experiments with "typical users," and on some occasions from product resellers. the result is often sub-optimal solutions. in recent years this picture has changed drastically. technical innovations, in particular the internet, have made it possible to bring product developers and end-users closer together. in its beginning, the developers started using a "feedback button" as a menu choice in their programs. pushing this button would let the user compose a "complaint" message and send it to the developer. lately, more advanced methods have emerged, trying to construct mixed communities of end-users and developers, where one of the aims is to deliver products users really want. in this workshop, we took a look at how groupware is or could be used to facilitate collaboration between users and developers in order to increase the quality of the product. ten position papers discussed different aspects of user involvement using groupware technology.
in this report we will first give a summary of the position papers and then discuss the areas that we saw as important for further investigation. monica divitini babak a. farshchian tuomo tuikka bits: browsing in time and space antonio eduardo dias joao pedro silva antonio s. camara ten steps to usability testing marion hansen capbased-ams: a capability-based and event-driven activity management system patrick c. k. hung helen p. yeung kamalakar karlapalem computer-assisted evaluation of interface designs research in human-computer interaction has resulted in the development of task-theoretic analytical models that allow interface designers to evaluate the interface design on paper before building a prototype. despite the potential of such models to reduce interface development costs and improve design quality, they are not widely used. we believe that the complexity of these models is the main impediment and propose a computer system to remove this obstacle. the proposed system enables interface designers to describe an interface design formally and then assess it in terms of usability dimensions such as ease of learning and ease of use. it facilitates and structures task analysis and spares designers the burden of learning the complex syntax of analytical models. this paper describes the proposed system and discusses an empirical study designed to evaluate it. the empirical results show that the proposed system improves the accuracy and speed of evaluation of interface designs and contributes to more favorable attitudes towards analytical models. mohamed khalifa a meta model and an infrastructure for the non-transparent replication of object databases werner dreyer klaus r. dittrich a review of recent work on multi-attribute access methods most database systems provide database designers with single attribute indexing capability via some form of b+tree. multi-attribute search structures are rare, and are mostly found in systems specialized to some more narrow application area, e.g. geographic databases. the reason is that no multi-attribute search structure has been demonstrated, with high confidence. multi-attribute search is an active area of research. this paper reviews the state of this field and some of the difficult problems, and reviews some recent notable papers. david lomet announcement - the temporal query language tsql2 final language definition richard t. snodgrass ilsoo ahn gad ariav don batory james clifford curtis e. dyreson ramez elmasri fabio grandi christian s. jensen wolfgang käfer nick kline krishna kulkarni t. y. cliff leung nikos lorentzos john f. roddick arie segev mi microcentre, dundee: ordinary and extra-ordinary hci research alan f. newell providing awareness information to support transitions in remote computer-mediated collaboration susan e. mcdaniel from electronic whiteboards to distributed meetings (video program)(abstract only): extending the scope of dolphin this video demonstrates different aspects of the dolphin cooperative hypermedia environment in the context of electronic meeting rooms. there are three parts. first, the basic functionality of dolphin for electronic whiteboards is demonstrated. this includes the pen-based user-interface for creating informal structures such as scribbling, freehand sketching, and the creation of nodes and links. interaction for frequently used operations is based on gesture-recognition.
second, it shows how dolphin supports different aspects of meetings including the processes in the pre-, in-, and post-meeting phases. during the meeting, participants can use computers mounted in the meeting room table. thus, everybody can access and modify information on the public space displayed on the whiteboard while sitting at the table. they can also engage in parallel private work which can be shared with the group later on. the third part demonstrates how dolphin can be used to support meetings between two groups on physically distributed meeting rooms. shared workspaces are complemented by audio/video connections between the rooms. it is noted that dolphin can also be used in distributed desktop-based situations. ajit bapat jorg geibler david hicks norbert streitz daniel tietze inferring intent in eye-based interfaces: tracing eye movements with process models dario d. salvucci ifind-a system for semantics and feature based image retrieval over internet hongiang zhang liu wenyin chunhul hu high-level programming features for improving the efficiency of a relational database system this paper discusses some high-level language programming constructs that can be used to manipulate the relations of a relational database system efficiently. three different constructs are described: (1) tuple identifiers that directly reference tuples of a relation; (2) cursors that may iterate over the tuples of a relation; and (3) markings, a form of temporary relation consisting of a set of tuple identifiers. in each case, attention is given to syntactic, semantic, and implementation considerations. the use of these features is first presented within the context of the programming language plain, and it is then shown how these features could be used more generally to provide database manipulation capabilities in a high- level programming language. consideration is also given to issues of programming methodology, with an important goal being the achievement of a balance between the enforcement of good programming practices and the ability to write efficient programs. reind p. van de riet martin l. kersten wiebren de jonge anthony i. wasserman toward the bibliographic control of works: derivative bibliographic relationships in an online union catalog gregory h. leazer richard p. smiraglia the birth of a help system hans bergman jennifer keene-moore usability testing of posture video analysis tool mihriban whitmore tim mckay new file organization based on dynamic hashing new file organizations based on hashing and suitable for data whose volume may vary rapidly recently appeared in the literature. in the three schemes which have been independently proposed, rehashing is avoided, storage space is dynamically adjusted to the number of records actually stored, and there are no overflow records. two of these techniques employ an index to the data file. retrieval is fast and storage utilization is low. in order to increase storage utilization, we introduce two schemes based on a similar idea and analyze the performance of the second scheme. both techniques use an index of much smaller size. in both schemes, overflow records are accepted. the price which has to be paid for the improvement in storage utilization is a slight access cost degradation. 
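the dynamic hashing abstract just above describes file organizations whose storage adapts to the number of records, using a small index and tolerating overflow records. as a concrete member of that general family (extendible hashing with a doubling directory, which is not the specific pair of schemes analysed in the paper), here is a minimal in-memory sketch; the class names and bucket capacity are illustrative, and overflow handling for pathological hash collisions is omitted.

```python
class Bucket:
    def __init__(self, depth, capacity=4):
        self.depth = depth            # local depth (bits this bucket agrees on)
        self.capacity = capacity
        self.items = {}

class ExtendibleHash:
    """Directory-based dynamic hashing: the directory (index) doubles when a
    full bucket cannot be split within the current global depth."""
    def __init__(self, capacity=4):
        self.global_depth = 1
        self.capacity = capacity
        self.directory = [Bucket(1, capacity), Bucket(1, capacity)]

    def _slot(self, key):
        return hash(key) & ((1 << self.global_depth) - 1)

    def get(self, key):
        return self.directory[self._slot(key)].items.get(key)

    def put(self, key, value):
        bucket = self.directory[self._slot(key)]
        if key in bucket.items or len(bucket.items) < bucket.capacity:
            bucket.items[key] = value
            return
        # Bucket overflow: split it, doubling the directory if necessary.
        if bucket.depth == self.global_depth:
            self.directory += self.directory          # double the index
            self.global_depth += 1
        bucket.depth += 1
        new_bucket = Bucket(bucket.depth, self.capacity)
        high_bit = 1 << (bucket.depth - 1)
        for i, b in enumerate(self.directory):
            if b is bucket and (i & high_bit):
                self.directory[i] = new_bucket        # half the slots move over
        old_items, bucket.items = bucket.items, {}
        for k, v in old_items.items():                # rehash the stored records
            self.directory[self._slot(k)].items[k] = v
        self.put(key, value)                          # retry the pending insert

# example: h = ExtendibleHash(capacity=2); h.put("x", 1); h.put("y", 2); h.get("x")
```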
michel scholl adaptively constructing the query interface for meta-search engines with the exponential growth of information on the internet, current information integration systems have become more and more unsuitable for this "internet age" due to the great diversity among sources. this paper presents a constraint-based query user interface model, which can be applied to the construction of dynamically generated adaptive user interfaces for meta-search engines. lieming huang thiel ulrich matthias hemmje erich j. neuhold on the analysis of indexing schemes joseph m. hellerstein elias koutsoupias christos h. papadimitriou using web server logs to improve site design m. carl drott cyberbelt: multi-modal interaction with a multi-threaded documentary joshua bers sara elo sherry lassiter david tames web sites that work: designing with your eyes open jared m. spool will schroeder tara scanlon carolyn snyder the attention economy thomas h. davenport john c. beck implementation of the anecdote multimedia storyboarding system: komei harada eiichiro tanaka ryuichi ogawa yoshinori hara multi-disk management algorithms we investigate two schemes for placing data on multiple disks. we show that declustering (spreading each file across several disks) is inherently better than clustering (placing each file on a single disk) due to a number of reasons including parallelism and uniform load on all disks. miron livny setrag khoshafian haran boral metadata for mixed-media access francine chen marti hearst julian kupiec jan pedersen lynn wilcox the ergonomics of hypertext narrative: usability testing as a tool for evaluation and redesign while usability research concentrates on evaluating informational documents and web sites, significant insights can be gained from performing usability testing on texts designed for pleasure reading, such as hypertext narratives. this article describes the results of such a test. the results demonstrate that the navigation systems required for such texts can significantly interfere with readers' ability to derive value or pleasure from the fiction. the results emphasize the importance of hypertext authors providing more linear paths through texts and of simplifying the navigational apparatus required to read them. real-world interaction using the fieldmouse we introduce an inexpensive position input device called the fieldmouse, with which a computer can tell the position of the device on paper or any flat surface without using special input tablets or position detection devices. a fieldmouse is a combination of an id recognizer like a barcode reader and a mouse which detects relative movement of the device. using a fieldmouse, a user first detects an id on paper by using the barcode reader, and then drags it from the id using the mouse. if the location of the id is known, the location of the dragged fieldmouse can also be calculated, by adding the amount of movement since the id was detected to the position of the id. using a fieldmouse in this way, any flat surface can work as a pointing device that supports absolute position input, just by putting an id tag somewhere on the surface. a fieldmouse can also be used for enabling a graphical user interface (gui) on paper or on any flat surface by analyzing the direction and the amount of mouse movement after detecting an id. in this paper, we introduce how a fieldmouse can be used in various situations to enable computing in real-world environments.
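the position arithmetic in the fieldmouse abstract is easy to make concrete: scanning an id tag anchors the device at a known absolute position, after which relative mouse deltas are simply accumulated onto that anchor. the sketch below is a minimal illustration under that reading of the abstract; the class, its method names, and the id-to-location table are invented for the example and are not the authors' code.

class FieldMouse:
    # absolute position = position of the last scanned id tag
    # plus the relative motion accumulated since that scan
    def __init__(self, id_locations):
        self.id_locations = id_locations   # map: tag id -> (x, y) on the surface
        self.position = None               # unknown until a tag is scanned

    def on_id_scanned(self, tag_id):
        # the barcode reader fixes the absolute position of the device
        self.position = self.id_locations[tag_id]

    def on_mouse_moved(self, dx, dy):
        # the mouse part only reports relative motion; add it to the anchor
        if self.position is not None:
            x, y = self.position
            self.position = (x + dx, y + dy)
        return self.position

# dragging 25 units right and 10 down from a tag printed at (100, 40)
mouse = FieldMouse({"sheet-1": (100, 40)})
mouse.on_id_scanned("sheet-1")
print(mouse.on_mouse_moved(25, 10))   # (125, 50)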
itiro siio toshiyuki masui kentaro fukuchi book preview jennifer bruer integrated cscw tools within a shared 3d virtual environment christer carlsson lennart e. fahlen dynamat: a dynamic view management system for data warehouses pre- computation and materialization of views with aggregate functions is a common technique in data warehouses. due to the complex structure of the warehouse and the different profiles of the users who submit queries, there is need for tools that will automate the selection and management of the materialized data. in this paper we present dynamat, a system that dynamically materializes information at multiple levels of granularity in order to match the demand (workload) but also takes into account the maintenance restrictions for the warehouse, such as down time to update the views and space availability. dynamat unifies the view selection and the view maintenance problems under a single framework using a novel "goodness" measure for the materialized views. dynamat constantly monitors incoming queries and materializes the best set of views subject to the space constraints. during updates, dynamat reconciles the current materialized view selection and refreshes the most beneficial subset of it within a given maintenance window. we compare dynamat against a system that is given all queries in advance and the pre-computed optimal static view selection. the comparison is made based on a new metric, the detailed cost savings ratio introduced for quantifying the benefits of view materialization against incoming queries. these experiments show that dynamat's dynamic view selection outperforms the optimal static view selection and thus, any sub-optimal static algorithm that has appeared in the literature. yannis kotidis nick roussopoulos language features for interoperability of databases with schematic discrepancies ravi krishnamurthy witold litwin william kent a corpus analysis approach for automatic query expansion and its extension to multiple databases searching online text collections can be both rewarding and frustrating. while valuable information can be found, typically many irrelevant documents are also retrieved, while many relevant ones are missed. terminology mismatches between the user's query and document contents are a main cause of retrieval failures. expanding a user's query with related words can improve search performances, but finding and using related words is an open problem. this research uses corpus analysis techniques to automatically discover similar words directly from the contents of the databases which are not tagged with part-of-speech labels. using these similarities, user queries are automatically expanded, resulting in conceptual retrieval rather than requiring exact word matches between queries and documents. we are able to achieve a 7.6% improvement for trec 5 queries and up to a 28.5% improvement on the narrow-domain cystic fibrosis collection. this work has been extended to multidatabase collections where each subdatabase has a collection-specific similarity matrix associated with it. if the best matrix is selected, substantial search improvements are possible. various techniques to select the appropriate matrix for a particular query are analyzed, and a 4.8% improvement in the results is validated. susan gauch jianying wang satya mahesh rachakonda query-processing optimization strategies: feasible vs optimal solutions donna m. 
kaminski field oriented design techniques: case studies and organizing dimensions (abstract) dennis wixon judy ramey viewpoint: focus groups, theory or the kid in the garage? donald e. rickert thomas nagy proximal sensing: supporting context sensitive bill buxton postgres revisited: a business application perspective ronald c. linton outerjoin optimization in multidatabase systems outerjoin is used in distributed relational multidatabase systems for integrating local schemas to a global schema. queries against the global schema need to be modified, optimized, and decomposed into subqueries at local sites for processing. since outerjoin combines local relations in different databases to form a global relation, it is expensive to process. in this paper, based on the structure of the query and the definition of the schemas, queries with outerjoin, join, select and project operations are optimized. conditions where outerjoin can be avoided or be transformed into a one-side outerjoin are identified. by considering these conditions the response time for query processing can be reduced. arbee l. p. chen failure analysis in query construction: data and analysis from a large sample of web queries bernard j. jansen amanda spink tefko saracevic user interface architectures for information sharing rachel jones recognising "success" and "failure": evaluating groupware in a commercial context steve blythin john a. hughes steinar kristoffersen tom rodden mark rouncefield business: thoughts from 35,000 feet: the evolving real-world context of user centered design susan m. dray data integration using similarity joins and a word-based information representation language the integration of distributed, heterogeneous databases, such as those available on the world wide web, poses many problems. herer we consider the problem of integrating data from sources that lack common object identifiers. a solution to this problem is proposed for databases that contain informal, natural-language "names" for objects; most web-based databases satisfy this requirement, since they usually present their information to the end-user through a veneer of text. we describe whirl, a "soft" database management system which supports "similarity joins," based on certain robust, general- purpose similarity metrics for text. this enables fragments of text (e.g., informal names of objects) to be used as keys. whirl includes textual objects as a built-in type, similarity reasoning as a built-in predicate, and answers every query with a list of answer substitutions that are ranked according to an overall score. experiments show that whirl is much faster than naive inference methods, even for short queries, and efficient on typical queries to real-world databases with tens of thousands of tuples. inferences made by whirl are also surprisingly accurate, equaling the accuracy of hand- coded normalization routines on one benchmark problem, and outerperforming exact matching with a plausible global domain on a second. william w. cohen perceptual user interfaces (introduction) matthew turk george robertson retrieval performance of a distributed text database utilizing a parallel processor document server this paper considers text retrieval systems which store extremely huge amounts of text while providing a multi-user retrieval service for a large customer base. 
due to the severe i/o demands of such a system, it is usually beneficial if not necessary to utilize a multi- processor system with multiple i/o facilities in an effort to increase the parallel i/o activity, the objective being to lower search response times. after defining the problem, we model a solution and show that the application can be handled in a very effective fashion by a multi-processor system with a simple lan-based topology. the final discussion describes a type of functional splitting which, if done in a careful manner, helps improve search response time. forbes j. burkowski effects of spatial audio on memory, comprehension, and preference during desktop conferences an experiment was conducted to determine the effect of spatial audio on memory, focal assurance, perceived comprehension and listener preferences during desktop conferences. nineteen participants listened to six, pre- recorded, desktop conferences. each conference was presented using either non- spatial audio, co-located spatial audio, or scaled spatial audio, and during half of the conferences, static visual representations of the conferees were present. in the co-located condition, each conferees voice originated from directly above their image on the screen, and in the scaled spatial audio condition, the spatial separation between conferee voices was increased beyond the visual separation. results showed that spatial audio improved all measures, increasing memory, focal assurance, and perceived comprehension. in addition, participants preferred spatial audio to non-spatial audio. no strong differences were found in the visual conditions, or between the co-located spatial condition and the scaled spatial conditions. jessica j. baldis hypermedia, eternal life, and the impermanence agent noah wardrip-fruin concept based design of data warehouses: the dwq demonstrators the esprit project dwq (foundations of data warehouse quality) aimed at improving the quality of dw design and operation through systematic enrichment of the semantic foundations of data warehousing. logic-based knowledge representation and reasoning techniques were developed to control accuracy, consistency, and completeness via advanced conceptual modeling techniques for source integration, data reconciliation, and multi-dimensional aggregation. this is complemented by quantitative optimization techniques for view materialization, optimizing timeliness and responsiveness without losing the semantic advantages from the conceptual approach. at the operational level, query rewriting and materialization refreshment algorithms exploit the knowledge developed at design time. the demonstration shows the interplay of these tools under a shared metadata repository, based on an example extracted from an application at telecom italia. m. jarke c. quix d. calvanese m. lenzerini e. franconi s. ligoudistianos p. vassiliadis y. vassiliou usability testing in the real world carol bergfeld mills video conferencing as a technology to support group work: a review of its failures teleconferencing systems and services, are the main set of technologies developed thus far to support group work. within this set of technologies, videoconferencing is often thought of as a new, futuristic communication mode that lies between the telephone call and the face-to-face meeting. in fact, videoconferencing has been commercially available for over two decades, and, despite consistently brilliant market forecasts, to date it has failed to succeed except in limited niche markets. 
this paper reviews existing teleconferencing literature and provides an analysis of the reasons behind the failure of videoconferencing. carmen egido the kitchen interface - a lateral approach to gui duncan langford ceinwen jones the gaze groupware system: mediating joint attention in multiparty communication and collaboration roel vertegaal controlling the complexity of menu networks a common approach to the design of user interfaces for computer systems is the menu selection technique. each menu frame can be considered a node in an information/action network. the set of nodes and the permissible transitions between them (menu selections) form a directed graph which, in a system of substantial size, can be large and enormously complex. the solution to this problem of unmanageable complexity is the same for menu networks as for programs: the disciplined use of a set of well-defined one-in-one-out structures. this paper defines a set of such structures and offers some guidelines for their use. james w. brown multiversion concurrency control - theory and algorithms concurrency control is the activity of synchronizing operations issued by concurrently executing programs on a shared database. the goal is to produce an execution that has the same effect as a serial (noninterleaved) one. in a multiversion database system, each write on a data item produces a new copy (or version) of that data item. this paper presents a theory for analyzing the correctness of concurrency control algorithms for multiversion database systems. we use the theory to analyze some new algorithms and some previously published ones. philip a. bernstein nathan goodman properties of functional-dependency families seymour ginsburg sami mohammed zaiddan catching the eye: management of joint attention in cooperative work roel vertegaal boris velichkovsky gerrit van der veer visual information seeking: tight coupling of dynamic query filters with starfield displays christopher ahlberg ben shneiderman an architecture for a relational dataflow database a relational database system based on the principles of functional, data- driven computation is proposed. an architecture composed of a large number of independent, asynchronously operating processing units, each equiped with a separate memory unit holding a portion of the database is shown which implements the relational database system. all processing units are interconnected via 1) a circular shift-register bus, and 2) a daisy chain connection. relations are represented as streams of values (tuples) where each value is carried by a unique token. to perform a query on the data base, relations involved in the query are replicated from the database as streams and used as inputs to dataflow programs (graphs) implementing the relational algebra. all operators represented within the dataflow program are executed by independent processing units and are data-driven. proceeding asynchronously as stream values needed as inputs are produced. a time complexity of o(n) is achieved for any relational algebra query assuming a sufficient number of processing units are available. lubomir bic mike herendeen automatic indexing in information retrieval and text processing systems the search requests and stored information items are normally represented by sets of content identifiers, known as keywords or index terms. the choice of effective indexing products designed accurately to reflect document content is by far the most crucial task in retrieval. 
a wide variety of automatic indexing theories and systems have been developed over the years based in part on the occurrence frequencies of individual terms in the documents of a collection, on the ability of specific terms to distinguish the documents from each other, and on the distribution of terms in the relevant as opposed to the nonrelevant items in a collection. these theories lead to the automatic generation of content identifiers consisting of individual terms, term phrases, and classes of terms, and to the assignment of term weights reflecting the relative importance of the terms for content representation. the various indexing theories are covered and analytical as well as experimental results are given to demonstrate their effectiveness. gerard salton issues of gestural navigation in abstract information spaces david allport earl rennison lisa strausfeld a system architecture for the extension of structured information spaces by coordinated cscw services the world wide web is an emerging platform for information systems; however established system architectures for web systems focus mainly on the creation and storage of consistent hypermedia information structures and on the efficient distribution of the resulting documents. the interaction between the information users is seldom supported. as many application scenarios profit greatly from human interaction, the paper presents a platform- and application-independent generic system architecture designed to extend existing web-based information systems by coordinated services for human interaction. one prototype implementation of the architecture supports user awareness and human interaction on corporate web sites. peter manhart reseach alerts jennifer bruer conference report: the active web alan dix dave clarke workshop 2: the technology of browsing applications nina wacholder craig nevill manning a note on minimal covers john atkins cognitive factors in design (tutorial session): basic phenomena in human memory and problem solving thomas t. hewett timeslider: an interface to specify time point yuichi koike atsushi sugiura yoshiyuki koseki a specification paradigm for design and implementation of non-wimp user interfaces stephen a. morrison robert j. k. jacob towards an efficient evaluation of general queries: quantifier and disjunction processing revisited database applications often require to evaluate queries containing quantifiers or disjunctions, e.g., for handling general integrity constraints. existing efficient methods for processing quantifiers depart from the relational model as they rely on non- algebraic procedures. looking at quantified query evaluation from a new angle, we propose an approach to process quantifiers that makes use of relational algebra operators only. our approach performs in two phases. the first phase normalizes the queries producing a canonical form. this form permits to improve the translation into relational algebra performed during the second phase. the improved translation relies on a new operator - the complement-join - that generalizes the set difference, on algebraic expressions of universal quantifiers that avoid the expensive division operator in many cases, and on a special processing of disjunctions by means of constrained outer-joins. our method achieves an efficiency at least comparable with that of previous proposals, better in most cases. furthermore, it is considerably simpler to implement as it completely relies on relational data structures and operators. 
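the preceding abstract claims that universal quantification can usually be evaluated with ordinary relational algebra operators rather than the expensive division operator. the paper's complement-join is not reproduced here, but the standard identity below shows the kind of rewrite involved; with r defined over attributes a ∪ b and s defined over attributes b,

    R \div S \;=\; \pi_{A}(R) \;-\; \pi_{A}\bigl( (\pi_{A}(R) \times S) - R \bigr)

the subtracted term collects exactly those a-values for which some tuple of s is not paired in r, so what remains are the a-values paired with every tuple of s. this double negation mirrors the logical reading of the universal quantifier, and the same style of reasoning underlies rewrites that avoid division or that handle disjunctions through outer-join-like operators.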
francois bry information access across the language barrier (demonstration abstract): the must system chin-yew lin when using the tool interferes with doing the task susan s. kirschenbaum wayne d. gray brian d. ehret sheryl l. miller tyrannical links, loose associations, and other difficulties of hypertext: persistent hindrances to advances in hypertext publishing and the role of conceptual indexing in overcoming them philip c. murray a digital strategy for the library of congress digital libraries challenge the core practices of libraries and archiv es in many respects, not only in terms of accommodating digital information and technology, but also through the need to develop new economic and organizational models. as the world's largest library, the library of congress (lc) perhaps faces the most profound questions of how to collect, catalog, preserve, and provide access to digital resources. lc asked the computer science and telecommunications board of the national academies for advice in this area by commissioning the study that culminated with the publication of lc21: a digital strategy for the library of congress. the panelists at this session will provide a brief summary of the lc21 report, review developments subsequent to the publication of lc21, and offer their thoughts on how the library community and information industry could engage lc to the benefit of the nation. alan inouye margaret hedstrom dale flecker david levy peepholes: low cost awareness of one's community saul greenberg designing and implementing asynchronous collaborative applications with bayou w. keith edwards elizabeth d. mynatt karin petersen mike j. spreitzer douglas b. terry marvin m. theimer relativism and views in a conceptual data base model the purpose of the pingree park workshop was to bring together practitioners from the fields of artificial intelligence (ai), database management (db), and programming languages (pl), to discover common issues, and to explore commonalities and differences in approaches to these issues. at the risk of being superficial let me try to summarily characterize the three fields and point to where i think they may fruitfully interact. it seems to me that at its best, ai is an interdisciplinary science of cognition. it attempts to understand the bases for natural cognition, primarily by developing models of structures and processes that underlie cognition. by incorporation into interactive systems these models can be both exploited as artificially intelligent technology and explored for their adequacy in explaining natural cognition. peter kreps a corpus analysis approach for automatic query expansion susan gauch jianying wang why are geographic information systems hard to use? carol traynor marian g. williams evolvable view environment (eve): non-equivalent view maintenance under schema changes supporting independent iss and integrating them in distributed data warehouses (materialized views) is becoming more important with the growth of the www. however, views defined over autonomous iss are susceptible to schema changes. in the eve project we are developing techniques to support the maintenance of data warehouses defined over distributed dynamic iss [5, 6, 7]. the eve system is the first to allow views to survive schema changes of their underlying iss while also adapting to changing data in those sources. 
eve achieves this in two steps: applying view query rewriting algorithms that exploit information about alternative iss and the information they contain, and incrementally adapting the view extent to the view definition changes. those processes are referred to as view synchronization and view adaptation, respectively. they increase the survivability of materialized views in changing environments and reduce the necessity of human interaction in system maintenance. e. a. rundensteiner a. koeller x. zhang a. j. lee a. nica a. van wyk y. lee knowledge engineering for automated map design in descartes gennady andrienko natalia andrienko hypertext reading room walter vannini the feature quantity: an information theoretic perspective of tfidf-like measures the _feature quantity_, a quantitative representation of specificity introduced in this paper, is based on an information theoretic perspective of co-occurrence events between terms and documents. mathematically, the feature quantity is defined as a product of probability and information, and maintains a good correspondence with the _tfidf_-like measures popularly used in today's ir systems. in this paper, we present a formal description of the feature quantity, as well as some illustrative examples of applying such a quantity to different types of information retrieval tasks: representative term selection and text categorization. akiko aizawa a case study of the evolution of jun: an object-oriented open-source 3d multimedia library _jun is a large open-source graphics and multimedia library. it is object-oriented and supports 3d geometry, topography and multimedia. this paper reviews the development of the jun library from five perspectives: open-source, software evolution processes, development styles, technological support, and development data. we conclude the paper with lessons learned from the perspective of a for-profit company providing open-source object-oriented software to the community._ atsushi aoki kaoru hayashi kouichi kishida kumiyo nakakoji yoshiyuki nishinaka brent reeves akio takashima yasuhiro yamamoto a comprehension-based model of exploration muneo kitajima peter g. polson knowledge discovery in data warehouses themistoklis palpanas indexing images in oracle8i content-based retrieval of images is the ability to retrieve images that are similar to a query image. oracle8i visual information retrieval provides this facility based on technology licensed from virage, inc. this product is built on top of oracle8i intermedia which enables storage, retrieval and management of images, audios and videos. images are matched using attributes such as color, texture and structure and efficient content-based retrieval is provided using indexes of an image index type. the design of the index type is based on a multi-level filtering algorithm. the filters reduce the search space so that the expensive comparison algorithm operates on a small subset of the data. bitmap indexes are used to evaluate the first filter resulting in a design which performs well and is scalable. the image index type is built using oracle8i extensible indexing technology, allowing users to create, use, and drop instances of this index type as they would any other standard index. in this paper we present an overview of the product, the design of the image index type, and some performance results of our product.
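the multi-level filtering described in the oracle8i abstract above is an instance of the general filter-and-refine pattern: cheap, index-friendly predicates shrink the candidate set before the expensive image comparison runs. the fragment below sketches only that generic pattern; the function names, the histogram-style cheap distance, and the cutoff are assumptions made for illustration, not the product's api.

def filter_and_refine(query, images, cheap_distance, exact_distance, cutoff, k=10):
    # stage 1: a cheap filter (e.g. a coarse colour-histogram distance that a
    # bitmap index could help evaluate) discards most images outright
    candidates = [img for img in images if cheap_distance(query, img) <= cutoff]
    # stage 2: the expensive matcher (weighted colour/texture/structure
    # comparison in the real system) runs only on the small surviving subset
    candidates.sort(key=lambda img: exact_distance(query, img))
    return candidates[:k]

the pattern is only safe when the filter is conservative: the cutoff must be loose enough that no image the exact comparison would rank among the top k is discarded in the first stage.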
melliyal annamalai rajiv chopra samuel defazio susan mavris measuring the reputation of web sites: a preliminary exploration we describe the preliminary results from a pilot study, which assessed the perceived reputation - authority and trustworthiness - of the output from five www indexing/ranking tools. the tools are based on three techniques: external link structures, internal content, or human selection/indexing. twenty-two participants reviewed the output from each tool and assessed the reputation of the retrieved sites. greg keast elaine g. toms joan cherry integrity checking for multiple updates arding hsu tomasz imielinski perceptual user interfaces: perceptual intelligence alex pentland spreadsheets: a research agenda doug bell mike parr the dimensions of accessibility to online information: implications for implementing office information systems mary j. culnan introduction to the special issue on interface issues and designs for safety-critical interactive systems: when there is no room for user error software is increasingly being used to control safety-critical systems. much research since leveson's fundamental article "software safety: why, what, and how" (acm computing surveys 18, 2 (1986), pp. 125-163) has focused on ways to reduce or avoid software failures. however, the reliability of even the best-engineered software can be undermined by its user interface. indeed, interface design for safety-critical interactive systems poses special challenges to the human-computer interaction community. this special issue addresses the challenge of analyzing, designing, and building reliable and usable safety-critical interactive systems. from a pragmatic point of view, a safety-critical system is a system for which the cost of a failure is more important than the cost of developing the system. safety-critical interactive systems add the human dimension to a software system by putting control into the hands of a human operator. prominent examples of such control systems include nuclear power plants, railway systems, airplane cockpits, and military systems. recent years have seen much effort put into the reengineering of the control system that is well represented in this special issue: air traffic control. when compared to office automation systems, human-computer interaction for safety-critical interactive systems is both familiar and different. for instance, the management of a functionality like undo, that can be seen as a usability issue in an office automation system, can become a critical functionality when the user interacts with a safety-critical system. the three articles in this special issue provide three snapshots of how human-computer interaction issues play out in the broader field of safety-critical interactive systems. in the first article, "is paper safer? the role of flight strips in air traffic control", wendy mackay provides a detailed ethnographic study on how air traffic controllers work. as in mackay's article, the case study entails en-route air traffic control. an important contribution of this article is a method for an integrated analysis of three important methods of this field: task performance, analysis of user deviation and consequent hazard, and cooperation among users. each of the three articles deals with the analysis and design phases of safety-critical interactive systems.
if changes are to be made to large, complex, safety-critical control systems, the changes must be made early in the development lifecycle, where redesign in response to identified problems is feasible. this special issue arose from a chi98 workshop organized by palanque and paternó ("designing user interfaces for safety-critical systems", sigchi bulletin 30, 4). the three articles included in this special issue were selected from more than a score of papers received. the editors thank and acknowledge their debt to the many qualified external reviewers from several countries who have helped select and improve (through their comments) the contributions in this special issue. wayne d. gray philippe palanque fabio paternó information retrieval in the office environment the question of information retrieval in the office environment must be considered in two contexts: personal and organizational. in the personal context the issue is increasing individual effectiveness and productivity; in the organizational context it is one of increasing an organization's overall effectiveness and productivity, partially by improvements at the level of the individual but primarily by improvements to the systems and procedures through which individuals work together. the end user requirements of the two contexts are quite different, a fact which must be taken into account when designing information retrieval systems for the office environment. m. t. pezarro transformation-based spatial join spatial join finds pairs of spatial objects having a specific spatial relationship in spatial database systems. a number of spatial join algorithms have recently been proposed in the literature. most of them, however, perform the join in the original space. joining in the original space has a drawback of dealing with sizes of objects and thus has difficulty in developing a formal algorithm that does not rely on heuristics. in this paper, we propose a spatial join algorithm based on the transformation technique. an object having a size in the two-dimensional original space is transformed into a point in the four-dimensional transform space, and the join is performed on these point objects. this can be easily extended to n-dimensional cases. we show the excellence of the proposed approach through analysis and extensive experiments. the results show that the proposed algorithm has a performance generally better than that of the r*-based algorithm proposed by brinkhoff et al. this is a strong indication that corner transformation preserves clustering among objects and that spatial operations can be performed better in the transform space than in the original space. this reverses the common belief that transformation will adversely affect clustering. we believe that our result will provide a new insight towards transformation-based spatial query processing. ju-won song kyu-young whang young-koo lee min-jae lee sang-wook kim evolving video skims into useful multimedia abstractions: michael g. christel michael a. smith c. roy taylor david b. winkler mocca: an environment for cscw applications corporate the mocca group surveyor's forum: technical transactions heinz bender time concept in generalized data bases: tigre project m. adiba n. bui quang j.
palazzo de olivera working with the user in creating online information: in response to george miller brenda knowles rubens updating a database in an unsafe environment normally, when an application makes use of a database, considerable resources are invested in maintaining the integrity of that database. however, in situations where use of a database may be desirable even though the normal level of resources is unavailable, a simple technique using a partitioned data file protects the database if immediate transaction recording is not essential. anthony i. hinxman cockpitview: a user interface framework for future network terminals georg michelitsch ivee: an environment for automatic creation of dynamic queries applications christopher ahlberg erik wistrand a novel method for the evaluation of boolean query effectiveness across a wide operational range traditional methods for the system-oriented evaluation of boolean ir system suffer from validity and reliability problems. laboratory-based research neglects the searcher and studies suboptimal queries. research on operational systems fails to make a distinction between searcher performance and system performance. this approach is neither capable of measuring performance at standard points of operation (e.g. across r0.0-r1.0). a new laboratory-based evaluation method for boolean ir systems is proposed. it is based on a controlled formulation of inclusive query plans, on an automatic conversion of query plans into elementary queries, and on combining elementary queries into optimal queries at standard points of operation. major results of a large case experiment are reported. the validity, reliability, and efficiency of the method are considered in the light of empirical and analytical test data. eero sormunen credit for computer crashes?: creative solutions to usability problems can serve all users john gehl ben shneiderman enhanced dynamic queries via movable filters ken fishkin maureen c. stone towards intelligent recognition of multimedia episodes in real-time applications the ability to automatically capture and index multimedia information for later perusal and review is critical to the success of future multimedia services. in this paper, we describe how to automatically generate indexes of real-time streams without requiring deep content analysis. our techniques involve segmenting continuous audio and video into natural units, and relating these to discrete events from the multimedia application, such as user interactions, control events, and data content. in addition, we describe how to search within multimedia streams using query-based retrieval and visual and auditory retrieval modes. this multimodal retrieval allows for quick browsing and visual comprehension of multimedia streams. finally we show how our techniques apply to the area of multimedia conference recording. j. gabbe a. ginsberg b. robinson cache investment: integrating query optimization and distributed data placement emerging distributed query-processing systems support flexible execution strategies in which each query can be run using a combination of data shipping and query shipping. as in any distributed environment, these systems can obtain tremendous performance and availability benefits by employing dynamic data caching. when flexible execution and dynamic caching are combined, however, a circular dependency arises: caching occurs as a by-product of query operator placement, but query operator placement decisions are based on (cached) data location. 
the practical impact of this dependency is that query optimization decisions that appear valid on a per-query basis can actually cause suboptimal performance for all queries in the long run. to address this problem, we developed cache investment \\- a novel approach for integrating query optimization and data placement that looks beyond the performance of a single query. cache investment sometimes intentionally generates a "suboptimal" plan for a particular query in the interest of effecting a better data placement for subsequent queries. cache investment can be integrated into a distributed database system without changing the internals of the query optimizer. in this paper, we propose cache investment mechanisms and policies and analyze their performance. the analysis uses results from both an implementation on the shore storage manager and a detailed simulation model. our results show that cache investment can significantly improve the overall performance of a system and demonstrate the trade-offs among various alternative policies. donald kossmann michael j. franklin gerhard drasch wig ag protecting vod the easier way carsten griwodz oliver merkel jana dittmann ralf steinmetz workshop on organizing web space: an integration of content search with structure analysis yoshinori hara katsumi tanaka robert wilensky the abstraction-link-view paradigm: using constraints to connect user interfaces to applications the goal of the rendezvous project is to build interactive systems that are used by multiple users from multiple workstations, simultaneously. this goal caused us to choose an architecture that requires a clean run-time separation of user interfaces from applications. such a separation has long been state goal of uims researchers, but it is difficult to achieve. a key technical reason for the difficulty is that modern direct manipulation interfaces require extensive communication between the user interface and the application to provide semantic feedback. we discuss several communications mechanisms that have been used in the past, and present our approach --- the abstraction- link-view paradigm. links are objects whose sole responsibility is to facilitate communication between the abstraction objects (application) and the view objects (user interfaces). the abstraction-link-view paradigm relies on concurrency and a fast but powerful constraint system. ralph d. hill the logical record access approach to database design toby j. teorey james p. fry users' perception of the performance of a filtering system raya fidel michael crandall exception-based information flow control in object-oriented systems we present an approach to control information flow in object-oriented systems. the decision of whether an information flow is permitted or denied depends on both the authorizations specified on the objects and the process by which information is obtained and transmitted. depending on the specific computations, a process accessing sensitive information could still be allowed to release information to users who are not allowed to directly access it. exceptions to the permissions and restrictions stated by the authorizations are specified by means of exceptions associated with methods. two kinds of exceptions are considered: invoke exceptions, applicable during a mehtod execution and reply exceptions applicable to the information returned by a method. 
information flowing from one object into another or returned to the user is subject to the different exceptions specified for the methods enforcing the transmission. we formally characterize information transmission and flow in a transaction and define the conditions for safe information flow. we define security specifications and characterize safe information flows. we propose an approach to control unsafe flows and present an algorithm to enforce it. we also illustrate an efficient implementation of our controls and present some experimental results evaluating its performance. elisa bertino sabrina de capitani di vimercati elena ferrari pierangela samarati m: an architecture of integrated agents doug riecken music video analysis and context tools for usability measurement miles macleod nigel bevan towards usability evaluation of multimedia applications marianne g. petersen networked information retrieval norbert fuhr the use of think-aloud evaluation methods in design peter c. wright andrew f. monk an interactive poetic garden tom white david small visualseek: a fully automated content-based image query system john r. smith shih-fu chang content: a practical, scalable, high-performance multimedia database lawrence yapp craig yamashita gregory zick muse: a multiscale editor george w. furnas xiaolong zhang video parsing, retrieval and browsing: an integrated and content-based solution h. j. zhang c. y. low s. w. smoliar j. h. wu evaluation and analysis of users' activity organization our analyses of the activities performed by users of computer systems show complex patterns of interleaved activities. current human - computer interfaces provide little support for the kinds of problems users encounter when attempting to accomplish several different tasks in a single session. in this paper we develop a framework for discussing the characteristics of activities, in terms of activity structures, and provide a number of conceptual guidelines for developing an interface which supports activity coordination. the concept of a workspace is introduced as a unifying construct for reducing the mental workload when switching tasks, and for supporting contextually-driven interpretations of the users' activity structures. liam bannon allen cypher steven greenspan melissa l. monty the influence of persuasion, training and experience on user perceptions and acceptance of it innovation weidong xia gwanhoo lee mpeg-4 systems and applications hari kalva lai-tee cheok alexandros eleftheriadis an experimental study of the human/computer interface an exploratory study was conducted to analyze whether interface and user characteristics affect decision effectiveness and subject behavior in an interactive human/computer problem-solving environment. the dependent variables were performance and the use of the systems options. two of the independent variables examined, experience and cognitive style, were user characteristics; the other three, dialogue, command, and default types, were interface characteristics. results indicate that both user and interface characteristics influence the use of the system options and the request for information in the problem-solving task. izak benbasat albert s. dexter paul s. masulis a query language for multidimensional arrays: design, implementation, and optimization techniques leonid libkin rona machlin limsoon wong just talk to me: a field study of expertise location david w. mcdonald mark s. 
ackerman a metadatabase system for semantic image search by a mathematical model of meaning yasushi kiyoki takashi kitagawa takanari hayama ai watch: data mining and the web joseph s. fulda actual & potential hypertext & hypermedia (panel): 5 realizations diane greco markku eskelinen chris funkhouser marjorie luesebrink jim rosenberg incomplete information and the relational model of data a scheme for representing incomplete information in a relational database is presented. also given are algorithms for performing relational algebra operations, inference rules for data dependencies and conditions for lossless decompositions in the context of the representation scheme introduced here. r. b. abhyankar r. l. kashyap processing multi-join query in parallel systems kian-lee tan hongjun lu a collaborative approach to developing style guides stephen gale properties and update semantics of consistent views the problem of translating view updates to database updates is considered. both databases and views are modeled as data abstractions. a data abstraction consists of a set of states and of a set of primitive update operators representing state transition functions. it is shown how complex update programs can be built from primitive update operators and how view update programs are translated into database update programs. special attention is paid to a class of views that we call "consistent." loosely speaking, a consistent view is a view with the following property: if the effect of a view update program on a view state is determined, then the effect of the corresponding database update is unambiguously determined. thus, in order to know how to translate a given view update into a database update, it is sufficient to be aware of a functional specification of such a program. we show that consistent views have a number of interesting properties with respect to the concurrency of (high-level) update transactions. moreover we show that the class of consistent views includes as a subset the class of views that translate updates under maintenance of a constant complement. however, we show that there exist consistent views that do not translate under constant complement. the results of bancilhon and spyratos [6] are generalized in order to capture the update semantics of the entire class of consistent views. in particular we show that the class of consistent views is obtained if we relax the requirement of a constant complement by allowing the complement to decrease according to a suitable partial order. georg gottlob paolo paolini roberto zicari why reading was slower from crt displays than from paper experiments, including our own (gould et al., 1982; 1984; 1986), have shown that people read more slowly from crt displays than from paper. here we summarize results from a few of our fifteen experiments that have led us to conclude that the explanation centers on the image quality of the crt characters. reading speeds equivalent to those on paper were found when the crt displays contained character fonts that resembled those on paper (rather than dot matrix fonts, for example), had a polarity of dark characters on a light background, were anti-aliased (e.g., contained grey level), and were shown on displays with relatively high resolution (e.g., 1000 x 800). each of these variables probably contributes something to the improvement, but the trade-offs have not been determined. 
other general explanations for the reading speed difference that can be excluded include some inherent defect in crt technology itself or personal variables such as age, experience, or familiarity with reading from crt displays. john d. gould lizette alfaro rich finn brian haupt angela minuto interoperability and object identity data model transparency can be achieved by providing a canonical language format for the definition and seamless manipulation of multiple autonomous information bases. in this paper we assume a canonical data and computational model combining the function and object-oriented paradigms. we investigate the concept of identity as a property of an object and the various ways this property is supported in existing databases, in relation to the object-oriented canonical data model. the canonical data model is the tool for combining and integrating preexisting syntactically homogeneous, but semantically heterogeneous data types into generalized unifying data types. we identify requirements for object identity in federated systems, and discuss problems of object identity and semantic object replication arising from this new abstraction level. we argue that a strong notion of identity at the federated level can only be achieved by weakening strict autonomy requirements of the component information bases. finally we discuss various solutions to this problem that differ in their requirements with respect to giving up autonomy. frank eliassen randi karlsen an algebra for structured office documents we describe a data model for structured office information objects, which we generically call "documents," and a practically useful algebraic language for the retrieval and manipulation of such objects. documents are viewed as hierarchical structures; their layout (presentation) aspect is to be treated separately. the syntax and semantics of the language are defined precisely in terms of the formal model, an extended relational algebra. the proposed approach has several new features, some of which are particularly useful for the management of office information. the data model is based on nested sequences of tuples rather than nested relations. therefore, sorting and sequence operations and the explicit handling of duplicates can be described by the model. furthermore, this is the first model based on a many-sorted instead of a one-sorted algebra, which means that atomic data values as well as nested structures are objects of the algebra. as a consequence, arithmetic operations, aggregate functions, and so forth can be treated inside the model and need not be introduced as query language extensions to the model. many-sorted algebra also allows arbitrary algebra expressions (with boolean result) to be admitted as selection or join conditions and the results of arbitrary expressions to be embedded into tuples. in contrast to other formal models, this algebra can be used directly as a rich query language for office documents with precisely defined semantics. ralf hartmut guting roberto zicari david m. choy neimo, a multiworkstation usability lab for observing and analyzing multimodal interaction joëlle coutaz daniel salber eric carraux nathalie portolan for a social network analysis of computer networks: a sociological perspective on collaborative work and virtual community barry wellman the hotbox: efficient access to a large number of menu-items gordon kurtenbach george w. fitzmaurice russell n.
owen thomas baudel "making place" to make it work: empirical explorations of hci for mobile cscw this paper addresses issues of user interface design, relating to ease of use, of handheld cscw. in particular, we are concerned with the requirements that arise from situations in which a traditionally designed mobile computer with a small keyboard and screen, may not be easily used. this applies to many mobile use contexts, such as inspection work and engineering in the field. by examining two such settings, we assert that what is usually pointed to as severe shortcomings of mobile computing today, for example: awkward keyboard, small display and unreliable networks, are really implications from a conceptual hci design that emphasise unstructured, unlimited input; a rich, continuous visual feedback channel and marginal use of sound. we introduce motile, a small prototype that demonstrates some alternative ideas about hci for mobile devices. we suggest that identifying complementing user interface paradigms for handheld cscw may enhance our understanding not only of mobile computing or handheld cscw, but the cscw field as a whole. steinar kristoffersen fredrik ljungberg a modular and open object-oriented database system on 30 august 1990, texas instruments incorporated, dallas, tx was awarded a three year contract (contract no. daab07-90-c-b920) to develop a modular and open object-oriented database system. the contract is funded by darpa/isto and is being managed by the u.s. army, cecom, fort monmouth, n.j. the contract is being executed at ti's information technologies laboratory, computer science center, dallas, texas. so far, we have received an outstanding response from interested parties (database research community, oodb application developers, oodb builders) to our contract award announcement. this communication is a collection of most commonly asked questions and answers to them. satish m. thatte assumption analysis in compiled logic-based decision support systems (ldsss) (abstract only) we describe a ldss which extends and improves on prior proposals1. it uses a connection graph to compile programs for handling ordinary queries2, thus eliminating the need for deductive search at query time. it provides for assumption analyses (like "what-if" questions) by allowing changes to the rules (clauses) of the system and tracing the effects of such changes, thereby avoiding the need to recompile the whole system3. it allows certain existentially quantified formulas to participate in the ldss as constraints on answers produced by rules. such a ldss exhibits desirable characteristics of general decision support systems like handling semi-structured problems and being supporting, descriptive, effective, and evolutionary. future work includes interfacing to large statistical packages and dealing with aggregate functions like average. michael c. chen lawrence j. henschen effective retrieval of structured documents ross wilkinson special report: the 1988 object-oriented database workshop j. joseph s. thatte c. thompson d. wells word document density and relevance scoring (poster session) previous work addressing the issue of word distribution in documents has shown the importance of word _repetitiveness_ as an indicator of the word content- bearing characteristics. in this paper we propose a simple method using a measure of the tendency of words to repeat within a document to separate the words with similar document frequencies, but different topic discriminating characteristics. 
we describe the application of the new measure in query- document relevance scoring. experiments on the trec ad hoc and spoken document retrieval tasks [7] show useful performance improvements. martin franz j. scott mccarley community networks (tutorial session)(abstract only) goals and content: this tutorial will survey community networks (such as berkeley community memory and the cleveland freenet) focusing on how they may impact human activities and institutions. the tutorial offers a tour of the blacksburg electronic village both to demonstrate one networked community in action and to illustrate important design decisions for any networked community. john carroll carmen sears dynamic metadata for monitoring digital library management george buchanan gil marsden harold thimbleby do color models really make a difference? sarah douglas ted kirkpatrick exploding the interface: experiences of a cscw network the development of human computer interaction has been dominated by the interface both as a design concept and as an artefact of computer systems. however, recently researchers have been re-examining the role of the interface in the user's interaction with the computer. this paper further examines the notion of the interface in light of the experiences of the authors in establishing a network to support cooperative work. the authors argue that the concept of the single interface which provides a focus for interaction with a computer system is no longer tenable and that richer conceptions of the inter- relationships between users and computer systems are needed. john bowers tom rodden the cosie communications subsystem: support for distributed office applications douglas b. terry sten andler javax.xxl: a prototype for a library of query processing algorithms therefore, index structures can easily be used in queries. a typical example is a join cursor which consumes the outputs of two underlying cursors. most of our work is however not dedicated to the area of relational databases, but mainly refers to spatial and temporal data. for spatial databases, for example, we provide several implementations of spatial join algorithms [3]. the cursor-based processing is however the major advantage of xxl in contrast to approaches like leda [6] and tpie [7]. for more information on xxl see http://www.mathematik.uni- marburg.de/dbs/xxl. we will demonstrate the latest version of xxl using examples to show its core functionality. we will concentrate on three key aspects of xxl. usage: we show how easily state-of-the-art spatial join-algorithms can be implemented in xxl using data from different sources. reuse: we will demonstrate how to support different joins, e.g. spatial and temporal joins, using the same generic algorithm like plug&join; [1]. comparability: we will demonstrate how xxl serves as an ideal testbed to compare query processing algorithms and index structures. jochen van den bercken jens-peter dittrich bernhard seeger resolving the tension between integrity and security using a theorem prover some information in databases and knowledge bases often needs to be protected from disclosure to certain users. traditional solutions involving multi-level mechanisms are threatened by the user's ability to infer higher level information from the semantics of the application. we concentrate on the revelation of secrets through a user running transactions in the presence of database integrity constraints. 
we develop a method of specifying secrets formally that not only exposes a useful structure and equivalence among secrets but also allows a theorem prover to detect certain security lapses during transaction compilation time. subhasish mazumdar david stemple tim sheard polynomial time query processing in temporal deductive databases we study conditions guaranteeing polynomial time computability of queries in temporal deductive databases. we show that if for a given set of temporal rules, the period of its least models is bounded from the above by a polynomial in the database size, then also the time to process yes-no queries (as well as to compute finite representations of all query answers) can be polynomially bounded. we present a bottom-up query processing algorithm bt that is guaranteed to terminate in polynomial time if the periods are polynomially bounded. polynomial periodicity is our most general criterion, however it can not be directly applied. therefore, we exhibit two weaker criteria, defining inflationary and i-periodic sets of temporal rules. we show that it can be decided whether a set of temporal rules is inflationary. i-periodicity is undecidable (as we show), but it can be closely approximated by a syntactic notion of multi- separability. jan chomicki distributed data base management: some thoughts and analyses the new decade will witness the widespread usage of distributed processing systems. distributed data base management will turn out to be one of the important applications of this relatively new technology. this paper initially presents a brief outline of the nature of research that has been performed so far in this area. next, it discusses some of the important issues (like integrity and security constraints, deadlocks, concurrency control, etc.) and analyses some proposed mechanisms. performance and correctness issues are also addressed. then, a brief sketch of an adaptive architecture for distributed data base management systems is presented. the last section lists some future directions for research in this area. c. mohan anatomy of a real e-commerce system today's e-commerce systems are a complex assembly of databases, web servers, home grown glue code, and networking services for security and scalability. the trend is towards larger pieces of these coming together in bundled offerings from leading software vendors, and the networking/hardware being offered through service delivery companies. in this paper we examine the bundle by looking in detail at ibm's websphere, commerce edition, and its deployment at a major customer site. anant jhingran a room of your own: what would it take to help remote groups work as well as collocated groups? judith s. olson lisa covi elena rocco william j. miller paul allie gesture at the user interface (abstract) alan wexelblat marc cavazza text-hypertext mutual conversion and hypertext interchange through sgml min zheng roy rada compromising statistical databases responding to queries about means this paper describes how to compromise a statistical database which only answers queries about arithmetic means for query sets whose cardinality falls in the range [k, n \\- k], for some k > 0, where n ≥ 2k is the number of records in the database. the compromise is shown to be easy and to require only a little preknowledge; knowing the cardinality of just one nonempty query set is usually sufficient. 
this means that not only count and sum queries, but also queries for arithmetic means can be extremely dangerous for the security of a statistical database, and that this threat must be taken into account explicitly by protective measures. this seems quite important from a practical standpoint: while arithmetic means were known for some time to be not altogether harmless, the (perhaps surprising) extent of the threat is now shown. wiebren de jonge comparative logical and physical modeling in two oodbmss an application developer's perspective is used to compare modeling and storage in two object- oriented database management systems (oodbmss): ode (object database and environment) and objectstore. although both systems are based on the object- oriented language c++, differences exist in their oodbms designs. comparing the differences between these two systems provides insight into other possible designs or combinations of features that could be possible in an oodbms. as part of this discussion, internal translations of oodbms schemas to physical storage are shown. such knowledge enables an application developer to create appropriate logical and physical database designs and provides a basis for understanding oodbms software. features compared in this paper are physical and logical clustering, iterators, sets, joins, inverse pointers, and queries. to facilitate the discussion, a simple geographic information system (gis) schema is presented. nancy k. wiegand teresa m. adams on the desirability of acyclic database schemes catriel beeri ronald fagin david maier mihalis yannakakis codex, memex, genex: the pursuit of transformational technologies ben shneiderman dynamic sets for search david steere m. satyanarayanan jeannette m. wing network communities john m. carroll stuart laughton mary beth rosson traversal recursion: a practical approach to supporting recursive applications many capabilities that are needed for recursive applications in engineering and project management are not well supported by the usual formulations of recursion. we identify a class of recursions called "traversal recursions" (which model traversals of a directed graph) that have two important properties they can supply the necessary capabilities and efficient processing algorithms have been defined for them. first we present a taxonomy of traversal recursions based on properties of the recursion on graph structure and on unusual types of metadata. this taxonomy is exploited to identify solvable recursions and to select an execution algorithm. we show how graph traversal can sometimes outperform the more general iteration algorithm. finally we show how a conventional query optimizer architecture can be extended to handle recursive queries and views. arnon rosenthal sandra heiler umeshwar dayal frank manola modeling temporal primitives: back to basics iqbal a. goralwalla yuri leontievm. tamer Ã-zsu duane szafron visualizing music and audio using self-similarity this paper presents a novel approach to visualizing the time structure of music and audio. the acoustic similarity between any two instants of an audio recording is displayed in a 2d representation, allowing identification of structural and rhythmic characteristics. examples are presented for classical and popular music. applications include content-based analysis and segmentation, as well as tempo and structure extraction. jonathan foote the deckscape web browser marc h. brown robert a. 
shillner high-dimensional index structures database support for next decade's applications (tutorial) stefan berchtold daniel a. keim are we working on the right problems? (panel) there appears to be a discrepancy between the research topics being pursued by the database research community and the key problems facing information systems decisions makers such as chief information officers (cios). panelists will present their view of the key problems that would benefit from a research focus in the database research community and will discuss perceived discrepancies. based on personal experience, the most commonly discussed information systems problems facing cios today include: michael stonebraker book preview jennifer bruer database research at at&t; bell laboratories bell laboratories is the research and development arm of at&t.; there is today tremendous support for database research in bell labs and in at&t.; this is expected to continue since database technology is recognized as being central not just to teradata, ncr, and at&t;'s computing business but to its core communication business as well. what you see described below is a survey of the current state of a young and growing research effort in the database area. you should expect to see a stronger "resume" a few years down the road. research at at&t; bell labs is never directed from above, towards specific projects or objectives. individual researchers select their areas of research based on a combination of factors that include individual skill and interest, other on-going activity and potential for collaboration, and the likelihood of intellectual or monetary impact. while research activity can range from the highly theoretical to the completely applied, it is expected that most research be performed within the framework of some larger objective; and this framework is typically provided by a prototype system. in addition to the core set of people here full-time, often there are visitors. you will see the names of many of these in the publications listed. for instance, most recently oded shmueli from the technion spent a year with us. currently, avi silberschatz is visiting from austin. h. v. jagadish the trackpad: a study on user comfort and performance ahmet e. Çakir gisela Çakir thomas muller pieter unema design: understanding design representations harry j. saddler ontology-based web site mapping for information exploration centralized search process requires that the whole collection reside at a single site. this imposes a burden on both the system storage of the site and the network traffic near the site. it thus comes to require the search process to be distributed. recently, more and more web sites provide the ability to search their local collection of web pages. query brokering systems are used to direct queries to the promising sites and merge the results from these sites. creation of meta-information of the sites plays an important role in such systems. in this article, we introduce an ontology-based web site mapping method used to produce conceptual meta-information, the vector space approach, and present a serial of experiments comparing it with naive-bayes approach. we found that the vector space approach produces better accuracy in ontology- based web site mapping. xiaolan zhu susan gauch lutz gerhard nicholas kral alexander pretschner the structure of object transportation and orientation in human-computer interaction: yanqing wang christine l. mackenzie valerie a. summers kellogg s. 
booth informedia digital video library the informedia digital video library project is developing new technologies for creating full-content search and retrieval digital video libraries. working in collaboration with wqed pittsburgh, the project is creating a testbed that will enable k-12 students to access, explore, and retrieve science and mathematics materials from the digital video library. the library will initially contain 1,000 hours of video from the archives of project partners: wqed, fairfax co. va schools' electronic bbc-produced video courses. (industrial partners include digital equipment corp., bell atlantic, intel corp., and microsoft, inc.) this library will be installed at winchester thurston school, an independent k-12 school in pittsburgh. m. christel t. kanade m. mauldin r. reddy m. sirbu s. stevens h. wactlar adviso demonstration tom mcgee n. dimitrova l. agnihotri meta-design: design for designers one fundamental challenge for the design of the interactive systems of the future is to invent and design environments and cultures in which humans can express themselves and engage in personally meaningful activities. unfortunately, a large number of new media are designed from a perspective of viewing and treating humans primarily as consumers. the possibility for humans to be and act as designers (in cases in which they desire to do so) should be accessible not only to a small group of high-tech scribes, but rather to all interested individuals and groups. meta- design characterizes activities, processes, and objectives to create new media and environments that allow users to act as designers and be creative. in this paper we discuss problems addressed by our research on meta-design, provide a conceptual framework for meta-design, and illustrate our developments in the context of a particular system, the envisionment and discovery collaboratory. gerhard fischer eric scharff xmas: an extensible main-memory storage system for high-performance applications xmas is an extensible main-memory storage system for high-performance embedded database applications. xmas not only provides the core functionality of dbms, such as data persistence, crash recovery, and concurrency control, but also pursues an extensible architecture to meet the requirements from various application areas. one crucial aspect of such extensibility is that an application developer can compose application- specific, high-level operations with a basic set of operations provided by the system. called composite actions in xmas, these operations are processed by a customized xmas server with minimum interaction with application processes, thus improving the overall performance. this paper first presents the architecture and functionality of xmas, and then demonstrates a simulation of mobile communication service. jang ho park yong sik kwon ki hong kim sang ho lee byoung dae park sang kyun cha cooperative inquiry: developing new technologies for children with children allison druin visual interactions with a multidimensional ranked list anton leouski james allan indexing the positions of continuously moving objectsthe coming years will witness dramatic advances in wireless communications aswell as positioning technologies. as a result, tracking the changing positionsof objects capable of continuous movement is becoming increasingly feasibleand necessary. 
the present paper proposes a novel, r*-tree based indexingtechnique that supports the efficient querying of the current and projectedfuture positions of such moving objects. the technique is capable of indexingobjects moving in one-, two-, and three-dimensional space. update algorithmsenable the index to accommodate a dynamic data set, where objects may appearand disappear, and where changes occur in the anticipated positions ofexisting objects. a comprehensive performance study is reported.simonas Å altenis christian s. jensen scott t. leutenegger mario a. lopez how do the experts do it? the use of ethnographic methods as an aid to understanding the cognitive processing and retrieval of large bodies of text this paper explores an important problem in information retrieval: that of rapidly increasing amounts of full-text storage that is difficult to file and retrieve effectively. the author suggests that a possible avenue for improving full-text retrieval would include in-depth studies of the ways in which individual users cope with large amounts of written information, stored chiefly on paper in their offices. relevant literature in cognitive psychology is reviewed and some recent and continuing studies are described that have used anthropological methods to approach this problem. it is argued that historians are a good group to study, due to their reliance on the examination and processing of texts, and the broad scope of their inquiries. examinations of the ways in which this one group of information workers categorize documents could lead us to a better understanding of human problems in processing and retrieving textual information. d. o. case visual interaction design special interest area: getting started maria g. wadlow static detection of sources of dynamic anomalies in a network of referential integrity restrictions laura c. rivero jorge h. doorn daniel loureiro no members, no officers, no dues: a ten year history of the software psychology society ben shneiderman alignment of the is organization: the special case of corporate acquisitions carol v. brown janet s. renwick constraints on the design of information systems thomas b. hickey designing computers with people in mind anne garrison s. joy mountford greg thomas from hal to office appliances: human-machine interfaces in science fiction and reality david g. stork a language for queries on structure and contents of textual databases gonzalo navarro ricardo baeza-yates ariadne: a system for constructing mediators for internet sources the web is based on a browsing paradigm that makes it difficult to retrieve and integrate data from multiple sites. today, the only way to achieve this integration is by building specialized applications, which are time-consuming to develop and difficult to maintain. we are addressing this problem by creating the technology and tools for rapidly constructing information mediators that extract, query, and integrate data from web sources. the resulting system, called ariadne, makes it feasible to rapidly build information mediators that access existing web sources. jose luis ambite naveen ashish greg barish craig a. knoblock steven minton pragnesh j. modi ion muslea andrew philpot sheila tejada tivoli: an electronic whiteboard for informal workgroup meetings this paper describes tivoli, an electronic whiteboard application designed to support informal workgroup meetings and targeted to run on the xerox liveboard, a large screen, pen-based interactive display. 
tivoli strives to provide its users with the simplicity, facile use, and easily understood functionality of conventional whiteboards, while at the same time taking advantage of the computational power of the liveboard to support and augment its users' informal meeting practices. the paper presents the motivations for the design of tivoli and briefly describes the current version in operation. it then reflects on several issues encountered in designing tivoli, including the need to reconsider the basic assumptions behind the standard desktop gui, the use of strokes as the fundamental object in the system, the generalized wipe interface technique, and the use of meta-strokes as gestural commands. elin rønby pedersen kim mccall thomas p. moran frank g. halasz visdb: a system for visualizing large databases daniel a. keim hans-peter kriegel training algorithms for linear text classifiers david d. lewis robert e. schapire james p. callan ron papka modern client-server dbms architectures in this paper, we describe three client-server dbms architectures. we discuss their functional components and provide an overview of their performance characteristics. nick roussopoulos alexis delis topological queries in spatial databases c. h. papadimitriou d. suciu v. vianu editorial steven pemberton jay blickstein a factor analysis of user cognition and emotion judith ramsay a methodology for objectively evaluating error messages message quality is a critical factor in influencing user acceptance of a program product. good error messages can reduce the time and cost to create and maintain software, as well as help users learn about the product. we have developed a methodology for conducting controlled usability evaluations of error messages. the message test program is easily modified to adapt to different product situations, and messages can be evaluated even before working code exists. the message test program can be used to test error messages for a batch product, as well as messages for an interactive product. it can also be used for stand-alone messages, for products that offer on-line help, or messages that provide additional information in a reference manual. message testing enables us to objectively evaluate error messages and provide specific feedback about the difficulties users encounter and how error messages can be improved. barbara s. isa james m. boyle alan s. neal roger m. simons out of the frying pan and into the fire: creating a help desk james cunningham bryan lubbers making technology accessible for older users beth meyer sherry e. mead wendy a. rogers mattias schneider-hufschmidt state of the art issues in distributed databases (panel session): the efficacy of replicated data this discussion of replicated data will investigate such questions as: what good is replicated data? what overhead does it have? what cost/benefit is there to the increased availability it may provide? how can a system take advantage of this availability? a. keller three dimensional visualization of the world wide web steve benford ian taylor david brailsford boriana koleva mike craven mike fraser gail reynard chris greenhalgh on sharing resources and donating books: what are we doing wrong? is this the bull-in-a-china-shop syndrome? gitte lindgaard ying leung john fabre a normal form for precisely characterizing redundancy in nested relations we give a straightforward definition for redundancy in individual nested relations and define a new normal form that precisely characterizes redundancy for nested relations. 
we base our definition of redundancy on an arbitrary set of functional and multivalued dependencies, and show that our definition of nested normal form generalizes standard relational normalization theory. in addition, we give a condition that can prevent an unwanted structural anomaly in nested relations, namely, embedded nested relations with at most one tuple. like other normal forms, our nested normal form can serve as a guide for database design. wai yin mok yiu-kai ng david w. embley cityquilt tirtza even student readers' use of library documents: implications for library technologies kenton o'hara fiona smith william newman abigail sellen flexible, active support for collaboration with conversationbuilder simon m. kaplan william j. tolone douglas p. bogia theodore phelps navigation as multiscale pointing: extending fitts' model to very high precision tasks yves guiard michel beaudouin-lafon denis mottet the nsf/arpa/nasa digital libraries initiative: opportunities for hci research (panel session) william hefley creating creativity: user interfaces for supporting innovation a challenge for human-computer interaction researchers and user interf ace designers is to construct information technologies that support creativity. this ambitious goal can be attained by building on an adequate understanding of creative processes. this article offers a four-phase framework for creativity that might assist designers in providing effective tools for users: (1)collect: learn from provious works stored in libraries, the web, etc.; (2) relate: consult with peers and mentors at early, middle, and late stages, (3)create: explore, compose, evaluate possible solutions; and (4) donate: disseminate the results and contribute to the libraries. within this integrated framework, this article proposes eight activities that require human-computer interaction research and advanced user interface design. a scenario about an architect illustrates the process of creative work within such an environment. ben shneiderman the multidimensional database system rasdaman rasdaman is a universal \--- i.e., domain-independent --- array dbms for multidimensional arrays of arbitrary size and structure. a declarative, sql- based array query language offers flexible retrieval and manipulation. efficient server-based query evaluation is enabled by an intelligent optimizer and a streamlined storage architecture based on flexible array tiling and compression. rasdaman is being used in several international projects for the management of geo and healthcare data of various dimensionality. p. baumann a. dehmel p. furtado r. ritsch n. widmann rosolving conflicts in global storage design through replication we present a conceptual framework in which a database's intra- and interrecord set access requirements are specified as a constrained assignment of abstract characteristics ("evaluated," "indexed," "clustered," "well-placed") to logical access paths. we derive a physical schema by choosing an available storage structure that most closely provides the desired access characteristics. we use explicit replication of schema objects to reduce the access cost along certain paths, and analyze the trade-offs between increased update overhead and improved retrieval access. finally, we given an algorithm to select storage structures for a codasyl 78 dbtg schema, given its access requirements specification. r. h. katz e. 
wong reachability and connectivity queries in constraint databases it is known that standard query languages for constraint databases lack the power to express connectivity properties. such properties are important in the context of geographical databases, where one naturally wishes to ask queries about connectivity (what are the connected components of a given set?) or reachability (is there a path from a to b that lies entirely in a given region?). no existing constraint query languages that allow closed form evaluation can express these properties. in the first part of the paper, we show that in principle there is no obstacle to getting closed languages that can express connectivity and reachability queries. in fact, we show that adding any topological property to standard languages like fo + lin and fo+poly results in a closed language. in the second part of the paper, we look for tractable closed languages for expressing reachability and connectivity queries. we introduce path logic, which allows one to state properties of paths with respect to given regions. we show that it is closed, has polynomial time data complexity for linear and polynomial constraints, and can express a large number of reachability properties beyond simple connectivity. query evaluation in the logic involves obtaining a discrete abstraction of a continuous path, and model-checking of temporal formulae on the discrete structure. michael benedikt martin grohe leonid libkin luc segoufin back to the future: pen and paper technology supports complex group coordination steve whittaker heinrich schwarz an orthogonal taxonomy for hyperlink anchor generation in video streams using ovaltine jason mcc. smith david stotts sang-uok kum a shot classification method of selecting effective key-frames for video browsing hisashi aoki shigeyoshi shimotsuji osamu hori the organization of cooperative work: beyond the "leviathan" conception of the organization of cooperative work this paper examines the relationship between cooperative work and the wider organizational context. the purpose of the exploration is not to contribute to organizational theory in general, but to critique the transaction cost approach to organizational theory from the point of view of cooperative work. the paper posits that the formal conception of organization---organization conceived of in terms of "common ownership"\---is inadequate as a conceptual foundation for embedding cscw systems in a wider organizational context. the design of cscw systems for real-world application must move beyond the bounds of organizational forms conceived of in terms of "common ownership." kjeld schmidt a high-level user interface for update and retrieval in relational databases - language aspects gottfried vossen volkert brosda a genetic model for video content based retrieval cyril decleir mohand-saïd hacid jacques kouloumdjian principles, techniques, and ethics of stage magic and their application to human interface design magicians have been designing and presenting illusions for 5000 years. they have developed principles, techniques and ethical positions for their craft that this paper argues are applicable to the design of human/computer interfaces. the author presents a number of specific examples from magic and discusses their counterparts in human interface design, in hopes that human interface practitioners and researchers will, having recognized the applicability of magic, go further on their own to explore its domain. 
bruce tognazzini first-class views: a key to user-centered computing large database systems (e.g., federations, warehouses) are multi- layer \\--- i.e., a combination of base databases and (virtual or physical) view databases1. smaller systems use views for layers that hide detailed physical and conceptual structures. we argue that most databases would be more effective if they were more user-centered \\--- i.e., if they allowed users, administrators, and application developers to work mostly within their native view. to do so, we need first class views \\--- views that support most of the metadata and operations available on source tables. first class views could also make multi-tier object architectures (based on objects in multiple tiers of servers) easier to build and maintain. the views modularize code for data services (e.g., query, security) and for coordinating changes with neighboring tiers. when data in each tier is derived declaratively, one can generate some of these methods semi-automatically. much of the functionality required to support first class views can be generated semi- automatically, if the derivations between layers are declarative (e.g., sql, rather than java). we present a framework where propagation rules can be defined, allowing the flexible and incremental specification of view semantics, even by non-programmers. finally, we describe research areas opened up by this approach. arnon rosenthal edward sciore remote usability evaluation: user participation in the design of a web-based email service david r. millen effective video screen displays: cognitive style and cuing effectiveness kenneth a. cory user perceptual mechanisms in the search of computer command menus menu-based command systems, in which a user selects a command from a set of choices displayed to him, have acquired widespread use as a human- computer interface technique. the technique is especially attractive for use with new or untrained users since the user need not recall the command he wishes, but merely recognize it. but menu systems also find application in more sophisticated systems meant for expert users (for example, teitelman, 1977) where they can be used to reduce the complexity of the options with which the user is presented. stuart k. card what help do users need?: taxonomies for on-line information needs & access methods a. w. roesler s. g. mclellan defining the "virtualness" of groups, teams, and meetings fred niedeman catherine m. beise the role of emotion in believable agents joseph bates ease of use and the richness of documentation adequacy t. r. girill mavis 2 robert tansley mark dobie paul lewis wendy hall homenet: a field trial of residential internet services robert kraut william scherlis tridas mukhopadhyay jane manning sara kiesler getting more out of programming-by-demonstration richard g. mcdaniel brad a. myers toward a worldwide digital library edward a. fox gary marchionini picturepiper: using a re-configurable pipeline to find images on the web adam m. fass eric a. 
bier eyton adar defining logical domains in a web site wen-syan li okan kolak quoc vu hajime takano the use of mmr, diversity-based reranking for reordering documents and producing summaries jaime carbonell jade goldstein conjunctive query equivalence of keyed relational schemas (extended abstract) joseph albert yanis ioannidis raghu ramakrishnan a five-key mouse with built-in dialog control kunio ohno ken-ichi fukaya jurg nievergelt the design of a gui paradigm based on tablets, two-hands, and transparency gordon kurtenbach george fitzmaurice thomas baudel bill buxton a pattern matching language for spatio-temporal databases we propose a pattern matching language for spatio-temporal databases. the matching process in time dimension is based upon the evolutionary nature of time, but in spatial dimension it is based on placement, shape and sizes of regions. the concept of pattern matching introduced in this paper is independent of the choice of the underlying model for spatio-temporal databases. in particular, the pattern matching language seamlessly extends our sql-like query language parasql for spatio-temporal databases. the pattern matching language would also have application in active databases, because patterns can be used as triggers. tsz s. cheng shashi k. gadia developing collaborative applications using the world wide web shell (tutorial session)(abstract only) goals and content: the tutorial discusses how to develop collaborative applications using the www shell as a rapid prototyping and development platform. using an example collaborative application, we introduce particular development topics to illustrate the suitability of the www shell and its use. also, we discuss recent additions in functionality as well as constraints with the www shell approach. alison lee andreas girgensohn flexible, active support for collaboration with conversationbuilder simon m. kaplan william j. tolone douglas p. bogia theodore phelps application and database design - putting it off there seems to be two major schools of thought about designing database application systems: system design is mandatory, say the methodologers. how can you ever build a system unless you first determine in detail what it should do? system design is impossible, say the prototypists. how can you ever build a system if you must first determine in detail what it should do? christopher j. shaw the tangled web we wove: a taskonomy of www use michael d. byrne bonnie e. john neil s. wehrle david c. crow the rapport multimedia communication system (demonstration) j. r. ensor s. r. ahuja r. b. connaghan m. pack d. d. seligmann information sought and information provided: an empirical study of user/expert dialogues transcripts of computer-mail users seeking advice from an expert were studied to investigate the complementary claims that people often do not know what information they need to obtain in order to achieve their goals, and consequently, that experts must identify inappropriate queries and infer and respond to the goals behind them. this paper reports on one facet of the transcript analysis, namely, the identification of the types of relation that hold between the action that an advice-seeker asks about and the action that an expert tells him how to perform. three such relations between actions are identified: generates, enables, and is-alternative-to. the claim is made that a cooperative advice-providing system, such as a help system or an expert system, must be able to compute these relations between actions. 
martha e. pollack efficient mining of emerging patterns: discovering trends and differences guozhu dong jinyan li adaptive hypermedia: from systems to framework paul de bra peter brusilovsky geert-jan houben "i've got a little list" harriett johnson richard johnson effectively locating information on the internet qiongsen yu barrett r. bryant designing the muse: a digital music stand for the symphony musician christopher graefe derek wahila justin maguire orya dasna bringing treasures to the surface: previews and overviews in a prototype for the library of congress national digital library catherine plaisant gary marchionini anita komlodi organizational issues of end-user computing donald l. amoroso user needs assessment and evaluation: issues and methods (workshop) nancy van house david levy ann bishop barbara buttenfield task based groupware design: putting theory into practice designing groupware systems requires methods and tools that cover all aspects of groupware systems. we present a method that utilizes known theoretical insights and makes them usable in practice. in our method, the design of groupware systems is driven by an extensive task analysis followed by structured design and iterative evaluation using usability criteria. using a combination of multiple complementary representations and techniques, a wide range of aspects of groupware design is covered. the method is built on our experiences and is used in practice by several companies and educational institutes in europe. we define the design process, the models needed and the tools that support the design process. gerrit van der veer martijn van welie video helps remote work: speakers who need to negotiate common ground benefit from seeing each other elizabeth s. veinott judith olson gary m. olson xiaolan fu what's happening marisa campbell design and selection of materialized views in a data warehousing environment: a case study in this paper, we describe the design of a data warehousing system for an engineering company 'r'. a cost model was developed for this system to enable the evaluation of the total costs and benefits involved in selecting each materialized view. using the cost analysis methodology for evaluation, an adapted greedy algorithm has been implemented for the selection of materialized views. the algorithm and cost model were applied to a set of real-life database items extracted from company 'r'. by selecting the most cost effective set of materialized summary views, the total of the maintenance, storage and query costs of the system is optimized, thereby resulting in an efficient data warehousing system. goretti k. y. chan qing li ling feng dynamic restructuring of transactional workflow activities: a practical implementation method tong zhou calton pu ling liu the world is not a desktop marc weiser the oasis multidatabase prototype the oasis prototype is under development at dublin city university in ireland. we describe a multi-database architecture which uses the odmg model as a canonical model and describe an extention for construction of virtual schemas within the multidatabase system. the omg model is used to provide a standard distribution layer for data from local databases. this takes the form of corba objects representing export schemas from separate data sources. 
mark roantree john murphy wilhelm hasselbring converting relational to object-oriented databases joseph fong a probabilistic relational algebra for the integration of information retrieval and database systems we present a probabilistic relational algebra (pra) which is a generalization of standard relational algebra. in pra, tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. based on intensional semantics, the tuple weights of the result of a pra expression always conform to the underlying probabilistic model. we also show for which expressions extensional semantics yields the same results. furthermore, we discuss complexity issues and indicate possibilities for optimization. with regard to databases, the approach allows for representing imprecise attribute values, whereas for information retrieval, probabilistic document indexing and probabilistic search term weighting can be modeled. we introduce the concept of vague predicates which yield probabilistic weights instead of boolean values, thus allowing for queries with vague selection conditions. with these features, pra implements uncertainty and vagueness in combination with the relational model. norbert fuhr thomas rölleke a design system approach to data integrity due to the nature of chip design, the very large-scale integrated (vlsi) design data base is constantly changing. the changes may be caused by logical or physical design activities. in either case, there is a need to make sure that no matter what happens, the data base remains valid. this paper discusses a system which uses an automated data integrity technique (audit) to eliminate errors prior to hardware build. william a. noon ken n. robbins m. ted roberts a collaborative medium for the support of conversational props tom brinck louis m. gomez meme tags and community mirrors: moving from conferences to collaboration richard borovoy fred martin sunil vemuri mitchel resnick brian silverman chris hancock towards a fast precision-oriented image retrieval system yves chiaramella philippe mulhem mourad mechkour iadh ounis marius pasca a case study for evaluating interface design through communicability communicability evaluation is a method based on semiotic engineering that aims at assessing how designers communicate to users their design intents and chosen interactive principles, and thus complements traditional usability evaluation methods. in this paper, we present a case study in which we evaluate how communicablity tagging of an application changes along users learning curves. our main goal was to have indications of how communicability evaluation along a learning period helps provide valuable information about interface designs, and identify communicative and interactive problems, as users become more proficient in the application. raquel o. prates simone d. j. barbosa clarisse s. de souza computers versus humans milan e. soklic research alerts ben shneiderman semistructured data peter buneman the multiview project: object-oriented view technology and applications e. a. rundensteiner h. a. kuno y.-g. ra v. crestana-taube m. c. jones p. j. marron "body coupled fingerring": wireless wearable keyboard masaaki fukumoto yoshinobu tonomura usability inspection methods jakob nielsen advanced search technologies for unfamiliar metadata (demonstration abstract) barbara norgard youngin kim michael buckland aitao chen ray larson fred gey defining search success: evaluation of searcher performance in digital libraries barbara m. 
wildemuth reflections: writing effectively to humans keith gregory fuzzy rule extraction from gis data with a neural fuzzy system for decision making ding zheng wolfgang kainz a simple database language for personal computers a simple database language for personal computers has been implemented by selecting a subset of the ans mumps language and enhancing it so as to meet the requirements of microcomputer end-users who are unfamiliar with computers. this database language is named micro mumps. its database is based on a modified prefix b-tree having parameters for adjusting its data organization according to the requirements of space and time efficiency. experience with micro mumps has demonstrated a remarkable reduction in programming time. tan watanabe tsuneharu ohsawa takaji suzuki consistency of models the topic of this session is the consistency of models. (i believe that consistency and integrity are substantially the same.) a model basically contains: \\- states consisting of entities and various relationships between them, \\- operations (functions) for examining states, and \\- operations (procedures) for changing states. the topic will be discussed in terms of the following, rather controversial issues: 1\\. kinds of consistency 2\\. specifying consistency 3\\. detecting inconsistency 4\\. living with inconsistency 5\\. concurrency and consistency 6\\. recovery from failure 7\\. exploiting constraint knowledge for each issue i will present my own view, and i encourage alternative views to be expressed. interactive sketching of multimedia storyboards brian bailey efficient and effective metasearch for text databases incorporating linkages among documents linkages among documents have a significant impact on the importance of documents, as it can be argued that important documents are pointed to by many documents or by other important documents. metasearch engines can be used to facilitate ordinary users for retrieving information from multiple local sources (text databases). there is a search engine associated with each database. in a large-scale metasearch engine, the contents of each local database is represented by a representative. each user query is evaluated against he set of representatives of all databases in order to determine the appropriate databases (search engines) to search (invoke) in previous word, the linkage information between documents has not been utilized in determining the appropriate databases to search. in this paper, such information is employed to determine the degree of relevance of a document with respect to a given query. specifically, the importance (rank) of each document as determined by the linkages is integrated in each database representative to facilitate the selection of databases for each given query. we establish a necessary and sufficient condition to rank databases optimally, while incorporating the linkage information. a method is provided to estimate the desired quantities stated in the necessary and sufficient condition. the estimation method runs in time linearly proportional to the number of query terms. experimental results are provided to demonstrate the high retrieval effectiveness of the method. 
clement yu weiyi meng wensheng wu king- lup liu the future of internet search (keynote address) steve kirsch combining fuzzy information from multiple systems (extended abstract) ronald fagin an interaction engine for rich hypertexts in semantically rich hypertexts it is attractive to enable presentation of a network of nodes and link at different levels of abstraction. it is also important that the user can interact with the hypertext using a command repertoire that reflects the chosen abstraction level. based on a characterization of rich hypertext we introduce the concept of an interaction engine that governs the separation between internal hypertext representation and external screen presentation. this separation is the key principle of the hyperpro system. the hyperpro interaction engine is based on simple rules for presentation, interpretation of events, and menu set up. much of the power of the interaction engine framework comes from the organization of these rules relative to the type of hierarchy of nodes and links, and relative to a hierarchy of so-called interaction schemes. the primary application domain discussed in the paper is program development and program documentation. kasper Østerbye kurt nørmark irresistible forces and immovable objects jonathan grudin communication-efficient distributed mining of association rules mining for associations between items in large transactional databases is a central problem in the field of knowledge discovery. when the database is partitioned among several share-nothing machines, the problem can be addressed using distributed data mining algorithms. one such algorithm, called cd, was proposed by agrawal and shafer in [1] and was later enhanced by the fdm algorithm of cheung, han et al. [5]. the main problem with these algorithms is that they do not scale well with the number of partitions. they are thus impractical for use in modern distributed environments such as peer-to-peer systems, in which hundreds or thousands of computers may interact. in this paper we present a set of new algorithms that solve the distributed association rule mining problem using far less communication. in addition to being very efficient, the new algorithms are also extremely robust. unlike existing algorithms, they continue to be efficient even when the data is skewed or the partition sizes are imbalanced. we present both experimental and theoretical results concerning the behavior of these algorithms and explain how they can be implemented in different settings. assaf schuster ran wolff the effect of vdu text-presentation rate on reading comprehension and reading speed the effect of video display unit presentation rate on reading performance was investigated. reading material was presented at one of the following presentation-rates: 15, 30, 120, 960 cps, or "instant". in the instant condition, the full text appeared simultaneously on the screen. in the other conditions, text appeared one character at a time starting in the upper left corner of the screen, from left to right and top to bottom. reading comprehension was highest under the 30 cps and instant presentation conditions. total time to perform the reading task was equivalent for all conditions except the 15 cps rate which required a longer time to complete the task. in terms of comprehension and time to perform the task, a slow rate of 15 cps, contrary to previous recommendations, is not desirable for novice computer users. jo w. tombaugh michael d. arkin richard f. 
dillion optimal histograms for hierarchical range queries (extended abstract) nick koudas s. muthukrishnan divesh srivastava tpc-d - the challenges, issues and results this paper covers what we at ncr have learned about the tpc-d benchmark as we executed and published our first set of volume points for the teradata database. areas where customers should read the full disclosure report carefully are pointed out as well as the weaknesses in the benchmark relative to real customer applications. the key execution and optimization elements of the teradata database and the 5100 worldmark platform that contribute to our published results are discussed. ramesh bhashyam type inference in a database programming language we extend an ml- like implicit type system to include a number of structures and operations that are common in database programming including sets, labeled records, joins and projections. we then show that the type inference problem of the system is decidable by extending the notion of principal type schemes to include conditions on substitutions. combined with milner's polymorphic let constructor, our language also supports type inheritance. atsushi ohori peter buneman optimization of real conjunctive queries surajit chaudhuri an experimental vehicle for the user/filing-system interface achieving maximum benefits from the automation of office functions will require that every member of the office staff use the automated system. as a result, an office information system (ois) must be able to support interaction across a user community with a wide range of characteristics. despite the relative importance of the filing system to an ois, there is little knowledge regarding the best filing system structures for particular classes of users. this is at least partially due to the difficulties of conducting experiments. this paper describes a design for a prototype system to ease experimentation. the system provides for relatively easy changes of both filing system and user interface characteristics, and for transportability among different host systems. ronald danielson philippe rolander understanding and improving the user interface design process eric a. fisch an overview of ergonomic considerations in computer systems the ergonomic considerations in computer systems involve an examination of hardware and software components. this paper examines some of the topics that must be considered in the design of a useful system. the state of the art in this area is underdeveloped. thus, there is little hard data upon which to base design decisions. m. l. schneider f. b. arble n. olson d. wolff enabling for mass communication in community design volkmar pipek si in digital libraries nabil r. adam vijayalakshmi atluri igg adiwijaya dec data distributor: for data replication and data warehousing daniel j. dietterich eye tracking the visual search of click-down menus michael d. byrne john r. anderson scott douglass michael matessa incremental computation of complex object queries the need for incremental algorithms for evaluating database queries is well known, but constructing algorithms that work on object-oriented databases (oodbs) has been difficult. the reason is that oodb query languages involve complex data types including composite objects and nested collections. as a result, existing algorithms have limitations in that the kinds of database updates are restricted, the operations found in many query languages are not supported, or the algorithms are too complex to be described precisely. 
we present an incremental computation algorithm that can handle any kind of database updates, can accept any expressions in complex query languages such as oql, and can be described precisely. by translating primitive values and records into collections, we can reduce all query expressions comprehension. this makes the problems with incremental computation less complicated and thus allows us to decribe of two parts: one is to maintain the consistency in each comprehension occurrence and the other is to update the value of an entire expression. the algorithm is so flexible that we can use strict updates, lazy updates, and their combinations. by comparing the performance of applications built with our mechanism and that of equivalent hand written update programs, we show that our incremental algorithm can be iplemented efficiently. hiroaki nakamura the standard factor: ec92 and you pat billingsley bridging the paper and electronic worlds: the paper user interface since its invention millenia ago, paper has served as one of our primary communications media. its inherent physical properties make it easy to use, transport, and store, and cheap to manufacture. despite these advantages, paper remains a second class citizen in the electronic world. in this paper, we present a new technology for bridging the paper and the electronic worlds. in this new technology, the user interface moves beyond the workstation and onto paper itself. we describe paper user interface technology and its implementation in a particular system called xax. walter johnson herbert jellinek leigh klotz ramana rao stuart k. card a graphical interface for speech-based retrieval laura slaughter douglas w. oard vernon l. warnick julie l. harding galen j. wilkerson crim: curricular resources in interactive multimedia the crim project addresses the need for curricular guidelines and educational resources for the interactive multimedia area. a digital library / repository allows educators to submit knowledge modules that will be reviewed and made available for use by teachers or students. recommendations are given for courses and topics, and a process is outlined to reach consensus and improve education. this efforts is connected with the computer science teaching center, http://www.cstc.org/. edward a. fox rachelle s. heller anna long david watkins architecture of a microcomputer based dss generator a decision support system (dss) is an interactive computer-based system used to support decision makers in their decision making activities. normally, it is easier and quicker to build such systems using dss generators (software packages) with related hardware. this paper describes the design and implementation of an experimental microcomputer-based dss generator regimes. the system design is conceptually an extension of sprague's model for organising decision support capabilities. the system is one of the first examples of an integrated dss generator software implemented in basic on a personal computer providing multiple user interfaces, graphics, data base management and modelling capabilities. k. b. c. saxena d. v. gulati m. kaul email overload: exploring personal information management of email steve whittaker candace sidner designing and integrating user interfaces of geographic database applications in this paper, we investigate the problem of designing graphical geographic database user interfaces (gduis) and of integrating them into a database management system (dbms). 
geographic applications may vary widely but they all have common aspects due to the spatial component of their data: geographic data are not standard and they require appropriate tools for (i) editing them (i.e., display and modify) and (ii) querying them. the conceptual problems encountered in designing gduis are partly due to the merger of two independent fields, geographic dbmss on the one hand, and graphical user interfaces (guis) on the other hand. although these areas have evolved considerably during the past ten years, only little effort has been made to understand the problems of connecting them in order to efficiently manipulate geographic data on a display. this issue raises the general problem of coupling a dbms with specialized modules (in particular, the problem of strong vs. weak integration), and more generally the role of a dbms in a specific application. after giving the functionalities that a gdui should provide, we study the possible conceptual integrations between a gui and a dbms. finally, a map editing model as well as a general and modular gdui architecture are presented. agnès voisard associating video with related documents reiko hamada ichiro ide shuichi sakai hidehiko tanaka computing covers for embedded functional dependencies this paper deals with the problem of computing covers for the functional dependencies embedded in a subset of a given relation schema. we show how this problem can be simplified and present a new and efficient algorithm "reduction. by resolution" (rbr) for its solution. though the problem of computing covers for embedded dependencies is inherently exponential, our algorithm behaves polynomially for several classes of inputs. rbr can be used for the solution of some related problems in the theory of database design, such as deciding whether a given database scheme is in boyce-codd normal form or decomposing a scheme into boyce-codd normal form. g. gottlob first order normal form for relational databases and multidatabases witold litwin m. ketabchi ravi krishnamurthy reasoning about naming systems mic bowman saumya k. debray larry l. peterson computer viruses: can it happen at iu? mark sheehan an application-level multicast architecture for multimedia communications (poster session) kang-won lee sungwon ha jia-ru li vaduvur bharghavan context an associative media browser glorianna davenport michael murtaugh improving the performance of lineage tracing in data warehouse satyadeep patnaik marshall meier brian henderson joe hickman brajendra panda talking in circles: a spatially-grounded social environment roy a. rodenstein judith s. donath tools for searching the web (panel) donna harman paul over the effects of jitter on the peceptual quality of video mark claypool jonathan tanner a note on the relational calculus we examine a logical anomaly in codd's relational calculus [1], according to which queries can occasionally return somewhat surprising results. c. j. date schema integration for multidatabases using the unified relational and object- oriented model soon m. chung pyeong s. mah piloting electronic mail in today's office environment this paper is a review of the experiences of the authors with electronic mail and is based upon their use of two different electronic mail systems. the first part of the paper contains a review of the attributes of the commercial systems currently available. the second part of the paper is a review of the use of two different electronic mail systems. 
user response to the electronic mail systems tested is considered to be the key measurement in system acceptability and utilization. the users' expectations, attitudes and responses to the systems are examined. the final part of the paper is a recommendation of how to introduce a pilot electronic mail system into a technically oriented organization. bonnie a. callender e. david callender footprints: history-rich tools for information foraging alan wexelblat pattie maes perceptual user interfaces: perceptual bandwidth byron reeves clifford nass application of aboutness to functional benchmarking in information retrieval experimental approaches are widely employed to benchmark the performance of an information retrieval (ir) system. measurements in terms of recall and precision are computed as performance indicators. although they are good at assessing the retrieval effectiveness of an ir system, they fail to explore deeper aspects such as its underlying functionality and explain why the system shows such performance. recently, inductive (i.e., theoretical) evaluation of ir systems has been proposed to circumvent the controversies of the experimental methods. several studies have adopted the inductive approach, but they mostly focus on theoretical modeling of ir properties by using some metalogic. in this article, we propose to use inductive evaluation for functional benchmarking of ir models as a complement to the traditional experiment-based performance benchmarking. we define a functional benchmark suite in two stages: the evaluation criteria based on the notion of "aboutness," and the formal evaluation methodology using the criteria. the proposed benchmark has been successfully applied to evaluate various well-known classical and logic-based ir models. the functional benchmarking results allow us to compare and analyze the functionality of the different ir models. kam-fai wong dawei song peter bruza chun-hung cheng how can groups communicate when they use different languages? many office systems are based on various types of messages, forms, or other documents. when users of such systems need to communicate with people who use different document types, some kind of translation is necessary. in this paper, we explore the space of general solutions to this translation problem and propose several specific solutions to it. after first illustrating the problem in the information lens electronic messaging system, we identify two partly conflicting objectives that any translation scheme should satisfy: preservation of meaning and autonomous evolution of group languages. then we partition the space of possible solutions to this problem in terms of the set-theoretic relations between group languages and a common language. this leads to four primary solution classes and we illustrate and evaluate each one. finally, we describe a composite scheme that combines many of the best features of the other schemes. even though our examples deal primarily with extensions to the information lens system, the analysis also suggests how other kinds of office systems might exploit specialization hierarchies of document types to simplify the translation problem. computer-based office systems often use various types of messages, forms, and other documents to communicate information. when all the people who wish to communicate with each other use exactly the same types of documents, translation problems do not arise.
however, when people want to communicate with others who use different kinds of documents, some kind of translation is necessary. the needs for such translation seem likely to become increasingly common as more and more diverse kinds of systems are linked into heterogeneous networks. we have been particularly concerned with one instance of this problem that arises in the context of template-based communication systems (e.g., [malone et al. 87b], [tsichritzis 82]). the problem is how users of such systems can communicate with other users who have a different set of templates. other examples of the problem may arise when users of different word processing systems wish to exchange documents, or when different companies wish to use electronic data interchange (edi) standards to exchange business forms such as purchase orders and invoices. in this paper, we will explore the space of general solutions to this translation problem and propose several specific schemes for solving it. our primary goal has been to design extensions to the information lens system ([malone et al. 87a], [malone et al. 87b]) that allow different groups to communicate with each other when (1) the groups use some, but not all, of the same types of messages, and (2) the message types used by each group may change over time. in the first section of the paper, we illustrate the problem as it arises in the context of the information lens system. in the second section, we state the problem more precisely in terms of the objectives that we want its solution to satisfy. in section 3, we explore the space of possible solutions by suggesting a dimension along which to partition the space and examining a representative solution for each class. in section 4, we propose a new scheme that combines most of the desirable features of the solutions we explored. finally, we conclude by hinting at the implications that this research might have for more general contexts. jintae lee thomas w. malone ubiquitous, self-tuning, scalable servers hardware developments allow wonderful reliability and essentially limitless capabilities in storage, networks, memory, and processing power. costs have dropped dramatically. pcs are becoming ubiquitous. the features and scalability of dbms software have advanced to the point where most commercial systems can solve virtually all oltp and dss requirements. the internet and application software packages allow rapid deployment and facilitate a broad range of solutions. peter spiro a message board on www for on-door communication norihisa segawa yuko murayama yasunari nakamoto hiromi gondo masatoshi miyazaki who won the mosaic war? hal berghel standards and the emergence of digital multimedia systems edward a. fox designing audio aura elizabeth d. mynatt maribeth back roy want michael baer jason b. ellis individual user interfaces and model-based user interface software tools egbert schlungbaum metaphor design in user interfaces: how to effectively manage expectation, surprise, comprehension, and delight aaron marcus a critical appraisal of task taxonomies as a tool for studying office activities christopher a. higgins frank r. safayeni hypertext john b. smith stephen f. weiss planning the acoustic urban environment: a gis-centered approach maria piedade g. oliveira eduardo bauzer medeiros clodoveu a. davis an object-oriented data model for the research laboratory this paper presents work in progress on an object-oriented data model for representing the semantics of the research laboratory.
the laboratory computing environment (lce) is an application development software tool designed to improve productivity in the implementation of research laboratory computing systems. the core of the lce is an object-oriented database management system. we describe the complex requirements of laboratory application environments and present motivation for using the object-oriented approach. the lce concept is the result of several years of experience in developing custom laboratory systems. one of the principal difficulties in developing scientific applications is the need to integrate many different software tools in order to represent the numerous abstract data types which are required even in simple applications. an object-oriented data management system, at the outset, appeared most likely to meet the need for tight integration of complex data types. p. kachhwaha r. hogan anchored conversations: chatting in the context of a document this paper describes an application-independent tool called anchored conversations that brings together text-based conversations and documents. the design of anchored conversations is based on our observations of the use of documents and text chats in collaborative settings. we observed that chat spaces support work conversations, but they do not allow the close integration of conversations with work documents that can be seen when people are working together face-to-face. anchored conversations directly addresses this problem by allowing text chats to be anchored into documents. anchored conversations also facilitates document sharing; accepting an invitation to an anchored conversation results in the document being automatically uploaded. in addition, anchored conversations provides support for review, catch-up and asynchronous communications through a database. in this paper we describe motivating fieldwork, the design of anchored conversations, a scenario of use, and some preliminary results from a user study. elizabeth f. churchill jonathan trevor sara bly les nelson davor cubranic wip/ppp: automatic generation of personalized multimedia presentations elisabeth andre jochen muller thomas rist a concurrency control framework for collaborative systems jonathan munson prasun dewan "data in your face": push technology in perspective michael franklin stan zdonik database research activities at the university of vienna a. m. tjoa g. vinek prototyping web applications mario bochicchio roberto paiano modeling data for the summary database richard j. orli domino: databases for moving objects tracking consider a database that represents information about moving objects and their location. for example, for a database representing the location of taxi-cabs a typical query may be: retrieve the free cabs that are currently within 1 mile of 33 n. michigan ave., chicago (to pick-up a customer); or for a trucking company database a typical query may be: retrieve the trucks that are currently within 1 mile of truck abt312 (which needs assistance); or for a database representing the current location of objects in a battlefield a typical query may be: retrieve the friendly helicopters that are in a given region, or, retrieve the friendly helicopters that are expected to enter the region within the next 10 minutes. the queries may originate from the moving objects, or from stationary users. we will refer to applications with the above characteristics as moving-objects-database (mod) applications, and to queries as the ones mentioned above as mod queries. 
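a minimal sketch of the two kinds of mod queries just described - a current range query and a predicted-entry query - assuming objects report a planar position and velocity and move linearly; the field names, units, and data below are illustrative and are not the domino data model.

```python
import math
from dataclasses import dataclass

# illustrative only: field names, units (miles, minutes), and the assumption
# of straight-line motion are not taken from domino.
@dataclass
class MovingObject:
    oid: str
    x: float   # current position (miles)
    y: float
    vx: float  # current velocity (miles per minute)
    vy: float

def within(obj, px, py, radius):
    """current range query: is the object within `radius` miles of (px, py)?"""
    return math.hypot(obj.x - px, obj.y - py) <= radius

def expected_within(obj, px, py, radius, minutes):
    """predicted-entry query: assuming linear motion, will the object come
    within `radius` miles of (px, py) during the next `minutes` minutes?"""
    return any(
        math.hypot(obj.x + obj.vx * t - px, obj.y + obj.vy * t - py) <= radius
        for t in range(int(minutes) + 1)
    )

cabs = [MovingObject("cab1", 0.4, 0.3, 0.0, 0.0),
        MovingObject("cab2", 4.0, 3.0, -0.4, -0.3)]
print([c.oid for c in cabs if within(c, 0.0, 0.0, 1.0)])               # nearby now
print([c.oid for c in cabs if expected_within(c, 0.0, 0.0, 1.0, 10)])  # expected nearby within 10 min
```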
in the military mod applications arise in the context of the digital battlefield (see [1]), and in the civilian industry they arise in transportation systems. for example, omnitracs developed by qualcomm (see[2]) is a commercial system used by the transportation industry, which enables mod functionality. it provides location management by connecting vehicles (e.g. trucks), via satellites, to company databases. the vehicles are equipped with a global positioning system (gps), and they automatically and periodically report their location. ouri wolfson prasad sistla bo xu jutai zhou sam chamberlain hypertext and higher education: a reality check this panel discussion will examine the extent to which today's hypermedia systems are actually used and usable in higher education instruction and research. the panel will seek to foster dialog between hypermedia systems developers, corpus authors, and faculty users. a series of questions will be addressed by the panel and audience, including: how are hypermedia corpuses actually used in classes? how well do today's systems meet the exploratory learning goals so often described as being at the heart of the hypermedia movement? what kinds of assignments make sense pedagogically? are there system features which tend to discourage or prevent such uses? what are the difficulties involved in administering a course using hypertext? are there unusual logistical problems? how does one monitor or manage the annotation and expansion of the corpus during the semester? are today's hypermedia environments, tools, and practices proving to be useful for the faculty researcher? the student researcher? for faculty considering a hypertext to organize data, is periodic porting to new hardware and software a safe bet? or does the faculty member risk marooning two years of work inside an obsolete system? what are the prospects for published hypertext corpuses? what system features are likely to affect the economics of publication and distribution? what are the special problems in creating hypertext materials that can be used on other campuses and in other courses? s. c. erhmann s. erde k. morrell r. f. e. weissman do batch and user evaluations give the same results? _do improvements in system performance demonstrated by batch evaluations confer the same benefit for real users? we carried out experiments designed to investigate this question. after identifying a weighting scheme that gave maximum improvement over the baseline in a non-interactive evaluation, we used it with real users searching on an instance recall task. our results showed the weighting scheme giving beneficial results in batch studies did not do so with real users. further analysis did identify other factors predictive of instance recall, including number of documents saved by the user, document recall, and number of documents seen by the user._ william hersh andrew turpin susan price benjamin chan dale kramer lynetta sacherek daniel olson structural issues in multimedia design linn marks collins hypermedia on the web: what will it take? fabio vitali michael bieber answering queries using limited external query processors (extended abstract) alon y. levy anand rajaraman jeffrey d. ullman human-machine perceptual cooperation francis k. h. quek michael c. petro montage: providing teleproximity for distributed groups john c. tang monica rua human-computer interface development tools: a methodology for their evaluation deborah hix robert s. 
schulman user learning and performance with marking menus gordon kurtenbach william buxton user interface flexibility and limits of effective computer documentation for users this paper looks at the increasing flexibility of computer interfaces and the resulting impact on the effectiveness of the computer manufacturer's documentation (reference manuals, training packages, and so on). the paper identifies the types of documentation that begin to lose effectiveness as the user interface becomes more flexible, and briefly looks at ways to keep that documentation as effective as possible. allan j. henderson on the use of spreading activation methods in automatic information spreading activation methods have been recommended in information retrieval to expand the search vocabulary and to complement the retrieved document sets. the spreading activation strategy is reminiscent of earlier associative indexing and retrieval systems. some spreading activation procedures are briefly described, and evaluation output is given, reflecting the effectiveness of one of the proposed procedures. g. salton c. buckley a virtual media (vmedia) jpeg 2000 interactive image browser jin li hong-hui sun smartwatch: an automated video event finder in this paper, we present an automated video event detection system that combines textual and aural analysis techniques. serhan dagtas tom mcgee mohamed abdel-mottaleb reducing recovery constraints on locking based protocols serializability is the standard correctness criterion for concurrency control. to ensure correctness in the presence of failures, recoverability is also imposed. pragmatic considerations result in further constraints, for instance, the existing log-based recovery implementations that use before- images warrant that transaction executions be strict. strict executions are restrictive, thus sacrificing concurrency and throughput. in this paper we identify the relation between the recovery mechanism and the restrictions imposed by concurrency control protocols. in particular, we propose a new inverse operation that can be integrated with the underlying recovery mechanism. in order to establish the viability of our approach, we demonstrate the new implementation by making minor modifications to the conventional recovery architecture. this inverse operation is also designed to avoid the undesirable phenomenon of cascading aborts when transactions execute conflicting write operations. g. alonso d. agrawal a. el abbadi database research at the university of oklahoma le gruenwald leonard brown ravi dirckze sylvain guinepain carlos sanchez brian summers sirirut vanichayobon automatically generating olap schmata from conceptual graphical models karl hahn carsten sapia markus blaschka a microcomputer implementation of an er query interface bogdan d. czejdo perceptual vs. hardware performance in advanced acoustic interface design this panel brings together experts in the field of non-speech auditory displays with points of view ranging from long-term basic research in human perception to the timely production of useable tools in commercial systems. the panel will examine issues of perceptual validity and engineering performance from several different perspectives representative of current work in the field, and discuss how such issues can or should impact decisions made during technology development. 
panelists' perspectives include: levels of analysis in designing and using auditory interfaces (gaver), an example of what can be learned about implementation requirements for low-level psychophysical studies (wenzel), designing integrated systems to encompass sonification in a three-dimensional environment (foster), issues in the study of information transfer in representational acoustic signals (levkowitz), and the design of a generalized technology platform for acoustic signal presentation (powell). elizabeth m. wenzel making global digital libraries work: collection services, connectivity regions, and collection views carl lagoze david fielding sandra payette making gis closer to end users of urban environment data andrea aime flavio bonfatti paola daniela monari sirio: a distributed information system over a heterogeneous computer network this paper presents the sirio project, a commercial experience within spanish industry, developed for tecnatom s.a. by the data and knowledge bases research group at the technical university of madrid. sirio runs over a heterogeneous local area network, with a client-server architecture, using the following tools: oracle as rdbms, running over a unix server, tcp/ip as a communication protocol, ethernet toolkit for the distributed client-server architecture, and c as the host programming language for the distributed applications (every one of them is rather complex and very different from the rest). the system uses computers with ms-dos, connected to the server over the lan. sirio is mainly based on the conceptual design of an rdb, upon which several distributed applications are operational as big software modules. these applications are: 1.- the inspection programs, the management of their corresponding criteria and the automatic generation of queries. 2.- graphics processing and interface definition. 3.- interactive rdb updating. 4.- historical db management. 5.- massive load of on-field obtained data. 6.- report and query application. the approach of the sirio integrated information system presented here is a pioneering one. there are about two dozen companies worldwide in this field and none has developed such an advanced system to this day. since 1992, sirio has been fully operational in the tecnatom s.a. industry. it constitutes an important tool to obtain the reports (from different plants) for the clients, for the state control organizations, and for the specialized analyst staff. c. r. costilla m. j. bas j. villamor strict histories in object-based database systems rajeev rastogi henry f. korth abraham silberschatz design and evaluation rules for building adaptive schema in an object-oriented data and knowledge base system we develop a selection of design and evaluation rules for building an adaptive schema in an object-oriented data and knowledge base system. this set of style rules includes not only those which we use to preserve validity and minimality of an object-oriented schema, but also those which help us to promote extensibility, reusability and adaptiveness of an object-oriented schema against future requirement changes. we encourage the use of the proposed set of style rules as a means for validating the quality of a schema, and for transforming an object-oriented schema into a better style with regard to adaptiveness and robustness, rather than as a user-oriented method solely for designing the schema.
ling liu widening the net (workshop) (abstract only): the theory and practice of physical and electronic communities this 1.5 day workshop will bring together designers and researchers working on on-line communities, to discuss: (a) existing understanding of real-world communities; (b) experiences with the behaviour, implementation and design of on-line communities; (c) lessons from "traditional" cscw systems. we will develop design goals for such communities and identify outstanding research issues. steve whittaker ellen isaacs vicki o'day the silicon graphics customer research and usability group mike mohageg the best and worst of teams: self-directed work teams as an information systems development workforce strategy brian d. janz virtual environments at work: ongoing use of muds in the workplace in recent years much attention has been paid to network-based, distributed environments like text-based muds and moos for supporting collaborative work. such environments offer a shared virtual world in which interactions can take place irrespective of the actual physical proximity or distance of interactants. although these environments have proven successful within social, recreational and educational domains, few data have been reported concerning use of such systems in the workplace. in this paper we summarize in-depth interviews with 8 mudders from a software research and development community where a mud has been operational and actively used for a number of years. the interviews suggest that the mud fills a valuable communication niche for this workgroup, being used both synchronously and asynchronously to enable the establishment of new contacts and the maintenance of existing contacts. these observations are discussed in the context of the organization under study. elizabeth f. churchill sara bly help desk metamorphosis (from being despised to being valued) daniel e. wilson design of an opac database to permit different subject searching accesses in a multi-disciplines universities library catalogue database this paper presents searching approaches and user interface capabilities of duo, an online public access catalogue (opac) designed to permit the users of three universities of the northeast of italy different subject searching accesses to the co-operative multi-disciplines library catalogue database. the co-operative catalogue database is managed by one of the software systems developed under the italian national project for library automation: the sbn project. since the sbn database has not been designed to be efficiently accessed for end-user searches, the duo database has been designed to avoid duplication of the sbn database data and to be usable for making efficient subject accesses to the catalogue documents. the duo design choices are presented, in particular the main choice of designing a "virtual" document that corresponds to each sbn document and that has unstructured data usable for subject search purposes. the paper presents a new kind of user-opac dialogue that makes available to the user different search approaches and on-line dictionaries. in particular the user during the interaction with the search tool can represent his information needs with the support of interface capabilities that are based on retrieval path history, and words and codes on-line dictionaries. duo is the first italian opac that has been made openly available to users of universities and research institutions.
for this reason, it is also the first time that opac log data is going to be collected in italy. this work mainly intends to make a modern opac available to the users of a sbn catalogue database, but it will also permit building up knowledge of opac usage in italy. maristella agosti maurizio masotti multidimensional access methods search operations in databases require special support at the physical level. this is true for conventional databases as well as spatial databases, where typical search operations include the point query (find all objects that contain a given search point) and the region query (find all objects that overlap a given search region). more than ten years of spatial database research have resulted in a great variety of multidimensional access methods to support such operations. we give an overview of that work. after a brief survey of spatial data management in general, we first present the class of point access methods, which are used to search sets of points in two or more dimensions. the second part of the paper is devoted to spatial access methods to handle extended objects, such as rectangles or polyhedra. we conclude with a discussion of theoretical and experimental results concerning the relative performance of various approaches. volker gaede oliver gunther how to find it (abstract): research issues in distributed search udi manber separable hyperstructure and delayed link binding david f. brailsford a research program to assess user perceptions of group work support computer support for group work is a technological innovation receiving considerable attention from developmental researchers. this paper reports the preliminary results from two surveys which assessed user perceived needs for various types of group work support. the instruments, distributed to managers and professionals in a variety of organizations, described group support scenarios and associated functions/tools and asked for an assessment of their usefulness to one of the respondent's organizational work groups. support for between-meetings group work was perceived to be more useful than support for either face-to-face or electronic meetings. common single user tools were generally perceived to be more useful than multi-user group tools. individual differences and implications are addressed. john satzinger lorne olfman efficient optimization of simple chase join expressions simple chase join expressions are relational algebra expressions, involving only projection and join operators, defined on the basis of the functional dependencies associated with the database scheme. they are meaningful in the weak instance model, because for certain classes of schemes, including independent schemes, the total projections of the representative instance can be computed by means of unions of simple chase join expressions. we show how unions of simple chase join expressions can be optimized efficiently, without constructing and chasing the corresponding tableaux. we also present efficient algorithms for testing containment and equivalence, and for optimizing individual simple chase join expressions. paolo atzeni edward p. f. chan modelling user, system design: results of a scenarios matrix exercise nick hammond phil barnard joelle coutaz michael harrison allan maclean richard m. young an analytic model in the study of physical database reorganization (abstract only) an analytic model is defined to be used in the study of automated physical database reorganization.
periodic reorganizations are called for so that an acceptable database performance level may be maintained. database performance is defined in terms of the physical structures which comprise the database. the performance of a particular structure is measured in terms of its storage, retrieval, and update costs. a reorganization is called for when the current performance level, or cost, of the database exceeds the sum of the projected cost (following a reorganization) and the cost of performing a reorganization. recommendations regarding structural changes are made by the model. the model addresses three such possibilities - the choice of a primary storage structure, the selection of the optimal set of secondary indices, and cleanup of overflow chains. the results of experiments which were conducted using the model are presented. the model is evaluated regarding its usefulness in database performance studies, its generalizability to various database environments, and its extendibility. julia hodges data models in database management it is a combination of three components: 1) a collection of data structure types (the building blocks of any database that conforms to the model); 2) a collection of operators or inferencing rules, which can be applied to any valid instances of the data types listed in (1), to retrieve or derive data from any parts of those structures in any combinations desired; 3) a collection of general integrity rules, which implicitly or explicitly define the set of consistent database states or changes of state or both--- these rules may sometimes be expressed as insert-update-delete rules. e. f. codd document quality indicators and corpus editions corpus editions can only be useful to scholars when users know what to expect of the texts. we argue for text quality indicators, both general and domain- specific. jeffrey a. rydberg-cox anne mahoney gregory r. crane reconciling objects and multilevel security t. f. keefe editorial ravi sandhu multimodal interfaces for dynamic interactive maps sharon oviatt structural matching and discovery in document databases structural matching and discovery in documents such as sgml and html is important for data warehousing [6], version management [7, 11], hypertext authoring, digital libraries [4] and internet databases. as an example, a user of the world wide web may be interested in knowing changes in an html document [2, 5, 10]. such changes can be detected by comparing the old and new version of the document (referred to as structural matching of documents). as another example, in hypertext authoring, a user may wish to find the common portions in the history list of a document or in a database of documents (referred to as structural discovery of documents). in sigmod 95 demo sessions, we exhibited a software package, called treediff [13], for comparing two latex documents and showing their differences. given two documents, the tool represents the documents as ordered labeled trees and finds an optimal sequence of edit operations to transform one document (tree) to the other. an edit operation could be an insert, delete, or change of a node in the trees. the tool is so named because documents are represented and compared using approximate tree matching techniques [9, 12, 14]. jason tsong-li wang dennis shasha george j. s. chang liam relihan kaizhong zhang girish patel locking primitives in a database system henry f. korth cover story: structural web search using a graph-based discovery system nitish manocha diane j. cook lawrence b. 
holder using metadata for the intelligent browsing of structured media objects william i. grosky farshad fotouhi ishwar k. sethi bogdan capatina on conjunctive queries containing inequalities conjunctive queries are generalized so that inequality comparisons can be made between elements of the query. algorithms for containment and equivalence of such "inequality queries" are given, under the assumption that the data domains are dense and totally ordered. in general, containment does not imply the existence of homomorphisms (containment mappings), but the homomorphism property does exist for subclasses of inequality queries. a minimization algorithm is defined using the equivalence algorithm. it is first shown that the constants appearing in a query can be divided into "essential" and "nonessential" subgroups. the minimum query can be nondeterministically guessed using only the essential constants of the original query. anthony klug integration of data base design in programming languages in 1978, the astra research group at the university of trondheim was formed to create a prototype of a relational data base machine with appropriate software. a major part of this development was the definition and implementation of an integrated data definition and manipulation language astral (1). tore amble bridging across content and tools glorianna davenport the use of scenarios in design bonnie a. nardi distributed query processing using active networks zhili zhang william perrizo enhancing the usability of an office information system through direct manipulation in office information systems, the primary focus has been to integrate facilities for the communication and management of information. however, the human factors aspects of the design of office systems are equally important considerations if such office systems are to gain widespread acceptance and use. the application of design techniques from human factors can help enhance the usability of an office system. in this paper, we describe the user interface of an office system developed by adapting such design techniques. alison lee f. h. lochovsky tool-based approach to distributed database design: includes web-based forms design for access to academic affairs data david a. owens frederick t. sheldon image databases organization and data models boris rachev irena valova mariana stoeva statistical semantics: how can a computer use what people name things to guess what things people mean when they name things? the descriptors or categories assigned to entries in an information system form the basis of most retrieval mechanisms (e.g., menu or key word). these descriptors are the primary means of communication between system designers and end users. in this paper we analyze some of the factors which influence this communication link. our goal is to uncover some psychological principles that will help us to understand naming and describing behavior and thus improve the communication between designers and users. in traditional communication (e.g., conversation) the communicator can accommodate to different listeners, both by shifting perspective and by attending to explicit feedback from the listener. in describing items in a data base, however, system designers are at a disadvantage in that they do not usually get explicit, immediate, and continuous feedback from users. 
knowing how people describe common objects and shift their descriptions for audiences of different levels of sophistication may help designers build systems whose information is accessible to the widest possible audience. george w. furnas louis m. gomez thomas k. landauer susan t. dumais generating a dynamic hypertext environment with n-gram analysis claudia pearce charles nicholas a three-step filtering mechanism masashi uyama commentary gillian crampton smith scenario? guilty! morten kyng content + connectivity => community: digital resources for a learning community gary marchionini victor nolet hunter williams wei ding josephus beale anne rose allison gordon ernestine enomoto lynn harbinson mgen - a generator for menu driven programs many interactive computer applications can with advantage be built after the menu principle. mgen is a program that automatically builds a skeleton of a menu driven program. it is a well known problem that users of programs normally cannot state what they really want until they see the finished program running. with mgen, the resulting program can be simulated by the use of examples. to enable efficient utilization by skilled users, a feature called "quick selection" is implemented. bertil friman dialogue structures for virtual worlds j. bryan lewis lawrence koved daniel t. ling pages, books, the web, and virtual reality: a response to negroponte's "books without pages" r. stanley dicks argohalls: adding support for group awareness to the argo telecollaboration system hania gajewska mark manasse dave redell predicting the time to recall computer command abbreviations a goms theory of stimulus-response compatibility is shown to predict response- time performance on a command/abbreviation encoding task. working with parameters that were set by an earlier study and which have rational, task- meaningful interpretations as mapping, motor, perception and retrieval operators, zero- parameter predictions were made that fit the observed performance with r2 = 0.776 (p<0.05). the reasonableness of the parameters, the algorithms used to generate the predictions, and the weighting assumption used to combine algorithms into a single prediction are discussed. bonnie e. john allen newell aquanet: a hypertext tool to hold your knowledge in place catherine c. marshall frank g. halasz russell a. rogers william c. janssen the ccube constraint object-oriented database system alexander brodsky victor e. segal jia chen paval a. exarkhopoulo concurrency control in trusted database management systems: a survey recently several algorithms have been proposed for concurrency control in a trusted database management system (tdbms). the various research efforts are examining the concurrency control algorithms developed for dbmss and adapting them for a multilevel environment. this paper provides a survey of the concurrency control algorithms for a tdbms and discusses future directions. bhavani thuraisingham hai-ping ko interactive city planning using multimedia representation aids michael j. shiffer partitioned two-phase locking in a large integrated database, there often exists an "information hierarchy," where both raw data and derived data are stored and used together. therefore, among update transactions, there will often be some that perform only read accesses from a certain (i.e., the "raw" data) portion of the database and write into another (i.e., the "derived" data) portion. 
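a toy illustration of this transaction class - reading only raw data and writing only derived data - run under plain two-phase locking; this is a baseline sketch of the setting with invented names, not the partitioned two-phase locking protocol proposed in the paper.

```python
import threading
from collections import defaultdict

# invented example: raw sensor readings and a derived summary table.
raw = {"sensor_a": 10, "sensor_b": 32}
derived = {}
locks = defaultdict(threading.Lock)   # one exclusive lock per item, raw or derived

def recompute_total(raw_keys, out_key):
    # two-phase rule: acquire every needed lock (growing phase) before
    # releasing any (shrinking phase); a fixed acquisition order avoids deadlock.
    needed = ["raw:" + k for k in sorted(raw_keys)] + ["derived:" + out_key]
    for name in needed:
        locks[name].acquire()
    try:
        derived[out_key] = sum(raw[k] for k in raw_keys)  # reads raw, writes derived
    finally:
        for name in reversed(needed):
            locks[name].release()

recompute_total(["sensor_a", "sensor_b"], "total")
print(derived)   # {'total': 42}
```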
a conventional concurrency control algorithm would have treated such transactions as regular update transactions and subjected them to the usual protocols for synchronizing update transactions. in this paper such transactions are examined more closely. the purpose is to devise concurrency control methods that allow the computation of derived information to proceed without interfering with the updating of raw data. the first part of the paper presents a proof method for correctness of concurrency control algorithms in a hierarchically decomposed database. the proof method provides a framework for understanding the intricacies in dealing with hierarchically decomposed databases. the second part of the paper is an application of the proof method to show the correctness of a two-phase-locking-based algorithm, called partitioned two-phase locking, for hierarchically decomposed databases. this algorithm is a natural extension to the version pool method proposed previously in the literature. meichun hsu arvola chan groupware comes to the internet: charting a new world early experiences with web-based groupware point to new collaboration opportunities within and between organizations. we report the results of a study of more than 100 organizations that have used web-based groupware to understand better how they are using it and what advantages and disadvantages they have experienced. we then use these data to develop a framework for analyzing and assessing the fit of groupware systems to organizational needs. we close with a discussion of the evolution and future of groupware. bradley c. wheeler alan r. dennis laurence i. press european research in visual interfaces (panel): experiences and perspectives the goal of this panel is to discuss some significant examples of current r&d projects related to visual interfaces, which are part of the different r&d programmes partially funded by the european commission, and in particular of the esprit programme. even though the panel will concentrate on user interface issues, the scope of the presented projects is fairly broad: from user interface development technologies (as in hyperface and interactors), to banking and financial environments and applications (as in fast), to hypermedia platforms and applications (as in miners), to navigational user interfaces to traditional information systems (as in hifi), to multimedia educational environments (as in multed), to multimedia medical information (as in milord), to 3d interaction with data bases (as in fadiva). all the presented projects make large use of multimedia concepts and technologies. roberto polillo sebastiano bagnara heinz-dieter d. böcker antonio cantatore alessandro d'atri paolo paolini isis: interface for a semantic information system kenneth j. goldman sally a. goldman paris c. kanellakis stanley b. zdonik a spatial match representation scheme for indexing and querying in iconic image databases jae-woo chang yeon-jung kim ki-jin chang the last word aaron weiss aqr-toolkit: an adaptive query routing middleware for distributed data intensive systems query routing is an intelligent service that can direct query requests to appropriate servers that are capable of answering the queries. the goal of a query routing system is to provide efficient associative access to a large, heterogeneous, distributed collection of information providers by routing a user query to the most relevant information sources that can provide the best answer.
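to make the source-selection side of such routing concrete, the sketch below ranks registered sources by how well their exported term summaries cover the query terms and routes the query to the top few; the summary format and the scoring function are assumptions made for illustration, not the aqr-toolkit interface.

```python
# illustrative source selection: favor sources whose exported term summaries
# cover the query terms, weighting by both coverage and document counts.
def select_sources(query_terms, source_summaries, k=3):
    """source_summaries: {source_name: {term: document_count}} (assumed format)."""
    def score(summary):
        volume = sum(summary.get(t, 0) for t in query_terms)
        coverage = sum(1 for t in query_terms if t in summary)
        return volume * coverage
    ranked = sorted(source_summaries, key=lambda s: score(source_summaries[s]), reverse=True)
    return [s for s in ranked[:k] if score(source_summaries[s]) > 0]

sources = {
    "movie_db":  {"film": 1200, "actor": 900},
    "patents":   {"claim": 5000, "inventor": 3000},
    "news_wire": {"film": 80, "election": 4000},
}
print(select_sources(["film", "actor"], sources, k=2))   # -> ['movie_db', 'news_wire']
```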
effective query routing not only minimizes the query response time and the overall processing cost, but also eliminates a lot of unnecessary communication overhead over the global networks and over the individual information sources. the aqr-toolkit divides the query routing task into two cooperating processes: query refinement and source selection. it is well known that a broadly defined query inevitably produces many false positives. query refinement provides mechanisms to help the user formulate queries that will return more useful results and that can be processed efficiently. as a complementary process, source selection reduces false negatives by identifying and locating a set of relevant information providers from a large collection of available sources. by pruning irrelevant information sources, source selection also reduces the overhead of contacting the information servers that do not contribute to the answer of the query. the system architecture of aqr-toolkit consists of a hierarchical network (a directed acyclic graph) with external information providers at the leaves and query routers as mediating nodes. the end-point information providers support query-based access to their documents. at a query router node, a user may browse and query the meta information about information providers registered at that query router or make use of the router's facilities for query refinement and source selection. ling liu calton pu david buttler wei han henrique paques wei tang distributed hypermedia in support of corporate memory (abstract) gerard hutchings chris scott wendy hall towards constructive axiomatic specifications the main goal of our efforts is the specification of data bases whose structure and behaviour are restricted by semantic integrity constraints. research proceeds along the following stages: 1. choose a data model; 2. establish a taxonomy of integrity constraints; 3. specify the data model formally; 4. develop a methodology for specifying conceptual schemas for different applications, based on the data model; 5. provide a conceptual characterization of query and update operations. c. s. dos santos a. l. furtado j. m.v. de castilho s. e.r. de carvalho optimizing search by showing results in context we developed and evaluated seven interfaces for integrating semantic category information with web search results. list interfaces were based on the familiar ranked-listing of search results, sometimes augmented with a category name for each result. category interfaces also showed page titles and/or category names, but re-organized the search results so that items in the same category were grouped together visually. our user studies show that all category interfaces were more effective than list interfaces even when lists were augmented with category names for each result. the best category performance was obtained when both category names and individual page titles were presented. either alone is better than a list presentation, but both together provide the most effective means for allowing users to quickly examine search results. these results provide a better understanding of the perceptual and cognitive factors underlying the advantage of category groupings and provide some practical guidance to web search interface designers. susan dumais edward cutrell hao chen towards the integration of a query mechanism and navigation for retrieval of data on multimedia documents e. duval h.
olivie secure statistical databases with random sample queries a new inference control, called random sample queries, is proposed for safeguarding confidential data in on-line statistical databases. the random sample queries control deals directly with the basic principle of compromise by making it impossible for a questioner to control precisely the formation of query sets. queries for relative frequencies and averages are computed using random samples drawn from the query sets. the sampling strategy permits the release of accurate and timely statistics and can be implemented at very low cost. analysis shows the relative error in the statistics decreases as the query set size increases; in contrast, the effort required to compromise increases with the query set size due to large absolute errors. experiments performed on a simulated database support the analysis. dorothy e. denning selectivity estimation for boolean queries in a variety of applications ranging from optimizing queries on alphanumeric attributes to providing approximate counts of documents containing several query terms, there is an increasing need to quickly and reliably estimate the number of strings (tuples, documents, etc.) matching a boolean query. boolean queries in this context consist of substring predicates composed using boolean operators. while there has been some work in estimating the selectivity of substring queries, the more general problem of estimating the selectivity of boolean queries over substring predicates has not been studied. our approach is to extract selectivity estimates from relationships between the substring predicates of the boolean query. however, storing the correlation between all possible predicates in order to provide an exact answer to such predicates is clearly infeasible, as there is a super-exponential number of possible combinations of these predicates. instead, our novel idea is to capture correlations in a space-efficient but approximate manner. we employ a monte carlo technique called set hashing to succinctly represent the set of strings containing a given substring as a signature vector of hash values. correlations among substring predicates can then be generated on-the-fly by operating on these signatures. we formalize our approach and propose an algorithm for estimating the selectivity of any boolean query using the signatures of its substring predicates. we then experimentally demonstrate the superiority of our approach over a straightforward approach based on the independence assumption wherein correlations are not explicitly captured. zhiyuan chen nick koudas flip korn s. muthukrishnan disk allocation for cartesian product files on multiple-disk systems cartesian product files have recently been shown to exhibit attractive properties for partial match queries. this paper considers the file allocation problem for cartesian product files, which can be stated as follows: given a k-attribute cartesian product file and an m-disk system, allocate buckets among the m disks in such a way that, for all possible partial match queries, the concurrency of disk accesses is maximized. the disk modulo (dm) allocation method is described first, and it is shown to be strict optimal under many conditions commonly occurring in practice, including all possible partial match queries when the number of disks is 2 or 3.
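a minimal sketch of the disk modulo idea just described, assuming a bucket is identified by its vector of attribute coordinates; the domain sizes and the partial match query below are illustrative only.

```python
from itertools import product

# disk modulo (dm) placement: a bucket of a k-attribute cartesian product file,
# identified by its coordinate vector (i1, ..., ik), goes to disk (i1 + ... + ik) mod m.
def dm_disk(bucket, m):
    return sum(bucket) % m

# a partial match query fixes some attributes and leaves the rest unspecified (None);
# the qualifying buckets should spread over as many disks as possible.
def disks_touched(query, domain_sizes, m):
    free = [range(domain_sizes[i]) if v is None else [v] for i, v in enumerate(query)]
    return {dm_disk(bucket, m) for bucket in product(*free)}

# 3 attributes with domains of size 4 over m = 2 disks; the query fixes the
# first attribute and leaves the other two free.
print(disks_touched((2, None, None), [4, 4, 4], m=2))   # -> {0, 1}
```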
it is also shown that although it has good performance, the dm allocation method is not strict optimal for all possible partial match queries when the number of disks is greater than 3. the general disk modulo (gdm) allocation method is then described, and a sufficient but not necessary condition for strict optimality of the gdm method for all partial match queries and any number of disks is then derived. simulation studies comparing the dm and random allocation methods in terms of the average number of disk accesses, in response to various classes of partial match queries, show the former to be significantly more effective even when the number of disks is greater than 3, that is, even in cases where the dm method is not strict optimal. the results that have been derived formally and shown by simulation can be used for more effective design of optimal file systems for partial match queries. when considering multiple-disk systems with independent access paths, it is important to ensure that similar records are clustered into the same or similar buckets, while similar buckets should be dispersed uniformly among the disks. h. c. du j. s. sobolewski concepts for transaction recovery in nested transactions the concept of nested transactions offers more decomposable execution units and finer-grained control over recovery and concurrency as compared to 'flat' transactions. to exploit these advantages, transaction recovery in particular has to be refined and adjusted to the requirements of the control structure. in this paper, we investigate transaction recovery for nested transactions. therefore, a model for nested transactions is introduced allowing for synchronous and asynchronous transaction invocation as well as single call and conversational interfaces. for the resulting four parameter combinations, the properties and dependencies of transaction recovery are explored if a transaction is 'unit of recovery' and if savepoints within transactions are used to gain finer recovery units. theo haerder kurt rothermel elicitation queries to the excite web search engine cq: a personalized update monitoring toolkit the cq project at ogi, funded by darpa, aims at developing a scalable toolkit and techniques for update monitoring and event-driven information delivery on the net. the main feature of the cq project is a "personalized update monitoring" toolkit based on continual queries [3]. compared with the pure pull (such as dbmss, various web search engines) and pure push (such as pointcast, marimba, broadcast disks) technologies, the cq project can be seen as a hybrid approach that combines the pull and push technology by supporting personalized update monitoring through a combined client-pull and server-push paradigm. ling liu calton pu wei tang david buttler john biggs tong zhou paul benninghoff wei han fenghua yu interactive video michael k. stenzler richard r. eckert teamrooms: groupware for shared electronic spaces mark roseman saul greenberg qoc in action (abstract): using design rationale to support design diane mckerlie allan maclean hill climbing algorithms for content-based retrieval of similar configurations the retrieval of stored images matching an input configuration is an important form of content-based retrieval.
exhaustive processing (i.e., retrieval of the best solutions) of configuration similarity queries is, in general, exponential and fast search for sub-optimal solutions is the only way to deal with the vast (and ever increasing) amounts of multimedia information in several real-time applications. in this paper we discuss the utilization of hill climbing heuristics that can provide very good results within limited processing time. we propose several heuristics, which differ on the way that they search through the solution space, and identify the best ones depending on the query and image characteristics. finally we develop new algorithms that take advantage of the specific structure of the problem to improve performance. dimitris papadias spaces without places: politicizing nicholas negroponte's technologizing of books without pages brad mehlenbacher the influence of video in desktop computer interactions ronald h. nowaczyk terri l. thomas darryall o. white video widgets and video actors simon gibbs christian breiteneder vicki de mey michael papathomas framing implementation management angela lin tony conford another look at automatic text-retrieval systems evidence from available studies comparing manual and automatic text-retrieval systems does not support the conclusion that intellectual content analysis produces better results than comparable automatic systems. gerard salton groups without groupware alan wexelblat hydro: a heterogeneous distributed database system william perrizo joseph rajkumar prabhu ram correcting execution of distributed queries algorithms for processing distributed queries require a priori estimates of the size of intermediate relations. most such algorithms take a "static" approach in which the algorithm is completely determined before processing begins. if size estimates are found to be inaccurate at some intermediate stage, there is no opportunity to re-schedule, and the result may be far from optimal. adaptive query execution may be used to alleviate the problem. care is necessary, though, to ensure that the delay associated with re-scheduling does not exceed the time saved through the use of a more efficient strategy. this paper presents a low overhead delay method to decide when to correct a strategy. sampling is used to estimate the size of relations, and alternative heuristic strategies prepared in a background mode are used to decide when to correct. correction is made only if lower overall delay is achieved, including correction time. evaluation using a model of a distributed data base indicates that the heuristic strategies are near optimal. moreover, it also suggests that it is usually correct to abort creation of an intermediate relation which is much larger than predicted. p. bodorik j. pyra j. s. riordon explicit query formulation with visual keywords this paper presents a novel framework called visual keywords for indexing and retrieving digital images. visual keywords are flexible and intuitive visual prototypes with semantics. a new query method based on visual constraints allows direct and explicit content specification. last but not least, we have developed a digital album prototype to demonstrate query and retrieval based on visual keywords. 
joo-hwee lim formal syntax and semantics of a reconstructed relational database system dan jonsson fast density estimation using cf-kernel for very large databases tian zhang raghu ramakrishnan miron livny overview of the first trec conference the first text retrieval conference (trec-1) was held in early november 1992 and was attended by about 100 people working in the 25 participating groups. the goal of the conference was to bring research groups together to discuss their work on a new large test collection. there was a large variety of retrieval techniques reported on, including methods using automatic thesauri, sophisticated term weighting, natural language techniques, relevance feedback, and advanced pattern matching. as results had been run through a common evaluation package, groups were able to compare the effectiveness of different techniques, and discuss how differences among the systems affected performance. donna harman video browsing using 3d video content trees knut manske rabbit: an interface for database access a new kind of user interface for information retrieval has been designed and implemented to aid users in formulating a query. the system, called rabbit, relies upon a new paradigm for retrieval by reformulation, based on a psychological theory of human remembering. the paradigm actually evolved from an explicit attempt to design a 'natural' interface which imitated human retrieval processes. to make a query in rabbit, the user interactively refines partial descriptions of his target item(s) by criticizing successive example (and counterexample) instances that satisfy the current partial description. instances from the database are presented to the user from a perspective inferred from the user's query description and the structure of the knowledge base. among other things, this constructed perspective reminds users of likely terms to use in their descriptions, enhances their understanding of the meaning of given terms, and prevents them from creating certain classes of semantically improper query descriptions. rabbit particularly facilitates users who approach a database with only a vague idea of what it is that they want and who thus need to be guided in the (re)formulation of their queries. rabbit is also of substantial value to casual users who have limited knowledge of a given database or who must deal with a multitude of databases. michael d. williams frederich n. tou portholes: supporting awareness in a distributed work group we are investigating ways in which media space technologies can support distributed work groups through access to information that supports general awareness. awareness involves knowing who is "around", what activities are occurring, who is talking with whom; it provides a view of one another in the daily work environments. awareness may lead to informal interactions, spontaneous connections, and the development of shared cultures - all important aspects of maintaining working relationships which are denied to groups distributed across multiple sites. the portholes project, at rank xerox europarc in cambridge, england, and xerox parc in palo alto, california, demonstrates that awareness can be supported across distance. a data network provides a shared database of image information that is regularly updated and available at all sites. initial experiences of the system in use at europarc and parc suggest that portholes both supports shared awareness and helps to build a "sense of community".
paul dourish sara bly building multi-discipline digital libraries (poster) michael l. nelson kurt maly stewart n. t. shen peripheral participants in mediated communication andrew f. monk leon a. watts meaning-making in the creation of useful summary reports summary reports are the periodic assemblings of text, numbers, and other data, drawn from diverse sources to present a picture of some aspect of an organization's state. they have become ubiquitous in organizations with the advent of computers, but are not always as useful as their readers would like them to be. this paper focuses on the meaning-making work that report contributors and readers must do in order for reports to be useful and presents some examples drawn from everyday interactions in a business unit of a large corporation. the paper uses these examples as a foundation for asking what it might mean to purposefully support meaning-making in organizational reporting. barbara katzenberg john mcdermott an information structure dealing with term dependence and polysemy an information structure (is) that is regarded as a formal description of a domain of discourse is proposed. this is is aimed at increasing the effectiveness of an information retrieval system. it is shown how the retrieval algorithm can take into account the term dependencies that are provided by the is. moreover, these term dependencies can be used by an automatic indexing procedure in order to interpret polysemic terms. the theoretical framework of our is has some favorable properties. as a consequence, the construction and maintenance of such an is is simpler than that of a thesaurus. p. schauble powers of ten thousand: navigating in large information spaces how would you interactively browse a very large display space, for example, a street map of the entire united states? the traditional solution is zoom and pan. but each time a zoom-in operation takes place, the context from which it came is visually lost. sequential applications of the zoom-in and zoom-out operations may become tedious. this paper proposes an alternative technique, the macroscope, based on zooming and panning in multiple translucent layers. a macroscope display should comfortably permit browsing continuously on a single image, or set of images in multiple resolutions, on a scale of at least 1 to 10,000. henry lieberman an extensible query optimizer for an objectbase management system m. tamer özsu adriana muñoz duane szafron building user interfaces interactively using pre- and postconditions a tool is presented which allows graphic layout of a user interface integrated with specification of behavior using pre- and postconditions. martin r. frank j. j. de graaff daniel f. gieskens james d. foley a groupware engine using uims methodologies lever wang book preview michael p. papazoglou stefano spaccapierta zahir tari cell tuple based spatio-temporal data model: an object oriented approach ale raza wolfgang kainz optimization of the number of copies in a distributed data base we consider the effect on system performance of the distribution of a data base in the form of multiple copies at distinct sites. the purpose of our analysis is to determine the gain in read throughput that can be obtained in the presence of consistency preserving algorithms that have to be implemented when update operations are carried out on each copy. we show that read throughput diminishes if the number of copies exceeds an optimal value.
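the entry just above on optimizing the number of copies in a distributed data base reasons that read throughput first rises and then falls as copies are added, because consistency-preserving work grows with the number of copies. the toy model below is an illustrative assumption of that shape (per-site coordination cost growing with the number of copies), not the ring-algorithm analysis of the paper; all parameter names and values are invented for the example.

    def read_throughput(n, capacity=100.0, update_rate=8.0,
                        update_cost=2.0, coord_cost=0.6):
        """toy model: each of the n sites has fixed capacity. every update is
        applied at every copy, and its consistency protocol adds per-site work
        that grows with the number of copies (coord_cost * n). whatever
        capacity is left at each site serves reads."""
        per_site_update_load = update_rate * (update_cost + coord_cost * n)
        spare = capacity - per_site_update_load
        return max(0.0, n * spare)

    for n in range(1, 13):
        print(n, round(read_throughput(n), 1))
    best = max(range(1, 13), key=read_throughput)
    print("read throughput peaks at", best, "copies in this toy model")

the table shows throughput climbing, peaking, and then declining, which is the qualitative behavior the abstract describes.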
the theoretical model we develop is applied to a system in which consistency is preserved through the use of ellis's ring algorithm. e. g. coffman e. gelenbe b. plateau implementing digital libraries (panel session) this panel will address some of the practical issues of implementing digital libraries. everyone seems to agree that digital libraries are in their infancy. many of us watch in amazement as new developments occur. in the year 2000 building a production digital library means analyzing tradeoffs between stability, innovation, and costs/benefits. in this panel we will address some of the more interesting issues of digital library implementation. one of the critical challenges is the economics of digital libraries. institutions are forming consortia out of necessity and this raises issues of trust, cooperation and commitment. in order for institutions to deliver web-based resources effectively sophisticated cross-organizational access management tools are needed to authenticate and authorize. institutions are developing new tools and exporting them from research departments into libraries. how well does this collaboration work? the above issues are only a sampling of the challenges this panel will explore. rebecca wesley dan greenstein david millman margery tibbetts gregory zick turbo-charging vertical mining of large databases in a vertical representation of a market-basket database, each _item_ is associated with a column of values representing the transactions in which it is present. the association-rule mining algorithms that have been recently proposed for this representation show performance improvements over their classical horizontal counterparts, but are either efficient only for certain database sizes, or assume particular characteristics of the database contents, or are applicable only to specific kinds of database schemas. we present here a new vertical mining algorithm called viper, which is general-purpose, making no special requirements of the underlying database. viper stores data in compressed bit-vectors called "snakes" and integrates a number of novel optimizations for efficient snake generation, intersection, counting and storage. we analyze the performance of viper for a range of synthetic database workloads. our experimental results indicate significant performance gains, especially for large databases, over previously proposed vertical and horizontal mining algorithms. in fact, there are even workload regions where viper outperforms an optimal, but practically infeasible, horizontal mining algorithm. pradeep shenoy jayant r. haritsa s. sundarshan gaurav bhalotia mayank bawa devavrat shah business: a day-in-the-life of a customer centered design consultant deborah mrazek amy silverman summary of the chi'87 doctoral consortium tom carey cscw collaboration is a sign of maturation. a child is nurtured at home before venturing into the world. an idea is first developed then introduced to a larger community. we work on a software module in isolation before integrating it. jonathan grudin usability problem identification using both low- and high-fidelity prototypes robert a. virzi jeffrey l. sokolov demetrios karis framboise - an approach to framework-based active database management system construction hans fritschi stella gatziu klaus r. dittrich preserving update semantics in schema integration in this paper, we propose a methodology for schema integration where the semantics of updates is preserved during the view integration process. 
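the viper entry above describes a vertical representation in which each item is a bit-vector over transactions and support is computed by intersecting those vectors. the sketch below shows that general idea with plain python integers as bitmaps; viper's compressed "snakes" and its optimizations are not reproduced, and the helper names are invented for the example.

    from functools import reduce

    transactions = [
        {"bread", "milk"},
        {"bread", "butter"},
        {"milk", "butter", "bread"},
        {"milk"},
    ]

    def build_vertical(transactions):
        """vertical layout: one integer bitmap per item, with bit i set
        when the item occurs in transaction i."""
        bitmaps = {}
        for i, basket in enumerate(transactions):
            for item in basket:
                bitmaps[item] = bitmaps.get(item, 0) | (1 << i)
        return bitmaps

    def support(itemset, bitmaps):
        """support of an itemset = popcount of the AND of its item bitmaps."""
        combined = reduce(lambda a, b: a & b, (bitmaps[i] for i in itemset))
        return bin(combined).count("1")

    bitmaps = build_vertical(transactions)
    print(support({"bread", "milk"}, bitmaps))   # -> 2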
we propose to divide view integration into three steps: combination, restructuring, and optimization. in the view combination step, we define the combined schema that contains all original views, plus a new set of constraints that express how data in distinct views are interrelated. the restructuring step is devoted to normalizing the views so that merging becomes possible. the optimization step tries to reduce redundancy and the size of the schema. our methodology defines a set of transformation primitives that allows schema integration to be realized in a safe (information preserving) and algorithmic way. in the proposed transformation primitives, the relationship between the original and transformed schema is formally specified by the instance and update mappings. we introduce the notion of an update semantics preserving transformation, which guarantees that the relationships between each view and the global schema, originated during the view integration process, reflect exactly the relationships between the views as defined by the combined schema. in our approach, the view definition mappings and the view update translator can be directly defined from the instance and update mappings between the different intermediate schemas generated during the view integration process. vânia m. p. vidal marianne winslett automatic discovery of language models for text databases the proliferation of text databases within large organizations and on the internet makes it difficult for a person to know which databases to search. given language models that describe the contents of each database, a database selection algorithm such as gioss can provide assistance by automatically selecting appropriate databases for an information need. current practice is that each database provides its language model upon request, but this cooperative approach has important limitations. this paper demonstrates that cooperation is not required. instead, the database selection service can construct its own language models by sampling database contents via the normal process of running queries and retrieving documents. although random sampling is not possible, it can be approximated with carefully selected queries. this sampling approach avoids the limitations that characterize the cooperative approach, and also enables additional capabilities. experimental results demonstrate that accurate language models can be learned from a relatively small number of queries and documents. jamie callan margaret connell aiqun du r&d; for a nationwide general-purpose system sung hyon myaeng the effect of frame rate and video information redundancy on the perceptual learning of american sign language gestures b. f. johnson j. k. caird access methods betty salzberg using icons to find documents: simplicity is critical a common task at almost any computer interface is that of searching for documents, which guis typically represent with icons. oddly, little research has been done on the processes underlying icon search. this paper outlines the factors involved in icon search and proposes a model of the process. an experiment was conducted which suggests that the proposed model is sound, and that the most important factor in searching for files is the type of icons used. in general, simple icons (those discriminable based on a few features) seem to help users, while complex icons are no better than simple rectangles. michael d. 
byrne the toughest web user interface challenges richard miller keith rettig counting, enumerating, and sampling of execution plans in a cost-based query optimizer testing an sql database system by running large sets of deterministic or stochastic sql statements is common practice in commercial database development. however, code defects often remain undetected as the query optimizer's choice of an execution plan is not only depending on the query but strongly influenced by a large number of parameters describing the database and the hardware environment. modifying these parameters in order to steer the optimizer to select other plans is difficult since this means anticipating often complex search strategies implemented in the optimizer. in this paper we devise algorithms for counting, exhaustive generation, and uniform sampling of plans from the complete search space. our techniques allow extensive validation of both generation of alternatives, and execution algorithms with plans other than the optimized one \---if two candidate plans fail to produce the same results, then either the optimizer considered an invalid plan, or the execution code is faulty. when the space of alternatives becomes too large for exhaustive testing, which can occur even with a handful of joins, uniform random sampling provides a mechanism for unbiased testing. the technique is implemented in microsoft's sql server, where it is an integral part of the validation and testing process. florian waas cesar galindo-legaria quantifying coordination in multiple dof movement and its application to evaluating 6 dof input devices: shumin zhai paul milgram duplicate record elimination in large data files the issue of duplicate elimination for large data files in which many occurrences of the same record may appear is addressed. a comprehensive cost analysis of the duplicate elimination operation is presented. this analysis is based on a combinatorial model developed for estimating the size of intermediate runs produced by a modified merge-sort procedure. the performance of this modified merge-sort procedure is demonstrated to be significantly superior to the standard duplicate elimination technique of sorting followed by a sequential pass to locate duplicate records. the results can also be used to provide critical input to a query optimizer in a relational database system. dina bitton david j. dewitt automating interface evaluation michael d. byrne scott d. wood james d. foley david e. kieras piyawadee noi sukaviriya on the containment and equivalence of database queries with linear constraints (extended abstract) oscar h. ibarra jianwen su just-in-time databases and the world-wide web ellen spertus lynn andrea stein integration of external applications into hypermedia systems within heterogeneous distributed environments (abstract) ajit bapat data quality in internet time, space, and communities (panel session) yang w. lee paul l. bowen james d. funk matthias jarke stuart e. madnick yair wand mash: enabling scalable multipoint collaboration steven mccanne eric brewer randy katz elan amir yatin chawathe todd hodes ketan mayer-patel suchitra raman cynthia romer angela schuett andrew swan teck-lee tung tina wong kristin wright a utility-theoretic analysis of expected search length in this paper the expected search length, which is a measure of retrieval system performance, is investigated from the viewpoint of axiomatic utility theory. 
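the duplicate record elimination entry above (bitton and dewitt) advocates removing duplicates inside the merge-sort itself rather than sorting first and scanning afterwards, since runs shrink as merging proceeds. the in-memory sketch below illustrates only that merging step; the paper's external runs and cost model are not modeled.

    import heapq

    def merge_eliminating_duplicates(runs):
        """k-way merge of sorted runs that drops duplicate records on the fly,
        so later merge passes operate on already-shrunken runs."""
        out = []
        last = object()            # sentinel distinct from any record
        for record in heapq.merge(*runs):
            if record != last:
                out.append(record)
                last = record
        return out

    runs = [[1, 3, 3, 7], [2, 3, 5, 7, 7], [1, 4, 7]]
    print(merge_eliminating_duplicates(runs))   # -> [1, 2, 3, 4, 5, 7]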
necessary and sufficient criteria for the expected search length to be an ordinal scale and sufficient criteria that it is a ratio scale are given. p. bollmann v. v. raghavan temporal fds on complex objects temporal functional dependencies (tfd) are defined for temporal databases that include object identity. it is argued that object identity can overcome certain semantic difficulties with existing temporal relational data models. practical applications of tfds in object bases are discussed. reasoning about tfds is at the center of this paper. it turns out that the distinction between acyclic and cyclic schemas is significant. for acyclic schemas, a complete axiomatization for finite implication is given and an algorithm for deciding finite implication is provided. the same axiomatization is proven complete for unrestricted implication in unrestricted schemas, which can be cyclic. an interesting result is that there are cyclic schemas for which unrestricted and finite implication do not coincide. tfds relate and extend some earlier work on dependency theory in temporal databases. throughout this paper, the construct of tfd is compared with the notion of temporal fd introduced by wang et al. (1997). a comparison with other related work is provided at the end of the article. jef wijsen the active badge system andy hopper andy harter tom blackie research in information retrieval and the practical needs of research and cultural libraries encarnacion rancitelli working the net: internet audio gets down to business jill h. ellsworth closure maintenance in an object-oriented query model an object algebra is presented as a formal query model for object-oriented data models. the algebra serves not only to access and manipulate the structure and behavior of objects, but it also supports the creation of new objects and the introduction of new relationships into the schema. it provides a more powerful and flexible tool than messages for effectively dealing with complex situations and meeting associative access requirements. operands as well as the results of operations in the proposed algebra are formally characterized as a pair of sets---a set of objects capturing the states and a set of message expressions comprised of sequences of messages modeling the object behavior. the closure property is achieved in a natural way by letting the results of operations possess the same characteristics as the operands in an algebra expression. some operators of the algebra resemble those of the relational algebra but with different syntax and semantics. additional operators are introduced to complement them. a class is shown to possess the properties of an operand by defining a set of objects and deriving a set of message expressions for it. furthermore, the result of an object algebra expression is shown to have the characteristics of a class whose superclass/subclass relationships with its operand class(es) can be established providing a mechanism to properly and persistently place it in the class lattice (schema). reda alhajj faruk polat does animation in user interfaces improve decision making? cleotilde gonzalez message system mores: etiquette in laurel douglas k. brotz user-centered design and the "vision thing" susan m. dray david a. siegel research-guided design of multimedia research tools robert j. beichner real3 communication and aromatic group computing: hci and cscw research at canon media technology lab.
hideyuki tamura yuichi bannai musicfx: an arbiter of group preferences for computer supported collaborative workouts joseph e. mccarthy theodore d. anagnost searching for content-based addresses on the world-wide web joe d. martin robert holte a hypermedia-based plant-visualization-system as a means for optimizing the maintenance procedures of complex production plants (abstract) h. husemann j. reichel h.-d. kochs from single-user architectural design to pac*: a generic software architecture model for cscw gaëlle calvary joëlle coutaz laurence nigay information artisans: patterns of result sharing by information searchers vicki l. o'day robin jeffries algebraic support for complex objects with arrays, identity, and inheritance scott l. vandenberg david j. dewitt pen computing (abstract): the new frontier while all types of computers are ultimately turing machines, this "reduction-and-abstraction" obscures the way people actually experience computing. about every 10 years, a new type of machine is invented that changes our understanding of computation. from the mysterious glass house mainframes of the 1960s, ministered by white-coated experts, to today's friendly personal computers, our sense of the role of computing in our lives has undergone a dramatic transformation. the next wave of this evolution is the "pen computer." a small group of early pioneers has quietly redefined how most people will experience computation in the future, and refocused what we will do with this power. the pen computer derives its heritage from pen and paper, rather than from the typewriter, as personal computers do. it can be carried and used as a folder or notebook would, rather than as a piece of office equipment in a box. these new social characteristics allow a pen computer to be used in settings where keyboard devices are inappropriate or disruptive. data is entered by handwriting, sketching, and scribbling, rather than typing, pointing, and clicking. images are manipulated directly on a visually active surface, rather than remotely. the subjective experience of using a pen computer is unique and utterly personal. this talk will attempt to present the new face that computing will have for future users. s. jerrold kaplan patterns of entry and correction in large vocabulary continuous speech recognition systems clare-marie karat christine halverson daniel horn john karat information wheel - a framework to identify roles of information systems in this paper, the roles of different information systems in an organization are identified using a framework called "information wheel". an information wheel is similar to a physical wheel in that it consists of a hub, spokes, rim and a tire. the parts of a physical wheel are made of material such as steel, rubber and plastics. the parts of the information wheel are made of different information systems. c. s. sankar online support systems stuart a. selber johndan johnson-eilola brad mehlenbacher data model issues for object-oriented applications presented in this paper is the data model for orion, a prototype database system that adds persistence and sharability to objects created and manipulated in object-oriented applications. the orion data model consolidates and modifies a number of major concepts found in many object-oriented systems, such as objects, classes, class lattice, methods, and inheritance.
these concepts are reviewed and three major enhancements to the conventional object-oriented data model, namely, schema evolution, composite objects, and versions, are elaborated upon. schema evolution is the ability to dynamically make changes to the class definitions and the structure of the class lattice. composite objects are recursive collections of exclusive components that are treated as units of storage, retrieval, and integrity enforcement. versions are variations of the same object that are related by the history of their derivation. these enhancements are strongly motivated by the data management requirements of the orion applications from the domains of artificial intelligence, computer-aided design and manufacturing, and office information systems with multimedia documents. jay banerjee hong-tai chou jorge f. garza won kim darrell woelk nat ballou hyoung-joo kim special tactical tasks solutions with geographical information system and metadata aleksander kolev georgi pavlov query model-based content-based image retrieval: similarity definition, application and automation - abstract horst eidenberger animating user interfaces using animation servers krishna bharat piyawadee noi sukaviriya technique for universal quantification in sql universal quantification is expressed in ansi sql with negated existential quantification because there is no direct support for universal quantification in ansi sql. however, the lack of explicit support for universal quantification diminishes the userfriendliness of the language because some queries are expressed more naturally using universal quantification than they are using negated existential quantification. it is the intent of this paper to describe a technique to facilitate the construction of universal quantification queries in ansi sql. the technique is based upon a proposed extension to ansi sql to incorporate explicit general support for universal quantification. claudio fratarcangeli the role of hypermedia in multimedia information systems wendy hall duplex: a distributed collaborative editing environment in large scale duplex is a distributed collaborative editor for users connected through a large-scale environment such as the internet. large-scale implies heterogeneity, unpredictable communication delays and failures, and inefficient implementations of techniques traditionally used for collaborative editing in local area networks. to cope with these unfavorable conditions, duplex proposes a model based on splitting the document into independent parts, maintained individually and replicated by a kernel. users act on document parts and interact with co-authors using a local environment providing a safe store and recovery mechanisms against failures or divergence with co-authors. communication is reduced to a minimum, allowing disconnected operation. atomicity, concurrency, and replica control are confined to a manageable small context. françois pacull alain sandoz andre schiper palette: a paper interface for giving presentations les nelson satoshi ichimura elin rønby pedersen lia adams half-qwerty: a one-handed keyboard facilitating skill transfer from qwerty edgar matias i. scott mackenzie william buxton user performance with command, menu, and iconic interfaces performance and subjective reactions of 76 users of varying levels of computer experience were measured with 7 different interfaces representing command, menu, and iconic interface styles. 
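the entry above on universal quantification in sql notes that, without direct support, "for all" queries must be written as doubly negated "not exists" subqueries. the equivalence can be illustrated outside sql with plain predicates; the relation and column names below are invented for the example.

    # toy relations: which suppliers supply which parts
    supplies = {("s1", "p1"), ("s1", "p2"), ("s2", "p1")}
    parts = {"p1", "p2"}
    suppliers = {"s1", "s2"}

    # "suppliers who supply every part", written with a direct universal quantifier
    direct = {s for s in suppliers
              if all((s, p) in supplies for p in parts)}

    # the same query in the negated-existential style sql required:
    # suppliers for whom there does NOT EXIST a part they do NOT supply
    negated = {s for s in suppliers
               if not any((s, p) not in supplies for p in parts)}

    print(direct == negated, direct)   # -> True {'s1'}

the point of the proposed extension is precisely to let users write the first form instead of the second.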
the results suggest three general conclusions: there are large usability differences between contemporary systems, there is no necessary tradeoff between ease of use and ease of learning, interface style is not related to performance or preference (but careful design is). difficulties involving system feedback, input forms, help systems, and navigation aids occurred in all styles of interface: command, menu, and iconic. new interface technology did not solve old human factors problems. john whiteside sandra jones paula s. levy dennis wixon sensemaker: an information-exploration interface supporting the contextual evolution of a user's interests michelle q. wang baldonado terry winograd synthesizing auditory icons william w. gaver visual interaction design: rites of passage maria g. wadlow social choice theory and distributed decision making strategies of distributed decision making based on social choice theory can be used to create a balance between organizational complexity and uncertainty. although group decision support systems (gdss's) have included options for making human collective choices, their design has not been based on optimal rules. social choice theory can also be used to improve the reliability of decisions made by nodes in distributed computer networks. three examples illustrate the application of this theory: human computer-mediated distributed decision making, electing a coordinator to reorganize a failed distributed network, and using weighted votes to improve network reliability. arnold b. urken evolution of data modeling for databases shamkant b. navathe what's happening jennifer bruer getting the most from paired-user testing daniel wildman image wave ryotaro suzuki yuichi iwadate michihiko minoh self-spacial join selectivity estimation using fractal concepts the problem of selectivity estimation for queries of nontraditional databases is still an open issue. in this article, we examine the problem of selectivity estimation for some types of spatial queries in databases containing real data. we have shown earlier [faloutsos and kamel 1994] that real point sets typically have a nonuniform distribution, violating consistently the uniformity and independence assumptions. moreover, we demonstrated that the theory of fractals can help to describe real point sets. in this article we show how the concept of fractal dimension, i.e., (noninteger) dimension, can lead to the solution for the selectivity estimation problem in spatial databases. among the infinite family of fractal dimensions, we consider here the hausdorff fractal dimension d0 and the "correlation" fractal dimension d2. specifically, we show that (a) the average number of neighbors for a given point set follows a power law, with d2 as exponent, and (b) the average number of nonempty range queries follows a power law with e d0 as exponent (e is the dimension of the embedding space). we present the formulas to estimate the selectivity for "biased" range queries, for self-spatial joins, and for the average number of nonempty range queries. the result of some experiments on real and synthetic point sets are shown. our formulas achieve very low relative errors, typically about 10%, versus 40%--100% of the formulas that are based on the uniformity and independence assumptions. alberto belussi christos faloutsos an experimental distributed modeling system gary j. nutt evolution of a reactive environment jeremy r. cooperstock koichiro tanikoshi garry beirne tracy narine william a. s. 
buxton navigation in electronic worlds: a chi 97 workshop susanne jul george w. furnas designing menu display format to match input device format gary perlman leo c. sherwin what's happening marisa campbell evaluation of evaluation in information retrieval tefko saracevic a security machanism for statistical database the problem of user inference in statistical databases is discussed and illustrated with several examples. it is assumed that the database allows "total," "average," "count," and "percentile" queries; a query may refer to any arbitrary subset of the database. methods for protecting the security of such a database are considered; it is shown that any scheme which gives "statistically correct" answers is vulnerable to penetration. a precise definition of compromisability (in a statistical sense) is given. a general model of user inference is proposed; two special cases of this model appear to contain all previously published strategies for compromising a statistical database. a method for protecting the security of such a statistical database against these types of user inference is presented and discussed. it is shown that the number of queries required to compromise the database can be made arbitrarily large by accepting moderate increases in the variance of responses to queries. a numerical example is presented to illustrate the application of the techniques discussed. leland l. beck kdd-cup 2000: question 1 winner's report aron inger nurit vatnik saharon rosset einat neumann verifiable properties of database transactions michael benedikt timothy griffin leonid libkin the human computer interaction laboratory's 12th annual symposium and open house wendy a. kellogg john c. thomas information system behavior specification by high level petri nets the specification of an information system should include a description of structural system aspects as well as a description of the system behavior. in this article, we show how this can be achieved by high- level petri nets--- namely, the so-called nr/t-nets (nested-relation/transition nets). in nr/t-nets, the structural part is modeled by nested relations, and the behavioral part is modeled by a novel petri net formalism. each place of a net represents a nested relation scheme, and the marking of each place is given as a nested relation of the respective type. insert and delete operations in a nested relational database (nf2-database) are expressed by transitions in a net. these operations may operate not only on whole tuples of a given relation, but also on "subtuples" of existing tuples. the arcs of a net are inscribed with so-called filter tables, which allow (together with an optional logical expression as transition inscription) conditions to be formulated on the specified (sub-) tuples. the occurrence rule for nr/t-net transitions is defined by the operations union, intersection, and "negative" in lattices of nested relations. the structure of an nr/t-net, together with the occurrence rule, defines classes of possible information system procedures, i.e., sequences of (possibly concurrent) operations in an information system. 
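the fractal selectivity entry earlier (belussi and faloutsos) states that for real point sets the average number of neighbors within distance r follows a power law whose exponent is the "correlation" fractal dimension d2. the sketch below estimates d2 as the slope of that power law on a log-log plot; it is a simplified pair-counting estimator, not the paper's selectivity formulas.

    import math, random

    def correlation_dimension(points, radii):
        """estimate d2 as the slope of log(average number of neighbors
        within r) versus log(r), fitted by least squares."""
        logs = []
        for r in radii:
            pairs = 0
            for i, p in enumerate(points):
                for q in points[i + 1:]:
                    if math.dist(p, q) <= r:
                        pairs += 1
            avg_neighbors = 2.0 * pairs / len(points)
            logs.append((math.log(r), math.log(avg_neighbors + 1e-12)))
        n = len(logs)
        mx = sum(x for x, _ in logs) / n
        my = sum(y for _, y in logs) / n
        return (sum((x - mx) * (y - my) for x, y in logs)
                / sum((x - mx) ** 2 for x, _ in logs))

    random.seed(0)
    uniform_square = [(random.random(), random.random()) for _ in range(400)]
    print(round(correlation_dimension(uniform_square, [0.02, 0.04, 0.08, 0.16]), 2))
    # close to 2 for a uniform 2-d point set; lower for skewed, fractal-like data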
andreas oberweis peter sander a uniform interface to networked library services mitchell blumenfeld ralph droms elastic windows: a hierarchical multi-window world-wide web browser eser kandogan ben shneiderman from contemporary workflow process automation to adaptive and dynamic work activity coordination and collaboration amit sheth formal methods in computer human interaction: comparison, benefits, open questions fabio paternó gregory abowd philippe palanque distributed and parallel knowledge discovery (workshop session) (title only) hillol kargupta philip chan vipin kumar zoran obradovic crowded collaborative virtual environments steve benford chris greenhalgh david lloyd supporting workspace awareness in groupware (video program) (abstract only) real-time groupware systems often let each participant control their own view into a shared workspace. however, when collaborators do not share the same view they lose their awareness about where and how others are interacting with the workspace artifacts. we have designed a number of add-on awareness windows that help people regain this awareness. two general strategies and several variations are illustrated in this video that extend work done in a few other groupware systems. first, radar overviews shrink the entire workspace to fit within a single window. awareness is indicated by overlaying the overview with boxes representing others' viewports, by telepointers that show where they are working, and by seeing changes to objects in the workspace as they are made. the workspace can be represented within the radar overview as a scaled miniature, by stylized objects, or by its semantic structure. second, two types of detailed views show some or all of what another person can see, providing awareness of fine-grained details of others' actions. carl gutwin saul greenberg mark roseman inverse mapping in the handle management system (poster) varna puvvada roy h. campbell hci in south america: current status and future directions felipe afonso de almeida andre gradvohl luciano meneghetti clearboard: a seamless medium for shared drawing and conversation with eye contact this paper introduces a novel shared drawing medium called clearboard. it realizes (1) a seamless shared drawing space and (2) eye contact to support realtime and remote collaboration by two users. we devised the key metaphor: "talking through and drawing on a transparent glass window" to design clearboard. a prototype of clearboard is implemented based on the "drafter-mirror" architecture. this paper first reviews previous work on shared drawing support to clarify the design goals. we then examine three metaphors that fulfill these goals. the design requirements and the two possible system architectures of clearboard are described. finally, some findings gained through the experimental use of the prototype, including the feature of "gaze awareness", are discussed. hiroshi ishii minoru kobayashi retrospective on a year of participatory design using the pictive technique pictive is a participatory design technique for increasing the direct and effective involvement of users and other stakeholders in the design of software. this paper reviews a year of the use of pictive on products and research prototypes at bellcore. what we have learned is illustrated through five brief case studies. the paper concludes with a summary of our current pictive practice, expressed as three developing, interrelated models: an object model, a process model, and a participation model. michael j.
muller using wordnet to disambiguate word senses for text retrieval this paper describes an automatic indexing procedure that uses the "is-a" relations contained within wordnet and the set of nouns contained in a text to select a sense for each polysemous noun in the text. the result of the indexing procedure is a vector in which some of the terms represent word senses instead of word stems. retrieval experiments comparing the effectiveness of these sense-based vectors vs. stem-based vectors show the stem-based vectors to be superior overall, although the sense-based vectors do improve the performance of some queries. the overall degradation is due in large part to the difficulty of disambiguating senses in short query statements. an analysis of these results suggests two conclusions: the is-a links define a generalization/specialization hierarchy that is not sufficient to reliably select the correct sense of a noun from the set of fine sense distinctions in wordnet; and missing correct matches because of incorrect sense resolution has a much more deleterious effect on retrieval performance than does making spurious matches. ellen m. voorhees the desktop metaphor as an approach to user interface design (panel discussion) jeff a. johnson david c. smith frank e. ludolph charles h. irby a chase too far? in a previous paper we proposed a novel method for generating alternative query plans that uses chasing (and back-chasing) with logical constraints. the method brings together use of indexes, use of materialized views, semantic optimization and join elimination (minimization). each of these techniques is known separately to be beneficial to query optimization. the novelty of our approach is in allowing these techniques to interact systematically, e.g., non-trivial use of indexes and materialized views may be enabled only by semantic constraints. we have implemented our method for a variety of schemas and queries. we examine how far we can push the method in terms of complexity of both schemas and queries. we propose a technique for reducing the size of the search space by "stratifying" the sets of constraints used in the (back)chase. the experimental results demonstrate that our method is practical (i.e., feasible _and_ worthwhile). lucian popa alin deutsch arnaud sahuguet val tannen computers as communicators: designing a multimedia interface that facilitates cultural understanding among sixth graders amanda ropa on estimating the cardinality of the projection of a database relation we present an analytical formula for estimating the cardinality of the projection on certain attributes of a subset of a relation in a relational database. this formula takes into account a priori knowledge of the semantics of the real-world objects and relationships that the database is intended to represent. experimental testing of the formula shows that it has an acceptably low percentage error, and that its worst-case error is smaller than the best-known formula. furthermore, the formula presented here has the advantage that it does not require a scan of the relation. rafiul ahad k. v. bapa dennis mcleod lh*rs: a high-availability scalable distributed data structure using reed solomon codes lh*rs is a new high-availability scalable distributed data structure (sdds). the data storage scheme and the search performance of lh*rs are basically those of lh*. lh*rs manages in addition the parity information to tolerate the unavailability of k ≥ 1 server sites.
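the lh*rs entry above adds parity records so that data held on unavailable servers can be reconstructed. the sketch below shows the idea only for the simplest case of a single parity bucket (plain byte-wise xor, tolerating one unavailable site); lh*rs itself uses reed-solomon codes so that any k unavailable sites can be tolerated, which this sketch does not attempt.

    def xor_parity(buckets):
        """parity bucket for equal-length data buckets: byte-wise xor."""
        parity = bytearray(len(buckets[0]))
        for bucket in buckets:
            for i, byte in enumerate(bucket):
                parity[i] ^= byte
        return bytes(parity)

    def reconstruct(surviving, parity):
        """rebuild the single missing bucket from the survivors and the parity."""
        return xor_parity(list(surviving) + [parity])

    buckets = [b"alpha***", b"bravo***", b"charlie*"]
    parity = xor_parity(buckets)
    lost = 1
    rebuilt = reconstruct([b for i, b in enumerate(buckets) if i != lost], parity)
    print(rebuilt == buckets[lost])   # -> True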
the value of k scales with the file, to prevent the reliability decline. the parity calculus uses the reed -solomon codes. the storage and access performance overheads to provide the high- availability are about the smallest possible. the scheme should prove attractive to data-intensive applications. witold litwin thomas schwarz enforcing strong object typing in flexible hypermedia pedro furtado h. madeira the whiteboard: metaphor: a double-edged sword william hudson interoperability in an open application environment steven pemberton book review: email style book jumps on the internet bandwagon lorrie faith cranor preparing a presentation martin smith why control of the concurrency level in distributed systems is more fundamental than deadlock management over the past years, stress has been put on global deadlock processing in distributed database management systems. this paper presents the main results of evaluation studies which were intended to provide clues for the choice of a concurrency control mechanism for the scot project. the relationship between deadlock management algorithm and concurrency level is exhibited. their respective influence on system performance is studied in the light of simulations of several concurrency control techniques. a good algorithm for concurrency control is that which controls the concurrency level while solving, as a side effect, the deadlock question. r. balter p. berard p. decitre efficient processing of vague queries using a data stream approach ulrich pfeifer norbert fuhr studying the integration of technology with collaborative object workspaces (cows) munir mandviwalla shariq khan real-time decision making (panel) steven m. jacobs tools & techniques for visual design development (abstract) loretta staples cooperative work in the andrew message system the andrew message system, a distributed system for multi-media electronic communication, has a number of special features that support cooperative work. after a brief discussion of the system itself, these features are described and discussed in more detail. examples of how organizations actually use these features are then presented and discussed, with particular attention paid to the "advisor" system for electronic consulting. nathaniel s. borenstein chris a. thyberg serfing the web: web site management made easy elke a. rundensteiner kajal t. claypool li chen hong su keiji oenoki recognition and reasoning in an awareness support system for generation of storyboard-like views of recent activity awareness support system are based on formal and specific context information such as location, or on video-mediated general context information such asa view into a remote office. we propose a new approach based on fusion of these different kinds of context information. in this approach we distinguish white box context, used by the awareness system for reasoning, and black box context, which can only be interpreted by humans. our approach uses a variety of perception techniques to obtain white box context from audio and video streams. white box context is then used for further processing of context information, for instance to derive additional context. it is further used to generate a storyboard-like multimedia representation of collected and extracted context information. this storyboard provides a condensed view of recent activity to collaboration partners. datong chen hans-werner gellersen fusionnet: joining the internet and phone networks for multimedia applications m. reha civanlar glenn l. 
cash barry g. haskell enhancing data warehouse performance through query caching aditya n. saharia yair m. babad design for individuals, design for groups: tradeoffs between power and workspace awareness carl gutwin saul greenberg semantic data modeling of hypermedia associations many important issues in the design and implementation of hypermedia system functionality focus on the way interobject connections are represented, manipulated, and stored. a prototypic system called hb1 is being designed to meet the storage needs of next-generation hypermedia system architectures. hb1 is referred to as a hyperbase management system (hbms) because it supports, not only the storage and manipulation of information, but the storage and manipulation of the connectivity data that link information together to form hypermedia. among hb1's distinctions is its use of a semantic network database system to manage physical storage. here, basic semantic modeling concepts as they apply to hypermedia systems are reviewed, and experiences using a semantic database system in hb1 are discussed. semantic data models attempt to provide more powerful mechanisms for structuring objects than are provided by traditional approaches. in hb1, it was necessary to abstract interobject connectivity, behaviors, and information for hypermedia. building on top of a semantic database system facilitated such a separation and made the structural aspects of hypermedia conveniently accessible to manipulation. this becomes particularly important in the implementation of structure-related operations such as structural queries. our experience suggests that an integrated semantic object-oriented database paradigm appears to be superior to purely relational, semantic, or object- oriented methodologies for representing the structurally complex interrelationships that arise in hypermedia. john l. schnase john j. leggett david l. hicks ron l. szabo perspectives on database theory mihalis yannakakis the design and evaluation of an auditory-enhanced scrollbar stephen a. brewster peter c. wright alistair d. n. edwards comprehending large-scale connectivity in object-bases venu vasudevan musical information retrieval using melodic surface massimo melucci nicola orio internet-based information management technology gail e. kaiser getting some perspective: using process descriptions to index document history process descriptions are used in workflow and related systems to describe the flow of work and organisational responsibility in business processes, and to aid in coordination. however, the division of a working process into a sequence of steps provides only a partial view of the work involved. in many cases, the performance of individual tasks in a larger process may depend on interpretations and understandings of how other aspects of the work were conducted. we present an example from an ethnographic investigation of one particular organisation, and introduce a mechanism, which we call "perspectives," for dealing with it. a "perspective" uses the process description to provide an index into the history of a document moving through a process. perspectives allow workflow systems to manage and present information about the execution of specific process instances within the general frame of abstract process descriptions. paul dourish richard bentley rachel jones allan maclean an argo telecollaboration session hania gajewska jay kistler mark s. manasse david d. 
redell incorporating hierarchy in a relational model of data we extend the relational model of data to allow classes as attribute values, thereby permitting the representation of hierarchies of objects. inheritance, including multiple inheritance with exceptions, is clearly supported. facts regarding classes of objects can be stored and manipulated in the same way as facts regarding object instances. our model is upwards compatible with the standard relational model. h. v. jagadish perceptual user interfaces: affective perception rosalind w. picard the role of expectations in human-computer interaction this paper describes a pilot study on the role of expectations in human- computer interaction on a decision-making task. participants (n=70) were randomly assigned to one of 5 different computer partners or to a human partner. after completing the rankings for the desert survival task, participants engaged in a dialog with their computer or human partners. results revealed that interaction with human partners was more expected and more positively evaluated than interaction with computer agents. in addition, the addition of human-like qualities to computer interfaces did not increase expectedness or evaluations as predicted. correlation analysis for the five computer conditions demonstrated that expectations and evaluations do effect influence and perceptions of the partner. discussion focuses on ways to coordinate expectations, interface design, and task objectives. joseph a. bonito judee k. burgoon bjorn bengtsson implementing crash recovery in quickstore: a performance study seth j. white david j. dewitt west: a web browser for small terminals we describe west, a web browser for small terminals, that aims to solve some of the problems associated with accessing web pages on hand-held devices. through a novel combination of text reduction and focus+context visualization, users can access web pages from a very limited display environment, since the system will provide an overview of the contents of a web page even when it is too large to be displayed in its entirety. to make maximum use of the limited resources available on a typical hand-held terminal, much of the most demanding work is done by a proxy server, allowing the terminal to concentrate on the task of providing responsive user interaction. the system makes use of some interaction concepts reminiscent of those defined in the wireless application protocol (wap), making it possible to utilize the techniques described here for wap-compliant devices and services that may become available in the near future. staffan björk lars erik holmquist johan redström ivan bretan rolf danielsson jussi karlgren kristofer franzen context interchange: sharing the meaning of data michael siegel stuart e. madnick extensible database management systems michael carey laura haas sgisl: a distributed computational environment supporting environmental and ecological research ray ford what's in a scenario? peter wright context and orientation in hypermedia networks the core of hypermedia's power lies in the complex networks of links that can be created within and between documents. however, these networks frequently overwhelm the user and become a source of confusion. within intermedia, we have developed the web view-a tool for viewing and navigating such networks with a minimum of user confusion and disorientation. 
the key factors in the web view's success are a display that combines a record of the user's path through the network with a map of the currently available links; a scope line that summarizes the number of documents and links in the network; and a set of commands that permit the user to open documents directly from the web view. kenneth utting nicole yankelovich ubiquitous audio: capturing spontaneous collaboration debby hindus chris schmandt squashing flat files flatter william dumouchel chris volinsky theodore johnson corinna cortes daryl pregibon the cmc range war: an investigation into user preferences for email and vmail kathryn a. marold gwynne larsen a theory of organization joseph i. b. gonzales a featural approach to command names a variety of aspects of command names have been studied, such as suggestiveness, memorability, and the use of icons. a single framework for these disparate studies is desirable, and it is proposed that the concept of featural analysis prevalent in linguistics and psycholinguistics be adopted as an approach to command name design. examples of the breadth of application of this approach are given for the naming issues of suggestiveness, learning and memory, congruence and hierarchicalness, universal commands, the relationships of names to the command language syntax, and the use of non-words as names. jarrett rosenberg office documents on a database kernel - filing, retrieval, and archiving one of the main component of integrated office systems is the large central filing system. it efficiently stores, retrieves and searches office documents containing text, images, graphics, data and voice. we propose to implement a filing system on top of the darmstadt database system (dasdbs), which is designed as a data management kernel for both standard and non- standard applications. this paper investigates the choice of appropriate storage structures for the filing system objects and the realization of the system by using the kernel. furthermore we discuss the efficient retrieval support of office objects by signatures and introduce a new archival approach by using storage media like optical disks. p. zabback h. b. paul u. deppisch exploring color in interface design hal shubin deborah falck ati gropius johansen repository interactions (working session) william l. scherlis web site review: adsl forum shawn brown estimating block accesses in database organizations: a closed noniterative formula kyu-young whang gio wiederhold daniel sagalowicz an agent-based approach to the construction of floristic digital libraries j. alfredo sanchez cristina a. lopez john l. schnase distributed decision making: a research agenda clyde w. holsapple andrew b. whinston collective dynabases larry press engineering practice and codevelopment of codevelopment of product prototypes william l. anderson william t. crocca bringing all the "users" to the centre andrew clement do users get what they want? janet low bob malcolm steve woolgar the animated interface of the erectable hypergraphic trainer (abstract) serge demeyer a dynamic cluster maintenance system for information retrieval partitioning by clustering of very large databases is a necessity to reduce the space/time complexity of retrieval operations. however, the contemporary and modern retrieval environments demand dynamic maintenance of clusters. 
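the office-documents filing entry above mentions supporting retrieval of office objects by signatures. a minimal superimposed-coding signature filter is sketched below; the signature width, number of bits per word, and hash choice are arbitrary illustrative parameters, not those of the dasdbs-based system.

    import hashlib

    SIG_BITS = 64
    BITS_PER_WORD = 3

    def _word_mask(word):
        """superimposed coding: each word sets a few pseudo-random bits."""
        mask = 0
        for k in range(BITS_PER_WORD):
            h = hashlib.sha1(f"{k}:{word}".encode()).digest()
            mask |= 1 << (int.from_bytes(h[:4], "big") % SIG_BITS)
        return mask

    def signature(text):
        sig = 0
        for word in text.lower().split():
            sig |= _word_mask(word)
        return sig

    def maybe_contains(doc_sig, query):
        """signature test: may give false positives, never false negatives,
        so it serves as a cheap filter before looking at the document itself."""
        q = signature(query)
        return doc_sig & q == q

    docs = ["quarterly budget report", "meeting minutes for project alpha"]
    sigs = [signature(d) for d in docs]
    print([d for d, s in zip(docs, sigs) if maybe_contains(s, "budget report")])
    # typically prints ['quarterly budget report']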
a new cluster maintenance strategy is proposed and its similarity/stability characteristics, cost analysis, and retrieval behavior in comparison with unclustered and completely reclustered database environments have been examined by means of a series of experiments. f. can e. ozkarahan adaptive distributed data management with weak consistent replicated data richard lenz efficient resumption of interrupted warehouse loads data warehouses collect large quantities of data from distributed sources into a single repository. a typical load to create or maintain a warehouse processes gbs of data, takes hours or even days to execute, and involves many complex and user- defined transformations of the data (e.g., find duplicates, resolve data inconsistencies, and add unique keys). if the load fails, a possible approach is to "redo" the entire load. a better approach is to resume the incomplete load from where it was interrupted. unfortunately, traditional algorithms for resuming the load either impose unacceptable overhead during normal operation, or rely on the specifics of transformations. we develop a resumption algorithm called _dr_ that imposes no overhead and relies only on the high-level properties of the transformations. we show that _dr_ can lead to a ten-fold reduction in resumption time by performing experiments using commercial software. wilburt juan labio janet l. wiener hector garcia-molina vlad gorelik the database group at iss, national university of singapore the database group is headed by dr. a desai narasimhalu. it comprises nine staff members, together with a collaborative member and three undergraduate students from the department of information systems and computer science. the group's main research areas are listed below. d. narasimhalu karma: knowledge acquisition, retention and maintenance analysis m. czerwinski r. schumacher b. duba database concurrency control using data flow graphs a specialized data flow graph, database flow graph (dbfg) is introduced. dbfgs may be used for scheduling database operations, particularly in an mimd database machine environment. a dbfg explicitly maintains intertransaction and intratransaction dependencies, and is constructed from the transaction flow graphs (tfg) of active transactions. a tfg, in turn, is the generalization of a query tree used, for example, in direct [15]. all dbfg schedules are serializable and deadlock free. operations needed to create and maintain the dbfg structure as transactions are added or removed from the system are discussed. simulation results show that dbfg scheduling performs as well as two-phase locking. m. h. eich david l. wells effectiveness of a graphical display of retrieval results aravindan veerasamy russell heikes artistic environments of telepresence in the www luisa paraguai donati gilbertto prado meme media and a world-wide meme pool yuzuru tanaka hierarchical schemata for relational databases most database design methods for the relational model produce a flat database, that is, a family of relations with no explicit interrelational connections. the user of a flat database is likely to be unaware of certain interrelational semantics. in contrast, the entity-relationship model provides schema graphs as a description of the database, as well as for navigating the database. nevertheless, the user of an entity-relationship database may still commit semantic errors, such as performing a lossy join. this paper proposes a nonflat, or hierarchical, view of relational databases. 
relations are grouped together to form relation hierarchies in which lossless joins are explicitly shown whereas lossy joins are excluded. relation hierarchies resemble the schema graphs in the entity-relationship model. an approach to the design of relation hierarchies is outlined in the context of data dependencies and relational decomposition. the approach consists of two steps; each is described as an algorithm. algorithm dec decomposes a given universal relation according to a given set of data dependencies and produces a set of nondecomposable relation schemes. this algorithm differs from its predecessors in that it produces no redundant relation schemes. algorithm rh further structures the relation schemes produced by algorithm dec into a hierarchical schema. these algorithms can be useful software tools for database designers. y. edmund lien designing organizational interfaces this paper argues that it will become increasingly important to extend our concept of user interfaces for individual users of computers to include organizational interfaces for groups of users. a number of suggestions are given for how to develop a theoretical base for designing such interfaces. for instance, examples are used to illustrate how traditional cognitive points of view can be extended to include information processing by multiple agents in organizations. examples of design implications from other perspectives such as motivational, economic, and political are also included. thomas w. malone answering queries on embedded-complete database schemes it has been observed that, for some database schemes, users may have difficulties retrieving correct information, even for simple queries. the problem occurs when some implicit "piece" of information, defined on some subset of a relation scheme, is not explicitly represented in the database state. in this situation, users may be required to know how the state and the constraints interact before they can retrieve the information correctly. in this paper, the formal notion of embedded-completeness is proposed, and it is shown that schemes with this property avoid the problem described above. a polynomial-time algorithm is given to test whether a database scheme is independent and embedded-complete. under the assumption of independence, it is shown that embedded-complete schemes allow efficient computation of optimal relational algebra expressions equivalent to the x-total projection, for any set of attributes x. edward p. f. chan alberto o. mendelzon tourmaline: macrostyles by example andrew j. werth brad a. myers updating olap dimensions olap systems support data analysis through a multidimensional data model, according to which data facts are viewed as points in a space of application- related "dimensions" , organized into levels which conform a hierarchy. although the usual assumption is that these points reflect the dynamic aspect of the data warehouse while dimensions are relatively static, in practice it turns out that dimension updates are often necessary to adapt the multidimensional database to changing requirements. these updates can take place either at the structural level (e.g. addition of categories or modification of the hierarchical structure) or at the instance level (elements can be inserted, deleted, merged, etc.). they are poorly supported (or not supported at all) in current commercial systems and have not been addressed in the literature. in a previous paper we introduced a formal model supporting dimension updates. 
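the hierarchical-schemata entry above hinges on showing lossless joins explicitly while excluding lossy ones. for a binary decomposition, the textbook test (not the paper's algorithms dec or rh) is that the shared attributes must functionally determine one of the two relation schemes, which can be checked with attribute-set closure; the attribute and dependency names below are invented for the example.

    def closure(attrs, fds):
        """closure of an attribute set under functional dependencies,
        where fds is a list of (lhs_set, rhs_set) pairs."""
        result = set(attrs)
        changed = True
        while changed:
            changed = False
            for lhs, rhs in fds:
                if lhs <= result and not rhs <= result:
                    result |= rhs
                    changed = True
        return result

    def lossless_binary(r1, r2, fds):
        """the decomposition {r1, r2} is lossless iff the common attributes
        determine all of r1 or all of r2."""
        common = r1 & r2
        c = closure(common, fds)
        return r1 <= c or r2 <= c

    fds = [({"id"}, {"name", "dept"}), ({"dept"}, {"manager"})]
    print(lossless_binary({"id", "name", "dept"}, {"dept", "manager"}, fds))   # True
    print(lossless_binary({"id", "name"}, {"dept", "manager"}, fds))           # False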
here, we extend the model, adding a set of semantically meaningful operators which encapsulate common sequences of primitive dimension updates in a more efficient way. we also formally define two mappings (normalized and denormalized) from the multidimensional to the relational model, and compare an implementation of dimension updates using these two approaches. carlos a. hurtado alberto o. mendelzon alejandro a. vaisman elfs: english language from sql in this paper we describe a system which, given a query in sql-like relational database language, will display its meaning in clear, unambiguous natural language. the syntax-driven translation mechanism is independent of the application domain. it has direct applications in designing computer-based sql tutorial systems and program debugging systems. the research results obtained in the paper will also be useful in query optimization and design of a more user-friendly language front-end for casual users. w. s. luk steve kloster safe query languages for constraint databases in the database framework of kanellakis et al. [1990] it was argued that constraint query languages should take constraint databases as input and give other constraint databases that use the same type of atomic constraints as output. this closed-form requirement has been difficult to realize in constraint query languages that contain the negation symbol. this paper describes a general approach to restricting constraint query languages with negation to safe subsets that contain only programs that are evaluable in closed-form on any valid constraint database input. peter z. revesz dbminer: interactive mining of multiple-level knowledge in relational databases jaiwei han youngjian fu wei wang jenny chiang osmar r. zaïane krzysztof koperski the distribution of granule accesses made by database transactions the problem of characterizing the number of granules (or blocks) accessed by a transaction is important in modeling the performance of database management systems and other applications. different expressions for this quantity have appeared in the literature under different probabilistic assumptions. these expressions along with one new result are presented with a uniform notation and a clear statement of the assumptions underlying each. the partial order relating the predictions of the expected number of granules accessed is presented. annie w. shum andrew m. langer shared hardware: a novel technology for computer support of face to face meetings this paper describes the capture lab, a computer supported meeting room in use at the eds center for machine intelligence since late 1987. most computer supported meeting environments implement a simple hardware approach, where a single computer controlled by a trained technician or facilitator is used, or else adopt a groupware approach, where each user has a personal machine and special-purpose software is used to support group activities. in contrast, the capture lab implements a shared hardware approach, in which each meeting participant has a personal computer, but can easily access a shared public computer as well. we discuss the advantages and limitations of this approach, based on our observations of how the room is used, and compare it to the simple hardware and groupware approaches. d. halonen m. horton r. kass p.
scott a high-level functional query language for a small relational system in this paper we describe the design and implementation of a high-level query language fquery which applies the concepts of the functional data model to the retrieval of information from relational databases created on a small pdp-11 system by the riss relational system described in [me]. we describe the query language and the functional data model and discuss the benefits they provide for the casual user in expressing queries in a conceptually natural manner. hoyt d. warner david odle musicfx: an arbiter of group preferences for computer supported collaborative workouts joseph f. mccarthy theodore d. anagnost updates and view maintenance in soft real-time database systems a database system contains base data items which record and model a physical, real world environment. for better decision support, base data items are summarized and correlated to derive views. these base data and views are accessed by application transactions to generate the ultimate actions taken by the system. as the environment changes, updates are applied to the base data, which subsequently trigger view recomputations. there are thus three types of activities: base data update, view recomputation, and transaction execution. in a real-time system, two timing constraints need to be enforced. we require that transactions meet their deadlines (transaction timeliness) and read fresh data (data timeliness). in this paper we define the concept of absolute and relative temporal consistency from the perspective of transactions. we address the important issue of transaction scheduling among the three types of activities such that the two timing requirements can be met. we also discuss how a real-time database system should be designed to enforce different levels of temporal consistency. ben kao k. y. lam brad adelberg reynold cheng tony lee identity and versions for complex objects identity is that property of an object that distinguishes each object from all others. identity has been investigated almost independently in general-purpose programming languages and database languages. its importance is growing as these two environments evolve and merge. historical versions of persistent data are a natural and useful feature for a large number of applications. identity is even more important when retrievals may span multiple versions of a single object, since a mechanism is needed to preserve the fact that the various versions are of the same object. there are at least two dimensions involved in the support of identity, the representation dimension and the temporal dimension. the representation dimension distinguishes languages based on whether they represent the identity of an object by its value (e.g., identifying employees by social security number), by a user-defined name (e.g., variable names, user-defined file names, etc.), or built into the language (e.g., smalltalk-80). a language providing a stronger notion of identity in this dimension must maintain its representation of identity during updates, use identity in the semantics of its operators, and provide operators to manipulate identity. the temporal dimension distinguishes languages based on whether they preserve their representation of identity within a single program or transaction, between transactions, or between structural reorganizations. an example of structural reorganization is schema reorganization in databases.
a language providing stronger identity in the temporal dimension must employ more robust implementation techniques to preserve its representation of identity. the thesis of this abstract is that database conceptual languages should support the built-in notion of identity and provide for the possibility of structural reorganizations. there are very few conceptual database models (in fact one we know of) that support this very strong notion of identity. most real-world organizations deal with histories of objects, but they have little support from existing systems to help them in modeling and retrieving historical data. strong support of identity in the temporal dimension is even more important for temporal data models, because a single retrieval may involve multiple historical versions of a single object. such support requires the database system to provide a continuous and consistent notion of identity throughout the life of each object, independently of any descriptive data or structure which is user modifiable. this identity is the common thread that ties together these historical versions of an object. most of the current techniques for implementing object identity (such as physical addresses, or identifier keys (i.e. groups of attributes which constitute the primary key) etc.) lack either location independence or data (i.e. object content) independence. the strong notion of identity is most easily supported using surrogates, which are system-generated, globally unique identifiers, completely independent of any physical location. george p. copeland setrag n. khoshafian choosing a medium for your message: what determines the choice of delivery media for technical documentation? the current variety of media available to developers of technical documentation makes possible a richness of expression that goes beyond traditional conceptions. rich and effective documentation can result from a judicious combination of media. the introduction of new media techniques to the traditions of technical documentation creates not only new opportunities, but new problems as well. this paper presents a case study of an ongoing project at apple computer that encountered numerous such opportunities and problems. we show what gave rise to them, how we dealt with them, and finally present some guidelines for documentation teams contemplating going "beyond the book." harry j. saddler lori e. kaplan cuta: a simple, practical, low-cost approach to task analysis daniel lafrenière applying the golden rule of sampling for query estimation query size estimation is crucial for many database system components. in particular, query optimizers need efficient and accurate query size estimation when deciding among alternative query plans. in this paper we propose a novel sampling technique based on the golden rule of sampling, introduced by von neumann in 1947, for estimating range queries. the proposed technique randomly samples the frequency domain using the cumulative frequency distribution and yields good estimates without any a priori knowledge of the actual underlying distribution of spatial objects. we show experimentally that the proposed sampling technique gives smaller approximation error than the min-skew histogram based and wavelet based approaches for both synthetic and real datasets. moreover, the proposed technique can be easily extended for higher dimensional datasets. yi-leh wu divyakant agrawal amr el abbadi clusters on the world wide web: creating neighborhoods of make-believe stephen c.
hirtle molly e. sorrows guoray cai influence sets based on reverse nearest neighbor queries inherent in the operation of many decision support and continuous referral systems is the notion of the "influence" of a data point on the database. this notion arises in examples such as finding the set of customers affected by the opening of a new store outlet location, notifying the subset of subscribers to a digital library who will find a newly added document most relevant, _etc._ standard approaches to determining the influence set of a data point involve range searching and nearest neighbor queries. in this paper, we formalize a novel notion of influence based on reverse nearest neighbor queries and its variants. since the nearest neighbor relation is not symmetric, the set of points that are closest to a query point (_i.e._, the nearest neighbors) differs from the set of points that have the query point as their nearest neighbor (called the reverse nearest neighbors). influence sets based on reverse nearest neighbor (rnn) queries seem to capture the intuitive notion of influence from our motivating examples. we present a general approach for solving rnn queries and an efficient r-tree based method for large data sets, based on this approach. although the rnn query appears to be natural, it has not been studied previously. rnn queries are of independent interest, and as such should be part of the suite of available queries for processing spatial and multimedia data. in our experiments with real geographical data, the proposed method appears to scale logarithmically, whereas straightforward sequential scan scales linearly. our experimental study also shows that approaches based on range searching or nearest neighbors are ineffective at finding influence sets of our interest. flip korn s. muthukrishnan principles, techniques, and ethics of stage magic and their application to human interface design bruce tognazzini upsizing from file server to client server architectures corporate the access team microsoft the use of hypermedia data to enhance design geri gay joan mazur marc lentini segmentation-based modeling for advanced targeted marketing fingerhut business intelligence (bi) has a long and successful history of building statistical models to predict consumer behavior. the models constructed are typically segmentation-based models in which the target audience is split into subpopulations (i.e., customer segments) and individually tailored statistical models are then developed for each segment. such models are commonly employed in the direct-mail industry; however, segmentation is often performed on an ad-hoc basis without directly considering how segmentation affects the accuracy of the resulting segment models. fingerhut bi approached ibm research with the problem of how to build segmentation-based models more effectively so as to maximize predictive accuracy. the ibm advanced targeted marketing-single events(tm) (ibm atm-se(tm)) solution is the result of ibm research and fingerhut bi directing their efforts jointly towards solving this problem. this paper presents an evaluation of atm-se's modeling capabilities using data from fingerhut's catalog mailings. c. apte e. bibelnieks r. natarajan e. pednault f. tipu d. campbell b. nelson industry briefs: at&t james p. cunningham judy cantor susan h. pearsall kevin h.
richardson natural language navigation in multimedia archives: an integrated approach the paper presents the design and prototypical implementation of an integrated retrieval system (hpqs) which provides natural language access to multimedia documents in restricted topic areas. it supports new flexible ways of querying by combining a semantically rich retrieval model based on fuzzy set theory with domain-specific methods for document analysis which can be applied online (i.e. the search criteria are not restricted to combinations of anticipated descriptors). emphasis is put on the retrieval methodology and on the interplay of the system components: because of its provision of computationally demanding direct search methods, it is crucial to the system that all components cooperate to ensure acceptable response times. ingo glöckner alois knoll emacspeak - a speech interface t. v. raman the politics of human factors (panels) william mosteller stephen j. boies charles e. grantham thomas drby richard rubinstein dennis wixon improving two-stage ad-hoc retrieval for short queries k. l. kwok m. chan editorial steven pemberton automated assistance for the telemeeting lifecycle we analyse eighteen months of national and international deployment of a prototype telemeeting system supporting synchronous remote meetings which make extensive use of shared documents as well as video and audio conferencing. logistics of a telemeeting include scheduling people and equipment, document format conversion, pre-sending documents, training, equipment and call setup, and meeting followup. the logistics burden is much larger than expected and can be a barrier to adoption of telemeeting technology. using a process model that recognises moving between solo and group, asynchronous and synchronous work modes, the paper explores the amenability of individual logistics tasks to automated assistance, proposes a framework for such assistance, and develops a set of design principles. neil w. bergmann j. craig mudge on rules, procedure, caching and views in data base systems this paper demonstrates that a simple rule system can be constructed that supports a more powerful view system than available in current commercial systems. not only can views be specified by using rules but also special semantics for resolving ambiguous view updates are simply additional rules. moreover, procedural data types as proposed in postgres are also efficiently simulated by the same rules system. lastly, caching of the action part of certain rules is a possible performance enhancement and can be applied to materialize views as well as to cache procedural data items. hence, we conclude that a rule system is a fundamental concept in a next generation dbms, and it subsumes both views and procedures as special cases. michael stonebraker anant jhingran jeffrey goh spyros potamianos determining view dependencies using tableaux a relational database models some part of the real world by a set of relations and a set of constraints. the constraints model properties of the stored information and must be maintained true at all times. for views defined over physically stored (base) relations, this is done by determining whether the view constraints are logical consequences of base relation constraints. a technique for determining such valid view constraints is presented in this paper. a generalization of the tableau chase is used. 
the idea of the method is to generate a tableau for the expression whose summary violates the test constraints in a "canonical" way. the chase then tries to remove this violation. it is also shown how this method has applications to schema design. relations not in normal form or having other deficiencies can be replaced by normal form projections without losing the ability to represent all constraint information. anthony klug rod price the impact of query structure and query expansion on retrieval performance jaana kekalainen kalervo jarvelin join synopses for approximate query answering in large data warehousing environments, it is often advantageous to provide fast, approximate answers to complex aggregate queries based on statistical summaries of the full data. in this paper, we demonstrate the difficulty of providing good approximate answers for join-queries using only statistics (in particular, samples) from the base relations. we propose join synopses as an effective solution for this problem and show how precomputing just one join synopsis for each relation suffices to significantly improve the quality of approximate answers for arbitrary queries with foreign key joins. we present optimal strategies for allocating the available space among the various join synopses when the query work load is known and identify heuristics for the common case when the work load is not known. we also present efficient algorithms for incrementally maintaining join synopses in the presence of updates to the base relations. our extensive set of experiments on the tpc-d benchmark database show the effectiveness of join synopses and various other techniques proposed in this paper. swarup acharya phillip b. gibbons viswanath poosala sridhar ramaswamy 50 years acm, 14 years sigchi mike atwood guy boy form management this paper consists of three interrelated parts. in the first part forms are introduced as an abstraction and generalization of business paper forms. a set of facilities for the manipulation of forms and their contents is outlined. forms can be created, stored, found, viewed in different media, mailed, and located by office workers. data on forms can also be processed in a completely integrated way. the facilities are discussed both abstractly and in relation to a prototype system. in the second part a facility is outlined for the specification and implementation of automatic form procedures. these procedures specify actions on forms which are triggered automatically when certain preconditions are met. the preconditions, actions, and specification method are based on forms. the discussion is centered on our implementation of such a specification framework. finally, in the third part, techniques for the analysis of office flow are specified. an algorithm is outlined for the categorization of forms into classes depending on the local routing and actions on the forms. in this way, we can obtain the paths that forms take and analyze the system for correctness and loading characteristics. d. tsichritzis office automation: the science that supports it (panel discussion) many discoveries and developments in science and technology over the last few years have made office automation possible. national productivity and well-being depend upon technology transfer, and this requires people who combine an appreciation of industrial and social needs with a knowledge of discoveries that can help.
the design and evolution of equipment, materials and systems architecture used in office automation are supported by many disciplines - the traditional branches of physical science and engineering, mathematics and human factors, computer science and other subjects too. in this session, the panel members will explain in popular terms some of the scientific advances used in office automation and related activities. michael p. barnett robert a myers clarence a. ellis beyond intratransaction association analysis: mining multidimensional intertransaction association rules in this paper, we extend the scope of mining association rules from traditional single-dimensional intratransaction associations, to multidimensional intertransaction associations. intratransaction associations are the associations among items within the same transaction, where the notion of the transaction could be the items bought by the same customer, the events that happened on the same day, and so on. however, an intertransaction association describes the association relationships among different transactions, such as "if (company) a's stock goes up on day 1, b's stock will go down on day 2, but go up on day 4." in this case, whether we treat company or day as the unit of transaction, the associated items belong to different transactions. moreover, such an intertransaction association can be extended to associate multiple contextual properties in the same rule, so that multidimensional intertransaction associations can be defined and discovered. a two-dimensional intertransaction association rule example is "after mcdonald and burger king open branches, kfc will open a branch two months later and one mile away," which involves two dimensions: time and space. mining intertransaction associations poses more challenges on efficient processing than mining intratransaction associations. interestingly, intratransaction association can be treated as a special case of intertransaction association from both a conceptual and algorithmic point of view. in this study, we introduce the notion of multidimensional intertransaction association rules, study their measurements---support and confidence---and develop algorithms for mining intertransaction associations by extension of apriori. we overview our experience using the algorithms on both real-life and synthetic data sets. further extensions of multidimensional intertransaction association rules and potential applications are also discussed. hongjun lu ling feng jiawei han putting context into design steven j. clarke sortables: a browser for a digital library william c. wake edward a. fox grey tuple dependency and grey relational algebra ka-wing wong a room of our own: experiences from a direct office share annette adler austin henderson on the nature and function of explanation in intelligent information retrieval we discuss the complexity of explanation activity in human-human goal-directed dialogue, and suggest that this complexity ought to be taken account of in the design of explanation in human-computer interaction. we propose a general model of clarity in human-computer systems, of which explanation is one component.
on the basis of this model, of a model of human-intermediary interaction in the document retrieval situation as one of cooperative model-building for the purpose of developing an appropriate search formulation, and of the results of empirical observation of human user-human intermediary interaction in information systems, we propose a model for explanation by the computer intermediary in information retrieval. n. j. belkin a digital on-demand video service supporting content-based queries t. d. c. little g. ahanger r. j. folz j. f. gibbon f. w. reeve d. h. schelleng d. venkatesh retrieval system evaluation using recall and precision: problems and answers v. v. raghavan p. bollmann g. s. jung a general framework for the view selection problem for data warehouse design and evolution dimitri theodoratos mokrane bouzeghoub web weaving marilyn mantei tremaine wendy mackay joining relations in the belief-consistent multilevel secure relational model nenad jukic susan v. vrbsky a new method for similarity indexing of market basket data in recent years, many data mining methods have been proposed for finding useful and structured information from market basket data. the association rule model was recently proposed in order to discover useful patterns and dependencies in such data. this paper discusses a method for indexing market basket data efficiently for similarity search. the technique is likely to be very useful in applications which utilize the similarity in customer buying behavior in order to make peer recommendations. we propose an index called the signature table, which is very flexible in supporting a wide range of similarity functions. the construction of the index structure is independent of the similarity function, which can be specified at query time. the resulting similarity search algorithm shows excellent scalability with increasing memory availability and database size. charu c. aggarwal joel l. wolf philip s. yu the webbook and the web forager: an information workspace for the world-wide web stuart k. card george g. robertson william york programming and enjoying music with your eyes closed design and user evaluation of a multimodal interaction style for music programming are described. user requirements were _instant usability_ and _optional use of a visual display_. the interaction style consists of a visual roller metaphor. user control of the rollers proceeds by manipulating a force feedback trackball. tactual and auditory cues strengthen the roller impression and support use without a visual display. the evaluation investigated task performance and procedural learning when performing music programming tasks with and without a visual display. no procedural instructions were provided. tasks could be completed successfully with and without a visual display, though programming without a display needed more time to complete. prior experience with a visual display did not improve performance without a visual display. when working without a display, procedures have to be acquired and remembered explicitly, as more procedures were remembered after working without a visual display. it is demonstrated that multimodality provides new ways to interact with music. steffen pauws don bouwhuis berry eggen interactive term suggestion for users of digital libraries: using subject thesauri and co-occurrence lists for information retrieval bruce r. schatz eric h. johnson pauline a. cochrane hsinchun chen informative things: how to attach information to the real world rob barrett paul p.
maglio creating the invisible interface: (invited talk) for thirty years, most interface design, and most computer design, has been headed down the path of the "dramatic" machine. its highest ideal is to make a computer so exciting, so wonderful, so interesting, that we never want to be without it. a less-traveled path i call the "invisible"; its highest ideal is to make a computer so imbedded, so fitting, so natural, that we use it without even thinking about it. (i have also called this notion "ubiquitous computing.") i believe that in the next twenty years the second path will come to dominate. but this will not be easy; very little of our current systems infrastructure will survive. we have been building versions of the infrastructure-to-come at parc for the past four years, in the form of inch-, foot-, and yard-sized computers we call tabs, pads, and boards. in this talk i will describe the humanistic origins of the "invisible" ideal in post-modernist thought. i will then describe some of our prototypes, how they succeed and fail to be invisible, and what we have learned. i will illustrate new systems issues that user interface designers will face when creating invisibility. and i will indicate some new directions we are now exploring, including the famous "dangling string" display. mark weiser modeling shared information spaces (sis) many companies experience that their corporate intranet is getting complex and poorly manageable. we believe that developing a model for the website, or shared information space will make the management easier and provide solutions that support collaboration and knowledge sharing within the enterprise. the present paper proposes a meta model defining the conceptual building blocks of an information space. the meta model takes knowledge as well as information sharing into account by letting ontologies represent problem domains important to the enterprise, and be used as means of imposing structure on available information. to support the different needs of the employees the ability to interact with the information space through different perspectives is essential. an example of how the meta model can be deployed to support contextual retrieval of information in a workflow management application is described. marit kjøsnes natvig oddrun ohren system integration in multidatabases this paper presents an exploratory approach to the development of a tool for integrating existing databases. the intent is to meet specific requirements and to achieve flexibility through the creation of an "open" system. the methodology assumes an integration model which captures the essential characteristics of a distributed system within a knowledge base. the model and the underlying knowledge base may be used to represent the distributed environment and to define requirements for the shared use of heterogeneous databases. an interactive method is proposed which allows the user to proceed in an iterative fashion in specifying system attributes and resolving design conflicts. the project is at present in the definition phase; current work is aimed at the identification of generic multidatabase services, and their abstraction in a form amenable for storage in the knowledge base. p. bodorik j. s.
riordon reflections: it rings for thee steven pemberton the relationship of user-centered evaluation to design: addressing issues of productivity and power brenda dervin i3: intelligent, interactive investigation of olap data cubes the goal of the i3(eye cube) project is to enhance multidimensional database products with a suite of advanced operators to automate data analysis tasks that are currently handled through manual exploration. most olap products are rather simplistic and rely heavily on the user's intuition to manually drive the discovery process. such ad hoc user-driven exploration gets tedious and error-prone as data dimensionality and size increases. we first investigated how and why analysts currently explore the data cube and then automated them using advanced operators that can be invoked interactively like existing simple operators. our proposed suite of extensions appear in the form of a toolkit attached with a olap product. at this demo we will present three such operators: diff, relax and inform with illustrations from real-life datasets. sunita sarawagi gayatri sathe a query processing method for data warehouses which contain multimedia w. perrizo z. zhang s. krebsbach a generalized constraint and exception handler for an object-oriented cad-dbms a generalized constraint and exception handler for object-oriented cad databases is presented. the main features of the constraint and exception handler are: dynamic definition of new constraints without recompilation of the schema, deferred constraint checking, and homogeneous handling of constraints and their exceptions. in the paper we analyze first the constraints typically encountered in a cad environment, discuss the problems of dynamic constraint definition and deferred evaluation and, based on that discussion, present design and implementation issues of the constraint handler which is currently being implemented. a. p. buchmann r. s. carrera m. a. vazquez-galindo local and global structuring of computer mediated communication: developing linguistic perspectives on cscw in cosmos this paper is concerned with the development of a language/action perspective in the cosmos project. we emphasize the importance of seeing cooperative work in terms of participants' communicative actions. in contrast to some explorations of speech act theory, we argue that communicative actions should be seen as essentially embedded in dialogical contexts. in particular, we attempt to show the relevance of concepts derived from the analysis of actually occurring conversations, for computer mediated communication in general and cooperative work in particular. we distinguish between local and global structuring of communication and argue that many group working situations combine both sorts. these observations have influenced our work in the cosmos project on the design of a structure definition language (sdl) by means of which users can configure their computer mediated communication environment. we describe sdl and show how its interpretation is influenced by our conversation analytic approach. we illustrate our arguments with an example of cooperative document preparation. john bowers john churcher dissemination of dynamic data pavan deolasee amol katkar ankur panchbudhe krithi ramamritham prashant shenoy data base research at berkeley michael stonebraker where is the librarian in the digital library? christine l. borgman comparing the performance of database selection algorithms james c. french allison l. powell jamie callan charles l. 
viles travis emmitt kevin j. prey yun mou video retrieval with iris: p. alshuth th. hermes j. kreyß m. röper sibyl: a tool for managing group design rationale we describe sibyl, a system that supports group decision making by representing and managing the qualitative aspects of decision making processes: such as the alternatives, the goals to be satisfied, and the arguments evaluating the alternatives with respect to these goals. we use an example session with sibyl to illustrate the language, called drl, that sibyl uses for representing these qualitative aspects, and the set of services that sibyl provides using this language. we also compare sibyl to other systems with similar objectives and discuss the additional benefits that sibyl provides. in particular, we compare sibyl to gibis, a well-known "tool for exploratory policy discussion", and claim that sibyl is mainly a knowledge- based system which uses a semi-formal representation, whereas gibis is mainly a hypertext system with semantic types. we conclude with a design heuristic, drawn from our experience with sibyl, for systems whose goal includes eliciting knowledge from people. jintae lee the ecrc multi database system willem jonker heribert schutz mission-critical web applications: a seismological case marco padula giuliana rubbia rinaldi organizing documents to support browsing in digital libraries yoelle s. maarek universal design frank marchak xml with data values: typechecking revisited we investigate the _type checking_ problem for xml queries: statically verifying that every answer to a query conforms to a given output dtd, for inputs satisfying a given input dtd. this problem had been studied by a subset of the authors in a simplified framework that captured the structure of xml documents but ignored data values. we revisit here the type checking problem in the more realistic case when data values are present in documents and tested by queries. in this extended framework, type checking quickly becomes undecidable. however, it remains decidable for large classes of queries and dtds of practical interest. the main contribution of the present paper is to trace a fairly tight boundary of decidability for type checking with data values. the complexity of type checking in the decidable cases is also considered. noga alon tova milo frank neven dan suciu victor vianu the design of postgres this paper presents the preliminary design of a new database management system, called postgres, that is the successor to the ingres relational database system. the main design goals of the new system are to provide better support for complex objects, provide user extendibility for data types, operators and access methods, provide facilities for active databases (i.e., alerters and triggers) and inferencing including forward- and backward- chaining, simplify the dbms code for crash recovery, produce a design that can take advantage of optical disks, workstations composed of multiple tightly-coupled processors, and custom designed vlsi chips, and make as few changes as possible (preferably none) to the relational model. the paper describes the query language, programming language interface, system architecture, query processing strategy, and storage system for the new system. michael stonebraker lawrence a. rowe learning probabilistic models of the web (poster session) in the world wide web, myriads of hyperlinks connect documents and pages to create an unprecedented, highly complex graph structure - the web graph. 
this paper presents a novel approach to learning probabilistic models of the web, which can be used to make reliable predictions about connectivity and information content of web documents. the proposed method is a probabilistic dimension reduction technique which recasts and unites latent semantic analysis and kleinberg's hubs-and-authorities algorithm in a statistical setting. this is meant to be a first step towards the development of a statistical foundation for web-related information technologies. although this paper does not focus on a particular application, a variety of algorithms operating in the web/internet environment can take advantage of the presented techniques, including search engines, web crawlers, and information agent systems. thomas hofmann new role for community networks d. d. cowan c. i. mayfield f. w. tompa w. gasparini mediation in information systems gio wiederhold a kinetic and 3d image input device shunichi numazaki akira morshita naoko umeki minoru ishikawa miwako doi olap, relational, and multidimensional database systems george colliat introduction: personalized views of personalization doug riecken a virtual window on media space william w. gaver gerda smets kees overbeeke industry briefs: reactivity david crow maria yang discovery of similarity computations on the internet k. l. liu c. yu weiyi meng n. rishe on temporal modeling in the context of object databases niki pissinou kia makki yelena yesha nonverbal information recognition and its application to communications rhyohei nakatsu a simple analysis of exclusive and shared lock contention in a database system we consider a probabilistic model of locking in a database system in which an arriving transaction is blocked and lost when its lock requests conflict with the locks held by currently executing transactions. both exclusive and shared locks are considered. we derive a simple asymptotic expression for the probability of blocking which is exact to order 1/n where n is the number of lockable items in the database. this expression reduces to one recently derived by mitra and weinberger for the special case where all locks are exclusive. stephen s. lavenberg working alone (solution session): finding surrogate coworkers on the nets laura praderio multi-join on parallel processors the paper describes a preliminary evaluation of some multi-join strategies and their performances on parallel hardware. the hardware used was a sequent (under unix) with 11 usable processors, each with shared and private primary memory. a multi-join was broken down into a series of single joins which were then allocated to clusters, each cluster being a collection of parallel processors. the results of single joins, which were studied by both binary search and hash-merge techniques, were then further processed as necessary. the evaluation was conducted varying a number of parameters, such as cluster size, tuple size and cardinality. the comparative results were plotted. the study highlights the importance of a number of factors that influence the performance of a multi-join operation. s. m. deen d. n. p. kannangara m. c.
taylor the dangers of replication and a solution jim gray pat helland patrick o'neil dennis shasha evaluation of learning, evaluation as learning magnus ramage using structured types to incorporate knowledge in hypertext jocelyne nanard marc nanard industry briefs: ibm karel vredenburg the five color concurrency control protocol: non- two-phase locking in general databases concurrency control protocols based on two-phase locking are a popular family of locking protocols that preserve serializability in general (unstructured) database systems. a concurrency control algorithm (for databases with no inherent structure) is presented that is practical, non two-phase, and allows varieties of serializable logs not possible with any commonly known locking schemes. all transactions are required to predeclare the data they intend to read or write. using this information, the protocol anticipates the existence (or absence) of possible conflicts and hence can allow non-two-phase locking. it is well known that serializability is characterized by acyclicity of the conflict graph representation of interleaved executions. the two-phase locking protocols allow only forward growth of the paths in the graph. the five color protocol allows the conflict graph to grow in any direction (avoiding two- phase constraints) and prevents cycles in the graph by maintaining transaction access information in the form of data- item markers. the read and write set information can also be used to provide relative immunity from deadlocks. partha dasgupta zvi m. kedem backtracking: the chinese food problem lou hoebel chris welty data base directions information resource management-making it work, executive summary on october 21-23, 1985, the institute for computer sciences and technology of the national bureau of standards (nbs), in cooperation with the association for computing machinery special interest group on management of data (acm sigmod), the ieee computer society technical committee on database engineering, and the federal data management users group (fedmug), held the fourth in their series of data base directions workshops. the purpose of this workshop was to assess the nature of current information resource management (irm) practice and problems, and to explore solutions which have proven workable. elizabeth n. fong alan h. goldfine padprints: graphical multiscale web histories ron r. hightower laura t. ring jonathan i. helfman benjamin b. bederson james d. hollan data manipulation in heterogeneous databases many important information systems applications require access to data stored in multiple heterogeneous databases. this paper examines a problem in interdatabase data manipulation within a heterogeneous environment, where conventional techniques are no longer useful. to solve the problem, a broader definition for join operator is proposed. also, a method to probabilistically estimate the accuracy of the join is discussed. abhirup chatterjee arie segev generalizations of boolean query processing substantial work has been done recently applying fuzzy subset theory to the problems of document and query representation and processing in retrieval systems. the motivation has often been to generalize boolean query processing to allow for non-boolean index weights or measures of importance to be attached to the individual terms in the document or in the query representation. the problems of generalizing the boolean lattice structure have been noted. 
criteria have been generated for query processing mechanisms with relevance weights in the query, but these have been shown to be inconsistent. an alternative approach using thresholds in the query has been suggested, with the generation of appropriate document evaluation criteria for boolean query processing. donald h. kraft structural analysis of verbal data current methods of analyzing verbal reports (protocol analysis) from human- computer interactions fall short of their potential. although there are systematic methods for collecting complete and objective verbal reports applicable to a broad range of problem- solving tasks, currently available analyses of verbal reports are ad hoc and apply only to well constrained tasks. structural analysis is a systematic method, currently under development, for analyzing real-world tasks involving human-computer interaction. starting with a rule that assigns utterances to two dichotomous categories related to a behavior of interest, rules are generated that expose the goal building and evaluation underlying that behavior. the resulting data yield time distributions that characterize subjects' goal-directed behavior and that allow comparisons among tasks or among subjects. wayne a. bailey edwin j. kay urp: a luminous-tangible workbench for urban planning and design john underkoffler hiroshi ishii missing information (applicable and inapplicable) in relational databases there has been some technical and justified criticism of the treatment of missing information in the data sublanguage sql and in ibm's database 2 system (a relational database management system). some of this criticism has been directed (by mistake) at the relational model. the purpose of this paper is to clarify and extend the treatment of missing information by the relational model. the clarification places heavy emphasis on the semantic aspects of missing information. the extension, which is relatively minor, provides a systematic approach (independent of data type) to dealing with the inapplicability of certain properties to some objects. this extension does not invalidate any part of the present version of the relational model. e. f. codd lurker demographics: counting the silent as online groups grow in number and type, understanding lurking is becoming increasingly important. recent reports indicate that lurkers make up over 90% of online groups, yet little is known about them. this paper presents a demographic study of lurking in email-based discussion lists (dls) with an emphasis on health and software-support dls. four primary questions are examined. one, how prevalent is lurking, and do health and software-support dls differ? two, how do lurking levels vary as the definition is broadened from zero posts in 12 weeks to 3 or fewer posts in 12 weeks? three, is there a relationship between lurking and the size of the dl, and four, is there a relationship between lurking and traffic level? when lurking is defined as no posts, the mean lurking level for all dls is lower than the reported 90%. health-support dls have on average significantly fewer lurkers (46%) than software-support dls (82%). lurking varies widely ranging from 0 to 99%. the relationships between lurking, group size and traffic are also examined. blair nonnecke jenny preece the ergonomics psychology project at inria a. bisseret message addressing schemes d. 
tsichritzis the map information facility - a cooperative federal and private venture in geocoding the use of a nationwide information inquiry system based on a geographic location will be and has been heralded for some years. we take for granted that the post office, the "phone system", and a number of private firms have been and will continue to provide such service. any of us can call most airlines and get a reasonable answer on route planning for our trips with the various options explained to us by skilled operators at anytime of day from any place. private companies evaluate the economic utility of such systems in serving their customers. obviously, the existence of geographic-based inquiry systems is not new; their emergence to perform nationwide service to specific segments of inquirers in a federal program is rather new. i predict that this system, the federal insurance administration's (fia) map information facility, excited as we are about it and pleased as we are with the current accomplishments, will be replicated and expanded very frequently by many agencies. we can consider this particular project, and no doubt others which are emerging, as an early example of geographic information systems in the federal sector. robert t. aangeenbrug an adaptive query execution system for data integration query processing in data integration occurs over network-bound, autonomous data sources. this requires extensions to traditional optimization and execution techniques for three reasons: there is an absence of quality statistics about the data, data transfer rates are unpredictable and bursty, and slow or unavailable data sources can often be replaced by overlapping or mirrored sources. this paper presents the tukwila data integration system, designed to support adaptivity at its core using a two-pronged approach. interleaved planning and execution with partial optimization allows tukwila to quickly recover from decisions based on inaccurate estimates. during execution, tukwila uses adaptive query operators such as the double pipelined hash join, which produces answers quickly, and the dynamic collector, which robustly and efficiently computes unions across overlapping data sources. we demonstrate that the tukwila architecture extends previous innovations in adaptive execution (such as query scrambling, mid-execution re-optimization, and choose nodes), and we present experimental evidence that our techniques result in behavior desirable for a data integration system. zachary g. ives daniela florescu marc friedman alon levy daniel s. weld reflections: seegomanifesto ramana rao reflections john rheinfrank bill hefley using force feedback to enhance human performance in graphical user interfaces louis rosenberg scott brave evolution of contact point: a case study of a help desk and its users this paper describes the evolution of a concept, contact point, the research process through which it evolved, and the work context and practices which drove its evolution. contact point is a web-based application that helps a business manage its relationships with its customers. it can also be used within a business as a means for managing the relationship between parts of the business. in this paper we describe a study of the applicability of contact point to the technical services organization and field personnel of a medical device manufacturer. we found that there were opportunities to potentially reduce call volume through contact point. 
we discovered, however, that the technical service representatives sometimes filled roles other than providing information in their telephone conversations with field personnel. these functions included reassuring callers that the callers' answers to questions were correct, providing a rationale for information, and redirecting calls to other departments. the ability to share a document and collaborate in real time was viewed as very valuable. we also discovered that the field personnel need information from a variety of other people in order to do their jobs. these observations were used to enhance the next iteration of contact point and to develop strategies for the introduction of contact point to users. lena mamykina catherine g. wolf the challenges of delivering content on the internet in this talk, we will give an overview of how content is distributed on the internet, with an emphasis on the approach being used by akamai. we will describe some of the technical challenges involved in operating a network of thousands of content servers across multiple geographies on behalf of thousands of customers. the talk will be introductory in nature and should be accessible to a broad audience. tom leighton towards a theory of cost management for digital libraries and electronic commerce one of the features that distinguishes digital libraries from traditional databases is new cost models for client access to intellectual property. clients will pay for accessing data items in digital libraries, and we believe that optimizing these costs will be as important as optimizing performance in traditional databases. in this article we discuss cost models and protocols for accessing digital libraries, with the objective of determining the minimum cost protocol for each model. we expect that in the future information appliances will come equipped with a cost optimizer, in the same way that computers today come with a built-in operating system. this article makes the initial steps towards a theory and practice of intellectual property cost management. a. prasad sistla ouri wolfson yelena yesha robert sloan concept for content administration of database powered multimedia web-sites klaus niederacher alexander wahler out of this world: an extensible session architecture for heterogeneous electronic landscapes jonathan trevor tom rodden gareth smith the egg/yolk reliability hierarchy: semantic data integration using sorts with prototypes integration of disparate heterogeneous databases requires translation of types. because a type in one system often has no exact counterpart in the others, fully reliable integration requires deep understanding of the subject domain, with conceptual analysis of type meanings. so far, reliable translation has had to be done by hand. in practice, few types are so crucial as to require full reliability. the egg/yolk hierarchy ranks types by the tolerable rashness in translation, based on prototypes in each type. each defined class (egg) has a subclass of typical members (yolk) defined. we exploit cui, cohn and randell's qualitative spatial simulation program to create the hierarchy of all possible relations between source and target egg/yolk types, ranked by reliability. our eventual ranking is based on a poset combining four different preference criteria. fritz lehmann anthony g. cohn integrating diverse knowledge sources in text recognition sargur n. srihari jonathan j.
hull ramesh choudhari query optimization at the crossroads surajit chaudhuri algorithms for deferred view maintenance latha s. colby timothy griffin leonid libkin inderpal singh mumick howard trickey vague: a user interface to relational databases that permits vague queries a specific query establishes a rigid qualification and is concerned only with data that match it precisely. a vague query establishes a target qualification and is concerned also with data that are close to this target. most conventional database systems cannot handle vague queries directly, forcing their users to retry specific queries repeatedly with minor modifications until they match data that are satisfactory. this article describes a system called vague that can handle vague queries directly. the principal concept behind vague is its extension to the relational data model with data metrics, which are definitions of distances between values of the same domain. a problem with implementing data distances is that different users may have different interpretations for the notion of distance. vague incorporates several features that enable it to adapt itself to the individual views and priorities of its users. amihai motro helping people find what they don't know nicolas j. belkin auto-construction of a live thesaurus from search term logs for interactive web search (poster session) the purpose of this paper is to present on-going research that is intended to construct a live thesaurus directly from search term logs of real-world search engines. the thesaurus so designed can contain representative search terms, their frequency in use, the corresponding subject categories, the associated and relevant terms, and the hot visiting web sites/pages the search terms may reach. shui-lung chuang hsiao-tieh pu wen-hsiang lu lee-feng chien designing user interfaces for television dale herigstad anna wichansky using collaborative filtering to weave an information tapestry david goldberg david nichols brian m. oki douglas terry data structuring and indexing for data base machines our focus in this paper is not the design of new and improved hardware for data base management. instead, our main interest is to document the results of our investigation on the data structuring and indexing requirements of the class of data base machines that contain partitioned content-addressable memory (pcam). jayanta banerjee browsing and querying in online documentation: a study of user interfaces and the interaction process a user interface study concerning the usage effectiveness of selected retrieval modes was conducted using an experimental text retrieval system, tess, giving access to online documentation of certain programming tools. four modes of tess were compared: (1) browsing, (2) conventional boolean retrieval, (3) boolean retrieval based on venn diagrams, and (4) these three combined. further, the modes of tess were compared to the use of printed manuals. the subjects observed were 87 computing new to them. in the experiment the use of printed manuals is faster and provides answers of higher quality than any of the electronic modes. therefore, claims about the effectiveness of computer-based text retrieval have to be qualified in situations where printed manuals are manageable to the user. among the modes of tess, browsing is the fastest and the one causing the fewest operational errors. on the same two variables, time and operational errors, the venn diagram mode performs better than conventional boolean retrieval.
the combined mode scores worst on the objective performance measures; nonetheless nearly all subjects prefer this mode. concerning the interaction process, the subjects tend to manage the complexities of the information retrieval tasks by issuing a series of simple commands and exploiting the interactive capabilities of tess. to characterize the dynamics of the interaction process two concepts are introduced: threads and sequences of tactics. threads in a query sequence describe the continuity during retrieval. sequences of tactics concern the combined mode and describe how different retrieval modes succeed each other as the retrieval process evolves. morten hertzum erik frøkjær on modeling of information retrieval concepts in vector spaces the vector space model (vsm) has been adopted in information retrieval as a means of coping with inexact representation of documents and queries, and the resulting difficulties in determining the relevance of a document relative to a given query. the major problem in employing this approach is that the explicit representation of term vectors is not known a priori. consequently, earlier researchers made the assumption that the vectors corresponding to terms are pairwise orthogonal. such an assumption is clearly unrealistic. although attempts have been made to compensate for this assumption by some separate, corrective steps, such methods are ad hoc and, in most cases, formally inconsistent. in this paper, a generalization of the vsm, called the gvsm, is advanced. the developments provide a solution not only for the computation of a measure of similarity (correlation) between terms, but also for the incorporation of these similarities into the retrieval process. the major strength of the gvsm derives from the fact that it is theoretically sound and elegant. furthermore, experimental evaluation of the model on several test collections indicates that the performance is better than that of the vsm. experiments have been performed on some variations of the gvsm, and all these results have also been compared to those of the vsm, based on inverse document frequency weighting. these results and some ideas for the efficient implementation of the gvsm are discussed. s. k.m. wong w. ziarko v. v. raghavan p. c.n. wong cartographic generalization as a combination of representing and abstracting knowledge sebastien mustière jean-daniel zucker lorenza saitta the clearinghouse: a decentralized agent for locating named objects in a distributed environment derek c. oppen yogen k. dalal cultural representation in interface ecosystems: amendments to the acm/interactions design awards criteria andruid kerne community design of dlese's collections review policy: a technological frames analysis in this paper, i describe the design of a collection review policy for the digital library for earth system education (dlese). a distinctive feature of dlese as a digital library is the dlese community, composed of voluntary members who contribute metadata and resource reviews to dlese. as the dlese community is open, the question of how to evaluate community contributions is a crucial part of the review policy design process. in this paper, technological frames theory is used to analyse this design process by looking at how the designers work with two differing definitions of the peer reviewer, (a) peer reviewer as arbiter or editor, and (b) peer reviewer as colleague.
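the following sketch illustrates the kind of generalized inner product the gvsm entry above argues for, replacing the orthogonal-terms assumption with a term-correlation matrix; the vocabulary, the matrix, and the weights are made up for illustration.

```python
import numpy as np

# hedged sketch of a gvsm-style similarity: instead of assuming pairwise-orthogonal
# term vectors, a term-correlation matrix G enters the inner product.
# term order: [database, index, retrieval, query]; identity G would recover the plain vsm.
G = np.array([
    [1.0, 0.5, 0.2, 0.4],
    [0.5, 1.0, 0.3, 0.3],
    [0.2, 0.3, 1.0, 0.6],
    [0.4, 0.3, 0.6, 1.0],
])

def gvsm_similarity(doc, query, corr=G):
    """cosine-like similarity under the generalized inner product x^T G y."""
    num = doc @ corr @ query
    den = np.sqrt(doc @ corr @ doc) * np.sqrt(query @ corr @ query)
    return float(num / den) if den else 0.0

doc = np.array([2.0, 1.0, 0.0, 0.0])    # tf (or tf-idf) weights of one document
query = np.array([0.0, 0.0, 1.0, 1.0])  # query mentions "retrieval" and "query"

print(round(gvsm_similarity(doc, query), 3))   # nonzero despite no shared terms
```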
content analysis of dlese documents shows that these frames can in turn be related to two definitions that dlese offers of itself: dlese as a library, and dlese as a digital artifact. the implications of the presence of divergent technological frames for the design process are summarised, and some suggestions for future research are outlined. michael khoo xerox star live demonstration david canfield smith charles h. irby document processing in a relational database system michael stonebraker heidi stettner nadene lynn joseph kalash antonin guttman geographic database systems: issues and research needs max j. egenhofer force-feedback improves performance for steering and combined steering- targeting tasks the introduction of a force-feedback mouse, which provides high fidelity tactile cues via force output, may represent a long- awaited technological breakthrough in pointing device designs. however, there have been few studies examining the benefits of force-feedback for the desktop computer human interface. ten adults performed eighty steering tasks, where the participants moved the cursor through a small tunnel with varying indices of difficulty using a conventional and force-feedback mouse. for the force- feedback condition, the mouse displayed force that pulled the cursor to the center of the tunnel. the tasks required both horizontal and vertical screen movements of the cursor. movement times were on average 52 percent faster during the force-feedback condition when compared to the conventional mouse. furthermore, for the conventional mouse vertical movements required more time to complete than horizontal screen movements. another ten adults completed a combined steering and targeting task, where the participants navigated through a tunnel and then clicked a small box at the end of the tunnel. again, force- feedback improved times to complete the task. although movement times were slower than the pure steering task, the steering index of difficulty dominated the steering-targeting relationship. these results further support that human computer interfaces benefit from the additional sensory input of tactile cues to the human user. jack tigh dennerlein david b. martin christopher hasser pixel-oriented database visualizations in this paper, we provide an overview of several pixel-oriented visualization techniques which have been developed over the last years to support an effective querying and exploration of large databases. pixel-oriented techniques use each pixel of the display to visualize one data value and therefore allow the visualization of the largest amount of data possible. the techniques may be divided into query-independent techniques which directly visualize the data (or a certain portion of it) and query-dependent techniques which visualize the relevance of the data with respect to a specific query. an example for the class of query-independent techniques is the recursive pattern technique which is based on a generic recursive scheme generalizing a wide range of pixel-oriented arrangements for visualizing large databases. examples for the class of query-dependent techniques are the generalized spiral and circle-segments techniques, which visualize the distance with respect to a database query and arrange the most relevant data items in the center of the display. daniel a. keim supporting presence in collaborative environments by haptic force feedback an experimental study of interaction in a collaborative desktop virtual environment is described. 
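the steering experiment above lends itself to a small steering-law calculation; the sketch below assumes the standard index of difficulty for a straight tunnel (length divided by width) and a linear movement-time model with made-up coefficients, not the paper's fitted values.

```python
# hedged sketch of the steering-law bookkeeping behind tunnel experiments like the one
# above: for a straight tunnel of length A and width W the index of difficulty is
# ID = A / W, and movement time is modelled as MT = a + b * ID.
# the tunnel sizes and regression coefficients here are invented examples.

def steering_id(length_px: float, width_px: float) -> float:
    return length_px / width_px

def predicted_movement_time(id_value: float, a: float = 0.2, b: float = 0.12) -> float:
    """linear steering-law model; a and b would normally be fitted per condition."""
    return a + b * id_value

conditions = [(400, 40), (400, 20), (800, 20)]   # (tunnel length, tunnel width) in pixels
for length, width in conditions:
    idv = steering_id(length, width)
    print(f"A={length:4d} W={width:3d}  ID={idv:5.1f}  predicted MT={predicted_movement_time(idv):.2f}s")
```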
the aim of the experiment was to investigate if added haptic force feedback in such an environment affects perceived virtual presence, perceived social presence, perceived task performance, and task performance. a between-group design was employed, where seven pairs of subjects used an interface with graphic representation of the environment, audio connection, and haptic force feedback. seven other pairs of subjects used an interface without haptic force feedback, but with identical features otherwise. the phantom, a one-point haptic device, was used for the haptic force feedback, and a program especially developed for the purpose provided the virtual environment. the program enables two individuals placed in different locations to simultaneously feel and manipulate dynamic objects in a shared desktop virtual environment. results show that haptic force feedback significantly improves task performance, perceived task performance, and perceived virtual presence in the collaborative distributed environment. the results suggest that haptic force feedback increases perceived social presence, but the difference is not significant. eva-lotta sallnäs kirsten rassmus-gröhn calle sjöström two years before the mist: experiences with aquanet catherine c. marshall russell a. rogers designing an ultra highly available dbms (tutorial session) svein erik bratsberg oystein torbjornsen neither rain, nor sleet, nor gloom of night: adventures in electronic mail maria capucciati patrick curran kimberly donner o'brien annette wagner a conversation about … conversation john gehl supercomputers and distributed computing multiprocessor supercomputers and distributed systems share many common properties in a logical sense. the physical implementation of these systems may be quite different, particularly with respect to timing considerations. some of the recent work on supercomputer hardware and software may be of use in the distributed processing community. we discuss some aspects of how to relate program and data structures to architectural models. david j. kuck an introduction to the internet and the world wide web bill hefley john morris a serialization graph construction for nested transactions this paper makes three contributions. first, we present a proof technique that offers system designers the same ease of reasoning about nested transaction systems as is given by the classical theory for systems without nesting, and yet can be used to verify that a system satisfies the robust "user view" definition of correctness of [10]. second, as applications of the technique, we verify the correctness of moss' read/write locking algorithm for nested transactions, and of an undo logging algorithm that has not previously been presented or proved for nested transaction systems. third, we make explicit the assumptions used for this proof technique, assumptions that are usually made implicitly in the classical theory, and therefore we clarify the type of system for which the classical theory itself can reliably be used. alan fekete nancy lynch william e. weihl an architecture for high volume transaction processing robert w. horst timothy c. k.
chou on site maintenance using a wearable computer system bethany smith len bass jane siegel perceptual speed, learning and information retrieval performance bryce allen a cost model for query processing in high dimensional data spaces during the last decade, multimedia databases have become increasingly important in many application areas such as medicine, cad, geography, and molecular biology. an important research topic in multimedia databases is similarity search in large data sets. most current approaches that address similarity search use the feature approach, which transforms important properties of the stored objects into points of a high-dimensional space (feature vectors). thus, similarity search is transformed into a neighborhood search in feature space. multidimensional index structures are usually applied when managing feature vectors. query processing can be improved substantially with optimization techniques such as blocksize optimization, data space quantization, and dimension reduction. to determine optimal parameters, an accurate estimate of index-based query processing performance is crucial. in this paper we develop a cost model for index structures for point databases such as the r*-tree and the x-tree. it provides accurate estimates of the number of data page accesses for range queries and nearest-neighbor queries under a euclidean metric and a maximum metric. the problems specific to high-dimensional data spaces, called boundary effects, are considered. the concept of the fractal dimension is used to take the effects of correlated data into account. christian böhm on the equivalence of database models y. edmund lien the expressiveness of a family of finite set languages neil immerman sushant patnaik david stemple state library online information system uses a hypertext front end ernest perez prediction with local patterns using cross-entropy heikki mannila dmitry pavlov padhraic smyth class modification in the gemstone object-oriented dbms we are currently designing a class modification methodology for gemstone. we describe the current status of the design. we choose from two basic approaches and then introduce those aspects of gemstone necessary for an understanding of the paper. after defining a set of invariants for gemstone databases, we discuss specific class modification operations in terms of maintaining these invariants. we next discuss several issues that impact class modification. these issues include concurrency and authorization, and lead to difficult choices. d. jason penney jacob stein talking to strangers steve whittaker a new normal form for the design of relational database schemata this paper addresses the problem of database schema design in the framework of the relational data model and functional dependencies. it suggests that both third normal form (3nf) and boyce-codd normal form (bcnf) supply an inadequate basis for relational schema design. the main problem with 3nf is that it is too forgiving and does not enforce the separation principle as strictly as it should. on the other hand, bcnf is incompatible with the principle of representation and prone to computational complexity. thus a new normal form, which lies between these two and captures the salient qualities of both, is proposed. the new normal form is stricter than 3nf, but it is still compatible with the representation principle. first a simpler definition of 3nf is derived, and the analogy of this new definition to the definition of bcnf is noted.
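the following sketch gives a textbook-style, uniformity-based estimate of data-page accesses for a hypercube range query, in the spirit of the cost-model entry above; it omits the boundary effects and fractal-dimension corrections that the paper treats, and all numbers are illustrative.

```python
# hedged minkowski-sum sketch of expected data-page accesses under a uniformity
# assumption: a "typical" page region is a small hypercube, and a query region
# touches a page whenever the two regions overlap. not the paper's actual model.

def expected_page_accesses(n_points: int, page_capacity: int,
                           selectivity: float, dims: int) -> float:
    n_pages = max(1, n_points // page_capacity)
    page_side = (page_capacity / n_points) ** (1.0 / dims)   # side of a typical page region
    query_side = selectivity ** (1.0 / dims)                 # hypercube query of fixed volume
    p_intersect = min(1.0, query_side + page_side) ** dims   # capped minkowski-sum estimate
    return n_pages * p_intersect

# the estimate grows rapidly with dimensionality for a query of fixed selectivity
for d in (2, 4, 8, 16):
    est = expected_page_accesses(n_points=1_000_000, page_capacity=50,
                                 selectivity=0.001, dims=d)
    print(f"d={d:2d}: ~{est:,.0f} page accesses")
```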
this analogy is used to derive the new normal form. finally, it is proved that bernstein's algorithm for schema design synthesizes schemata that are already in the new normal form. carlo zaniolo computation of chain queries in distributed database systems qi yang expressing business rules point-and-click expression builders, for instance limits and type consistency. structured english, for more complex restrictions and logical inferences. entity life history or state transition diagrams, for both basic and more advanced state transition rules. data model or class model extensions, for basic property rules. no matter how the rules are captured, there should be a single, unified conceptual representation "inside" of the man-machine boundary. "inside" here means transparent to the specifiers, but visible to analysis tools (e.g., for conflict analysis) and to rule engines or business logic servers (for run-time processing). inside, there may be still other representations. for processing and performance reasons, there might be many physical representations of the rules, optimized for particular tools or hardware/software environments. the result is actually three layers of representation: external, conceptual, and internal. this is strongly reminiscent of the old ansi/sparc three-schema architecture for data. this should not be surprising since rules simply build on terms and facts, which can be ultimately represented by data. where is this research now? a new, more concise representation scheme is under development. one focus of this scheme is a formal expression of how non-atomic rule types are derived from atomic ones. this would allow reduction of rules to a common base of fundamental rule types, in order to support automatic analysis of conflict and overlap in systematic fashion. this is opening exciting new avenues of research, and significant opportunities for those interested in getting involved. ronald g. ross query planning in infomaster oliver m. duschka michael r. genesereth editorial sumi helal eric brewer widening the net jeff sokolov reflections: our subliminal art steven pemberton long-term interaction: learning the 4 rs alan dix devina ramduny julie wilkinson index research (panel session): forest or trees? indexes and access methods have been a staple of database research --- and indeed of computer science in general --- for decades. a glance at the contents of this year's sigmod and pods proceedings shows another bumper crop of indexing papers. given the hundreds of indexing papers published in the database literature, a pause for reflection seems in order. from a scientific perspective, it is natural to ask why definitive indexing solutions have eluded us for so many years. what is the grand challenge in indexing? what basic complexities or intricacies underlie this large body of work? what would constitute a successful completion of this research agenda, and what steps will best move us in that direction? or is it the case that the problem space branches in so many ways that we should expect to continuously need to solve variants of the indexing problem? from the practitioner's perspective, the proliferation of indexing solutions in the literature may be more confusing than helpful. comprehensively evaluating the research to date is a near-impossible task. 
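returning to the normal-form entries above, the sketch below shows the attribute-closure test that underlies bcnf (and 3nf) reasoning: a nontrivial dependency whose left side is not a superkey is a violation; the schema and dependencies are a toy example.

```python
# hedged sketch of attribute closure and a bcnf violation check over a toy schema.

def closure(attrs, fds):
    """compute the closure of a set of attributes under functional dependencies."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

def bcnf_violations(schema, fds):
    return [(lhs, rhs) for lhs, rhs in fds
            if not set(rhs) <= set(lhs)                 # ignore trivial dependencies
            and closure(lhs, fds) < set(schema)]        # lhs is not a superkey

schema = {"title", "theater", "city", "zip"}
fds = [({"theater"}, {"city"}),
       ({"city"}, {"zip"}),
       ({"title", "theater"}, {"city", "zip"})]
print(bcnf_violations(schema, fds))   # the first two dependencies violate bcnf
```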
an evaluation has to include both functionality (applicability to the practitioner's problem, integration with other data management services like buffer management, query processing and transactions) and performance for the practitioner's workloads. unfortunately, there are no standard benchmarks for advanced indexing problems, and there has been relatively little work on methodologies for index experimentation and customization. how should the research community promote technology transfer in this area? are the new extensibility interfaces in object-relational dbmss conducive to this effort? joseph m. hellerstein hans-peter kriegel david lomet christos faloutsos raghu ramakrishnan paul brown adapting materialized views after redefinitions ashish gupta inderpal s. mumick kenneth a. ross mpi-video prototype systems (video) arun katkere don y. kuramura patrick kelly saied moezzi shankar chatterjee the affordances of media spaces for collaboration william w. gaver the memory extender personal filing system the benefits of electronic information storage are enormous and largely unrealized. as its cost continues to decline, the number of files in the average user's personal database may increase substantially. how is a user to keep track of several thousand, perhaps several hundred thousand, files? the memory extender (me) system improves the user interface to a personal database by actively modeling the user's own memory for files and for the context in which these files are used. files are multiply indexed through a network of variably weighted term links. context is similarly represented and is used to minimize the user input necessary to disambiguate a file. files are retrieved from the context through a spreading-activation-like process. the system aims towards an ideal in which the computer provides a natural extension to the user's own memory. w. p. jones benefits of implementing on-line methods and procedures kenneth r. ohnemus diana f. mallin system response time, operator productivity, and job satisfaction raymond e. barber henry c. lucas a provably efficient computational model for approximate spatiotemporal retrieval vasilis delis christos makris spiros sioutas multiple representations in gis: materialization through map generalization, geometric, and spatial analysis operations clodoveu a. davis alberto h. f. laender "one size fits all" database architectures do not work for dss clark d. french estimating the cost of updates in a relational database in this paper, cost formulas are derived for the updates of data and indexes in a relational database. the costs depend on the data scan type and the predicates involved in the update statements. we show that update costs have a considerable influence, both in the context of the physical database design problem and in access path selection in query optimization for relational dbmss. m. schkolnick p. tiberio multidatabase interdependencies in industry in this paper we address the problem of data consistency between interrelated data. in industrial environments, lack of consistent data creates difficulties in interoperation between systems and often requires manual interventions to restart operations that fail due to inconsistent data. we report the results of a study to understand applicability, adequacy and advantages of a framework we had proposed earlier to specify interdatabase dependencies in multidatabase environments. we studied several existing bellcore systems and identified examples of interdependent data.
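the memory extender entry above suggests a simple illustration of spreading-activation-like retrieval over a weighted term-file network; the links, weights, decay factor, and file names below are invented.

```python
# hedged sketch of spreading-activation retrieval over a weighted term-file network.

TERM_FILE_LINKS = {           # term -> {file: link weight}
    "budget":  {"q3_budget.xls": 0.9, "trip_report.doc": 0.2},
    "travel":  {"trip_report.doc": 0.8, "expenses.txt": 0.5},
    "q3":      {"q3_budget.xls": 0.7},
}

def retrieve(context_terms, links=TERM_FILE_LINKS, decay=0.8):
    """activate files from weighted context terms and rank by accumulated activation."""
    activation = {}
    for term, source_strength in context_terms.items():
        for fname, w in links.get(term, {}).items():
            activation[fname] = activation.get(fname, 0.0) + decay * source_strength * w
    return sorted(activation.items(), key=lambda kv: kv[1], reverse=True)

# context built from recent user activity, weighted by recency
print(retrieve({"budget": 1.0, "q3": 0.6}))
```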
the examples demonstrate that the framework allows precise and detailed specification of complex interdependencies that lead to efficient strategies to enforce the consistency requirements among the corporate data managed in multiple databases. we believe that our specification framework can help in the maintenance of data that meet a business's consistency needs, reduce time-consuming and costly manual operations, and provide data of better quality to end users. amit p. sheth george karabatis a framework for coherent hypertext fiction (abstract) brendon towle wolff dobson an extension of conflict-free multivalued dependency sets several researchers (beeri, bernstein, chiu, fagin, goodman, maier, mendelzon, ullman, and yannakakis) have introduced a special class of database schemes, called acyclic or tree schemes. beeri et al. have shown that an acyclic join dependency, naturally defined by an acyclic database scheme, has several desirable properties, and that an acyclic join dependency is equivalent to a conflict-free set of multivalued dependencies. however, since their results are confined to multivalued and join dependencies, it is not clear whether we can handle functional dependencies independently of other dependencies. in the present paper we define an extension of a conflict-free set, called an extended conflict-free set, including multivalued dependencies and functional dependencies, and show the following two properties of an extended conflict-free set: there are three equivalent definitions of an extended conflict-free set. one of them is defined as a set including an acyclic join dependency and a set of functional dependencies such that the left and right sides of each functional dependency are included in one of the attribute sets that construct the acyclic join dependency. for a relation scheme with an extended conflict-free set, there is a decomposition into third normal form with a lossless join and preservation of dependencies. hirofumi katsuno archiving telemeetings this paper presents a prototype system for modeling and managing the complete life-time of telemeetings/teleconferences. the system provides services for modeling telemeetings, storing telemeetings in a telemeeting database, annotating telemeetings and querying the telemeeting database. constantin arapis preparing students to communicate in a virtual environment gerhard steinke query optimization by simulated annealing query optimizers of future database management systems are likely to face large access plan spaces in their task. exhaustively searching such access plan spaces is unacceptable. we propose a query optimization algorithm based on simulated annealing, which is a probabilistic hill climbing algorithm. we show the specific formulation of the algorithm for the case of optimizing complex non-recursive queries that arise in the study of linear recursion. the query answer is explicitly represented and manipulated within the closed semiring of linear relational operators. the optimization algorithm is applied to a state space that is constructed from the equivalent algebraic forms of the query answer. a prototype of the simulated annealing algorithm has been built and a few experiments have been performed for a limited class of relational operators. our initial experience is that, in general, the algorithm converges to processing strategies that are very close to the optimal. moreover, the traditional processing strategies (e.g., the semi-naive evaluation) have been found to be, in general, suboptimal.
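the simulated-annealing optimizer described above can be illustrated with a toy version that anneals over left-deep join orders; the cost function, selectivities, and cardinalities below are crude placeholders rather than the paper's model.

```python
import math
import random

# hedged sketch of simulated annealing over left-deep join orders. the cost function
# is a stand-in (sum of estimated intermediate result sizes under a flat selectivity).

CARD = {"r": 1000, "s": 100, "t": 10_000, "u": 50}          # base relation sizes
SEL = 0.01                                                   # flat join selectivity

def cost(order):
    size, total = CARD[order[0]], 0
    for rel in order[1:]:
        size = size * CARD[rel] * SEL      # estimated intermediate result size
        total += size
    return total

def anneal(relations, temp=1e6, cooling=0.95, steps_per_temp=20):
    state = list(relations)
    best = list(state)
    while temp > 1.0:
        for _ in range(steps_per_temp):
            i, j = random.sample(range(len(state)), 2)       # swap two relations
            neighbour = list(state)
            neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
            delta = cost(neighbour) - cost(state)
            # always accept downhill moves; accept uphill moves with annealing probability
            if delta < 0 or random.random() < math.exp(-delta / temp):
                state = neighbour
                if cost(state) < cost(best):
                    best = list(state)
        temp *= cooling
    return best, cost(best)

print(anneal(["r", "s", "t", "u"]))
```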
yannis e. ioannidis eugene wong ensuring relaxed atomicity for flexible transactions in multidatabase systems global transaction management requires cooperation from local sites to ensure the consistent and reliable execution of global transactions in a distributed database system. in a heterogeneous distributed database (or multidatabase) environment, various local sites make conflicting assertions of autonomy over the execution of global transactions. a flexible transaction model for the specification of global transactions makes it possible to deal robustly with these conflicting requirements. this paper presents an approach that preserves the semi-atomicity (a weaker form of atomicity) of flexible transactions, allowing local sites to autonomously maintain serializability and recoverability. we offer a fundamental characterization of the flexible transaction model and precisely define semi-atomicity. we investigate the commit dependencies among the subtransactions of a flexible transaction. these dependencies are used to control the commitment order of the subtransactions. we next identify those restrictions that must be placed upon a flexible transaction to ensure the maintenance of its semi-atomicity. as atomicity is a restrictive criterion, semi-atomicity enhances the class of executable global transactions. aidong zhang marian nodine bharat bhargava omran bukhres commercial use of meetingware (workshop session) (abstract only) this full-day workshop focuses on applications of meetingware within commercial settings. our aim is to share information among people with experience in implementing groupware within organizations, and to share our knowledge about new meeting technologies and practices. michele cresmen robin lampert kathy ryan applying writing guidelines to web pages john morkes jakob nielsen research and development issues for large-scale multimedia information systems stavros christodoulakis peter triantafillou the information visualizer, an information workspace stuart k. card george g. robertson jock d. mackinlay panda user-oriented color interface design: direct manipulation of color in context penny f. bauersfeld jodi l. slater a robust framework for content-based retrieval by spatial similarity in image databases a framework for retrieving images by spatial similarity (friss) in image databases is presented. in this framework, a robust retrieval by spatial similarity (rss) algorithm is defined as one that incorporates both directional and topological spatial constraints, retrieves similar images, and recognizes images even after they undergo translation, scaling, rotation (both perfect and multiple), or any arbitrary combination of transformations. the friss framework is discussed and used as a base for comparing various existing rss algorithms. analysis shows that none of them satisfies all the friss specifications. an algorithm, simdtc, is then presented. simdtc introduces the concept of a rotation correction angle (rca) to align objects in one image spatially closer to matching objects in another image for more accurate similarity assessment. similarity between two images is a function of the number of common objects between them and the closeness of directional and topological spatial relationships between object pairs in both images. the simdtc retrieval is invariant under translation, scaling, and perfect rotation, and the algorithm is able to rank multiple rotation variants. the algorithm was tested using synthetic images and the tessa image database.
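the rotation correction angle idea in the simdtc entry above can be sketched as follows: one image is "unrotated" by a candidate angle before pairwise directional relations are compared; the object layouts, the angular scoring, and the coarse sweep are simplifications of the published algorithm, not a reimplementation of it.

```python
import math
from itertools import combinations

# hedged sketch of direction-based spatial similarity with a rotation correction angle.

def pair_angles(objects):
    """angle of the vector between every pair of named object centroids."""
    return {(a, b): math.atan2(objects[b][1] - objects[a][1],
                               objects[b][0] - objects[a][0])
            for a, b in combinations(sorted(objects), 2)}

def similarity(img1, img2, rca):
    common = set(img1) & set(img2)
    a1 = pair_angles({k: img1[k] for k in common})
    a2 = pair_angles({k: img2[k] for k in common})
    if not a1:
        return 0.0
    # score each common pair by how well directions agree once img2 is "unrotated"
    return sum(math.cos(a1[p] - (a2[p] - rca)) for p in a1) / len(a1)

def best_rca(img1, img2, step_deg=5):
    return max((similarity(img1, img2, math.radians(d)), d)
               for d in range(0, 360, step_deg))

query = {"house": (0, 0), "tree": (4, 0), "car": (0, 3)}
rotated = {"house": (0, 0), "tree": (0, 4), "car": (-3, 0)}   # query rotated 90 degrees
print(best_rca(query, rotated))    # similarity near 1.0 at roughly 90 degrees
```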
analysis shows the robustness of the simdtc algorithm over current algorithms. essam a. el-kwae mansur r. kabuka olap and statistical databases: similarities and differences (abstract) arie shoshani digital libraries: introduction edward a. fox a normal form for relational databases that is based on domains and keys a new normal form for relational databases, called domain-key normal form (dk/nf), is defined. also, formal definitions of insertion anomaly and deletion anomaly are presented. it is shown that a schema is in dk/nf if and only if it has no insertion or deletion anomalies. unlike previously defined normal forms, dk/nf is not defined in terms of traditional dependencies (functional, multivalued, or join). instead, it is defined in terms of the more primitive concepts of domain and key, along with the general concept of a "constraint." we also consider how the definitions of traditional normal forms might be modified by taking into consideration, for the first time, the combinatorial consequences of bounded domain sizes. it is shown that after this modification, these traditional normal forms are all implied by dk/nf. in particular, if all domains are infinite, then these traditional normal forms are all implied by dk/nf. ronald fagin progressive vector transmission michela bertolotto max j. egenhofer kdd-cup 2000: question 5 winner's report amdocs einat neumann nurit vatnik saharon rosset miri duenias isabel sassoon aaron inger formal semantics for time in databases the concept of a historical database is introduced as a tool for modeling the dynamic nature of some part of the real world. just as first-order logic has been shown to be a useful formalism for expressing and understanding the underlying semantics of the relational database model, intensional logic is presented as an analogous formalism for expressing and understanding the temporal semantics involved in a historical database. the various components of the relational model, as extended to include historical relations, are discussed in terms of the model theory for the logic ils, a variation of the logic il formulated by richard montague. the modal concepts of intensional and extensional data constraints and queries are introduced and contrasted. finally, the potential application of these ideas to the problem of natural language database querying is discussed. james clifford david s. warren a new model for algorithm animation over the www james e. baker isabel f. cruz giuseppe liotta roberto tamassia a case study of a multimedia co-working task and the resulting interface design of a collaborative communication tool the video viewer is a communication tool that allows two users to share video information across a network. the design of this tool was based on the results of a case study involving two multimedia, collaborative workstations situated in two separate rooms. users performed several tasks collaboratively using different media in an unstructured environment (i.e., there were four monitors to increase screen space and there was no specific interface for guidance). this video outlines the case study, the preliminary case study results and how these results affected the interface design of the video viewer. amanda ropa bengt ahlström management of interface design in humanoid today's interface design tools either force designers to handle a tremendous number of design details, or limit their control over design decisions.
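the historical-database entry above can be illustrated with a minimal valid-time relation and a timeslice operator; the tuples and dates are invented, and the sketch ignores the intensional-logic machinery the paper develops.

```python
from datetime import date

# hedged sketch of a historical relation: each tuple carries a valid-time interval,
# and a timeslice operator recovers the ordinary relation that held at a given instant.

FOREVER = date.max

emp = [
    # (name, dept, valid_from, valid_to)
    ("ada",   "research", date(2020, 1, 1), date(2021, 6, 30)),
    ("ada",   "sales",    date(2021, 7, 1), FOREVER),
    ("grace", "research", date(2020, 3, 1), FOREVER),
]

def timeslice(relation, instant):
    """project away valid time, keeping tuples whose interval contains the instant."""
    return [(name, dept) for name, dept, t_from, t_to in relation
            if t_from <= instant <= t_to]

print(timeslice(emp, date(2021, 1, 1)))   # [('ada', 'research'), ('grace', 'research')]
print(timeslice(emp, date(2022, 1, 1)))   # [('ada', 'sales'), ('grace', 'research')]
```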
neither of these approaches taps the true strengths of either human designers or computers in the design process. this paper presents a human-computer collaborative system that uses a model-based approach for interface design to help designers on decision making, and utilizes the bookkeeping capabilities of computers for regular and tedious tasks. we describe (a) the underlying modeling technique and an execution environment that allows even incompletely- specified designs to be executed for evaluation and testing purposes, and (b) a tool that decomposes high-level design goals into the necessary implementation steps, and helps designers manage the myriad of details that arise during design. ping luo pedro szekely robert neches parallel query processing in shared disk database systems system developments and research on parallel query processing have concentrated either on "shared everything" or "shared nothing" architectures so far. while there are several commercial dbms based on the "shared disk" alternative, this architecture has received very little attention with respect to parallel query processing. a comparison between shared disk and shared nothing reveals many potential benefits for shared disk with respect to parallel query processing. in particular, shared disk supports more flexible control over the communication overhead for intra-transaction parallelism, and a higher potential for dynamic load balancing and efficient processing of mixed oltp/query workloads. we also sketch necessary extensions for transaction management (concurrency/coherency control, logging/recovery) to support intra- transaction parallelism in the shared disk environment. erhard rahm the index suggestion problem for object database applications eric hughes marianne winslett how organizational structure and culture shape cscw (doctoral colloquium) angela lin using filtering agents to improve prediction quality in the grouplens research collaborative filtering system badrul m. sarwar joseph a. konstan al borchers jon herlocker brad miller john riedl visual video browsing interfaces using key frames anita komlodi laura slaughter challenges facing researchers using multimedia data: tools for layering significance ricki goldman-segall the resolution of the problem of objectivity in a method of evaluation for interactive applications francisco v. cipolla ficarra reasoning about functional dependencies generalized for semantic data models we propose a more general form of functional dependency for semantic data models that derives from their common feature in which the separate notions of domain and relation in the relational model are combined into a single notion of class. this usually results in a richer terminological component for their query languages, whereby terms may navigate through any number of properties, including none. we prove the richer expressiveness of this more general functional dependency, and exhibit a sound and complete set of inference axioms. although the general problem of decidability of their logical implication remains open at this time, we present decision procedures for cases in which the dependencies included in a schema correspond to keys, or in which the schema itself is acyclic. the theory is then extended to include a form of conjunctive query. of particular significance is that the query becomes an additional source of functional dependency. finally, we outline several applications of the theory to various problems in physical design and in query optimization. 
the applications derive from an ability to predict when a query can have at most one solution. grant e. weddell multiway spatial joins due to the evolution of geographical information systems, large collections of spatial data having various thematic contents are currently available. as a result, the interest of users is not limited to simple spatial selections and joins, but complex query types that implicate numerous spatial inputs become more common. although several algorithms have been proposed for computing the result of pairwise spatial joins, limited work exists on processing and optimization of _multiway spatial joins_. in this article, we review pairwise spatial join algorithms and show how they can be combined for multiple inputs. in addition, we explore the application of _synchronous traversal_ (st), a methodology that synchronously processes all inputs without producing intermediate results. then, we integrate the two approaches in an engine that includes st and pairwise algorithms, using dynamic programming to determine the optimal execution plan. the results show that, in most cases, multiway spatial joins are best processed by combining st with pairwise methods. finally, we study the optimization of very large queries by employing randomized search algorithms. nikos mamoulis dimitris papadias guides 3.0 abbe don tim oren brenda laurel passive capture and structuring of lectures despite recent advances in authoring systems and tools, creating multimedia presentations remains a labor-intensive process. this paper describes a system for automatically constructing structured multimedia documents from live presentations. the automatically produced documents contain synchronized and edited audio, video, images, and text. two essential problems, synchronization of captured data and automatic editing, are identified and solved. sugata mukhopadhyay brian smith the relation between problems in large-scale concurrent systems and distributed databases we first describe the state of the art in models of concurrency. the models are analyzed along two dimensions: communication and computation. the paper then discusses some problems which make it difficult to realize large-scale concurrent systems. such problems include compositionality, heterogeneity, debugging, resource management, and concurrency control. some useful comparisons are drawn to problems in distributed databases and it is argued that solutions to these problems cross disciplinary boundaries. finally, the paper discusses trends in building concurrent computers and provides some expectations for the future. g. agha wind and wave auditory icons for monitoring continuous processes stephane conversy applying an update method to a set of receivers in the context of object databases, we study the application of an update method to a collection of receivers rather than to a single one. the obvious strategy of applying the update to the receivers one after the other, in some arbitrary order, brings up the problem of order independence. on a very general level, we investigate how update behavior can be analyzed in terms of certain schema annotations, called colorings. we are able to characterize those colorings that always describe order-independent updates. we also consider a more specific model of update methods implemented in the relational algebra. order-independence of such algebraic methods is undecidable in general, but decidable if the expressions used are positive.
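the order-independence question in the update-method entry above can be checked by brute force on tiny examples: apply the update to the receivers in every order and compare the resulting states; the objects and the update below are invented, and the approach is only feasible for toy inputs.

```python
from itertools import permutations

# hedged sketch of an order-independence check for a set-oriented update.

class Account:
    def __init__(self, name, balance):
        self.name, self.balance = name, balance

def snapshot(db):
    return tuple(sorted((a.name, round(a.balance, 2)) for a in db))

def raise_by_max(receiver, db):
    """an update whose effect depends on global state, so order can matter."""
    receiver.balance += 0.1 * max(a.balance for a in db)

def order_independent(make_db, update):
    outcomes = set()
    n = len(make_db())
    for order in permutations(range(n)):
        db = make_db()
        for i in order:
            update(db[i], db)
        outcomes.add(snapshot(db))
    return len(outcomes) == 1

make_db = lambda: [Account("a", 100.0), Account("b", 200.0)]
print(order_independent(make_db, raise_by_max))   # False: the order matters here
```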
finally, we consider an alternative parallel strategy for set-oriented applications of algebraic update methods and compare and relate it to the sequential strategy. monkey in the middle or building and supporting the human interface diane l. darrow looking at ourselves: an examination of the social organisation of two research laboratories richard h. r. harper evaluation of collaborative systems using communication actions elizabeth a. hinkelman designing a children's digital library with and for children this paper describes preliminary work carried out to design a children's digital library of stories and poems with and for children aged 11-14 years old. we describe our experience in engaging children as design partners, and propose a digital library environment and design features to provide an engaging, successful learning experience for children using it for collaborative writing. yin leng theng norliza mohd-nasir harold thimbleby george buchanan matthew jones an interbase system at bnr omran bukhres jiansan chen rob pezzoli the use of data type information in an interactive database environment despite the enormous advances that have been made in the specification of data types and data models in the fields of programming languages, databases and artificial intelligence, there remain a number of problems in attempting to unify the various approaches to the formal description of data. the purpose of this brief paper is to examine these problems from the point (or points) of view of those people---designers, administrators, applications programmers, and end-users---whose main interest is with databases. in particular, we hope to display special concern for the tools provided for the end-user, who should be the final beneficiary of whatever advances are made. in order to pin down some of these problems, it is worthwhile to attempt a definition of certain terms used in databases: (1) a data model (or database management system if one is describing an implementation) is a set of parameterized or "generic" data types. (2) a database schema is a set of data types that result from instantiating the generic types of the data model to produce a set of data types that describe the data to be stored. (3) a database is an instantiation of those types defined by a schema. peter buneman ira winston two techniques for on-line index modification in shared nothing parallel databases kiran j. achyutuni edward omiecinski shamkant b. navathe lessons from three years of web development alan r. dennis butterfly: a conversation-finding agent for internet relay chat neil w. van dyke henry lieberman pattie maes reflections: our subliminal art steven pemberton content versus structure in information environments: a longitudinal analysis of website preferences michael j. davern d. te'eni jae yun moon a unified approach to indexing and retrieval of information this paper takes another look at information retrieval. it starts from the purposes of retrieval, looks at what people would like from a retrieval system, builds a conceptual model for how a retrieval system could work and from that determines what and how to do appropriate indexing to fit the model. the approach leads to the idea of the duality of indexing and retrieval. the ideas are illustrated by giving the design of a text based system and of a system to store pictures of faces. it is shown that the underlying mechanisms are the same for both systems and it suggests that other retrieval systems using this approach will have similar structures.
other implications of the approach are that retrieval and indexing can be monitored by the machine and the systems can learn to better respond to human needs. ongoing research in this area is outlined. kevin cox warehousing and mining web logs analyzing web logs for usage and access trends can not only provide important information to web site developers and administrators, but also help in creating adaptive web sites. while there are many existing tools that generate fixed reports from web logs, they typically do not allow ad-hoc analysis queries. moreover, such tools cannot discover hidden patterns of access embedded in the access logs. we describe a relational olap (rolap) approach for creating a web-log warehouse. this is populated both from web logs, as well as the results of mining web logs. we also present a web based ad-hoc tool for analytic queries on the warehouse. we discuss the design criteria that influenced our choice of dimensions, facts and data granularity, and present the results from analyzing and mining the logs. karuna p. joshi anupam joshi yelena yesha raghu krishnapuram alternative techniques for the efficient acquisition of haptic data immersive environments are those that surround users in an artificial world. these environments consist of a composition of various types of immersidata; unique data types that are combined to render a virtual experience. acquisition, for storage and future querying, of information describing sessions in these environments is challenging because of the real- time demands and sizeable amounts of data to be managed. in this paper, we summarize a comparison of techniques for achieving the efficient acquisition of one type of immersidata, the haptic data type, which describes the movement, rotation, and force associated with user-directed objects in an immersive environment. in addition to describing a general process for real- time sampling and recording of this type of data, we propose three distinct sampling strategies: fixed, grouped, and adaptive. we conducted several experiments with a real haptic device and found that there are tradeoffs between the accuracy, efficiency, and complexity of implementation for each of the proposed techniques. while it is possible to use any of these approaches for real-time haptic data acquisition, we found that an adaptive sampling strategy provided the most efficiency without significant loss in accuracy. as immersive environments become more complex and contain more haptic sensors, techniques such as adaptive sampling can be useful for improving scalability of real-time data acquisition. cyrus shahabi mohammad r. kolahdouzan greg barish roger zimmermann didi yao kun fu lingling zhang an adaptive view element framework for multi-dimensional data management we present an adaptive wavelet view element framework for managing different types of multi-dimensional data in storage and retrieval applications. we consider the problems of multi-dimensional data compression, multi-resolution subregion access, selective materialization, progressive retrieval and similarity searching. the framework uses wavelets to partition the multi- dimensional data into view elements that form the building blocks for synthesizing views of the data. the view elements are organized and managed using different view element graphs. the graphs are used to guide cost-based view element selection algorithms for optimizing compression, access, retrieval and search performance. 
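the adaptive sampling strategy in the haptic-acquisition entry above can be sketched as a simple change-threshold filter over a synthetic stream; the stream, the threshold, and the sample format are invented.

```python
import math

# hedged sketch of adaptive sampling of a haptic stream: a sample is recorded only
# when it differs from the last recorded one by more than a threshold.

def synthetic_haptic_stream(n=200):
    """fake (position, force) readings from a haptic device."""
    for i in range(n):
        t = i / 20.0
        yield (math.sin(t), 0.5 * math.cos(3 * t))

def adaptive_sample(stream, threshold=0.05):
    recorded, last = [], None
    for sample in stream:
        if last is None or max(abs(a - b) for a, b in zip(sample, last)) > threshold:
            recorded.append(sample)
            last = sample
    return recorded

kept = adaptive_sample(synthetic_haptic_stream())
print(f"kept {len(kept)} of 200 samples")
```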
we present the adaptive wavelet view element framework and describe its application in managing multi-dimensional data such as 1-d time series data, 2-d images, video sequences, and multi-dimensional data cubes. we present experimental results that demonstrate that the adaptive wavelet view element framework improves performance of compressing, accessing, and retrieving multi-dimensional data compared to non-adaptive methods. john r. smith chung-sheng li a fault-tolerant commit protocol for replicated databases when failures occur during the execution of distributed commit protocols, the protocols may block in some partitions to avoid inconsistent termination of the transaction, thus making data items in these partitions unavailable for access. we present a protocol that incorporates two new ideas with the goal of improving data availability. first, a new two-level voting scheme is proposed for deciding in which partitions to terminate the transaction. in this scheme, a choice is made based on the number of data items available in the partition rather than on the number of individual nodes. indeed, in replicated systems, a criterion based on the number of nodes may be misleading. second, we propose a way to reduce blocking caused by accumulating network fragmentation. the idea employs the views mechanism previously used in replica management. michael rabinovich edward d. lazowska equality and domain closure in first-order databases a class of first-order databases with no function signs is considered. a closed database db is one for which the only existing individuals are those explicitly referred to in the formulas of db. formally, this is expressed by including in db a domain closure axiom (x)(x = c1 ∨ ··· ∨ x = cp), where c1, …, cp are all of the constants occurring in db. it is shown how to completely capture the effects of this axiom by means of suitable generalizations of the projection and division operators of relational algebra, thereby permitting the underlying theorem prover used for query evaluation to ignore this axiom. a database is e-saturated if all of its constants denote distinct individuals. it is shown that such databases circumvent the usual problems associated with equality, which arise in more general databases. finally, it is proved for horn databases and positive queries that only definite answers are obtained, and for databases with infinitely many constants that infinitely long indefinite answers can arise. raymond reiter an overview of multitext charles l. a. clarke gordon v. cormack christopher r. palmer lag as a determinant of human performance in interactive systems i. scott mackenzie colin ware of xml and databases (panel session): where's the beef? this panel will examine the implications of the xml revolution, which is currently raging on the web, for database systems research and development. michael j. carey jennifer widom adam bosworth bruce lindsay michael stonebraker dan suciu workspace awareness for groupware carl gutwin saul greenberg don't just stand there win treese a language facility for designing database-intensive applications taxis, a language for the design of interactive information systems (e.g., credit card verification, student-course registration, and airline reservations) is described.
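the two-level voting idea in the commit-protocol entry above can be sketched as a check on copies rather than nodes: a partition may terminate the transaction only if it holds a majority of the copies of every data item involved; the replica placement below is invented and the sketch ignores the views mechanism.

```python
# hedged sketch of partition termination decided by data-item copies, not node count.

REPLICAS = {          # data item -> nodes holding a copy
    "x": {"n1", "n2", "n3"},
    "y": {"n1", "n2", "n4"},
}

def can_terminate(partition_nodes, items=REPLICAS):
    """true if this partition holds a strict majority of copies of every item."""
    return all(len(nodes & partition_nodes) * 2 > len(nodes)
               for nodes in items.values())

print(can_terminate({"n1", "n2"}))          # True: 2 of 3 copies of both x and y
print(can_terminate({"n3", "n4", "n5"}))    # False despite containing more nodes
```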
taxis offers (relational) database management facilities, a means of specifying semantic integrity constraints, and an exception-handling mechanism, integrated into a single language through the concepts of class, property, and the is-a (generalization) relationship. a description of the main constructs of taxis is included and their usefulness illustrated with examples. john mylopoulos philip a. bernstein harry k. t. wong evaluation of web document retrieval: a sigir'99 workshop maristella agosti massimo melucci an empirical evaluation of user interfaces for topic management of web sites brian amento will hill loren terveen deborah hix peter ju direct: a query facility for multiple databases the subject of this research project is the architecture and design of a multidatabase query facility. these databases contain structured data, typical for business applications. problems addressed are: presenting a uniform interface for retrieving data from multiple databases, providing autonomy for the component databases, and defining an architecture for semantic services. direct is a query facility for heterogeneous databases. the databases and their definitions can differ in their data models, names, types, and encoded values. instead of creating a global schema, descriptions of different databases are allowed to coexist. a multidatabase query language provides a uniform interface for retrieving data from different databases. direct has been exercised with operational databases that are part of an automated business system. ulla merz roger king on balancing the load in a clustered web farm in this article we propose a novel, yet practical, scheme which attempts to optimally balance the load on the servers of a clustered web farm. the goal in solving this performance problem is to achieve minimal average response time for customer requests, and thus ultimately achieve maximal customer throughput. the article decouples the overall problem into two related but distinct mathematical subproblems, one static and one dynamic. we believe this natural decoupling is one of the major contributions of our article. the _static_ component algorithm determines good assignments of sites to potentially overlapping servers. these cluster assignments, which, due to overhead, cannot be changed too frequently, have a major effect on achievable response time. additionally, these assignments must be palatable to the sites themselves. the _dynamic_ component algorithm is designed to handle real-time load balancing by routing customer requests from the network dispatcher to the servers. this algorithm must react to fluctuating customer request load while respecting the assignments of sites to servers determined by the static component. the static and dynamic components both employ in various contexts the same so-called _goal setting_ algorithm. this algorithm determines the theoretically optimal load on each server, given hypothetical cluster assignments and site activity. we demonstrate the effectiveness of the overall load-balancing scheme via a number of simulation experiments. joel l. wolf philip s. yu designing storytelling technologies to encourage collaboration between young children we describe the iterative design of two collaborative storytelling technologies for young children, kidpad and the klump. we focus on the idea of designing interfaces to subtly encourage collaboration so that children are invited to discover the added benefits of working together.
this idea has been motivated by our experiences of using early versions of our technologies in schools in sweden and the uk. we compare the approach of encouraging collaboration with other approaches to synchronizing shared interfaces. we describe how we have revised the technologies to encourage collaboration and to reflect design suggestions made by the children themselves. steve benford benjamin b. bederson karl-petter åkesson victor bayon allison druin pär hansson juan pablo hourcade rob ingram helen neale claire o'malley kristian t. simsarian danaë stanton yngve sundblad gustav taxén integrated multi scale text retrieval visualization karlis kaugars a digital strategy for the library of congress alan s. inouye bibliography: individual differences and computer-human interaction elizabeth a. buie system management and automatic reconfiguration algorithms for in-home digital networks andrás montvay workspace awareness support with radar views carl gutwin saul greenberg mark roseman new methods and fast algorithms for database normalization a new method for computing minimal covers is presented using a new type of closure that allows significant reductions in the number of closures computed for normalizing relations. benchmarks are reported comparing the new and the standard techniques. jim diederich jack milton mining gps data to augment road models seth rogers pat langley christopher wilson aspect windows, 3-d visualizations, and indirect comparisons of information retrieval systems russell c. swan james allan a response to bethke et al david prigge how fluent is your interface? designing for international users patricia russo stephen boor a transitive closure and magic functions machine an extended version of our simd relational algebraic processor is presented. in addition to the usual relational and set operations the new machine has the ability to recycle its responder sets internally. this allows it to perform repeated joins, for example, without external intervention and so achieve operations such as path discovery and transitive closure in graphs stored as relations, and to evaluate various types of recursive query. the many compiled methods for recursive query evaluation are applicable in this system as in any other relational database, and can be efficiently evaluated because of the in-built recursive and iterative capability of our machine. the magic functions approach has a clear connection with the machine since it uses relations as magic functions. jerome robinson simon lavington tilting operations for small screen interfaces jun rekimoto flexible, active support for collaborative work with conversationbuilder simon m. kaplan william j. tolone douglas p. bogia celsina bignoli sharedview system: visual communication support for spatial workspace collaboration (abstract) hideaki kuzuoka harnessing the interface for domain learning david golightly intelligent caching: selecting, representing, and reusing data in an information server accessing information sources to retrieve data requested by a user can be expensive, especially when dealing with distributed information sources. one way to reduce this cost is to cache the results of queries, or related classes of data. this paper presents an approach to caching and addresses the issues of which information to cache, how to describe what has been cached, and how to use the cached information to answer future queries.
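the repeated-join capability in the transitive-closure entry above is what semi-naive evaluation exploits; the sketch below runs the same iteration in software over a toy edge relation.

```python
# hedged sketch of semi-naive transitive closure over an edge relation.

def transitive_closure(edges):
    closure = set(edges)
    delta = set(edges)
    while delta:
        # join only the newly derived pairs with the base relation (semi-naive step)
        new = {(a, d) for a, b in delta for c, d in edges if b == c} - closure
        closure |= new
        delta = new
    return closure

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(transitive_closure(edges)))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```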
we consider these issues in the context of the sims information server, which is a system for retrieving information from multiple heterogeneous and distributed information sources. the design of this information server is ideal for representing and reusing cached information since each class of cached information is simply viewed as another information source that is available for answering future queries. yigal arens craig a. knoblock interface modeling issues in providing access to guis for the visually impaired (panel session) a. d. n. edwards e. d. mynatt j. thatcher database selection for processing k nearest neighbors queries in distributed environments we consider the processing of digital library queries, consisting of a text component and a structured component in distributed environments. the text component can be processed using techniques given in previous papers such as [7, 8, 11]. in this paper, we concentrate on the processing of the structured component of a distributed query. histograms are constructed and algorithms are given to provide estimates of the desirabilities of the databases with respect to the given query. databases are selected in descending order of desirability. an algorithm is also given to select tuples from the selected databases. experimental results are given to show that the techniques provided here are effective and efficient. clement yu prasoon sharma weiyi meng yan qin applications: a dimension space for user interface management systems joëlle coutaz sandrine balbo an empirical study of speech and gesture interaction: toward the definition of ergonomic design guidelines sandrine robbe jpernlite: an extensible transaction server for the world wide web jack j. yang gail e. kaiser the perceptual structure of multidimensional input device selection concepts such as the logical device, taxonomies, and other descriptive frameworks have improved understanding of input devices but ignored or else treated informally their pragmatic qualities, which are fundamental to selection of input devices for tasks. we seek the greater leverage of a predictive theoretical framework by basing our investigation of three-dimensional vs. two-dimensional input devices on garner's theory of processing of perceptual structure in multidimensional tasks. two three-dimensional tasks may seem equivalent, but if they involve different types of perceptual spaces, they should be assigned correspondingly different input devices. our experiment supports this hypothesis and thus both indicates when to use three-dimensional input devices and gives credence to our theoretical basis for this indication. robert j. k. jacob linda e. sibert friend21 project hirotada ueda formal aspects of concurrency control in long-duration transaction systems using the nt/pv model in the typical database system, an execution is correct if it is equivalent to some serial execution. this criterion, called serializability, is unacceptable for new database applications which require long-duration transactions. we present a new transaction model which allows correctness criteria more suitable for these applications. this model combines three enhancements to the standard model: nested transactions, explicit predicates, and multiple versions. these features yield the name of the new model, nested transactions with predicates and versions, or nt/pv. the modular nature of the nt/pv model allows a straightforward representation of simple systems.
it also provides a formal framework for describing complex interactions. the most complex interactions the model allows can be captured by a protocol which exploits all of the semantics available to the nt/pv model. an example of these interactions is shown in a case application. the example shows how a system based on the nt/pv model is superior to both standard database techniques and unrestricted systems in both correctness and performance. henry f. korth greg speegle principles for digital library development alexa t. mccray marie e. gallagher implementation of general constraints in sim richard bigelow what's new and not-so-new in information systems: leveraging the skills and insights of the academic community charles o. rossotti specification of content-dependent security policies the protection of information from unauthorized disclosure is an important consideration for the designers of any large multiuser computer system. a general purpose database management system often requires the enforcement of content-dependent security policies in which a decision to allow access must be based on the value of the data itself. several authors ([har76], [sto76], [gri76], [sum77], [min78], [spo83], and others) have proposed mechanisms for implementing content-dependent security policies. few authors, however, have investigated the properties of models for the specification of such policies. this paper identifies several problems created by inadequate models for the specification of content-dependent security policies. if a specification model is too liberal in the types of policies it can express, it may provide an increased opportunity for compromise of data. if the specification model is too conservative, it cannot express many desirable policies. thus a flexible model which will allow a compromise between these two extremes is needed for specifying content-dependent policies. such a model is proposed here. david l. spooner analyzing user interactions with hypermedia systems wayne a. nelson the design and maintenance of the andrew help system: providing a large set of information to a large community of users ayami ogura terilyn gillespie the organizational consequences of office automation: refining measurement techniques jonathan a. morell corona: a communication service for scalable, reliable group collaboration systems robert w. hall amit mathur farnam jahanian atul prakash craig rassmussen digital ink: a familiar idea with technological might! chris kasabach chris pacione john stivoric francine gemperle dan siewiorek effective retrieval with distributed collections jinxi xu jamie callan hyperform: a hypermedia system development environment development of hypermedia systems is a complex matter. the current trend toward open, extensible, and distributed multiuser hypermedia systems adds additional complexity to the development process. as a means of reducing this complexity, there has been an increasing interest in hyperbase management systems that allow hypermedia system developers to abstract from the intricacies and complexity of the hyperbase layer and fully attend to application and user interface issues. design, development, and deployment experiences of a dynamic, open, and distributed multiuser hypermedia system development environment called hyperform is presented. hyperform is based on the concepts of extensibility, tailorability, and rapid prototyping of hypermedia system services. 
open, extensible hyperbase management systems permit hypermedia system developers to tailor hypermedia functionality for specific applications and to serve as a platform for research. the hyperform development environment comprises multiple instances of four component types: (1) a hyperbase management system server, (2) a tool integrator, (3) editors, and (4) participating tools. hyperform has been deployed in unix environments, and experiments have shown that hyperform greatly reduces the effort required to provide customized hyperbase management system support for distributed multiuser hypermedia systems. uffe k. wiil john j. leggett pioneering hci down under: a mixture of perseverance and fun gitte lindgaard evolution and change in data management - issues and directions one of the fundamental aspects of information and database systems is that they change. moreover, in so doing they evolve, although the manner and quality of this evolution is highly dependent on the mechanisms in place to handle it. while changes in data are handled well, changes in other aspects, such as structure, rules, constraints, the model, etc., are handled to varying levels of sophistication and completeness. in order to study this in more detail a workshop on evolution and change in data management was held in paris in november 1999. it brought together researchers from a wide range of disciplines with a common interest in handling the fundamental characteristics and the conceptual modelling of change in information and database systems. this short report of the workshop concentrates on some of the general lessons that emerged during the four days. john f. roddick lina al-jadir leopoldo bertossi marlon dumas florida estrella heidi gregersen kathleen hornsby jens lufter federica mandreoli tomi männistö enric mayol lex wedemeijer user profiling in personalization applications through rule discovery and validation gediminas adomavicius alexander tuzhilin indexing medium-dimensionality data in oracle k. v. ravi kanth siva ravada jayant sharma jay banerjee the dilemma of credibility vs. speed corr's implicitly constrained but officially open acceptance policy for submitted papers raises concerns about both censorship and credibility. to avoid refereeing incoming papers yet still help readers assess their merits, corr could use coordinated public comments and ratings in the manner of some online auctions and booksellers. james prekeges efficient computation of temporal aggregates with range predicates a temporal aggregation query is an important but costly operation for applications that maintain time-evolving data (data warehouses, temporal databases, etc.). due to the large volume of such data, performance improvements for temporal aggregation queries are critical. in this paper we examine techniques to compute temporal aggregates that include key-range predicates (_range temporal aggregates_). in particular we concentrate on sum, count and avg aggregates. this problem is novel; to handle arbitrary key ranges, previous methods would need to keep a separate index for every possible key range. we propose an approach based on a new index structure called the _multiversion sb-tree_, which incorporates features from both the sb-tree and the multiversion b-tree, to handle arbitrary key-range temporal sum, count and avg queries. we analyze the performance of our approach and present experimental results that show its efficiency.
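for orientation only, here is a naive python baseline for the kind of range temporal aggregate just described (sum/count/avg over tuples whose key falls in a range and whose interval contains a query instant); the tuple layout is an assumption, and nothing here reflects the multiversion sb-tree itself, which exists precisely to avoid this linear scan.

```python
# Hypothetical tuple layout: (key, value, t_start, t_end), valid over [t_start, t_end).
# A range temporal aggregate asks, e.g., for SUM of value at time t over keys in [k_lo, k_hi].

def range_temporal_aggregate(tuples, k_lo, k_hi, t, agg="sum"):
    selected = [value for key, value, ts, te in tuples
                if k_lo <= key <= k_hi and ts <= t < te]
    if agg == "sum":
        return sum(selected)
    if agg == "count":
        return len(selected)
    if agg == "avg":
        return sum(selected) / len(selected) if selected else None
    raise ValueError("unsupported aggregate")

data = [(10, 5.0, 0, 8), (12, 2.0, 3, 9), (40, 7.0, 1, 4)]
print(range_temporal_aggregate(data, 10, 20, 5, "sum"))    # 7.0: keys 10 and 12 are alive at t=5
print(range_temporal_aggregate(data, 10, 50, 2, "count"))  # 2: keys 10 and 40 are alive at t=2
```

this baseline touches every tuple for every query; index structures such as the one proposed above are designed to answer the same question in logarithmic time.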
donghui zhang alexander markowetz vassilis tsotras dimitrios gunopulos bernhard seeger further uses of "scenario" david reisner order-of-magnitude advantage on tpc-c through massive parallelism charles levine an outline of a general model for information retrieval systems this paper is a contribution to the construction of a general model for information retrieval. as in the paper of van rijsbergen ([rij86]), the implicit base in all information retrieval systems is considered as a logical implication. the measure of correspondence between a document and a query is transformed into the estimation of the strength (or certainty) of logical implication. modal logic will be shown to be suitable for representing the behavior of information retrieval systems. in existing information retrieval models, several aspects are often mixed. part of this paper is devoted to separating these aspects to give a clearer view of information retrieval systems. this general model is also compared with some existing models to show its generality. j. nie adding another communication channel to reality: an experience with a chat-augmented conference jun rekimoto yuji ayatsuka hirotaka uoi toshifumi arai a web odyssey: from codd to xml victor vianu groupware at work gianfranco bazzigaluppi multimedia help: a prototype and an experiment piyawadee sukaviriya ellen isaacs krishna bharat videoscheme: a programmable video editing system for automation and media recognition james matthews peter gloor fillia makedon tigrito: a multi-mode interactive improvisational agent heidy maldonado antoine picard patrick doyle barbara hayes-roth hyperform: using extensibility to develop dynamic, open, and distributed hypertext systems uffe k. wiil john j. leggett nonstop sql/mx primitives for knowledge discovery john clear debbie dunn brad harvey michael heytens peter lohman abhay mehta mark melton lars rohrberg ashok savasere robert wehrmeister melody xu adept: advanced design environment for prototyping with task models peter johnson stephanie wilson panos markopoulos james pycock a foundation for representing and querying moving objects spatio-temporal databases deal with geometries changing over time. the goal of our work is to provide a dbms data model and query language capable of handling such time-dependent geometries, including those changing continuously that describe moving objects. two fundamental abstractions are moving point and moving region, describing objects for which only the time-dependent position, or position and extent, respectively, are of interest. we propose to represent such time-dependent geometries as attribute data types with suitable operations, that is, to provide an abstract data type extension to a dbms data model and query language. this paper presents a design of such a system of abstract data types. it turns out that besides the main types of interest, moving point and moving region, a relatively large number of auxiliary data types are needed. for example, one needs a line type to represent the projection of a moving point into the plane, or a "moving real" to represent the time-dependent distance of two points. it then becomes crucial to achieve (i) orthogonality in the design of the system, i.e., type constructors can be applied uniformly; (ii) genericity and consistency of operations, i.e., operations range over as many types as possible and behave consistently; and (iii) closure and consistency between structure and operations of nontemporal and related temporal types.
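a toy rendering of the moving-point idea sketched above, using a sampled, linearly interpolated representation; the class name, the sampling scheme, and the helper functions are illustrative assumptions, whereas the paper defines the types abstractly.

```python
from math import hypot

class MovingPoint:
    """Time-dependent position, stored as (t, x, y) samples, linearly interpolated."""
    def __init__(self, samples):
        self.samples = sorted(samples)

    def at(self, t):
        s = self.samples
        if not (s[0][0] <= t <= s[-1][0]):
            raise ValueError("t outside the definition time of the moving point")
        for (t0, x0, y0), (t1, x1, y1) in zip(s, s[1:]):
            if t0 <= t <= t1:
                a = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
                return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))
        return (s[-1][1], s[-1][2])

def distance(p, q, times):
    """A 'moving real': the time-dependent distance of two moving points,
    returned here simply as samples at the given times."""
    return [(t, hypot(p.at(t)[0] - q.at(t)[0], p.at(t)[1] - q.at(t)[1])) for t in times]

def trajectory(p):
    """Projection of a moving point into the plane (a polyline value)."""
    return [(x, y) for _, x, y in p.samples]

p = MovingPoint([(0, 0.0, 0.0), (10, 10.0, 0.0)])
q = MovingPoint([(0, 0.0, 3.0), (10, 10.0, 3.0)])
print(distance(p, q, [0, 5, 10]))   # constant distance 3.0 over time
print(trajectory(p))                # the projected path of p
```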
satisfying these goals leads to a simple and expressive system of abstract data types that may be integrated into a query language to yield a powerful language for querying spatio-temporal data, including moving objects. the paper formally defines the types and operations, offers detailed insight into the considerations that went into the design, and exemplifies the use of the abstract data types using sql. the paper offers a precise and conceptually clean foundation for implementing a spatio-temporal dbms extension. ralf hartmut güting michael h. böhlen martin erwig christian s. jensen nikos a. lorentzos markus schneider michalis vazirgiannis an effective algorithm for parallelizing sort merge joins in the presence of data skew parallel processing of relational queries has received considerable attention of late. however, in the presence of data skew, the speedup from conventional parallel join algorithms can be very limited, due to load imbalances among the various processors. even a single large skew element can cause a processor to become overloaded. in this paper, we propose a parallel sort merge join algorithm which uses a divide-and-conquer approach to address the data skew problem. the proposed algorithm adds an extra scheduling phase to the usual sort, transfer and join phases. during the scheduling phase, a parallelizable optimization algorithm, using the output of the sort phase, attempts to balance the load across the multiple processors in the subsequent join phase. the algorithm naturally identifies the largest skew elements, and assigns each of them to an optimal number of processors. assuming a zipf-like distribution for data skew, the algorithm is demonstrated to achieve very good load balancing for the join phase in a cpu-bound environment, and is shown to be very robust relative to the degree of data skew and the total number of processors. joel l. wolf daniel m. dias philip s. yu piazza: a desktop environment supporting impromptu and planned interactions ellen a. isaacs john c. tang trevor morris envisioning communication: task-tailorable representations of communication in asynchronous work christine m. neuwirth james h. morris susan harkness regli ravinder chandhok geoffrey c. wenger the multilevel relational (mlr) data model many multilevel relational models have been proposed; different models offer different advantages. in this paper, we adapt and refine several of the best ideas from previous models and add new ones to build the new multilevel relational (mlr) data model. mlr provides multilevel relations with element-level labeling as a natural extension of the traditional relational data model. mlr introduces several new concepts (notably, data-borrow integrity and the uplevel statement) and significantly redefines existing concepts (polyinstantiation and referential integrity as well as data manipulation operations). a central contribution of this paper is proofs of soundness, completeness, and security of mlr. a new data-based semantics is given for the mlr data model by combining ideas from seaview, belief-based semantics, and ldv. this new semantics has the advantages of both eliminating ambiguity and retaining upward information flow. mlr is secure, unambiguous, and powerful. it has five integrity properties and five operations for manipulating multilevel relations.
soundness, completeness, and security show that any of the five database manipulation operations will keep database states legal (i.e., satisfy all integrity properties), that every legal database state can be constructed, and that mlr is noninterfering. the expressive power of mlr also compares favorably with several other models. ravi sandhu fang chen developing collaborative applications on the world wide web andreas girgensohn alison lee oracle's symmetric replication technology and implications for application design dean daniels lip boon doo alan downing curtis elsbernd gary hallmark sandeep jain bob jenkins peter lim gordon smith benny souder jim stamos social presence in web surveys social interface theory has widespread influence in the field of human-computer interaction. the basic thesis is that humanizing cues in a computer interface can engender responses from users similar to human-human interaction. in contrast, the survey interviewing literature suggests that computer administration of surveys on highly sensitive topics reduces or eliminates the social desirability effect, even when such humanizing features as voice are used. in attempting to reconcile these apparently contradictory findings, we varied features of the interface in a web survey (n=3047). in one treatment, we presented an image of 1) a male researcher, 2) a female researcher, or 3) the study logo at several points. in another, we varied the extent of personal feedback provided. we find little support for the social interface hypothesis. we describe our study and discuss possible reasons for the contradictory evidence on social interfaces. mick p. couper roger tourangeau darby m. steiger efficient and effective metasearch for a large number of text databases metasearch engines can be used to facilitate ordinary users in retrieving information from multiple local sources (text databases). in a metasearch engine, the contents of each local database are represented by a representative. each user query is evaluated against the set of representatives of all databases in order to determine the appropriate databases to search. when the number of databases is very large, say in the order of tens of thousands or more, then a traditional metasearch engine may become inefficient as each query needs to be evaluated against too many database representatives. furthermore, the storage requirement on the site containing the metasearch engine can be very large. in this paper, we propose to use a hierarchy of database representatives to improve the efficiency. we provide an algorithm to search the hierarchy. we show that the retrieval effectiveness of our algorithm is the same as that of evaluating the user query against all database representatives. we also show that our algorithm is efficient. in addition, we propose an alternative way of allocating representatives to sites so that the storage burden on the site containing the metasearch engine is much reduced. clement yu weiyi meng king-lup liu wensheng wu naphtali rishe answer garden 2: merging organizational memory with collaborative help mark s. ackerman david w. mcdonald tradeoffs in displaying peripheral information peripheral information is information that is not central to a person's current task, but provides the person the opportunity to learn more, to do a better job, or to keep track of less important tasks. though peripheral information displays are ubiquitous, they have been rarely studied.
for computer users, a common peripheral display is a scrolling text display that provides announcements, sports scores, stock prices, or other news. in this paper, we investigate how to design peripheral displays so that they provide the most information while having the least impact on the user's performance on the main task. we report a series of experiments on scrolling displays aimed at examining tradeoffs between distraction of scrolling motion and memorability of information displayed. overall, we found that continuously scrolling displays are more distracting than displays that start and stop, but information in both is remembered equally well. these results are summarized in a set of design recommendations. paul p. maglio christopher s. campbell "finding and reminding" revisited: appropriate metaphors for file organization at the desktop bonnie nardi deborah barreau revolutionizing name authority control a new model has been developed for the standardization of names in bibliographic databases. this paper describes the model and its implementation and also compares it with an existing model. the results show that the new model will revolutionize name authority control and will also improve on the existing naco model. a prototype that was developed also indicates the technical feasibility of the model's implementation. m. m. m. snyman m. jansen van rensburg editorial steven pemberton editorial steven pemberton a goal-driven auto-configuration tool for the distributed workflow management system mentor-lite the mentor-lite prototype has been developed within the research project "architecture, configuration, and administration of large workflow management systems" funded by the german science foundation (dfg). it has evolved from its predecessor mentor [1], but aims at a simpler architecture. the main goal of mentor-lite has been to build a light-weight, extensible, and tailorable workflow management system (wfms) with a small footprint and easy-to-use administration capabilities. our approach is to provide only kernel functionality inside the workflow engine, and consider system components like history management and worklist management as extensions on top of the kernel. the key to retaining the light-weight nature is that these extensions are implemented as workflows themselves. the workflow specifications are interpreted at runtime, which is a crucial prerequisite for flexible exception handling and dynamic modifications during runtime. the interpreter performs a stepwise execution of the workflow specification according to its formal semantics. for each step, the activities to be performed by the step are determined and started. mentor-lite supports a protocol for distributed execution of workflows spread across multiple workflow engines. this support is crucial for workflows that span large, decentralized enterprises with largely autonomous organizational units or even cross multiple enterprises to form so-called "virtual enterprises". a communication manager is responsible for sending and receiving synchronization messages between the engines. in order to guarantee a consistent global state even in the presence of site or network failures, we have built reliable message queues using the corba object transaction services. for administration, mentor-lite provides a java-based workbench for workflow design, workflow partitioning across multiple workflow servers, and a java-based runtime monitoring tool.
michael gillmann jeanine weissenfels german shegalov wolfgang wonner gerhard weikum ternary relationship decomposition and higher normal form structures derived from entity relationship conceptual modeling trevor h. jones il-yeol song e. k. park editor's introduction clarence a. ellis idea: interactive data exploration and analysis peter g. selfridge divesh srivastava lynn o. wilson enhancing the explanatory power of usability heuristics jakob nielsen scalable browsing for large collections: a case study phrase browsing techniques use phrases extracted automatically from a large information collection as a basis for browsing and accessing it. this paper describes a case study that uses an automatically constructed phrase hierarchy to facilitate browsing of an ordinary large web site. phrases are extracted from the full text using a novel combination of rudimentary syntactic processing and sequential grammar induction techniques. the interface is simple, robust and easy to use. to convey a feeling for the quality of the phrases that are generated automatically, a thesaurus used by the organization responsible for the web site is studied and its degree of overlap with the phrases in the hierarchy is analyzed. our ultimate goal is to amalgamate hierarchical phrase browsing and hierarchical thesaurus browsing: the latter provides an authoritative domain vocabulary and the former augments coverage in areas the thesaurus does not reach. gordon w. paynter ian h. witten sally jo cunningham george buchanan the trec-9 filtering track david hull stephen robertson the denver model for groupware design the denver model is offered as a framework with which to plan or evaluate the capabilities associated with a particular groupware application. this model was the output of 14 participants at the two day workshop on designing and evaluating groupware, held at chi'95, denver colorado. the denver model consists of three submodels: goals and requirements, design and technology. a description of the framework is provided and evaluation strategies are described in this paper. tony salvador jean scholtz james larson on the design and quantification of privacy preserving data mining algorithms the increasing ability to track and collect large amounts of data with the use of current hardware technology has lead to an interest in the development of data mining algorithms which preserve user privacy. a recently proposed technique addresses the issue of privacy preservation by perturbing the data and reconstructing distributions at an aggregate level in order to perform the mining. this method is able to retain privacy while accessing the information implicit in the original attributes. the distribution reconstruction process naturally leads to some loss of information which is acceptable in many practical situations. this paper discusses an expectation maximization (em) algorithm for distribution reconstruction which is more effective than the currently available method in terms of the level of information loss. specifically, we prove that the em algorithm converges to the maximum likelihood estimate of the original distribution based on the perturbed data. we show that when a large amount of data is available, the em algorithm provides robust estimates of the original distribution. we propose metrics for quantification and measurement of privacy-preserving data mining algorithms. thus, this paper provides the foundations for measurement of the effectiveness of privacy preserving data mining algorithms. 
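purely as an illustration of the style of em-based distribution reconstruction described in the preceding abstract, here is a python sketch under assumed simplifications (additive gaussian perturbation, a small discretized domain); it is not the paper's algorithm and says nothing about its privacy metrics.

```python
import math, random

def em_reconstruct(perturbed, bin_centers, sigma, iterations=200):
    """Estimate the distribution of the original values over the given bins,
    given only values perturbed by additive N(0, sigma^2) noise."""
    def phi(d):  # Gaussian density of the noise
        return math.exp(-d * d / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

    p = [1.0 / len(bin_centers)] * len(bin_centers)       # uniform starting estimate
    for _ in range(iterations):
        new_p = [0.0] * len(bin_centers)
        for z in perturbed:
            weights = [p[j] * phi(z - m) for j, m in enumerate(bin_centers)]
            total = sum(weights)
            for j, w in enumerate(weights):               # E-step: responsibilities
                new_p[j] += w / total
        p = [v / len(perturbed) for v in new_p]           # M-step: renormalize
    return p

random.seed(0)
originals = [random.choice([0.0, 0.0, 0.0, 10.0]) for _ in range(2000)]  # 75% at 0, 25% at 10
perturbed = [x + random.gauss(0, 4.0) for x in originals]
print(em_reconstruct(perturbed, bin_centers=[0.0, 10.0], sigma=4.0))     # roughly [0.75, 0.25]
```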
our privacy metrics illustrate some interesting results on the relative effectiveness of different perturbing distributions. dakshi agrawal charu c. aggarwal accessing data cubes along complex dimensions in a data warehouse, data cubes are accessed through their dimensions. if dimensions are numerical, because numerical data can be clustered or sorted, fast access methods such as binary search or b+ trees can be applied. however, complex attributes such as keyword sets of document contents are not easily sorted or clustered, although it is highly desirable that documents can be searched through their sets of keywords. the signature index is known for its ability to search along complex attributes. we propose a new indexing structure, dimensional signature index (dsi), for fast query processing in data cubes. dsi is particularly suitable for accessing data in data cubes through complex dimensions. through a mathematical analysis, we found that if one signature index (feature index) is built for each dimension of the data cube, if the size of all feature indices is equal to the size of a large signature index for the entire data cube as a flat file, and if a query execution involves all dimensions of a data cube, the search cost in all these feature indices is the same as the search cost in the large signature index for the entire data cube. the significance of this discovery is that usually a query does not involve all dimensions of a data cube. by making one feature index for each dimension, only those feature indices involved in the query predicates need to be accessed. on average, this represents significantly faster query execution than using a large signature file for the entire data cube. the use of the dsi scheme does not exclude the use of other fast signature index schemes. each feature index in dsi can also use any of the previously proposed fast signature indices (s-trees, multi-leveled, frame-sliced, etc.) to achieve even faster access speed. yuping yang mukesh singhal new technology and new roles: the need for "corpus editors" digital libraries challenge humanists and other academics to rethink the relationship between technology and their work. at the perseus project, we have seen the rise of a new combination of skills. the "corpus editor" manages a collection of materials that are thematically coherent and focused but are too large to be managed solely with the labor-intensive techniques of traditional editing. the corpus editor must possess a degree of domain-specific knowledge and technical expertise that virtually no established graduate training provides. this new position poses a challenge to humanists as they train and support members of the field pursuing new, but necessary tasks. gregory crane jeffrey a. rydberg-cox a more general model for handling missing information in relational databases using a 3-valued logic codd proposed the use of two interpretations of nulls to handle missing information in relational databases that may lead to a 4-valued logic [codd86, codd87]. in a more general model, three interpretations of nulls are necessary [roth, zani]. without simplification, this may lead to a 7-valued logic, which is too complicated to be adopted in relational databases. for such a model, there is no satisfactory simplification to a 4-valued logic. however, by making a straightforward simplification and using some proposed logical functions, a 3-valued logic can handle all three interpretations.
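for reference, a generic three-valued (kleene-style) treatment of unknown values can be written down directly in python; the connectives below are the standard ones and are not claimed to be the exact logical functions proposed in the paper above.

```python
# Truth values: True, False, and None standing for 'unknown'.

def and3(a, b):
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def or3(a, b):
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

def not3(a):
    return None if a is None else not a

# Unknown propagates unless the other operand already decides the result:
print(and3(True, None))   # None
print(and3(False, None))  # False
print(or3(True, None))    # True
print(not3(None))         # None
```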
kwok-bun yue taming the wolf in sheep's clothing: privacy in multimedia communications when ubiquitous multimedia technology is introduced in an organization, the privacy implications of that technology are rarely addressed. users usually extend the trust they have in an organization to the technology it employs. this paper reports results from interviews with 24 internet engineering task force (ietf) attendees whose presentations or contributions to ietf sessions were transmitted on the multicast backbone (mbone). due to a high level of trust in the organization, these users had few initial concerns about the privacy implications of this technology. however, interviewees' trust relied on inaccurate assumptions, since the interviews revealed a number of potential and actual invasions of privacy in transmission, recording and editing of multicast data. previous research found that users who experience an unexpected invasion of their privacy are not only likely to reject the technology that afforded the invasion, but lose trust in the organization that introduced it [2,3]. we discuss a number of mechanisms and policies for protecting users' privacy in this particular application, and propose a strategy for introducing networked multimedia technology in general. anne adams martina angela sasse working-memory failure in phone-based interaction this article investigates working-memory (wm) failure in phone-based interaction (pbi). we used a computational model of phone-based interaction (pbi user) to generate predictions about the impact of three factors on wm failure: pbi features (i.e., menu structure), individual differences (i.e., wm capacity), and task characteristics (i.e., number of tasks). our computational model stipulates that both the storage and the processing of information contribute to wm failure. in practical terms the model and the empirical results indicate that, contrary to guidelines for the design of phone-based interfaces, deep menu hierarchies (no more than three options per menu) do not reduce wm error rates in pbi. at a more theoretical level, the study shows that the use of a computational model in hci research provides a systematic approach for explaining complex empirical results. brian r. huguenard f. javier lerch brian w. junker richard j. patz robert e. kass organizational learning and cscw - an ambiguous relationship kari thoresen the university of california cd-rom information system the university of california cd-rom information system replaces the equivalent of 260,000 books of published federal statistics with a cd-rom-based online information system. the size of this database is currently 270 cd-roms (135gb). it contains 1990 u.s. census data (approximately 3,000 items of socio-economic and demographic information, including race-ethnicity, employment, income, educational level, and poverty) for every block and census tract in the u.s., as well as u.s. foreign trade data by commodity from every city in the u.s. to every country in the world. it also contains the digitized map outline boundary data for city blocks for the entire u.s. (census tiger files). deane merrill nathan parker fredric gey chris stuber concurrency control in database systems: a step towards the integration of optimistic methods and locking the traditional approach to concurrency control is based on locking. recently, new methods have been presented called optimistic methods. these methods are well suited in situations where the likelihood of conflicting actions is rather small.
otherwise locking should be used. typically in database systems it is not known in advance what kind of transactions are to be processed. therefore what is really needed are methods which combine the benefits of optimistic methods and locking. this paper is a first step in this direction. george lausen melding structured abstracts and world wide web for retrieval of reusable components reusable software libraries (rsls) often suffer from poor interfaces, too many formal standards, high levels of training required for their use, and most of all, a high cost to build and maintain. hence, rsls have largely failed to return the reuse benefits promised by their developers. this paper first describes an rsl implementation using the world wide web (www) browser mosaic and shows how it meets most rsl needs, avoids most rsl pitfalls, and costs only a fraction of the cost for the average commercial rsl. second, the paper describes a way to quickly assess the important aspects of a piece of software so programmers can decide whether or not to reuse it. using the observation that when programmers discuss software they tend to convey the same key information in a somewhat predictable order, this paper describes a method to automatically mimic this activity using a structured abstract of reusable components. structured abstracts provide a natural, easy to use way for developers to (1) search for components, (2) quickly assess the component for use, and (3) submit components to the rsl. jeffrey s. poulin keith j. werkman who or what is making the music: music creation in a machine age tang-chun li context towards the evolving documentary glorianna davenport michael murtaugh dynamic presentation of asynchronous auditory output albert l. papp meera m. blattner comments on balloon help jonathan price shortest-substring retrieval and ranking we present a model for arbitrary passage retrieval using boolean queries. the model is applied to the task of ranking documents, or other structural elements, in the order of their expected relevance. features such as phrase matching, truncation, and stemming integrate naturally into the model. properties of boolean algebra are obeyed, and the exact-match semantics of boolean retrieval are preserved. simple inverted-list file structures provide an efficient implementation. retrieval effectiveness is comparable to that of standard ranking techniques. since global statistics are not used, the method is of particular value in distributed environments. since ranking is based on arbitrary passages, the structural elements to be ranked may be specified at query time and do not need to be restricted to predefined elements. charles l. a. clarke gordon v. cormack ubiquity and need-to-know: two principles of data distribution h. wedekind interface issues in computer support for asynchronous communication james h. morris christine m. neuwirth susan harkness regli ravinder chandhok geoffrey c. wenger spatial workspace collaboration: a sharedview video support system for remote collaboration capability collaboration in three-dimensional space: "spatial workspace collaboration" is introduced and an approach supporting its use via a video mediated communication system is described. verbal expression analysis is primarily focused on. 
based on experimental results, movability of a focal point, sharing focal points, movability of a shared workspace, and the ability to confirm viewing intentions and movements were determined to be system requirements necessary to support spatial workspace collaboration. a newly developed sharedview system having the capability to support spatial workspace collaboration is also introduced and tested, and some experimental results are described. hideaki kuzuoka multimodal surrogates for video browsing wei ding gary marchionini dagobert soergel information space representation in interactive systems: relationship to spatial abilities bryce allen the effect of join selectivities on optimal nesting order akhil kumar michael stonebraker knowledge-based systems for idea processing support lawrence f. young animated cartography for urban soundscape information myoung-ah kang sylvie servigne the mitre multi-modal logger: its use in evaluation of collaborative systems samuel bayer laurie e. damianos robyn kozierok james mokwa infobeams - configuration of personalized information assistants mathias bauer dietmar dengler beyond being there a belief in the efficacy of imitating face-to-face communication is an unquestioned presupposition of most current work on supporting communications in electronic media. in this paper we highlight problems with this presupposition and present an alternative proposal for grounding and motivating research and development that frames the issue in terms of needs, media, and mechanisms. to help elaborate the proposal we sketch a series of example projects and respond to potential criticisms. jim hollan scott stornetta selected database research at stanford this report describes seven projects at the computer science department of stanford university that may be relevant to the sigmod community. arthur keller peter rathmann jeff ullman gio wiederhold hyperhyper: developments across the field of hypermedia - a mini trip report j. nielsen collaboration in performance of physical tasks: effects on outcomes and communication robert e. kraut mark d. miller jane siegel conversationbuilder: an open system for collaborative work simon m. kaplan exploring constraints to efficiently mine emerging patterns from large high-dimensional datasets xiuzhen zhang guozhu dong ramamohanarao kotagiri tracking hands above large interactive surfaces with a low-cost scanning laser rangefinder joshua strickon joseph paradiso netnews: online services: change is the only constant dennis fowler human computer interaction models and application development (panel session) pradip peter dey david benyon gene golovchinsky santosh mathan dewey rundus arnold smith robert c. williges personalization on the net using web mining: introduction maurice d. mulvenna sarabjot s. anand alex g. buchner what's happening marisa campbell one is not enough: multiple views in a media space media spaces support collaboration, but the limited access they provide to remote colleagues' activities can undermine their utility. to address this limitation, we built an experimental system in which four switchable cameras were deployed in each of two remote offices, and observed participants using the system to collaborate on two tasks. the new views allowed increased access to task-related artifacts; indeed, users preferred these views to more typical "face-to-face" ones.
however, problems of establishing a joint frame of reference were exacerbated by the additional complexity, leading us to speculate about more effective ways to expand access to remote sites. william w. gaver abigail sellen christian heath paul luff objects modelling the points raised here are based on experience with rad [os86], an experimental relational database system which allows the definition of arbitrary abstract data types and arbitrary operations for domains. the comments below fall into two categories: the appropriateness of the data model and the level at which programming is done. data model: for the applications motivating the object-oriented database discussion, one could enhance a programming language environment rich in abstract data type support, like smalltalk, with the required database functionality as in gemstone [co84], or one could enhance a database system by adding object semantics. rad is about the simplest possible approach in the latter category. there are a great many applications for which commercial database systems do not have enough domain types, where the rad approach seems to be adequate. however, there are other applications where the rad approach is too simple. these applications have highly nested structures in which the root object consists of an aggregation of other objects, which are aggregations of other objects, etc. the choices in rad, or any system based closely on the relational model, are: (1) defining an abstract data type domain for the whole structure, (2) flattening the whole structure into first normal form relations with fairly atomic domains, or (3) some compromise between 1 and 2. rad has two general classes of objects, domains and relations, each with very different operations. choice 1 above means that the components of a complex object can never be accessed by the relational algebra operators. one must also avoid the temptation to re-implement a relational query language on a single domain. choice 2 is also not a good idea. the data for a single object will be scattered so far and wide that an inordinate amount of work will be needed to retrieve it, update it, etc. with the third choice, somewhere between the two extremes, the point of compromise will be very difficult to decide on and one will still suffer from the drawbacks of the two extreme solutions. the problem here is not so much with the extensible database approach but with extending a relational database. the relational model is too simple to capture the structure of the complex objects at an appropriate level. an extensible database system whose underlying data model is semantically richer than the bare-bones relational model, like smith and smith's [sm77a, sm77b] or sam* [su86], would do a much better job. object modelling usually requires some form of inheritance or isa hierarchies. systems which support inheritance do so by defining it with respect to classes of objects. relational database systems only allow one to define relation instances, not relation classes. thus the relational model cannot properly handle inheritance. current semantic data models make a sharp distinction between the structured objects they allow and atomic objects (in relational terms, between relations and domains) in the operations allowed on objects from these two broad classes. we are currently investigating an object-oriented data model in which this distinction is not so sharp.
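the trade-off between choice (1) and choice (2) above can be made concrete with a toy python example; the part/assembly structure and all names are invented for illustration and are not taken from rad.

```python
# Choice (1): the whole nested design object lives in a single ADT-valued domain.
# Relational operators cannot reach inside the value.
assemblies_adt = {
    "A1": {"name": "gearbox",
           "parts": [{"pid": "P1", "subparts": ["S1", "S2"]},
                     {"pid": "P2", "subparts": ["S3"]}]}
}

# Choice (2): the same object flattened into first normal form relations
# with atomic domains; reassembling one object now requires joins.
assembly = [("A1", "gearbox")]
part     = [("P1", "A1"), ("P2", "A1")]
subpart  = [("S1", "P1"), ("S2", "P1"), ("S3", "P2")]

def reassemble(aid):
    """Rebuild the nested object from the flat relations (two implicit joins)."""
    return {"name": dict(assembly)[aid],
            "parts": [{"pid": p, "subparts": [s for s, pp in subpart if pp == p]}
                      for p, a in part if a == aid]}

print(reassemble("A1") == assemblies_adt["A1"])  # True: same information, scattered storage
```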
programming level: with rad, the assumption is that new domain types are designed and implemented by application programmers. the end users of the object-oriented databases are engineers, librarians, office workers --- professionals in their own field but not professional programmers. in rad, complex operations must be written by experienced programmers, not end users. if the objects in the application are indeed the complex, highly nested ones mentioned above, it may be very difficult to anticipate all the operations a user will want to perform on them. users seem to be able to express their own queries in sql or other database query languages. the challenge now is to devise a way for them to easily express complex operations on complex objects. this comment is also valid for gemstone. smalltalk programmers are rare enough, but users capable of adding their own smalltalk methods are even rarer. this type of comment also applies to the mechanism provided for defining new object types. in contrast to the operations, the structure of the complex types can probably be determined in advance. thus defining the structure of a type is not likely to be a task for a user, but one which can be done in advance by the application programmer. this application programmer should not, however, have to be an implementor of the database management system in question. rad has been successful in showing how a domain type can be introduced without the programmer actually knowing how the dbms is implemented. sylvia osborn the architecture of a heterogeneous distributed database management system: the distributed access view integrated database (david) this paper describes the architecture of a heterogeneous distributed database management system called the distributed access view integrated database (david). the david system allows uniform access to distributed data stored in different dbmss that support various data models. the system presents the users with a unified cluster data model and with a uniform high-level query language (gsql). database heterogeneity issues such as external-to-conceptual view translation, view integration, query processing and the definition of a uniform data access language are presented in the context of the david development. bharat bhasker csaba j. egyhazy konstantinos p. triantis query optimization in a memory-resident domain relational calculus database system we present techniques for optimizing queries in memory-resident database systems. optimization techniques in memory-resident database systems differ significantly from those in conventional disk-resident database systems. in this paper we address the following aspects of query optimization in such systems and present specific solutions for them: (1) a new approach to developing a cpu-intensive cost model; (2) new optimization strategies for main-memory query processing; (3) new insight into join algorithms and access structures that take advantage of memory residency of data; and (4) the effect of the operating system's scheduling algorithm on the memory-residency assumption. we present an interesting result that a major cost of processing queries in memory-resident database systems is incurred by evaluation of predicates. we discuss optimization techniques using the office-by-example (obe) that has been under development at ibm research. we also present the results of performance measurements, which prove to be excellent in the current state of the art.
despite recent work on memory-resident database systems, query optimization aspects in these systems have not been well studied. we believe this paper opens the issues of query optimization in memory-resident database systems and presents practical solutions to them. kyu-young whang ravi krishnamurthy web document clustering: a feasibility demonstration oren zamir oren etzioni advantages of query biased summaries in information retrieval anastasios tombros mark sanderson clustering techniques for large data sets - from the past to the future daniel a. keim alexander hinneburg several approaches for improving user cordiality in the design of on-line systems for novice users this paper presents several factors to be considered when designing a novice user-machine interface for an on-line system. these include user objectives, current procedures for processing the data, tutorial aids, data base organization and human factors issues related to ease of terminal use. various examples from recent experience are given to illustrate some of these points. sandy e. selander the open video project: research-oriented digital video repository a future with widespread access to large digital libraries of video is nearing reality. anticipating this future, a great deal of research is focused on methods of browsing and retrieving digital video, developing algorithms for creating surrogates for video content, and creating interfaces that display result sets from multimedia queries. research in these areas requires that each investigator acquire and digitize video for their studies since the multimedia information retrieval community does not yet have a standard collection of video to be used for research purposes. the primary goal of the open video project is to create and maintain a shared digital video repository and test collection to meet these research needs. gary geisler gary marchionini browsing digital video video in digital format played on programmable devices presents opportunities for significantly enhancing the user's viewing experience. for example, time compression and pause removal can shorten the viewing time for a video, textual and visual indices can allow personalized navigation through the content, and random-access digital storage allows instantaneous seeks into the content. to understand user behavior when such capabilities are available, we built a software video browsing application that combines many such features. we present results from a user study where users browsed video in six different categories: classroom lectures, conference presentations, entertainment shows, news, sports, and travel. our results show that the most frequently used features were time compression, pause removal, and navigation using shot boundaries. also, the behavior was different depending on the content type, and we present a classification. finally, the users found the browser to be very useful. two main reasons were: i) the ability to save time and ii) the feeling of control over what content they watched. francis c. li anoop gupta elizabeth sanocki li-wei he yong rui touch-sensing input devices ken hinckley mike sinclair topic labeling of broadcast news stories in the informedia digital video library alexander g. hauptmann danny lee customer-focused design data in a large, multi-site organization paula curtis tammy heiserman david jobusch mark notess jayson webb efficient prolog access to codasyl and fdm databases p. m. d. gray discovering roll-up dependencies jef wijsen raymond t.
ng toon calders reliable distributed database systems (abstract only) we are investigating the problem of ensuring global consistency in the context of distributed database systems. our current research effort concentrates on theoretical study of reliability mechanisms such as algorithm design and performance characterization. in addition, we are building a testbed for evaluating different reliability mechanisms through detailed simulation and actual experimentation. replication is the key factor in improving the availability of distributed database systems. a major restriction in using replication is that replicated copies must behave like a single copy. we have developed algorithms for replication control using tokens [2, 5]. the next step of our research in this direction would be to evaluate different partial operation policies which are critical in maintaining the correctness and achieving the high availability of distributed database systems. two alternatives are possible when a partition occurs: pessimistic and optimistic. neither of the two alternatives is superior to the other. higher availability achieved by an optimistic approach may be penalized during recovery from partition failures, by backing out committed transactions which violate consistency constraints. even if the replication and concurrency control mechanisms are correct and maintain the consistency of the database, the failures of hardware and/or software at the processing site and communication network may destroy the consistency of the database. in order to cope with failures, distributed database systems must provide recovery mechanisms. the goal of checkpointing is to save database states on a separate secure device so that the database can be recovered when errors and failures occur. a checkpointing mechanism which does not interfere with the transaction processing in distributed environment is highly desirable for many applications, where restricting transaction activity during checkpointing is not feasible. our earlier research has resulted in the development of a non-intrusive checkpointing algorithm, along with associated recovery mechanisms [1, 3]. the desirable properties of non-interference and global consistency not only make the checkpointing and recovery more complicated in distributed database systems, but also increase the workload of the system. currently, we are investigating the practicality of non-interfering checkpointing and fully decentralized checkpointing in distributed database systems [4, 6]. sang hyuk son visualizing search results: some alternatives to query-document similarity lucy terry nowell robert k. france deborah hix lenwood s. heath edward a. fox evaluation of digital library use: how can we understand how contributors create and use scholarly communication? lisa m. covi a new approach to retrieve video by example video clip xiaoming liu yueting zhuang yunhe pan the alfresco interactive system oliviero stock query containment for data integration systems the problem of query containment is fundamental to many aspects of database systems, including query optimization, determining independence of queries from updates, and rewriting queries using views. in the data integration framework, however, the standard notion of query containment does not suffice. we define relative containment, which formalizes the notion of query containment relative to the sources available to the integration system.
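as background to the containment discussion above, the classical (non-relative) containment test for plain conjunctive queries can be sketched in python: freeze the atoms of q1 into a canonical database and check that evaluating q2 over it recovers q1's frozen head. the query encoding below is an assumption of the sketch; relative containment as defined in the paper additionally accounts for the available sources.

```python
def evaluate(query, db):
    """All bindings of the query's head variables over db (a list of ground atoms),
    found by naive backtracking over the body atoms."""
    head, body = query
    results = []

    def extend(i, env):
        if i == len(body):
            results.append(tuple(env[v] for v in head))
            return
        pred, args = body[i]
        for fact_pred, fact_args in db:
            if fact_pred != pred or len(fact_args) != len(args):
                continue
            new_env = dict(env)
            if all(new_env.setdefault(a, c) == c for a, c in zip(args, fact_args)):
                extend(i + 1, new_env)

    extend(0, {})
    return results

def contained_in(q1, q2):
    """True iff q1 is contained in q2 (every answer of q1 is also an answer of q2)."""
    head1, body1 = q1
    frozen_db = [(pred, tuple("c_" + v for v in args)) for pred, args in body1]
    frozen_head = tuple("c_" + v for v in head1)
    return frozen_head in evaluate(q2, frozen_db)

# q1(x) :- r(x, y), r(y, z)      q2(x) :- r(x, y)
q1 = (("x",), [("r", ("x", "y")), ("r", ("y", "z"))])
q2 = (("x",), [("r", ("x", "y"))])
print(contained_in(q1, q2))  # True: q1 asks for more, so its answers satisfy q2
print(contained_in(q2, q1))  # False
```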
first we provide optimal bounds for relative containment for several important classes of datalog queries, including the common case of conjunctive queries. next we provide bounds for the case when sources enforce access restrictions in the form of binding pattern constraints. surprisingly, we show that relative containment for conjunctive queries is still decidable in this case, even though it is known that finding all answers to such queries may require a recursive datalog program over the sources. finally, we provide tight bounds for variants of relative containment when the queries and source descriptions may contain comparison predicates. todd millstein alon levy marc friedman content awareness in a file system interface: implementing the "pile" metaphor for organizing information the pile is a new element of the desktop user interface metaphor, designed to support the casual organization of documents. an interface design based on the pile concept suggested uses of content awareness for describing, organizing, and filing textual documents. we describe a prototype implementation of these capabilities, and give a detailed example of how they might appear to the user. we believe the system demonstrates how content awareness can be not only used in a computer filing system, but made an integral part of the user's experience. daniel e. rose richard mander tim oren dulce b. ponceleon gitta salomon yin yin wong a comparison of user interfaces for panning on a touch-controlled display jeff a. johnson are window queries representative for arbitrary range queries? bernd-uwe pagel hans-werner six emancipating instances from the tyranny of classes in information modeling database design commonly assumes, explicitly or implicitly, that instances must belong to classes. this can be termed the assumption of inherent classification. we argue that the extent and complexity of problems in schema integration, schema evolution, and interoperability are, to a large degree, consequences of inherent classification. furthermore, we make the case that the assumption of inherent classification violates philosophical and cognitive guidelines on classification and is, therefore, inappropriate in view of the role of data modeling in representing knowledge about application domains. as an alternative, we propose a layered approach to modeling in which information about instances is separated from any particular classification. two data modeling layers are proposed: (1) an instance model consisting of an instance base (i.e., information about instances and properties) and operations to populate, use, and maintain it; and (2) a class model consisting of a class base (i.e., information about classes defined in terms of properties) and operations to populate, use, and maintain it. the two-layered model provides class independence. this is analogous to the arguments of data independence offered by the relational model in comparison to hierarchical and network models. we show that a two-layered approach yields several advantages. in particular, schema integration is shown to be partially an artifact of inherent classification that can be greatly simplified in designing a database based on a layered model; schema evolution is supported without the complexity of operations currently required by class-based models; and the difficulties associated with interoperability among heterogeneous databases are reduced because there is no need to agree on the semantics of classes among independent databases.
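a small python sketch of the two-layer separation described above, with an instance base of instance/property facts and a class base of property sets kept apart; all instance names, class names, and properties are invented for illustration.

```python
# Layer 1: the instance base records which properties each instance exhibits,
# without committing the instance to any class.
instance_base = {
    "i1": {"has_isbn", "has_title", "has_author"},
    "i2": {"has_title", "has_runtime"},
}

# Layer 2: the class base defines classes purely in terms of properties.
class_base = {
    "Book":  {"has_isbn", "has_title"},
    "Movie": {"has_title", "has_runtime"},
}

def classes_of(instance):
    """Classification is derived, not inherent: an instance belongs to every
    class whose defining properties it possesses."""
    props = instance_base[instance]
    return [c for c, defining in class_base.items() if defining <= props]

def add_property(instance, prop):
    """Instances can evolve without schema surgery; classification simply changes."""
    instance_base.setdefault(instance, set()).add(prop)

print(classes_of("i1"))          # ['Book']
add_property("i2", "has_isbn")
print(classes_of("i2"))          # ['Book', 'Movie']
```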
we conclude by considering the adequacy of a two-layered approach, outlining possible implementation strategies, and drawing attention to some practical considerations. jeffrey parsons yair wand the design and implementation of persistent transactions in an object database system hong-tai chou a graphical reflection notation used in an intelligent discovery world tutoring system jamie schultz touch-typing with a stylus david goldberg cate richardson performance analyses of data bases integration technologies (odbc, ole db) ognian nakov vassil vassilev some dynamical properties of sequentially acquired information (abstract only) in this paper we obtain two closely related theorems that essentially say that no matter what information metric is used, on the average the value of the accumulated information at stopping time is bounded by a multiple of the expected stopping time. these results are also independent of the particular stopping strategy employed although they do require that the expected stopping time be finite. these results, along with a general type of stopping strategy based on incremental information, are given. later we apply our general theorem to a specific stopping strategy associated with the gis model. although we concentrate on the problem of stopping, the information function on which this stopping decision is based can also be used to choose the coa for the next cycle of the feedback loop. we apply our results to an estimation problem involving the well-known shannon-wiener measure of information. since our theorems require that the expected stopping times be finite, some time is devoted to a discussion of necessary and sufficient conditions for finite expected stopping times. richard a. aló robert kleyle andre de korvin a generic late-join service for distributed interactive media in this paper we present a generic late-join service for distributed interactive media, i.e., networked media which involve user interactions. examples of distributed interactive media are shared whiteboards, networked computer games and distributed virtual environments. the generic late-join service allows a latecomer to join an ongoing session. this requires that the shared state of the medium is transmitted from the old participants of the session to the latecomer in an efficient and scalable way. in order to be generic and useful for a broad range of distributed interactive media, we have implemented the late-join service based on the real time application level protocol for distributed interactive media (rtp/i). all applications which employ this protocol can also use the generic late-join service. furthermore the late-join service can be adapted to the specific needs of a given application by specifying policies for the late-join process. applications which use a different application-level protocol than rtp/i may still use the concepts presented in this work. however, they will not be able to profit from our rtp/i-based implementation. jurgen vogel martin mauve werner geyer volker hilt christoph kuhmunch distributed transactions in practice the concept of transactions and its application has found wide and often indiscriminate usage. in large enterprises, the model for distributed database applications has moved away from the client-server model to a multi-tier model with large database application software forming the middle tier.
the software philosophy of "buy and not build" in large enterprises has had a major influence by extending functional requirements such as transactions and data consistency throughout the multiple tiers. in this article, we will discuss the effects of applying traditional transaction management techniques to multi-tier architectures in distributed environments. we will show the performance costs associated with distributed transactions and discuss ways by which enterprises really manage their distributed data to circumvent this performance hit. our intent is to share our experience as an industrial customer with the database research and vendor community to create more usable and scalable designs. prabhu ram lyman do pamela drew structure, navigation, and hypertext: the status of the navigation problem mark bernstein peter j. brown mark frisse robert glushko polle zellweger george landow database security and privacy sushil jajodia undoing actions in collaborative work atul prakash michael j. knister ease of user navigation through digital information spaces michael s. nilan let's not "open" cscw systems before their time peter calingaert extending olap querying to external object databases torben bach pedersen arie shoshani junmin gu christian s. jensen user interfaces: disappearing, dissolving, and evolving andries van dam evaluating evaluation measure stability this paper presents a novel way of examining the accuracy of the evaluation measures commonly used in information retrieval experiments. it validates several of the rules-of-thumb experimenters use, such as that the number of queries needed for a good experiment is at least 25, and that 50 is better, while challenging other beliefs, such as that the common evaluation measures are equally reliable. as an example, we show that precision at 30 documents has about twice the average error rate of average precision. these results can help information retrieval researchers design experiments that provide a desired level of confidence in their results. in particular, we suggest that researchers using web measures such as precision at 10 documents will need to use many more than 50 queries or will have to require two methods to have a very large difference in evaluation scores before concluding that the two methods are actually different. chris buckley ellen m. voorhees applying distortion-oriented displays to groupware real time groupware systems are now moving away from strict view-sharing and towards relaxed "what-you-see-is-what-i-see" interfaces, where distributed participants in a real time session can view different parts of a shared visual workspace. as with strict view-sharing, people using relaxed wysiwis require a sense of workspace awareness---the up-to-the-minute knowledge about another person's interactions with the shared workspace. the problem is deciding how to provide a user with an appropriate level of awareness of what other participants are doing when they are working in different areas of the workspace. in this video, we illustrate distortion-oriented displays as a novel way of providing this awareness. these displays, which employ magnification lenses and fisheye view techniques, show global context and local detail within a single window, providing both peripheral and detailed awareness of other participants' actions. three prototypes are presented as examples of groupware distortion-oriented displays.
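as a rough, hypothetical illustration of the kind of fisheye magnification such distortion-oriented displays rely on (not the prototypes described here), the following python sketch applies a simple radial fisheye transform around a focal point; the function name, the distortion parameter, and the workspace radius are assumptions made for the example.

```python
import math

MAX_RADIUS = 500.0  # assumed workspace radius in pixels (hypothetical)

def fisheye(x, y, focus_x, focus_y, distortion=4.0):
    """map a workspace point (x, y) to its distorted position around a focal point.

    points near the focus are magnified; distant points are compressed toward the
    periphery, so local detail and global context share a single window.
    """
    dx, dy = x - focus_x, y - focus_y
    r = math.hypot(dx, dy)
    if r == 0.0:
        return x, y
    # fisheye falloff g(r) = (d + 1) * r / (d * r + 1) on a normalized radius in [0, 1]
    r_norm = min(r / MAX_RADIUS, 1.0)
    g = (distortion + 1.0) * r_norm / (distortion * r_norm + 1.0)
    scale = (g * MAX_RADIUS) / r
    return focus_x + dx * scale, focus_y + dy * scale

if __name__ == "__main__":
    # a point close to the focus is pushed outward (magnified),
    # while a point at the edge of the workspace stays put (compressed context).
    print(fisheye(110, 100, 100, 100))
    print(fisheye(600, 100, 100, 100))
```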
the head-up lens uses a see-through lens to show full-sized local detail in the foreground, and a miniature overview showing global context in the background. the offset lens employs a magnifying lens to show detail over a miniature overview. the fisheye text viewer provides people with detail of what everyone is doing through multiple focal points, one for each participant. saul greenberg carl gutwin andrew cockburn structured observation: practical methods for understanding users and their work context susan m. dray data integrity and security of the corporate data base: the dilemma of end user computing lawrence s. corman elsevier science's home page: sign of the times robert c. groman multimedia research (panel session): the grand challenges for the next decade wendy hall philippe aigrain dick bulterman lawrence a. rowe brian smith building large-format displays for digital libraries michael s. brown w. brent seales stephen b. webb christopher o. jaynes capturing and playing multimedia events with streams streams is a prototype application designed and implemented at bellcore to support the recording and playback of technical presentations, training sessions, and meetings. during playback streams lets users make choices about how the recorded information is to be presented. to further aid users, streams incorporates powerful searching techniques for locating information in audio and video streams. key features of streams include storing information as separate, single-medium streams correlated with each other by time, and using digital storage to allow rapid search and random access. we describe our capture techniques, prototype streams playback system and report initial results. g. cruz r. hill scalable collection summarization and selection r. dolin d. agrawal e. el abbadi interoperability using appc the complex and competitive business world of today needs to access data for various operations from different systems placed at different geographical locations. in order to fulfil this need, one should have a reliable distributed computing environment. various components of that environment may be supplied by different vendors. this means that the computing environment not only needs to be distributed but also requires interoperability. today, interoperability is no more just an idea but a reality. there is also a growing need to support interactions among various systems in a dialog mode. integrating distributed systems can only help to achieve the goal of developing a reliable distributed computing environment. in this paper, a conceptual framework for an architecture is described in conjunction with advanced program-to-program communications lu 6.2 protocol to handle that challenge. this architecture discusses the required contracting services for integrating distributed systems. this contracting service has three major components: the contract interaction services, the contract support services, and the communications infrastructure services. debajyoti mukhopadhyay extended algebra and calculus for nested relational databases relaxing the assumption that relations are always in first-normal- form (1nf) necessitates a reexamination of the fundamentals of relational database theory. in this paper we take a first step towards unifying the various theories of 1nf databases. we start by determining an appropriate model to couch our formalisms in. we then define an extended relational calculus as the theoretical basis for our 1nf database query language. 
we define a minimal extended relational algebra and prove its equivalence to the 1nf relational calculus. we define a class of 1nf relations with certain "good" properties and extend our algebra operators to work within this domain. we prove certain desirable equivalences that hold only if we restrict our language to this domain. mark a. roth henry f. korth abraham silberschatz communicating with sound (panel session) the communicating with sound panel for chi '85 will focus on ways of expanding the user interface by using sound as a significant means of output. as a user's communication from the computer has progressed from large (and often smeary) printout to a teletypewriter and, finally, to the multi-window workstation displays of today, the emphasis has remained primarily on visual output. although many user terminals and workstations have the capability of generating sound, that capability is rarely used for more than audio cues (indicating status such as an error condition or task completion) and simple musical tunes. research shows that sounds convey meaningful information to users. with examples of such research, the panel members will demonstrate a variety of uses of sound output, discuss issues raised by the work, and suggest further directions. the intent of the panel is to stimulate thinking about expanding the user interface and to discuss areas for future research. in the statements that follow, each panelist will describe his or her own work, including the data and audio dimensions used, the value of the research, remaining issues to be addressed, and suggestions for future research and application. a list of references is included for those who wish further reading. william buxton sara a. bly steven p. frysinger david lunney douglass l. mansur joseph j. mezrich robert c. morrison the case for partial indexes current data managers support secondary and/or primary indexes on columns of relations. in this paper we suggest the advantages that result from indexes which contain only some of the possible values in a column of a relation. m. stonebraker coss: the common object services specifications bruce e. martin tickertape: awareness in a single line geraldine fitzpatrick sara parsowith bill segall simon kaplan high performance clustering based on the similarity join christian böhm bernhard braunmuller markus breunig hans-peter kriegel automatic logical navigation for relational databases paul e. reimers soon m. chung content-based retrieval for music collections yuen-hsien tseng virtual communities, design metaphors, and systems to support community groups duncan sanderson keylinking: dynamic hypertext in a digital library this paper describes keylinking, a framework for dynamic resolution of soft and implied hypertext links to the most appropriate available resource at the time of usage. bob pritchett materialized view selection and maintenance using multi-query optimization materialized views have been found to be very effective at speeding up queries, and are increasingly being supported by commercial databases and data warehouse systems. however, whereas the amount of data entering a warehouse and the number of materialized views are rapidly increasing, the time window available for maintaining materialized views is shrinking. these trends necessitate efficient techniques for the maintenance of materialized views.
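as a hedged, generic sketch of what incremental (delta-based) maintenance of a simple aggregate view looks like, compared with full recomputation, consider the following python fragment; it is an illustration of the general idea only, not the plan-selection algorithm of the paper, and all names in it are hypothetical.

```python
from collections import defaultdict

def recompute_view(sales):
    """full recomputation: rebuild sum-of-amount per product from the base table."""
    view = defaultdict(float)
    for product, amount in sales:
        view[product] += amount
    return dict(view)

def apply_delta(view, inserted, deleted=()):
    """incremental maintenance: fold only the changed rows (the delta) into the view."""
    for product, amount in inserted:
        view[product] = view.get(product, 0.0) + amount
    for product, amount in deleted:
        view[product] = view.get(product, 0.0) - amount
    return view

if __name__ == "__main__":
    base = [("widget", 10.0), ("gadget", 5.0), ("widget", 2.5)]
    view = recompute_view(base)                # {'widget': 12.5, 'gadget': 5.0}
    delta = [("gadget", 1.0), ("gizmo", 7.0)]  # newly inserted rows
    view = apply_delta(view, delta)            # touches only the affected groups
    print(view)
```

when many views share subexpressions, the same delta can in principle be computed once and reused across their maintenance expressions, which is the kind of sharing the maintenance-plan selection discussed here exploits.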
in this paper, we show how to find an efficient plan for the maintenance of a _set of materialized views_, by exploiting common subexpressions between different view maintenance expressions. in particular, we show how to efficiently select (a) expressions and indices that can be effectively shared, by _transient materialization_; (b) additional expressions and indices for _permanent materialization_; and (c) the best maintenance plan --- _incremental_ or _recomputation_ \\--- for each view. these three decisions are highly interdependent, and the choice of one affects the choice of the others. we develop a framework that cleanly integrates the various choices in a systematic and efficient manner. our evaluations show that many-fold improvement in view maintenance time can be achieved using our techniques. our algorithms can also be used to efficiently select materialized views to speed up workloads containing queries and updates. hoshi mistry prasan roy s. sudarshan krithi ramamritham reduced mvds and minimal covers multivalued dependencies (mvds) are data dependencies that appear frequently in the "real world" and play an important role in designing relational database schemes. given a set of mvds to constrain a database scheme, it is desirable to obtain an equivalent set of mvds that do not have any redundancies. in this paper we define such a set of mvds, called reduced mvds, and present an algorithm to obtain reduced mvds. we also define a minimal cover of a set of mvds, which is a set of reduced mvds, and give an efficient method to find such a minimal cover. the significance and properties of reduced mvds are also discussed in the context of database design (e.g., 4nf decomposition) and conflict-free mvds. z. meral ozsoyoglu li-yan yuan looking forward (keynote) cathy marshall building virtual teams: perspectives on communication, flexibility and trust gloria mark the human factors of natural language query systems understanding the hidden limitations and constraints of the system is the largest potential problem for users of natural language query (nlq). by their nature, most nlq systems hide "how it works" because they are intended for users who do not want to know. however, human factors research indicates that when users do not have a good understanding of a system, the behavior of the system becomes unpredictable. for example, consider the two natural language questions: "which students have more than 20 credits?" and "which students have more than 20 courses?" some nlq systems may be able to answer the first but not the second question and when the user knows that both answers can be derived from the database, the system appears to be inconsistent. to be able to use this type of nlq system effectively, a user will have to learn the hidden system constraints that produce this type of inconsistency. there are two approaches to minimize the impact of the hidden constraints. one approach is to customize the linguistic coverage of the nlq system for a particular user population so that most of their questions can be processed correctly. another approach that is being investigated in our laboratory is to explicitly define a learnable and memorable subset of the language so that the limitations and constraints will no longer be hidden. the first approach may not guarantee a solution. in order to gain good linguistic coverage of a domain, the system must initially have a powerful enough linguistic capability to be able to represent a deep structure of the processed sentence. 
in addition, however, the grammar and lexicon needs to be augmented with semantic and pragmatic information about the task domain and linguistic requirement of the users. for each application of a nlq system a great deal of effort is required to capture, define and enter this information into the system. this requires a user who is knowledgeable about how to acquire this information and how to translate this information into the form required by the nlq program. thus, the human factors of the nlq application depends on the ability of knowledgeable users to initially supply this information to the system as well as their ability to maintain the system as it changes and as the needs of the users change. it is unlikely that this ability will be uniformly present in all environments in which a nlq system could be applied. therefore this approach to solving the human factors problem of these systems will likely produce mixed results. this approach, then, may be feasible only in situations where a great deal of work has been done at understanding the task domain such as in the domains in which expert systems have been developed. the second approach may not be in the spirit of providing unconstrained natural language to end user populations, but it is an approach that promises to be a more general solution to the human factors problems associated with nlq systems. by restricting users to a memorable subset of natural language we are taking the burden off of the computer system (and the programmers ability to customize it) and are shifting the burden to the user who will have to learn and remember how to restrict their own natural language. however, the burden that we shift to the user should be a light one. humans are naturally skillful at processing language and in our laboratory we are exploring what kinds of language restrictions are easy for a user to follow. our approach is empirical. a general methodology for obtaining a usable subset of english for query could consist of the following steps: determine users' natural form of question asking within a database application domain. select a subset which can be expressed as rules to be learned and followed. test users, identifying which rules can or cannot be easily followed. iterate the previous steps until all rules can be followed and users can still express all required retrieval requests. move to other database applications. we feel all steps are important. for example, rules not based on users' natural forms of question asking will likely not be successful. similarly, english subsets based on users' natural forms may not be successful unless the rules are communicable to the user. a major obstacle to overcome in the study of natural forms of question writing, is to develop a task that does not bias the subjects' natural language. to overcome this problem, some researchers have given subjects open- ended problem statements that require many questions to solve. this is adequate for studying connected discourse, but the experimenter has very little control over the types of questions that can be asked. in our studies, we wanted to be able to ensure that the users would be able to express all of the data retrieval functions that are currently available in existing formal query languages. thus, we needed to control the types of questions that subjects would be required to enter. to meet these needs, we presented forms to subjects that contained information that was obtainable from the database. 
on each form, however, some of the information was missing, and it was the subject's task to type a question that would retrieve the missing information. the form contained enough context to indicate the retrieval keys but not enough to bias the syntax of the user's question. this technique did, of course, bias the vocabulary the subjects used. however, this bias was in a direction which represented knowledge actual users normally have about the database they use. thus, query writing performance would not be affected by the subject's inability to think of appropriate task-related questions. the forms were constructed to represent all of the data retrieval capability contained in a powerful formal query language such as sql. thus, they represented questions that were based on a relational database consisting of six tables of information about a hypothetical college, including information about students, faculty, courses, and departments. therefore, all questions which were represented on the forms had analogous sql solutions and covered the full range of sql function. our research examined the effects that various sets of restrictions would have on the types of syntactical constructions subjects would use to express the questions indicated on the forms. the first set of studies imposed a vocabulary restriction on subjects' responses. they could use only the pre-defined names of the attributes in the database but they had no restriction on how they could combine these words into a sentence. the results showed that performance was very poor. thus, vocabulary restrictions of the type that are commonly imposed by formal query languages create difficulties for the user. the second set of studies removed the vocabulary restrictions and showed that a large percentage of the natural queries that our subjects produced could be described with a limited set of grammatical rules. a parser based on these rules was implemented and a simple set of instructions was given to a new set of subjects. these subjects showed that they could follow the instructions and could restrict the grammatical form of their questions to the subset selected. thus, these results indicated that users would be able to learn to use a natural subset of english for database query when the syntactic rules were exposed. however, it became clear that even when the subjects could restrict their questions using the syntactic rules of this natural subset, an nlq system would still require a significant amount of semantic and pragmatic knowledge to be able to process the questions that were asked. thus, a large amount of customization would still be required. therefore, the next phase of experimentation was focused on discovering the types of semantic restrictions that users would be able to learn and remember. in addition to the syntactic restrictions, new subjects were asked to include more semantic information in their questions. specifically, they were given a model of the database and asked to include the name or a synonym of the name of the attribute associated with each database value expressed in the query. thus, instead of asking "what is the major of david lee," subjects were required to ask "what is the major of the student david lee." the results showed that users could easily specify the attribute name when selecting on a particular value of that attribute but had difficulty specifying the name when the attribute was used to calculate a value not in the database.
for example, users had trouble expressing the concept of a "full class" as a class with "size greater than or equal to limit." these results suggest that any database query system which is intended to be for general use (i.e., transportable across application domains) will require that its users have a good understanding of what is in the database so that they will know what attributes of the data they can refer to. this suggests that a well-human-factored database query system will expose the underlying structure of the database in a natural way and then allow a flexible vocabulary to be used to reference the items in the database that users know about. william c. ogden your local newspaper: will it just be content in a digital library? b. david kingery designing icons and visual symbols william horton u2rs: an upgradable universal relation system the two proposed advantages of the universal relation model are that it can deal with the logical navigation problem and the database structure upgrading problem [maier 1984]. most previous research has concentrated on the first advantage, to relieve the end-user from logical navigation through the underlying structures. however, the second proposed advantage, database structure upgrading, has not been fully discussed. this is the topic of this paper. an upgradable universal relational system (u2rs), which is able to deal with the database structure upgrading problem, is presented. the procedures for verifying and executing database system upgrading operations are described, various anticipated situations when upgrading an existing universal relation database system are classified, and conditions for determining whether a given system will be upgradable for a given upgrading operation are examined. t. c. ting lee a. becker z. q. tan putting the feel in look and feel haptic devices are now commercially available and thus touch has become a potentially realistic solution to a variety of interaction design challenges. we report on an investigation of the use of touch as a way of reducing visual overload in the conventional desktop. in a two-phase study, we investigated the use of the phantom haptic device as a means of interacting with a conventional graphical user interface. the first experiment compared the effects of four different haptic augmentations on usability in a simple targeting task. the second experiment involved a more ecologically-oriented searching and scrolling task. results indicated that the haptic effects did not improve users' performance in terms of task completion time. however, the number of errors made was significantly reduced. subjective workload measures showed that participants perceived many aspects of workload as significantly less with haptics. the results are described and the implications for the use of haptics in user interface design are discussed. ian oakley marilyn rose mcgee stephen brewster philip gray hypertext wrapped up (abstract) simon knight hugh davis managing networked multimedia data erik duval henke olivie piers o'hanlon david gordon jameson nermin ismail steve wilbur richard beckwith coming to the wrong decision quickly: why awareness tools must be matched with appropriate tasks this paper presents an awareness tool designed to help distributed, asynchronous groups solve problems quickly. using a lab study, it was found that groups that used the awareness tool tended to converge and agree upon a solution more quickly.
however, it was also found that individuals who did not use the awareness tool got closer to the correct solution. implications for the design of awareness tools are discussed, with particular attention paid to the importance of matching the features of an awareness tool with a workgroup's tasks and goals. alberto espinosa jonathan cadiz luis rico-gutierrez robert kraut william scherlis glenn lautenbacher indexing and retrieval of video based on spatial relation sequences serhan dagtas arif ghafoor a semi-automatic approach to home video editing andreas girgensohn john boreczky patrick chiu john doherty jonathan foote gene golovchinsky shingo uchihashi lynn wilcox using elastic windows for world-wide web browsing eser kandogan ben shneiderman a fast procedure for finding a tracker in a statistical database to avoid trivial compromises, most on-line statistical databases refuse to answer queries for statistics about small subgroups. previous research discovered a powerful snooping tool, the tracker, with which the answers to these unanswerable queries are easily calculated. however, the extent of this threat was not clear, for no one had shown that finding a tracker is guaranteed to be easy. this paper gives a simple algorithm for finding a tracker when the maximum number of identical records is not too large. the number of queries required to find a tracker is at most o(log2 s) queries, where s is the number of distinct records possible. experimental results show that the procedure often finds a tracker with just a few queries. the threat posed by trackers is therefore considerable. dorothy e. denning jan schlörer an investigation of the inconsistencies of the rim-5 relational information management database management system (abstract only) this paper focuses on three areas of research investigating the rim dbms, which is commercially available and is widely used in the academic environment. the first problem area is in the update mode in which rim locks out users from the entire database (actually a view in the relational dbms sense). a method which only locks out a minimal portion during update is presented. the second area of concern is the relational completeness of rim. to be relationally complete a dbms must support the relational algebraic operations proposed by codd. the rim system does not support the division operator, nor can its equivalent be obtained using the current rim operators. two algorithms, and an analysis of these, are presented which when implemented perform the division operator. the final area addressed is the fact that rim supports vectors and matrices as elementary data types. this allows the existence of unnormalized relations. pentti a. honkanen a methodology for taking account of user tasks, goals and behavior for design of computerized library catalogs n. j. belkin tracking join and self-join sizes in limited storage noga alon phillip b. gibbons yossi matias mario szegedy filtered suggestions joris verrips evaluating a content based image retrieval system content based image retrieval (cbir) presents special challenges in terms of how image data is indexed and accessed, and how end systems are evaluated. this paper discusses the design of a cbir system that uses global colour as the primary indexing key, and a user-centered evaluation of the system's visual search tools. the results indicate that users are able to make use of a range of visual search tools, and that different tools are used at different points in the search process.
the results also show that the provision of a structured navigation and browsing tool can support image retrieval, particularly in situations in which the user does not have a target image in mind. the results are discussed in terms of their implications for the design of visual search tools, and their implications for the use of user-centered evaluation for cbir systems. sharon mcdonald ting-sheng lai john tait research at altai"r altair is a five year project which began in september of 1986. its goal is to design an implement a next generation database system. the five year project was divided in two phases: a three year prototyping phase and a two year phase devoted for one part to the development of a product from the prototype and for the other part to a new research effort. the three year phase ended by the demonstration of the v1 prototype of the o2 object-oriented database system which has been distributed for experimentation to more than 40 universities and about 13 industrial partners. the contribution of the altair group to the database research community has been mainly in three areas: data model and database languages, object stores, and programming environments. furthermore, all these efforts have been integrated in a consistent way into the v1 prototype. the results of this research and development effort are being summarized in a book [bdk91] which is a commented collection of papers, most of them already published in the proceedings of a number of internationally recognized conferences. we briefly survey in the following the o2 activities. philippe richard halclon: designing for people this short paper discusses some early work on the halcion project. the objective of this project is to develop an educational hypermedia cd-rom teaching principles and practices of human- computer interaction to a commercial audience. daniel crow the cambridge university multimedia document retrieval demo system (demonstration session) a. tuerk s. e. johnson p. jourlin k. spärck jones p. c. woodland computer conferencing as a means to optimize conflict in established small groups: a laboratory study steven m. zeltmann when my face is the interface: an experimental comparison of interacting with one's own face or someone else's face clifford nass eun-young kim eun-ju lee the role of it in the creation of sustainable communities (panel session) david b. paradice james f. courtney kalle lyytinen jaana porra an optimistic commit protocol for distributed transaction management eliezer levy henry f. korth abraham silberschatz on genericity and parametricity (extended abstract) catriel beeri tova milo paula ta-shma automatic hypermedia generation for ad hoc queries on semi-structured data this paper describes research on the automatic generation of hypermedia or web-based presentations for semi-structured data resulting from ad-hoc queries. we identify how different aspects of adaptation, such as personalization and customization, influence the generation process. we address important aspects of the software that facilitates the generation process. geert-jan houben paul de bra the uci kdd archive of large data sets for data mining research and experimentation stephen d. bay dennis kibler michael j. 
pazzani padhraic smyth making computers easier for older adults to use: area cursors and sticky icons aileen worden neff walker krishna bharat scott hudson the functional guts of the kleisli query system limsoon wong database schema design: an experimental comparison between normalization and information analysis peretz shoval moshe even-chaime the applied ergonomics group at philips ian mcclelland visual relevance analysis nikos pediotakis mountaz hascoët-zizi contextual design: using customer work models to drive systems design karen holtzblatt hugh beyer babble: supporting conversation in the workplace erin bradner wendy a. kellogg thomas erickson unifying functional and multivalued dependencies for relational database design meral ozsoyoglu li yan yuan term position ranking: some new test results presents seven sets of laboratory results testing variables in term position ranking which produce a phrase effect by weighting the distance between proximate terms. results of the 73 tests conducted by this project are included, covering variant term position algorithms, sentence boundaries, stopword counting, every pairs testing, field selection, and combinations of algorithms including collection frequency, record frequency and searcher weighted. the discussion includes the results of tests by fagan and by croft, the need for term stemming, proximity as a precision device, comparisons with boolean, and the quality of test collections. e. michael keen the hybrid object-relational architecture (hora): an integration of object-oriented and relational technology jeff sutherland matthew pope ken rugg metaphor in theory and practice: the influence of metaphors on expectations the use of metaphors is pervasive in all forms of discourse. this paper is concerned with providing a brief review of the development of metaphor theory, illustrated with some examples of supportive empirical research. these include a canonical study of the way concepts of instruction and education are influenced, a study into the effects of human-computer interface metaphors on computer systems users, and a study designed to assess the impacts of metaphor use on attitudes to internet commerce, particularly on attitudes to the roles played by information technology (it). the paper provides some contextual background to assist consideration of the effects of metaphors on attitudes and beliefs. in practice, metaphor use may be intentional, unconscious, or a mixture of both, but in any case metaphors can be shown to play powerful roles in the social construction of human reality. anne hamilton public use of digital community information systems: findings from a recent study with implications for system design the internet has considerably empowered libraries and changed common perception of what they entail. public libraries, in particular, are using technological advancements to expand their range of services and enhance their civic roles. providing community information (ci) in innovative, digital forms via community networks is one way in which public libraries are facilitating everyday information needs. these networks have been lauded for their potential to strengthen physical communities through increasing information flow about local services and events, and through facilitating civic interaction. however, little is known about how the public uses such digital services and what barriers they encounter.
this paper presents findings about how digital ci systems benefit physical communities based on extensive case studies in three states. at each site, rich data were collected using online surveys, field observation, in-depth interviews and focus groups with internet users, human service providers and library staff. both the online survey and the follow-up interviews with respondents were based on sense-making theory. in our paper we discuss our findings regarding: (1) how the public is using digital ci systems for daily problem solving, and (2) the types of barriers they encounter. suggestions for improving digital ci systems are provided. karen e. pettigrew joan c. durrance an integrated approach to logical design of relational database schemes we propose a new approach to the design of relational database schemes. the main features of the approach are the following: (1) a combination of the traditional decomposition and synthesis approaches, thus allowing the use of both functional and multivalued dependencies; (2) separation of structural dependencies relevant for the design process from integrity constraints, that is, constraints that do not bear any structural information about the data and which should therefore be discarded at the design stage (this separation is supported by a simple syntactic test filtering out nonstructural dependencies); and (3) automatic correction of schemes which lack certain desirable properties. catriel beeri michael kifer evolving hypermedia middleware services: lessons and observations uffe k. wiil peter j. nurnberg talking to customers on the web: a comparison of three voice alternatives qiping zhang catherine g. wolf shahrokh daijavad maroun touma vr's frames of reference: a visualization technique for mastering abstract multidimensional information marilyn c. salzman chris dede r. bowen loftin groupweb: a www browser as real time groupware saul greenberg mark roseman separations of concerns in the chiron-1 user interface development and management system richard n. taylor gregory f. johnson the "starfire" video prototype project: a case history bruce tognazzini database for office automation with a fast growth in the quantity and quality of software applications for office systems and advanced workstations, it is becoming clear that the trend is towards a more integrated software architecture for workstation or office systems. a critical piece in a truly integrated system is a well-designed general purpose database management system (dbms) that supports all applications running on the workstation (or office system). sophisticated dbmss have been developed on the mainframes, primarily for large data processing (dp) applications. is it true, then, that all we need is to transport these systems to the workstation? the first problem that one faces to move a full scale dbms to a workstation is the size. it is not clear that trimming a big dbms down is an easy task. some of the advanced capabilities might be modular and relatively easy to remove. others could be quite difficult as the code is distributed throughout the system. in some cases, it might not even be possible to "trim" when the general capability is needed but to a lesser extent. in practice, most major dbmss are built without serious concern for space restrictions. it might be easier to design and implement a new system with "smallness" in mind than to trim down a huge one.
the considerations on smallness include the data structures, control blocks, interfaces, buffer requirements, configuration and packaging of the system, as well as the number of lines of code. size is only the beginning. smallness makes the dbms possible to run on a workstation. because of the new operating environment, there are new requirements for the dbms which are different from the traditional dp environment on a main frame. the real challenge is to determine what these requirements are and how to design a system accordingly. on an advanced workstation, one has to support word processing, data processing, engineering and scientific, administrative, as well as latent semantic indexing: a probabilistic analysis christos h. papadimitriou hisao tamaki prabhakar raghavan santosh vempala semantics-based information brokering the rapid advances in computer and communication technologies, and their merger, is leading to a global information market place. it will consist of federations of very large number of information systems that will cooperate to varying extents to support the users' information needs. we discuss an approach to information brokering in the above environment. we discuss two of its tasks: information resource discovery, which identifies relevant information sources for a given query, and query processing, which involves the generation of appropriate mapping from relevant but structurally heterogeneous objects. query processing consists of information focusing and information correlation. our approach is based on: semantic proximity, which represents semantic similarities based on the context of comparison, and schema correspondences which are used to represent structural mappings and are associated with the context. the context of comparison of the two objects is the primary vehicle to represent the semantics for determining semantic proximity. specifically, we use a partial context representation to capture the semantics in terms of the assumptions in the intended use of the objects and the intended meaning of the user query. information focusing is supported by subsequent context comparison. the same mechanism can be used to support information resource discovery. context comparison leads to changes in schema correspondences that are used to support information correlation. vipul kashyap amit sheth timespace in the workplace: dealing with interruptions brid o'conaill david frohlich audio-visual tracking for natural interactivity the goal in user interfaces is natural interactivity unencumbered by sensor and display technology. in this paper, we propose that a multi-modal approach using inverse modeling techniques from computer vision, speech recognition, and acoustics can result in such interfaces. in particular, we demonstrate a system for audio-visual tracking, showing that such a system is more robust, more accurate, more compact, and yields more information than using a single modality for tracking. we also demonstrate how such a system can be used to find the talker among a group of individuals, and render 3d scenes to the user. gopal pingali gamze tunali ingrid carlbom rdb/vms support for multi-media databases t. k. rengarajan measuring information quality of web sites: development of an instrument pairin katerattanakul keng siau evaluating program representation in a demonstrational visual shell francesmary modugno albert t. corbett brad a. 
myers the design of star's records processing: data processing for the noncomputer professional robert purvy jerry farrell paul klose dynamic hypertext: querying and linking richard bodner mark chignell an outline of mql victor j. streeter autoadmin "what-if" index analysis utility as databases get widely deployed, it becomes increasingly important to reduce the overhead of database administration. an important aspect of data administration that critically influences performance is the ability to select indexes for a database. in order to decide the right indexes for a database, it is crucial for the database administrator (dba) to be able to perform a quantitative analysis of the existing indexes. furthermore, the dba should have the ability to propose hypothetical ("what-if") indexes and quantitatively analyze their impact on performance of the system. such impact analysis may consist of analyzing workloads over the database, estimating changes in the cost of a workload, and studying index usage while taking into account projected changes in the sizes of the database tables. in this paper we describe a novel index analysis utility that we have prototyped for microsoft sql server 7.0. we describe the interfaces exposed by this utility that can be leveraged by a variety of front-end tools and sketch important aspects of the user interfaces enabled by the utility. we also discuss the implementation techniques for efficiently supporting "what-if" indexes. our framework can be extended to incorporate analysis of other aspects of physical database design. surajit chaudhuri vivek narasayya mixing oil and water? ethnography versus experimental psychology in the study of computer-mediated communication. andrew monk bonnie nardi nigel gilbert marilyn mantei john mccarthy the need for distributed asynchronous transactions the theme of the paper is to promote research on asynchronous transactions. we discuss our experience of executing synchronous transactions on a large distributed production system in the boeing company. the poor performance of synchronous transactions in our environment motivated the exploration of asynchronous transactions as an alternate solution. this paper presents the requirements and benefits/limitations of asynchronous transactions. open issues related to large scale deployments of asynchronous transactions are also discussed. lyman do prabhu ram pamela drew bridging physical and virtual worlds with electronic tags roy want kenneth p. fishkin anuj gujar beverly l. harrison metadata visualization for digital libraries: interactive timeline editing and review vijay kumar richard furuta robert b. allen using semantic knowledge of transactions to increase concurrency when the only information available about transactions is syntactic information, serializability is the main correctness criterion for concurrency control. serializability requires that the execution of each transaction must appear to every other transaction as a single atomic step (i.e., the execution of the transaction cannot be interrupted by other transactions). many researchers, however, have realized that this requirement is unnecessarily strong for many applications and can significantly increase transaction response time. to overcome this problem, a new approach for controlling concurrency that exploits the semantic information available about transactions to allow controlled nonserializable interleavings has recently been proposed.
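as a purely illustrative sketch (not the rc-schedule mechanism defined in this paper), the following python fragment shows the general flavor of semantic concurrency control: a scheduler consults an application-supplied compatibility table over transaction types to decide whether a step may interleave at a break-point; the types and the table here are hypothetical.

```python
# hypothetical transaction types and a compatibility table supplied by the application:
# True means steps of the two types may interleave at a break-point.
COMPATIBLE = {
    ("deposit", "deposit"): True,   # commutative updates may interleave
    ("deposit", "audit"): False,    # an audit must see a consistent state
    ("audit", "deposit"): False,
    ("audit", "audit"): True,
}

def may_interleave(active_types, incoming_type):
    """return True if a step of incoming_type may run at a break-point
    while transactions of the given active types are still in progress."""
    return all(COMPATIBLE.get((t, incoming_type), False) for t in active_types)

if __name__ == "__main__":
    print(may_interleave({"deposit"}, "deposit"))  # True: controlled nonserializable interleaving allowed
    print(may_interleave({"deposit"}, "audit"))    # False: delay the audit step
```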
this approach is useful when the cost of producing only serializable interleavings is unacceptably high. the main drawback of the approach is the extra overhead incurred by utilizing the semantic information. we examine this new approach in this paper and discuss its strengths and weaknesses. we introduce a new formalization for the concurrency control problem when semantic information is available about the transactions. this semantic information takes the form of transaction types, transaction steps, and transaction break-points. we define a new class of "safe" schedules called relatively consistent (rc) schedules. this class contains serializable as well as nonserializable schedules. we prove that the execution of an rc schedule cannot violate consistency and propose a new concurrency control mechanism that produces only rc schedules. our mechanism assumes fewer restrictions on the interleavings among transactions than previously introduced semantic-based mechanisms. abdel aziz farrag m. tamer özsu the cost structure of sensemaking daniel m. russell mark j. stefik peter pirolli stuart k. card decomposition of a relation scheme into boyce-codd normal form decomposition into boyce-codd normal form (bcnf) with a lossless join and preservation of dependencies is desired in the design of a relational database scheme. however, there may be no decomposition of a relation scheme into bcnf that is dependency preserving, and the known algorithms for lossless join decomposition into bcnf require exponential time and space. in this paper we give an efficient algorithm for lossless join decomposition and show that the problem of deciding whether a relation scheme has a dependency-preserving decomposition into bcnf is np-hard. the algorithm and the proof assume that all data dependencies are functional. we then discuss the extension of our techniques to the case where data dependencies are multivalued. don-min tsou patrick c. fischer a relational database interface to the world-wide web ellen spertus lynn andrea stein mcluhan meets the net larry press the applied ergonomics group at philips ian mcclelland a self-organized file cabinet the self-organizing file cabinet is an information retrieval system associated with a user's physical file cabinet. it enhances a physical file cabinet with electronic information about the papers in it. it can remember, organize, update, and help the user find documents contained in the physical file cabinet. the system consists of a module for extracting electronic information about the papers stored in the file cabinet, a module for representing and storing this information in multiple views, and a module that allows a user to interact with this information. the focus of this paper is on the design and evaluation of the self-organized file cabinet. dawn lawrie daniela rus bess: storage support for interactive visualization systems a. biliris t. a. funkhouser w. o'connell e. panagos gestalt: an expressive database programming system many new database applications require computational and data modelling power simply not present in conventional database management systems. developers are forced to design complex encodings of complex data into a limited set of database types, and to embed dml commands into a host programming language, a notoriously tricky and error-prone enterprise. in this paper, we describe the design and implementation of gestalt, a system and methodology for organizing and interfacing to multiple heterogeneous, existing database systems.
application programs are written in a supported programming language (currently c and lisp) using high-level data and control abstractions native to the language. the system is flexible in that the underlying database systems can easily be replaced/upgraded/augmented without affecting existing application programs. we also describe our experience with the system: gestalt has been in daily operational use at mit for over a year, supporting an information system for caf, a research facility for the automation of semiconductor fabrication. michael l. heytens rishiyur s. nikhil the data that you won't find in databases: tutorial panel on data exchange formats peter buneman david maier syncro: a dataflow command shell for the lilith/modula computer syncro is a two-dimensional command interpreter that allows human interface through a graphic command language. this paper describes the concept of two- dimensional commands for direct implementation of leveled data flow structures, and comments on the syncro scheme for effecting them. syncro is implemented in modula-2 on niklaus wirth's lilith/modula computer. tom demarco aurel soceneantu a system for automatic personalized tracking of scientific literature on the web kurt d. bollacker steve lawrence c. lee giles computing graphical queries over xml data the rapid evolution of xml from a mere data exchange format to a universal syntax for encoding domain-specific information raises the need for new query languages specifically conceived to address the characteristics of xml. such languages should be able not only to extract information from xml documents, but also to apply powerful transformation and restructuring operators, based on a well-defined semantics. moreover, xml queries should be natural to write and understand, as nontechnical persons also are expected to access the large xml information bases supporting their businesses. this article describes xml-gl, a graphical query language for xml data. xml-gl's uniqueness is in the definition of a graph-based syntax to express a wide variety of xml queries, ranging from simple selections to expressive data transformations involving grouping, aggregation, and arithmetic calculations. xml-gl has an operational semantics based on the notion of graph matching, which serves as a guideline both for the implementation of native processors, and for the adoption of xml-gl as a front-end to any of the xml query languages that are presently under discussion as the standard paradigm for querying xml data. sara comai ernesto damiani piero fraternali let's browse: a collaborative web browsing agent henry lieberman neil w. van dyke adrian s. vivacqua design: (inter)facing the millennium: where are we (going)? k. ehrlich a. henderson probabilistic search team weighting - some negative results the effect of probabilistic search term weighting on the improvement of retrieval quality has been demonstrated in various experiments described in the literature. in this paper, we investigate the feasibility of this method for boolean retrieval with terms from a prescribed indexing vocabulary. this is a quite different test setting in comparison to other experiments where linear retrieval with free text terms was used. the experimental results show that in our case no improvement over a simple coordination match function can be achieved. on the other hand, models based on probabilistic indexing outperform the ranking procedures using search term weights. n. fuhr p. 
muller organizational learning with flexible workflow management systems thomas hermann katharina just-hahn virtual teams: an exploratory study of key challenges and strategies guy pare line dube architecture of the artifact-based collaboration system matrix k. jeffay j. k. lin j. menges f. d smith j. b. smith research issues in the design of online communities: report on the chi 99 workshop amy bruckman judith donath thomas erickson wendy kellogg barry wellman data base machines (part ii) p. b. berra messidor system messidor is an interactive information retrieval system. it differs from current systems in that it allows the simultaneous search of several bibliographic databases. the databases may be on different sites and may use different query languages (mistral, quest,...). these local languages are invisible to the users of messidor. they are all translated to a single language. we describe messidor's goals, system architecture, user language and some details of the implementation. the system is implemented on a micral 80-30 microcomputer. catherine moulinoux jean-claude faure witold litwin structural patterns and hypertext rhetoric mark bernstein www tim berners-lee jean-francois groff beyond the plane: spatial hypertext in a virtual reality world rosemary michelle simpson informing the design of an information management system with iterative fieldwork we report on the design process of a personal information management system, raton laveur, and how it was influenced by an intimate relationship between iterative fieldwork and design thinking. initially, the system was conceived as a paper-based ui to calendar, contacts, to-dos and notes. as the fieldwork progressed, our understanding of peoples practices and the constraints of their office infrastructures radically shifted our design goals away from paper-based interaction to embedded interaction with our system. by this we mean embedding information management functionality in an existing application such as email. victoria bellotti ian smith the o2 object-oriented database system françois bancilhon fotofile: a consumer multimedia organization and retrieval system allan kuchinsky celine pering michael l. creech dennis freeze bill serra jacek gwizdka a distributed multiple-response resolver for value-order retrieval dik lun lee retrospective case base browsing: a data mining process enhancement aubrey e. hill warren t. jones hypermud joern bollmeyer what every systems developer should know about hypertext (abstract) michael bieber statistical database design the security problem of a statistical database is to limit the use of the database so that no sequence of statistical queries is sufficient to deduce confidential or private information. in this paper it is suggested that the problem be investigated at the conceptual data model level. the design of a statistical database should utilize a statistical security management facility to enforce the security constraints at the conceptual model level. information revealed to users is well defined in the sense that it can at most be reduced to nondecomposable information involving a group of individuals. in addition, the design also takes into consideration means of storing the query information for auditing purposes, changes in the database, users' knowledge, and some security measures. francis y. 
chin gultekin ozsoyoglu a user-centred evaluation of ranking algorithms for interactive query expansion the evaluation of 6 ranking algorithms for the ranking of terms for query expansion is discussed within the context of an investigation of interactive query expansion and relevance feedback in a real operational environment. the yardstick for the evaluation was provided by the user relevance judgements on the lists of the candidate terms for query expansion. the evaluation focuses on the similarities in the performance of the different algorithms and how the algorithms with similar performance treat terms. efthimis n. efthimiadis a visual language for querying spatio-temporal databases christine bonhomme claude trepied marie-aude aufaure robert laurini melodic matching techniques for large music databases with the growth in digital representations of music, and of music stored in these representations, it is increasingly attractive to search collections of music. one mode of search is by similarity, but, for music, similarity search presents several difficulties: in particular, for melodic query support, deciding what part of the music is likely to be perceived as the theme by a listener, and deciding whether two pieces of music with different sequences of notes represent the same theme. in this paper we propose a three-stage framework for matching pieces of music. we use the framework to compare a range of techniques for determining whether two pieces of music are similar, by experimentally testing their ability to retrieve different transcriptions of the same piece of music from a large collection of midi files. these experiments show that different comparison techniques differ widely in their effectiveness; and that, by instantiating the framework with appropriate music manipulation and comparison techniques, pieces of music that match a query can be identified in a large collection. alexandra uitdenbgerd justin zobel a critical investigation of recall and precision as measures of retrieval system performance recall and precision are often used to evaluate the effectiveness of information retrieval systems. they are easy to define if there is a single query and if the retrieval result generated for the query is a linear ordering. however, when the retrieval results are weakly ordered, in the sense that several documents have an identical retrieval status value with respect to a query, some probabilistic notion of precision has to be introduced. relevance probability, expected precision, and so forth, are some alternatives mentioned in the literature for this purpose. furthermore, when many queries are to be evaluated and the retrieval results averaged over these queries, some method of interpolation of precision values at certain preselected recall levels is needed. the currently popular approaches for handling both a weak ordering and interpolation are found to be inconsistent, and the results obtained are not easy to interpret. moreover, in cases where some alternatives are available, no comparative analysis that would facilitate the selection of a particular strategy has been provided. in this paper, we systematically investigate the various problems and issues associated with the use of recall and precision as measures of retrieval system performance. our motivation is to provide a comparative analysis of methods available for defining precision in a probabilistic sense and to promote a better understanding of the various issues involved in retrieval performance evaluation. 
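as a small, generic illustration of two of the measures discussed above (average precision, and interpolated precision at fixed recall levels) computed from a ranked list of judged documents; this is a textbook-style sketch, not the authors' probabilistic formulation, and the example data are made up.

```python
def average_precision(ranked_rel, total_relevant):
    """ranked_rel: list of booleans, True where the document at that rank is relevant."""
    hits, score = 0, 0.0
    for rank, rel in enumerate(ranked_rel, start=1):
        if rel:
            hits += 1
            score += hits / rank          # precision at each relevant document
    return score / total_relevant if total_relevant else 0.0

def interpolated_precision(ranked_rel, total_relevant, levels=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """interpolated precision: maximum precision at any rank whose recall >= each level."""
    points, hits = [], 0
    for rank, rel in enumerate(ranked_rel, start=1):
        if rel:
            hits += 1
        points.append((hits / total_relevant, hits / rank))   # (recall, precision)
    return {r: max((p for rec, p in points if rec >= r), default=0.0) for r in levels}

if __name__ == "__main__":
    run = [True, False, True, False, False, True]   # hypothetical judged ranking
    print(average_precision(run, total_relevant=3))          # about 0.722
    print(interpolated_precision(run, total_relevant=3))
```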
vijay raghavan peter bollmann gwang s. jung user interface design in the trenches: some tips on shooting from the hip robert m. mulligan mark w. altom david k. simkin optimization techniques for queries with expensive methods object- relational database management systems allow knowledgeable users to define new data types as well as new methods (operators) for the types. this flexibility produces an attendant complexity, which must be handled in new ways for an object-relational database management system to be efficient. in this article we study techniques for optimizing queries that contain time- consuming methods. the focus of traditional query optimizers has been on the choice of join methods and orders; selections have been handled by "pushdown" rules. these rules apply selections in an arbitrary order before as many joins as possible, using th e assumption that selection takes no time. however, users of object-relational systems can embed complex methods in selections. thus selections may take significant amounts of time, and the query optimization model must be enhanced. in this article we carefully define a query cost framework that incorporates both selectivity and cost estimates for selections. we develop an algorithm called predicate migration, and prove that it produces optimal plans for queries with expensive methods. we then describe our implementation of predicate migration in the commercial object-relational database management system illustra, and discuss practical issues that affect our earlier assumptions. we compare predicate migration to a variety of simplier optimization techniques, and demonstrate that predicate migration is the best general solution to date. the alternative techniques we present may be useful for constrained workloads. joseph m. hellerstein scandinavian design: users in product development morten kyng graphical specification of conceptual database schemes using modified model "entity-relationship" silian arsov boris rachev ceva: a tool for collaborative video analysis andy cockburn tony dale an effective way to hire technical staff dianna franklin a society model for office information systems a society model, which characterizes the behavior and procedure of offices, is proposed. it is our belief that an office system capable of dealing with all real office problems only through the modeling of the internal behavior of an office can be developed. in this society model, office entities are viewed as agents. an agent is modeled as a microsociety of interacting knowledge sources. within the microsociety, there exists a microknowledge exchange system, which provides a set of microknowledge exchange protocols as a coordination system among those knowledge sources during their cooperative reasoning process. an office is then modeled as a society of various interacting agents using their knowledge to complete the office goals cooperatively. it is this unified view that allows offices to be modeled in a flexible and general way. cheng- seen ho yang-chang hong te-son kuo a unifying model of physical databases a unifying model for the study of database performance is proposed. applications of the model are shown to relate and extend important work concerning batched searching, transposed files, index selection, dynamic hash- based files, generalized access path structures, differential files, network databases, and multifile query processing. d. s. batory c. c. 
gotlieb the decoupled simulation model for virtual reality systems the virtual reality user interface style allows the user to manipulate virtual objects in a 3d environment using 3d input devices. this style is best suited to application areas where traditional two dimensional styles fall short, but the current programming effort required to produce a vr application is somewhat large. we have built a toolkit called mr, which facilitates the development of vr applications. the toolkit provides support for distributed computing, head-mounted displays, room geometry, performance monitoring, hand input devices, and sound feedback. in this paper, the architecture of the toolkit is outlined, the programmer's view is described, and two simple applications are described. chris shaw jiandong liang mark green yunqi sun conflicting class structures between the object oriented paradigm and users concepts charles m. hymes goal-directed zoom allison woodruff james landay michael stonebraker towards robust features for classifying audio in the cuevideo system the role of audio in the context of multimedia applications involving video is becoming increasingly important. many efforts in this area focus on audio data that contains some built-in semantic information structure such as in broadcast news, or focus on classification of audio that contains a single type of sound such as cleaar speech or clear music only. in the cuevideo system, we detect and classify audio that consists of mixed audio, i.e. combinations of speech and music together with other types of background sounds. segmentation of mixed audio has applications in detection of story boundaries in video, spoken document retrieval systems, audio retrieval systems etc. we modify and combine audio features known to be effective in distinguishing speech from music, and examine their behavior on mixed audio. our preliminary experimental results show that we can achieve a classification accuracy of over 80% for such mixed audio. our study also provides us with several helpful insights related to analyzing mixed audio in the context of real applications. savitha srinivasan dragutin petkovic dulce ponceleon unified communication systems christopher andrews adapting a full-text information retrieval system to the computer troubleshooting domain peter g. anick informal workplace communication: what is it like and how might we support it? steve whittaker david frohlich owen daly-jones cooperative usability practices thea borgholm kim halskov madsen static analysis of intensional databases in u-datalog (extended abstract) elisa bertino barbara catania clustering in object-oriented databases object-oriented database management systems are still a relatively new area, with many unanswered questions as to their performance. objects can be clustered on disk (i.e., stored in contiguous storage areas) so that when accessing one object in a cluster, all of the objects in that cluster are brought into main memory. thus when accessing additional objects in the cluster, it is then a main memory operation rather than a disk operation. unfortunately, determing which objects to cluster together is often left entirely up to the user, and an improper clustering scheme can severly degrade system performance. this paper presents a set of guidelines for developing a clustering scheme, using an actual application as an example. everton g. de paula michael l. nelson content based navigation in multimedia information systems: p. h. lewis h. c. davis m. r. dobie w. hall j. 
kuan s. t. perry query driven knowledge discovery in multidimensional data we study kdd (knowledge discovery in databases) processes on multidimensional data from a query point of view. focusing on association rule mining, we consider typical queries to cope with the pre-processing of multidimensional data and the post-processing of the discovered patterns as well. we use a model and a rule-based language stemming from the olap multidimensional representation, and demonstrate that such a language fits well for writing kdd queries on multidimensional data. using an homogeneous data model and our language for expressing queries at every phase of the process appears as a valuable step towards a better understanding of interactivity during the whole process. jean-françois boulicaut patrick marcel christophe rigotti concepts and implications of undo for interactive recovery robert f. gordon george b. leeman clayton h. lewis axis-specified search: a fine-grained full-text search method for gathering and structuring excerpts yasusi kanada unbundling active functionality stella gatziu arne koschel gunter von bultzingsloewen hans fritschi autocompletion in full text transaction entry: a method for humanized input a method for interactive validation of transaction data with autocompletion is introduced and analyzed in a library information system for periodical publications. the system makes it possible to identify the periodicals by using the full title thus making a separate coding phase unnecessary. only the characters that are needed to distinguish the title from other ones have to be typed. in our library this is in the average 4.3 characters. we have noticed that it is faster to use the auto-completion system compared with the use of short codes and a code catalogue. the auto- completion feature causes more errors at least for the novices because the work differs from normal typing. the errors are, however, very easy to correct with the assistance of the system. m. jakobsson lexical semantic relatedness and online new event detection (poster session) nicola stokes paula hatch joe carthy community building in cvw tari lin fanderclai spatial data traversal in road map databases: a graph indexing approach spatial data are found in geographic information systems such as digital road map databases where city and road attributes are associated with nodes and links in a directed graph. queries on spatial data are expensive because of the recursive property of graph traversal. we propose a graph indexing technique to expedite spatial queries where the graph topology remains relatively stationary. using a probabilistic analysis, this paper shows that the graph indexing technique significantly improves the efficiency of constrained spatial queries. j. leon zhao ahmed zaki phoenix: making applications robust roger barga david b. lomet deja vu: a knowledge-rich interface for retrieval in digital libraries andrew s. gordon eric a. domeshek groupwork close up: a comparison of the group design process with and without a simple group editor a simple collaborative tool, a shared text editor called shredit, changed the way groups of designers performed their work, and changed it for the better. first, the designs produced by the 19 groups of three designers were of higher quality than those of the 19 groups who worked with conventional whiteboard, paper and pencil. the groups with the new tool reported liking their work process a little less, probably because they had to adapt their work style to a new tool. 
we expected, from the brainstorming literature and recent work on group support systems, that the reason the designs were of better quality was that the supported groups generated more ideas. to our surprise, the groups working with shredit generated fewer design ideas, but apparently better ones. it appears that the tool helped the supported groups keep more focused on the core issues in the emerging design, to waste less time on less important topics, and to capture what was said as they went. this suggests that small workgroups can capitalize on the free access they have to a shared workspace, without requiring a facilitator or a work process embedded in the software. judith s. olson gary m. olson marianne storrøsten mark carter full distribution in objectivity/db andrew e. wade query processing and file management issues in partitioned databases (abstract) this study reviews the database partitioning techniques and elaborates on features of storage organization from efficiency and query processing standpoints. methods for static files have excellent utilization records but require a variable number of disk accesses, are prone to overflows, and may need re-organization when changes are made. dynamic file schemes with directories have good retrieval query performance but tend to achieve low storage utilization, suffer from a growing directory, and may propagate the effects of an update to several regions and possibly all the way up to the highest level directory. partitioning very large databases has become an intense research area over the last few years. the characteristics of partitioned file organizations have profound effects on the efficiency of query processing and database updates. we have identified two main partitioning methodologies: static and dynamic files. while with static files we can achieve good utilization, they result in variable disk accesses per query and need frequent file reorganizations due to updates. with dynamic files we can achieve a constant number of disk accesses for all queries at the expense of relatively low utilization (for uniform as well as non-uniform data distributions), directory sizes may grow fast, and handling updates efficiently in extreme cases may be difficult. to date, very little research has been done on updates of partitioned file organizations. however, as can be seen in our discussion, there are important problems waiting to be tackled in this area, especially with respect to directory structures. esen ozkarahan h. cem bozsahin standards factor: the chi'91 standards sig session pat billingsley audio-visual data mapping for gis-based data: an experimental evaluation in this work, we present our experience of utilizing audio-visual data mappings for gis-based information visualization. the application we choose is a gis-based system for visualizing crime in a city. in this application, we enhance the pseudo-colored visual presentation of crime information by mapping data to several sound parameters --- volume, balance, bass and treble. our motivation for choosing sound in addition to vision is guided by our belief that data quantities mapped to various colors in a coloring scheme do not always clearly describe the information being presented for the many different tasks that visualization is expected to support; in many cases additional data characteristics can be conveyed to the user through sound to enhance the user's performance on those tasks.
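as a concrete, and purely hypothetical, illustration of the data-to-sound mapping just described, the sketch below linearly maps a normalized data value onto volume, balance, bass and treble; the ranges and formulas are our assumptions, not the authors' settings:

```python
# map a data value in [lo, hi] onto four sound parameters; volume, bass and
# treble are in [0, 1], balance pans from left (-1) to right (+1).
def to_sound(value, lo, hi):
    x = (value - lo) / (hi - lo) if hi > lo else 0.0
    x = min(max(x, 0.0), 1.0)          # clamp to the expected range
    return {
        "volume": x,                   # louder for larger values
        "balance": 2 * x - 1,          # pan across the stereo field
        "bass": 1.0 - x,               # emphasize bass for small values
        "treble": x,                   # emphasize treble for large values
    }

print(to_sound(75, lo=0, hi=100))      # e.g. a crime rate of 75 on a 0-100 scale
```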
we have conducted experiments with human users to compare the performance of users on visual data mapping alone vs. visual and sound data mappings together on several tasks including estimates of raw data values, local averaging, and global comparison. in most cases, we found that the use of bi-modal visual and sound data mappings together provided more accurate understanding of data displays suresh k. lodha abigail j. joseph jose c. renteria implicit structure for pen-based systems within a freeform interaction paradigm thomas p. moran patrick chiu william van melle gordon kurtenbach quantifying the benefits of semantics an active area of current research is the use of semantics in concurrency control. simulations using a new concurrency control protocol, called complex two-phase locking, can quantify the benefits of using semantics within long- duration transaction systems. it is then possible to determine if the benefits gained are worth the human effort required to obtain the semantics. gregory d. speegle andrew l. gordon temporal aspects of usability: papers from a workshop chris johnson phil gray a preliminary analysis of the products of hci research, using pro forma abstracts william newman fault-tolerant, load-balancing queries in telegraph mehul a. shah sirish chandrasekaran the growth of software skill: a longitudinal look at learning & performance erik nilsen heesen jong judith s. olson kevin biolsi henry rueter sharon mutter time affordances: the time factor in diagnostic usability heuristics alex paul conn consortium: a framework for transactions in collaborative environments vram kouramajian ross dargahi jerry fowler donald baker security constraint processing in a distributed database environment harvey rubinovitz bhavani thuraisingham two-dimensional substring indexing as databases have expanded in scope to storing string data (xml documents, product catalogs), it has become increasingly important to search databases based on matching substrings, often on multiple, correlated dimensions. while string b-trees are i/o optimal in one dimension, no index structure with non- trivial query bounds is known for two-dimensional substring indexing. in this paper, we present a technique for two-dimensional substring indexing based on a reduction to the geometric problem of identifying common colors in two ranges containing colored points. we develop an i/o efficient algorithm for solving the common colors problem, and use it to obtain an i/o efficient (poly- logarithmic query time) algorithm for the two-dimensional substring indexing problem. our techniques result in a family of secondary memory index structures that trade space for time, with no loss of accuracy. we show how our technique can be practically realized using a combination of string b-trees and r-trees. paolo ferragina nick koudas divesh srivastava s. muthukrishnan world wide hypermedia walter vannini performance tradeoffs for client-server query processing michael j. franklin bjorn thor jonsson donald kossmann load control for locking: the "half-and-half" approach a number of concurrency control performance studies have shown that, under high levels of data contention, concurrency control algorithms can exhibit thrashing behavior which is detrimental to overall system performance. in this paper, we present an approach to eliminating thrashing in the case of two- phase locking, a widely used concurrency control algorithm. 
our solution, which we call the 'half-and-half' algorithm, involves monitoring the state of the dbms in order to dynamically control the multiprogramming level of the system. results from a performance study indicate that the half-and-half algorithm can be very effective at preventing thrashing under a wide range of operating conditions and workloads. michael j. carey sanjay krishnamurthi miron livny corrigenda: a hierarchy-aware approach to faceted classification of object- oriented components this article presents a hierarchy-aware classification schema for object- oriented code, where software components are classified according to their behavioral characteristics, such as provided services, employed algorithms, and needed data. in the case of reusable application frameworks, these characteristics are constructed from their model, i.e., from the description of the abstract classes specifying both the framework structure and purpose. in conventional object libraries, the characteristics are extracted semiautomatically from class interfaces. characteristics are term pairs, weighted to represent "how well" they describe component behavior. the set of characteristics associated with a given component forms its software descriptor. a descriptor base is presented where descriptors are organized on the basis of structured relationships, such as similarity and composition. the classification is supported by a thesaurus acting as a language-independent unified lexicon. the descriptor base is conceived for developers who, besides conventionally browsing the descriptors hierarchy, can query the system, specifying a set of desired functionalities and getting a ranked set of adaptable candidates. user feedback is taken into account in order to progressively ameliorate the quality of the descriptors according to the views of the user community. feedback is made dependent of the user typology through a user profile. experimental results in terms of recall and precision of the retrieval mechanism against a sample code base are reported. e. damiani m. g. fugini c. bellettini incomplete information in relational databases tomasz imielinski witold lipski hypertext link integrity hugh c. davis a framework for information visualisation in this paper we examine the issues involved in developing information visualisation systems and present a framework for their construction. the framework addresses the components which must be considered in providing effective visualisations. the framework is specified using a declarative object oriented language; the resulting object model may be mapped to a variety of graphical user interface development platforms. this provides general support to developers of visualisation systems. a prototype system exists which allows the investigation of alternative visualisations for a range of data sources. jessie b. kennedy kenneth j. mitchell peter j. barclay visual digests for news video libraries the informedia digital video library contains over 2000 hours of video, growing at a rate of 15 hours per week. a good query engine is not sufficient for information retrieval because often the candidate result sets grow in number as the library grows. video digests summarize sets of stories from the library, providing users with a visual mechanism for interactive browsing and query refinement. these digests are generated dynamically under the direction of the user based on automatically derived metadata from the video library. 
three types of digests are discussed: vibe digests emphasizing word relationships, timelines showing trends against time, and maps showing geographic correlations. multiple digests can be combined into a single view or animated into a temporal presentation. michael g. christel universal usability statements: marking the trail for all users harry hochheiser ben shneiderman multimedia systems - an interdisciplinary perspective venkat n. gudivada using the cognitive walkthrough for operating procedures david g. novick theater, movie with a-life - romeo & juliet in hades as a-life based cinema naoko tosa object databases francois bancilhon imagine: a vision of health care in 1997 steve anderson shiz kobara barry mathis ev shafrir pre-screen projection: from concept to testing of a new interaction technique deborah hix james n. templeman robert j. k. jacob informix online xps bob gerber the egret project: exploring open, evolutionary, and emergent collaborative systems philip johnson traversing itemset lattices with statistical metric pruning we study how to efficiently compute significant association rules according to common statistical measures such as a chi-squared value or correlation coefficient. for this purpose, one might consider using the apriori algorithm, but the algorithm needs major conversion, because none of these statistical metrics are anti-monotone, and the use of higher support for reducing the search space cannot guarantee solutions in its search space. we here present a method of estimating a tight upper bound on the statistical metric associated with any superset of an itemset, as well as the novel use of the resulting upper-bound information to prune unproductive supersets while traversing itemset lattices. experimental tests demonstrate the efficiency of this method. shinichi morishita jun sese information visualization tutorial nahum gershon stuart card stephen g. eick the usability of transparent overview layers donald a. cox jasdeep s. chugh carl gutwin saul greenberg pointing on a computer display evan graham christine l. mackenzie multidatabase update issues a formal model of data updates in a multidatabase environment is developed, and a theory of concurrency control in such an environment is presented. we formulate a correctness condition for the concurrency control mechanism and propose a protocol that allows concurrent execution of a set of global transactions in the presence of local ones. this protocol ensures the consistency of the multidatabase and deadlock freedom. we use the developed theory to prove the protocol's correctness and discuss complexity issues of implementing the proposed protocol. yuri breitbart avi silberschatz o-o, what's happening to db2? in this presentation, we will describe a collection of new object-relational features that have been added to ibm's db2 universal database (udb) system. the features to be described include support for structured types, object references, and hierarchies of typed tables and views. these features will be covered from the perspective of a database designer or end user. in addition to presenting the features presently available in db2 udb v5.2, which became available in fall 1998, we will discuss the expected evolution and impact of this technology over time. m. carey d. chamberlin d. doole s. rielau n. mattos s. narayanan b. vance r.
swagerman an object-oriented data model for distributed office applications the object-oriented paradigm is becoming very popular for database applications and several object-oriented dbmss have been developed. a basic notion in this paradigm is the inheritance hierarchy that allows the users to define objects and the associated operations starting from already defined objects. however, in database applications the inheritance hierarchy must provide a conceptual modeling function, in addition to the re-usability function. another important requirement is to provide support for data distribution in (possibly) heterogeneous environments. this means that object implementation may differ depending on the object location. this paper presents a model that decouples these two aspects, modeling vs implementation, by using the concept of abstract and implementation classes. an abstract class specifies properties and methods for a set of similar objects, like in other object-oriented data models. an abstract class is however independent of the object implementation and location. an implementation class defines the implementation of an abstract class. in our model an abstract class may have several implementations. this allows the user to provide different implementations for the same set of objects, without requiring the objects to change class. e. bertino m. negri g. pelagatti l. sbattella validation of a jungian instrument for mis research charles h. mawhinney albert l. lederer analysis of object oriented spatial access methods this paper provides an analysis of r-trees and a variation (r+-trees) that avoids overlapping rectangles in intermediate nodes of the tree. the main contributions of the paper are the following. we provide the first known analysis of r-trees. although formulas are given for objects in one dimension (line segments), they can be generalized for objects in higher dimensions as well. we show how the transformation of objects to higher dimensions [hinr83] can be effectively used as a tool for the analysis of r- and r+\\- trees. finally, we derive formulas for r+-trees and compare the two methods analytically. the results we obtained show that r+-trees require less than half the disk accesses required by a corresponding r-tree when searching files of real life sizes r+-trees are clearly superior in cases where there are few long segments and a lot of small ones. christos faloutsos timos sellis nick roussopoulos message files dennis tsichritzis stavros christodoulakis multiple methods and the usability of interface prototypes: the complementarity of laboratory observation and focus groups patricia sullivan internet traffic warehouse we report on a network traffic warehousing project at telcordia. the warehouse supports a variety of applications that require access to internet traffic data. the applications include service level agreement (sla), web traffic analysis, network capacity engineering and planning, and billing. we describe the design of the warehouse and the issues encountered in building the warehouse. chung-min chen munir cochinwala claudio petrone marc pucci sunil samtani patrizia santa asynchronous information space analysis architecture using content and structure-based service brokering our project focuses on rapid formation and utilization of custom collections of information for groups focused on high-paced tasks. assembling such collections, as well as organizing and analyzing the documents within them, is a complex and sophisticated task. 
it requires understanding what information management services and tools are provided by the system, when they appropriate to use, and how those services can be composed together to perform more complex analyses. this paper describes the architecture of a prototype implementation of the information analysis management system that we have developed. the architecture uses metadata to describe collections of documents both in term of their content and structure. this metadata allows the system to dynamically and in a content-sensitive manner to determine the set of appropriate analysis services. to facilitate the invocation of those services, the architecture also provides an asynchronous and transparent service access mechanism. ke-thia yao in-young ko ragy eleish robert neches research issues in active database systems: report from the closing panel at ride-ads '94 the discussions during the panel stayed largely but not entirely focused on the question of active database research issues from the application perspective. there were nine panelists. each panelist was asked to prepare brief answers to a set of questions. the sets of answers were discussed by all participants, and finally a number of more general issues were discussed. the questions asked of the panelists were: name an application that will certainly be supported by active database systems in the not-too-distant future. name an application that will certainly not be supported by active database systems in the near future. name an area of active database systems in which you are not working but that is crucial to meet the needs of applications. name an area of active database systems that is not on the critical path to supporting applications. name an area of active database systems that should have been discussed in the course of the workshop but was not. jennifer widom hodfa: an architectural framework for homogenizing heterogeneous legacy databases one of the main difficulties in supporting global applications over a number of localized databases and migrating legacy information systems to modern computing environment is to cope with the heterogeneities of these systems. in this paper, we present a novel flexible architecture (called hodfa) to dynamically connect such localized heterogeneous databases in forming a homogenized federated database system and to support the process of transforming a collection of heterogeneous information systems onto a homogeneous environment. we further develop an incremental methodology of homogenization in the context of our hodfa framework, which can facilitate different degrees of homogenization in a stepwise manner, so that existing applications will not be affected during the process of homogenization. kamalakar karlapalem qing li chung-dak shum utilization as a dependent variable in mis research andrew w. trice michael e. treacy active video watching using annotation nuno correia teresa chambel usability evaluation with the cognitive walkthrough john rieman marita franzke david redmiles information sharing (solution session): collaborating across the networks phyllis s. galt susan b. jones information translation, mediation, and mosaic-based browsing in the tsimmis system joachim hammer hector garcia-molina kelly ireland yannis papakonstantinou jeffrey ullman jennifer widom a data model and data structures for moving objects databases we consider spatio-temporal databases supporting spatial objects with continuously changing position and extent, termed _moving objects databases_. 
we formally define a data model for such databases that includes complex evolving spatial structures such as line networks or multi- component regions with holes. the data model is given as a collection of data types and operations which can be plugged as attribute types into any dbms data model (e.g. relational, or object-oriented) to obtain a complete model and query language. a particular novel concept is the _sliced representation_ which represents a temporal development as a set of _units_, where unit types for spatial and other data types represent certain "simple" functions of time. we also show how the model can be mapped into concrete physical data structures in a dbms environment. luca forlizzi ralf hartmut guting enrico nardelli markus schneider the group elicitation method for participatory design and usability testing guy a. boy understanding the response of readers in the digital library chip bruce panoramic overviews for navigating real-world scenes laura a. teodosio michael mills the iris database system iris is an object-oriented database management system being developed at hewlett-packard laboratories [1], [3]. this videotape provides an overview of the iris data model and a summary of our experiences in converting a computer- integrated manufacturing application to iris. an abstract of the videotape follows. iris is intended to meet the needs of new and emerging database applications such as office and engineering information systems, knowledge-based systems, manufacturing applications, and hardware and software design. these applications require a rich set of capabilities that are not supported by the current generation (i.e., relational) dbmss. the iris data model is an object and function model. it provides three basic constructs objects, types and functions. as with other object systems, iris objects have a unique identifier and can only be accessed and manipulated through functions. objects are classified by type. objects that belong to the same type share common functions. types are organized into a hierarchy with inherited functions. in iris, functions are used to model properties of objects, relationships among objects and operations on objects. thus, the behavior of an iris object is completely specified through its participation in functions. iris provides good separation among its three basic notions. this simplifies the data model making it easier to learn and easier to implement since there are fewer constructs than other object models. in addition, it facilitates iris support for the following desirable features. schema evolution: new types and functions may be added at any time. object evolution: iris objects may have multiple types and may acquire and lose types dynamically. object participation in functions may be required or optional (e g, everyone has birthdate but not everyone has a phone number). data independence: the implementation of a function is defined separately from its interface. thus, the implementation of a function may change without affecting applications that use it. functional extensibility: an iris function may be implemented as a stored table, computed as an iris expression, or computed as a subroutine in a general-purpose programming language. thus, any computation can be expressed as an iris function schema and data uniformity: the metadata is modeled and manipulated using the primitives of the data model. also, system functions (create type, delete object, etc) are invoked in the same manner as user functions. 
thus, users need learn only one interface. set processing: iris supports set-at-a-time processing for efficient retrieval and update of collections of objects. to evaluate the usefulness of the iris prototype, a project was undertaken to convert a large relational application to iris [2]. the relational system contained nearly 200 relations and 2500 attributes. when transcribed to iris, the schema size was reduced by over a third. there are two reasons for this large reduction. first, in the relational schema, many attributes were simply foreign keys required for joins. in the iris schema, function inheritance through the type hierarchy eliminates the need for many of these foreign keys. a second reason for the schema reduction was that compound keys were replaced by object references. this permitted several attributes in a relation to be replaced by a single identifier. it was noted that application programs were easier to read and develop using the iris schema. the iris osql (object sql) language was a fairly natural interface for users familiar with sql. the use of function composition and function inheritance eliminated a large number of joins that, in the relational system, must be expressed by comparing keys. the function-orientation of iris encouraged code sharing in that deriving and sharing new functions was simplified. finally, since there are few tools and methodologies for using object-oriented database management systems, the ability of the iris schema to easily evolve was valuable in iteratively refining the iris schema. also, the iris graphical editor was a useful tool in graphically displaying the schema and browsing function definitions and instances. bill kent peter lyngback samir mathur kevin wilkinson usability lab tools (abstract) paul weiler bob hendrich monty hammontree adaptive multi-stage distance join processing a spatial distance join is a relatively new type of operation introduced for spatial and multimedia database applications. additional requirements for ranking and stopping cardinality are often combined with the spatial distance join in on-line query processing or internet search environments. these requirements pose new challenges as well as opportunities for more efficient processing of spatial distance join queries. in this paper, we first present an efficient k-distance join algorithm that uses spatial indexes such as r-trees. bi-directional node expansion and plane-sweeping techniques are used for fast pruning of distant pairs, and the plane-sweeping is further optimized by novel strategies for selecting a sweeping axis and direction. furthermore, we propose adaptive multi-stage algorithms for k-distance join and incremental distance join operations. our performance study shows that the proposed adaptive multi-stage algorithms outperform previous work by up to an order of magnitude for both k-distance join and incremental distance join queries, under various operational conditions. hyoseop shin bongki moon sukho lee freedom from deadlock of locked transactions in a distributed database we examine the problem of determining whether a given set of locked transactions, accessing a distributed database, is free from deadlock. a deadlock graph is used to derive a new characterization for deadlock-free two-transaction systems in a distributed environment. the characterization provides a direct and efficient polynomial test for deadlock-freedom in two-transaction systems.
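for orientation, the sketch below shows the standard cycle test on a waits-for graph, which is the usual way deadlock among locked transactions is detected; it is a generic illustration in python, not the paper's two-transaction characterization:

```python
# deadlock detection as cycle finding in a waits-for graph; edges map a
# transaction to the transactions whose locks it is waiting for.
def has_deadlock(waits_for):
    visited, on_stack = set(), set()

    def dfs(t):
        visited.add(t)
        on_stack.add(t)
        for u in waits_for.get(t, ()):
            if u in on_stack or (u not in visited and dfs(u)):
                return True            # back edge: a cycle, hence a deadlock
        on_stack.discard(t)
        return False

    return any(t not in visited and dfs(t) for t in waits_for)

# T1 waits for T2's lock and vice versa -> deadlock
print(has_deadlock({"T1": ["T2"], "T2": ["T1"]}))   # True
print(has_deadlock({"T1": ["T2"], "T2": []}))       # False
```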
the method is not dependent on the number of sites in a distributed database, and hence improves previously known results, which are exponential in the number of sites. henry tirri editorial richard snodgrass trec-8 interactive track william hersh paul over snowball: extracting relations from large plain-text collections text documents often contain valuable structured data that is hidden in regular english sentences. this data is best exploited if available as a relational table that we could use for answering precise queries or running data mining tasks. we explore a technique for extracting such tables from document collections that requires only a handful of training examples from users. these examples are used to generate extraction patterns, which in turn result in new tuples being extracted from the document collection. we build on this idea and present our snowball system. snowball introduces novel strategies for generating patterns and extracting tuples from plain-text documents. at each iteration of the extraction process, snowball evaluates the quality of these patterns and tuples without human intervention, and keeps only the most reliable ones for the next iteration. in this paper we also develop a scalable evaluation methodology and metrics for our task, and present a thorough experimental evaluation of snowball and comparable techniques over a collection of more than 300,000 newspaper documents. eugene agichtein luis gravano supporting valid-time indeterminacy in valid-time indeterminacy it is known that an event stored in a database did in fact occur, but it is not known exactly when. in this paper we extend the sql data model and query language to support valid-time indeterminacy. we represent the occurrence time of an event with a set of possible instants, delimiting when the event might have occurred, and a probability distribution over that set. we also describe query language constructs to retrieve information in the presence of indeterminacy. these constructs enable users to specify their credibility in the underlying data and their plausibility in the relationships among that data. a denotational semantics for sql's select statement with optional credibility and plausibility constructs is given. we show that this semantics is reliable, in that it never produces incorrect information, is maximal, in that if it were extended to be more informative, the results may not be reliable, and reduces to the previous semantics when there is no indeterminacy. although the extended data model and query language provide needed modeling capabilities, these extensions appear initially to carry a significant execution cost. a contribution of this paper is to demonstrate that our approach is useful and practical. an efficient representation of valid-time indeterminacy and efficient query processing algorithms are provided. the cost of support for indeterminacy is empirically measured, and is shown to be modest. finally, we show that the approach is general, by applying it to the temporal query language constructs being proposed for sql3. curtis e. dyreson richard t. snodgrass office automation and the changing definition of the workplace the introduction of telecommunications and computer technology into the workplace has profound implications for the nature of work itself. in particular, the implementation of "office automation" permits significant changes in the organization and execution of office work. it is possible that the term "office" may take on new meanings.
office automation provides the potential to alter the locational and temporal definitions of large numbers of office jobs. this discussion focuses on the phenomenon of "remote office work", which is facilitated by developments in telecommunications and computer technology. the general position is that technology can support either positive or negative implementations of remote office work; social and economic forces, as well as policy, provide the impetus for change in one direction or the other. this discussion emphasizes one particular form of remote work, that is, work at home. the reason for this emphasis is that work at home serves as an excellent example of the wide range of potential implications of remote work. margrethe h. olson organizational learning and getting the work done in newly computerized contexts carole groleau james r. taylor the future of digital library research barry m. leiner extending the capabilities of the human visual system: an introduction to enhanced reality jerry bowskill john downie automated visual discourse synthesis: coherence, versatility, and interactivity michelle x. zhou tip: a temporal extension to informix commercial relational database systems today provide only limited temporal support. to address the needs of applications requiring rich temporal data and queries, we have built tip (temporal information processor), a temporal extension to the informix database system based on its datablade technology. our tip datablade extends informix with a rich set of datatypes and routines that facilitate temporal modeling and querying. tip provides both c and java libraries for client applications to access a tip-enabled database, and provides end-users with a gui interface for querying and browsing temporal data. jun yang huacheng c. ying jennifer widom industry briefs: monkeymedia eric gould bear barbee teasley ledia pearl carroll evaluation of an inference network-based retrieval model howard turtle w. bruce croft don't let the millennium bug bite you (or, how to make a silk purse from a sow's ear) john h. esbin electronic mail: expanding library services beyond traditional boundaries stephen foster daniel ferrer looking and lingering as conversational cues in video-mediated communication herbert l. colston diane j. schiano issues in multimedia server design prashant j. shenoy pawan goyal harrick m. vin an aspect of query optimization in multidatabase systems chiang lee chia-jung chen hongjun lu on computing functions with uncertainty we study the problem of computing a function f(x_1, …, x_n) given that the actual values of the variables x_i are known only with some uncertainty. for each variable x_i, an interval i_i is known such that the value of x_i is guaranteed to fall within this interval. any such interval can be probed to obtain the actual value of the underlying variable; however, there is a cost associated with each such probe. the goal is to adaptively identify a minimum cost sequence of probes such that regardless of the actual values taken by the unprobed x_i's, the value of the function f can be computed to within a specified precision. we design online algorithms for this problem when f is either the selection function or an aggregation function such as sum or average. we consider three natural models of precision and give algorithms for each model. we analyze our algorithms in the framework of competitive analysis and show that our algorithms are asymptotically optimal.
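to make the setting concrete, the sketch below shows a naive greedy probing strategy for the sum function: probe the variable with the widest interval until the sum is pinned down to a target width. it ignores probe costs and is only an illustration of the problem, not the paper's competitive algorithm; names and the interval layout are our assumptions.

```python
# intervals: dict name -> (lo, hi); probe(name) returns the exact value.
def sum_within(intervals, probe, max_width):
    bounds = dict(intervals)
    while sum(hi - lo for lo, hi in bounds.values()) > max_width:
        # probe the currently widest interval and collapse it to a point
        name = max(bounds, key=lambda n: bounds[n][1] - bounds[n][0])
        v = probe(name)
        bounds[name] = (v, v)
    lo = sum(lo for lo, _ in bounds.values())
    hi = sum(hi for _, hi in bounds.values())
    return lo, hi                      # the true sum lies in [lo, hi]

actual = {"x1": 3.0, "x2": 7.5, "x3": 1.0}
result = sum_within({"x1": (2, 4), "x2": (7, 8), "x3": (0, 5)},
                    probe=actual.__getitem__, max_width=1.5)
print(result)   # an interval of width <= 1.5 containing the true sum 11.5
```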
finally, we also study online algorithms for functions that are obtained by composing together selection and aggregation functions. sanjeev khanna wang-chiew tan graphical user interfaces and end user computing support (abstract): a multi- phase approach to information technology management chang e. koh facilitating the exploration of interface design alternatives: the humanoid model of interface design humanoid is a user interface design tool that lets designers express abstract conceptualizations of an interface in an executable form, allowing designers to experiment with scenarios and dialogues even before the application model is completely worked out. three properties of the humanoid approach allow it to do so: a modularization of design issues into independent dimensions, support for multiple levels of specificity in mapping application models to user interface constructs, and mechanisms for constructing executable default user interface implementations from whatever level of specificity has been provided by the designer. pedro szekely ping luo robert neches constraint checking with partial information constraints are a valuable tool for managing information across multiple databases, as well as for general purposes of assuring data integrity. however, efficient implementation of constraint checking is difficult. in this paper we explore techniques for assuring constraint satisfaction without performing a complete evaluation of the constraints. we consider methods that use only constraint definitions, methods that use constraints and updates, and methods that use constraints, updates, and "local" data. ashish gupta yehoshua sagiv jeffrey d. ullman jennifer widom tools for supporting cooperative work near and far: highlights from the cscw conference the second conference on computer supported cooperative work has provided focus on use of computers for supporting workers that are at various levels of geographic dispersion. the participants in this panel reported case studies at that conference on group work (1) in face-to-face meetings, (2) in the same building, and (3) distributed across a number of sites. each panelist therefore brings insight about the communication needs of their research subjects and both the value and limitations of particular technologies for supporting the communication that ties the members of the groups together as geographic distance varies. each of the panelists will address the following two questions: what are the preferred types of communication (visual, written, spoken) for people working together at particular geographic distances? what are the benefits and shortcomings of available technologies (video, electronic-mail, telephone/voice mail) for supporting these types of communication? s. f. ehrlich t. bikson w. mackay j. c. tang on site: the life and times of the first web cam quentin stafford-fraser a formal view integration method the design of an appropriate conceptual database scheme is one of the most difficult tasks in usual database applications. especially, the design of a common global database scheme for many different user groups requires a great amount of effort and skill, because the desired scheme should fit a great variety of requirements and expectations. here, view integration is a natural method that should help to manage the complexity of such a design problem. 
for each user group the requirements and expectations are separately collected and specified as views, that are subsequently integrated into a global scheme supporting all those different views. in this paper, we carefully develop a formal model, clarifying many notions and concepts, related to the view integration method. this formal model serves as a theoretical basis of our integration approach that uses equivalence preserving, local scheme transformations as the main integration operations. joachim biskup bernhard convent global applications of collaborative technology: introduction collaborative technologies, sometimes referred to as "groupware applications" given their deployment to support groups of individuals engaging in collaborative tasks, have developed rapidly in the last few years. much of this expansion has been fueled by the dramatic increases in internet penetration in societies around the world, making it possible for globally distributed teams to work on projects. we see these technologies being applied in a variety of ways, ranging from organizational communication and decision making to distributed software inspections and development to virtual education initiatives. the systems themselves include an array of technologies that may include video- and audioconferencing, shared calendaring, (digital) document management systems, text-based group support systems, and many more. robert davison gert-jan de vreede the limits of expert performance using hierarchic marking menus gordon kurtenbach william buxton the verse version support environment (abstract) david hicks anja haake vodak open nested transactions - visualizing database internals vodak is a prototype of an object-oriented, distributed database system developed during the past five years at the integrated publication and information systems institute (ipsi). the aim of demonstrating vodak open nested transactions is to provide insights into internals of database systems that are usually hidden from application programmers and users. by utilizing semantics of methods, vodak open nested transactions increase the degree of parallelism between concurrent transactions compared to conventional transaction management schemes. demonstrating the difference in parallelism provides users with a "feeling" for internal database mechanisms, application programmers with information about the impact of transaction management on performance, and system developers with ideas how to improve their systems with respect to transaction management. peter muth thomas c. rakow content-oriented integration in hypermedia systems kyoji hirata yoshinori hara hajime takano shigehito kawasaki information filtering shoshana loeb douglas terry an axiomatic model of dynamic schema evolution in objectbase systems randel j. peters m. tamer özsu surrogate subsets: a free space management strategy for the index of a text retrieval system this paper presents a new data structure and an associated strategy to be utilized by indexing facilities for text retrieval systems. the paper starts by reviewing some of the goals that may be considered when designing such an index and continues with a small survey of various current strategies. it then presents an indexing strategy referred to as surrogate subsets discussing its appropriateness in the light of the specified goals. various design issues and implementation details are discussed.
our strategy requires that a surrogate file be divided into a large number of subsets separated by free space which will allow the index to expand when new material is appended to the database. experimental results report on the utilization of free space when the database is enlarged. f. j. burkowski interactive clustering for navigating in hypermedia systems this paper talks about clustering related nodes of an overview diagram to reduce its complexity and size. this is because although overview diagrams are useful for helping the user to navigate in a hypermedia system, for any real-world system these become too complicated and large to be really useful. both structure-based and content-based clustering are used. since the nodes can be related to each other in different ways, depending on the situation different clustered views will be useful. hence, it should be possible to interactively specify the clustering conditions and examine the resulting views. we present efficient clustering algorithms which can cluster the information space in real-time. we talk about the navigational view builder, a tool that allows the interactive development of overview diagrams. finally, we propose a 3-dimensional approach for visualizing these abstracted views. sougata mukherjea james d. foley scott e. hudson making b+-trees cache conscious in main memory previous research has shown that cache behavior is important for main memory index structures. cache conscious index structures such as cache sensitive search trees (css-trees) perform lookups much faster than binary search and t-trees. however, css-trees are designed for decision support workloads with relatively static data. although b+-trees are more cache conscious than binary search and t-trees, their utilization of a cache line is low since half of the space is used to store child pointers. nevertheless, for applications that require incremental updates, traditional b+-trees perform well. our goal is to make b+-trees as cache conscious as css-trees without increasing their update cost too much. we propose a new indexing technique called "cache sensitive b+-trees" (csb+-trees). it is a variant of b+-trees that stores all the child nodes of any given node contiguously, and keeps only the address of the first child in each node. the rest of the children can be found by adding an offset to that address. since only one child pointer is stored explicitly, the utilization of a cache line is high. csb+-trees support incremental updates in a way similar to b+-trees. we also introduce two variants of csb+-trees. segmented csb+-trees divide the child nodes into segments. nodes within the same segment are stored contiguously and only pointers to the beginning of each segment are stored explicitly in each node. segmented csb+-trees can reduce the copying cost when there is a split since only one segment needs to be moved. full csb+-trees preallocate space for the full node group and thus reduce the split cost. our performance studies show that csb+-trees are useful for a wide range of applications. jun rao kenneth a. ross the distributed interoperable object model and its application to large-scale interoperable database systems ling liu calton pu view materialization techniques for complex hierarchical objects matthew c. jones elke a. rundensteiner ontology-driven geographic information systems frederico t. fonseca max j. egenhofer
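the child-addressing idea in the csb+-tree entry above (rao and ross) can be pictured with a short sketch: the children of a node sit contiguously in one array, so a node stores only the index of its first child, and the i-th child is reached by adding an offset. the layout and names below are illustrative assumptions, not the paper's implementation:

```python
from bisect import bisect_right

class Node:
    def __init__(self, keys, first_child=None, values=None):
        self.keys = keys                 # separator keys (internal) or keys (leaf)
        self.first_child = first_child   # index of first child in the node array
        self.values = values             # payload for leaf nodes

def search(nodes, root, key):
    """descend from the root; offset arithmetic replaces per-child pointers."""
    node = nodes[root]
    while node.first_child is not None:          # internal node
        offset = bisect_right(node.keys, key)    # which child covers this key
        node = nodes[node.first_child + offset]  # contiguous children: base + offset
    for k, v in zip(node.keys, node.values):     # leaf: exact-match lookup
        if k == key:
            return v
    return None

# tiny example: a root with separators [10, 20] and three contiguous leaves
nodes = [
    Node(keys=[10, 20], first_child=1),           # index 0: root
    Node(keys=[1, 5],   values=['a', 'b']),       # index 1: leaf for keys < 10
    Node(keys=[10, 15], values=['c', 'd']),       # index 2: leaf for 10..19
    Node(keys=[20, 42], values=['e', 'f']),       # index 3: leaf for keys >= 20
]
print(search(nodes, 0, 15))   # -> 'd'
```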
reading patterns and formats of academic articles on the web various formats are being used for web-based academic articles such as conference papers and journal papers. we surveyed the formats being used and tried to identify reading activities and the proper formats by carrying out two online surveys: an email-based survey with an email-based questionnaire and a web-based survey with a web-based questionnaire. the survey results show that readers overview web-based academic articles from the screen, print them out and then read the printed articles. the results also show that the structural formats employed by most papers on the web are against readers' preferences. the simple two-frame format was most preferred by 47% of the respondents as readers, but the cascade format of page windows was regarded as the worst by 65%. an interesting result is that 26% of the respondents selected as the worst style the paper-like format that is currently widely used for web-based articles. brief data sets and results are shown in this article. in addition, the importance of examples embedded in the web-based questionnaire was shown by two consecutive surveys. y. rho t. d. gedeon question asking as a tool for novice computer skill acquisition marc m. sebrechts merryanna l. swartz interface design and multivariate analysis of unix command use stephen jose hanson robert e. kraut james m. farber helping the user by helping the developer: the role of guidelines in a multimedia context maria g. wedlow christina haas dan boyarski paul g. crumley interaction and outeraction: instant messaging in action we discuss findings from an ethnographic study of instant messaging (im) in the workplace and its implications for media theory. we describe how instant messaging supports a variety of informal communication tasks. we document the affordances of im that support flexible, expressive communication. we describe some unexpected uses of im that highlight aspects of communication which are not part of current media theorizing. they pertain to communicative processes people use to connect with each other and to manage communication, rather than to information exchange. we call these processes "outeraction". we discuss how outeractional aspects of communication affect media choice and patterns of media use. bonnie a. nardi steve whittaker erin bradner digital library design for organizational usability rob kling margaret elliott the spiffi scalable video-on-demand system craig s. freedman david j. dewitt videomap and videospaceicon: tools for anatomizing video content yoshinobu tonomura akihito akutsu kiyotaka otsuji toru sadakata the temporal query language tquel recently, attention has been focused on temporal databases, representing an enterprise over time. we have developed a new language, tquel, to query a temporal database. tquel was designed to be a minimal extension, both syntactically and semantically, of quel, the query language in the ingres relational database management system. this paper discusses the language informally, then provides a tuple relational calculus semantics for the tquel statements that differ from their quel counterparts, including the modification statements. the three additional temporal constructs defined in tquel are shown to be direct semantic analogues of quel's where clause and target list. we also discuss reducibility of the semantics to quel's semantics when applied to a static database. tquel is compared with ten other query languages supporting time. richard snodgrass
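the flavor of tquel's added temporal constructs can be suggested with a plain python sketch of valid-time overlap filtering; this is not tquel syntax, and the tuple layout and half-open-interval convention are our assumptions for illustration:

```python
def overlaps(a_start, a_stop, b_start, b_stop):
    """half-open intervals [start, stop) overlap iff each starts before the other ends."""
    return a_start < b_stop and b_start < a_stop

def valid_during(tuples, q_start, q_stop):
    """keep tuples whose valid time overlaps the query interval."""
    return [t for t in tuples if overlaps(t[1], t[2], q_start, q_stop)]

# (value, valid_from, valid_to) facts; ask which held at some point during 1991-1993
employees = [("merrie", 1984, 1990), ("tom", 1988, 1995)]
print(valid_during(employees, 1991, 1993))   # -> [('tom', 1988, 1995)]
```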
richard snodgrass least expected cost query optimization: an exercise in utility francis chu joseph y. halpern praveen seshadri introduction to the special issue on virtual reality software and technology dan r. olsen germinder singh steven k. feiner implementation of logical query languages for databases jeffrey d. ullman who's in charge here?: cooperative work and authority negotiation in police helicopter missions charlotte linde cost-driven design for archival repositories designing an archival repository is a complex task because there are many alternative configurations, each with different reliability levels and costs. in this paper we study the costs involved in an archival repository and we introduce a design framework for evaluating alternatives and choosing the best configuration in terms of reliability and cost. we also present a new version of our simulation tool, archsim/c, that aids in the decision process. the design framework and the usage of archsim/c are illustrated with a case study of a hypothetical (yet realistic) archival repository shared between two universities. arturo crespo hector garcia-molina data structures for efficient broker implementation with the profusion of text databases on the internet, it is becoming increasingly hard to find the most useful databases for a given query. to attack this problem, several existing and proposed systems employ brokers to direct user queries, using a local database of summary information about the available databases. this summary information must effectively distinguish relevant databases and must be compact while allowing efficient access. we offer evidence that one broker, gloss, can be effective at locating databases of interest even in a system of hundreds of databases, and we examine the performance of accessing the gloss summaries for two promising storage methods: the grid file and partitioned hashing. we show that both methods can be tuned to provide good performance for a particular workload (within a broad range of workloads), and we discuss the tradeoffs between the two data structures. as a side effect of our work, we show that grid files are more broadly applicable than previously thought; in particular, we show that by varying the policies used to construct the grid file we can provide good performance for a wide range of workloads even when storing highly skewed data. anthony tomasic luis gravano calvin lue peter schwarz laura haas research on human-computer interaction and cooperative hypermedia at gmd-ipsi norbert a. streitz heinz-dieter böcker cubist: a new algorithm for improving the performance of ad-hoc olap queries lixin fu joachim hammer value-sensitive design batya friedman interlocus: workspace configuration mechanisms for activity awareness takahiko nomura koichi hayashi tan hazama stephan gudmundson template-based wrappers in the tsimmis system in order to access information from a variety of heterogeneous information sources, one has to be able to translate queries and data from one data model into another. this functionality is provided by so-called (source) wrappers [4,8] which convert queries into one or more commands/queries understandable by the underlying source and transform the native results into a format understood by the application. as part of the tsimmis project [1, 6] we have developed hard-coded wrappers for a variety of sources (e.g., sybase dbms, www pages, etc.) including legacy systems (folio).
however, anyone who has built a wrapper before can attest that a lot of effort goes into developing and writing such a wrapper. in situations where it is important or desirable to gain access to new sources quickly, this is a major drawback. furthermore, we have also observed that only a relatively small part of the code deals with the specific access details of the source. the rest of the code is either common among wrappers or implements query and data transformation that could be expressed in a high level, declarative fashion. based on these observations, we have developed a wrapper implementation toolkit [7] for quickly building wrappers. the toolkit contains a library for commonly used functions, such as for receiving queries from the application and packaging results. it also contains a facility for translating queries into source-specific commands, and for translating results into a model useful to the application. the philosophy behind our "template-based" translation methodology is as follows. the wrapper implementor specifies a set of templates (rules) written in a high level declarative language that describe the queries accepted by the wrapper as well as the objects that it returns. if an application query matches a template, an implementor-provided action associated with the template is executed to provide the native query for the underlying source1. when the source returns the result of the query, the wrapper transforms the answer which is represented in the data model of the source into a representation that is used by the application. using this toolkit one can quickly design a simple wrapper with a few templates that cover some of the desired functionality, probably the one that is most urgently needed. however, templates can be added gradually as more functionality is required later on. another important use of wrappers is in extending the query capabilities of a source. for instance, some sources may not be capable of answering queries that have multiple predicates. in such cases, it is necessary to pose a native query to such a source using only predicates that the source is capable of handling. the rest of the predicates are automatically separated from the user query and form a filter query. when the wrapper receives the results, a post- processing engine applies the filter query. this engine supports a set of built-in predicates based on the comparison operators =, ,<,>, etc. in addition, the engine supports more complex predicates that can be specified as part of the filter query. the postprocessing engine is common to wrappers of all sources and is part of the wrapper toolkit. note that because of postprocessing, the wrapper can handle a much larger class of queries than those that exactly match the templates it has been given. figure 1 shows an overview of the wrapper architecture as it is currently implemented in our tsimmis testbed. shaded components are provided by the toolkit, the white component is source-specific and must be generated by the implementor. 
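as a rough, hedged illustration of the template-and-filter mechanism just described (the names below are hypothetical and are not the msl templates or the toolkit api): a template declares which predicates it can push to the source, the implementor-provided action builds the native command, and whatever the template cannot handle becomes a filter query applied by the postprocessing engine.

```python
# Toy template-based translation: push supported predicates to the source,
# keep the rest as a filter applied to the returned rows.
def translate(query_preds, templates):
    """query_preds: {attribute: value}; templates: list of (supported_attrs, action)."""
    for supported, action in templates:
        pushed = {a: v for a, v in query_preds.items() if a in supported}
        if pushed:                               # this template matches the query
            native_cmd = action(pushed)          # implementor-provided action
            leftover = {a: v for a, v in query_preds.items() if a not in supported}
            return native_cmd, leftover          # leftover becomes the filter query
    raise ValueError("no template matches the query")

def postprocess(rows, filter_preds):
    # engine-side filtering of native results with the leftover predicates
    return [r for r in rows if all(r.get(a) == v for a, v in filter_preds.items())]

# a source that can only restrict by author; the year predicate is filtered locally
templates = [({"author"}, lambda p: "lookup author=" + p["author"])]
cmd, flt = translate({"author": "smith", "year": 1995}, templates)
rows = [{"author": "smith", "year": 1994}, {"author": "smith", "year": 1995}]
print(cmd)                     # lookup author=smith
print(postprocess(rows, flt))  # [{'author': 'smith', 'year': 1995}]
```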
the driver component controls the translation process and invokes the following services: the parser which parses the templates, the native schema, as well as the incoming queries into internal data structures, the matcher which matches a query against the set of templates and creates a filter query for postprocessing if necessary, the native component which submits the generated action string to the source, and extracts the data from the native result using the information given in the source schema, and the engine, which transforms and packages the result and applies a postprocessing filter if one has been created by the matcher. we now describe the sequence of events that occur at the wrapper during the translation of a query and its result using an example from our prototype system. the queries are formulated using a rule-based language called msl that has been developed as a template specification and query language for the tsimmis project. data is represented using our object exchange model (oem). we will briefly describe msl and oem in the next section. details on msl can be found in [5], a full introduction to oem is given in [1]. joachim hammer hector garcia-molina svetlozar nestorov ramana yerneni marcus breunig vasilis vassalos multi-media risc informatics: retrieving information with simple structural components daniela rus devika subramanian fast algorithms for projected clustering the clustering problem is well known in the database literature for its numerous applications in problems such as customer segmentation, classification and trend analysis. unfortunately, all known algorithms tend to break down in high dimensional spaces because of the inherent sparsity of the points. in such high dimensional spaces not all dimensions may be relevant to a given cluster. one way of handling this is to pick the closely correlated dimensions and find clusters in the corresponding subspace. traditional feature selection algorithms attempt to achieve this. the weakness of this approach is that in typical high dimensional data mining applications different sets of points may cluster better for different subsets of dimensions. the number of dimensions in each such cluster-specific subspace may also vary. hence, it may be impossible to find a single small subset of dimensions for all the clusters. we therefore discuss a generalization of the clustering problem, referred to as the projected clustering problem, in which the subsets of dimensions selected are specific to the clusters themselves. we develop an algorithmic framework for solving the projected clustering problem, and test its performance on synthetic data. charu c. aggarwal joel l. wolf philip s. yu cecilia procopiuc jong soo park database programming in machiavelli - a polymorphic language with static type inference machiavelli is a polymorphically typed programming language in the spirit of ml, but supports an extended method of type inferencing that makes its polymorphism more general and appropriate for database applications. in particular, a function that selects a field of a records is polymorphic in the sense that it can be applied to any record which contains a field with the appropriate type. when combined with a set data type and database operations including join and projection, this provides a natural medium for relational database programming. 
moreover, by implementing database objects as reference types and generating the appropriate views --- sets of structures with "identity" --- we can achieve a degree of static type checking for object-oriented databases. atsushi ohori peter buneman val breazu-tannen high-precision midi encoding method including decoder control for synthesizing vocal sounds toshio modegi how a personal document's intended use or purpose affects its classification in an office this paper reports on one of the findings of a larger case study that attempts to describe how people organize documents in their own offices. in that study, several dimensions along which people make classificatory decisions were identified. of these, the use to which a document is put emerged as a strong determiner of that document's classification. the method of analysis is reviewed, and examples of different kinds of uses are presented, demonstrating that it is possible to describe a wide variety of specific instances using a closed set of descriptors. the suggestion is made that, in designing systems for organizing materials, it might be advantageous to incorporate information about contextual variables, such as use, since these seem to be particularly important in classification decisions made within personal environments. b. kwasnik nsync - a toolkit for building interactive multimedia presentations brian bailey joseph a. konstan robert cooley moses dejong constructing, organizing, and visualizing collections of topically related web resources for many purposes, the web page is too small a unit of interaction and analysis. web sites are structured multimedia documents consisting of many pages, and users often are interested in obtaining and evaluating entire collections of topically related sites. once such a collection is obtained, users face the challenge of exploring, comprehending and organizing the items. we report four innovations that address these user needs: (1) we replaced the web page with the web site as the basic unit of interaction and analysis; (2) we defined a new information structure, the clan graph, that groups together sets of related sites; (3) we augment the representation of a site with a site profile, information about site structure and content that helps inform user evaluation of a site; and (4) we invented a new graph visualization, the auditorium visualization, that reveals important structural and content properties of sites within a clan graph. detailed analysis and user studies document the utility of this approach. the clan graph construction algorithm tends to filter out irrelevant sites and discover additional relevant items. the auditorium visualization, augmented with drill-down capabilities to explore site profile data, helps users to find high-quality sites as well as sites that serve a particular function. loren terveen will hill brian amento reader opinion cards as a measure of customer satisfaction carol tyler an improved third normal form for relational databases in this paper, we show that some codd third normal form relations may contain "superfluous" attributes because the definitions of transitive dependency and prime attribute are inadequate when applied to sets of relations. to correct this, an improved third normal form is defined and an algorithm is given to construct a set of relations from a given set of functional dependencies in such a way that the superfluous attributes are guaranteed to be removed.
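as a concrete picture of the setting, the sketch below shows the classical style of synthesizing relation schemes from a set of functional dependencies (attribute closure, right-reduction, one scheme per determinant). it is a simplified illustration of the standard approach, omitting left-reduction and the key-preservation step, and is not the authors' improved algorithm.

```python
# Classical-style synthesis sketch: drop redundant FD fragments, then group
# the remaining dependencies by their left-hand side.
def closure(attrs, fds):
    """fds: list of (frozenset lhs, frozenset rhs); returns the closure of attrs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def synthesize(fds):
    canonical = []
    for lhs, rhs in fds:                 # split right-hand sides, drop redundant pieces
        for a in rhs:
            rest = [f for f in fds if f != (lhs, rhs)] + [(lhs, rhs - {a})]
            if a not in closure(lhs, rest):
                canonical.append((lhs, frozenset({a})))
    schemas = {}
    for lhs, rhs in canonical:           # one relation scheme per determinant
        schemas.setdefault(lhs, set(lhs)).update(rhs)
    return list(schemas.values())

fds = [(frozenset("A"), frozenset("BC")), (frozenset("B"), frozenset("C"))]
print(synthesize(fds))   # e.g. [{'A', 'B'}, {'B', 'C'}]; C is not re-stored under A
```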
this new normal form is compared with other existing definitions of third normal form, and the deletion normalization method proposed is shown to subsume the decomposition method of normalization. tok-wang ling frank w. tompa tiko kameda comments on sdd-1 concurrency control mechanisms gordon mclean the future of integrated design of ubiquitous computing in combined real & virtual worlds daniel m. russell mark weiser pad++: a zooming graphical interface for exploring alternate interface physics we describe the current status of pad++, a zooming graphical interface that we are exploring as an alternative to traditional window and icon-based approaches to interface design. we discuss the motivation for pad++, describe the implementation, and present prototype applications. in addition, we introduce an informational physics strategy for interface design and briefly compare it with metaphor-based design strategies. benjamin b. bederson james d. hollan hypermirror: toward pleasant-to-use video mediated communication system osamu morikawa takanori maesako the text retrieval conference (trec): history and plans for trec-9 ellen m. voorhees donna harman information gathering in the world-wide web: the w3ql query language and the w3qs system the world wide web (www) is a fast growing global information resource. it contains an enormous amount of information and provides access to a variety of services. since there is no central control and very few standards of information organization or service offering, searching for information and services is a widely recognized problem. to some degree this problem is solved by "search services," also known as "indexers," such as lycos, altavista, yahoo, and others. these sites employ search engines known as "robots" or "knowbots" that scan the network periodically and form text-based indices. these services are limited in certain important aspects. first, the structural information, namely, the organization of the document into parts pointing to each other, is usually lost. second, one is limited by the kind of textual analysis provided by the "search service." third, search services are incapable of navigating "through" forms. finally, one cannot prescribe a complex database-like search. we view the www as a huge database. we have designed a high-level sql-like language called w3ql to support effective and flexible query processing, which addresses the structure and content of www nodes and their varied sorts of data. we have implemented a system called w3qs to execute w3ql queries. in w3qs, query results are declaratively specified and continuously maintained as views when desired. the current architecture of w3qs provides a server that enables users to pose queries as well as integrate their own data analysis tools. the system and its query language set a framework for the development of database-like tools over the www. a significant contribution of this article is in formalizing the www and query processing over it. david konopnicki oded shmueli what's going on in indexing? nancy c. mulvany a first step to formally evaluate collaborative work ricardo baeza-yates jose a. pino structured online interactions: improving the decision-making of small discussion groups a quantitative research experiment was used to examine whether a group's computer-mediated decision-making could be improved by providing a scripted structure to the group's text chat discussion.
the study compared a regular chat discussion to a scripted chat discussion using lead line, a program that allows people to add a layer of pre-authored structure to regular text chat. we found that groups were more likely to come to consensus in structured chat discussions. in addition, groups applied the structure they learned to subsequent regular chat sessions. shelly farnham harry r. chesley debbie e. mcghee reena kawal jennifer landau authorization management for digital libraries h. m. gladney arthur cantu snowball: a prototype system for extracting relations from large text collections eugene agichtein luis gravano jeff pavel viktoriya sokolova aleksandr voskoboynik preserving digital information forever well within our lifetime we can expect to see most information being created, stored and used digitally. despite the growing importance of digital data, the wider community pays almost no attention to the problems of preserving this digital information for the future. even within the archival and library communities most work on digital preservation has been theoretical, not practical, and highlights the problems rather than giving solutions. physical libraries have to preserve information for long periods and this is no less true of their digital equivalents. this paper describes the preservation approach adopted in the victorian electronic record strategy (vers) which is currently being trialed within the victorian government, one of the states of australia. we review the various preservation approaches that have been suggested and describe in detail encapsulation, the approach which underlies the vers format. a key difference between the vers project and previous digital preservation projects is the focus within vers on the construction of actual systems to test and implement the proposed technology. vers is not a theoretical study in preservation. andrew waugh ross wilkinson brendan hills jon dell'oro human factors comparison of a procedural and a nonprocedural query language two experiments testing the ability of subjects to write queries in two different query languages were run. the two languages, sql and tablet, differ primarily in their procedurality; both languages use the relational data model, and their halstead levels are similar. constructs in the languages which do not affect their procedurality are identical. the two languages were learned by the experimental subjects almost exclusively from manuals presenting the same examples and problems ordered identically for both languages. the results of the experiments show that subjects using the more procedural language wrote difficult queries better than subjects using the less procedural language. the results of the experiments are also used to compare corresponding constructs in the two languages and to recommend improvements for these constructs. charles welty david w. stemple design issues in distributed multidatabase systems many reports have been published on the design strategies employed in the development of multidatabase systems. this paper examines some of these strategies and compares the design of the adds system to other efforts in the development of multidatabase systems. there are a number of interesting issues that arise when considering the design of a distributed multidatabase system. 
these include: (1) the levels of update control for the supported physical databases, (2) the cooperation of the global and local concurrency control schemes, (3) the treatment of concurrency control of a partially replicated distributed multidatabase directory, (4) reduced data transmission with enhanced semijoin algorithms and localized processing of intermediate query results, and (5) the network architecture required to support a distributed multidatabase system. glenn r. thompson yuri j. breitbart a metric for hypertext usability elmaoun m. babiker hiroko fujihara craig d. b. boyle context interchange: new features and formalisms for the intelligent integration of information the context interchange strategy presents a novel perspective for mediated data access in which semantic conflicts among heterogeneous systems are not identified a priori, but are detected and reconciled by a context mediator through comparison of contexts axioms corresponding to the systems engaged in data exchange. in this article, we show that queries formulated on shared views, export schema, and shared "ontologies" can be mediated in the same way using the context interchange framework. the proposed framework provides a logic-based object-oriented formalsim for representing and reasoning about data semantics in disparate systems, and has been validated in a prototype implementation providing mediated data access to both traditional and web- based information sources. cheng hian goh stephane bressan stuart madnick michael siegel a context-based navigation paradigm for accessing web data wilfried lemahieu position paper: internet vod cache server design carsten griwodz michael zink michael liepert ralf steinmetz query by diagram: a graphical environment for querying databases tiziana catarci giuseppe santucci context-sensitive filtering for browsing in hypertext tsukasa hirashima noriyuki matsuda toyohiro nomoto jun'ichi toyoda comprehensiveness and restrictiveness in group decision heuristics: effects of computer support on consensus decision making g. desanctis m. d'onofrio v. sambamurthy m. s. poole why users cannot "get what they want" ray j. paul dialing a name: alphabetic entry through a telephone keypad lisa fast roy ballantine documentation in the computer age: control and audit implications over the past two decades, proportionally more and more of the corporation's information assets have been entered, processed, stored, and retrieved through edp facilities. today the new developments in data entry technology and the growing acceptance of distributive computing methodology are placing more edp power in the hands of the end-user. as business operations become more dependent upon supportive edp systems, the internal auditor has to focus more attention on the audit of the edp system itself. the ability to independently evaluate a system can come only from knowledge of the system. it is for this reason that in an audit engagement involving computerized systems, the internal auditor should begin with a review of available system documentation. j. jose cortez generating and reintegrating geospatial data the process of building a geospatial component to access existing materials in the perseus digital library has raised interesting questions about the interaction between historical and geospatial data. the traditional methods of describing geographic features' names and locations do not provide a complete solution for historical data such as that in the perseus digital library. 
very often data sources for a spatial database must be created from the historical materials themselves. robert f. chavez organizational memory systems to support organizational information processing: development of a framework and results of an empirical study ronald k. maier oliver w. klosa mapping relational database management systems to hypertext (abstract) jiangling wan comparison of access methods for time-evolving data this paper compares different indexing techniques proposed for supporting efficient access to temporal data. the comparison is based on a collection of important performance criteria, including the space consumed, update processing, and query time for representative queries. the comparison is based on worst-case analysis, hence no assumptions on data distribution or query frequencies are made. when a number of methods have the same asymptotic worst- case behavior, features in the methods that affect average case behavior are discussed. additional criteria examined are the pagination of an index, the ability to cluster related data together, and the ability to efficiently separate old from current data (so that larger archival storage media such as write-once optical disks can be used). the purpose of the paper is to identify the difficult problems in accessing temporal data and describe how the different methods aim to solve them. a general lower bound for answering basic temporal queries is also introduced. betty salzberg vassilis j. tsotras handling very large databases with informix extended parallel server in this paper, we investigate which problems exist in very large real databases and describe which mechanisms are provided by informix extended parallel server (xps) for dealing with these problems. currently the largest customer xps database contains 27 tb of data. a database server that has to handle such an amount of data has to provide mechanisms which allow achieving adequate performance and easing the usability. we will present mechanisms which address both of these issues and illustrate them with examples from real customer systems. andreas weininger towards context-based search engine selection a well-known problem for web search is targeting search on information that satisfies users' information needs. user queries tend to be short, and hence often ambiguous, which can lead to inappropriate results from general-purpose search engines. this has led to a number of methods for narrowing queries by adding information. this paper presents an alternative approach that aims to improve query results by using knowledge of a user's current activities to select search engines relevant to their information needs, exploiting the proliferation of high-quality special-purpose search services. the paper introduces the prism source selection system and describes its approach. it then describes two initial experiments testing the system's methods. david b. leake ryan scherle office procedure as practical action: models of work and system design lucy a. suchman qbi: an iconic query system for inexpert users we present a general purpose query interface for inexpert users based on the manipulation of icons. the user perceives the reality of interest as structured in classes and attributes while the system internally maintains a schema rich of semantic information. the query language, fully visual, is based on the select and project paradigm that has been proven to be easy to understand. no path specification is required for composing a query. 
automatic feedbacks based on natural language generation and cardinality constraints analysis help the user in specifying his/her requests. antonio massari stefano pavani lorenzo saladini yet another note on minimal covers in [atk88] atkins corrects a widely spread error in the algorithm for finding a minimal cover for a given set of functional dependencies. the erroneous form of the algorithm has been presented in [sa186,stw83,ul182,yan88]. unfortunately, though, there is an error also in the corrected algorithm. atkins proposed the following algorithm for determining a minimal cover for a given set of functional dependencies f. jyrki nummenmaa peter thanisch translucent patches - dissolving windows this paper presents motivation, design, and algorithms for using and implementing translucent, non-rectangular patches as a substitute for rectangular opaque windows. the underlying metaphor is closer to a mix between the architects yellow paper and the usage of white boards, than to rectangular opaque paper in piles and folders on a desktop. translucent patches lead to a unified view of windows, sub-windows and selections, and provide a base from which the tight connection between windows, their content, and applications can be dissolved. it forms one aspect of on-going work to support design activities that involve "marking" media, like paper and white boards, with computers. the central idea of that research is to allow the user to associate structure and meaning dynamically and smoothly to marks on a display surface. axel kramer groupware: a survey of perceptions and practice jeff butterfield sukumar rathnam andrew whinston introduction to a system for distributed databases (sdd-1) the declining cost of computer hardware and the increasing data processing needs of geographically dispersed organizations have led to substantial interest in distributed data management. sdd-1 is a distributed database management system currently being developed by computer corporation of america. users interact with sdd-1 precisely as if it were a nondistributed database system because sdd-1 handles all issues arising from the distribution of data. these issues include distributed concurrency control, distributed query processing, resiliency to component failure, and distributed directory management. this paper presents an overview of the sdd-1 design and its solutions to the above problems. this paper is the first of a series of companion papers on sdd-1 (bernstein and shipman [2], bernstein et al. [4], and hammer and shipman [14]). j. b. rothnie p. a. bernstein s. fox n. goodman m. hammer t. a. landers c. reeve d. w. shipman e. wong challenges of hci design and implementation brad myers query processing in a system for distributed databases (sdd-1) this paper describes the techniques used to optimize relational queries in the sdd-1 distributed database system. queries are submitted to sdd-1 in a high- level procedural language called datalanguage. optimization begins by translating each datalanguage query into a relational calculus form called an envelope, which is essentially an aggregate-free quel query. this paper is primarily concerned with the optimization of envelopes. envelopes are processed in two phases. the first phase executes relational operations at various sites of the distributed database in order to delimit a subset of the database that contains all data relevant to the envelope. this subset is called a reduction of the database. 
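the reduction phase relies on the semijoin operator that the remainder of this abstract defines; as a minimal, hedged illustration (relation and attribute names invented for the example), r semijoined with s keeps only the tuples of r that join with s, so only the projected join-attribute values of s need to be shipped between sites.

```python
# R semijoin S: keep the tuples of R whose join-attribute values appear in S.
def semijoin(r_tuples, s_tuples, join_attrs):
    shipped = {tuple(t[a] for a in join_attrs) for t in s_tuples}  # small projection of S
    return [t for t in r_tuples if tuple(t[a] for a in join_attrs) in shipped]

supplier = [{"sno": 1, "city": "paris"}, {"sno": 2, "city": "rome"}]
shipment = [{"sno": 1, "pno": "p7"}]
# reduce 'supplier' before transmitting it to the final assembly site
print(semijoin(supplier, shipment, ["sno"]))   # [{'sno': 1, 'city': 'paris'}]
```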
the second phase transmits the reduction to one designated site, and the query is executed locally at that site. the critical optimization problem is to perform the reduction phase efficiently. success depends on designing a good repertoire of operators to use during this phase, and an effective algorithm for deciding which of these operators to use in processing a given envelope against a given database. the principal reduction operator that we employ is called a semijoin. in this paper we define the semijoin operator, explain why semijoin is an effective reduction operator, and present an algorithm that constructs a cost-effective program of semijoins, given an envelope and a database. philip a. bernstein nathan goodman eugene wong christopher l. reeve james b. rothnie predictive engineering models based on the epic architecture for a multimodal high-performance human-computer interaction task engineering models of human performance permit some aspects of usability of interface designs to be predicted from an analysis of the task, and thus they can replace to some extent expensive user-testing data. we successfully predicted human performance in telephone operator tasks with engineering models constructed in the epic (executive process-interactive control) architecture for human information processing, which is especially suited for modeling multimodal, complex tasks, and has demonstrated success in other task domains. several models were constructed on an a priori basis to represent different hypotheses about how operators coordinate their activities to produce rapid task performance. the models predicted the total time with useful accuracy and clarified some important properties of the task. the best model was based directly on the goms analysis of the task and made simple assumptions about the operator's task strategy, suggesting that epic models are a feasible approach to predicting performance in multimodal high- performance tasks. david e. kieras scott d. wood david e. meyer naming as a fundamental concept of open hypermedia systems manolis tzagarakis nikos karousos dimitris christodoulakis siegfried reich visual system browser e. hudlicka appropriations and patterns in the use of group support systems this paper describes a macro-level coding scheme to distinguish patterns that occur in groups using a group support system (gss). the coding scheme has roots in adaptive structuration theory (ast) with its emphasis on how technology is appropriated or used, and discourse analysis that requires one consider the context of the larger discussion when analyzing textual data. the macro-level coding scheme revolves around junctures that occur during group meetings such as when a group either chooses to use the gss software or chooses to accomplish some aspect of the task without the software. after identifying a juncture, a brief description is written detailing how group members responded to the opportunity presented by the juncture. qualitative analysis is then used to identify patterns in how the technology is used. application of the coding scheme is demonstrated with data gathered from 17 groups that met three times over a two-week period. katherine m. chudoba design issues of idict eye-aware applications have existed for long, but mostly for very special and restricted target populations. we have designed and are currently implementing an eye-aware application, called idict, which is a general-purpose translation aid aimed at mass markets. 
idict monitors the user's gaze path while s/he is reading text written in a foreign language. when the reader encounters difficulties, idict steps in and provides assistance with the translation. to accomplish this, the system makes use of information obtained from reading research, a language model, and the user profile. this paper describes the idea of the idict application, the design problems and the key solutions for resolving these problems. aulikki hyrskykari päivi majaranta antti aaltonen kari-jouko räihä managing the printed circuit board design process tomas blain michael dohler ralph michaelis emran qureshi a reengineering framework for evaluating a financial imaging system henry c. lucas donald j. berndt greg truman locking without blocking: making lock based concurrent data structure algorithms nonblocking nonblocking algorithms for concurrent data structures guarantee that a data structure is always accessible. this is in contrast to blocking algorithms in which a slow or halted process can render part or all of the data structure inaccessible to other processes. this paper proposes a technique that can convert most existing lock-based blocking data structure algorithms into nonblocking algorithms with the same functionality. our instruction-by-instruction transformation can be applied to any algorithm having the following properties: •interprocess synchronization is established solely through the use of locks. •there is no possiblity of deadlock (e.g. because of a well-ordering among the lock requests). in contrast to a previous work, our transformation requires only a constant amount of overhead per operation and, in the absence of failures, it incurs no penalty in the amount of concurrency that was available in the original data structure. the techniques in this paper may obviate the need for a wholesale reinvention of techniques for nonblocking concurrent data structure algorithms. john turek dennis shasha sundeep prakash on random sampling over joins a major bottleneck in implementing sampling as a primitive relational operation is the inefficiency of sampling the output of a query. it is not even known whether it is possible to generate a sample of a join tree without first evaluating the join tree completely. we undertake a detailed study of this problem and attempt to analyze it in a variety of settings. we present theoretical results explaining the difficulty of this problem and setting limits on the efficiency that can be achieved. based on new insights into the interaction between join and sampling, we develop join sampling techniques for the settings where our negative results do not apply. our new sampling algorithms are significantly more efficient than those known earlier. we present experimental evaluation of our techniques on microsoft's sql server 7.0. surajit chaudhuri rajeev motwani vivek narasayya content-based retrieval in multimedia databases c. h. c. leung j. hibler n. mwara an nf2 relational interface for document retrieval, restructuring and aggregation kalervo järvelin timo niemi ethnography and systems development (tutorial session)(abstract only): bounding the intersection goals and content: participants will learn the relevance of ethnographic analysis for capturing social complexity and its relationship to other social investigation methods for systems development in cooperative environments in the morning session. the afternoon session will specify and elaborate the problems inherent in integrating ethnographic methods with systems development. 
these problems will be highlighted through examination of data from the instructors' own research in air traffic control and retail financial services. dave randall mark roucefield user-centered abstractions for adaptive hypermedia presentations dick c. a. bulterman independence is good: dependency-based histogram synopses for high-dimensional data approximating the joint data distribution of a multi-dimensional data set through a compact and accurate histogram synopsis is a fundamental problem arising in numerous practical scenarios, including query optimization and approximate query answering. existing solutions either rely on simplistic independence assumptions or try to directly approximate the full joint data distribution over the complete set of attributes. unfortunately, both approaches are doomed to fail for high-dimensional data sets with complex correlation patterns between attributes. in this paper, we propose a novel approach to histogram-based synopses that employs the solid foundation of statistical interaction models to explicitly identify and exploit the statistical characteristics of the data. abstractly, our key idea is to break the synopsis into (1) a statistical interaction model that accurately captures significant correlation and independence patterns in data, and (2) a collection of histograms on low-dimensional marginals that, based on the model, can provide accurate approximations of the overall joint data distribution. extensive experimental results with several real-life data sets verify the effectiveness of our approach. an important aspect of our general, model-based methodology is that it can be used to enhance the performance of other synopsis techniques that are based on data-space partitioning (e.g., wavelets) by providing an effective tool to deal with the "dimensionality curse". amol deshpande minos garofalakis rajeev rastogi support vector machines: hype or hallelujah? kristin p. bennett colin campbell one is not enough: multiple views in a media space william gaver abigail sellen christian heath paul luff an object oriented approach to multidimensional database conceptual modeling (oomd) j. trujillo m. palomar semantic multicast: intelligently sharing collaborative sessions we introduce the concept of semantic multicast to implement a large-scale shared interaction infrastructure providing mechanisms for collecting, indexing, and disseminating the information produced in collaborative sessions. this infrastructure captures the interactions between users (as video, text, audio and other data streams) and promotes a philosophy of filtering, archiving, and correlating collaborative sessions in user and context sensitive groupings. the semantic multicast service efficiently disseminates relevant information to every user engaged in the collaborative session, making the aggregated streams of the collaborative session available to the correct users at the right amount of detail. this contextual focus is accomplished by introducing proxy servers to gather, annotate, and filter the streams appropriate for specific interest groups. users are subscribed to appropriate proxies, based on their profiles, and the collaborative session becomes a multi-level multicast of data from sources through proxies and to user interest groups. son dao eddie shek asha vellaikal richard r. muntz lixia zhang miodrag potkonjak ouri wolfson
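the dependency-based histogram synopses entry above rests on one basic building block: wherever the interaction model declares attributes independent, their joint distribution can be reconstructed from low-dimensional marginals alone. the toy numpy sketch below illustrates only that building block, not the authors' model selection or synopsis construction.

```python
# Reconstructing a 2-d distribution from its 1-d marginals under independence.
import numpy as np

joint = np.array([[20.0,  5.0],
                  [30.0, 10.0],
                  [ 8.0,  2.0]])            # true counts over a 3x2 attribute grid
total = joint.sum()

marg_a = joint.sum(axis=1)                  # 1-d histogram on attribute A
marg_b = joint.sum(axis=0)                  # 1-d histogram on attribute B

approx = np.outer(marg_a, marg_b) / total   # independence-based reconstruction
print(np.abs(approx - joint).max())         # small when A and B really are nearly independent
```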
extending ingres with methods and triggers fred carter on efficient storage space distribution among materialized views and indices in data warehousing environments ladjel bellatreche kamalakar karlapalem michel schneider an overview of db2 parallel edition chaitanya baru gilles fecteau relevance feedback revisited researchers have found relevance feedback to be effective in interactive information retrieval, although few formal user experiments have been made. in order to run a user experiment on a large document collection, experiments were performed at nist to complete some of the missing links found in using the probabilistic retrieval model. these experiments, using the cranfield 1400 collection, showed the importance of query expansion in addition to query reweighting, and showed that adding as few as 20 well-selected terms could result in performance improvements of over 100%. additionally it was shown that performing multiple iterations of feedback is highly effective. donna harman internet news clipping services: how companies keep on top of their markets arik r. johnson lamp: language for active message protocols among the most prominent aspects of office automation is the concept of electronic mail. despite the fact that numerous "office of the future" descriptions seem to focus upon electronic mail as a great opportunity, seldom is it treated as a phenomenon in its own right rather than a product. more often it seems to be a technological byproduct, available to counter certain specific business ailments, most notably the inefficiency of transportation of packaged messages. despite the obvious advantages of e-mail, it is not seen as either essential to the office of the future as a concept or as theoretically interesting to researchers. several commentators, however, have pointed out that messaging may be an alternative form of interpersonal communication (see bair [1]), without especially treating messaging as a core concept in office automation. this paper proposes to open the debate, not into the technical or behavioral feasibility of e-mail or the other messaging manifestations, nor into the economic benefits of implementation, but rather into the foundations of pre-programmed, non-real-time verbal interaction (printed and oral) as a general phenomenon. paul s. licker unifying heterogeneous information models narinder singh beyond the browser tony fernandes cscw 2000 video program andreas girgensohn alison lee an architecture for implementing extensible information-seeking environments david g. hendry david j. harper latent semantic indexing model for boolean query formulation (poster session) a new model named boolean latent semantic indexing model based on the singular value decomposition and boolean query formulation is introduced. while the singular value decomposition alleviates the problems of lexical matching in the traditional information retrieval model, boolean query formulation can help users to make precise representation of their information search needs. retrieval experiments on a number of test collections seem to show that the proposed model achieves substantial performance gains over the latent semantic indexing model. dae-ho baek heuiseok lim hae-chang rim high availability of commercial applications kestutis ivinskis call for participation pdc '98 & cscw '98 workshops paul dourish mike robinson practical usability evaluation gary perlman fluid links for informed and incremental link transitions polle t.
zellweger bay-wei chang jock d. mackinlay selectivity estimation of window queries for line segment datasets guido proietti christos faloutsos things every update replication customer should know (abstract) rob goldring intermedia: a case study of the differences between relational and object- oriented database systems this paper compares two approaches to meeting the data handling requirements of intermedia, a hypermedia system developed at the institute for research in information and scholarship at brown university. intermedia, though written using an object- oriented programming language, relies on a traditional relational database management system for data storage and retrieval. we examine the ramifications of replacing the relational database with an object- oriented database. we begin by describing the important characteristics each database system. we then describe intermedia and give an overview of its architecture and its data handling requirements. we explain why and how we used a relational database management system and the problems that we encountered with this system. we then present the design of an object-oriented database schema for intermedia and compare the relational and object-oriented database management system approaches. karen e. smith stanley b. zdonik a "pile" metaphor for supporting casual organization of information a user study was conducted to investigate how people deal with the flow of information in their workspaces. subjects reported that, in an attempt to quickly and informally manage their information, they created piles of documents. piles were seen as complementary to the folder filing system, which was used for more formal archiving. a new desktop interface element--the pile-- was developed and prototyped through an iterative process. the design includes direct manipulation techniques and support for browsing, and goes beyond physical world functionality by providing system assistance for automatic pile construction and reorganization. preliminary user tests indicate the design is promising and raise issues that will be addressed in future work. richard mander gitta salomon yin yin wong using semi-joins to solve relational queries philip a. bernstein dah-ming w. chiu data models in geographic information systems shashi shekhar mark coyle brajesh goyal duen-ren liu shyamsundar sarkar lexical navigation: visually prompted query expansion and refinement james w. cooper roy j. byrd a taxonomy of see-through tools: the video eric a. bier ken fishkin ken pier maureen c. stone bricks: laying the foundations for graspable user interfaces george w. fitzmaurice hiroshi ishii william a. s. buxton lotus notes database support for usability testing mary beth butler ericca lahti complete formal model for information retrieval systems jean tague airi salminen charles mcclellan design and use of muds for serious purposes (workshop session)(abstract only) this workshop will investigate muds and their relationship to other cscw systems, with a special focus on design issues. we will explore muds now available on the internet, the role of users in mud design, evaluation methods, and visions of the future. yvonne waern daniel pargman web search---your way improving web searching with user preferences. eric j. glover steve lawrence michael d. gordon william p. birmingham c. lee giles hi-cites: dynamically created citations with active highlighting michelle q. 
wang baldonado terry winograd at the forge: integrating sql with cgi, part 1 reuven lerner a sound and complete query evaluation algorithm for relational databases with null values reiter has proposed extended relational theory to formulate relational databases with null values and presented a query evaluation algorithm for such databases. however, due to indefinite information brought in by null values, reiter's algorithm is sound but not complete. in this paper, we first propose an extended relation to represent indefinite information in relational databases. then, we define an extended relational algebra for extended relations. based on reiter's extended relational theory, and our extended relations and the extended relational algebra, we present a sound and complete query evaluation algorithm for relational databases with null values. li yan yuan ding-an chiang book review: civilizing cyberspace corporate linux journal staff people, places and interfaces: using physiological constraints to inform the design of safety-critical user interfaces c. w. johnson linking specialized network data repositories to standard access tools r. ford g. hellenga v. palaiya d. thompson acq: an automatic clustering and querying approach for large image databases dantong yu aidong zhang on the potential of tolerant region reuse for multimedia applications the recent years have shown an interesting evolution in the mid-end to low-end embedded domain. portable systems are growing in importance as they improve in storage capacity and in interaction capabilities with general purpose systems. furthermore, media processing is changing the way embedded processors are designed, keeping in mind the emergence of new application domains such as those for pda systems or for the third generation of mobile digital phones (umts). the performance requirements of these new kinds of devices are not those of the general-purpose domain, where traditionally the premium goal is the highest performance. embedded systems must face ever increasing real time requirements as well as power consumption constraints. under this special scenario, instruction/region reuse arises as a promising way of increasing the performance of media embedded processors and, at the same time, reducing the power consumption. furthermore, media and signal processing applications are a suitable target for instruction/region reuse, given the large amount of redundancy found in media data working sets. in this paper we propose a novel region reuse mechanism that takes advantage of the tolerance of media algorithms to losses in the precision of computation. by identifying regions of code where an input data set is processed into an output data set, we can reuse computational instances using the result of previous ones with a similar input data set (hence the term tolerant reuse). we will show that conventional region reuse is barely able to provide more than an 8% reduction in executed instructions (even with significantly big tables) in a typical jpeg encoder application. on the other hand, when applying the concept of tolerance, we are able to provide a reduction of more than 25% in the number of executed instructions with tables smaller than 1kb (with only small degradations in the quality of the output image), and up to a 40% reduction (and no visually perceptible differences) with bigger tables. carlos Álvarez jesus corbal esther salamí mateo valero
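the tolerant region reuse entry above proposes a hardware mechanism; the toy sketch below only restates the idea in software terms: quantize a region's inputs and reuse the cached output of any earlier instance whose quantized inputs match, trading a small loss of precision for skipped work. the names and the tolerance parameter are invented for the example.

```python
# Tolerant memoization of a region of code: similar inputs share one cached result.
def tolerant_reuse(region_fn, tolerance_bits=3):
    table = {}
    def quantized(inputs):
        return tuple(x >> tolerance_bits for x in inputs)  # drop low-order bits
    def wrapper(inputs):
        key = quantized(inputs)
        if key not in table:            # miss: execute the region for real
            table[key] = region_fn(inputs)
        return table[key]               # hit: reuse a "close enough" earlier result
    return wrapper

# example region: the luminance part of an rgb-to-ycbcr-style conversion
luma = tolerant_reuse(lambda p: (299 * p[0] + 587 * p[1] + 114 * p[2]) // 1000)
print(luma((200, 120, 40)))   # computed: 134
print(luma((201, 121, 41)))   # reused from the previous, similar pixel (exact value would be 135)
```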
the digital library steven pemberton the samos active dbms prototype stella gatziu andreas geppert klaus r. dittrich modern wiss hermann maurer enactment and emergence in the dramaturgy of artificial life roy ascott dynamic retrieval of remote digital objects yongcheng li varna puvvada roy campbell main memory database recovery margaret h. eich demonstrating the electronic cocktail napkin: a paper-like interface for early design mark d. gross ellen yi luen do on finite fd-acyclicity yehoshua sagiv translations in hci: formal representations for work analysis and collaboration michael j. muller distributed computing column cynthia dwork cognitive user interface laboratory, gmd-ipsi h. u. hoppe r. t. king a. tissen a case-based architecture for a dialogue manager for information-seeking processes anne tissen navigating in hyperspace: designing a structure-based toolbox ehud rivlin rodrigo botafogo ben shneiderman replacing usability testing with user dialogue jacob buur kirsten bagger on the semantics of "now" in databases although "now" is expressed in sql as current_timestamp within queries, this value cannot be stored in the database. however, this notion of an ever-increasing current-time value has been reflected in some temporal data models by inclusion of database-resident variables, such as "now," "until-changed," "**," "@," and "-". time variables are very desirable, but their use also leads to a new type of database, consisting of tuples with variables, termed a variable database. james clifford curtis dyreson tomás isakowitz christian s. jensen richard t. snodgrass a spatial feature based photograph retrieval system joemon m. jose david john harper david g. hendry spatial interpretation of domain objects integrated into a freeform electronic whiteboard thomas p. moran william van melle patrick chiu accessibility of information on the web steve lawrence c. lee giles queries-r-links: browsing and retrieval via interactive querying gene golovchinsky mark chignell sci-fi at chi: cyberpunk novelists predict future user interfaces this plenary panel will explore ideas about future user interfaces, their technology support, and their social context as proposed in the work of leading authors of science fiction characterized as the cyberpunk movement. respondents will react to and comment upon the authors' presentations. aaron marcus donald a. norman rudy rucker bruce sterling vernor vinge browsing in a digital library collecting linearly arranged documents a method of assisting a user in finding the required documents effectively is proposed. a user being informed which documents are worth examining can browse in a digital library (dl) in a linear fashion. computational evaluations were carried out, and a dl and its navigator are designed and constructed. yanhua qu keizo sato makoto nakashima tetsuro ito building distributed systems (panel) doug lea david forslund tom barry don vines rajendra raj ashutosh tiwary sequence mining in categorical domains: incorporating constraints mohammed j. zaki human performance using computer input devices in the preferred and non-preferred hands subjects' performance was compared in pointing and dragging tasks using the preferred and non-preferred hands. tasks were tested using three different input devices: a mouse, a trackball, and a tablet-with-stylus.
the trackball had the least degradation across hands in performing the tasks, however it remained inferior to both the mouse and stylus. for small distances and small targets, the preferred hand was superior. however, for larger targets and larger distances, both hands performed about the same. the experiment shows that the non-preferred hand is more than a poor approximation of the preferred hand. the hands are complementary, each having its own strength and weakness. one design implication is that the non-preferred hand is well suited for tasks that do not require precise action, such as scrolling. paul kabbash i. scott mackenzie william buxton research alerts: the character, value, and management of personal paper archives steve whittaker julia hirschberg full text retrieval of documents identified in on-line library catalogs via internet in an effort to address issues of user access and retrieval of full-text files, the university of maryland at college park (umcp) and rensselaer polytechnic institute (rpi) are collaborating on a project to retrieve full text of documents using online library catalogs via internet. both organizations share a concern for the integration into the electronic environment of heretofore paperbased information. the amount of information with which libraries must deal is rapidly outpacing libraries' abilities to acquire, catalog, store and retrieve it. creative alternatives must be found to capitalize on technological opportunities for sharing among libraries to the benefit of the nation's scholars. this project builds cooperative links between campus libraries and computer centers, and between campus libraries on geographically distant campuses using the nation's academic computing network facilities. it begins a serious process of developing and evaluating appropriate techniques for sharing primary content material among the nation's research libraries. this project will also focus on issues specific to full-text network access, particularly identifying technology needs and standards options. it will take a pragmatic view, recognizing not only the desire and intent to operate over an iso-osi network eventually, but also the current existence of a large academic network using the tcp-ip protocols. attention will need to be placed on annotation the marc bibliographic record appropriately to identify cataloged items for which full-text is available. the choice of standards for storage and delivery of full-text is also an issue, for which several options warrant attention (e.g., group 4 fax, display postscript, and standard graphic mark-up language). this project will be among the first to demonstrate that users needing bibliographic and full-text information can satisfy their needs over internet. the placement in the marc bibliographic record on local online library systems and oclc of the instructions for accessing the full text and the development of software to assist in retrieving the full text is an innovative approach to library service. university-generated technical reports will be the documents used in this project. these locally produced publications offer an opportunity for experimentation. there are few copyright issues, the texts generally exist in electronic form, there are few illustrations, the material is timely, the distribution is often quite informal, and these publications are often shared with other academic institutions. 
umcp and rpi will be creating local files of full text of technical reports generated from work done on their respective campus. documents in the files will be tied to the respective library catalogs. users will identify documents of interest through the catalog and move electronically to the document file. documents will be transmitted over the internet between umcp and rpi and downloaded locally for printing. it is anticipated that a relatively small number of technical reports online (no more than 200 total, 100 at each institution) will be needed to demonstrate concept feasibility in this project. m. a. plank temporal statement modifiers a wide range of database applications manage time-varying data. many temporal query languages have been proposed, each one the result of many carefully made yet subtly interacting design decisions. in this article we advocate a different approach to articulating a set of requirements, or desiderata, that directly imply the syntactic structure and core semantics of a temporal extension of an (arbitrary) nontemporal query language. these desiderata facilitate transitioning applications from a nontemporal query language and data model, which has received only scant attention thus far. the paper then introduces the notion of statement modifiers that provide a means of systematically adding temporal support to an existing query language. statement modifiers apply to all query language statements, for example, queries, cursor definitions, integrity constraints, assertions, views, and data manipulation statements. we also provide a way to systematically add temporal support to an existing implementation. the result is a temporal query language syntax, semantics, and implementation that derives from first principles. we exemplify this approach by extending sql-92 with statement modifiers. this extended language, termed atsql, is formally defined via a denotational- semantics-style mapping of temporal statements to expressions using a combination of temporal and conventional relational algebraic operators. michael h. böhlen christian s. jensen richard t. snodgrass spatial management of data spatial data management is a technique for organizing and retrieving information by positioning it in a graphical data space (gds). this graphical data space is viewed through a color raster-scan display which enables users to traverse the gds surface or zoom into the image to obtain greater detail. in contrast to conventional database management systems, in which users access data by asking questions in a formal query language, a spatial data management system (sdms) presents the information graphically in a form that seems to encourage browsing and to require less prior knowledge of the contents and organization of the database. this paper presents an overview of the sdms concept and describes its implementation in a prototype system for retrieving information from both a symbolic database management system and an optical videodisk. christopher f. herot internet2 distributed storage infrastructure (i2-dsi) project: improving global access to digital collections bert j. dempsey micah beck terry moore multimediaminer: a system prototype for multimedia data mining multimedia data mining is the mining of high-level multimedia information and knowledge from large multimedia databases. a multimedia data mining system prototype, multimediaminer, has been designed and developed. 
it includes the construction of a multimedia data cube which facilitates multiple dimensional analysis of multimedia data, primarily based on visual content, and the mining of multiple kinds of knowledge, including summarization, comparison, classification, association, and clustering. osmar r. zaïane jiawei han ze-nian li sonny h. chee jenny y. chiang schema transformation without database reorganization we argue for avoiding database reorganizations due to schema modification in object-oriented systems, since these are expensive operations and they conflict with reusing existing software components. we show that data independence, which is a neglected concept in object databases, helps to avoid reorganizations in case of capacity preserving and reducing schema transformations. we informally present a couple of examples to illustrate the idea of a schema transformation methodology that avoids database reorganization. markus tresch marc h. scholl spatial data integrity constraints in object oriented geographic data modeling karla a. v. borges alberto h. f. laender clodoveu a. davis rewriting aggregate queries using views sara cohen werner nutt alexander serebrenik a hierarchy-aware approach to faceted classification of object-oriented components this article presents a hierarchy-aware classification schema for object-oriented code, where software components are classified according to their behavioral characteristics, such as provided services, employed algorithms, and needed data. in the case of reusable application frameworks, these characteristics are constructed from their model, i.e., from the description of the abstract classes specifying both the framework structure and purpose. in conventional object libraries, the characteristics are extracted semiautomatically from class interfaces. characteristics are term pairs, weighted to represent "how well" they describe component behavior. the set of characteristics associated with a given component forms its software descriptor. a descriptor base is presented where descriptors are organized on the basis of structured relationships, such as similarity and composition. the classification is supported by a thesaurus acting as a language-independent unified lexicon. the descriptor base is conceived for developers who, besides conventionally browsing the descriptors hierarchy, can query the system, specifying a set of desired functionalities and getting a ranked set of adaptable candidates. user feedback is taken into account in order to progressively ameliorate the quality of the descriptors according to the views of the user community. feedback is made dependent on the user typology through a user profile. experimental results in terms of recall and precision of the retrieval mechanism against a sample code base are reported. e. damiani m. g. fugini c. bellettini keep (over)reaching for the stars ravi ganesan usability management maturity, part 2 (abstract): usability techniques - what can you do? thyra l. rauch george a. flanagan what's happening jennifer bruer control layer primitives for the layered multimedia data model michael j. wynblatt gerhard a. schloss exploring data mining implementation karim k. hirji fragmentation: a technique for efficient query processing a "divide and conquer" strategy to compute natural joins by sequential scans on unordered relations is described.
this strategy is shown to always be better than merging scans when both relations must be sorted before joining, and generally better in practical cases when only the largest relation must be sorted. giovanni maria sacco system components for embedded information retrieval from multiple disparate information sources ramana rao daniel m. russell jock d. mackinlay application architecture (panel session): 2tier or 3tier? what is dbms's role? experienced panelists will share their views on application architecture, specially, as it relates to database systems. the discussion will focus on what technologies and mechanisms are necessary for developing web applications, and where these mechanisms should reside. anil k. nori parallel database systems 101 jim gray finding replicated web collections many web documents (such as java faqs) are being replicated on the internet. often entire document collections (such as hyperlinked linux manuals) are being replicated many times. in this paper, we make the case for identifying replicated documents and collections to improve web crawlers, archivers, and ranking functions used in search engines. the paper describes how to efficiently identify replicated documents and hyperlinked document collections. the challenge is to identify these replicas from an input data set of several tens of millions of web pages and several hundreds of gigabytes of textual data. we also present two real-life case studies where we used replication information to improve a crawler and a search engine. we report these results for a data set of 25 million web pages (about 150 gigabytes of html data) crawled from the web. junghoo cho narayanan shivakumar hector garcia-molina business: the 8th layer: e-mail outsourcing sends a message kate gerwig direct: a query facility for multiple databases ulla merz roger king the web's hidden order lada a. adamic bernardo a. huberman ant world (demonstration abstract) paul kantor endre boros ben melamed dave neu vladimir menkov qin shi myung-ho kim examining basic items of a screen design kenji ido toshiki yamaoka design of a multi-media vehicle for social browsing in this paper we present a new approach to the use of computer-mediated communications technology to support distributed cooperative work. in contrast to most of the existing approaches to cscw, we focus explicitly on tools to enable unplanned, informal social interaction. we describe a "social interface" which provides direct, low-cost access to other people through the use of multi-media communications channels. the design of the system centers around three basic concepts derived from the research literature and our own observations of the workplace: social browsing, a virtual workplace, and interaction protocols. we use these design properties to describe a new system concept, and examine the implications for cscw of having automated social interaction available through the desktop workstation. robert w. root languages for multi-database interoperability frederic gingras laks v. s. lakshmanan iyer n. subramanian despina papoulis nematollaah shiri sticky labels lon barfield books without pages/pages without books: revisiting the geography of interface james kalmbach a multi-tier framework for accessing distributed, heterogeneous spatial data in a federation based eis claus hofmann maintenance of cube automatic summary tables materialized views (or automatic summary tables---asts) are commonly used to improve the performance of aggregation queries by orders of magnitude.
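a minimal sketch of the divide-and-conquer idea in the fragmentation entry above, assuming dict-shaped rows and a hash partitioning scheme chosen only for illustration (not sacco's algorithm): both relations are split on the join attribute so that each fragment pair can be joined by independent sequential scans.

```python
from collections import defaultdict

def partition(relation, key, n_fragments):
    # split a list of dict-rows into fragments by hashing the join attribute
    fragments = defaultdict(list)
    for row in relation:
        fragments[hash(row[key]) % n_fragments].append(row)
    return fragments

def fragmented_join(r, s, key, n_fragments=8):
    # join matching fragments only; rows hashed to different fragments can never join
    r_frags, s_frags = partition(r, key, n_fragments), partition(s, key, n_fragments)
    for i in range(n_fragments):
        for r_row in r_frags.get(i, []):
            for s_row in s_frags.get(i, []):
                if r_row[key] == s_row[key]:
                    yield {**r_row, **s_row}

# tiny example with assumed schemas
r = [{"id": 1, "a": "x"}, {"id": 2, "a": "y"}]
s = [{"id": 2, "b": "z"}]
print(list(fragmented_join(r, s, "id")))  # [{'id': 2, 'a': 'y', 'b': 'z'}]
```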
in contrast to regular tables, asts are synchronized by the database system. in this paper, we present techniques for maintaining cube asts. our implementation is based on ibm db2 udb. wolfgang lehner richard sidle hamid pirahesh roberta cochrane query expansion method based on word contribution (poster abstract) keiichiro hoashi kazunori matsumoto naomi inoue kazuo hashimoto a hands-on introduction to collaborative filtering (tutorial session) (abstract only) goals and content: the morning session will introduce the concepts of information filtering, develop a taxonomy of the techniques used, and take a detailed look at present and historical applications of collaborative filtering technology. the afternoon session will investigate design issues including algorithms for making recommendations, obtaining user ratings, privacy, communications, and data storage. brad miller john riedl support for fully interactive playout in disk-array-based video server in a video-on-demand (vod) system, it is desirable to provide the user with interactive browsing functions such as "fast forward" and "fast backward." however, these functions usually require a significant amount of additional resources from the vod system in terms of storage space, retrieval throughput, network bandwidth, etc. moreover, prevalent video compression techniques such as mpeg impose additional constraints on the process since they introduce inter-frame dependencies. in this paper, we devise methods to support variable rate browsing for mpeg-like video streams and minimize the additional resources required. specifically, we consider retrieval for a disk-array-based video server and address the problem of distributing the retrieval requests across the disks. our overall approach for interactive browsing comprises (1) a storage method, (2) placement and sampling methods, and (3) a playout method, where the placement and sampling methods are two alternatives for video segment selection. the segment sampling scheme supports browsing at any desired speed, while minimizing the variation on the number of video segments skipped between samplings. on the other hand, the segment placement scheme supports completely uniform segment sampling across the disk array for some specific speedup rates. experiments for the visual effect of the proposed segment skipping approach have been conducted on mpeg data. it is shown that the proposed method is a viable approach to video browsing. m.-s. chen d. kandlur p. yu optimization of constrained frequent set queries with 2-variable constraints currently, there is tremendous interest in providing ad-hoc mining capabilities in database management systems. as a first step towards this goal, in [15] we proposed an architecture for supporting constraint-based, human-centered, exploratory mining of various kinds of rules including associations, introduced the notion of constrained frequent set queries (cfqs), and developed effective pruning optimizations for cfqs with 1-variable (1-var) constraints. while 1-var constraints are useful for constraining the antecedent and consequent separately, many natural examples of cfqs illustrate the need for constraining the antecedent and consequent jointly, for which 2-variable (2-var) constraints are indispensable. developing pruning optimizations for cfqs with 2-var constraints is the subject of this paper.
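a minimal sketch of the idea behind system-maintained summary tables from the cube-ast entry above, assuming an in-memory dictionary rather than the db2 implementation: base-table inserts and deletes are propagated as deltas to the stored aggregates instead of recomputing them.

```python
from collections import defaultdict

# summary table: group-by key -> [sum, count], kept consistent with the base table
summary = defaultdict(lambda: [0.0, 0])

def apply_delta(group_key, amount, inserted=True):
    # propagate a single base-table change to the aggregate instead of recomputing it
    sign = 1 if inserted else -1
    acc = summary[group_key]
    acc[0] += sign * amount
    acc[1] += sign
    if acc[1] == 0:            # the group no longer appears in the base table
        del summary[group_key]

apply_delta(("2024", "books"), 30.0)
apply_delta(("2024", "books"), 12.5)
apply_delta(("2024", "books"), 30.0, inserted=False)
print(dict(summary))  # {('2024', 'books'): [12.5, 1]}
```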
but this is a difficult problem because: (i) in 2-var constraints, both variables keep changing and, unlike 1-var constraints, there is no fixed target for pruning; (ii) as we show, "conventional" monotonicity-based optimization techniques do not apply effectively to 2-var constraints. the contributions are as follows. (1) we introduce a notion of quasi- succinctness, which allows a quasi-succinct 2-var constraint to be reduced to two succinct 1-var constraints for pruning. (2) we characterize the class of 2-var constraints that are quasi-succinct. (3) we develop heuristic techniques for non-quasi-succinct constraints. experimental results show the effectiveness of all our techniques. (4) we propose a query optimizer for cfqs and show that for a large class of constraints, the computation strategy generated by the optimizer is ccc-optimal, i.e., minimizing the effort incurred w.r.t. constraint checking and support counting. laks v. s. lakshmanan raymond ng jiawei han alex pang metonymy as an organising principle of it communities (doctoral colloquium) olaf boettger cognitive differences in end user searching of a cd-rom index cognitive abilities of fifty university students were tested using eight tests from the kit of factor-referenced cognitive tests. all students searched for references on the same topic using a standard computerized index, and performance in the searches was analyzed using a variety of measures. effects for cognitive differences, as well as for differences in demographic characteristics and knowledge, were identified using multiple regression. perceptual speed had an effect on the quality of searches, and logical reasoning, verbal comprehension, and spatial scanning abilities influenced search tactics. it is suggested that information retrieval systems can be made more accessible to users with different levels of cognitive abilities through improvements that will assist users to scan lists of terms, choose appropriate vocabulary for searching, and select useful references. bryce allen a study of navigational support provided by two world wide web browsing applications steve jones andy cockburn advanced capabilities of the outer join this paper demonstrates that the modeling of complex data structures can be performed easily and naturally in sql using the direct outer join operation as defined in the proposed iso- ansi sql2 standard. this paper goes on to demonstrate four advanced capabilities that can be implemented by sql vendors utilizing the data modeling ability of the outer join. these capabilities are: powerful optimization techniques that can dynamically shorten the access path length; intelligent join view updates that utilize the semantics in the data structure being modeled; direct disparate heterogeneous database access that is transparent and efficient; and automatic conversion of multi-table structures into nested relations allowing for more powerful sql operations. michael m. david optimal unification of bound simple set-terms sergio greco digital production: using alien technology mark swain information visualization for hypermedia systems sougata mukherjea the friendly intelligent tutoring environment the advancement of using the artificial intelligence (ai) methods and techniques in design intelligent tutoring systems (itss) makes understanding them more difficult, so that teachers are less and less prepared to accept these systems. as a result, the gap between researchers in the field of itss and the educational community is constantly widening. 
while itss are becoming more common and proving to be increasingly effective, each one must still be built from scratch at a significant cost. also present itss need quite big development environments, huge computing resources and, in consequence, are expensive and hardly portable to personal computers. this paper describes our efforts toward developing uniform data, explanation and control structures that can be used by a wide circle of authors who are involved in building itss (e.g., domain experts, teachers, curriculum developers, etc.) that is, the model of the itss framework, the **get-bits** model. ljubomir jerinic vladan devedzic at the forge: templates: separating programs from design reuven lerner media streams (demonstration): representing video for retrieval and repurposing marc davis resource reservations in networked multimedia systems daniel mosse object help for guis david freeman a uniform approach toward handling atomic and structured information in the nested relational database model the algebras and query languages for nested relations defined thus far do not allow us to "flatten" a relation scheme by disregarding the internal representation of data. in real life, however, the degree in which the structure of certain information, such as addresses, phone numbers, etc., is taken into account depends on the particular application and may even vary in time. therefore, an algebra is proposed that does allow us to simplify relations by disregarding the internal structure of a certain class of information. this algebra is based on a careful manipulation of attribute names. furthermore, the key operator in this algebra, called "copying," allows us to deal with various other common queries in a very uniform manner, provided these queries are interpreted as operations on classes of semantically equivalent relations rather than individual relations. finally, it is shown that the proposed algebra is complete in the sense of bancilhon and paredaens. marc gyssens dirk van gucht designing for the rest of the world: a consultant's observation susan dray the mre wrapper approach: enabling incremental view maintenance of data warehouses defined on multi-relation information sources some of the most recently proposed algorithms for the incremental maintenance of materialized data warehouses (dw), such as sweep and psweep, offer several significant advantages over previous solutions, such as high-performance, no potential for infinite waits and reduced remote queries and thus reduced network and information source (is) loads. however, similar to many other algorithms, they still have the restricting assumption that each is can be composed of just one single relation. this is unrealistic in practice. in this paper, we hence propose a solution to overcome this restriction. the multi- relation encapsulation (mre) wrapper supports multiple relations in iss in a manner transparent to the rest of the environment. the mre wrapper treats one is composed of multiple relations as if it were a single relation from the dw point of view; thus any existing incremental view maintenance algorithms can now be applied even to such complex iss without any changes. hence, our method maintains all advantages offered by existing algorithms in particular sweep and psweep, while also achieving the additional desired features of being non- intrusive, efficient, flexible and well-behaved. lingli ding xin zhang elke a. rundensteiner looking for a humane interface: will computers ever become easy to use? 
jef raskin an access control model supporting periodicity constraints and temporal reasoning access control models, such as the ones supported by commercial dbmss, are not yet able to fully meet many application needs. an important requirement derives from the temporal dimension that permissions have in many real-world situations. permissions are often limited in time or may hold only for specific periods of time. in this article, we present an access control model in which periodic temporal intervals are associated with authorizations. an authorization is automatically granted in the specified intervals and revoked when such intervals expire. deductive temporal rules with periodicity and order constraints are provided to derive new authorizations based on the presence or absence of other authorizations in specific periods of time. we provide a solution to the problem of ensuring the uniqueness of the global set of valid authorizations derivable at each instant, and we propose an algorithm to compute this set. moreover, we address issues related to the efficiency of access control by adopting a materialization approach. the resulting model provides a high degree of flexibility and supports the specification of several protection requirements that cannot be expressed in traditional access control models. elisa bertino claudio bettini elena ferrari pierangela samarati efficient maintenance of materialized mediated views james j. lu guido moerkotte joachim schue v. s. subrahmanian on indexing mobile objects george kollios dimitrios gunopulos vassilis j. tsotras a survey of distributed deadlock detection algorithms this paper surveys research work performed within the last five years in distributed deadlock detection. the last survey paper on this topic appeared in 1980; since that time a large number of interesting algorithms have been described in the literature. a new, more efficient scheme is the probe-based deadlock detection strategy used by many of the new algorithms. this paper will concentrate on distributed deadlock detection algorithms. only detection of resource deadlocks will be reviewed here, though other types of deadlock handling strategies and environments are briefly mentioned. ahmed k. elmagarmid optimal streaming of synchronized multimedia presentations david a. turner keith w. ross disima: a distributed and interoperable image database system vincent oria m. tamer özsu paul j. iglinski shu lin bin yao a new paradigm for browsing the web marc h. brown robert a. shillner efendi: federated database system of cadlab e. radeke r. böttger b. burkert y. engel g. kachel s. kolmschlag d. nolte digitization and conversion (working session) m. stuart lynn user-defined music sequence retrieval a system for retrieving a sequence of music excerpts or songs based on users' and producers' requirements is proposed in this paper. our system provides a flexible way to retrieve music pieces based on their contents as well as user-defined constraints. the proposed system allows online users to extract a sequence of songs whose first and last tracks are known and at the same time the in-between songs have minimum inter-track differences and satisfy predefined requirements. we model the problem as a constrained minimum cost flow problem which leads to a binary integer linear program (bilp) that can be solved in a reasonable amount of time. masoud alghoniemy ahmed h.
tewfik the padmouse: facilitating selection and spatial positioning for the non- dominant hand ravin balakrishnan pranay patel open journals project (abstract) gary hill leslie carr time and space pamela mead chris pacione incremental relevance feedback although relevance feedback techniques have been investigated for more than 20 years, hardly any of these techniques has been implemented in a commercial full-text document retrieval system. in addition to pure performance problems, this is due to the fact that the application of relevance feedback techniques increases the complexity of the user interface and thus also the use of a document retrieval system. in this paper we concentrate on a relevance feedback technique that allows easily understandable and manageable user interfaces, and at the same time provides high-quality retrieval results. moreover, the relevance feedback technique introduced unifies as well as improves other well-known relevance feedback techniques. ijsbrand jan aalbersberg design lessons from the best of the world wide web hagan heller david rivers a query based approach for integrating heterogeneous data sources ruxandra domenig klaus r. dittrich ensuring transaction atomicity in multidatabase systems sharad mehrotra rajeev rastogi yuri breitbart henry f. korth avi silberschatz life and death of new technology: task, utility and social influences on the use of a communication medium this field experiment investigates individual, structural and social influences on the use of two video telephone systems. one system flourished, while an equivalent system died. we use a time series design and multiple data sources to test media richness theory, critical mass theory, and social influence theories about new media use. results show that the fit between tasks and features of the communications medium influences use to a degree, but cannot explain why only one system survived. critical mass---the numbers of people one can reach on a system--- and social influence---the norms that grow up around a new medium---can explain this phenomenon. robert e. kraut ronald e. rice colleen cool robert s. fish functional completeness in object-oriented databases a definition of completeness in the context of object oriented databases (oodbs) is proposed in this paper. it takes into account the existence of various categories of functions in oodbs, each of which must be complete in itself. the functionality of an oodb can be divided into sets of related functions. for example, functions needed to perform all schema evolution operations or all version management operations belong in two distinct sets. further, each set of functions must include all functions needed to perform all operations defined for that set. thus, for an oodb to be functionally complete, it must support a certain number of sets (or categories) of functions and each such set must be complete in itself. the purpose of this paper is not to give a precise definition of the categories of functions but rather to define a framework within which such categories should be examined. this paper contains a working definition of functional completeness. we would welcome any feedback on our proposal. priti mishra margaret eich searching for unity among diversity: exploring the "interface" concept kari kuutti liam j. bannon analysis of index-sequential files with overflow chainingthe gradual performance deterioration caused by deletions from and insertionsinto an index-sequential file after loading is analyzed. 
the model developed assumes that overflow records are handled by chaining. formulas for computing the expected number of overflow records and the expected number of additional accesses caused by the overflow records for both successful and unsuccessful searches are derived. per-åke larson niagaracq: a scalable continuous query system for internet databases continuous queries are persistent queries that allow users to receive new results when they become available. while continuous query systems can transform a passive web into an active environment, they need to be able to support millions of queries due to the scale of the internet. no existing systems have achieved this level of scalability. niagaracq addresses this problem by grouping continuous queries based on the observation that many web queries share similar structures. grouped queries can share the common computation, tend to fit in memory and can reduce the i/o cost significantly. furthermore, grouping on selection predicates can eliminate a large number of unnecessary query invocations. our grouping technique is distinguished from previous group optimization approaches in the following ways. first, we use an incremental group optimization strategy with dynamic re-grouping. new queries are added to existing query groups, without having to regroup already installed queries. second, we use a query-split scheme that requires minimal changes to a general-purpose query engine. third, niagaracq groups both change-based and timer-based queries in a uniform way. to insure that niagaracq is scalable, we have also employed other techniques including incremental evaluation of continuous queries, use of both pull and push models for detecting heterogeneous data source changes, and memory caching. this paper presents the design of the niagaracq system and gives some experimental results on the system's performance and scalability. jianjun chen david j. dewitt feng tian yuan wang making contact points between text and images pete faraday alistair sutcliffe cactus - clustering categorical data using summaries venkatesh ganti johannes gehrke raghu ramakrishnan a visual interface for a database with version management this paper describes a graphical interface to an experimental database system which incorporates a built-in version control mechanism that maintains a history of the database development and changes. the system is an extension of isis [6], interface for a semantic information system, a workstation-based, graphical database programming tool developed at brown university. isis supports a graphical interface to a modified subset of the semantic data model (sdm) [7]. the isis extension introduces a transaction mechanism that interacts with the version control facilities. a series of version control support tools have been added to isis to provide a notion of history to user-created databases. the user can form new versions of three types of isis objects: a class definition object (a type), the set of instances of a class (the content), and an entity. a version-viewing mechanism is provided to allow for the comparison of various object versions. database operations are grouped together in atomic units to form transactions, which are stored as entities in the database. a sample session demonstrates the capabilities of version and transaction control during the creation and manipulation of database objects. jay w. davison stanley b.
zdonik logjam: a tangible multi-person interface for video logging jonathan cohen meg withgott philippe piernot making graphics physically tangible j. kenneth salisbury using semantic contents and wordnet in image retrieval y. alp aslandogan chuck thier clement t. yu jon zou naphtali rishe extended ephemeral logging: log storage management for applications with long lived transactions john s. keen william j. dally orienteering in an information landscape: how information seekers get from here to there we studied the uses of information search results by regular clients of professional intermediaries. the clients in our study engaged in three different types of searches: (1) monitoring a well-known topic or set of variables over time, (2) following an information-gathering plan suggested by a typical approach to the task at hand, and (3) exploring a topic in an undirected fashion. in most cases, a single search evolved into a series of interconnected searches, usually beginning with a high-level overview. we identified a set of common triggers and stop conditions for further search steps. we also observed a set of common operations that clients used to analyze search results. in some settings, the number of search iterations was reduced by restructuring the work done by intermediaries. we discuss the implications of the interconnected search pattern, triggers and stop conditions, common analysis techniques, and intermediary roles for the design of information access systems. vicki l. o'day robin jeffries using maps as a user interface to a digital library mountaz hascoët xavier soinard heterogeneous databases and high level abstraction a heterogeneous database management system combines multiple dissimilar models of data within a single integrated system. the objective is to allow a user to access data independently of how it is actually organized. for example, a user may access a database as though it were stored relationally (i.e., in tables) [codd70], even though it is actually stored as a codasyl/dbtg or network database [coda71]. in addition, different subpieces of the database may be organized under different data models. the heterogeneous database system must present these to the user as an integrated whole. the user's model of his data may be different from any of the models chosen to implement it. rather than construct a new database system from scratch, we are interested in constructing a heterogeneous system out of existing systems. the key difficulties with this approach are: (1) the formulation of database design methods that are applicable to a variety of different data models, and (2) the development of techniques to translate programs and data between dissimilar data models. in this paper, we briefly describe how high level abstraction has been applied to these problems. the use of abstraction in database systems is related to the application of abstraction techniques in programming languages and artificial intelligence research. randy h. katz predator: an or-dbms with enhanced data types praveen seshadri mark paskin behavioral evaluation of cscw technologies (tutorial session) (abstract only) goals and objectives: evaluating cscw systems is much more difficult than evaluating single-user systems because of the additional group and organizational factors. behavioral evaluation consists of having people use cscw technologies under appropriate conditions and gathering either qualitative or quantitative information about their behavior.
we will examine a variety of methods, including case studies, large scale field studies, surveys, and laboratory studies. tom finholt gary olson judy olson surveyors' forum: notations for concurrent programming m. stella atkins access by content of documents in an office information system this paper presents the integration of retrieval functions of an information retrieval system, iota, in an office information server. besides the linear scanning of the text (using a software and a hardware filter), two access methods are proposed. the first one is based on a simple indexing of documents based on signatures. here, texts are treated as character strings. we call this method textual search. the second one is based on the extention of signature methods for implementing the indexing relation of iota, where meaningful terms (noun groups, for example) are identified in the text together with grammatical information. we call this method of signature computation the indexing-term signature. the resulting access method is called semantic search. we present the current experimentations using the schuss hardware filter as a scanning accelerator and the results of different alternatives of implementation of these retrieval functions. c. jimenez guarin continuous queries over append-only databases in a database to which data is continually added, users may wish to issue a permanent query and be notified whenever data matches the query. if such continuous queries examine only single records, this can be implemented by examining each record as it arrives. this is very efficient because only the incoming record needs to be scanned. this simple approach does not work for queries involving joins or time. the tapestry system allows users to issue such queries over a database of mail and bulletin board messages. the user issues a static query, such as "show me all messages that have been replied to by jones," as though the database were fixed and unchanging. tapestry converts the query into an incremental query that efficiently finds new matches to the original query as new messages are added to the database. this paper describes the techniques used in tapestry, which do not depend on triggers and thus be implemented on any commercial database that supports sql. although tapestry is designed for filtering mail and news messages, its techniques are applicable to any append-only database. douglas terry david goldberg david nichols brian oki the model-assisted global query system for multiple databases in distributed enterprises today's enterprises typically employ multiple information systems, which are independently developed, locally administered, and different in logical or physical designs. therefore, a fundamental challenge in enterprise information management is the sharing of information for enterprise users across organizational boundaries; this requires a global query system capable of providing on-line intelligent assistance to users. conventional technologies, such as schema-based query languages and hard-coded schema integration, are not sufficient to solve this problem. this article develops a new approach, a "model-assisted global query system," that utilizes an on-line repository of enterprise metadata \---the metadatabase---to facilitate global query formulation and processing with certain desirable properties such as adaptiveness and open-systems architecture. a definitional model characterizing the various classes and roles of the required metadata as knowledge for the system is presented. 
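a minimal sketch of the incremental re-evaluation idea in the tapestry entry above, assuming illustrative message fields and an application-level filter rather than the system's sql rewriting: a high-water mark records how much of the append-only store has already been examined, so each run scans only newly appended records.

```python
class ContinuousQuery:
    """re-evaluate a fixed predicate over an append-only list of messages."""

    def __init__(self, predicate):
        self.predicate = predicate
        self.last_seen = 0          # high-water mark: messages [0, last_seen) already checked

    def new_matches(self, store):
        # scan only the suffix appended since the previous call
        fresh = store[self.last_seen:]
        self.last_seen = len(store)
        return [m for m in fresh if self.predicate(m)]

store = []
q = ContinuousQuery(lambda m: m["replied_by"] == "jones")
store.append({"id": 1, "replied_by": "smith"})
store.append({"id": 2, "replied_by": "jones"})
print(q.new_matches(store))   # [{'id': 2, 'replied_by': 'jones'}]
store.append({"id": 3, "replied_by": "jones"})
print(q.new_matches(store))   # only the newly appended match is returned
```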
the significance of possessing this knowledge (via a metadatabase) toward improving the global query capabilities available previously is analyzed. on this basis, a direct method using model traversal and a query language using global model constructs are developed along with other new methods required for this approach. it is then tested through a prototype system in a computer- integrated manufacturing (cim) setting. waiman cheung cheng hsu retrospection on a database system this paper describes the implementation history of the ingres database system. it focuses on mistakes that were made in progress rather than on eventual corrections. some attention is also given to the role of structured design in a database system implementation and to the problem of supporting nontrivial users. lastly, miscellaneous impressions of unix, the pdp-11, and data models are given. michael stonebraker dissemination of collection wide information in a distributed information retrieval system charles l. viles james c. french how do people organize their desks?: implications for the design of office information systems thomas w. malone space-time tradeoffs for orthogonal range queries we investigate the question of (storage) space - (retrieval) time tradeoff for orthogonal range queries on a static database. lower bounds on the product of retrieval time and storage space are obtained in the arithmetic and tree models. p m vaidya "is this document relevant?…probably": a survey of probabilistic models in information retrieval this article surveys probablistic approaches to modeling information retrieval. the basic concepts of probabilistic approaches to information retrieval are outlined and the principles and assumptions upon which the approaches are based are presented. the various models proposed in the development of ir are described, classified, and compared using a common formalism. new approaches that constitute the basis of future research are described. fabio crestani mounia lalmas cornelis j. van rijsbergen iain campbell grouplens: an open architecture for collaborative filtering of netnews collaborative filters help people make choices based on the opinions of other people. grouplens is a system for collaborative filtering of netnews, to help people find articles they will like in the huge stream of available articles. news reader clients display predicted scores and make it easy for users to rate articles after they read them. rating servers, called better bit bureaus, gather and disseminate the ratings. the rating servers predict scores based on the heuristic that people who agreed in the past will probably agree again. users can protect their privacy by entering ratings under pseudonyms, without reducing the effectiveness of the score prediction. the entire architecture is open: alternative software for news clients and better bit bureaus can be developed independently and can interoperate with the components we have developed. paul resnick neophytos iacovou mitesh suchak peter bergstrom john riedl flatland: new dimensions in office whiteboards elizabeth d. mynatt takeo igarashi w. keith edwards anthony lamarca the monitoring of complex active rules with vector representation dongwook kim myoung ho kim yoon joon lee the end of the browser david garcia mining association rules between sets of items in large databases we are given a large database of customer transactions. each transaction consists of items purchased by a customer in a visit. 
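a minimal sketch of the grouplens heuristic above that people who agreed in the past will probably agree again, using a pearson-weighted average of neighbors' ratings assumed here for illustration rather than the exact formula of the rating servers.

```python
from math import sqrt

def pearson(a, b):
    # correlation between two users over the items both have rated
    common = set(a) & set(b)
    if len(common) < 2:
        return 0.0
    ma = sum(a[i] for i in common) / len(common)
    mb = sum(b[i] for i in common) / len(common)
    num = sum((a[i] - ma) * (b[i] - mb) for i in common)
    den = sqrt(sum((a[i] - ma) ** 2 for i in common) * sum((b[i] - mb) ** 2 for i in common))
    return num / den if den else 0.0

def predict(user, item, ratings):
    # weighted average of neighbors' deviations from their own mean rating
    mine = ratings[user]
    base = sum(mine.values()) / len(mine)
    num = den = 0.0
    for other, theirs in ratings.items():
        if other == user or item not in theirs:
            continue
        w = pearson(mine, theirs)
        num += w * (theirs[item] - sum(theirs.values()) / len(theirs))
        den += abs(w)
    return base + (num / den if den else 0.0)

ratings = {"ann": {"a": 5, "b": 3}, "bob": {"a": 4, "b": 2, "c": 5}, "eve": {"a": 1, "b": 5, "c": 2}}
print(round(predict("ann", "c", ratings), 2))  # ann is predicted to like item c
```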
we present an efficient algorithm that generates all significant association rules between items in the database. the algorithm incorporates buffer management and novel estimation and pruning techniques. we also present results of applying this algorithm to sales data obtained from a large retailing company, which shows the effectiveness of the algorithm. rakesh agrawal tomasz imielinski arun swami extending the relational database data model for design applications in recent years many researchers have tried to apply the traditional database systems to design applications. to date, most of these experiments have been largely unsuccessful. insufficient computing power may be one reason for this failure. however, the problem may be more fundamental. we believe the data models of the traditional database systems are intrinsically unsuited for design applications. in this paper we give reasons for this opinion and describe an enhanced relational model which removes some of the weaknesses we have identified. martin hardwick the database group at university of hagen the database area has been one of those areas of computer science which have very directly been driven by application requirements; this is true today in three ways: first, the users want more application specific support from the database, and they expect the dbms to have more semantic application knowledge. second, users want database support for new applications which are sometimes far from the traditional database applications and introduce completely new requirements as well as the need to smoothly integrate database technology with other advanced technologies (e.g. neural nets) in one application. finally, the embedding of databases into interactive work environments - for instance, the use of databases in cooperative environments (computer supported cooperative work) - forces the database community to reconsider some of the traditional beliefs about databases. the database group at hagen university has felt these application pressures in various projects for quite a time. as a consequence the emphasis of our research has shifted from database core technology to application oriented research. where the former research projects were mainly centered around concurrency control, recovery, distribution, and other "classical" database topics, the new research projects are concerned with the support of design environments, where design includes all activities of developing complex artifacts and, in addition, usually involves the cooperation of a variety of people. design here includes areas like mechanical cad, software engineering, vlsi-design, multimedia production and many others. a characteristic of design in all these areas is the use of a variety of heterogeneous tools which leads to complicated interoperability and integration issues. a key motivating factor in our work on design-environments is concurrent engineering which certainly is one of today's major industrial challenges. for computer science it opens new problems, and at the same time it requires integration of different fields. the above mentioned new requirements for databases apply in prominent ways, including the issues of managing complex data, interoperability and integration, and support of teamwork. the second research area is distributed learning environments. 
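a minimal sketch of the support/confidence framework in the association-rule entry above, using brute-force itemset counting assumed only for illustration; the paper's buffer management, estimation, and pruning techniques are omitted.

```python
from itertools import combinations

def association_rules(transactions, min_support=0.5, min_confidence=0.7):
    n = len(transactions)
    support = {}
    # count the support of every itemset up to size 3 (brute force, illustration only)
    for t in transactions:
        for size in (1, 2, 3):
            for itemset in combinations(sorted(t), size):
                support[itemset] = support.get(itemset, 0) + 1
    frequent = {s: c / n for s, c in support.items() if c / n >= min_support}
    rules = []
    for itemset, sup in frequent.items():
        if len(itemset) < 2:
            continue
        for k in range(1, len(itemset)):
            for lhs in combinations(itemset, k):
                conf = sup / frequent.get(lhs, float("inf"))
                if conf >= min_confidence:
                    rhs = tuple(i for i in itemset if i not in lhs)
                    rules.append((lhs, rhs, sup, conf))
    return rules

baskets = [{"bread", "milk"}, {"bread", "milk", "beer"}, {"bread", "diapers"}, {"milk", "bread"}]
for lhs, rhs, sup, conf in association_rules(baskets):
    print(lhs, "->", rhs, f"support={sup:.2f} confidence={conf:.2f}")
```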
we are working towards systems that in future will offer, in an integrated way, computer based training and multimedia learning material, access to all sorts of information bases, communication facilities, conferencing systems, and simulation, experimentation and exercising environments. such systems are not only important for the changing needs of university education (first of all continuing education), but also for industrial education and training systems, especially in geographically decentralized organizations. while this work is not a primarily database centered one, databases play a key role as the repositories for distributed hypermedia information. part of our work in this second research area involves developing advanced teaching material for computer based learning in computer science, and in cooperation with other faculties, in areas like mathematics, humanities, economy, and history. the database group in hagen, including the computer based learning group, consists of about 20 scientists and technicians. about half of these are financed by research and development contracts. a special unit is concerned with technology transfer and database testing for certain types of applications. closely connected to the database group is the institute for automation, production and information management. this institute was founded by three different groups of hagen university: an economics group with a special background in pps, an automation group with the key area robotics, and our group. this institute acts as a platform for industry projects. the research in the hagen database group is not narrow focused but tries to address the multi-faceted problems of design and learning environments from different angles. the major projects are described in the following, together with the main members of the research teams. for each project, a small selection of publications is listed. gunter schlageter thomas berkel eberhard heuel silke mittrach andreas scherer wolfgang wilkes conceptual schema analysis: techniques and applications the problem of analyzing and classifying conceptual schemas is becomig increasingly important due to the availability of a large number of schemas related to existing applications. the purposes of schema analysis and classification activities can be different: to extract information on intensional properties of legacy systems in order to restructure or migrate to new architectures; to build libraries of reference conceptual components to be used in building new applications in a given domain; and to identify information flows and possible replication of data in an organization. this article proposes a set of techniques for schema analysis and classification to be used separately or in combination. the techniques allow the analyst to derive significant properties from schemas, with human intervention limited as far as possible. in particular, techniques for associating descriptors with schemas, for abstracting reference conceptual schemas based on schema clustering, and for determining schema similarity are presented. a methodology for systematic schema analysis is illustrated, with the purpose of identifying and abstracting into reference components the similar and potentially reusable parts of a set of schemas. experiences deriving from the application of the proposed techniques and methodology on a large set of entity-relationship conceptual schemas of information systems in the italian public administration domain are described s. castano v. de antonellis m. g. 
fugini b. pernici my partner is a real dog salvatore parise sara kiesler lee sproull keith waters design and implementation of advanced knowledge processing in the kbms krisys j. thomas s. deßloch n. mattos issues of online research repositories from the perspective of the biomedical sciences this commentary on joseph y. halpern's proposal for a computing research repository discusses differences in traditions and practices of online publishing and repositories between computing and biomedical sciences. issues of accessibility and archiving are also discussed. david l. armbruster integrating geometrical and linguistic analysis for email signature block parsing the signature block is a common structured component found in email messages. accurate identification and analysis of signature blocks is important in many multimedia messaging and information retrieval applications such as email text-to-speech rendering, automatic construction of personal address databases, and interactive message retrieval. it is also a very challenging task, because signature blocks often appear in complex two-dimensional layouts which are guided only by loose conventions. traditional text analysis methods designed to deal with sequential text cannot handle two-dimensional structures, while the highly unconstrained nature of signature blocks makes the application of two-dimensional grammars very difficult. in this article, we describe an algorithm for signature block analysis which combines two-dimensional structural segmentation with one-dimensional grammatical constraints. the information obtained from both layout and linguistic analysis is integrated in the form of weighted finite-state transducers. the algorithm is currently implemented as a component in a preprocessing system for email text-to-speech rendering. hao chen jianying hu richard w. sproat integration of face-to-face and video-mediated meetings: hermes tomoo inoue ken-ichi okada yutaka matsushita integration of spatial join algorithms for processing multiple inputs several techniques that compute the join between two spatial datasets have been proposed during the last decade. among these methods, some consider existing indices for the joined inputs, while others treat datasets with no index, providing solutions for the case where at least one input comes as an intermediate result of another database operator. in this paper we analyze previous work on spatial joins and propose a novel algorithm, called slot index spatial join (sisj), that efficiently computes the spatial join between two inputs, only one of which is indexed by an r-tree. going one step further, we show how sisj and other spatial join algorithms can be implemented as operators in a database environment that joins more than two spatial datasets. we study the differences between relational and spatial multiway joins, and propose a dynamic programming algorithm that optimizes the execution of complex spatial queries. nikos mamoulis dimitris papadias a scrollbar-based visualization for document navigation donald byrd translucent history andreas genau axel kramer tirs: a topological information retrieval system satisfying the requirements of the waller-kraft wish list most document retrieval systems based on probabilistic models of feature distributions assume random selection of documents for retrieval. the assumptions of these models are met when documents are randomly selected from the database or when retrieving all available documents.
a more suitable model for retrieval of a single document assumes that the best document available is to be retrieved first. models of document retrieval systems assuming random selection and best-first selection are developed and compared under binary independence and two poisson independence feature distribution models. under the best-first model, feature discrimination varies with the number of documents in each relevance class in the database. a weight similar to the inverse document frequency weight and consistent with the best-first model is suggested which does not depend on knowledge of the characteristics of relevant documents. s. cater d. kraft inspiring your users to learn daniel e. wilson user preferences when searching individual and integrated full-text databases soyeon park competitive testing: issues and methodology kristyn greenwood kelly braun suzy czarkowski pad++: a zoomable graphical interface system benjamin b. bederson james d. hollan active services for federated databases genoveva vargas-solar christine collet helena g. ribeiro integrating culture into interface design julie khaslavsky a tsql2 tutorial richard t. snodgrass ilsoo ahn gad ariav don batory james clifford curtis e. dyreson ramez elmasri fabio grandi christian s. jensen wolfgang käfer nick kline krishna kulkarni t. y. cliff leung nikos lorentzos john f. roddick arie segev mi cumulating the science of hci: from s-r compatibility to transcription typing in keeping with our claim that an applied psychology of hci must be based on cumulative work within a unified framework, we present two extensions of the model human processor. a model of immediate response behavior and stimulus- response (s-r) compatibility is presented and extended to a new domain: transcription typing. parameters are estimated using one s-r compatibility experiment, used to make a priori predictions in four other s-r compatibility tasks, and then carried over into the area of typing. a model of expert transcription typing is described and its prediction of typing phenomena is demonstrated and summarized. b. e. john a. newell an analysis of the structural validity of ternary relationships in entity relationship modeling james dullea ii-yeol song understanding and constructing shared spaces with mixed-reality boundaries we propose an approach to creating shared mixed realities based on the construction of transparent boundaries between real and virtual spaces. first, we introduce a taxonomy that classifies current approaches to shared spaces according to the three dimensions of transportation, artificiality, and spatiality. second, we discuss our experience of staging a poetry performance simultaneously within real and virtual theaters. this demonstrates the complexities involved in establishing social interaction between real and virtual spaces and motivates the development of a systematic approach to mixing realities. third, we introduce and demonstrate the technique of mixed- reality boundaries as a way of joining real and virtual spaces together in order to address some of these problems. steve benford chris greenhalgh gail reynard chris brown boriana koleva stholes: a multidimensional workload-aware histogram attributes of a relation are not typically independent. multidimensional histograms can be an effective tool for accurate multiattribute query selectivity estimation. in this paper, we introduce _stholes_, a "workload- aware" histogram that allows bucket nesting to capture data regions with reasonably uniform tuple density. 
_stholes_ histograms are built without examining the data sets, but rather by just analyzing query results. buckets are allocated where needed the most as indicated by the workload, which leads to accurate query selectivity estimations. our extensive experiments demonstrate that _stholes_ histograms consistently produce good selectivity estimates across synthetic and real-world data sets and across query workloads, and, in many cases, outperform the best multidimensional histogram techniques that require access to and processing of the full data sets during histogram construction. nicolas bruno surajit chaudhuri luis gravano support for multiprocessing john d. mcgregor arthur m. riehl performance measurements on hep - a pipelined mimd computer a pipelined implementation of mimd operation is embodied in the hep computer. this architectural concept should be carefully evaluated now that such a computer is available commercially. this paper studies the degree of utilization of pipelines in the mimd environment. a detailed analysis of two extreme cases indicates that pipeline utilization is quite high. although no direct comparisons are made with other computers, the low pipeline idle time in this machine indicates that this architectural technique may be more beneficial in an mimd machine than in either sisd or simd machines. harry f. jordan authentication protocols for personal communication systems masquerading and eavesdropping are major threats to the security of wireless communications. to provide proper protection for the communication of the wireless link, contents of the communication should be enciphered and mutual authentication should be conducted between the subscriber and the serving network. several protocols have been proposed by standards bodies and independent researchers in recent years to counteract these threats. however, the strength of these protocols is usually weakened in the roaming environment where the security breach of a visited network could lead to persistent damages to subscribers who visit. the subscriber's identity is not well protected in most protocols, and appropriate mechanisms solving disputes on roaming bills are not supported either. to solve these problems, new authentication protocols are proposed in this paper with new security features that have not been fully explored before. hung-yu lin lein harn the mesh superceded? two-dimensional interconnection schemes have some inherent advantages because of their linear area and constant wire-lengths. the nearest-neighbor mesh is such a topology that has enjoyed widespread acceptance. we investigate a family of bus-based topologies called the double-lattice-meshes, and propose a variation to improve their properties. we show that the bus-based topologies perform better than the mesh for a variety of communication structures. in particular, when global communication is needed, they provide larger effective bandwidth, and when localized communication is permissible, they provide the largest neighborhoods for a given communication capacity. l. v. kale self-stabilizing sliding window arq protocols john m. spinelli design of an integrated services packet network the integrated services digital network (isdn) has been proposed as a way of providing integrated voice and data communications services on a universal or near-universal basis.
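a minimal sketch of histogram-based selectivity estimation for the stholes entry above, assuming one-dimensional, non-nested buckets with uniform density inside each bucket; the feedback-driven construction and bucket nesting described in the abstract are omitted.

```python
# each bucket: (low, high, tuple_count), with tuples assumed uniformly spread inside it
buckets = [(0, 10, 100), (10, 50, 40), (50, 100, 10)]

def estimate(query_lo, query_hi, buckets):
    # sum each bucket's contribution in proportion to how much the query range overlaps it
    total = 0.0
    for lo, hi, count in buckets:
        overlap = max(0.0, min(hi, query_hi) - max(lo, query_lo))
        if overlap > 0:
            total += count * overlap / (hi - lo)
    return total

print(estimate(5, 20, buckets))   # 50 from the first bucket + 10 from the second = 60.0
```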
in this paper, i argue that the evolutionary approach inherent in current isdn proposals is unlikely to provide an effective long-term solution and advocate a more revolutionary approach, based on the use of advanced packet switching technology. the bulk of this paper is devoted to a detailed description of an integrated services packet network (ispn), which i offer as an alternative to current isdn proposals. jonathan s. turner measurement management service p. d. amer l. n. cassel performance of a message-based multiprocessor j. sanguinetti b. kumar automatic tcp buffer tuning with the growth of high performance networking, a single host may have simultaneous connections that vary in bandwidth by as many as six orders of magnitude. we identify requirements for an automatically-tuning tcp to achieve maximum throughput across all connections simultaneously within the resource limits of the sender. our auto-tuning tcp implementation makes use of several existing technologies and adds dynamically adjusting socket buffers to achieve maximum transfer rates on each connection without manual configuration. our implementation involved slight modifications to a bsd-based socket interface and tcp stack. with these modifications, we achieved drastic improvements in performance over large bandwidth-delay paths compared to the default system configuration, and significant reductions in memory usage compared to hand-tuned connections, allowing servers to support at least twice as many simultaneous connections. jeffrey semke jamshid mahdavi matthew mathis a proposal for an improved network layer of a lan g rossi c garavaglia efficient vector processing on dataflow supercomputer sigma-1 efficiency in vector handling is the key to obtaining high performance in numerical programs. so far, the main defect of dataflow computers has been inefficiency in vector processing. we propose structure-flow processing as a new scheme for handling data structures such as vectors in dataflow architecture. the main objective of structure-flow processing is to enhance the vector processing performance of a dataflow computer. in this structure-flow processing scheme, the arrival of a data structure unrolls the control structure which processes the data structure itself. a high-level structure is an implementation mechanism of the structure-flow scheme on a practical dataflow computer. since all the computation is executed by instruction-level dataflow architecture, scalar-level parallelism and function-level parallelism are also fully utilized by this scheme. the sigma-1 architecture that supports high-level structure processing is discussed and its performance is measured. according to the measurements, vector programs can be executed three to four times faster than by unfolding using scalar dataflow processing. k. hiraki s. sekiguchi t. shimada editorial charles e. perkins network traffic measurement and modeling carey l. williamson mp/c: a multiprocessor/computer architecture a computer architecture for concurrent computing is proposed that has the shared memory aspect of tightly coupled multiprocessor systems and also the connection simplicity associated with message-connected, loosely coupled multicomputer systems. a large address space is dynamically partitioned into contiguous segments that can be accessed by a single processor. the partitioning is accomplished by switching the system buses, using semiconductor switches.
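the auto-tuning tcp work above rests on sizing socket buffers near each connection's bandwidth-delay product. the paper itself modifies a bsd kernel; the sketch below is only a hedged user-space illustration of the sizing rule, with the bandwidth and round-trip-time estimates assumed to come from some external estimator.

```python
import socket

def tune_send_buffer(sock, est_bandwidth_bps, est_rtt_s, max_bytes=4 * 1024 * 1024):
    """Size the send buffer near the connection's bandwidth-delay product.

    est_bandwidth_bps and est_rtt_s are assumed inputs (e.g. from recent
    throughput samples and measured round-trip times); the clamp values are
    illustrative, not taken from the paper.
    """
    bdp_bytes = int(est_bandwidth_bps / 8 * est_rtt_s)
    size = min(max(bdp_bytes, 16 * 1024), max_bytes)   # keep within sane limits
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, size)
    return sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)

# usage: periodically re-tune as the estimates change, e.g.
# sock = socket.create_connection(("example.com", 80))
# tune_send_buffer(sock, est_bandwidth_bps=100e6, est_rtt_s=0.05)  # ~625 kB BDP
```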
the completion of a concurrent process is signaled by a processor's return to an idle state and the reattachment of its memory segment to the neighboring active processor. in effect, the assignment of an address sequence and the activation of a processor is a process fork operation, and the processor deactivation and memory segment reattachment is a process join. following a description of the mp/c structure and basic operation, some additional enhancements of the system, which improve the applicability of mp/c to many classes of computations, are outlined. applications include tree-structured multiprocessing, recursive and nondeterministic procedures, arbitrary concurrent computations, very high precision numeric calculations, and process-structured operating systems. the linear mp/c structure is extensible to higher dimensions. a two-dimensional system is described, and its usefulness for data base operations and array processing is discussed. bruce w. arden ran ginosar decentralized optimal traffic engineering in the internet distributed optimal traffic engineering in the presence of multiple paths has been found to be a difficult problem to solve. in this paper, we introduce a new approach in an attempt to tackle this problem. this approach has its basis in nonlinear control theory. more precisely, it relies on the concept of sliding modes. we develop a family of control laws, each of them having the property that the steady-state network resource allocation yields the maximum of the given utility function, subject to the network resource constraints. these control laws not only allow each ingress node to independently adjust its traffic sending rate but also provide a scheme for optimal traffic load redistribution among multiple paths. the only nonlocal information needed is binary feedback from each congested node in the path. moreover, the algorithms presented are applicable to a large class of utility functions, namely, utility functions that can be expressed as the sum of concave functions of the sending rates. we show that the technique can be applied not only to rate adaptive traffic with multiple paths, but also to assured service traffic with multiple paths. preliminary case studies show that this technique is potentially very useful for optimal traffic engineering in a multiple-class-of-service and multiple- path enabled internet, e.g., differentiated services enabled multi-protocol label switching networks. constantino lagoa hao che dynamic load sharing algorithm with a weighted load representation seung ho cho sang young han inside risks: just a matter of bandwidth lauren weinstein pseudo vector processor based on register-windowed superscalar pipeline k. nakazawa h. nakamura h. imori s. kawabe mobility management in integrated wireless-atm networks bala rajagopalan carry-over round robin: a simple cell scheduling mechanism for atm networks debanjan saha sarit mukherjee satish k. tripathi competitive routing in multiuser communication networks ariel orda raphael rom nahum shimkin multi-node communication in hyper-ring networks fadi n. sibai portable, continuous recording of complete computer behavior with low overhead (extended abstract) thomas e. willis george b. adams a virtual circuit deflection protocol emmanouel a. varvarigos jonathan p. lang synthesis of application-specific heterogeneous multiprocessor systems (abstract) heterogeneous systems can achieve enhanced performance and/or cost- effectiveness over homogeneous systems. 
sos, a formal method to synthesize optimal heterogeneous systems for given applications, involves creation and solution of a mixed integer-linear programming model. a primary component of the model is the set of relations to be satisfied to ensure proper ordering of various events in the task execution, and completeness and correctness. experiments indicate sos can be useful in designing heterogeneous systems. shiv prakash alice c. parker frame content independent stripping token rings have the property that a station that transmits a frame on the ring is responsible for removing the frame after it has been delivered to the destination stations. the algorithm to perform the frame removal is called 'frame stripping'. most existing algorithms strip frames based on their content. this is not always adequate. the need for a new algorithm arises from the fact that frames transmitted by a station need not have the station's own address as the source address for a variety of reasons - such as when a bridge transmits a frame or when another address is used as the source address by a station instead of its original station address. this paper discusses a new frame content independent stripping (fcis) algorithm for token rings. the fcis algorithm counts the number of frames transmitted by the station after capturing the token. in addition, the station places a special delimiter frame at the end of the transmission of frames, before releasing the token. the station then strips all received frames until either the number of frames stripped equals the number of frames transmitted or when either the delimiter frame or a token is received. we demonstrate that the fcis algorithm has a minimal impact on the performance of the ring. we study the robustness of the algorithm to errors and demonstrate that its reliability is as good as the inherent mechanisms of the token ring. the algorithm studied here is very simple to implement and interoperates with other stations not implementing this algorithm. the algorithm places no topological restrictions on the network and has the attractive feature of removing large fragments and no-owner frames. h. yang k. k. ramakrishnan dynamic and static load scheduling performance on a numa shared memory multiprocessor xiaodong zhang a brief overview of atm: protocol layers, lan emulation, and traffic management asnychronous transfer mode (atm) has emerged as the most promising technology in supporting future broadband multimedia communication services. to accelerate the deployment of atm technology, the atm forum, which is a consortium of service providers and equipment vendors in the communication industries, has been created to develop implementation and specification agreements. in this article, we present a brief overview on atm protocol layers and current progress on lan emulation and traffic management in the atm forum. kai-yeung siu raj jain the stupid network: essential yet unattainable? andrew odlyzko the construction of a retargetable simulator for an architecture template bart kienhuis ed deprettere kees vissers pieter van der wolf on the use of on-demand layer addition (odl) with mutli-layer multicast transmission techniques this work deals with the multicast transmission of data using multiple multicast groups. one reason to use multiple groups is to address the potential heterogeneity of receivers. but there is a risk that some of the groups are not used and sending data to a group with no receiver has many costs that are often underestimated. 
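the fcis abstract above states its stripping rule precisely enough to code: count the frames sent after capturing the token, append a delimiter, then strip incoming frames until the count is exhausted or a delimiter or token arrives. the sketch below mirrors that rule and is not taken from the paper; the frame representation and the choice to strip the station's own delimiter are assumptions.

```python
class FcisStripper:
    """Frame content independent stripping, per the counting rule sketched above."""

    def __init__(self):
        self.to_strip = 0        # frames still to be stripped
        self.stripping = False

    def on_token_captured(self, frames_to_send):
        # transmit the frames, then a special delimiter frame, then release the token
        self.to_strip = len(frames_to_send)
        self.stripping = True
        return list(frames_to_send) + ["DELIMITER"]

    def on_frame_received(self, frame):
        """Return True if the received frame should be stripped from the ring."""
        if not self.stripping:
            return False
        if frame in ("DELIMITER", "TOKEN") or self.to_strip == 0:
            # stop conditions: our delimiter or a token came back, or the
            # number of frames stripped equals the number transmitted
            self.stripping = False
            return frame == "DELIMITER"   # strip our own delimiter, never a token
        self.to_strip -= 1
        return True

# usage sketch
station = FcisStripper()
station.on_token_captured(["f1", "f2"])
assert station.on_frame_received("f1") and station.on_frame_received("f2")
assert station.on_frame_received("DELIMITER") is True   # clean up the delimiter
```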
as ip multicast becomes widely deployed and used, these issues may well compromise its scalability. in this paper we introduce a new protocol, odl (on demand layer addition), which _enables a source to use only the layers that are actually required by the current set of receivers_. we describe its behavior when used with several kinds of packet scheduling schemes (cumulative or not) and different scenarios (one-to-many versus many-to-many). we have implemented the odl protocol, integrated it in the mcl multicast library, and we report several experiments that assess its benefits. vincent roca primitive based architectures c. fritsch t. sanchez j. anaya fddi and timing requirements for image transmission this article presents the timing requirements for digitized video transmission and the synchronous transmission mode of the fiber distributed data interface (fddi) protocol. first, we develop a timed model of the fddi protocol and we prove that it meets its standard requirement. secondly, we verify that the temporal constraints for real time image transmission are fulfilled by the fddi protocol. b. cousin implications of hierarchical n-body methods for multiprocessor architecture we first examine the key architectural implications of realistically scaling a representative member of this important class of applications. using scaling methods that reflect the concerns of an application scientist leads to different conclusions than does a naive scaling model: both the communication-to-computation ratio and the amount of cache memory per processor required for effective performance increase with scaling. we then examine the effect of a shared address space versus message passing as the communication abstraction. we show that the lack of a shared address space substantially increases the programming complexity and performance overheads of a message-passing implementation. jaswinder pal singh a center-controlled dynamic rerouting scheme for fast and reliable handover in mobile atm networks yoshito takahashi takehiko kobayashi verification of a microprocessor using real world applications you-sung chang seungjong lee in-cheol park chong-min kyung optimal implementation of the weakest failure detector for solving consensus (brief announcement) unreliable failure detectors were introduced by chandra and toueg [2] as a mechanism that provides (possibly incorrect) information about process failures. they showed how unreliable failure detectors can be used to solve the consensus problem in asynchronous systems. they also showed in [1] that one of the classes of failure detectors they defined, namely eventually strong (◊s), is the weakest class that allows solving consensus. this brief announcement presents a new algorithm implementing ◊s. due to space limitation, the reader is referred to [4] for an in-depth presentation of the algorithm (system model, correctness proof, and performance analysis). here, we present the general idea of the algorithm and compare it with other algorithms implementing unreliable failure detectors. the algorithm works as follows. we have n processes, p1, …, pn. initially, process p1 starts sending messages periodically to the rest of the processes. the rest of the processes initially trust p1 and wait for its messages. if a process does not receive a message within some timeout period from its trusted process, then it suspects its trusted process and takes the next process as its new trusted process.
if a process trusts itself, then it starts sending messages periodically to its successors. otherwise, it just waits for periodic messages from its trusted process. if, at some point, a process receives a message from a process pi such that pi precedes its trusted process, then it will trust pi again, increasing the value of its timeout period with respect to pi. with this algorithm, eventually all the correct processes will permanently trust the same correct process. this provides the eventual weak accuracy property required by ◊s. by simply suspecting the rest of the processes, we obtain the strong completeness property required by ◊s. our algorithm compares favorably with the algorithms proposed in [2] and [3] in terms of the number and size of the messages periodically sent and the total amount of information periodically exchanged. since algorithms implementing failure detectors need not necessarily be periodic, we propose a new and (we believe) more adequate performance measure, which we call the eventual monitoring degree. informally, this measure counts the number of pairs of correct processes that will infinitely often communicate. we show that the proposed algorithm is optimal with respect to this measure. table 1 summarizes the comparison, where c denotes the number of correct processes and lfa denotes the proposed algorithm. mikel larrea antonio fernández sergio arevalo ip switching - atm under ip peter newman greg minshall thomas l. lyon an optimized contention protocol for broadband networks this paper describes the concepts underlying an alternative link-level protocol for broadband local networks. the protocol uses implicit slotting of the contention channel to support larger networks, improve performance, and provide reliable distributed collision recognition without reinforcement. it is designed such that compatible interfaces to existing csma/cd-based systems can be provided. w. worth kirkman books michele tepper the anatomy of a context-aware application andy harter andy hopper pete steggles andy ward paul webster wtcp: a reliable transport protocol for wireless wide-area networks prasun sinha narayanan venkitaraman raghupathy sivakumar vaduvur bharghavan exploiting weak connectivity for mobile file access l. b. mummert m. r. ebling m. satyanarayanan experience with formal methods in protocol development deepinder sidhu anthony chung thomas p. blumer interleaving: a multithreading technique targeting multiprocessors and workstations there is an increasing trend to use commodity microprocessors as the compute engines in large-scale multiprocessors. however, given that the majority of the microprocessors are sold in the workstation market, not in the multiprocessor market, it is only natural that architectural features that benefit only multiprocessors are less likely to be adopted in commodity microprocessors. in this paper, we explore multiple-context processors, an architectural technique proposed to hide the large memory latency in multiprocessors. we show that while current multiple-context designs work reasonably well for multiprocessors, they are ineffective in hiding the much shorter uniprocessor latencies using the limited parallelism found in workstation environments. we propose an alternative design that combines the best features of two existing approaches, and present simulation results that show it yields better performance for both multiprogrammed workloads on a workstation and parallel applications on a multiprocessor.
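since the brief announcement above walks through its algorithm step by step, a compact sketch of the rotating trusted-process rule follows; the process numbering, the wrap-around after pn, and the timeout bookkeeping are simplified assumptions of this sketch, not details from the paper.

```python
class EventuallyStrongFD:
    """Sketch of the rotating trusted-process rule described in the abstract above."""

    def __init__(self, my_id, n, base_timeout=1.0):
        self.my_id = my_id
        self.n = n
        self.trusted = 1                                   # everyone starts trusting p1
        self.timeout = {p: base_timeout for p in range(1, n + 1)}

    def i_am_leader(self):
        # a process that trusts itself sends periodic "alive" messages to its successors
        return self.trusted == self.my_id

    def on_timeout(self):
        # no message from the trusted process in time: suspect it and move on
        # (wrapping around after pn is an assumption of this sketch)
        self.trusted = self.trusted % self.n + 1

    def on_message(self, sender):
        # a message from a process preceding the current trusted one wins trust back,
        # with a longer timeout so it is suspected less eagerly next time
        if sender < self.trusted:
            self.timeout[sender] *= 2
            self.trusted = sender

    def suspected(self):
        # strong completeness: suspect everybody except the trusted process
        return [p for p in range(1, self.n + 1) if p != self.trusted]
```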
by addressing the needs of the workstation environment, our proposal makes multiple contexts more attractive for commodity microprocessors. james laudon anoop gupta mark horowitz integrating satellite links into a land-based packet network it is becoming apparent that communication via satellite links will play an increasing role in data networks. the challenge is to integrate this new communications option into an existing land-based network such that users can realize its full benefits. this paper discusses that solutions for the sl-10* packet network. the design criteria and performance objectives are also examined. m. chu d. drynan l. r. benning the ncube family of high-performance parallel computer systems corporate ncube performance experiences on sun's wildfire prototype lisa noordergraaf ruud van der pas connection-based communication in dynamic networks amir herzberg protocol implementation on the nectar communication processor we have built a high-speed local-area network called nectar that uses programmable communication processors as host interfaces. in contrast to most protocol engines, our communication processors have a flexible runtime system that supports multiple transport protocols as well as application-specific activities. in particular, we have implemented the tcp/ip protocol suite and nectar-specific communication protocols on the communication processor. the nectar network currently has 25 hosts and has been in use for over a year. the flexibility of our communication processor design does not compromise its performance. the latency of a remote procedure call between application tasks executing on two nectar hosts is less than 500 μsec. the same tasks can obtain a throughput of 28 mbit/sec using either tcp/ip or nectar-specific transport protocols. this throughput is limited by the vme bus that connects a host and its communication processor. application tasks executing on two communication processors can obtain 90 mbit/sec of the possible 100 mbit/sec physical bandwidth using nectar-specific transport protocols. e. c. cooper p. a. steenkiste r. d. sansom b. d. zill performance and implementation of clustered-ofdm for wireless communications an elegant means by which high-speed burst wireless transmission can be accomplished with small amounts of overhead is through a novel technique referred to as clustered-ofdm (cimini et al., 1996). by using ofdm modulation with a long symbol interval, clustered-ofdm overcomes the complex and costly equalization requirements associated with single carrier systems. moreover, the need for highly linear power amplifiers typically required in ofdm systems is alleviated through the use of multiple transmit antennas combined with nonlinear coding. the clustering technique also leads to a natural implementation of transmit diversity. this paper reports on preliminary results on the performance of a clustered-ofdm system as well as the design and implementation of a clustered-ofdm transmitter. the prototype transmitter can deliver 7.5 mbps, and it is expected that this data rate could be easily tripled with existing technology in a second generation system. the paper also describes the architectural trade-offs made in order to reduce the hardware complexity of the boards as well as some experimental results showing the operation of the transmitter. babak daneshrad leonard j. 
cimini manny carloni nelson sollenberger supporting ip multicast for mobile hosts a binary feedback scheme for congestion avoidance in computer networks with a connectionless network layer we propose a scheme for _congestion avoidance_ in networks using a connectionless protocol at the network layer. the scheme uses feedback from the network to the users of the network. the interesting challenge for the scheme is to use a minimal amount of feedback (one bit in each packet) from the network to adjust the amount of traffic allowed into the network. the servers in the network detect congestion and set a _congestion indication_ bit on packets flowing in the forward direction. the congestion indication is communicated back to the users through the transport-level acknowledgement. the scheme is distributed, adapts to the dynamic state of the network, converges to the optimal operating point, is quite simple to implement, and has low overhead while operational. the scheme also addresses a very important aspect of _fairness_ in the service provided to the various sources utilizing the network. the scheme attempts to maintain fairness in the service provided to multiple sources. this paper presents the scheme and the analysis that went into the choice of the various decision mechanisms. we also address the performance of the scheme under transient changes in the network and for pathological conditions. k. k. ramakrishnan raj jain research areas in computer communication l. kleinrock transport issues in the network file system bill nowicki dynamic rerouting tag schemes for the augmented data manipulator network the augmented data manipulator (adm) is a multistage interconnection network designed for large-scale, parallel processing systems. this paper is an extension of an earlier work in which the use of the inverse adm (iadm) network in an mimd environment was investigated. dynamically rerouting messages to avoid busy or faulty links is explored for both the adm and iadm networks. several schemes are presented. in some cases, there is no increase in tag overhead, but the switching elements are more complex. in other cases, the size of the routing tag is increased by one bit, but the switching elements are not as complex. a new broadcasting capability is developed that allows one processor to send a message to any number of other processors (with some restriction on the destination addresses). finally, a scheme for dynamically rerouting a broadcast message is presented. robert j. mcmillen howard jay siegel the parallel protocol engine matthias kaiserswerth twentenet: a lan with message priorities, design and performance considerations. this paper discusses design and performance aspects of twentenet, one of the few implemented lans which offers a service based on message priorities. the medium access mechanism uses the csma/cd principle, however with a deterministic collision resolution method. these characteristics make twentenet suitable for real-time applications, as well as a mixture of real-time and non-real-time applications. the general system structure is introduced, followed by a detailed description of the priority access method. the performance of the system is shown for various traffic conditions and distributions of message priorities. the effect of system parameters, such as transmission rate, message length, cable length, and retry limits, is indicated. i. g. niemegeers c. a. vissers editorial kwang-cheng chen jin-fu chang justin c.-i.
chuang yutaka takahashi dynamic reservation multiple access (drma): a new multiple access scheme for personal communication systems (pcs) to improve the spectrum efficiency of integrated voice and data services in personal communication system (pcs), several reservation-type multiple access schemes, such as packet reservation multiple access (prma), dynamic time division multiple access (d-tdma), resource auction multiple access (rama), etc., have been proposed. prma uses the data packet itself to make a channel reservation, and is inefficient in that each unsuccessful reservation wastes one slot. however, it does not have a fixed reservation overhead and offers shorter access delay. on the other hand, fixed reservation overhead is unavoidable in both rama and d-tdma. compared to d-tdma and prma, rama is superior in the sense that its slot assignment is independent of the traffic load. but its implementation is difficult. with these observations, a new reservation protocol, called dynamic reservation multiple access (drma), is proposed in this paper. with this new protocol, the success probability of channel access is greatly improved at the expense of slightly increased system complexity. it solves the problem of inefficiency in prma, but without introducing the fixed reservation overhead as in d-tdma and rama. in addition, it is more suited to the dynamic behavior of the integrated traffic because there is no fixed boundary between voice and data slots (which is mandatory in d-tdma and rama). our numerical results indicate that its performance is superior to the existing reservation protocols, especially in the integrated traffic scenario. moreover, the soft capacity feature is exhibited when the traffic load increases. xiaoxin qiu victor o. k. li using csp to derive a sequentially consistent dsm system cortes e. perez alonso g. román barradas h. ruíz bounds on the efficiency of message-passing protocols for parallel computers robert cypher smaragda konstantinidou resource sharing for book-ahead and instantaneous-request calls albert g. greenberg r. srikant ward whitt performance evaluation of a communication system for transputer-networks based on monitored event traces c. w. oehlrich a. quick the utility of feedback in layered multicast congestion control layered multicast is a common approach for dissemination of audio and video in heterogeneous network environments. layered multicast schemes can be classified into two categories - feedback-based and feedback-free - depending on whether or not the scheme delivers feedback to the sender of the multicast session. advocates of feedback-based schemes claim that feedback is necessary to match the heterogeneous receiver capabilities efficiently. supporters of feedback-free schemes believe that feedback introduces significant complexity and that a moderate amount of additional layers can balance any benefit the feedback provides. surprisingly, there has been no systematic evaluation of these claims. this paper provides a quantitative comparison of feedback-based and feedback-free layered multicast schemes with respect to aligning the provided service to the capabilities of heterogeneous receivers. we discover realistic scenarios when feedback-free schemes require a very large number of additional layers to match the performance of feedback-based schemes. 
our studies also demonstrate that a light-weight feedback-based scheme can offer substantial improvement in performance over feedback-free schemes and can closely approximate the efficiency achieved by the optimal feedback-based scheme. sergey gorinsky harrick vin a qos-provisioning neural fuzzy connection admission controller for multimedia high-speed networks ray-guang cheng chung-ju chang li-fong lin a novel load balancing scheme for the tele-traffic hot spot problem in cellular networks we propose a dynamic load balancing scheme for the tele-traffic hot spot problem in cellular networks. a tele-traffic hot spot is a region of adjacent hot cells where the channel demand has exceeded a certain threshold. a hot spot is depicted as a stack of hexagonal `rings' of cells and is classified as complete if all cells within it are hot. otherwise it is termed incomplete. the rings containing all cold cells outside the hot spot are called `peripheral rings'. our load balancing scheme migrates channels through a structured borrowing mechanism from the cold cells within the `rings' or `peripheral rings' to the hot cells constituting the hot spot. a hot cell in `ring i' can only borrow a certain fixed number of channels from adjacent cells in `ring i+1'. we first propose a load balancing algorithm for a complete hot spot, which is then extended to the more general case of an incomplete hot spot. in the latter case, by further classifying a cell as cold safe, cold semi-safe or cold unsafe, a demand graph is constructed which describes the channel demand of each cell within the hot spot or its `peripheral rings' from its adjacent cells in the next outer ring. the channel borrowing algorithm works on the demand graph in a bottom up fashion, satisfying the demands of the cells in each subsequent inner ring until `ring 0' is reached. a markov chain model is first developed for a cell within a hot spot, the results of which are used to develop a similar model which captures the evolution of the entire hot spot region. detailed simulation experiments are conducted to evaluate the performance of our load balancing scheme. comparison with another well known load balancing strategy, known as cbwl, shows that under moderate and heavy tele-traffic conditions, a performance improvement as high as 12% in terms of call blockade is acheived by our load balancing scheme. sajal k. das sanjoy k. sen rajeev jayaram knowledge, timed precedence and clocks (preliminary report) yoram moses ben bloom ecxpert: exploiting event correlation in telecommunications yossi nygate the gf11 supercomputer john beetem monty denneau don weingarten a reply to comments "a comment on 'a hardware unification unit: design and analysis,'" nam sung woo a resource estimation and call admission algorithm for wireless multimedia networks using the shadow cluster concept david a. levine ian f. akyildiz mahmoud naghshineh characterization of user mobility in low earth orbit mobile satellite systems future mobile communication networks will provide a global coverage by means of constellations with nongeosynchronous satellites. multi - spot - beam antennas on satellites will allow a cellular coverage all over the earth. due to the unstationarity of satellites a call may require many cell changes during its lifetime. these passages will be managed by inter - beam handover procedures. this paper deals with the modeling of the user cell change process during call lifetime in low earth orbit - mobile satellite systems leo - msss . 
the analytical derivations presented in this study can be also applied to different mobility models provided that basic assumptions are fulfilled. this paper evaluates the impact of user mobility on the blocking performance of channel allocation techniques. moreover, the handover arrival process towards a cell has been characterized by using a usual statistical parameter for stationary point processes. finally, a performance analysis has been carried out on the basis of the classic teletraffic theory for telephone systems. enrico del re romano fantacci giovanni giambene tcp over atm: abr or ubr? teunis j. ott neil aggarwal performance of an edge-to-edge protocol in a simulated x.25/x.75 packet network p. t. brady location based services in a wireless wan using cellular digital packet data (cdpd) this paper shows how a standard mobile data technology, cellular digital packet data (cdpd), can be used to localize mobile data users, and provide location based services. we present a location determination system that is handset-based. using cdpd and modem protocols, this system extracts the location of the users up to the cell site coverage in the network; it also exposes this location information to the application layer. we have implemented a prototype of the entire system; it works nationwide with mobile laptop and pocket pc users. the system is adequate for providing personalized location based service such as finding close businesses of interest, or business services such as tracking and fleet management, or traffic and re- routing applications, among others. rittwik jana theodore johnson s. muthukrishnan andrea vitaletti performance comparison of the cray-2 and cray x-mp/416 supercomputers the serial and parallel performance of one of the world's fastest general purpose computers, the cray-2, is analyzed using the standard los alamos benchmark set plus codes adopted for parallel processing. for comparison, architectural and performance data are also given for the cray x-mp/416. factors affecting performance, such as memory bandwidth, size and access speed of memory, and software exploitation of hardware, are examined. the parallel processing environments of both machines are evaluated, and speed-up measurements for the parallel codes are given. m. l. simmons h. j. wasserman preface: special issue on operating systems principles anita k. jones communication protocol design to facilitate re-use based on the object- oriented paradigm the main motivation for the present work stems from the wide gap which exists between the research efforts devoted to developing formal descriptions for communication protocols and the effective development methodologies used in industrial implementations. we apply object-oriented (oo) modelling principles to networking protocols, exploring the potential for producing re-useable software modules by discovering the underlying generic class structures and behaviour. petri nets (pns) are used to derive re-useable model elements and a slightly modified ttcn notation is used for message sequence encoding. this demonstrates a formal, practical approach to the development of a protocol implementation through oo modelling. our utilisation of pns in the context of object based modelling allows for isolation of the behavioural characterisation of objects into a separate design plane, treated as a meta- level object control. this separation permits greater execution flexibility of the underlying object models. 
it is that very aspect of our modelling approach which can be utilised in software implementations where dynamically determined "re-programming" (i.e., change of procedures) is needed. for example, one of the requirements in wireless networking software is the ability to cope with ever-changing transmission/reception conditions and that, in turn, creates greatly varying error rates. similarly, handoff procedures create situations where dynamically determined change of operational modes is required. to illustrate the modelling concepts, the paper addresses the problem of inter- layer communication among multiple protocol entities (pes), assuming the standard iso/osi reference model. a generalised model called the inter-layer communication (ilc) model is proposed. an example of a pe based on the alternating-bit protocol (abp) is also discussed. the final example demonstrates how meta-level object control (pns) allows for the dynamic selection of different arq based algorithms. andrew a. hanish tharam s. dillon analysis and optimization of transmission schedules for single-hop wdm networks george n. rouskas mostafa h. ammar operating system sensitive device driver synthesis from implementation independent protocol specification mattias o'nils axel jantsch measuring parallel processor performance many metrics are used for measuring the performance of a parallel algorithm running on a parallel processor. this article introduces a new metric that has some advantages over the others. its use is illustrated with data from the linpack benchmark report and the winners of the gordon bell award. alan h. karp horace p. flatt communication and computation performance of the cm-5 t. t. kwan b. k. totty d. a. reed mobile facility location (extended abstract) in this paper we investigate the location of mobile facilities (in l∞ and l2 metric) under the motion of clients. in particular, we present lower bounds and efficient algorithms for exact and approximate maintenance of 1-center and 1-median for a set of moving points in the plane. our algorithms are based on the kinetic framework introduced by basch et. al [5]. s. bespamyatnikh b. bhattacharya d. kirkpatrick m. segal the computer for the 21st century mark weiser empirical studies of competitve spinning for a shared-memory multiprocessor anna r. karlin kai li mark s. manasse susan owicki connector: the internet protocol, part one: the foundations shvetima gulati flexible network support for mobility xinhua zhao claude castelluccia mary baker nomadicity: anytime, anywhere in a disconnected world nomadic computing and communications is upon us. we are all nomads, but we lack the systems support to assist us in our various forms of mobility. in this paper, we discuss the vision of nomadicity, its technical challenges, and approaches to the resolution of these challenges. one of the key characteristics of this paradigmshift in the way we deal with the information is that we face dramatic and sudden changes in connectivity and latency. our systems must be "nomadically-enabled" in that mechanisms must be developed that deal with such changes in a natural and transparent fashion. currently, this is not the case in that our systems typically treat such changes as exceptions or failures; this is unacceptable. moreover, the industry is producing "piece parts" that are populating our desktops, briefcases and belt- hooks, but that do not interoperate with each other, in general. we require innovative and systemwide solutions to overcome these problems. 
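because the protocol-modelling work above uses the alternating-bit protocol (abp) as its running example of a protocol entity, a bare-bones abp sender and receiver may help readers unfamiliar with it; this is generic textbook abp, not the authors' petri-net model, and the frame and ack representations are assumptions.

```python
class AbpSender:
    """Textbook alternating-bit sender: keep resending the current frame until
    an acknowledgment carrying the same sequence bit comes back."""

    def __init__(self):
        self.bit = 0
        self.pending = None

    def send(self, data):
        self.pending = (self.bit, data)
        return self.pending            # frame handed to the (possibly lossy) channel

    def on_ack(self, ack_bit):
        if self.pending is not None and ack_bit == self.bit:
            self.bit ^= 1              # flip the bit, ready for the next frame
            self.pending = None
            return True                # delivery confirmed
        return False                   # stale or corrupted ack: keep retransmitting

    def on_timeout(self):
        return self.pending            # retransmit the same frame


class AbpReceiver:
    def __init__(self):
        self.expected = 0
        self.delivered = []

    def on_frame(self, frame):
        bit, data = frame
        if bit == self.expected:       # new frame: deliver and flip expectation
            self.delivered.append(data)
            self.expected ^= 1
        return ("ACK", bit)            # always acknowledge the bit just seen
```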
such are the issues we address in this paper. leonard kleinrock guest editorial: energy conserving protocols chiara petrioli ramesh r. rao jason redi the design philosophy of the darpa internet protocols the internet protocol suite, tcp/ip, was first proposed fifteen years ago. it was developed by the defense advanced research projects agency (darpa), and has been used widely in military and commercial systems. while there have been papers and specifications that describe how the protocols work, it is sometimes difficult to deduce from these why the protocol is as it is. for example, the internet protocol is based on a connectionless or datagram mode of service. the motivation for this has been greatly misunderstood. this paper attempts to capture some of the early reasoning which shaped the internet protocols. david d. clark end-to-end congestion control for the internet: delays and stability under the assumption that queueing delays will eventually become small relative to propagation delays, we derive stability results for a fluid flow model of end- to-end internet congestion control. the theoretical results of the paper are intended to be decentralized and locally implemented: each end system needs knowledge only of its own round-trip delay. criteria for local stability and rate of convergence are completely characterized for a single resource, single user system. stability criteria are also described for networks where all users share the same round-trip delay. numerical experiments investigate extensions to more general networks. through simulations, we are able to evaluate the relative importance of queueing delays and propagation delays on network stability. finally, we suggest how these results may be used to design network resources. ramesh johari david kim hong tan switch directed dynamic causal networks - a paradigm for electronic system diagnosis electronic systems diagnosis, be it at the device, board, or system level, is a complex and time consuming task. various techniques have been developed to provide design aids to the maintenance technician, each with its own successes and limitations, typically in terms of performance versus complexity issues. this paper demonstrates a novel integration of such techniques to provide for an effective and efficient approach to expert diagnosis of complex systems. the integration of behavior graph concepts and causal network analysis allows for the diagnosis of systems at a fairly high level of abstraction, allows for on-line diagnosis with or without explicit control of input stimuli, and provides for such diagnosis with minimal design detail or apriori fault assumptions. r. m. mcdermott d. stern a network architecture providing host migration transparency fumio teraoka yasuhiko yokore mario tokoro the influence of parallel decomposition strategies on the performance of multiprocessor systems dalibor vrsalovic edward f. gehringer zary z. segall daniel p. siewiorek mobile multicasting in wireless atm networks d.-k. kim c.-k. toh reliable broadcast algorithms for harts dilip d. kandlur kang g. shin realization of a multi-valued inner product step processor using ccd's (abstract only) systolic arrays are special-purpose computers with high-performance and parallel structures. they provide a general methodology for mapping high-level computations into hardware structures. 
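as a toy illustration of the delay/stability trade-off studied in the end-to-end congestion control abstract above, the simulation below runs a single rate controller whose feedback is delayed by one round-trip time; the update rule and numbers are illustrative assumptions rather than the paper's model, but they show how a gain that is too large for the delay turns convergence into oscillation.

```python
def simulate(gain, rtt_steps, target=100.0, steps=200):
    """Rate update x <- x + gain * (target - delayed_x), with feedback that
    reflects the rate rtt_steps iterations in the past."""
    history = [1.0] * (rtt_steps + 1)          # initial rate and its delayed copies
    for _ in range(steps):
        delayed = history[-(rtt_steps + 1)]    # what the network saw one RTT ago
        history.append(history[-1] + gain * (target - delayed))
    return history

# small gain relative to the delay: the rate settles near the target
# print(simulate(gain=0.1, rtt_steps=5)[-5:])
# large gain * delay: the same loop oscillates and diverges
# print(simulate(gain=0.5, rtt_steps=5)[-5:])
```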
in a systolic system, data flows from the host-computer memory in a rhythmic fashion, passing through many simple processing elements before it returns to memory, much as blood circulates to and from the heart [1]. since systolic arrays are special-purpose computers, arithmetic operations need not be performed using binary systems. therefore, operands may be coded in a multiple-valued, non-binary form. in fact, benefits of higher radix computation have been confirmed both theoretically and in practice [2]. however, the hardware realization has remained strictly binary. with the availability of multiple-valued logic gates and rom it becomes feasible to perform non-binary arithmetic directly. this paper presents the realization of a ternary (3-valued) inner product step processor, the basic processing element in systolic arrays, using ccd gates (inhibit, fixed overflow, addition, and constant) [3]. the ccd technology has the inherent advantages of low power consumption, high packing density, and mos compatibility, which make it suitable for vlsi implementation. it must be pointed out that the low speed problem of ccd's remains to be solved [3]. the inner product step processor [4] performs a multiplication operation followed by an addition operation: s = (u)(v) + s. two single-digit processing devices, half-multiply-add (hma) and full-multiply-add (fma) elements, are designed as building blocks such that: hma: (x)(y) + z = f + 3(c); fma: (x)(y) + z + w = f + 3(c), where the inputs x, y, z, and w, as well as the outputs f and c, are all ternary digits. figure 1 shows an array, composed from hma's, fma's and an adder, capable of performing the inner product step computation. for n-digit operands, the array requires (2n-1) hma's, (n-1)^2 fma's and one n-digit adder. the delay in a ccd circuit is measured by its depth, which is equal to the maximum number of gates connected in any path from input to output [3]. for the proposed inner product step processor, the depth is found to be 13n. mahmoud a. manzoul jia-yuan han scalable techniques for discovering multicast tree topology the ip multicast infrastructure has transitioned to a topology that now supports hierarchical routing. multicast network monitoring and management have become key requirements necessary for providing robust multicast operation. monitoring services help to identify potential problems such as protocol shortcomings, implementation bugs or configuration errors. this type of monitoring often requires knowing the multicast tree topology. in this paper, we present a new approach, called tracetree, to discover tree topology in the source-to-receiver(s) direction using network forwarding state. we start with an overview of the problem. then, we describe tracetree functionality including its request forwarding and response collection mechanisms. next, we discuss a number of functional issues related to tracetree. finally, we evaluate our technique by comparing it to a number of alternative approaches. we believe that our technique provides a scalable way of discovering a multicast tree's topology in real time while requiring only marginal additional router functionality. kamil sara kevin c. almeroth querying the trajectories of on-line mobile objects position data is expected to play a central role in a wide range of mobile computing applications, including advertising, leisure, safety, security, tourist, and traffic applications.
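the hma and fma relations quoted above are plain base-3 digit operations; evaluating them in software makes the sum/carry split explicit (the original work, of course, realizes them directly in ccd logic rather than code).

```python
def hma(x, y, z):
    """Half-multiply-add: x*y + z = f + 3*c, with all digits in {0, 1, 2}."""
    total = x * y + z
    return total % 3, total // 3          # (f, c)

def fma(x, y, z, w):
    """Full-multiply-add: x*y + z + w = f + 3*c."""
    total = x * y + z + w
    return total % 3, total // 3

# exhaustive check that the results always fit back into single ternary digits
assert all(0 <= f <= 2 and 0 <= c <= 2
           for x in range(3) for y in range(3) for z in range(3) for w in range(3)
           for f, c in [fma(x, y, z, w)])
```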
applications such as these are characterized by large quantities of wirelessly internet-worked, position- aware mobile objects that receive services where the objects' position is essential. the movement of an object is captured via sampling, resulting in a trajectory consisting of a sequence of connected line segments for each moving object. this paper presents a technique for querying these trajectories. the technique uses indices for the processing of spatiotemporal range queries on trajectories. if object movement is constrained by the presence of infrastructure, e.g., lakes, park areas, etc., the technique is capable of exploiting this to reduce the range query, the purpose being to obtain better query performance. specifically, an algorithm is proposed that segments the original range query based on the infrastructure contained in its range. the applicability and limitations of the proposal are assessed via empirical performance studies with varying datasets and parameter settings. dieter pfoser christian s. jensen dynamically reconfigurable architecture for image processor applications alexandro m. s. adário eduardo l. roehe sergio bampi measurement and analysis of the error characteristics of an in-building wireless network there is general belief that networks based on wireless technologies have much higher error rates than those based on more traditional technologies such as optical fiber, coaxial cable, or twisted pair wiring. this difference has motivated research on new protocol suites specifically for wireless networks. while the error characteristics of wired networks have been well documented, less experimental data is available for wireless lans.in this paper we report the results of a study characterizing the error environment provided by at&t; wavelan, a commercial product designed for constructing 2 mb/s in-building wireless networks. we evaluated the effects of interfering radiation sources, and of attenuation due to distance and obstacles, on the packet loss rate and bit error rate. we found that under many conditions the error rate of this physical layer is comparable to that of wired links. we analyze the implications of our results on today's csma/ca based wireless lans and on future pico-cellular shared-medium reservation-based wireless networks. david eckhardt peter steenkiste performance analysis of data packet discarding in atm networks yonghwan kim san-qi li experiences implementing a high performance tcp in user-space the advantages of user-space protocols are well-known, but implementations often exhibit poor performance. this paper describes a user-space tcp implementation that outperforms a 'normal' kernel tcp and that achieves 80% of the performance of a 'single-copy' tcp. throughput of 160 mbit/s has been measured. we describe some of the techniques we used and some of the problems we encountered. aled edwards steve muir scheduling with optimized communication for time-triggered embedded systems paul pop petru eles zebo peng a study of protocol analysis for packet switched network communication failures may occur because of residual hardware or software implementation flaws, operator errors, transmission noises and transient or permanent machine failures. for packet switched network operation, some means are necessary to detect the errors and to analyze the phenomena to identify the causes of the errors, since, generally, it is almost impossible to predict errors or to implement systems without errors or failures. 
this paper describes general aspects of communication protocol analysis, protocol analysis technologies for ccitt x.25 and the protocol analyzer to be used in ddx packet switched network operation. k. tsukamoto t. itoh m. nomura y. tanaka software timing analysis using hw/sw cosimulation and instruction set simulator jie liu marcello lajolo alberto sangiovanni-vincentelli the cl-pvm package _parallel virtual machine_ (pvm) is a software package that integrates a heterogeneous network of computers to form a single parallel/concurrent computing facility [2]. pvm consists of two parts: a run- time server and a set of library functions. a user sets up a _hostfile_ that lists the names of the hosts constituting the _parallel virtual machine (pvm)._ a pvm server runs on each host to help manage the _pvm._ hosts can be added and deleted from the _pvm_ dynamically. the _pvm_ can be controlled from any constituent host either interactively from a _console_ program or automatically from any _pvm tasks._a pvm task is an application program that runs on a _pvm_ and can use the pvm library functions to interact with other tasks: sending and receiving messages, initiating subtasks, detecting errors, etc. the pvm version 3.0 library is written in c allowing direct calls from c programs. there is also a fortran 77 interface to give f77 programs access to the pvm library.cl-pvm provides a common lisp interface enabling lisp-based programs to partake in pvm applications. a wide variety of useful lisp programs exists including symbolic computation systems, expert systems, artificial intelligence systems, knowledge-based systems, and many more. with cl-pvm, the pvm library routines can be invoked interactively from the lisp toplevel or from lisp programs.the cl-pvm package contains a set of common lisp functions that interfaces common lisp (kcl, akcl, or gcl) to the c-based library of pvm [2]. generally, there is one cl interface function to each pvm c library function. the cl function calls its c-based counterpart and relays data to and from the c function. this interface is complete and allows lisp- based programs to run on a _pvm_ and thus facilitates the combination of symbolic, numeric, graphics, and other useful systems in a distributed fashion (fig. 1). cl-pvm also offers a set of tools to aid effective use of the package with lisp and maxima tasks. documentation, on- line manual pages, and examples are also included.cl-pvm is available by public ftp (ftp.mcs.kent.edu) in the directory (/pub/wang/). the package is described here. please refer to [4] for more information on the design and implementation of the lisp interface to pvm. liwei li paul s. wang adaptive link layer strategies for energy efficient wireless networking paul lettieri curt schurgers mani srivastava performance evaluation of abr flow-control protocols in a wireless atm network udo r. krieger michael savoric communication in the ksr1 mpp: performance evaluation using synthetic workload experiments we have developed an automatic technique for evaluating the communication performance of massively parallel processors (mpps). both communication latency and the amount of communication are investigated as a function of a few basic parameters that characterize an application workload. parameter values are captured in an automatically generated sparse matrix that multiplies a dense vector in the synthetic workload. our approach is capable of explaining the degradation of processor performance caused by communication. 
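the ksr1 study above drives its experiments with an automatically generated sparse matrix multiplied by a dense vector; a minimal generator for such a synthetic workload might look like the sketch below, where the density parameter stands in for the amount-of-communication knob (the paper's exact generation scheme is not reproduced here).

```python
import random

def make_sparse_matrix(n, density, seed=0):
    """Random n x n sparse matrix stored as {row: {col: value}},
    with roughly density * n * n nonzero entries."""
    rng = random.Random(seed)
    return {i: {j: rng.uniform(-1, 1) for j in range(n) if rng.random() < density}
            for i in range(n)}

def spmv(matrix, vector):
    """Sparse matrix times dense vector; in the synthetic-workload reading,
    each nonzero corresponds to one potentially remote data access."""
    return [sum(v * vector[j] for j, v in row.items()) for row in matrix.values()]

# workload with 1000 unknowns and about 1% nonzeros per row
a = make_sparse_matrix(1000, density=0.01)
y = spmv(a, [1.0] * 1000)
```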
using the kendall square research ksr1 mpp as a case study, we demonstrate the effectiveness of the technique through a series of experiments used to characterize the communication performance. we show that read and write communciation latencies vary from 150 to 180 and from 80 to 100 processor cycles, respectively. we show that the read communication latency approximates a linear function of the total system communciation (in subpages), write communication approximates a linear function of the number of distinct shared subpages, and that ksr's automatic update feature is effective in reducing the number of read communications given careful binding of threads to processors. eric l. boyd edward s. davidson risc i: a reduced instruction set vlsi computer the reduced instruction set computer (risc) project investigates an alternative to the general trend toward computers with increasingly complex instruction sets: with a proper set of instructions and a corresponding architectural design, a machine with a high effective throughput can be achieved. the simplicity of the instruction set and addressing modes allows most instructions to execute in a single machine cycle, and the simplicity of each instruction guarantees a short cycle time. in addition, such a machine should have a much shorter design time. this paper presents the architecture of risc i and its novel hardware support scheme for procedure call/return. overlapping sets of register banks that can pass parameters directly to subroutines are largely responsible for the excellent performance of risc i. static and dynamic comparisons between this new architecture and more traditional machines are given. although instructions are simpler, the average length of programs was found not to exceed programs for dec vax 11 by more than a factor of 2. preliminary benchmarks demonstrate the performance advantages of risc. it appears possible to build a single chip computer faster than vax 11/780. david a. patterson carlo h. sequin national information infrastructure (nii) at supercomputing '93 (panel) h. d. shay v. cerf lansing hatfield stacey jenkins john rollwagen dale williams ed mccracken congestion avoidance and control v. jacobson reliable multicast in multi-access wireless lans multicast is an efficient paradigm for transmitting data from a sender to a group of receivers. in this paper, we focus on multicast in single channel multi-access wireless local area networks (lans) comprising several small cells. in such a system, a receiver cannot correctly receive a packet if two or more packets are sent to it at the same time, because the ackets "collide". therefore, one has to ensure that only one node sends at a time. we look at two important issues.first, we consider the problem of the sender acquiring the multi-access channel for multicast transmission. second, for reliable multicast in each cell of the wireless lan, we examine arq-based approaches. the second issue is important because the wireless link error rates can be very high. we present a new approach to overcome the problem of feedback collision in single channel multi-access wireless lans, both for the purpose of acquiring the channel and for reliability. our approach involves the election of one of the multicast group members (receivers) as a "leader" or representative for the purpose of sending feedback to the sender. for reliable multicast, on erroneous reception of a packet, the leader does not send an acknowledgment, prompting a retransmission. 
on erroneous reception of the packet at receivers other than the leader, our protocol allows negative acknowledgments from these receivers to collide with the acknowledgment from the leader,thus destroying the acknowledgment and prompting the sender to retransmit the packet. using analytical models, we demonstrate that the leader-based protocol exhibits higher throughput in comparison to two other protocols which use traditional delayed feedback-based probabilistic methods. last, we present a simple scheme for leader election. joy kuri sneha kumar kasera application-layer mobility using sip henning schulzrinne elin wedlund cell loss analysis and design trade-offs of nonblocking atm switches with nonuniform traffic myung j. lee david s. ahn multiparty unconditionally secure protocols under the assumption that each pair of participants can communicate secretly, we show that any reasonable multiparty protocol can be achieved if at least 2n/3 of the participants are honest. the secrecy achieved is unconditional. it does not rely on any assumption about computational intractability. david chaum claude crepeau ivan damgard computer and network security the field of computer security has matured to the extent that lots of people have heard about it, lots of people have become practitioners, lots of organizations have sprung up, lots of new books are available, certifications are available, laws are being written, people are being prosecuted and the federal government is interested. unfortunately, it is still a black hole to most people and the main source of information is the media industry where each guy is trying to get more attention than the other guy is getting. in the real world, computer and network security is just one more thing that has to be managed. it needs to be understood. it requires resources like time, money and expertise. robert bruen dddp-a distributed data driven processor this paper describes an architecture of a data flow computer named the distributed data driven processor (dddp), and presents an experimental system and the results of experiments using several benchmarks. the experimental system has four processing elements connected by a ring bus, and a structured data memory. the main features of our system are that each processing element is provided with a hardware hashing mechanism to implement token coloring, and a ring bus is used to pass tokens concurrently among processing elements. a hardware monitor was used to measure the performance of the experimental system. the experimental system adopts a low key technology and yet is capable of executing about 0.7 million instructions per second through the benchmarks. this implies that data flow computers can be alternative to the conventional von-neumann computers if state-of-the-art technologies are adequately introduced. masasuke kishi hiroshi yasuhara yasusuke kawamura a programmable network interface for a message-based multicomputer raj k. singh stephen g. tell shaun j. bharrat multicast routing in internetworks and extended lans multicasting is used within local-area networks to make distributed applications more robust and more efficient. the growing need to distribute applications across multiple, interconnected networks, and the increasing availability of high-performance, high-capacity switching nodes and networks, lead us to consider providing lan- style multicasting across an internetwork. 
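for the leader-based reliable multicast scheme just described, the sender's decision rule reduces to: retransmit unless a clean acknowledgment from the leader arrives, since any negative acknowledgment from another receiver collides with and destroys that acknowledgment. the sketch below models only that decision, with collisions represented abstractly rather than at the radio level.

```python
def sender_decision(leader_ack_sent, naks_sent):
    """Return 'deliver' or 'retransmit' for one multicast data packet.

    leader_ack_sent: the leader received the packet correctly and sent an ACK.
    naks_sent: number of non-leader receivers that sent a NAK in the same slot.
    """
    ack_heard = leader_ack_sent and naks_sent == 0   # any NAK collides with the ACK
    return "deliver" if ack_heard else "retransmit"

assert sender_decision(True, 0) == "deliver"        # leader (and everyone) got it
assert sender_decision(False, 0) == "retransmit"    # leader missed it: no ACK at all
assert sender_decision(True, 3) == "retransmit"     # some receivers missed it: their
                                                    # NAKs destroy the leader's ACK
```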
in this paper, we propose extensions to two common internetwork routing algorithms---distance-vector routing and link-state routing---to support low-delay datagram multicasting. we also suggest modifications to the single-spanning-tree routing algorithm, commonly used by link-layer bridges, to reduce the costs of multicasting in large extended lans. finally, we show how different link-layer and network-layer multicast routing algorithms can be combined hierarchically to support multicasting across large, heterogeneous internetworks. stephen e. deering a recursive estimator of worst-case burstiness shahrokh valaee the quickring network m. valerio l. e. moser p. m. melliar-smith p. sweazey on the next killer app ron a. zajac mobile agents, adaptation, qos and notification services (track introduction) dorota m. huizinga the content and access dynamics of a busy web site: findings and implications in this paper, we study the dynamics of the msnbc news site, one of the busiest web sites in the internet today. unlike many other efforts that have analyzed client accesses as seen by proxies, we focus on the server end. we analyze the dynamics of both the server content and client accesses made to the server. the former considers the content creation and modification process while the latter considers page popularity and locality in client accesses. some of our key results are: (a) files tend to change little when they are modified, (b) a small set of files tends to get modified repeatedly, (c) file popularity follows a zipf-like distribution with a parameter α that is much larger than reported in previous, proxy-based studies, and (d) there is significant temporal stability in file popularity but not much stability in the domains from which clients access the popular content. we discuss the implications of these findings for techniques such as web caching (including cache consistency algorithms), and prefetching or server-based "push" of web content. venkata n. padmanabhan lili qiu building reliable, high-performance communication systems from components xiaoming liu christoph kreitz robbert van renesse jason hickey mark hayden kenneth birman robert constable an adaptable network for functional distributed systems a flexible building-block system is presented which allows setting up arbitrary operational computer networks or changing them with a minimum of effort. it is at present realized as an experimental system based on minicomputers but aimed to utilize microprocessors. the paper deals with the hardware concept, the basic operating system and the software concept. finally some experiences with regard to programming and system optimization will be described. h. von issendorff w. grunewald strategies for achieving improved processor throughput matthew k. farrens andrew r. pleszkun topologies for wavelength-routing all-optical networks m. ajmone marsan andrea bianco emilio leonardi fabio neri improvements in multiprocessor system design david p. rodgers a message priority assignment algorithm for can based networks controller area network (can) defines a very efficient medium access control protocol. this protocol resolves message transmission conflicts through message priorities, and results in high channel utilization and short message delay for higher priority messages. an analytical model of the maximum message delay is presented. the maximum delays of different types of messages are formulated in terms of the message priority and the offered load of the system.
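for readers unfamiliar with can, the priority mechanism the abstract relies on is the standard bitwise arbitration of identifiers: a dominant 0 beats a recessive 1, so the numerically lowest pending identifier wins the bus. the snippet below is a toy model of that rule only, not the paper's delay analysis or its priority-assignment algorithm.

    def arbitrate(pending_ids, id_bits=11):
        # identifiers are transmitted msb-first; a node drops out as soon as it sends a
        # recessive bit (1) while the bus carries a dominant bit (0) from someone else.
        survivors = list(pending_ids)
        for bit in reversed(range(id_bits)):
            bus = min((i >> bit) & 1 for i in survivors)   # wired-and: 0 dominates
            survivors = [i for i in survivors if (i >> bit) & 1 == bus]
        return survivors[0]

    print(hex(arbitrate([0x100, 0x0a5, 0x3ff])))   # 0xa5 wins: lowest identifier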
based on the maximum delay analysis, a message priority assignment algorithm is proposed. this algorithm is capable of generating an optimum priority assignment. a proof of correctness of the algorithm is also included in this paper. zhengou wang huizhu lu marvin stone a measurement analysis of internet traffic over frame relay judith l. jerkins john monroe jonathan l. wang viewpoint: from teragrid to knowledge grid fran berman reducing the branch penalty by rearranging instructions in a double-width memory manolis katevenis nestoras tzartzanis a nested transaction mechanism for locus atomic transactions are useful in distributed systems as a means of providing reliable operation in the face of hardware failures. nested transactions are a generalization of traditional transactions in which transactions may be composed of other transactions. the programmer may initiate several transactions from within a transaction, and serializability of the transactions is guaranteed even if they are executed concurrently. in addition, transactions invoked from within a given transaction fail independently of their invoking transaction and of one another, allowing use of alternate transactions to accomplish the desired task in the event that the original should fail. thus nested transactions are the basis for a general- purpose reliable programming environment in which transactions are modules which may be composed freely. a working implementation of nested transactions has been produced for locus, an integrated distributed operating system which provides a high degree of network transparency. several aspects of our mechanism are novel. first, the mechanism allows a transaction to access objects directly without regard to the location of the object. second, processes running on behalf of a single transaction may be located at many sites. thus there is no need to invoke a new transaction to perform processing or access objects at a remote site. third, unlike other environments, locus allows replication of data objects at more than one site in the network, and this capability is incorporated into the transaction mechanism. if the copy of an object that is currently being accessed becomes unavailable, it is possible to continue work by using another one of the replicated copies. finally, an efficient orphan removal algorithm is presented, and the problem of providing continued operation during network partitions is addressed in detail. erik t. mueller johanna d. moore gerald j. popek using clustering for effective management of a semantic cache in mobile computing qun ren margaret h. dunham fddi- a lan among mans floyd e. ross james r. hamstra robert l. fink location management of mobile hosts by grouping routers hiroaki hagino tikahiro hara masahiko tsukamoto shojiro nishio lx: a technology platform for customizable vliw embedded processing lx is a scalable and customizable vliw processor technology platform designed by hewlett-packard and stmicroelectronics that allows variations in instruction issue width, the number and capabilities of structures and the processor instruction set. for lx we developed the architecture and software from the beginning to support both scalability (variable numbers of identical processing resources) and customizability (special purpose resources). in this paper we consider the following issues. when is customization or scaling beneficial? how can one determine the right degree of customization or scaling for a particular application domain? 
what architectural compromises were made in the lx project to contain the complexity inherent in a customizable and scalable processor family? the experiments described in the paper show that specialization for an application domain is effective, yielding large gains in price/performance ratio. we also show how scaling machine resources scales performance, although not uniformly across all applications. finally we show that customization on an application-by-application basis is today still very dangerous and much remains to be done for it to become a viable solution. paolo faraboschi geoffrey brown joseph a. fisher giuseppe desoli fred homewood providing guaranteed services without per flow management existing approaches for providing guaranteed services require routers to manage per flow states and perform per flow operations [9, 21]. such a stateful network architecture is less scalable and robust than stateless network architectures like the original ip and the recently proposed diffserv [3]. however, services provided with current stateless solutions, diffserv included, have lower flexibility, utilization, and/or assurance levels as compared to the services that can be provided with per flow mechanisms. in this paper, we propose techniques that do not require per flow management (either control or data planes) at core routers, but can implement guaranteed services with levels of flexibility, utilization, and assurance similar to those that can be provided with per flow mechanisms. in this way we can simultaneously achieve high quality of service, high scalability and robustness. the key technique we use is called dynamic packet state (dps), which provides a lightweight and robust mechanism for routers to coordinate actions and implement distributed algorithms. we present an implementation of the proposed algorithms that has minimum incompatibility with ipv4. ion stoica hui zhang rate-based congestion control for atm networks congestion control plays an important role in the effective and stable operation of atm networks. this paper first gives a historical overview of rate-based congestion control algorithms developed in the atm forum, showing how the current atm forum standard regarding the traffic management control methods is exploited by these algorithms. then, an analytical approach is used to quantitatively evaluate their performance and show the effectiveness of the rate-based approach. in presenting the numerical examples, we emphasize that appropriate control parameter settings are essential for proper traffic management in an atm network environment. hiroyuki ohsaki masayuki murata hiroshi suzuki chinatsu ikeda hideo miyahara protocols for large data transfers over local networks in this paper we analyze protocols for transmitting large amounts of data over a local area network. the data transfers analyzed in this paper are different from most other forms of large-scale data transfer protocols for three reasons: (1) the definition of the protocol requires the recipient to have sufficient buffers available to receive the data before the transfer takes place; (2) we assume that the source and the destination machine are more or less matched in speed; (3) the protocol is implemented at the network interrupt level and therefore not slowed down by process scheduling delays. we consider three classes of protocols: stop-and-wait, sliding window and blast protocols.
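as a back-of-the-envelope companion to the protocol comparison being set up here, the monte-carlo sketch below contrasts stop-and-wait with a simple blast-style transfer under independent per-packet errors; the timing parameters and the full-retransmission blast variant are illustrative assumptions, not the paper's measured ethernet numbers.

    import random

    def stop_and_wait(n_pkts, p_err, t_pkt=1.0, rtt=2.0):
        # each packet is sent and acknowledged before the next one goes out.
        time = 0.0
        for _ in range(n_pkts):
            while True:
                time += t_pkt + rtt
                if random.random() > p_err:
                    break
        return time

    def blast_full_retx(n_pkts, p_err, t_pkt=1.0, rtt=2.0):
        # send the whole burst, then one ack round trip; on any loss, resend everything.
        time = 0.0
        while True:
            time += n_pkts * t_pkt + rtt
            if all(random.random() > p_err for _ in range(n_pkts)):
                return time

    def mean(f, trials=2000, **kw):
        return sum(f(**kw) for _ in range(trials)) / trials

    kw = dict(n_pkts=64, p_err=0.01)
    print("stop-and-wait:", mean(stop_and_wait, **kw))
    print("blast        :", mean(blast_full_retx, **kw))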
we show that the expected time of blast and sliding window protocols is significantly lower than the expected time for the stop-and-wait protocol, with blast outperforming sliding window by some small amount. although the network error rate is sufficiently low for blast with full retransmission on error to be acceptable, the frequency of errors in the network interfaces makes it desirable to use a more sophisticated retransmission protocol. a go-back-n strategy is shown to be only marginally inferior to selective retransmission and is, given its simplicity, the retransmission strategy of choice. our results are based on measurements collected on sun workstations connected to a 10 megabit ethernet network using 3-com interfaces. the derivation of the elapsed time in terms of the network packet error rate is based on the assumption of statistically independent errors. willy zwaenepoel reaching agreement in the presence of faults the problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. nonfaulty processors always communicate honestly, whereas faulty processors may lie. the problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. the value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor. it is shown that the problem is solvable for, and only for, n ≥ 3m + 1, where m is the number of faulty processors and n is the total number. it is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n ≥ m ≥ 0. this weaker assumption can be approximated in practice using cryptographic methods. m. pease r. shostak l. lamport automatic update of replicated topology data bases in computer communication networks, routing is often accomplished by maintaining copies of the network topology and dynamic performance characteristics in various network nodes. the present paper describes an algorithm that allows complete flexibility in the placement of the topology information. in particular, we assume that an arbitrary subset of network nodes are capable of maintaining the topology. in this environment, protocols are defined to allow automatic updates to flow between these more capable nodes. in addition, protocols are defined to allow less capable nodes to report their topology data to the major nodes, and acquire route information from them. jeffrey m. jaffe adrian segall the fair distributed queue (fdq) protocol for high-speed metropolitan-area networks mete kabatepe kenneth s. vastola a selective location update strategy for pcs users sanjoy k. sen amiya bhattacharya sajal k. das editorial the authors conclude that it was fairly easy to make applications use odyssey---even for applications for which the source code was not available. independent adaptive applications do not need to be aware of each other.
"wavevideo---an integrated approach to adaptive wireless video", by george fankhauser, marcel dasen, nathalie weiler, bernhard plattner, and burkhard stiller, presents an integrated adaptive video coding architecture for heterogeneous wireless networks, based on joint source/channel coding. wavevideo adapts to dynamic channel conditions in two ways: the coding has mechanisms to overcome transmission errors, and the amount of error-control information is controlled by end-to-end feedback from the receiver. additionally, wavevideo can scale the reception quality of multiple heterogeneous receivers that receive a multicast video stream. the authors conclude that the wavevideo video coding architecture can effectively adapt to the dynamics of heterogeneous wireless networks. "a trace-based evaluation of adaptive error correction for a wireless local area network", by david eckhardt and peter steenkiste, evaluates error control strategies for a wireless lan, and concludes that an adaptive forward error control algorithm which varies both the packet size and the degree of redundancy depending on the channel conditions is able to effectively adapt to the dynamics of a wireless lan environment. "an evaluation of quality of service characteristics of pacs packet channel", by behcet sarikaya and mehmet ulema, studies the quality of service characteristics, integration with the internet, and handover characteristics of one of the pcs standards---the personal access communication systems (pacs) packet channel---with a view to supporting multimedia communication in this environment. vaduvur bharghavan c.-k. toh a markov chain approximation for the analysis of banyan networks arif merchant fifth workshop on computer architecture for non-numeric processing: a flexible image processor using array elements the purpose of this short paper is to describe some activities in our laboratory on designing a flexible image processor using array processing elements. in many document creation applications, it is desirable to include image handling capability in the system together with a keystroke-captured text processing function. however, this image handling creates two problems: one is the large amount of data needed for image storage, and the other is the large amount of processing required to produce an output image. data compression reduces storage and transmission bandwidth requirements, but it also requires processing power to do compression and de-compression of images. the objective of the study is to provide low-cost hardware which has the flexibility to do a variety of image processing algorithms. ronnie k. l. poon kwan y. wong the agree predictor: a mechanism for reducing negative branch history interference deeply pipelined, superscalar processors require accurate branch prediction to achieve high performance. two-level branch predictors have been shown to achieve high prediction accuracy. it has also been shown that branch interference is a major contributor to the number of branches mispredicted by two-level predictors. this paper presents a new method to reduce the interference problem called agree prediction, which reduces the chance that two branches aliasing the same pht entry will interfere negatively. we evaluate the performance of this scheme using full traces (both user and supervisor) of the specint95 benchmarks. the result is a reduction in the misprediction rate of gcc ranging from 8.62% with a 64k-entry pht up to 33.3% with a 1k-entry pht. eric sprangle robert s. chappell mitch alsup yale n.
patt handover in a micro-cell packet switched mobile network this paper proposes a distributed handover protocol for a micro-cell packet switched mobile network. in such a network, users move from one cell to another very often, and each change of location may result in misrouted and lost packets. the purpose of the new protocol is to minimize these consequences of location changes: as long as a mobile moves from one cell to another but stays in the same region, the protocol avoids loss of packets and preserves order of transmission. thus it increases the performance of the transport layer protocol by minimizing the need to retransmit packets. reuven cohen baiju v. patel adrian segall network management capabilities for switched multi-megabit data service david m. piscitello patrick j. sher conferences marisa campbell resource allocation and management in diffserv networks for ip telephony this paper discusses resource allocation and management in differentiated services (diffserv) networks, particularly in the context of ip telephony. we assume that each node uses weighted fair queuing (wfq) schedulers in order to provide quality of service (qos) to aggregates of traffic. all voice traffic destined for a certain output interface is aggregated into a single queue. when a voice flow traverses a node, its packets are placed in this queue, which is drained at a certain rate, determined by a weight associated with that queue. this paper shows how to set this weight such that the edge-router-to-edge-router queuing delay in the diffserv network is statistically bounded. the nodes are modeled as m/g/1 queuing systems and a heuristic formula is used to compute a quantile of the queuing delay. this formula is compared with results derived from simulations. it is also shown how to use this result to allocate the appropriate resources in a diffserv network and how to update and process rsvp messages at the edge of a diffserv network in an intserv over diffserv scenario. maarten buchli danny de vleeschauwer jan janssen annelies van moffaert guido h. petit remote pipes and procedures for efficient distributed communication we describe a new communication model for distributed systems that combines the advantages of remote procedure call with the efficient transfer of bulk data. three ideas form the basis of this model. first, remote procedures are first-class values which can be freely exchanged among nodes, thus enabling a greater variety of protocols to be directly implemented in a remote procedure call framework. second, a new type of abstract object, called a pipe, allows bulk data and incremental results to be efficiently transported in a type-safe manner. unlike procedure calls, pipe calls do not return values and do not block a caller. data sent down a pipe is received by the pipe's sink node in the order sent. third, the relative sequencing of pipes and procedures can be controlled by combining them into channel groups. calls on the members of a channel group are guaranteed to be processed in order. application experience with this model, which we call the channel model, is reported. derived performance bounds and experimental measures demonstrate that k pipe calls can perform min(1 + (r/p), k) times faster than k procedure calls, where r is the total roundtrip remote communication time and p is the procedure execution time. david k.
gifford nathan glasser improving instruction supply efficiency in superscalar architectures using instruction trace buffers chih-po wen a data highway for realtime distributed systems architecture and formalization in this paper the authors describe the proway, a standard data highway architecture for process control, that the i.e.c. is developing. communication aspects are examined and a formal description of the highway layer is given using a finite state graph. a. faro o. mirabella using microcomputers as satellites in a time-sharing environment so it goes, these days, in colleges across the land. the comments about 8's and 89's are from ph.d. mathematicians, not discussing number theory, but debating microcomputers. (the fellow in the back obviously had not kept up with the newest new math.) even the ivory tower mathematicians are being drawn into the microcomputer revolution. until a few years ago, mathematics and science at st. olaf college, as at most colleges, used a central time-sharing minicomputer for all instructional needs. the twin dartmouth inventions of basic and time-sharing seemed ideal tools for computing in an undergraduate environment. lynn arthur steen j. arthur seebach reflections: did convergence kill the clock? steven pemberton atm concepts, architectures, and protocols asynchronous transfer mode (atm) is often described as the future computer networking paradigm that will bring high-speed communications to the desktop. what is atm? how is it different from today's networking technologies? this article is intended to acquaint readers with this emerging technology and describe some of the concepts embodied within it. in order to understand why atm was created and how it works, we first need to review a bit of computer networking history. ronald j. vetter virtual topology based routing protocol for multihop dynamic wireless networks in this paper, a new hierarchical multihop routing algorithm and its performance evaluation are presented for fully dynamic wireless networks. the routing algorithm operates on a virtual topology obtained by partitioning the routing information for mobile terminals and mobile base stations into a hierarchical, distributed database. based on the virtual topology, each mobile base station stores a fraction of the routing information to balance the complexity of the location-update and the path-finding operations. mobility of the network entities changes the load distribution and causes processing and memory bottlenecks in some parts of the network. however, since the network routing elements are also mobile, their movement can be used to distribute the load. thus, new load balancing schemes are introduced to distribute the routing overhead uniformly among the mobile base stations. the performance of the hierarchical multihop routing algorithm is investigated through simulations. it is shown that the routing protocol can cope with high mobility and deliver packets to the destinations successfully. ian f. akyildiz james i. pelech bulent yener optimal pipelining in supercomputers this paper examines the relationship between the degree of central processor pipelining and performance. this relationship is studied in the context of modern supercomputers. limitations due to instruction dependencies are studied via simulations of the cray-1s. both scalar and vector code are studied. this study shows that instruction dependencies severely limit performance for scalar code as well as overall performance.
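a textbook-style worked example of why dependencies cap the payoff from deeper pipelines (this is a generic model, not the paper's cray-1s simulation): assume a fraction f of instructions stalls for the full pipeline depth d, so the speedup over an unpipelined machine is d / (1 + f*(d-1)).

    def pipeline_speedup(depth, stall_fraction):
        # ideal speedup is `depth`; each stalling instruction pays depth-1 extra cycles.
        return depth / (1.0 + stall_fraction * (depth - 1))

    for d in (4, 8, 16):
        for f in (0.0, 0.1, 0.3):
            print(f"depth={d:2d} stall_fraction={f:.1f} speedup={pipeline_speedup(d, f):5.2f}")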
the effects of latch overhead are then considered. the primary cause of latch overhead is the difference between maximum and minimum gate propagation delays. this causes both the skewing of data as it passes along the data path, and unintentional clock skewing due to clock fanout logic. latch overhead is studied analytically in order to lower bound the clock period that may be used in a pipelined system. this analysis also touches on other points related to latch clocking. this analysis shows that for short pipeline segments both the earle latch and polarity hold latch give the same clock period bound for both single-phase and multi-phase clocks. overhead due to data skew and unintentional clock skew are each added to the cray-1s simulation model. simulation results with realistic assumptions show that eight to ten gate levels per pipeline segment lead to optimal overall performance. the results also show that for short pipeline segments data skew and clock skew contribute about equally to the degradation in performance. s. r. kunkel j. e. smith fault-tolerant circuit-switching networks nicholas pippenger geng lin x.400-based distributed application design methodology the objective of this paper is to provide a distributed application design methodology using the x.400 standards. this one can be applied to two distributed application models: the client/server model and the peer to peer model. the client/server model relies on a relationship between a server entity and a client entity. the use of x.400 for this model is based on the defined relationship between the ua and the ms. the peer to peer model consists of a set of cooperating entities offering a service to the users. the use of x.400 for this model requires the creation of new mts users and of a p2 type protocol enabling them to communicate. we analyze the advantages and disadvantages of each proposed alternative. laurence duchien valerie gay eric horlait some observations on the performance of a 56 kbit internet link d farber l cassel a protocol for route establishment and packet forwarding across multidomain internets deborah estrin martha steenstrup gene tsudik an iso tp4-tp0 gateway lawrence h. landweber mitchell tasman wengyik yeong fast hardware/software co-simulation for virtual prototyping and trade-off analysis claudio passerone luciano lavagno massimiliano chiodo alberto sangiovanni-vincentelli the architecture and programming of the ametek series 2010 multicomputer during the period following the completion of the cosmic cube experiment [1], and while commercial descendants of this first-generation multicomputer (message-passing concurrent computer) were spreading through a community that includes many of the attendees of this conference, members of our research group were developing a set of ideas about the physical design and programming for the second generation of medium-grain multicomputers. our principal goal was to improve by as much as two orders of magnitude the relationship between message-passing and computing performance, and also to make the topology of the message-passing network practically invisible. 
decreasing the communication latency relative to instruction execution times extends the application span of multicomputers from easily partitioned and distributed problems (eg, matrix computations, pde solvers, finite element analysis, finite difference methods, distant or local field many-body problems, ffts, ray tracing, distributed simulation of systems composed of loosely coupled physical processes) to computing problems characterized by "high flux" [2] or relatively fine-grain concurrent formulations [3, 4] (eg, searching, sorting, concurrent data structures, graph problems, signal processing, image processing, and distributed simulation of systems composed of many tightly coupled physical processes). such applications place heavy demands on the message-passing network for high bandwidth, low latency, and non-local communication. decreased message latency also improves the efficiency of the class of applications that have been developed on first- generation systems, and the insensitivity of message latency to process placement simplifies the concurrent formulation of application programs. our other goals included a streamlined and easily layered set of message primitives, a node operating system based on a reactive programming model, open interfaces for accelerators and peripheral devices, and node performance improvements that could be achieved economically by using the same technology employed in contemporary workstation computers. by the autumn of 1986, these ideas had become sufficiently developed, molded together, and tested through simulation to be regarded as a complete architectural design. we were fortunate that the ametek computer research division was ready and willing to work with us to develop this system as a commercial product. the ametek series 2010 multicomputer is the result of this joint effort. c. l. seitz w. c. athas c. m. flaig a. j. martin j. seizovic c. s. steele w-k. su personal computers in the corporate environment: software george a. heidenrich open base situation transport (obast)architecture phillip d. neumiller peter l. lei michael l. needham the operating system and language support features of the bellmactm-32 microprocessor. the bellmac-32 microprocessor is a 32-bit microprocessor, implemented with cmos technology, designed to support operating system functions and high level languages efficiently. the architecture was designed with the following objectives in mind: • high performance. • enhanced operating system support capabilities. • high level language support. • high reliability, availability and maintainability. alan d. berenbaum michael w. condry priscilla m. lu an object-oriented structured design method for code generation a. i. wasserman p. picher r. j. muller towards coarse-grained mobile qos rahul jain bahareh sadeghi edward w. knightly register/ file/ cache microarchitecture study using vhdl samarina makhdoom daniel tabak richard auletta proposed nist standard for role-based access control in this article we propose a standard for role-based access control (rbac). although rbac models have received broad support as a generalized approach to access control, and are well recognized for their many advantages in performing large-scale authorization management, no single authoritative definition of rbac exists today. this lack of a widely accepted model results in uncertainty and confusion about rbac's utility and meaning. 
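a minimal sketch of the core rbac relations the rest of this abstract formalizes (users, roles, permissions, and sessions with activated roles); the class and method names below are illustrative inventions, not the api defined by the proposed standard.

    class RBAC:
        def __init__(self):
            self.user_roles = {}        # user -> set of assigned roles (ua relation)
            self.role_perms = {}        # role -> set of permissions (pa relation)
            self.sessions = {}          # session id -> (user, set of activated roles)

        def assign_user(self, user, role):
            self.user_roles.setdefault(user, set()).add(role)

        def grant_permission(self, role, perm):
            self.role_perms.setdefault(role, set()).add(perm)

        def create_session(self, sid, user, roles):
            # a session may activate only roles actually assigned to the user.
            allowed = set(roles) & self.user_roles.get(user, set())
            self.sessions[sid] = (user, allowed)

        def check_access(self, sid, perm):
            _, active = self.sessions.get(sid, (None, set()))
            return any(perm in self.role_perms.get(r, set()) for r in active)

    rbac = RBAC()
    rbac.assign_user("alice", "auditor")
    rbac.grant_permission("auditor", "read-log")
    rbac.create_session("s1", "alice", ["auditor"])
    print(rbac.check_access("s1", "read-log"))    # True
    print(rbac.check_access("s1", "delete-log"))  # False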
the standard proposed here seeks to resolve this situation by unifying ideas from a base of frequently referenced rbac models, commercial products, and research prototypes. it is intended to serve as a foundation for product development, evaluation, and procurement specification. although rbac continues to evolve as users, researchers, and vendors gain experience with its application, we feel the features and components proposed in this standard represent a fundamental and stable set of mechanisms that may be enhanced by developers in further meeting the needs of their customers. as such, this document does not attempt to standardize rbac features beyond those that have achieved acceptance in the commercial marketplace and research community, but instead focuses on defining a fundamental and stable set of rbac components. this standard is organized into the rbac reference model and the rbac system and administrative functional specification. the reference model defines the scope of features that comprise the standard and provides a consistent vocabulary in support of the specification. the rbac system and administrative functional specification defines functional requirements for administrative operations and queries for the creation, maintenance, and review of rbac sets and relations, as well as for specifying system level functionality in support of session attribute management and an access control decision process. david f. ferraiolo ravi sandhu serban gavrila d. richard kuhn ramaswamy chandramouli the vmp network adapter board (nab): high-performance network communication for multiprocessors high performance computer communication between multiprocessor nodes requires significant improvements over conventional host- to-network adapters. current host-to-network adapter interfaces impose excessive processing, system bus and interrupt overhead on a multiprocessor host. current network adapters are either limited in function, wasting key host resources such as the system bus and the processors, or else intelligent but too slow, because of complex transport protocols and because of an inadequate internal memory architecture. conventional transport protocols are too complex for hardware implementation and too slow without it. in this paper, we describe the design of a network adapter board for the vmp multiprocessor machine that addresses these issues. the adapter uses a host interface that is designed for minimal latency, minimal interrupt processing overhead and minimal system bus and memory access overhead. the network adapter itself has a novel internal memory and processing architecture that implements some of the key performance-critical transport layer functions in hardware. this design is integrated with vmtp, a new transport protocol specifically designed for efficient implementation on an intelligent high- performance network adapter. although targeted for the vmp system, the design is applicable to other multiprocessors as well as uni- processors. h. kanakia d. cheriton locking effects in multiprocessor implementations of protocols mats björkman per gunningberg performance and fault tolerance improvements in the inverse augmented data manipulator network the inverse augmented data manipulator (iadm) is a multistage interconnection network based on the augmented data manipulator (adm) and feng's data manipulator. it is designed to be used in large-scale parallel/distributed processing systems for communication among processors, memories, and other system devices. 
two aspects of iadm network design are discussed: performance and fault tolerance. a single stage look-ahead scheme for predicting blockage is presented to enhance performance. next, one method of adding some links to the network to enable it to tolerate one link failure is described. finally, a different method of adding links is shown that both improves performance and allows the network to tolerate two switching element or two link failures. included is a new routing tag scheme that accommodates the new links. robert j. mcmillen howard jay siegel comparison of connection admission-control schemes in the presence of hand- offs in cellular networks sunghyun choi kang g. shin performance analysis of small fddi networks jesse smith l. donnell payne tom nute models for use in the design of macro-pipelined parallel processors bradley warren smith howard jay siegel a distributed algorithm of delay-bounded multicast routing for multimedia applications in wide area networks xiaohua jia an ipc protocol and its hardware realization for a high-speed distributed multicomputer system multicomputer systems with distributed control form an architecture that simultaneously satisfies such design goals as high performance through parallel operation of vlsi processors, modular extensibility, fault tolerance, and system software simplication. the nodes of the system may be locally concentrated or spatially dispersed as a local network. applications range from data base-oriented transactional systems to "number crunching." the system is service-oriented; that is, it appears to the user as one computer on which parallel processing takes place in the form of cooperating processes. cooperation is regulated by the unique interprocess communication (ipc) protocol presented in this paper. the high-level protocol is based on the computer/producer model and satisfies all requirements for such a distributed multicomputer system. it is demonstrated that the protocol lends itself toward a straightforward mechanization by dedicated hardware consisting of a cooperation handler, an address transformation and memory guard unit, and bus connection logic. these special hardware resources, assisted by a "local operating system", form the supervisor of a node. nodes are connected by a high-speed bus (280m bit/sec). programming aspects as implied by the ipc protocol are also described. w. k. giloi p. behr enhanced privacy and authentication for the global system for mobile communications the global system for mobile communications (gsm) is widely recognized as the modern digital mobile network architecture. increasing market demands point toward the relevancy of security-related issues in communications. the security requirements of mobile communications for the mobile users include: (1) the authentication of the mobile user and visitor location register/home location register; (2) the data confidentiality between mobile station and visitor location register, and the data confidentiality between visitor location register and visitor location register/home location register (vlr/hlr); (3) the location privacy of mobile user. however, gsm does not provide enough security functions to meet these requirements. we propose three improved methods to enhance the security, to reduce the storage space, to eliminate the sensitive information stored in vlr, and consequently to improve the performance of the system. 
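for context, the baseline gsm authentication that the abstract builds on is a challenge-response exchange: the home network derives a signed response from a random challenge and the subscriber key ki, and the visited network compares it with the handset's answer. the sketch below models the operator's a3 algorithm with an hmac purely for illustration; it is not the authors' improved protocol.

    import hmac, hashlib, os

    def a3(ki: bytes, rand: bytes) -> bytes:
        # stand-in for the operator's a3 algorithm (real gsm deployments use comp128 variants).
        return hmac.new(ki, rand, hashlib.sha256).digest()[:4]   # 32-bit sres

    # home network (hlr/auc) side: precompute a challenge/response pair for the subscriber.
    ki = os.urandom(16)                 # subscriber key, shared only with the sim
    rand = os.urandom(16)               # random challenge
    expected_sres = a3(ki, rand)

    # visited network (vlr) challenges the mobile, which answers with its own computation.
    sres_from_mobile = a3(ki, rand)     # computed on the sim from the same ki
    print("authenticated:", hmac.compare_digest(expected_sres, sres_from_mobile))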
proposed methods include an improved authentication protocol for the mobile station, a data confidentiality protocol, and a location privacy protocol. the merit of the proposed methods is to improve but not to alter the existing architecture of the system. furthermore, this study also performs computational and capacity analyses to evaluate the original gsm system and proposed approaches on a comparative basis. chii-hwa lee min-shiang hwang wei-pang yang multicast atm switches: survey and performance evaluation computer networks are undergoing a remarkable transformation. the widespread use of optical fiber to transmit data has brought tremendous increases in network bandwidth. furthermore, greater cpu power, increasing disk capacity, and support for digital audio and video are creating demand for a new class of network services. for example, video-on-demand, distant learning, distant diagnosis, video conferences, and many other applications have popped up one after another in recent years. many of these services have one thing in common. they all require that the same piece of data be sent to multiple recipients. even in traditional networks, this operation, called multicasting, cannot be handled easily and cheaply. when scaled up to high speed atm-based networks, the situation could be worse. multiple streams of data travel around atm networks. each tries to send to many different destinations simultaneously. therefore, designing economical atm network switches which can support multicasting operations easily is very important in the future generation of high speed networks. in the past twelve years or so, many designs for multicasting atm switches have been proposed. it seems about time to do a historical survey. it is hoped that by learning from the wisdom of the previous authors, a new spark or angle can be found and exploited to design new multicasting atm switches. without easy and inexpensive multicasting, all the exciting services may become unaffordable. this will in turn lead to the diminishing of customer bases and finally will hinder the full-scale deployment of high speed networks. ming-huang guo ruay-shiung chang using pathchar to estimate internet link characteristics allen b. downey mobile atm buffer capacity analysis this paper extends a stochastic theory for buffer fill distribution for multiple "on" and "off" sources to a mobile environment. queue fill distribution is described by a set of differential equations assuming sources alternate asynchronously between exponentially distributed periods in "on" and "off" states. this paper includes the probabilities that mobile sources have links to a given queue. the sources represent mobile user nodes, and the queue represents the capacity of a switch. this paper presents a method of analysis which uses mobile parameters such as speed, call rates per unit area, cell area, and call duration and determines queue fill distribution at the atm cell level. the analytic results are compared with simulation results. stephen f. bush joseph b. evans victor frost a proposal of an atm wireless access system for tetherless multimedia services this paper proposes an atm wireless access system for tetherless multimedia services. the proposed system is intended to provide atm-based high-speed transmission capability for tetherless multimedia services by wireless media in private lan/wan environments as well as public environments.
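to make the on/off source model in the mobile atm buffer capacity analysis above concrete, here is a crude discrete-time simulation of exponentially distributed on/off periods feeding a single queue; it omits the mobility-dependent link probabilities that the paper adds, and all rates below are invented for illustration.

    import random

    def simulate(n_sources=20, on_rate=1.0, off_rate=1.0, peak=1.0, capacity=12.0,
                 dt=0.01, horizon=1_000.0):
        # each source alternates between exp(on_rate) "on" and exp(off_rate) "off" periods;
        # while on it feeds the queue at rate `peak`, and the queue drains at `capacity`.
        timers = [random.expovariate(off_rate) for _ in range(n_sources)]
        on = [False] * n_sources
        q, samples, t = 0.0, [], 0.0
        while t < horizon:
            for i in range(n_sources):
                timers[i] -= dt
                if timers[i] <= 0:
                    on[i] = not on[i]
                    timers[i] = random.expovariate(on_rate if on[i] else off_rate)
            arrivals = peak * sum(on)
            q = max(0.0, q + (arrivals - capacity) * dt)
            samples.append(q)
            t += dt
        return sum(samples) / len(samples), max(samples)

    print("mean / max queue fill:", simulate())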
to enable high-speed transmission, this paper proposes the utilization of the shf band, taking advantage of its wide frequency spectrum availability. however, the propagation feature of the shf band limits the wireless terminal mobility in the proposed system compared with current cellular phone systems. this paper discusses the concept and system architecture of the proposed atm wireless access system, including its atm transport based on atm/tdma conversion using a time stamp scheme. masahiro umehira akira hashimoto hideaki matsue masamitsu nakura microcellular handoff using fuzzy techniques in order to manage the high call density expected in future cellular systems, microcells must be used. the size of the microcell will cause a dramatic increase in the number of handoffs. in addition, the small size of the microcell will require handoff algorithms to respond faster than those in today's systems. the problems are further exacerbated by the corner effect phenomenon which causes the signal level to drop by 20--30 db in 10--20 m. thus, in order to maintain reliable communication in a microcellular system, new and better handoff algorithms must be developed. the use of hysteresis and an averaging window in classical handoff techniques reduces unnecessary handoffs, but causes delays which may result in the calls being dropped. a fuzzy-based handoff algorithm is proposed in this paper as a solution to this problem. the performance of the fuzzy-based handoff algorithm was also compared with that obtained using classical handoff algorithms. george edwards ravi sankar two-way tcp traffic over rate controlled channels: effects and analysis lampros kalampoukas anujan varma k. k. ramakrishnan dynamic communication models in embedded system co-simulation ken hines gaetano borriello the z39.50 information retrieval protocol: an overview and status report clifford a. lynch design methodology of a 200mhz superscalar microprocessor: sh-4 a new design methodology focusing on high speed operation and short design time is described for the sh-4 200mhz superscalar microprocessor. random test generation, logic emulation, and formal verification are applied to logic verification for shortening design time. delay budgeting, forward/back annotation, and clock design are key features for timing driven design. toshihiro hattori yusuke nitta mitsuho seki susumu narita kunio uchiyama tsuyoshi takahashi ryuichi satomura on the handoff arrival process in cellular communications philip v. orlik stephen s. rappaport concepts of the system/370 vector architecture this paper discusses the performance, complexity and system-integration considerations that shaped the system/370 vector architecture [1, 9]. the architecture is intended for compatible systems providing a range of price and performance. the paper reviews the reasons for choosing a register-oriented architecture with compound instructions over storage-to-storage operations and vector-instruction chaining. then it discusses the role of cache in stride-n storage accessing, and describes some new facilities introduced for control-program purposes. b. moore a. padegs r. smith w. buchholz optimizing and load balancing metacomputing applications jörg henrichs mobility support in ipv6 charles e. perkins david b. johnson experience with adaptive mobile applications in odyssey in this paper, we present our experience with application-aware adaptation in the context of odyssey, a platform for mobile data access.
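the classical scheme that the microcellular handoff abstract above takes as its baseline can be stated compactly: average the received signal strength over a sliding window and hand off only when the candidate cell beats the serving cell by a hysteresis margin. the sketch below illustrates that baseline (window length and margin are invented), not the proposed fuzzy algorithm.

    from collections import deque

    class ClassicalHandoff:
        def __init__(self, window=8, hysteresis_db=4.0):
            self.window = window
            self.hysteresis_db = hysteresis_db
            self.current, self.candidate = deque(maxlen=window), deque(maxlen=window)

        def update(self, rss_current_db, rss_candidate_db):
            self.current.append(rss_current_db)
            self.candidate.append(rss_candidate_db)
            if len(self.current) < self.window:
                return False                       # not enough samples averaged yet
            avg_cur = sum(self.current) / self.window
            avg_cand = sum(self.candidate) / self.window
            return avg_cand > avg_cur + self.hysteresis_db   # hand off?

    h = ClassicalHandoff()
    # a corner-effect-like trace: the serving cell collapses while the candidate holds steady.
    for cur, cand in [(-70, -85)] * 6 + [(-95, -80)] * 6:
        decided = h.update(cur, cand)
    print("handoff decided:", decided)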
we describe three applications that we have modified to run on odyssey---a video player, a web browser, and a speech recognition system. our experience indicates that it is relatively simple to incorporate applications into odyssey, and that application source code is not always essential. although our applications were built without knowledge of each other, odyssey is able to run them concurrently without interference. however, our experience also exposes important areas of future work. specifically, it reveals the difficulty of balancing agility with stability in adaptation, and emphasizes the need for controlled exposure of internal odyssey state to users. b. d. noble m. satyanarayanan dynamic hierarchical database architecture for location management in pcs networks joseph s. m. ho ian f. akyildiz deriving traffic demands for operational ip networks: methodology and experience engineering a large ip backbone network without an accurate network-wide view of the traffic demands is challenging. shifts in user behavior, changes in routing policies, and failures of network elements can result in significant (and sudden) fluctuations in load. in this paper, we present a model of traffic demands to support traffic engineering and performance debugging of large internet service provider networks. by defining a traffic demand as a volume of load originating from an ingress link and destined to a set of egress links, we can capture and predict how routing affects the traffic traveling between domains. to infer the traffic demands, we propose a measurement methodology that combines flow-level measurements collected at all ingress links with reachability information about all egress links. we discuss how to cope with situations where practical considerations limit the amount and quality of the necessary data. specifically, we show how to infer interdomain traffic demands using measurements collected at a smaller number of edge links---the peering links connecting the neighboring providers. we report on our experiences in deriving the traffic demands in the at&t ip backbone, by collecting, validating, and joining very large and diverse sets of usage, configuration, and routing data over extended periods of time. the paper concludes with a preliminary analysis of the observed dynamics of the traffic demands and a discussion of the practical implications for traffic engineering. anja feldmann albert greenberg carsten lund nick reingold jennifer rexford fred true a trace-based evaluation of adaptive error correction for a wireless local area network wireless transmissions are highly susceptible to noise and interference. as a result, the error characteristics of a wireless link may vary widely depending on environmental factors such as location of the communicating systems and activity of competing radiation sources, making error control a difficult task. in this paper we evaluate error control strategies for a wireless lan. based on low-level packet traces of wavelan, we first show that forward error correction (fec) is effective in recovering from bit corruptions and that packet length adjustment can reduce packet truncation. however, as expected, fixed error control policies can perform very poorly, because they either introduce too much overhead in "good" environments or are not aggressive enough in "bad" environments. we address this problem through adaptive error control, i.e., error control policies that adapt the degree of fec redundancy and the packet size to the environment.
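a schematic of the kind of adaptation loop just described: raise the fec redundancy and shrink packets when the recent corruption rate climbs, and relax both when the channel looks clean. the thresholds, step sizes, and parameter names below are invented for illustration; they are not the policy evaluated in the paper.

    def adapt(loss_rate, parity_bytes, payload_bytes,
              bad=0.05, good=0.01,
              parity_range=(8, 64), payload_range=(256, 1400)):
        # more parity and smaller packets when the channel degrades; the reverse when it clears.
        if loss_rate > bad:
            parity_bytes = min(parity_range[1], parity_bytes * 2)
            payload_bytes = max(payload_range[0], payload_bytes // 2)
        elif loss_rate < good:
            parity_bytes = max(parity_range[0], parity_bytes // 2)
            payload_bytes = min(payload_range[1], payload_bytes * 2)
        return parity_bytes, payload_bytes

    state = (16, 1400)
    for observed_loss in (0.002, 0.08, 0.12, 0.03, 0.004):
        state = adapt(observed_loss, *state)
        print(f"loss={observed_loss:.3f} -> parity={state[0]:3d} payload={state[1]:5d}")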
the effectiveness of adaptive error control depends on the characteristics of the error environment, e.g., the type of errors and the frequency with which the error environment changes. our evaluation shows that adaptive error control can improve throughput consistently across a wide range of wireless lan error environments. the reason for this effectiveness is that changes in the error environment are often caused by human mobility-related events such as the motion of a cordless phone, which take place over seconds, while adaptation protocols can respond in tens of milliseconds. evaluating adaptive error control in a wireless environment is challenging because repeatable experiments are difficult: the wireless environment cannot easily be isolated and the adaptation process itself changes the environment, which may make trace-based evaluation difficult. we introduce a trace-based evaluation methodology that deals appropriately with changes in packet content and size. david a. eckhardt peter steenkiste credit-based flow control for atm networks: credit update protocol, adaptive credit allocation and statistical multiplexing this paper presents three new results concerning credit-based flow control for atm networks: (1) a simple and robust credit update protocol (cup) suited for relatively inexpensive hardware/software implementation; (2) automatic adaptation of credit buffer allocation for virtual circuits (vcs) sharing the same buffer pool; (3) use of credit-based flow control to improve the effectiveness of statistical multiplexing in minimizing switch memory. these results have been substantiated by analysis, simulation and implementation. h. t. kung trevor blackwell alan chapman performance evaluation of connection rerouting schemes for atm-based wireless networks ramachandran ramjee thomas f. la porta jim kurose don towsley simd machines: do they have a significant future? i volunteered to write this report during the simd panel session held on 2/9/95 at frontiers '95. all panelists cooperated by sending me their transparency masters. a draft report was prepared based on these transparency masters, position statements published in the frontiers '95 proceedings on pp. 466-469, and my own notes. the draft was e-mailed on 3/17/95 to the panel organizer/moderator and the panelists for their comments. this final version of the report is based on comments and markups received through 3/31/95. i have drawn from the panelists' ideas freely, using quotation marks only when including their statements verbatim. the panel consisted of both academic and industry experts in the field of massively parallel systems (see the table below). all but tim bridges, who is currently involved in a large-scale software development project for the maspar mp-2 simd architecture, have built working simd machines. the vast practical experience of the panel was quite evident in the insightful presentations and interactions. it is indeed a privilege for me to have worked on this report. behrooz parhami an approach to fault tolerance and error recovery in a parallel graph reduction machine: mars - a case study alessandro contessa a processor architecture for horizon horizon is a scalable shared-memory multiple instruction stream - multiple data stream (mimd) computer architecture independently under study at the supercomputing research center (src) and tera computer company.
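a minimal sketch of hop-by-hop credit flow control in general (not the specific cup protocol or its adaptive allocation, which the credit-based flow control abstract above presents): the upstream node may send one cell per credit, and the downstream node returns credits as it frees buffer slots.

    class CreditLink:
        """one vc's hop: the sender consumes credits, the receiver returns them as buffers drain."""
        def __init__(self, buffer_cells=8):
            self.credits = buffer_cells       # initial allocation = receiver buffer size
            self.buffered = 0

        def try_send(self):
            if self.credits == 0:
                return False                  # sender stalls instead of overrunning the buffer
            self.credits -= 1
            self.buffered += 1
            return True

        def drain(self, cells=1):
            freed = min(cells, self.buffered)
            self.buffered -= freed
            self.credits += freed             # credit update flows back upstream

    link = CreditLink(buffer_cells=4)
    sent = sum(link.try_send() for _ in range(6))   # only 4 of 6 cells get through
    link.drain(2)                                   # receiver frees two slots
    sent += sum(link.try_send() for _ in range(6))  # two more can now be sent
    print("cells sent:", sent)                      # 6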
it is composed of a few hundred identical scalar processors and a comparable number of memories, sparsely embedded in a three-dimensional nearest-neighbor network. each processor has a horizontal instruction set that can issue up to three floating point operations per cycle without resorting to vector operations. processors will each be capable of performing several hundred million floating point operations per second (flops) in order to achieve an overall system performance target of 100 billion (10^11) flops. this paper describes the architecture of the processor in the horizon system. in the fashion of the denelcor hep, the processor maintains a variable number of single instruction stream - single data stream (sisd) processes, which are called instruction streams. memory latency introduced by the large shared memory is hidden by switching context (instruction stream) each machine cycle. the processor functional units are pipelined to achieve high computational throughput rates; however, pipeline dependencies are hidden from user code. hardware mechanisms manage the resources to guarantee anonymity and independence of instruction streams. m. r. thistle b. j. smith the performance realities of massively parallel processors: a case study o. m. lubeck m. l. simmons h. j. wasserman the architecture of the universe network the universe network is composed of a number of local area networks at various sites in the u. k., joined by high capacity data links. apart from one token ring, all of the local area networks are cambridge rings; the high capacity data links are provided by a 1 mbit/s satellite broadcast channel. the universe project encompasses both the design and implementation of the network, and a program of experiments which make use of the network. one of the notable features of the network is that a host's view of communication over the network is no different from communication over a single ring; no internet protocol is used. ian m leslie roger m needham john w burren graham c adams estimating the multiplicities of conflicts to speed their resolution in multiple access channels new, improved algorithms are proposed for regulating access to a multiple-access channel, a common channel shared by many geographically distributed computing stations. a conflict of multiplicity n occurs when n stations transmit simultaneously to the channel. as a result, all stations receive feedback indicating whether n is 0, 1, or ≥ 2. if n = 1, the transmission succeeds; whereas if n ≥ 2, all the transmissions fail. algorithms are presented and analyzed that allow the conflicting stations to compute a stochastic estimate n* of n, cooperatively, at small cost, as a function of the feedback elicited during its execution. an algorithm to resolve a conflict among two or more stations controls the retransmissions of the conflicting stations so that each eventually transmits singly to the channel. combining one of our estimation algorithms with a tree algorithm (of capetanakis, hayes, and tsybakov and mikhailov) then leads to a hybrid algorithm for conflict resolution. several efficient combinations are possible, the most efficient of which resolves conflicts about 20 percent faster on average than any of the comparable algorithms reported to date. albert g. greenberg philippe flajolet richard e.
ladner seamless - a latency-tolerant risc-based multiprocessor architecture (abstract) the seamless parallel system being developed at the university of iowa ece department provides a method for providing latency tolerance in physically- distributed memory systems utilizing "off- the-shelf" risc cpu's without incurring the overhead of multithreading. seamless encompasses an evolutionary new programming model emphasizing data locality that views communication as data movement rather than message passing i/o. a hardware locality manager is added to each processing element to perform this data movement concurrently with computation. samuel a. fineberg thomas l. casavant brent h. pease a general purpose proxy filtering mechanism applied to the mobile environment bruce zenel understanding and improving tcp performance over networks with minimum rate guarantees wu-chang feng dilip d. kandlur debanjan saha kang g. shin building reliable mobile-aware applications using the rover toolkit this paper discusses extensions to the rover toolkit for constructing reliable mobile-aware applications. the extensions improve upon the existing failure model, which addresses client or communication failures and guarantees reliable message delivery from clients to server, but does not address server failures (e.g., the loss of an incoming message due to server failure) (joseph et al., 1997). due to the unpredictable, intermittent communication connectivity typically found in mobile client environments, it is inappropriate to make clients responsible for guaranteeing request completion at servers. the extensions discussed in this paper provide both system- and language-level support for reliable operation in the form of stable logging of each message received by a server, per-application stable variables, programmer-supplied failure recovery procedures, server process failure detection, and automatic server process restart. the design and implementation of fault-tolerance support is optimized for high performance in the normal case (network connectivity provided by a high latency, low bandwidth, wireless link): measurements show a best-case overhead of less than 7% for a reliable null rpc over wired and cellular dialup links. experimental results from both micro-benchmarks and applications, such as the rover web browser proxy, show that support for reliable operation can be provided at an overhead of only a few percent of execution time during normal operation. anthony d. joseph m. frans kaashoek multiple crossbar network integrated supercomputing framework at los alamos national laboratory, site of one of the world's most powerful scientific supercomputing facilities, a prototype network for an environment that links supercomputers and workstations is being developed. driven by a need to provide graphics data at movie rates across a network from a supercomputer to a scientific workstation, the network is called the multiple crossbar network (mcn). it is intended to be a coarsely-grained, loosely- coupled, general-purpose multicomputer framework that will vastly increase the speed at which supercomputers communicate with each other in large networks. the components of the network are described, as well as work done in collaboration with vendors who are interested in providing commercial products. r. hoebelheinrich r. thomsen limiting logical pr neighborhood size b. c. miller an application-oriented error control scheme for high-speed networks fengmin gong gurudatta m. 
parulkar wireless architecture for access to remote services (wiars) (poster session) our research involves the creation of a plug and play environment where different devices and services can be made to function together effortlessly. currently the connection of any entity to another entity, through a network, needs prior setup and configuration. we aim to design an architecture, which requires minimal setup processes or none, for easy access to remote services using the wireless application protocol (wap) and jini. amisha thakkar bina ramamurthy the scalability of multigrain systems donald yeung an introduction to the transmission performance capabilities of ieee 802-5 token-ring networks d. irvin a tunable protocol for symmetric surveillance in distributed systems in distributed systems surveillance protocols are used for monitoring the status of remote sites. a remote site is regarded as being available as long as messages are received from this site, otherwise it is regarded as being unavailable. if a site becomes unavailable, this will be reported to other sites and recovery actions can be initiated. using an example it will be shown that in certain cases it is necessary that, whenever some site s1 detects the unavailability of some other site s2, within a fixed amount of time s2 must also have detected an unavailability of s1. unfortunately, this cannot be guaranteed by existing surveillance protocols. another problem with existing protocols is that remote sites are usually reported as being unavailable after being timed out only once, i.e. the loss of just one message might cause complete systems to back out. two versions of a protocol for so-called symmetric surveillance are presented. both guarantee that, if s1 detects the unavailability of s2 at time t0, then s2 (provided that s2 has not crashed) will become aware of this fact at t1 such that t1 - t0 < Δ. this property is of special interest for handling network partitioning. additionally, one of the versions is tunable, i.e. it can be specified how many timeouts may occur before a site is regarded as being unavailable. b walter multithreaded computer systems r. r. oldehoeft the implementation of guaranteed, reliable, secure broadcast networks this paper depicts a conceptually simple and easy to implement protocol that provides reliable and secure broadcast/multicast communication. the methodology used in this protocol is surprisingly simple. three logical nodes are enforced in the network - a central retransmitter, a designated acknowledger, and a (many when needed) playback recorder(s). through the coordinated service of the three nodes, every user node can be guaranteed to receive all broadcast messages in the correct temporal order. a fourth logical node, the security controller, can be added to the protocol to provide security-related services such as user authentication, message encryption, etc. this protocol (grsb - guaranteed, reliable, secure broadcast) can be implemented in many types of networks - local area networks, wide area networks, as well as satellite communications. some of its implementation and performance issues are also discussed in the paper. lawrence c. n. tseung keh-chiang yu a close look at vector performance of register-to-register vector computers and a new model ingrid y. bucher margaret l.
simmons edmund: a multicast kernel for distributed applications although many local area networks and operating systems support the use of multicast communications, multicast communications have remained primarily an interesting research tool. however, the growing research interest in multimedia and hypermedia for knowledge-based models and cooperative work environments, and the need for object migration in distributed systems, suggest that there are other applications that could benefit from the use of multicast. this paper presents an overview of edmund, a lightweight kernel that supports interprocess multicast messaging on a network of intel 80286 personal computers, as well as some of our preliminary research results which demonstrate the uses and benefits of multicast communication. larry hughes communication estimation for hardware/software codesign peter voigt knudsen jan madsen transputer-based parallel processing products computer system architects (csa) designs, manufactures, and sells products which use parallel processing to provide low-cost, high-compute performance to the scientific, engineering, and educational marketplace. through its close ties with industry and university research, csa remains on the leading edge of parallel processing technology. the company was started in 1981 as a consulting firm, offering contract design of high speed computer architectures. contracts with companies such as hewlett-packard, burroughs, and floating point systems provided the internal funding for development of a family of computing products using inmos transputers. the transputer is a high-performance 32-bit microprocessor specifically designed to function as a component processor in a network of processors. corporate computer system architects tcp checksum function design william w. plummer organization of invalidation reports for energy-efficient cache invalidation in mobile environments in a wireless environment, mobile clients often cache frequently accessed data to reduce contention on the limited wireless bandwidth. however, it is difficult for clients to ascertain the validity of their cache content because of their frequent disconnection. one promising cache invalidation approach is the bit-sequences scheme that organizes invalidation reports as a set of binary bit sequences with an associated set of timestamps. the report is periodically broadcast by the server to clients listening to the communication channel. while the approach has been shown to be effective, it is not energy efficient as clients are expected to examine the entire invalidation report. in this paper, we reexamine the bit-sequences method and study different organizations of the invalidation report to facilitate clients to selectively tune to the portions of the report that are of interest to them. this allows the clients to minimize the power consumption when invalidating their cache content. we conducted extensive studies based on a simulation model. our study shows that, compared to the bit-sequences approach, the proposed schemes are not only equally effective in salvaging the cache content but are more efficient in energy utilization. kian-lee tan performance of advanced architectures j. dongarra status of osi (and related) standards a. lyman chapin comparing software and hardware schemes for reducing the cost of branches pipelining has become a common technique to increase throughput of the instruction fetch, instruction decode, and instruction execution portions of modern computers.
branch instructions disrupt the flow of instructions through the pipeline, increasing the overall execution cost of branch instructions. three schemes to reduce the cost of branches are presented in the context of a general pipeline model. ten realistic unix domain programs are used to directly compare the cost and performance of the three schemes and the results are in favor of the software-based scheme. for example, the software-based scheme has a cost of 1.65 cycles/branch vs. a cost of 1.68 cycles/branch of the best hardware scheme for a highly pipelined processor (11-stage pipeline). the results are 1.19 (software scheme) vs. 1.23 cycles/branch (best hardware scheme) for a moderately pipelined processor (5-stage pipeline). w. w. hwu t. m. conte p. p. chang tree multicast strategies in mobile, multihop wireless networks tree multicast is a well established concept in wired networks. two versions, per-source tree multicast (e.g., dvmrp) and shared tree multicast (e.g., core based tree), account for the majority of the wireline implementations. in this paper, we extend the tree multicast concept to wireless, mobile, multihop networks for applications ranging from ad hoc networking to disaster recovery and battlefield. the main challenge in wireless, mobile networks is the rapidly changing environment. we address this issue in our design by: (a) using "soft state"; (b) assigning different roles to nodes depending on their mobility (2-level mobility model); (c) proposing an adaptive scheme which combines shared tree and per-source tree benefits, and (d) dynamically relocating the shared tree rendezvous point (rp). a detailed wireless simulation model is used to evaluate various multicast schemes. the results show that per-source trees perform better in heavy loads because of the more efficient traffic distribution; while shared trees are more robust to mobility and are more scalable to large network sizes. the adaptive tree multicast scheme, a hybrid between shared tree and per-source tree, combines the advantages of both and performs consistently well across all load and mobility scenarios. the main contributions of this study are: the use of a 2-level mobility model to improve the stability of the shared tree, the development of a hybrid, adaptive per-source and shared tree scheme, and the dynamic relocation of the rp in the shared tree. mario gerla ching-chuan chiang lixia zhang organizational alignment through information technology: a web-based approach to change jeffrey h. smith william heinrichs calculating availability and performability measures of repairable computer systems using randomization repairable computer systems are considered, the availability behavior of which can be modeled as a homogeneous markov process. the randomization method is used to calculate various measures over a finite observation period related to availability modeling of these systems. these measures include the distribution of the number of events of a certain type, the distribution of the length of time in a set of states, and the probability of a near-coincident fault. the method is then extended to calculate performability distributions. the method relies on coloring subintervals of the finite observation period based on the particular application, and then calculating the measure of interest using these colored intervals. edmundo de souza e silva h.
richard gail a comparison of two token-passing bus protocols a well known disadvantage of standard token-passing in ring and bus networks is the waste of channel bandwidth often seen in lightly loaded or asymmetric systems. it is possible to make use of the broadcast mechanism in token bus systems to distribute nearly up-to-date information about the state of individual stations to the entire system. one such scheme involves the determination of a randomly varying set of more active stations. these stations are given a chance to form a second logical ring above the standard logical ring that characterizes the token bus. the transmission cycles of the system can thus be made to alternate between standard token-passing and transmission cycles, and the cycles of token-passing and transmission within the logical ring of more active stations. we assume that each station makes at most one transmission when given the chance to transmit. for poisson arrivals and otherwise general input distributions, the cycle-time distribution of the token is derived for each kind of cycle. an important random variable is the random token turnaround time seen by individual stations. for lightly loaded stations this time tends to be larger than for heavily loaded stations. the distribution of this random time, simple performance measures, and a comparative measure of stability, showing the adaptive scheme to be more stable than the standard, are obtained. v rego a pragmatic view of distributed processing systems this course provides an overview of state-of-the-art concepts and problems associated with implementing a distributed processing system. it defines what constitutes a distributed system and discusses contemporary models of distributed systems. the tutorial describes case studies including ethernet, hyperchannel, nestar, cluster bus, and ungerman-bass net-one systems. a survey of design issues associated with system software, interconnection topology, and new hardware concepts is presented before concluding with observations on future system concepts. kenneth j. thurber can dataflow subsume von neumann computing? we explore the question: "what can a von neumann processor borrow from dataflow to make it more suitable for a multiprocessor?" starting with a simple, "risc-like" instruction set, we show how to change the underlying processor organization to make it multithreaded. then, we extend it with three instructions that give it a fine-grained, dataflow capability. we call the result p-risc, for "parallel risc." finally, we discuss memory support for such multiprocessors. we compare our approach to existing mimd machines and to other dataflow machines. r. s. nikhil an overview of the andrew message system j. rosenberg c. f. everhart n. s. borenstein energy efficiency of tcp in a local wireless environment the focus of this paper is to analyze the energy consumption performance of various versions of tcp, namely, tahoe, reno and newreno, for bulk data transfer in an environment where channel errors are correlated. we investigate the performance of a single wireless tcp connection by modeling the correlated packet loss/error process (e.g., as induced by a multipath fading channel) as a first-order markov chain. based on a unified analytical approach, we compute the throughput and energy performance of various versions of tcp. 
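a minimal sketch of the kind of two-state markov (gilbert) error process used above to model correlated packet losses; the state names, transition probabilities, and loss rate are purely illustrative assumptions, not values taken from the paper:

```python
import random

def gilbert_errors(n_packets, p_gb=0.05, p_bg=0.3, loss_in_bad=0.8, seed=1):
    """Generate a correlated packet-loss pattern from a two-state
    (good/bad) Markov chain; all probabilities here are illustrative."""
    random.seed(seed)
    state = "good"
    losses = []
    for _ in range(n_packets):
        if state == "good":
            lost = False                      # assumed: no loss while in the good state
            if random.random() < p_gb:        # transition good -> bad
                state = "bad"
        else:
            lost = random.random() < loss_in_bad
            if random.random() < p_bg:        # transition bad -> good
                state = "good"
        losses.append(lost)
    return losses

if __name__ == "__main__":
    pattern = gilbert_errors(10000)
    print("overall loss rate:", sum(pattern) / len(pattern))
```

lowering p_bg (or raising p_gb) lengthens the error bursts, which is the correlation effect whose impact on energy the study examines.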
the main findings of this study are that (1) error correlations significantly affect the energy performance of tcp (consistent with analogous conclusions for throughput), and in particular they result in considerably better performance for tahoe and newreno than iid errors, and (2) the congestion control mechanism implemented by tcp does a good job at saving energy as well, by backing off and idling during error bursts. an interesting conclusion is that, unlike throughput, the energy efficiency metric may be very sensitive to the tcp version used and to the choice of the protocol parameters, so that large gains appear possible. michele zorzi ramesh r. rao comparison of hypercube, hypernet, and symmetric hypernet architectures hypercube has been the most popular topology for developing multiprocessor supercomputers because of its connectivity, regularity, symmetry and algorithmic mapping properties. however, if a hypercube needs to be expanded at some future time, both hardware configuration and communication software of each node have to be altered because its node degree is not constant. hwang and ghosh proposed a new interconnection network called hypernet which has a constant node degree and is easily expandable. however, its component count increases asymmetrically with the increase in size. we propose a new topology called symmetric hypernet which grows symmetrically with the increase in dimension and hierarchical level. the nodes in the symmetric net are functionally and physically separated as io nodes and computation nodes to avoid traffic congestion and message delays. the io nodes are placed uniformly among the processing nodes. the system can be dynamically partitioned into subsystems, each of which may be dedicated to serve a user's request(s). the system analysis can be done with the help of analytical models to determine how different structures would affect system performance. r. p. kaushal j. s. bedi issues in using darpa domain names for computer mail although the principal goal of the darpa domain name project is to provide a distributed name resolution mechanism for host names in the darpa internet, it is envisioned that the domain naming scheme will also support names for mailboxes in a global mail system that extends to mail networks beyond the darpa internet. in addition to overviewing the fundamental ideas behind the domain name scheme and outlining a proposal for using domain names as an umbrella naming system for computer mail, we discuss the administrative and technical problems involved in using domain names in the global mail system. douglas e. comer larry l. peterson workload characterization (tutorial): issues and approaches workload characterization is that branch of performance evaluation which concerns itself with the measurement and modeling of the workloads to be processed by the system being evaluated. since all performance indices of interest are workload-dependent, there is no evaluation study that does not require the characterization of one or more workloads. in spite of the importance of the problem, our knowledge in this area leaves much to be desired. the tutorial addresses the main issues, both resolved and unresolved, in the field, and surveys the major approaches that have been proposed and are in use.
modern methods for designing executable artificial workloads, as well as the applications of these techniques in system procurement, system tuning, and capacity planning, are emphasized. domenico ferrari blocking probabilities in a loss system with arrivals in geometrically distributed batches and heterogeneous service requirements erik a. van doorn frans j. m. panken distributed shared memory in a loosely coupled distributed system this work outlines the development and performance validation of an architecture for distributed shared memory in a loosely coupled distributed computing environment. this distributed shared memory may be used for communication and data exchange between communicants on different computing sites; the mechanism will operate transparently and in a distributed manner. this paper describes the architecture of this mechanism and metrics which will be used to measure its performance. we also discuss a number of issues related to the overall design and what research contribution such an implementation can provide to the computer science field. b. d. fleisch fama-pj: a channel access protocol for wireless lans chane l. fullmer j. j. garcia-luna-aceves local management of a global resource in a communication network this paper introduces a new distributed data object called resource controller that provides an abstraction for managing the consumption of a global resource in a distributed system. examples of resources that may be managed by such an object include: the number of messages sent, the number of nodes participating in the protocol, and the total cpu time consumed. the resource controller object is accessed through a procedure that can be invoked at any node in the network. before consuming a unit of resource at some node, the controlled algorithm should invoke the procedure at this node, requesting a permit or a rejection. the key characteristics of the resource controller object are the constraints that it imposes on the global resource consumption. an (m, w)-controller guarantees that the total number of permits granted is at most m; it also ensures that, if a request is rejected, then at least m - w permits are eventually granted, even if no more requests are made after the rejected one. in this paper, we describe several message- and space-efficient implementations of the resource controller object. in particular, we present an (m, w)-controller whose message complexity is o(n log^2 n log(m/(w + 1))), where n is the total number of nodes. this is in contrast to the o(nm) message complexity of a fully centralized controller which maintains a global counter of the number of granted permits at some distinguished node and relays all the requests to the node. yehuda afek baruch awerbuch serge plotkin michael saks location-aware mobile applications based on directory services location-aware applications are becoming increasingly attractive due to the widespread dissemination of wireless networks and the emergence of small and cheap locating technologies. we developed a location information server that simplifies and speeds up the development of these applications by offering a set of generic location retrieval and notification services to the application. the data model and the access protocols of these services are based on the x.500 directory service and the lightweight directory access protocol ldap since these are becoming the standard attribute-value-pair retrieval mechanisms for internet and intranet environments.
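a minimal sketch, with an invented schema rather than the server's actual x.500/ldap data model, of the attribute-value style of location lookup and change notification such a service offers (entry names and attributes are hypothetical):

```python
from typing import Callable, Dict, List

class LocationDirectory:
    """Toy attribute-value directory; a stand-in for a directory-backed
    location information server. The schema here is invented."""

    def __init__(self) -> None:
        self._entries: Dict[str, Dict[str, str]] = {}
        self._watchers: Dict[str, List[Callable[[str, str], None]]] = {}

    def update(self, dn: str, location: str) -> None:
        # store the location attribute and fire any registered notifications
        entry = self._entries.setdefault(dn, {})
        old = entry.get("location")
        entry["location"] = location
        if old != location:
            for cb in self._watchers.get(dn, []):
                cb(dn, location)

    def lookup(self, dn: str) -> str:
        # location retrieval service: read one attribute of one entry
        return self._entries[dn]["location"]

    def notify_on_change(self, dn: str, cb: Callable[[str, str], None]) -> None:
        # notification service: call back whenever the entry's location changes
        self._watchers.setdefault(dn, []).append(cb)

if __name__ == "__main__":
    d = LocationDirectory()
    d.notify_on_change("cn=alice,o=example", lambda dn, loc: print(dn, "moved to", loc))
    d.update("cn=alice,o=example", "cell-17")
    print(d.lookup("cn=alice,o=example"))
```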
this approach establishes a smooth migration path from conventional to location-aware applications. the paper presents the location information server concepts, defines its directory data model and access services, and discusses the implementation options of the location information server. henning maass a minimal duplex connection capability in the top three layers of the osi reference model a minimal duplex connection capability is described as the ability to open, close, abort, and transfer data through a duplex connection between two application processes. the goal of this paper is to describe a minimal duplex connection capability for the top three layers of the open systems interconnection (osi) reference model that will be appropriate for use by very small open systems such as home computers. such a connection capability would allow these small systems to interconnect with machines of arbitrary size. the approach taken is to use only the parameters on service primitives that are absolutely required to accomplish this connection service. the protocol and services necessary to accomplish such a minimal connection are described for the application, presentation, and session layers of the osi reference model. the existing draft proposals for services and protocol for the session layer provide a basis for this duplex connection service. a minor modification to the session draft proposals is suggested. the services for the presentation and application layer are defined in terms of the tentative service primitives being considered in the international standards efforts. the transparent initial context defined for the presentation layer is used to minimize the work of that layer. the benefits of this minimal duplex connection capability are then discussed from the point of view of very small open systems. m. f. dolan data link layer: two impossibility results nancy a. lynch yishay mansour alan fekete emulating an mimd architecture as part of a research effort in parallel processor architecture and programming, the ultracomputer group at new york university has done extensive simulation of parallel programs. to speed up these simulations, we have developed a parallel processor emulator, using the microprogrammable puma computer system previously designed and built at nyu. su bogong ralph grishman is service priority useful in networks? sandeep bajaj lee breslau scott shenker an analysis of naming conventions for distributed computer systems name servers that collectively manage a global name space facilitate sharing of resources in a large internetwork by providing means of locating named objects. the efficiency with which the name space can be managed is strongly influenced by the adopted naming convention. structured name spaces are shown to simplify name space management from both an administrative and system viewpoint. formulae have been derived which allow one to quantitatively measure the effect of the distributed name server configuration on a given client's level of performance. in general, the cost of a name server query can be reduced by distributing replicated copies of name server database entries in a way that exploits the locality of clients' reference patterns. douglas b. terry optimal dynamic mobility management for pcs networks jie li hisao kameda keqin li automatically tuned collective communications the performance of the mpi's collective communications is critical in most mpi-based applications.
a general algorithm for a given collective communication operation may not give good performance on all systems due to the differences in architectures, network parameters and the storage capacity of the underlying mpi implementation. in this paper we discuss an approach in which the collective communications are tuned for a given system by conducting a series of experiments on the system. we also discuss a dynamic topology method that uses the tuned static topology shape, but re-orders the logical addresses to compensate for changing run time variations. a series of experiments were conducted comparing our tuned collective communication operations to various native vendor mpi implementations. the use of the tuned collective communications resulted in about 30 percent to 650 percent improvement in performance over the native mpi implementations. sathish s. vadhiyar graham e. fagg jack dongarra performance models of token ring local area networks this paper presents a simple heuristic analytic algorithm for predicting the "response times" of messages in asymmetric token ring local area networks. a description of the token ring and the model is presented in section 2, the algorithm is described in section 3, and the empirical results in section 4. the analytic results were compared against a detailed simulation model and the results are extremely close over a wide range of models. local area networks (or lans) offer a very attractive solution to the problem of connecting a large number of devices distributed over a small geographic area. they are an inexpensive, readily expandable, and highly flexible communications medium. they are the backbone of the automated office - a significant component of the office of the future. the importance of lans in the future of applied computer science has resulted in a tremendous burst of interest in the study of their behaviour. there are already many different lan architectures proposed and studied in the literature [tropper 81] [tannenbaum 81] [babic 78] [metcalfe 76] [clark 78]. one lan architecture is significant for several reasons. this architecture is the token ring [carsten 77]. it has attracted interest because of its simplicity, fairness, and efficiency. the interest it has generated has resulted in the proposal of several different versions. this paper concentrates on one of these versions - the single token token ring protocol as described in [bux 81]. this particular version is attractive because of its overall simplicity and reliability. this paper presents an algorithm for predicting response times in a token ring with the single token protocol. robert berry k. mani chandy books michele tepper bimodal multicast there are many methods for making a multicast protocol "reliable." at one end of the spectrum, a reliable multicast protocol might offer atomicity guarantees, such as all-or-nothing delivery, delivery ordering, and perhaps additional properties such as virtually synchronous addressing. at the other are protocols that use local repair to overcome transient packet loss in the network, offering "best effort" reliability. yet none of this prior work has treated stability of multicast delivery as a basic reliability property, such as might be needed in an internet radio, television, or conferencing application. this article looks at reliability with a new goal: development of a multicast protocol which is reliable in a sense that can be rigorously quantified and includes throughput stability guarantees.
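as one way to picture a multicast whose reliability can be quantified probabilistically, the following sketch (not the authors' protocol; the population size, fanout, loss rate, and round count are invented) simulates push-style gossip in which delivery of a message either dies out early or reaches nearly every member:

```python
import random

def gossip_delivery(n=500, fanout=3, p_loss=0.5, rounds=20, seed=None):
    """Push-style gossip from a single initial holder: each round every
    holder forwards to `fanout` random peers, and each individual send
    is lost with probability `p_loss`. All parameters are illustrative."""
    rng = random.Random(seed)
    has_msg = [False] * n
    has_msg[0] = True
    for _ in range(rounds):
        holders = [i for i in range(n) if has_msg[i]]
        for _ in holders:
            for peer in rng.sample(range(n), fanout):
                if rng.random() >= p_loss:
                    has_msg[peer] = True
    return sum(has_msg) / n

if __name__ == "__main__":
    # over many runs the delivered fraction clusters near 0 or near 1,
    # i.e. the distribution of outcomes is bimodal
    fractions = [gossip_delivery(seed=s) for s in range(10)]
    print([round(f, 2) for f in fractions])
```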
we characterize this new protocol as a "bimodal multicast" in reference to its reliability model, which corresponds to a family of bimodal probability distributions. here, we introduce the protocol, provide a theoretical analysis of its behavior, review experimental results, and discuss some candidate applications. these confirm that bimodal multicast is reliable, scalable, and that the protocol provides remarkably stable delivery throughput. kenneth p. birman mark hayden oznur ozkasap zhen xiao mihai budiu yaron minsky a model for enhancing connection rerouting in mobile networks in active networks (ans), programs can be injected into routers and switches to extend the functionalities of the network. this allows programmers to enhance existing protocols and enables the rapid deployment of new protocols. the main objective of this paper is to show why ans are ideal in solving the problem of connection rerouting and how current end-to-end based approaches can be enhanced. in this paper we propose a new model called active connection rerouting (acr). in the acr model, programs are dynamically injected into switches/routers in mobile networks to facilitate efficient connection rerouting during mobile host (mh) migration. we show how connection rerouting can be performed efficiently within the network. the acr model uses a two stage optimization process: (i) path extension and (ii) lazy optimization. unlike previous work on two stage connection rerouting, acr has the following properties: elimination of loops within switches/routers and incremental optimization which minimizes buffer requirements and maximizes path reuse. our experimental results show that acr is efficient and scalable and that it performs well in all topologies. power and energy reduction via pipeline balancing _minimizing power dissipation is an important design requirement for both portable and non-portable systems. in this work, we propose an architectural solution to the power problem that retains performance while reducing power. the technique, known as_ pipeline balancing (plb), _dynamically tunes the resources of a general purpose processor to the needs of the program by monitoring performance within each program. we analyze metrics for triggering_ plb, _and detail instruction queue design and energy savings based on an extension of the alpha 21264 processor. using a detailed simulator, we present component and full chip power and energy savings for single and multi-threaded execution. results show an issue queue and execution unit power reduction of up to 23% and 13%, respectively, with an average performance loss of 1% to 2%_. r. iris bahar srilatha manne an approach to scalability study of shared memory parallel systems the overheads in a parallel system that limit its scalability need to be identified and separated in order to enable parallel algorithm design and the development of parallel machines. such overheads may be broadly classified into two components. the first one is intrinsic to the algorithm and arises due to factors such as the work-imbalance and the serial fraction. the second one is due to the interaction between the algorithm and the architecture and arises due to latency and contention in the network. a top-down approach to scalability study of shared memory parallel systems is proposed in this research.
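a minimal sketch, with invented coefficients, of how execution time on p processors can be split into an algorithmic term (serial fraction plus evenly divided work) and an interaction term (latency and contention) so that each overhead's effect on speedup can be examined separately:

```python
def parallel_time(p, t_serial_work=100.0, serial_fraction=0.05,
                  latency_per_proc=0.02, contention_coeff=0.001):
    """Toy decomposition of execution time on p processors:
    an algorithmic component (serial fraction plus a perfect split of the
    remaining work) plus an interaction component that grows with p
    (latency, contention). Coefficients are illustrative, not measured."""
    algorithmic = serial_fraction * t_serial_work + \
                  (1 - serial_fraction) * t_serial_work / p
    interaction = latency_per_proc * p + contention_coeff * p * p
    return algorithmic + interaction

if __name__ == "__main__":
    t1 = parallel_time(1)
    for p in (1, 4, 16, 64, 256):
        tp = parallel_time(p)
        print(f"p={p:4d}  time={tp:7.2f}  speedup={t1 / tp:6.2f}")
```

separating the two terms in this way makes it possible to ask whether flattening speedup comes from the algorithm itself or from the network, which is the kind of isolation described in this abstract.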
we define the notion of overhead functions associated with the different algorithmic and architectural characteristics to quantify the scalability of parallel systems; we isolate the algorithmic overhead and the overheads due to network latency and contention from the overall execution time of an application; we design and implement an execution-driven simulation platform that incorporates these methods for quantifying the overhead functions; and we use this simulator to study the scalability characteristics of five applications on shared memory platforms with different communication topologies. anand sivasubramaniam aman singla umakishore ramachandran h. venkateswaran towards interprocess communication and interface synthesis for a heterogeneous real-time rapid prototyping environment franz fischer annette muth georg fäber a threaded/flow approach to reconfigurable distributed systems and service primitives architectures this paper discusses a methodology for managing the assembly, control, and disassembly of large numbers of independent small-scale configurations within large-scale reconfigurable distributed systems. the approach is targeted at service primitives architectures for enhanced telecommunications networks, but can apply to more general settings such as multi-tasking supercomputers and network operations systems.* study of the methods presented here was a key motivation in founding the bell communications research integrated media architecture laboratory (imal) [1]. the threaded/flow approach uses data- flow constructs to assemble higher level functions from other distributed functions and resources with arbitrary degrees of decentralization. equivalence between algorithms and hard and virtual resources is accomplished via threaded-interpretive constructs. function autonomy, concurrency, conditional branching, pipelining, and setup/execution interaction are implicitly supported. some elementary performance comparisons are argued. this work is motivated by telecommunications applications involving coordinated multiple-media in open architectures supporting large numbers of users and outside service vendors. in such networks it is desired that services may be flexibly constructed by the network, service vendors, or by users themselves from any meaningful combination of elementary primitives and previously defined services. reliability, billing, call progress, real-time user control, and network management functions must be explicitly supported. these needs are handled with apparent high performance by the approach. l. f. ludwig an overview of the university of texas at dallas' center for advanced telecommunications systems and services (catss) imrich chlamtac stefano basagni stephen gibbs a study of branch prediction strategies in high-performance computer systems, performance losses due to conditional branch instructions can be minimized by predicting a branch outcome and fetching, decoding, and/or issuing subsequent instructions before the actual outcome is known. this paper discusses branch prediction strategies with the goal of maximizing prediction accuracy. first, currently used techniques are discussed and analyzed using instruction trace data. then, new techniques are proposed and are shown to provide greater accuracy and more flexibility at low cost. james e. 
smith a multiply-and-accumulate selection algorithm for dynamic entropy coding d irwin conferences jay blickstein the impact of synchronization on the session problem marios mavronicolas multicast virtual topologies for collective communication in mpcs and atm clusters y. huang c. c. huang p. k. mckinley whither the network? the next millennium jerry golick protocol implementation using integrated layer processing integrated layer processing (ilp) is an implementation concept which "permit[s] the implementor the option of performing all the [data] manipulation steps in one or two integrated processing loops" [1]. to estimate the achievable benefits of **ilp**, a file transfer application with an encryption function on top of a user-level tcp has been implemented and the performance of the application in terms of throughput and packet processing times has been measured. the results show that it is possible to obtain performance benefits by integrating marshalling, encryption and tcp checksum calculation. they also show that the benefits are smaller than in simple experiments, where ilp effects have not been evaluated in a complete protocol environment. simulations of memory access and cache hit rate show that the main benefit of ilp is reduced memory accesses rather than an improved cache hit rate. the results further show that data manipulation characteristics may significantly influence the cache behavior and the achievable performance gain of ilp. torsten braun christophe diot the architecture of a gb/s multimedia protocol adapter erich rutsche the simple book: an introduction to management of tcp/ip-based internets greg satz optimizing video-on-demand through requestcasting video-on-demand (vod) designs typically feature either request or broadcast architectures. both have limitations. request architectures experience a limit in the number of clients that can be adequately serviced. broadcast architectures require large, often unavailable, bandwidth. in addition, it is difficult to limit viewing to a target audience. in this paper, we present a new architecture for a metropolitan vod service that we name requestcasting. our architecture combines the two general approaches of request and broadcast, but not their respective limitations. with the requestcast architecture, we implement an improved pyramid broadcasting protocol. our synchronized method of employing pyramid broadcasting is key to providing robust vod service. julie pochueva ethan v. munson denis pochuev reliability and performance of hierarchical raid with multiple controllers redundant arrays of inexpensive disks (raid) offer fault tolerance against disk failures. however, a storage system having more disks suffers from lower reliability and performance. a raid architecture tolerating multiple disk failures shows severe performance degradation in comparison to the raid level 5 due to the complexity of implementation. we present a new raid architecture that tolerates at least three disk failures and offers throughput similar to that of raid level 5. we call it the hierarchical raid, which is hierarchically composed of raid levels. furthermore, we formally introduce the mean-time-to-data-loss (mttdl) of traditional raid and the hierarchical raid using a markov process for detailed comparison. sung hoon baek bong wan kim eui joung joung chong won park reduced instruction set computers reduced instruction set computers aim for both simplicity in hardware and synergy between architectures and compilers.
optimizing compilers are used to compile programming languages down to instructions that are as unencumbered as microinstructions in a large virtual address space, and to make the instruction cycle time as fast as possible. david a. patterson performance analysis of an asynchronous multi-rate crossbar with bursty traffic one of the most promising approaches to building high speed networks and distributed multiprocessors is the use of optical interconnections. the basic component of such a system is a switch (interconnection network) that has a capacity of interconnecting a large number of inputs to outputs. in this paper we present an analysis of an n1 x n2 asynchronous crossbar switch model for all-optical circuit-switching networks that incorporates multi-rate arrival traffic with varied arrival distributions. we compare the model behavior using traffic loads derived from the binomial, pascal, and poisson statistical distributions. we give efficient algorithms to compute the performance measures. we analyze the effect of load changes from particular traffic distribution streams on system performance and give a simple "economic" interpretation. paul stirpe eugene pinsky a single-relation module for a data base machine the purpose of this paper is twofold. first, a set of design goals for data base machines is defined, together with a partial order of importance. the main objective is chosen to be a machine capable of executing any relational operation in constant time. second, architecture of a data base machine is described. associative array processors are employed together with other means to achieve very high performance. the difficulties arising from the proposed design are considered, various alternatives and trade-offs are discussed, and the effects of future technology are predicted. bruce w. arden ran ginosar getting the most for your megabit michael h. comer michael w. condry scott cattanach roy campbell t: integrated building blocks for parallel computing g. m. papadopoulos g. a. boughton r. greiner m. j. beckerle efficient demultiplexing of incoming tcp packets when a transport protocol segment arrives at a receiving system, the receiving system must determine which application is to receive the protocol segment. this decision is typically made by looking up a protocol control block (pcb) for the segment, based on information in the segment's header. pcb lookup (a form of demultiplexing) is typically one of the more expensive operations in handling inbound protocol segment [fe190]. many recent protocol optimizations for the transmission control protocol (tcp) [jac88] assume that a large component of tcp traffic is bulk-data transfers, which result in packet trains [jr86]. if packet trains are prevalent, there is a high likelihood that the next tcp segment is en route to the same application (i.e. uses the same pcb) as the previous tcp segment. in these environments a very simple one-pcb cache like those used in bsd systems yields very high cache hit rates. however, there are classes of applications that do not form packet trains, and these applications do not perform well with a one- pcb cache. examples of such applications are quite common in the area of heads-down data entry into on- line transaction-processing (oltp) systems. oltp systems make heavy use of computer communications networks and have large aggregate-packet-rates but are also characterized by large numbers of connections, low per-connection packet rates, and rather small packets. 
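a minimal sketch, using a synthetic packet trace and hypothetical connection tuples, of the contrast between a one-entry pcb cache and a full hash-table pcb lookup that this comparison is concerned with (not the paper's actual schemes):

```python
import random

def one_entry_cache_hit_rate(trace):
    """Fraction of packets whose connection tuple matches the previous
    packet's tuple, i.e. hits a single-entry pcb cache; a dict stands in
    for the hash-based lookup that resolves the misses."""
    pcb_table = {}                  # (src, dst, sport, dport) -> pcb id
    last_key, cache_hits = None, 0
    for key in trace:
        if key == last_key:
            cache_hits += 1         # one-entry cache: hit only on back-to-back repeats
        last_key = key
        pcb_table.setdefault(key, len(pcb_table))   # hash lookup always finds the pcb
    return cache_hits / len(trace)

if __name__ == "__main__":
    random.seed(0)
    # bulk transfer: a long packet train from one connection
    train = [("h1", "h2", 5001, 80)] * 1000
    # oltp-like mix: many connections, interleaved small packets
    conns = [("h%d" % i, "srv", 1024 + i, 1521) for i in range(200)]
    oltp = [random.choice(conns) for _ in range(1000)]
    print("one-entry cache hit rate, packet train:", one_entry_cache_hit_rate(train))
    print("one-entry cache hit rate, oltp mix:   ", one_entry_cache_hit_rate(oltp))
```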
this combination of characteristics results in a very low incidence of packet trains. this paper uses a simple analytic approach to examine how different pcb lookup schemes perform with oltp traffic. one scheme is shown to work an order of magnitude better for oltp traffic than the one-pcb cache approach while still maintaining good performance for packet-train traffic. paul e. mckenney ken f. dove on the performance of tdd-td/cdma architectures with heterogeneous traffic this paper determines the capacity of a tdd-td/cdma architecture which supports different classes of subscribers and adopts an interference-driven admission control policy. the blocking and the outage probability of the system users are evaluated under various traffic conditions for several uplink/downlink configurations of the time slots, demonstrating that the time division full duplex approach needs careful tuning in order to maximize system capacity. our work further shows that even when data traffic plays the predominant role, the tdd system can satisfactorily cope with remarkable traffic loads and support several erlangs of traffic per cell. maurizio casoni maria luisa merani experience with mean value analysis model for evaluating shared bus, throughput-oriented multiprocessors mee-chow chiang gurindar s. sohi phantom: a simple and effective flow control scheme this paper presents _phantom_, a simple constant space algorithm for rate based flow control. as shown by our simulations, it converges fast to a fair rate allocation while generating a moderate queue length. while our approach can be easily implemented in atm switches for managing abr traffic, it is also suitable for flow control in tcp router based networks. both the introduced overhead and the required modifications in tcp flow control systems are minimal. the implementation of this approach in tcp guarantees fairness and provides a unifying interconnection between tcp routers and atm networks. the new algorithm easily inter-operates with current tcp flow control mechanisms and thus can be gradually introduced into the installed base of tcp networks. yehuda afek yishay mansour zvi ostfeld enhanced superscalar hardware: the schedule table j. k. pickett d. g. meyer spectrum sharing under the asynchronous upcs etiquette: the performance of collocated systems under heavy load ivan vukovic john mckown performance measures and scheduling policies in ring networks leandros tassiulas jinoo joung a direct signaling system for flexible access and deployment of telecommunication services thomas f. la porta kuo-wei herman chen the sigma network the sigma network is one of the most important elements of the sigma system, which is designed to improve software productivity. the sigma network has been developed in order to establish an infrastructure which acts as a development environment provided by logically integrated sigma workstations spread over various companies and inside the companies which approve the concept of the sigma system. it is also included in the scope of its development to enrich application programs, mainly for message communication required by the network community. the network supports ieee802.3, digital packet exchange (x.25) and serial line under the tcp/ip layer and realizes end-to-end immediate communication. the sigma network is a name-oriented virtual network defined by a hierarchical name space (domain) and named objects placed under the domains. its key feature lies in the network management mechanism, i.e. the name server. k.
saito globally constrained power control across multiple channels in wireless data networks we investigate multi-channel transmission schemes for packetized wireless data networks. the transmitting unit transmits concurrently in several orthogonal channels (for example, distinct fdma bands or cdma codes) with randomly fluctuating interference, and there is a global constraint on the total power transmitted across all channels at any time slot. incoming packets to the transmitter are queued up in separate buffers, depending on the channel they are to be transmitted in. in each time slot, one packet can be transmitted in each channel from its corresponding queue. the issue is how much power to transmit in each channel, given the interference in it and the packet backlog, so as to optimize various power and delay costs associated with the system. we formulate the general problem taking a dynamic programming approach. through structural decomposition of the problem, we design practical novel algorithms for allocating power to various channels under the global constraint. nicholas bambos sunil kandukuri a protocol for efficient transfer of data over hybrid fiber/coax systems john o. limb dolors sala an architecture for qos analysis and experimentation william s. marcus the tera computer system robert alverson david callahan daniel cummings brian koblenz allan porterfield burton smith using pathchar to estimate internet link characteristics we evaluate **pathchar**, a tool that infers the characteristics of links along an internet path (latency, bandwidth, queue delays). looking at two example paths, we identify circumstances where **pathchar** is likely to succeed, and develop techniques to improve the accuracy of **pathchar**'s estimates and reduce the time it takes to generate them. the most successful of these techniques is a form of adaptive data collection that reduces the number of measurements **pathchar** needs by more than 90% for some links. allen b. downey communications networks for the force xxi digitized battlefield in striving to meet the increasing demands for timely delivery of multimedia information to the warfighter of the 21st century, the us army is undergoing a gradual evolution from its "legacy" communications networks to a flexible internetwork architecture based solidly on the underlying communications protocols and technology of the commercial internet. the framework for this new digitized battlefield, as described in the dod's joint technical architecture (jta), is taken from the civilian telecommunications infrastructure which, in many cases, differs appreciably from the rigors of the battlefield environment. the purpose of this paper is to survey the components and characteristics of the army's legacy communications networks, to illustrate the directions currently being taken for accomplishing this digitization, to describe the areas in which the civilian and military systems differ, and to define a glide path for convergence of the two technologies in support of the military's increasing appetite for information. paul sass a new adaptive mac layer protocol for broadband packet wireless networks in harsh fading and interference environments anthony s. acampora srikanth v. krishnamurthy ip data services over cdma digital cellular phil karn magnet: an architecture for dynamic resource allocation patty kostkova julie a. mccann hdlc reliability and the frbs method to improve it this paper arrived late and can be found, published in full, on pages 260-267 of the proceedings.
j. selga j. rivera dependency sequences and hierarchical clocks: efficient alternatives to vector clocks for mobile computing systems vector clocks have been used to capture causal dependencies between processes in distributed computing systems. vector clocks are not suitable for mobile computing systems due to (i) lack of scalability: its size is equal to the number of nodes, and (ii) its inability to cope with fluctuations in the number of nodes. this paper presents two efficient alternatives to vector clock, namely, sets of dependency sequences, and hierarchical clock. both the alternatives are scalable and are immune to fluctuations in the number of nodes in the system. ravi prakash mukesh singhal a case for network musical performance a network musical performance (nmp) occurs when a group of musicians, located at different physical locations, interact over a network to perform as they would if located in the same room. in this paper, we present a case for nmp as a practical internet application, and describe a method to ameliorate the effect of late and lost packets on nmp. we describe an nmp system that embodies this concept, that combines several existing standards (midi, mpeg 4 structured audio, rtp/avp, and sip) with a new rtp packetization for midi performance. we analyze nmp experiments performed on calren2 hosts on the uc berkeley, stanford, and caltech campuses. john lazzaro john wawrzynek the locus distributed operating system locus is a distributed operating system which supports transparent access to data through a network wide filesystem, permits automatic replication of storage, supports transparent distributed process execution, supplies a number of high reliability functions such as nested transactions, and is upward compatible with unix. partitioned operation of subnet's and their dynamic merge is also supported. the system has been operational for about two years at ucla and extensive experience in its use has been obtained. the complete system architecture is outlined in this paper, and that experience is summarized. bruce walker gerald popek robert english charles kline greg thiel adaptive rate control and qos provisioning in direct broadcast satellite networks adaptive rate control, if properly employed, is an effective mechanism to sustain acceptable levels of quality of service (qos) in wireless networks where channel and traffic conditions vary over time. in this paper we present an adaptive rate (source and channel) control mechanism, developed as part of an adaptive resource allocation and management (aram) algorithm, for use in direct broadcast satellite (dbs) networks. the algorithm performs admission control and dynamically adjusts traffic source rate and forward error correction (fec) rate in a co-ordinated fashion to satisfy qos requirements. to analyze its performance, we have simulated the adaptive algorithm with varying traffic flows and channel conditions. the traffic flow is based on a variable bit rate (vbr) source model that represents motion picture expert group (mpeg) traffic fluctuations while the dbs channel model is based on a two-state additive white gaussian noise (awgn) channel. for measures of performance, the simulator quantifies throughput, frame loss due to congestion during transmission as well as qos variations due to channel (fec) and source (mpeg compression and data transmission) rate changes. to show the advantage of the adaptive fec mechanism, we also present the performance results when fixed fec rates are employed. 
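a minimal sketch of coordinated source/fec rate selection of the general kind described here; the loss thresholds, code rates, and link budget are invented, not the aram algorithm's actual values:

```python
def choose_rates(estimated_loss, link_rate_kbps=2000.0):
    """Pick an fec code rate from the estimated channel loss and scale the
    source (e.g. mpeg) rate so that the coded stream fits the link.
    Thresholds and rates are illustrative assumptions only."""
    # stronger coding (lower code rate) for worse channel conditions
    if estimated_loss < 0.01:
        code_rate = 7 / 8
    elif estimated_loss < 0.05:
        code_rate = 3 / 4
    else:
        code_rate = 1 / 2
    source_rate = link_rate_kbps * code_rate   # channel capacity left for payload
    return code_rate, source_rate

if __name__ == "__main__":
    for loss in (0.001, 0.03, 0.2):
        cr, sr = choose_rates(loss)
        print(f"loss={loss:<6} fec code rate={cr:.3f} source rate={sr:.0f} kbps")
```

the point of coordinating the two rates, as the abstract notes, is that lowering the code rate without lowering the source rate would overflow the link, while lowering the source rate alone wastes protection.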
the results indicate significant throughput and/or quality gains are possible when the fec/source pairs are adjusted properly in co-ordination with source rate changes. fatih alagöz david walters amina alrustamani branimir vojcic raymond pickholtz efficient simulation of multiprogramming w. p. dawkins v. debbad j. r. jump j. b. sinclair developing an architecture validation suite: application to the powerpc architecture laurent fournier anatoly koyfman moshe levinger aitpm: a strategy for integrating ip with atm this paper describes research on new methods and architectures that enable the synergistic combination of ip and atm technologies. we have designed a highly scalable gigabit ip router based on an atm core and a set of tightly coupled general-purpose processors. this aitpm (pronounced "ip on atm" or, if you prefer, "ip-attem") architecture provides flexibility in congestion control, routing, resource management, and packet scheduling.the aitpm architecture is designed to allow experimentation with, and fine tuning of, the protocols and algorithms that are expected to form the core of the next generation ip in the context of a gigabit environment. the underlying multi-cpu embedded system will ensure that there are enough cpu and memory cycles to perform all ip packet processing at gigabit rates. we believe that the aitpm architecture will not only lead to a scalable high-performance gigabit ip router technology, but will also demonstrate that ip and atm technologies can be mutually supportive. guru parulkar douglas c. schmidt jonathan s. turner delay analysis of a window tree conflict resolution algorithm in a local area network environment expressions are found for the throughput and delay performance of a tree conflict resolution algorithm that is used in a local area network with carrier sensing (and possibly also collision detection). we assume that massey's constant size window algorithm is used to control access to the channel, and that the resulting conflicts (if any) are resolved using a capetanakis-like preorder traversal tree algorithm with d-ary splitting. we develop and solve functional equations for various performance metrics of the system and apply the "moving server" technique to calculate the main component of the delay. our results compare very favorably with those for csma protocols, which are commonly used in local area networks that support sensing. george c. polyzos mart l. molle a dynamic network architecture network software is a critical component of any distributed system. because of its complexity, network software is commonly layered into a hierarchy of protocols, or more generally, into a protocol graph. typical protocol graphs--- including those standardized in the iso and tcp/ip network architectures--- share three important properties; the protocol graph is simple, the nodes of the graph (protocols) encapsulate complex functionality, and the topology of the graph is relatively static. this paper describes a new way to organize network software that differs from conventional architectures in all three of these properties. in our approach, the protocol graph is complex, individual protocols encapsulate a single function, and the topology of the graph is dynamic. the main contribution of this paper is to describe the ideas behind our new architecture, illustrate the advantages of using the architecture, and demonstrate that the architecture results in efficient network software. sean w. o'malley larry l. 
peterson switching strategies in a class of packet switching networks this paper investigates some methods for improving the performance of single stage shuffle exchange networks (sens) and multistage interconnection networks (mins). the three new switching strategies proposed use extra buffers to enhance performance. approximate analysis and simulation results indicate significant improvement in performance for both sens and mins. an intuitive method for determining the applicability of the approximate analysis is discussed and some performance measures, which should be useful in evaluating the performance of networks are defined. manoj kumar daniel m. dias j. r. jump the wireless net dennis fowler a multiple stream microprocessor prototype system: amp-1 a general-purpose multiple-stream processor with shared memory and a single time-multiplexed synchronous bus has been implemented. the amp-1 system uses eight standard microprocessors and 64k bytes of memory. the design is highly efficient in the use of processor, bus, and memory resources. preliminary performance measurements agree closely with an analytic memory access conflict model and show extremely low conflict-based performance degradation. heavy interleaving of the memory and effective multitasking of a job can yield significant performance speedups. considerations for future implementations are presented. edward s. davidson multi-terminal nets do change conventional wire length distribution models conventional models for estimating wire lengths in computer chips use rent's rule to estimate the number of terminals between sets of gates. the number of interconnections then follows by taking into account that most nets are point-to-point connections. in this paper, we introduce a model for multi-terminal nets and we show that such nets have a fundamentally different influence on the wire length estimations than point-to-point nets. the multi-terminal net model is then used to estimate the wire length distribution in two cases: (i) the distribution of source-sink pairs for applications of delay estimation and (ii) the distribution of steiner tree lengths for applications related to routing resource estimation. the effects of including multi-terminal nets in the estimations are highlighted. experiments show that the new estimated wire length distributions are close to the measured ones. dirk stoobandt adding a vector unit to a superscalar processor francisca quintana jesus corbal roger espasa mateo valero the totem multiple-ring ordering and topology maintenance protocol the totem multiple-ring protocol provides reliable totally ordered delivery of messages across multiple local-area networks interconnected by gateways. this consistent message order is maintained in the presence of network partitioning and remerging, and of processor failure and recovery. the protocol provides accurate topology change information as part of the global total order of messages. it addresses the issue of scalability and achieves a latency that increases logarithmically with system size by exploiting process group locality and selective forwarding of messages through the gateways. pseudocode for the protocol and an evaluation of its performance are given. ---authors' abstract d. a. agarwal l. e. moser p. m. melliar-smith r. k.
budhia a method of trading diameter for reduced degree to construct low cost interconnection networks aaron harwood hong shen rdmar: a bandwidth-efficient routing protocol for mobile ad hoc networks george aggelou rahim tafazolli fair scheduling in wireless packet networks songwu lu vaduvur bharghavan r. srikant performance evaluation of reduced bandwidth multistage interconnection networks this paper presents and evaluates a class of buffered interconnection networks which provide performance and cost levels intermediate to a bus and a delta network. these networks, referred to as hybrid networks, are formed by beginning with a delta network and substituting buses for the final stages of the network. the choice of the number of stages replaced determines the bandwidth of the network. the reduction of network bandwidth is accompanied by a corresponding reduction in network cost. hybrid networks provide the system architect with a cost-effective solution to design problems in which the required interconnection bandwidth is greater than that of a bus but less than that of a full delta network. the performance of hybrid networks is investigated by developing a numerical model and by using simulation. two features of the model, buffers of arbitrary length at network switches and resource service times greater than interstage transfer delays, have not been included in previous analytical models of delta networks. results show that in some cases, the expense of a multistage network is not required in order to maintain the maximum level of system performance. d. t. harper j. r. jump performance of various computers using standard linear equations software jack j. dongarra an efficient multicast protocol for pcs networks in a personal communication services (pcs) network, mobile hosts communicate with other mobile hosts through base stations on a wired (static) network. the mobile hosts connect to different base stations through wireless links and the base stations to which mobile hosts are connected change depending on the current location of the mobile hosts. in this environment, the problem of efficiently delivering a multicast message from one mobile host to a group of other mobile hosts becomes challenging. in this paper, we present a multicast protocol that delivers multicast messages from a mobile host to a group of other mobile hosts without flooding the wired network. the multicast protocol is built on top of a user location strategy that should follow one of the three models of user location described in the paper. the basic multicast protocol proposed guarantees exactly-once message delivery to all mobile hosts in the multicast group and also ensures that multicast messages are delivered in fifo order from the point of view of the base station that originates the multicast message (referred to as bs-fifo). more importantly, an extension of the basic protocol is provided that, unlike earlier work, delivers multicast messages in fifo order from the point of view of the mobile host that initiates the multicast message (referred to as mh-fifo). the modifications to be made to the multicast protocol to accommodate each of the three models of user location is also described. vanitha aravamudhan karunaharan ratnam sampath rangarajan managing transient internetwork links in the xerox internet siranush radicati a novel single instruction computer architecture p. a. 
laplante "session swapping": a new approach for optimal bandwidth sharing of ring circuit switched channels reuven cohen the dragon processor russell r. atkinson edward m. mccreight configurable flow control mechanisms for fault-tolerant routing fault-tolerant routing protocols in modern interconnection networks rely heavily on the network flow control mechanisms used. optimistic flow control mechanisms such as wormhole routing (wr) realize very good performance, but are prone to deadlock in the presence of faults. conservative flow control mechanisms such as pipelined circuit switching (pcs) insures existence of a path to the destination prior to message transmission, but incurs increased overhead. existing fault-tolerant routing protocols are designed with one or the other, and must accommodate their associated constraints. this paper proposes the use of configurable flow control mechanisms. routing protocols can then be designed such that in the vicinity of faults, protocols use a more conservative flow control mechanism, while the majority of messages that traverse fault-free portions of the network utilize a wr like flow control to maximize performance. such protocols are referred to as two-phase protocols, where routing decisions are provided some control over the operation of the virtual channels. this ability provides new avenues for optimizing message passing performance in the presence of faults. a fully adaptive two-phase protocol is proposed and compared via simulation to those based on wr and pcs. the architecture of a network router supporting configurable flow control is described, and the paper concludes with avenues for future research. binh vien dao jose duato sudhakar yalamanchili adapting to network and client variability via on-demand dynamic distillation the explosive growth of the internet and the proliferation of smart cellular phones and handheld wireless devices is widening an already large gap between internet clients. clients vary in their hardware resources, software sophistication, and quality of connectivity, yet server support for client variation ranges from relatively poor to none at all. in this paper we introduce some design principles that we believe are fundamental to providing "meaningful" internet access for the entire range of clients. in particular, we show how to perform on-demand datatype-specific lossy compression on semantically typed data, tailoring content to the specific constraints of the client. we instantiate our design principles in a proxy architecture that further exploits typed data to enable application-level management of scarce network resources. our proxy architecture generalizes previous work addressing all three aspects of client variation by applying well-understood techniques in a novel way, resulting in quantitatively better end-to-end performance, higher quality display output, and new capabilities for low-end clients. armando fox steven d. gribble eric a. brewer elan amir achieving the best performance on superscalar processors in this paper the operation of a superscalar processor is studied. an analytical model is developed for computing the throughput and the speedup of the superscalar processors. the throughput is computed as a function of the reoccurrence period of the instructions in the program. 
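an aside for illustration only (not part of the paper's model, and it assumes each pipe accepts at most one instruction per cycle): if a fraction q of a program's instructions must use one particular pipe, then executing n instructions keeps that pipe busy for at least q*n cycles, so the sustained throughput can be at most 1/q instructions per cycle no matter how many pipes the processor has or how long each pipe is. this is one simple way to see why the pipe-reference probabilities can matter more than pipe length, as the results summarized next indicate.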
it is shown that, when the number of the instructions is sufficiently large, the probability of referring to the different pipes and the number of pipes are more important than the length of the pipes. also, the results show that the maximum throughput of a p-issue superscalar processor is equal to p instructions per cycle. this upper bound is reachable if and only if some conditions on the reoccurrence period of the instructions in the program are fulfilled. at the end of the paper some remarks related to the structural design and the number of pipes are offered. h. s. shahhoseini m. naderi s. nemati the slide mechanism with applications in dynamic networks this paper presents a simple and efficient building block, called slide, for constructing communication protocols in dynamic networks whose topology frequently changes. we employ slide to derive (1) an end-to-end communication protocol with optimal amortized message complexity, and (2) a general method to efficiently and systematically combine dynamic and static algorithms. (dynamic algorithms are designed for dynamic networks, and static algorithms work in networks with stable topology.) the new end-to-end communication protocol has amortized message communication complexity o(n) (assuming that the sender is allowed to gather enough data items before transmitting them to the receiver), where n is the total number of nodes in the network (the previous best bound was o(m), where m is the total number of links in the network). this protocol also has bit communication complexity o(nd), where d is the data item size in bits (assuming data items are large enough; i.e., for d = Ω(nm log n)). in addition we give, as a byproduct, an end-to-end communication protocol using o(n^2 m) messages per data item, which is considerably simpler than other protocols known to us (the best known end-to-end protocol has message complexity o(nm) [ag91]). the protocols above combine in an interesting way several ideas: the information dispersal algorithm of rabin [rab89], the majority insight of [afwz88, aaf+], and the slide protocol. the second application of slide develops a systematic mechanism to combine a dynamic algorithm with a static algorithm for the same problem, such that the combined algorithm automatically adjusts its communication complexity to the network conditions. that is, the combined algorithm solves the problem in a dynamic network, and if the network stabilizes for a long enough period of time then the algorithm's communication complexity matches that of the static algorithm. this approach was first introduced in [am88] in the context of topology update algorithms. yehuda afek eli gafni adi rosen performance and stability of communication networks via robust exponential bounds opher yaron moshe sidi laboratory for emulation and study of integrated and coordinated media communication in future telecommunications networks, understanding the issues of user-network control, customer premise equipment (cpe) technologies, services and user applications is as important as the classical network problems of channel structure, switching, and transmission. this paper discusses a bell communications research facility, the integrated media architecture laboratory (imal), designed to flexibly emulate a wide range of current and future network and cpe environments with a focus on multiple media communications.
imal combines off-the-shelf technologies to create an easily clonable emulation environment for studying, planning, demonstrating, and checking the feasibility of integrated media communications. the imal project has assembled workstations which feature speech-synthesis/sampled-audio/telephony capabilities, local 1 mip computation capacity, and a high-resolution color display integrating text/graphics/image/video under an expanded x window display management system. (x windows is an emerging windowing standard to provide high performance device-independent graphics.) the workstations may be augmented as needed by local image digitizers, video cameras, and color image printers producing paper and viewgraph hardcopies. also, the workstations are interconnected with switches permitting access to one another as well as shared databases, temporary storage, intelligence, and information processing/conversion resources. communications services are implemented under a distributed, real-time service primitive control scheme. this multiple-media service primitives scheme employs a threaded/dataflow-type architecture to support user-defined, network-defined, and vendor-defined services while including a wealth of flexible features for the study of network architecture, protocol, network management, and billing functions. l. f. ludwig d. f. dunn notable computer networks computer networks are becoming more numerous and more diverse. collectively, they constitute a worldwide metanetwork. john s. quarterman josiah c. hoskins tier automation representation of communication protocols the tier automation is presented as a model for communication protocols. several advantages of the model are cited, among which are universality of representation and manipulability. a scheme for using the tier automation to model specific distributed architectures and their protocols is described. the scheme is then used on a sample protocol, and a transmission session with the sample protocol is exhibited. z bavel j grzymala-busse y hsia r mancisidor-landa research alerts marisa campbell analysis of polling protocols for fieldbus networks prasad raja guevara noubir luis ruiz jean hernandez jean-dominique decotignie forward acknowledgement: refining tcp congestion control we have developed a forward acknowledgment (fack) congestion control algorithm which addresses many of the performance problems recently observed in the internet. the fack algorithm is based on first principles of congestion control and is designed to be used with the proposed tcp sack option. by decoupling congestion control from other algorithms such as data recovery, it attains more precise control over the data flow in the network. we introduce two additional algorithms to improve the behavior in specific situations. through simulations we compare fack to both reno and reno with sack. finally, we consider the potential performance and impact of fack in the internet. matthew mathis jamshid mahdavi design of a high-performance atm firewall jun xu mukesh singhal internet mobility 4×4 mobile ip protocols allow mobile hosts to send and receive packets addressed with their home network ip address, regardless of the ip address of their current point of attachment in the internet. while some recent work in mobile ip focuses on a couple of specific routing optimizations for sending packets to and from mobile hosts [joh96] [mon96], we show that a variety of different optimizations are appropriate in different circumstances.
the best choice, which may vary on a connection-by-connection or even a packet-by-packet basis, depends on three factors: the characteristics the protocol should optimize, the permissiveness of the networks over which the packets travel and the level of mobile-awareness of the hosts with which the mobile host corresponds. of the sixteen possible routing choices that we identify, we describe the seven that are most useful and discuss their benefits and limitations. these optimizations range from the most costly, which provides completely transparent mobility in all networks, to the most economical, which does not attempt to conceal location information. in particular, hosts should retain the option to communicate conventionally without using mobile ip whenever appropriate. further, we show that all optimizations can be described using a 4×4 grid of packet characteristics. this makes it easier for a mobile host, through a series of tests, to determine which of the currently available optimizations is the best to use for any given correspondent host. stuart cheshire mary baker delay analysis for forward signaling channels in wireless cellular networks we consider connection-oriented wireless cellular networks. such second generation systems are circuit-switched digital networks which employ dedicated radio channels for the transmission of signaling information. a forward signaling channel is a common signaling channel assigned to carry the multiplexed stream of paging and channel-allocation packets from a base station to the mobile stations. similarly, for atm wireless networks, paging and virtual-circuit-allocation packets are multiplexed across the forward signaling channels as part of the virtual-circuit set-up phase. the delay levels experienced by paging and channel-allocation packets are critical factors in determining the efficient utilization of the limited radio channel capacity. a multiplexing scheme operating in a "slotted mode" can lead to reduced power consumption at the handsets, but may in turn induce an increase in packet delays. in this paper, focusing on forward signaling channels, we present schemes for multiplexing paging and channel-allocation packets across these channels, based on channelization plans, access priority assignments and paging group arrangements. for such multiplexing schemes, we develop analytical methods for the calculation of the delay characteristics exhibited by paging and channel-allocation packets. the resulting models and formulas provide for the design and analysis of forward signaling channels for wireless network systems. izhak rubin cheon won choi using prediction to accelerate coherence protocols most large shared-memory multiprocessors use directory protocols to keep per-processor caches coherent. some memory references in such systems, however, suffer long latencies for misses to remotely-cached blocks. to ameliorate this latency, researchers have augmented standard coherence protocols with optimizations for specific sharing patterns, such as read-modify-write, producer-consumer, and migratory sharing. this paper seeks to replace these directed solutions with general prediction logic that monitors coherence activity and triggers appropriate coherence actions. this paper takes the first step toward using general prediction to accelerate coherence protocols by developing and evaluating the cosmos coherence message predictor.
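as a rough illustration of what a two-level, per-cache-block message predictor of this kind might look like, the sketch below (python) uses a short per-block history of recent coherence messages to index a per-block pattern table that predicts the next message. this is not the cosmos design or the authors' code; the table organization, the (source, type) message encoding and the last-outcome update rule are assumptions made here purely for illustration.

    # illustrative sketch only -- not the cosmos implementation.
    from collections import defaultdict

    class MessagePredictor:
        """per cache block, a short history of recent coherence messages
        indexes a pattern table whose entry predicts the next message,
        represented here as a (source node, message type) pair."""
        def __init__(self, history_len=2):
            self.history_len = history_len
            self.history = defaultdict(tuple)   # block address -> recent messages
            self.patterns = defaultdict(dict)   # block address -> {history: prediction}

        def predict(self, block):
            return self.patterns[block].get(self.history[block])  # None if unseen

        def update(self, block, message):
            """record the message that actually arrived, e.g. (3, 'read_request')."""
            hist = self.history[block]
            self.patterns[block][hist] = message               # last-outcome rule
            self.history[block] = (hist + (message,))[-self.history_len:]

a real predictor would bound these tables and typically add hysteresis (for example saturating counters) before acting on a prediction.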
cosmos predicts the source and type of the next coherence message for a cache block using logic that is an extension of yeh and patt's two-level pap branch predictor. for five scientific applications running on 16 processors, cosmos has prediction accuracies of 62% to 93%. cosmos' high prediction accuracy is a result of predictable coherence message signatures that arise from stable sharing patterns of cache blocks. shubhendu s. mukherjee mark d. hill analytical comparison of local and end-to-end error recovery in reactive routing protocols for mobile ad hoc networks in this paper we investigate the effect of local error recovery vs. end-to-end error recovery in reactive protocols. for this purpose, we analyze and compare the performance of two protocols: the dynamic source routing protocol (dsr [2]), which does end-to-end error recovery when a route fails, and the witness aided routing protocol (war [1]), which uses local correction mechanisms to recover from route failures. we show that the performance of dsr degrades extremely fast as the route length increases (that is, dsr is not scalable), while war maintains both low latency and low resource consumption regardless of the route length. ionut d. aron sandeep k. s. gupta a codesign experiment in acoustic echo cancellation: gmdfα continuous advances in processor and asic technologies enable the integration of more and more complex embedded systems. embedded systems have become commonplace in recent years. since their implementations generally require the use of heterogeneous resources (e.g., processor cores, asics) in one system with hard design constraints, the importance of hardware/software codesign methodologies increases steadily. hw/sw codesign approaches consist generally of hw/sw partitioning and scheduling, constrained code generation, and hardware and interface synthesis. this article presents the codesign of an industrial experiment in acoustic echo cancellation (the gmdfα algorithm) and emphasizes the partitioning and communication synthesis steps. this experiment brings to light interesting problems such as data and program distribution between system memories and the modeling of communications in the partitioning process. l. freund m. israel f. rousseau j. m. berge m. auguin c. belleudy g. gogniat modeling mobile ip in mobile unity with recent advances in wireless communication technology, mobile computing is an increasingly important area of research. a mobile system is one where independently executing components may migrate through some space during the course of the computation, and where the pattern of connectivity among the components changes as they move in and out of proximity. mobile unity is a notation and proof logic for specifying and reasoning about mobile systems. in this article it is argued that mobile unity contributes to the modular development of system specifications because of the declarative fashion in which coordination among components is specified. the packet-forwarding mechanism at the core of the mobile ip protocol for routing to mobile hosts is taken as an example. a mobile unity model of packet forwarding and the mobile system in which it must operate is developed. proofs of correctness properties, including important real-time properties, are outlined, and the role of formal verification in the development of protocols such as mobile ip is discussed. peter j.
mccann gruia-catalin roman hierarchically-organized, multihop mobile wireless networks for quality-of- service support mmwn is a modular system of adaptive link- and network-layer algorithms that provides a foundation on which to build mechanisms for quality-of-service provision in large, multihop mobile wireless networks. such networks are a practical means for creating a communications infrastructure where none yet exists or where the previously existing infrastructure has been severely damaged. these networks provide communications for such diverse purposes as tactical maneuvering and strategic planning on the battlefield, emergency relief in an area afflicted by a natural disaster, and field studies conducted by a team of scientists in a remote location. in this paper, we describe three key components of the mmwn system: the clustering procedures for defining a virtual, hierarchical control structure superimposed on a large network of mobile switches and endpoints; the location management procedures for determining the current locations of mobile endpoints relative to the hierarchical control structure; and the virtual circuit management procedures for setting up and repairing virtual circuits as switches and endpoints move. we also provide simulation results that illustrate the robustness of each of these components with respect to a broad spectrum of transmission ranges and relative mobility of switches and endpoints. ram ramanathan martha steenstrup dasp: a general-purpose mimd parallel computer using distributed associative processing this paper presents a general purpose mimd (multiple instruction stream multiple data stream) loosely-coupled parallel computer called dasp (distributed associative processor). the dasp organization partitions the communication and application functions. the communication functions are performed by custom-made communication handlers called network communication modules, while application functions are performed by any general purpose processor suitable for the application. the communication subsystem of dasp takes advantage of the properties of loosely-coupled mimd parallel computers: the very short inter-processor distances and the locality of task reference. by pipelining the time slices of the bus hierarchically with the cito (content induced transaction overlap) protocol, dasp provides virtual full-connectivity to the application processors without physical full connections; thus, its architecture exhibits a very high degree of extensibility and modularity. analytical and simulation results have validated the dasp approach. a prototype has been constructed and several algorithms have been successfully implemented on the prototype. y. k. park c. walter h. yee t. roden s. berkovich a measurement-based admission control algorithm for integrated service packet networks sugih jamin peter b. danzig scott j. shenker lixia zhang a comparative study of distributed resource sharing on multiprocessors in this paper, we have studied the interconnection of resources to multiprocessors and the distributed scheduling of these resources. three different classes of interconnection networks have been investigated; namely, single shared bus, multiple shared buses, and networks with logarithmic delays such as the cube and omega networks. for a given network, the resource mapping problem entails the search of one (or more) of the free resources which can be connected to each requesting processor. 
to prevent the bottleneck of sequential scheduling, the type(s) and number(s) of resources desired by a processor are given to the network and it is the responsibility of the network to find the necessary resources and connect them to the processor. the addressing mechanism is, thus, distributed in the network. this is a generalization of conventional interconnection networks with routing tags in which all the resources are of different types. benjamin w. wah secure group communications using key graphs many emerging applications (e.g., teleconference, real-time information services, pay per view, distributed interactive simulation, and collaborative work) are based upon a group communications model, i.e., they require packet delivery from one or more authorized senders to a very large number of authorized receivers. as a result, securing group communications (i.e., providing confidentiality, integrity, and authenticity of messages delivered between group members) will become a critical networking issue.in this paper, we present a novel solution to the scalability problem of group/multicast key management. we formalize the notion of a secure group as a triple (_u,k,r_) where _u_ denotes a set of users, _k_ a set of keys held by the users, and _r_ a user-key relation. we then introduce key graphs to specify secure groups. for a special class of key graphs, we present three strategies for securely distributing rekey messages after a join/leave, and specify protocols for joining and leaving a secure group. the rekeying strategies and join/leave protocols are implemented in a prototype group key server we have built. we present measurement results from experiments and discuss performance comparisons. we show that our group key management service, using any of the three rekeying strategies, is scalable to large groups with frequent joins and leaves. in particular, the average measured processing time per join/leave increases linearly with the logarithm of group size. chung kei wong mohamed gouda simon s. lam processing sets on a simd machine alberto baudino giancarlo colla giuseppe a. marino giancarlo suci scalable internet resource discovery: research problems and approaches c. mic bowman peter b. danzig udi manber michael f. schwartz mobile power management for wireless communication networks for fixed quality-of-service constraints and varying channel interference, how should a mobile node in a wireless network adjust its transmitter power so that energy consumption is minimized? several transmission schemes are considered, and optimal solutions are obtained for channels with stationary, extraneous interference. a simple dynamic power management algorithm based on these solutions is developed. the algorithm is tested by a series of simulations, including the extraneous-interference case and the more general case where multiple, mutually interfering transmitters operate in a therefore highly responsive interference environment. power management is compared with conventional power control for models based on fdma/tdma and cdma cellular networks. results show improved network capacity and stability in addition to substantially improved battery life at the mobile terminals. john m. rulnick nicholas bambos faster ip lookups using controlled prefix expansion v. 
srinivasan george varghese an efficient communication architecture for commodity supercomputers stephan brauss martin frey martin heimlicher andreas huber martin lienhard patrick muller martin näf josef nemecek roland paul anton gunzinger network and nodal architectures for the internetworking between frame relaying services wai sum lai mobile and multicast ip services in pacs: system architecture, prototype, and performance yongguang zhang bo ryu energy-effective issue logic _the issue logic of a dynamically-scheduled superscalar processor is a complex mechanism devoted to start the execution of multiple instructions every cycle. due to its complexity, it is responsible for a significant percentage of the energy consumed by a microprocessor. the energy consumption of the issue logic depends on several architectural parameters, the instruction issue queue size being one of the most important. in this paper we present a technique to reduce the energy consumption of the issue logic of a high-performance superscalar processor. the proposed technique is based on the observation that the conventional issue logic wastes a significant amount of energy for useless activity. in particular, the wake-up of empty entries and operands that are ready represents an important source of energy waste. besides, we propose a mechanism to dynamically reduce the effective size of the instruction queue. we show that on average the effective instruction queue size can be reduced by a factor of 26% with minimal impact on performance. this reduction together with the energy saved for empty and ready entries result in about 90.7% reduction in the energy consumed by the wake-up logic, which represents 14.9% of the total energy of the assumed processor._ daniele folegnani antonio gonzalez a measurement-based admission control algorithm for integrated services packet networks many designs for integrated service networks offer a bounded delay packet delivery service to support real-time applications. to provide bounded delay service, networks must use admission control to regulate their load. previous work on admission control mainly focused on algorithms that compute the worst case theoretical queueing delay to guarantee an absolute delay bound for all packets. in this paper we describe a _measurement- based_ admission control algorithm for _predictive_ service, which allows occasional delay violations. we have tested our algorithm through simulations on a wide variety of network topologies and driven with various source models, including some that exhibit long-range dependence, both in themselves and in their aggregation. our simulation results suggest that, at least for the scenarios studied here, the measurement-based approach combined with the relaxed service commitment of predictive service enables us to achieve a high level of network utilization while still reliably meeting the delay bound. sugih jamin peter b. danzig scott shenker lixia zhang guest editorial henry m. levy firefly: a multiprocessor workstation charles p. thacker lawrence c. stewart the burroughs integrated adaptive routing system (biastm) s gruchevsky d piscitello project exodus: experimental mobile multimedia deployment in atm networks this paper reports on european commission sponsored research within the exodus project. the project carries out research in the context of the evolution towards umts and performs personal and terminal multimedia mobility experiments using fixed and wide-band radio access in an international atm network. 
after introducing the exodus platform and services, the paper presents an in-based functional model which is suitable for support of mobile multimedia services in atm networks. interworking and mobility management issues are discussed, and information flows for call handling are presented. the paper includes a presentation of the mobility management and enhanced inap protocols which have been developed by the exodus project. j. c. francis d. v. polymeros g. l. lyberopoulos v. vande keere a. elberse p. rogl l. vezzoli the effects of asymmetry on tcp performance in this paper, we study the effects of network asymmetry on end-to-end tcp performance and suggest techniques to improve it. the networks investigated in this study include a wireless cable modem network and a packet radio network, both of which can form an important part of a mobile ad hoc network. in recent literature (e.g., [18]), asymmetry has been considered in terms of a mismatch in bandwidths in the two directions of a data transfer. we generalize this notion of bandwidth asymmetry to other aspects of asymmetry, such as latency and media-access, and packet error rate, which are common in wide-area wireless networks. using a combination of experiments on real networks and simulation, we analyze tcp performance in such networks where the throughput achieved is not solely a function of the link and traffic characteristics in the direction of data transfer (the forward direction), but depends significantly on the reverse direction as well. we focus on bandwidth and latency asymmetries, and propose and evaluate several techniques to improve end-to-end performance. these include techniques to decrease the rate of acknowledgments on the constrained reverse channel (ack congestion control and ack filtering), techniques to reduce source burstiness when acknowledgments are infrequent (tcp sender adaptation), and algorithms at the reverse bottleneck router to schedule data and acks differently from fifo (acks-first scheduling). hari balakrishnan randy h. katz venkata n. padmanbhan wait-freedom vs. t-resiliency and the robustness of wait-free hierarchies (extended abstract) tushar chandra vassos hadzilacos prasad jayanti sam toueg the (un)revised osi reference model in 1988, sc21, the iso committee responsible for the open systems interconnection (osi) reference model, determined that it was time to undertake revising iso 7498-1, the basic osi reference model. since the model had been published in 1984, this was in accordance with iso practice of reviewing and revising standards every five years, and there was good reason for considering the task. over the intervening five years, the groups developing the osi protocols had raised many questions about the architecture and these had been answered and documented in the approved commentaries [1994]. many of these commentaries contained information that could be usefully incorporated into the reference model itself. in addition, the addendum describing connectionless mode, i.e. datagrams, had been completed several years before and needed to be incorporated. there was also considerable demand for interworking connection- mode and connectionless mode communication, something not supported by the reference model or any architecture. also, when the original version of the reference model was frozen about 1983 some aspects of osi such as the upper layers were not well understood and were only described in the most cursory manner. 
and while connection-mode and connectionless mode had been brought within osi, there was no indication as to how broadcast and multicast were to be handled. thus, the revision might be able to provide a more comprehensive description of these areas.this paper describes how the revision was carried out, describes the changes and additions that were made, considers the effect and contribution this revision has made to our understanding, and describes the outstanding issues that were not addressed by this revision. john day the fifth generation computer systems projects (invited session) harold stone eric manning harriet rigas philip treleaven spire: streaming processing with instructions release element eligiusz wajda optimizing term rewriting using discrimination nets with specialization kazuhiro ogata shigenori ioroi kokichi futatsugi mac protocol and traffic scheduling for wireless atm networks the medium access control (mac) protocol defined in the wireless atm network demonstrator (wand) system being developed within the project magic wand is presented. magic wand is investigating extensions of atm technology to cover wireless customer premises networks, in the framework of the advanced communications technologies and services (acts) programme, funded by the european union. the mac protocol, known as mascara, uses a dynamic tdma scheme, which combines reservation- and contention-based access methods to provide multiple access efficiency and quality of service guarantees to wireless atm terminal connections sharing a common radio channel. the paper focuses on the description of prados, a delay-oriented traffic scheduling algorithm, which aims at satisfying the requirements of the various traffic classes defined by the atm architecture. simulation results are presented to assess the performance of the proposed algorithm in scheduling transmission of variable bit rate connections. nikos passas lazaros merakos dimitris skyrianoglou frederic bauchot gerard marmigere stephane decrauzat performance modeling of multiprocessor implementations of protocols mats björkman per gunningberg data flow graph partitioning to reduce communication cost this paper presents a cost-effective scheme for partitioning large data flow graphs. standard data flow machine architectures are assumed in this work. the objective is to reduce the overhead due to token transfers through the communication network of the machine. when this scheme is employed on large graphs, the load distribution on the rings of the data flow machine is also improved. a canonical form of a data flow graph is introduced to establish the relationship between the communication overhead and the size reduction of the partition cut-set. general lower estimates on the overhead are derived in terms of processing and transmission delay parameters of the machine. the method uses heuristics and an evaluation function to guide the partition algorithm. some implications of the proposed method on the organization of the data flow machines are discussed. c. koutsougeras c. a. papachristou r. r. vemuri z-net a microprocessor based local network z-net, a microprocessor based local network recently announced by zilog, inc., is designed to provide a fourth choice which significantly reduces the effort involved in utilizing a local network. z-net is a collection of hardware and software layers that provide the flexibility needed to customize the network to a particular application without rewriting all of the software. 
the z-net architecture is "ethernet"-like in that it provides a packet switched, single channel, contention based, multiple access network with fully distributed control. the advantages of this type of architecture have been proven in many research environments. judy estrin editor's introduction anita k. jones predicting timing behavior in architectural design exploration of real-time embedded systems rajeshkumar s. sambandam xiaobo hu total acknowledgements (extended abstract): a robust feedback mechanism for end-to-end congestion control j. waldby u. madhow t. v. lakshman optimally adaptive, minimum-distance, circuit-switched routing in hypercubes in circuit-switched routing, the path between a source and its destination is established by incrementally reserving all required links before the data transmission can begin. if the routing algorithm is not carefully designed, deadlocks can occur in reserving these links. deadlock- free algorithms based on dimension-ordered routing, such as the e-cube, exist. however, e-cube does not provide any flexibility in choosing a path from a source to its destination and can thus result in long latencies under heavy or uneven traffic. adaptive, minimum-distance routing algorithms, such as the turn model and the up preference algorithms, have previously been reported. in this article, we present a new class of adaptive, provably deadlock-free, minimum- distance routing algorithms. we prove that the algorithms developed here are optimally adaptive in the sense that any further flexibility in communication will result in deadlock. we show that the turn model is actually a member of our new class of algorithms that does not perform as well as other algorithms within the new class. it creates artificial hotspots in routing the traffic and allows fewer total paths. we present an analytical comparison of the flexibility and balance in routing provided by various algorithms and a comparison based on uniform and nonuniform traffic simulations. the extended up preference algorithm developed in this article is shown to have improved performance with respect to existing algorithms. the methodology and the algorithms developed here can be used to develop routing for other schemes such as wormhole routing, and for other recursively defined networks such as k-ary n-cubes. ausif mahmood donald j. lynch roger b. shaffer cpus unite! aaron weiss hardware resources: a generalizing view on computer architectures wolfgang matthes some critical considerations on the iso/osi rm from a network implementation point of view the paper outlines the conceptual and practical aspects encountered in the design and implementation of a heterogeneous computer network based on the standardization output. we discuss the feasability of the reference model and its related standards from a design, implementation and operational point of view focussing on the adopted solutions and their rationales. r. popescu-zeletin how convincing is your protocol? gilles brassard brisk: a portable and flexible distributed instrumentation system aleksandar bakic matt w. mutka diane t. rover elections in the presence of faults the news media often bombards the public with forecasts of election results. polls predict, sometimes years in advance; exit polls are more accurate, and unofficial tallies tend to be closer to the final results. if close elections are disputed, it may take the courts weeks to determine the actual outcome of an election. 
if the election is nearly unanimous, however, a few disputed votes can have no effect on the final results. the time at which the final results may be known with certainty thus depends upon the accuracy of the forecast (the number of disputed votes), and the closeness of the election. michael merritt halsim - a very fast sparc v9 behavioral model david barach jaspal kohli john slice marc spaulding rajeev bharadhwaj don hudson cliff neighbors nirmal saxena rolland crunk masking the overhead of protocol layering protocol layering has been advocated as a way of dealing with the complexity of computer communication. it has also been criticized for its performance overhead. in this paper, we present some insights into the design of protocols, and how these insights can be used to mask the overhead of layering, in a way similar to client caching in a file system. with our techniques, we achieve an order of magnitude improvement in end-to-end message latency in the horus communication framework. over an atm network, we are able to do a round-trip message exchange, of varying levels of semantics, in about 170 microseconds, using a protocol stack of four layers that were written in ml, a high-level functional language. robbert van renesse towards a taxonomy of computer architecture based on the machine data type view existing taxonomies of computer architecture lack the descriptive tools to deal with the large variety of existing principles, features, and mechanisms of the existing spectrum of single processor, multi processor, and multi computer architectures. consequently, they lack the discriminating power to be able to taxonomize computer architecture. in the paper, a new approach toward a complete taxonomy is presented. the key to the taxonomy is to start with the dichotomy of 'operational principle' and 'hardware structure' as the foundation of a computer architecture and describe the constituents of the operational principle in terms of 'machine data types' consisting of 'machine data objects', their representations, and the functions applicable on the objects. the resulting taxonomy provides a systematic approach to the design of innovative computer architectures. w. k. giloi computing on an anonymous network masafumi yamashita tiko kameda asymptotic resource consumption in multicast reservation styles the goal of network design is to meet the needs of resident applications in an efficient manner. adding real-time service and point-to-multipoint multicast routing to the internet's traditional point-to-point best effort service model will greatly increase the internet's efficiency in handling point-to-multipoint real-time applications. recently, the rsvp resource reservation protocol has introduced the concept of "reservation styles", which control how reservations are aggregated in multipoint-to-multipoint real-time applications. in this paper, which is an extension of [9], we analytically evaluate the efficiency gains offered by this new paradigm on three simple network topologies: linear, m-tree, and star. we compare the resource utilization of more traditional reservation approaches to the rsvp reservation styles in the asymptotic limit of large multipoint applications. we find that in several cases the efficiency improvements scale linearly in the number of hosts. danny j.
mitzel scott shenker markov analysis of the prma protocol for local wireless networks prma (packet reservation multiple access) is a reservation-aloha access protocol specifically designed for wireless microcellular networks that handle both real-time and non-real-time traffic. we present a thorough analysis of this protocol, considering real-time traffic only, based on a suitable markov model. the size of the model is such that it can be directly used for an exact quantitative analysis of the system. in particular, we are able to analyze the packet dropping process, by evaluating both average and distribution measures. the latter are particularly useful to characterize the degradation caused to real-time traffic (e.g., voice) by the loss of consecutive packets. besides, we also derive from the markov model a qualitative analysis of the system stability, based on the equilibrium point analysis (epa) technique. by this technique, we characterize the system stability and analyze the effect on it of several system parameters (e.g., load, permission probability). francesco lo presti vincenzo grassi an adaptive location management strategy for mobile ip subhashini rajagopalan b. r. badrinath asn.1 protocol specification for use with arbitrary encoding schemes duke tantiprasut john neil craig farrell uniform access to internet directory services as networks and internetworks of computers expand in size and scope, discovery and location of resources becomes a primary function of the networked computing environment. static tables describing network resources have been replaced by dynamic directory services, such as x.500 and the internet domain name system. these dynamic directory services provide more timely and accurate information about network resources than static tables. a wide variety of services address various components of the resource discovery and location problem. these services can be loosely classified as either low-level protocols or high-level services. low-level protocols, such as rarp and icmp, are simple delivery protocols and provide limited information; high-level services, such as the internet domain name system and x.500, use complex delivery protocols to answer complex queries. neither class of directory service is appropriate in all situations. low-level services are too restrictive in the type of queries and information they support, while high- level services may be too expensive for some low-function networks. d. comer r. e. droms revenue maximization in atm networks using the clp capability and buffer priority management sridhar ramesh catherine rosenberg anurag kumar information superhighway 2015 hospitable as the traditional ones. peter j. denning consistent overhead byte stuffing stuart cheshire mary baker design and implementation of a personal computer local network with the increase in the number of personal computers in offices and factories, a demand has been created to connect these independent units in a network and to share not only data but also expensive peripheral equipments. on the other hand, local area network technology has become an active area of research and development during the last few years. 
keiji satou yoshihiro nakamura sadao fukatsu nobuo watanabe takashi kimoto design and performance evaluation of an rra scheme for voice-data channel access in outdoor microcellular environments in pcs networks, the multiple access problem is characterized by spatially dispersed mobile source terminals sharing a radio channel connected to a fixed base station. in this paper, we design and evaluate a reservation random access (rra) scheme that multiplexes voice traffic at the talkspurt level to efficiently integrate voice and data traffic in outdoor microcellular environments. the scheme involves partitioning the time frame into two request intervals (voice and data) and an information interval. thus, any potential performance degradation caused by voice and data terminals competing for channel access is eliminated. we consider three random access algorithms for the transmission of voice request packets and one for the transmission of data request packets. we formulate an approximate markov model and present analytical results for the steady state voice packet dropping probability, mean voice access delay and voice throughput. simulations are used to investigate the steady state voice packet dropping distribution per talkspurt, and to illustrate preliminary voice-data integration considerations. allan c. cleary michael paterakis a novel feedback scheme to increase throughput in multiple access radio systems to enhance the throughput of a slotted aloha control channel in a radio communication system, we present and analyze a method for estimating the number of remote stations that are attempting to transmit to a central base station. each of the contending remote stations randomly chooses to transmit with probability p. our novel contribution is the use of information concerning the number of successful packet transmissions that arrive without retransmission (i.e., that are successfully received on their first transmission attempt) as a metric for accurately and robustly estimating the number of contending remote stations. this estimate is determined at the base station and then used to compute the optimal transmission probability p that is used as feedback to the remote stations for their use. the proposed estimation method is analyzed and shown to provide good steady-state performance for a variety of system models, including situations with noise and idling remote stations. the scheme is also shown to provide good tracking capabilities. richard o. lamaire arvind krishna bounds on the speedup and efficiency of partial synchronization in parallel processing systems in this paper, we derive bounds on the speedup and efficiency of applications that schedule tasks on a set of parallel processors. we assume that the application runs an algorithm that consists of n iterations and, before starting its (i+1)st iteration, a processor must wait for data (i.e., synchronize) calculated in the ith iteration by a subset of the other processors of the system. processing times and interconnections between iterations are modeled by random variables with possibly deterministic distributions. scientific applications consisting of iterations of recursive equations are examples of such applications that can be modeled within this formulation. we consider the efficiency of applications and show that, although efficiency decreases with an increase in the number of processors, it has a nonzero limit when the number of processors increases to infinity.
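for intuition, a small calculation that is not taken from the paper: if task times are independent and exponential with mean 1, and at every iteration a processor must wait only for a fixed set of k other processors, the expected iteration time is the expected maximum of k+1 such variables, namely the harmonic number h(k+1) = 1 + 1/2 + ... + 1/(k+1). this quantity does not grow with the total number of processors n, so the efficiency stays above roughly 1/h(k+1) > 0 as n increases; under full synchronization, by contrast, the wait is the maximum over all n processors, which grows like ln n, and the efficiency would tend to zero.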
we obtain a lower bound for the efficiency by solving an equation that depends on the distribution of task service times and the expected number of tasks needed to be synchronized. we also show that the lower bound is approached if the topology of the processor graph is "spread-out," a notion we define in the paper. c. s. chang r. nelson design and evaluation of a distributed asynchronous vlsi crossbar switch controller for a packet switched supercomputer network andrew j. dubois john rasure a model for naming, addressing and routing naming and addressing are areas in which there is still a need for clarification. many definitions for names, addresses, and routes have been proposed, but the exact relations among these concepts are obscure. a taxonomy of names, addresses, and routes is presented. first, we identify names and routes as the essential concepts of communication. then, addresses are introduced as an intermediate form that eases the process of mapping between names and routes; an original definition of an address is thus proposed. relations among names, addresses, and routes are explained with the concept of mapping. on this basis, a general model relating names, addresses, and routes is built and then applied recursively throughout a layered architecture, leading to a layered naming and addressing model which may play the same role for naming and addressing features that the osi reference model plays for the definition of services and protocols. finally, the model is particularized to a typical network architecture. the model may also be applied to non-osi layered systems; naming, addressing, and routing issues in any network architecture could be a particular instance of this layered model. bernard m. hauzeur performance of integrated services (voice/data) csma/cd networks we consider a voice/data integrated local area communication system. due to the high suitability of csma/cd protocols for data communication and the existence of real time voice delay constraints, we consider a hybrid tdm/csma/cd protocol. this model fundamentally differs from the very well documented voice/data integrated systems in point to point networks in which both voice and data users are assigned fixed duration time slots for transmission. the tdm/csma/cd integrated system performance is analysed and basic performance tradeoffs in the system design are manifested. i. chlamtac m. eisinger evaluation of tcp vegas: emulation and experiment this paper explores the claims that tcp vegas [2] both uses network bandwidth more efficiently and achieves higher network throughput than tcp reno [6]. it explores how link bandwidth, network buffer capacity, tcp receiver acknowledgment algorithm, and degree of network congestion affect the relative performance of vegas and reno. jong suk ahn peter b. danzig zhen liu limin yan flip-flop: a stack-oriented multiprocessing system peter grabienski crossover switch discovery for wireless atm lans the emergence of wireless local area networks (wlans) has brought about the possibility of mobile computing. in order to maintain connectivity to mobile hosts (mhs), a handover mechanism is needed as mhs migrate from one base station's (bs) wireless cell to another. current handover schemes mainly cater for connectionless wlans (e.g., mobile ip), which do not have the ability to support quality of service (qos) for continuous media traffic. hence, mobility for connection-oriented wlans (e.g., wireless atm) should be considered.
the problem faced in a connection-oriented wlan is the ability to provide a fast, efficient and continuous handover mechanism. mechanisms that can meet most of these requirements are the incremental and multicast based re-establishment handover schemes. in particular, the incremental re- establishment scheme relies on the presence of a "crossover switch" (cx) to establish the new partial circuits to the new bs. in this paper, five cx discovery schemes are proposed to compute and select the optimised new partial path such that both the set-up latency and network resource consumption associated with the handover are small. the proposed cx discovery schemes (loose select, prior path knowledge, prior path optimal, distributed hunt and backward tracking) are suitable for wireless atmlans employing either the centralised or distributed connection management approach with either distance-vector or link-state-like minimum-hop routing schemes. simulation results obtained from a trace-driven mobile network simulator on four different network topologies (random, star, tree and hierarchical redundancy) reveal that the prior path knowledge and distributed hunt discoveries outperform the other schemes in various aspects. finally, using the ibm paris gigabit network as an example, we show how cx discovery is incorporated with routing, connection management and qos. chai-keong toh an internodal protocol for packet switched data networks this paper describes a proprietary data transmission protocol developed for internodal trunks on the northern telecom sl-10 packet switched data network. the protocol has been implemented and tested in the new universal trunk processor. two separate market forces dictated the protocol requirements. the first was a demand for high speed trunks in the t1 (1.54 mb/s) and above range to cost effectively serve future networks. the second market force was the emergence of satellite data technology with its attendant high bandwidth and lower cost. a protocol with a large sequence number range and the ability to selectively retransmit more than one packet in error without impacting the flow of correctly received packets was required. none of the standard data protocols met these requirements. d. drynan d. baker performance from architecture: comparing a risc and a cisc with similar hardware organization dileep bhandarkar douglas w. clark a new method for analysing feedback-based protocols with applications to engineering web traffic over the internet d. p. heyman t. v. lakshman arnold l. neidhardt design and evaluation of multicast gfr for supporting tcp/ip over hybrid wired/wireless atm w. melody moh hua mei design of a lisp machine - flats design of a 10 mips lisp machine used for symbolic algebra is presented. besides incorporating the hardware mechanisms which greatly speed up primitive lisp operations, the machine is equipped with parallel hashing hardware for content addressed associative tabulation and a very fast multiplier for speeding up both arithmetic operations and fast hash address generation. e. goto t. soma n. inada t. ida m. idesawa k. hiraki m. suzuki k. shimizu b. philipov end-to-end internet packet dynamics vern paxson an architectural approach for integrated network and systems management today's enterprises are accepting networked systems as a fundamental part of their information technology strategy. 
the constant growth in quantity and quality of networked systems and the problems thereby arising concerning complexity, heterogeneity and diversity of components in a multi-vendor environment require a sophisticated management of resources. increasingly, the automation of such management is being demanded. in this paper we introduce an architecture for the integrated management of all resources in a networked system, i.e. application, system and network resources. the architecture uses domains as flexible and pragmatic means of grouping resources and of specifying management responsibility and authority boundaries. it maintains a clear distinction between management objectives and the resources being managed in order to provide an integrated view of the various tasks of management as well as an integrated and uniform view of the distributed and heterogeneous managed environment. the uniform management model on which the architecture is based is expressive enough to capture the full richness of management structures and policies both within enterprises and between them. it allows for recursive and generic structuring, which we consider the basis for management activity automation. as an example, we apply our architectural concepts to structure the management of a high speed multi-network (atm, dqdb, fddi). emphasis lies on automated quality of service management in the fddi management domain. raouf boutaba simon znaty efficient distributed recovery using message logging a. p. sistla j. l. welch interpreting benchmarks brad carlile access control in multicast packet switching xing chen jeremiah f. hayes applying packet techniques to cellular radio n. f. maxemchuk dynamics of large scale distributed networks this talk addresses the question of robustness for large scale, distributed communications networks from a dynamical standpoint. in particular, we are interested in maximizing call throughput (carried load) subject to varying levels of localized congestion and network node and link outages. the networks considered use alternate routing and crankback control to route calls. a set of approximately one hundred public switched network (psn) switching systems, together with a subset of psn facility and trunk interconnectivity, are used to generate several network designs. trunk to facility assignments are included in the design process, so that the effect of correlated trunk failures is properly taken into account. two networks having different types of connectivity are discussed. the first network possesses "nearest neighbor" connectivity in which each node is connected by trunk groups to its nearest neighbors only ("nearest" referring to the physical proximity of nodes). the second network incorporates a "route-around" capability in which node pairs are interconnected by facility paths around intervening nodes. the route-around connectivity is extracted from psn trunk groups, and hence is not considered to be costly augmentation. route-around allows an alternate routing plan, perhaps used in conjunction with an adaptive routing algorithm, to route calls around trouble spots in the network. using a network model to evaluate network performance, it is shown that these two networks (i.e., the nodes together with their trunk group connectivity) have very different performance characteristics when subjected to congestion and node and link outages.
furthermore, it is seen that the network with bypass connectivity not only maintains a higher call-carrying capacity than the nearest neighbor network, under local congestion and node outage, but also has a lower average call set-up time. the data and algorithms used for network design and performance evaluation will be discussed. through the use of simulation it is shown that the "nearest neighbor" topology exhibits natural high usage routes with the attendant robustness concerns. the question of how to increase trunk connectivity so as to improve robustness without significantly affecting network cost is investigated. clayton m. lockhart mario r. garzia douglas l. mikula perspectives on atm switch architecture and the influence of traffic pattern assumptions on switch design switch designs and the uniform distribution traffic pattern that has been the basis of much switch design analysis are discussed. in particular it is shown that head of line blocking is not the major cause of switch and link underutilization. switch fabric and buffer system input bandwidth tradeoffs are described. for client server applications it is shown that having the majority of switch buffers on the input side of a switch reduces overall switch buffering. robert j. simcoe tong-bi pei a proof for lack of starvation in dqdb with and without slot reuse oran sharon axon: network virtual storage design james p. g. sterbenz gurudatta m. parulkar traffic placement policies for multi-band network recently protocols have been introduced that enable the integration of synchronous traffic (voice or video) and asynchronous traffic (data) and extend the size of local area networks without loss in speed or capacity. one of these is drama, a multiband protocol based on broadband technology. it provides dynamic allocation of bandwidth among clusters of nodes in the total network. in this paper, we propose and evaluate a number of traffic placement policies for such networks. metrics used for performance evaluation include average network access delay, degree of fairness of access among the nodes, and network throughput. the feasibility of the drama protocol is established through simulation studies. drama provides effective integration of synchronous and asynchronous traffic due to its ability to separate traffic types. under the suggested traffic placement policies, the drama protocol is shown to handle diverse loads, mixes of traffic types, and numbers of nodes, as well as modifications to the network structure and momentary traffic overloads. k. j. maly e. c. foudriat d. game r. mukkamala c. m. overstreet distributing resources in hypercube computers given a type of resource such as disk units, extra memory modules, connections to the host processor, or software modules, we consider the problem of distributing the resource units to processors in a hypercube computer so that certain performance requirements are met at minimal cost. typical requirements include the condition that every processor is within a given distance of a resource unit, that every processor is within a given distance of each of several resources, and that every m-dimensional subcube contains a resource unit. the latter is particularly important in a multiuser system in which different users are given their own subcubes. in this setting, we also consider the problem of meeting the performance requirements at minimal cost when the subcube allocation system cannot allocate all possible subcubes and the requirements apply only to allocable subcubes. 
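a minimal sketch of the covering condition stated in the hypercube resource-distribution abstract just above: a brute-force check that every node of an n-cube lies within a given hamming distance of some resource node. this is purely illustrative; the paper's constructive placement techniques and bounds are not reproduced here.

```python
# brute-force check that every node of an n-dimensional hypercube is
# within hamming distance d of at least one resource node. illustrative
# only; the paper gives constructive placements and upper/lower bounds,
# which this sketch does not attempt.

def hamming(a, b):
    return bin(a ^ b).count("1")

def covers(n, resources, d):
    """true if every node of the n-cube is within distance d of a resource."""
    return all(
        any(hamming(node, r) <= d for r in resources)
        for node in range(2 ** n)
    )

if __name__ == "__main__":
    # in a 3-cube, resources at 000 and 111 cover every node within distance 1.
    print(covers(3, [0b000, 0b111], 1))   # true
    # a single resource at 000 leaves node 111 uncovered at distance 2.
    print(covers(3, [0b000], 2))          # false
```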
we also analyze the problem of partitioning processors with resources into different classes, requiring that every processor is within a given distance of, or in a subcube of given dimension with, a member of each class. efficient constructive techniques for distributing or partitioning a resource are given for several performance requirements, along with upper and lower bounds on the total number of resource units required. m. livingston q. f. stout network support for mobile multimedia using a self-adaptive distributed proxy recent advancements in video and audio codec technologies (e.g., realvideo [18]) make multimedia streaming possible across a wide range of network conditions. with an increasing trend of ubiquitous connectivity, more and more areas have overlapping coverage of multiple wired and wireless networks. because the best network service changes as the user moves, to provide good multimedia application performance, the service needs to adapt to user movement as well as network and computational resource variations. for wireless multimedia applications, one must ensure smooth transitions when network connectivity changes. we argue that network adaptations for multimedia applications should be provided at the application layer with help from proxies in the network. the reasons are ease of programming, ease of deployment, better fault-tolerance, and greater scalability. we propose a self-adaptive distributed proxy system that provides streaming multimedia service to mobile wireless clients. our system intelligently adapts to the real-time network variations and hides handoff artifacts using application protocol specific knowledge whenever possible. it also uses application-independent techniques such as dynamic relocation of transcoders and automatic insertion of forward error correction and compression into the data transcoding path. we advocate a composable, relocatable transcoding data path consisting of a directed acyclic graph of strongly-typed operators to bridge any data format mismatch between the client and the data source. in this paper, we present the design, implementation, and evaluation of our system in the context of streaming video playback involving a series of transcoding proxies and a mobile client. zhuoqing morley mao hoi-sheung wilson so byunghoon kang developing a managed system in the osi network management young-chul shim comparison of signaling loads for pcs systems thomas f. la porta malathi veeraraghavan richard w. buskens case study: system model of crane and embedded control eduard moser wolfgang nebel editorial pointers diane crawford performance of a decnet based disk block server this report describes an experimental disk block server implemented for the rsx-11m operating system using decnet. the block server allows user programs on one system to access files on a disk physically located on a different system. the actual interface is at the level of physical blocks and io transfers. results of basic performance measurements are given, and explained in terms of major components. performance predictions are made for servers of this type supporting more complex workloads. rollins turner jeffrey schriesheim indrajit mitra nsf report - computer and computation research d. t. lee the meiko cs-2 system architecture duncan roweth a model for evaluating demand assignment protocols with arbitrary workloads demand assignment protocols have been implemented in a large number of operational lans.
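the self-adaptive proxy abstract above describes a composable, relocatable transcoding path built as a directed acyclic graph of strongly-typed operators; the sketch below illustrates only the type-checked composition idea, with hypothetical operator and media-type names rather than the authors' system (and a simple chain rather than a general dag).

```python
# illustrative sketch of a transcoding path as a chain of typed operators.
# operator names and media types are hypothetical; the point is only that
# type checking the composition catches format mismatches between client
# and source before the path is deployed.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Operator:
    name: str
    in_type: str
    out_type: str
    fn: Callable[[bytes], bytes]

def compose(path: List[Operator], source_type: str, sink_type: str) -> Callable[[bytes], bytes]:
    """verify types along the path, then return the composed transform."""
    current = source_type
    for op in path:
        if op.in_type != current:
            raise TypeError(f"{op.name}: expects {op.in_type}, got {current}")
        current = op.out_type
    if current != sink_type:
        raise TypeError(f"path ends in {current}, client wants {sink_type}")
    def run(data: bytes) -> bytes:
        for op in path:
            data = op.fn(data)
        return data
    return run

if __name__ == "__main__":
    # toy operators: a "decoder", a "downscaler", and an "encoder".
    decode = Operator("decode", "video/mpeg", "video/raw", lambda d: d)
    scale = Operator("downscale", "video/raw", "video/raw", lambda d: d[: len(d) // 2])
    encode = Operator("encode", "video/raw", "video/h261", lambda d: d)
    pipeline = compose([decode, scale, encode], "video/mpeg", "video/h261")
    print(len(pipeline(b"x" * 1000)))   # 500
```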
however, the analytic models for protocol evaluation currently capture only a limited range of lan operational circumstances. in this paper we present an analytic tool for evaluating fair and prioritized demand assignment protocols with a relatively general workload model. we consider arbitrary and station dependent message size distributions and arrival rates. this allows the modeling of lans with heterogeneous workloads, thus permitting a larger correspondence between the modeled system and the real life network. z koren i chlamtac a ganz mac-cube, the macintosh-based hypercube the mac-cube is a macintosh- based hypercube. it uses appletalk hardware and software as the medium for the nodal connections. at the physical level appletalk has a bus topology. hypercube connectivity is emulated on the appletalk local area network while the hypercube communication protocol is integrated in the appletalk software. available for mac-cube is the crystalline operating system iii (cros iii.) mac-cube provides a programming environment similar to any other hypercube systems running cros iii. it allows inexpensive hands-on experience with a concurrent machine. data can be displayed on graphics monitor and/or stored locally at each node. in addition to the low cost, these capabilities of the mac-cube makes it an indispensable instructional and development tool for parallel processing. some application programs which are taken from the book solving problems on concurrent processors [fox 88] have been implemented with graphics enhancement on mac- cube. the applications are solving the mandelbrot set in the complex plane, and solving a 2-dimensional laplace equation using finite difference. a. ho g. c. fox d. w. walker m. breaden s. chen a. knutson s. kuwamoto t. cole duplication of packets and their detection in x.25 communication protocols in the context of x.25 communication protocols, this paper is concerned with the process of duplication of packets at the frame level, and with their detection at the packet level. with suitable constraints involving (i) the window sizes used at both the frame and packet levels to flow control, and (ii) the numbering scheme used to number user data packets, it is possible to detect and discard all duplicate copies thereof. further, it is argued that duplication of 'messages' by a particular layer be detected at the level where these messages are generated and interpreted. deferring this detection to higher levels is unnecessary and more difficult. bijendra n. jain business: the 8th layer: the net needs quality of service - but what exactly is it? kate gerwig an uplink cdma system architecture with diverse qos guarantees for heterogeneous traffic sunghyun choi kang g. shin mobile router technology development cisco system and nasa have been performing joint research on mobile routing technology under a nasa space act agreement. cisco developed mobile router technology and provided that technology to nasa for applications to aeronautic and space-based missions. nasa has performed stringent performance testing of the mobile router, including of the interaction of routing and transport level protocols. this paper describes mobile routing, the mobile router, and some key configuration parameters. in addition, the paper describes the mobile routing test network and test results documenting the performance of transport protocols in dynamic routing environments. william d. ivancic david h. stewart terry l. bell brian a. 
kachmar dan shell kent leung network issues in the growth and adoption of networked cscw services computer supported cooperative work (cscw) environments have traditionally made heavy use of network technology to allow users (often at different locations) to work together via computer systems. however, this dependency of cscw applications on the underlying network technology has up to now not been a real issue within the community. cscw research has traditionally focused on the design of shared environment applications that run on this underlying network, e.g., the development of interaction and presentation techniques for shared tasks. also, cscw has been a testbed for design methodologies such as the ethnographic analysis of user behaviour. although we support this emphasis on social science within the field, we feel that in this truly multidisciplinary area researchers should become more aware of network-related research issues. a number of parallel events can be identified that triggered our concern: the move from local area network technology to wide area network technology (the global internet) for cscw applications; the evolution of this internet towards commercialization of services and the privatisation of many telecommunication service providers in europe; and the political debate over the information super highway, necessary to support the increasing bandwidth requirements typical for cscw applications that require the use of multiple channels of information. the move towards the internet has given us the opportunity for global participation in cscw environments, resulting in a more generic utilization of this technology. however, the caveat for the cscw community lies in the fact that we now no longer fully control our shared environments, and have become more dependent on the global decision-making process regarding network infrastructure. the same institution that gave our research community access to internetworking in the late 1970s, the national science foundation (nsf), has decided that further governmental funding of the nsfnet backbone, which constituted a major part of internet infrastructure in the usa, is no longer required. commercial service providers now operate major backbone services for the internet on a commercial basis [8]. these commercial network providers thus take charge over essential pieces of internet infrastructure, taking decisions which could have a serious impact on distributed multimedia services provided by the internet such as the world-wide web and mbone, which we'll discuss further on. in europe, a similar development is taking place: traditional, often monopoly-based, network providers such as british telecom and the dutch ptt telecom have recently been privatised and are in the process of reconsidering their tasks and services. the debate in both the usa and europe over the information super highway or national information infrastructure (nii) gives further evidence that governments will not be able to maintain development and support of new high-bandwidth information services in the near future. this, however, is only one network-related development that directly concerns the cscw community. new standards are emerging for the transmission of real-time high-bandwidth data over internet connections. this high-bandwidth data typically involves video and audio information from shared multimedia communication environments. in the next section, we will discuss what these standards might provide in terms of functionality to the cscw community, and what constraints these standards introduce.
they are often defined by network researchers who have a genuine interest in providing network capabilities, but typically do not have the same means as the cscw community to regard cognitive ergonomical aspects of network functionality. we try to demonstrate the importance of a dialogue between the two areas of research, a cross-fertilization from which both communities will benefit. the current attitude in the cscw community towards connectivity is very similar to the attitude towards computing power in the field of human-computer interaction (hci) in the late 1970s: the infrastructure necessary to apply our research ideas is not of our concern, and will be delivered by others. that may have been true for direct manipulation systems, but to what extent will it be true for shared real-time multimedia environments? roel vertegaal steve guest a hybrid handover protocol for local area wireless atm networks while handovers of voice calls in a wide area mobile environment are well understood, handovers of multi-media traffic in a local area mobile environment are still in an early stage of investigation. unlike the public wireless networks, handovers for multi-media wireless lans (wlans) have special requirements. in this paper, the problems and challenges faced in a multi-media wlan environment are outlined and a multi-tier wireless cell clustering architecture is introduced. design issues for multi-media handovers are specified and a fast, continuous and efficient hybrid handover protocol is proposed. the protocol is scalable and supports source and destination mobile handovers in a mutually exclusive manner. crossover switch (cx) discovery is also introduced to support fast inter-cluster handovers with consideration given to mobile quality of service (m-qos). the resulting wireless atm lan exhibits a distributed mobile location management, call admission control and handover management architecture. a prototype of the proposed handover protocol is implemented in a cambridge fairisle atm switch and the results of handovers for a single mobile host (mh) with a single on-going connection are evaluated. it was found that implementing transport mobility for a wireless atm environment is not practical as the cell re-routing function changes the traffic characteristics and is not scalable to increasing cell rate and to the number of mobile connections. the data-link layer mobility implementation, however, is found to work well. the protocol provides symmetric data disruption to traffic flows in both directions, and up to seventy-five intra-cluster handovers can be supported in a second. throughout the experiment, cells arrive in sequence with no cell loss observed during the handover, up to the capacity limit of the atm switch. finally, 'zig-zag' handovers and handovers for a single mh with multiple ongoing unicast connections are performed in order to evaluate the robustness and performance of the protocol under different mh migration and communication environments. chai-keong toh researches in network development of junet junet was developed in order to provide a testing environment for studies of computer networking and distributed processing by connecting a large number of computers and by providing actual services for the users. research interests in development of the network have been focused on resource name management, japanese character handling and communication technologies.
for name management, the hierarchical domain concept is employed to construct a name space for the network, and a mechanism for text message exchange is implemented using the concept. an environment for text message exchange using japanese characters is achieved as a result of general discussions on how to handle 16-bit kanji codes in computers. efficient data transmission with high speed modems is achieved by a new uucp protocol and the dial-up ip link mechanism developed with a tty driver which provides a host-to-modem flow control mechanism. as a result of the research described above, junet currently connects various types of organizations related to computer science: 87 organizations with more than 250 computers. it connects universities and major research laboratories in japan, and the protocols currently used are tcp/ip over leased lines as well as dial-up lines, and uucp over dial-up lines. services such as electronic mail and network news have been provided since the network was started, and special technologies for japanese character handling, name servers, and multimedia mail support have been developed. in this paper, the current status of junet and its research results in development of the network are described. j. murai a. kato distributed channel allocation for pcn with variable rate traffic partha p. bhattacharya leonidas georgiadis arvind krishna instruction set design issues relating to a static dataflow computer in an effort to minimize traffic in the distribution network of a static dataflow machine the design of the system includes alternate data paths so that data movement may take place over "shorter" paths when it is permissible to do so. the main emphasis of this approach is to allow rapid transfer of data in sequential code segments residing in single memory blocks. this decreases crowding in the more expensive distribution network utilized by data that fans out to two or more blocks as required when more concurrent activity is to be initiated during the execution of the program. the objective of data movement minimization has also influenced the design of the instruction set. in this case, composite, that is, "multi-actor" instructions have been proposed as an effective strategy. this has been done without compromising the utility of the instructions or overly increasing the time and space requirements of their execution. in the paper, these principles are illustrated by defining controlled instructions that are especially useful in the management of loops. f. j. burkowski combined tuning of rf power and medium access control for wlans mobile communication devices, such as handhelds and laptops, still suffer from short operation time due to limited battery capacity. we exploit the approach of protocol harmonization to extend the time between battery charges in mobile devices using an ieee 802.11 network interface. many known energy saving mechanisms only concentrate on a single protocol layer while others only optimize the receiving phase by on/off switching. we show that energy saving is also possible during the sending process. this is achieved by a combined tuning of the data link control and physical layer. in particular, harmonized operation of power control and medium access control will lead to a reduction of energy consumption. we show an rf power and medium access control trade-off. furthermore we discuss applications of the results in ieee 802.11 networks.
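to make the rf-power/mac trade-off in the preceding abstract concrete, here is a toy energy model with invented constants and a hypothetical frame-error-rate curve (not the authors' measurements): raising transmit power costs more energy per attempt but reduces retransmissions, so the energy per successfully delivered frame is minimized somewhere in between.

```python
# toy model of the rf-power / mac trade-off: expected energy per delivered
# frame as a function of transmit power. the frame-error-rate curve and all
# constants are invented for illustration; they are not from the paper.

import math

def frame_error_rate(tx_power_mw):
    # hypothetical: errors drop roughly exponentially with transmit power.
    return min(1.0, math.exp(-tx_power_mw / 20.0))

def energy_per_delivered_frame(tx_power_mw, frame_time_s=0.001, overhead_mw=50.0):
    """expected energy (mJ) per successfully delivered frame, counting retries."""
    per_attempt_mj = (tx_power_mw + overhead_mw) * frame_time_s
    expected_attempts = 1.0 / (1.0 - frame_error_rate(tx_power_mw) + 1e-9)
    return per_attempt_mj * expected_attempts

if __name__ == "__main__":
    for p in (5, 10, 20, 40, 80, 160):
        print(f"tx power {p:4d} mw -> {energy_per_delivered_frame(p):.3f} mJ/frame")
```

with these invented numbers the minimum falls at a moderate power level, which is the qualitative point of tuning power and the mac jointly rather than separately.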
a simulation based study of tlb performance this paper presents the results of a simulation-based study of various translation lookaside buffer (tlb) architectures, in the context of a modern vlsi risc processor. the simulators used address traces generated by instrumented versions of the specmarks and several other programs running on a decstation 5000. the performance of two-level tlbs and fully-associative tlbs was investigated. the amount of memory mapped was found to be the dominant factor in tlb performance. small first-level fifo instruction tlbs can be effective in two-level tlb configurations. for some applications, the cycles-per-instruction (cpi) loss due to tlb misses can be reduced from as much as 5 cpi to negligible levels with typical tlb parameters through the use of variable-sized pages. j. bradley chen anita borg norman p. jouppi simulation of a market model for distributed control decentralized control in the allocation of computing resources to tasks in a multiprocessing environment is designed using a general concept of a market. two different market models are developed: auction and barter. in both cases, the tasks compete among themselves for computing resources rather than having them assigned by a host. definite relationships between market strategy and system performance were found. moreover, the results varied with machine configuration and appear applicable to systems of distributed control. ross a. gagliano martin d. fraser mark e. schaefer measurement and evaluation of the mips architecture and processor mips is a 32-bit processor architecture that has been implemented as an nmos vlsi chip. the instruction set architecture is risc-based. close coupling with compilers and efficient use of the instruction set by compiled programs were goals of the architecture. the mips architecture requires that the software implement some constraints in the design that are normally considered part of the hardware implementation. this paper presents experimental results on the effectiveness of this processor as a program host. using sets of large and small benchmarks, the instruction and operand usage patterns are examined both for optimized and unoptimized code. several of the architectural and organizational innovations in mips, including software pipeline scheduling, multiple-operation instructions, and word-based addressing, are examined in light of this data. thomas r. gross john l. hennessy steven a. przybylski christopher rowen improved algorithms for synchronizing computer network clocks the network time protocol (ntp) is widely deployed in the internet to synchronize computer clocks to each other and to international standards via telephone modem, radio and satellite. the protocols and algorithms have evolved over more than a decade to produce the present ntp version 3 specification and implementations. most of the estimated deployment of 100,000 ntp servers and clients enjoy synchronization to within a few tens of milliseconds in the internet of today. this paper describes specific improvements developed for ntp version 3 which have resulted in increased accuracy, stability and reliability in both local-area and wide-area networks. these include engineered refinements of several algorithms used to measure time differences between a local clock and a number of peer clocks in the network, as well as to select the best ensemble from among a set of peer clocks and combine their differences to produce a clock accuracy better than any in the ensemble.
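as a rough illustration of the peer-combining idea just described (and not the actual ntp version 3 selection, clustering and combining algorithms), the sketch below weights each peer's measured offset by the inverse of its dispersion, so that noisier peers contribute less to the clock correction.

```python
# illustrative sketch of combining clock offsets from several peers by
# weighting each offset with the inverse of its dispersion (error bound).
# a simplification for exposition only, not the ntp v3 algorithms.

def combine_offsets(peers):
    """peers: list of (offset_seconds, dispersion_seconds); return weighted offset."""
    weights = [1.0 / max(disp, 1e-6) for _, disp in peers]
    total = sum(weights)
    return sum(w * off for (off, _), w in zip(peers, weights)) / total

if __name__ == "__main__":
    peers = [
        (+0.012, 0.002),   # tight peer: small error bound, large weight
        (+0.010, 0.004),
        (-0.080, 0.050),   # noisy peer: large error bound, small weight
    ]
    print(f"combined offset = {combine_offsets(peers) * 1000:.2f} ms")
```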
this paper also describes engineered refinements of the algorithms used to adjust the time and frequency of the local clock, which functions as a disciplined oscillator. the refinements provide automatic adjustment of message-exchange intervals in order to minimize network traffic between clients and busy servers while maintaining the best accuracy. finally, this paper describes certain enhancements to the unix operating system software in order to realize submillisecond accuracies with fast workstations and networks. david l. mills performance impact of proxies in data intensive client-server applications michael d. beynon alan sussman joel saltz seven comments on charging and billing philip ginzboorg a novel adaptive hybrid arq scheme for wireless atm networks this paper describes the design and performance of a novel adaptive hybrid arq scheme using concatenated fec codes for error control over wireless atm networks. the wireless links are characterized by higher, time - varying error rates and burstier error patterns in comparison with the fiber - based links for which atm was designed. the purpose of the hybrid arq scheme is to provide a capability to dynamically support reliable atm - based transport over wireless channels by using a combination of our arq scheme called sdlp and the concatenated fec scheme. the key ideas in the proposed hybrid arq scheme are to adapt the code rate to the channel conditions using incremental redundancy and to increase the starting code rate as much as possible with the concatenated fec, maximizing the throughput efficiency. the numerical results show that our proposed scheme outperforms other arq schemes for all snr values. inwhee joe tagged architecture: how compelling are its advantages? edward f. gehringer j. leslie keedy analytical modeling and architectural modifications of a dataflow computer dataflow computers are an alternative to the von neumann architectures and are capable of exploiting large amount of parallelism inherent in many computer applications. this paper deals with the performance analysis of the manchester dataflow computer based on queueing network models. the model of the dataflow computer has been validated by comparing the analytical results with those obtained from the prototype manchester dataflow computer. the bottleneck centers in the prototype machine have been identified through the model and various architectural modifications have been investigated both from performance and reliability viewpoints. d. ghosal l. n. bhuyan parameter replacement for celp coded speech in land mobile radio the contribution of this paper is in applying parameter replacement techniques to speech that is compressed by the federal standard 1016 celp speech coder, protected by reed--solomon codes, and transmitted over a wireless channel. the parameter replacement results in significant improvement in speech quality without any increase in bit rate. yaacov yesha scheduling data broadcast in asymmetric communication environments with the increasing popularity of portable wireless computers, mechanisms to efficiently transmit information to wireless clients are of significant interest. the environment under consideration is asymmetric in that the information server has much more bandwidth available, as compared to the clients. in such environments, often it is not possible (or not desirable) for the clients to send explicit requests to the server. it has been proposed that in such systems the server should broadcast the data periodically. 
one challenge in implementing this solution is to determine the schedule for broadcasting the data, such that the wait encountered by the clients is minimized. a broadcast schedule determines what is broadcast by the server and when. in this paper, we present algorithms for determining broadcast schedules that minimize the wait time. broadcast scheduling algorithms for environments subject to errors, and systems where different clients may listen to different number of broadcast channels are also considered. performance evaluation results are presented to demonstrate that our algorithms perform well. nitin h. vaidya sohail hameed cycle time properties of the fddi token ring protocol (extended abstract) communication technology now makes it possible to support high data transmission rates at relatively low cost. in particular, optical fiber can be used as the medium in local area networks with data rates in the range of 100 megabits per second. unfortunately, local area network topologies and communication protocols that work well with lower speed media are not necessarily appropriate when the data transmission rate is scaled up by approximately an order of magnitude. recognizing this fact, an ansi sub- committee (ansix3t9) has been working for the past two years on a proposed standard for a token ring protocol tailored to a transmission medium with transmission rate in the 100 megabits per second range. the protocol is referred to as the fddi (fiber distributed data interface) token ring protocol. the proposal for the standard is now quite mature and nearly stable. while numerous analyses of the performance of token ring protocols have been carried out and described in the literature, these have for the most part dealt with protocol variations of less complexity than fddi. the major feature that distinguishes fddi from token ring protocols that have been analyzed previously is the concept of a "timed token", which selectively allocates the right to transmit data among the stations depending in part on how rapidly the token progressed around the ring on the previous cycle. a station is allowed to transmit certain types of data only if the token's last cycle has been shorter than a "target" token rotation time. this feature makes it possible to give guaranteed response to time-critical messages. the "timed token" creates some dependencies among transmissions at various stations, however, and these dependencies complicate the analysis of the protocol's performance. the basic ideas of the timed token protocol on which the fddi protocol is based were first presented by grow ["a timed-token protocol for local area networks", electro '82, 1982]. he distinguished two types of traffic. synchronous traffic is a type of traffic that has delivery time constraints. examples include voice and video transmissions, where delays in transmission can result in disruptions of the sound or picture signal. asynchronous traffic has no such time constraints, or at least the time constraints are measured in units that are large relative to the token cycle time. here is a brief overview of how the "timed token" protocol works. the stations on the local area network choose, in a distributed fashion, a target token rotation time (ttrt). basically, the ttrt is chosen to be sufficiently small that requirements for responsiveness at every station will be met. 
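a minimal sketch of the timed-token rule this overview describes, simplified to ignore the "lateness" carry-over and the overheads discussed later in the abstract: a station may always use its synchronous allocation, and may send asynchronous traffic only when the token arrives ahead of schedule, for at most the time by which it is early. the millisecond figures are illustrative only.

```python
# simplified timed-token rule: on token arrival a station measures the time
# since it last saw the token. synchronous traffic may always use the
# station's allocation; asynchronous traffic is allowed only if the token
# is early, and only for the amount of time by which it is early.
# lateness carry-over and protocol overheads are ignored here.

def token_holding_times(ttrt, last_rotation, sync_allocation):
    """return (sync_time, async_time) a station may transmit on this token visit."""
    sync_time = sync_allocation
    early_by = ttrt - last_rotation
    async_time = max(0.0, early_by)      # only if the token is ahead of schedule
    return sync_time, async_time

if __name__ == "__main__":
    ttrt = 8.0          # target token rotation time, ms (illustrative)
    alloc = 0.5         # this station's synchronous allocation, ms
    for rotation in (5.0, 7.5, 8.0, 9.5):
        s, a = token_holding_times(ttrt, rotation, alloc)
        print(f"last rotation {rotation:4.1f} ms -> sync {s} ms, async {a:.1f} ms")
```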
the right to use network bandwidth for transmission of synchronous traffic is allocated among the stations in a manner that guarantees that network capacity is not exceeded. the token is then forced by the protocol to circulate with sufficient speed that all stations receive their allocated fractions of capacity for synchronous traffic. this is done by conditioning the right to transmit asynchronous messages on the fact that the token has rotated sufficiently fast that it is "ahead of schedule" in delivering synchronous allocations to the stations. in essence, the ttrt value dictates a departure schedule for the token to pass from station to station, and asynchronous traffic can be transmitted only when doing so does not cause that schedule to be broken. subsequently, ulm ["a timed token ring local area network and its performance characteristics", proc. of conf. on local area networks, ieee, 1982] analyzed the protocol described by grow and determined its sensitivity to various parameters. he considered the effect of overheads and provided a number of graphs indicating the impact of various parameters on maximum transmission capacity. as well as describing the timed token protocol, grow and ulm included intuitive arguments supporting two fundamental properties of (a somewhat idealized version of) the protocol. these two properties are: (1) the average token cycle time in the absence of failures is at most the ttrt; and (2) the maximum token cycle time in the absence of failures is at most twice the ttrt. both these properties are important to the successful operation of the protocol. the first one guarantees that the average long run bandwidth provided to each station is at least its allocated fraction of the network's capacity. the second property guarantees that, in the absence of component failures, the time between a station's successive opportunities to transmit synchronous traffic will never exceed twice the target token rotation time. while grow and ulm assert that these properties hold for the timed-token protocol, neither formal proofs nor references are provided. because the fddi protocol is based on the same timed-token protocol, subsequent publications specifically describing the fddi protocol have also claimed that the two properties hold. in this paper, we prove both properties using a common notational framework. we first treat an idealized situation in which several types of overhead are ignored. we actually study a protocol that is slightly more liberal than the fddi proposed standard in that it allows asynchronous transmission more often because "lateness" is not carried forward from cycle to cycle. the protocol variation, which still guarantees properties (1) and (2), is at least as easily implemented as the original version. also, it guarantees sufficient responsiveness and capacity for the transmission of synchronous traffic, while providing improved responsiveness to asynchronous transmissions. when overheads are considered, it is found that the proposed standard fddi protocol satisfies the constraint on average token rotation time (relying on the retention of "lateness" from cycle to cycle), but not the one on maximum cycle time. we analyze a variation of the protocol that ignores accumulated lateness, but accounts for the various overhead sources. the advantages of the new rule include: it guarantees both desired properties without having to retain "lateness" from one cycle to the next.
it provides better service to asynchronous requests in the case where the amount of overhead is small relative to the token rotation time. (when the amount of overhead is large, the original proposed protocol may have token rotation times significantly in excess of twice the ttrt.) it is easier to implement. work is underway on the task of quantifying the performance of the fddi protocol by determining estimates of, or tighter bounds on, the average token rotation time and on the average delivery time of a submitted message. the properties established in this paper are required to form the basis of the quantitative analysis. kenneth sevcik marjory j. johnson physical limitations of a computer iraj danesh traffic analysis of rectangular sw-banyan networks this paper describes an algorithm to route packets in rectangular sw-banyans. a packet switching scheme is modelled on single and double sided sw-banyans and the results are presented. effects of queue lengths and processor configurations on performance are studied. prevention of deadlocks in the network is discussed. r. m. jenevein t. mookken an effective scheme for pre-emptive priorities in dual bus metropolitan area networks the ieee 802.6 standard for metropolitan area networks does not provide multiple priority traffic for connectionless data services. a priority mechanism that was considered for the standard was shown to be ineffective. as of now, there exists no protocol for multiple access dual bus networks that is able to implement pre-emptive priorities and, at the same time, can satisfy minimal fairness requirements for transmissions at the highest priority level. in this study, a protocol with strictly pre-emptive priorities, i.e., a protocol that does not admit low priority traffic if the load from high priority traffic exceeds the capacity of the transmission channel, is presented. the protocol is derived from a unique bandwidth allocation scheme with a full utilization of the bus capacity, with a fair distribution of bandwidth with respect to traffic from a particular priority level, and with pre-emptive priorities. the performance of the presented protocol is compared to a priority mechanism that is based on the bandwidth balancing mechanism. it is shown that adopting the new protocol results in shorter access delays for high priority transmissions. jörg liebeherr ian f. akyildiz asser n. tantawi a conceptual framework for network and client adaptation modern networks are extremely complex, varying both statically and dynamically. this complexity and dynamism are greatly increased when the network contains mobile elements. a number of researchers have proposed solutions to these problems based on dynamic adaptation to changing network conditions and application requirements. this paper summarizes the results of several such projects and extracts several important general lessons learned about adapting data flows over difficult network conditions. these lessons are then formulated into a conceptual framework that demonstrates how a few simple and powerful ideas can describe a wide variety of different software adaptation systems. this paper describes an adaptation framework in the context of several successful adaptation systems and suggests how the framework can help researchers think about the problems of adaptivity in networks. b. badrinath armando fox leonard kleinrock gerald popek peter reiher m. satyanarayanan migrating sockets - end system support for networking with quality of service guarantees david k. y. yau simon s.
lam layered cross product - a technique to construct interconnection networks shimon even ami litman a new cache replacement scheme based on backpropagation neural networks in this paper, we present a new neural network-based algorithm, kora (khalid shadow replacement algorithm), that uses a backpropagation neural network (bpnn) for the purpose of guiding the line/block replacement decisions in cache. this work is a continuation of our previous research presented in [1]-[3]. the kora algorithm attempts to approximate the replacement decisions made by the optimal scheme (opt). the key to our algorithm is to identify and subsequently discard the dead lines in cache memories. this allows our algorithm to provide better cache performance as compared to the conventional lru (least recently used), mru (most recently used), and fifo (first in first out) replacement policies. extensive trace-driven simulations were performed for 30 different cache configurations using different spec (standard performance evaluation corp.) programs. simulation results have shown that kora can provide substantial improvement in the miss ratio over the conventional algorithms. our work opens new dimensions for research in the development of new and improved page replacement schemes for virtual memory systems and disk caches. humayun khalid architectural and performance evaluation of giganet and myrinet interconnects on clusters of small-scale smp servers giganet and myrinet are two of the leading interconnects for clusters of commodity computer systems. both provide memory-protected user-level network interface access, and deliver low-latency and high-bandwidth communication to applications. giganet is a connection-oriented interconnect based on a hardware implementation of virtual interface (vi) architecture and asynchronous transfer mode (atm) technologies. myrinet is a connectionless interconnect which leverages packet switching technologies from experimental massively parallel processor (mpp) networks. this paper investigates their architectural differences and evaluates their performance on two commodity clusters based on two generations of symmetric multiprocessor (smp) servers. the performance measurements reported here suggest that the implementation of the message passing interface (mpi) significantly affects cluster performance. although mpich-gm over myrinet demonstrates lower latency with small messages, the polling-driven implementation of mpich-gm often leads to tight synchronization between communication processes and higher cpu overhead. jenwei hsieh tau leng victor mashayekhi reza rooholamini an overview of risc architecture samuel o. aletan design and analysis of an algorithm for fair service in error-prone wireless channels songwu lu thyagarajan nandagopal vaduvur bharghavan assigning codes in wireless networks: bounds and scaling properties in the code division multiple access (cdma) framework, collisions that can occur in wireless networks are eliminated by assigning orthogonal codes to stations, a problem equivalent to that of coloring graphs associated to the physical network. in this paper we present new upper and lower bounds for two versions of the problem (hidden and primary collision avoidance -- hp-ca -- or hidden collision avoidance only -- h-ca). in particular, optimal assignments for special topologies and heuristics for general topologies are proposed. the schemes show better average results with respect to existing alternatives.
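the code-assignment abstract immediately above treats code assignment as graph coloring; as a toy illustration of the hp-ca version, where a station's code is usually required to differ from those of all stations within two hops (primary plus hidden conflicts), here is a greedy pass over a conflict graph. it is only an upper-bound heuristic for illustration, not the paper's optimal assignments, heuristics, or branch-and-bound procedure.

```python
# toy greedy code assignment for the hp-ca interpretation: a station's
# code must differ from the codes of all stations within two hops.
# a simple greedy pass, purely for illustration.

def assign_codes(adj):
    """adj: dict node -> set of neighbours. returns dict node -> code index."""
    codes = {}
    for node in sorted(adj):                  # fixed order, for determinism
        conflicts = set(adj[node])            # one hop (primary collisions)
        for nb in adj[node]:
            conflicts |= adj[nb]              # two hops (hidden collisions)
        conflicts.discard(node)
        used = {codes[c] for c in conflicts if c in codes}
        code = 0
        while code in used:
            code += 1
        codes[node] = code
    return codes

if __name__ == "__main__":
    # a 5-node path 0-1-2-3-4: any three consecutive stations need distinct codes.
    path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
    print(assign_codes(path))   # {0: 0, 1: 1, 2: 2, 3: 0, 4: 1}
```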
furthermore, the gaps between the upper bound given by the heuristic solution, the lower bound obtained from the maximum-clique problem, and the optimal solution obtained by branch and bound are investigated in the different settings. a scaling law is then proposed to explain the relations between the number of codes needed in euclidean networks with different station densities and connection distances. the substantial difference between the two versions hp-ca and h-ca of the problem is investigated by studying the probabilistic distribution of connections as a function of the distance, and the asymptotic size of the maximum cliques. roberto battiti alan a. bertossi maurizio a. bonuccelli decoupled access/execute computer architectures james e. smith editorial: recasting the vote ken korman command execution in a heterogeneous environment as a user's computing environment grows from a single time-shared host to a network of specialized and general-purpose machines, the capability for the user to access all of these resources in a consistent and transparent manner becomes desirable. instead of viewing commands as binary files, we expect the user to view commands as services provided by servers in the network. the user interacts with a personal workstation that locates and executes services on his behalf. executing a single service provided by any server in the network is useful, but the user would also like to combine services from different machines to perform complex computations. to provide this facility we expand on the unix notion of pipes to a generalized pipeline mechanism containing services from a variety of servers. in this paper we explain the merits of a multi-machine pipeline for solving problems of accessing services in a heterogeneous environment. we also give a design and performance evaluation of a general mechanism for multi-machine pipes using the darpa udp and tcp protocols. j t korb c e wills broadcasting under network ignorance scenario xiangchuan chen hong an shirong zheng use of tcp decoupling in improving tcp performance over wireless networks we propose using the tcp decoupling approach to improve a tcp connection's goodput over wireless networks. the performance improvement can be analytically shown to be proportional to , where mtu is the maximum transmission unit of participating wireless links and hp_sz is the size of a packet containing only a tcp/ip header. for example, on a wavelan [32] wireless network, where mtu is 1500 bytes and hp_sz is 40 bytes, the achieved goodput improvement is about 350%. we present experimental results demonstrating that tcp decoupling outperforms tcp reno and tcp sack. these results confirm the analysis of performance improvements. tact: tunable availability and consistency tradeoffs for replicated internet services haifeng yu suez: high-performance real-time ip router prashant pradhan anindya neogi evolution of the harris h-series computers and speculations on their future chuck crawford supercomputers: where are the lost cycles? willi schönauer hartmut häfner a simple and efficient routing protocol for the umts access network this paper presents a simple network layer protocol that integrates routing and connectionless transfer of data in a wireless environment. the protocol is specifically geared towards supporting transfer of signalling in mobile networks based on a rooted tree topology. 
exploiting the special characteristics of such a topology allows the specification of a very simple and processing efficient routing function. using the routing function, a connectionless message transport service is implemented. the connectionless transport service is comparable to that of typical network layer protocols of existing data networks. the protocol was originally specified to carry signalling messages in the control plane of mobile, cellular systems but has the potential to be used also in other environments. håkan mitts harri hansen a comparison study of the two-tier and the single-tier personal communications services systems a two-tier pcs system integrates the high tier pcs system and the low tier pcs systems into a single system to provide the advantages of both tiers. such a system is expected to provide better service (more available and more cost effective to the users) at the expense of the extra tier switching management. we compare the performance of the two-tier pcs system and the single low tier system in two aspects: the registration traffic and the service availability. because of the tier management, the two-tier system generates more registration traffic than the single low tier system. under the range of the input parameters in our study, we show that the two-tier system generates less than 10cases and generates less than 20system is better than the single one tier system. we also study the probability that a call is forcibly terminated in the single low tier system because the low tier becomes unavailable during the call (such a call can be continued in the two-tier system). yi-bing lin websites ken korman optical link and processor clustering in the delft parallel processor the avoidance of transfer-bound processing in large parallel computers is first analyzed with respect to the processor interconnectability. a quantitative measure is introduced for the processor subtask interactivity, based on the number of nonzeroes in the interaction matrix. a similar quantitative measure can be introduced for the processor interconnectability in case of a multi-bus communication system. for a parallel computer with p processors and p busses the asymptotic speedup for large p is proportional to p, both for tightly and loosely coupled tasks. in case of 2-level parallelization considerably more speedup can be obtained by application of powerful interconnects at both levels. as to the question of compute- versus transfer-bound processing, it is discussed that also the choice of the data exchange protocol is of great importance. it proves that the 'newspaper' protocol in combination with full processor interconnectability is an appropriate way to avoid transfer-bound processing. for a parallel computer with many processors one global newspaper for all processors may frequently result in too small an efficiency. an improvement can be obtained through the introduction of a multi-newspaper protocol, based on the interaction matrix as a function of time. it is elucidated that this protocol requires dynamic processor clustering during a parallel run. the delft parallel processor dpp84 (with maximally 16 processors) is mentioned as an example of a parallel processor, based on complete processor interconnectability, where electrical interconnects have been applied.
it is discussed that for large parallel computers, like the dpp8x (with maximally 1000 processors), full interconnectability is only feasible by way of optical interconnects, consisting of a combination of optical wave guides and free space kaleidoscopic devices and provided with multi-input, multi-output electro-to-optic and opto-to-electric transducers. as an example of an opto-to-electric transducer, the powerram is discussed. this transducer has been realized as a prototype multi-accessible processor input memory ic capable of accepting a multi-variable input in one data transfer clock cycle. l. dekker e. e. e. frietman design issues in multimedia messaging for next generation wireless systems current wireless systems allow simple messaging services such as one- or two-way text messaging using short messaging service (sms), paging and voicemail. motivated by the convergence of messaging (e.g., emails) with information services (e.g., request for stock quotes), entertainment (e.g., interactive games), commerce (e.g., advertisements) etc. in the wireless space, emerging wireless telecommunication systems envisage messaging applications with much richer media involving audio, video, web and text. in this paper, we discuss various design issues that arise in multimedia messaging from a telecommunications carrier point of view. we present a simple system design that addresses many of these issues, a key challenge being data management of (shared) multimedia messaging content. darin nelson s. muthukrishnan timestamp snooping: an approach for extending smps milo m. k. martin daniel j. sorin anastassia ailamaki alaa r. alameldeen ross m. dickson carl j. mauer kevin e. moore manoj plakal mark d. hill david h. wood rednet: a wireless atm local area network using infrared links j. h. condon t. s. duff m. f. jukl c. r. kalmanek b. n. locanthi j. p. savicki j. h. venutolo data management for mobile computing on the internet rajiv tewari peter grillo save: an algorithm for smoothed adaptive video over explicit rate networks n. g. duffield k. k. ramakrishnan amy r. reibman the aloha system the aloha system is composed of a related series of contracts and grants from a variety of funding agencies with principal support from arpa, which deal with two main themes: computer communications (task 1), and computer structures (task 2). under computer-communications there is work in (a) studies on computer communications using radio and satellites, (b) the development of a prototype radio-linked time-sharing network, (c) system studies and planning for a pacific area computer communications network linking major universities in the u.s., japan, australia and other pacific countries. under computer structures, we are engaged in research/development in multiprocessor computing structures, computer networks, and geographically distributed computing systems. this work is being undertaken in two phases: 1) the establishment of a research facility and 2) the research work itself. the research facility is centered around the bcc 500 computing system. f. f. kuo tcp source activity and its impact on call admission control in cdma voice/data network sanjoy sen jastinder jawanda kalyan basu naveen k. kakani sajal k. das toward a parametric approach for modeling local area network performance the task of modeling the performance of a single computer (host) with associated peripheral devices is now well understood [computer 80].
in fact, highly usable tools based on analytical modeling techniques are commercially available and in widespread use throughout the industry. [buzen 78] [buzen 81] [won 81] these tools provide a mechanism for describing computerized environments and the workloads to be placed on them in a highly parameterized manner. this is important because it allows users to describe their computer environments in a structured way that avoids unnecessary complexity. it also is helpful in facilitating intuitive interpretations of modeling results and applying them to capacity planning decisions. a first step toward building a modeling tool and associated network specification language that allows straightforward, inexpensive, and interpretable modeling of multi- computer network performance is to identify the set of characteristics (parameters) that most heavily influence that performance. the result of such a study for the communication aspects of local area networks is the subject of this paper. peter s. mager dynamic queue length thresholds for shared-memory packet switches abhijit k. choudhury ellen l. hahne local and global handovers based on in-band signaling in wireless atm networks this paper presents a handover protocol for wireless atm networks, which makes use of in-band signaling, i.e., of atm resource management cells, to process network handovers and guarantee the in-sequence and loss-free delivery of the atmcells containing user data. the goal of the proposed approach is to minimize the modifications of the atm signaling standard required to overlay user mobility onto the fixed network infrastructure, and provide for a gradual upgrade of the fixed network to handle mobility. the proposed protocol handles both local handovers, in which the connection access point needs not migrate to a new atm local exchange, and global handovers, in which the connection access point must migrate to a new local exchange. the handover scheme is devised so as to grant in-sequence delivery of cells. the performance of the network during handover is analyzed in case of connections requiring loss-free operation. the considered performance figures are the cell transmission delay introduced by the handover and the cell buffering requirements posed to the network. the behavior of the proposed protocol in presence of multiple handovers is studied via simulation, while a simple analytical method is derived for the performance evaluation of a single handover in isolation. m. ajmone marsan a. fumagalli r. lo cigno c. f. chiasserini m. munafó multi-layer tracing of tcp over a reliable wireless link reiner ludwig bela rathonyi almudena konrad kimberly oden anthony joseph on the minimal synchronism needed for distributed consensus reaching agreement is a primitive of distributed computing. whereas this poses no problem in an ideal, failure-free environment, it imposes certain constraints on the capabilities of an actual system: a system is viable only if it permits the existence of consensus protocols tolerant to some number of failures. fischer et al. have shown that in a completely asynchronous model, even one failure cannot be tolerated. in this paper their work is extended: several critical system parameters, including various synchrony conditions, are identified and how varying these affects the number of faults that can be tolerated is examined. the proofs expose general heuristic principles that explain why consensus is possible in certain models but not possible in others. 
danny dolev cynthia dwork larry stockmeyer performance analysis of multipath multistage interconnection networks this paper closely examines the performance analysis for unbuffered multipath multistage interconnection networks. a critical discussion of commonly used analysis is provided to identify a basic flaw in the model. a new analysis based on the grouping of alternate links is proposed as an alternative to rectify the error. the results based on the new analysis and extensive simulation are presented for three representative networks. the simulation study strongly supports the results of the new analysis. s. c. kothari a. jhunjhunwala a. mukherjee evaluation of the lock mechanism in a snooping cache this paper discusses the design concepts of a lock mechanism for a parallel inference machine (the pim/c prototype) and investigates the performance of the mechanism in detail. lock operations are extremely frequent on the pim; however, lock contention rarely occurs during normal memory usage. for this reason, the lock mechanism is designed so as to minimize the lock overhead time in the case of no contention. this is done by using an invalidation lock mechanism, which utilizes the exclusive state of the snooping cache and in which the locked address is not broadcast. experimental results demonstrate the benefits of the lock mechanism in regions of few lock contentions. they also confirm that, in most cases, the lock mechanism works well on the pim. however, the mechanism is also found to cause performance degradation when a locked address is accessed by multiple processing elements (pes) in a tightly-coupled multi- processor (tcmp). this is because shared data such as the flags for inter-pe communication, which are shared by all the pes, may be accessed by multiple pes at the same time, thus generating heavy contention. this paper also shows that combining a register- based broadcasting facility with the proposed lock mechanism can solve the above problem. toshiaki tarui takayuki nakagawa noriyasu ido machiko asaie mamoru sugie description of a planned federal information processing standard for transport protocol the national bureau of standards has developed service and design specifications for transport and session protocols for use in computer system and network procurements. these protocols reside in layers four and five of the international organization for standardization's (iso) reference model for open systems interconnection. this paper describes the services, interfaces, and internal behavior of the transport protocol. the transport (and session) protocol specifications were derived from the most recent developments within iso on these protocols. specific features were selected based on the needs of the agencies of the federal government within the united states, but they are consistent with the needs of any large organization engaged in the procurement or development of networks of heterogeneous computer systems. john f. heafner robert p. blanc a fair protocol for fast resource assignment in wireless pcs networks efficient sharing of communication resources is essential to pcs networks since the wireless bandwidth is limited. the resource auction multiple access (rama) protocol was recently proposed for fast resource assignment and handover in wireless pcs networks. the rama protocol assigns available communication resources (e.g., tdma time slots or frequency channels) to subscribers one at a time using a collision resolution protocol based on subscriber id's. 
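as a reading aid for the auction mechanism described above, the following is a minimal sketch of bitwise id-based contention resolution, in which stations signal their id one bit at a time (most significant bit first) and any station that sent a 0 while the channel carried a 1 backs off; the id width and the assumption that the highest id wins are illustrative and do not reproduce rama's frame structure, timing or dynamic-priority handling.

```python
# Minimal sketch of a bitwise ID auction: in each round the contenders
# signal one bit of their ID (msb first); any station whose bit is 0
# while some other station signalled 1 withdraws.  After all bit
# positions the single survivor (the highest ID) owns the resource.
# This illustrates only the collision-resolution idea, not the real
# RAMA frame structure or its priority extensions.

def id_auction(contender_ids, id_bits=16):
    """Return the winning ID among the contenders (highest ID wins)."""
    survivors = set(contender_ids)
    for bit in reversed(range(id_bits)):          # msb first
        asserted = {i for i in survivors if (i >> bit) & 1}
        if asserted:                              # channel carries a '1'
            survivors = asserted                  # stations that sent '0' back off
    assert len(survivors) == 1
    return survivors.pop()

if __name__ == "__main__":
    print(id_auction([0x2A41, 0x13F7, 0x2A40, 0x0001]))  # -> 0x2A41 (10817)
```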
however, the rama protocol encounters an unfairness problem; furthermore, performance results also indicate that it is inefficient at transmitting fixed-length subscriber id's. moreover, the emerging services such as teleconferencing have been presenting new challenges to dynamic- priority resource assignment. in this paper, we propose a modification to the rama protocol to improve its performance and resolve the unfairness problem. the proposed protocol also adopts dynamic priority assignment to improve the qos for subscribers in overload environments. tsan-pin wang chien- chao tseng shu-yuen hwang on the relevance of long-range dependence in network traffic there is mounting experimental evidence that network traffic processes exhibit ubiquitous properties of self-similarity and long range dependence (lrd), i.e. of correlations over a wide range of time scales. however, there is still considerable debate about how to model such processes and about their impact on network and application performance. in this paper, we argue that much recent modeling work has failed to consider the impact of two important parameters, namely the finite range of time scales of interest in performance evaluation and prediction problems, and the first-order statistics such as the marginal distribution of the process.we introduce and evaluate a model in which these parameters can be easily controlled. specifically, our model is a modulated fluid traffic model in which the correlation function of the fluid rate is asymptotically second-order self-similar with given hurst parameter, then drops to zero at a cutoff time lag. we develop a very efficient numerical procedure to evaluate the performance of the single server queue fed with the above fluid input process. we use this procedure to examine the fluid loss rate for a wide range of marginal distributions, hurst parameters, cutoff lags, and buffer sizes.our main results are as follows. first, we find that the amount of correlation that needs to be taken into account for performance evaluation depends not only on the correlation structure of the source traffic, but also on time scales specific to the system under study. for example, the time scale associated to a queueing system is a function of the maximum buffer size. thus for finite buffer queues, we find that the impact on loss of the correlation in the arrival process becomes nil beyond a time scale we refer to as the correlation horizon. second, we find that loss depends in a crucial way on the marginal distribution of the fluid rate process. third, our results suggest that reducing loss by buffering is hard. we advocate the use of source traffic control and statistical multiplexing instead. m. grossglauser j.-c. bolot summary of the international seminar on parallel processing systems s. r. das some new efficient techniques for the simulation of computer communications networks w. w. larue v. s. frost k. s. shanmugan analysis of a variant hypercube topology each node of a hypercube system, when fabricated, comes with a fixed number of links designed for a maximum sized construction. very often, there are links left unused at each node in a real system. in this article, we study the hypercube in which extra connections are added between pairs of nodes through otherwise unused links. those extra connections are made in order to maximize the improvement of the performance measure of interest under various traffic distributions. 
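the numerical procedure mentioned in the long-range dependence abstract above evaluates a single-server fluid queue fed by a correlated rate process; the sketch below is only the brute-force discrete-time analogue of such a queue, with an ar(1) rate process standing in as a placeholder for the authors' modulated, asymptotically self-similar source. the service rate, buffer sizes and ar(1) parameters are arbitrary illustrations.

```python
import random

# Brute-force discrete-time fluid queue: q <- min(B, max(0, q + a - c)),
# counting as lost whatever overflows the buffer.  The AR(1) rate process
# below is only a placeholder for a correlated fluid source; it is not the
# modulated, asymptotically self-similar model of the paper.

def fluid_loss_rate(rates, service_rate, buffer_size):
    q, lost, offered = 0.0, 0.0, 0.0
    for a in rates:
        offered += a
        q = q + a - service_rate
        if q > buffer_size:
            lost += q - buffer_size
            q = buffer_size
        q = max(q, 0.0)
    return lost / offered if offered else 0.0

def ar1_rates(n, mean=1.0, rho=0.9, sigma=0.3, seed=1):
    random.seed(seed)
    x, out = mean, []
    for _ in range(n):
        x = mean + rho * (x - mean) + random.gauss(0.0, sigma)
        out.append(max(x, 0.0))
    return out

if __name__ == "__main__":
    trace = ar1_rates(200_000)
    for b in (1, 10, 100):
        print(b, fluid_loss_rate(trace, service_rate=1.1, buffer_size=b))
```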
the resulting hypercube, called the variant hypercube, requires a simple routing algorithm and is guaranteed not to create any traffic- congested point or link. the variant hypercube is found to achieve considerable reduction in diameter, and noticeable improvement in mean internode distance and traffic density. in addition, a variant hypercube is more cost-effective than a regular hypercube and does not suffer from practical implementation difficulty. as a result, it also appears advantageous for hypercube systems with no available unused links to augment each node so as to accommodate an extra link, provided that the building block is not pin limited and is allowed to do so. nian-feng tzeng an overview of the center for wireless information network studies at worcester polytechnic institute, ma, usa kaveh pahlavan cell multiplexing in atm networks zvi rosberg architecture and evaluation of a high-speed networking subsystem for distributed-memory systems p. steenkiste m. hemy t. mummert b. zill distributed admission control for power-controlled cellular wireless systems it is well known that power control can help to improve spectrum utilization in cellular wireless systems. however, many existing distributed power control algorithms do not work well without an effective connection admission control (cac) mechanism, because they could diverge and result in dropping existing calls when an infeasible call is admitted. in this work, based on a system parameter defined as the discriminant, we propose two distributed cac algorithms for a power-controlled system. under these cac schemes, an infeasible call is rejected early, and incurs only a small disturbance to existing calls, while a feasible call is admitted and the system converges to the pareto optimal power assignment. simulation results demonstrate the performance of our algorithms. mingbo xiao ness b. shroff edwin k. p. chong coding guidelines for pipelined processors this paper is a tutorial for assembly language programmers of pipelined processors. it describes the general characteristics of pipelined processors and presents a collection of coding guidelines for them. these guidelines are particularly significant to compiler developers who determine object code patterns. james w. rymarczyk tracking long-term growth of the nsfnet kimberly c. claffy hans-werner braun george c. polyzos guest editorial: management of mobility in distributed systems adarshpal s. sethi metin feridun tact: tunable availability and consistency tradeoffs for replicated internet services (poster session) haifeng yu editorial len bass daniel p. siewiorek new call blocking versus handoff blocking in cellular networks in cellular networks, blocking occurs when a base station has no free channel to allocate to a mobile user. one distinguishes between two kinds of blocking, the first is called new call blocking and refers to blocking of new calls, the second is called handoff blocking and refers to blocking of ongoing calls due to the mobility of the users. in this paper, we first provide explicit analytic expressions for the two kinds of blocking probabilities in two asymptotic regimes, i.e., for very slow mobile users and for very fast mobile users, and show the fundamental differences between these blocking probabilities. next, an approximation is introduced in order to capture the system behavior for moderate mobility. 
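the blocking abstract above distinguishes new-call blocking from handoff blocking; as a crude single-cell baseline (not the paper's asymptotic or multi-cell analysis), the classical guard-channel birth-death model below computes both probabilities when some channels are reserved for handoffs. the arrival and service rates are illustrative.

```python
# Single-cell guard-channel model: a cell has `channels` channels, `guard`
# of them reserved for handoff calls.  New calls are blocked once
# channels-guard are busy, handoffs only when all channels are busy.
# This is the classical baseline calculation, not the approximation
# developed in the paper; lam_new, lam_ho and mu are illustrative.

def guard_channel_blocking(channels, guard, lam_new, lam_ho, mu):
    # birth rate in state k (k busy channels)
    birth = [lam_new + lam_ho if k < channels - guard else lam_ho
             for k in range(channels)]
    pi = [1.0]                                 # unnormalised stationary probs
    for k in range(channels):
        pi.append(pi[-1] * birth[k] / ((k + 1) * mu))
    total = sum(pi)
    pi = [p / total for p in pi]
    p_block_new = sum(pi[channels - guard:])   # new call needs < channels-guard busy
    p_block_handoff = pi[channels]             # handoff blocked only if all busy
    return p_block_new, p_block_handoff

if __name__ == "__main__":
    print(guard_channel_blocking(channels=20, guard=2,
                                 lam_new=8.0, lam_ho=4.0, mu=1.0))
```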
the approximation is based on the idea of isolating a set of cells and making a simplifying assumption regarding the handoff traffic into this set of cells, while keeping the exact behavior of the traffic between cells in the set. it is shown that a group of 3 cells is enough to capture the difference between the blocking probabilities of handoff call attempts and new call attempts. moshe sidi david starobinski the heterogeneous structure problem in hardware/software codesign: a macroscopic approach j. a. maestro d. mozos r. hermida isdn architecture: a basis for new services isdn is an acronym for integrated services digital network. isdn embodies world-wide standards that will engender the technology necessary to weave the fabric of a global communications network and provide the foundation for information age services. siemens has a cost-effective isdn introduction strategy that will allow isdn to interwork with existing switching systems, current transmission networks, and nonproprietary customer premises equipment (cpe), as well as to interface with operational and administration systems. this strategy also entails the development of isdn terminals, terminal adapters and service capabilities so that the end user can quickly realize the benefits of isdn. siemens is forming partnerships with bell operating telephone companies, like wisconsin bell telephone co., to perform market research of the end users to better determine their isdn needs and applications. isdn architecture is briefly discussed to provide a basis for some examples of how isdn and network components can be combined to meet customer needs. daryl j. eigen cost reduction in location management using semi-realtime movement information this paper introduces a dynamic paging scheme based on the semi-realtime movement information of an individual user, which allows a more accurate prediction of the user location at the time of paging. in general, a realtime location tracking scheme may require complex control schemes and incur unacceptably high computation and messaging costs. our proposed approach, namely the velocity paging scheme, relaxes the realtime constraints to semi-realtime to provide a good combination of cost reduction and ease of implementation. the proposed velocity paging scheme utilizes semi-realtime velocity information, namely velocity classes, of individual mobile terminals and dynamically calculates a paging zone (a list of cells to be paged) for an incoming call. therefore, the total paging cost can be reduced due to the paging area reduction. much consideration has also been given to reducing the complexity of the proposed scheme. as a result, it only requires minimal extra overhead and is feasible to implement in current cellular/pcs networks. velocity paging can be combined with movement-based registration or other registration schemes. analytical and simulation results of the velocity paging and movement-based registration combination are provided to demonstrate the cost effectiveness of the scheme under various parameters in comparison with the location area scheme. guang wan eric lin multicast support for mobile hosts using mobile ip: design issues and proposed architecture in this paper, we consider the problem of providing multicast to mobile hosts using mobile ip for network routing support. providing multicast in an internetwork with mobile hosts is made difficult because many multicast protocols are inefficient when faced with frequent membership or location changes. 
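the velocity paging abstract above computes a paging zone from a terminal's velocity class and the time since its last location update; the following is a purely geometric sketch of that idea on a square grid of cells, where the cell size, grid layout and velocity classes are assumptions made for illustration rather than parameters from the paper.

```python
import math

# Geometric sketch of a velocity-based paging zone on a square grid of
# cells: page only cells whose centres can be reached at the terminal's
# velocity-class speed within the time elapsed since the last update.
# Cell size, grid layout and velocity classes are illustrative assumptions.

CELL_RADIUS_KM = 1.0
VELOCITY_CLASS_KMH = {"pedestrian": 6, "urban_vehicle": 60, "highway": 120}

def paging_zone(last_cell, velocity_class, seconds_since_update, grid=20):
    cx, cy = last_cell
    reach = VELOCITY_CLASS_KMH[velocity_class] * seconds_since_update / 3600.0
    reach += CELL_RADIUS_KM              # allow for position within the cell
    zone = []
    for x in range(grid):
        for y in range(grid):
            d = math.hypot(x - cx, y - cy) * 2 * CELL_RADIUS_KM
            if d <= reach:
                zone.append((x, y))
    return zone

if __name__ == "__main__":
    zone = paging_zone(last_cell=(10, 10), velocity_class="urban_vehicle",
                       seconds_since_update=120)
    print(len(zone), "cells paged instead of", 20 * 20)
```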
this basic difficulty can be handled in a number of ways, but three main problems emerge with most solutions. the tunnel convergence problem, the duplication problem, and the scoping problem are identified in this paper and a set of solutions are proposed. the paper describes an architecture to support ip multicast for mobile hosts using mobile ip. the basic unicast routing capability of mobile ip is used to serve as the foundation for the design of a multicast service facility for mobile hosts. we believe that our scheme is transparent to higher layers, simple, flexible, robust, scalable, and, to the extent possible, independent of the underlying multicast routing facility. for example, our scheme could interoperate with dvmrp, mospf, cbt, or pim in the current internet. where differences exist between the current version of ip (ipv4) and the next generation protocol (ipv6), these differences and any further optimizations are discussed. vineet chikarmane carey l. williamson richard b. bunt wayne l. mackrell preface: special issue on measurement and modeling of computer systems alan jay smith a petri net reduction algorithm for protocol analysis petri net is a powerful model for analyzing communication protocols because they share many common properties. currently, protocol analysis suffers the state explosion problem especially for error-recoverable protocols and multi- party protocols. protocol synthesis relieves this problem by generating new and complicated protocols from simple subsets of the protocol models. reduction analysis provides theoretical ground for correct synthesis or expansion. thus, reduction is a very important research area. in this paper, we present a general petri net reduction algorithm that reduces the number of states while preserving all desirable and undesirable properties. to the best of our knowledge, this is the first general petri net reduction algorithm for protocol analysis. we first present and extend dong's [don 83] definition of wbms to include more subnets as wbms. to render the reductions automated, a new concept of simple well-behaved modules (swbms) is introduced. recursively performing reductions of swbms, complicated wbms can be reduced. a main program is written to implement this recursive procedure. the problem is then reduced to finding conditions for swbms. we do this by progressing from simpler swbms to more complicated ones, i.e., from single-arc ones to multi- arcs ones. finally, we demonstrate the usefulness of this algorithm by applying it to the state exploration in protocol synthesis. other applications such as error detection, performance evaluation, and software engineering will be discussed in future. c v ramamoorthy y yaw collection servers: an architecture for multicast receiver reporting brad cain thomas hardjono resources section: conferences jay blickstein ibm system/38 support for capability-based addressing the ibm system/38 provides capability-based addressing. this paper describes how support is divided among architectural definition, microcode, and hardware to minimize overhead for this function. merle e. houdek frank g. soltis roy l. hoffman multi-language system design ahmed jerraya rolf ernst a gap theorem for consensus types extended abstract gary l. peterson rida a. bazzi gil neiger contingency planning and disaster recovery the ability to provide uninterruptable access to automated informational resources will be profoundly different in the fifth generation. 
the types of changes that we can expect are already being reflected in the emerging building blocks for the fifth generation. as our dependency on the computerized environment, as well as our need for easy access to informational resources, increases, there is a corresponding increase in tools and mechanisms designed to allow us to quickly and easily recover from many types of natural and man-made disasters. today, we are just beginning to address the need for uninterruptable access. by the fifth generation, we will have solved problems which today appear to be so complex as to be impossible to solve. ken fong performance modeling of asynchronous data transfer methods of ieee 802.11 mac protocol to satisfy the needs of wireless data networking, study group 802.11 was formed under ieee project 802 to recommend an international standard for wireless local area networks (wlans). a key part of the standard is the medium access control (mac) protocol needed to support asynchronous and time-bounded delivery of data frames. it has been proposed that unslotted carrier sense multiple access with collision avoidance (csma/ca) be the basis for the ieee 802.11 wlan mac protocols. we conduct a performance evaluation of the asynchronous data transfer protocols that are part of the proposed ieee 802.11 standard, taking into account the decentralized nature of communication between stations, the possibility of "capture", and the presence of "hidden" stations. we compute system throughput and evaluate fairness properties of the proposed mac protocols. further, the impact of spatial characteristics on the performance of the system and that observed by individual stations is determined. a comprehensive comparison of the access methods provided by the 802.11 mac protocol is done and observations are made as to when each should be employed. extensive numerical and simulation results are presented to help understand the issues involved. harshal s. chhaya sanjay gupta low-level router design and its impact on supercomputer system performance v. puente j. a. gregorio c. izu r. beivide f. vallejo cost-effective traffic grooming in wdm rings ornan gerstel rajiv ramaswami galen h. sasaki editorial the generation and propagation of radio frequency (rf) electromagnetic waves was first demonstrated by heinrich hertz in 1888. a few years later, guglielmo marconi succeeded in transmitting, wirelessly, a radio signal over a long terrestrial distance in 1896 and then across the atlantic ocean in 1901. the number of devices and systems that emit rf radiation has been increasing at an accelerating rate ever since. some have estimated that there were over 80 million users of mobile telephones by the end of 1996. indeed, wireless communication service is sweeping the world and has brought instant, two-way radio communication to many people constantly on the move. the concept of personal communication systems (pcs) aims at providing two-way communication services, speech and data, to individual users, indoors or out. its goal is to establish a mass network for mobile communications and to provide a competitive alternative to the conventional wired public switched telecommunication network. the widespread impact of this new technology has raised concerns about the safety of human exposure to rf energy emitted by these telecommunication devices. 
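the 802.11 evaluation described above accounts for backoff, capture and hidden stations; as a much cruder point of reference, the slotted contention model below simply assumes each of n saturated stations transmits in a slot with probability p and that a slot is useful only when exactly one transmits. it illustrates how contention alone bounds channel efficiency and is not a model of the csma/ca access methods themselves.

```python
# Back-of-envelope slotted contention model: each of n saturated stations
# transmits in a slot independently with probability p; a slot carries a
# frame only when exactly one station transmits.  This ignores the real
# CSMA/CA backoff, carrier sensing, capture and hidden stations studied
# in the paper; it only shows how contention limits channel use.

def success_probability(n, p):
    return n * p * (1.0 - p) ** (n - 1)

def best_p(n, steps=10_000):
    candidates = ((k / steps, success_probability(n, k / steps))
                  for k in range(1, steps))
    return max(candidates, key=lambda t: t[1])

if __name__ == "__main__":
    for n in (2, 5, 10, 50):
        p, s = best_p(n)
        print(f"n={n:3d}  best p={p:.4f}  useful-slot fraction={s:.3f}")
```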
clearly, we need a better understanding of the biological effects of rf electromagnetic fields so that we can safeguard against possible harm to the general population and enhance its beneficial uses. within the last few years there has been a resurgence of research effort to achieve a better and more quantitative understanding of the relationships between the biological effects of rf radiation and the physical variables that may cause them. some results are beginning to appear in the literature. this special issue is intended to provide an overview of the current status of our scientific understanding and to present recent advances coming from various research laboratories. paolo bernardi james c. lin lossless handover for wireless atm handover is one of the key research topics for the emerging wireless atm networks. this paper describes a handover mechanism for intra-switch handovers for wireless atm. the handover procedure is simple enough to be implementable as a limited enhancement to atm switch platforms for the fixed network, yet provides low delay and lossless handover when used together with a suitable radio interface. the paper also reports on initial simulation results. håkan mitts harri hansen jukka immonen simo veikkolainen transport protocol processing at gbps rates this paper proposes an architecture for accomplishing transport protocol processing at gbps rates. the limitations of currently used transport protocols have been analyzed extensively in recent literature. several benchmark studies have established the achievable throughput of iso tp4 and tcp to be in the low mbps range; several new protocols and implementation techniques have been proposed to achieve 100 mbps and higher throughput rates. we briefly review some of these protocols and establish the need for a radically different approach to meet our objective. an estimate of the aggregate processing power required for gbps throughput is developed. it is proposed that a cost-effective and practical solution to the processing requirements could be based on a multi-processor system. the opportunities for parallel processing in a typical transport protocol are examined. several alternate parallel processing approaches are examined and arguments are advanced for selecting a favored approach. a corresponding parallel processing architecture is described. data structures used to preserve packet ordering and techniques for reducing contention in a multi-processing environment are discussed. an implementation methodology for conventional transport protocols (e.g., tp4) is outlined. some suggestions are made for improving efficiency by making modifications to the protocol that do not compromise functionality. the performance achievable with this modified architecture is analyzed and some suggestions for further work are presented. n. jain m. schwartz t. bashkow efficient spmd constructs for asynchronous message passing architectures aloke majumdar marina c. chen arpanet performance tuning techniques as part of its operation and maintenance of the arpanet for the past twelve years, bbn has been asked to investigate a number of cases of degradation in network performance. this presentation discusses the practical methods and tools used to uncover and correct the causes of these service problems. a basic iterative method of hypothesis generation, experimental data gathering, and analysis is described. 
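the gbps transport abstract above develops an estimate of the aggregate processing power needed for gigabit rates; the back-of-envelope arithmetic below shows the form such an estimate takes (packets per second times an assumed per-packet instruction budget), with the instruction counts and packet size chosen only for illustration, not taken from the paper.

```python
# Rough sizing arithmetic: at a given line rate and packet size, how many
# packets arrive per second, and how many MIPS does protocol processing
# need if each packet costs a fixed instruction budget?  The per-packet
# instruction counts below are illustrative assumptions.

def required_mips(line_rate_bps, packet_bytes, instrs_per_packet):
    packets_per_s = line_rate_bps / (packet_bytes * 8)
    return packets_per_s * instrs_per_packet / 1e6

if __name__ == "__main__":
    for instrs in (300, 3000, 10000):      # lightweight .. heavyweight stacks
        mips = required_mips(1e9, packet_bytes=512, instrs_per_packet=instrs)
        print(f"{instrs:5d} instr/packet -> {mips:8.0f} MIPS at 1 Gb/s, 512-byte packets")
```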
emphasis is placed on the need for experienced network analysts to direct the performance investigation and for the availability of network programmers to provide special purpose modifications to the network node software in order to probe the causes of the traffic patterns under observation. many typical sources of performance problems are described, a detailed list of the tools used by the analyst are given, and a list of basic techniques provided. throughout the presentation specific examples from actual arpanet performance studies are used to illustrate the points made. james g. herman a design methodology for mobile distributed applications based on unity formalism and communication-closed layering camelia zlatea tzilla elrad data composability in myriad nets (invited talk): de-layering in billion node mobile networks h. shrikumar ergonomics of wearable computers wearable computers represent a new and exciting area for technology development, with a host of issues relating to display, power and processing still to be resolved. wearable computers also present a new challenge to the field of ergonomics; not only is the technology distinct, but the manner in which the technology is to be used and the relationship between user and computer have changed in a dramatic fashion. in this paper, we concentrate on some traditional ergonomics concerns and examine how these issues can be addressed in the light of wearable computers. chris baber james knight d. haniff l. cooper parallel implementation of a frontal finite element solver on multiple platforms james d. callahan john m. tyler security risks in computer-communication systems peter neumann open issues and challenges in providing quality of service guarantees in high- speed networks jim kurose adaptive recovery for mobile environments nuno neves w. kent fuchs trading packet headers for packet processing in high speed networks, packet processing is relatively expensive while bandwidth is cheap. thus it pays to add information to packet headers to make packet processing easier. while this is an old idea, we describe several specific new mechanisms based on this principle. we describe a new technique, _source hashing_, which can provide _o_(1) lookup costs at the data link, routing, and transport layers. source hashing is especially powerful when combined with the old idea of a _flow id_; the flow identifier allows packet processing information to be cached, and source hashing allows efficient cache lookups. unlike virtual circuit identifiers (vcis), source hashing does not require a round trip delay for set up. in an experiment with the bsd packet filter implementation, we found that adding a flow id and a source hash improved packet processing costs by a factor of 7. we also found a 45% improvement when we conducted a similar experiment with ip packet forwarding. we also describe two other new techniques: _threaded indices_, which allows fast vci-like lookups for datagram protocols like ip; and a _data manipulation layer_, which compiles out all the information needed for integrated layer processing into an easily accessible portion of each packet. girish p. chandranmenon george varghese amp: a highly parallel atomic multicast protocol this paper deals with the problem of reliable group communication for distributed applications, in the context of the reliable broadcast class of protocols. an atomic multicast protocol for token passing lans is presented. 
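the "trading packet headers" abstract above combines a flow identifier with a sender-chosen source hash so that per-packet state can be found with a single o(1) probe; the toy lookup table below illustrates that combination. the header fields, hash width and cached state are assumptions, and this is not the bsd packet filter experiment reported in the paper.

```python
# Toy illustration of combining a flow id with a sender-chosen hash so
# that per-packet state lookup is a single O(1) probe.  The header layout,
# hash width and cached state are illustrative only.

HASH_BITS = 10

class FlowCache:
    def __init__(self):
        self.table = [None] * (1 << HASH_BITS)   # direct-indexed by source hash

    def install(self, src_hash, flow_id, state):
        # the sender chose src_hash; the receiver verifies flow_id on hit
        self.table[src_hash] = (flow_id, state)

    def lookup(self, src_hash, flow_id):
        entry = self.table[src_hash]
        if entry is not None and entry[0] == flow_id:
            return entry[1]                      # fast path: cached processing state
        return None                              # slow path: full header processing

if __name__ == "__main__":
    cache = FlowCache()
    cache.install(src_hash=0x2a7, flow_id=42, state={"next_hop": "if3", "seq": 0})
    print(cache.lookup(0x2a7, 42))
    print(cache.lookup(0x2a7, 43))               # foreign/stale flow -> slow path
```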
the actual implementation is on an 8802/4 token-bus, although it is applicable to 8802/5 token-rings and the fddi fibre-optic network. the simplicity and efficiency of reliable broadcast protocols may be considerably improved, if the system fault model is restricted or convenient architectures are used. fail-controlled communication components are used here to build an efficient reliable multicast protocol on top of the exposed mac interface of a vlsi lan controller. the architecture is built on standard lans, in view of taking advantage of the availability of communications hardware and the possibility of coexistence with standard stations, in the same network. the service offered allows transparent multicasting inside logical groups, which are dynamically created and updated. the primitive is highly parallel and provides atomic agreement and consistent delivery order, respecting logical precedence. these features are an important contribution for the implementation of high performance distributed computing systems. p. veríssimo l. rodrigues m. baptista on power-law relationships of the internet topology despite the apparent randomness of the internet, we discover some surprisingly simple power-laws of the internet topology. these power-laws hold for three snapshots of the internet, between november 1997 and december 1998, despite a 45% growth of its size during that period. we show that our power-laws fit the real data very well resulting in correlation coefficients of 96% or higher.our observations provide a novel perspective of the structure of the internet. the power-laws describe concisely skewed distributions of graph properties such as the node outdegree. in addition, these power-laws can be used to estimate important parameters such as the average neighborhood size, and facilitate the design and the performance analysis of protocols. furthermore, we can use them to generate and select realistic topologies for simulation purposes. michalis faloutsos petros faloutsos christos faloutsos the simulation of a risc processor with n.mpc charles a. baxley frederick a. zapka user-space protocols deliver high performance to applications on a low-cost gb/s lan two important questions in high-speed networking are firstly, how to provide gbit/s networking at low cost and secondly, how to provide a flexible low- level network interface so that applications can control their data from the instant it arrives. we describe some work that addresses both of these questions. the jetstream gbit/s lan is an experimental, low-cost network interface that provides the services required by delay-sensitive traffic as well as meeting the performance needs of current applications. jetstream is a combination of traditional shared- medium lan technology and more recent atm cell- and switch-based technology. jetstream frames contain a channel identifier so that the network driver can immediately associate an incoming frame with its application. we have developed such a driver that enables applications to control how their data should be managed without the need to first move the data into the application's address space. consequently, applications can elect to read just a part of a frame and then instruct the driver to move the remainder directly to its destination. individual channels can elect to receive frames that have failed their crc, while applications can specify frame-drop policies on a per- channel basis. 
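the power-law abstract above reports straight-line fits of degree frequency against degree on log-log axes with correlation coefficients of 96% or higher; the sketch below performs that kind of fit by ordinary least squares on a synthetic heavy-tailed degree list, which stands in for the real internet snapshots studied in the paper.

```python
import math
from collections import Counter

# Least-squares fit of log(frequency) against log(degree): if the degree
# distribution follows a power law, the points fall on a straight line and
# the slope estimates the exponent.  The degree list below is a synthetic
# placeholder, not one of the internet snapshots from the paper.

def powerlaw_fit(degrees):
    counts = Counter(d for d in degrees if d > 0)
    xs = [math.log(d) for d in counts]
    ys = [math.log(c) for c in counts.values()]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    r = sxy / math.sqrt(sxx * syy)      # correlation of the log-log regression
    return slope, r

if __name__ == "__main__":
    # synthetic heavy tail: frequency of degree d roughly proportional to d**-2.2
    degrees = [d for d in range(1, 61) for _ in range(int(10000 * d ** -2.2) or 1)]
    print(powerlaw_fit(degrees))
```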
measured results show that both kernel- and user-space protocols can achieve very good throughput: applications using both tcp and our own reliable byte- stream protocol have demonstrated throughputs in excess of 200 mbit/s. the benefits of running protocols in user-space are well known- the drawback has often been a severe penalty in the performance achieved. in this paper we show that it is possible to have the best of both worlds. aled edwards greg watson john lumley david banks costas calamvokis c. dalton instruction set selection for asip design michael gschwind floor acquisition multiple access (fama) for packet-radio networks a family of medium access control protocols for single-channel packet radio networks is specified and analyzed. these protocols are based on a new channel access discipline called floor acquisition multiple access (fama), which consists of both carrier sensing and a collision-avoidance dialogue between a source and the intended receiver of a packet. control of the channel (the floor) is assigned to at most one station in the network at any given time, and this station is guaranteed to be able to transmit one or more data packets to different destinations with no collision with transmissions from other stations. the minimum length needed in control packets to acquire the floor is specified as a function of the channel propagation time. the medium access collision avoidance (maca) protocol proposed by karn and variants of csma based on collision avoidance are shown to be variants of fama protocols when control packets last long enough compared to the channel propagation delay. the throughput of fama protocols is analyzed and compared with the throughput of non-persistent csma. this analysis shows that using carrier sensing as an integral part of the floor acquisition strategy provides the benefits of maca in the presence of hidden terminals, and can provide a throughput comparable to, or better than, that of non-persistent csma when no hidden terminals exist. chane l. fullmer j. j. garcia-luna-aceves ip unwired m. scott corson report on the panel: "how can computer architecture researchers avoid becoming the society for irreproducible results?" trevor mudge analyzing the performance of message passing mimd hypercubes: a study with the intel ipsc/860 jukka helin rudolf berrendorf misleading performance in the supercomputing field d. h. bailey an energy consumption model for performance analysis of routing protocols for mobile ad hoc networks a mobile ad hoc network (or manet) is a group of mobile, wireless nodes which cooperatively form a network independent of any fixed infrastructure or centralized administration. in particular, a manet has no base stations: a node communicates directly with nodes within wireless range and indirectly with all other nodes using a dynamically-computed, multi- hop route via the other nodes of the manet. simulation and experimental results are combined to show that energy and bandwidth are substantively different metrics and that resource utilization in manet routing protocols is not fully addressed by bandwidth-centric analysis. this report presents a model for evaluating the energy consumption behavior of a mobile ad hoc network. the model was used to examine the energy consumption of two well- known manet routing protocols. energy-aware performance analysis is shown to provide new insights into costly protocol behaviors and suggests opportunities for improvement at the protocol and link layers. 
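the energy-consumption abstract above argues that energy and bandwidth rank protocol behaviours differently; a common way to make that concrete is a linear per-packet cost model of the form cost = m * size + b, with separate coefficients for sending, receiving and discarding a packet. the coefficients and traffic mixes below are placeholders, not the measured values from the report.

```python
# Linear per-packet energy model: cost = m * size + b, with separate
# coefficients for sending, receiving and discarding a packet.  The
# coefficients are illustrative placeholders; the point is only that
# overheard and dropped packets cost energy even though they consume no
# bandwidth at the receiving node.

COEFF = {                       # (m in uJ/byte, b in uJ), illustrative only
    "send":    (1.9, 450.0),
    "recv":    (0.5, 350.0),
    "discard": (0.0, 100.0),    # e.g. a promiscuously overheard packet dropped
}

def packet_energy(op, size_bytes):
    m, b = COEFF[op]
    return m * size_bytes + b           # micro-joules

def trace_energy(events):
    """events: iterable of (op, size_bytes) tuples observed at one node."""
    return sum(packet_energy(op, size) for op, size in events)

if __name__ == "__main__":
    overhear_and_drop = [("recv", 64), ("discard", 64)] * 20   # flooding overhead
    forward_data = [("recv", 512), ("send", 512)] * 10
    print("flooding overhead:", trace_energy(overhear_and_drop), "uJ")
    print("data forwarding  :", trace_energy(forward_data), "uJ")
```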
laura marie feeney single instruction stream parallelism is greater than two michael butler tse- yu yeh yale patt mitch alsup hunter scales michael shebanow performance analysis of the cm-2, a massively parallel simd computer the performance evaluation process for a massively parallel distributed memory simd computer is described generally. the performance in basic computation, grid communication, and computation with grid communication is analyzed. a practical performance evaluation and analysis study is done for the connection machine 2 and conclusions about its performance are drawn. jukka helin the wakeup problem in synchronous broadcast systems (extended abstract) this paper studies the differences between two levels of synchronization in a distributed broadcast system (or a multiple access channel). in the globally synchronous model, all processors have access to a global clock. in the locally synchronous model, processors have local clocks ticking at the same rate, but each clock starts individually, when the processor wakes up. we consider the fundamental problem of waking up all of n processors of a completely connected broadcast system. some processors wake up spontaneously, while others have to be woken up. only wake processors can send messages; a sleeping processor is woken up upon hearing a message. the processors hear a message in a given round if and only if exactly one processor sends a message in that round. our goal is to wake up all processors as fast as possible in the worst case, assuming an adversary controls which processors wake up and when. we analyze the problem in both the globally synchronous and locally synchronous models, with or without the assumption that n is known to the processors. we propose randomized and deterministic algorithms for the problem, as well as lower bounds in some of the cases. these bounds establish a gap between the globally synchronous and locally synchronous models. leszek gasieniec andrzej pelc david peleg performance modelling of the orwell basic access mechanism orwell is a high speed slotted ring. its protocol uses destination release of the slots. because of this the carried load can be much larger than the transmission rate. a new analytical model of the orwell basic access mechanism is presented in this paper. the model shows to be accurate and usable over a wide range of parameters. the performance analysis of the orwell basic access mechanism is presented. m. zafirovic i. g. niemegeers an overview of motorola's powerpc simulator family william anderson pulsed battery discharge in communication devices c. f. chiasserini r. r. rao supercomputers: challenges to designers and users session overview: the objective of this panel is to present and discuss issues concerned with the development and use of present and future supercomputers for large-scale scientific and engineering problems. presentations by each of four panelists will be followed by a question and answer session with audience participation. below some brief background information is given on the subject matter along with a list of several topics that will be addressed. also included is a bibliography to help the reader become acquainted with the many aspects of the session theme. myron ginsberg performance objectives - how to define them shyam johari performance analysis of a feedback congestion control policy under non- negligible propagation delay y. t. wang b. 
sengupta energy-conserving access protocols for identification networks imrich chlamtac chiara petrioli jason redi a comparative study of fuzzy versus "fixed" thresholds for robust queue management in cell-switching networks allen r. bonde sumit ghosh the dash prototype: implementation and performance the fundamental premise behind the dash project is that it is feasible to build large-scale shared-memory multiprocessors with hardware cache coherence. while paper studies and software simulators are useful for understanding many high-level design trade-offs, prototypes are essential to ensure that no critical details are overlooked. a prototype provides convincing evidence of the feasibility of the design, allows one to accurately estimate both the hardware and complexity costs of various features, and provides a platform for studying real workloads. a 16-processor prototype of the dash multiprocessor has been operational for the last six months. in this paper, the hardware overhead of directory-based cache coherence in the prototype is examined. we also discuss the performance of the system, and the speedups obtained by parallel applications running on the prototype. using a sophisticated hardware performance monitor, we characterize the effectiveness of coherent caches and the relationship between an application's reference behavior and its speedup. daniel lenoski james laudon truman joe david nakahira luis stevens anoop gupta john hennessy sizing exit buffers in atm networks: an intriguing coexistence of instability and tiny cell loss rates hanoch levy tzippi mendelson moshe sidi joseph keren-zvi vertical handoffs in wireless overlay networks no single wireless network technology simultaneously provides a low latency, high bandwidth, wide area data service to a large number of mobile users. wireless overlay networks -- a hierarchical structure of room-size, building-size, and wide area data networks -- solve the problem of providing network connectivity to a large number of mobile users in an efficient and scalable way. the specific topology of cells and the wide variety of network technologies that comprise wireless overlay networks present new problems that have not been encountered in previous cellular handoff systems. we have implemented a vertical handoff system that allows users to roam between cells in wireless overlay networks. our goal is to provide a user with the best possible connectivity for as long as possible with a minimum of disruption during handoff. results of our initial implementation show that the handoff latency is bounded by the discovery time, the amount of time before the mobile host discovers that it has moved into or out of a new wireless overlay. this discovery time is measured in seconds: large enough to disrupt reliable transport protocols such as tcp and introduce significant disruptions in continuous multimedia transmission. to efficiently support applications that cannot tolerate these disruptions, we present enhancements to the basic scheme that significantly reduce the discovery time without assuming any knowledge about specific channel characteristics. for handoffs between room-size and building-size overlays, these enhancements lead to a best-case handoff latency of approximately 170 ms with a 1.5% overhead in terms of network resources. for handoffs between building-size and wide-area data networks, the best-case handoff latency is approximately 800 ms with a similarly low overhead. mark stemm randy h. 
katz achieving bounded fairness for multicast and tcp traffic in the internet there is an urgent need for effective multicast congestion control algorithms which enable reasonably fair share of network resources between multicast and unicast tcp traffic under the current internet infrastructure. in this paper, we propose a quantitative definition of a type of bounded fairness between multicast and unicast best-effort traffic, termed "essentially fair". we also propose a window-based random listening algorithm (rla) for multicast congestion control. the algorithm is proven to be essentially fair to tcp connections under a restricted topology with equal round-trip times and with phase effects eliminated. the algorithm is also fair to multiple multicast sessions. this paper provides the theoretical proofs and some simulation results to demonstrate that the rla achieves good performance under various network topologies. these include the performance of a generalization of the rla algorithm for topologies with different round-trip times. huayan amy wang mischa schwartz pruning algorithms for multicast flow control multicast flow (congestion) control establishes a data rate for a multicast session based on prevailing bandwidth availability given other network traffic. this rate is dictated by the sender's path to the slowest receiver. this level of performance, however, may not be in the best interest for the multicast group as a whole. a pruning algorithm is then used to identify and remove some slow members so that the performance is acceptable for the whole group. this paper discusses the conceptual issues with pruning, and proposes practical algorithms for pruning. the crux of the problem is to achieve a balance between speed and accuracy, because increased accuracy tends to require monitoring for a longer time and using more global information. we evaluate and compare different strategies using both simulation and measurement of real implementations. dah ming chiu miriam kadansky joe provino joseph wesley haifeng zhu hardware support for interprocess communication in recent years there has been increasing interest in message-based operating systems, particularly in distributed environments. such systems consist of a small message-passing kernel supporting a collection of system server processes that provide such services as resource management, file service, and global communications. for such an architecture to be practical, it is essential that basic messages be fast, since they often replace what would be a simple procedure call or "kernel call" in a more traditional system. careful study of several operating systems shows that the limiting factor, especially for small messages, is typically not network bandwidth but processing overhead. therefore, we propose using a special- purpose coprocessor to support message passing. our research has two parts: first, we partitioned an actual message-based operating system into communication and computation parts interacting through shared queues and measured its performance on a multiprocessor. second, we designed hardware support in the form of a special- purpose smart bus and smart shared memory and demonstrated the benefits of these components through analytical modeling using generalized timed petri nets. our analysis shows good agreement with the experimental results and indicates that substantial benefits may be obtained from both the partitioning of the software and the addition of a small amount of special-purpose hardware. u. 
ramachandran m. solomon m. vernon improving the accuracy of dynamic branch prediction using branch correlation shien-tai pan kimming so joseph t. rahmeh an extended x.400 architectural model m medina instruction translation for an experimental s/390 processor the ibm s/390 architecture is a complex architecture, which has grown over a long period of time. typical implementations use microcode to cope with the more complex instructions and facilities of s/390. current ibm s/390 processors even contain two levels of microcode.we report on an experimental s/390 processor based on a risc processor kernel employing superscalar, out of order execution of instructions. s/390 instructions have to be translated into internal sequences of risc instructions. actually two closely coupled internal sequences - one for register based execution and one for storage based execution are generated. the translation is a straight-forward mapping in most cases with some flexibility for special instructions.the paper introduces the hardware mechanisms used for mapping s/390 instructions to internal sequences. the facilities, which provide a greater degree of flexibility are discussed. the interactions of the low-level mapping scheme with the microcode levels is examined. finally we discuss our experiences with this type of implementation of a cisc architecture on a risc processor kernel. rolf hilgendorf wolfram sauer load-sensitive routing of long-lived ip flows internet service providers face a daunting challenge in provisioning network resources, due to the rapid growth of the internet and wide fluctuations in the underlying traffic patterns. the ability of dynamic routing to circumvent congested links and improve application performance makes it a valuable traffic engineering tool. however, deployment of load-sensitive routing is hampered by the overheads imposed by link-state update propagation, path selection, and signaling. under reasonable protocol and computational overheads, traditional approaches to load-sensitive routing of ip traffic are ineffective, and can introduce significant route flapping, since paths are selected based on out-of-date link-state information. although stability is improved by performing load- sensitive routing at the flow level, flapping still occurs, because most ip flows have a short duration relative to the desired frequency of link-state updates. to address the efficiency and stability challenges of load-sensitive routing, we introduce a new hybrid approach that performs dynamic routing of long-lived flows, while forwarding short-lived flows on static preprovisioned paths. by relating the detection of long-lived flows to the timescale of link- state update messages in the routing protocol, route stability is considerably improved. through simulation experiments using a one-week isp packet trace, we show that our hybrid approach significantly outperforms traditional static and dynamic routing schemes, by reacting to fluctuations in network load without introducing route flapping. anees shaikh jennifer rexford kang g. shin local error recovery in srm: comparison of two approaches ching-gung liu deborah estrin scott shenker lixia zhang can tcp be the transport protocol of the 21st century? kostas pentikousis simulating fail-stop in asynchronous distributed systems laura sabel keith marzullo design of an atm switch for handoff support in a wireless atm system, a network must provide seamless services to mobile users. 
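the load-sensitive routing abstract above routes only long-lived flows dynamically, with the trigger tied to the timescale of link-state updates; the sketch below shows one way such a trigger could be kept per flow, with the promotion threshold, idle timeout and flow-key format chosen as illustrative assumptions rather than the paper's parameters.

```python
# Sketch of the hybrid idea: keep every new flow on its preprovisioned
# static path, and promote a flow to load-sensitive (dynamically routed)
# handling only once it has stayed active longer than a threshold tied to
# the link-state update period.  All constants are illustrative.

LINK_STATE_PERIOD_S = 30.0
PROMOTE_AFTER_S = 2 * LINK_STATE_PERIOD_S   # only flows outliving stale state
IDLE_TIMEOUT_S = 15.0

class FlowTable:
    def __init__(self):
        self.flows = {}                      # flow key -> [first_seen, last_seen]

    def packet(self, key, now):
        """Record a packet arrival; return True if the flow is 'long-lived'."""
        rec = self.flows.get(key)
        if rec is None or now - rec[1] > IDLE_TIMEOUT_S:
            rec = [now, now]                 # new flow, or an old one that idled out
            self.flows[key] = rec
        rec[1] = now
        return now - rec[0] >= PROMOTE_AFTER_S

if __name__ == "__main__":
    table = FlowTable()
    flow = ("10.0.0.1", "10.0.0.2", 6, 1234, 80)   # hypothetical 5-tuple
    for t in (0, 20, 45, 70):
        print(t, "dynamic" if table.packet(flow, t) else "static")
```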
to support this, a mobility function should be added to existing atm networks. through a handoff operation, a mobile user can receive a service from the network without disconnecting the communication. a handoff results in connection path rerouting during an active connection. to avoid cell loss during a handoff, cell buffering and rerouting are required in the network. a handoff switch is a connection breakdown point on an original connection path in the network from which a new connection sub-path is established. it performs cell buffering and rerouting during a handoff. cell buffering and rerouting can introduce a cell out-of-sequence problem. in this paper we propose a handoff switch architecture with a shared memory. the architecture performs cell buffering and rerouting efficiently by managing logical queues of virtual connections in the shared memory and sorting head-of-line cells for transmission, thus achieving in-sequence cell delivery during a handoff. we also present simulation results to understand the impact of handoffs on switch performance. heechang kim h. jonathan chao data allocation in a distributed mobile environment anton donchev resources section: web sites michele tepper network performance reporting managing networks using network administration centers is increasingly being considered. after introducing the information demands of operational, tactical, and strategic network management, the paper investigates the applicability of tools and techniques for these areas. network monitors and software problem determination tools are investigated in greater detail. also, implementation details for a multihost-multinode network, including software and hardware tools combined using sas, are discussed. k. terplan low copy message passing on the alliant campus/800 c. m. burns r. h. kuhn e. j. werme forum diane crawford design and implementation of a prototype optical deflection network we describe the design and implementation of a packet-switched fiber optic interconnect prototype with a shufflenet topology, intended for use in shared-memory multiprocessors. coupled with existing latency-hiding mechanisms, it can reduce latency to remote memory locations. nodes use deflection routing to resolve contention. each node contains a processor, memory, photonic switch, and packet routing processor. the payload remains in optical form from source to final destination. each host processor is a commercial workstation with fifo interfaces between its bus and the photonic switch. a global clock is distributed optically to each node to minimize skew. component costs and network performance figures are presented for various node configurations including bit-per-wavelength and fiber-parallel packet formats. our efforts to implement and test a practical interconnect, including real host computers, distinguish our work from previous theoretical and experimental work. we summarize obstacles we encountered and discuss future work. john feehrer jon sauer lars ramfelt simulation of high-q oscillators m. gourary s. ulyanov m. zharov s. rusakov ds/cdma m-ary orthogonal signalling in a shadowed rayleigh channel for mobile communications in this paper, a modified rake receiver is studied for a frequency selective mobile radio channel. the reverse link (mobile to base station) is analysed, assuming lognormal shadowing and rayleigh fading and k asynchronous users, with m orthogonal sequences per user. 
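the handoff-switch abstract above manages logical per-connection queues in a shared memory and sorts head-of-line cells to keep delivery in sequence; the sketch below mimics that bookkeeping with a shared cell budget and a per-vc heap ordered by sequence number. the class layout and field names are illustrative and do not reproduce the proposed hardware architecture.

```python
import heapq

# Per-VC logical queues inside one shared buffer.  Cells that arrive out of
# order during a handoff (e.g. via the old and the new sub-path) are kept in
# a heap per connection and transmitted lowest-sequence-first, so delivery
# stays in sequence.  This is only an illustrative data-structure sketch.

class SharedMemorySwitch:
    def __init__(self, capacity_cells):
        self.capacity = capacity_cells
        self.used = 0
        self.queues = {}                       # vc id -> heap of (seq, payload)

    def enqueue(self, vc, seq, payload):
        if self.used >= self.capacity:
            return False                       # shared buffer full: cell dropped
        heapq.heappush(self.queues.setdefault(vc, []), (seq, payload))
        self.used += 1
        return True

    def transmit(self, vc):
        """Send the head-of-line cell of one connection (lowest sequence first)."""
        q = self.queues.get(vc)
        if not q:
            return None
        self.used -= 1
        return heapq.heappop(q)

if __name__ == "__main__":
    sw = SharedMemorySwitch(capacity_cells=1024)
    for seq in (3, 1, 2, 4):                   # cells arriving out of order
        sw.enqueue(vc=7, seq=seq, payload=b"cell")
    print([sw.transmit(7)[0] for _ in range(4)])   # -> [1, 2, 3, 4]
```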
the analysis is based on the consideration of the quadrature components of the signal and noise, taking advantage of the multipath effects. the performance evaluation is carried out in terms of both the bit error rate and outage probability in order to qualify completely the proposed receiver. the positive results assure the possibility of applying this system in a microcellular mobile radio environment. panagiotis i. dallas fotini-niovi pavlidou performance characteristics of two ethernets: an experimental study local computer networks are increasing in popularity for the interconnection of computers for a variety of applications. one such network that has been implemented on a large scale is the ethernet. this paper describes an experimental performance evaluation of a 3 and a 10 mb/s ethernet. the effects of varying packet length and transmission speed on throughput, mean delay and delay distribution are quantified. the protocols are seen to be fair and stable. these measurements span the range from the region of high performance of the csma/cd protocol to the upper limits of its utility where performance is degraded. the measurements are compared to the predictions of existing analytical models. the correlation is found to range from good to poor, with more sophisticated models yielding better results than a simple one. timothy a. gonsalves performance evaluation of a decoded instruction cache for variable instruction-length computers a decoded instruction cache (dinc) serves as a buffer between the instruction decoder and the other instruction- pipeline stages. in this paper we explain how techniques that reduce the branch penalty based on such a cache, can improve cpu performance. we analyze the impact of some of the design parameters of dincs on variable instruction- length computers, e.g., cisc machines. our study indicates that tuning the mapping function of the instructions into the cache, can improve the performance substantially. this tuning must be based on the instruction length distribution for a specific architecture. in addition, the associativity degree has a greater effect on the dinc's performance, than on the performance of regular caches. we also discuss the difference between the performance of dincs and other caches, when longer cache lines are used. the results presented were obtained by both analytical study and trace-driven simulations of several integer unix applications. gideon intrater ilan spillinger measurement and modeling of computer reliability as affected by system activity this paper demonstrates a practical approach to the study of the failure behavior of computer systems. particular attention is devoted to the analysis of permanent failures. a number of important techniques, which may have general applicability in both failure and workload analysis, are brought together in this presentation. these include: smeared averaging of the workload data, clustering of like failures, and joint analysis of workload and failures. approximately 17 percent of all failures affecting the cpu were estimated to be permanent. the manifestation of a permanent failure was found to be strongly correlated with the level and type of workload prior to the failure. although, in strict terms, the results only relate to the manifestation of permanent failures and not to their occurrence, there are strong indications that permanent failures are both caused and discovered by increased activity. 
more measurements and experiments are necessary to determine their respective contributions to the measured workload/failure relationship. r. k. iyer d. j. rossetti m. c. hsueh instruction scheduling and executable editing eric schnarr james r. larus implications of hierarchical n-body methods for multiprocessor architectures to design effective large-scale multiprocessors, designers need to understand the characteristics of the applications that will use the machines. application characteristics of particular interest include the amount of communication relative to computation, the structure of the communication, and the local cache and memory requirements, as well as how these characteristics scale with larger problems and machines. one important class of applications is based on hierarchical n-body methods, which are used to solve a wide range of scientific and engineering problems efficiently. important characteristics of these methods include the nonuniform and dynamically changing nature of the domains to which they are applied, and their use of long-range, irregular communication. this article examines the key architectural implications of representative applications that use the two dominant hierarchical n-body methods: the barnes-hut method and the fast multipole method. we first show that exploiting temporal locality on accesses to communicated data is critical to obtaining good performance on these applications and then argue that coherent caches on shared-address-space machines exploit this locality both automatically and very effectively. next, we examine the implications of scaling the applications to run on larger machines. we use scaling methods that reflect the concerns of the application scientist and find that this leads to different conclusions about how communication traffic and local cache and memory usage scale than scaling based only on data set size. in particular, we show that under the most realistic form of scaling, both the communication-to-computation ratio as well as the working-set size (and hence the ideal cache size per processor) grow slowly as larger problems are run on larger machines. finally, we examine the effects of using the two dominant abstractions for interprocessor communication: a shared address space and explicit message passing between private address spaces. we show that the lack of an efficiently supported shared address space will substantially increase the programming complexity and performance overheads for these applications. jaswinder pal singh john l. hennessy anoop gupta personal work-stations at the university of waterloo students and faculty at the university of waterloo are using the ibm personal computer as a powerful workstation to develop programs, and do their applications. the pcs are connected using two different network technologies. in one case, the pcs are connected into a micro-mainframe network called waterloo pc network, which allows both files and software to be located in a centralized file system accessed from any pc. since the language and applications software is portable and runs on both the mainframe or micro the user may choose to run an application on the computer that is appropriate for the size of the job. in the other case, the pcs are connected to another pc which acts as a file and print server. in both cases, the workstations do not have any local file storage. the talk will describe the design goals for pc networks, their design, current applications, and future plans. 
sandra ward the dcm: a hardware extensible architecture robert edward boring business: the 8th layer: computing power to the people kate gerwig aquarius a. despain y. patt v. srini p. bitar w. bush c. chien w. citrin b. fagin w. hwu s. melvin r. mcgeer a. singhal m. shebanow p. van roy se-osi: a prototype support environment for open systems interconnection owen newnan personal technology architecture espen andersen router level filtering for receiver interest delivery delivering data to on-line game participants requires the game data to be "customized" in real-time to each participant's characteristics. using multicast in such an environment might sound contradictory. but multicast is a very efficient communication paradigm to minimize the transmission delays. also, multicast reduces the workload at the sender. content delivery according to receiver interest can be achieved by group management in multicast. but the natural dynamics of the application results in numerous delays because of join/leave latencies. in this paper, we propose the router level filtering as a solution to the above problem. rlf relies on an extension to the current ip multicast service model. it introduces "filters" in the router forwarding process thereby providing a simple effective mechanism to customize the data delivered to a multicast session receiver while minimizing the number of groups and the related management cost. contrary to other router filtering proposals, the filter semantics is determined by the application. the paper discusses protocol specification and implementation details of rlf, and shows how it may be implemented in routers. manuel oliveira jon crowcroft christophe diot projecting the growth of cellular communications the tremendous success of cellular technology has fundamentally changed the way people communicate and prompted the evolution of a new multibillion dollar wireless communications industry. linking service areas, wireless communications has altered the way business is conducted. for instance, with a laptop computer, a pcmcia modem and a cellular phone, a real estate agent can contact his or her office and clients, check sales listings and arrange appointments while traveling. field service and sales people can, from customer locations, access corporate databases to check inventory status, prepare up-to-the-minute price and delivery quotes, and cut orders directly to the factory. two-way paging services allow a firm's workforce to stay in close contact, even when traditional wired communication services are not available. hand-held hybrids of phone-computer-fax machines feed information to wireless communication networks, allowing an executive to make decisions while watching a little league baseball game. michael wang william j. kettinger using events for the scalable federation of heterogeneous components john bates jean bacon ken moody mark spiteri on the working set concept for data-flow machines this paper discusses the concept of the working set for data-flow machines in order to establish one of the criteria for the realization of cost effective data-flow machines. the characteristics of program execution in conventional machines and data-flow machines are compared. then, a definition of the working set for data-flow machines is proposed, based on the simultaneity of execution and the principle of locality. several segmentation, fetch, and removal policies are described. evaluation is made in terms of feasibility, efficiency, and performance, through computer simulations. 
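the router level filtering (rlf) proposal above attaches application-defined filters to the multicast forwarding step so that one group can serve receivers with different interests. the sketch below is a purely hypothetical illustration of that idea; the class, packet fields, and filter predicate are invented here for illustration and are not taken from the rlf specification or implementation.

```python
# hypothetical rlf-style forwarding: one multicast group, per-interface filters
# whose semantics are chosen by the application (here, a game region id).

class Packet:
    def __init__(self, group, region, payload):
        self.group, self.region, self.payload = group, region, payload

class RlfRouter:
    def __init__(self):
        # interface -> (group, filter predicate), installed when receivers join
        self.filters = {}

    def install_filter(self, interface, group, predicate):
        self.filters[interface] = (group, predicate)

    def forward(self, pkt):
        """replicate the packet only onto interfaces whose filter accepts it."""
        return [iface for iface, (grp, pred) in self.filters.items()
                if grp == pkt.group and pred(pkt)]

router = RlfRouter()
router.install_filter("eth1", "239.1.1.1", lambda p: p.region == 7)
router.install_filter("eth2", "239.1.1.1", lambda p: p.region in (7, 8))
print(router.forward(Packet("239.1.1.1", 8, b"state-update")))  # ['eth2']
```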
mario tokoro j. r. jagannathan hideki sunahara the cm-5 connection machine: a scalable supercomputer w. daniel hillis lewis w. tucker anonymous credit cards and their collusion analysis steven h. low nicholas f. maxemchuk sanjoy paul array abstractions using semantic analysis of trapezoid congruences with the growing use of vector supercomputers, efficient and accurate data structure analyses are needed. what we propose in this paper is to use the quite general framework of cousot's abstract interpretation for the particular analysis of multi-dimensional array indexes. since such indexes are integer tuples, a relational integer analysis is first required. this analysis results from a combination of existing analyses, one interval based and one congruence based. two orthogonal problems are directly concerned with the results of such an analysis: parallelization/vectorization through dependence analysis, and the data locality problem that arises in array storage management. after introducing the analysis algorithm, this paper describes, on a complete example, how to use it in order to optimize array storage. françois masdupuy algorithmic problems in internet research (abstract) george varghese composable ad hoc location-based services for heterogeneous mobile clients todd d. hodes randy h. katz reliable broadband communication using a burst erasure correcting code traditionally, a transport protocol corrects errors in a computer communication network using a simple arq protocol. with the arrival of broadband networks, forward error correction is desirable as a complement to arq. this paper describes a simplified reed-solomon erasure correction coder architecture, adapted for congestion loss in a broadband network. simulations predict it can both encode and decode at rates up to 1 gigabit per second in a custom 1 micron cmos vlsi chip. a. j. mcauley genetic list scheduling algorithm for scheduling and allocation on a loosely coupled heterogeneous multiprocessor system martin grajcar visualizing packet traces this paper describes an environment for visualizing packet traces that greatly simplifies troubleshooting protocol implementations. network management centers routinely collect packet traces to tally traffic statistics and to troubleshoot protocol configuration and implementation problems. previous efforts have focused on the reliable collection of traces and their statistical interpretation. display of packet traces was restricted to a textual representation of the raw headers. our prototype environment interprets the trace as a whole. it identifies conversations across protocol layers, simulates the services offered by lower layers, and hides the lower layer implementation detail from the representation of higher layer conversations. our prototype offers over a dozen different types of diagrams for showing protocol interactions and uses linked highlighting to show the relationships between the objects in different diagrams. john a. zinky fredric m. white 7-layered (small mental exercise) ivan ryant processor queueing disciplines in distributed systems a distributed program consists of processes, many of which can execute concurrently on different processors in a distributed system of processors. when several processes from the same or different distributed programs have been assigned to a processor in a distributed system, the processor must select the next process to run. the following two questions are investigated: what is an appropriate method for selecting the next process to run?
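the array-index analysis described above ("array abstractions using semantic analysis of trapezoid congruences") combines an interval domain with a congruence domain inside cousot-style abstract interpretation. the sketch below is a minimal, non-relational illustration of that combination for a single integer index (the trapezoid/relational part of the analysis is well beyond this sketch); the class and operation names are ours, not the paper's.

```python
import math
from dataclasses import dataclass

@dataclass
class IndexAbstraction:
    """abstract value for one integer array index: lo <= i <= hi and i ≡ rem (mod mod)."""
    lo: int
    hi: int
    mod: int   # congruence modulus (stride); 1 means no stride information
    rem: int

    def join(self, other):
        """least upper bound: widen the interval, keep the common congruence
        (granger-style: gcd of both moduli and of the remainder difference)."""
        mod = math.gcd(math.gcd(self.mod, other.mod), abs(self.rem - other.rem)) or 1
        return IndexAbstraction(min(self.lo, other.lo), max(self.hi, other.hi),
                                mod, self.rem % mod)

    def add_const(self, c):
        """abstract transfer function for i := i + c."""
        return IndexAbstraction(self.lo + c, self.hi + c, self.mod,
                                (self.rem + c) % self.mod)

# indexes produced by "for i = 0 to 96 step 4": interval [0, 96] and i ≡ 0 (mod 4)
a = IndexAbstraction(0, 96, 4, 0)
print(a.add_const(2))                          # [2, 98], i ≡ 2 (mod 4)
print(a.join(IndexAbstraction(1, 9, 4, 1)))    # [0, 96], stride information lost
```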
under what conditions are substantial gains in performance achieved by an appropriate method of selection? standard processor queueing disciplines, such as first-come-first-serve and round-robin-fixed-quantum, are studied. the results for four classes of queueing disciplines tested on three problems are presented. these problems were run on a testbed, consisting of a compiler and simulator used to run distributed programs on user-specified architectures. elizabeth williams reprint from computing reviews michael c. loui errata for "measured capacity of an ethernet: myths and reality" david r. boggs jeffrey c. mogul christopher a. kent quantifying loop nest locality using spec'95 and the perfect benchmarks this article analyzes and quantifies the locality characteristics of numerical loop nests in order to suggest future directions for architecture and software cache optimizations. since most programs spend the majority of their time in nests, the vast majority of cache optimization techniques target loop nests. in contrast, the locality characteristics that drive these optimizations are usually collected across the entire application rather than at the nest level. researchers have studied numerical codes for so long that a number of commonly held assertions have emerged on their locality characteristics. in light of these assertions, we use the spec'95 and perfect benchmarks to take a new look at measuring locality on numerical codes based on references, loop nests, and program locality properties. our results show that several popular assertions are at best overstatements. for example, although most reuse is within a loop nest, in line with popular assertions, most misses are internest capacity misses, and they correspond to potential reuse between nearby loop nests. in addition, we find that temporal and spatial reuse have balanced roles within a loop nest and that most reuse across nests and the entire program is temporal. these results are consistent with high hit rates (80% or more hits), but go against the commonly held assumption that spatial reuse dominates. our locality measurements reveal important differences between loop nests and programs, refute some popular assertions, and provide new insights for the compiler writer and the architect. kathryn s. mckinley olivier temam omp: a risc-based multiprocessor using orthogonal-access memories and multiple spanning buses this paper presents the architectural design and risc based implementation of a prototype supercomputer, namely the orthogonal multiprocessor (omp). the omp system is constructed with 16 intel i860 risc microprocessors and 256 parallel memory modules, which are 2-d interleaved and orthogonally accessed using custom-designed spanning buses. the architectural design has been validated by a csim-based multiprocessor simulator. the design choices are based on worst-case delay analysis and simulation validation. the current omp prototype chooses a 2-dimensional memory architecture, mainly for image processing, computer vision, and neural network simulation applications. the 16-processor omp prototype is targeted to achieve a peak performance of 400 risc integer mips or a maximum of 640 mflops. this paper presents the architectural design of the omp prototype at system and pc board levels. we are presently entering the fabrication stage of all the pc boards. the system is expected to become operational in late 1991 and benchmarking results will be available in 1992. only hardware design features are reported here.
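the omp peak figures quoted above reduce to simple per-node arithmetic; the sketch below just makes that division explicit, assuming (as peak numbers normally do) that all 16 processors contribute equally.

```python
# per-node arithmetic implied by the omp prototype's quoted peak figures.
processors, memory_modules = 16, 256
peak_mips, peak_mflops = 400, 640

print(f"{peak_mips / processors:.0f} integer mips per processor")       # 25
print(f"{peak_mflops / processors:.0f} mflops per processor")           # 40
# 256 orthogonally accessed modules shared by 16 processors suggests a
# 16 x 16 module array (an inference consistent with, not stated by, the abstract).
print(f"{memory_modules // processors} memory modules per row/column")  # 16
```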
software and simulation results are reported elsewhere. k. hwang m. dubois d. k. panda s. rao s. shang a. uresin w. mao h. nair m. lytwyn f. hsieh j. liu s. mehrotra c. m. cheng a study of a priority protocol for pc based local area networks r. signorile j. latourrette m. fleisch networking: introduction mark allman design principles and performance analysis of sscop: a new atm adaptation layer protocol the service specific connection oriented protocol (sscop) has been approved recently as a new b-isdn atm adaptation layer (aal) protocol standard, initially for use in the signaling atm adaptation layer (saal), but also for support of certain types of user data transfer. sscop is a new type of protocol, embodying several design principles for high speed link and transport layer protocols. in this paper, the basic operation of sscop is described and the sscop design is compared with other similar protocols. next, the relationships between key protocol parameters (protocol window, control message transmission interval) and maximum achievable throughput efficiency are explored. in particular, approximate performance equations are derived for predicting the maximum throughput efficiency of sscop based on the selected environment and parameter settings. the equations can be used to determine how much buffer capacity and/or what protocol timer settings allow sscop to operate at high performance. the analytical results are confirmed through comparison with simulation results. in addition, simulation results illustrate the high throughput performance achievable when using sscop in a highly errored or lossy environment. thomas r. henderson wass recent years have witnessed rapid growth in the demand for wireless networks, especially networks that can carry all types of service. wireless atm is one that fulfils those requirements. various architectures have been proposed depending on the intended application domain. one of the key components is the security protocol. this paper proposes security services for the wireless atm network. danai patiyoot s. j. shepherd on the comparison between single and multiple processor systems we study the comparison between an m-processor (multiprocessor) system and a single-processor system whose processor is m times as fast as any in the multiprocessor system. the expected superiority of the single-processor system is measured in terms of mean and maximum flow times, using both combinatorial and probabilistic models. e. g. coffman kimming so the importance of being square we present a theory that defines performance of packet-switching interconnection networks (delay and capacity) and their cost in terms of their geometry. this is used to prove that square banyan networks have optimal performance/cost ratio. these results, together with some known results on the complexity of routing in multistage networks, show that multistage shuffle-exchange networks are the unique networks with both optimal performance and simple routing. finally, square delta networks are shown to have optimal area complexity. clyde p. kruskal marc snir a copy network with shared buffers for large-scale multicast atm switching wen de zhong jaidev kaniyil y. onozato modeling and simulation of medium-access-protocols in local area networks ilan katz a parallel pipelined data flow coprocessor a parallel pipelined data flow coprocessor has been developed for the 68000 based commodore amiga workstation.
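the sscop abstract above derives approximate equations linking the protocol window to maximum achievable throughput efficiency. the sketch below is not those equations; it is the generic sliding-window bound that any such analysis starts from: a sender with w octets of window outstanding over a path with round-trip time rtt cannot exceed the smaller of the link rate and w/rtt. the window size and link rate used are illustrative assumptions.

```python
def window_limited_efficiency(window_bytes, rtt_s, link_rate_bps):
    """generic sliding-window bound: efficiency = min(1, (w * 8 / rtt) / link_rate)."""
    window_limited_bps = window_bytes * 8 / rtt_s
    return min(1.0, window_limited_bps / link_rate_bps)

# illustrative assumption: a 64 kbyte window on a 155 mb/s atm link.
for rtt_ms in (1, 10, 100):
    eff = window_limited_efficiency(64 * 1024, rtt_ms / 1000, 155e6)
    print(f"rtt {rtt_ms:3d} ms -> at most {eff:.0%} of the link")
```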
the coprocessor, based on nippon electric corporation's μpd7281 image pipelined processor (impp), was designed as an algorithm processor for numerically intensive applications such as image processing, image synthesis, and numerical analysis. the coprocessor can accommodate up to seven of the 5-mips impps, providing over 30 mips of processing power to dedicated tasks. the impps act as true coprocessors, providing a multi-processor arrangement consisting of the 68000 and the coprocessor's parallel pipeline. the workstation and parallel processors share all system resources including memory and peripherals. the coprocessor has its own dma capabilities for transferring program code and data between the workstation and the pipeline. the coprocessor is completely programmable via a development environment running native on the amiga, including an assembler, a simulator, and a runtime library. additional programming tools are also being developed. j. t. canning r. miner personal imaging and lookpainting as tools for personal documentary and investigative photojournalism a means and apparatus for covert capture of extremely high-resolution photorealistic images is presented. the apparatus embodies a new form of user interface -- instead of the traditional "point and click" metaphor which was thought to be the simplest photography had to offer, what is proposed is a "look" metaphor in which images are generated through the natural process of looking around, in a manner that does not require conscious thought or effort. these "lookpaintings" become photographic/videographic memories that may, at times, exceed the quality attainable with even large and cumbersome professional photographic film cameras, yet they may be captured through a device that resembles ordinary sunglasses. the method is based on long-term psychophysical adaptation using a covert sunglass-based reality-mediating apparatus, together with two new results in image processing. the first new result is a means of estimating the true projective coordinate transformation between successive pairs of images, and the second is that of estimating, to within a single unknown scalar constant, the quantity of light arriving at the image plane. furthermore, what is captured is more than just a picture. the resulting environment map may be explored by one or more remote participants who may also correspond and interact with the wearer during the actual shooting process, giving rise to computer supported collaborative (collective) photography, videography, shared photographic/videographic memory, etc. steve mann politics as usual dennis fowler stability of binary exponential backoff binary exponential backoff is a randomized protocol for regulating transmissions on a multiple-access broadcast channel. ethernet, a local-area network, is built upon this protocol. the fundamental theoretical issue is stability: does the backlog of packets awaiting transmission remain bounded in time, provided the rates of new packet arrivals are small enough? it is assumed n ≥ 2 stations share the channel, each having an infinite buffer where packets accumulate while the station attempts to transmit the first from the buffer. here, it is established that binary exponential backoff is stable if the sum of the arrival rates is sufficiently small. detailed results are obtained on which rates lead to stability when n = 2 stations share the channel. in passing, several other results are derived bearing on the efficiency of the conflict resolution process.
simulation results are reported that, in particular, indicate alternative retransmission protocols can significantly improve performance. jonathan goodman albert g. greenberg neal madras peter march professional workstations (panel session) andries van dam the panel will examine the evolution of the professional workstation from a time shared terminal to powerful graphics-based personal computer connected to a resource-sharing local network. the panelists will speculate on the future evolution of both the hardware/software architecture and end user environment. workstation development at stanford james h. clark workstation development at stanford is closely linked with graphics, distributed systems and networking. two distinct systems have evolved over the last 3 years: the sun system and the iris system. the sun is a low-cost system based upon an efficient mc68000 processor/memory design and a relatively low- performance, high-resolution bit- map display. the iris (integrated raster imaging system) is based upon the same mc68000 design, but the graphics part of the system is a modest- to low- cost, high-performance, high-resolution, color or black and white system that uses the geometry engine and several other custom ic parts. both sun and iris interface to the stanford ethernet network, and both are being used for distributed systems research, vlsi design stations, and graphics research. in addition, iris will probably be used for mechanical cad research by the mechanical engineering department and in situations where high-performance graphics is important. improvement goals for workstation facilities robert m. dunn three major areas of improvement are needed: interaction techniques that are simpler, provide faster reaction, are useful for higher-level inputs, and have user- style-oriented alternatives. the second area is for image rendering based on local capabilities, in varying degrees of image quality as a function of desired "grade of service". the third area is to provide support to incremental model construction and approximate design evaluation. there must be evaluation techniques that can work on partial models and give one approximate results. implications of these criteria for workstation architecture will be discussed. the application of network workstations to large-scale engineering harvey kriloff the development of computerized analysis procedures during the last twenty years has been largely oriented toward providing increased analytic function. this has meant that considerations of user access to specific capabilities or ease of use of these mammoth programs are only now becoming user concerns. the emerging new technologies for display, input and their interaction, when applied to the professional workstation, will be playing an increasingly important role in satisfying these concerns. a professional workstation can be used both to improve the efficiency by which data is collected for an existing analysis program and to assist the user in the preparation of data for a formal presentation or report. this performance improvement can be accomplished through the development of user-adapted "macro procedures" for data entry, the execution of processes to check program input data for accuracy and consistency, and workstation assistance in the training of both new and existing users. 
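the rule analyzed in "stability of binary exponential backoff" above doubles the contention window after each collision. a minimal sketch of that rule, as ethernet-style csma/cd applies it, is given below; the slot time and the 10/16 retry limits follow the familiar ethernet convention and are assumptions of this sketch, not part of the stability analysis, which considers an unbounded protocol.

```python
import random

SLOT_TIME_S = 51.2e-6   # classic 10 mb/s ethernet slot time (assumed here)

def backoff_delay(collisions, rng=random):
    """binary exponential backoff: after the k-th collision wait a uniformly
    chosen number of slots in [0, 2**min(k, 10) - 1]; give up after 16 attempts."""
    if collisions > 16:
        raise RuntimeError("excessive collisions, frame dropped")
    k = min(collisions, 10)
    return rng.randrange(2 ** k) * SLOT_TIME_S

random.seed(1)
print([round(backoff_delay(c) / SLOT_TIME_S) for c in range(1, 6)])  # slots waited
```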
at the output stage of analysis, the workstation can be used to select and reformat information prepared by the analysis program, explore the interrelationship of results from several different analyses, and derive the data for a succeeding analysis in an iterative design mode. at boeing computer services, we have been exploring the benefits realized by the development of such a workstation when applied to the field of structural engineering. an interaction-rich workstation with a human-engineered executive that controls and integrates the user interface, connected to our national network of ibm, cdc and cray computers with a broad variety of engineering software, supplies the engineer with a full range of tools for analysis, data preparation, report writing and data relationship exploration that are only beginning to be appreciated. in the next few years it is expected that this workstation (distributed) methodology for doing engineering and other quantitative professional functions will radically change the way these analytical processes are performed. personal workstations in a local network david nelson for many applications, personal workstations provide a superior form of computing compared to timesharing. the requirements of workstations, however, go well beyond the facilities provided by the local computer; they must include the equivalent advantages of timesharing such as user-to-user communications, shared programs and data files, shared peripherals, etc. the preferred way to implement these functions is to interconnect all workstations by means of a high-speed local network, controlled by a distributed operating system which provides transparent access to all network resources through a network-wide virtual memory system. additional workstation functions such as large virtual address space and a concurrent multiple-window display system significantly increase user productivity. a high resolution bit-map display system with hardware support for dynamics is an essential component. for the future we look towards both significant cost reductions and improvement in performance, as well as significantly higher level software to better implement the user-computer interface. andries van dam james h. clark robert m. dunn harvey kriloff david nelson delegating remote operation execution in a mobile computing environment remote operation execution is nowadays the most popular paradigm used to build distributed systems and applications. this success originates in the simplicity exhibited by programming along the client-server paradigm. unfortunately, connectivity and bandwidth restrictions defy the unchanged porting of these well-known mechanisms to the mobile computing field. in this paper we present an approach that allows applications to be developed which are tailored to the specific requirements of mobile computing, while retaining the simple and well understood remote execution paradigm. the approach provides the additional benefit that established services could easily be used from mobile platforms. the cornerstone of our approach is integrated linguistic support for dynamically delegating the execution and control of remote procedure calls (rpc) to a delegate located on the fixed part of the network. besides presenting the language constructs, we discuss the extensions to the rpc-based development process and the necessary run-time support.
dietmar a. kottmann ralph wittmann markus posur a graph-oriented mapping strategy for a hypercube the mapping problem is the problem of implementing a computational task on a target architecture in order to maximize some performance metric. for a hypercube-interconnected multiprocessor, the mapping problem arises when the topology of a task graph is different from a hypercube. it is desirable to find a mapping of tasks to processors that minimizes average path length and hence interprocessor communication. the problem of finding an optimal mapping, however, has been proven to be np-complete. several different approaches have been taken to discover suitable mappings for a variety of target architectures. since the mapping problem is np-complete, approximation algorithms are used to find good mappings instead of optimal ones. usually, greedy and/or local search algorithms are introduced to approximate the optimal solutions. this paper presents a greedy mapping algorithm for hypercube interconnection structures, which utilizes the graph-oriented mapping strategy to map a communication graph to a hypercube. the strategy is compared to previous strategies for attacking the mapping problem. a simulation is performed to estimate both the worst-case bounds for the greedy mapping strategy and the average performance. w-k. chen e. f. gehringer transputer systems for the macintosh corporate levco improving ap1000 parallel computer performance with message communication the performance of message-passing applications depends on cpu speed, communication throughput and latency, and message handling overhead. in this paper we investigate the effect of varying these parameters and applying techniques to reduce message handling overhead on the execution efficiency of ten different applications. using a message level simulator set up for the architecture of the ap1000, we showed that improving communication performance, especially message handling, improves total performance. if a cpu that is 32 times faster is provided, the total performance increases by less than ten times unless message handling overhead is reduced. overlapping computation with message reception improves performance significantly. we also discuss how to improve the ap1000 architecture. takeshi horie kenichi hayashi toshiyuki shimizu hiroaki ishihata prefetch unit for vector operations on scalar computers (abstract) current caches are not adequate for vector operations. a new kind of support for vector operations, called prefetch unit, is designed to improve the performance of the scalar (sisd) processors.
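in the hypercube mapping abstract above, the cost a greedy mapping tries to minimize is the average routing distance between communicating tasks, and on a hypercube the distance between two node labels is simply the hamming distance of their binary addresses. the sketch below evaluates that objective for a given placement; it illustrates the cost function only, not the paper's greedy algorithm.

```python
def hypercube_distance(a, b):
    """hop count between hypercube nodes = hamming distance of their labels."""
    return bin(a ^ b).count("1")

def average_path_length(edges, placement):
    """mean hypercube distance over the communication graph's edges, where
    placement maps each task to a hypercube node label."""
    dists = [hypercube_distance(placement[u], placement[v]) for u, v in edges]
    return sum(dists) / len(dists)

# a 4-task ring mapped onto a 2-cube: a gray-code placement keeps every
# communicating pair adjacent, while an arbitrary placement does not.
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(average_path_length(ring, {0: 0b00, 1: 0b01, 2: 0b11, 3: 0b10}))  # 1.0
print(average_path_length(ring, {0: 0b00, 1: 0b11, 2: 0b01, 3: 0b10}))  # 1.5
```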
the prefetch unit can be used for any sisd architecture and also for many kinds of mimd architectures. it may run in parallel and asynchronously with other parts of the processor. it keeps track of the history of memory references and rarely initiates superfluous prefetches. ivan sklenar the jitter time-stamp approach for clock recovery of real-time variable bit-rate traffic when multimedia streams arrive at the receiver, their temporal relationships may be distorted due to jitter. assuming the media stream is packetized, the jitter is then the packet's arrival time deviation from its expected arrival time. there are various ways to reduce jitter, which include synchronization at the application layer, or synchronization at the asynchronous transfer mode (atm) adaptation layer (aal). the new source rate recovery scheme called jitter time-stamp (jts) provides synchronization at the atm adaptation layer 2 (aal2) which is used to carry variable bit-rate traffic such as compressed voice and video. jts is implemented, and experiments have shown that it is able to recover the source rate. weilian su ian f. akyildiz a fast and flexible performance simulator for micro-architecture trade-off analysis on ultrasparc-i marc tremblay guillermo maturana atsushi inoue les kohn data caches for superscalar processors toni juan juan j. navarro olivier temam distributed testing and measurement across the atlantic packet satellite network (satnet) the analysis of the test and measurement of tcp/ip performance over the atlantic packet satellite network (satnet) is described. both the methodology and tools as well as the results and their analysis are discussed. because of the internetwork nature of the environment, the tests were designed to allow the satnet measurement taskforce to look at the effects of each component of the end-to-end path, e.g., local networks, gateways, satnet, and protocol layers. results are given for the ip service provided by satnet as a function of offered load and for tcp behavior as a function of offered load and the underlying ip service. k. seo j. crowcroft p. spilling j. laws j. leddy a quantitative analysis of cache policies for scalable network file systems current network file system protocols rely heavily on a central server to coordinate file activity among client workstations. this central server can become a bottleneck that limits scalability for environments with large numbers of clients. in central server systems such as nfs and afs, all client writes, cache misses, and coherence messages are handled by the server. to keep up with this workload, expensive server machines are needed, configured with high-performance cpus, memory systems, and i/o channels. since the server stores all data, it must be physically capable of connecting to many disks. this reliance on a central server also makes current systems inappropriate for wide area network use where the network bandwidth to the server may be limited. in this paper, we investigate the quantitative performance effect of moving as many of the server responsibilities as possible to client workstations to reduce the need for high-performance server machines. we have devised a cache protocol in which all data reside on clients and all data transfers proceed directly from client to client. the server is used only to coordinate these data transfers. this protocol is being incorporated as part of our experimental file system, xfs.
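the jitter time-stamp (jts) abstract above defines jitter as a packet's deviation of arrival time from its expected arrival time. the sketch below computes that per-packet deviation and a smoothed interarrival-jitter estimate; the smoothing is the familiar rtp-style estimator and is only an illustration of the general idea, not the jts algorithm for aal2.

```python
def per_packet_jitter(arrivals, period):
    """deviation of each arrival from the ideal periodic schedule anchored at
    the first packet: d_i = arrival_i - (arrival_0 + i * period)."""
    t0 = arrivals[0]
    return [t - (t0 + i * period) for i, t in enumerate(arrivals)]

def smoothed_jitter(arrivals, period):
    """rtp-style running estimate: j += (|d| - j) / 16 over successive
    interarrival deviations d = (a_i - a_{i-1}) - period."""
    j = 0.0
    for prev, cur in zip(arrivals, arrivals[1:]):
        j += (abs((cur - prev) - period) - j) / 16
    return j

arrivals_ms = [0.0, 20.4, 39.8, 61.0, 80.1]     # nominal 20 ms voice packets
print(per_packet_jitter(arrivals_ms, 20.0))     # ~[0.0, 0.4, -0.2, 1.0, 0.1]
print(round(smoothed_jitter(arrivals_ms, 20.0), 3))
```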
we present results from a trace-driven simulation study of the protocol using traces from a 237-client nfs installation. we find that the xfs protocol reduces server load by more than a factor of six compared to afs without significantly affecting response time or file availability. michael d. dahlin clifford j. mather randolph y. wang thomas e. anderson david a. patterson meta-level architectures for component-based mobile computing we present an approach to mobile-aware application development that integrates mobile-awareness into existing distributed component designs. our proposed mobile components encapsulate a context of distributed components that collaborate to fulfill a set of published application functions. the approach is based on a set of meta-level architectures for mobility. first, we present a reflective layered-adaptation model derived from a proposed set of dynamic adaptation architectures. then, we present a set of mobility-enhanced component services and corresponding object-oriented architectures for their realization. the approach is demonstrated with the development of a video delivery component. we show how mobile awareness can be integrated into an existing component-based design based on our proposed mobility-enhanced corba-based services. arnel i. periquet eric c. lin network transparency: the planet approach inder gopal roch guerin the performance analysis of partitioned circuit switched multistage interconnection networks nathaniel j. davis howard jay siegel measuring web performance in the wide area paul barford mark crovella the last mile: making the broadband connection dennis fowler high speed networking at cray research andy nicholson joe golio david a. borman jeff young wayne roiger delivering qos requirements to traffic with diverse delay tolerances in a tdma environment jeffrey m. capone ioannis stavrakakis uniform theory of the shuffle-exchange type permutation networks this paper presents a uniform theory for describing shuffle-exchange type permutation networks - the theory of ek stages. the use of this new approach is demonstrated by applying it to the flip, omega, and other incomplete permutation networks and to some complete networks (e.g., the beneš network) as well. in particular, the paper deals with the p-, 2p-, and (2p-1)-stage networks, where p = log2 n and n is the number of inputs (or outputs) of these networks. františek soviš optimal reactive k-stabilization: the case of mutual exclusion joffroy beauquier christophe genolini shay kutten 48-bit absolute internet and ethernet host numbers xerox internets and ethernet local computer networks use 48-bit absolute host numbers. this is a radical departure from practices currently in use in internetwork systems and local networks. this paper describes how the host numbering scheme was designed in the context of an overall internetwork and distributed systems architecture. yogen k. dalal robert s. printis disc: dynamic instruction stream computer mario daniel nemirovsky forrest brewer roger c. wood delay-optimal quorum-based mutual exclusion for distributed systems guohong cao mukesh singhal naphtali rishe the computational speed of supercomputers problems related to the evaluation of computational speeds of supercomputers are discussed. measurements of sequential speeds, vector speeds, and asynchronous parallel processing speeds are presented.
a simple model is developed that allows us to evaluate the workload-dependent effective speed of current systems such as vector computers and asynchronous parallel processing systems. results indicate that the effective speed of a supercomputer is severely limited by its slowest processing mode unless the fraction of the workload that has to be processed in this mode is negligibly small. ingrid y. bucher building a high-performance communication layer over virtual interface architecture on linux clusters the virtual interface architecture (via) is an industry standard user-level communication architecture for cluster or system area networks. the via provides a protected, directly-accessible interface to the network hardware, removing the operating system from the critical communication path. although the via enables low-latency high-bandwidth communication, the application programming interface defined in the via specification lacks many high-level features. in this paper, we develop a high performance communication layer over via, named sovia (sockets over via). our goal is to make the sovia layer as efficient as possible so that the performance of native via can be delivered to the application, while retaining the portable sockets semantics. we find that the single-threaded implementation with conditional sender-side buffering is effective in reducing latency. to increase bandwidth, we implement a flow control mechanism similar to tcp's sliding window protocol. our flow control mechanism is enhanced further by adding delayed acknowledgments and the ability to combine small messages. with these optimizations, sovia realizes comparable performance to native via, showing a latency of 10.5 μsec for 4-byte messages and a peak bandwidth of 814 mbps on giganet's clan. the functional validity of sovia is verified by porting an ftp (file transfer protocol) application to sovia. jin-soo kim kangho kim sung-in jung lan based distributed database management system architecture (laddbms) (abstract only) the integration of the heterogeneous independent databases within the office should not adversely impact the local autonomy, reliability or availability of data. it is thus necessary to allow for both fragmented and replicated data. most query processing strategies either do not handle multiple copies or the processing of fragments, or restrict them to the processing of disjoint fragments. these approaches are thus unsatisfactory for the office environment. the query processing algorithms in a distributed environment make use of the global directory schema; the global schema directory placement and maintenance is in itself a distributed database problem. we present an architecture of laddbms that is capable of handling both fragmented and replicated data and does not require the availability of a global directory. the query processing approach is based upon the self-identification of a 'best' node. a query is decomposed into disjunctive subqueries by the originator node and the subqueries are broadcast. each node examines if it is well placed to respond and if so takes on the task of the subquery evaluation. the evaluation is measured for performance and the node updates its query answering capabilities based on this evaluation. t. s. narayanan random oracles in constantinople: practical asynchronous byzantine agreement using cryptography (extended abstract) byzantine agreement requires a set of parties in a distributed system to agree on a value even if some parties are corrupted.
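the supercomputer-speed abstract above ("the computational speed of supercomputers") observes that effective speed is dominated by the slowest processing mode unless its workload fraction is negligible; that is the familiar workload-weighted harmonic mean (amdahl-style) relation sketched below. the rates used are illustrative assumptions, not measurements from the paper.

```python
def effective_speed(fractions_and_rates):
    """workload-weighted harmonic mean: s_eff = 1 / sum(f_i / s_i)."""
    return 1.0 / sum(f / s for f, s in fractions_and_rates)

# illustrative rates only: 90% of the work vectorizes at 200 mflops,
# 10% runs in scalar mode at 10 mflops.
print(effective_speed([(0.9, 200.0), (0.1, 10.0)]))    # ~69 mflops
# shrinking the scalar fraction to 1% more than doubles the effective speed.
print(effective_speed([(0.99, 200.0), (0.01, 10.0)]))  # ~168 mflops
```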
a new protocol for byzantine agreement in a completely asynchronous network is presented that makes use of cryptography, specifically of threshold signatures and coin-tossing protocols. these cryptographic protocols have practical and provably secure implementations in the "random oracle" model. in particular, a coin-tossing protocol based on the diffie-hellman problem is presented and analyzed. the resulting asynchronous byzantine agreement protocol is both practical and nearly matches the known theoretical lower bounds. more precisely, it tolerates the maximum number of corrupted parties, runs in constant expected time, has message and communication complexity close to the maximum, and uses a trusted dealer only in a setup phase, after which it can process a virtually unlimited number of transactions. novel dual-threshold variants of both cryptographic protocols are used. the protocol is formulated as a transaction processing service in a cryptographic security model, which differs from the standard information-theoretic formalization and may be of independent interest. christian cachin klaus kursawe victor shoup editorial ken korman accelerating telnet performance in wireless networks barron housel ian shields characterization of alpha axp performance using tp and spec workloads z. cvetanovic d. bhandarkar wavevideo - an integrated approach to adaptive wireless video the transmission of wireless video in acceptable quality is only possible by following an end-to-end approach. wavevideo is an integrated, adaptive video coding architecture designed for heterogeneous wireless networks. it includes basic video compression algorithms based on wavelet transformations, efficient channel coding, a filter architecture for receiver-based media scaling, and error-control methods to adapt video transmissions to the wireless environment. using a joint source/channel coding approach, wavevideo offers a high degree of error tolerance on noisy channels while still being competitive in terms of compression. adaptation to channel conditions and user requirements is implemented on three levels. the coding itself features spatial and temporal measures to conceal transmission errors. additionally, the amount of introduced error-control information is controlled by feedback. the video stream coding, applied to multicast capable networks, can serve different users' needs efficiently at the same time by scaling the video stream in the network according to receivers' quality requirements. the wavevideo architecture is unique in terms of its capability to use qos mapping and adaptation functions across all network nodes providing the same uniform interface. george fankhauser marcel dasen nathalie weiler bernhard plattner burkhard stiller fast and scalable handoffs for wireless internetworks ramon caceres venkata n. padmanabhan a new approach to service provisioning in atm networks steven h. low pravin p. varaiya from defects to failures: a view of dependable computing behrooz parhami two classes of communication patterns ajay d. kshemkalyani mukesh singhal a comparison of mechanisms for improving tcp performance over wireless links reliable transport protocols such as tcp are tuned to perform well in traditional networks where packet losses occur mostly because of congestion. however, networks with wireless and other lossy links also suffer from significant non-congestion-related losses due to reasons such as bit errors and handoffs.
tcp responds to all losses by invoking congestion control and avoidance algorithms, resulting in degraded end-to-end performance in wireless and lossy systems. in this paper, we compare several schemes designed to improve the performance of tcp in such networks. these schemes are classified into three broad categories: end-to-end protocols, where the sender is aware of the wireless link; link-layer protocols, which provide local reliability; and split-connection protocols, which break the end-to-end connection into two parts at the base station. we present the results of several experiments performed in both lan and wan environments, using throughput and goodput as the metrics for comparison. our results show that a reliable link-layer protocol with some knowledge of tcp provides very good performance. furthermore, it is possible to achieve good performance without splitting the end-to-end connection at the base station. we also demonstrate that selective acknowledgments and explicit loss notifications result in significant performance improvements. hari balakrishnan venkata n. padmanabhan srinivasan seshan randy h. katz alternative implementations of two-level adaptive branch prediction tse-yu yeh yale n. patt increased capacity through hierarchical cellular structures with inter-layer reuse in an enhanced gsm radio network in today's cellular networks it becomes harder to provide the resources for the increasing and fluctuating traffic demand exactly in the place and at the time where and when they are needed. moreover, frequency planning for a hierarchical cellular network, especially to cover indoor areas and hot-spots, is a complicated and expensive task. therefore, we study the ability of hierarchical cellular structures with inter-layer reuse to increase the capacity of a gsm (global system for mobile communications) radio network by applying total frequency hopping (t-fh) and adaptive frequency allocation (afa) as a strategy to reuse the macro- and microcell resources without frequency planning in indoor picocells. the presented interference analysis indicates a considerable interference reduction gain by t-fh in conjunction with afa, which can be used for carrying an additional indoor traffic of more than 300 erlang/km2, i.e., increasing the spectral capacity by over 50%, namely 33 erlang/km2/mhz. from these results we draw a number of general conclusions for the design of hierarchical cellular structures in future mobile radio networks. for example, we may conclude that they require reuse strategies that not only adapt to the current local interference situation, but additionally distribute the remaining interference to as many resources as possible. for a hierarchical gsm network this requirement is fulfilled by the t-fh/afa technique very well. jurgen deissner gerhard p. fettweis fast object-oriented procedure calls: lessons from the intel 432 as modular programming grows in importance, the efficiency of procedure calls assumes an ever more critical role in system performance. meanwhile, software designers are becoming more aware of the benefits of object-oriented programming in structuring large software systems. but object-oriented programming requires a good deal of support, which can best be distributed between the compiler and architectural levels. a major part of this support relates to the execution of procedure calls. must such support exact an unacceptable performance penalty?
by considering the case of the intel 432, a prominent object-oriented architecture, we argue that it need not. the 432 provided all the facilities needed to support object orientation. though its procedure call was slow, the reasons were only tenuously related to object orientation. most of the inefficiency could be removed in future designs by the adoption of a few new mechanisms: stack-based allocation of contexts, a memory-clearing coprocessor, and the use of multiple register sets to hold addressing information. these proposals offer the prospect of an object- oriented procedure call that can, on average, be performed nearly as fast as an ordinary unprotected procedure call. e. f. gehringer r. p. colwell algorithms for energy-efficient multicasting in static ad hoc wireless networks in this paper we address the problem of multicasting in ad hoc wireless networks from the viewpoint of energy efficiency. we discuss the impact of the wireless medium on the multicasting problem and the fundamental trade-offs that arise. we propose and evaluate several algorithms for defining multicast trees for session (or connection-oriented) traffic when transceiver resources are limited. the algorithms select the relay nodes and the corresponding transmission power levels, and achieve different degrees of scalability and performance. we demonstrate that the incorporation of energy considerations into multicast algorithms can, indeed, result in improved energy efficiency. jeffrey e. wieselthier gam d. nguyen anthony ephremides provision of real-time services over atm using aal type 2 guven mercankosk john f. siliquini zigmantas l. budrikis when the crc and tcp checksum disagree traces of internet packets from the past two years show that between 1 packet in 1,100 and 1 packet in 32,000 fails the tcp checksum, even on links where link-level crcs should catch all but 1 in 4 billion errors. for certain situations, the rate of checksum failures can be even higher: in one hour-long test we observed a checksum failure of 1 packet in 400. we investigate why so many errors are observed, when link-level crcs should catch nearly all of them. we have collected nearly 500,000 packets which failed the tcp or udp or ip checksum. this dataset shows the internet has a wide variety of error sources which can not be detected by link-level checks. we describe analysis tools that have identified nearly 100 different error patterns. categorizing packet errors, we can infer likely causes which explain roughly half the observed errors. the causes span the entire spectrum of a network stack, from memory errors to bugs in tcp. after an analysis we conclude that the checksum will fail to detect errors for roughly 1 in 16 million to 10 billion packets. from our analysis of the cause of errors, we propose simple changes to several protocols which will decrease the rate of undetected error. even so, the highly non-random distribution of errors strongly suggests some applications should employ application-level checksums or equivalents. jonathan stone craig partridge locality-aware request distribution in cluster-based network servers vivek s. pai mohit aron gaurov banga michael svendsen peter druschel willy zwaenepoel erich nahum intelligent, stupid, and really smart networking dalibor f. vrsalovic resources section: conferences jay blickstein a carrier sensed multiple access protocol high data rate ring networks e. c. foudriat k. maly c. m. overstreet s. khanna f. 
paterra cache-optimal methods for bit-reversals zhao zhang xiaodong zhang are eda platform preferences about to shift? (panel) william s. johnson the 9373 and 9375 pipelined processing unit ron kalla a formal protocol conversion method the need for the protocol conversion has been recognized with the proliferation of heterogeneous networks. from a formal viewpoint, we regard that problem as generating a protocol which satisfies the properties of the conversion. in this paper, we prove that, one can determine whether a converter exists, for some protocol classes, given protocols in the form of communicating finite automata, moreover, we give a construction method for such a converter for those classes, and derive an upper bound of the computational complexity of the construction algorithm. k okumura wireless integrated network sensors g. j. pottie w. j. kaiser telescience and advanced technologies space station and its associated laboratories, coupled with the availability of new computing and communications technologies, have the potential of significantly enhancing scientific research. telescience involves the interaction of scientific researchers and equipment on earth with on-board personnel and equipment as well as with other researchers, remote ground-based resources, mission control personnel, and space station developers. to assure that this potential is met, scientists and managers associated with the space station project must gain significant experience with the use of these technologies for scientific research, and this experience must be fed into the development process for space station. in this talk, a pilot program is described that is attempting to address this problem. university researchers are conducting rapid prototyping testbeds employing new telescience technologies and ideas. these testbeds are specific research experiments within the scientific discipline areas that will use space station laboratories. the experiments are being carried out in a coordinated manner to allow the critical questions to be answered by groups of scientists working with technologists in a rapid prototyping testbed environment. the rapid prototyping testbeds are not like a typical testbed. rather than being used to evaluate and integrate systems on the way to deployment, the rapid prototyping testbeds constitute a technology evaluation environment (tee), allowing users to interact with advanced technologies in the conduct of scientific research in order to develop the required base of experience to permit development and evaluation of requirements and specifications. b. m. leiner the complexity of end-to-end communication in memoryless networks micah adler faith fich distributed computing the distributed computing column covers the theory of systems that are composed of a number of interacting computing elements. these include problems of communication and networking, databases, distributed shared memory, multiprocessor architectures, operating systems, verification, internet, and the web. sergio rajsbaum specification and testing of the behavior of network management agents using sdl-92 olaf henniger michel barbeau behçet sarikaya an efficient communication protocol for high-speed packet-switched multichannel networks this paper proposes a new media-access protocol for high-speed packet-switched multichannel networks based on a broadcast topology, for example, optical passive star networks using wavelength-division multiple access. 
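for reference against the "when the crc and tcp checksum disagree" abstract above: the tcp/udp/ip checksum that fails to catch those errors is the 16-bit one's-complement sum of rfc 1071, sketched below. because it is a simple sum, reordered 16-bit words or compensating bit flips leave it unchanged, which is part of why errors that a link-level crc would catch can still slip through end to end. the sample bytes are illustrative only.

```python
def internet_checksum(data: bytes) -> int:
    """rfc 1071 checksum: one's-complement of the one's-complement sum of the
    data taken as big-endian 16-bit words (odd length padded with a zero byte)."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

msg = b"\x45\x00\x00\x28\x1c\x46\x40\x00\x40\x06"
print(hex(internet_checksum(msg)))
# swapping two 16-bit words does not change the sum, so that corruption is undetected:
swapped = msg[2:4] + msg[0:2] + msg[4:]
print(internet_checksum(msg) == internet_checksum(swapped))   # True
```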
the protocol supports connection-oriented traffic with or without bandwidth reservation as well as datagram traffic, in an attempt to integrate transport-layer functions with the media-access layer. it utilizes the bandwidth efficiently while keeping the processing requirements low by requiring stations to compute their transmission and reception schedules only at the start and end of each connection. a simple analysis shows that we can achieve low blocking probabilities for connections, as well as high network throughput. pierre a. humblet rajiv ramaswami kumar n. sivarajan a path-oriented routing strategy for packet switching networks with end-to-end protocols a path-oriented routing strategy is proposed for packet switching networks with end-to-end internal protocols. it allows switch pairs to communicate over multiple paths (for better network throughput), while maintaining knowledge of user connections at the network's endpoints only. the most significant aspect of this strategy lies in its flow assignment method. a distributed loop-free shortest path algorithm assigns a number to a path at the time it is created and this number remains valid through shortest path changes. consequently, existing traffic can be maintained on existing paths, while new traffic is assigned to the current shortest paths. stable multiple path routing is thus achieved without packet disordering. abnormal conditions such as trunk failure and recovery and trunk congestion are dealt with by tagging routing updates with update causes. simulation of this routing strategy shows that maximum network throughput (under a certain congestion constraint) can be increased substantially compared to a single path routing strategy. r. aubin p. ng inside risks: ghosts, mysteries and uncertainty background: perhaps the most frustrating of all the risks forum cases are those in which uncertainty and doubts remain to spook us long after the event. peter g. neumann source to destination communication in the presence of faults o. goldreich a. herzberg y. mansour wisq: a restartable architecture using queues in this paper, the wisq architecture is described. this architecture is designed to achieve high performance by exploiting new compiler technology and using a highly segmented pipeline. by having a highly segmented pipeline, a very-high-speed clock can be used. since a highly segmented pipeline will require relatively long pipelines, a way must be provided to minimize the effects of pipeline bubbles that are formed due to data and control dependencies. it is also important to provide a way of supporting precise interrupts. these goals are met, in part, by providing a reorder buffer to help restore the machine to a precise state. the architecture then makes the pipelining visible to the programmer/compiler by making the reorder buffer accessible and by explicitly providing that issued instructions cannot be affected by immediately preceding ones. compiler techniques have been identified that can take advantage of the reorder buffer and permit a sustained execution rate approaching or exceeding one per clock. these techniques include using trace scheduling and providing a relatively easy way to "undo" instructions if the predicted branch path is not taken. we have also studied ways to further reduce the effects of branches by not having them executed in the execution unit. in particular, branches are detected and resolved in the instruction fetch unit. 
using this approach, the execution unit is sent a stream of instructions (without branches) that are guaranteed to execute. a. r. pleszkun j. r. goodman w. c. hsu r. t. joersz g. bier p. woest p. b. schechter worst-case fraction of cbr teletraffic unpunctual due to statistical multiplexing daniel chonghwan lee fragmentation considered harmful internetworks can be built from many different kinds of networks, with varying limits on maximum packet size. throughput is usually maximized when the largest possible packet is sent; unfortunately, some routes can carry only very small packets. the ip protocol allows a gateway to fragment a packet if it is too large to be transmitted. fragmentation is at best a necessary evil; it can lead to poor performance or complete communication failure. there are a variety of ways to reduce the likelihood of fragmentation; some can be incorporated into existing ip implementations without changes in protocol specifications. others require new protocols, or modifications to existing protocols. c. a. kent j. c. mogul performance analysis of a synchronous, circuit-switched interconnection cached network in many parallel applications, each computation entity (process, thread etc.) switches the bulk of its communication between a small group of other entities. we call this phenomenon switching locality. the interconnection cached network (icn) is a reconfigurable network especially suited for exploiting switching locality. it consists of many small, fast crossbars interconnected by a large, slow switching crossbar. the large crossbar is used for topology reconfiguration and the smaller crossbars for circuit switching. for a large class of communication patterns displaying switching locality (this includes meshes, tori, trees, rings, pyramids, etc.), it is possible to choose appropriate icn configurations and assignments of processes to processors such that all communication paths pass through two or less switching components. much of the previous work on performance analysis of networks has assumed random, uniformly distributed communication and is inapplicable to many real- life parallel applications that lack this uniformity. we develop a methodology to analyze the performance of synchronous, circuit switched networks under different communication traffic patterns. we employ this methodology to study the performance of the icn in comparison to more popular reconfigurable networks: the delta and the crossbar. we choose two different communication patterns---a 2-d torus representing a high degree of switching locality and a fully connected graph representing complete absence of such locality. we show that in the presence of locality, the icn comes very close to matching the crossbar's performance. this, together with the shorter network cycle period of the icn, makes it more desirable. in the absence of switching locality, the reconfigurability of the icn allows for a graceful degradation in performance. vipul gupta eugen schenfeld electrical substation service-area estimation using cellular automata: an initial report john w. fenwick l. jonathan dowell performance prediction for the horizon super computer the performance of one horizon processing element can be quantified by user operations per instruction, the instructions per tick, and the basic clock rate. assuming there is sufficient parallelism within a problem, the performance of one pe can be multiplied by the number of pes contained in the horizon system. 
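the cost discussed in "fragmentation considered harmful" above follows directly from ipv4 arithmetic: every fragment carries its own 20-byte header and every non-final fragment must carry a multiple of 8 payload bytes, so a large datagram crossing a small-mtu hop multiplies into several packets, and losing any one of them loses the whole datagram. a minimal sketch of that arithmetic (assuming a 20-byte header with no options):

```python
def ipv4_fragments(total_length, mtu, header_len=20):
    """split an ipv4 datagram of total_length bytes (header included) into the
    fragment payload sizes it would need on a link with the given mtu."""
    payload = total_length - header_len
    per_frag = (mtu - header_len) // 8 * 8   # non-final fragments: multiple of 8
    sizes = []
    while payload > per_frag:
        sizes.append(per_frag)
        payload -= per_frag
    sizes.append(payload)
    return sizes

# a 1500-byte datagram squeezed through a 576-byte mtu becomes 3 packets.
print(ipv4_fragments(1500, 576))   # [552, 552, 376]
```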
for a 256-pe horizon the expected sustained performance is in the order of 50 giga user operations per second for problems with sustained parallelism of 8,000 to 12,000 instruction streams. r. r. glenn models for performance perturbation analysis allen d. malony daniel a. reed on the performance of bandwidth allocation strategies for interconnecting atm and connectionless networks using atm networks as the switching fabric for interconnecting lans and mans means that a strategy for bandwidth allocation must be developed to map connectionless traffic in the lan/man to the atm network which is essentially connection-oriented. this paper presents a bandwidth allocation algorithm based on bandwidth advertising and burst drop when overflow occurs. the performance is evaluated by simulation and shown to reduce burst loss rate significantly. edward chan victor c. s. lee jim m. ng ucsd center for wireless communications "sun, fun and ph.d.s, too" ramesh r. rao lawrence larson profiling and reducing processing overheads in tcp/ip jonathan kay joseph pasquale performance analysis of local communication loops the communication loops analyzed here provide an economic way of attaching many different terminals which may be some kilometers away from a host processor. main potential bottlenecks were found to be the loop transmission speed, the loop adapter processing rate, and the buffering capability, all of which are analyzed in detail. the buffer overrun probabilities are found by convolving individual buffer usage densities and by summing over the tail-end of the obtained overall density function. examples of analysis results are given. kuno m. roehr horst sadlowski ad hoc mobility management with uniform quorum systems zygmunt j. haas ben liang practical trade-offs for open interconnection there is increasing market pressure to provide support for the open interconnection of systems via general purpose protocol suites such as osi and tcp/ip. the complexity of these protocols means that the achievement of acceptable performance is not easy. indeed, some would claim it is impossible, and advocate lean, closed protocols. a further aspect of communications architectures in the world outside the research laboratory is that they must be well structured and modular, in order to meet the needs of orderly systems development and the provision of configurable products. this paper examines the trade-offs between these three aspects of protocol stack development: conformance to standards, reasonable performance and modularity. it finds that while a considerable amount of work has been carried out in recent times, it is apparent that we do not know yet how to achieve all three. michael fry resources section: websites michele tepper websites ken korman editorial dan rosenbaum the i860tm 64-bit supercomputing microprocessor the intel i860tm processor is a risc-based microprocessor incorporating a risc core with memory management, a floating point unit, and caches on a single chip. the 1,000,000 transistors allow a single chip implementation with highly optimized interunit communication and wide internal data buses. the parallelism and pipelining between the execution units, and the innovative cache management techniques are under explicit control of software. vectorizable applications can use the pipelined adder and multiplier units to achieve up to 80 mflops at 40 mhz for the inner loops of common calculations. 
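a quick sanity check on the 80 mflops figure quoted above for the i860: it is consistent with one result per 25 ns cycle from each of the two pipelined floating-point units (the dual-operation adder/multiplier case in vectorizable inner loops), i.e.

    40\,\text{mhz} \times 2\ \text{flops/cycle} \;(\text{1 add} + \text{1 multiply}) \;=\; 80\ \text{mflops peak}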
special instructions allow using the data cache as a flexible vector register and support high data bandwidth from main memory. finally, to support visualization of data, special hardware for 3d graphics is included. l. kohn n. margulis evaluation of a fault tolerant distributed broadcast algorithm in hypercube multicomputers this paper performs a detailed evaluation of a fault-tolerant distributed broadcasting algorithm for cube connected networks. the main areas of evaluation are the following: (1) algorithm effectiveness in the presence of multiple faults, (2) establishing the maximum number of link faults allowed before the algorithm fails to guarantee 100% effectiveness. the evaluation was done on networks connected in 3-, 4-, 5-, and 6-cube configurations. the results of the simulation were analyzed to establish algorithm characteristics under multiple faults. jace w. krull jie wu andres m. molina the current status of map map is a standard computer networking discipline and a unified set of communications protocols which make it possible to implement a vendor independent network of computers. map provides a complete service to the user or application program. the major facilities---file transfer, process-to-process messaging, inter-machine messaging, and directory service---are supported. virtual terminal protocol support is planned for the near future. map is a subset of the internationally agreed open systems interconnection computer networking architecture, protocols and supporting standards. the map subset of osi protocols was originally selected by general motors and has been agreed by a map task force comprised of nearly 100 user and vendor corporations. map specifications are also agreed by the nbs/osi implementor's workshop, indicating that the map specifications are osi compatible. the map system and protocols are described here in sufficient detail to permit the reader to gain a general understanding of map against a background of osi fundamentals. the future growth trends for map conclude this paper. j. s. foley y. weon-yoon effects of building blocks on the performance of super-scalar architecture the inherent low level parallelism of super-scalar architectures plays an important role in the processing power provided by these machines: independent functional units promote opportunities for executing several machine operations simultaneously. from the viewpoint of the hardware designer it is very important to assess the influence of each functional unit, and the way they communicate, on the overall performance of the machine. particularly, it is highly desirable to determine an upper bound on the number of additional functional units which give significant performance improvement ratios. this work describes experiments that have been carried out to assess the effect of alternative instruction issue mechanisms, multiple functional units, instruction queues, common data bus and other hardware solutions on the performance of super-scalar machines. the assessment was obtained by interpreting non-optimized object code for an actual processor on some basic machine models. the paper outlines the main aspects of the research and shows that speed-up ratios of up to 3.35 times were observed during the interpretation of benchmark programs. edil s. t. fernandes fernando m. b. barbosa performance evaluation of cisc computer systems under single- and two-level cache environments m. s. obaidat h. khalid k.
sadiq the convex c240 architecture the c240, a tightly coupled, shared memory, parallel/multi-processor, supports up to four 40-nanosecond ecl/cmos cray-like processors. managed by a fully semaphored unix operating system, the c240 can support up to 4 gigabytes of directly addressable physical memory. convex proprietary compiler technology provides automatic vectorization and parallelization for fortran, c, and ada. allocation of parallel threads to physical processors is managed by an innovative approach, asap (automatic self-allocating processors). asap dynamically allocates and deallocates parallel threads to the processors. in creating a second generation supercomputer, several factors were considered prior to commencing design. among these factors are: scalar/vector/parallel performance; parallel processing efficiency; semiconductor technology; and compiler optimization strategies. with these factors in mind, the c240 system evolved. the salient features of the c240 are: a tightly coupled, mimd processor system, where each processor, up to 4, is a general purpose scientific processor, with a 4 gigabyte virtual address space. operands are addressed at the byte level. each processor is a 40 nanosecond, ecl/cmos implementation of the convex c1, cray-like, instruction set architecture (see wallach 85). additional instructions were added in a super-set manner, using pre-existing spare opcodes. these instructions provide vector conversions, vector shifts, vector square root, and fully pipelined operations under mask. additional scalar instructions were added mainly in the area of intrinsics: sin, cos, etc. a set of parallel processing instructions was also added and will be discussed in more detail later. a very high bandwidth, 800 mbytes/sec, multiported memory system capable of supporting up to 4 gigabytes of directly addressable physical memory. revisions of convex's proprietary compiler technology that support automatic parallelism for fortran, c and ada. a fully semaphored unix kernel. the unix kernel was modified extensively to support multiple simultaneous system calls to different resources and the management of convex's parallel processing paradigm. lastly, a new and innovative concept for the allocation and scheduling of parallel processes and threads is supported. this unique concept, asap (automatic self-allocating processors), is a distributed mechanism that permits physical processors to dynamically associate and disassociate themselves from an executing process. m. chastain g. gostin j. mankovich s. wallach editorial it has been a pleasure to undertake the guest-editorship in putting together this special issue of wireless networks. the reason is that it has provided the opportunity to acquaint the wireless networking community with the emerging importance of satellite systems as parts of future wireless networks. in fact, more broadly, the hybrid use of satellites and terrestrial communication resources, wireless or not, is emerging as a practical and efficient way of organizing the world's communication infrastructure. at the same time, the development of satellite communications has traditionally been somewhat separated from the development of terrestrial wireless communications and especially from that of communication networks. this issue contributes toward bridging this gap. a. ephremides f.
vatalaro web prefetching between low-bandwidth clients and proxies: potential and performance li fan pei cao wei lin quinn jacobson an analysis model on nonblocking multirate broadcast networks designing efficient interconnection networks with powerful connecting capability remains a key issue for parallel and distributed computing systems. much progress has been made in nonblocking broadcast networks which can realize all one-to-many connections between any network input port and a set of output ports without any disturbance (that is, rearrangement) of other existing connections. however, all results obtained so far for broadcast networks are for the circuit switching or single rate communication model. meanwhile, there has been growing interest in large networks operated in a packet switching manner. this type of network can be modeled as a multirate network wherein a single link can be shared by multiple connections with arbitrary data rates. previous work has been done on the blocking behavior of multirate one-to-one connection or permutation networks. however, very little is known about the behavior of multirate one-to-many connection or broadcast networks. in this paper, we will determine nonblocking conditions for v(m, n1, r1, n2, r2) networks under which any multirate broadcast connection request from a network input port to a set of network output ports can be satisfied without any disturbance of the existing connections in the network. our results show that more general multirate broadcast networks can be constructed in the same order of hardware complexity as the best available single rate nonblocking broadcast networks. our proofs for the theorems also imply an efficient routing algorithm for such networks. multirate broadcast networks can provide strong support for parallel and distributed computing systems which require broadcasting multirate data in a random-access environment. yuanyuan yang tight bounds for weakly bounded protocols ewan tempero richard ladner a control-theoretic approach to the design of an explicit rate controller for abr service aleksandar kolarov g. ramamurthy classification of parallel processor architectures (invited tutorial session) herb schwetman daniel gajski dennis gannon daniel hills jacob schwartz james browne mapping data flow programs on a vlsi array of processors with the advent of vlsi, relatively large processing arrays may be realized in a single vlsi chip. such regularly structured arrays take considerably less time to design and test, and fault-tolerance can easily be introduced into them. however, only a few computational algorithms which can effectively use such regular arrays have been developed so far. we present an approach to mapping arbitrary algorithms, expressed as programs in a data flow language, onto a regular array of data-driven processors implemented by a number of vlsi chips. each chip contains a number of processors, interconnected by a set of regular paths, and connected to processors in other similar chips to form a large array. this array is thus tailored to perform a specific computational task, as an attached processor in a larger system. the data flow program is first translated into a graph representation, the data flow graph, which is then mapped onto a finite but (theoretically) unbounded array of identical processors. each node in the graph represents an operation which can be performed by an individual processor in the array.
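as a loose illustration of the node-to-processor assignment step that the mapping abstract above (and its continuation below) describes, the following minimal sketch places dataflow-graph nodes onto a rectangular processor array in breadth-first order so that connected operations tend to land near one another; the graph, array shape, and function names are hypothetical and this is not the authors' mapping algorithm:

    from collections import deque

    def map_dataflow_graph(graph, cols):
        """assign each graph node a (row, col) slot in a processor array,
        visiting nodes breadth-first so that neighbours tend to land close."""
        placement, order, seen = {}, [], set()
        for root in graph:                      # cover disconnected parts too
            if root in seen:
                continue
            queue = deque([root]); seen.add(root)
            while queue:
                node = queue.popleft()
                order.append(node)
                for succ in graph[node]:        # arcs of the dataflow graph
                    if succ not in seen:
                        seen.add(succ); queue.append(succ)
        for i, node in enumerate(order):
            placement[node] = divmod(i, cols)   # (row, col) in the array
        return placement

    # hypothetical dataflow graph: node -> successor operations
    g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    print(map_dataflow_graph(g, cols=2))        # {'a': (0, 0), 'b': (0, 1), 'c': (1, 0), 'd': (1, 1)}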
therefore, the mapping operation consists of assigning nodes in the graph to processors in the array, and defining the connections between the processors according to the arcs in the graph. the last step consists of partitioning the unbounded array into a number of segments, to account for the number of processors which fit in a single vlsi chip. b. mendelson g. m. silberman a tagged token dataflow machine for computing small, iterative algorithms g. b. shippen j. k. archibald a cdma-based radio interface for third generation mobile systems this paper deals with the use of a cdma-based radio interface in third generation mobile systems (universal mobile telecommunications system---umts, and future public land mobile telecommunications system---fplmts). the paper is not intended as a detailed analysis of the radio interface performance, but as an overview of the main issues arising in a typical cdma-based mobile system, discussing the different available technical solutions. first of all, the basic requirements of the radio interface in a third generation mobile system are outlined. in particular, the support of variable bit rate transmission, the adaptability to the different propagation and service environments and the flexibility are felt to be important topics to be discussed. then, the main characteristics of the cdma access technique are depicted, in relation to the above-mentioned requirements, focusing in particular on the ds-cdma radio interface designed within the race ii---codit project. in that context the paper describes some of the technical solutions proposed for the provision of advanced features such as macrodiversity, multibearer transmission and variable bit rate services. sergio barberis ermanno berruto fail-awareness in timed asynchronous systems christof fetzer flaviu cristian multicast and multiprotocol support for atm based internets both the internet engineering task force (ietf) and atm forum are addressing the issue of building higher layer networks using atm. the ietf's focus is on ip, while the atm forum has expanded its vision to include all common network layer protocols. the task faced by both organizations is to develop solutions that will cause minimal interoperability problems, both for users and network architects. this paper focuses on the solutions coming out of the ietf's ip over atm working group, including the author's proposal for flexible, high performance ip multicast over atm. the emerging next hop resolution protocol from the ietf's routing over large clouds group is briefly examined, and its possible conflict with the atm forum's 'multiprotocol over atm' work is also discussed. the author finishes by noting how current ietf work could support 'multiprotocol' networks. g. j. armitage editorial at the same time that this special issue of wireless networks is being published, the next generation of wireless networks is coming to life. the fcc auction of the spectrum had brought into the arena new players that are willing and able to take part in writing the wireless communications revolution in the us, and potentially elsewhere in the world. the price paid for the pcs spectrum, however, in addition to the infrastructure building cost, is adding up to billions of dollars.
given the current market size of the cellular industry, the expected growth rate in cellular subscribers, the competitive aspect of this service sector and the inevitable price decline, it becomes apparent that voice service alone is not a feasible business proposition for wireless operators to recoup the investment, let alone yield a profit. the clear competitive advantage therefore lies in the ability to offer advanced wireless services that combine multiple media types such as video and data in addition to voice. emerging technologies based on on-going research carry a great promise for shifts in the paradigm on which the cellular industry was built. these technologies include wireless atm and mobile multimedia. in organizing this special issue around the theme of "wireless multimedia" we focus on research areas that will enable the above paradigm shift. we attempt to cover the research "spectrum" from transmission to transport protocols and mobile applications, in order to present a bright and compelling forecast of this industry as it emerges from the lab and enters the realm of competitive business and advanced mobile applications. two invited papers in this special issue present state-of-the-art technology proposals in wireless networks. raychaudhuri proposes an atm-based wireless network capable of supporting integrated voice, video, and data services with quality of service control, and discusses design issues for major functional layers in the network. liu et al. propose and study multimode cdma with distributed queuing request update multiple access for multi-rate wireless packet networks which support an integrated mix of multimedia traffic. the proposed network incorporates a flexible multiplexing scheme for providing multi-rate transmission and an efficient demand assignment for wireless access and scheduling. the contribution from naghshineh and acampora focuses on call admission control for wireless/mobile networks supporting multimedia traffic. an anticipatory hand-off control mechanism and an adaptive link partitioning scheme are proposed by lee in an adapted reserved services framework for mobile connections of multimedia traffic. liu and el zarki propose a two-class data partitioning and unequal error protection scheme to make best use of limited channel bandwidth. the performance issues of a typical cdma wireless link using a protocol stack are investigated in the contribution from bao, with focus on the dynamics of tcp and rlp. a two-step plan is proposed by delli priscoli for a smooth migration from the current gsm system to the third generation system in order to reuse most of the existing technologies and infrastructures. finally, iera et al. present an overall research work dealing with an effective management of multimedia and multi-requirement services in the third generation mobile radio systems. we would like to thank all reviewers for their efforts to ensure a high standard of quality for this issue. the support of editor-in-chief professor imrich chlamtac is greatly appreciated. george makhoul zhensheng zang security problems in the tcp/ip protocol suite s. m. bellovin congestion control in ip/tcp internetworks congestion control is a recognized problem in complex networks.
we have discovered that the department of defense's internet protocol (ip), a pure datagram protocol, and transmission control protocol (tcp), a transport layer protocol, when used together, are subject to unusual congestion problems caused by interactions between the transport and datagram layers. in particular, ip gateways are vulnerable to a phenomenon we call congestion collapse, especially when such gateways connect networks of widely different bandwidth. we have developed solutions that prevent congestion collapse. these problems are not generally recognized because these protocols are used most often on networks built on top of arpanet imp technology. arpanet imp-based networks traditionally have uniform bandwidth, identical switching nodes, and are sized with substantial excess capacity. this excess capacity, and the ability of the imp system to throttle the transmissions of hosts, has, for most ip/tcp hosts and networks, been adequate to handle congestion. with the recent split of the arpanet into two interconnected networks and the growth of other networks with differing properties connected to the arpanet, however, reliance on the benign properties of the imp system is no longer enough to allow hosts to communicate rapidly and reliably. improved handling of congestion is now mandatory for successful network operation under load. ford aerospace and communications corporation, and its parent company, ford motor company, operate the only private ip/tcp long-haul network in existence today. this network connects six facilities (one in michigan, two in california, one in colorado, one in texas, and one in england), some with extensive local networks. this net is cross-tied to the arpanet but uses its own long-haul circuits; traffic between ford facilities flows over private leased circuits, including a leased transatlantic satellite connection. all switching nodes are pure ip datagram switches with no node-to-node flow control, and all hosts run software either written or heavily modified by ford or ford aerospace. bandwidth of links in this network varies widely, from 1200 to 10,000,000 bits per second. in general, we have not been able to afford the luxury of excess long-haul bandwidth that the arpanet possesses, and our long-haul links are heavily loaded during peak periods. transit times of several seconds are thus common in our network. because of our pure datagram orientation, heavy loading, and wide variation in bandwidth, we have had to solve problems that the arpanet/milnet community is just beginning to recognize. our network is sensitive to suboptimal behavior by host tcp implementations, both on and off our own net. we have devoted considerable effort to examining tcp behavior under various conditions, and have solved some widely prevalent problems with tcp. we present here two problems and their solutions. many tcp implementations have these problems; if throughput is worse through an arpanet/milnet gateway for a given tcp implementation than throughput across a single net, there is a high probability that the tcp implementation has one or both of these problems. john nagle the hughes data flow multiprocessor: architecture for efficient signal and data processing rex vedder dennis finn risc vector cpu's and crossbars in desktops martin dowd a fetch-and-op implementation for parallel computers an efficient fetch-and-op circuit is described. a bit-serial circuit-switched implementation requires only 5 gates per node in a binary tree.
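for readers unfamiliar with the primitive named in the abstract above, a minimal sketch of fetch-and-op semantics, i.e. the atomic read-modify-write family that the circuit implements in hardware; this is a software model with a lock, not the combining-tree hardware, and the class and variable names are hypothetical:

    import threading

    class FetchAndOpCell:
        """software model of a memory cell supporting atomic fetch-and-op."""
        def __init__(self, value=0):
            self._value = value
            self._lock = threading.Lock()

        def fetch_and_op(self, op, operand):
            # returns the old value and applies op atomically,
            # covering fetch-and-add, test-and-set, swap, and, or
            with self._lock:
                old = self._value
                self._value = op(old, operand)
                return old

    cell = FetchAndOpCell(0)
    ticket = cell.fetch_and_op(lambda old, n: old + n, 1)   # fetch-and-add
    was_set = cell.fetch_and_op(lambda old, n: 1, 1)        # test-and-set style
    print(ticket, was_set)                                  # 0 1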
this versatile circuit is also capable of test-and-set primitives (priority circuits) and swap operators, as well as and and or operations used in simd tests such as "branch on all carries set." it provides an alternative implementation of the combining fetch-and-add circuit to the one designed for the ultracomputer project; this implementation is suited to simd computing and can be adapted to mimd computing. g. j. lipovski p. vaughan performance issues of enterprise level web proxies carlos maltzahn kathy j. richardson dirk grunwald managing an ethernet installation: case studies from the front lines chris johnson the case for persistent-connection http the success of the world-wide web is largely due to the simplicity, hence ease of implementation, of the hypertext transfer protocol (http). http, however, makes inefficient use of network and server resources, and adds unnecessary latencies, by creating a new tcp connection for each request. modifications to http have been proposed that would transport multiple requests over each tcp connection. these modifications have led to debate over their actual impact on users, on servers, and on the network. this paper reports the results of log-driven simulations of several variants of the proposed modifications, which demonstrate the value of persistent connections. jeffrey c. mogul an improved conditional branching scheme for a single instruction computer architecture p. a. laplante a unified resource management and execution control mechanism for data flow machines this paper presents a unified resource management and execution control mechanism for data flow machines. the mechanism integrates load control, depth-first execution control, cache memory control and a load balancing mechanism. all of these mechanisms are controlled by such basic information as the number of active state processes, na. in data flow machines, synchronization among processes is an essential hardware function. hence, na can easily be detected by the hardware. load control and depth-first execution control make it possible to execute a program with a designated degree of parallelism and in depth-first order. a cache memory of data flow processors in multiprocessing environments can be realized by using load and depth-first execution controls together with a deterministic replacement algorithm, i.e., replacement of only waiting-state processes. a new load balancing method called group load balancing is also presented to evaluate the above-mentioned mechanisms in multiprocessor environments. these unified control mechanisms are evaluated on a register transfer level simulator for a list-processing oriented data flow machine. m. takesue high availability path design in ring-based optical networks wayne d. grover m3l: a list-directed architecture this paper describes the basic principles and the architecture of a general host machine based upon list processing. current work in this field deals with conventional direct execution schemes which use linearly structured directly executable languages: prefixed languages with varying formats for operators and operands. while these languages are convenient for interpretation and provide an efficient execution scheme, they are, on the other hand, very hard to generate. therefore, we propose here a new direct execution model based upon the definition of a class of directly executable languages with a list-oriented structure, using lisp as a model.
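as background for the two-part editor/interpreter scheme described next, a minimal sketch of what executing a lisp-like, tree-structured intermediate form looks like; this is a toy evaluator under assumed operator names, not the m3l design:

    import operator

    OPS = {"+": operator.add, "*": operator.mul, "-": operator.sub}

    def evaluate(form, env):
        """evaluate a nested-list (tree-structured) form such as ['+', 'x', ['*', 2, 3]]."""
        if isinstance(form, list):               # interior node: operator plus operands
            op = OPS[form[0]]
            args = [evaluate(sub, env) for sub in form[1:]]
            result = args[0]
            for a in args[1:]:
                result = op(result, a)
            return result
        if isinstance(form, str):                # leaf: variable reference
            return env[form]
        return form                              # leaf: literal constant

    print(evaluate(["+", "x", ["*", 2, 3]], {"x": 4}))   # -> 10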
the first part of the scheme is held by an editor which translates the high-level source text into the internal tree-structured form. the second part is held by an interpreter which executes this form on an appropriate machine. in this paper we pursue the design of the list-structured intermediate form and we give the reasons for our choice. once we have brought out the concepts and the functions required for the implementation of non-numerical processing and particularly for list-structured forms, we discuss the architecture of the list-directed machine. j. p. sansonnet m. castan c. percebois the experimental literature of the internet: an annotated bibliography jeffrey c. mogul issues related to mimd shared-memory computers: the nyu ultracomputer approach jan edler allan gottlieb clyde p. kruskal kevin p. mcauliffe larry rudolph marc snir patricia j. teller james wilson real-time object sharing with minimal system support srikanth ramamurthy mark moir james h. anderson a case for caching file objects inside internetworks peter b. danzig richard s. hall michael f. schwartz design and performance of special purpose hardware for time warp a special purpose simulation engine based on the time warp mechanism is proposed to attack large-scale discrete event simulation problems. a key component of this engine is the rollback chip, a hardware component that efficiently implements state saving and rollback functions in time warp. the algorithms implemented by the rollback chip are described, as well as mechanisms that allow efficient implementation. results of simulation studies are presented that show that the rollback chip can virtually eliminate the state saving overhead that plagues current software implementations of time warp. r. m. fujimoto j.-j. tsai g. gopalakrishnan fine-grain multithreading with the em-x multiprocessor andrew sohn yuetsu kodama jui ku mitsuhisa sato hirofumi sakane hayato yamana shuichi sakai yoshinori yamaguchi improving call admission policies in wireless networks it is well known that the call admission policy can have a big impact on the performance of a wireless network. however, the nonlinear dependence between new calls and handoff calls makes the search for a better call admission policy -- in terms of effective utilization -- a difficult task. many studies on optimal policies have not taken the correct dependence into consideration. as a result, the reported gains in those studies cannot be confirmed in a real network. in this paper we develop a solution to the problem of finding better call admission policies. the technique consists of three components. first, we search for the policy in an approximate reduced-complexity model. second, we modify the linear programming technique for the inherently nonlinear policy-search problem. third, we verify the performance of the found policy in the exact, high-complexity, analytical model. the results shown in the paper clearly demonstrate the effectiveness of the proposed technique. chi-jui ho chin-tau lea simulation of computer systems and applications william s. keezer artificial synesthesia via sonification: a wearable augmented sensory system a design for an implemented, prototype wearable artificial sensory system is presented, which uses data sonification to compensate for normal limitations in the human visual system. the system gives insight into the complete visible-light spectra from objects being seen by the user.
long-term wear and consequent training might lead to identification of various visually indistinguishable materials based on the sounds of their spectra. a detailed system design and results of user testing are presented, and many possible extensions to both the sonification and the sensor package are discussed. leonard n. foner a mobile virtual-distributed system architecture for supporting wireless mobile computing and communications george liu alexander marlevi gerald q. maguire mobile radio slotted aloha with capture, diversity and retransmission control in the presence of shadowing in this paper, the capture performance of a random access scheme in the presence of rayleigh fading, shadowing and diversity is studied. the conditional throughput c_n, i.e., the average number of packets which are correctly received per slot, given the number of colliding packets, n, is computed, as well as its limit as n\to\infty. some different diversity schemes are compared. also, retransmission control is considered as a means to enhance the system performance. the stability of the controlled system is directly proved. finally, the effect of long-term attenuations on the system performance and stability is discussed. michele zorzi scalable reliable multicast using multiple multicast groups sneha k. kasera jim kurose don towsley ip multicast fault recovery in pim over ospf (poster session) xin wang c. yu henning schulzrinne paul stirpe wei wu a comment on "a fetch-and-op implementation for parallel computers" tsong-chih hsu ling-yang kung an architecture of a dataflow single chip processor a highly parallel (more than a thousand processing elements) dataflow machine, the em-4, is now under development. the em-4 design principle is to construct a high performance computer using a compact architecture by overcoming several defects of dataflow machines. in constructing the em-4, it is essential to fabricate a processing element (pe) on a single chip for reducing operation speed, system size, design complexity and cost. in the em-4, the pe, called emc-r, has been specially designed using a 50,000-gate gate array chip. this paper focuses on an architecture of the emc-r. its distinctive features are: a strongly connected arc dataflow model; a direct matching scheme; a risc-based design; a deadlock-free on-chip packet switch; and an integration of a packet-based circular pipeline and a register-based advanced control pipeline. these features are intensively examined, and the instruction set architecture and the configuration architecture which exploit them are described. s. sakai y. yamaguchi k. hiraki y. kodama t. yuba trends in telecommunication management and configurations patricia j. carlson james c. wetherbe an overview of the center for wireless information technology at the hong kong university of science and technology khaled ben letaief roger cheng ross d. murch realizing the performance potential of the virtual interface architecture evan speight hazim abdel-shafi john k. bennett topology-based tracking strategies for personal communication networks this paper explores tracking strategies for mobile users in personal communication networks which are based on the topology of the cells. we introduce the notion of topology-based strategies in a very general form. in particular, the known paging areas, overlapping paging areas, reporting centers, and distance-based strategies are covered by this notion. we then compare two topology-based strategies with the time-based strategy on the line and mesh cell topology.
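a minimal sketch contrasting a distance-based update rule (one of the topology-based strategies named in the abstract above) with the time-based rule it is compared against; the thresholds and the cell-distance metric below are hypothetical:

    def distance_based_update(current_cell, last_reported_cell, threshold, dist):
        """report a location update once the mobile has moved at least
        `threshold` cells away from the last reported cell."""
        return dist(current_cell, last_reported_cell) >= threshold

    def time_based_update(now, last_report_time, period):
        """report a location update every `period` seconds, regardless of movement."""
        return now - last_report_time >= period

    # hypothetical mesh topology: cells are (x, y) grid coordinates
    manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    print(distance_based_update((3, 4), (1, 1), threshold=3, dist=manhattan))  # True
    print(time_based_update(now=130.0, last_report_time=60.0, period=100.0))   # False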
amotz bar-noy ilan kessler mahmoud naghshineh comparing rtl and behavioral design methodologies in the case of a 2m-transistor atm shaper imed moussa zoltan sugar rodolph suescun mario diaz-nava marco pavesi salvatore crudo luca gazi ahmed amine jerraya the turn model for adaptive routing christopher j. glass lionel m. ni comparison of dataflow control techniques in distributed data-intensive systems in dataflow architectures, each dataflow node (i.e., operation) is typically executed on a single physical node. we are concerned with distributed data-intensive systems, in which each base (i.e., persistent) set of data has been declustered over many physical nodes to achieve load balancing. because of large base set size, each operation is executed where the base set resides, and intermediate results are transferred between physical nodes. in such systems, each dataflow node is typically executed on many physical nodes. furthermore, because computations are data-dependent, we cannot know until run time which subset of the physical nodes containing a particular base set will be involved in a given dataflow node. this uncertainty affects program loading, task activation and termination, and data transfer among the nodes. in this paper we focus on the problem of how a dataflow node in such an environment knows when it has received data from all the physical nodes from which it is ever going to receive. we call this the dataflow control problem. the interesting part of the problem is trying to achieve correctness efficiently. we propose three solutions to this problem, and compare them quantitatively by the metrics of total message traffic, message system throughput and data transfer response time. w. alexander g. copeland clock rate versus ipc: the end of the road for conventional microarchitectures the doubling of microprocessor performance every three years has been the result of two factors: more transistors per chip and superlinear scaling of the processor clock with technology generation. our results show that, due to both diminishing improvements in clock rates and poor wire scaling as semiconductor devices shrink, the achievable performance growth of conventional microarchitectures will slow substantially. in this paper, we describe technology-driven models for wire capacitance, wire delay, and microarchitectural component delay. using the results of these models, we measure the simulated performance---estimating both clock rate and ipc---of an aggressive out-of-order microarchitecture as it is scaled from a 250nm technology to a 35nm technology. we perform this analysis for three clock scaling targets and two microarchitecture scaling strategies: pipeline scaling and capacity scaling. we find that no scaling strategy permits annual performance improvements of better than 12.5%, which is far worse than the annual 50-60% to which we have grown accustomed. vikas agarwal m. s. hrishikesh stephen w. keckler doug burger measured performance of data transmission over cellular telephone networks timo alanko markku kojo heimo laamanen mika liljeberg marko moilanen kimmo raatikainen fast and scalable layer four switching in layer four switching, the route and resources allocated to a packet are determined by the destination address as well as other header fields of the packet such as source address, tcp and udp port numbers.
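as a reference point for the layer four switching abstract that continues below, a minimal sketch of least-cost filter matching done by brute-force linear scan over header fields; the grid-of-tries and cross-producting structures described in the abstract exist precisely to avoid this scan, and the filter fields, costs, and names here are hypothetical:

    def prefix_match(prefix, addr):
        """'10.1.*'-style prefixes; '*' matches anything."""
        return prefix == "*" or addr.startswith(prefix.rstrip("*"))

    def classify(packet, filters):
        """return the least-cost filter whose (dst, src, port) fields all match."""
        best = None
        for f in filters:
            if (prefix_match(f["dst"], packet["dst"])
                    and prefix_match(f["src"], packet["src"])
                    and f["port"] in ("*", packet["port"])):
                if best is None or f["cost"] < best["cost"]:
                    best = f
        return best

    filters = [
        {"dst": "10.1.*",   "src": "*",      "port": "*", "cost": 10, "action": "route-a"},
        {"dst": "10.1.2.*", "src": "10.9.*", "port": 80,  "cost": 1,  "action": "firewall"},
    ]
    pkt = {"dst": "10.1.2.7", "src": "10.9.0.3", "port": 80}
    print(classify(pkt, filters)["action"])   # -> 'firewall' (lower cost wins)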
layer four switching unifies firewall processing, rsvp style resource reservation filters, qos routing, and normal unicast and multicast forwarding into a single framework. in this framework, the forwarding database of a router consists of a potentially large number of filters on key header fields. a given packet header can match multiple filters, so each filter is given a cost, and the packet is forwarded using the _least cost matching filter_. in this paper, we describe two new algorithms for solving the least cost matching filter problem at high speeds. our first algorithm is based on a grid-of-tries construction and works optimally for processing filters consisting of two prefix fields (such as destination-source filters) using linear space. our second algorithm, cross-producting, provides fast lookup times for arbitrary filters but potentially requires large storage. we describe a combination scheme that combines the advantages of both schemes. the combination scheme can be optimized to handle pure destination prefix filters in 4 memory accesses, destination-source filters in 8 memory accesses worst case, and all other filters in 11 memory accesses in the typical case. v. srinivasan g. varghese s. suri m. waldvogel type of service wide area networking there are a variety of long-distance circuit types that can be used for data communication. this paper describes an experiment which optimizes the simultaneous use of terrestrial and satellite circuits in a tcp/ip network. traditionally, there have been problems effectively using the full capacity of high bandwidth satellite circuits, due to long round trip times coupled with tcp window-size limitations. the experiment significantly reduced the problem, and the result was effective use of all the available bandwidth of a satellite circuit for a single data transfer. this experiment also demonstrates the use of multiple t1 communication lines, and type of service networking over these lines. j. lekashman m-rpc: a remote procedure call service for mobile clients ajay bakre b. r. badrinath atm virtual private networks asynchronous transfer mode (atm) networks are aimed at supporting a variety of services, including voice, video and data, while allowing the users to efficiently share network resources. efficiency and flexibility in managing the resources are essential to meet the different quality requirements of these services. network resources include trunk bandwidth, switching capacity, and buffer space in the atm cross-connects. sharing these resources is a complex task, since different applications have different traffic characteristics and quality of service (qos) requirements. for example, video is characterized by constant bit rate (cbr) or variable bit rate (vbr) traffic, while voice and data are bursty in nature. the quality requirements also depend on the type of service under consideration---some services, like video and voice, have strict end-to-end delay requirements, whereas others may be carried on a best-effort basis. shivi fotedar mario gerla paola crocetti luigi fratta a parallel embedded-processor architecture for atm reassembly richard f. hobson p. s. wong design and analysis of a large-scale multicast output buffered atm switch h. jonathan chao byeong-seog choe reducing location update cost in a pcs network yi-bing lin from the i-way to the national technology grid rick stevens paul woodward tom defanti charlie catlett scheduling with implicit information in distributed systems andrea c. arpaci-dusseau david e.
culler alan m. mainwaring satellite systems for personal communication networks the paper addresses some issues related to satellite personal communication networks (s-pcns). the role of satellite communications in that scenario is discussed, and some characteristics of s-pcns are identified. in addition, the problem of the integration of s-pcns with the universal mobile telecommunication system (umts) is considered. in this respect an original methodology for accomplishing such integration is proposed; this methodology aims at avoiding complex protocol conversions at the interfaces between the terrestrial and the satellite segment. the paper is partly based upon the work performed by the authors in the framework of the european community insured project "integrated satellite umts real environment demonstrator". fulvio ananasso francesco delli priscoli new design concepts for an intelligent internet geng-sheng kuo jing-pei lin intelligent handoff for mobile wireless internet jon chung-shien wu chieh-wen cheng gin-kou ma nen-fu huang internet nuggets this column consists of selected traffic from the comp.arch newsgroup, a forum for discussion of computer architecture on internet---an international computer network. as always, the opinions expressed in this column are the personal views of the authors, and do not necessarily represent the institutions to which they are affiliated. text which sets the context of a message appears in italics; this is usually text the author has quoted from earlier messages. the code-like expressions below the authors' names are their addresses on internet. mark thorson a distributed measurement technique for an operating ethernet network (abstract only) the objective of the research is to design and implement a monitoring system for an operating ethernet network. several existing monitoring techniques have been examined and contrasted. this examination led to the development of a master/slave monitoring system. the proposed technique counters the drawbacks of existing hybrid techniques by relocating the interface between the node and monitor and establishing a master/slave relationship among the monitors. a monitor observes the network, collects data, and presents results in a usable form. monitors can provide traces or profiles of network transactions. although useful for debugging purposes, traces, or complete records of network traffic, impose excessive storage and processing requirements on a monitor. thus, a profile made up of certain statistics about network traffic is often more practical. these statistics are collected by both the slave and master monitors with the final data reduction occurring in the master monitor. the slave monitor is a passive device that collects information at a node by directly monitoring the transceiver cable, processes the information, and transfers statistical data to the master monitor over a dedicated inter-monitor bus. by using the passive tap, the slave monitors provide distributed measurement without introducing changes in the network nodes or the software running on the nodes. the slave monitors communicate with the master monitor via an economical multidrop twisted pair network. this network provides sufficient bandwidth without the overhead and cost associated with a standard network. the overhead produced by using the network under evaluation as the master-to-slave communication facility is also eliminated.
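a minimal sketch of the kind of per-node profile a slave monitor could accumulate and hand to the master on request (counts only; the class and field names are hypothetical and this is not the iowa state design, which continues below):

    from collections import defaultdict

    class SlaveMonitor:
        """passively counts observed frames per source node; reports on request."""
        def __init__(self):
            self.frames = defaultdict(int)
            self.bytes = defaultdict(int)

        def observe(self, src, length):
            # called for every frame seen on the tapped transceiver cable
            self.frames[src] += 1
            self.bytes[src] += length

        def report(self):
            # statistics shipped to the master over the inter-monitor bus
            stats = {n: (self.frames[n], self.bytes[n]) for n in self.frames}
            self.frames.clear(); self.bytes.clear()
            return stats

    m = SlaveMonitor()
    m.observe("node-7", 1500); m.observe("node-7", 64); m.observe("node-3", 512)
    print(m.report())   # {'node-7': (2, 1564), 'node-3': (1, 512)}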
the slave monitors consist of a small number of off-the-shelf components and derive power from the transceiver cable. these monitors process the data locally and only send the processed results to the master monitor. the slave monitors can be reconfigured by the master monitor to provide maximum flexibility. the master monitor functions as a data collector, data analyzer, and traffic generator, and supports an intelligent user interface. the master monitor can generate traffic with time stamps and can measure several system parameters. the master monitor also can use the information obtained from the slave monitors to provide detailed traffic information for each node. this node-by-node traffic information will provide an accurate profile of the network and of network performance. the master monitor consists of a standard multibus ethernet controller board with an on-board cpu and memory. the software for the master monitor will replace the standard ethernet driver software for the board. the monitor will also contain a standard multibus cpu for additional processing and to provide the user interface. in addition, the master monitor will provide an interface to a personal computer for additional data processing and storage. the monitoring system can support very large networks consisting of multiple networks and bridges. the monitors can be configured in a hierarchical topology with multiple master monitors connected together. each master monitor is responsible for a portion of the total network. these sub-master monitors will communicate with a central master monitor that provides the user interface and central control. this monitoring system is under development at iowa state university and will be tested on a 15-node network that supports a wide variety of traffic. once the monitor system is developed, it can be used for both teaching and research and will help in the development of a distributed file system based on a large collection of unix machines connected via ethernet. d. w. jacobson about maximum transfer rates for fast packet switching networks jean-yves le boudec atm/fr interworking valentin hristov the case for services over cascaded networks anthony d. joseph b. r. badrinath randy h. katz reducing branch costs via branch alignment several researchers have proposed algorithms for basic block reordering. we call these branch alignment algorithms. the primary emphasis of these algorithms has been on improving instruction cache locality, and the few studies concerned with branch prediction reported small or minimal improvements. as wide-issue architectures become increasingly popular, the importance of reducing branch costs will increase, and branch alignment is one mechanism which can effectively reduce these costs. in this paper, we propose an improved branch alignment algorithm that takes into consideration the architectural cost model and the branch prediction architecture when performing the basic block reordering. we show that branch alignment algorithms can improve a broad range of static and dynamic branch prediction architectures. we also show that program performance can be improved by approximately 5% even when using recently proposed, highly accurate branch prediction architectures. the programs are compiled by any existing compiler and then transformed via binary transformations. when implementing these algorithms on an alpha axp 21064, up to a 16% reduction in total execution time is achieved.
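a minimal sketch of the underlying idea of branch alignment, i.e. laying out basic blocks so that each block's most frequently taken successor becomes the fall-through; this is a greedy layout over hypothetical edge frequencies, not the cost-model-driven algorithm of the abstract above:

    def greedy_layout(entry, succ_freq):
        """order basic blocks so each block is followed by its hottest
        not-yet-placed successor, turning likely branches into fall-throughs."""
        layout, placed = [], set()
        work = [entry]
        while work:
            block = work.pop()
            while block is not None and block not in placed:
                layout.append(block); placed.add(block)
                succs = sorted(succ_freq.get(block, {}).items(),
                               key=lambda kv: -kv[1])
                work.extend(b for b, _ in succs[1:])   # colder successors placed later
                block = succs[0][0] if succs else None
        return layout

    # hypothetical profile: block -> {successor: taken frequency}
    cfg = {"b0": {"b1": 90, "b2": 10}, "b1": {"b3": 100}, "b2": {"b3": 100}, "b3": {}}
    print(greedy_layout("b0", cfg))   # ['b0', 'b1', 'b3', 'b2']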
brad calder dirk grunwald active tracking: locating mobile users in personal communication service networks hanoch levy zohar naor sending messages to mobile users in disconnected ad-hoc wireless networks an ad-hoc network is formed by a group of mobile hosts upon a wireless network interface. previous research in this area has concentrated on routing algorithms which are designed for fully connected networks. the usual way to deal with a disconnected ad-hoc network is to let the mobile computer wait for network reconnection passively, which may lead to unacceptable transmission delays. in this paper, we propose an approach that guarantees message transmission in minimal time. in this approach, mobile hosts actively modify their trajectories to transmit messages. we develop algorithms that minimize the trajectory modifications under two different assumptions: (a) the movements of all the nodes in the system are known and (b) the movements of the hosts in the system are not known. qun li daniela rus branch folding in the crisp microprocessor: reducing branch delay to zero a new method of implementing branch instructions is presented. this technique has been implemented in the crisp microprocessor. with a combination of hardware and software techniques the execution time cost for many branches can be effectively reduced to zero. branches are folded into other instructions, making their execution as separate instructions unnecessary. branch folding can reduce the apparent number of instructions needed to execute a program by the number of branches in that program, as well as reducing or eliminating pipeline breakage. statistics are presented demonstrating the effectiveness of branch folding and associated techniques used in the crisp microprocessor. d. r. ditzel h. r. mclellan a two-bit contention-based tdma technique for data transmissions the performance of a contention-based tdma technique is studied in this paper. the frame structure of the time-axis is similar to [1] and [2]. the protocols proposed in [1],[2] and here are all active multiaccess techniques. the protocol in [1] is contention free and suitable for heavy traffic while a contention-based protocol suitable for light traffic is considered in [2]. the protocol to be studied in this paper is also contention in nature and performs considerably better than [2]. this protocol is less complicated than [1] and out-performs [1] unless traffic is very high. performance analyses, both transient and steady-state, have been successfully completed. results obtained include average queue length and packet delay, etc. the validity of analysis is also verified by computer simulations. d tsai j chang a signaling and control architecture for mobility support in wireless atm networks this paper presents a signaling and control architecture for mobility support in a "wireless atm" network that provides integrated broadband services to mobile terminals. a system level protocol architecture for a wireless atm network is outlined. the proposed protocol stack incorporates new wireless link mac, dlc and wireless control sublayers, together with appropriate mobility extensions to the existing atm network control layer. wireless control and atm signaling capabilities required for mobility support are discussed, and preliminary solutions are given for selected major functions. potential extensions to standard q.2931 atm signaling are proposed to support handoff and service parameter/qos renegotiation required for mobility. 
an associated wireless control protocol for supporting terminal migration, resource allocation, and handoff is discussed. preliminary experimental results are given which validate the proposed handoff control protocol on an atm network testbed. r. yuan s. k. biswas l. j. french j. li d. raychaudhuri design & application of a memory-coupled microprocessor network in the present paper, design and applications of a memory-coupled network of four 8080a microprocessors are discussed. the architecture fully exploits all the inherent capabilities of a network, namely flexibility, dynamic reconfiguration, redundancy for fault-tolerance, higher throughput due to parallelism and pipelining, and most effective utilisation of expensive resources. a simple and novel hardware protocol completely eliminates the communications software and provides fast access to shared resources for each processor in the network. the network is extremely versatile and finds application in a vast number of completely divergent areas. p. w. dandekar synthesis of pipelined dsp accelerators with dynamic scheduling patrick schaumont bart vanthournout ivo bolsens hugo de man parallel access to files in the vesta file system d. g. feitelson p. f. corbett j.-p. prost s. j. baylor stochastic control of path optimization for inter-switch handoffs in wireless atm networks one of the major design issues in wireless atm networks is the support of inter-switch handoffs. an inter-switch handoff occurs when a mobile terminal moves to a new base station connected to a different switch. apart from resource allocation at the new base station, inter-switch handoff also requires connection rerouting. with the aim of minimizing the handoff delay while using the network resources efficiently, the two-phase handoff protocol uses path extension for each inter-switch handoff, followed by path optimization if necessary. the objective of this paper is to determine when and how often path optimization should be performed. the problem is formulated as a semi-markov decision process. link cost and signaling cost functions are introduced to capture the tradeoff between the network resources utilized by a connection and the signaling and processing load incurred on the network. the time between inter-switch handoffs follows a general distribution. a stationary optimal policy is obtained when the call termination time is exponentially distributed. numerical results show significant improvement over four other heuristics. vincent w. s. wong mark e. lewis victor c. m. leung off-line permutation embedding and scheduling in multiplexed optical networks with regular topologies chunming qiao yousong mei a control and management network for wireless atm systems this paper describes the design of a control and management network (orderwire) for a mobile wireless asynchronous transfer mode (atm) network. this mobile wireless atm network is part of the rapidly deployable radio network (rdrn). the orderwire system consists of a packet radio network which overlays the mobile wireless atm network. each network element in this network uses global positioning system (gps) information to control a beamforming antenna subsystem which provides for spatial reuse. this paper also proposes a novel virtual network configuration (vnc) algorithm for predictive network configuration. a mobile atm private network--network interface (pnni) based on vnc is also discussed.
finally, as a prelude to the system implementation, results of a maisie simulation of the orderwire system are discussed. stephen f. bush sunil jagannath ricardo sanchez joseph b. evans gary j. minden k. sam shanmugan victor s. frost performance of a joint cdma/prma protocol with heavy-tailed on/off source the performance of a joint cdma/prma protocol with heavy-tailed on/off source has been studied. compared with the random access scheme, the prma protocol improves the system performance (such as packet loss, throughput) whether the traffic is srd or lrd. the less bursty the traffic is, the greater the improvement. the buffer design should take into account knowledge about the network traffic such as the presence or absence of the noah effect in a typical source, especially of \alpha_{\mathrm{on}}, the intensity of the noah effect of the on-period. the smaller \alpha_{\mathrm{on}} is, the smaller the buffering gain, and the more packets will be lost. lrd has an impact on the overall system performance. the noah effect, especially \alpha_{\mathrm{off}}, the intensity of the noah effect of the off-period, has significant impact on the overall system performance such as capacity, time delay, etc. as \alpha_{\mathrm{off}} gets closer to 1, the traffic becomes more bursty, the system capacity decreases, and the time delay increases. m. wang z. s. wang wei lu j. l. lin d. r. chen the clipper processor: instruction set architecture and implementation intergraph's clipper microprocessor is a high performance, three chip module that implements a new instruction set architecture designed for convenient programmability, broad functionality, and easy future expansion. w. hollingsworth h. sachs a. j. smith performance analysis of a fault detection scheme in multiprocessor systems a technique is described for detecting and diagnosing faults at the processor level in a multiprocessor system. in this method, a process is assigned whenever possible to two processors: the processor that it would normally be assigned to (primary) and an additional processor which would otherwise be idle (secondary). two strategies will be described and analyzed: one which is preemptive and another which is non-preemptive. it is shown that for moderately loaded systems, a sufficient percentage of processes can be performed redundantly using the system's spare capacity to provide a basis for fault detection and diagnosis with virtually no degradation of response time. anton t. dahbura krishan k. sabnani william j. hery the performance of query control schemes for the zone routing protocol zygmunt j. haas marc r. pearlman cost-bandwidth tradeoffs for communication networks c. p. kruskal m. snir receiver-driven layered multicast state-of-the-art, real-time, rate-adaptive multimedia applications adjust their transmission rate to match the available network capacity. unfortunately, this source-based rate-adaptation performs poorly in a heterogeneous multicast environment because there is no single target rate --- the conflicting bandwidth requirements of all receivers cannot be simultaneously satisfied with one transmission rate. if the burden of rate-adaptation is moved from the source to the receivers, heterogeneity is accommodated. one approach to receiver-driven adaptation is to combine a layered source coding algorithm with a layered transmission system. by selectively forwarding subsets of layers at constrained network links, each user receives the best quality signal that the network can deliver.
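a minimal sketch of the receiver-driven layer subscription idea this abstract develops: each receiver holds a prefix of the layer set, dropping a layer on sustained loss and probing upward otherwise. the rlm protocol described below adds join-experiments and shared learning on top of this; the class name and thresholds here are hypothetical:

    class LayeredReceiver:
        """tracks how many cumulative layers (multicast groups) are joined."""
        def __init__(self, max_layers, loss_drop=0.05):
            self.max_layers = max_layers
            self.loss_drop = loss_drop
            self.layers = 1                      # always keep the base layer

        def on_measurement(self, loss_rate):
            # called once per measurement interval with the observed loss rate
            if loss_rate > self.loss_drop and self.layers > 1:
                self.layers -= 1                 # congestion: leave the top group
            elif loss_rate == 0.0 and self.layers < self.max_layers:
                self.layers += 1                 # headroom: probe one layer upward
            return self.layers

    r = LayeredReceiver(max_layers=4)
    for loss in [0.0, 0.0, 0.12, 0.0]:
        print(r.on_measurement(loss))            # 2, 3, 2, 3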
we and others have proposed that selective-forwarding be carried out using multiple ip-multicast groups where each receiver specifies its level of subscription by joining a subset of the groups. in this paper, we extend the multiple group framework with a rate-adaptation protocol called receiver-driven layered multicast, or rlm. under rlm, multicast receivers adapt to both the static heterogeneity of link bandwidths as well as dynamic variations in network capacity (i.e., congestion). we describe the rlm protocol and evaluate its performance with a preliminary simulation study that characterizes user- perceived quality by assessing loss rates over multiple time scales. for the configurations we simulated, rlm results in good throughput with transient short-term loss rates on the order of a few percent and long-term loss rates on the order of one percent. finally, we discuss our implementation of a software-based internet video codec and its integration with rlm. steven mccanne van jacobson martin vetterli parallel processing architecture for the hitachi s-3800 shared-memory vector multiprocessor this paper discusses the architecture of the new hitachi supercomputer series, which is capable of achieving 8 gflops in each of up to four processors. this architecture provides high-performance processing for fine-grain parallelism, and it allows efficient parallel processing even in an undedicated environment. it also features the newly- developed time-limited spin-loop synchronization, which combines spin-loop synchronization with operating system primitives, and a communication buffer (cb) which caches shared variables for synchronization, thus allowing them to be accessed faster. three new instructions take advantage of the cb in order to reduce the parallel overhead. the results of performance measurements confirm the effectiveness of the cb and the new instructions. katsuyoshi kitai tadaaki isobe yoshikazu tanaka yoshiko tamaki masakazu fukagawa teruo tanaka yasuhiro inagami performance evaluation of isolated and interconnected token bus local area networks the token bus based local area network, redpuc, designed and implemented at the pontificia universidade catolica do rio de janeiro is briefly described. analytic models are presented, which allow one to obtain an approximation for the average packet delay, as well as exact upper and lower bounds for the same performance measure. a performance evaluation of interconnected local networks is also given. daniel a. menasce leonardo lellis p. leite connection admission control for capacity-varying networks with stochastic capacity change times many connection-oriented networks, such as low earth orbit satellite (leos) systems and networks providing multipriority service using advance reservations, have capacities which vary over time. connection admission control (cac) policies which only use current capacity information may lead to intolerable dropping of admitted connections whenever network capacity decreases. we present the admission limit curve (alc) for capacity-varying networks with random capacity change times. we prove the alc is a constraint limiting the conditions under which any connection-stateless cac policy may admit connections and still meet dropping guarantees on an individual connection basis. the alc also leads to a lower bound on the blocking performance achievable by any connection-stateless cac policy which provides dropping guarantees to individual connections. 
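as a generic point of reference for the blocking performance mentioned above (and not the paper's admission-limit-curve analysis), the classical erlang-b loss formula is often used as a baseline; its standard recursion is sketched below.

    def erlang_b(servers, offered_load):
        # erlang-b blocking probability via the standard recursion
        # b(0) = 1, b(n) = a*b(n-1) / (n + a*b(n-1))
        b = 1.0
        for n in range(1, servers + 1):
            b = offered_load * b / (n + offered_load * b)
        return b

    # e.g. 20 circuits offered 15 erlangs of traffic
    print(round(erlang_b(20, 15.0), 4))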
in addition, we describe a cac policy for stochastic capacity change times which uses knowledge about future capacity changes to provide dropping guarantees on an individual connection basis and which achieves blocking performance close to the lower bound. j. siwko i. rubin a credit manager for traffic regulation in high-speed networks: a queueing analysis kin k. leung raymond w. yeung bhaskar sengupta behavioral characterization of decoupled access/execute architecture d. windheiser w. jalby routing in multidomain networks dragomir d. dimitrijevic basil maglaris robert r. boorstyn effective null pointer check elimination utilizing hardware trap motohiro kawahito hideaki komatsu toshio nakatani mixing traffic in a buffered banyan network a model of banyan networks is developed to account for their operational as well as topological characteristics. the model incorporates a description of the states of individual switching nodes and explicitly captures the interconnection patterns between stages. accordingly, the performance of a network can be evaluated via an iterative procedure. using the model, we analyzed one class of nonuniform traffic pattern referred to as "point-to- point" traffic that has particular significance to mixed voice, data, and video applications. the results obtained indicate that even a single dedicated channel (with point-to-point traffic) can significantly limit the throughput of the background uniform traffic. the paper concludes with several switch design strategies for switching point-to-point traffic using a self- routing switching network. l. t. wu array processor with multiple broadcasting v. k. prasanna kumar c. s. raghavendra adapation in a ubiquitous computing management architecture markus lauff hans- werner gellersen branch history table prediction of moving target branches due to subroutine returns david r. kaeli philip g. emma comments on "minimum-latency transport protocols with modulo-n incarnation numbers" andrás l. oláh sonia m. heemstra de groot a new model for scheduling packet radio networks packet radio networks are modeled as arbitrary graphs by most researchers. in this paper we show that an arbitrary graph is an inaccurate model of the radio networks. this is true because there exists a large class of graphs which will not model the radio networks. radio networks can be modeled accurately by a restricted class of graphs called the planar point graphs. since the radio networks can accurately be modeled only by a restricted class of graphs, the np- completeness results for scheduling using an arbitrary graph as the model, do not correctly reflect the complexity of the problem. in this paper we study the broadcast scheduling problem using the restricted class as the model. we show that the problem remains np-complete even in this restricted domain. we give an o(n log n) algorithm when all the transceivers are located on a line. arunabha sen mark l. huson a vector and array multiprocessor extension of the sylvan architecture the main intent of this paper will be the description of a multiprocessor system that uses microprogrammed hardware to support operating system primitives that contribute to its high performance vector processing capability. the system consists of nodes that communicate over a system interconnect. 
each node is a tripartite subsystem that consists of a host processor complex running application code, a vector co-processor and a kernel support processor that handles both operating system functions and control of the vector co-processor. the microcoded kernel processor is used to support a message based operating system that allows concurrent processes to communicate while residing in the same node or in different nodes. since the kernel processor controls the functioning of the vector co-processor as well as the management of processes (for example, context switching), the node can utilize the resources of the co-processor very effectively. f. j. burkowski observations on the dynamics of a congestion control algorithm: the effects of two-way traffic lixia zhang scott shenker daivd d. clark wireless atm: an enabling technology for multimedia personal communication an atm-based wireless network capable of supporting integrated voice, video and data services with quality-of-service (qos) control is proposed as a key element of the future distributed multimedia computing scenario. a specific architecture for "wireless atm" is described, and design issues are briefly discussed for each major functional layer of the network. the system approach is based on the incorporation of wireless channel specific medium access, data link and wireless control layers into a mobility- enhanced atm protocol stack. selected software emulation results are given for applicable medium access control (mac) and data link control (dlc) protocols. the paper concludes with a brief view of related ongoing prototyping activities aimed at demonstrating a seamless wired plus wireless multimedia networking environment. d. raychaudhuri delivery of time-critical messages using a multiple copy approach reliable and timely delivery of messages between processing nodes is essential in distributed real-time systems. failure to deliver a message within its deadline usually forces the system to undertake a recovery action, which introduces some cost (or overhead) to the system. this recovery cost can be very high, especially when the recovery action fails due to lack of time or resources. proposed in this paper is a scheme to minimize the expected cost incurred as a result of messages failing to meet their deadlines. the scheme is intended for distributed real-time systems, especially with a point-to-point interconnection topology. the goal of minimizing the expected cost is achieved by sending multiple copies of a message through disjoint routes and thus increasing the probability of successful message delivery within the deadline. however, as the number of copies increases, the message traffic on the network increases, thereby increasing the delivery time for each of the copies. there is therefore a tradeoff between the number of copies of each message and the expected cost incurred as a result of messages missing their deadlines. the number of copies of each message to be sent is determined by optimizing this tradeoff. simulation results for a hexagonal mesh and a hypercube topology indicate that the expected cost can be lowered substantially by the proposed scheme. parameswaran ramanathan kang g. shin netman: the design of a collaborative wearable computer system this paper presents a wearable groupware system designed to enhance the communication and cooperation of highly mobile network technicians. 
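referring back to the multiple-copy delivery scheme summarized a few sentences earlier, the copy-count tradeoff can be illustrated numerically with a toy model in which each extra copy adds traffic and so raises every copy's chance of missing its deadline; the miss probabilities and costs below are invented, not the paper's model.

    def expected_cost(copies, base_miss=0.2, load_penalty=0.03,
                      recovery_cost=100.0, per_copy_cost=1.0):
        # each copy independently misses its deadline with a probability
        # that grows with the traffic the extra copies add to the network
        per_copy_miss = min(1.0, base_miss + load_penalty * (copies - 1))
        all_miss = per_copy_miss ** copies     # recovery needed only if every copy misses
        return recovery_cost * all_miss + per_copy_cost * copies

    for k in range(1, 6):
        print(k, round(expected_cost(k), 2))
    print("best copy count:", min(range(1, 6), key=expected_cost))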
it provides technicians in the field with the capabilities for real-time audio-conferencing, transmission of video images back to the office, and context-sensitive access to a shared notebook. an infrared location-tracking device allows for the automatic retrieval of notebook entries depending on the user's current location. gerd kortuem martin bauer zary segall fundamental challenges in mobile computing m. satyanarayanan pushing dependent data in clients-providers-servers systems in satellite and wireless networks and in advanced traffic information systems in which the up-link bandwidth is very limited, a server broadcasts data files in a round-robin manner. the data files are provided by different providers and are accessed by many clients. the providers are independent and therefore files may share information. the clients who access these files may have different patterns of access. some clients may wish to access more than one file at a time in any order, some clients may access one file out of several files, and some clients may wish to access a second file only after accessing another file. the goal of the server is to order the files in a way that minimizes the access time of the clients given some a-priori knowledge of their access patterns. this paper introduces a clients-providers-servers model that represents certain environments better than the traditional clients-servers model. then, we show that a random order of the data files performs well independent of the specific access pattern. our main technical contribution is showing how to de-randomize the randomized algorithm that is based on selecting a random order. the resulting algorithm is a polynomial time deterministic algorithm that finds an order that achieves the bounds of the random order. amotz bar-noy joseph naor baruch schieber the impact of delayed acknowledgments on tcp performance over satellite links the performance of the transmission control protocol (tcp) over network paths that include a wireless or a satellite link is a very active area of research. a number of papers have been published proposing different ways to improve tcp over these kinds of links, either by new tcp functionality, or by link level enhancements. very few of these papers consider the delayed acknowledgment strategy that is implemented by many tcp receivers, when presenting their results. in this paper we will show that this acknowledgment strategy can have a significant impact on the performance of tcp and should not be neglected when evaluating new solutions for wireless environments. tanja lang daniel floreani a mesh/token ring hybrid-architecture lan this paper presents a hybrid architecture for local area networks (lans) which combines the advantages of token ring and mesh networks while remaining relatively simple. each node may have two communication channels. the first is a token ring and is used for short or prioritized message data transmissions. the other consists of an arbitrary collection of data links, which are used for relatively high volume and low priority data messages. with this architecture, a lan can be installed in the form of a token ring and can be upgraded and "tuned" to its application environment by the insertion of node-to-node data links. simulation results indicate that even with rather simple network control and routing algorithms, performance is improved over token ring due to data transmission concurrency greater than one. c. kang j.
herzog design choices in the shrimp system: an empirical study the shrimp cluster- computing system has progressed to a point of relative maturity; a variety of applications are running on a 16-node system. we have enough experience to understand what we did right and wrong in designing and building the system. in this paper we discuss some of the lessons we learned about computer architecture, and about the challenges involved in building a significant working system in an academic research environment. we evaluate significant design choices by modifying the network interface firmware and the system software in order to empirically compare our design to other approaches. matthias a. blumrich richard d. alpert yuqun chen douglas w. clark stefanos n. damianakis cezary dubnicki edward w. felten liviu iftode kai li margaret martonosi robert a. shillner what is scalability? mark d. hill application of sampling methodologies to network traffic characterization kimberly c. claffy george c. polyzos hans-werner braun a real-time expert system for computer network monitor and control semacs, is a continuous, real-time expert monitor and control system that actively and passively monitors a computer network. it detects and diagnoses hardware and software problems with the network and provides advice and solutions to an operator in his domain of influence. it was developed jointly by sperry and one of its customers during an expert systems apprenticeship with the sperry knowledge systems center. the prototype system, which includes all major components of the final, operational system, was developed over a period of six weeks, and an operational system was put on- line within 3 months of starting the project. barton b dunning john switlik guarded execution and branch prediction in dynamic ilp processors d. n. pnevmatikatos g. s. sohi editorial ouri wolfson implementing tcp/ip on a cray computer david a. borman crash failures vs. crash + link failures anindya basu bernadette charron-bost sam toueg editorial m. scott corson james a. freebersyser ambatipudi sastry performance simulation of a token ring: users' view cheoul-shin kang e. k. park internet/osi application migration/portability henry lowe battery-powered distributed systems (extended abstract) paul j. m. havinga arne helme sape j. mullender gerard j. m. smit jaap smit resources section: web sites michele tepper logical conditional instructions mitchell h. clifton static and dynamic polling mechanisms for fieldbus networks prasad raja guevara noubir integration of circuit and packet switched transport in a 3rd generation mobile network andrea calvi francisco cano hila hmipv6: a hierarchical mobile ipv6 proposal claude castelluccia construction of internet for japanese academic communities wide (widely integrated distributed environment) is a research project aimed at achieving a transparent distributed environment over heterogeneous distributed computing elements with the consideration of various types of connections for internetworking. the target environment of the research is computing environment in the academic and research communities especially in japan. the wide project started its research activities at the end of 1986. the initial purpose of the group was to design the future junet environment. in japan junet has been the only network that provides connectivities among academic and research institutes in both private and public organizations. 
in order to provide better computer communication with the network, interconnection of networks based on open architecture is strongly required. as the result of various studies, the project started its actual design in late 1987 and has been establishing working environments by connecting several universities with various types of links. together with the actual state and the future plan of the wide project, this paper discusses the overview of technologies being employed for the achievement of project goals such as networking technologies, operating system architecture and name space. as the background status of the wide projects, this paper also reports an overview of existing academic and research computer networks in japan, especially focusing on university environments where the sharing of the large scale computing resources are provided. j. murai h. kusumoto s. yamaguchi a. kato an algorithm for distributed computation of a spanningtree in an extended lan a protocol and algorithm are given in which bridges in an extended local area network of arbitrary topology compute, in a distributed fashion, an acyclic spanning subset of the network. the algorithm converges in time proportional to the diameter of the extended lan, and requires a very small amount of memory per bridge, and communications bandwidth per lan, independent of the total number of bridges or the total number of links in the network. algorhyme i think that i shall never see a graph more lovely than a tree. a tree whose crucial property is loop-free connectivity. a tree which must be sure to span so packets can reach every lan. first the root must be selected by id it is elected. least cost paths from root are traced. in the tree these paths are placed. a mesh is made by folks like me then bridges find a spanning tree. radia perlman the effectiveness of multiple hardware contexts multithreaded processors are used to tolerate long memory latencies. by executing threads loaded in multiple hardware contexts, an otherwise idle processor can keep busy, thus increasing its utilization. however, the larger size of a multi- thread working set can have a negative effect on cache conflict misses. in this paper we evaluate the two phenomena together, examining their combined effect on execution time. the usefulness of multiple hardware contexts depends on: program data locality, cache organization and degree of multiprocessing. multiple hardware contexts are most effective on programs that have been optimized for data locality. for these programs, execution time dropped with increasing contexts, over widely varying architectures. with unoptimized applications, multiple contexts had limited value. the best performance was seen with only two contexts, and only on uniprocessors and small multiprocessors. the behavior of the unoptimized applications changed more noticeably with variations in cache associativity and cache hierarchy, unlike the optimized programs. as a mechanism for exploiting program parallelism, an additional processor is clearly better than another context. however, there were many configurations for which the addition of a few hardware contexts brought as much or greater performance than a larger multiprocessor with fewer than the optimal number of contexts. radhika thekkath susan j. 
eggers a model of the contention resolution time for binary tree protocols in a binary tree type local area network (lan) protocol, the number of probe slots (k) needed for contention resolution, when m out of n terminals are contending for bus access, depends not only on the value of m but also on the pattern of the active terminals, i.e., where the physical addresses of the active terminals are located in the logical binary tree structure. the value of m depends on the traffic intensity and increases with the packet arrival rate. in this paper, we develop simple mathematical models for the upper and lower bounds on k as a function of m, with n as a parameter, taking into account the above active terminal patterns. the models developed are useful in determining the maximum and minimum transmission frame durations for the performance evaluation of binary type lan protocols and other reservation type lan protocols which employ a binary tree type probing algorithm. in the case of integrated traffic, when the frame duration is to be fixed, the model makes it possible to optimize bandwidth allocation to different types of traffic. jagan p. agrawal mary l. gerken real-time support in multihop wireless networks personal communications and mobile computing will require a wireless network infrastructure which is fast deployable, possibly multihop, and capable of multimedia service support. the first infrastructure of this type was the packet radio network (prnet), developed in the 70's to address the battlefield and disaster recovery communication requirements. prnet was totally asynchronous and was based on a completely distributed architecture. it handled datagram traffic reasonably well, but did not offer efficient multimedia support. recently, under the wamis (wireless adaptive mobile information systems) and glomo arpa programs several mobile, multimedia, multihop (m3) wireless network architectures have been developed, which assume some form of synchronous, time division infrastructure. the synchronous time frame leads to efficient multimedia support implementations. however, it introduces more complexity and is less robust in the face of mobility and channel fading. in this paper, we examine the impact of synchronization on wireless m3 network performance. first, we introduce maca/pr, an asynchronous network based on the collision avoidance mac scheme employed in the ieee 802.11 standard. then, we evaluate and compare several wireless packet networks ranging from the totally asynchronous prnet to the synchronized cluster tdma network. we examine the tradeoffs between time synchronization and performance in various traffic and mobility environments. in the standard token ring priority mechanism, when the token priority p is higher than the reservation (r), the token should make up to p round-trips, where p is the number of priority levels, before p is reduced to r. during this time period, no station may seize the token and send a message. this leads to loss of bandwidth. the paper presents a new priority mechanism that retains the desired properties of the standard. however, in the new protocol when p > r holds, p is reduced to r in at most 1 round-trip rather than in up to p round-trips. reuven cohen adrian segall comparative performance of circuit-switched networks based on blocking probability a class of dynamic parallel processor interconnection networks, called circuit-switched networks, is composed of layers of small crossbar elements. although such networks provide full connectivity, they are often called blocking networks, since contentions for network links sometimes block message pathways.
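before the study summary continues, a minimal point of reference for blocking in such multistage fabrics is the classical link-independence (lee-style) estimate for a three-stage network; the sketch below is only that baseline with invented occupancy values, not the study's own analysis.

    def lee_blocking(link_occupancy, middle_paths):
        # lee's independence approximation for a three-stage network:
        # a path through one middle switch needs two idle links, and the
        # connection blocks only if every parallel path is blocked
        p = link_occupancy
        path_blocked = 1.0 - (1.0 - p) ** 2
        return path_blocked ** middle_paths

    for k in (4, 8, 16):
        print(k, round(lee_blocking(0.5, k), 6))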
the results from a study are reported in which the goal was to determine the effect of variations in a network's topology to aid in the selection of more optimal architectures. three approaches were used in the study to determine the relative blocking performance of networks based on topology. several popular topologies were analyzed by these approaches with respect to availability and probability of blocking. a discussion of the results provides insight into the design of networks containing large numbers of computer nodes by showing why some types of topologies produce more efficient networks than do others. c. hein hardware/software tradeoffs for increased performance most new computer architectures are concerned with maximizing performance by providing suitable instruction sets for compiled code and providing support for systems functions. we argue that the most effective design methodology must make simultaneous tradeoffs across all three areas: hardware, software support, and systems support. recent trends lean towards extensive hardware support for both the compiler and operating systems software. however, consideration of all possible design tradeoffs may often lead to less hardware support. several examples of this approach are presented, including: omission of condition codes, word-addressed machines, and imposing pipeline interlocks in software. the specifics and performance of these approaches are examined with respect to the mips processor. john hennessy norman jouppi forest baskett thomas gross john gill preface: special issue on measurement and modeling of computer systems herbert d. schwetman distributing collective adaptation via message passing thomas haynes efficient hierarchical interconnection for multiprocessor systems s. wei s. levy erlang capacity and uniform approximations for shared unbuffered resources debasis mitra john a. morrison conferences jay blickstein impulse: a high performance processing unit for multiprocessors for scientific calculation in this paper, we propose a high performance processing unit for multiprocessor systems for scientific calculations. this processing unit is called impulse. impulse is equipped with a hardware process control mechanism, and a powerful floating point processor and its controller. the process control method is based on the concurrent process model called the nc model. in the nc model, the processes and their communicating channels are static, and it is relatively easy to implement the interprocess communication server and process scheduler in hardware according to this model. to enhance the system performance, impulse is composed of three parts, the task engine, ipc engine, and fpp engine. from the results of simulations, it appears that the ipc engine provides efficient process control even if the granularity of the processes is very fine. t. boku s. nomura h. amano supporting transactional cache consistency in mobile database systems sangkeun lee chong-sun hwang heongchang yu providing reliable and fault tolerant broadcast delivery in mobile ad-hoc networks mobile ad-hoc networks are making a new class of mobile applications feasible. they benefit from the fast deployment and reconfiguration of the networks, are mainly characterized by the need to support many-to-many interaction schema within groups of cooperating mobile hosts and are likely to use replication of data objects to achieve performances and high data availability. 
this strong group orientation requires specialized solutions that combine adaptation to the fully mobile environment and provide an adequate level of fault tolerance. in this paper, we present the reliable broadcast protocol that has been purposely designed for mobile ad-hoc networks. the reliable broadcast service ensures that all the hosts in the network deliver the same set of messages to the upper layer. it represents the building block to obtain higher broadcast and multicast services with stronger guarantees and is an efficient and reliable alternative to flooding. the protocol is constructed on top of the wireless mac protocol, which in turn sits over the clustering protocol. it provides exactly-once message delivery semantics and tolerates communication failures and host mobility. temporary disconnections and network partitions are also tolerated under the assumption that they are eventually repaired, as specified by a liveness property. the termination of the protocol is proved and complexity and performance analyses are also provided. elena pagani a security architecture for computational grids ian foster carl kesselman gene tsudik steven tuecke fault-tolerant routing in hypercubes using masked interval routing scheme mahmoud al-omari mohammed mahafzah system architecture directions for networked sensors jason hill robert szewczyk alec woo seth hollar david culler kristofer pister scalable qos provision through buffer management in recent years, a number of link scheduling algorithms have been proposed that greatly improve upon traditional fifo scheduling in being able to assure rate and delay bounds for individual sessions. however, they cannot be easily deployed in a backbone environment with thousands of sessions, as their complexity increases with the number of sessions. in this paper, we propose and analyze an approach that uses a simple buffer management scheme to provide rate guarantees to individual flows (or to a set of flows) multiplexed into a common fifo queue. we establish the buffer allocation requirements to achieve these rate guarantees and study the trade-off between the achievable link utilization and the buffer size required with the proposed scheme. the aspect of fair access to excess bandwidth is also addressed, and its mapping onto a buffer allocation rule is investigated. numerical examples are provided that illustrate the performance of the proposed schemes. finally, a scalable architecture for qos provisioning is presented that integrates the proposed buffer management scheme with wfq scheduling that uses a small number of queues. r. guerin s. kamat v. peris r. rajan pushing functionality into even smaller devices cameron miner a performance comparison of four supercomputers margaret l. simmons harvey j. wasserman olaf m. lubeck christopher eoyang raul mendez hiroo harada misako ishiguro architecture and experimental framework for supporting qos in wireless networks using differentiated services this paper describes the design and implementation of an enhanced differentiated services (diffserv) architectural framework for providing quality of service (qos) in wireless networks. the diffserv architecture has been recently proposed to complement the integrated services (intserv) model for providing qos in the wired internet. the framework takes into consideration several factors, including signaling requirements, mobility, losses, lower wireless bandwidth, and battery power constraints.
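as an aside, the per-packet mechanism such a diffserv framework ultimately relies on is marking the ds field of each ip header; a minimal sketch of marking a udp socket with the standard expedited-forwarding codepoint (46) follows, using the linux-style ip_tos socket option. the address and everything beyond the codepoint value are illustrative.

    import socket

    EF_DSCP = 46                      # expedited forwarding codepoint
    TOS_VALUE = EF_DSCP << 2          # dscp occupies the upper six bits of the tos byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

    # datagrams sent on this socket now carry the ef marking, so a
    # diffserv-capable path can apply the corresponding per-hop behavior
    sock.sendto(b"probe", ("192.0.2.1", 9))   # rfc 5737 test address, discard port
    sock.close()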
it identifies the need for supporting signaling and mobility in wireless networks. the framework and mechanisms have been implemented in the wireless testbed at washington state university. experimental results from this testbed show the validity of the proposed diffserv model and also provide performance analyses. the framework is also designed to be extensible so that other researchers may use our implementation as a foundation for implementing other wireless network algorithms and mechanisms. indu mahadevan krishna m. sivalingam bi-directional networks for large parallel processors matthew moore charles mcdowell tree lans with collision avoidance: protocol, switch architecture, and simulated performance packet collisions and their resolution create a performance bottleneck in random access lans. a hardware solution to this problem is to use collision avoidance switches. these switches allow the implementation of random access protocols without the penalty of collisions among packets. we describe the designs of some tree lans that use collision avoidance switches. the collision avoidance lans we describe are broadcast star and camb tree (collision avoidance multiple broadcast tree). we next present a possible implementation of a collision avoidance switch using currently available photonic devices. finally, we show the performance of broadcast star and camb tree networks using simulations. t. suda s. morris t. nguyen mobile wireless computing: challenges in data management tomasz imielinski b. r. badrinath a scalable and efficient intra-domain tunneling mobile-ip scheme ashar aziz exploiting spatial locality in data caches using spatial footprints modern cache designs exploit spatial locality by fetching large blocks of data called cache lines on a cache miss. subsequent references to words within the same cache line result in cache hits. although this approach benefits from spatial locality, less than half of the data brought into the cache gets used before eviction. the unused portion of the cache line negatively impacts performance by wasting bandwidth and polluting the cache by replacing potentially useful data that would otherwise remain in the cache.this paper describes an alternative approach to exploit spatial locality available in data caches. on a cache miss, our mechanism, called spatial footprint predictor (sfp), predicts which portions of a cache block will get used before getting evicted. the high accuracy of the predictor allows us to exploit spatial locality exhibited in larger blocks of data yielding better miss ratios without significantly impacting the memory access latencies. our evaluation of this mechanism shows that the miss rate of the cache is improved, on average, by 18% in addition to a significant reduction in the bandwidth requirement. sanjeev kumar christopher wilkerson a microcode-based environment for noninvasive performance analysis we have developed an environment which allows us to collect data for performance analysis by modifying the microcode of a vax 8600. this use of microprogramming permits data to be collected with minimal system perturbation (i.e. the data is almost as good as that obtained with a hardware monitor) but at the cost and with the ease of use of a software simulator. in this paper we describe the environment that we have developed and present two examples of its use. the first example, procedure call instrumentation, illustrates a technique for gathering data on how certain architectural features are used. 
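a loose software analogue of that procedure-call instrumentation, just to make the kind of data concrete (it has nothing to do with vax microcode): python's profiling hook can count procedure entries noninvasively with respect to the measured code.

    import sys
    from collections import Counter

    call_counts = Counter()

    def profiler(frame, event, arg):
        # count every procedure (function) entry, keyed by function name
        if event == "call":
            call_counts[frame.f_code.co_name] += 1

    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    sys.setprofile(profiler)     # start observing call events
    fib(10)
    sys.setprofile(None)         # stop observing
    print(call_counts["fib"])    # number of procedure calls made by fib(10)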
the second example, instruction tracing, illustrates a technique for collecting data that can then be used in trace-driven simulation. s. w. melvin y. n. patt a comparison of priority-based decentralized load balancing policies load balancing policies in distributed systems divide jobs into two classes; those processed at their site of origination (local jobs) and those processed at some other site in the system after being transferred through a communication network (remote jobs). this paper considers a class of decentralized load balancing policies that use a threshold on the local job queue length at each host in making decisions for remote processing. they differ from each other according to how they assign priorities to each of these job classes, ranging from one providing favorable treatment to local jobs to one providing favorable treatment to remote jobs. under each policy, the optimal load balancing problem is formulated as an optimization problem with respect to the threshold parameter. the optimal threshold is obtained numerically using matrix-geometric formulation and an iteration method. last, we consider the effects that the job arrival process can have on performance. one expects that load balancing for systems operating in an environment of bursty job arrivals should be more beneficial than for an environment with random job arrivals. this fact is observed through numerical examples. kyoo jeong lee don towsley m-udp: udp for mobile cellular networks in this paper we present our implementation of a modified udp protocol appropriate for the mobile networking environment. our protocol, like udp, does not guarantee reliable delivery of datagrams. however, unlike udp, it does ensure that the number of lost datagrams is kept small. in this paper we discuss our implementation of m-udp (in netbsd) and compare its performance against that of udp in an experimental mobile network that is currently under development at the university of south carolina. kevin brown suresh singh fast protocol transition in a distributed environment (brief announcement) adaptivity is a desired feature of distributed systems. because many characteristics of the environment (network topology, active process distribution, etc.) may change from time to time, a good system should be able to adapt itself and perform sufficiently well under different conditions. modern distributed systems are generally built from a set of components. such a system has the freedom to adapt itself by switching from using one component to another. because most components in distributed systems are running protocols, an agreement must be achieved among the processes when doing the adaptation. the traditional approach to the protocol switch is to use the two-phase-commit algorithm, in which a coordinator first broadcasts a "prepare" message, and all the other processes pause their work and send back acknowledgments. each process is buffering messages from its own application at this point. after the coordinator receives all the acknowledgments, it broadcasts a "switch" message, and upon receiving it all the processes resume working using the new configuration. this approach is clean and easy to implement. however, it has two shortcomings: (1) the reconfiguration is not "smooth", i.e., the overhead is large; (2) it is not scalable due to the centralized scheme. we propose a method which allows the protocol switch with very little overhead. it is scalable as well.
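a minimal sketch of the kind of buffered, state-converting switch that the following description outlines; the class, the conversion function, and the protocol names are placeholders, not the authors' toolkit.

    class SwitchingProcess:
        # toy protocol switch: buffer new messages, convert state, resume
        def __init__(self, protocol, state):
            self.protocol = protocol       # e.g. "sequencer" or "token"
            self.state = state             # protocol-specific local state
            self.buffering = False
            self.pending = []              # application messages held during the switch

        def on_switch(self, new_protocol, convert):
            # stop the old protocol and start buffering application messages
            self.buffering = True
            # (in the real scheme, peers would exchange whatever convert() needs here)
            # convert the local state and resume under the new protocol immediately
            self.state = convert(self.state)
            self.protocol = new_protocol
            self.buffering = False
            pending, self.pending = self.pending, []
            return pending                 # replay buffered messages under the new protocol

        def send(self, msg):
            if self.buffering:
                self.pending.append(msg)
            else:
                print(f"{self.protocol} sends {msg!r} with state {self.state}")

    # switching from a sequencer-style protocol to a token-style one,
    # with a trivial stand-in for the state conversion function
    p = SwitchingProcess("sequencer", {"unordered": ["m1", "m2"]})
    p.send("hello")
    p.on_switch("token", lambda s: {"to_broadcast": s["unordered"]})
    p.send("world")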
the method is based on the fact that if two protocols p1 and p2 are derived from the same abstract specification, there exist converting functions that can convert the local state of a process in one protocol to that of the other. we can then build a hybrid protocol based on p1, p2, and these converting functions that can make a smooth adaptation at runtime. we briefly describe the generic algorithm of the hybrid protocol in three steps: (1) one process initiates the protocol switch by broadcasting a "switch" message; (2) when a process learns about the switching, it stops the current protocol and starts buffering application messages. it then sends out the information that other processes may need in order to convert their local states; (3) when a process gets all the needed information, it converts its local state to that of the new protocol using the converting function provided. it then starts working using the new protocol immediately. each configuration is associated with a timestamp, which is tagged to the messages sent in that configuration. when a message with a timestamp greater than the local timestamp arrives, it gets buffered, and is processed after the local conversion finishes. to ensure that there is only one reconfiguration at any time, a token mechanism is used. the hybrid protocol is smooth, because the protocol switch does not depend on the slowest process as in the two-phase-commit approach. it is efficient because the local state conversion saves many unnecessary memory operations. as an example, we apply our algorithm to two types of atomic broadcast protocols, namely, the sequencer (s-)protocol and the token (t-)protocol. in the s-protocol, each process has a buffer (sbu) holding the messages yet to be ordered by the sequencer. in the t-protocol, each process has a buffer (tbu) holding the messages to be broadcast when the token arrives. when switching from the s-protocol to the t-protocol, the sequencer sends out information including the number of messages from each process that have been ordered so far. other processes convert by transferring the unordered messages from sbu to tbu. when switching from the t-protocol to the s-protocol, the process with the token sends the ordering information to the sequencer, and all the processes transfer the messages in tbu to sbu by sending the message proposals to the sequencer. we implemented the algorithm with our group communication toolkit and compared the performance of the hybrid protocol (hyb) with that of the two-phase-commit protocol (2pc). in the test, each process broadcasts 100 messages in each round; when a process receives all the messages in a round, it starts a new round. we switch the protocol every 3 rounds. the result (in msec/round) is the average round latency over 100 rounds for 3 processes. the s-protocol and t-protocol figures involve no protocol switch and are included just for comparison. the algorithm works much better when the number of processes increases. our algorithm provides a generic way of building efficient and scalable adaptive protocols. we believe it is a step towards the modular approach to adding new functionalities, such as adaptation, to distributed systems. xiaoming liu robbert van renesse signature caching techniques for information filtering in mobile environments this paper discusses signature caching strategies to reduce power consumption for wireless broadcast and filtering services. the two-level signature scheme is used for indexing the information frames.
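for readers unfamiliar with signature files: a signature here is a superimposed-coding bit vector over the words of a frame, so a query can be screened against the frame without reading it. the sketch below builds and tests such a signature; the sizes and hash choice are arbitrary and this is not the two-level scheme itself.

    import hashlib

    SIG_BITS = 64
    BITS_PER_WORD = 3

    def word_signature(word):
        # set a few pseudo-random bit positions for one attribute value
        sig = 0
        for i in range(BITS_PER_WORD):
            h = hashlib.sha1(f"{word}:{i}".encode()).digest()
            sig |= 1 << (int.from_bytes(h[:4], "big") % SIG_BITS)
        return sig

    def frame_signature(words):
        # superimpose (bitwise or) the word signatures of a frame
        sig = 0
        for w in words:
            sig |= word_signature(w)
        return sig

    def may_match(frame_sig, query_words):
        # true if the frame possibly contains all query words
        # (false positives are possible, false negatives are not)
        q = frame_signature(query_words)
        return frame_sig & q == q

    frame = frame_signature(["stock", "ibm", "price"])
    print(may_match(frame, ["ibm"]), may_match(frame, ["weather"]))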
a signature is considered the basic caching entity in this paper. four caching policies are compared in terms of tune-in time and access time. with reasonable access time delay, all of the caching policies reduce the tune-in time for the two-level signature scheme. moreover, two cache replacement policies are presented and compared by simulation. the result shows that, when the cache size is small, caching only the integrated signatures is recommended. when the size of the cache is greater than that of the integrated signatures, caching both the integrated and simple signatures is better. wang-chien lee dik lun lee high-speed local area networks and their performance: a survey at high data transmission rates, the packet transmission time of a local area network (lan) could become comparable to or less than the medium propagation delay. the performance of many lan schemes degrades rapidly when the packet transmission time becomes small compared to the medium propagation delay. this paper introduces lans and discusses the performance degradation of lans at high speeds. it surveys recently proposed lan schemes designed to operate at high data rates, including their performance. characteristics desirable in lan medium access protocols are identified and discussed. the paper serves as a tutorial for readers less familiar with local computer communication networks. it also serves as a survey of state-of-the-art lans. bandula w. abeysundara ahmed e. kamal hierarchical packet fair queueing algorithms jon c. r. bennett hui zhang mobility support using sip elin wedlund henning schulzrinne the nas parallel benchmarks - summary and preliminary results d. h. bailey e. barszcz j. t. barton d. s. browning r. l. carter l. dagum r. a. fatoohi p. o. frederickson t. a. lasinski r. s. schreiber h. d. simon v. venkatakrishnan s. k. weeratunga resource section: books michele tepper a small scale multi-microprocessor network controller (abstract only) the purpose of this project has been to develop a general purpose network controller for use in a microcomputer laboratory situation. the system uses a cp/m based system as a cross assembler for a variety of microcomputers, i.e. 8085, 8086, 6802, 68000, z80, etc. the developed code is prefaced with a packet of information specifying which microcomputer the code is to be sent to. all of the above mentioned microcomputers are connected to the network controller and the developed code is routed to the specified microcomputer. the system also allows the transmission of code from the microcomputer back to the cp/m system for storage. kenneth cooper tim kerlin ratchet: real-time address trace compression hardware for extended traces colleen d. schieber eric e. johnson comparison of raw and internet protocols in a hippi/atm/sonet based gigabit network we compare implementations of raw and internet protocols (tcp, udp) on a programmable hippi host-interface called the network interface unit. the network interface unit connects pixel-planes 5, a message-based graphics multicomputer, to a wide area gigabit network called vistanet. the bisdn network consists of a sonet cross-connect switch and an atm switch. we discuss the tradeoffs between protocols for our target application and present a comparison of end-to-end throughput based on empirical measurements. raj k. singh stephen g. tell shaun j.
bharrat fault detection in an ethernet network using anomaly signature matching frank feather dan siewiorek roy maxion fault-tolerant routing and multicasting in butterfly networks feng cao ding-zhu du blue: an alternative approach to active queue management this paper exposes an inherent weakness in current active queue management techniques such as red, in that they rely on queue lengths to indicate the severity of congestion. in light of this observation, a fundamentally different active queue management algorithm called blue is proposed. blue uses packet loss and link utilization to manage congestion. using simulation and controlled experiments, blue is shown to significantly outperform red in providing lower packet loss rates and smaller queuing delays to networked applications such as interactive audio and video. wu-chang feng dilip kandlur debanjan saha kang g. shin design, implementation, and evaluation of programmable handoff in mobile networks we describe the design, implementation and evaluation of a programmable architecture for profiling, composing and deploying handoff services. we argue that future wireless access networks should be built on a foundation of open programmable networking allowing for the dynamic deployment of new mobile and wireless services. customizing handoff control and mobility management in this manner calls for advances in software and networking technologies in order to respond to specific radio, mobility and service quality requirements of future wireless internet service providers. two new handoff services are deployed using programmable mobile networking techniques. first, we describe a "multi-handoff" access network service, which is capable of simultaneously supporting multiple styles of handoff control over the same physical wireless infrastructure. second, we discuss a "reflective handoff" service, which allows programmable mobile devices to freely roam between heterogeneous wireless access networks that support different signaling systems. evaluation results indicate that programmable handoff architectures are capable of scaling to support a large number of mobile devices while achieving similar performance to that of native signaling systems. michael e. kounavis andrew t. campbell gen ito giuseppe bianchi channel carrying: a novel handoff scheme for mobile cellular networks junyi li ness b. shroff edwin k. p. chong nsf report - computer and computation research d. t. lee a discrete-time paradigm to evaluate skew performance in a multimedia atm multiplexer alifo lombardo giacomo morabito giovanni schembra high performance parallel architectures in this paper the author describes current high performance parallel computer architectures. a taxonomy is presented to show computer architecture from the user programmer's point-of-view. the effects of the taxonomy upon the programming model are described. some current architectures are described with respect to the taxonomy. finally, some predictions about future systems are presented. r. e. anderson nomadic computing - an opportunity we are in the midst of some truly revolutionary changes in the field of computer-communications, and these offer opportunities and challenges to the research community. one of these changes has to do with nomadic computing and communications. nomadicity refers to the system support needed to provide a rich set of capabilities and services to the nomad as he moves from place to place in a transparent and convenient form.
this new paradigm is already manifesting itself as users travel to many different locations with laptops, pda's, cellular telephones, pagers, etc. in this paper we discuss some of the open issues that must be addressed as we bring about the system support necessary for nomadicity. in addition, we present some of the considerations with which one must be concerned in the area of wireless communications, which forms one (and only one) component of nomadicity. leonard kleinrock the info superhighway: for the people the opportunities are attractive, but some pavers of the information superhighway (ish) are too eager to pour concrete. they risk making rough roads that will alienate the very users they seek. these technologically oriented ish devotees may be building dramatic overpasses and painting stripes without figuring out where the highway should be going. i believe greater attention should be paid to identifying appropriate services, designing a consistent user interface, and developing a clearer model of the diverse user communities. ben shneiderman probabilistic testing of protocols test sequences are used for the conformance testing of communication protocols to standards. this paper discusses a new approach to generating test sequences. the approach is based on probabilistic concepts about protocol state transitions and communication channels. the novel feature of the test sequences generated by this technique is that the most probable states of a protocol will be tested more promptly. d. p. sidhu c.-s. chang an internet multicast system for the stock market we are moving toward an international, 24-hour, distributed, electronic stock exchange. the exchange will use the global internet, or internet technology. this system is a natural application of multicast because there are a large number of receivers that should receive the same information simultaneously. the data requirements for the stock exchange are discussed. the current multicast protocols lack the reliability, fairness, and scalability needed in this application. we describe a distributed architecture and a timed reliable multicast protocol, trmp, that has the appropriate characteristics. we consider three applications: (1) a unified stock ticker of the transactions that are being conducted on the various physical and electronic exchanges. our objective is to deliver the same combined ticker reliability and simultaneously to all receivers, anywhere in the world. (2) a unified sequence of buy and sell offers that are delivered to a single exchange or a collection of exchanges. our objective is to give all traders the same fair access to an exchange independent of their relative distances to the exchange or delay and loss characteristics of the international network. (3) a distributed, electronic trading floor that can replace the current exchanges. this application has the fairness attributes of the first two applications and uses trmp to conduct irrefutable, distributed trades. running everyware on the computational grid rich wolski john brevik chandra krintz graziano obertelli neil spring alan su size, power, and speed (keynote address) the author discusses the roles of power and size in determining the speed of a computer. maurice v. wilkes local network architectures local networks have many characteristics which are quite different than those of geographically dispersed networks. in this paper, we first review these defining characteristics of local networks and present a brief evolutionary history of local networks. 
we then review three dominant architectures for local networks together with examples. in the last part of this paper, we describe some of the protocol considerations for local networks. jeffry w. yeh william siegmund business: the 8th layer: quality of service kate gerwig high-speed policy-based packet forwarding using efficient multi-dimensional range matching the ability to provide differentiated services to users with widely varying requirements is becoming increasingly important, and internet service providers would like to provide these differentiated services using the same shared network infrastructure. the key mechanism, that enables differentiation in a connectionless network, is the packet classification function that parses the headers of the packets, and after determining their context, classifies them based on administrative policies or real-time reservation decisions. packet classification, however, is a complex operation that can become the bottleneck in routers that try to support gigabit link capacities. hence, many proposals for differentiated services only require classification at lower speed edge routers and also avoid classification based on multiple fields in the packet header even if it might be advantageous to service providers. in this paper, we present new packet classification schemes that, with a worst-case and traffic-independent performance metric, can classify packets, by checking amongst a few thousand filtering rules, at rates of a million packets per second using range matches on more than 4 packet header fields. for a special case of classification in two dimensions, we present an algorithm that can handle more than 128k rules at these speeds in a traffic independent manner. we emphasize worst-case performance over average case performance because providing differentiated services requires intelligent queueing and scheduling of packets that precludes any significant queueing before the differentiating step (i.e., before packet classification). the presented filtering or classification schemes can be used to classify packets for security policy enforcement, applying resource management decisions, flow identification for rsvp reservations, multicast look-ups, and for source-destination and policy based routing. the scalability and performance of the algorithms have been demonstrated by implementation and testing in a prototype system. t. v. lakshman d. stiliadis optimal efficiency of optimistic contract signing birgit pfitzmann matthias schunter michael waidner connections with multiple congested gateways in packet-switched networks part 1: one-way traffic sally floyd deriving protocol specifications from service specifications including parameters the service specification concept has acquired an increasing level of recognition by protocol designers. this architectural concept influences the methodology applied to service and protocol definition. since the protocol is seen as the logical implementation of the service, one can ask whether it is possible to formally derive the specification of a protocol providing a given service. this paper addresses this question and presents an algorithm for deriving a protocol specification from a given service specification. it is assumed that services are described by expressions, where names identifying both service primitives and previously defined services are composed using operators for sequence, parallelism and alternative. services and service primitives may have input and output parameters. 
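to make the shape of such service expressions concrete before the description continues, a toy rendering of primitives composed by sequence, alternative, and parallel operators is sketched below; the class and primitive names are invented and this is not the paper's notation or derivation algorithm.

    from dataclasses import dataclass
    from typing import List

    # a toy abstract syntax for service expressions: primitives composed
    # by sequence, alternative, and parallel operators

    @dataclass
    class Primitive:
        name: str
        inputs: List[str]
        outputs: List[str]

    @dataclass
    class Seq:
        parts: list

    @dataclass
    class Alt:
        choices: list

    @dataclass
    class Par:
        branches: list

    # a connection-oriented service: a request followed by either a confirm
    # or a reject, with an independent status indication in parallel
    service = Par(branches=[
        Seq(parts=[
            Primitive("conreq", ["addr"], []),
            Alt(choices=[Primitive("conconf", [], ["qos"]),
                         Primitive("conrej", [], ["reason"])]),
        ]),
        Primitive("status_ind", [], ["state"]),
    ])
    print(service)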
composition of services from predefined services and service primitives is also permitted. the expression defining the service is the basis for the protocol derivation process. the algorithm presented fully automates the derivation process. future work will focus on the optimization of traffic between protocol entities and on applications. reinhard gotzhein gregor von bochmann virtual tree-based multicast routing with a distributed numbering algorithm for wm-atm handover valerio mocci yim-fun hu os and compiler considerations in the design of the ia-64 architecture rumi zahir jonathan ross dale morris drew hess modeling and measurement of the impact of input/output on system performance janaki akella daniel p. siewiorek waiting time estimates in symmetric atm-oriented rings with the destination release of used slots mirjana zafirovic-vukotic ignatius g. m. m. niemegeers a multi-layer collision resolution multiple access protocol for wireless in mobile communication networks operating in unreliable physical transmission, random access protocol with the collision resolution (cr) scheme is more attractive than the aloha family including carrier sense multiple access (csma) [ieee networks (september 1994) 50--64], due to likely failure on the channel sensing. being a member of cr family schemes, a protocol known as non-preemptive priority multiple access (npma) is utilized in a new high- speed wireless local area network, hiperlan, standardized by european telecommunication standard institute (etsi). a conceptually three-layer cr multiple access protocol generalized from npma, supporting single type of traffic, is thus presented and analyzed in this paper. the cr capability of such a protocol (and hence npma) is proved to be significant by numerical substantiation that additional collision detection schemes are dispensable; also its throughput/delay performance is excellent when the proportion of the transmission phase to a channel access cycle is large enough (i.e., the winner of contention should transmit all of its packets successively). on the other hand, the simulated performance of npma serving integrated traffic is not fully satisfactory, primarily due to its distributed control mode and distinguishing traffic types only by the prioritization process. ya- ku sun kwang-cheng chen a data flow processor array system: design and analysis this paper presents the architecture of a highly parallel processor array system which executes programs by means of a data driven control mechanism. the data driven control mechanism makes it easy to construct an mimd (multiple instruction stream and multiple data stream) system, since it unifies inter- processor data transfer and intra-processor execution control. the design philosophy of the data flow processor array system presented in this paper is to achieve high performance by adapting a system structure to operational characteristics of application programs, and also to attain flexibility through executing instructions based on a data driven mechanism. the operational characteristics of the proposed system are analyzed using a probability model of the system behavior. comparing the analytical results with the simulation results through an experimental hardware system, the results of the analysis clarify the principal effectiveness of the proposed system. this system can achieve high operation rates and is neither sensitive to inter-processor communication delay nor sensitive to system load imbalance. 
naohisa takahashi makoto amamiya the effect of employing advanced branching mechanisms in superscalar processors yen-jen oyang chun-hung wen yu-fen chen shu-may lin editorial irfan khan dynamic handoff of multimedia streams sometimes a client that receives a multimedia stream from a server can change the connection used to transfer the data. there may be multiple paths or multiple servers, but a switch from one connection to another requires a handoff. during such a handoff, the player (of the video and/or audio stream) should be fed with a constant data stream so that the player does not have to stop. handoffs can be used in addition to adaptive (frame-dropping) filters to improve the quality of multimedia streams as received by the client. this paper investigates the influence of several factors on the quality of the multimedia stream during the handoff: the time needed to establish the connection to the new server, the size and the fill degree of the client buffer, the length of the synchronization phase (where both the old and the new connection are sending) and the most appropriate start packet for the new connection. the evaluation shows that a handoff can be a viable solution if an appropriate strategy is chosen to phase over from the old connection to the new one. roger karrer thomas gross crop: cluster resource optimization package for pvm applications everett e. mullis self-stabilization - beyond the token ring circulation (brief announcement) there is a diffused feeling that the self-stabilization concept, as defined by dijkstra in 1974, is extremely deep in pointing out the features of a fault tolerant distributed system, but is also too restrictive in the definition of what is self-stabilizing. my primary purpose is to explain why dijkstra's definition needs to be extended: the reader understands that the task is extremely delicate, since there is a unanimous appreciation of the original definition of self-stabilization, and its modification should exhibit extremely strong motivations. secondarily, we show how it can be modified to widen its applicability. first i restate the original definition using chandy and misra's unity. this step is mechanical, so as to keep the result as independent as possible from any subjective understanding of the self-stabilization concept: definition 1 (self-stabilizing system). given a (flat) unity program p with guards and a predicate l, p is self-stabilizing to l if and only if: the initially section of p is empty (unconditional); l is stable (stability); every statement of p satisfies a minimality condition (minimal statements); ∀x, y ∈ l, x leads to y (minimal states); and true leads to l (finite convergence). the label attached to each predicate may help intuition. the next theorem, whose proof is in the full paper that can be requested by e-mail, limits the set of problems that can be treated with self-stabilization: theorem 1. a predicate l satisfies stability and minimal states if and only if either only one state satisfies l and the execution of any statement leaves that state unaltered, or there are n > 1 states that satisfy l and an indexing l = {s0, …, sn-1} exists such that si ensures s(i+1) mod n. a corollary says that, if there is more than one legitimate state, then units are arranged in a ring, and are enabled to move according to that arrangement. that limits the applicability of the self-stabilization concept to the range of token circulation problems, and concludes the first step.
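as an aside, the token-circulation systems singled out by the corollary above are exactly the setting of dijkstra's original 1974 k-state algorithm; the following minimal python sketch (not taken from the announcement, and using names of our own) illustrates such a ring, where a single privilege circulates once the system has stabilized.

    # minimal sketch of dijkstra's k-state self-stabilizing token ring
    # (illustrative only; names and structure are our own, not the paper's)
    import random

    N = 5                   # number of units arranged in a ring
    K = N + 1               # k > n guarantees stabilization
    x = [3, 0, 2, 2, 1]     # arbitrary (possibly illegitimate) initial state

    def privileged(i):
        # unit 0 is privileged when it equals its left neighbour;
        # every other unit is privileged when it differs from it
        return x[i] == x[i - 1] if i == 0 else x[i] != x[i - 1]

    def move(i):
        if i == 0:
            x[0] = (x[0] + 1) % K
        else:
            x[i] = x[i - 1]

    # repeatedly fire some privileged unit; after finitely many moves
    # exactly one unit is privileged at a time (the circulating token)
    for _ in range(100):
        move(random.choice([i for i in range(N) if privileged(i)]))
    print(sum(privileged(i) for i in range(N)))   # expected: 1 once stabilized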
next i want to modify definition 1 in order to extend the number of problems that can be treated using the self-stabilization concept. coherently with a conservative approach, the impact of the modification should be minimal. i opt to redefine the minimal states predicate so that it operates set-wise: to this purpose i split l, the set of legitimate states, into a family of subsets l, and for any pair in l i want that "there exists a sequence of moves transferring the system from the one into the other". the sets in l can be defined either by enumeration, or by a characteristic property. the new formulation of the minimal states is: definition 2 (minimal states (set-wise)). a family l = {l1, …, lk}, with ∪l = l, exists such that ∀(la, lb) ∈ l × l, la leads to lb. the last step consists in proving that the new definition widens the applicability of the self-stabilization concept. in figure 1 i give a simple clock synchronization problem that can be treated with self-stabilization only when this concept is extended with the set-wise minimal states. the problem cannot be treated using definition 1: the proof is by contradiction, using the only if part of the theorem, if i want to exploit the presence of redundant paths. the partition i need to apply the set-wise extension is: l = {ticki, tacki | i ∈ [0, 2n - 1]} where ticki = {s | a = 2i - 1 ∧ a = c + 1 ∧ (∀k : bk = a ∨ bk = c)} and tacki = {s | a = 2i - 1 ∧ a = c - 1 ∧ (∀k : bk = a ∨ bk = c)}, and the problem is solved by the following self-stabilizing algorithm (the proof is available): a: if (b1==b2) and (b2==b3) if even(b1) a=b1+1; else a=b1; c: if (b1==b2) and (b2==b3) if even(b1) c=b1; else c=b1+1; bk: if a!=(c-1) bk=a; else bk=c; augusto ciuffoletti distributed neurocomputing on transputer systems h. franke j. rodriguez k. kawamura resource partitioning for real-time communication amit gupta domenico ferrari ip lookups using multiway and multicolumn search butler lampson venkatachary srinivasan george varghese cluster oriented architecture for the mapping of parallel processor networks to high performance applications massively parallel computer architectures have been discussed since the very beginning of electronic computing. the major design goal was clearly achieving superior system performance. today, with supercomputers most commonly used, the goal has changed to matching the computing power of supercomputers at a fraction of their cost. up to now most parallel computer systems have been built by university institutes for research work, with little commercial success. the recent progress in computer architecture design has made parallel processor networks much more feasible. due to combinatorial problems in processor interconnection networks, cluster oriented approaches are the most favorable way for designing parallel systems. this requires special interconnection hardware as well as mapping algorithms. in this paper we will present our new parallel cluster oriented computer system based on the inmos transputer technology. this system includes a special network manager supported by dedicated hardware to make use of advanced dynamic mapping strategies in an efficient and flexible way. f-d. kubler f. lucking overview of 5ess-2000 switch performance richard singer aggregation and conformance in differentiated service networks: a case study the differentiated service (diff-serv) architecture [1] advocates a model based on different "granularity" at network edges and within the network.
in particular, core routers are only required to act on a few aggregates that are meant to offer a pre-defined set of service levels. the use of aggregation raises a number of questions for end-to-end services, in particular when crossing domain boundaries where policing actions may be applied. this paper focuses on the impact of such policing actions in the context of individual and bulk services built on top of the expedited forwarding (ef) [7] per-hop behavior (phb). the findings of this investigation confirm and quantify the expected need for reshaping at network boundaries, and identify a number of somewhat unexpected behaviors. recommendations are also made for when reshaping is not available. roch a. guerin vicent pla paging area optimization based on interval estimation in wireless personal communication networks we consider an optimum personal paging area configuration problem to improve the paging efficiency in pcs/cellular mobile networks. the approach is to set up the boundaries of a one-step paging area that contain the locations of a mobile user with a high probability and to adjust the boundaries to gain a coverage that is matched to the mobile user's time-varying mobility pattern. we formulate the problem as an interval estimation problem. the objective is to reduce the paging signaling cost by minimizing the size of the paging area constrained to a certain confidence measure (probability of locating the user), based on a finite number of available location observations of the mobile user. modeling user mobility as a brownian motion with drift and estimating the parameters of the location probability distribution of the mobility process, the effects of the mobility characteristics and the system design parameters on the optimum paging area are investigated. results show: (1) the optimum paging area expands with the time elapsed after the last known location of the user; (2) it also increases with the length of a prediction interval and the location probability; (3) the relative change in the paging area size decreases with the increase in the number of location observations. zhuyu lei cem u. saraydar narayan b. mandayam single sided mpi implementations for sun mpi this paper describes an implementation of generic mpi-2 single-sided communications for sun-mpi. our implementation is layered on top of point-to-point mpi communications and therefore can be adapted to other mpi implementations. the code is designed to co-exist with other mpi-2 single-sided implementations (for example, direct use of shared memory) providing a generic fall-back implementation for those communication paths where an optimized single-sided implementation is not available. mpi-2 single-sided communications require the transfer of data-type information as well as user data. we describe a type packing and caching mechanism used to optimize the transfer of data-type information. the performance of this implementation is measured in comparison to equivalent point-to-point operations and the shared memory implementation provided by sun. s. booth e. mourao fast connection establishment in high speed networks protocols for establishing, maintaining and terminating connections in packet switched networks have been studied in the literature and numerous standards have been developed to address this problem.
in this paper, we reexamine connection establishment in the context of a fast packet network with an integrated traffic load, explain why previously proposed solutions are inadequate and develop a protocol for connection establishment/takedown that is appropriate for such a network. the underlying model that we use is the recently developed paris network, though our ideas are sufficiently general to cover many other fast packet networking architectures. i. cidon i. gopal a. segall visualizing performance in the frequency plane d. g. keehn a performance evaluation of hyper text transfer protocols paul barford mark crovella a tightly-coupled processor-network interface dana s. henry christopher f. joerg a flexible model for resource management in virtual private networks as ip technologies providing both tremendous capacity and the ability to establish dynamic secure associations between endpoints emerge, virtual private networks (vpns) are going through dramatic growth. the number of endpoints per vpn is growing and the communication pattern between endpoints is becoming increasingly hard to forecast. consequently, users are demanding dependable, dynamic connectivity between endpoints, with the network expected to accommodate any traffic matrix, as long as the traffic to the endpoints does not overwhelm the rates of the respective ingress and egress links. we propose a new service interface, termed a _hose_, to provide the appropriate performance abstraction. a hose is characterized by the aggregate traffic to and from one endpoint in the vpn to the set of other endpoints in the vpn, and by an associated performance guarantee. hoses provide important advantages to a vpn customer: (i) flexibility to send traffic to a set of endpoints without having to specify the detailed traffic matrix, and (ii) reduction in the size of access links through multiplexing gains obtained from the natural aggregation of the flows between endpoints. as compared with the conventional point to point (or customer-pipe) model for managing qos, hoses provide reduction in the state information a customer must maintain. on the other hand, hoses would appear to increase the complexity of the already difficult problem of resource management to support qos. to manage network resources in the face of this increased uncertainty, we consider both conventional statistical multiplexing techniques, and a new _resizing_ technique based on online measurements. to study these performance issues, we run trace driven simulations, using traffic derived from at&t's voice network, and from a large corporate data network. from the customer's perspective, we find that aggregation of traffic at the hose level provides significant multiplexing gains. from the provider's perspective, we find that the statistical multiplexing and resizing techniques deal effectively with uncertainties about the traffic, providing significant gains over the conventional alternative of a mesh of statically sized customer-pipes between endpoints. n. g. duffield pawan goyal albert greenberg partho mishra k. k. ramakrishnan jacobus e.
van der merive distributed termination detection in a mobile wireless network jeff matocha an overview of the georgia tech broadband institute at the georgia institute of technology, atlanta, usa nikil jayant john pippin the broadcast storm problem in a mobile ad hoc network sze-yao ni yu-chee tseng yuh-shyan chen jang-ping sheu improving tcp throughput over two-way asymmetric links: analysis and solutions lampros kalampoukas anujan varma k. k. ramakrishnan presentation abstract: a tutorial on local networks local networks have garnered ever-increasing attention during the past several years. local networks have been the subject of numerous conferences, of standardization activities, and of a proliferation of vendor offerings. the purpose of this tutorial is to overview the key technical issues underlying the design and application of local networks and thereby develop a framework for categorizing and, based on user requirements, evaluating local network approaches and products. naturally, an overview of this type draws from a large number of existing and planned networks. unfortunately, the limited time allotted to the topic does not permit a detailed review of any particular approach, but rather necessitates an attempt to extract the major concepts from each system. the phrase "local network" (or any of its variations) has become so widely used that it has been used here without further explanation. in general, local networks have been characterized as having relatively high bandwidth, i.e., total system bandwidth in excess of 1 megabit/second, and covering a limited geographic area, i.e., at most 25 kilometers. other characteristics which have been associated with local networks include broadcast capability and localized administrative control. this definition includes more traditional communication switching products, even though the ethernet is frequently noted as being prototypical of local networks. while this tutorial attempts to span the entire spectrum of local network approaches, the ethernet and other similar networks are the primary focus. david j. kaufman rearrangeability of multistage shuffle/exchange networks in this paper we study the rearrangeability of multistage shuffle/exchange networks. although a theoretical lower bound of (2 log2 n - 1) stages for rearrangeability of a network with n inputs and outputs (n a power of 2) has been known, the sufficiency of (2 log2 n - 1) stages has neither been proved nor disproved. the best known upper bound for rearrangeability is (3 log2 n - 3) stages. we prove that, if (2 log2 r - 1) shuffle/exchange stages are sufficient for rearrangeability of a network with r inputs and outputs (r a power of 2), then, for any n > r, (3 log2 n - (log2 r + 1)) stages are sufficient for a network with n inputs and outputs. this result is established by setting some of the middle stages of the network to realize a fixed permutation and showing the reduced network to be topologically equivalent to a member of the benes class of rearrangeable networks. we first characterize equivalence to benes networks in set-theoretic terms and use this to prove equivalence of the reduced shuffle/exchange network to the benes network. from the known result that 5 stages are sufficient for rearrangeability when n = 8, we obtain an upper bound of (3 log2 n - 4) stages for rearrangeability when n ≥ 8. further, any increase in the network size r for which the rearrangeability of (2 log2 r - 1) stages could be shown results in a corresponding improvement in the upper bound for all n ≥ r.
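to make the bound above concrete, here is a small illustrative python sketch (our own, not from the paper) that evaluates the stated upper bound 3 log2 n - (log2 r + 1) for a few network sizes, using the known base case r = 8, for which 2 log2 8 - 1 = 5 stages suffice.

    # illustrative only: evaluate the stage bound 3*log2(n) - (log2(r) + 1),
    # assuming the base case r = 8 is known to be rearrangeable in 5 stages
    from math import log2

    def stage_upper_bound(n, r=8):
        assert n >= r and log2(n).is_integer() and log2(r).is_integer()
        return int(3 * log2(n) - (log2(r) + 1))

    for n in (8, 16, 64, 1024):
        print(n, stage_upper_bound(n))   # e.g. 8 -> 5, 16 -> 8, 64 -> 14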
in addition, due to the one-to-one correspondence that exists between the switches in the reduced shuffle/exchange network and those in the benes network, the former network can be controlled by the well-known looping algorithm. a. varma c. s. raghavendra inside risks: some reflections on a telephone switching problem peter g. neumann on the impossibility of group membership tushar deepak chandra vassos hadzilacos sam toueg bernadette charron-bost performance models for noahnet noahnet is an experimental flood local area network with features such as high reliability and high performance. noahnet uses a randomly connected graph topology with four to five interconnections per node and a flooding protocol to route messages. the purpose of this paper is to present two analytical performance models which we have designed to understand the load-throughput behavior of noahnet. both models assume slotted noahnet operation and also assume that if k messages attempt transmission in a slot, the network gets divided into k partitions of arbitrary sizes - one partition for each message. first, we show that the average number of successful messages in a slot given k partitions of transmissions is (m - k)/(n - 1), where n is the number of nodes in the network and m is the number of nodes out of n that participate in the flooding of k messages. this is an interesting result and is used in both models to derive the load-throughput equations. each model is then presented using a set of assumptions, derivations of load-throughput equations, a set of plots, and the discussion of results. models one and two differ in the way they account for retransmissions. g. m. parulkar a. s. sethi d. j. farber load distribution among replicated web servers: a qos-based approach marco conti enrico gregori fabio panzieri implementing remote procedure calls remote procedure calls (rpc) are a useful paradigm for providing communication across a network between programs written in a high level language. this paper describes a package, written as part of the cedar project, providing a remote procedure call facility. the paper describes the options that face a designer of such a package, and the decisions we made. we describe the overall structure of our rpc mechanism, our facilities for binding rpc clients, the transport level communication protocol, and some performance measurements. we include descriptions of some optimisations we used to achieve high performance and to minimize the load on server machines that have many clients. our primary aim in building an rpc package was to make the building of distributed systems easier. previous protocols were sufficiently hard to use that only members of a select group of communication experts were willing to undertake the construction of distributed systems. we hoped to overcome this by providing a communication paradigm as close as possible to the familiar facilities of our high level languages. to achieve this aim, we concentrated on making remote calls efficient, and on making the semantics of remote calls as close as possible to those of local calls. andrew d. birrell bruce jay nelson insights into the implementation and application of heterogeneous local area networks the ideal local area network is a mechanism which provides concurrent high speed error-free data paths over a limited geographical area and between any computational entities on the network. a computational entity may be a program running on any type of computer, any intelligent device, or any terminal.
this means that such an ideal network must be able to support systems of cooperating processes within disjoint and dissimilar host environments. achieving such an ideal heterogeneous network (referred to as an "open" network with respect to the iso reference model for open systems interconnection) may be impossible in the limit. however, the ideal shows promise of being approachable. recent work in local area networks, protocols, and multiprocessor operating systems looks promising for attacking the problem. this paper first discusses data paths and the dilemma of heterogeneous data communications. existing local area networks, their functionality, shortcomings, and relationship to networking standards are described. commercial lans are discussed and classified. standards, high level protocols, and operating system concepts are discussed and related to networks. finally, some possible approaches to the implementation of heterogeneous networks are discussed. william p. lidinsky empirical study of latency hiding on a fine-grain parallel processor latencies associated with memory accesses and process communications are among the most difficult obstacles in constructing a practical massively parallel system. so far, two approaches to hide latencies have been proposed: prefetching and multi-threading. an instruction-level data-driven computer is an ideal test-bed for evaluating these latency hiding methods because prefetching and multi-threading are naturally implemented in an instruction-level data-driven computer as unfolding and concurrent execution of multiple contexts. this paper evaluates latency hiding methods on sigma-1, a dataflow supercomputer developed at the electrotechnical laboratory. as a result of the evaluation, these methods are effective at hiding static latencies but not at hiding dynamic latencies. also, concurrent execution of multiple contexts is more effective than prefetching. kei hiraki toshio shimada satoshi sekiguchi a new localized channel sharing scheme for cellular networks junyi li ness b. shroff edwin k. p. chong apl.68000 level ii for the amiga harry c. bertuccelli fast broadcast in high-speed networks ajei gopal inder gopal shay kutten analysis of channel access schemes for high-speed lans a series of simulation studies into channel access protocols suitable for use in local area networks operating in baseband mode at bit rates of 100 mbit/s or more is presented, and the usefulness of these protocols for supporting data transmission with mixed traffic is discussed. niels nørup pedersen robin sharp a layered protocol architecture for multimedia wireless-pcs networks coupled with the growing interest in the universal mobile telecommunication system (umts) as a standard for future mobile communications, the need for a set of functions to effectively support multimedia teleservices in such an environment is also increasing. starting from the idea that multimedia means the integrated manipulation of different information and hence the independent handling of separate information is not satisfactory, an enhanced protocol architecture for the support of multimedia teleservices in wireless personal communication systems based on umts is proposed. it involves physical, mac, data link, and network layers. a synchronisation sub-layer is introduced on the mac level with the main aim of assuring a rough multimedia inter-stream synchronisation over the air interface, which is a first step prior to a fine end-to-end synchronisation performed by higher layers.
proposed functions, their basic algorithms, their location in the protocol stack, as well as the signalling exchange among modules implementing them, on network and user sides, are described in detail in the paper. the resulting architecture well fits the demanding nature of multimedia services and can be easily interfaced with the wired backbone of the system. antonio iera salvatore marano antonella molinaro paging strategy optimization in personal communication systems mobility tracking is concerned with finding a mobile subscriber (ms) within the area serviced by the wireless network. the two basic operations for tracking an ms, location updating and paging, constitute additional load on the wireless network. the total cost of updating and paging can be minimized by optimally dividing the cellular area into location registration (lr) areas. in current systems broadcast paging messages are sent within the lr area to alert the ms of an incoming call. in this paper we propose a selective paging strategy which uses the ms mobility and call patterns to minimize the cost of locating an ms within an lr area subject to a constraint on the delay in locating the ms. ahmed abutaleb victor o.k. li performance of multiple-dwell pseudo-noise code acquisition with i-q detector on frequency-nonselective multipath fading channels multiple-dwell pseudo-noise code acquisition with a noncoherent i-q detector is analyzed for rayleigh and rician fading channels that takes into account the detection correlations resulting from multipath fading. minimum mean acquisition times with optimized dwell times and thresholds are obtained, and the effects of multipath fading and frequency offsets are evaluated. in addition, a detailed comparison between i-q and square-law detectors is conducted under various channel conditions. wern-ho sheen sauz chiou dimensioning server access bandwidth and multicast routing in overlay networks application-level multicast is a new mechanism for enabling multicast in the internet. driven by the fast growth of network audio/video streams, application-level multicast has become increasingly important for its efficiency of data delivery and its ability of providing value-added services to satisfy application specific requirements. from a network design perspective, application-level multicast differs drastically from traditional ip multicast in its network cost model and routing strategies. we present these differences and formulate them as a network design problem consisting of two parts: one is bandwidth assignment in the overlay network, the other is load-balancing multicast routing with delay constraints. we use analytical methods and simulations to show that our design solution is a valid and cost- effective approach. simulation results show that we are able to achieve network utilization within 10% of the best possible utilization while keeping the session rejection rate low. sherlia y. shi jonathan s. turner marcel waldvogel network maps: getting the big picture j. g. honeyman l. a. kelleher t. b. libert p. g. smith feedback control of congestion in packet switching networks: the case of a single congested node lotfi benmohamed semyon m. meerkov a distributed mechanism for power saving in ieee 802.11 wireless lans the finite battery power of mobile computers represents one of the greatest limitations to the utility of portable computers. 
furthermore, portable computers often need to perform power consuming activities, such as transmitting and receiving data by means of a random-access, wireless channel. the amount of power consumed to transfer the data on the wireless channel is negatively affected by the channel congestion level, and significantly depends on the mac protocol adopted. this paper illustrates the design and the performance evaluation of a new mechanism that, by controlling the accesses to the shared transmission channel of a wireless lan, leads each station to an optimal power consumption level. specifically, we considered the standard ieee 802.11 distributed coordination function (dcf) access scheme for wlans. for this protocol we analytically derived the optimal average power consumption levels required for a frame transmission. by exploiting these analytical results, we define a power save, distributed contention control (ps-dcc) mechanism that can be adopted to enhance the performance of the standard ieee 802.11 dcf protocol from a power saving standpoint. the performance of an ieee 802.11 network enhanced with the ps-dcc mechanism has been investigated by simulation. results show that the enhanced protocol closely approximates the optimal power consumption level, and provides a channel utilization close to the theoretical upper bound for the ieee 802.11 protocol capacity. in addition, even in low load situations, the enhanced protocol does not introduce additional overheads with respect to the standard protocol. luciano bononi marco conti lorenzo donatiello the seventh generation the fifth generation computer is fairly well defined. by extrapolation, we can make some statements about a sixth generation machine. but what of the seventh generation? this presentation explores one possible scenario for the seventh generation computer. general specifications, expandability factors, initial and operating costs, useful lifetime and special use-interactive characteristics will be defined. the seventh generation system design will have to be sensitive to design constraints in the 2010 timeframe, including: • energy availability, • operational lifetime requirements, • materials processing breakthroughs, and • user requirements. kerry mark joels applications of restrictive cutsets and topological cross's for minimum total load hikyoo koh reversing the collision-avoidance handshake in wireless networks j. j. garcia-luna-aceves asimakis tzamaloukas a comparison of techniques for diagnosing performance problems in information systems (extended abstract) joseph l. hellerstein observing self-stabilization chengdian lin janos simon power management techniques for mobile communication robin kravets p. krishnan modeling of an availability driven computer network architecture d. marinescu v. rego w. szpankowski a partial-multiple-bus computer structure with improved cost effectiveness this paper addresses the design and performance analysis of partial-multiple-bus interconnection networks. one such structure, called processor-oriented partial-multiple-bus (or ppmb), is proposed. it serves as an alternative to the conventional structure called memory-oriented partial-multiple-bus (or mpmb) and is aimed at higher system performance at less or equal system cost. ppmb's structural feature, which distinguishes it from the conventional structure, is to provide every memory module with b paths to processors (where b is the total number of buses).
this, in contrast to the mere b/g paths provided in the conventional mpmb structure (where g is the number of groups), suggests a potential for higher system bandwidth. this potential is fully realized by the load-balancing arbitration mechanism suggested, which in turn highlights the advantages of the proposed structure. as a result, it has been shown, both analytically and by simulation, that a substantial increase in system bandwidth (up to 20%) is achieved by the ppmb structure over the mpmb structure. in addition to the fact that the cost of ppmb is less than, or equal to, that of mpmb, its reliability is shown to be slightly increased as well. h. jiang k. c. smith a dynamic packet reservation multiple access scheme for wireless atm the dynamic packet reservation multiple access (dprma) scheme, a medium access control protocol for wireless multimedia applications, is proposed and investigated. dprma allows the integration of multiple traffic types through a single access control mechanism that permits users to specify their immediate bandwidth requirements. the primary feature of dprma is the dynamic matching of the traffic source generation rates with the assigned portion of the channel capacity. this is accomplished by a control algorithm that regulates the actual amount of channel capacity assigned to users. to support multimedia communication, channel capacity assignments are prioritized by traffic type. the performance of the scheme is evaluated and the scheme is shown to perform well in a system with voice, video conferencing, and data users present. it is also shown to provide improved performance over a system with a modified version of the packet reservation multiple access (prma) scheme. furthermore, several system parameters are studied and optimized. deborah a. dyson zygmunt j. haas performance evaluation of multiple time scale tcp under self-similar traffic conditions measurements of network traffic have shown that self-similarity is a ubiquitous phenomenon spanning across diverse network environments. in previous work, we have explored the feasibility of exploiting long-range correlation structure in self-similar traffic for congestion control. we have advanced the framework of multiple time scale congestion control and shown its effectiveness at enhancing performance for rate-based feedback control. in this article, we extend the multiple time scale control framework to window-based congestion control, in particular, tcp. this is performed by interfacing tcp with a large time scale module that adjusts the aggressiveness of bandwidth consumption behavior exhibited by tcp as a function of large time scale network state, that is, information that exceeds the time horizon of the feedback loop as determined by rtt. how to effectively utilize such information---due to its probabilistic nature, dispersion over multiple time scales, and realization on top of existing window-based congestion controls---is a nontrivial problem. first, we define a modular extension of tcp (a function call with a simple interface that applies to various flavors of tcp, e.g., tahoe, reno, and vegas) and show that it significantly improves performance. second, we show that multiple time scale tcp endows the underlying feedback control with proactivity by bridging the uncertainty gap associated with reactive controls which is exacerbated by the high delay-bandwidth product in broadband wide area networks.
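the "modular extension" mentioned above is only sketched in the abstract; the following minimal python fragment is our own illustration (hypothetical names, not the authors' interface) of the general idea of a large-time-scale module that scales a tcp sender's window-increase aggressiveness according to contention observed over a horizon much longer than one rtt.

    # illustrative sketch only: a large-time-scale aggressiveness hint for a
    # window-based congestion control; names and policy are hypothetical
    from collections import deque

    class LargeTimeScaleModule:
        def __init__(self, horizon=256):
            self.samples = deque(maxlen=horizon)   # per-rtt throughput samples

        def observe(self, achieved_rate):
            self.samples.append(achieved_rate)

        def aggressiveness(self):
            # crude proxy for long-term network state: if recent throughput is
            # well below the long-term level, grow the window more cautiously;
            # if it is at or above that level, grow it more aggressively
            if len(self.samples) < 8:
                return 1.0
            long_term = sum(self.samples) / len(self.samples)
            recent = sum(list(self.samples)[-8:]) / 8
            return max(0.5, min(2.0, recent / long_term))

    # inside a (hypothetical) tcp sender, per congestion-avoidance rtt:
    #   cwnd += module.aggressiveness() * 1   # instead of the fixed +1 segment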
third, we investigate the influence of three traffic control dimensions---tracking ability, connection duration, and fairness---on performance. performance evaluation of multiple time scale tcp is facilitated by a simulation benchmark environment based on physical modeling of self-similar traffic. we explicate our methodology for disc kihong park tsunyi tuan the high-performance computing continuum sidney karin susan graham database and location management schemes for mobile communications anna hac bo liu integrated services packet networks with mobile hosts: architecture and performance this paper considers the support of real-time services to mobile users in an integrated services packet network. in the currently existing architectures, the service guarantees provided to the mobile hosts are mobility dependent, i.e., mobile hosts experience wide variation in the quality of service and often service disruption when hosts move from one location to another. the network performance degrades significantly when mobile hosts are provided with mobility independent service guarantees. in this paper we have proposed a service model for mobile hosts that can support adaptive applications which can withstand service degradation and disruption, as well as applications which require mobility independent service guarantees. we describe an admission control scheme for implementing this service model and evaluate its performance by simulation experiments. simulation results show that, if a sufficient degree of multiplexing of the mobility-dependent and mobility-independent services is allowed, the network does not suffer any significant performance degradation and in particular our admission control scheme achieves high utilization of network resources. anup kumar talukdar b. r. badrinath arup acharya d-arm: a new proposal for multi-dimensional interconnection networks this paper presents a new topology for multidimensional interconnection networks, namely d-arm, which has the goal of simultaneously providing a high network transmission capacity and a low information transfer delay. the new d-arm topology has a connection pattern arranged in alternated regular mesh fashions with toroidal boundaries. five distinct network attributes, normally used to characterize interconnection network topologies, were employed to analyze the d-arm topology: network diameter, bisection width, deflection index, degree of connectivity and symmetry. the evaluation of the performance of the d-arm network through computer simulations was also carried out, based on the following measures: throughput and information transfer delay. an upper-bound of the network transmission capacity was derived as a function of the network dimension (d) and length (w). in order to validate our proposal, as a viable topology among other well-known topologies, a comparative analysis among the d-arm, msn and shufflenet was performed. the analysis results show that the d-arm outperforms the msn and shufflenet in many aspects and suggest some plausible applications of the d-arm networks, e.g., broadband switching architecture, multiprocessor connection, high-speed man, wdm optical networks and photonic networks. lee luan ling alberto jose centeno filho an overview of the wireless information network laboratory (winlab) at rutgers university, nj, usa roy yates scheduling in parallel systems with a hierarchical organization of tasks to exploit multiple processors, a job is usually partitioned into several tasks that can be executed concurrently.
these tasks wait for processors in task ready queue(s). there are two basic ways in which waiting ready tasks can be organized: centralized organization or distributed organization. in a centralized organization, a single central task queue is maintained. in the latter case, each processor has its own private ready queue of tasks. ideally, a central ready queue global to all processors is desired over the distributed organization. however, the centralized organization is not suitable for large parallel systems because the global task queue could become a bottleneck. a hierarchical organization has been proposed to incorporate the good features of these two organizations. this paper studies the impact of job and task scheduling policies on the performance of the hierarchical organization. s. p. cheng s. dandamudi design and control of micro-cellular networks with qos provisioning for data traffic the major focus of this paper is the design and control of micro/picocellular wireless access systems supporting non-real-time (or data) traffic subject to a guaranteed quality-of-service as defined by four metrics: call blocking probability, cell overload probability, average wireless bandwidth available to a mobile terminal, and probability that the available wireless bandwidth per mobile is less than some specified threshold. we utilize a cell-cluster based call admission control concept and provide a model as well as an analytical methodology which can be used to design wireless micro-cellular networks (as specified by the number of base stations required to serve the traffic generated within a given geographical area), and to control new call admission such that once a call is admitted to the system, it will enjoy a predefined quality-of-service without requiring intervention of the network call processor independent of how large the erlang load offered to the network may become. this is an important property for wireless access networks: although the network may be traffic engineered to provide an acceptable call blocking probability under anticipated erlang load, user mobility may occasionally cause that load to exceed design thresholds, and yet the qos offered to already admitted calls must be maintained. mahmoud naghshineh anthony s. acampora computing performance as a function of the speed, quantity, and cost of the processors everyone wants more computing power for their applications, and the industry has responded in two ways: first by increasing the speed of single cpu's, and second by deploying multiple processors in parallel. much controversy exists over how best to balance processor speed against the number of processors employed. is it better to have a single, very fast and very expensive cpu, thousands of very slow but very cheap cpu's, or is there some optimal mix in between? the value of single processors, measured in floating-point performance per dollar, is relatively easy to assess, but the corresponding value of parallel systems is obscured by the fact that applications are not generally perfectly parallel, with some loss in efficiency occurring due to sequential bottlenecks and communication overhead. parallel speedup, the ratio of execution time on a single processor to that on p processors, is often used to capture the effect and measure the efficiency of parallel utilization. we argue that this measure of efficiency is not a good measure of parallel performance because it rewards slow processors. 
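to make the "rewards slow processors" objection concrete, the following small python sketch (our own amdahl-style model, not the authors' analysis) computes delivered performance when the aggregate processor speed is held constant: with any nonzero sequential fraction, delivered performance never increases as the processor count grows, even though conventional speedup over a single slow processor would look impressive.

    # illustrative amdahl-style model, not the paper's analysis:
    # p processors, each of speed total_speed / p (constant aggregate speed),
    # and a sequential fraction s of the work that runs on a single processor
    def delivered(total_speed, p, s):
        per_proc = total_speed / p
        time = s / per_proc + (1 - s) / (per_proc * p)   # normalized work = 1
        return 1.0 / time

    for p in (1, 4, 16, 64):
        print(p, round(delivered(100.0, p, 0.05), 2))
    # delivered performance falls as p grows under constant aggregate speed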
instead we evaluate delivered floating-point performance as a function of the number of processors for either constant aggregate performance of the processors, or constant total cost. from these measures we offer two conclusions: 1) for a given aggregate floating-point performance, actual delivered performance never increases with the number of processors; and 2) for a given cost, delivered performance is maximized by selecting the fastest processor available at a given technology level, and employing as many as the budget allows. these results, which are generally known to parallel researchers, are often overlooked in the marketing announcements promoting "massively parallel" systems. we motivate this discussion by giving measured performance results from an actual application, and then show the theoretical basis. m. l. barton g. r. withers a pipelined, multiprocessor architecture for a connectionless server for broadband isdn daniel s. omundsen a. roger kaye samy a. mahmoud errata for "measured capacity of an ethernet: myths and reality" david r. boggs jeffrey c. mogul christopher a. kent the impact of a zero-scan internet checksumming mechanism this paper describes a "zero-scan" mechanism that reduces internet checksumming overhead from a per-byte scan (or copy) cost to a small and constant per-message cost. unlike previous techniques, this mechanism requires no message buffering within the source. this will allow internet transport protocols to achieve transfer latencies comparable to specialized protocols implemented directly on high-speed lan (link-layer) services. in addition, this mechanism is transparent to systems outside of the source lan. hence, this mechanism affords applications the portability of internet protocols without sacrificing the high performance of specialized lan transport protocols. the proposed zero-scan checksumming scheme eliminates the last requirement for an additional data copy/scan, beyond the scan required to transmit or receive from the network channel. if this checksumming mechanism is combined with zero-copy operating system mechanisms that provide low-overhead transfer across application and kernel boundaries, a network interface architecture that provides separate message buffering is no longer required. a consequence is that the network interface may be reduced, essentially, to dma engines plus link- and physical-layer logic. taken one step further, the network interface could be integrated with the cpu to create an "internet microprocessor". these alternative interface designs are discussed, along with their requirements and effects upon operating system and computer system architectures. gregory g. finn steve hotz rod van meter efficient support of delay and rate guarantees in an internet in this paper, we investigate some issues related to the efficient provision of end-to-end delay guarantees in the context of the guaranteed (g) services framework [16]. first, we consider the impact of reshaping traffic within the network on the end-to-end delay, the end-to-end jitter, as well as per-hop buffer requirements. this leads us to examine a class of traffic disciplines that use reshaping at each hop, namely rate-controlled disciplines. in this case, it is known that it is advantageous to use the earliest deadline first (edf) scheduling policy at the link scheduler [8]. for this service discipline, we determine the appropriate values of the parameters that have to be exported, as specified in [16].
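since the abstract above leans on earliest deadline first scheduling at the link, here is a minimal illustrative python sketch (our own, with hypothetical names) of an edf link scheduler that stamps each arriving packet with a deadline equal to its arrival time plus its flow's delay bound and always transmits the packet with the earliest deadline.

    # illustrative edf link scheduler sketch (hypothetical names and fields)
    import heapq

    class EdfScheduler:
        def __init__(self):
            self.queue = []            # heap of (deadline, seq, packet)
            self.seq = 0               # tie-breaker for equal deadlines

        def enqueue(self, packet, arrival_time, delay_bound):
            deadline = arrival_time + delay_bound
            heapq.heappush(self.queue, (deadline, self.seq, packet))
            self.seq += 1

        def dequeue(self):
            # transmit the packet whose deadline is earliest
            if not self.queue:
                return None
            _, _, packet = heapq.heappop(self.queue)
            return packet

    # usage: sched.enqueue(pkt, arrival_time=now, delay_bound=flow_bound)
    #        next_pkt = sched.dequeue()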
subsequently, with the help of an example, we illustrate how the g service traffic will typically underutilize the network, regardless of the scheduling policy used. we then define a guaranteed rate (gr) service, that is synergetic with the g service framework and makes use of this unutilized bandwidth to provide rate guarantees to flows. we outline some of the details of the gr service and explain how it can be supported in conjunction with the g service in an efficient manner. l. georgiadis r. guerin v. peris r. rajan the performance of cache-coherent ring-based multiprocessors advances in circuit and integration technology are continuously boosting the speed of microprocessors. one of the main challenges presented by such developments is the effective use of powerful microprocessors in shared memory multiprocessor configurations. we believe that the interconnection problem is not solved even for small scale shared memory multiprocessors, since the speed of shared buses is unlikely to keep up with the bandwidth requirements of new microprocessors. in this paper we evaluate the performance of unidirectional slotted ring interconnection for small to medium scale shared memory systems, using a hybrid methodology of analytical models and trace-driven simulations. we evaluate both snooping and directory-based coherence protocols for the ring and compare it to high performance split transaction buses. luis andre barroso michel dubois problem and machine sensitive communication optimization thomas fahringer eduard mehofer critical path analysis of tcp transactions improving the performance of data transfers in the internet (such as web transfers) requires a detailed understanding of when and how delays are introduced. unfortunately, the complexity of data transfers like those using http is great enough that identifying the precise causes of delays is difficult. in this paper we describe a method for pinpointing where delays are introduced into applications like http by using critical path analysis. by constructing and profiling the critical path, it is possible to determine what fraction of total transfer latency is due to packet propagation, network variation (e.g., queuing at routers or route fluctuation), packet losses, and delays at the server and at the client. we have implemented our technique in a tool called tcpeval that automates critical path analysis for web transactions. we show that our analysis method is robust enough to analyze traces taken for two different tcp implementations (linux and freebsd). to demonstrate the utility of our approach, we present the results of critical path analysis for a set of web transactions taken over 14 days under a variety of server and network conditions. the results show that critical path analysis can shed considerable light on the causes of delays in web transfers, and can expose subtleties in the behavior of the entire end-to-end system. paul barford mark crovella efficient user-level communication on multicomputers with an optimistic flow- control protocol (extended abstract) j. william lee wide-area traffic: the failure of poisson modeling network arrivals are often modeled as poisson processes for analytic simplicity, even though a number of traffic studies have shown that packet interarrivals are not exponentially distributed. 
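as a small illustration of the modeling issue raised here (our own sketch, not the paper's methodology), the fragment below generates exponential interarrivals, as a poisson model would, alongside heavy-tailed pareto interarrivals, and compares the variability of arrival counts over a coarse time scale; the heavy-tailed stream stays bursty while the poisson stream smooths out.

    # illustrative comparison of poisson vs. heavy-tailed interarrivals
    import random

    def arrival_times(draw_gap, n):
        t, out = 0.0, []
        for _ in range(n):
            t += draw_gap()
            out.append(t)
        return out

    def count_variability(times, bin_size=100.0):
        # variance-to-mean ratio of per-bin arrival counts
        bins = {}
        for t in times:
            key = int(t // bin_size)
            bins[key] = bins.get(key, 0) + 1
        counts = list(bins.values())
        mean = sum(counts) / len(counts)
        var = sum((c - mean) ** 2 for c in counts) / len(counts)
        return var / mean

    exp_gap = lambda: random.expovariate(1.0)     # poisson process
    par_gap = lambda: random.paretovariate(1.2)   # heavy-tailed gaps
    print(count_variability(arrival_times(exp_gap, 50000)))   # near 1
    print(count_variability(arrival_times(par_gap, 50000)))   # typically well above 1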
we evaluate 21 wide-area traces, investigating a number of wide-area tcp arrival processes (session and connection arrivals, ftpdata connection arrivals within ftp sessions, and telnet packet arrivals) to determine the error introduced by modeling them using poisson processes. we find that user-initiated tcp session arrivals, such as remote-login and file-transfer, are well-modeled as poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from poisson; that modeling telnet packet interarrivals as exponential grievously underestimates the burstiness of telnet traffic, but using the empirical tcplib [djcme92] interarrivals preserves burstiness over many time scales; and that ftpdata connection arrivals within ftp sessions come bunched into "connection bursts", the largest of which are so large that they completely dominate ftpdata traffic. finally, we offer some preliminary results regarding how our findings relate to the possible self-similarity of wide-area traffic. vern paxson sally floyd enhancing boosting with semantic register in a superscalar processor (abstract) ias-s supports boosting with "semantic register" and boosting boundary register to remove the dependences caused by conditional branches. in ias-s, there is no dedicated shadow register file, and multiple levels of boosting are supported without multiple copies of register files. any general-purpose register in ias-s can be regarded as a sequential register or a shadow register flexibly. furthermore, a multi-way jump mechanism is combined with boosting to reduce the penalty due to frequent control transfers. feipei lai meng-chou chang a reconfigurable hardware approach to network simulation dimitrios stiliadis anujan varma location dependent query processing the advances in wireless and mobile computing allow a mobile user to perform a wide range of applications once limited to non-mobile, hard-wired computing environments. as the geographical position of a mobile user is becoming more trackable, users need to pull data which are related to their location, perhaps seeking information about unfamiliar places or local lifestyle data. in these requests, a location attribute has to be identified in order to provide more efficient access to location dependent data, whose value is determined by the location to which it is related. local yellow pages, local events, and weather information are some of the examples of these data. in this paper, we give a formalization of location relatedness in queries. we differentiate location dependence and location awareness and provide thorough examples to support our approach. ayse y. seydim margaret h. dunham vijay kumar deterministic delay bounds for vbr video in packet-switching networks: fundamental limits and practical trade-offs dallas e. wrege edward w. knightly hui zhang jörg liebeherr instruction scheduling for the hp pa-8000 david a. dunn wei-chung hsu approximate performance analysis of real-time traffic over heavily loaded networks with timed token protocols jacek swiderski implementation of a local area network (abstract only) thriftnet was created by dr. tobin maginnis and dr. donald miller in 1979 as a locally developed basic network for file transfer and establishment of a virtual terminal between two computers connected by a pair of serial line units or slus.
this network has been described as an unobtrusive network in the sense that it requires no operating system modifications and uses existing terminal lines for intercomputer communication. (turbeville 85). thriftnet-version 2 was ported or implemented on the microvax from the pdp-11 because groups of programs from the pdp-11's unix system needed to be modified and uploaded to the microvax's ultrix system, and the microvax needed to communicate with the other computers within the network, enabling the user of the microvax to use other systems' utilities. thriftnet is a basic network that controls communication and file transfer between two computers connected by two slus in half or full duplex mode. the two computers in this case are the pdp-11 and the microvax, with unix and ultrix operating systems respectively. in version 2, there are only three files involved, called thrift, rcv, and thrift.h. thrift and rcv are the two c programs that create thriftnet. thrift.h is an include file of constants and variables common to both. thrift establishes communication and starts file transfers between the two computers while rcv is used to verify communications and receive the file from the host machine. [turbeville 85] [maginnis 82] thriftnet is established in a series of steps. the thrift program, when invoked, sets the terminal in a raw eight-bit mode, which invokes rcv in the connecting computer. rcv then sends back an ack and the computer wordsize. this sets a flag that signals that communications have been established. now the user can log in. to transfer a file, thrift starts by asking for the file spec from the master system as well as what to call the file when transferred. the name and specs of the file are sent to the receiving computer with an ack returning. the master then sends 131 bytes at a time. the group of data is then checked for errors and, if correct, rcv sends out an ack and writes the block to the computer's disk, but if incorrect, rcv sends out a nak and disregards the block, causing retransmission of the block. this goes on till eof is reached. at that point everything is reset to a virtual mode. the characters are sent in groups of 131 bytes. in each group, there is a start byte, 128 characters, a stop byte, and a longitudinal redundancy check character or lrcc. the lrcc is calculated by doing an exclusive or of the previous characters of the data block. if the receiving computer doesn't get the correct number of characters or a correctly calculated lrcc, the receiving computer will send a nak, which causes a retransmission; five transmissions are allowed before the file transfer is aborted. [henry 85] [turbeville 85] the first step in porting thriftnet was to type in thrift.h and rcv.c into the microvax. one problem arose when a nak(0) was inadvertently left out of the code when typed in, causing the program to wait forever on a nak that would never appear. another obstacle was that all unix system calls had to be changed to ultrix. the code also generated warning statements because of slight differences in the c compilers. to correct this, some sections of code had to be rewritten while others required the use of masks; also, the numeric constants for the bauds were changed to be compatible with ultrix. the constants for raw, echo, and eightb were also checked. sherry lesiker performance analysis of msp: feature-rich high-speed transport protocol thomas f.
la porta mischa schwartz using value prediction to increase the power of speculative execution hardware this article presents an experimental and analytical study of value prediction and its impact on speculative execution in superscalar microprocessors. value prediction is a new paradigm that suggests predicting outcome values of operations (at run-time) and using these predicted values to trigger the execution of true-data-dependent operations speculatively. as a result, stalls to memory locations can be reduced and the amount of instruction-level parallelism can be extended beyond the limits of the program's dataflow graph. this article examines the characteristics of the value prediction concept from two perspectives: (1) the related phenomena that are reflected in the nature of computer programs and (2) the significance of these phenomena to boosting instruction-level parallelism of superscalar microprocessors that support speculative execution. in order to better understand these characteristics, our work combines both analytical and experimental studies. freddy gabbay avi mendelson development of the domain name system the domain name system (dns) provides name service for the darpa internet. it is one of the largest name services in operation today, serves a highly diverse community of hosts, users, and networks, and uses a unique combination of hierarchies, caching, and datagram access. this paper examines the ideas behind the initial design of the dns in 1983, discusses the evolution of these ideas into the current implementations and usages, notes conspicuous surprises, successes and shortcomings, and attempts to predict its future evolution. paul v. mockapetris kevin j. dunlap architecture and experimental results for quality of service in mobile networks using rsvp and cbq efforts are underway to enhance the internet with quality of service (qos) capabilities for transporting real-time data. the issue of wireless networks and mobile hosts being able to support applications that require qos has become very significant. the reservation protocol (rsvp) provides a signaling mechanism for end-to-end qos negotiation. rsvp has been designed to work with wired networks. to make rsvp suitable for wireless networks, changes need to be made by (i) changing the way control messages are sent, and (ii) introducing wireless/mobile specific qos parameters that take into account the major features of wireless networks, namely, high losses, low bandwidth, power constraints and mobility. in this paper, an architecture with a modified rsvp protocol that helps to provide qos support for mobile hosts is presented. the modified rsvp protocol has been implemented in an experimental wireless and mobile testbed to study the feasibility and performance of our approach. class based queueing (cbq), which is used as the underlying bandwidth enforcing mechanism, is also modified to fit our approach. the experimental results show that the modified rsvp and cbq help in satisfying resource requests for mobile hosts after handoff occurs. the experiments also show how different power and loss profile mechanisms can be used with our framework. the system performance using the modified rsvp control mechanism is also studied. indu mahadevan krishna m. sivalingam probabilistic diagnosis of multiprocessor systems this paper critically surveys methods for the automated probabilistic diagnosis of large multiprocessor systems.
probabilistic diagnosis of multiprocessor systems this paper critically surveys methods for the automated probabilistic diagnosis of large multiprocessor systems. in recent years, much of the work on system-level diagnosis has focused on probabilistic methods, which can diagnose intermittently faulty processing nodes and can be applied in general situations on general interconnection networks. the theory behind the probabilistic diagnosis methods is explained, and the various diagnosis algorithms are described in simple terms with the aid of a running example. the diagnosis methods are compared and analyzed, and a chart is produced, showing the comparative advantages of the various diagnosis algorithms on the basis of several factors important to the probabilistic diagnosis. sunggu lee kang geun shin a "stupid" idea jay blickstein the himat model for mobile agent applications marco cremonini andrea omicini franco zambonelli mobility management for hierarchical wireless networks in this paper, we consider the mobility management in large, hierarchically organized multihop wireless networks. examples of such networks range from battlefield networks to emergency disaster relief and law enforcement. we present a novel network addressing architecture to accommodate mobility using a "home agent" concept akin to mobile ip. we distinguish between the "physical" routing hierarchy (dictated by geographical relationships between nodes) and the "logical" hierarchy of subnets in which the members move as a group (e.g., company, brigade, battalion in the battlefield). the performance of the mobility management scheme is investigated through simulation. guangyu pei mario gerla a communication structure for a multiprocessor computer with distributed global memory an experimental multiprocessor computer was designed and built in order to explore the feasibility of certain internal communication mechanisms. the system consisted of seven processing elements, each containing a part of the global memory connected to a local bus. for each processor the global memory is seen as one single, linearly addressable structure. the processing elements were all connected to a common, global bus, consisting of three separate busses in order to increase the capacity. a bus selection unit was designed, capable of making a unique bus selection for each request, within a fraction of a memory cycle. the experiments have shown that communication structures based on distributed global memory and global bus systems can be used efficiently for medium scale systems. lars philipson bo nilsson bjorn breidegard wireless atm - an overview this paper sketches the requirements and possibilities of wireless atm in local area networks. because of the wide range of services supported by atm networks, atm technology is expected to become the dominant networking technology in the medium term for both public infrastructure networks and for local area networks. atm infrastructure can support all types of services, from time-sensitive voice communications and desk-top multi-media conferencing, to bursty transaction processing and lan traffic. extending the atm infrastructure with a wireless access mechanism meets the needs of those users and customers that want a unified, end-to-end networking infrastructure with high-performance, consistent service characteristics. the paper introduces atm concepts, discusses the requirements for wireless atm, in particular for data link control and radio functions. it closes with some notes on development of wireless atm research systems, standardization, and spectrum allocations. geert a. awater jan kruys
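the distributed global memory abstract above (philipson et al.) describes seven processing elements, each holding a slice of a global memory that every processor sees as one linearly addressable structure. a minimal sketch of how such a global address can be split into a node number and a local offset is given below; the interleaving granularity and field layout are assumptions made for illustration only, not the paper's actual mapping.

```c
#include <stdint.h>

#define NUM_NODES  7            /* seven processing elements, as in the paper */
#define BLOCK_SIZE 256          /* assumed interleaving granularity in bytes  */

struct global_ref {
    unsigned node;              /* which processing element owns the word */
    uint32_t local_offset;      /* offset within that element's memory    */
};

/* map a linear global address onto (node, local offset), interleaving
   consecutive blocks across the processing elements; a reference that
   resolves to a remote node would trigger a global bus request */
static struct global_ref decode(uint32_t global_addr)
{
    uint32_t block = global_addr / BLOCK_SIZE;
    struct global_ref r = {
        .node = block % NUM_NODES,
        .local_offset = (block / NUM_NODES) * BLOCK_SIZE
                        + global_addr % BLOCK_SIZE,
    };
    return r;
}
```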
renaming in an asynchronous environment this paper is concerned with the solvability of the problem of processor renaming in unreliable, completely asynchronous distributed systems. fischer et al. prove in [8] that "nontrivial consensus" cannot be attained in such systems, even when only a single, benign processor failure is possible. in contrast, this paper shows that problems of processor renaming can be solved even in the presence of up to t < n/2 faulty processors, contradicting the widely held belief that no nontrivial problem can be solved in such a system. the problems deal with renaming processors so as to reduce the size of the initial name space. when only uniqueness of the new names is required, we present a lower bound of n + 1 on the size of the new name space, and a renaming algorithm that establishes an upper bound of n + t. if the new names are required also to preserve the original order, a tight bound of 2^t(n - t + 1) - 1 is obtained. hagit attiya amotz bar-noy danny dolev david peleg rudiger reischuk characterizing and interpreting periodic behavior in computer systems robert f. berry joseph l. hellerstein interaction of tcp and data access control in an integrated voice/data cdma system this paper considers the interaction between a proposed data access control scheme and the standardized error recovery schemes on the radio link of a voice/data cdma system. a data access control scheme for combined voice-data cdma systems has been proposed and studied in previous literature. the scheme aims to maintain a certain target voice signal-to-interference ratio (sir); this is achieved by controlling the data load according to the measured voice sir. the data users are allowed to transmit in a radio-link time slot with a certain permission probability, which is determined by the base station based on the measured voice sir in the previous slot. as per the is-99 standards, however, data transmission operates under the framework of tcp, which is a higher level end-to-end protocol. the tcp data unit, called a segment, is typically equivalent to several tens of physical layer frames; hence, a segment transmission takes up several tens of slots. due to changes in the number of voice users in talkspurt (which occur on a time scale shorter than a segment transmission time), the slot level data access control scheme can introduce significant variability in the segment transmission time. the effect of such variability on the tcp timers, which operate at the segment level, is of interest. in this paper, an approximate upper bound on the data throughput, taking the presence of tcp into account, is computed. the results provide one with an insight into the interaction of the access control scheme with tcp; they also give practical pointers as to choosing suitable parameters and operating points for the scheme. sudhir ramakrishna jack m. holtzman real-time estimation and dynamic renegotiation of upc parameters for arbitrary traffic sources in atm networks brian l. mark gopalakrishnan ramamurthy execution control and memory management of a data flow signal processor the architecture of the data flow signal processor (dfsp) is discussed with the emphasis on its control mechanism. it is argued that the data flow principle can be efficiently applied to block processing operations of nonrecursive dsp computations, when shared data structures are avoided. simulation results involving the optimal operand size and the memory use of the control section are presented.
due to the expandability and convenient programmability of the dfsp architecture, the range of its potential applications extends beyond signal processing as demonstrated by a dfsp based database machine. klaus kronlöf analysis of protocol sequences for slow frequency hopping an error probability bound for protocol sequences is derived for frame asynchronous access on a slow frequency hopping channel. this bound depends on the maximum and average cyclic hamming correlation properties of the protocol sequences used. constructions of protocol sequences with good cyclic correlation properties are given. laszlo gyorfi istvan vajda secure and mobile networking the ietf mobile ip protocol is a significant step towards enabling nomadic internet users. it allows a mobile node to maintain and use the same ip address even as it changes its point of attachment to the internet. mobility implies higher security risks than static operation. portable devices may be stolen or their traffic may, at times, pass through links with questionable security characteristics. most commercial organizations use some combination of source-filtering routers, sophisticated firewalls, and private address spaces to protect their network from unauthorized users. the basic mobile ip protocol fails in the presence of these mechanisms even for authorized users. this paper describes enhancements that enable mobile ip operation in such environments, i.e., they allow a mobile user, out on a public portion of the internet, to maintain a secure virtual presence within his firewall-protected office network. this constitutes what we call a mobile virtual private network (mvpn). vipul gupta gabriel montenegro optimal cost/performance design of atm switches paolo coppo matteo d'ambrosio riccardo melen stability of long-lived consensus (extended abstract) this paper introduces the notion stability for a long-lived consensus system. this notion reflects how sensitive to changes the decisions of the system are, from one invocation of the consensus algorithm to the next, with respect to input changes. stable long-lived consensus systems are proposed, and tight lower bounds on the achievable stability are proved, for several different scenarios. the scenarios include systems that keep memory from one invocation of consensus to the next versus memoryless systems; systems that take their decisions based on the number of different inputs but not on the source identities of those inputs versus non-symmetric systems. these results intend to study essential aspects of stability, and hence are independent of specific models of distributed computing. applications to particular asynchronous and synchronous system are described. shlomi dolev sergio rajsbaum an overview of unp larry l. peterson hashed and hierarchical timing wheels: efficient data structures for implementing a timer facility george varghese anthony lauck exploiting state equivalence on the fly while applying code motion and speculation luiz c. v. dos santos jochen a. g. jess an energy consumption model for performance analysis of routing protocols for mobile ad hoc networks a mobile ad hoc network (or manet) is a group of mobile, wireless nodes which cooperatively form a network independent of any fixed infrastructure or centralized administration. in particular, a manet has no base stations: a node communicates directly with nodes within wireless range and indirectly with all other nodes using a dynamically-computed, multi- hop route via the other nodes of the manet. 
simulation and experimental results are combined to show that energy and bandwidth are substantively different metrics and that resource utilization in manet routing protocols is not fully addressed by bandwidth-centric analysis. this report presents a model for evaluating the energy consumption behavior of a mobile ad hoc network. the model was used to examine the energy consumption of two well- known manet routing protocols. energy-aware performance analysis is shown to provide new insights into costly protocol behaviors and suggests opportunities for improvement at the protocol and link layers. laura marie feeney the performance of tcp/ip for networks with high bandwidth-delay products and random loss t. v. lakshman upamanyu madhow performance modeling of partial packet discarding using the end-of-packet indicator in aal type 5 ahmed e. kamal hardware combining and scalability susan r. dickey richard kenner resources section: books michele tepper workload characterization of a web proxy in a cable modem environment martin arlitt rich friedrich tai jin dimensioning bandwidth for elastic traffic in high-speed data networks arthur w. berger yaakov kogan forwarding database overhead for inter-domain routing yakov rekhter optimal distributed location management in mobile networks an important issue in the design of future personal communication services (pcs) networks is the efficient management of location information. in this paper, we consider a distributed database architecture for location management in which update and query loads of the individual databases are balanced. we obtain lower bounds to the worst-case delay in locating a mobile user, to the average delay, and to the call blocking probability. we then propose a dynamic location management algorithm that meets these lower bounds. the optimality of this algorithm with respect to these three perofrmance measures, as well as simplicity, make it an appealing candidate for distributed location management in pcs networks. govind krishnamurthi murat azizoglu arun k. somani a gigabit/sec fibre channel circuit switch-based lan p. r. rupert time synchronization over networks using convex closures jean-marc berthaud an adaptive congestion control scheme for real time packet video transport hemant kanakia partho p. mishra amy r. reibman collection and analysis of data from communication system networks this paper provides a brief overview of a new, limited and specific direction in the study of computer-based information systems: the study of communication networks by means of computer-monitored usage and content. this area of research is the result of the interaction of two foci in recent research: (1) the increasing use of computer-mediated communication systems in organizations, and (2) the conceptualization of communication as a process of convergence, or networks of relations. ronald e. rice a reduced register file for risc architectures miquel huguet tomás lang fragmentation considered harmful internetworks can be built from many different kinds of networks, with varying limits on maximum packet size. throughput is usually maximized when the largest possible packet is sent; unfortunately, some routes can carry only very small packets. the ip protocol allows a gateway to _fragment_ a packet if it is too large to be transmitted. fragmentation is at best a necessary evil; it can lead to poor performance or complete communication failure. 
there are a variety of ways to reduce the likelihood of fragmentation; some can be incorporated into existing ip implementations without changes in protocol specifications. others require new protocols, or modifications to existing protocols. christopher a. kent jeffrey c. mogul work and infrastructure susan leigh star geoffrey c. bowker whatever happened to the next-generation internet? mark weiser computing and communications (keynote address/panel discussion): a new world or a fantasy? robert v. adams robert a. dryden paul c. ely robert metcalf hierarchical function distribution - a design principle for advanced multicomputer architectures an abstract view of a computer system is provided by a hierarchy of functions, ranging from the high-level operating system functions down to the primitive functions of the hardware. vertical migration of high-level functions into the microcode of a cpu or horizontal migration of hardware functions out of the cpu into dedicated processors alone is not an adequate realization method for innovative computer architectures with complex functionality. in the paper, a new design principle called hierarchical function distribution is introduced to cope with the task of designing innovative multicomputer systems with complex functionality. the design rules of hierarchical function distribution are presented, and the advantages of the approach are discussed and illustrated by examples. w. k. giloi p. behr a comparison of architectural support for messaging in the tmc cm-5 and the cray t3d programming models based on messaging continue to be important for parallel machines. messaging costs are strongly influenced by a machine's network interface architecture. we examine the impact of architectural support for messaging in two machines --- the tmc cm-5 and the cray t3d --- by exploring the design and performance of several messaging implementations. the additional features in the t3d support remote operations: memory access, fetch-and-increment, atomic swaps, and prefetch. experiments on the cm-5 show that requiring processor involvement for message reception can increase the communication overheads from 60% to 300% for moderate variations in computation grain size at the destination. in contrast, the t3d hardware for remote operations decouples message reception from processor activity, producing high-performance messaging independent of computation grain size or variability. in addition, hardware support for a shared address space in the t3d can be used to solve the output contention problem (output hot spots), producing messaging implementations that are robust over a wide variety of traffic patterns. atomic swap hardware can be used to build a distributed message queue, enabling a "pull" messaging scheme where the destination requests data transfer upon receive. this scheme uses prefetches to mask receive latency. while this yields performance robust over output contention, its base cost is competitive only for small messages (up to 64 bytes) because of the high cost of issuing and resolving prefetches in the t3d. emulation shows that if the interaction costs can be reduced by a factor of eight (250ns to 31ns), perhaps by moving the prefetch queue on chip, and there is a corresponding increase in the prefetch queue size, the pull scheme can give superior performance in all cases. vijay karamcheti andrew a. chien
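the cm-5/t3d abstract above attributes much of the t3d's messaging advantage to remote operations such as fetch-and-increment and atomic swap, which let a sender claim a slot in a destination queue without interrupting the destination processor. the sketch below mimics that slot-claiming step with c11 atomics on shared memory; it is only an analogy to the t3d's hardware remote operations, and the queue size and layout are invented for illustration.

```c
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

#define QUEUE_SLOTS 256          /* illustrative queue size (power of two)    */
#define MSG_BYTES    64          /* small messages, as discussed in the paper */

struct msg_queue {
    _Atomic uint64_t tail;            /* next free slot, claimed by senders   */
    uint64_t head;                    /* drained by the destination processor */
    uint8_t  slot[QUEUE_SLOTS][MSG_BYTES];
    _Atomic uint8_t ready[QUEUE_SLOTS];
};

/* sender side: fetch-and-increment claims a slot, then the payload is
   written and the slot is marked ready; no receiver involvement needed */
void enqueue(struct msg_queue *q, const uint8_t msg[MSG_BYTES])
{
    uint64_t t = atomic_fetch_add(&q->tail, 1);
    uint64_t i = t % QUEUE_SLOTS;             /* no overflow handling here */
    memcpy(q->slot[i], msg, MSG_BYTES);
    atomic_store_explicit(&q->ready[i], 1, memory_order_release);
}

/* destination side: "pull" the next message when it is ready */
int dequeue(struct msg_queue *q, uint8_t out[MSG_BYTES])
{
    uint64_t i = q->head % QUEUE_SLOTS;
    if (!atomic_load_explicit(&q->ready[i], memory_order_acquire))
        return 0;                             /* nothing ready yet */
    memcpy(out, q->slot[i], MSG_BYTES);
    atomic_store_explicit(&q->ready[i], 0, memory_order_relaxed);
    q->head++;
    return 1;
}
```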
highly personalized information delivery to mobile clients the inherent limitations of mobile devices necessitate that information delivered to mobile clients be highly personalized according to their profiles. this information may come from a variety of resources such as web servers, company intranets, and email servers. a critical issue for such systems is scalability, that is, the performance of the system should stay within acceptable limits when the number of users increases dramatically. another important issue is being able to express highly personalized information in the user profiles, which requires querying power comparable to that of sql on relational databases. finally, the results should be customized according to user needs and preferences. since the queries will be executed on the documents fetched over the internet, it is natural to expect the documents to be xml documents. this paper describes an architecture for mobile network operators to deliver highly personalized information from xml resources to mobile clients. to achieve high scalability in this architecture, we index the user profiles rather than the documents because of the excessively large number of profiles expected in the system. in this way all queries that apply to a document at a given time are executed in parallel through a finite state machine (fsm) approach while parsing the document. furthermore, the queries that have the same fsm representation are grouped and only one finite state machine is created for each group, which contributes to the excellent performance of the system as demonstrated in the performance evaluation section. to provide for user friendliness and expressive power, we have developed a graphical user interface that translates the user profiles into xml-ql. xml-ql's querying power and its elaborate construct statement allow the format of the results to be specified. the results to be pushed to the mobile clients are converted to wireless markup language (wml) by the delivery component of the system. bahattin ozen ozgur kilic mehmet altinel asuman dogac promoting the use of end-to-end congestion control in the internet sally floyd kevin fall the unimin switch architecture for large-scale atm switches sung hyuk byun dan keun sung a deadlock model for a multi-service medium access protocol employing multi-slot n-ary stack algorithm (msstart) fraser cameron moshe zukerman milosh ivanovich sivathasan saravanabavananthan ranil hewawasam a performance comparison of three supercomputers: fujitsu vp-2600, nec sx-3, and cray y-mp margaret l. simmons harvey j. wasserman olaf m. lubeck christopher eoyang raul mendez hiroo harada misako ishiguro
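the personalized information delivery abstract above groups user-profile queries that share the same fsm representation, so that only one automaton per group runs while a document is parsed. the sketch below shows that grouping step only, keyed on a canonical pattern string; the fixed capacities and the signature format are assumptions made for illustration, not details from the paper.

```c
#include <stdio.h>
#include <string.h>

#define MAX_GROUPS          64   /* illustrative capacities */
#define MAX_QUERIES_PER_GRP 128

/* one group: all profile queries whose patterns compile to the same fsm */
struct query_group {
    char signature[256];                  /* canonical pattern, e.g. "/news/sports" */
    int  query_ids[MAX_QUERIES_PER_GRP];  /* profiles interested in this pattern    */
    int  n_queries;
};

static struct query_group groups[MAX_GROUPS];
static int n_groups;

/* add a profile query: reuse an existing group when the signature matches,
   so only one fsm will be built and run for the whole group */
int add_query(const char *signature, int query_id)
{
    for (int g = 0; g < n_groups; g++) {
        if (strcmp(groups[g].signature, signature) == 0) {
            if (groups[g].n_queries < MAX_QUERIES_PER_GRP)
                groups[g].query_ids[groups[g].n_queries++] = query_id;
            return g;
        }
    }
    if (n_groups == MAX_GROUPS)
        return -1;                        /* out of room in this toy sketch */
    struct query_group *g = &groups[n_groups];
    snprintf(g->signature, sizeof g->signature, "%s", signature);
    g->query_ids[0] = query_id;
    g->n_queries = 1;
    return n_groups++;
}
```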
reliable and efficient hop-by-hop flow control hop-by-hop flow control can be used to fairly share the bandwidth of a network among competing flows. no data is lost even in overload conditions; yet each flow gets access to the maximum throughput when the network is lightly loaded. however, some schemes for hop-by-hop flow control require too much memory; some of them are not resilient to errors. we propose a scheme for making hop-by-hop flow control resilient and show that it has advantages over schemes proposed by kung. we also describe a novel method for sharing the available buffers among the flows on a link; our scheme allows us to potentially reduce the memory requirement (or increase the number of flows that can be supported) by an order of magnitude. most of the work is described in the context of an atm network that uses credit based flow control. however, our ideas extend to networks in which flows can be distinguished, and to rate based flow control schemes. cuneyt ozveren robert simcoe george varghese the big three - today's 16-bit microprocessor this paper reports on the functional evaluation of the three 16-bit microprocessors, namely the intel 8086, the zilog z8000, and the motorola mc68000. these microprocessors were employed in several crt applications, both monochrome and color. execution time benchmark tests were made, mechanization problems compared and instruction/architectural characteristics highlighted. conclusions and recommendations are made applicable to terminals and similar sperry univac products. r. k. bell w. d. bell t. c. cooper t. k. mcfarland using space-time grid for efficient management of moving objects efficient storage and retrieval of moving objects in dbms have received significant interest recently. there are applications that would benefit from the management of dynamically changing information about moving objects. in this paper, we develop a system that manages such information interacting with moving objects. we model the space-time domain space as a grid (space-time grid) and model the trajectory of a moving object as a polyline in the space-time grid. the polyline is the result of the interactions among other moving objects. in this paper, the insertion algorithm and several other query processing algorithms are presented. hae don chon divyakant agrawal amr el abbadi websites michele tepper usenet nuggets mark thorson resource discovery in distributed networks mor harchol-balter tom leighton daniel lewin synchronizing processors through memory requests in a tightly coupled multiprocessor to satisfy the growing need for computing power, a high degree of parallelism will be necessary in future supercomputers. up to the late 70s, supercomputers were either multiprocessors (simd-mimd) or pipelined monoprocessors. current commercial products combine these two levels of parallelism. effective performance will depend on the spectrum of algorithms which is actually run in parallel. in a previous paper [je86], we have presented the dspa processor, a pipeline processor which is actually performant on a very large family of loops. in this paper, we present the greedy network, a new interconnection network (in) for tightly coupled multiprocessors (tcms). then we propose an original and cost effective hardware synchronization mechanism. when dspa processors are connected with a shared memory through a greedy network and synchronized by our synchronization mechanism, a very high parallelism may be achieved at execution time on a very large spectrum of loops, including loops where independence of the successive iterations cannot be checked at compile time, as e.g. loop 1: do 1 i=1,n 1 a(p(i))=a(q(i)) a. seznec y. jegou technologies for low latency interconnection switches thomas f. knight
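loop 1 in the synchronization abstract above is the classic case where the independence of successive iterations cannot be established at compile time, because both sides of the assignment are indexed through arrays p and q whose contents are only known at run time. the c rendering below restates the loop and adds, as an assumed illustration, a simple run-time check that could decide whether the iterations may run in parallel; it is not the mechanism of the paper.

```c
/* loop 1 from the abstract, restated in c:
       do 1 i=1,n
     1 a(p(i)) = a(q(i))
   whether iteration i conflicts with a later iteration depends entirely
   on the run-time contents of p and q. */
void loop1(double *a, const int *p, const int *q, int n)
{
    for (int i = 0; i < n; i++)
        a[p[i]] = a[q[i]];
}

/* illustrative (quadratic) run-time independence test: iterations are
   pairwise independent if no p[i] collides with any p[j] or q[j], j != i
   (no write-write and no read-write conflicts across iterations). */
int iterations_independent(const int *p, const int *q, int n)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (i != j && (p[i] == q[j] || p[i] == p[j]))
                return 0;
    return 1;
}
```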
problems and approaches for a teraflop processor this paper discusses problems associated with designing a processor capable of sustaining a teraflop (10^12 floating point operations per second) of processing power. several researchers have speculated on achieving this performance. the technical problems of a practical design are shown to be formidable. however, none of these problems requires a technology breakthrough for their solution. the predictable advances of the next generation of technology together with a major engineering effort is all that will be required to build such a parallel machine with usable teraflop processing power. a. h. frey g. c. fox optimal routing in closed queuing networks hiroshi kobayashi mario gerla on the memory overhead of distributed snapshots lior shabtay adrian segall the horizon supercomputing system: architecture and software horizon is the name currently being used to refer to a shared-memory multiple instruction stream - multiple data stream (mimd) computer architecture under study by independent groups at the supercomputing research center and at tera computer company. its performance target is a sustained rate of 100 giga (10^11) floating point operations per second (flops). horizon achieves this speed with a few hundred identical scalar processors. each processor has a horizontal instruction set that allows the production of one or more floating point results per cycle without resorting to vector operations. memory latency is hidden, assuming enough parallelism is available, by allowing processors to switch context on each machine cycle. in this overview, the horizon architecture is introduced and its performance is estimated. the processor instruction set and a simple programming example are given. additional details on the processor architecture, interconnection network design, performance analyses, machine simulator, compiler development, and application studies can be found in companion papers. j. t. kuehn b. j. smith performance analysis of redundant-path networks for multiprocessor systems performance of a class of multistage interconnection networks employing redundant paths is investigated. redundant path networks provide significant tolerance to faults at minimal costs; in this paper improvements in performance and very graceful degradation are also shown to result from the availability of redundant paths. a markov model is introduced for the operation of these networks in the circuit-switched mode and is solved numerically to obtain the performance measures of interest. the structure of the networks that provide maximal performance is also characterized. krishnan padmanabhan duncan h. lawrie amorphous computer system architecture: a preliminary look noel w. anderson baked potatoes: deadlock prevention via scheduling shlomi dolev evangelos kranakis danny krizanc a modular multirate video distribution system: design and dimensioning yiu-wing leung tak-shing yum replacement policies for a proxy cache luigi rizzo lorenzo vicisano the eifel retransmission timer we analyze two alternative retransmission timers for the transmission control protocol (tcp). we first study the retransmission timer of tcp-lite, which is considered to be the current de facto standard for tcp implementations. after revealing four major problems of tcp-lite's retransmission timer, we propose a new timer, named the eifel retransmission timer, that eliminates these. the strength of our work lies in its hybrid analysis methodology. we develop models of both retransmission timers for the class of network-limited tcp bulk data transfers in steady state. using those models, we predict the problems of tcp-lite's retransmission timer and develop the eifel retransmission timer. we then validate our model-based analysis through measurements in a real network that yield the same results. r. ludwig k. sklower the pim architecture for wide-area multicast routing stephen deering deborah l. estrin dino farinacci van jacobson ching-gung liu liming wei
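the eifel retransmission timer abstract above analyzes the de facto standard tcp retransmission timer. as background for that discussion, the sketch below shows the conventional smoothed-rtt and mean-deviation computation such timers are built on (the widely used jacobson-style estimator with gains 1/8 and 1/4 and the rto = srtt + 4*rttvar rule); it is not the eifel algorithm itself, and the clamping constants are illustrative.

```c
/* conventional tcp retransmission timeout estimator (jacobson-style),
   shown as background for the retransmission-timer discussion above. */
struct rto_state {
    double srtt;      /* smoothed round-trip time, seconds */
    double rttvar;    /* smoothed mean deviation, seconds  */
    double rto;       /* current retransmission timeout    */
    int    initialized;
};

#define RTO_MIN 1.0   /* illustrative clamping bounds, seconds */
#define RTO_MAX 60.0

static double clamp(double x, double lo, double hi)
{
    return x < lo ? lo : (x > hi ? hi : x);
}

/* feed one new round-trip time sample r (seconds) into the estimator */
void rto_sample(struct rto_state *s, double r)
{
    if (!s->initialized) {
        s->srtt = r;
        s->rttvar = r / 2.0;
        s->initialized = 1;
    } else {
        double err = r - s->srtt;
        s->rttvar += 0.25 * ((err < 0 ? -err : err) - s->rttvar); /* beta = 1/4  */
        s->srtt   += 0.125 * err;                                 /* alpha = 1/8 */
    }
    s->rto = clamp(s->srtt + 4.0 * s->rttvar, RTO_MIN, RTO_MAX);
}
```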
network support for ip traceback this paper describes a technique for tracing anonymous packet flooding attacks in the internet back toward their source. this work is motivated by the increased frequency and sophistication of denial-of-service attacks and by the difficulty in tracing packets with incorrect, or "spoofed," source addresses. in this paper, we describe a general purpose traceback mechanism based on probabilistic packet marking in the network. our approach allows a victim to identify the network path(s) traversed by attack traffic without requiring interactive operational support from internet service providers (isps). moreover, this traceback can be performed "post mortem," after an attack has completed. we present an implementation of this technology that is incrementally deployable, (mostly) backward compatible, and can be efficiently implemented using conventional technology. stefan savage david wetherall anna karlin tom anderson integration of security in network routing protocols brijesh kumar implementation trade-offs in using a restricted data flow architecture in a high performance risc microprocessor the implementation of a superscalar, speculative execution sparc-v9 microprocessor incorporating restricted data flow principles required many design trade-offs. consideration was given to both performance and cost. performance is largely a function of cycle time and instructions executed per cycle while cost is primarily a function of die area. here we describe our restricted data flow implementation and the means with which we arrived at its configuration. future semiconductor technology advances will allow these trade-offs to be relaxed and higher performance restricted data flow machines to be built. m. simone a. essen a. ike a. krishnamoorthy t. maruyama n. patkar m. ramaswami m. shebanow v. thirumalaiswamy d. tovey performance of multipath routing for on-demand protocols in mobile ad hoc networks mobile ad hoc networks are characterized by multi-hop wireless links, absence of any cellular infrastructure, and frequent host mobility. design of efficient routing protocols in such networks is a challenging issue. a class of routing protocols called on-demand protocols has recently found attention because of their low routing overhead. the on-demand protocols depend on query floods to discover routes whenever a new route is needed. such floods take up a substantial portion of network bandwidth. we focus on a particular on-demand protocol, called dynamic source routing, and show how intelligent use of multipath techniques can reduce the frequency of query floods. we develop an analytic modeling framework to determine the relative frequency of query floods for various techniques. our modeling effort shows that while multipath routing is significantly better than single path routing, the performance advantage is small beyond a few paths and for long path lengths. it also shows that providing all intermediate nodes in the primary (shortest) route with alternative paths has a significantly better performance than providing only the source with alternate paths. we perform some simulation experiments which validate these findings. asis nasipuri robert castañeda samir r. das distributed round-robin and first-come first-serve protocols and their applications to multiprocessor bus arbitration two new distributed protocols for fair and efficient bus arbitration are presented.
the protocols implement round-robin (rr) and first-come first-serve (fcfs) scheduling, respectively. both protocols use relatively few control lines on the bus, and their logic is simple. the round-robin protocol, which uses statically assigned arbitration numbers to resolve conflict during an arbitration, is more robust and simpler to implement than previous distributed rr protocols that are based on rotating agent priorities. the proposed fcfs protocol uses partly static arbitration numbers, and is the first practical proposal for a fcfs arbiter known to the authors. the proposed protocols thus have a better combination of efficiency, cost, and fairness characteristics than existing multiprocessor bus arbitration algorithms. three implementations of our rr protocol, and two implementations of our fcfs protocol, are discussed. simulation results are presented that address: 1) the practical potential for unfairness in the simpler implementation of the fcfs protocol, 2) the practical implications of the higher waiting time variance in the rr protocol, and 3) the allocation of bus bandwidth among agents with unequal request rates in each protocol. the simulation results indicate that there is very little practical difference in the performance of the two protocols. m. k. vernon u. manber dynamic capacity allocation and hybrid multiplexing techniques for atm wireless lans we consider digital wireless multimedia lans and time- varying traffic rates. to deal effectively with the dynamics of the time- varying traffic rates, a traffic monitoring algorithm (tma) is deployed to dynamically allocate channel capacities to the heterogeneous traffics. the tma is implemented as a higher level protocol that dictates the capacity boundaries within two distinct framed transmission techniques: a framed time domain-based (ftdb) technique and a framed cdma (fcdma) technique. the performance of the tma in the presence of the ftdb technique is compared to its performance in the presence of the fcdma technique for some traffic scenarios. the performance metrics used for the tma-ftdb and tma-fcdma combinations are channel capacity utilization factors, traffic rejection rates, and traffic delays. it is found that the tma-ftdb is superior to the tma-fcdma when the speed of the transmission links is relatively low and the lengths of the transmitted messages are relatively short. as the speed of the transmission links and the length of the transmitted messages increase, the tma-fcdma eventually outperforms the tma-ftdb. anthony burrell harold p. stern p. papantoni-kazakos a performance study of a token ring protocol s. guptan b. srinivasan n. simha a comparison of channel scanning schemes for distributed formation and reconfiguration a packet radio network (for example, wireless lan, mobile tactical network) consists of a number of nodes, each equipped with a transceiver, exchanging data packets via radio channels. in this paper we identify and discuss issues related to the process of forming a network in an automatic and distributed manner. during network initialisation and reconfiguration, the time to complete the network formation process is an important performance parameter. we define two measures which can be used for computing the network formation time. also, we present a comparative evaluation of two channel-scanning schemes (synchronous and asynchronous) for multiple frequency operations using a broadcast access scheme based on csma. 
our study of networks with different spatial distributions of nodes shows that the asynchronous scanning scheme performs better when the ratio of the number of 1-hop neighbours to the number of available channels is high. its performance is somewhat worse than that of the synchronous scheme for sparse networks. it is also sensitive to changes in the channel dwell time of the receivers, compared with the synchronous case. a. o. mahajan a. j. dadej k. v. lever amtree: an active approach to multicasting in mobile networks active networks (ans) are a new paradigm in computer networking. in ans, programs can be injected into routers and switches to extend the functionalities of the network. this allows programmers to enhance existing protocols and enables the rapid deploymentof new protocols. little work has been done in the area of multicast routing in heterogeneous environments. in this paper, we propose amtree, an an-based multicast tree that is bidirectional, optimizable on demand and adaptive to source migration. we show how ans can be exploited to enable multicast tree to be modified and optimized efficiently. by filtering unnecessary signaling messages, maintaining minimal storage at routers and incorporating features from shared-tree methods we are able to achieve a scalable solution. furthermore, we introduce an an-based optimisation algorithm that is executed on demand by receivers. besides that we introduce a fast rejoin protocol for receiver migration that makes no assumptions about the existence of multicast services in foreign networks. the performance of amtree is compared to those of the bidirectional home agent (ha) method and the remote subscription method. we found that compared to the bidirectional ha method amtree has a much lower handoff and end-to-end latency. unlike the bidirectional ha method where end-to-end latency increases as the mobile host (mh) migrates further away from its ha, amtree's latency remains fairly constant. we found that after optimization, the resulting tree's end-to-end latency to be comparable to the remote subscription method but without the need for building a new multicast tree after each handoff. kwan-wu chin mohan kumar adaptive resource management algorithms for indoor mobile computing environments emerging indoor mobile computing environments seek to provide a user with an advanced set of communication-intensive applications, which require sustained quality of service in the presence of wireless channel error, user mobility, and scarce available resources. in this paper, we investigate two related approaches for the management of critical networking resources in indoor mobile computing environments: adaptively re-adjusting the quality of service within pre-negotiated bounds in order to accommodate network dynamics and user mobility. classifying cells based on location and handoff profiles, and designing advance resource reservation algorithms specific to individual cell characteristics.preliminary simulation results are presented in order to validate the approaches for algorithmic design. a combination of the above approaches provide the framework for resource management in an ongoing indoor mobile computing environment project at the university of illinois. songwu lu vaduvur bharghavan high-level specification and efficient implementation of pipelined circuits this paper describes a novel approach to high-level synthesis of complex pipelined circuits, including pipelined circuits with feedback. 
this approach combines a high-level, modular specification language with an efficient implementation. in our system, the designer specifies the circuit as a set of independent modules connected by conceptually unbounded queues. our synthesis algorithm automatically transforms this modular, asynchronous specification into a tightly coupled, fully synchronous implementation in synthesizable verilog. maria-cristina marinescu martin rinard the performance of query control schemes for the zone routing protocol in this paper, we study the performance of route query control mechanisms for the recently proposed zone routing protocol (zrp) for ad-hoc networks. the zrp proactively maintains routing information for a local neighborhood (routing zone), while reactively acquiring routes to destinations beyond the routing zone. this hybrid routing approach has the potential to be more efficient in the generation of control traffic than traditional routing schemes. however, without proper query control techniques, the zrp can actually produce more traffic than standard flooding protocols. our proposed query control schemes exploit the structure of the routing zone to provide enhanced detection (query detection (qd1/qd2)), termination (loop-back termination (lt), early termination (et)) and prevention (selective bordercasting (sbc)) of overlapping queries. we demonstrate how certain combinations of these techniques can be applied to single channel or multiple channel ad-hoc networks to improve both the delay and control traffic performance of the zrp. our query control mechanisms allow the zrp to provide routes to all accessible network nodes with only a fraction of the control traffic generated by purely proactive distance vector and purely reactive flooding schemes, and with a response time as low as 10% of a flooding route query delay. zygmunt j. haas marc r. pearlman the journal review column ian parberry packet voice communications over pc based local area networks this paper presents actual implementations of packet voice communication systems over two types of pc based local area networks. one is a token-passing ring network and the other is an ethernet network. the system configuration, system operation and system performance analysis are described for both networks. a formula for the maximum allowable number of active voice stations is presented for both systems. the last part of the paper describes a proposed design for a distributed packet voice communications protocol. the protocol presented deals with the higher levels of the communication system. the purpose of the protocol is to establish and maintain a telephone conversation between two users by using the underlying network services. eluzor friedman chaim ziegler requirements for optimal execution of loops with tests both the efficient execution of branch intensive code and knowing the bounds on same are important issues in computing in general and supercomputing in particular. in prior work, it has been suggested, implied, or left as a possible maximum, that the hardware needed to execute code with branches optimally, i.e., oracular performance, is exponentially dependent on the total number of dynamic branches to be executed, this number of branches being proportional at least to the number of iterations of the loop. for classes of code taking at least one cycle per iteration to execute, this is not the case.
for loops containing one test (normally in the form of a boolean recurrence of order 1), it is shown that the hardware necessary varies from exponential to polynomial in the length of the dependency cycle l, while execution time varies from one time cycle per iteration to less than l time cycles per iteration; the variation depends on specific code dependencies. a. uht on reliable message diffusion y. moses g. roth a study of preemptable vs. non-preemptable token reservation access protocols w. timothy strayer optimizing bulk data transfer performance: a packet train model c. song l. h. landweber rapid design and manufacture of wearable computers s. finger m. terk e. subrahamanian c. kasabach f. prinz d. p. siewiorek a. smailagic j. stivoric l. weiss performance modelling of a hslan slotted ring protocol the slotted ring protocol which is evaluated in this paper is suitable for use at very large transmission rates. in terms of modelling it is a multiple cyclic server system. a few approximative analytical models of this protocol are presented and evaluated vs the simulation in this paper. the cyclic server model shows to be the most accurate and usable over a wide range of parameters. a performance analysis based on this model is presented. m. zafirovic- vukotic i. g. m. m. niemegeers status of osi standards a. l. chapin prioritized channel borrowing without locking: a channel sharing strategy for cellular communications hua jiang stephen s. rappaport emulation service protocols for a multimedia testbed emulation d. c. wolfe a methodology for designing communication protocols we propose a compositional technique for designing protocols. the technique involves specifying constraints between the events of the component protocols. a constraint may either require synchronization between certain events of the component protocols or may require inhibiting an event in one protocol on the occurrence of an event in another component protocol. we find both types of constraints useful in composing protocols. we demonstrate the applicability of the technique by deriving several protocols. the technique facilitates modular design and verification. our technique, in conjunction with the sequential composition technique, can be used to design complex protocols. gurdip singh response to the collapsed lan rodney van meter greg finn steve hotz dave dyer an experiment on measuring application performance over the internet calton pu frederick korz robert c. lehman a general class of processor interconnection strategies a new class of general topologies is proposed in this paper for interconnecting a large network of computers in parallel and distributed environment. these structures have been shown to possess small internode distances, fairly low number of links per node, easy message routing and large number of alternate paths that can be used in case of faults in the system. the interconnection is based on a mixed radix number system, presented in this paper. the technique results in a variety of structures for a given number of processors n, depending on the required diameter in the network. a bus oriented structure is also introduced here, based on the same mathematical framework. these structures possess only two i/o ports per processor and are also shown to have small internode distances. laxmi n. bhuyan dharma p. 
agrawal a tcp-friendly rate adjustment protocol for continuous media flows over best effort networks jitendra padhye jim kurose don towsley rajeev koodli the elusive goal of workload characterization allen b. downey dror g. feitelson design of a distributed system support based on a centralized parallel bus claudio kirner eduardo marques vm-based shared memory on low-latency, remote-memory-access networks recent technological advances have produced network interfaces that provide users with very low-latency access to the memory of remote machines. we examine the impact of such networks on the implementation and performance of software dsm. specifically, we compare two dsm systems---cashmere and treadmarks---on a 32-processor dec alpha cluster connected by a memory channel network. both cashmere and treadmarks use virtual memory to maintain coherence on pages, and both use lazy, multi-writer release consistency. the systems differ dramatically, however, in the mechanisms used to track sharing information and to collect and merge concurrent updates to a page, with the result that cashmere communicates much more frequently, and at a much finer grain. our principal conclusion is that low-latency networks make dsm based on fine-grain communication competitive with more coarse-grain approaches, but that further hardware improvements will be needed before such systems can provide consistently superior performance. in our experiments, cashmere scales slightly better than treadmarks for applications with false sharing. at the same time, it is severely constrained by limitations of the current memory channel hardware. in general, performance is better for treadmarks. leonidas kontothanassis galen hunt robert stets nikolaos hardavellas michal cierniak srinivasan parthasarathy wagner meira sandhya dwarkadas michael scott performance evaluation of tcp/rlp protocol stack over cdma wireless link due to the high frame error rate in wireless communication channels, an additional link layer protocol, radio link protocol (rlp), has been introduced in the newly approved data services option standard for wideband spread spectrum digital cellular system. in this paper, we investigate performance issues of a typical code division multiple access (cdma) wireless link using the protocol stack given in the standard. in particular, we focus on the dynamics of the tcp and rlp layers of the protocol stack since the fluctuation of system performance is largely caused by automatic repeat request (arq) mechanisms implemented at these two layers. we compare the network performance of the default parameter setting to those of other possible parameter settings. analytical and simulation results presented in this paper can provide guidance to those attempting to further improve performance of interest. gang bao constructing instruction traces from cache-filtered address traces (citcat) instruction traces are useful tools for studying many aspects of computer systems, but they are difficult to gather without perturbing the systems being traced. in the past, researchers have collected instruction traces through various techniques, including single-stepping, instruction inlining, hardware monitoring, and processor simulation. these approaches, however, fail to produce accurate traces because they interfere with the processor's normal execution. because processors are deterministic machines, their behavior can be predicted if their initial states and external inputs are known.
we have developed a technique, called "citcat," which exploits this fact to generate nearly perfect instruction traces through trace-driven simulation. citcat combines the best features of instruction inlining, hardware monitoring, and processor simulation to produce long, accurate instruction traces without perturbing the system being traced. because citcat instruction traces are computed, rather than stored, this hybrid technique delivers not just accurate traces, but also an extremely efficient trace compression algorithm. charlton d. rose j. kelly flanagan mobile ip debalina ghosh a static and dynamic workload characterization study of the san diego supercomputer center cray x-mp joseph pasquale barbara bittel daniel kraiman a low-complexity multiuser detector for up-link cdma qpsk mobile radio communications e. del re r. fantacci s. morosi g. vivaldi optimizing file transfer response time using the loss-load curve congestion control mechanism carey l. williamson performance analysis of two echo control designs in atm networks zsehong tsai wen-der wang chien-hwa chiou jin-fu chang lung-sing liang self-similarity through high-variability: statistical analysis of ethernet lan traffic at the source level walter willinger murad s. taqqu robert sherman daniel v. wilson optimistic distributed timed cosimulation based on thread simulation model sungjoo yoo kiyoung choi distributing the comparison of dna and protein sequences across heterogeneous supercomputers hugh nicholas grace giras vasiliki hartonas-garmhausen michael kopko christopher maher alexander ropelewski how standards will enable hardware/software co-design mark genoe chris lennard joachim kunkel brian bailey gjalt de jong grant martin kamal hashmi shay ben- chorin anssi haverinnen the intel 8087 numeric data processor this paper describes a new device, the intel®8087 numeric data processor, with unprecedented speed, accuracy and capability. its modified stack architecture and instruction set are explained illustrative examples are included. the 8087, which conforms to the proposed ieee floating-point standard, is a coprocessor in the intel®8086 family. it supports seven data types: three real, three integer and one packed bcd format, and performs all necessary numeric operations from addition to logarithmic and trigonometric functions. john palmer use of instruction set simulators to evaluate the low risc brian short local anchor scheme for reducing location tracking costs in pcns joseph s. m. ho ian f. akyildiz tornet: a local area network tornet is an experimental local area computer network presently being designed and built in the computer group laboratory of the department of electrical engineering at the university of toronto. the network consists of a number of local rings, each attached to a central ring. the local rings employ a variation on the slotted-ring format that uses a limited insertion technique to achieve reasonable response times for character traffic among many devices and small computers. two fixed-length packet formats (one byte or 128 bytes of data) are used on the local rings. only the longer format is used on the central ring which generally provides record level access to shared specialized equipment. the main objective in the local ring design is a low-cost port for inexpensive terminals, sensors, etc. the paper concentrates on the functional characteristics of the local rings, including delay-throughput comparisons with other well-known ring schemes. z. g. vranesic v. c. hamacher w. m. loucks s. g. 
zaky implementing network protocols at user level chandramohan a. thekkath thu d. nguyen evelyn moy edward d. lazowska network and single user performance evaluation of a mobile data system over flat fading transmission channels in this paper the contrasting effects of transmission impairments and capture on both the network and single user performance of a slotted aloha system are investigated in a mobile radio environment, accounting for frequency non- selective random propagation phenomena, and employing the packet error probability in order to define packet losses and capture. with this study we demonstrate that it is possible to generalize in a real propagation context a method previously proposed in literature for evaluating the network behavior in terms of steady-state throughput, backlog and stability in conventional transmission conditions, i.e., when all the transmission channels were error- free and the collisions caused the loss of all the packets involved. moreover, we indicate under which specific constraints on the terminal mobility we can apply this method to analytically predict the single user performance, which we show being an important design parameter. through the numerical results reported we quantitatively point out that in the absence of coding capture increases system stability and moderately improves the overall system throughput and backlog. we also outline the trade-off between an increased capture gain obtained by means of coding and the corresponding system cost in terms of complexity and bandwidth occupancy. furthermore, we demonstrate the unfairness which affects the single user performance and the consequent need for countermeasures in order not to discriminate among users differently located within the network; yet, this last solution is detrimental as regards the positive capture effects. piero castoldi gianni immovilli maria luisa merani ip multicast channels: express support for large-scale single-source applications in the ip multicast model, a set of hosts can be aggregated into a group of hosts with one address, to which any host can send. however, internet tv, distance learning, file distribution and other emerging large- scale multicast applications strain the current realization of this model, which lacks a basis for charging, lacks access control, and is difficult to scale.this paper proposes an extension to ip multicast to support the _channel_ model of multicast and describes a specific realization called _explicitly requested single-source (express)_ multicast. in this model, a multicast _channel_ has exactly one explicitly designated source, and zero or more channel _subscribers_. a single protocol supports both channel subscription and efficient collection of channel information such as subscriber count. we argue that express addresses the aforementioned problems, justifying this multicast service model in the internet. hugh w. holbrook david r. cheriton framework for a taxonomy of fault-tolerance attributes in computer systems a conceptual framework is presented that relates various aspects of fault- tolerance in the context of system structure and architecture. such a framework is an essential first step for the construction of a taxonomy of fault-tolerance. a design methodology for fault-tolerant systems is used as the means to identify and classify the major aspects of fault-tolerance: system pathology, fault detection and recovery algorithms, and methods of modeling and evaluation. 
a computing system is described in terms of four universes of observation and interpretation, ordered in the following sequence: physical, logic, information, and interface, or user's. the description is used to present a classification of faults, i.e., the causes of undesired behavior of computing systems. algirdas avizienis putting it together: getting unwired: the wifi way win treese performance of broadcast and unknown server (bus) in atm lan emulation in this paper, we develop performance models of the broadcast and unknown server (bus) in the lane. the traffic on the bus is divided into two classes: the broadcast and multicast traffic, and the unicast relay flow. the broadcast and multicast traffic is assumed to form a markov modulated poisson process (mmpp). the traffic for a particular unicast relay flow is an mmpp as well. however, the number of active unicast relay flows sojourning on the bus is determined by a tandem queueing system, where the flow arrival process is poisson, the address resolution delay is exponentially distributed, and the connection setup delay is three-stage erlang distributed. the size of data frames in traffic flows is a random variable with three possible values: short, medium, and large. in order to deal with the intractability (i.e., largeness and stiffness) of the underlying markov chain, a hierarchical model is used to decompose the system. with the help of the stochastic petri net package (spnp), a software package for the automated generation and solution of markovian stochastic systems, and the decomposition method, we study the performance of the bus module under different loads. we also investigate the effect of address resolution delay and connection setup delay on the performance of the bus. hairong sun xinyu zang kishor s. trivedi traffic phase effects in packet-switched gateways sally floyd van jacobson optical interconnection technology in the telecommunications network davis h. hartman a testbed for wide area atm research asynchronous transfer mode (atm) has been advocated as the basis of multiservice telecommunications in the next century. however, the public telecommunication technology of the 1990s will be isdn, which presents a circuit switched subscriber interface. an obvious approach to building a wide area atm network is to build it on top of primary rate isdn. in this paper we describe the lessons learned through the design and deployment of such a network. d. l. tennenhouse i. m. leslie analysis and simulation of multiplexed single-bus networks with and without buffering jose m. llaberia grino mateo valero cortes enrique herrada lillo jesus labarta mancho a general multi-microprocessor interconnection mechanism for non-numeric processing mmps interconnection problems are discussed in terms of a single time-shared bus. the performance drawbacks usually associated with the single bus alternative are attributed to the high bus utilization at the basic building block level. the pended transaction bus protocol is presented as a general solution to such utilizations. such a bus is developed to support more than 50 processors without severe contention. the basic protocol of the mc68000 as a current generation microprocessor is investigated, and shown ineffective for true multi-microprocessor systems. hoo-min d. toong svein o. strommen earl r. goodrich
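the multi-microprocessor interconnection abstract above argues that a single time-shared bus only scales when a pended (split) transaction protocol is used, so that the bus is released between a request and its reply. the sketch below illustrates the tag-matching bookkeeping such a protocol implies; the tag width and table size are assumptions for illustration, not figures from the paper.

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_PENDING 16          /* illustrative number of outstanding requests */

/* one outstanding (pended) bus transaction: the requesting processor
   releases the bus after issuing this and continues other work */
struct pending_req {
    bool     in_use;
    uint8_t  tag;               /* carried on the bus with request and reply */
    uint8_t  requester;         /* processor id that issued the request      */
    uint32_t address;           /* global memory address being accessed      */
};

static struct pending_req table[MAX_PENDING];

/* issue a request: allocate a tag, record it, then relinquish the bus */
int issue_request(uint8_t requester, uint32_t address)
{
    for (int t = 0; t < MAX_PENDING; t++) {
        if (!table[t].in_use) {
            table[t] = (struct pending_req){ true, (uint8_t)t, requester, address };
            return t;           /* tag travels with the bus request */
        }
    }
    return -1;                  /* all tags busy: requester must retry */
}

/* a reply arrives on the bus carrying the same tag; match and complete */
bool complete_reply(uint8_t tag, uint8_t *requester_out)
{
    if (tag >= MAX_PENDING || !table[tag].in_use)
        return false;
    *requester_out = table[tag].requester;
    table[tag].in_use = false;  /* tag can be reused for a new transaction */
    return true;
}
```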
the scheme uses a minimal amount of feedback from the network to the users, who adjust the amount of traffic allowed into the network. the routers in the network detect congestion and set a congestion- indication bit on packets flowing in the forward direction. the congestion indication is communicated back to the users through the transport- level acknowledgment. the scheme is distributed, adapts to the dynamic state of the network, converges to the optimal operating point, is quite simple to implement, and has low overhead. the scheme maintains fairness in service provided to multiple sources. this paper presents the scheme and the analysis that went into the choice of the various decision mechanisms. we also address the performance of the scheme under transient changes in the network and pathological overload conditions. k. k. ramakrishnan r. jain retrospective on high-level language computer architecture high-level language computers (hllc) have attracted interest in the architectural and programming community during the last 15 years; proposals have been made for machines directed towards the execution of various languages such as algol,1,2 apl,3,4,5 basic,6,7 cobol,8,9 fortran,10,ll lisp,12,13 pascal,14 pl/i,15,16,17 snobol,18,19 and a host of specialized languages. though numerous designs have been proposed, only a handful of high- level language computers have actually been implemented.4,7,9,20,21 in examining the goals and successes of high-level language computers, the authors have found that most designs suffer from fundamental problems stemming from a misunderstanding of the issues involved in the design, use, and implementation of cost-effective computer systems. it is the intent of this paper to identify and discuss several issues applicable to high-level language computer architecture, to provide a more concrete definition of high-level language computers, and to suggest a direction for high-level language computer architectures of the future. david r. ditzel david a. patterson remote operations across a network of small computers this paper discusses the design of a remote operation call (roc) mechanism. rocs are a generalisation of the remote procedure call concept. they provide for a wider variety of remote calls, such as asynchronous, directed and multicast calls. an implementation of rocs on a network of personal computers is also described. brent nordin ian a. macleod t. patrick martin perspective on local area networks this note summarizes current status of local area network offerings, local area network standardization activities, and growth projections for the lan market during 1982-90. deepinder p. sidhu network design and improvement h. noltemeier h.-c. wirth s. o. krumke transmission facilities for computer communications a. g. fraser p. s. henry next century challenges: radioactive networks vanu bose david wetherall john guttag intersystem location update and paging schemes for multitier wireless networks global wireless networks enable mobile users to communicate regardless of their locations. one of the most important issues is location management in a highly dynamic environment because mobile users may roam between different wireless networks, network operators, and geographical regions. in this paper, a location tracking mechanism is introduced, which consists of intersystem location updates using the concept of boundary location area (bla) and paging using the concept of boundary location register (blr). 
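as a rough illustration of the binary feedback scheme of ramakrishnan and jain summarized above, the sketch below shows one plausible source-side window adjustment driven by the congestion-indication bit echoed in the transport-level acknowledgments; the 50% threshold and the increase/decrease constants are illustrative assumptions rather than values taken from the text.

    # minimal sketch (python): adjust the send window from the echoed congestion bits.
    def adjust_window(window, echoed_bits):
        """echoed_bits: one boolean per acked packet, True if its congestion bit was set."""
        if echoed_bits and sum(echoed_bits) / len(echoed_bits) >= 0.5:
            return max(1.0, window * 0.875)   # multiplicative decrease under congestion
        return window + 1.0                   # additive increase otherwise

    # one decision cycle over the acknowledgments of the previous window
    print(adjust_window(8.0, [True, True, False, True]))   # -> 7.0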
the bla is determined by a dynamic location update policy in which the velocity and the quality of service (qos) are taken into account on a per-user basis. the blr is used to maintain the records of mobile users crossing the boundary of networks. this mechanism not only reduces location tracking costs but also significantly decreases call loss rates and average paging delays. the performance evaluation of the proposed schemes is provided to demonstrate their effectiveness in multitier wireless networks. wenye wang ian f. akyildiz fault-tolerant task management and load re-distribution on massively parallel hypercube systems i. ahmad a. ghafoor interface-based design james a. rowson alberto sangiovanni-vincentelli the use of connectionless network layer protocols over fddi networks dave katz monotonic evolution: an alternative to induction variable substitution for dependence analysis we present a new approach to dependence testing in the presence of induction variables. instead of looking for closed form expressions, our method computes _monotonic evolution_ which captures the direction in which the value of a variable changes. this information is then used in the dependence test to help determine whether array references are dependence-free. under this scheme, closed form computation and induction variable substitution can be delayed until after the dependence test and be performed on-demand. to improve computative efficiency, we also propose an optimized (non-iterative) data-flow algorithm to compute evolution. experimental results show that dependence tests based on evolution information matches the accuracy of that based on closed-form computation (implemented in polaris), and when no closed form expressions can be calculated, our method is more accurate than that of polaris. peng wu albert cohen jay hoeflinger david padua alternative specification and verification of a periodic state exchange protocol andrás l. oláh sonia m. heemstra de groot fault detection with multiple observers clark wang mischa schwartz network locality at the scale of processes packets on a lan can be viewed as a series of references to and from the objects they address. the amount of locality in this reference stream may be critical to the efficiency of network implementations, if the locality can be exploited through caching or scheduling mechanisms. most previous studies have treated network locality with an addressing granularity of networks or individual hosts. this paper describes some experiments tracing locality at a finer grain, looking at references to individual processes, and with fine- grained time resolution. observations of typical lans show high per-process locality; that is, packets to a host usually arrive for the process that most recently sent a packet, and often with little intervening delay. jeffrey c. mogul establishment communication systems: lan's or pabx's - which is better? in the sixties and early seventies, major activities in networking were focused on private and public data networks using either circuit and packet- switching technology or integrating both switching methods into a single network. in this time frame, manufacturers of main frames defined and implemented network architectures, iso started work on the reference model of the open-system interconnect architecture, and ccitt recommended the x-series of interfaces. 
local-area communication networks (lan's) represent a comparatively new field of activity which can be viewed as an extension to data networks for making high-speed packet-switching services available to the in-house domain. currently, much research and development work is being pursued in this field, both at universities and in industry. the term local refers to communication on the users' premises, i.e., within a building or among a cluster of buildings. local data and voice communication are not new. a typical example of a system widely used today for local data communication between hosts and display terminals is a star configuration where the terminals are attached to control units tightly coupled to a processor via i/o channels. the necessary terminal control functions are provided in the control unit and shared among a set of terminals. the most widely used local communication system is the private automatic branch exchange (pabx). most pabx's installed today are optimised for real-time voice, and use analog technology. k. kummerle "whither massive parallelism?" howard jay siegel energy efficiency of tcp in a local wireless environment the focus of this paper is to analyze the energy consumption performance of various versions of tcp, namely, tahoe, reno and newreno, for bulk data transfer in an environment where channel errors are correlated. we investigate the performance of a single wireless tcp connection by modeling the correlated packet loss/error process (e.g., as induced by a multipath fading channel) as a first-order markov chain. based on a unified analytical approach, we compute the throughput and energy performance of various versions of tcp. the main findings of this study are that (1) error correlations significantly affect the energy performance of the various versions of tcp (with analogous conclusions for throughput), and in particular they result in considerably better performance for tahoe and newreno than iid errors, and (2) the congestion control mechanism implemented by tcp does a good job at saving energy as well, by backing off and idling during error bursts. an interesting conclusion is that, unlike throughput, the energy efficiency metric may be very sensitive to the tcp version used and to the choice of the protocol parameters, so that large gains appear possible. michele zorzi ramesh r. rao adaptive protocols for information dissemination in wireless sensor networks wendi rabiner heinzelman joanna kulik hari balakrishnan introduction to routing in multicomputer networks farooq ashraf mostafa abd-el-barr khalid al-tawil a grammar-based methodology for protocol specification and implementation a new methodology for specifying and implementing communication protocols is presented. this methodology is based on a formalism called "real-time asynchronous grammars" (rtag), which uses a syntax similar to that of attribute grammars to specify allowable message sequences. in addition rtag provides mechanisms for specifying data-dependent protocol activities, real-time constraints, and concurrent activities within a protocol entity. rtag encourages a top-down approach to protocol design that can be of significant benefit in expressing and reasoning about highly complex protocols. as an example, an rtag specification is given for part of the class 4 iso transport protocol (tp-4). because rtag allows protocols to be specified at a highly detailed level, major parts of an implementation can be automatically generated from a specification.
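the tcp energy study of zorzi and rao summarized above models the correlated packet loss/error process as a first-order markov chain; a minimal two-state (good/bad) channel generator of that general kind is sketched below. the transition probabilities are illustrative assumptions, not parameters from the study.

    import random

    # minimal sketch of a two-state markov packet error process (good/bad channel).
    # p_gb: probability of moving good -> bad, p_bg: probability of moving bad -> good.
    def markov_error_trace(n, p_gb=0.05, p_bg=0.3, seed=0):
        rng = random.Random(seed)
        state = "good"
        errors = []
        for _ in range(n):
            errors.append(state == "bad")            # packet is lost while the channel is bad
            if state == "good" and rng.random() < p_gb:
                state = "bad"
            elif state == "bad" and rng.random() < p_bg:
                state = "good"
        return errors

    trace = markov_error_trace(10000)
    print("loss rate:", sum(trace) / len(trace))     # losses arrive in bursts, unlike iid drops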
an rtag parser can be written which, when combined with an rtag specification of a protocol and a set of interface and utility routines, constitutes an implementation of the protocol. to demonstrate the viability of rtag for implementation generation, an rtag parser has been integrated into the kernel of the 4.2 bsd unix operating system, and has been used in conjunction with the rtag tp-4 specification to obtain an rtag-based tp-4 implementation in the dod internet domain. david p. anderson lawrence h. landweber design of a highly reliable cube-connected cycles architecture nian-feng tzeng tcp-peach: a new congestion control scheme for satellite ip networks current tcp protocols have lower throughput performance in satellite networks mainly due to the effects of long propagation delays and high link error rates. in this paper, a new congestion control scheme called tcp-peach is introduced for satellite networks. tcp-peach is composed of two new algorithms, namely sudden start and rapid recovery, as well as the two traditional tcp algorithms, congestion avoidance and fast retransmit. the new algorithms are based on the novel concept of using dummy segments to probe the availability of network resources without carrying any new information to the sender. dummy segments are treated as low-priority segments and accordingly they do not affect the delivery of actual data traffic. simulation experiments show that tcp-peach outperforms other tcp schemes for satellite networks in terms of goodput. it also provides a fair share of network resources. ian f. akyildiz giacomo morabito sergio palazzo adaptive power management a hierarchical/distributed system the automatic control of electrical system (aces) for aircraft shipboard applications was first demonstrated at the westinghouse aerospace electrical division in 1970 [4]. this system was implemented on a single computer which controlled the hardware directly with discrete signals. presently hierarchical and distributed computer architectures are being evaluated [3, 14], to perform the aces task. westinghouse is pursuing a combined hierarchical/distributed architecture for its adaptive power management (apm) system [5]. the hierarchical network performs system control, conflict resolution, and top level resource monitoring and adaption/reconfiguration. the distributed network performs the system input/output functions, all direct control and coordination of the peripheral hardware and the adaption/reconfiguration of their resource/control tables (see figure 10). in addition, apm complies with the air force's requirement to use the standard dais an/ayk-15a processor [10, 11]; the standard higher order programming language as defined in mil-std-1589 [12], and interface and data bus multiplex system standard mil-std-1553b [8, 9]. william f. honey the dawn of the "stupid network" david s. isenberg admission control with priorities: approaches for multi-rate wireless system priority-based link-bandwidth partitioning is required to support wireless multimedia services, having diverse qos (delay, throughput) requirements, in mobile ad hoc networks with multimedia nodes. a new class of service disciplines, termed "batch and prioritize" or bp admission control (ac), is proposed. the bp algorithms use the delay tolerance of applications to batch requests in time slots. bandwidth assignment is made either at the end of the slot, or during the slot, on a priority basis.
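a minimal sketch of the "batch and prioritize" idea described above: requests arriving during a slot are batched, ordered by priority when the slot closes, and admitted while link bandwidth remains. the request format, the capacity value and the rule that a lower number means higher priority are assumptions made only for illustration.

    # minimal sketch: admit batched requests at the end of a slot, highest priority first.
    def admit_batch(requests, capacity):
        """requests: list of (priority, bandwidth) pairs; lower priority number = more urgent."""
        admitted, rejected = [], []
        for prio, bw in sorted(requests, key=lambda r: r[0]):
            if bw <= capacity:
                capacity -= bw
                admitted.append((prio, bw))
            else:
                rejected.append((prio, bw))          # could be re-batched into a later slot
        return admitted, rejected

    print(admit_batch([(2, 4.0), (1, 3.0), (3, 5.0)], capacity=8.0))
    # -> ([(1, 3.0), (2, 4.0)], [(3, 5.0)])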
analytical and simulation models are developed to quantify the performance of the bp schemes. the results are compared with those obtained for a first-come- first-served (fcfs) service discipline. the class of bp schemes trade off the delay and loss tolerance of applications to improve the net carried traffic on the link. further, such schemes enable an easy implementation for adaptive prioritization, where the degree of precedence given to an application varies with offered load and the link capacity. deepak ayyagari anthony ephremides ubiquitous devices united: enabling distributed computing through mobile code kjetil jacobsen dag johansen log-based receiver-reliable multicast for distributed interactive simulation reliable multicast communication is important in large-scale distributed applications. for example, reliable multicast is used to transmit terrain and environmental updates in distributed simulations. to date, proposed protocols have not supported these applications' requirements, which include wide-area data distribution, low-latency packet loss detection and recovery, and minimal data and management over-head within fine-grained multicast groups, each containing a single data source.in this paper, we introduce the notion of _log-based receiver-reliable multicast_ (lbrm) communication, and we describe and evaluate a collection of log-based receiver reliable multicast optimizations that provide an efficient, scalable protocol for high- performance simulation applications. we argue that these techniques provide value to a broader range of applications and that the receiver-reliable model is an appropriate one for communication in general. hugh w. holbrook sandeep k. singhal david r. cheriton a high performance broadcast file transfer protocol this paper describes a broadcast bulk file transfer protocol and presents its performance characteristics. the protocol is for bulk file distribution over a single satellite channel. the transmitting site and multiple receiver sites share channel access time to send data and acknowledgements. the protocol performance characteristics are investigated using a simulation model that includes modelling of the uplink and downlink error processes at both the transmitter and receivers. the results obtained show that this protocol offers efficient reliable bulk file transfer to multiple receivers using a single satellite channel. j. s. j. daka a. j. waters distributed code assignments for cdma packet radio network limin hu a network performance tool for grid environments craig a. lee james stepanek rich wolski carl kesselman ian foster a power metric for mobile systems t. martin d. siewiorek on the self-similar nature of ethernet traffic we demonstrate that ethernet local area network (lan) traffic is statistically _self- similar_, that none of the commonly used traffic models is able to capture this fractal behavior, and that such behavior has serious implications for the design, control, and analysis of high-speed, cell-based networks. 
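one common way to probe for the self-similarity discussed above is the aggregated-variance (variance-time) method: the variance of the m-aggregated series decays roughly like m^(2h-2), so a log-log slope flatter than -1 points to a hurst parameter h above 0.5. the sketch below is a bare-bones version of that estimator under those textbook assumptions, not the statistical machinery used on the ethernet traces.

    import math, random, statistics

    # minimal sketch of the aggregated-variance estimate of the hurst parameter h.
    def hurst_aggregated_variance(series, block_sizes=(1, 2, 4, 8, 16, 32)):
        xs, ys = [], []
        for m in block_sizes:
            blocks = [sum(series[i:i + m]) / m
                      for i in range(0, len(series) - m + 1, m)]
            if len(blocks) > 1:
                xs.append(math.log(m))
                ys.append(math.log(statistics.variance(blocks)))
        # least-squares slope of log(variance) versus log(m); then h = 1 + slope / 2
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
                 / sum((x - mean_x) ** 2 for x in xs))
        return 1.0 + slope / 2.0

    print(hurst_aggregated_variance([random.random() for _ in range(4096)]))  # ~0.5 for iid data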
intuitively, the critical characteristic of this self-similar traffic is that there is no natural length of a "burst": at every time scale ranging from a few milliseconds to minutes and hours, similar-looking traffic bursts are evident; we find that aggregating streams of such traffic typically intensifies the self-similarity ("burstiness") instead of smoothing it. our conclusions are supported by a rigorous statistical analysis of hundreds of millions of high quality ethernet traffic measurements collected between 1989 and 1992, coupled with a discussion of the underlying mathematical and statistical properties of self-similarity and their relationship with actual network behavior. we also consider some implications for congestion control in high-bandwidth networks and present traffic models based on self-similar stochastic processes that are simple, accurate, and realistic for aggregate traffic. will e. leland walter willinger murad s. taqqu daniel v. wilson lossless handover for wireless atm håkan mitts harri hansen jukka immonen simo vekkolainen analysis of techniques to improve protocol processing latency this paper describes several techniques designed to improve protocol latency, and reports on their effectiveness when measured on a modern risc machine employing the dec alpha processor. we found that the memory system---which has long been known to dominate network throughput---is also a key factor in protocol latency. as a result, improving instruction cache effectiveness can greatly reduce protocol processing overheads. an important metric in this context is the _memory cycles per instruction_ (mcpi), which is the average number of cycles that an instruction stalls waiting for a memory access to complete. the techniques presented in this paper reduce the mcpi by a factor of 1.35 to 5.8. in analyzing the effectiveness of the techniques, we also present a detailed study of the protocol processing behavior of two protocol stacks---tcp/ip and rpc---on a modern risc processor. david mosberger larry l. peterson patrick g. bridges sean o'malley on the relevance of long-range dependence in network traffic matthias grossglauser jean-chrysostome bolot on "hot spot" contention rosanna lee the effect of channel-exit protocols on the performance of finite population random-access systems random-access systems (ras) for collision-type channels have been studied extensively under the assumption of an infinite population which generates a poisson arrival process. if the population is finite and if the (practically desirable) free-access channel-access protocol is used, then it is shown that the specification of a channel-exit protocol is crucial for the stability and the fairness of the ras. free-exit and blocked-exit protocols are analyzed and it is concluded that the p-persistent blocked-exit protocol provides the mechanisms to assure stability and fairness for a wide range of arrival process models. peter mathys boi v. faltings performance evaluation of a wireless hierarchical data dissemination system qinglong hu dik lun lee wang-chien lee performance analysis of cellular mobile communication networks supporting multimedia services this paper illustrates the development of an analytical model for a communication network providing integrated services to a population of mobile users, and presents performance results to both validate the analytical approach, and assess the quality of the services offered to the end users.
the analytical model is based on continuous-time multidimensional birth-death processes, and is focused on just one of the cells in the network. the cellular system is assumed to provide three classes of service: the basic voice service, a data service with bit rate higher than the voice service, and a multimedia service with one voice and one data component. in order to improve the overall network performance, some channels can be reserved for handovers, and multimedia calls that cannot complete a handover are decoupled, by transferring to the target cell only the voice component and suspending the data connection until a sufficient number of channels become free. numerical results demonstrate the accuracy of the approximate model, as well as the effectiveness of the newly proposed multimedia call decoupling approach. m. ajmone marsan s. marano c. mastroianni m. meo conferences jennifer bruer interference analysis in fixed service microwave links due to overlay of broadband ssds-cdma wireless local loop system in this paper the results of the interference analysis in fs-ml due to new wll systems with broadband ssds-cdma techniques, sharing the same frequency band, are evaluated. effects of wll systems are expressed through the interference noise power at the fdm/fm receiver and ber at the dml receiver output, respectively. the results obtained show that in the typical working conditions with careful cell planning, interference can be neglected, and in most situations wll overlay would not cause excessive interference in the fs-ml. this conclusion is of great importance for the development of telecommunication systems, especially in rural areas, because it is possible to develop telecommunication systems fast and economically, without any additional frequency demands and under favourable economical conditions. miroslav l. dukic marko b. babovic mark iiifp hypercube concurrent processor architecture the mark iiifp hypercube is a new generation of hypercube concurrent processor system developed at jpl/caltech, with peak performance of 5 mips, 14 mflops per node, and a peak communication rate of 6 mbytes per second. each node utilizes two motorola mc68020 microprocessors, an mc68882 scalar floating-point coprocessor, and a weitek 8000 floating-point chip set. one of the mc68020 processors serves as the application and computational processor, the other is dedicated to communication. the three processors are interconnected through a common system bus and share 4 mbytes of dynamic memory. each processor has its own fast local memory, which increases parallelism, minimizes shared memory referencing, reduces memory contention and improves system performance. j. tuazon j. peterson m. pniel software fault isolation in wide area networks the problem of real-time detection and isolation of errors in distributed software systems operating in a wide-area networked environment is considered. the approach presented combines the results of static software analysis with dynamic event-driven monitoring. static software analysis is used to generate a model of the distributed system. the model describes all possible executions of the processes composing the distributed system. the event-driven monitoring algorithm upon detecting an erroneous event uses the model to isolate the distributed software process states causing the fault.
because this approach does not require the use of the network for fault isolation, it is ideal for use in the low-bandwidth, high-latency communications environments characterizing wide-area networks. dinesh gambhir ivan frish michael post quality of service for wide area clusters colin allison computer system for processing of the requests to the 144 phone number milen lukanchevski nikolay kostadinov hovanes avakian modeling communication pipeline latency randolph y. wang arvind krishnamurthy richard p. martin thomas e. anderson david e. culler realizing fault resilience in web-server cluster today, it is absolutely critical for a successful internet service to be up 100 percent of the time. server clustering is the most promising approach to meet this requirement. however, the existing web server-clustering solutions can merely provide high availability derived from their redundant nature, but offer no guarantee about fault resilience for the service. in this paper, we address this problem by implementing an innovative mechanism which enables a web request to be smoothly migrated and recovered on another working node in the presence of server failure. we will show that request migration and recovery can be efficiently achieved in a user-transparent manner. the achieved capability of fault resilience is important and essential for a variety of critical services (e.g., e-commerce), which are increasingly widely used. our approach takes an important step toward providing a highly reliable web service. chu-sing yang mon-yen luo a simd computer for multigrid methods this paper describes briefly a project for building a highly parallel computer for simulating physical phenomena described by partial differential equations. the special-purpose computer implements multigrid algorithms for solving partial differential equations. we show that the multigrid machine is a good alternative to commercial microprocessor-based parallel computers for running grid-based applications. i. martín f. tirado triad david cheriton graceful preemption for multi-link link layer protocols this paper discusses a powerful priority mechanism that can be used on a physical communication medium supporting multiple, independent, logical data links. this scheme allows a high priority message to immediately interrupt a lower priority message which is in the process of being transmitted, and yet will not require the retransmission of the first part of the lower priority message; hence, the term graceful preemption. graceful preemption may be extended to furnish multiple priority levels similar to existing processor interrupt structures which implement multiple levels. this protocol may serve a single processor which handles several different processes, or several independent stations that share a common bus. hence, priority levels can be enforced without the need for a centralized bus "master" or "arbitrator" unit. as an example, the application of graceful preemption to the integrated services digital network (isdn) link access procedure, lapd, is discussed. m. wm. beckner t. j. j. starr books michele tepper jay blickstein noncoherent multistage multiuser detection of m-ary orthogonal signals in recent years, a great deal of research has been performed on methods of alleviating the performance degradations suffered by code division multiple access (cdma) systems due to multiple access interference. in this paper, we consider a multistage detector for noncoherent cdma using m-ary orthogonal signals.
using a decorrelating detector as its first stage, the detector iteratively estimates the multiple access interference affecting the desired signal, subtracts the estimated interference, and forms symbol estimates for each of k users. through numerical examples, the bit error performance of the proposed detector is demonstrated to be superior to that of previous detectors for the same signalling scheme. christopher j. hegarty branimir r. vojcic protocol validation in complex systems protocol validation and verification techniques have been applied successfully for a number of years to a wide range of protocols. the increasing complexity of our communications systems requires us to examine existing techniques and make a realistic assessment of which techniques will be applicable to the complex systems of the future. we believe that validation techniques based on an exhaustive reachability analysis will not be effective. sampling techniques, such as executing a random walk through the reachable state space, are effective in complex systems, since most protocol errors occur in many different states of the system. more than 10 years have elapsed since the first automated validation techniques were first applied to communications protocols [1,2]. in the meantime, considerable advances have been made in validation technology, but more significantly, the scale and nature of the validation problem have changed. the communications systems that we need to validate today are significantly more complex than those of ten years ago. our methods must evolve in order to address the current challenge. this paper is concerned primarily with the nature of the validation process, and the results that we might reasonably expect to achieve when validating protocols in complex communications systems. c. h. west multiple-access protocols and time-constrained communication james f. kurose mischa schwartz yechiam yemini the leopard workstation project peter j. ashenden chris j. barter chris d. marlin strategies for optimal capacity allocations in dama satellite communication systems kevin wong nicolas d. georganas a yellow-pages service for a local-area network we introduce a yellow-pages service that maps service names into server addresses. the service is novel in that it associates a set of attributes with each server. clients specify the attributes the server should possess when requesting a service and the yellow-pages service determines what servers satisfy the request. in addition to describing the implementation of the yellow-pages service within a local-area network, we show how the service can be integrated with the available internet communication protocols to enable clients from throughout the internet to access local servers. l. l. peterson hll architectures: pitfalls and predilections an examination of high- level language architectures reveals that some design considerations are based on inaccurate premises. this paper discusses some hll architecture design misconceptions. some features that should be considered in the design of hll architectures are presented. based on the hll architecture design considerations, a methodology for quantifying architecture is proposed. krishna kavi boumediene belkhouche evelyn bullard lois delcambre stephen nemecek achieving independence efficiently and securely rosario gennaro the not-so-accidental holist peter g. 
neumann indulgent algorithms (preliminary version) informally, an indulgent algorithm is a distributed algorithm that tolerates unreliable failure detection: the algorithm is indulgent towards its failure detector. this paper formally characterises such algorithms and states some of their interesting features. we show that indulgent algorithms are inherently safe and uniform. we also state impossibility results for indulgent solutions to divergent problems like consensus, and failure-sensitive problems like non- blocking atomic commit and terminating reliable broadcast. rachid guerraoui the distributed v kernel and its performance for diskless workstations the distributed v kernel is a message-oriented kernel that provides uniform local and network interprocess communication. it is primarily being used in an environment of diskless workstations connected by a high-speed local network to a set of file servers. we describe a performance evaluation of the kernel, with particular emphasis on the cost of network file access. our results show that over a local network: 1\\. diskless workstations can access remote files with minimal performance penalty. 2\\. the v message facility can be used to access remote files at comparable cost to any well-tuned specialized file access protocol. we conclude that it is feasible to build a distributed system with all network communication using the v message facility even when most of the network nodes have no secondary storage. david r. cheriton willy zwaenepoel alpha transport the control of the resources of a packet switching network is a very difficult problem to solve.1 past attempts to solve this problem have been handicapped primarily by lack of an adequate means to quantify the user demands and then assign distributed resources to the demands in a globally coherent way. this paper presents a new approach to control the resources of packet switching networks. the key to this control is a new network transport scheme called alpha transport. alpha transport combines the tdm circuit switching concept and packet switching technology. it is easy to implement and particularly cost effective to carry the subnet traffic of meshed packet switching networks. alpha transport is robust, enables high utilization of the subnet bandwidth, reduces the subnet delay, and provides fairness. bahram enshayan a two-symbols/branch mbcm scheme based on 8-psk constellation under fading channels a two-symbols/branch scheme of multiple block coded modulation (mbcm) is investigated under fading channels. compared with a conventional scheme of block coded modulation (bcm), this two-symbols/branch mbcm scheme greatly increases the minimum squared euclidean distance (msed), minimum symbol distance (msd), and minimum product distance (mpd). these three distances determine the bit-error-rate (ber) performance under either gaussian or fading channels. a pilot symbol assisted fading compensation, as well as the techniques of symbol interleaving and branch weighting, are employed to combat the effect of channel fading. through computer simulations, it is shown that large coding gains are obtained under both rayleigh and rician fading channels. huan-bang li tetsushi ikegami the design and analysis of an atm multicast switch with adaptive traffic controller jae w. byun tony t. lee strategies for decentralized resource management decentralized resource management in distributed systems has become more practical with the availability of communication facilities that support multicasting. 
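the distances that drive the mbcm design discussed a few entries above can be made concrete for an uncoded, unit-energy 8-psk constellation: its minimum squared euclidean distance is 2 - 2cos(pi/4), roughly 0.586, and block coding across branches is what enlarges the effective distance. the sketch below is generic textbook arithmetic, not the coded distances reported in that work.

    import cmath, math

    # unit-energy 8-psk constellation points on the complex plane
    points = [cmath.exp(1j * 2 * math.pi * k / 8) for k in range(8)]

    # minimum squared euclidean distance over all distinct pairs of points
    msed = min(abs(a - b) ** 2
               for i, a in enumerate(points)
               for b in points[i + 1:])
    print(msed)                      # ~0.586, i.e. 2 - 2*cos(pi/4)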
in this paper we present several example solutions for managing resources in a decentralized fashion, using multicasting facilities. we review the properties of these solutions in terms of scalability, fault tolerance and efficiency. we conclude that decentralized solutions compare favorably to centralized solutions with respect to all three criteria. m. stumm emerging mobile and wireless networks upkar varshney ron vetter toward a dataflow/von neumann hybrid architecture dataflow architectures offer the ability to trade program level parallelism in order to overcome machine level latency. dataflow further offers a uniform synchronization paradigm, representing one end of a spectrum wherein the unit of scheduling is a single instruction. at the opposite extreme are the von neumann architectures which schedule on a task, or process, basis. this paper examines the spectrum by proposing a new architecture which is a hybrid of dataflow and von neumann organizations. the analysis attempts to discover those features of the dataflow architecture, lacking in a von neumann machine, which are essential for tolerating latency and synchronization costs. these features are captured in the concept of a parallel machine language which can be grafted on top of an otherwise traditional von neumann base. in such an architecture, the units of scheduling, called scheduling quanta, are bound at compile time rather than at instruction set design time. the parallel machine language supports this notion via a large synchronization name space. a prototypical architecture is described, and results of simulation studies are presented. a comparison is made between the mit tagged-token dataflow machine and the subject machine which presents a model for understanding the cost of synchronization in a parallel environment. r. a. iannucci gip: an infrastructure for mobile intranets development the gprs and umts specifications define the procedures supporting the mobility and the data sessions of a mobile user moving within the area of the corresponding plmns. for the case, though, of mobile users working in a group, using a plmn infrastructure, the aforementioned networks foresee no special treatment. however, services tightly related to a specific geographic area, like for example security or surveillance services, could be implemented by a group of collaborating mobile nodes forming a mobile intranet that uses the facilities of a plmn. in this paper, after a description of what the specifications provide, methods are proposed for the deployment of intranets over the gprs or the umts infrastructure. to this aim, the concept of the gip is introduced, regarding a frame of interconnected sgsns, within the gprs/umts environment. this frame supports, without the intervention of the ggsn, the mobility of a number of mobile nodes belonging to the same group, as well as the data traffic between them. moreover, the additional tasks to be undertaken by the sgsns forming the frame are described. constantinos f. grecas sotirios i. maniatis iakovos s. venieris throughput-delay characteristics and stability considerations of the access channel in a mobile telephone system in this paper a performance study of the access channel in a cellular mobile telephone system /1/ is presented. the method used in the cellular system for multiplexing the population of mobile terminals over the access channel is a hybrid between the methods known as csma/cd and btma.
in the paper we extend an analysis of csma/cd to accommodate the function of the particular random multiaccess protocol. results are shown which illustrate the equilibrium channel performance and the approximate stability-throughput-delay tradeoff. finally an estimate of the average message delay is given. bengt stavenow an efficient pipelined dataflow processor architecture this paper demonstrates that the principles of pipelined instruction execution can be effectively applied in dataflow computers, yielding an architecture that avoids the main sources of pipeline gaps during program execution in many conventional processor designs. the new processing element design uses an architecture called the argument-fetch dataflow architecture. it has two parts: a dataflow instruction scheduling unit (disu) and a pipelined instruction processing unit (pipu). the pipu is an instruction processor that uses many conventional techniques to achieve fast pipelined operation. the disu holds the dataflow signal graph of the collection of dataflow instructions allocated to the processing element, and maintains a large pool of enabled instructions available for execution by the pipu. the new architecture provides a basis for achieving high performance for many scientific applications. to show that the realization of an efficient dataflow processing element is feasible, a trial design and fabrication of an enable memory, a key component of the disu, is reported. j. b. dennis g. r. gao location update optimization in personal communication systems mobility tracking is concerned with finding a mobile subscriber (ms) within the area serviced by the wireless network. the two basic operations for tracking an ms, location updating and paging, constitute additional load on the wireless network. the total cost of updating and paging can be minimized by optimally dividing the service area into location registration (lr) areas. there are various factors affecting this cost, including the mobility and call patterns of the individual ms, the shape, size and orientation of the lr area, and the method of searching for the ms within the lr area. based on various mobility patterns of users and network architecture, the design of the lr area is formulated as a combinatorial optimization problem. the objective is to minimize the location update cost subject to a constraint on the size of the lr area. ahmed abutaleb victor o.k. li an approach for interconnecting sna and xns networks interest in computer internetworking has resulted from the proliferation of wide area and local area networks. the ccitt, darpa/dod, and iso/ecma internetworking models, which have become widely accepted for doing this, do not address the pragmatic problem of interconnecting computer networks that are based upon closed-system, vendor-proprietary network architectures. this paper presents an approach for interconnecting private data networks that are based upon ibm's system network architecture (sna) and xerox's network system (xns). paramount to the solution of the sna/xns network interconnection problem is the identification and definition of a number of gateway-provided data transport and application protocol support services, which can be incorporated into a multi-function, multi-level, protocol translation gateway. kenneth o. zoline william p.
lidinsky data networks as cascades: investigating the multifractal nature of internet wan traffic in apparent contrast to the well-documented self-similar (i.e., monofractal) scaling behavior of measured lan traffic, recent studies have suggested that measured tcp/ip and atm wan traffic exhibits more complex scaling behavior, consistent with multifractals. to bring multifractals into the realm of networking, this paper provides a simple construction based on cascades (also known as multiplicative processes) that is motivated by the protocol hierarchy of ip data networks. the cascade framework allows for a plausible physical explanation of the observed multifractal scaling behavior of data traffic and suggests that the underlying multiplicative structure is a traffic invariant for wan traffic that co-exists with self-similarity. in particular, cascades allow us to refine the previously observed self-similar nature of data traffic to account for local irregularities in wan traffic that are typically associated with networking mechanisms operating on small time scales, such as tcp flow control. to validate our approach, we show that recent measurements of internet wan traffic from both an isp and a corporate environment are consistent with the proposed cascade paradigm and hence with multifractality. we rely on wavelet-based time-scale analysis techniques to visualize and to infer the scaling behavior of the traces, both globally and locally. we also discuss and illustrate with some examples how this cascade-based approach to describing data network traffic suggests novel ways for dealing with networking problems and helps in building intuition and physical understanding about the possible implications of multifractality on issues related to network performance analysis. a. feldmann a. c. gilbert w. willinger cbs spark new-england group--from glass box to crystal ball arthur anger service creation and management in active telecom networks marcus brunner bernhard plattner rolf stadler active network vision and reality: lessons from a capsule-based system david wetherall efficient token-based control in rings esteban feuerstein stefano leonardi alberto marchetti-spaccamela nicola santoro improving dqdb behaviour: combining bandwidth balancing and priority mechanism dqdb is a new standard for mans. in this paper we first present the dqdb standard. previous investigations have shown that the combination of bwb and the priority mechanism in a dqdb network does not work correctly. we shall analyze that behaviour and develop some solutions to overcome the problems. the principal ideas of the proposed solutions were developed along the way while focusing on aspects of network management [röthig 91]. they were briefly outlined and published in [seitz 91]. in this paper they shall be explained in detail and simulation results shall show how the various mechanisms affect dqdb behaviour. jurgen rothig scalable multicasting most of the multicast routing protocols for ad hoc networks today are based on shared or source-based trees; however, keeping a routing tree connected for the purpose of data forwarding may lead to a substantial network overhead. a different approach to multicast routing consists of building a shared mesh for each multicast group. in multicast meshes, data packets can be accepted from any router, as opposed to trees where data packets are only accepted from routers with whom a "tree branch" has been established.
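the cascade (multiplicative-process) construction invoked in the wan-traffic entry above can be illustrated with the simplest deterministic binomial cascade: all traffic mass starts on one interval, and each stage splits every interval in two, handing a fraction m to the left half and 1 - m to the right. the split parameter and depth below are illustrative, and realistic multifractal traffic models use random, protocol-motivated multipliers instead.

    # minimal sketch of a deterministic binomial multiplicative cascade.
    def binomial_cascade(depth, m=0.7):
        masses = [1.0]                               # all mass on [0, 1) initially
        for _ in range(depth):
            nxt = []
            for mass in masses:
                nxt.append(mass * m)                 # left child keeps fraction m
                nxt.append(mass * (1.0 - m))         # right child keeps the rest
            masses = nxt
        return masses                                # 2**depth highly uneven "traffic" weights

    weights = binomial_cascade(10)
    print(len(weights), max(weights), min(weights))  # 1024 weights spanning several orders of magnitude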
the difference among multicast routing protocols based on meshes is in the method used to build these structures. some mesh-based protocols require the flooding of sender or receiver announcements over the whole network. this paper presents the core-assisted mesh protocol, which uses meshes for data forwarding, and avoids flooding by generalizing the notion of core-based trees introduced for internet multicasting. group members form the mesh of a group by sending join requests to a set of cores. simulation experiments show that meshes can be used effectively as multicast routing structures without the need for flooding control packets. ewerton l. madruga j. j. garcia-luna-aceves the ibm token-ring - a functional perspective michael willett response to "problems with dce security services" walter tuvell selective eager execution on the polypath architecture control-flow misprediction penalties are a major impediment to high performance in wide-issue superscalar processors. in this paper we present _selective eager execution_ (see), an execution model to overcome mis-speculation penalties by executing both paths after diffident branches. we present the micro-architecture of the _polypath_ processor, which is an extension of an aggressive superscalar, out-of-order architecture. the polypath architecture uses a novel instruction tagging and register renaming mechanism to execute instructions from multiple paths simultaneously in the same processor pipeline, while retaining maximum resource availability for single-path code sequences. results of our execution-driven, pipeline-level simulations show that see can improve performance by as much as 36% for the go benchmark, and an average of 14% on specint95, when compared to a normal superscalar, out-of-order, speculative execution, monopath processor. moreover, our architectural model is both elegant and practical to implement, using a small amount of additional state and control logic. artur klauser abhijit paithankar dirk grunwald adaptive packet marking for maintaining end-to-end throughput in a differentiated-services internet wu-chang feng dilip d. kandlur the making of the powerpc: introduction nasr ullah philip k. brownfield high performance computing grand challenges in simulation adrian m. tentner a micro-vectorprocessor architecture: performance modeling and benchmarking this paper proposes and examines some architectural features suitable for micro-vectorprocessors. due to the i/o-pin bottleneck, micro-vectorprocessors should save the off-chip memory bandwidth by exploiting the on-chip register bandwidth instead. those features include vector-instruction-level multithreading and fifo vector registers. there are three variations of multithreading: periodic, forced, and round-robin. the paper also formulates the performance of micro-vectorprocessors with such architectural features, and then evaluates the performance attainable by those micro-vectorprocessors through software simulation. from the benchmark results, it is found that the vector-instruction-level multithreading and fifo vector registers can improve the performance of micro-vectorprocessors with half the memory bandwidth to a level comparable to that of ones with the full memory bandwidth. furthermore, forced multithreading is found to be tolerant to the large memory access latency.
from these results, the paper concludes that forced multithreading at the vector-instruction level is a good candidate for the architectural features suitable for micro-vectorprocessors. takashi hashimoto kazuaki murakami tetsuo hironaka hiroto yasuura selecting sequence numbers a characteristic of almost all communication protocols is the use of unique numbers to identify individual pieces of data. these identifiers permit error control through acknowledgement and retransmission techniques. usually successive pieces of data are identified with sequential numbers and the identifiers are thus called sequence numbers. this paper discusses techniques for selecting and synchronizing sequence numbers such that no errors will occur if certain network characteristics can be bounded and if adequate data error detection measures are taken. the discussion specifically focuses on the protocol described by cerf and kahn (1), but the ideas are applicable to other similar protocols. raymond s. tomlinson performance comparison of routing protocols under dynamic and static file transfer connections a. udaya shankar cengiz alaettinoglu klaudia dussa-zieger ibrahim matta flow synchronization protocol julio escobar craig partridge debra deutsch reevaluating amdahl's law john l. gustafson modeling cost/performance of a parallel computer simulator babak falsafi david a. wood optimality of greedy power control and variable spreading gain in multi-class cdma mobile networks seong-jun oh kimberly m. wasserman verification of a methodology for designing reliable communication protocols in this paper we present a new methodology for designing reliable communication protocols. this methodology enhances communicating processes with a synchronization mechanism so that they can detect and resolve the errors caused by collisions automatically. the major advantages of this new methodology include: (1) the "state explosion" problem involved in protocol validation is alleviated; and (2) the burden of handling errors due to collisions is removed from the protocol designer. there is a need to verify that this methodology is applicable to arbitrary protocols. in this paper we also discuss the application of a program verification technique to this methodology. huai-an lin ming t. liu charles j. graff performance of atm networks under hybrid arq/fec error control scheme maan a. kousa ahmed k. elhakeem hui yang a scalable, distributed middleware service architecture to support mobile internet applications middleware layers placed between user clients and application servers have been used to perform a variety of functions. in previous work we have used middleware to perform a new capability, application session handoff, using a single middleware server to provide all functionality. however, to improve the scalability of our architecture, we have designed an efficient distributed middleware service layer that properly maintains application session handoff semantics while being able to service a large number of clients. we show that this service layer improves the scalability of general client-to-application server interaction as well as the specific case of application session handoff. we detail protocols involved in performing handoff and analyse an implementation of the architecture that supports the use of a real medical teaching tool. from experimental results it can be seen that our middleware service effectively provides scalability as a response to increased workload.
thomas phan richard guy rajive bagrodia multicast routing in datagram internetworks and extended lans multicasting, the transmission of a packet to a group of hosts, is an important service for improving the efficiency and robustness of distributed systems and applications. although multicast capability is available and widely used in local area networks, when those lans are interconnected by store-and-forward routers, the multicast service is usually not offered across the resulting internetwork. to address this limitation, we specify extensions to two common internetwork routing algorithms---distance-vector routing and link-state routing---to support low-delay datagram multicasting beyond a single lan. we also describe modifications to the single-spanning-tree routing algorithm commonly used by link-layer bridges, to reduce the costs of multicasting in large extended lans. finally, we discuss how the use of multicast scope control and hierarchical multicast routing allows the multicast service to scale up to large internetworks. stephen e. deering david r. cheriton communication synthesis and hw/sw integration for embedded system design guy gogniat michel auguin luc bianco alain pegatoquet a geographically distributed framework for embedded system design and validation the difficulty of embedded system co-design is increasing rapidly due to the increasing complexity of individual parts, the variety of parts available and pressure to use multiple processors to meet performance criteria. validation tools should contain several features in order to keep up with this trend, including the ability to dynamically change detail levels, built-in protection for intellectual property, and support for gradual migration of functionality from a simulation environment to the real hardware. in this paper, we present our approach to the problem which includes a geographically distributed co-simulation framework. this framework is a system of nodes such that each can include either portions of the simulator or real hardware. in support of this, the framework includes a mechanism for maintaining consistent versions of virtual time. ken hines gaetano borriello wireless atm mac performance evaluation, a case study: hiperlan type 1 vs. modified mdr this paper deals with wireless atm and in particular with mac (medium access control) mechanisms. the requirements for wireless atm mac are discussed, and contention-based and tdma/reservation-based mac protocols are compared. the objective is to find out the suitability of current wireless mac schemes for atm interworking, in comparison to new wireless atm mac proposals. two candidate mechanisms, ey-npma used in hiperlan type 1, and a modified mdr protocol, are discussed in more detail and their performance in different traffic scenarios is evaluated through simulations. jouni mikkonen liina nenonen methods for handling structures in data-flow systems j. l. gaudiot ren: a reconfigurable experimental network p. krystosek s. sirazi g. campbell network assisted power control for wireless data the cellular telephone success story prompts the wireless communications community to turn its attention to other information services, many of them in the category of "wireless data" communications. one lesson of cellular telephone network operation is that effective power control is essential to promote system quality and efficiency. in recent work we have applied microeconomic theories to power control taking into account notions of utility and pricing.
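the distance-vector multicast extension in the deering and cheriton entry above builds on the reverse-path forwarding check: a router accepts a multicast packet only if it arrives on the interface the router itself would use to reach the packet's source, and drops it otherwise. the sketch below shows that check in isolation; the routing-table format is an assumption, and the real protocols add pruning, scope control and hierarchy on top.

    # minimal sketch of the reverse-path forwarding (rpf) acceptance check.
    def rpf_forward(src, arrival_iface, unicast_table, all_ifaces):
        """unicast_table maps a source address to the interface used to reach it (assumed format)."""
        if unicast_table.get(src) != arrival_iface:
            return []                                # not on the shortest path back to src: drop
        # otherwise forward on every other interface (flood-and-prune protocols prune later)
        return [i for i in all_ifaces if i != arrival_iface]

    table = {"10.0.0.1": "eth0"}
    print(rpf_forward("10.0.0.1", "eth0", table, ["eth0", "eth1", "eth2"]))  # ['eth1', 'eth2']
    print(rpf_forward("10.0.0.1", "eth1", table, ["eth0", "eth1", "eth2"]))  # []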
our earlier work has shown that this new approach to power control for wireless data performs better than traditional techniques applied for voice signals. however, the operating points of such a strategy result in an unfair equilibrium in that users operate with unequal signal-to-interference ratios. further, the power control algorithms required to achieve such operating points are more complex than the simple signal-to-interference ratio balancing algorithms for voice. in this paper, we introduce a new concept, network assisted power control (napc), that maximizes utilities for users while maintaining equal signal-to-interference ratios for all users. the power control algorithm is easily implemented via signal-to-interference ratio balancing with the assistance of the network that broadcasts the common signal-to-interference ratio target. david goodman narayan mandayam group-based multicast and dynamic membership in wireless networks with incomplete spatial coverage in this paper we examine the problem of group-based multicast communication in the context of mobile computing with wireless communication technology. we propose a protocol in which group members may be mobile computers and such that the group membership may change dynamically. multicasts are delivered in the same order at all group members (totally-ordered multicast). mobile computers are resource-poor devices that communicate with a wired network through a number of spatially limited cells defining wireless links. the spatial coverage provided by wireless links may be either complete or incomplete, which makes the overall system model both general and realistic. the proposed protocol is simple and does not require any hand-off in the wired network upon movements of group members. moreover, there is no part of the protocol requiring that group members do not move during its execution. this feature leads to mobility assumptions that are practical because they involve only the global movement of group members, e.g., assumptions of the form "a group member does not move very fast all the time". alberto bartoli architectural considerations for a new generation of protocols the current generation of protocol architectures, such as tcp/ip or the iso suite, seems successful at meeting the demands of today's networks. however, a number of new requirements have been proposed for the networks of tomorrow, and some innovation in protocol structuring may be necessary. in this paper, we review some key requirements for tomorrow's networks, and propose some architectural principles to structure a new generation of protocols. in particular, this paper identifies two new design principles, application level framing and integrated layer processing. additionally, it identifies the presentation layer as a key aspect of overall protocol performance. d. d. clark d. l. tennenhouse on resource discovery in distributed systems with mobile hosts daniel o. awduche arthur gaylord aura ganz observable clock synchronization extended abstract danny dolev rudiger reischuk ray strong local networking and internetworking in the v-system local networking can be treated as a subset of internetworking for remote terminal access and file transfer. however, a distributed operating system, such as the v-system, uses a local network more as an extended backplane than a fast, miniature long-haul network.
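the signal-to-interference ratio balancing that the network assisted power control entry above relies on is usually written as the iteration p_i <- (gamma_target / sir_i) * p_i, with the network broadcasting the common target gamma. the sketch below runs that standard iteration for a toy two-user link-gain matrix; the gains, noise level, power cap and target are illustrative assumptions only.

    # minimal sketch of distributed sir-balancing power control (foschini-miljanic style).
    def sir(i, powers, gains, noise):
        interference = sum(gains[i][j] * powers[j] for j in range(len(powers)) if j != i)
        return gains[i][i] * powers[i] / (interference + noise)

    def balance(powers, gains, noise, gamma_target, iterations=50):
        for _ in range(iterations):
            powers = [min(1.0, gamma_target / sir(i, powers, gains, noise) * powers[i])
                      for i in range(len(powers))]   # 1.0 is an assumed per-user power cap
        return powers

    gains = [[1.0, 0.1], [0.2, 1.0]]                 # gains[i][j]: transmitter j into receiver i
    p = balance([0.1, 0.1], gains, noise=0.01, gamma_target=3.0)
    print([round(sir(i, p, gains, 0.01), 2) for i in range(2)])   # both users end up near the target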
this paper describes the use of server-based "intelligent gateways" to provide internetworking using standard protocols in conjunction with an efficient light-weight protocol for v ipc on a local network. besides providing good local network performance, this design allows a gateway to act as an access control mechanism, addressing some reliability and security issues that arise with local networks. david r. cheriton performance bounds in communication networks with variable-rate links in most network models for quality of service support, the communication links interconnecting the switches and gateways are assumed to have fixed bandwidth and zero error rate. this assumption of steadiness, especially in a heterogeneous internet-working environment, might be invalid owing to subnetwork multiple-access mechanism, link-level flow/error control, and user mobility. techniques are presented in this paper to characterize and analyze work-conserving communication nodes with varying output rate. in the deterministic approach, the notion of "fluctuation constraint," analogous to the "burstiness constraint" for traffic characterization, is introduced to characterize the node. in the statistical approach, the variable-rate output is modelled as an "exponentially bounded fluctuation" process in a way similar to the "exponentially bounded burstiness" method for traffic modelling. based on these concepts, deterministic and statistical bounds on queue size and packet delay in isolated variable-rate communication server-nodes are derived, including cases of single-input and multiple-input under first-come-first- serve queueing. queue size bounds are shown to be useful for buffer requirement and packet loss probability estimation at individual nodes. our formulations also facilitate the computation of end-to-end performance bounds across a feedforward network of variable-rate server-nodes. several numerical examples of interest are given in the discussion. kam lee a program-driven simulation model of an mimd multiprocessor fredrik dahlgren crusade: hardware/software co-synthesis of dynamically reconfigurable heterogeneous real-time distributed embedded systems bharat p. dave a mobile user location update and paging mechanism under delay constraints a mobile user location management mechanism is introduced that incorporates a distance based location update scheme and a paging mechanism that satisfies predefined delay requirements. an analytical model is developed which captures the mobility and call arrival pattern of a terminal. given the respective costs for location update and terminal paging, the average total location update and terminal paging cost is determined. an iterative algorithm is then used to determine the optimal location update threshold distance that results in the minimum cost. analytical results are also obtained to demonstrate the relative cost incurred by the proposed mechanism under various delay requirements. ian f. akyildiz joseph s. m. ho measuring the scalability of parallel computer systems this paper discusses scalability and outlines a specific approach to measuring the scalability of parallel computer systems. the relationship between scalability and speedup is described. it is shown that a parallel system is scalable for a given algorithm if and only if its speedup is unbounded. a technique is proposed that can be used to help determine whether a candidate model is correct, that is, whether it adequately approximates the system's scalability. 
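the location update and paging entry above is built around a distance-based update threshold; the sketch below shows, under an assumed square-grid cell model and a hypothetical threshold value, when such a terminal would trigger an update.

```python
# schematic distance-based location-update rule; the grid-cell model and the
# threshold value are assumptions for illustration, not the entry's analysis.

THRESHOLD = 3   # update when the terminal is >= 3 cells from its last registration

def cell_distance(a: tuple[int, int], b: tuple[int, int]) -> int:
    """distance in cells on a square grid (chebyshev metric assumed)."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def track(moves: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """return the cells at which the terminal performs a location update."""
    updates = []
    last_registered = moves[0]
    for cell in moves[1:]:
        if cell_distance(cell, last_registered) >= THRESHOLD:
            updates.append(cell)
            last_registered = cell    # the network now pages around this cell
    return updates

if __name__ == "__main__":
    path = [(0, 0), (1, 0), (2, 1), (3, 1), (3, 2), (4, 4), (5, 5)]
    print(track(path))   # -> [(3, 1), (4, 4)]
```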
experimental results illustrate this technique for both a poorly scalable and a very scalable system. j. r. zirbas d. j. reble r. e. vankooten about this issue peter wegner evaluation of pipelined dilated banyan switch architectures for atm networks mayez a. al-mouhamed mohammed kaleemuddin habib yousef a closer look at noahnet noahnet is a robust, highly available, high bandwidth local area network (lan) architecture, in the implementation phase at the university of delaware. noahnet uses a distributed switch-oriented node topology, a flooding approach for message routing, and high bandwidth communication media. the purpose of this paper is to consider some of the important issues in detail such as: the various ways flood control can be implemented; their relative advantages/disadvantages; the functions of a node; and one possible implementation of a node in noahnet. expected performance of the noahnet in comparison to ethernet and token ring networks is also discussed. though noahnet uses more complex structure and protocol, we demonstrate that the overhead of using flooding in noahnet is minimal and that a noahnet node implementation is very simple and inexpensive. the paper concludes that noahnet provides features such as robustness, high availability, high throughput, and high communication band width at almost no additional cost. d j faber g m parulkar preface: special issue on communication architectures and protocols anita k. jones deriving protocol specifications from service specifications g von bochmann r gotzhein analysis of a local-area wireless network to understand better how users take advantage of wireless networks, we examine a twelve-week trace of a building-wide local-area wireless network. we analyze the network for overall user behavior (when and how intensively people use the network and how much they move around), overall network traffic and load characteristics (observed throughput and symmetry of incoming and outgoing traffic), and traffic characteristics from a user point of view (observed mix of applications and number of hosts connected to by users). amongst other results, we find that users are divided into distinct location- based sub- communities, each with its own movement, activity, and usage characteristics. most users exploit the network for web-surfing, session- oriented activities and chat-oriented activities. the high number of chat- oriented activities shows that many users take advantage of the mobile network for synchronous communication with others. in addition to these user-specific results, we find that peak throughput is usually caused by a single user and application. also, while incoming traffic dominates outgoing traffic overall, the opposite tends to be true during periods of peak throughput, implying that significant asymmetry in network capacity could be undesirable for our users. while these results are only valid for this local- area wireless network and user community, we believe that similar environments may exhibit similar behavior and trends. we hope that our observations will contribute to a growing understanding of mobile user behavior. diane tang mary baker a simulator for a reduced pdp-11/34 gerhard ohlendorf martin willmann on the stability of the ethernet we consider the stochastic behavior of binary exponential backoff, a probabilistic algorithm for regulating transmissions on a multiple access channel. ethernet, a local area network, is built upon this algorithm. 
the fundamental theoretical issue is stability: does the backlog of packets awaiting transmission remain bounded in time, provided the rates of new packet arrivals are small enough? we present a realistic model of n ≥ 2 stations communicating over the channel. our main result is to establish that the algorithm is stable if the sum of the arrival rates is sufficiently small. we report detailed results on which rates lead to stability when n = 2 stations share the channel. in passing we derive several other results bearing on the efficiency of the conflict resolution process. lastly, we report results from a simulation study, which, in particular, indicate alternative retransmission strategies can significantly improve performance. j goodman a g greenberg n madras p march subject index subject index modeling tcp reno performance: a simple model and its empirical validation jitendra padhye victor firoiu donald f. towsley james f. kurose exploiting fine-grained parallelism through a combination of hardware and software techniques stephen melvin yale patt multilayered illiac network scheme sanjiv k. bhatia a. g. starling efficient and flexible value sampling m. burrows u. erlingson s.-t. a. leung m. t. vandevoorde c. a. waldspurger k. walker w. e. weihl the satellite transmission protocol of the universe project the universe network uses a broadcast satellite channel to connect local area networks into a high speed wide area network. the satellite channel is scheduled using a packet tdma scheme. time allocation is controlled by one site acting as master, but is based on indications received from each participating site about its own requirements. the tdma scheduling is thus based not on circuits or the requirements of individual packets, but on a picture of the distributed requirements of the network as a whole. the paper describes the protocol currently implemented on the universe network and looks forward to a more general way of applying this technique. a. gillian waters christopher j. adams the atm physical layer in this article, we present an overview of the physical layer specification of the emerging asynchronous transfer mode (atm) networks. these specifications concern the complete details of how to ship 53-byte atm cells from point a to point b over a physical medium on a local area network (wan). while the task of defining the interfaces and line coding of the transceivers over different physical media is ongoing, the primary underlying theme has been the leveraging of existing standards and practices to the maximum extent possible. sailes k. rao mehdi hatamian ask digital demodulation scheme for noise immune infrared data communication a high performance architecture is proposed for the ask (amplitude shift keying) digital demodulation, which is dedicated to the noise immune wireless infrared data communication. in this architecture, an infrared subcarrier detected by a photodetector is digitized into ttl interface level pulses, and the digitized subcarrier is demodulated by a 1-bit digital demodulator. to improve the noise immunity against fluorescent lamps, the optical noises from the lamps are analyzed and the behavior of an ask infrared communication link is modeled under these noises. on the basis of this model, a digital demodulator is synthesized by means of a high level synthesis tool, aiming at implementing an algorithm of discriminating the subcarrier from optical noises. 
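the ethernet stability entry above models stations running binary exponential backoff; for reference, here is a minimal sketch of the textbook 802.3-style rule, where the truncation at 10 doublings and the 16-attempt limit are the customary constants assumed for this illustration, not parameters of the model studied in the entry.

```python
import random

# textbook binary exponential backoff; the constants follow common 802.3 practice
# and are assumptions of this sketch, not part of the stability analysis above.

MAX_ATTEMPTS = 16      # give up and report an error after 16 collisions
TRUNCATION = 10        # the contention window stops growing after 10 doublings

def backoff_slots(collisions: int) -> int:
    """pick a random delay, in slot times, after the given number of collisions."""
    if collisions >= MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(collisions, TRUNCATION)
    return random.randrange(2 ** k)     # uniform over {0, ..., 2**k - 1}

if __name__ == "__main__":
    random.seed(1)
    # delays chosen after the 1st, 2nd, ..., 5th collision of one frame
    print([backoff_slots(c) for c in range(1, 6)])
```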
a part of experimental results shows that the ask receiver realized with the use of this digital demodulator can achieve an error free infrared link even under the intense noises from fluorescent lamps. hiroshi uno keiji kumatani hiroyuki okuhata isao shirakawa toru chiba multicast support for mobile-ip with the hierarchical local registration approach the mobile-ip (m-ip) protocol allows ip hosts to move between different networks without the need to tear down established sessions. the mobile-ip systems supporting local registration were introduced to reduce the frequency by which home registration with the remotely located home agent is needed. providing an efficient system that support ip multicast, in an environment where the multicast group members frequently change their locations, is a challenge for systems providing mobility support. the local registration introduces its own additional challenges that prevent the direct application of the approaches proposed in the ietf standard to support multicast on mobile-ip. in this work we will propose an architecture that supports ip multicast in this environment. three flavours of the architecture are described while presenting the corresponding features. the proposed schemes take advantage of the inherent characteristics of the local registration to enhance the efficiency of the multicast support. performance aspects such as multicast datagram delay, delivery cost and robustness are analyzed, then simulation results are used to describe the performance of the different approaches under various conditions. h. omar t. saadawi m. lee on estimating end-to-end network path properties the more information about current network conditions available to a transport protocol, the more efficiently it can use the network to transfer its data. in networks such as the internet, the transport protocol must often form its own estimates of network properties based on measurements performed by the connection endpoints. we consider two basic transport estimation problems: determining the setting of the retransmission timer (rto) for a reliable protocol, and estimating the bandwidth available to a connection as it begins. we look at both of these problems in the context of tcp, using a large tcp measurement set [pax97b] for trace-driven simulations. for rto estimation, we evaluate a number of different algorithms, finding that the performance of the estimators is dominated by their minimum values, and to a lesser extent, the timer granularity, while being virtually unaffected by how often round-trip time measurements are made or the settings of the parameters in the exponentially- weighted moving average estimators commonly used. for bandwidth estimation, we explore techniques previously sketched in the literature [hoe96, ad98] and find that in practice they perform less well than anticipated. we then develop a receiver-side algorithm that performs significantly better. mark allman vern paxson an analysis of link level protocols for error prone links this paper analyzes the maximum throughput across a full duplex link, under three link level protocols. the three protocols all assume cumulative acknowledgements, but the sender's retransmission policy and the destination's policy on retaining correctly received packets which arrive before an expected retransmission do differ. the results quantify the throughput advantages in retaining all correctly received packets, for the two different retransmission policies. 
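the entry above on estimating end-to-end path properties finds that rto estimators are dominated by their minimum value and timer granularity; for orientation, here is a minimal sketch of the conventional ewma estimator (smoothed rtt plus four mean deviations); the gains, floor and granularity are the customary values, assumed here rather than taken from the entry.

```python
# conventional ewma-based rto estimator (srtt/rttvar), sketched for reference;
# alpha, beta, the floor and the granularity are customary values, assumed here.

class RtoEstimator:
    ALPHA = 1 / 8        # gain for the smoothed round-trip time
    BETA = 1 / 4         # gain for the mean deviation
    GRANULARITY = 0.1    # assumed clock granularity, seconds
    MIN_RTO = 1.0        # assumed floor, seconds

    def __init__(self):
        self.srtt = None
        self.rttvar = None

    def sample(self, rtt: float) -> float:
        """feed one round-trip-time measurement, return the new rto in seconds."""
        if self.srtt is None:                      # first measurement
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
        rto = self.srtt + max(self.GRANULARITY, 4 * self.rttvar)
        return max(self.MIN_RTO, rto)

if __name__ == "__main__":
    est = RtoEstimator()
    for r in [0.20, 0.25, 0.22, 0.80, 0.21]:
        print(round(est.sample(r), 3))
```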
a retention policy on the part of the destination is most advantageous when the link is quite error-prone. leslie jill miller how computer architecture trends may affect future distributed systems: from infiniband clusters to inter-processor speculation (abstract) the design of distributed systems is and will be altered by the computer architecture innovations enabled by moore's law. i will survey some of these issues and how they might affect the design of distributed systems. topics will include ideas for merging of clusters and large shared-memory multiprocessors, the emerging infiniband system area network standard, the effect of simultaneous multithreading, and the potential for softening the memory wall through inter-processor speculation. mark d. hill transmission-efficient routing in wireless networks using link-state information the efficiency with which the routing protocol of a multihop packet-radio network uses transmission bandwidth is critical to the ability of the network nodes to conserve energy. we present and verify the source-tree adaptive routing (star) protocol, which we show through simulation experiments to be far more efficient than both table-driven and on-demand routing protocols proposed for wireless networks in the recent past. a router in star communicates to its neighbors the parameters of its source routing tree, which consists of each link that the router needs to reach every destination. to conserve transmission bandwidth and energy, a router transmits changes to its source routing tree only when the router detects new destinations, the possibility of looping, or the possibility of node failures or network partitions. simulation results show that star is an order of magnitude more efficient than any topology-broadcast protocol proposed to date and depending on the scenario is up to six times more efficient than the dynamic source routing (dsr) protocol, which has been shown to be one of the best performing on-demand routing protocols. j. j. garcia-luna-aceves marcelo spohn transmitting time-critical data over heterogeneous subnetworks using standardized protocols current communication networks consist of subnetworks of different types. therefore a common network protocol has to be used for the transmission of data in such a heterogeneous network. for some time, the requirement of mobility in communication networks has been emerging. for that reason wireless networks are playing an increasing role as subnetworks. on the other hand there is the need for multiplexed transmission of time-critical and non time-critical (normal) data within a heterogeneous network. in this paper we discuss the problem of multiplexed transmission of time-critical and of non time-critical data over a wireless type subnetwork using a common standardized network protocol. many of the available wireless subnetworks are of low or medium transmission speed and guarantee a fixed transmission bandwidth at the access point. we describe a mechanism to transmit time-critical data in such a type of subnetwork using a connectionless transport and a connectionless network protocol. the concurrent transmission of non time-critical data using a connection oriented transport and the same connectionless network protocol is assumed to be of lower priority; it is scheduled in a way to fill the remaining capacity, which has not been reserved for the transmission of time-critical data.
in our discussion we concentrate on the standardized iso/osi protocols clnp as connectionless network protocol, cltp as connectionless transport protocol and tp4 as connection oriented transport protocol. we propose a header compression protocol and a fragmentation protocol for use on low bandwidth subnetworks. w. storz g. beling hardware/software cooperation in the iapx-432 the intel iapx-432 is an object-based microcomputer system with a unified approach to the design and use of its architecture, operating system, and primary programming language. the concrete architecture of the 432 incorporates hardware support for data abstraction, small protection domains, and language-oriented run-time environments. it also uses its object-orientation to provide hardware support for dynamic heap storage management, interprocess communication, and processor dispatching. we begin with an overview of the 432 architecture so readers unfamiliar with its basic concepts will be able to follow the succeeding discussion without the need to consult the references. following that, we introduce the various forms of hardware/software cooperation and the criteria by which a function or service is selected for migration. this is followed by several of the more interesting examples of hardware/software cooperation in the 432. a comparison of cooperation in the 432 with several contemporary machines and discussions of development issues, past and future, complete the paper. justin rattner software for multiprocessor system for tv and fm transmitters control liudmil lazarov georgi georgiev low load latency through sum-addressed memory (sam) load latency contributes significantly to execution time. because most cache accesses hit, cache-hit latency becomes an important component of expected load latency. most modern microprocessors have base+offset addressing loads; thus effective cache-hit latency includes an addition as well as the ram access. this paper introduces a new technique used in the ultrasparc iii microprocessor, sum-addressed memory (sam), which performs true addition using the decoder of the ram array, with very low latency. we compare sam with other methods for reducing the add part of load latency. these methods include sum-prediction with recovery, and bitwise indexing with duplicate-tolerance. the results demonstrate the superior performance of sam. william l. lynch gary lauterbach joseph i. chamdani a parallel pipelined processor with conditional instruction execution rod adams gordon steven the state of the art in protocol engineering t f paitkowski the intel ipsc/2 system: the concurrent supercomputer for production applications corporate intel exploiting conditional instructions in code generation for embedded vliw processors rainer leupers speeding up protocols for small messages many techniques have been discovered to improve performance of bulk data transfer protocols which use large messages. this paper describes a technique that improves protocol performance for protocols that use small messages, such as signalling protocols, by reducing memory system penalties. detailed measurements show that for tcp, most memory system costs are due to poor locality in the protocol code itself, rather than movement of data. we present a new technique, analogous to blocked matrix multiplication, for scheduling layer processing to reduce memory system costs, and analyze its performance in a synthetic environment.
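the sum-addressed memory entry above performs the base+offset addition inside the ram decoder; the essential observation is that base + offset == row can be checked without propagating carries. the sketch below shows one standard carry-free formulation of that equality test and cross-checks it by brute force; it only illustrates the idea and is not the sam circuit.

```python
# carry-free test of (a + b) mod 2**N == c: the kind of per-row check a
# sum-addressed decoder performs without a carry-propagate adder. the identity
# used here is a standard bit trick, shown only to illustrate the idea.

N = 8
MASK = (1 << N) - 1

def sum_matches(a: int, b: int, c: int) -> bool:
    """true iff (a + b) mod 2**N equals c, using only bitwise operations and a shift."""
    t = (a ^ b ^ c) & MASK                     # carry that would have to enter each bit
    u = ((a & b) | ((a ^ b) & ~c)) & MASK      # carry that actually gets generated
    return t == ((u << 1) & MASK)

if __name__ == "__main__":
    ok = True
    for a in range(256):
        for b in range(256):
            s = (a + b) & MASK
            ok &= sum_matches(a, b, s)             # must accept the true sum
            ok &= not sum_matches(a, b, s ^ 0x10)  # must reject a wrong row index
    print("identity verified:", ok)                # True
    print(sum_matches(0x3a, 0x07, 0x41))           # 0x3a + 0x07 == 0x41 -> True
```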
trevor blackwell integrating notification services in computer network and mobile telephony vittorio ghini giovanni pau paola salomoni dynamics of congestion control and avoidance of two-way traffic in an osi testbed rick wilder k. k. ramakrishnan allison mankin performance comparison of battery power consumption in wireless multiple access protocols jyh-cheng chen krishna m. sivalingam prathima agrawal dataflow machine architecture dataflow machines are programmable computers of which the hardware is optimized for fine-grain data-driven parallel computation. the principles and complications of data-driven execution are explained, as well as the advantages and costs of fine-grain parallelism. a general model for a dataflow machine is presented and the major design options are discussed. most dataflow machines described in the literature are surveyed on the basis of this model and its associated technology. for general-purpose computing the most promising dataflow machines are those that employ packet-switching communication and support general recursion. such a recursion mechanism requires an extremely fast mechanism to map a sparsely occupied virtual space to a physical space of realistic size. no solution has yet proved fully satisfactory. a working prototype of one processing element is described in detail. on the basis of experience with this prototype, some of the objections raised against the dataflow approach are discussed. it appears that the overhead due to fine- grain parallelism can be made acceptable by sophisticated compiling and employing special hardware for the storage of data structures. many computing- intensive programs show sufficient parallelism. in fact, a major problem is to restrain parallelism when machine resources tend to get overloaded. another issue that requires further investigation is the distribution of computation and data structures over the processing elements. arthur h. veen samp: a general purpose processor based on a self-timed vliw structure lothar nowak a passive protected self-healing mesh network architecture and applications tsong-ho wu experiences with modeling of analog and mixed a/d systems based on pwl technique jerzy dabrowski andrzej pulka an examination of high-performance computing export control policy in the 1990s seymour e. goodman peter wolcott grey burkhart a progress report on spur: february 1, 1987 dave patterson an efficient location management protocol for wireless atm customer premises networks in this paper, we present a new location management protocol for wireless atm networks, called lmcp (location management and control protocol). this protocol is based on the pnni (private network-to-network interface) routing functionality to advertise the movement of mobile terminals within predefined areas. moreover, lcmp uses specialized entities to store and retrieve the current location area of the mobile terminals. these entities are located in mobility enhanced switches that control the execution of mobility procedures (e.g., handovers). the main benefit from the application of lmcp is the establishment of connections that do not contain any misrouted segments. furthermore, it requires minor modifications to the pnni and enables its inter-working with other location management mechanisms. the protocol is compared with other similar mechanisms, and its efficiency is demonstrated by the results of an analytical model. 
evangelos zervas alexandros kaloxylos lazaros merakos fast set agreement in the presence of timing uncertainty dimitris michailidis enhanced reserved polling multiaccess technique for multimedia personal communication systems this article describes a multiaccess technique which allows the transport of multimedia information across global personal communication systems (pcs). impressive growth in the application of wireless technologies to telecommunications has sparked active research on a new generation of mobile radio networks projected to handle heterogeneous traffic types. one of the key requirements of these advanced systems is the multiaccess protocol which must guarantee quality of service and provide efficient access to multirate broadband applications that combine voice, video and data communications. in addition, the protocol is required to operate with the demanding constraints imposed by moving users, dynamic traffic load variations and highly sensitive wireless links. to this end, a multiaccess scheme called "enhanced reserved polling" is proposed. the scheme is designed to execute many pcs-related functions including radio resource assignment, connection control and mobility management. it accommodates a diverse mixture of delay classes/message priorities and can also enhance the bandwidth sharing among different cells in the network. benny bing regu subramanian interleaved parallel schemes: improving memory throughput on supercomputers on many commercial supercomputers, several vector register processors share a global highly interleaved memory in a mimd mode. when all the processors are working on a single vector loop, a significant part of the potential memory throughput may be wasted due to the asynchronism of the processors. in order to limit this loss of memory throughput, a simd synchronization mode for vector accesses to memory may be used. but an important part of the memory bandwidth may be wasted when accessing vectors with an even stride. in this paper, we present ips, an interleaved parallel scheme, which ensures an equitable distribution of elements on a highly interleaved memory for a wide range a vector strides. we show how to organize access to memory, such that unscrambling of vectors from memory to the vector register processors requires a minimum number of passes through the interconnection network. andre seznec jacques lenfant a data labelling technique for high-performance protocol processing and its consequences david c. feldmeier high performance communication for mimd supercomputers jochen gries axel hahlweg ralf harneit axel kern hans christoph zeidler the nsfnet backbone network the nsfnet backbone network interconnects six supercomputer sites, several regional networks and arpanet. it supports the darpa internet protocol suite and dcn subnet protocols, which provide delay-based routing and very accurate time-synchronization services. this paper describes the design and implementation of this network, with special emphasis on robustness issues and congestion-control mechanisms. d. l. mills h. braun worst-case analysis of dynamic wavelength allocation in optical networks ori gerstel galen sasaki shay kutten rajiv ramaswami an experimental configuration for the evaluation of cac algorithms andrew moore simon crosby dynamics of tcp traffic over atm networks we investigate the performance of tcp connections over atm networks without atm-level congestion control, and compare it to the performance of tcp over packet-based networks. 
for simulations of congested networks, the effective throughput of tcp over atm can be quite low when cells are dropped at the congested atm switch. the low throughput is due to wasted bandwidth as the congested link transmits cells from "corrupted" packets, i.e., packets in which at least one cell is dropped by the switch. this fragmentation effect can be corrected and high throughput can be achieved if the switch drops whole packets prior to buffer overflow; we call this strategy early packet discard. we also discuss general issues of congestion avoidance for best-effort traffic in atm networks. allyn romanow sally floyd dataflow computer development in japan this paper describes the research activity on dataflow computing in japan focusing on dataflow computer development at the electrotechnical laboratory (etl). first, the history of dataflow computer development in japan is outlined. some distinguished milestones in the history are mentioned in detail. second, two types of dataflow computing systems developed at etl, sigma-1 and em-4, are described with their research goals. the fundamental characteristics of the both systems are given and some comparisons are made. finally, future problems toward new generation computer systems to meet the challenge of tera flops machines are discussed. toshitsugu yuba toshio shimada yoshinori yamaguchi kei hiraki shuichi sakai an integrated congestion management architecture for internet hosts this paper presents a novel framework for managing network congestion from an end-to-end perspective. our work is motivated by trends in traffic patterns that threaten the long-term stability of the internet. these trends include the use of multiple independent concurrent flows by web applications and the increasing use of transport protocols and applications that do not adapt to congestion. we present an end-system architecture centered around a congestion manager (cm) that ensures proper congestion behavior and allows applications to easily adapt to network congestion. our framework integrates congestion management across all applications and transport protocols. the cm maintains congestion parameters and exposes an api to enable applications to learn about network characteristics, pass information to the cm, and schedule data transmissions. internally, it uses a window-based control algorithm, a scheduler to regulate transmissions, and a lightweight protocol to elicit feedback from receivers.we describe how tcp and an adaptive real-time streaming audio application can be implemented using the cm. our simulation results show that an ensemble of concurrent tcp connections can effectively share bandwidth and obtain consistent performance, without adversely affecting other network flows. our results also show that the cm enables audio applications to adapt to congestion conditions without having to perform congestion control or bandwidth probing on their own. we conclude that the cm provides a useful and pragmatic framework for building adaptive internet applications. hari balakrishnan hariharan s. rahul srinivasan seshan design and performance evaluation of a distributed contention control(dcc) mechanism for ieee 802.11 wireless local area networks luciano bononi marco conti lorenzo donatiello the n-ary stack algorithm for the wireless random access channel the emergence of wireless and personal communication networks has brought random access protocols for packet radio networks back to the research forefronts. 
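the tcp-over-atm entry above attributes the throughput loss to cells from already-corrupted packets and proposes early packet discard; below is a schematic switch-side sketch of that policy, in which the cell format, per-vc flags and threshold value are assumptions made for illustration.

```python
from collections import namedtuple

# schematic early-packet-discard queue: if the buffer is already past a threshold
# when the first cell of a packet arrives, the whole packet is dropped. the cell
# format, threshold and per-vc bookkeeping are assumptions of this sketch.

Cell = namedtuple("Cell", "vc first last")    # flags mark a packet's first/last cell

class EpdQueue:
    def __init__(self, capacity=100, threshold=80):
        self.capacity = capacity      # total cell buffer
        self.threshold = threshold    # refuse *new* packets beyond this occupancy
        self.cells = []
        self.discarding = set()       # vcs whose current packet is being dropped

    def arrive(self, cell: Cell) -> bool:
        """enqueue one cell; returns False if it is discarded."""
        if cell.first and len(self.cells) >= self.threshold:
            self.discarding.add(cell.vc)          # drop this packet in its entirety
        drop = cell.vc in self.discarding
        if not drop and len(self.cells) >= self.capacity:
            drop = True                           # overflow: drop the packet's tail too
            self.discarding.add(cell.vc)
        if cell.last:
            self.discarding.discard(cell.vc)      # next packet starts with a clean slate
        if not drop:
            self.cells.append(cell)
        return not drop

    def depart(self) -> Cell | None:
        """transmit the cell at the head of the buffer, if any."""
        return self.cells.pop(0) if self.cells else None

if __name__ == "__main__":
    q = EpdQueue(capacity=6, threshold=4)
    pkt = [Cell("vc1", True, False), Cell("vc1", False, False), Cell("vc1", False, True)]
    print([q.arrive(c) for c in pkt + pkt + pkt])   # the third packet is refused whole
```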
most such protocols are based on the ever popular aloha protocol. unfortunately, this protocol is inherently unstable and requires sophisticated schemes to stabilize it. another class of random access schemes, called limited sensing or stack algorithms, has been proposed that is stable and allows for the dynamic incorporation of new stations into the network. in this paper, we will review the simple to implement n-ary stack algorithm, and we will study its performance under various system parameters in the presence of capture, and also in the presence of feedback errors. finally, we will investigate its maximum system throughput under various traffic generation processes. chatschik c. bisdikian description and performance of a class of orthogonal multiprocessor networks isaac d. scherson peter f. corbett dynamics of ip traffic: a study of the role of variability and the impact of control using the _ns-2_-simulator to experiment with different aspects of user- or session-behaviors and network configurations and focusing on the qualitative aspects of a wavelet-based scaling analysis, we present a systematic investigation into how and why variability and feedback- control contribute to the intriguing scaling properties observed in actual internet traces (as our benchmark data, we use measured internet traffic from an isp). we illustrate how variability of both user aspects and network environments (i) causes self-similar scaling behavior over large time scales, (ii) determines a more or less pronounced change in scaling behavior around a specific time scale, and (iii) sets the stage for the emergence of surprisingly rich scaling dynamics over small time scales; i.e., multifractal scaling. moreover, our scaling analyses indicate whether or not open-loop controls such as udp or closed-loop controls such as tcp impact the local or small-scale behavior of the traffic and how they contribute to the observed multifractal nature of measured internet traffic. in fact, our findings suggest an initial physical explanation for why measured internet traffic over small time scales is highly complex and suggest novel ways for detecting and identifying, for example, performance bottlenecks.this paper focuses on the qualitative aspects of a wavelet-based scaling analysis rather than on the quantitative use for which it was originally designed. we demonstrate how the presented techniques can be used for analyzing a wide range of different kinds of network-related measurements in ways that were not previously feasible. we show that scaling analysis has the ability to extract relevant information about the time-scale dynamics of internet traffic, thereby, we hope, making these techniques available to a larger segment of the networking research community. anja feldmann anna c. gilbert polly huang walter willinger an experiment to improve operand addressing mcode is a high-level language, stack machine designed to support strongly- typed, pascal-based languages with a variety of data types in a modular programming environment. the instruction set, constructed for efficiency and extensibility, is based on an analysis of 120,000 lines of pascal programs. the design was compared for efficiency with the instruction sets of the digital equipment pdp-11 and vax by examining the generated code from the same compiler for all three machines. in addition, the original design choices were tested by analyzing the generated code from 19,000 lines of starmod programs. 
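the n-ary stack entry above reviews a limited-sensing splitting algorithm; the sketch below gives one textbook formulation of the per-station counter update (an illustrative rendering of the general tree-splitting idea, not necessarily the exact rule studied in the entry).

```python
import random

# one textbook formulation of the per-station counter update in an n-ary stack
# (tree-splitting) random-access scheme: a backlogged station transmits whenever
# its counter is 0 and adjusts the counter from the ternary channel feedback.
# this is an illustrative rendering of the idea, not the entry's specification.

N = 3   # splitting factor (n-ary)

def update_counter(counter: int, feedback: str) -> int:
    """return the new stack counter given feedback 'idle', 'success' or 'collision'."""
    if feedback == "collision":
        if counter == 0:
            return random.randrange(N)   # colliding stations split into n random groups
        return counter + N - 1           # waiting stations make room for the new groups
    return counter - 1 if counter > 0 else 0   # idle/success: move one level closer

if __name__ == "__main__":
    random.seed(7)
    counter = 0
    for fb in ["collision", "idle", "collision", "success", "success"]:
        counter = update_counter(counter, fb)
        print(f"{fb:9s} -> counter {counter}")
```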
as a result of this iterative experiment, we have summarized our observations in an efficient reorganization of the vax's addressing modes. robert p. cook nitin donde cost-minimizing construction of a unidirectional shr with diverse protection sung-hark chung hu-gon kim yong-seok yoon dong-wan tcha a hop by hop rate-based congestion control scheme the flow/congestion control scheme of tcp is based on the sliding window mechanism. as we demonstrate in this paper, the performance of this and other similar end-to- end flow control schemes deteriorates as networks move to the gigabit range. this has been the motivation for our search for a new flow and congestion control scheme. in this paper, we propose as an alternative, a hop- by-hop rate-based mechanism for congestion control. due to the increasing sophistication in switch architectures, to provide "quality of service" guarantees for real-time as well as bursty data traffic, the implementation of hop-by-hop controls has become relatively inexpensive. a cost-effective implementation of the proposed scheme for a multi-gigabit packet switch is described in [2]. in this paper, we present results of a simulation study comparing the performance of this hop-by-hop flow control scheme to two end- to-end flow control schemes. the results indicate that the proposed scheme displays stable behavior for a wide range of traffic conditions and diverse network topologies. more importantly, the performance of the scheme, measured in terms of the average number of occupied buffers, the end-to-end throughput, the network delay, and the link utilization at the bottleneck, is better than that of the end-to-end control schemes studied here. these results present a convincing case against popular myths about hop- by-hop control mechanisms. partho p. mishra hemant kanakia data-driven and demand-driven computer architecture philip c. treleaven david r. brownbridge richard p. hopkins editorial stephen s. rappaport thomas g. robertazzi construction of a validated simulator for performance prediction of decnet- based computer networks predicting important performance parameters of computer networks, recognizing potential bottlenecks, comparing design alternatives are factors of decisive importance in building complex computer networks. in this respect, computer aided simulation has proved to be a very effective design tool in a number of practical applications. it is shown by the example of the mosaic modeling system how a simulator applicable on a broad basis could be adapted to the specific characteristics of a class of existing computer networks. the class of computer networks chosen for modeling is based on decnet communication software. modeling concentrates mainly on the decnet protocols and their hierarchies. the paper indicates the adaptations necessary to adjust the mosaic kernel system adequately to decnet computer networks and summarizes the results of a rather extensive validation for the modeling system, comprising calibration and accuracy establishment, which has been carried out successfully. bernd wolfinger max muhlhauser a resilient communication structure for local area netowrks reliable communication is crucial to the correct functioning of distributed systems. we propose a multi-ring communication structure and a reconfiguration algorithm that tolerate multiple link failures before the network divides into more than one partition. in case of partitioning, each partition is reconfigured to allow communication among the sites within the partition. 
the algorithm handles recovery of links and merges partitions once links become operational again. the algorithm itself is fault-tolerant, and it is fully distributed and does not require global knowledge about the status of the network at any one site. amr el abbadi thomas rauchle branch with masked squashing in superpipelined processors c.-l su a. m. despain publish/subscribe in a mobile environment a publish/subscribe system dynamically routes and delivers events from sources to interested users, and is an extremely useful communication service when it is not clear in advance who needs what information. in this paper we discuss how a publish/subscribe system can be extended to operate in a mobile environment, where events can be generated by moving sensors or users, and subscribers can request delivery at handheld and/or mobile devices. we describe how the publish/subscribe system itself can be distributed across multiple (possibly mobile) computers to distribute load, and how the system can be replicated to cope with failures, message loss, and disconnections. yongqiang huang hector garcia-molina flip-flop: a stack-oriented multiprocessing system peter grabienski accelerating shared virtual memory via general-purpose network interface support clusters of symmetric multiprocessors (smps) are important platforms for high-performance computing. with the success of hardware cache-coherent distributed shared memory (dsm), a lot of effort has also been made to support the coherent shared-address-space programming model in software on clusters. much research has been done in fast communication on clusters and in protocols for supporting software shared memory across them. however, the performance of software virtual memory (svm) is still far from that achieved on hardware dsm systems. the goal of this paper is to improve the performance of svm on system area network clusters by considering communication and protocol layer interactions. we first examine what are the important communication system bottlenecks that stand in the way of improving parallel performance of svm clusters; in particular, which parameters of the communication architecture are most important to improve further relative to processor speed, which ones are already adequate on modern systems for most applications, and how will this change with technology in the future. we find that the most important communication subsystem cost to improve is the overhead of generating and delivering interrupts for asynchronous protocol processing. then we proceed to show that, by providing simple and general support for asynchronous message handling in a commodity network interface (ni) and by altering svm protocols appropriately, protocol activity can be decoupled from asynchronous message handling, and the need for interrupts or polling can be eliminated. the ni mechanisms needed are generic, not svm-dependent. we prototype the mechanisms and such a synchronous home-based lrc protocol, called genima (general-purpose network interface support for shared memory abstractions), on a cluster of smps with a programmable ni. we find that the performance improvements are substantial, bringing performance on a small-scale smp cluster much closer to that of hardware-coherent shared memory for many applications, and we show the value of each of the mechanisms in different applications.
angelos bilas dongming jiang jaswinder pal singh analysis of a metropolitan-area wireless network diane tang mary baker survey of commercial parallel machines gowri ramanathan joel oren a multi-microprocessor architecture with hardware support for communication and scheduling we describe a multiprocessor system that attempts to enhance the system performance by incorporating into its architecture a number of key operating system concepts. in particular: --- the scheduling and synchronization of concurrent activities are built in at the hardware level, --- the interprocess communication functions are performed in hardware, and --- a coupling between the scheduling and communication functions is provided which allows efficient implementation of parallel systems that is precluded when the scheduling and communication functions are realized in software. sudhir r. ahuja abhaya asthana the transport layer: tutorial and survey transport layer protocols provide for end-to-end communication between two or more hosts. this paper presents a tutorial on transport layer concepts and terminology, and a survey of transport layer services and protocols. the transport layer protocol tcp is used as a reference point, and compared and contrasted with nineteen other protocols designed over the past two decades. the service and protocol features of twelve of the most important protocols are summarized in both text and tables. sami iren paul d. amer phillip t. conrad computing the internet checksum r. braden d. borman c. partridge acknowledgment procedures at radio link control level in gprs in this paper, we investigate the acknowledgment procedures used at radio link control level in general packet radio service (gprs). the gprs is a new gsm service, which is currently being standardized by etsi for gsm phase 2+, and it provides packet switched data services over gsm network resources. the role of acknowledgment procedures is to assure the delivery of packets on the gprs radio interface. this paper gives a brief description of the gprs radio interface with special attention to the radio link control and medium access control (rlc/mac) layer procedures. particularly, the acknowledgment parameters and operations are described and their performance is evaluated. the delay introduced by acknowledgment procedures is studied analytically. in order to ameliorate the performance of the rlc acknowledgment mechanism, we propose and describe a new additional hybrid fec/arq mechanism, which can operate with the current one. the purpose of the new mechanism is to decrease the number of control blocks used for the rlc acknowledgment mechanism and thus reduce the delay required for a packet delivery. after presenting the channel models used, we evaluate by simulation the performance of the acknowledgment procedures presented in etsi specifications and the proposed one over a stationary channel model and over a noisy wireless channel model affected by bursts of errors. a high speed transport protocol for datagram/virtual circuit networks we present a design and preliminary analysis of an end-to-end transport protocol that is capable of high throughput consistent with the evolving wideband physical networks based on fiber optic transmission lines and high capacity switches.
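the computing-the-internet-checksum entry above refers to the standard 16-bit one's-complement sum; a minimal sketch in the spirit of rfc 1071 follows, using the usual zero-padding convention for odd-length data (the test vector is arbitrary).

```python
# 16-bit one's-complement internet checksum in the spirit of rfc 1071.

def internet_checksum(data: bytes) -> int:
    """return the 16-bit internet checksum of data."""
    if len(data) % 2:
        data += b"\x00"                        # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]  # accumulate 16-bit big-endian words
    while total >> 16:                         # fold carries back in (end-around carry)
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF                     # one's complement of the folded sum

if __name__ == "__main__":
    # a segment whose checksum field holds the computed value verifies to 0
    data = bytes.fromhex("45000073000040004011")
    csum = internet_checksum(data)
    print(hex(csum))
    print(internet_checksum(data + csum.to_bytes(2, "big")))   # 0
```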
unlike the current transport protocols in which changes in control state information are exchanged between the two communicating entities only when some significant event occurs, our protocol exchanges relevant and full state information periodically, routinely and frequently. we show that this results in reducing the complexity of protocol processing by removing many of the procedures required to recover from the inadequacies of the network such as bit-errors, packet loss, out of sequence packets and makes it more amenable to parallel processing. also, to increase channel utilization in the presence of high speed, long latency networks, and to support datagrams, we propose an efficient implementation of selective repeat method of error control used in our protocol. thus, we utilize small extra bandwidth to simplify protocol processing; a trade-off that appears proper since electronic speeds for protocol processing are far slower than fiber transmission rates. our preliminary estimates indicate that 20,000 packets/second can be handled in a completely software implementation on a 10 mip microprocessor using 8% of its cycles. k. k. sabnani a. n. netravali quality of service based routing: a performance perspective recent studies provide evidence that quality of service (qos) routing can provide increased network utilization compared to routing that is not sensitive to qos requirements of traffic. however, there are still strong concerns about the increased cost of qos routing, both in terms of more complex and frequent computations and increased routing protocol overhead. the main goals of this paper are to study these two cost components, and propose solutions that achieve good routing performance with reduced processing cost. first, we identify the parameters that determine the protocol traffic overhead, namely (a) policy for triggering updates, (b) sensitivity of this policy, and (c) clamp down timers that limit the rate of updates. using simulation, we study the relative significance of these factors and investigate the relationship between routing performance and the amount of update traffic. in addition, we explore a range of design options to reduce the processing cost of qos routing algorithms, and study their effect on routing performance. based on the conclusions of these studies, we develop extensions to the basic qos routing, that can achieve good routing performance with limited update generation rates. the paper also addresses the impact on the results of a number of secondary factors such as topology, high level admission control, and characteristics of network traffic. george apostolopoulos roch guerin sanjay kamat satish k. tripathi psnow: a tool to evaluate architectural issues for now environments mangesh kasbekar shailabh nagar anand sivasubramaniam performance of a weakly consistent wireless web access mechanism ming feng chang yi-bing lin framework for the design and analysis of large scale material handling systems santhanam harit g. don taylor a modified cdma/prma medium access control protocol for integrated services in leo satellite systems the goal of this paper is the design of a multiservice mac (medium access control) layer able to use efficiently the radio channel within a leo (low earth orbit) constellation. after presenting the context of the problem, a set of capabilities within mac layer, which provide communications able to support various applications such as video, data and voice, are defined. 
a new mac protocol, based on the cdma/prma one, is proposed in the context of leo systems. compared with the initial cdma/prma protocol, the proposed modifications do significantly increase the efficiency of the mac layer in the leo context. a cac (connection admission control) function is introduced, and several user traffic control methods are presented and studied by simulation. abbas ibrahim samir tohme cut-through switching, pipelining, and scheduling for network evacuation leandros tassiulas effective realization of kermit's microcomputer potential mark b. johnson design constraints in the construction of a truly distributed operating system (abstract only) distributed computing will be viewed from three levels. first, from possible hardware configurations, where emphasis will be placed on loosely coupled systems interconnected with high speed networks (hsn). second, from the operating system (os) level where five fundamental additions to existing uniprocessor oss are required. they include the creation of a message-based interprocess communication facility, a communication manager, an extended file manager, a resource manager, and an extended user interface. the third and final view of truly distributed oss will be from the applications level. how the user views the logical system, and automatic versus manual control of computational granularity will be discussed. p. tobin maginnis using handheld computers in the classroom: laboratories and collaboration on handheld machines michael j. jipping joshua krikke sarah dieter samantha sandro connection establishment in high-speed networks israel cidon inder s. gopal adrian segall adaptive interpretation as a means of exploiting complex instruction sets in this paper we concentrate on the effect of instruction set architecture on the performance potential of a computer system. these issues are key in considerations of what instruction set is most appropriate for the support of high level languages on general purpose machines. two possible approaches are so-called complex instruction sets, such as those of the vax and iapx 432, and the "reduced" instruction set of the risc i [2] microcomputer, which is expected to have performance similar to that of a vax 11-780. we propose a method of instruction set interpretation that takes advantage of the architectural features of complex instruction sets. these methods have been simulated executing real programs and in the case of the vax instruction set have resulted in a typical improvement of a factor of two, assuming the same cycle time as the vax 11-780. the techniques presented exploit the context available in a complex instruction and retain this information for use in subsequent execution of that instruction. since this context is available only within a single instruction, low level instruction sets cannot benefit from this technique. in this sense the reduced instruction set machines as implemented in risc are at their architectural limits, while complex instruction sets such as the vax are far from theirs. for these reasons, complex instruction sets can have a significantly greater performance potential than a risc instruction set for a given technology. richard l. norton jacob a. abraham performance of the expressnet with voice/data traffic in the past few years, local area networks have come into widespread use for the interconnection of computers. together with the trend towards digital transmission in voice telephony, this has spurred interest in integrated voice/data networks.
the expressnet, an implicit-token round-robin scheme using unidirectional busses, achieves high performance even at bandwidths of 100 mb/s. other features that make the protocol attractive for voice/data traffic are bounded delays and priorities. the latter is achieved by devoting alternate rounds to one or the other of the two traffic types. by the use of accurate simulation, the performance of the expressnet with voice/data traffic is characterized. it is shown that the expressnet satisfies the real-time constraints of voice traffic adequately even at bandwidths of 100 mb/s. data traffic is able to effectively utilize bandwidth unused by voice traffic. the trade-offs in the alternating round priority mechanism are quantified. loss of voice samples under overload is shown to occur regularly in small, frequent clips, subjectively preferable to irregular clips. in a comparison of the expressnet, the contention-based ethernet and the round-robin token bus protocols, the two round-robin protocols are found to perform better than the ethernet under heavy load owing to the more deterministic mode of operation. the comparison of the two round-robin protocols highlights the importance of minimizing scheduling overhead at high bandwidths. timothy a. gonsalves fouad a. tobagi implementation of precise interrupts in pipelined processors james e. smith andrew r. pleszkun a high speed mechanism for short branches bernard k. gunther towards a design methodology for adaptive applications malcolm mcilhagga ann light ian wakeman distributed fault-tolerance for large multiprocessor systems techniques for dealing with hardware failures in very large networks of distributed processing elements are presented. a concept known as distributed fault-tolerance is introduced. a model of a large multiprocessor system is developed and techniques, based on this model, are given by which each processing element can correctly diagnose failures in all other processing elements in the system. the effect of varying system interconnection structures upon the extent and efficiency of the diagnosis process is discussed, and illustrated with an example of an actual system. finally, extensions to the model, which render it more realistic, are given and a modified version of the diagnosis procedure is presented which operates under this model. j. g. kuhl s. m. reddy resources section: books michele tepper security in a networked environment networking allows multiple users to share data, information, software, and hardware. in addition, a network can centralize the management of a large base of connected processing units often configured to provide one location for coordinating security, backup and control. security is the means to limit damage to data and to prevent user access to unauthorized information. the question of maintaining adequate security provisions across heterogeneous platforms presents new security challenges for the network administrator. computer crimes are on the rise. these crimes range from eavesdropping on confidential information to physical damage of the equipment on which information is stored. some reasons for the increase in computer crimes include that there are more computers in use, there are more networks, and pcs can now access larger mainframe databases. cleopas o. angaye determination of the registration point for location update by dynamic programming in pcs location management is important to effectively keep track of mobile terminals with reduced signal flows and database queries.
even though dynamic location management strategies are known to show good performance, we in this paper consider the static location management strategy which is easy to implement. a system with a single home location register and pointer forwarding is assumed. a mobile terminal is assumed to have memory to store the ids of visitor location registers (vlrs) each of which has the forwarding pointer to identify its current location. to obtain the registration point which minimizes the database access and signaling cost from the current time to the time of power-off, a probabilistic dynamic programming formulation is presented. a selective pointer forwarding scheme is proposed which is based on one-step dynamic programming. the proposed location update scheme determines the least cost temporary vlr which point forwards the latest location of the mobile. the computational results show that the proposed scheme outperforms is-41, pure pointer forwarding, and one-step pointer forwarding at the expense of small storage and a few computations at the mobile terminals. chae y. lee seon g. chang fast and scalable wireless handoffs in support of mobile internet audio future internetworks will include large numbers of portable devices moving among small wireless cells. we propose a hierarchical mobility management scheme for such networks. our scheme exploits locality in user mobility to restrict handoff processing to the vicinity of a mobile node. it thus reduces handoff latency and the load on the internetwork. our design is based on the internet protocol (ip) and is compatible with the mobile ip standard. we also present experimental results for the lowest level of the hierarchy. we implemented our local handoff mechanism on unix-based portable computers and base stations, and evaluated its performance on a wavelan network. these experiments show that our handoffs are fast enough to avoid noticeable disruptions in interactive audio traffic. for example, our handoff protocol completes less than 10 milliseconds after a mobile node initiates it. our mechanism also recovers from packet losses suffered during the transition from one cell to another. this work helps extend internet telephony and teleconferencing to mobile devices that communicate over wireless networks. ramon caceres venkata n. padmanabhan a new approach to the design and analysis of peer-to-peer mobile networks this paper introduces a new model and methodological approach for dealing with the probabilistic nature of mobile networks based on the theory of random graphs. probabilistic dependence between the random links prevents the direct application of the theory of random graphs to communication networks. the new model, termed random network model, generalizes conventional random graph models to allow for the inclusion of link dependencies in a mobile network. the new random network model is obtained through the superposition of kolmogorov complexity and random graph theory, making in this way random graph theory applicable to mobile networks. to the best of the authors' knowledge, it is the first application of random graphs to the field of mobile networks and a first general modeling framework for dealing with ad-hoc network mobility. the application of this methodology makes it possible to derive results with proven properties. the theory is demonstrated by addressing the issue of the establishment of a connected virtual backbone among mobile clusterheads in a peer-to-peer mobile wireless network.
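the registration-point entry above trades off registering at the home location register against chaining forwarding pointers; the sketch below reduces that trade-off to a one-move cost comparison with made-up cost constants, only to illustrate the flavour of the decision, not the entry's dynamic-programming formulation.

```python
# schematic cost comparison behind pointer forwarding vs. hlr registration.
# all cost constants and the call-arrival model are made up for illustration.

HLR_UPDATE_COST = 10.0      # registering the new vlr directly at the hlr
POINTER_SETUP_COST = 2.0    # adding one forwarding pointer at the old vlr
TRAVERSAL_COST = 1.5        # extra cost per incoming call per pointer traversed

def forwarding_is_cheaper(chain_length: int, expected_calls: float) -> bool:
    """should the terminal extend the pointer chain instead of updating the hlr?

    extending the chain costs one pointer setup now plus one extra chain traversal
    per expected incoming call (the chain is already chain_length pointers long).
    """
    forwarding_cost = POINTER_SETUP_COST + expected_calls * TRAVERSAL_COST * (chain_length + 1)
    return forwarding_cost < HLR_UPDATE_COST

if __name__ == "__main__":
    for chain in range(4):
        for calls in (0.5, 2.0, 5.0):
            choice = "forward" if forwarding_is_cheaper(chain, calls) else "update hlr"
            print(f"chain={chain} expected calls={calls}: {choice}")
```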
using the random network model, we show that it is possible to construct a randomized distributed algorithm which provides connectivity with high probability, requiring exponentially fewer connections (peer-to-peer logical links) per clusterhead than the number of connections needed for an algorithm with a worst case deterministic guarantee. imrich chlamtac andrás faragó technical opinion: does data traffic exceed voice traffic? a. michael noll footprint handover rerouting protocol for low earth orbit satellite networks huseyin uzunalioglu ian f. akyildiz yelena yesha wei yen tetra radio performance evaluated via the software package tetrasim tetra (terrestrial trunked radio) is a digital mobile radio standard for voice and data transmission. it aims at satisfying the growing demand for applications and facilities from professional users and emergency services. the system has been standardized by etsi (european telecommunications standards institute) and is provided with a european harmonized frequency band. the first tetra networks appeared on the market in 1997. this paper reports tetra radio performance evaluated via a simulation software package, named tetrasim, entirely developed at cselt according to tetra specifications. the simulation results have been obtained for some of the traffic and control channels specified by the standard, in terms of ber (bit error rate) and mer (message erasure rate). as far as the simulated receiver scheme is concerned, the characteristics of the equivalent low-pass filters and the adopted synchronization technique are reported. the simulated demodulator uses a differential detection scheme with soft decision outputs in the case of coded channels. performance analyses and comparisons of results are provided by taking into account the effects of signal-to-noise ratio, co-channel interference, adjacent channel interference, propagation models defined in the standard and mobile unit speed. the simulation results reported in this work have been included in the etr (etsi technical report) "tetra designers' guide part 2: radio and traffic performance". armando annunziato davide sorbara the tenet real-time protocol suite: design, implementation, and experiences anindo banerjea domenico ferrari bruce a. mah mark moran dinesh c. verma hui zhang an application developer's perspective on reliable multicast for distributed interactive media in this paper we investigate which characteristics reliable multicast services should have in order to be appropriate for use by distributed interactive media applications such as shared whiteboards, networked computer games, or distributed virtual environments. we take a close look at the communication requirements of these applications and at existing approaches to realize reliable multicast. based on this information we deduce which reliable multicast transport protocols are appropriate for the different aspects of distributed interactive media. furthermore we discuss how the application program interface of a reliable multicast service should be designed in order to support the development of applications for distributed interactive media. martin mauve volker hilt an architecture for packet-striping protocols link-striping algorithms are often used to overcome transmission bottlenecks in computer networks. traditional striping algorithms suffer from two major disadvantages. they provide inadequate load sharing in the presence of variable-length packets, and may result in non-fifo delivery of data.
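as a rough illustration of the byte-count-based (deficit-style) load sharing alluded to in the packet-striping entry above (which continues in the next record), the following python fragment stripes variable-length packets across several fifo channels. the class names, the quantum value, and the channel interface are invented for illustration and are not taken from the paper.

    # minimal sketch: stripe variable-length packets across fifo channels by
    # byte count (surplus/deficit round robin), so channels receive roughly
    # equal numbers of bytes even when packet sizes vary. illustrative only.
    class Channel:
        def __init__(self, name):
            self.name, self.bytes_sent = name, 0
        def send(self, packet):
            self.bytes_sent += len(packet)

    class Striper:
        def __init__(self, channels, quantum=1500):
            self.channels = channels
            self.quantum = quantum      # byte credit granted per channel visit
            self.current = 0
            self.credit = quantum       # remaining credit on the current channel

        def send(self, packet):
            self.channels[self.current].send(packet)
            self.credit -= len(packet)
            if self.credit <= 0:        # this channel has used its share; move on,
                self.current = (self.current + 1) % len(self.channels)
                self.credit += self.quantum   # carrying any deficit forward

    s = Striper([Channel("a"), Channel("b"), Channel("c")])
    for size in [1500, 40, 40, 1500, 576, 1500, 40, 1500]:
        s.send(b"x" * size)
    print([c.bytes_sent for c in s.channels])

deducting actual packet lengths rather than counting packets is what keeps the shares even when small acknowledgements are mixed with full-size data packets.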
we describe a new family of link-striping algorithms that solves both problems. our scheme applies to any layer that can provide multiple fifo channels. we deal with variable-sized packets by showing how fair-queuing algorithms can be transformed into load-sharing algorithms. our transformation results in practical load-sharing protocols, and shows a theoretical connection between two seemingly different problems. the same transformation can be applied to obtain load-sharing protocols for links with different capacities. we deal with the fifo requirement for two separate cases. if a sequence number can be added to each packet, we show how to speed up packet processing by letting the receiver simulate the sender algorithm. if no header can be added, we show how to provide quasi fifo delivery. quasi fifo is fifo except during occasional periods of loss of synchronization. we argue that quasi fifo is adequate for most applications. we also describe a simple technique for speedy restoration of synchronization in the event of loss. we develop an architectural framework for transparently embedding our protocol at the network level by striping ip packets across multiple physical interfaces. the resulting stripe protocol has been implemented within the netbsd kernel. our measurements and simulations show that the protocol offers scalable throughput even when striping is done over dissimilar links, and that the protocol synchronizes quickly after packet loss. measurements show performance improvements over conventional round-robin striping schemes and striping schemes that do not resequence packets. some aspects of our solution have been implemented in cisco's router operating system (ios 11.3) in the context of multilink ppp striping. adiseshu hari george varghese guru parulkar on the communication throughput of buffered multistage interconnection networks ralf rehrmann burkhard monien reinhard luling ralf diekmann a new method for topological design in large, traffic-laden packet switched networks the an/1 network architecture employs compact lans as integrated switching nodes. the multibussed nodal architecture imposes a natural hierarchy on the network interconnection problem. a nodal interconnection method is introduced as a generalization of hierarchical topology design techniques. based on traffic and distance, a flat nodal topology is decomposed into several levels of overlapped subnetworks. each subordinate subnetwork communicates with its ordinate subnetwork via two formal gates. the traffic matrix for each subnetwork can be obtained. linear programming techniques are used to determine bandwidth requirements. perturbation procedures are applied to determine the appropriate groupings of lans into subnetworks and the appropriate hierarchy. a flat topology may be returned if it is optimal. kim-joan chen jerrold f. stach tsong-ho wu cod: alternative architectures for high speed packet switching r. l. cruz jung-tsung tsai mechanisms that enforce bounds on packet lifetimes lansing sloan performance problems in bsd 4.4 tcp this paper describes problems in the bsd 4.4-lite version of tcp (some of which are also present in earlier versions, such as the net2 implementation of tcp) and proposes fixes that result in a 21% increase in throughput under realistic conditions. lawrence s. brakmo larry l.
peterson emma-an industrial experience on large multiprocessing architectures emma (1) is a multiprocessor system designed and built by elsag of genoa, italy, to solve problems of pattern recognition arising in automatic mail sorting. the resulting architecture is modular, organizable around several levels, and offers a theoretically unlimited possibility of increasing the processor power. emma runs under a simple distributed operating system, supplying process communication, synchronization primitives and some diagnostic functions that complement analysis and reconfiguration modules residing in the supervisor. luigi stringa message delay analysis for can-based networks zhengou wang huizhu lu george e. hedrick marvin stone a fault-tolerant implementation protocol for replicated database systems on bus local area networks this paper proposes a new protocol for implementing a replicated database system on a bus local network. using the broadcast capability of the bus local network, our protocol handles database requests efficiently with a small number of messages. unlike existing schemes, our implementation protocol provides high resiliency as well as fast response time. the system continues to service the requests as long as there is at least one site available, and, even when there is no site failure, the overall performance is better than that of the primary copy approach, which is known to be the most efficient. simulation results are provided to compare the performance. junguk l. kim heon yeom methods for achieving integrated operation in a high performance optical loop inter-computer communications system masahiro kurata seishiro tsuruho takafumi isogawa hisao nakashima simultaneous multithreading: maximizing on-chip parallelism this paper examines _simultaneous multithreading_, a technique permitting several independent threads to issue instructions to a superscalar's multiple functional units in a single cycle. we present several models of simultaneous multithreading and compare them with alternative organizations: a wide superscalar, a fine-grain multithreaded processor, and single-chip, multiple-issue multiprocessing architectures. our results show that both (single-threaded) superscalar and fine-grain multithreaded architectures are limited in their ability to utilize the resources of a wide-issue processor. simultaneous multithreading has the potential to achieve 4 times the throughput of a superscalar, and double that of fine-grain multithreading. we evaluate several cache configurations made possible by this type of organization and evaluate tradeoffs between them. we also show that simultaneous multithreading is an attractive alternative to single-chip multiprocessors; simultaneous multithreaded processors with a variety of organizations outperform corresponding conventional multiprocessors with similar execution resources. while simultaneous multithreading has excellent potential to increase processor utilization, it can add substantial complexity to the design. we examine many of these complexities and evaluate alternative organizations in the design space. dean m. tullsen susan j. eggers henry m. levy analysis of a hybrid cutoff priority scheme for multiple classes of traffic in multimedia wireless networks in this paper, we propose and analyze the performance of a new handoff scheme called the hybrid cutoff priority scheme for wireless networks carrying multimedia traffic.
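the hybrid cutoff priority entry above continues in the next record; before those details, the fragment below sketches only the basic cutoff-priority idea that underlies such schemes: handoff calls may take any free channel, while new calls are admitted only while occupancy stays below a cutoff threshold. the channel count and threshold are illustrative, and the sketch ignores the paper's multiple traffic classes, buffering, and caller impatience.

    # bare-bones cutoff-priority admission: handoffs may use any idle channel,
    # new calls only while occupancy is below the cutoff. numbers are illustrative.
    class Cell:
        def __init__(self, channels=30, cutoff=26):
            self.channels, self.cutoff, self.busy = channels, cutoff, 0

        def admit(self, kind):
            limit = self.channels if kind == "handoff" else self.cutoff
            if self.busy < limit:
                self.busy += 1
                return True
            return False                # blocked (handoff) or cut off (new call)

        def release(self):
            self.busy = max(0, self.busy - 1)

    cell = Cell()
    print(cell.admit("new"), cell.admit("handoff"))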
the unique characteristics of this scheme include support for n classes of traffic, each of which may have different qos requirements in terms of the number of channels needed, the holding time of the connection, and cutoff priority. the proposed scheme can handle finite buffering for both new calls and handoffs. furthermore, we take into consideration the departure of new calls due to caller impatience and the dropping of queued handoff calls due to unavailability of channels during the handoff period. the performance indices adopted in the evaluation using the stochastic petri net (spn) model include new call and handoff blocking probabilities, call forced termination probability, and channel utilization for each type of traffic. the impact of various system parameters, such as queue length, traffic input, and the qos of different traffic, on the performance measures has also been studied. bo li samuel t. chanson chuang lin global internet roaming with roamip zoltan r. turanyi csanad szabo eszter kail a national trace collection and distribution resource computer systems are modeled before construction to minimize errors and performance bottlenecks. a common modeling approach is to build software models of computer system components, and use realistic _trace_ data as input. this methodology is commonly referred to as trace-driven simulation. trace-driven simulation can be very accurate if both the system model and input trace data represent the system under test. the accuracy of the model is typically under the control of the researcher, but so far little or no trace data has been available that accurately represents current or future workloads. to address this issue, we describe the brigham young univ. address collection hardware (bach) and introduce our national trace distribution center and trace collection facility (http://traces.byu.edu). we also illustrate the types of traces we collect and make available to others. niki c. thornock j. kelly flanagan a faster distributed algorithm for computing maximal matchings deterministically michal hanckowiak michal karonski alessandro panconesi mu6-g. a new design to achieve mainframe performance from a mini-sized computer mu6-g is a high performance machine useful for general or scientific applications. its order code and architecture are designed to be sympathetic to the needs of the operating system and to both the compilation and execution of programs written in high level languages and to support a word size suitable for high precision scientific computations. advanced technology, coupled with simplicity of design, is used to achieve a high and more readily predictable performance. innovative features include the unique organisation of the virtual memory mapping hardware and the use of a combined operand and instruction buffer-store, accessed using virtual addresses. fault diagnosis is aided by the inclusion of a microprocessor based diagnostic controller which has read/write access to all bistable devices in the machine and has control of the system clock. the paper includes a description of the various functional units and gives estimates of expected performance. d. b.g. edwards a. e. knowles j. v. woods energy-efficient selective cache invalidation jun cai kian-lee tan hardware implementation of communication protocols: a formal approach this paper presents a formal method that permits the development of verified specifications and implementations of communication protocols. a non-executable high level language is introduced for specification purposes.
this language permits the concise and unambiguous specification of those characteristics which are normally needed to be modeled in the design of protocols. a simple algorithm for verifying the protocol specifications is indicated. then it is shown how the verified specifications can be transformed into another descriptive model, a processing machine which is more directly related to hardware implementation. the processing machine description resulting from this model simplifies the last stage of protocol design, that of the realization. miguel garcía hoffmann modelling and performance evaluation of multiprocessors, organizations with multi-memory units m. naderi specification and verification of collusion-free broadcast networks for high-speed local area networks that offer integrated services for data, voice, and image traffic, a class of demand-assigned multiple-access protocols have been presented in the literature. these protocols exploit the directionality of signal propagation and enforce time constraints to achieve collision-freedom. a correct implementation of such a protocol requires a careful analysis of time-dependent interactions of event occurrences using a formal method. to date, most protocol verification methods are intended for the analysis of asynchronous communication over point-to-point channels. we present a model for broadcast bus networks. the novel features of our model include the ability to specify broadcast channels and the specification of real-time behavior. the broadcast characteristics of cables or buses are captured by some simple axioms. real time is modeled as a discrete quantity using clocks and time variables. real-time properties are specified by safety assertions. to illustrate our model and analysis method, we present a specification of the expressnet protocol. we found that to achieve collision- freedom, a small modification to the original expressnet protocol is needed. p. jain s. s. lam software implementation strategies for power-conscious systems a variety of systems with possibly embedded computing power, such as small portable robots, hand-held computers, and automated vehicles, have power supply constraints. their batteries generally last only for a few hours before being replaced or recharged. it is important that all design efforts are made to conserve power in those systems. energy consumption in a system can be reduced using a number of techniques, such as low-power electronics, architecture-level power reduction, compiler techniques, to name just a few. however, energy conservation at the application software-level has not yet been explored. in this paper, we show the impact of various software implementation techniques on energy saving. based on the observation that different instructions of a processor cost different amount of energy, we propose three energy saving strategies, namely (i) assigning live variables to registers, (ii) avoiding repetitive address computations, and (iii) minimizing memory accesses. we also study how a variety of algorithm design and implementation techniques affect energy consumption. in particular, we focus on the following aspects: (i) recursive versus iterative (with stacks and without stacks), (ii) different representations of the same algorithm, (iii) different algorithms - with identical asymptotic complexity - for the same problem, and (iv) different input representations. 
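the energy-saving entry above (continued in the next record) argues that source-level choices such as avoiding repeated address computations, minimizing memory accesses, and preferring iteration over recursion affect energy consumption. the fragment below is only a language-level analogue of two of those strategies, written in python for brevity; the paper's measurements concern processor instructions, so the function names and the analogy itself are illustrative, not the authors' code.

    # analogue of "avoid repetitive address computations / minimize memory
    # accesses": hoist a repeated lookup out of the inner loop.
    def row_sum_naive(matrix, i):
        total = 0
        for j in range(len(matrix[i])):   # matrix[i] re-evaluated every pass
            total += matrix[i][j]
        return total

    def row_sum_hoisted(matrix, i):
        row = matrix[i]                   # "address" computed once and reused
        total = 0
        for value in row:
            total += value
        return total

    # analogue of "iterative versus recursive": same result, no per-call overhead.
    def factorial_recursive(n):
        return 1 if n <= 1 else n * factorial_recursive(n - 1)

    def factorial_iterative(n):
        result = 1
        for k in range(2, n + 1):
            result *= k
        return result

    assert row_sum_naive([[1, 2, 3]], 0) == row_sum_hoisted([[1, 2, 3]], 0) == 6
    assert factorial_recursive(6) == factorial_iterative(6) == 720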
we demonstrate the energy saving capabilities of these approaches by studying a variety of applications related to power-conscious systems, such as sorting, pattern matching, matrix operations, depth-first search, and dynamic programming. from our experimental results, we conclude that by suitably choosing an algorithm for a problem and applying the energy saving techniques, energy savings in excess of 60% can be achieved. kshirasagar naik david s. l. wei pipelining and performance in the vax 8800 processor douglas w. clark location and storage management in mobile computing systems - track chair message dorota m. huizinga a stochastic model of tcp/ip with stationary random losses we present a technique for identifying repetitive information transfers and use it to analyze the redundancy of network traffic. our insight is that dynamic content, streaming media and other traffic that is not caught by today's web caches is nonetheless likely to derive from similar information. we have therefore adapted similarity detection techniques to the problem of designing a system to eliminate redundant transfers. we identify repeated byte ranges between packets to avoid retransmitting the redundant data. eitan altman konstantin avrachenkov chadi barakat efficient wire formats for high performance computing high performance computing is being increasingly utilized in non-traditional circumstances where it must interoperate with otherapplications. for example, online visualization is being used to monitor the progress of applications, and real-world sensors are used as inputs to simulations. whenever these situations arise, there is a question of what communications infrastructure should be used to link the different components. traditional hpc-style communications systems such as mpi offer relatively high performance, but are poorly suited for developing these less tightly-coupled cooperating applications. object-based systems and meta-data formats like xml offer substantial plug-and-play flexibility, but with substantially lower performance. we observe that the flexibility and baseline performance of all these systems is strongly determined by their "wire format," or how they represent data for transmission in a heterogeneous environment. we examine the performance implications of different wire formats and present an alternative with significant advantages in terms of both performance and flexibility. fabian e. bustamante greg eisenhauer karsten schwan patrick widener computation of x nabil rousan hani abu-salem application performance improvement on the ipsc/2 computer the performance of concurrent computers depends fundamentally on the capabilities of the individual processing nodes and the characteristics of the interprocessor communication system. the intel ipsc®/2 is significantly better in both categories than the original ipsc (ipsc/1). this paper will briefly compare the hardware of the two machines and then discuss the actual measured performance improvement of several kernels of application codes. the improvements vary from a factor of two and a half to more than a factor of 30. s. arshi r. asbury j. brandenburg d. scott architecture of a massively parallel processor the massively parallel processor (mpp) system is designed to process satellite imagery at high rates. a large number (16,384) of processing elements (pe's) are configured in a square array. for optimum performance on operands of arbitrary length, processing is performed in a bit-serial manner. 
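the mpp entry above notes that, for operands of arbitrary length, each processing element works in a bit-serial manner. purely as an illustration of what bit-serial arithmetic means, the fragment below adds two small integers one bit per step through a single full adder; it says nothing about the mpp's actual circuitry, and the word width is an arbitrary choice.

    def bit_serial_add(a, b, width=8):
        """add two non-negative integers one bit per step (least significant
        first), the way a bit-serial processing element would, using one full adder."""
        carry = 0
        result = 0
        for i in range(width + 1):          # one extra step to flush the carry
            x = (a >> i) & 1
            y = (b >> i) & 1
            s = x ^ y ^ carry               # sum bit of a full adder
            carry = (x & y) | (carry & (x ^ y))
            result |= s << i
        return result

    assert bit_serial_add(100, 27) == 127
    assert bit_serial_add(200, 55) == 255

with thousands of such simple elements stepping in lockstep, aggregate throughput comes from the width of the array rather than from the speed of any single addition, which is one way to read the rates quoted in the continuation of this entry.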
on 8-bit integer data, addition can occur at 6553 million operations per second (mops) and multiplication at 1861 mops. on 32-bit floating-point data, addition can occur at 430 mops and multiplication at 216 mops. kenneth e. batcher separation principle of dynamic transmission and enqueueing priorities for real- and nonreal-time traffic in atm multiplexers chun-chong huang alberto leon-garcia personal computers in the corporate environment (panel) a microcomputer differs from a mainframe or a minicomputer in size, cost and power. typically, micros are smaller, cost less and do not have the throughput of a mainframe. however, with today's technological improvements, microcomputers have surpassed the mainframes of yesteryear. local processing and small data bases have become usual applications. a fault tolerant, bit-parallel, cellular array processor steven g. morton a retrospective on the dorado, a high-performance personal computer in late 1975, members of the xerox palo alto research center embarked on the specification of a high-performance successor to the alto personal minicomputer, in use since 1973. after four years, the resulting machine, called the dorado, was in use within the research community at parc. this paper begins with an overview of the design goals, architecture, and implementation of the dorado and then provides a retrospective view and critique of the dorado project as a whole. the major machine architectural features are evaluated, other project aspects such as design automation and management structures are explained, a chronological history with milestones is included, and a variety of accomplishments, red herrings, and shortfalls is discussed. the paper concludes with some speculations on what the project might have done differently and what might be done differently today instead of in the late 1970s. although more than a dozen scientists and technicians contributed to the project, the evaluative and speculative parts of this paper are the sole responsibility of the author. kenneth a. pier bistro: a framework for building scalable wide-area upload applications samrat bhattacharjee william c. cheng cheng-fu chou leana golubchik samir khuller switcherland: a qos communication architecture for workstation clusters computer systems have become powerful enough to process continuous data streams such as video or animated graphics. while processing power and communication bandwidth of today's systems typically are sufficient, quality of service (qos) guarantees as required for handling such data types cannot be provided by these systems in adequate ways.we present switcherland, a scalable communication architecture based on crossbar switches that provides qos guarantees for workstation clusters in the form of reserved bandwidth and bounded transmission delays. similar to the atm technology switcherland provides qos guarantees with the help of service classes, that is, data transfers are characterized as variable bit rate traffic or constant bit rate traffic. however, unlike lan technologies, switcherland is optimized for cluster computing in that (i) it serves as a backplane interconnection fabric as well as a lan, (ii) it extends support for service classes by also covering the end nodes of the network, (iii) it provides low latency in the order of one microsecond per switch, and (iv) it uses a communication model based on a global memory to simplify programming. 
hans eberle erwin oertli a new atm adaptation layer for tcp/ip over wireless atm networks this paper describes the design and performance of a new atm adaptation layer protocol aal - t for improving tcp performance over wireless atm networks. the wireless links are characterized by higher error rates and burstier error patterns in comparison with the fiber links for which atm was introduced in the beginning. since the low performance of tcp over wireless atm networks is mainly due to the fact that tcp always responds to all packet losses by congestion control, the key idea in the design is to push the error control portion of tcp to the aal layer so that tcp is only responsible for congestion control. the aal - t is based on a novel and reliable arq mechanism to support quality - critical tcp traffic over wireless atm networks. the proposed aal protocol has been validated using the opnet tool with the simulated wireless atm network. the simulation results show that the aal - t provides higher throughput for tcp over wireless atm networks compared to the existing approach of tcp with aal 5. ian f. akyildiz inwhee joe the iso reference model of open systems interconnection: a first tutorial during the 1970's computing facilities which had previously provided stand- alone processing were linked into a variety of networks under quite different geographical, technical, financial and application constraints. the effect of these independent developments was that similar services were often provided by quite different mechanisms, and interconnection became extremely difficult. late in the 1970's a strong desire for a standard means of internetwork communication arose. it was realized that a common model of distributed data communications would be required to facilitate the development of these standards. intensive activity was initiated by several national and international standards organizations, including the international organization for standardization (iso). in 1978 the "reference model of open systems interconnection" [8] was produced, and this led to the 1981 draft international standard 7498 "data processing. open systems interconnection - basic reference model" [2]. the documents assume an environment of systems which make themselves "open" to the exchange of data with other systems by adhering to a well-defined set of standard procedures for communication. the documents scope includes not only the basic transfer of information between such systems, but also the capability of such systems to work together in support of distributed applications. leslie jill miller how to move data in the wrong direction charles curley evaluation of the wm architecture this report describes the results of studies of the wm architecture---its performance, the values of some of its key architectural parameters, the difficulty of compiling for it, and hardware implementation complexity. the studies confirm that, with comparable chip area and without heroic compiler technology, wm is capable of outperforming traditional scalar architectures by factors of 2-9. they also underscore the need to devise higher bandwidth memory systems. wm. a. 
wulf pipelining and superscalar processors eugene styer dynamic adaptive windows for high speed data networks: theory and simulations recent results on the asymptotically optimal design of sliding windows for virtual circuits in high speed, geographically dispersed data networks in a stationary environment are exploited here in the synthesis of algorithms for adapting windows in realistic, non-stationary environments. the algorithms proposed here require each virtual circuit's source to measure the round trip response times of its packets and to use these measurements to dynamically adjust its window. our design philosophy is quasi-stationary: we first obtain, for a complete range of parameterized stationary conditions, the relation, called the "design equation", that exists between the window and the mean response time in asymptotically optimal designs; the adaptation algorithm is simply an iterative algorithm for tracking the root of the design equation as conditions change in a non- stationary environment. a report is given of extensive simulations of networks with data rates of 45 mbps and propagation delays of up to 47 msecs. the simulations generally confirm that the realizations of the adaptive algorithms give stable, efficient performance and are close to theoretical expectations when these exist. d. mitra j. b. seery a generalized processor sharing approach to flow control in integrated services networks: the single-node case abhay k. parekh robert g. gallager an overview of the centre for telecommunications research at king's college, london, england hamid aghwami dilshan weerakoon a binary feedback scheme for congestion avoidance in computer networks with a connectionless network layer we propose a scheme for congestion avoidance in networks using a connectionless protocol at the network layer. the scheme uses feedback from the network to the users of the network. the interesting challenge for the scheme is to use a minimal amount of feedback (one bit in each packet) from the network to adjust the amount of traffic allowed into the network. the servers in the network detect congestion and set a congestion indication bit on packets flowing in the forward direction. the congestion indication is communicated back to the users through the transport level acknowledgement. the scheme is distributed, adapts to the dynamic state of the networks, converges to the optimal operating point, is quite simple to implement, and has low overhead while operational. the scheme also addresses a very important aspect of fairness in the service provided to the various sources utilizing the network. the scheme attempts to maintain fairness in service provided to multiple sources. this paper presents the scheme and the analysis that went into the choice of the various decision mechanisms. we also address the performance of the scheme under transient changes in the network and for pathological conditions. k. k. ramakrishnan r. jain the sound of silence - guessing games for saving energy in mobile environment shlomi dolev ephraim korach dmitry yukelson books michele tepper scoped hybrid automatic repeat request with forward error correction (sharqfec) reliable multicast protocols scale only as well as their ability to localize traffic. this is true for repair requests, repairs, and the session traffic that enables receivers to suppress extraneous requests and repairs. 
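the binary feedback entry above (ramakrishnan and jain) describes routers setting a one-bit congestion indication on forward packets, receivers echoing it in acknowledgements, and sources adjusting their windows accordingly. the fragment below sketches a window adjustment driven by the fraction of marked acknowledgements; the 50% threshold, the additive increase of one, and the 0.875 decrease factor are commonly cited illustrative constants, not figures taken from the entry itself.

    # sketch: adjust a congestion window once per window of acknowledgements,
    # based on the fraction that carried the echoed congestion bit.
    def adjust_window(window, echoed_bits):
        """echoed_bits: 0/1 congestion indications from the last window of acks."""
        if not echoed_bits:
            return window
        marked = sum(echoed_bits) / len(echoed_bits)
        if marked >= 0.5:
            return max(1.0, window * 0.875)   # multiplicative decrease
        return window + 1.0                   # additive increase

    w = 8.0
    w = adjust_window(w, [0, 0, 1, 0, 0, 0, 0, 0])      # mostly unmarked: grow to 9
    w = adjust_window(w, [1, 1, 1, 0, 1, 1, 1, 1, 1])   # congested: shrink toward 7.875
    print(w)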
we propose a new reliable multicast traffic localization technique called scoped hybrid automatic repeat request with forward error correction (sharqfec). sharqfec operates in an end-to-end fashion and localizes traffic using a hierarchy of administratively scoped regions. session traffic is further reduced through the use of a novel method for indirectly determining the distances between session members. for large sessions, this mechanism reduces the amount of session traffic by several orders of magnitude over non-scoped protocols such as scalable reliable multicast (srm). forward error correction is selectively added to regions which are experiencing greater loss, thereby reducing the volume of repair traffic and recovery times. receivers request additional repairs as necessary. simulations show that sharqfec outperforms both srm and non-scoped hybrid automatic repeat request / forward error correction protocols. assuming the widespread deployment of administrative scoping, sharqfec could conceivably provide scalable reliable delivery to tens of millions of receivers without huge increases in network bandwidth. roger g. kermode performance analysis of fddi token ring networks: effect of parameters and guidelines for setting ttrt fiber-distributed data interface (fddi) is a 100-mbps local area network (lan) standard being developed by the american national standards institute (ansi). it uses a timed-token access method and allows up to 500 stations to be connected with a total fiber length of 200 km. we analyze the performance of fddi using a simple analytical model and a simulation model. the performance metrics of response time, efficiency, and maximum access delay are considered. the efficiency is defined as the ratio of maximum obtainable throughput to the nominal bandwidth of the network. the access delay is defined as the time it takes to receive a usable token. the performance of fddi depends upon several workload parameters, for example, the arrival pattern and frame size, and on configuration parameters such as the number of stations on the ring, extent of the ring, and number of stations that are waiting to transmit. in addition, the performance is affected by a parameter called the target token rotation time (ttrt), which can be controlled by the network manager. we considered the effect of ttrt on various performance metrics for different ring configurations, and concluded that a ttrt value of 8 ms provides good performance over a wide range of configurations and workloads. r. jain the impact of multicast layering on network fairness many definitions of fairness for multicast networks assume that sessions are single-rate, requiring that each multicast session transmits data to all of its receivers at the same rate. these definitions do not account for multi-rate approaches, such as layering, that permit receiving rates within a session to be chosen independently. we identify four desirable fairness properties for multicast networks, derived from properties that hold within the max-min fair allocations of unicast networks. we extend the definition of multicast max-min fairness to networks that contain multi-rate sessions, and show that all four fairness properties hold in a multi-rate max-min fair allocation, but need not hold in a single-rate max-min fair allocation. we then show that multi-rate max-min fair rate allocations can be achieved via intra-session coordinated joins and leaves of multicast groups.
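the fairness entry above (which continues in the next record) reasons about max-min fair allocations. one standard way to compute a max-min fair allocation over capacitated links is progressive filling: repeatedly find the most constrained link, freeze the flows crossing it at that link's equal share, and continue. the sketch below handles only the plain single-rate, per-flow case; treating each multi-rate multicast receiver as its own flow would be a simplification of mine, not the paper's construction.

    # progressive filling: compute a max-min fair allocation for flows over
    # capacitated links (single-rate, per-flow case; illustrative only).
    def max_min_fair(link_capacity, flow_paths):
        """link_capacity: {link: capacity}; flow_paths: {flow: set of links}."""
        rate, remaining, active = {}, dict(link_capacity), dict(flow_paths)
        while active:
            share = {}
            for link, cap in remaining.items():
                users = [f for f, path in active.items() if link in path]
                if users:
                    share[link] = cap / len(users)
            if not share:
                break
            bottleneck = min(share, key=share.get)
            level = share[bottleneck]
            for f in [f for f, path in active.items() if bottleneck in path]:
                rate[f] = level                  # freeze flows at the bottleneck share
                for link in active[f]:
                    remaining[link] -= level
                del active[f]
        return rate

    caps = {"l1": 10.0, "l2": 6.0}
    paths = {"a": {"l1"}, "b": {"l1", "l2"}, "c": {"l2"}}
    print(max_min_fair(caps, paths))   # {'b': 3.0, 'c': 3.0, 'a': 7.0}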
however, in the absence of coordination, the resulting max-min fair rate allocation uses link bandwidth inefficiently, and does not exhibit some of the desirable fairness properties. we evaluate this inefficiency for several layered multi-rate congestion control schemes, and find that, in a protocol where the sender coordinates joins, this inefficiency has minimal impact on desirable fairness properties. our results indicate that sender-coordinated layered protocols show promise for achieving desirable fairness properties for allocations in large-scale multicast networks. dan rubenstein jim kurose don towsley an enhanced tcp mechanism - fast-tcp in ip networks with wireless links jian ma jussi ruutu jing wu mu6v: a parallel vector processing system r. n. ibbett p. c. capon n. p. topham personal computer software support the issues and information presented here will be useful to all current and potential users of personal computers (pc's). much of the presentation is focussed on the proper role of the mis department. data is based on a study of how a cross-section of medium and large organizations are dealing with pc software support issues. the best observed practices are described as well as recommendations for further improvements. thomas o'flaherty risks of comparing riscs douglas w. jones the use of broadcast techniques on the universe network the universe network is being used to explore broadcasting techniques in a highly integrated computer communications experiment connecting local area networks via a broadcast satellite channel. the broadcast protocol which has been devised within the project provides a versatile framework in which different types of broadcast applications can be carried out. the requirements and definition of this protocol are described, and details are given of how the broadcast satellite channel is used and the implementation of the protocols in the system which provides satellite access from each site. experimental use of the protocol is also described, in particular its use for file distribution. the paper also aims to show how the protocols could be used for more advanced applications. a. gillian waters christopher j. adams ian m. leslie roger m. needham an analysis of language models for high-performance communication in local- area networks in this paper we present an empirical analysis of language models for communication in distributed systems. we consider a computing environment in which a high-level, distributed programming language kernel is sufficient support for high-performance programming applications. we propose programming language support for such an environment and present the performance results of an implementation. using the distributed programming language starmod as a context, we describe language constructs for message-based communication (including broadcast), remote invocation, and remote memory references. each form of communication is integrated into starmod in a consistent fashion maintaining the properties of transparency, modularity, and full parameter functionality. the costs and benefits associated with the various models of communication are analyzed based on the results of an implementation which runs on 8 pdp 11/23; microprocessors connected by a 1 megabit/second network. thomas j. leblanc robert p. cook the effects of asymmetry on tcp performance hari balakrishnan venkata n. padmanabhan randy h. 
katz forward error control for mpeg-2 video transport in a wireless atm lan the possibility of providing multimedia services to mobile users has led to interest in designing broadband wireless networks that can guarantee quality of service for traffic flows. however, a fundamental problem in these networks is that severe losses may occur due to the random fading characteristics of the wireless channel. error control algorithms which compensate for these losses are required in order to achieve reasonable loss rates. in this paper, the performance of error control based on forward error correction (fec) for mpeg-2 video transmission in an indoor wireless atm lan is studied. a random bit error model and a multipath fading model are used to investigate the effect of errors on video transport. combined source and channel coding techniques that employ single-layer and scalable mpeg-2 coding to combat channel errors are compared. simulation results indicate that fec-based error control in combination with 2-layer video coding techniques can lead to acceptable quality for indoor wireless atm video. ender ayanoglu pramod pancha amy r. reibman shilpa talwar smooth migration from the gsm system to umts for multimedia services the third generation of mobile systems is now entering the operational phase; the european community acts (race follow-on) programme aims at finalizing the many solutions resulting from the european community race programme as well as from several other studies and research efforts. european manufacturers, also deeply involved in the acts programme, seem to show a preference for solutions which gradually upgrade the present successful pan-european gsm standard. the underlying concept is that of a smooth migration from the gsm network to the third generation system, in order to reuse, at least in the first phases of the transition, most of the existing technologies and infrastructures already implemented for the gsm network. in this respect, this paper, by referring to radio interface aspects, proposes a two-step evolution: in the first step, a dynamic channel allocation (dca) strategy with distributed control should be implemented for coping with the high variance of traffic entailed by the reduction of cell dimensions; in the second step, a gradual upgrading of the gsm base stations should allow a smooth transition towards the third generation packet reservation multiple access (prma) technique and the provision of broadband services. the paper is partly based upon the work performed by the author in the framework of the race project "satellite integration in the future mobile network (saint)" and of the european community acts projects median. the opinions herewith reported are not necessarily those of the european community. francesco delli priscoli a data request scheduling policy for pcs networks this paper proposes a scheduling policy called latest preempted, earliest resumed (lper) for personal communications services (pcs) systems that support voice services as well as circuit-mode data services. the policy gives priority to voice services over data services. the scheduling of voice requests is performed as if no data requests exist. thus, if no idle channel is available when a voice request arrives, a data channel (if one exists) is interrupted and the channel will be used for the voice request. the interrupted data request will be resumed when an idle channel is available.
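the lper entry above, continued in the next record, selects which data request to interrupt and which to resume by arrival time: the latest-arrived data request is preempted and the earliest-arrived waiting request is resumed. the fragment below is a much-simplified single-cell sketch of that rule, assuming one channel per request and omitting data-call completions; the class and method names are invented.

    # simplified latest-preempted, earliest-resumed (lper) bookkeeping.
    import bisect

    class LperCell:
        def __init__(self, channels):
            self.idle = channels
            self.serving = []   # arrival times of data requests holding a channel (sorted)
            self.waiting = []   # arrival times of preempted or queued data requests (sorted)

        def voice_arrival(self):
            if self.idle > 0:
                self.idle -= 1
                return "accepted"
            if self.serving:                        # preempt the latest data request
                bisect.insort(self.waiting, self.serving.pop())
                return "accepted, data preempted"
            return "blocked"

        def data_arrival(self, t):
            if self.idle > 0:
                self.idle -= 1
                bisect.insort(self.serving, t)
            else:
                bisect.insort(self.waiting, t)

        def channel_release(self):                  # a voice or data call finished
            if self.waiting:                        # resume the earliest waiting request
                bisect.insort(self.serving, self.waiting.pop(0))
            else:
                self.idle += 1

a full model would also remove a completed data request's entry from the serving list; it is omitted here to keep the sketch short.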
in lper, the data channel selected for interruption is the one that serves the latest outstanding data request (i.e., other outstanding requests enter the system earlier than this request). when an occupied channel is released, lper resumes the earliest request that entered the system. an analytic model is proposed to study the performance of lper and provides guidelines to select the input parameters for the pcs systems. wen-nung tsai yi-bing lin a taxonomy of issues in name systems design and implementation a. k. yeo a. l. ananda e. k. koh a simulation study of forward error correction in atm networks ernst w. biersack the ipsc/2 direct-connect communications technology this paper describes the hardware architecture and protocol of the message routing system used in the ipsc®/2 concurrent computer. the direct-connect router was developed by intel scientific computers to replace the store-and- forward message passing mechanism used in the original ipsc system. the router enhances the performance of the ipsc/2 system by reducing the message passing latency, increasing the node-to-node channel bandwidth and allowing simultaneous bi-directional message traffic between any two nodes. the new communication system has nearly equal performance between any pair of processing nodes, making the network topology more transparent to the user. the direct-connect router is a specialized self- contained hardware module attached to each hypercube node. the router is implemented in cmos programmable gate-arrays with advanced cmos buffering. routers are connected by full-duplex bit-serial channels to form a boolean n-cube network. the router also provides a high performance interface between the node memory bus and the network. s. f. nugent an architectural approach to minimizing feature interactions in telecommunications israel zibman carl woolf peter o'reilly larry strickland david willis john visser efficient at-most-once messages based on synchronized clocks this paper describes a new message passing protocol that provides guaranteed detection of duplicate messages even when the receiver has no state stored for the sender. it also discusses how to use these messages to implement higher- level primitives such as at-most-once remote procedure calls and sequenced bytestream protocols, and describes an implementation of at-most-once rpcs using our method. our performance measurements indicate that at-most-once rpcs can be provided at the same cost as less desirable rpcs that do not guarantee at-most-once execution. our method is based on the assumption that clocks throughout the system are loosely synchronized. modern clock synchronization protocols provide good bounds on clock skew with high probability; our method depends on the bound for performance but not for correctness. b. liskov l. shrira j. wroclawski limits on multiple instruction issue this paper investigates the limitations on designing a processor which can sustain an execution rate of greater than one instruction per cycle on highly- optimized, non-scientific applications. we have used trace-driven simulations to determine that these applications contain enough instruction independence to sustain an instruction rate of about two instructions per cycle. in a straightforward implementation, cost considerations argue strongly against decoding more than two instructions in one cycle. 
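returning to the synchronized-clocks entry above (liskov, shrira and wroclawski): the core idea usually associated with it is that a receiver can reject duplicates without long-lived per-sender connection state by remembering only recent timestamps and relying on loosely synchronized clocks. the fragment below is a minimal acceptance test in that spirit; the window length, the table, and the function name are illustrative and are not the paper's protocol.

    # minimal at-most-once acceptance test based on loosely synchronized clocks.
    import time

    WINDOW = 10.0        # seconds of history kept; must exceed clock skew plus delay bound
    last_accepted = {}   # sender id -> latest accepted message timestamp

    def accept(sender, timestamp, now=None):
        now = time.time() if now is None else now
        if timestamp < now - WINDOW:
            return False                                   # too old: may be a forgotten duplicate
        if timestamp <= last_accepted.get(sender, float("-inf")):
            return False                                   # duplicate or stale
        last_accepted[sender] = timestamp
        return True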
given this constraint, the efficiency in instruction fetching rather than the complexity of the execution hardware limits the concurrency attainable at the instruction level. m. d. smith m. johnson m. a. horowitz a dynamic scheduling logic for exploiting multiple functional units in single chip multithreaded architectures prasad n. golla eric c. lin performance issues in correlated branch prediction schemes nicolas gloy michael d. smith cliff young a new congestion control scheme: slow start and search (tri-s) zheng wang jon crowcroft locating nearby copies of replicated internet servers in this paper we consider the problem of choosing among a collection of replicated servers, focusing on the question of how to make choices that segregate client/server traffic according to network topology. we explore the cost and effectiveness of a variety of approaches, ranging from those requiring routing layer support (e.g., anycast) to those that build location databases using application-level probe tools like traceroute. we uncover a number of tradeoffs between effectiveness, network cost, ease of deployment, and portability across different types of networks. we performed our experiments using a simulation parameterized by a topology collected from 7 survey sites across the united states, exploring a global collection of network time protocol servers. james d. guyton michael f. schwartz distributed communicating media-a multitrack bus-capable of concurrent data exchanging we propose a multitrack bus architecture: a novel interconnection scheme of functional units in a multiprocessor system. it features concurrent data transfer among functional units and flexible bus structure. we have clarified the possibility of more mass transfer of data in comparison with conventional interconnection schemes. makoto hasegawa tadao nakamura yoshiharu shigei ip next generation overview robert m. hinden rate controls in standard transport layer protocols charles a. eldridge the token grid network terence d. todd replicated invocations in wide-area systems arno bakker maarten van steen andrew s. tanenbaum hierarchical performance modeling with macs: a case study of the convex c-240 the macs performance model introduced here can be applied to a machine and application of interest, the compiler-generated workload, and the scheduling of the workload by the compiler. the ma, mac, and macs bounds each fix the named subset of m, a, c, and s while freeing the bound from the constraints imposed by the others. a/x performance measurement is used to measure access- only and execute-only code performance. such hierarchical performance modeling exposes the gaps between the various bounds, the a/x measurements, and the actual performance, thereby focusing performance optimization at the appropriate levels in a systematic and goal-directed manner. a simple, but detailed, case study of the convex c-240 vector mini- supercomputer illustrates the method. eric l. boyd edward s. davidson a multi-broker architecture for sharing information amongst diverse user communities peter mott stuart roberts editorial luigi fratta biswanath mukjerjee on the interconnection of causal memory systems a large amount of work has been invested in devising algorithms to implement distributed shared memory (dsm) systems under different consistency models. however, to our knowledge, the possibility of interconnecting dsm systems with simple protocols and the consistency of the resulting system has never been studied. 
with this paper, we start a series of works on the properties of the interconnection of dsm systems, which try to fill this void. in this paper, we look at the interconnection of propagation-based causal dsm systems. we present extremely simple algorithms to interconnect two such systems (possibly implemented with different algorithms) that only require the existence of a bidirectional reliable fifo channel connecting one process from each system. we show that the resulting dsm system is also causal. this result can be generalized to interconnect any number of dsm propagation-based causal systems. antonio fernández ernesto jimenez vicent cholvi highly concurrent scalar processing high speed scalar processing is an essential characteristic of high performance general purpose computer systems. highly concurrent execution of scalar code is difficult due to data dependencies and conditional branches. this paper proposes an architectural concept called guarded instructions to reduce the penalty of conditional branches in deeply pipelined processors. a code generation heuristic, the decision tree scheduling technique, reorders instructions in a complex of basic blocks so as to make efficient use of guarded instructions. performance evaluations of several benchmarks are presented, including a module from the unix kernel. even with these difficult scalar code examples, a speedup of two is achievable by using conventional pipelined uniprocessors augmented by guarded instructions, and a speedup of three or more can be achieved using processors with parallel instruction pipelines. p. y t hsu e. s. davidson complete characterization of adversaries tolerable in secure multi-party computation (extended abstract) martin hirt ueli maurer video over tcp with receiver-based delay control unicasting video streams over tcp connections is a challenging problem because video sources cannot normally adapt to delay and throughput variations of tcp connections. this paper points out a direction in which tcp can be modified such that tcp connections can carry hierarchically-encoded layered video streams well, while being friendly to other competing flows. the method is called receiver-based delay control (rdc). under rdc, a tcp connection can slow down its transmission rate to avoid congestion by delaying ack packet generation at the tcp receiver based on notifications from routers. the paper presents the principle behind rdc, argues that it is tcp-friendly, describes an implementation that uses 1-bit congestion notification from routers, and gives our simulation results. pai-hsiang hsiao h. t. kung koan-sin tan dynamic bandwidth management of primary rate isdn to support atm access one of the principal advantages of asynchronous transfer mode (atm) access is that variable rate streams can be supported efficiently. in order to preserve this efficiency when using atm on an underlying circuit switched network, such as primary rate isdn, flexible control of the underlying circuit bandwidth must be provided. this paper discusses techniques used to provide dynamic management of variable rate circuits to support atm access. models are presented for some of these control schemes and their performance is studied through a combination of simulation and analysis. techniques implemented for an experimental network providing atm access on primary rate isdn are presented. b. r. harita i. m.
leslie slipstream processors karthik sundaramoorthy zach purser eric rotenberg a better-than-token protocol with bounded packet delay time for ethernet-type lan's p. gburzynski p. rudnicki multiuser microcomputer systems microcomputers are small, efficient, powerful and inexpensive. they have found many uses in commerce and industry as well as for education and enlightenment. they are so inexpensive that it is possible in some firms to put one on everybody's desk. the computers are truly personal. personal stand-alone computers are fine for doing independent individual work. when it comes to sharing the work of others, we need an integrated system. information created by one person should be easily available to any other person. this is a common policy in education, commerce, industry and engineering where the medium of exchange is the mainframe. terminals allow users access to large quantities of information, as long as they are privileged to do so. ivan flores hci and the 3c convergence we attempt to explore the implications of the "3c convergence" (convergence of content, computers and communications) on the hci research agenda. the interaction between humans and the 3c's is becoming a daily staple as we continue to witness powerful computers unite in a networked environment where wireline and wireless communications enjoy broadening bandwidths that permit the transmission of multimedia content. we draw illustrative examples from the hong kong environment related to the 3c convergence, and we present issues which we believe to be important for the research and development of a universal, useful and usable human-computer interface. helen m. meng the butterfly satellite imp for the wideband packet satellite network multiprocessor computer systems have proven effective as high performance switching nodes in packet switched data communications networks. they are well suited to performing the required queuing, routing, and scheduling tasks, and can scale upward to provide higher system throughput when combined with software that exploits the parallelism provided by the hardware. this paper describes the packet switch used in the darpa wideband packet satellite network and the butterfly multiprocessor on which it is based. w edmond s bumenthal a echenique s storch t calderwood an analysis of short-term fairness in wireless media access protocols (poster session) can emre koksal hisham kassab hari balakrishnan an improved conditional branching scheme for a single instruction computer architecture p. a. laplante interval diagram techniques for symbolic model checking of petri nets karsten strehl lothar thiele on refuting the creation theory of computer architecture there is an enormous body of software implemented on the common cpu architectures of today, such as intel 8080, dec pdp-11, and ibm system/370. for computer vendors in a profit-seeking environment, future products must not only support this existing software, but also exploit hardware and software technologies that are being, and will be, developed. the computer architect must balance the changes made to support new software with the requirement for cost- effective execution of old software. this problem is independent of the execution speed of the processor and the size of the system of which it is a part. the process of striking a balance between advancement and compatibility will be presented. strengths and weaknesses of various types of solutions will be described using examples from commercial vendors. 
reasons why certain well-known concepts in computer architecture have not been included in the products of commercial vendors will be discussed. the impact of management-level personnel on changes in computer architecture will also be explored. future computer architectures must be engineered to meet the needs and concepts of those who use and control computers. therefore, the process of their development within the industry must necessarily occur in orderly and constrained steps, rather than random leaps. kenneth e. mackenzie analysis of cray-1s architecture an analysis of the cray-1s architecture based on dataflow graphs is presented. the approach consists of representing the components of a cray-1s system as the nodes of a dataflow graph and the interconnections between the components as the arcs of the dataflow graph. the elapsed time and the resources used in a component are represented by the attributes of the node corresponding to the component. the resulting dataflow graph model is simulated to obtain timing statistics using as input a control stream that represents the instruction and data stream of the real computer system. the cray-1s architecture is analyzed by conducting several experiments with the model. it is observed that the architecture is a well balanced one and performance improvements are hard to achieve without major changes. significant improvement in performance is shown when parallel instruction issue is allowed with multiple cip/lips in the architecture. vason p. srini jorge f. asenjo scalable feedback for large groups jörg nonnenmacher ernst w. biersack design and performance evaluation of a new medium access control protocol for local wireless data communications dong guen jeong chong-ho choi wha sook jeon a modified distributed call admission control scheme and its performance shengming jiang bo li xiayuan luo danny h. k. tsang resource sharing in synchronous optical hypergraph p. mckinley y. ofek context-aware mobile telephony albrecht schmidt hans w. gellersen a template matching algorithm using optically-connected 3-d vlsi architecture three-dimensional vlsi (in short, 3-d vlsi) is a new device technology that is expected to realize high performance systems. in this paper, we propose an image processing architecture based on 3-d vlsi consisting of optically-connected layers. since the optical inter-layer connection seems to have some useful functions due to the isotropic radiation of light, we algebraically formulate them as picture processing operators. moreover, we show that the operators are useful for applications such as template matching. the applicability of the proposed template matching algorithm is verified by simulation. s. fujita r. aibara m. yamashita t. ae broadcast with partial knowledge (preliminary version) baruch awerbuch israel cidon shay kutten yishay mansour david peleg organization of invalidation reports for energy-efficient cache invalidation in mobile environments in a wireless environment, mobile clients often cache frequently accessed data to reduce contention on the limited wireless bandwidth. however, it is difficult for clients to ascertain the validity of their cache content because of their frequent disconnection. one promising cache invalidation approach is the bit-sequences scheme that organizes invalidation reports as a set of binary bit sequences with an associated set of timestamps. the report is periodically broadcast by the server to clients listening to the communication channel.
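the invalidation-report entry above continues in the next record. as a rough illustration of how a client uses a timestamped, periodically broadcast report, the fragment below checks a cache against a flat {item: update-time} report; the real bit-sequences scheme encodes this information far more compactly as bit sequences with associated timestamps, so the report layout and field names here are invented simplifications.

    # simplified client-side use of a broadcast invalidation report.
    def apply_report(cache, last_sync, report):
        """cache: {item: value}; report: {'window_start': t0,
        'broadcast_time': t, 'updates': {item: update_time}}."""
        if last_sync < report["window_start"]:
            return {}, report["broadcast_time"]    # disconnected too long: drop the whole cache
        for item, t in list(report["updates"].items()):
            if t > last_sync and item in cache:
                del cache[item]                    # invalidate the stale entry
        return cache, report["broadcast_time"]

    cache = {"x": 1, "y": 2}
    report = {"window_start": 0.0, "broadcast_time": 20.0, "updates": {"x": 15.0}}
    cache, synced = apply_report(cache, last_sync=10.0, report=report)
    print(cache, synced)    # {'y': 2} 20.0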
while the approach has been shown to be effective, it is not energy efficient as clients are expected to examine the entire invalidation report. in this paper, we examine the bit-sequences method and study different organizations of the invalidation report to facilitate clients to selectively tune to the portion of the report that are of interest to them. this allows the clients to minimize the power consumption when invalidating their cache content. we conducted extensive studies based on a simulation model. our study shows that, compared to the bit-sequences approach, the proposed schemes are not only equally effectively in salvaging the cache content but are more efficient in energy utilization. kian-lee tan large-grain pipelining on hypercube multiprocessors a new paradigm, called large-grain pipelining, for developing efficient parallel algorithms on distributed-memory multiprocessors, e.g., hypercube machines, is introduced. large-grain pipelining attempts to maximize the degree of overlapping and minimize the effect of communication overhead in a multiprocessor system through macro-pipelining between the nodes. algorithms developed through large-grain pipelining to perform matrix multiplication are presented. to model the pipelined computations, an analytic model is introduced, which takes into account both underlying architecture and algorithm behavior. through the analytic model, important design parameters, such as data partition sizes, can be determined. experiments were conducted on a 64-node ncube multiprocessor. the measured results match closely with the analyzed results, which establishes the analytic model as an integral part of algorithm design. comparison with an algorithm which does not use large-grain pipelining also shows that large-grain pipelining is an efficient scheme for achieving a greater parallelism. c-t. king l. m. ni t series hypercube corporate floating point systems analysis of nonblocking atm switches with multiple input queues ge nong jogesh k. muppala mounir hamdi survivable load sharing protocols: a simulation study the development of robust, survivable wireless access networks requires that the performance of network architectures and protocols be studied under normal as well as faulty conditions where consideration is given to faults occurring within the network as well as within the physical environment. user location, mobility, and usage patterns and the quality of the received radio signal are impacted by terrain, man-made structures, population distribution, and the existing transportation system. the work presented herein has two thrusts. one, we propose the use of overlapping coverage areas and dynamic load balancing as a means to increase network survivability by providing mobiles with multiple access points to the fixed infrastructure. two, we describe our simulation approach to survivability analysis which combines empirical spatial information, network models, and fault models for more realistic analysis of real service areas. we use our simulation approach to compare the survivability of our load balancing protocols to a reference scheme within two diverse geographic regions. we view survivability as a cost-performance tradeoff using handover activity as a cost metric and blocking probabilities as performance metrics. our results illustrate this tradeoff for the protocols studied and demonstrate the extent to which the physical environment and faults therein affect the conclusions that are drawn. t. a. dahlberg j. 
jung dynamic voting for consistent primary components danny dolev idit keidar esti yeger lotem fast temporary storage for serial and parallel execution there is an apparent conflict between the hardware requirements for fast parallel execution and the hardware requirements for fast serial execution. for example, fast vector execution is achieved by maintaining high execution concurrency over extended periods of time. with many operations executing in parallel, the time to carry out individual operations is much less important than the average execution concurrency. fast serial execution, on the other hand, requires rapid execution of relatively few operations at a time; hardware concurrency can be sacrificed in favor of short execution times. fewer registers and memory locations are required, but they must have shorter access times than for parallel execution. we show how to integrate these seemingly conflicting requirements into a single computer, using asymmetric distribution of hardware, and sometimes using software to allocate variables to appropriate parts of the storage hierarchy. j. swensen y. patt towards junking the pbx: deploying ip telephony we describe the architecture and implementation of our internet telephony test-bed intended to replace the departmental pbx (telephone switch). it interworks with the traditional telephone networks via a pstn/ip gateway. it also serves as a corporate or campus infrastructure for existing and future services like web, email, video and streaming media. initially intended for a few users, it will eventually replace the plain old telephones in our offices, due to the cost benefit and new services it offers. we also discuss common inter-operability problems between the pbx and the gateway. wenyu jiang jonathan lennox henning schulzrinne kundan singh hardware and software support for speculative execution of sequential binaries on a chip-multiprocessor venkata krishnan josep torrellas deriving a protocol converter: a top-down method a protocol converter mediates the communication between implementations of different protocols, enabling them to achieve some form of useful interaction. the problem of deriving a protocol converter from specifications of the protocols and a desired service can be viewed as the problem of finding the "quotient" of two specifications. we define a class of finite-state specifications and present an algorithm for solving "quotient" problems for the class. the algorithm is applied to an example conversion problem. we also discuss its application in the context of layered network architectures. k. l. calvert s. s. lam simulation of fast packet-switched photonic networks for interprocessor communication k. a. aly p. w. dowd a hierarchical network approach to project control systems (abstract only) in pert, cpm, and similar project control systems each task of the project is represented as an arc of a network diagram. in the approach presented here each task may be further subdivided into additional tasks, the process being continuable indefinitely until reaching "elemental" tasks. a task file is presented with an embedded data structure which includes both the precedence and hierarchical relationships. this approach may be useful in the display of such a system on a fixed area such as that of a display terminal. iza goroff evaluation of retransmission strategies in a local area network environment we present an evaluation of retransmission strategies over local area networks. 
expressions are derived for the expectation and the variance of the transmission time of the go-back-n and the selective repeat protocols in the presence of errors. these are compared to the expressions for blast with full retransmission on error (bfre) derived by zwaenepoel [zwa 85]. we conclude that go-back-n performs almost as well as selective repeat and is very much simpler to implement, while bfre is stable only for a limited range of message sizes and error rates. we also present a variant of bfre which optimally checkpoints the transmission of a large message. this is shown to overcome the instability of ordinary bfre. it has a simple state machine and seems to take full advantage of the low error rates of local area networks. we further investigate go-back-n by generalizing the analysis to an upper layer transport protocol, which is likely to encounter, among other things, variable delays due to protocol overhead, multiple connections, process switches and operating system scheduling priorities. a. mukherjee l. h. landweber j. c. strikwerda performance prediction tools for cedar: a multiprocessor supercomputer walid abu-sufah alex y. kwok a critique of z39.50 based on implementation experience martin l. schoffstall wengyik yeong estimating the service time of web clients using server logs this article proposes and evaluates measures for estimating the service time of a web client using server logs, only from the server side without introducing traffic into the network. the http protocol is described as well as the different interactions between the web server, the communication components, and the web client application. the first measure is based on the time it takes for the web server application to deliver an object to its operating system, keeping in mind the buffer effect of the server network. the second measure also considers the inter-arrival times to the server application of the get requests for the objects that are part of a web page. we propose formulas, validated experimentally, that relate the proposed measurements in the server with the different components that take part in a web transaction and the service time experienced by clients. we have carried out several experiments to evaluate the validity of the proposed measurements, and the best measure estimates the service time of the client with an error below 20% for 90% of the requests. we observed a cyclic component in the measurements of the server that can simplify the estimation of future values. for this reason, the proposed measures may be used to determine the service time the client perceives on each visit to a server. they can also be used by content distribution networks to choose among several replicas located in different places on the internet. oscar ardaiz felix freig leandro navarro improving the start-up behavior of a congestion control scheme for tcp based on experiments conducted in a network simulator and over real networks, this paper proposes changes to the congestion control scheme in current tcp implementations to improve its behavior during the start-up period of a tcp connection. the scheme, which includes slow-start, fast retransmit, and fast recovery algorithms, uses acknowledgments from a receiver to dynamically calculate reasonable operating values for a sender's tcp parameters governing when and how much a sender can pump into the network. 
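the go-back-n versus selective-repeat comparison above can be illustrated with the usual textbook approximations for the expected number of transmissions per delivered packet under independent packet loss; this is only a sketch of the flavour of the analysis, not the exact expectation and variance expressions (or the bfre results) derived in the paper, and the loss probability and window size below are assumptions for illustration.

```python
def expected_transmissions_per_packet(p, window):
    """Textbook approximations (not the paper's exact expressions) for the
    mean number of packet transmissions per delivered packet when each
    packet is lost independently with probability p."""
    selective_repeat = 1.0 / (1.0 - p)          # only the lost packet is resent
    go_back_n = 1.0 + p * window / (1.0 - p)    # each loss also resends ~window packets
    return go_back_n, selective_repeat

# example: 1% loss rate, window of 8 outstanding packets
gbn, sr = expected_transmissions_per_packet(0.01, 8)
print(f"go-back-n: {gbn:.3f}, selective repeat: {sr:.3f} transmissions/packet")
```

with these assumed numbers, go-back-n needs well under ten percent more transmissions than selective repeat, which is in line with the conclusion above that it performs almost as well while being much simpler.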
during the start-up period, because a tcp sender starts with default parameters, it often ends up sending too many packets too fast, leading to multiple losses of packets from the same window. this paper shows that recovery from losses during this start-up period is often unnecessarily time-consuming. in particular, using the current fast retransmit algorithm, when multiple packets in the same window are lost, only one of the packet losses may be recovered by each fast retransmit; the rest are often recovered by slow-start after a usually lengthy retransmission timeout. thus, this paper proposes changes to the fast retransmit algorithm so that it can quickly recover from multiple packet losses without waiting unnecessarily for the timeout. these changes, tested in the simulator and on real networks, show significant performance improvements, especially for short tcp transfers. the paper also proposes other changes to help minimize the number of packets lost during the start-up period. janey c. hoe comparative analysis of wireless atm channel access protocols supporting multimedia traffic extension of multimedia services and applications offered by atm networks to wireless and mobile users has captured a lot of recent research attention. research prototyping of wireless atm networks is currently underway at many leading research and academic institutions. various architectures have been proposed depending on the intended application domain. successful implementation of wireless connectivity to atm services is significantly dependent on the medium access control (mac) protocol, which has to provide support for multimedia traffic and for quality-of-service (qos) guarantees. the objective of this paper is to investigate the comparative performance of a set of access protocols, proposed earlier in the literature, with more realistic source traffic models. data traffic is modeled with self-similar (fractal) behavior. voice traffic is modeled by a slow speech activity detector (sad). video traffic is modeled as an h.261 video teleconference, where the number of atm cells per video frame is described by a gamma distribution and a first-order discrete autoregressive process model. a comparison of the protocols based on simulation data is presented. the goal of the paper is to identify appropriate techniques for effectively and efficiently supporting multimedia traffic and qos. simulation results show that boundaries between different types of services are necessary for multimedia traffic. reservation for certain traffic types, especially video, can significantly improve their quality. reducing the number of collisions is an important issue for wireless networks since contentions not only lead to potentially high delay but also result in high power consumption. jyh-cheng chen krishna m. sivalingam raj acharya osi service specification: sap and cep modelling jg tomas j pavon o pereda a new distributed route selection approach for channel establishment in real-time networks g. manimaran hariharan shankar rahul c. siva ram murthy a centrally controlled shuffle network for reconfigurable and fault-tolerant architecture nripendra n. biswas s. srinivas trishala dharanendra a distributed overload control algorithm for delay-bounded call setup as communication networks provide newer services, signaling is becoming more and more compute-intensive compared to present-day networks. 
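the traffic models used in the wireless atm comparison above include an h.261 video source whose cells per frame follow a gamma distribution with first-order discrete autoregressive (dar(1)) correlation; a minimal generator of such a source is sketched below, with all parameter values assumed rather than taken from the paper.

```python
import random

def dar1_gamma_video_source(n_frames, rho=0.8, shape=2.0, scale=50.0, seed=1):
    """Illustrative H.261-like source: cells per frame drawn from a gamma
    marginal, with frame-to-frame correlation from a first-order discrete
    autoregressive (DAR(1)) process. All parameters here are assumptions,
    not values from the paper above."""
    rng = random.Random(seed)
    cells_per_frame = []
    current = round(rng.gammavariate(shape, scale))
    for _ in range(n_frames):
        if rng.random() >= rho:                      # with prob. 1 - rho, draw a fresh value
            current = round(rng.gammavariate(shape, scale))
        cells_per_frame.append(current)              # otherwise repeat the last value
    return cells_per_frame

print(dar1_gamma_video_source(10))
```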
it is known that under overload conditions, the call throughput (goodput) and the network revenue drop to zero even when transport resources are available. a distributed overload control algorithm for delay-bounded call setup is proposed in this paper. the end-to-end delay bound is budgeted among the switching nodes involved in call setup, and these nodes apply a local overload control with a deterministic delay threshold and drop call requests experiencing higher delays. this algorithm does not depend on feedback on network conditions and makes use of only parameters that can be instrumented locally by the switching node. using an m/m/1 queueing model with a first-in-first-out (fifo) service discipline at a switching node, two optimized control schemes are considered for local overload control and their performance is compared through analysis: one with an arrival rate limit and the other with a buffer size limit. though both schemes minimize unproductive call processing at heavy load, the latter is found to yield higher call throughput and lower average call setup delays compared to the former. also, the buffer size required for the scheme with the buffer size limit is typically small, and call throughput close to the server capacity can be achieved during overload. the performance of the distributed overload control algorithm in a network is evaluated through simulation experiments, using the scheme with the buffer size limit for the local overload control. it shows that the average end-to-end delay could be much less than the end-to-end delay bound, providing room for overprovisioning of the delay bounds. the tradeoff between the number of nodes, call throughput, and average end-to-end delay needs to be considered while deciding the route and budgeting the end-to-end delay bound among the different nodes along it. these performance results are expected to serve as lower bounds to more sophisticated local call rejection mechanisms such as push-out or time-out along with a last-in-first-out (lifo) service discipline. r. radhakrishna pillai dummynet: a simple approach to the evaluation of network protocols network protocols are usually tested in operational networks or in simulated environments. with the former approach it is not easy to set and control the various operational parameters such as bandwidth, delays, queue sizes. simulators are easier to control, but they are often only an approximate model of the desired setting, especially as regards the various traffic generators (both producers and consumers) and their interaction with the protocol itself. in this paper we show how a simple, yet flexible and accurate network simulator - **dummynet** - can be built with minimal modifications to an existing protocol stack, allowing experiments to be run on a standalone system. **dummynet** works by intercepting communications of the protocol layer under test and simulating the effects of finite queues, bandwidth limitations and communication delays. it runs in a fully operational system, hence allowing the use of real traffic generators and protocol implementations, while solving the problem of simulating unusual environments. with our tool, doing experiments with network protocols is as simple as running the desired set of applications on a workstation. a freebsd implementation of **dummynet**, targeted to tcp, is available from the author. this implementation is highly portable and compatible with other bsd-derived systems, and takes less than 300 lines of kernel code. 
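a toy user-space model of what a dummynet "pipe" does to each packet (finite queue, bandwidth limit, then propagation delay) is sketched below; it is only an illustration of the mechanism described above, not the freebsd kernel implementation.

```python
from collections import deque

class Pipe:
    """Toy model of a dummynet-style 'pipe': a finite queue drained at a fixed
    bandwidth, followed by a propagation delay. An illustrative sketch only,
    not the FreeBSD kernel code described above."""

    def __init__(self, bandwidth_bps, delay_s, queue_pkts):
        self.bytes_per_s = bandwidth_bps / 8.0
        self.delay = delay_s
        self.queue_pkts = queue_pkts
        self.departures = deque()   # end-of-transmission times of queued packets

    def send(self, now, size_bytes):
        """Return the delivery time of the packet, or None if it is tail-dropped."""
        while self.departures and self.departures[0] <= now:
            self.departures.popleft()            # packets already fully transmitted
        if len(self.departures) >= self.queue_pkts:
            return None                          # finite queue: drop on overflow
        start = self.departures[-1] if self.departures else now
        done = max(start, now) + size_bytes / self.bytes_per_s
        self.departures.append(done)
        return done + self.delay                 # transmission time + propagation delay

# example: 1 Mbit/s link, 50 ms delay, 10-packet queue
pipe = Pipe(1_000_000, 0.050, 10)
print(pipe.send(now=0.0, size_bytes=1500))       # ~0.012 s service + 0.050 s delay
```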
luigi rizzo intelligent paging strategies for personal communication services network partha sarathi bhattacharjee debashis saha amitava mukherjee optimizing amplifier placements in a multiwavelength optical lan/man: the unequally powered wavelengths case byrav ramamurthy jason iness biswanath mukherjee performance analysis of md5 md5 is an authentication algorithm proposed as the required implementation of the authentication option in ipv6. this paper presents an analysis of the speed at which md5 can be implemented in software and hardware, and discusses whether its use interferes with high bandwidth networking. the analysis indicates that md5 software currently runs at 85 mbps on a 190 mhz risc architecture, a rate that cannot be improved more than 20-40%. because md5 processes the entire body of a packet, this data rate is insufficient for current high bandwidth networks, including hippi and fiberchannel. further analysis indicates that a 300 mhz custom vlsi cmos hardware implementation of md5 may run as fast as 256 mbps. the hardware rate cannot support existing ipv4 data rates on high bandwidth links (800 mbps hippi). the use of md5 as the default required authentication algorithm in ipv6 should therefore be reconsidered, and an alternative should be proposed. this paper includes a brief description of the properties of such an alternative, including a sample alternate hash algorithm. joseph d. touch schemes for slot reuse in crma oran sharon adrian segall an overview of cray research computers including the y-mp/c90 and the new mpp t3d wilfried oed martin walker diva: a reliable substrate for deep submicron microarchitecture design building a high-performance microprocessor presents many reliability challenges. designers must verify the correctness of large complex systems and construct implementations that work reliably in varied (and occasionally adverse) operating conditions. to further complicate this task, deep submicron fabrication technologies present new reliability challenges in the form of degraded signal quality and logic failures caused by natural radiation interference. in this paper, we introduce dynamic verification, a novel microarchitectural technique that can significantly reduce the burden of correctness in microprocessor designs. the approach works by augmenting the commit phase of the processor pipeline with a functional checker unit. the functional checker verifies the correctness of the core processor's computation, only permitting correct results to commit. overall design cost can be dramatically reduced because designers need only verify the correctness of the checker unit. we detail the diva checker architecture, a design optimized for simplicity and low cost. using detailed timing simulation, we show that even resource-frugal diva checkers have little impact on core processor performance. to make the case for reduced verification costs, we argue that the diva checker should lend itself to functional and electrical verification better than a complex core processor. finally, future applications that leverage dynamic verification to increase processor performance and availability are suggested. todd m. austin a performance evaluation model for a digital group with multislot traffic streams we present a unified model to compute various performance measures when a mixture of multi-slot traffic streams is offered on a trunk group of an integrated digital network under various bandwidth control schemes. 
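the dynamic-verification idea above can be illustrated with a toy commit-stage checker: a deliberately faulty "core" computes each result, and a simple checker recomputes the retiring instruction and only lets a correct value commit. the instruction set, fault model and error rate below are assumptions for illustration, not the diva design.

```python
import operator, random

OPS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}

def faulty_core(op, a, b, error_rate=0.05, rng=random.Random(0)):
    """Toy 'complex core': occasionally produces a corrupted result."""
    result = OPS[op](a, b)
    if rng.random() < error_rate:
        result ^= 1 << rng.randrange(8)        # flip a random low-order bit
    return result

def checker_commit(op, a, b, core_result):
    """Toy 'checker' stage: recompute the retiring instruction and only let a
    correct result commit, overriding the core on a mismatch (the spirit of
    dynamic verification, not the actual diva microarchitecture)."""
    golden = OPS[op](a, b)
    return golden, (golden != core_result)     # (committed value, fix needed?)

fixes = 0
for _ in range(1000):
    a, b = random.randrange(100), random.randrange(100)
    committed, fixed = checker_commit("add", a, b, faulty_core("add", a, b))
    fixes += fixed
print(f"checker corrected {fixes} of 1000 committed instructions")
```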
when we consider a bandwidth reservation scheme based on traditional trunk reservation and a variation called partial bandwidth reservation, we observe that the performance criteria themselves have a significant impact in deciding which reservation scheme to use and at what level. d. medhi a. van de liefvoort c. s. reece resource management for extensible internet servers grzegorz czajkowski chi-chao chang chris hawblitzel deyu hu thorsten von eicken retransmission control and fairness issue in mobile slotted aloha networks with fading and near-far effect the effects of different retransmission control policies on the performance of a mobile data system employing the slotted aloha protocol are investigated, with the emphasis on the unfairness between close-in and distant users due to the near-far effect. an analytical multi-group model is developed to evaluate both the user and the network performance of the mobile slotted aloha network under two classes of retransmission control policies, namely the uniform policy and the nonuniform policy. the uniform policy requires that all users adopt the same retransmission probability, whereas the nonuniform policy allows more distant users to have a larger retransmission probability in order to compensate for the unfairness caused by the near-far effect. the performance of a slotted aloha network with a linear topology in a rician fading channel under the two policies is compared using the multi-group model and simulation. the nonuniform policy is found to be more effective in alleviating the unfairness of user throughput over a wider range of the data traffic load than the uniform policy, which is effective only when the data traffic load is very light. thus, a mobile data network can enjoy the network performance improvement derived from the near-far effect while the unfairness between close-in and distant users can be greatly mitigated without resorting to power control. te-kai liu john a. silvester andreas polydoros books michele tepper high performance computing and communications program d. b. nelson energy-aware adaptation for mobile applications jason flinn m. satyanarayanan the mobile people architecture petros maniatis mema roussopoulos ed swierk kevin lai guido appenzeller xinhua zhao mary baker verifying correct pipeline implementation for microprocessors jeremy levitt kunle olukotun monitoring quality improvement on a pc the system provides a report processor for supplier rating and reporting; on-line reporting to further analyze the raw and summarized data, including on-line graphics; and backup and recovery to back up from the fixed disk to floppy diskettes and vice versa. a general schematic showing the overall structure of the pc system is shown in figure 1. even though this system has a particular application in mind, the statistical methods and all capabilities of the system could be used to monitor the quality improvement of any set of products, processes, or services. examples of such applications include: monitoring the quality of products produced in manufacturing, monitoring the quality of the entire manufacturing process, monitoring the quality of service provided through questionnaires on customer satisfaction, or monitoring the processes needed to smoothly run a warehousing operation. jan p. gipe deborah a. guyton a case study of decnet applications and protocol performance this paper is a study based on measurements of network activities of a major site of digital's world-wide corporate network. 
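a toy calculation in the spirit of the multi-group slotted aloha model above is sketched below: with no capture or fading, a tagged user succeeds in a slot only if everyone else stays silent, and raising the distant group's retransmission probability shifts success probability toward that group. the group sizes and probabilities are assumed, and the capture effect that actually creates the near-far unfairness is deliberately left out of this simplified sketch.

```python
def per_group_success(n_near, p_near, n_far, p_far):
    """Per-slot success probability for a tagged user in each group of a toy
    two-group slotted aloha (no capture, no fading): a transmission succeeds
    only if every other user stays silent. All parameters are illustrative."""
    silent_others_near = (1 - p_near) ** (n_near - 1) * (1 - p_far) ** n_far
    silent_others_far = (1 - p_near) ** n_near * (1 - p_far) ** (n_far - 1)
    return p_near * silent_others_near, p_far * silent_others_far

# uniform policy vs. a nonuniform policy that boosts the distant group
print(per_group_success(10, 0.05, 10, 0.05))   # equal retransmission probabilities
print(per_group_success(10, 0.04, 10, 0.08))   # distant users retransmit more often
```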
the study yields two kinds of results: (1) decnet protocol performance information and (2) decnet session statistics. protocol performance is measured in terms of the various network overhead (non-data) packets in routing, transport and session layers. from these protocol performance data, we are able to review how effective various network protocol optimizations are; for example, the on/off flow control scheme and the delayed acknowledgement scheme in the transport protocol. decnet session statistics characterize the workload in such a large network. the attributes of a session include the user who started it, the application invoked, the distance between the user and the application, the time span, the number of packets and bytes in each direction, and the various reasons if a session is not successfully established. based on a large sample of such sessions, we generate distributions based on various attributes of sessions; for example the application mix, the visit count distribution and various packet number and size distributions. d.-m. chiu r. sudama books michele tepper jay blickstein a new 'building block' for performance evaluation of queueing networks with finite buffers we propose a new 'building block' for analyzing queueing networks. this is a model of a server with a variable buffer-size. such a model enables efficient analysis of certain queueing networks with blocking due to limited buffer spaces, since it uses only product-form submodels. the technique is extensively tested, and found to be reasonably accurate over a wide range of parameters. several examples are given, illustrating practical situations for which our model would prove to be a useful performance analysis tool, especially since it is simple to understand, and easy to implement using standard software for closed queueing networks. rajan suri gregory w. diehl error correction and error detection techniques for wireless atm systems error correction and error detection techniques are often used in wireless transmission systems. the asynchronous transfer mode (atm) employs header error control (hec). since atm specifications have been developed for high-quality optical fiber transmission systems, hec has single-bit error correction and multiple-bit error detection capabilities. when hec detects a multiple-bit error, the cell is discarded. however, wireless atm requires a more powerful forward error correction (fec) scheme to improve the bit error rate (ber) performance, resulting in a reduction in transmission power and antenna size. this concatenation of wireless fec and atm hec may affect cell loss performance. this paper proposes error correction and error detection techniques suitable for wireless atm and analyzes the performance of the proposed schemes. satoru aikawa yasushi motoyama masahiro umehira a unified approach to scan time analysis of token rings and polling networks. token rings and multipoint polled lines are two widely used network interconnection techniques. the general concept of cyclic allocation processes is defined and used to characterize token passing and polling in these networks. scan time, the time to poll all nodes at least once, is an important quantity in the response time analysis of such networks. we derive expressions for the mean and variance of scan times using a direct, operational approach. resulting expressions are general and are applicable to both exhaustive and non-exhaustive service. 
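as an illustration of the kind of expression involved, the classical mean cycle-time formula for a polling or token-passing system is sketched below; this is the standard textbook result, not the paper's operational derivation, and the walk times, arrival rates and service times are assumed.

```python
def mean_scan_time(walk_times, arrival_rates, mean_service_times):
    """Classical mean cycle (scan) time for a polling / token-passing system:
    E[C] = R / (1 - sum(rho_i)), where R is the total walk (token-passing)
    overhead per cycle and rho_i = lambda_i * E[S_i]. Note that it depends
    only on the means of the service times and the arrival rates."""
    total_walk = sum(walk_times)
    utilization = sum(lam * s for lam, s in zip(arrival_rates, mean_service_times))
    if utilization >= 1.0:
        raise ValueError("system is unstable: total utilization >= 1")
    return total_walk / (1.0 - utilization)

# 5 stations, 1 ms walk time each, each offering 20 messages/s of 5 ms messages
print(mean_scan_time([0.001] * 5, [20.0] * 5, [0.005] * 5))   # -> 0.01 s
```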
the effect of higher level protocols is easily incorporated in the analysis via calculations of constituent quantities. the expression for mean scan time is exact and depends only on the means of message transmission times and arrival rates. the approximate analysis of variance takes into account the correlation between message transmissions at different nodes. expected level of accuracy is indicated by an example. subhash c. agrawal jeffrey p. buzen ashok k. thareja processing location-dependent queries in a multi-cell wireless environment we develop several methods for scheduling location-dependent queries as clients cross cell boundaries in a multi-cell wireless environment. our study is based on a common scenario where data objects are stationary while clients, which issue the queries, are mobile. for query processing, we use voronoi diagrams to construct an index and a semantic cache for improving data reusability. for handoff clients, we propose three scheduling methods, namely, the priority method, the intelligent method, and the hybrid method to improve performance. a simulation is conducted to study the performance of the methods. baihua zheng dik lun lee reuse partitioning in cellular networks in dynamic channel allocation great interest in recent years has been devoted to mobile communications. the research effort has been directed to increasing the capacity of radio systems by applying space reuse techniques. higher efficiency in the usage of the available frequency spectrum can be obtained either by reducing the cell size, thus requiring the provision of new base stations, or by reusing the available spectrum more efficiently without cell size reduction. in this paper we present a dynamic frequency allocation algorithm for cellular networks that exploits a given reuse pattern. the performance of the proposed scheme, in terms of blocking probability, is evaluated by means of computer simulations both when the position of the mobiles remains unchanged and when mobility is taken into account, under both uniform and hot-spot traffic. the numerical results show that the capacity of the proposed scheme is considerably higher than that of a dynamic channel allocation without reuse partitioning. the effects of both user mobility and reuse partitioning on the signalling load are also considered. a. pattavina s. quadri v. trecordi the impact of unresolved branches on branch prediction scheme performance a. r. talcott w. yamamoto m. j. serrano r. c. wood m. nemirovsky network protocols andrew s. tanenbaum a new protocol conformance test generation method and experimental results xiaojun shen guogang li hardware review: harris rtx-2001 contest board paul frenger on the self-similar nature of ethernet traffic (extended version) will e. leland murad s. taqqu walter willinger daniel v. wilson corrections to "routing on longest-matching prefixes" willibald doeringer gunter karjoth mehdi nassehi requirements for success in gigabit networking samir chatterjee editorial diane crawford a performance bound analysis of multistage combining networks using a probabilistic model byung-changho kang gyungtto lee richard kain near-optimality of distributed load-adaptive dynamic channel allocation strategies for cellular mobile networks in this paper we focus on the so-called load-adaptive dynamic channel allocation (dca) strategies for cellular mobile networks. such strategies envisage the dynamic assignment of radio resources with the constraint that the outage probability (i.e. 
the probability that the carrier-to-interference power ratio be less than a given threshold) be less than a specified value, even in the worst foreseen propagation scenario. we identify a set of constraints to be satisfied in order that a dca strategy belongs to the load- adaptive class. this provides a tight lower bound on traffic blocking and dropping performance such that: (i) it implies a dramatically lower computational effort than the known optimum strategy (based on the maximum packing algorithm); (ii) it is much tighter than the bound provided by the simple erlang-b formula. a performance evaluation is carried out to compare the call blocking and dropping probabilities resulting from the tight bound above with those relevant to the fixed channel allocation and to some recently proposed dca strategies, including the geometric dca. the simulations exploit a mobility model that provides different degrees of offered traffic peakedness. it emerges that the geometric dca yields a practical way to attain near optimal performance in the load-adaptive class, leading a viable pathway to enhance the capacity of nowadays 2nd generation cellular networks in the short-medium term. andrea baiocchi fabrizio sestini editorial joseph bannister tatsuya suda average diameter and its estimation in non-linear structures zhizhang shen never lost, never forgotten jon crowcroft a perspective on ubiquitous computing this panel explores the networking impact of placing hundreds to thousands of computers into the office or home environment using wireless (e.g. infrared) techniques to interlink them. vint cerf a performance-driven standard-cell placer based on a modified force-directed algorithm we propose a performance-driven cell placement method based on a modified force-directed approach. a pseudo net is added to link the source and sink flip-flops of every critical path to enforce their closeness. given user- specified i/o pad locations at the chip boundaries and starting with all core cells in the chip center, we iteratively move a cell to its force-balanced location assuming all other cells are fixed. the process stops when no cell can be moved farther than a threshold distance. next, cell rows are adjusted one at a time starting from the top and bottom. after forming these two rows (top/bottom), all movable core cells force-balanced locations are updated. the row-formation-and-update process continues until all rows are adjusted and, hence, a legal placement is obtained. we have integrated the proposed approach into an industrial apr flow. experimental results on benchmark circuits up to 191k-cell (500k-gate) show that the critical path delay can be improved by as much as 11.5%. we also study the effect on both layout quality and cpu time consumption due to the amount of pseudo net added. we found that the introduction of pseudo net indeed significantly improves the layout quality. yih-chih chou youn-long lin dynamic selection of a performance-effective transmission sequence for token- passing networks with mobile nodes we describe a distributed method for keeping down "token-passing" overhead in a mobile multi-access network, by performing local corrections in the token- passing sequence as they become necessary due to changes in node spatial configuration. these corrections involve only a subset of the nodes in the network, thus reducing the required computational effort. the method consists of three distributed protocols. 
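the force-balanced location used by the placer described above can be illustrated by a minimal calculation: with spring-like net weights, the zero-force position of a movable cell is the weighted centroid of the objects it connects to, and a timing-critical pseudo net simply appears as an extra, heavier connection. the weights and coordinates below are assumptions, and row formation and the rest of the placement flow are omitted.

```python
def force_balanced_location(connections):
    """Zero-force (force-balanced) location of a movable cell: with spring-like
    net weights, the point where the forces cancel is the weighted centroid of
    the connected objects. 'connections' is a list of (x, y, weight); a
    timing-critical pseudo net can simply appear here with a larger weight.
    A minimal sketch of the idea, not the paper's full placement flow."""
    total_w = sum(w for _, _, w in connections)
    x = sum(xc * w for xc, _, w in connections) / total_w
    y = sum(yc * w for _, yc, w in connections) / total_w
    return x, y

# two ordinary net connections plus a heavier pseudo net to a critical flip-flop
print(force_balanced_location([(0.0, 0.0, 1.0), (10.0, 4.0, 1.0), (2.0, 2.0, 3.0)]))
```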
the first is for detecting deterioration and identifying a subset of nodes that constitutes a "problem area"; the second is to provide each node with the distances (propagation delays) to all other nodes; and the third is to use this topological information to construct a minimal spanning tree, from which a good "token passing" sequence can be derived. the second and third protocols may also have applications other than the one described here, such as position location, routing and broadcasting. yaron i. gold shlomo moran an open approach for deploying programming nodes into communication networks nowadays, the computer network research community is facing new challenges related to the concepts of programmable and active networks. great efforts have been spent to make current data communication networks more flexible and dynamic. unfortunately, such efforts are accompanied by complexity, and, therefore, the process of deploying these mechanisms has become complicated. in order to make this process simpler, this work presents an open approach for having programmable nodes that make computer networks more dynamic in terms of adding new services without increasing complexity. this approach presents snpi (simple network programmable interface), which is a group of skeletons and data-structure resources that enables programmers and developers to deploy new services into the network quickly. the mechanism presented allows access to internal mechanisms of data-flow processing in the network nodes. cris amon c. rocha aecio paiva braga jose neuman de souza connection principles for multipath, packet switching networks packet switched multistage interconnection networks (mins) have been mostly proposed to use a unique connection path between any source and destination. we propose to add a few extra stages in an min to create multiple paths between any source and destination. connection principles of multipath mins (mmins) for packet switching are presented in this paper. performance of such a network is analyzed for possible use in multiprocessor systems and dataflow computers. for an mmin with n nodes, the number of required stages is confined to the range [log2n+1, 2log2n-1]. each stage consists of n/2 buffered 2-by-2 switching cells. in practice, one or two extra stages are sufficient beyond the log2n stages required in a unique-path min. the delays of mmins are shown to be much shorter than those of unique-path mins for packet switching. the improvement lies in significantly reduced packet wait delays in buffers, especially under heavy traffic conditions. the tradeoffs between reduced network delays and increased hardware cost are studied. optimal design criteria and procedures are provided for developing mmins with a fixed network size and number of stages. chi-yuan chin kai hwang probabilistic language analysis of weighted voting algorithms we present a method of analyzing the performance of weighted voting algorithms in a fault-tolerant distributed system. in many distributed systems, some processors send messages more frequently than others and all processors share a common communication medium, such as an ethernet. typical fault-tolerant voting algorithms require that a certain minimum number of votes be collected from different processors. system performance is significantly affected by the time required to collect those votes. 
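a small monte carlo sketch of the quantity at issue in the weighted-voting abstract above, the time needed to accumulate a quorum of vote weights, is given below; the exponential response delays, the weights and the quorum are assumptions for illustration only.

```python
import random

def expected_quorum_delay(weights, quorum, mean_delay=0.01, trials=10_000, seed=7):
    """Monte Carlo estimate of the expected time to collect votes whose total
    weight reaches the quorum, assuming each processor's vote arrives after an
    independent exponential delay. The delay model and weights are assumed."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        arrivals = sorted((rng.expovariate(1.0 / mean_delay), w) for w in weights)
        accumulated = 0.0
        for t, w in arrivals:
            accumulated += w
            if accumulated >= quorum:
                total += t
                break
    return total / trials

# five voters with unequal weights, quorum of 5 out of a total weight of 9
print(expected_quorum_delay([3, 2, 2, 1, 1], quorum=5))
```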
we formulate the problem of weighted voting in terms of probabilistic languages and then use the calculus of generating functions to compute the expected delay to collect that number of votes. an application of the method to a particular voting algorithm, the total protocol, is given. louise e. moser vikas kapur p. m. melliar-smith a priority ethernet lan protocol and its performance m. s. obaidat d. l. donahue detection of pathological tcp connections using a segment trace filter trevor mendez resources section: books michele tepper jay blickstein time and message-efficient s-based consensus (brief announcement) the class of strong failure detectors (denoted s) includes all failure detectors that suspect all crashed processes and that do not suspect some (a priori unknown) process that never crashes. so, a failure detector that belongs to s is intrinsically unreliable as it can arbitrarily suspect correct processes. several s-based consensus protocols have been designed. they proceed in consecutive asynchronous rounds. some of them systematically require n rounds (n being the number of processes), each round involving n^2 or n messages. others allow early decision (i.e., when there are no erroneous suspicions, the number of rounds depends only on the maximal number of crashes) but require each round to involve n^2 messages. this brief announcement introduces an early-deciding s-based consensus protocol, each round of which involves 3(n - 1) messages. so, the proposed protocol is particularly time and message-efficient. it is based on the rotating coordinator paradigm. due to space limitations the reader is referred to [2] for an in-depth presentation of the protocol (underlying principles, proof, cost analysis and generalization). here, the limited space allows us only to compare it with the other s-based consensus protocols we are aware of. this comparison is provided in table 1. "nb", "msg" and "c_s" are shortcuts for "number", "messages" and "communication steps" respectively; ghmr denotes the proposed protocol; the maximum number of processes that are allowed to crash is also given for each protocol; |v| is the maximal size (in bits) of a value a process can initially propose to the consensus. all the protocols work provided that the maximum number of crashes is less than n. hence, they are optimal with respect to the number of process crashes that can be tolerated. the protocols of the first two lines systematically require n rounds ([4] and [5] each proposes a particular generalization of s; we consider here their protocols used with s). the behavior of the protocols of the last two lines depends on failure occurrences and false failure suspicions (that is why they allow early decision in "good" scenarios, i.e., when the failure detector does not make mistakes). for those lines, the table considers that (1) the failure detector does not make mistakes, (2) there are failures, and (3) there is a failure per round (the "simultaneity" of failure occurrences can actually reduce the number of rounds). for the generic protocol described in [3], the table considers its instantiation with s. hence, it appears that the proposed protocol has several noteworthy features. the most important of them are its ability to allow early decision and its linear cost for the number of messages per round. fabíola greve michel hurfin raimundo macêdo michel raynal network security considerations in bln in a heterogeneous computer network, resources on each node are protected by the local security mechanism. 
network services allow local resources to be accessed remotely from other nodes. without appropriate network security measures, each node is vulnerable to attack from the network. this paper analyzes the heterogeneous bell labs network (bln) environment and lists the potential network security risks. by assessing the risks, counter-measures are proposed for minimizing potential exposure. these counter-measures are part of a global network security mechanism being implemented for bln. j. yao a performance study of a highly-parallel architecture to provide registration and translation services in an intelligent network this paper presents the initial investigation of a highly parallel architecture that can be used as an intelligent network translation and registration platform (intarp). the intarp architecture is deterministic and uses a hyper-switch as well as bus connections to pass messages. threads through the multiprocessing pipelines are maintained by service definitions of the call procedures on "anchor processors". the processes call procedures distributed over "sequencing processors". intarp is being prototyped using the inmos, family of transputers. unlike other architectures designed for the intelligent network, intarp is designed specifically to minimize the post dial delay (pdd) the architecture contributes to circuit switch call processing. unmanaged pdd in the intelligent network can lead to higher call abandonments. pdd also increase trunk and port capitalization due to the greater call holding time in peripheral processors prior to information transfer. the intarp architecture is evaluated using a simulation model. translation, pcn registration and calling card services have been implemented on the simulation model and the performance of the architecture is inferred from the performance of the simulation studies. jerry stach jerry place analyzing communication latency using the nectar communication processor for multicomputer applications, the most important performance parameters of a network is the latency for short messages. in this paper we present an analysis of communication latency using measurement of the nectar system. nectar is a high-performance multicomputer built around a high- bandwidth crosspoint network. nodes are connected to the nectar network using network coprocessors that are primarily responsible the protocol processing, but that can also execute application code. this architecture allows us to analyze message latency both between workstations with an outboard protocol engine and between lightweight nodes with a minimal runtime system and a fast, simple network interface (the coprocessors). we study how much context switching, buffer management and protocol processing contribute to the communication latency and we discuss how the latency is influenced by the protocol implementation. peter steenkiste self-stabilizing topology maintenance protocols for high-speed networks hosame abu-amara brian a. coan shlomi dolev arkady kanevsky jennifer l. welch heat dissipation in wearable computers aided by thermal coupling with the user wearable computers and pda's are physically close to, or are in contact with, the user during most of the day. this proximity would seemingly limit the amount of heat such a device may generate, conflicting with user demands for increasing processor speeds and wireless capabilities. 
however, this paper explores significantly increasing the heat dissipation capability per unit surface area of a mobile computer by thermally coupling it to the user. in particular, a heat dissipation model of a forearm-mounted wearable computer is developed, and the model is verified experimentally. in the process, this paper also provides tools and novel suggestions for heat dissipation that may influence the design of a wearable computer. thad starner yael maguire security protocol for ieee 802.11 wireless local area network as wireless local area networks (wlans) are rapidly deployed to expand the field of wireless products, the provision of authentication and privacy of the information transfer will be mandatory. these functions need to take into account the inherent limitations of the wlan medium such as limited bandwidth, noisy wireless channel and limited computational power. moreover, some of the ieee 802.11 wlan characteristics such as the use of a point coordinator and the polling based point coordination function (pcf) have also to be considered in this design. in this paper, we introduce a security protocol for the ieee 802.11 pcf that provides privacy and authentication, and is designed to reduce security overheads while taking into account the wlan characteristics. we prove this protocol using the original and modified ban logic. se hyun park aura ganz zvi ganz next century challenges: data-centric networking for invisible computing: the portolano project at the university of washington mike esler jeffrey hightower tom anderson gaetano borriello impact of sharing-based thread placement on multithreaded architectures r. thekkath s. j. eggers analysis of a 3d toroidal network for a shared memory architecture this paper describes a synchronized network model suitable for the horizon architecture. the model is defined in terms of a topology and routing policy. a three dimensional toroidal topology is investigated for its multiple redundant paths, memory locality and simplicity in both routing and construction. the routing policy is based on a desperation routing scheme in which it is not guaranteed that a message will make progress on a given network cycle. this scheme requires no complex deadlock avoidance algorithms or node to node flow control, allowing a very simple and efficient implementation. design considerations and preliminary performance results are discussed. f. pittelli d. smitley service specification and protocol construction for the transport layer in a computer network, the transport layer uses the service offered by the network layer and in turn offers its users the transport service of reliable connection management and data transfer. we provide a formal specification of the transport service in terms of an event-driven system and safety and progress properties. we construct three verified transport protocols that offer the transport service. the first transport protocol assumes a perfect network service, the second assumes loss-only network service, and the third assumes loss, reordering and duplication network service. our transport service specifications are very realistic. each user can be closed, listening, active opening, passive opening, open, or closing. a local incarnation number uniquely identifies every active opening and listening duration. users can issue requests for connection, listening, closing, data send, etc. the transport layer issues indications for successful or unsuccessful connection, closing, data reception, etc. 
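a minimal sketch of the user states listed above (closed, listening, active opening, passive opening, open, closing), with a local incarnation number bumped on each listen or open attempt, is given below; the transitions shown are illustrative and are not the authors' verified stepwise construction.

```python
class TransportUser:
    """Minimal sketch of the user states named in the transport service
    specification above, with a local incarnation number identifying each
    listen/open attempt. An illustration only, not the verified construction."""
    STATES = {"closed", "listening", "active_opening", "passive_opening",
              "open", "closing"}

    def __init__(self):
        self.state = "closed"
        self.incarnation = 0

    def listen(self):                          # user request: listen
        assert self.state == "closed"
        self.incarnation += 1                  # new incarnation for this listening duration
        self.state = "listening"

    def connect(self):                         # user request: active open
        assert self.state == "closed"
        self.incarnation += 1                  # new incarnation for this active opening
        self.state = "active_opening"

    def incoming_request(self):                # remote open seen while listening
        assert self.state == "listening"
        self.state = "passive_opening"

    def connection_indication(self):           # transport indication: connected
        assert self.state in {"active_opening", "passive_opening"}
        self.state = "open"

    def close(self):                           # user request: close
        assert self.state == "open"
        self.state = "closing"

    def disconnect_indication(self):           # transport indication: closed
        assert self.state in {"open", "closing"}
        self.state = "closed"
```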
a connection is established only if one user requested the connection and the other was listening, or both requested the connection. a user receives data only from the appropriate incarnation of the distant user, and receives it insequence, without loss or duplication. progress properties ensure that every outstanding user request is eventually responded to by an appropriate transport indication. our protocols are constructed by stepwise refinement of the transport service. the construction method automatically generates a verification that the protocols satisfy the transport service. one distinctive feature of our protocol construction is that the events and verification of the data transfer function is directly obtained from any one of the numerous verified single- incarnation data transfer protocols already presented in the literature. s. l. murphy a. u. shankar mitigating server-side congestion in the internet through pseudoserving keith kong dipak ghosal comparative performance evaluation of cache-coherent numa and coma architectures two interesting variations of large-scale shared-memory machines that have recently emerged are cache-coherent non- uniform-memory-access machines (cc- numa) and cache-only memory architectures (coma). they both have distributed main memory and use directory-based cache coherence. unlike cc- numa, however, coma machines automatically migrate and replicate data at the main-memory level in cache-line sized chunks. this paper compares the performance of these two classes of machines. we first present a qualitative model that shows that the relative performance is primarily determined by two factors: the relative magnitude of capacity misses versus coherence misses, and the granularity of data partitions in the application. we then present quantitative results using simulation studies for eight parallel applications (including all six applications from the splash benchmark suite). we show that coma's potential for performance improvement is limited to applications where data accesses by different processors are finely interleaved in memory space and, in addition, where capacity misses dominate over coherence misses. in other situations, for example where coherence misses dominate, coma can actually perform worse than cc-numa due to increased miss latencies caused by its hierarchical directories. finally, we propose a new architectural alternative, called coma-f, that combines the advantages of both cc-numa and coma. per stenström truman joe anoop gupta internet technology barry m. leiner link layer retransmission schemes for circuit-mode data over the cdma physical channel in the last few years, wide-area data services over north american digital (tdma and cdma) cellular networks have been standardized. the standards were developed under three primary constraints: (i) compatibility with existing land-line standards and systems, (ii) compatibility with existing cellular physical layer standards that are optimized for voice, and (iii) market demands for quick solutions. in particular, the is-95 cdma air interface standard permits multiplexing of primary traffic (e.g., voice or circuit data) and secondary traffic (e.g., packet data) or in-band signaling within the same physical layer burst. in this paper, we describe two radio link protocols for circuit-mode data over is-95. 
the first protocol, protocol s, relies on a single level of recovery and uses a flexible segmentation and recovery (fsar) sublayer to efficiently pack data frames into multiplexed physical layer bursts. we next describe protocol t, that consists of two levels of recovery. protocol t has been standardized for cdma circuit-mode data as is-99 (telecommunications industry association, 1994). we provide performance comparisons of the two protocols in terms of throughput, delay and recovery from fades. we find that the complexity of the two level recovery mechanism can buy higher throughput through the reduced retransmission data unit size. however, the choice of tcp (and its associated congestion control mechanism) as the upper layer of recovery on the link layer, leads to long fade recovery times for protocol t. the two approaches also have significant differences with respect to procedures and performance at handoff and connection establishment. mooi choo chuah bharat doshi subra dravida richard ejzak sanjiv nanda a comprehensive approach to signaling, transmission, and traffic management for wireless atm networks we propose and evaluate a signaling and transmission algorithmic system for wireless digital networks, in conjunction with a traffic monitoring algorithm (tma) for dynamic capacity allocation in multimedia atm environments. the deployed signaling protocol is stable, and two transmission techniques are compared: a framed time-domain based (ftdb) technique and a framed cdma (fcdma) technique. the overall signaling/transmission/traffic monitoring proposed system has powerful performance characteristics, while the tma-ftdb combination is superior to the tma-fcdma combination in environments where message lengths are relatively short and the speed of the transmission lines is relatively low. we also evaluate a system deploying the ethernet protocol as a signaling protocol, instead. in the presence of relatively tight admission delay constraints in signaling, the latter is significantly inferior to our proposed signaling technique. as the above admission delay constraints diminish, the ethernet protocol breaks down, while our proposed signaling technique maintains its high performance characteristics. anthony burrell p. papantoni-kazakos balanced assignment of cells in pcs networks jie li hisao kameda hideto itoh performance analysis of small fddi networks jesse smith l. donnell payne tom nute the design and development of a dynamic program behavior measurement tool for the intel 8086/88 r. j. schwartz performance improvements for iso transport the nbs protocol performance laboratory is developing enhanced protocol mechanisms for osi class 4 transport that will improve the throughput efficiency achieved on a satellite channel. a selective acknowledgement mechanism has been shown to improve throughput efficiency by as much as 34%. several alternative expedited data mechanisms have demonstrated throughput efficiency improvements as great as 38%. most of the protocol mechanism enhancements considered require only minor changes to the international standard osi transport protocol. richard colella robert aronoff kevin mills delay analysis for cbr traffic under static-priority scheduling katsuyoshi iida tetsuya takine hideki sunahara yuji oie editorial lazaros merakos ioannis stavrakakis an integration of network communication with workstation architecture gregory g. finn tcp over wireless with link level error control: analysis and design methodology hemant m. chaskar t. v. lakshman u. 
madhow a slotted cdma protocol with ber scheduling for wireless multimedia networks ian f. akyildiz david a. levine inwhee joe analysis of an algorithm for distributed recognition and accountability computer and network systems are vulnerable to attacks. abandoning the existing huge infrastructure of possibly-insecure computer and network systems is impossible, and replacing them by totally secure systems may not be feasible or cost effective. a common element in many attacks is that a single user will often attempt to intrude upon multiple resources throughout a network. detecting the attack can become significantly easier by compiling and integrating evidence of such intrusion attempts across the network rather than attempting to assess the situation from the vantage point of only a single host. to solve this problem, we suggest an approach for distributed recognition and accountability (dra), which consists of algorithms which "process", at a central location, distributed and asynchronous "reports" generated by computers (or a subset thereof) throughout the network. our highest-priority objectives are to observe ways by which an individual moves around in a network of computers, including changing user names to possibly hide his/her true identity, and to associate all activities of multiple instances of the same individual to the same networkwide user. we present the dra algorithm and a sketch of its proof under an initial set of simplifying albeit realistic assumptions. later, we relax these assumptions to accommodate pragmatic aspects such as missing or delayed "reports", clock skew, tampered "reports", etc. we believe that such algorithms will have widespread applications in the future, particularly in intrusion-detection systems. calvin ko deborah a. frincke terrance goan todd heberlein karl levitt biswanath mukherjee christopher wee the cedar system and an initial performance study in this paper, we give an overview of the cedar multiprocessor and present recent performance results. these include the performance of some computational kernels and the perfect benchmarks. we also present a methodology for judging parallel system performance and apply this methodology to cedar, cray ymp-8, and thinking machines cm-5. d. kuck e. davidson d. lawrie a. sameh c. q. zhu a. veidenbaum j. konicek p. yew k. gallivan w. jalby h. wijshoff r. bramley u. m. yang p. emrath d. padua r. eigenmann j. hoeflinger g. jaxon z. li t. murphy j. andrews architecture and protocols of stella: a european experiment on satellite interconnection of local area networks stella (satellite transmission experiment linking laboratories) is a european wide-band data transmission experiment. stella makes use of the european orbital test satellite (ots) which provides a 2 mb/s broadcast data transmission channel. the first phase of the stella project (stella/i) is summarized. the more important design characteristics of an improved version (stella/ii) of stella/i are emphasized. collaboration between stella/ii and universe projects in the framework of cost-11bis is then outlined. n. celandroni e. ferro l. lenzini b. m. segal k. s. olofsson dynamic qos allocation for multimedia ad hoc wireless networks in this paper, we propose an approach to support qos for multimedia applications in ad hoc wireless network. 
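the bookkeeping at the heart of the distributed recognition and accountability abstract above, associating activity under different user names with one networkwide identity, can be sketched with a toy union-find over (host, user) instances linked by observed logins; the host and user names below are hypothetical, and this is not the dra algorithm itself.

```python
class IdentityLinker:
    """Toy union-find over (host, user) instances: merging the pairs linked by
    observed logins lets activity under different names map to one
    network-wide identity. An illustration only, not the dra algorithm."""

    def __init__(self):
        self.parent = {}

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def report_login(self, src, dst):
        """A login observed from src=(host, user) to dst=(host, user) links them."""
        self.parent[self._find(src)] = self._find(dst)

    def same_user(self, a, b):
        return self._find(a) == self._find(b)

linker = IdentityLinker()
linker.report_login(("alpha", "alice"), ("beta", "al"))
linker.report_login(("beta", "al"), ("gamma", "root"))
print(linker.same_user(("alpha", "alice"), ("gamma", "root")))   # True
```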
an ad hoc network is a collection of mobile stations forming a temporary network without the aid of any centralized coordinator and is different from cellular networks which require fixed base stations interconnected by a wired backbone. it is useful for some special situations, such as battlefield communications and disaster recovery. the approach we provide uses the csma/ca medium access protocol and additional reservation and control mechanisms to guarantee quality of service in the ad hoc network system. the reason we choose the csma protocol instead of other mac protocols is that it is used in most current wireless lan products. via qos routing information and a reservation scheme, network resources are dynamically allocated to individual multimedia application connections. hsiao-kuang wu pei-hung chuang resources section: web sites michele tepper heterogeneous computer architecture this paper reviews computer architecture as it has progressed from the stand-alone computer to the computer system to computer networks. the degree of success of an architecture can be measured by the number of different implementations made of it. an architecture is ultimately limiting in some way, thereby spawning specialized units, called algorithm boxes, or special-purpose computers. the author takes the view that this is a beneficial trend, but one that needs architectural guidance. an architecture is valuable if designers can build, and programmers can use, the resulting system. the paper reviews current trends that might be described as the development of computer architecture by "default". the particular case of local computer networks is discussed. james e. thornton the concept of relevant time scales and its application to queuing analysis of self-similar traffic (or is hurst naughty or nice?) arnold l. neidhardt jonathan l. wang quality adaptation for congestion controlled video playback over the internet streaming audio and video applications are becoming increasingly popular on the internet, and the lack of effective congestion control in such applications is now a cause for significant concern. the problem is one of adapting the compression without requiring video-servers to re-encode the data, and fitting the resulting stream into the rapidly varying available bandwidth. at the same time, rapid fluctuations in quality will be disturbing to the users and should be avoided. in this paper we present a mechanism for using layered video in the context of unicast congestion control. this quality adaptation mechanism adds and drops layers of the video stream to perform long-term coarse-grain adaptation, while using a tcp-friendly congestion control mechanism to react to congestion on very short timescales. the mismatches between the two timescales are absorbed using buffering at the receiver. we present an efficient scheme for the distribution of buffering among the active layers. our scheme allows the server to trade short-term improvement for long-term smoothing of quality. we discuss the issues involved in implementing and tuning such a mechanism, and present our simulation results. reza rejaie mark handley deborah estrin distributed orthogonal factorization we describe several algorithms for computing the orthogonal factorization on distributed memory multiprocessors. one of the algorithms is based on givens rotations, two others employ column householder transformations but with different communication schemes: broadcast and pipelined ring.
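to make the add/drop idea in the quality-adaptation abstract above (rejaie, handley, and estrin) concrete, here is a minimal python sketch of one possible receiver-buffer-driven policy; the layer rate, thresholds, names, and the policy itself are illustrative assumptions rather than the paper's actual mechanism.

    # illustrative sketch: add or drop video layers based on delivered bandwidth
    # and receiver buffer occupancy; all constants are invented for illustration.

    LAYER_RATE = 64.0   # kb/s consumed per active layer (assumed constant)
    ADD_MARGIN = 5.0    # seconds of buffered data required before adding a layer
    DROP_FLOOR = 1.0    # drop a layer if buffering falls below this many seconds

    def step(active_layers, buffered_kb, available_kbps, dt):
        """advance the playback simulation by dt seconds and return the new state."""
        consumed = active_layers * LAYER_RATE * dt   # data played out
        received = available_kbps * dt               # data delivered by congestion control
        buffered_kb = max(0.0, buffered_kb + received - consumed)
        buffered_sec = buffered_kb / (active_layers * LAYER_RATE) if active_layers else 0.0
        if buffered_sec < DROP_FLOOR and active_layers > 1:
            active_layers -= 1                       # coarse-grain quality reduction
        elif buffered_sec > ADD_MARGIN and available_kbps > (active_layers + 1) * LAYER_RATE:
            active_layers += 1                       # enough cushion and bandwidth: add a layer
        return active_layers, buffered_kb

    # toy run: a short bandwidth dip is absorbed by the buffer without dropping a layer
    layers, buf = 2, 2 * LAYER_RATE * 6.0
    for kbps in [200, 200, 90, 90, 200, 200]:
        layers, buf = step(layers, buf, kbps, dt=1.0)
        print(layers, round(buf, 1))

the buffer thresholds are exactly the timescale split described in the abstract: the congestion controller reacts per round-trip, while layers change only when the buffer shows a sustained surplus or deficit.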
a fourth algorithm is a hybrid; it uses householder transformations and givens rotations in separate phases. we present expressions for the arithmetic and communication complexity of each algorithm. the algorithms were implemented on an ipsc-286 and the observed times agree well with our analyses. a. pothen p. raghavan analysis of the autonomous system network topology mapping the internet is a major challenge for network researchers. it is the key to building a successful modeling tool able to generate realistic graphs for use in networking simulations. in this paper we provide a detailed analysis of the inter-domain topology of the internet. the collected data and the resulting analysis began in november 1997 and cover a period of two and a half years. we give results concerning major topology properties (number of nodes and edges, average degree and distance, routing policy, etc.) and main distributions (degree, distance, etc.). we also present many results about the trees of this network. the evolution of these properties is reviewed and major trends are highlighted. we propose some empirical laws that match this current evolution. four new power-laws concerning the number of shortest paths between node pairs and the tree size distribution are provided with their detailed validation. damien magoni jean jacques pansiot atm network: goals and challenges asynchronous transfer mode (atm) can provide both circuit and packet-switching services with the same protocol, and this integration of circuit and packet-switching services can be beneficial in many ways. four major benefits of the atm technique are considered here: scalability, statistical multiplexing, traffic integration, and network simplicity. in the course of achieving those benefits, atm makes compromises. in this article we assess the benefits and ensuing penalties of these compromises and put them into perspective. b. g. kim p. wang replication requirements in mobile environments replication is extremely important in mobile environments because nomadic users require local copies of important data. however, today's replication systems are not "mobile-ready". instead of improving the mobile user's environment, the replication system actually hinders mobility and complicates mobile operation. designed for stationary environments, the replication services do not and cannot provide mobile users with the capabilities they require. replication in mobile environments requires fundamentally different solutions than those previously proposed, because nomadicity presents a fundamentally new and different computing paradigm. here we outline the requirements that mobility places on the replication service, and briefly describe roam, a system designed to meet those requirements. david ratner peter reiher gerald j. popek geoffrey h. kuenning the ultimate ultimate risc glenn w. griffin on the measured behaviour of an x.25 packet switching subnetwork the aim of this paper is to present some measurements on the performance of an x.25 packet switching subnetwork and of a typical x.25 dte-dce interface. the communication parameters considered are the window size of the link layer flow control, the length and contents of the packets and the internode distance. the collected measurements on the x.25 subnetwork are compared with others obtained through different versions of dte connected to the same subnetwork.
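as background for the distributed orthogonal factorization abstract above (pothen and raghavan), the following is a plain sequential householder qr kernel in python; it is only the per-node building block, not the broadcast or pipelined-ring distribution schemes the paper actually studies.

    # sequential householder qr: a = q @ r with q orthogonal and r upper triangular.
    import numpy as np

    def householder_qr(a):
        a = np.array(a, dtype=float)
        m, n = a.shape
        q = np.eye(m)
        r = a.copy()
        for k in range(min(m - 1, n)):
            x = r[k:, k]
            norm_x = np.linalg.norm(x)
            if norm_x == 0.0:
                continue
            v = x.copy()
            v[0] += np.copysign(norm_x, x[0])            # sign choice avoids cancellation
            v /= np.linalg.norm(v)
            r[k:, :] -= 2.0 * np.outer(v, v @ r[k:, :])  # apply reflector h = i - 2vv^t to r
            q[:, k:] -= 2.0 * np.outer(q[:, k:] @ v, v)  # accumulate h into q
        return q, r

    a = np.random.rand(6, 4)
    q, r = householder_qr(a)
    print(np.allclose(a, q @ r), np.allclose(q.T @ q, np.eye(6)))  # True True

roughly speaking, each column step above becomes a communication event in the distributed setting, which is why the abstract contrasts broadcast with a pipelined ring for spreading the reflectors.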
some considerations are presented about the time spent by the packets inside the dte and consequently about the efficiency of the dte version used in these experiments. alberto faro gaetano messina franco martinucci secure audit logs to support computer forensics in many real-world applications, sensitive information must be kept in log files on an untrusted machine. in the event that an attacker captures this machine, we would like to guarantee that he will gain little or no information from the log files and to limit his ability to corrupt the log files. we describe a computationally cheap method for making all log entries generated prior to the logging machine's compromise impossible for the attacker to read, and also impossible to modify or destroy undetectably. bruce schneier john kelsey equal resource sharing scheduling for pcs data services for high speed mobile communication applications, the data rate can be increased by using multiple channels (or time slots) instead of one channel. to reduce the high blocking rate of multiple-channel assignment, flexible resource allocation strategies have been proposed. this paper proposes the equal resource sharing allocation scheme (ersa scheme) for flexible resource allocation. the ersa scheme dynamically averages the allocated resource to the call requests based on the number of calls in a base station. the scheme accommodates the maximum number of requests while providing acceptable quality to the admitted requests. we developed an analytic model to investigate the performance of ersa, and conducted simulation experiments to validate the analytic model. we define satisfaction indication si as the performance measurement of the resource allocation algorithm. the experiment results indicate that the ersa scheme outperforms other resource allocation algorithms proposed in our previous study. jeu-yih jeng yi-bing lin are wires plannable? a simple approach to global wire delays leads to the conclusion that within a few years interconnect is going to demand an overwhelming portion of the chip estate. in addition, memory-to-compute ratio for area is growing very fast. these observations are added to the already considerable set of arguments for breaking radically with traditional silicon design paradigms. ralph h. j. m. otten giuseppe s. garcea boosting the performance of hybrid snooping cache protocols previous studies of bus-based shared-memory multiprocessors have shown hybrid write-invalidate/write-update snooping protocols to be incapable of providing consistent performance improvements over write-invalidate protocols. in this paper, we analyze the deficiencies of hybrid snooping protocols under release consistency, and show how these deficiencies can be dramatically reduced by using write caches and read snarfing. our performance evaluation is based on program-driven simulation and a set of five scientific applications with different sharing behaviors including migratory sharing as well as producer-consumer sharing. we show that a hybrid protocol, extended with write caches as well as read snarfing, manages to reduce the number of coherence misses by between 83% and 95% as compared to a write-invalidate protocol for all five applications in this study. in addition, the number of bus transactions is reduced by between 36% and 60% for four of the applications and by 9% for the fifth application.
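the secure-audit-log abstract above (schneier and kelsey) does not spell out its construction here, so the following python sketch shows only a generic forward-secure logging idea in the same spirit, not the authors' scheme: each entry is mac'ed under a key derived from the current secret, after which the secret is replaced by its hash and the old value erased, so entries written before a compromise cannot be silently altered.

    # generic forward-secure audit log sketch (not the schneier/kelsey construction).
    # the per-entry key evolves by hashing, so a later compromise does not expose earlier keys.
    import hashlib, hmac

    class ForwardSecureLog:
        def __init__(self, initial_secret: bytes):
            self._key = hashlib.sha256(initial_secret).digest()
            self.entries = []                  # list of (message, mac) pairs

        def append(self, message: bytes):
            tag = hmac.new(self._key, message, hashlib.sha256).digest()
            self.entries.append((message, tag))
            # evolve and erase the old key: past entries can no longer be re-mac'ed
            self._key = hashlib.sha256(b"evolve" + self._key).digest()

    def verify(initial_secret: bytes, entries):
        """a trusted verifier that still holds the initial secret replays the key schedule."""
        key = hashlib.sha256(initial_secret).digest()
        for message, tag in entries:
            if not hmac.compare_digest(hmac.new(key, message, hashlib.sha256).digest(), tag):
                return False
            key = hashlib.sha256(b"evolve" + key).digest()
        return True

    log = ForwardSecureLog(b"secret shared with the trusted verifier")
    log.append(b"user alice logged in")
    log.append(b"file /etc/passwd read")
    print(verify(b"secret shared with the trusted verifier", log.entries))  # True

confidentiality of past entries would additionally require encrypting each record under a key derived the same way; the sketch keeps only the tamper-evidence half to stay short.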
because of the small implementation cost of the hybrid protocol and the two extensions, we believe that this combination is an effective approach to boost the performance of bus-based multiprocessors. fredrik dahlgren the application of token rings to local networks of personal computers it is the thesis of this short note that a network which operates by passing a control token sequentially around a loop, much like that of the distributed computing system [farb 73, farb 75], is appropriate for at least two forthcoming kinds of local personal computer networks. high end personal scientific computers will probably require very high bandwidth (50 mhz) local networks, a requirement well suited for a token type ring network. at the other end of the spectrum, a low bandwidth (300 baud) network for low cost home or business personal computers may be established that uses the conventional phone system and a variation of the token ring for arbitration. r. l. gordon performance of cached dram organizations in vector supercomputers drams containing cache memory are studied in the context of vector supercomputers. in particular, we consider systems where processors have no internal data caches and memory reference streams are generated by vector instructions. for this application, we expect that cached drams can provide high bandwidth at relatively low cost. we study both drams with a single, long cache line and with smaller, multiple cache lines. memory interleaving schemes that increase data locality are proposed and studied. the interleaving schemes are also shown to lead to non-uniform bank accesses, i.e. hot banks. this suggests there is an important optimization problem involving methods that increase locality to improve performance, but not so much that hot banks diminish performance. we show that for uniprocessor systems, both types of cached drams work well with the proposed interleave methods. for multiprogrammed multiprocessors, the multiple cache line drams work better. w.-c. hsu j. e. smith u-net: a user-level network interface for parallel and distributed computing (includes url) t. von eicken a. basu v. buch w. vogels transmission-efficient routing in wireless networks using link-state information the efficiency with which the routing protocol of a multihop packet-radio network uses transmission bandwidth is critical to the ability of the network nodes to conserve energy. we present and verify the source-tree adaptive routing (star) protocol, which we show through simulation experiments to be far more efficient than both table-driven and on-demand routing protocols proposed for wireless networks in the recent past. a router in star communicates to its neighbors the parameters of its source routing tree, which consists of each link that the router needs to reach every destination. to conserve transmission bandwidth and energy, a router transmits changes to its source routing tree only when the router detects new destinations, the possibility of looping, or the possibility of node failures or network partitions. simulation results show that star is an order of magnitude more efficient than any topology-broadcast protocol proposed to date and depending on the scenario is up to six times more efficient than the dynamic source routing (dsr) protocol, which has been shown to be one of the best performing on-demand routing protocols. j.j.
garcia-luna-aceves marcelo spohn power control for link quality protection in cellular ds-cdma networks with integrated (packet and circuit) services deepak ayyagari anthony ephremides the flex/32 multicomputer nicholas matelan effective bandwidth in wireless atm networks jeong geun kim marwan krunz on protective buffer policies israel cidon roch guerin asad khamisy an ethernet compatible low cost/high performance communication solution the lan-hub is a new local area network designed to combine the properties of several existing lan standards to provide highly reliable communication at a relatively lower cost per station, improve network capacity/delay performance and increase the lan user's flexibility in configuring his network. the lan- hub network is configured around the codex 4320 lan-hub communication controllers which allow up to eight ethernet/ieee 802.3 stations to transparently share one network transceiver or rf modem. each lan-hub controller executes a fair, collision free, arbitration among its local stations attached to its lan ports, using a patented collision avoidance algorithm. the hub's rear panel further provides a network port which can be attached to a standard 802.3 transceiver, to another hub or terminated in a loop back mode. the hubs, thus, allow stations to be organized in a standalone star network, in a "cascading" star configuration, or as an integrated bus/star network. when organized as star or cascaded star networks, the hub controllers provide highly reliable and controlled, collision free, bandwidth allocation suitable for applications such as cpu room clusters. alternately, in a bus configuration the hubs support an ethernet type bus network, with random access channel control. in the latter configuration the hubs provide a lower cost/higher performance solution for bursty traffic users, since they reduce cabling requirements, decrease cost by sharing each transceiver among several lan stations and improve network throughput/delay performance by eliminating local collisions. the lan-hub is thus effective in creating a local communication solution with a significant degree of flexibility in choosing the network layout and the type of service, random or deterministic, provided to the network users. i. chlamtac a. herman estimation of the cell loss ratio in atm networks with a fuzzy system and application to measurement-based call admission control brahim bensaou shirley t. c. lam hon-wai chu danny h. k. tsang predicting rf coverage in large environments using ray-beam tracing and partitioning tree represented geometry we present a system for efficient prediction of rf power distribution in site specific environments using a variation of ray tracing, which we called ray- beam tracing. the simulation results were validated against measured data for a number of large environments with good statistical correlation between the two. we represent geometric environments in full 3d which facilitates rooftop deployment along with any other 3d locations. we use broadcast mode of propagation, whose cost increases more slowly with an increase in the number of receiving bins. the scheme works well both for indoor and outdoor environments. simple ray tracing has a major disadvantage in that adjacent rays from a transmitter diverge greatly after large path lengths due to multiple reflections, such that arbitrarily large geometric entities could fall in between these rays. this results in a sampling error problem. 
the error increases arbitrarily as the incident angle approaches 90 degrees. the problem is addressed by introducing the notion of beams while retaining the simplicity of rays for intersection calculations. a beam is adaptively split into child beams to limit the error. a major challenge for computational efficiency is to quickly determine the closest ray-surface intersection. we achieve this by using partitioning trees, which allow representation of arbitrarily oriented polygonal environments. we also use partitioning trees for our full 3d interactive visualization along with interactive placement of transmitters, receiving bins, and querying of power. a. rajkumar b. f. naylor f. feisullin l. rogers internetworking using switched multi-megabit data service in tcp/ip environments david m. piscitello michael kramer multiprocessor systems with general breakdowns and repairs (extended abstract) ram chakka isi mitrani a cost-effective approach to implement a long instruction word microprocessor yen-jen oyang bor-ting chang shu-may lin exploiting data parallelism in signal processing on a dataflow machine this paper will show that the massive data parallelism inherent to most signal processing tasks may be easily mapped onto the parallel structure of a data flow machine. a special system called structflow has been designed to optimize the static data flow model for hardware efficiency and low latency. the same abstractions from the general purpose data flow model that lead to a quasi systolic operation of the processing elements make explicit flow control of the data tokens as they pass through the arcs of the flow graph obsolete. we will describe the architecture of the system and discuss the restrictions on the structure of the flow graphs. p. nitezki architecture of a message-driven processor we propose a machine architecture for a high-performance processing node for a message-passing, mimd concurrent computer. the principal mechanisms for attaining this goal are the direct execution and buffering of messages and a memory-based architecture that permits very fast context switches. our architecture also includes a novel memory organization that permits both indexed and associative accesses and that incorporates an instruction buffer and message queue. simulation results suggest that this architecture reduces message reception overhead by more than an order of magnitude. w. j. dally l. chao a. chien s. hassoun w. horwat j. kaplan p. song b. totty s. wills the post-pc era: it's about the services-enabled new internet many believe that the post-pc era is about new information appliances. we disagree. it is about a new kind of services-enabled internet, able to deliver distributed, scalable, high performance and adaptable applications to diverse access networks and terminal devices. the focus of the internet has been its protocols, primarily tcp/ip and http. with the rise of the world wide web, and the explosion in the number of people with access to the internet, the technical community has shifted towards new applications and network-based services to support them. we contend that while the protocol stack is well understood, the service architecture underlying these applications is evolving rapidly and somewhat chaotically. we review the evolution of the internet, with a particular emphasis on the rapid developments of the last five years, and propose a candidate service architecture based on the emerging structure of the new internet industry.
we divide the history of the internet into two periods: before the rapid spread of the world wide web (www) and the emergence of graphical web browsers, and afterwards. we set the pivotal year to be 1995, even though ncsa mosaic appeared in 1993 and the www in 1991. the crucial event was the final privatization of nsfnet, enabling the formation of today's commercial internet. this provided the critical flame that allowed commercial use of the internet to ignite, brought on by the development of a real market: the simultaneous explosion in the number of individuals with access to the internet (in part, due to lower cost pcs allowing higher home penetration rates) and the amount of increasingly useful content embedded in the web. as late as 1994, the answer to "what is the internet?" was "anything that runs tcp/ip protocols." its essential feature was that applications run on a small number of well engineered protocols and apis, thus hiding the complexities of the underlying access networks. the internet successfully spans technology based on twisted pair, coax cable, telephone, fiber optic, packet radio, and satellite access. this conceptual architecture was so essential to the internet's success that it was dubbed the "narrow- waist" model. a raging debate developed in how best to converge data, television, and telephony networks to achieve a "national information infrastructure." rather unexpectedly, the killer application was not video-on-demand but information at your fingertips delivered via the www. the clear trend is to move telephony and video onto the internet as particular applications, and to enhance the underlying networking technology to better support these. virtually all of the long distance telephone carriers have already announced that they are migrating their networks to become ip-based. however, the most rapid innovation has been in the proliferation of a new service model on top of the internet's narrow waist. this is being driven by the privatization of the internet, leading to intense competition in access and backbone connectivity services (e.g., isps, nsps, hosting, colocation, overlay networks, etc.), and the emergence of significant web-based businesses: portals, content delivery, e-commerce, and in the near future, entertainment. while much of the current development is being driven by consumer access to web content and business-to-consumer commerce, there is also significant focus on using internet-based services to integrate enterprise business processes. the internet is rapidly becoming the means that allows a corporation to outsource many elements of its information technology operations, not simply to present a web presence to consumers or business partners. elements of this emerging service model include: network service providers (nsps), internet service providers (isps), web hosting services, applications service providers (asps), applications infrastructure providers (aips), content delivery services, and so on. some of these services are oriented towards corporate users (the enterprise market), others to isps, and still others to content developers. this presentation further elaborates on the service model we see emerging, within the context of the internet as it exists in mid-2000. we also make predictions of how this may evolve from this point forward. our focus is on the rich set of services above basic connectivity. the presentation is organized as follows. we set the context by giving a brief history of the internet. 
our particular focus is on the last five years, which have not been well documented in the technical literature. next, we present the emerging internet service architecture. we then use case studies to illustrate this architecture. finally, we give some predictions of how this will evolve in the future. r. h. katz editorial christopher rose ramesh sitaraman a simple lan performance measure jan vis an effective low power design methodology based on interconnect prediction the demand for low power digital systems has motivated significant research. however, the power estimation at the logic level is a difficult task because interconnect plays a role in determining the total chip power dissipation. as a result, the power optimization at the logic level may be inaccurate due to the lack of physical place and route information. in this paper, we will present an effective low power design methodology based on interconnect prediction at the logic level. the proposed design methodology includes the method to create wire load models and the design procedures to select the appropriate wire load model during synthesis. the main distinction of the proposed approach is that it constructs physical hierarchy during the synthesis stage. by taking advantage of wire load models, the proposed design methodology is able to develop low power digital systems while speeding up the design process. experimental data shows that this design methodology achieved very good results. shih-hsu huang the world-wide web tim berners-lee robert cailliau ari luotonen henrik frystyk nielsen arthur secret receiver-initiated busy-tone multiple access in packet radio networks the aloha and carrier sense multiple access (csma) protocols have been proposed for packet radio networks (prn). however, csma/cd, which gives superior performance and has been successfully applied in local area networks, cannot be readily applied in prn since the locally generated signals will overwhelm a remote transmission, rendering it impossible to tell whether a collision has occurred or not. in addition, csma and csma/cd suffer from the "hidden node" problem in a multihop prn. in this paper, we develop the receiver-initiated busy-tone multiple access protocol to resolve these difficulties. both fully connected and multihop networks are studied. the busy tone serves as an acknowledgement and prevents conflicting transmissions from other nodes, including "hidden nodes". c. wu v. li flow-control machines: the structured execution architecture (sxa) j. m. terry a new fault tolerant distributed mutual exclusion algorithm r. l. n. reddy b. gupta pradip k. srimani phs terminating call control this paper describes terminating call control for the personal handyphone system (phs), which is achieved by simultaneously forwarding terminating call control messages to multiple interfaces. the network assisted reforwarding (nar) scheme, in which a terminating call control message is reforwarded after a certain period of time to improve the likelihood of its receipt by the personal station (ps), is described. the nar scheme is modeled and analyzed. this paper also proposes an enhanced nar (e-nar) scheme in which the terminating call control messages are reforwarded only if the response is not received within a certain time-period. the e-nar scheme is modeled, analyzed and compared with the nar scheme. finally, a secure control method that prevents fraud for terminating call control is proposed.
shigefusa suzuki takeshi ihara yoshiaki shikata source-oriented topology aggregation with multiple qos parameters in hierarchical networks in this paper, we investigate the problem of topology aggregation (ta) for scalable, qos-based routing in hierarchical networks. ta is the process of summarizing the topological information of a subset of network elements. this summary is flooded throughout the network and used by various nodes to determine appropriate routes for connection requests. a key issue in the design of a ta scheme is the appropriate balance between compaction and the corresponding routing performance. the contributions of this paper are twofold. first, we introduce a source-oriented approach to ta, which provides better performance than existing approaches. the intuition behind this approach is that the advertised topology-state information is used by source nodes to determine tentative routes for connection requests. accordingly, only information relevant to source nodes needs to be advertised. we integrate the source-oriented approach into three new ta schemes that provide different trade-offs between compaction and accuracy. second, we extend our source-oriented approach to multi-qos-based ta. a key issue here is the determination of appropriate values for the multiple qos parameters associated with a logical link. two new approaches to computing these values are introduced. extensive simulations are used to evaluate the performance of our proposed schemes. turgay korkmaz marwan krunz a scalable wireless virtual lan zhao liu malathi veeraraghavan kai y. eng an integrated approach to enterprise computing architectures george s. nezlek hemant k. jain derek l. nazareth integration of synchronous and asynchronous traffic on the metaring and its performance study yoram ofek khosrow sohraby ho-ting wu dynamic channel allocation for linear macrocellular topology mostafa a. bassiouni chun-chin fang high-speed processing schemes for summation type and iteration type vector instructions on hitachi supercomputer s-820 system the hitachi supercomputer s-820 system has been developed as hitachi's top end supercomputer. it is also rated as one of the most powerful supercomputers in the world. among the vector instructions which supercomputers support, summation type vector instructions and iteration type vector instructions are not suitable for parallel processing, since elements to be processed are not independent in these instructions. the s-820 employs high-speed processing schemes for summation type vector instructions and iteration type vector instructions; the performance of summation type instructions is enhanced by a high-speed post-processing scheme and the performance of iteration type instructions is enhanced by a high-speed parallelizing scheme for iteration arithmetic. thanks to these schemes, the execution speeds for kernel 3 and kernel 4 of the lawrence livermore laboratory's 24 kernels become 838.7 mflops and 258.5 mflops respectively, and those for kernel 5, kernel 6 and kernel 11 of the lawrence livermore laboratory's 14 kernels become 114.6 mflops, 111.8 mflops and 98.4 mflops, respectively. h. wada k. ishil m. fukagawa h. murayama s. kawabe a dynamic request set based algorithm for mutual exclusion in distributed systems ye-in chang efficient implementation of experimental design systems george d. m.
ross resource management in a distributed internetwork environment the resource management system is designed to support location-transparent access to resources and to improve performance in a distributed internetwork environment. in this environment, access to one of several machines that have a given resource is determined by the effective bandwidth and reliability of internetwork communications, the ability to interact with the machines offering the resources, and the load on those machines. using these criteria, the resources management system enables applications to determine and obtain access to the "best" copy of a resource. should the resource become unavailable from one machine, the resource management system can dynamically direct the applications to another machine known to offer the same resource. this paper discusses some of the problems that motivated the design, an implementation for a unix workstation environment, and future directions for the resource management system. g. skinner j. m. wrabetz louis schreir a characterization of eventual byzantine agreement joseph y. halpern yoram moses orli waalrts a quantitative approach to dynamic networks baruch awerbuch oded godlreich amir herzberg performance analysis of circuit switching, baseline interconnection networks performance evaluation, using both analytical and simulation models, of circuit switching baseline networks is presented. two configurations of the baseline networks, single and dual, are evaluated. in each configuration, two different conflict resolution strategies, drop and hold, are tried to see the performance difference. our analytical models are based on a more realistic assumption. new analyses are given and are verified by simulation results. in single network configuration, it is shown that the drop strategy is better than the hold strategy in the case that the data transfer time is longer than 10 cycles under a high request rate. in the dual network configuration, five different communication strategies are investigated and the optimum performance level is shown to be dependent on the length of the data transfer time. manjai lee chuan-lin wu the shadow cluster concept for resource allocation and call admission in atm- based wireless networks david a. levine ian f. akyildiz mahmoud naghshineh source time scale and optimal buffer/bandwidth tradeoff for heterogeneous regulated traffic in a network node francesco lo presti zhi-li zhang jim kurose don towsley overallocation in a virtual circuit computer network in this paper, we study the end-to-end control through virtual circuits in a computer network built following the x.25 recommendations. we develop a mathematical model to obtain the maximum overallocation of node buffers, in order for the probability of overflow not to exceed a given value. a. kurinckx g. pujolle scalable wdm access network architecture based on photonic slot routing imrich chlamtac viktória elek andrea fumagalli csaba szabó impossibility of distributed consensus with one faulty process the consensus problem involves an asynchronous system of processes, some of which may be unreliable. the problem is for the reliable processes to agree on a binary value. in this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. by way of contrast, solutions are known for the synchronous case, the "byzantine generals" problem. michael j. fischer nancy a. lynch michael s. 
paterson real time character scaling and rotation this paper describes a real time signal processing subsystem used to perform character scaling and rotation. first, the algorithms used in the subsystem are described. the algorithms were developed for performing sorting decisions on mail for the usps. the computational complexity of the algorithms implemented in the subsystem is then reviewed, and finally the implementation of the algorithms in discrete logic, dsp's, and fpga's is discussed. david l. andrews randy brown charles caldwell andrew wheeler information technology and physical space henry c. lucas disconnected operation for heterogeneous servers dorota m. huizinga patrick mann failure detectors in omission failure environments danny dolev roy friedman idit keidar dahlia malkhi a comparison of mpi, shmem and cache-coherent shared address space programming models on the sgi origin2000 hongzhang shan jaswinder pal singh multiplexing issues in communication system design this paper considers some of the multiplexing issues in communication system design by examining overall system issues. in particular, we distinguish physical multiplexing of resources from logical multiplexing of streams. both physical-resource multiplexing and logical multiplexing determine the service that can be provided by a communication system. we also discuss two issues affected by logical multiplexing: flow control and the relationship between control and data streams of a connection. we conclude that the granularity of physical resource sharing must be fine enough to meet the jitter and latency constraints of demanding applications. also, high speed communication systems should restrict their logical multiplexing to layer 3. c. c. feldmeier multicast contention resolution with single-cycle windowing using content addressable fifo's kenneth j. schultz p. glenn gulak lats: a load-adaptive threshold scheme for tracking mobile users zohar naor hanoch levy notable abbreviations in telecommunications hans w. barz performance analysis of replicated banyan switches under mixed traffic patterns g. corazza g. galletti c. raffaelli conferences marisa campbell the cost of messages jim gray binary translation and architecture convergence issues for ibm system/390 we describe the design issues in an implementation of the esa/390 architecture based on binary translation to a very long instruction word (vliw) processor. during binary translation, complex esa/390 instructions are decomposed into instruction "primitives" which are then scheduled onto a wide-issue machine. the aim is to achieve high instruction level parallelism due to the increased scheduling and optimization opportunities which can be exploited by binary translation software, combined with the efficiency of long instruction word architectures. a further aim is to study the feasibility of a common execution platform for different instruction set architectures, such as esa/390, rs/6000, as/400 and the java virtual machine, so that multiple systems can be built around a common execution platform. michael gschwind kemal ebcioglu erik altman sumedh sathaye the hardware architecture of the crisp microprocessor d. r. ditzel h. r. mclellan a. d. berenbaum diamond a digital analyzer and monitoring device this paper describes the design and application of a special purpose computer system. it was developed as an internal tool by a computer manufacturer, and has been used in solving a variety of measurement problems encountered in computer performance evaluation.
james h. hughes a scalable web cache consistency architecture the rapid increase in web usage has led to dramatically increased loads on the network infrastructure and on individual web servers. to ameliorate these mounting burdens, there has been much recent interest in web caching architectures and algorithms. web caching reduces network load, server load, and the latency of responses. however, web caching has the disadvantage that the pages returned to clients by caches may be _stale_, in that they may not be consistent with the version currently on the server. in this paper we describe a scalable web cache consistency architecture that provides fairly tight bounds on the staleness of pages. our architecture borrows heavily from the literature, and can best be described as an _invalidation_ approach made scalable by using a caching hierarchy and application-level multicast routing to convey the invalidations. we evaluate this design with calculations and simulations, and compare it to several other approaches. haobo yu lee breslau scott shenker an inclusive session level protocol for distributed applications the design of an inclusive session level protocol targeted at distributed applications on local networks is presented. the session protocol is motivated by the observation that application requirements, as well as network characteristics, for current and future distributed systems are not well matched to available protocol suites. the nature of services provided in the proposed protocol is derived from typical application requirements, and includes group communications, synchronization and recovery, and integrated distributed primitives such as mutual exclusion and consensus. the protocol is also influenced by the characteristics of typical local networks that support sequenced delivery with data integrity, have low latency and high throughput, and are well suited to global addressing schemes. initial experiences with a test implementation indicate that high level service support is valuable and can be provided with good performance. v. s. sunderam dynamic, distributed resource configuration on sw-banyans john feo roy jenevein j. c. browne microsequencer architecture supporting arbitrary branching up to 2m targets v. dvorak a scalable wireless virtual lan this paper presents a wireless virtual local area network (wvlan) to support mobility in ip-over-atm local area networks. mobility is handled by a joint atm-layer handoff for connection rerouting and mac-layer handoff for location tracking, such that the effects of mobility are localized and transparent to the higher-layer protocols. different functions, such as address resolution protocol (arp), mobile location, and atm connection admission are combined to reduce protocol overhead and front-end delay for connectionless packet transmission in connection-oriented atm networks. the proposed wvlan, through the use of atm technology, provides a scalable wireless virtual lan solution for ip mobile hosts. zhao liu malathi veeraraghavan kai y. eng some principles for designing a wide-area wdm optical network biswanath mukherjee dhritiman banerjee s. ramamurthy amarnath mukherjee escrow techniques for mobile sales and inventory applications we address the design of architectures and protocols for providing mobile users with integrated personal information services and applications (pisa), such as personalized news and financial information, and mobile database access.
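the scalable web cache consistency abstract above (yu, breslau, and shenker) rests on an invalidation approach pushed down a caching hierarchy; the toy python sketch below shows only the bookkeeping such a hierarchy needs, with each node remembering which children hold a page and forwarding invalidations just to them. the class and method names are invented for illustration, and the real system's multicast delivery and staleness bounds are not modeled.

    # toy invalidation hierarchy: each cache tracks which children hold a page
    # and pushes invalidations only along those branches.

    class CacheNode:
        def __init__(self, name):
            self.name = name
            self.store = {}          # url -> content
            self.interested = {}     # url -> set of child nodes holding it
            self.parent = None

        def add_child(self, child):
            child.parent = self

        def fetch(self, url, origin):
            """fill the caches on the path from this node up to the origin server."""
            if url in self.store:
                return self.store[url]
            content = self.parent.fetch(url, origin) if self.parent else origin[url]
            if self.parent:
                self.parent.interested.setdefault(url, set()).add(self)
            self.store[url] = content
            return content

        def invalidate(self, url):
            """drop the page here and propagate the invalidation down the hierarchy."""
            self.store.pop(url, None)
            for child in self.interested.pop(url, set()):
                child.invalidate(url)

    origin = {"/index.html": "v1"}
    root, leaf = CacheNode("root"), CacheNode("leaf")
    root.add_child(leaf)
    print(leaf.fetch("/index.html", origin))   # 'v1', now cached at leaf and root
    origin["/index.html"] = "v2"
    root.invalidate("/index.html")             # the origin notifies the top of the hierarchy
    print(leaf.fetch("/index.html", origin))   # 'v2' after a fresh fetch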
we present a system architecture for delivery of pisa based on replicated distributed servers connected to users via a personal communications services (pcs) network. the pisa architecture partitions the geographical coverage area into service areas, analogous to pcs registration areas, each of which is served by a single local server. when a user moves from one service area to another, the service is provided by the new local server. this is accomplished by a service handoff, analogous to a pcs call handoff, which entails some context information transfer from the old to the new server. we focus on the mobile sales and inventory application as an example of a pisa with a well- defined market segment. we design a database management protocol for supporting both mobile and stationary salespersons. our design uses the site- transaction escrow method, thus allowing faster responses to mobile clients, minimizing the amount of context information which must be transferred during a service handoff, and allowing mobile clients to operate in disconnected mode by escrowing items on their local disks. we develop a formal model for reasoning about site-transaction escrow, and develop a scheme for performing dynamic resource reconfiguration which avoids the need for time-consuming and costly database synchronization operations (i.e., a two-phase commit) when the mobile sales transaction completes. a further refinement to the scheme avoids an n-way two-phase commit during resource reconfiguration operations, replacing it with several simpler two-phase commits. narayanan krishnakumar ravi jain commercial multiprocessors (title only) j. rattner a new multicasting-based architecture for internet host mobility jayanth mysore vaduvur bharghavan an architectural framework for migration from cisc to higher performance platforms we describe a novel architectural framework that allows software applications written for a given complex instruction set computer (cisc) to migrate to a different, higher performance architecture, without a significant investment on the part of the application user or developer. the framework provides a hardware mechanism for seamless switching between two instruction sets, resulting in a machine that enhances application performance while keeping the same program behavior (from a user perspective). high execution speed on migrated applications is achieved through automated translation of the object code of one machine to that of the other, using advanced global optimization and scheduling techniques. issues affecting application behavior, such as precise exceptions, as well as self-modifying code, are addressed. relaxation of full compatibility on these issues lead to further possible performance gains, encouraging applications to adopt the newer architecture. the proposed framework offers a path for moving from complex instruction set computers (ciscs) to newer architectures, such as reduced instruction set computers (riscs), superscalars, or very long instruction word (vliw) machines, while protecting the extensive economic investment represented by existing software. to illustrate our approach, we show how system code written (and compiled) for the ibm system/390 can yield fine-grain parallelism, as it is targeted for execution by a vliw machine, with encouraging performance results. gabriel m. silberman kemal ebcioglu address resolution for an intelligent filtering bridge running on a subnetted ethernet system g. 
parr efficient mapping of algorithms to single-stage interconnections in this paper, we consider the problem of restructuring or transforming algorithms to efficiently use a single-stage interconnection network. all algorithms contain some freedom in the way they are mapped to a machine. we use this freedom to show that superior interconnection efficiency can be obtained by implementing the interconnections required by the algorithm within the context of the algorithm rather than attempting to implement each request individually. the interconnection considered is the bidirectional shuffle- shift. it is shown that two algorithm transformations are useful for implementing several lower triangular and tridiagonal system algorithms on the shuffle-shift network. of the 14 algorithms considered, 85% could be implemented on this network. the transformations developed to produce these results are described. they are general-purpose in nature and can be applied to a much larger class of algorithms. robert h. kuhn data buffering: run-time versus compile-time support data-dependency, branch, and memory-access penalties are main constraints on the performance of high-speed microprocessors. the memory-access penalties concern both penalties imposed by external memory (e.g. cache) or by under utilization of the local processor memory (e.g. registers). this paper focuses solely on methods of increasing the utilization of data memory, local to the processor (registers or register-oriented buffers). a utilization increase of local processor memory is possible by means of compile-time software, run-time hardware, or a combination of both. this paper looks at data buffers which perform solely because of the compile-time software (single register sets); those which operate mainly through hardware but with possible software assistance (multiple register sets); and those intended to operate transparently with main memory implying no software assistance whatsoever (stack buffers). this paper shows that hardware buffering schemes cannot replace compile-time effort, but at most can reduce the complexity of this effort. it shows the utility increase of applying register allocation for multiple register sets. the paper also shows a potential utility decrease inherent to stack buffers. the observation that a single register set, allocated by means of interprocedural allocation, performs competitively with both multiple register set and stack buffer emphasizes the significance of the conclusion h. mulder transparent interconnection of incompatible local area networks using bridges no single lan technology is sufficient to interconnect all the computers in a given plant, campus, or site. thus it is desirable to combine different types of lans, using a device called a bridge, to produce an extended lan. some lans in the extended lan may have incompatible data link formats. thus a bridge may need to encapsulate a frame originating on lan a inside the data link header of another (incompatible) lan b in order to allow the type a frame to travel over lan b. in general, frames sent between any pair of lans in the extended lan must be encapsulated across every incompatible lan in the path between the lans. bridges learn their routing information from information contained in frames they forward. besides the problems of distinguishing various kinds of encapsulated and unencapsulated frames, the encapsulating protocol used by bridges must also solve the learning problem. this leads to a new set of considerations and solutions. 
we begin with a rough solution, and refine it using informal arguments and examples to lead to the final description. the stages in the description roughly mimic the design process. r. perlman g. varghese distributed systems support for adaptive mobile applications mobile applications must operate in environments which experience rapid and significant fluctuations in the quality of service (qos) offered by their underlying communications infrastructure. these fluctuations may be the result of explicit changes between networks, increased competition for network resources or degradation of service due to environmental factors. in order to continue to operate effectively mobile applications must be capable of adapting to these changes. this paper reports on the design and implementation of a number of services to support adaptive applications. in particular, the paper describes in detail a remote procedure call protocol (rpc) called qex which has been designed to adapt to changes in communications qos and to provide feedback to applications when changes to the qos occur. qex has been implemented as part of the ansaware distributed systems platform and together with a number of other services described in this paper enables ansaware to support advanced mobile applications . nigel davies adrian friday gordon s. blair keith cheverst wireless atm mac performance evaluation, case study: hiperlan vs. modified mdr liina nenonen jouni mikkonen a nonblocking architecture for broadband multichannel switching p. s. min h. saidi m. v. hegde an example risc vector machine architecture martin dowd comments on "a deterministic approach to the end-to-end analysis of packet flows in connection oriented networks" jean-yves le boudec gerard hebuterne slow shadowing and macrodiversity in the capture-division packet access (cdpa) recently, a new method for achieving spectrum reuse in cellular systems, called capture-division packet access (cdpa), has been introduced. the method uses a single frequency in all cells, and exploits packet switching and packet retransmission as a means to overcome destructive cochannel interference. as the cdpa key factor is packet retransmission, it is also very effective in fighting lognormal shadowing attenuation as long as this attenuation can be considered independent from slot to slot. in this paper we analyze cdpa in presence of "slow shadowing", to account for situations in which obstacles obscure the receiver for several transmission periods. the results show a severe throughput impairment with respect to the "fast shadowing" model. a variation of cdpa, macrodiversity cdpa, that uses three corner fed antennas and three packet receivers is analyzed. the numerical results show that m-cdpa is more efficient than cdpa and that it is only slightly affected by both fast and slow shadowing. to further investigate its robustness, a non-lognormal slow shadowing model, the hard shadowing model, is also analyzed. flaminio borgonovo michele zorzi luigi fratta agent-based forwarding strategies for reducing location management cost in mobile networks an important issue in location management for dealing with user mobility in wireless networks is to reduce the cost associated with location updates and searches. 
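the transparent bridging abstract above (perlman and varghese) takes the standard learning-bridge behaviour as its starting point; a bare-bones python version of that learning and forwarding step is sketched below, leaving out the encapsulation across incompatible lans that the paper is actually about. the names are invented for illustration.

    # bare-bones learning bridge: learn source addresses per port, then forward or flood.

    class LearningBridge:
        def __init__(self, ports):
            self.ports = list(ports)
            self.table = {}                  # mac address -> port it was last seen on

        def receive(self, in_port, src, dst):
            self.table[src] = in_port        # learn where src lives
            if dst in self.table:
                out = self.table[dst]
                return [] if out == in_port else [out]      # filter or forward
            return [p for p in self.ports if p != in_port]   # unknown destination: flood

    bridge = LearningBridge(ports=["lan-a", "lan-b", "lan-c"])
    print(bridge.receive("lan-a", src="aa:aa", dst="bb:bb"))  # flood: ['lan-b', 'lan-c']
    print(bridge.receive("lan-b", src="bb:bb", dst="aa:aa"))  # learned: ['lan-a']

the learning problem the abstract mentions is exactly that this table must still be built correctly when some of the forwarded frames are encapsulated inside another lan's data link header.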
the former operation occurs when a mobile user moves to a new location registration area and the network is being informed of the mobile user's current location; the latter operation occurs when there is a call for the mobile user and the network must deliver the call to the mobile user. in this paper, we propose and analyze a class of new agent-based forwarding schemes with the objective of reducing the location management cost in mobile wireless networks. we develop analytical models to compare the performance of the proposed schemes with existing location management schemes to demonstrate their feasibility and also to reveal conditions under which our proposed schemes are superior to existing ones. our proposed schemes are particularly suitable for mobile networks with switches which can cover a large number of location registration areas. ing-ray chen tsong-min chen chiang lee the failure of tcp in high-performance computational grids distributed computational grids depend on tcp to ensure reliable end-to-end communication between nodes across the wide-area network (wan). unfortunately, tcp performance can be abysmal even when buffers on the end hosts are manually optimized. recent studies blame the self-similar nature of aggregate network traffic for tcp's poor performance because such traffic is not readily amenable to statistical multiplexing in the internet, and hence computational grids. in this paper, we identify a source of self-similarity previously ignored, a source that is readily controllable: tcp. via an experimental study, we examine the effects of the tcp stack on network traffic using different implementations of tcp. we show that even when aggregate application traffic ought to smooth out as more applications' traffic are multiplexed, tcp induces burstiness into the aggregate traffic load, thus adversely impacting network performance. furthermore, our results indicate that tcp performance will worsen as wan speeds continue to increase. w. feng p. tinnakornsrisuphap complexity/performance tradeoffs with non-blocking loads k. i. farkas n. p. jouppi an efficient reliable broadcast protocol m. frans kaashoek a. s. tanenbaum s. f. hummel completeness theorems for non-cryptographic fault-tolerant distributed computation every function of n inputs can be efficiently computed by a complete network of n processors in such a way that: (1) if no faults occur, no set of size t < n/2 of players gets any additional information (other than the function value); (2) even if byzantine faults are allowed, no set of size t < n/3 can either disrupt the computation or get additional information. furthermore, the above bounds on t are tight! michael ben-or avi wigderson distributed virtual machines: a system architecture for network computing emin gun sirer robert grimm brian n. bershad arthur j. gregory sean mcdirmid new parallelization and convergence results for nc: a negotiation-based fpga router the negotiation-based routing paradigm has been used successfully in a number of fpga routers. in this paper, we report several new findings related to the negotiation-based routing paradigm. we examine in-depth the convergence of the negotiation-based routing algorithm. we illustrate that the negotiation-based algorithm can be parallelized. finally, we demonstrate that a negotiation-based parallel fpga router can perform well in terms of delay and speedup with practical fpga circuits. pak k. chan martine d. f.
schlag an approach for modeling the methods of mapping and trim of load in distributed computing svetla vasileva palm springs eternal: new york economist staff improving tcp performance over asymmetric networks bandwidth asymmetry is quite common among modern networks; e.g., adsl, cable tv, wireless, and satellite links with a terrestrial return path. in these networks, the bandwidth over one direction can be orders of magnitude smaller than that over the other. the performance of tcp transfer in the high bandwidth direction can be severely reduced by the delay that acknowledgment packets experience in the reverse direction. in this paper, we describe our proposed solution ace (acknowledgement based on cwnd estimation). in comparison to other solutions, ace requires only modification of the tcp stack at terminals attached to an asymmetric network. we evaluated the performance of ace over a cable modem network by simulation. we have also implemented ace on the linux platform and tested it on a small testbed network with an emulated asymmetric link. the performance improvement was significant especially when there is a high degree of asymmetry. ivan tam ming-chit du jinsong weiguo wang improving round-trip time estimates in reliable transport protocols as a reliable, end-to-end transport protocol, the arpa transmission control protocol (tcp) uses positive acknowledgements and retransmission to guarantee delivery. tcp implementations are expected to measure and adapt to changing network propagation delays so that their retransmission behavior balances user throughput and network efficiency. however, tcp suffers from a problem we call _retransmission ambiguity_: when an acknowledgment arrives for a segment that has been retransmitted, there is no indication which transmission is being acknowledged. many existing tcp implementations do not handle this problem correctly. this paper reviews the various approaches to retransmission and presents a novel and effective approach to the retransmission ambiguity problem. phil karn craig partridge taming xunet iii an architecture for network management and control for emerging wide-area atm networks is presented. the architecture was implemented on xunet iii, a nationwide atm network deployed by at&t. the xunet network management system is based on the osi standards and includes configuration, fault and performance management. an osi agent resides at every switching node. its capabilities include monitoring of cell level quality of service in real time and estimation of the schedulable region. the complexity and accuracy of real-time monitoring functionalities is investigated. to provide realistic traffic loads, distributed traffic generation systems both at the cell and call level have been implemented. in order to study the trade-off between the network transport and signalling system we have implemented a virtual path signalling capability. our experiments show that the ability of a network to admit calls is limited by two distinct factors: the capacity of the network and the processing power of the signalling system. depending on the bandwidth requirement of calls, the limit of one or the other will be first reached. this is a key observation, unique to broadband networks. nikos g. aneroussis aurel a. lazar dimitrios e. pendarakis experimentations with tcp selective acknowledgment this paper reports our experimentation results with tcp selective acknowledgments (tcp-sack), which became an internet proposed standard protocol recently.
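the retransmission-ambiguity abstract above (karn and partridge) motivates what has become standard practice: ignore rtt samples taken from retransmitted segments and rely on timer backoff instead. the python sketch below combines that rule with the usual smoothed-rtt estimator; the smoothing constants follow common practice and are not taken from the paper.

    # rtt estimation in the spirit of karn's rule: samples from retransmitted
    # segments are discarded, and the retransmission timeout is backed off instead.

    class RttEstimator:
        ALPHA, BETA = 0.125, 0.25      # common smoothing gains (assumed, not from the paper)

        def __init__(self):
            self.srtt = None           # smoothed round-trip time
            self.rttvar = None         # round-trip time variation
            self.rto = 3.0             # initial retransmission timeout, seconds
            self.backoff = 1

        def on_ack(self, sample, was_retransmitted):
            if was_retransmitted:
                return                 # ambiguous sample: do not update the estimate
            if self.srtt is None:
                self.srtt, self.rttvar = sample, sample / 2.0
            else:
                self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - sample)
                self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * sample
            self.backoff = 1
            self.rto = self.srtt + 4.0 * self.rttvar

        def on_timeout(self):
            self.backoff *= 2          # keep the exponential backoff until a clean sample arrives
            self.rto *= 2

    est = RttEstimator()
    est.on_ack(0.20, was_retransmitted=False)
    est.on_timeout()
    est.on_ack(0.45, was_retransmitted=True)   # ignored under karn's rule
    print(round(est.rto, 3))                   # still the backed-off value, 1.2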
to understand the performance impact of tcp-sack deployment, in this study we examined the following issues: (1) how much performance improvement tcp-sack may bring over the current tcp implementation, tcp-reno, and in particular how well tcp-sack may perform over long delay paths that include satellite links; and (2) what the performance impact of tcp connections with sack options is on those connections without sack when the two types run in parallel, if there is any. we conducted experiments both on a lab testbed and over the internet, under various conditions of link delay and losses. renaud bruyeron bruno hemon lixia zhang dynamic bandwidth allocation using loss-load curves carey l. williamson inside risks: cyber underwriters lab bruce schneier a cellular wireless local area network with qos guarantees for heterogeneous traffic a wireless local area network (wlan) or a cell with quality-of-service (qos) guarantees for various types of traffic is considered. a centralized (i.e., star) network is adopted as the topology of a cell which consists of a base station and a number of mobile clients. dynamic time division duplexed (tdd) transmission is used, and hence, the same frequency channel is time-shared for downlink and uplink transmissions under the dynamic control of the base station. we divide traffic into two classes: class i (real-time) and ii (non-real-time). whenever there is no eligible class-i traffic for transmission, class-ii traffic, which requires no delay-bound guarantees, is transmitted, while uplink transmissions are controlled with a reservation scheme. class-i traffic, which requires a bounded delay and guaranteed throughput, is handled with the framing strategy (golestani, ieee j. selected areas commun. 9(7), 1991), which consists of a smoothness traffic model and the stop-and-go queueing scheme. we also establish the admission test for adding new class-i connections. we present a modified framing strategy for class-i voice uplink transmissions which utilizes the wireless link efficiently at the cost of some packet losses. finally, we present the performance (average delay and throughput) evaluation of the reservation scheme for class-ii traffic using both analytical calculations and simulations. sunghyun choi kang g. shin a model for dataflow based vector execution although the dataflow model has been shown to allow the exploitation of parallelism at all levels, research of the past decade has revealed several fundamental problems: synchronization at the instruction level, token matching, coloring, and re-labeling operations have a negative impact on performance by significantly increasing the number of non-compute "overhead" cycles. recently, many novel hybrid von-neumann data driven machines have been proposed to alleviate some of these problems. the major objective has been to reduce or eliminate unnecessary synchronization costs through simplified operand matching schemes and increased task granularity. moreover, the results from recent studies quantifying locality suggest that sufficient spatial and temporal locality is present in dataflow execution to merit its exploitation. in this paper we present a data structure for exploiting locality in a data driven environment: the vector cell. a vector cell consists of a number of fixed-length chunks of data elements. each chunk is tagged with a presence bit, providing intra-chunk strictness and inter-chunk non-strictness to data structure access.
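as an illustration of the chunked structure just described, the following is a minimal sketch of a vector cell with per-chunk presence bits, assuming a fixed chunk length and a polling-style read; the names (vcell_read, vcell_fill_chunk) and sizes are illustrative and not taken from the paper.

```c
#include <stdbool.h>
#include <stddef.h>

#define CHUNK_LEN 8   /* fixed number of elements per chunk (illustrative) */

/* a chunk is the unit of synchronization: consumers wait on the whole
   chunk (intra-chunk strictness), but different chunks may be produced
   and consumed independently (inter-chunk non-strictness). */
struct chunk {
    bool   present;            /* presence bit for the whole chunk */
    double elem[CHUNK_LEN];
};

struct vector_cell {
    size_t        nchunks;
    struct chunk *chunks;
};

/* a consumer may read element i only once its chunk is marked present;
   returns false if the chunk has not yet been filled by the producer. */
static bool vcell_read(const struct vector_cell *v, size_t i, double *out)
{
    struct chunk *c = &v->chunks[i / CHUNK_LEN];
    if (!c->present)
        return false;          /* in a dataflow setting the caller would suspend */
    *out = c->elem[i % CHUNK_LEN];
    return true;
}

/* the producer fills a whole chunk, then sets its presence bit. */
static void vcell_fill_chunk(struct vector_cell *v, size_t chunk_idx,
                             const double *src)
{
    struct chunk *c = &v->chunks[chunk_idx];
    for (size_t k = 0; k < CHUNK_LEN; k++)
        c->elem[k] = src[k];
    c->present = true;
}
```

in a dataflow machine the failed read would suspend the consuming activity rather than poll, but the presence-bit check per chunk (rather than per element) is the point of the sketch.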
we describe the semantics of the model, processor architecture and instruction set as well as a sisal to dataflow vectorizing compiler back-end. the model is evaluated by comparing its performance to those of both a classical fine-grain dataflow processor employing i-structures and a conventional pipelined vector processor. results indicate the model is surprisingly resilient to long memory and communication latencies, and is able to dynamically exploit the underlying parallelism across multiple processing elements at run time. w. marcus miller walid a. najjar a. p. wim böhm a reliable multicast framework for light-weight sessions and application level framing this paper describes srm (scalable reliable multicast), a reliable multicast framework for application level framing and light-weight sessions. the algorithms of this framework are efficient, robust, and scale well to both very large networks and very large sessions. the framework has been prototyped in wb, a distributed whiteboard application, and has been extensively tested on a global scale with sessions ranging from a few to more than 1000 participants. the paper describes the principles that have guided our design, including the ip multicast group delivery model, an end-to-end, receiver-based model of reliability, and the application level framing protocol model. as with unicast communications, the performance of a reliable multicast delivery algorithm depends on the underlying topology and operational environment. we investigate that dependence via analysis and simulation, and demonstrate an adaptive algorithm that uses the results of previous loss recovery events to adapt the control parameters used for future loss recovery. with the adaptive algorithm, our reliable multicast delivery algorithm provides good performance over a wide range of underlying topologies. sally floyd van jacobson steve mccanne ching-gung liu lixia zhang performance analysis of centralized fh-cdma wireless networks centralized cdma networks with unslotted aloha random-access mode are considered. users communicate with a central node by sharing a finite number of signature sequences assigned to the receivers at the central node. two methods for sharing preamble codes are considered. in one method a common preamble code is used by all receivers, and in the other method a distinct code is assigned for each receiver. a unified analysis framework for evaluating the performance of centralized fh-cdma networks is developed. closed form expressions for packet decoding error probabilities are derived. numerical results show that common preamble codes can support only low traffic levels. however, by appropriately selecting the design parameters, acceptable levels of packet loss probabilities are achievable with the receiver-based preambles. khairi ashour mohamed lazlo pap network's design allows seamless integration into existing microcomputer lab thomas gerace traffic characterization of the nsfnet national backbone traditionally, models of packet arrival in communication networks have assumed either poisson or compound poisson arrival patterns. a study of a token ring local area network (lan) at mit [5] found that packet arrival followed neither of these models. instead, traffic followed a more general model dubbed the "packet train," which describes network traffic as a collection of packet streams traveling between pairs of nodes. a packet train consists of a number of packets travelling between a particular node pair. 
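the packet train model just summarized groups the packets exchanged by a node pair into trains; a common formulation delimits trains by an inter-arrival gap exceeding a chosen threshold. the sketch below assumes that gap-threshold rule; the threshold value and function name are illustrative.

```c
#include <stdio.h>

/* maximum allowed gap (in seconds) between packets of the same train;
   an illustrative value, not taken from the study */
#define MAX_INTERCAR_GAP 0.5

/* given sorted packet arrival times for one source-destination pair,
   count how many trains they form: a new train starts whenever the gap
   since the previous packet exceeds MAX_INTERCAR_GAP. */
static int count_trains(const double *arrival, int npackets)
{
    if (npackets == 0)
        return 0;
    int trains = 1;
    for (int i = 1; i < npackets; i++)
        if (arrival[i] - arrival[i - 1] > MAX_INTERCAR_GAP)
            trains++;
    return trains;
}

int main(void)
{
    double t[] = { 0.00, 0.01, 0.03, 2.00, 2.02, 5.50 };
    printf("trains: %d\n", count_trains(t, 6));   /* prints 3 */
    return 0;
}
```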
this study examines the existence of packet trains on nsfnet, a high speed national backbone network. train characteristics on nsfnet are not as striking as those found on the mit local network; however, certain protocols exhibit quite strong train behavior given the great number of hosts communicating through the backbone. descriptions of the packet train model can be found in [3] and [5]. steven a. heimlich performance analysis case study (abstract): application of experimental design & statistical data analysis techniques _extended abstract_ a common requirement of computer vendors' competitive performance analysis departments is to measure and report on the performance characteristics of another vendor's system. in many cases the amount of prior knowledge concerning the competitor's system is limited to sales brochures and non-technical publications. availability of the system for benchmarking is minimal; there is little choice concerning memory and i/o configurations; and time to complete the project is short. a project of this nature is not, however, unique to computer vendors. many users of computer systems that want to better understand a system's performance characteristics before deciding on a purchase are also faced with similar restrictions. due to time and resource constraints, a common practice is to use an existing benchmark. the benchmarks chosen are usually popular ones which claim to be representative of one or more specific production workloads. current popular benchmarks include the spec suite, dhrystone, linpack 100 x 100 single precision, etc. while easy to use, these generally do not provide a lot of insight as to how well a system will perform on a different benchmark or workload. an alternative to using a representative benchmark is to use a functional benchmark. as an example, a functional i/o benchmark would be one that can execute a number of different types or variants of i/o, and allow the system's performance characteristics to be measured when executing each type. in general, functional benchmarks can be used to collect an overwhelming amount of data that can be used to generate estimates as to how well a system will perform on other benchmarks and workloads. use of a functional benchmark also provides the opportunity to exploit the benefits of experimental design and statistical data analysis techniques. incorporating their usage into the process can be very beneficial. in general terms, the benefits are: 1. a reduction in the number of tests run and the quantity of data collected. 2. a method for choosing which tests to run to help ensure that the results are not biased. 3. an increase in what can be determined about the system's performance characteristics. 4. an improvement in the quality of the results. some examples of benefits are: by designing an experiment in which the primary factors affecting i/o are controlled (e.g., file size, read vs. write, sequential vs. random, etc.), the main effect as well as the interactive effects of the factors can be determined; statistical techniques are available to provide evidence that the benchmark is in fact testing what was intended to be tested; regression analysis can be used with the reduced set of data collected to provide predictive capabilities not otherwise easily obtained. the use of experimental design and statistical data analysis techniques is in common practice in many other fields and is not unreasonably difficult to apply. unfortunately, their application is not in common usage in computer performance analysis.
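as a sketch of the kind of designed experiment the abstract advocates, the fragment below enumerates a 2^3 full factorial design over three of the i/o factors it names (file size, read vs. write, sequential vs. random); the factor levels shown are illustrative placeholders, not those used in the study.

```c
#include <stdio.h>

/* three two-level factors, as named in the abstract; the specific levels
   chosen here are illustrative, not taken from the study */
static const char *file_size[]  = { "small", "large" };
static const char *operation[]  = { "read", "write" };
static const char *access_pat[] = { "sequential", "random" };

int main(void)
{
    int run = 0;
    /* a 2^3 full factorial design: one benchmark run per factor combination,
       so main effects and interactions can later be estimated */
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            for (int k = 0; k < 2; k++)
                printf("run %d: file=%s op=%s access=%s\n",
                       ++run, file_size[i], operation[j], access_pat[k]);
    return 0;
}
```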
the paper contains a case study of the use of a functional i/o benchmark executed and the results analyzed under such a methodology. the experiment was executed on a cdc 4680, on an ibm rs6000/540, and on a sun 4/490. for all three architectures, the main and interactive effects of file location (cached or on disk), file size, file action (read or write), record size, record access (sequential or random), and number of concurrently active i/o streams were tested on elapsed time, throughput, and disk service time. an analysis of variance determined that file size was not a significant factor for throughput and disk service time. a stepwise-type regression analysis developed models with a parsimonious set of factors having little statistical bias. the models identified the presence of two- and three-factor interactions. the resulting r2 and residual plots, and a visual analysis, were used to determine the efficacy of the test measuring these factors across different architectures. in summary, when there are many unknowns, and resources including time are short, using a functional benchmark as part of a well designed experiment in conjunction with statistical data analysis is a methodology by which the original goal of characterizing a system's performance can be met with a high level of quality. robb newman a model for description of communication protocol this paper presents a model for the description of communication protocol in view of the total behavior of terminal equipment. in this model, a protocol entity is modeled as a group of processes and monitors, and synchronization between input and output processes is represented by the synchronizing mechanism based on the concept of the monitor. furthermore, it is possible to specify the relation between entities of different layers of the protocol, including the relation between communication and local functions of terminal equipment. as an example of protocol description based on this model, the teletex document layer protocol written in the concurrent pascal language is illustrated. kazuhiko chiba kazunori konishi akira kurematsu experimental wan and lan computer network for simulation and optimization of the traffic load t. stoilov spine: a safe programmable and integrated network environment marc e. fiuczynski richard p. martin tsutomu owa brian n. bershad potential of the gsm air interface to support cdma operation the ongoing development of third generation systems sets the path for evolution of the existing second generation systems. important issues are the need for compatibility and the establishment of a path of migration from the current operative mobile systems. this paper considers the feasibility of supporting cdma capabilities within the gsm air interface in order to provide umts services. it produces performance estimates for two specific examples: joint detection cdma channels and multicarrier ds-cdma channels. it is shown that a number of advantages can be obtained by using a hybrid cdma scheme. j. a. pons puig j. dunlop reflections: in search of the killer app steven pemberton analysis of dynamic movement-based location update scheme jie li yi pan xiaohua jia modeling a circuit switched multiprocessor interconnect daniel nussbaum ingmar vuong-adlerberg anant agarwal on the efficiency of slot reuse in the dual bus configuration oran sharon adrian segall efficient point-to-point and point-to-multipoint selective-repeat arq schemes with multiple retransmissions: a throughput analysis s. mohan j. qian n. l.
rao estimation dsp processor's performance miroslav galabov tcp for high performance in hybrid fiber coaxial broad-band access networks reuven cohen srinivas ramanathan performance analysis of time warp with homogeneous processors and exponential task times anurag gupta ian akyildiz richard m. fujimoto implementing cooperative prefetching and caching in a globally-managed memory system geoffrey m. voelker eric j. anderson tracy kimbrel michael j. feeley jeffrey s. chase anna r. karlin henry m. levy winner take some clay shirky a novel single instruction computer architecture p. a. laplante local anchor scheme for reducing signaling costs in personal communications networks joseph s. m. ho ian f. akyildiz comparing algorithm for dynamic speed-setting of a low-power cpu kinshuk govil edwin chan hal wasserman measured capacity of an ethernet: myths and reality ethernet, a 10 mbit/sec csma/cd network, is one of the most successful lan technologies. considerable confusion exists as to the actual capacity of an ethernet, especially since some theoretical studies have examined operating regimes that are not characteristic of actual networks. based on measurements of an actual implementation, we show that for a wide class of applications, ethernet is capable of carrying its nominal bandwidth of useful traffic, and allocates the bandwidth fairly. we discuss how implementations can achieve this performance, describe some problems that have arisen in existing implementations, and suggest ways to avoid future problems. d. r. boggs j. c. mogul c. a. kent satellite communications systems move into the twenty-first century this paper discusses the evolution of communication satellite systems and communications satellite technology from the 1960's to the 1990's. the paper identifies the key attributes of satellite communications that has driven this evolution and now drives the future directions such systems will take. the paper then discusses the future direction of communication satellite systems including dbs, mss, fss and hybrid satellite/terrestrial systems. the paper points to the continued evolution of the satellite payload to use of spot beams, onboard processing and switching, and intersatellite links, with capability for higher eirps. the paper also identifies the earth station trends to more compact, lower cost stations, produced in higher volumes, with the handheld phone for mss operation being the prime example of this trend. the paper then points to some revolutionary trends in satellite communication networks being proposed for mss and fss applications involving fleets of ngso satellites combined with more extensive ground networks involving new networking concepts, new services (such as multimedia) and new hybrid configurations working with terrestrial networks, involving a host of new network issues and operations. leonard s. golding the case for the sustained performance computer architecture apostolos dollan robert f. krick what's in the future for parallel architectures? f. darema d. douglas a. gupta o. lubeck d. maier j. ratner b. smith p. messina a comparison of sender-initiated and receiver-initiated reliable multicast protocols sender-initiated reliable multicast protocols, based on the use of positive acknowledgments (acks), lead to an ack implosion problem at the sender as the number of receivers increases. briefly, the ack implosion problem refers to the significant overhead incurred by the sending host due to the processing of acks from each receiver. 
a potential solution to this problem is to shift the burden of providing reliable data transfer to the receivers---thus resulting in a receiver-initiated multicast error control protocol based on the use of negative acknowledgments (naks). in this paper we determine the maximum throughputs of the sending and receiving hosts for generic sender-initiated and receiver-initiated protocols. we show that the receiver-initiated error control protocols provide substantially higher throughputs than their sender- initiated counterparts. we further demonstrate that the introduction of random delays prior to generating naks coupled with the multicasting of naks to all receivers has the potential for an additional substantial increase in the throughput of receiver-initiated error control protocols over sender-initiated protocols. sridhar pingali don towsley james f. kurose comparison of rate-based service disciplines hui zhang srinivasan keshav distributing a chemical process optimization application over a gigabit network robert l. clay peter a. steenkiste multiprocessor system for tv and fm transmiters control liudmil lazarov georgi georgiev performance analysis of switching strategies d. a. reed p. k. mckinley m. f. barr challenges for nomadic computing: mobility management and wireless communications in this paper, we present several challenges and innovative approaches to support nomadic computing. the nomadic computing environment is characterized by mobile users that may be connected to the network via wired or wireless means, many of whom will maintain only intermittent connectivity with the network. furthermore, those accessing the network via wireless links will contend with limitations of the wireless media. we consider three general techniques for addressing these challenges: (1) asymmetric design of applications and protocols, (2) the use of network- based proxies which perform complex functions on behalf of mobile users, and (3) the use of pre- fetching and caching of critical data. we examine how these techniques have been applied to several systems, and present results in an attempt to quantify their relative effectiveness. thomas f. la porta krishan k. sabnani richard d. gitlin possibilities of using protocol converters for nir system construction volumes of information available from network information services have been increasing considerably in recent years. users' satisfaction with an information service depends very much on the quality of the network information retrieval (nir) system used to retrieve information. the construction of such a system involves two major development areas: user interface design and the implementation of nir protocols.in this paper we describe and discuss the possibilities of using formal methods of protocol converter design to construct the part of an nir system client that deals with network communication. if this approach is practicable it can make implementation of nir protocols more reliable and amenable to automation than traditional designs using general purpose programming languages. this will enable easy implementation of new nir protocols custom-tailored to specialized nir services, while the user interface remains the same for all these services.based on a simple example of implementing the gopher protocol client we conclude that the known formal methods of protocol converter design are generally not directly applicable for our approach. 
however, they could be used under certain circumstances when supplemented with other techniques which we propose in the discussion. sven ubik a verified sliding window protocol with variable flow control we present a verified sliding window protocol which uses modulo-n sequence numbers to achieve reliable flow-controlled data transfer between a source and a destination. communication channels are assumed to lose, duplicate and reorder messages in transit. the destination's data needs are represented by a receive window whose size can vary with time. the destination entity uses acknowledgement messages to inform the source entity of the current receive window size and the sequence number of the data word next expected. the source entity responds by sending segments of data words that lie within the allowed window. each data segment is accompanied by an identifying sequence number and the size of the data segment. the destination entity also uses selective acknowledgement and selective reject messages to inform the source entity of the reception or lack of reception, respectively, of out-of-sequence data segments. thus, this protocol is a proper extension of the arpanet's tcp. we have obtained the minimum value of n that ensures correct data transfer and flow control, in terms of the minimum message transmission time, the maximum message lifetime, and the maximum receive window size. the protocol imposes no constraints on the retransmissions of messages or on the data segment sizes; thus, any retransmission policy that optimizes the protocol's performance can be used. a u shankar some thoughts on the packet network architecture lixia zhang floor acquisition multiple access (fama) in single-channel wireless networks the fama-ncs protocol is introduced for wireless lans and ad-hoc networks that are based on a single channel and asynchronous transmissions (i.e., no time slotting). fama-ncs (for floor acquisition multiple access with non-persistent carrier sensing) guarantees that a single sender is able to send data packets free of collisions to a given receiver at any given time. fama-ncs is based on a three-way handshake between sender and receiver in which the sender uses non-persistent carrier sensing to transmit a request-to- send (rts) and the receiver sends a clear-to-send (cts) that lasts much longer than the rts to serve as a "busy tone" that forces all hidden nodes to back off long enough to allow a collision-free data packet to arrive at the receiver. it is shown that carrier sensing is needed to support collision-free transmissions in the presence of hidden terminals when nodes transmit rtss asynchronously. the throughput of fama-ncs is analyzed for single-channel networks with and without hidden terminals; the analysis shows that fama-ncs performs better than aloha, csma, and all prior proposals based on collision avoidance dialogues (e.g., maca, macaw, and ieee 802.11 dfwmac) in the presence of hidden terminals. simulation experiments are used to confirm the analytical results. j. j. garcia-luna-aceves chane l. fullmer supercomputing: big bang or steady state growth? j. r. gurd simp (single instruction stream/multiple instruction pipelining): a novel high-speed single-processor architecture simp is a novel multiple instruction-pipeline parallel architecture. it is targeted for enhancing the performance of sisd processors drastically by exploiting both temporal and spatial parallelisms, and for keeping program compatibility as well. 
the degree of performance enhancement achieved by simp depends on: (i) how to supply multiple instructions continuously, and (ii) how to resolve data and control dependencies effectively. we have devised outstanding techniques for instruction fetch and dependency resolution. the instruction fetch mechanism employs unique schemes of: (i) prefetching multiple instructions with the help of branch prediction, (ii) squashing instructions selectively, and (iii) providing multiple conditional modes as a result. the dependency resolution mechanism permits out-of-order execution of a sequential instruction stream. our out-of-order execution model is based on tomasulo's algorithm, which has been used in single instruction-pipeline processors. however, it is greatly extended and accommodated to multiple instruction pipelining with: (i) detecting and identifying multiple dependencies simultaneously, (ii) alleviating the effects of control dependencies with both eager execution and advance execution, and (iii) ensuring a precise machine state against branches and interrupts. by taking advantage of these techniques, simp is one of the most promising architectures toward the coming generation of high-speed single processors. k. murakami n. irie s. tomita accurate modelling of interconnection networks in vector supercomputers j. e. smith w. r. taylor the "worm" programs - early experience with a distributed computation the "worm" programs were an experiment in the development of distributed computations: programs that span machine boundaries and also replicate themselves in idle machines. a "worm" is composed of multiple "segments," each running on a different machine. the underlying worm maintenance mechanisms are responsible for maintaining the worm: finding free machines when needed and replicating the program for each additional segment. these techniques were successfully used to support several real applications, ranging from a simple multimachine test program to a more sophisticated real-time animation system harnessing multiple machines. john f. shoch jon a. hupp cycle and phase accurate dsp modeling and integration for hw/sw co-verification lisa guerra joachim fitzner dipankar talukdar chris schläger bassam tabbara vojin zivojnovic a simulation study of the performance of the extra-stage cube and the augmented-shuffle exchange networks under faults in this paper, we conducted a simulation study to compare the performance of the extra-stage cube (esc) and the augmented-shuffle exchange (ase) networks in the presence of multiple link failures. the main objective of this study is to gain insight into which of these two networks can better support graceful degradation under varying network loads and link failure rates. to achieve this task, three performance metrics are introduced: throughput, message-delivery ratio, and average delay per message; three simulation experiments using the simulation package comnet ii.5 are designed. the simulation results indicate that with respect to the three performance metrics, the esc network outperforms the ase for network size n≥32 and link mttf=0.75 hours. however, when the link mttf=0.12 hours, the ase network outperforms the esc network for n<64. in both cases, the esc network outperforms the ase network when n=64. this result appears to be directly related to the link complexities of the two networks. for large networks, the link complexity of the ase is much larger than that of the esc network and accounts for its poorer performance when n>32.
abdullah a. abonamah fadi n. sibai securing the internet protocol pau-chen cheng juan a. garay amir herzberg hugo krawczyk websites michele tepper best-effort versus reservations: a simple comparative analysis using a simple analytical model, this paper addresses the following question: should the internet retain its best-effort-only architecture, or should it adopt one that is reservation-capable? we characterize the differences between reservation- capable and best-effort-only networks in terms of application performance and total welfare. our analysis does not yield a definitive answer to the question we pose, since it would necessarily depend on unknowable factors such as the future cost of network bandwidth and the nature of the future traffic load. however, our model does reveal some interesting phenomena. first, in some circumstances, the amount of incremental bandwidth needed to make a best- effort-only network perform as well as a reservation capable one diverges as capacity increases. second, in some circumstances reservation-capable networks retain significant advantages over best-effort-only networks, no matter how cheap bandwidth becomes. lastly, we find bounds on the maximum performance advantage a reservation-capable network can achieve over best-effort architectures. lee breslau scott shenker a function oriented corporate network the itt/net is an internal corporate network built to satisfy the distributed programming requirements of itt technology and engineering communities. this paper describes the network: its goals, its architecture, its operational environment and prototype development activities. the various issues involved in the construction of a corporate network based upon diverse, existing systems and communication methods are described in detail. the architectural and implementation decisions are discussed and justified. the programming process and the methodologies adopted are presented and some conclusions are drawn as to the practicality and usefulness of formal development methods in the context of this particular project. p. s.-c. wang s. r. kimbleton websites ken korman asynchronous secure computations with optimal resilience (extended abstract) michael ben-or boaz kelmer tal rabin a simple multiple access protocol for metropolitan area networks as lans/mans operate at higher speeds (100 mb/s-gb/s) and over longer distances (10's - 100's km) the access mechanism has to be chosen carefully so as to provide high utilization. protocols are available that operate efficiently at high speeds and long distances. we describe here a particularly simple access mechanism referred to as simple (s). performance is simulated under a range of traffic, physical lengths, number of stations and overload conditions. s is compared with both fasnet and the distributed queue dual bus (dqdb) schemes. it is almost as efficient as more complex protocols and its overload performance is well behaved. j. o. limb introduction scott lewandowski experience with test generation for real protocols this paper presents results on the application of four protocol test sequence generation techniques (t-, u-, d-, and w-methods) to the nbs class 4 transport protocol (tp4). the ability of a test sequence to decide whether a protocol implementation conforms to its specification depend on the range of faults that it can capture. 
the study shows that a test sequence produced by the t-method has a poor fault detection capability whereas test sequences produced by the u-, d- and w-methods have comparable (superior to that for t-method) fault coverage on several classes of randomly generated machines. the lengths of test sequences produced by the four methods tend to be different. the length of a test sequence produced by the t-method (w-method) is the smallest (largest). the length of a test sequence from the u-method is smaller than that for the d-method and lengths for both are greater than that for the t-method and less than that for the w-method. d. sidhu t. leung joint optimal channel base station and power assignment for wireless access symeon papavassiliou leandros tassiulas the impact of delay on the design of branch predictors daniel a. jimenez stephen w. keckler calvin lin alternative implementations of two-level adaptive branch prediction as the issue rate and depth of pipelining of high performance superscalar processors increase, the importance of an excellent branch predictor becomes more vital to delivering the potential performance of a wide-issue, deep pipelined microarchitecture. we propose a new dynamic branch predictor (two- level adaptive branch prediction) that achieves substantially higher accuracy than any other scheme reported in the literature. the mechanism uses two levels of branch history information to make predictions, the history of the last k branches encountered, and the branch behavior for the last s occurrences of the specific pattern of these k branches. we have identified three variations of the two-level adaptive branch prediction, depending on how finely we resolve the history information gathered. we compute the hardware costs of implementing each of the three variations, and use these costs in evaluating their relative effectiveness. we measure the branch prediction accuracy of the three variations of two-level adaptive branch prediction, along with several other popular proposed dynamic and static prediction schemes, on the spec benchmarks. we show that the average prediction accuracy for two-level adaptive branch prediction is 97 percent, while the other known schemes achieve at most 94.4 percent average prediction accuracy. we measure the effectiveness of different prediction algorithms and different amounts of history and pattern information. we measure the costs of each variation to obtain the same prediction accuracy. tse-yu yeh yale n. patt towards an active network architecture active networks allow their users to inject customized programs into the nodes of the network. an extreme case, in which we are most interested, replaces packets with "capsules" - program fragments that are executed at each network router/switch they traverse.active architectures permit a massive increase in the sophistication of the computation that is performed within the network. they will enable new applications, especially those based on application-specific multicast, information fusion, and other services that leverage network-based computation and storage. furthermore, they will accelerate the pace of innovation by decoupling network services from the underlying hardware and allowing new services to be loaded into the infrastructure on demand.in this paper, we describe our vision of an active network architecture, outline our approach to its design, and survey the technologies that can be brought to bear on its implementation. 
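as a toy illustration of the capsule idea described above (a packet that carries the processing to be applied at each router it traverses), the sketch below uses a function pointer to stand in for the mobile code a real active network would carry; all names are illustrative and the example is a deliberate simplification, not the authors' architecture.

```c
#include <stdio.h>

struct node;   /* forward declaration of a network node */

/* a capsule bundles data with the processing to apply at each hop;
   in a real active network the program would be mobile code carried
   in the packet, not a local function pointer */
struct capsule {
    const char *payload;
    void (*process)(struct capsule *self, struct node *here);
};

struct node {
    const char  *name;
    struct node *next_hop;
};

/* example per-hop program: trace the route taken by the capsule */
static void trace_hop(struct capsule *self, struct node *here)
{
    printf("capsule \"%s\" processed at %s\n", self->payload, here->name);
}

/* each router/switch executes the capsule's program before forwarding */
static void forward(struct capsule *c, struct node *n)
{
    while (n != NULL) {
        c->process(c, n);
        n = n->next_hop;
    }
}

int main(void)
{
    struct node r2 = { "router-b", NULL };
    struct node r1 = { "router-a", &r2 };
    struct capsule cap = { "hello", trace_hop };
    forward(&cap, &r1);
    return 0;
}
```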
we propose that the research community mount a joint effort to develop and deploy a wide area activenet. david l. tennenhouse david j. wetherall performance analysis of four simd machines this paper presents the results of an experiment to study the performance of four simd machines. the objectives of this study are to analyze the cost of regular communication on several simd machines and study its impact on the performance of two kernels. the machines are: a 32k processor cm2, a 16k processor mpp, a 16k processor maspar mp-1, and a 4k processor dap 610c. regular communication is exemplified, in this study, by the shift operation where all elements of an array are shifted some number of positions along an array dimension. the cost of shift operations on the four machines is measured and analyzed for several two-dimensional arrays. the study shows that shift cost varies significantly from one machine to another, and depends on several factors including network topology, communication bandwidth, and the compilation partitioning scheme. results also show that the communication overhead is quite significant on some of these machines even for nearest neighbor communication. finally, results from this study are useful for obtaining a rough estimate of the communication overhead for many algorithms on these machines. rod fatoohi ring-connected ring (rcr) topology for high-speed networking: analysis and implementation aloknath de n. prithviraj optimizing file transfer response time using the loss-load curve congestion control mechanism carey l. williamson design, implementation, and evaluation of a software-based real-time ethernet protocol distributed multimedia applications require performance guarantees from the underlying network subsystem. ethernet has been the dominant local area network architecture in the last decade, and we believe that it will remain popular because of its cost-effectiveness and the availability of higher-bandwidth ethernets. we present the design, implementation and evaluation of a software-based timed-token protocol called rether that provides real-time performance guarantees to multimedia applications without requiring any modifications to existing ethernet hardware. rether features a hybrid mode of operation to reduce the performance impact on non-real-time network traffic, a race-condition-free distributed admission control mechanism, and an efficient token-passing scheme that protects the network against token loss due to node failures or otherwise. to our knowledge, this is the first software implementation of a real-time protocol over existing ethernet hardware. performance measurements from experiments on a 10 mbps ethernet indicate that up to 60% of the raw bandwidth can be reserved without deteriorating the performance of non-real-time traffic. additional simulations for high bandwidth networks and faster workstation hardware indicate that the protocol allows reservation of a greater percentage of the available bandwidth. chitra venkatramani tzi-cker chiueh performance analysis of atm banyan networks with shared queueing - part ii: correlated/unbalanced offered traffic achille pattavina stefano gianatti analysis of a composite performance reliability measure for fault-tolerant systems today's concomitant needs for higher computing power and reliability has increased the relevance of multiple-processor fault-tolerant systems. multiple functional units improve the raw performance (throughput, response time, etc.) 
of the system, and, as units fail, the system may continue to function albeit with degraded performance. such systems and other fault-tolerant systems are not adequately characterized by separate performance and reliability measures. a composite measure for the performance and reliability of a fault-tolerant system observed over a finite mission time is analyzed. a markov chain model is used for system state-space representation, and transient analysis is performed to obtain closed-form solutions for the density and moments of the composite measure. only failures that cannot be repaired until the end of the mission are modeled. the time spent in a specific system configuration is assumed to be large enough to permit the use of a hierarchical model and static measures to quantify the performance of the system in individual configurations. for a multiple- processor system, where performance measures are usually associated with and aggregated over many jobs, this is tantamount to assuming that the time to process a job is much smaller than the time between failures. an extension of the results to general acyclic markov chain models is included. lorenzo donatiello balakrishna r. iyer a simple bandwidth management strategy based on measurements of instantaneous virtual path utilization in atm networks kohei shiomoto shinichiro chaki naoaki yamanaka qosmic: quality of service sensitive multicast internet protocol in this paper, we present, qosmic, a multicast protocol for the internet that supports qos-sensitive routing, and minimizes the importance of _a priori configuration_ decisions (such as _core_ selection). the protocol is resource- efficient, robust, flexible, and scalable. in addition, our protocol is provably loop-free.our protocol starts with a resources-saving tree (shared tree) and individual receivers switch to a qos- competitive tree (source-based tree) when necessary. in both trees, the new destination is able to choose the most promising among several paths. an innovation is that we use dynamic routing information without relying on a link state exchange protocol to provide it. our protocol limits the effect of pre-configuration decisions drastically, by separating the management from the data transfer functions; administrative routers are not necessarily part of the tree. this separation increases the robustness, and flexibility of the protocol. furthermore, qosmic is able to adapt dynamically to the conditions of the network.the qosmic protocol introduces several new ideas that make it more flexible than other protocols proposed to date. in fact, many of the other protocols, (such as yam, pimsm, bgmp, cbt) can be seen as special cases of qosmic. this paper presents the motivation behind, and the design of qosmic, and provides both analytical and experimental results to support our claims. michalis faloutsos anindo banerjea rajesh pankaj performance of a wireless ad hoc network supporting atm matthias lott bernhard walke experiences of building an atm switch for the local area the fairisle project was concerned with atm in the local area. an earlier paper [9] described the preliminary work and plans for the project. here we present the experiences we have had with the fairisle network, describing how implementation has changed over the life of the project, the lessons learned, and some conclusions about the work so far. 
richard black ian leslie derek mcauley multiprocessor hardware: an architectural overview the subject of multiprocessor computer systems has been discussed almost since the inception of the modern digital computer in its uniprocessor form. the motivation for multiprocessor system research and development activity arises from a consideration of one or more of the following factors: throughput, flexibility, extendability, price/performance, availability, reliability, and fault tolerance. while any one of these factors may be the central issue, it should not be construed that these factors are disjoint. quite the contrary, each may have a subtle, nonobvious effect on any multiprocessor system design. john tartar an evaluation methodology for microprocessor and system architecture r. j. chevance an overview of the communications and information technology research center (cictr) at pennsylvania state university mohsen kavehrad mobile computing: dataman project perspective the objective of mobile computing is to develop system and application level software for small, battery-powered terminals equipped with a wireless network connection. there is a rapidly growing interest in this field, with companies spending billions of dollars developing technology and buying spectrum in the recent pcs auctions. in this paper we offer a perspective of mobile computing from the standpoint of our own research project at rutgers university. the dataman project (t. imielinski as project director and b. r. badrinath as the co-director) is funded by arpa (within the glomo program) and two awards from nsf, as well as industry support through the industry-sponsored wireless information networks laboratory (winlab). tomasz imielinski factors affecting the performance of distributed applications a major reason for the rarity of distributed applications, despite the proliferation of networks, is the sensitivity of their performance to various aspects of the network environment. contrary to much popular opinion, we demonstrate that cpu speed remains the predominant factor. with respect to network issues, we focus on two approaches to performance enhancement: (1) improving the performance of reliable, byte-stream protocols such as tcp; (2) the use of high-level protocols that reduce the frequency and volume of communication. keith a. lantz william i. nowicki marvin m. theimer software implementation strategies for power-conscious systems a variety of systems with possibly embedded computing power, such as small portable robots, hand-held computers, and automated vehicles, have power supply constraints. their batteries generally last only for a few hours before being replaced or recharged. it is important that all design efforts are made to conserve power in those systems. energy consumption in a system can be reduced using a number of techniques, such as low-power electronics, architecture-level power reduction, and compiler techniques, to name just a few. however, energy conservation at the application software level has not yet been explored. in this paper, we show the impact of various software implementation techniques on energy saving. based on the observation that different instructions of a processor cost different amounts of energy, we propose three energy saving strategies, namely (i) assigning live variables to registers, (ii) avoiding repetitive address computations, and (iii) minimizing memory accesses. we also study how a variety of algorithm design and implementation techniques affect energy consumption.
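the three strategies just listed can be illustrated at the source level; the fragment below is a minimal before/after sketch of avoiding repeated address computations and minimizing memory accesses by keeping a value in a register-resident local. the function and variable names are illustrative, not from the paper.

```c
/* before: the address of a[i][j] is recomputed and memory is re-read
   on every use inside the loop body */
void scale_row_naive(double a[][64], int i, int n, double s)
{
    for (int j = 0; j < n; j++)
        a[i][j] = a[i][j] * s + a[i][j];   /* repeated indexing, two loads */
}

/* after: the row pointer is hoisted out of the loop (one address
   computation) and each element is read once into a local that a
   compiler can keep in a register, reducing memory traffic */
void scale_row_tuned(double a[][64], int i, int n, double s)
{
    double *row = a[i];                    /* address computed once */
    for (int j = 0; j < n; j++) {
        double v = row[j];                 /* single load */
        row[j] = v * s + v;
    }
}
```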
in particular, we focus on the following aspects: (i) recursive versus iterative (with stacks and without stacks), (ii) indifferent representations of the same algorithms, (iii) different algorithms --- with identical asymptotic complexity --- for the same problem, and (iv) different input representations. we demonstrate the energy saving capabilities of these approaches by studying a variety of applications related to power-conscious systems, such as sorting, pattern matching, matrix operations, depth-first search, and dynamic programming. from our experimental results, we conclude that by suitably choosing an algorithm for a problem and applying the energy saving techniques, energy savings in excess of 60% can be achieved. kshirasagar naik david s. l. wei high-performance cluster-based internet servers eric jul povl koch jørgen s. hansen michael svendsen kim henriksen kenn nielsen mads dydensborg efficient algorithms for erasure node placement on slotted dual bus networks bhagirath narahari sunil shende rahul simha performing remote operations efficiently on a local computer network this paper discusses communication among computers connected by a very high speed local network and focuses on ways to support distributed programs that require efficient interprocessor communication. it is motivated by the availability of increasingly high speed local networks and inefficiencies in existing communication subsystems. mechanisms such as remote procedure calls, monitor calls, and message passing primitives are bases for interprocessor communication at high levels (i.e., within a programming language). at lower levels, interprocessor communication occurs via the transmission of data over some communication medium. on a local network, this basic communication mechanism is the transmission of packets. this paper is concerned with an intermediate communication layer for high speed local networks. to provide overall efficiency, this layer should provide communication primitives that (1) are a good basis on which to implement high level primitives and (2) are specialized enough to be implemented efficiently; for example, in a combination of microcode and hardware. to analyze communication primitives for this intermediate layer, we present a communication model called the remote reference/remote operation model in which a taxonomy of communication primitives is defined. we illustrate the model by describing an implementation of simple communication primitives on xerox alto computers interconnected with a 3 megabit ethernet. alfred z. spector on calibrating measurements of packet transit times vern paxson a unified vector/scalar floating-point architecture in this paper we present a unified approach to vector and scalar computation, using a single register file for both scalar operands and vector elements. the goal of this architecture is to yield improved scalar performance while broadening the range of vectorizable applications. for example, reduction operations and recurrences can be expressed in vector form in this architecture. this approach results in greater overall performance for most applications than does the approach of emphasizing peak vector performance. the hardware required to support the enhanced vector capability is insignificant, but allows the execution of two operations per cycle for vectorized code. 
moreover, the size of the unified vector/scalar register file required for peak performance is an order of magnitude smaller than traditional vector register files, allowing efficient on-chip vlsi implementation. the results of simulations of the livermore loops and linpack using this architecture are presented. n. p. jouppi j. bertoni d. w. wall putting it together: going wireless win treese access to a public switched multi-megabit data service offering f. r. dix m. kelly r. w. klessig editorial the field of mobile computing and networking is experiencing accelerated growth with the introduction of new mobile-centric products every month. more and more service providers tailor their hardware and software products to the traveling professional needs. with the conversion and augmentation of the cellular networks to digital communication formats (such as cdpd, is-136, is-95, and gsm), it is expected that wireless data services will become accessible to a large population of users and at affordable prices. finally, the vision of personal communication services (pcs) and its spectral allocation in the u.s., coupled with the ip mobility support (rfc 2002), are two of the many driving forces behind the trends of mobile computing and networking. mobicom '96, the second annual international conference on mobile computing and networking, took place in rye, ny, on november 10 12, 1996. mobicom is a prestigious forum for presenting and discussing progress in the field of mobile computing and networking. it is sponsored by the acm and ieee organizations. in 1996, there were all together 18 papers selected from 90 submissions, which originated from 16 countries. as most of the submissions were of very high quality, the technical program committee had a difficult choice of selecting the limited number of papers that could be accommodated. we have the pleasure of bringing to you in this issue of the wireless networks journal a selected set of nine papers that were presented at the conference. the papers are organized roughly from the more physical layer topics to the application- oriented articles. the first paper, by lorch and smith, deals with the important issue of power preservation in mobile computing hardware. in particular, it presents a set of heuristic schemes for detecting when the cpu could be put in a low-power state, thus reducing the battery drainage. as compared with the current apple's cpu power management, the papers' results suggest improvement of about 20% in the battery lifetime, with less than 2% loss in system performance. the second paper, by vukovic and mckown, discusses the unlicensed pcs (upcs) band, which consists of three 10 mhz bands that were recently opened by the u.s. government. although no licensing is required to operate in this spectrum, the users are supposed to comply with a set of rules, known as the upcs etiquette (u.s. code of federal regulations, title 47, part 15(d)). this paper evaluates the channel sharing effects; i.e., blocking access among competing systems. channel assignment problem has been a subject of intensive studies in the last years, with new schemes allowing greater channel reuse. the dynamic channel allocation schemes are of particular interest, because of their ability to adapt to the spatial and temporal differing traffic demands in cellular networks. the third paper, by das, sen and jayaram, proposes a dynamic channel allocation scheme and compares its performance with other channel allocation algorithms. 
synchronization and dependency relations are important issues in the design of a distributed system. vector clocks have been traditionally used to represent causal dependencies in a distributed system. in the fourth paper, by prakash and singhal, it is claimed that vector clocks are inappropriate for a distributed mobile system implementation, mainly because of their static nature and inefficiency in a large-scale system. two alternatives to vector clocks are presented and discussed in this paper. one of the fundamental issues in mobile systems is user location management. efficient location management strategies are of particular importance to emerging systems because of the anticipation that the number of mobile users in future networks will be significantly larger than in today's systems. the paper by jannink, lam, shivakumar, widom and cox addresses the design and evaluation of efficient, scalable, and flexible location management techniques. in particular, a family of location management techniques is introduced and its performance is evaluated and compared with other techniques. the internet protocol (ip) and the related tcp and udp are the communication protocols of choice for computing applications. as such, the effect of the use of these protocols in the wireless and mobile environment is of particular importance. the ietf ip mobility support addresses mobility issues in ip, allowing the mobile host to continue its internet connectivity while away from home. however, the current tcp/ip protocol suite was not designed with wireless communication links in mind, in which capacity is at a premium. the sixth paper in this issue, by degermark, engan, nordgren and pink, discusses tcp/ip header compression techniques for wireless communication. the paper especially concentrates on the new version of ip, ipv6, in which the headers are larger to increase addressing space and functionality. this paper received the best student paper award at mobicom '96. still on the issue of tcp for wireless communication, the seventh paper, by durst, miller and travis, suggests that the space communication environment and the mobile and wireless networks share a lot of similar characteristics. specifically, issues such as harsh propagation conditions (high error rate and link outage), requirements for low power design and small physical dimensions, and low available data rate are common to both of the environments. the paper proposes a set of extensions to tcp to cope with these problems. the last two papers are in the area of mobile applications. the paper by joseph and kaashoek describes extensions to the rover toolkit for implementation of reliable applications for the mobile environment. in particular, the existing failure models are improved upon by considering server failures, in addition to client and communication failures. the extensions to the rover toolkit include both system-level and language-level support. it is suggested that these extensions impose a low overhead of only a few percent in execution time and are especially optimized for the low-bandwidth and high-latency mobile communication environment. last but not least, the paper by abowd, atkeson, hong, long, kooper and pinkerton addresses the design of context-aware applications in the mobile environment, where the knowledge of the user's current location and history of his or her location can be used in providing improved services.
in particular, the addressed mobile application in this paper is the tour guide---the cyberguide. we believe that the papers brought here are a sample that well represents the current state-of-the-art of the mobile computing and networking area and describes the up-to-date issues being addressed by the research community these days. zygmunt j. haas ian f. akyildiz problems of the building of vpns svetlin petrov peter antonov bidirectional token flow system over the last few years there has been an avalanche of technical literature on techniques for allowing nodes to access and share one of the most valuable resources of a computer communication network, i.e., the bandwidth of the communication media.1 this report describes the design of a new type of local area computer network called bid that uses a bidirectional token flow as a form of token passing.2 the design of bid not only allows it to process messages having various properties but also makes it possible to operate simultaneously in a packet and circuit- switching mode. this latter mode, using minipackets, enables bid to be used for those applications having real- time response requirements. in describing bid this report defines the basic concepts of the implicit token, token carrier, token generator, and token wait time. it also describes a new preemption technique which reduces the walk time normally associated with token and polling systems. a gpss computer simulation was made of the bid system to determine the average delay times experienced by the various priority classes of messages. an analytical model using a single priority class of messages is presented, and the results of the computer simulation are compared with this. the simulation results also indicate the uniformity of the average delays experienced by the bus interface units across the bus resulting from the use of bidirectional flow of tokens. m. e. ulug g. m. white w. j. adams connection closures adding application-defined behaviour to network connections new techniques in the implementation of out-of-band control in atm networks are causing both industry and research laboratories to look again at the whole question of atm signalling. these techniques devolve the control from the network devices into a higher level distributed processing environment, resulting in simpler network devices and more flexible control architectures.this paper takes this idea one stage further and suggests that at least in some cases, the only place in which control can be exerted without inhibiting applications is within the applications themselves. we call the combination of an application defined control policy and a network connection a _connection closure_. s. rooney towards a unifying approach to mobile computing in this paper we address the diversity of mobile devices and the related problematic issues with a largely platform-independent approach to development. our work is motivated by the amount and variety of mobile devices foreseen to reside in future ubiquitous computing environments. to reduce the development effort and to augment data exchange facilities, we present an application framework that abstractly defines common interaction objects in user interfaces on mobile devices. carsten magerkurth thorsten prante editorial willie w. lu william c. y. lee pc-cube, a personal computer based hypercube pc-cube is an ensemble of ibm pcs or close compatibles connected in the hypercube topology with ordinary computer cables. 
communication occurs at the rate of 115.2 k-baud via the rs-232 serial links. available for pc-cube is the crystalline operating system iii (cros iii), mercury operating system, cubix and plotix which are parallel i/o and graphics libraries. a cros performance monitor was developed to facilitate the measurement of communication and computation time of a program and their effects on performance. also available are cxlisp, a parallel version of the xlisp interpreter; grafix, some graphics routines for the ega and cga; and a general execution profiler for determining execution time spent by program subroutines. pc-cube provides a programming environment similar to all hypercube systems running cros iii, mercury and cubix. in addition, every node (personal computer) has its own graphics display monitor and storage devices. these allow data to be displayed or stored at every processor, which has much instructional value and enables easier debugging of applications. some application programs which are taken from the book solving problems on concurrent processors [fox 88] were implemented with graphics enhancement on pc-cube. the applications range from solving the mandelbrot set, laplace equation, wave equation, long range force interaction, to wa-tor, an ecological simulation. a. ho g. c. fox d. w. walker s. snyder d. chang s. chen m. breaden t. cole fair-efficient call admission control policies for broadband networks - a game theoretic framework zbigniew dziong lorne g. mason dsns (dynamically-hazard-resolved statically-code-scheduled, nonuniform superscalar): yet another superscalar processor architecture morihiro kuga kazuaki murakami shinji tomita a run time support system for multiprocessor machines we report on the design and development of a run time support system (rts) which overcomes known disadvantages of other existing systems. the rts is independent of the configuration - if the system is reconfigured there is no need to change user's processes. it supports both synchronous and asynchronous communication. a system based on the same design philosophy with this rts is actually implemented for messages of arbitrary length in the prototype of the padmavati machine, being developed within esprit project 1219 (967). experiments were conducted in this environment and performance measurements were obtained and are reported here. m. a. tsoukarellas t. s. papatheodorou conferences marisa campbell optimizing tcp forwarder performance oliver spatscheck jørgen s. hansen john h. hartman larry l. peterson error control using retransmission schemes in multicast transport protocols for real-time media sassan pejhan mischa schwartz dimitris anastassiou the network architecture of the connection machine cm-5 (extended abstract) charles e. leiserson zahi s. abuhamdeh david c. douglas carl r. feynman mahesh n. ganmukhi jeffrey v. hill daniel hillis bradley c. kuszmaul margaret a. st. pierre david s. wells monica c. wong shaw-wen yang robert zak introduction to mobile computing sandeep jain rate analysis for embedded systems embedded systems consist of interacting components that are required to deliver a specific functionality under constraints on execution rates and relative time separation of the components. in this article, we model an embedded system using concurrent processes interacting through synchronization. 
we assume that there are rate constraints on the execution rates of processes imposed by the designer or the environment of the system, where the execution rate of a process is the number of its executions per unit time. we address the problem of computing bounds on the execution rates of processes constituting an embedded system, and propose an interactive rate analysis framework. as part of the rate analysis framework we present an efficient algorithm for checking the consistency of the rate constraints. bounds on the execution rate of each process are computed using an efficient algorithm based on the relationship between the execution rate of a process and the maximum mean delay cycles in the process graph. finally, if the computed rates violate some of the rate constraints, some of the processes in the system are redesigned using information from the rate analysis step. this rate analysis framework is implemented in a tool called ratan. we illustrate by an example how ratan can be used in an embedded system design. anmol mathur ali dasdan rajesh k. gupta architectural and protocol frameworks for multicast data transport in multi-service networks the paper presents architectural and protocol techniques for multicast transport of multi-service data (such as video, audio and graphics). a tree-structured channel in the network is employed as a building block for constructing multicast transport services. data from different sources are multiplexed over a shared tree channel for reaching destinations through intermediate network nodes and links. the choice of multicast architecture is based on the extent of sharing of common links in a tree across multi-source data flows to allow a finer degree of link bandwidth allocation control and reduce the fixed cost of links for data transport. the canonical network substrate so constructed exemplifies a 'programmable network' that may be 'plugged-in' with qos and flow parameters to instantiate the network behavior for matching the needs of each application. this philosophy is in alignment with the evolving 'internet service layer' functionalities. the paper describes the functional and protocol elements of multicast architectures to support the 'programmable network' view. it also describes the salient features of an implementation embedding these architectures. k. ravindran netblt: a high throughput transport protocol d. d. clark m. l. lambert l. zhang algorithms for energy-efficient multicasting in static ad hoc wireless networks in this paper we address the problem of multicasting in ad hoc wireless networks from the viewpoint of energy efficiency. we discuss the impact of the wireless medium on the multicasting problem and the fundamental trade-offs that arise. we propose and evaluate several algorithms for defining multicast trees for session (or connection-oriented) traffic when transceiver resources are limited. the algorithms select the relay nodes and the corresponding transmission power levels, and achieve different degrees of scalability and performance. we demonstrate that the incorporation of energy considerations into multicast algorithms can, indeed, result in improved energy efficiency. jeffrey e. wieselthier gam d. nguyen anthony ephremides management of virtual private networks for integrated broadband communication j. m. schneider t. preuß p. s.
nielsen efficient policies for carrying web traffic over flow-switched networks anja feldmann jennifer rexford ramon caceres architecture of a fiber optics based distributed information network fortis: local area network paul c. barr suban g. krishnamoorthy a multi-user data flow architecture this paper discusses the design of a prototype data flow machine that has memory management hardware in each memory block. this facility allows loading and deleting code that is produced by independent compilations. the first sections of the paper deal with the general architecture of the machine and the format specifications for the instruction cells, logical addresses, and switch packets. the paper concludes with a discussion of the mapping hardware used in the memory blocks. the results of a simulation study for this subsystem are also presented. f. j. burkowski quality of service over ethernet for telelearning applications the objective of this investigation is to implement a technology that provides measurements of quality of service (qos) of established local area networks. improvements of the required infrastructure for applications of telelearning (videoconference, electronic blackboard, shared applications…) are implemented with a minimal cost. we are implementing features of atm on ethernet local area networks, through the protocol "cells in frame" (cif). this protocol encapsulates atm cells in ethernet frames using the cif specification. values of measured parameters reveal that the implementation provides the atm quality of service necessary for the telelearning traffic type. j. m. arco b. alarcos a. m. hellín d. meziat spread-spectrum cdma packet radio mac protocol using channel overload detection and blocking an analytical and simulation performance evaluation is presented for a multi-access protocol for a data packet radio network with a limited user capacity. the network employs direct-sequence code division multiple access (ds-cdma) in a centralised channel load-sensing scheme with channel overload collision detection and blocking via a separate ancillary channel state information broadcast system. traffic models that incorporate both a finite population and an infinite population and variable length data messages are considered. results show that an improved throughput-delay performance can be obtained by implementing a channel overload detection message dropping scheme as well as a channel overload avoidance message blocking scheme. the channel overload threshold β is fixed at the system's maximum user capacity whereas it is shown that the overload avoidance blocking threshold should be variable and dependent on the mean message arrival rate. garth judge fambirai takawira the value of a systematic approach to measurement and analysis: an isp case study srinivas ramanathan edward h. perry scalable internet servers: issues and challenges krishna kant prasant mohapatra data traffic in a new centralized switch node architecture for lan ehab a. khalil m. khalid h. b. kekre simple constant-time consensus protocols in realistic failure models using simple protocols, it is shown how to achieve consensus in constant expected time, within a variety of fail-stop and omission failure models. significantly, the strongest models considered are completely asynchronous. all of the results are based on distributively flipping a coin, which is usable by a significant majority of the processors. finally, a nearly matching lower bound is also given for randomized protocols for consensus.
benny chor michael merritt david b. shmoys the 801 minicomputer this paper provides an overview of an experimental system developed at the ibm t. j. watson research center. it consists of a running hardware prototype, a control program and an optimizing compiler. the basic concepts underlying the system are discussed as are the performance characteristics of the prototype. in particular, three principles are examined: system orientation towards the pervasive use of high level language programming and a sophisticated compiler, a primitive instruction set which can be completely hard- wired, storage hierarchy and i/o organization to enable the cpu to execute an instruction at almost every cycle. george radin a network environment for computer-supported cooperative work a second generation computer supported cooperative work system called electronic information exchange system (eies2) is described in this paper. the eies2 communications environment allows users to network across geographical constraints using asynchronous or synchronous communications. the architecture of the network environment is decentralized, and is implemented using modern standards, and an easy-to-use user interface. at the heart of eies2 is a high-level, object oriented pseudo machine which incorporates a distributed, communications-oriented database. the set of tools provided by the eies2 communications environment is well suited for implementing group communication systems that have an extensive set of user features built into it. these tools are well suited to support the group communication model described by the amigo task force (ifip 6.5 and iso tc97/sc18/wg64). the eies2 application layer protocols, using ccitt/iso x.410 remote operations, support a distributed object oriented database. j. whitescarver p. mukherji m. turoff r. j. deblock r. m. czech b. k. paul modelling and performance evaluation of multiprocessor based packet switches this paper presents an approximate analytic model for the performance analysis of a class of multiprocessor based packet switches. for these systems, processors and common memory modules are grouped in clusters, each of them composed of several processor-memory pairs that communicate through a multiple bus interconnection network. intercluster communication is also achieved using one or more busses. the whole network operates in a circuit- switched mode. after access completion, a processor remains active for an exponentially distributed random time. access times are also exponential with different means, depending upon the location (local, cluster, external) of the referenced module. the arbitration is done on a priority basis. the performance is predicted by computing the average number of switched packets per time unit. other related indexes are also given. numerical results are obtained rather easily by solving a set of two algebraic equations. simulation is used to validate the accuracy of the approximations used in the model. j. l. melús e. sanvicente j. magriñá towards a new standard for system-level design huge new design challenges for system-on-chip (soc) are the result of decreasing time- to-market coupled with rapidly increasing gate counts and embedded software representing 50-90 percent of the functionality. the exchange of system-level intellectual property (ip) models for creating executable specifications has become a key strategic element for efficient system-to-silicon design flows. 
because c and c++ are the dominant languages used by chip architects, systems engineers and software engineers today, we believe that a c-based approach to hardware modeling is necessary. this will enable co-design, providing a more natural solution to partitioning functionality between hardware and software. in this paper we present the design of systemc, a c++ class library that provides the necessary features for modeling design hierarchy, concurrency, and reactivity in hardware. we will also describe experiences of using systemc 1) for the co-verification of an 8051 processor with a bus-functional model and 2) for the modeling and simulation of an mpeg-2 video decoder. stan y. liao analyzing the fault tolerance of double-loop networks jon m. peha fouad a. tobagi multistar implementation of expandable shufflenets philip p. to tak-shing p. yum yiu-wing leung a system for constructing configurable high-level protocols new distributed computing applications are driving the development of more specialized protocols, as well as demanding greater control over the communication substrate. here, a network subsystem that supports modular, fine-grained construction of high-level protocols such as atomic multicast and group rpc is described. the approach is based on extending the standard hierarchical model of the _x_-kernel with composite protocols in which micro-protocol objects are composed within a standard runtime framework. each micro-protocol realizes a separate semantic property, leading to a highly modular and configurable implementation. in contrast with similar systems, this approach provides finer granularity and more flexible inter-object communication. the design and prototype implementation running on mach are described. performance results are also given for a micro-protocol suite implementing variants of group rpc. nina t. bhatti richard d. schlichting ip-based protocols for mobile internetworking john ioannidis dan duchamp gerald q. maguire internet protocol traffic analysis with applications for atm switch design andrew schmidt roy campbell on site: the need for speed scott tilley questions for local area network panelists much has been written and spoken about the capabilities of emerging designs for local area networks (lan's). the objective for this panel session was to gather together companies and agencies that have brought lan's into operation. questions about the performance of lans have piqued the curiosity of the computer/communications community. each member of the panel briefly described his or her lan installation and workload as a means of introduction to the audience. questions about performance were arranged into a sequence by performance attributes. those attributes thought to be of greatest importance were discussed first. discussion on the remainder of the attributes continued as time and audience interaction permitted. mitchell g. spiegel fault recovery for guaranteed performance communications connections anindo banerjea leaf initiated join handover evaluation m. teughels i. de coster e. van lil a. van de capelle issues and challenges in atm networks ronald j. vetter david h. c. du reliable broadcast protocols jo-mei chang n. f.
maxemchuk load execution latency reduction bryan black brian mueller stephanie postal ryan rakvic noppanunt utamaphethai john paul shen resources section: conferences jay blickstein a routing architecture for mobile integrated services networks a drawback of the conventional internet routing architecture is that its route computation and packet forwarding mechanisms are poorly integrated with congestion control mechanisms. any datagram offered to the network is accepted; routers forward packets on a best-effort basis and react to congestion only after the network resources have already been wasted. a number of proposals improve on this to support multimedia applications; a promising example is the integrated services packet network (ispn) architecture. however, these proposals are oriented to networks with fairly static topologies and rely on the same conventional internet routing protocols to operate. this paper presents a routing architecture for mobile integrated services networks in which network nodes (routers) can move constantly while providing end-to-end performance guarantees. in the proposed connectionless routing architecture, packets are individually routed towards their destinations on a hop by hop basis. a packet intended for a given destination is allowed to enter the network if and only if there is at least one path of routers with enough resources to ensure its delivery within a finite time. once a packet is accepted into the network, it is delivered to its destination, unless resource failures prevent it. each router reserves resources for each active destination, rather than for each source-- destination session, and forwards a received packet along one of multiple loop-free paths towards the destination. the resources and available paths for each destination are updated to adapt to congestion and topology changes. this mechanism could be extended to aggregate dissimilar flows as well. shree murthy j. j. garcia-luna-aceves multimedia, network protocols and users - bridging the gap in this paper we present the case for using specifically configured protocol stacks geared towards human requirements in the delivery of distributed multimedia. we define quality of perception (qop) as representing the user side of the more technical and traditional quality of service (qos). qop is a term which encompasses not only a user's satisfaction with the quality of multimedia presentations, but also his/her ability to analyse, synthesise and assimilate the informational content of multimedia displays. the dynamically reconfigurable protocol stacks (drops) architecture supports low cost reconfiguration of individual protocol mechanisms in an attempt to best maintain qop in connections where the provided qos fluctuates unpredictably. results show that drops can be used to improve on the qop provided by legacy protocol stacks (tcp/ip, udp/ip), especially in the case of dynamic and complex sequences. g. ghinea j. p. thomas r. s. fish a dominating set model for broadcast in all-port wormhole-routed 2d mesh networks a new model for broadcast in wormhole-routed networks is proposed. the model uses and extends the concept of dominating sets in order to systematically develop efficient broadcast algorithms for all-port wormhole-routed systems, in which each node can simultaneously transmit messages on different outgoing channels. in this paper, two broadcast algorithms for two-dimensional (2d) mesh networks are presented. 
in the first approach, the source node uses a multicast algorithm to deliver the message to a set of dominating nodes, which can subsequently deliver the message to all other nodes in the network in a single message-passing step. this algorithm requires at most ⌈log2 n⌉ steps, where n is the total number of nodes in the network, although in many cases only ⌈log2 n⌉ - 1 steps are needed. the second algorithm, called the d-node algorithm, reduces the number of steps by using multiple levels of dominating nodes in a recursive manner. for square meshes containing n = 2^(2(k+2)) nodes, k≥0, the d-node algorithm requires at most k+4 steps. similar upper bounds are shown to hold for meshes of other sizes and shapes. for specific source nodes and mesh shapes, the number of steps is shown to equal the theoretical lower bound of ⌈log5 n⌉. a simulation study confirms the advantage of the d-node algorithm, under various system parameters and conditions, over other broadcast algorithms. yih-jia tsai philip k. mckinley systems modelling and description motamarri saradhi measure+: a measurement-based dependability analysis package dong tang ravishankar k. iyer prediction of transport protocol performance through simulation a five-layer simulation model of osi protocols is described and applied to predict transport performance on a local area network (lan). emphasis is placed on time-critical applications typical of a small, flexible manufacturing system. the results predict that, with current technology, osi protocols can provide 1.5 mbps throughput, one-way delays between 6 and 10 ms, and response times between 15 and 25 ms. the results also indicate that csma/cd is a reasonable access method for time-critical applications on small, factory lans, if loads of less than 40% are anticipated. for loads between 40% and 70%, a token passing access method provides better performance for time-critical applications. k mills m wheatley s heatley network management viewpoints: a new way of encompassing the network management complexity simon znaty jean sclavos performance analysis of local communication loops the communication loops analyzed here provide an economical way of attaching many different terminals to an ibm 4331 host processor which may be several kilometers away. as a first step of the investigation, protocol overhead is derived. it consists of request and transmission headers and the associated acknowledgements as defined by the system network architecture. additional overhead is due to the physical layer protocols of the synchronous data link control including lower level confirmation frames. the next step is to describe the performance characteristics of the loop attachment hardware, primarily consisting of the external loop station adapters for local and teleprocessing connections and the loop adapter processor. kuno m. roehr horst sadlowski a characterization of processor performance in the vax-11/780 joel s. emer douglas w. clark experience using multiprocessor systems - a status report anita k. jones peter schwarz enhanced distributed explicit rate allocation for abr services in atm networks nasir ghani jon w. mark an evaluation framework for multicast ordering protocols a new framework for evaluating multicast ordering protocols is presented. it allows one to compare solutions to the problem of ordering multicast messages in an identical way at all receiving sites when multiple senders operate concurrently. a new type of delay measure called synchronization delay forms the basis for this framework.
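to make the reconstructed step bounds above concrete, the short sketch below simply tabulates them for a few square meshes; it evaluates the stated formulas (⌈log2 n⌉ steps for the dominating-set algorithm, k+4 for the d-node algorithm on n = 2^(2(k+2)) nodes, and the ⌈log5 n⌉ lower bound) and is not an implementation of the broadcast algorithms themselves.

import math

# tabulate the quoted bounds for square meshes with n = 2^(2(k+2)) nodes
for k in range(4):
    n = 2 ** (2 * (k + 2))
    first_algo = math.ceil(math.log2(n))      # dominating-set algorithm, upper bound
    d_node = k + 4                            # recursive d-node algorithm, upper bound
    lower = math.ceil(math.log(n, 5))         # all-port theoretical lower bound
    print(f"n={n:5d}  ceil(log2 n)={first_algo:2d}  d-node<={d_node:2d}  ceil(log5 n)={lower:2d}")

# for n = 1024 this prints 10, 7 and 5, showing how the d-node algorithm
# moves the step count from the log2 bound toward the log5 lower bound.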
it filters out network-dependent factors and accounts precisely for the excess delay that user messages suffer in order to achieve the ordering property. the usefulness of the evaluation framework is demonstrated by applying it to three protocols covered in the literature. as an example their delay behavior is analyzed for low-traffic environments. the evaluation results allow a user to choose the most suitable protocol for his application. erwin mayer error propagation and error disappearance in ccitt r. 111 system yu zeng-qian conferences ken korman the ldf 100: a large grain dataflow parallel processor ian kaplan netnews dennis fowler supercomputing ken kennedy a case for packet switching in high-performance wide-area networks the large capacity of optical fibers suggests that circuit-switching may become a more attractive switching method in future communication networks. we show, however, that under some reasonable assumptions the delays associated with circuit-switching make the technique inferior to packet- switching in a high-performance, distributed environment. a network design that demonstrates the feasibility of packet-switching in high-performance environment is also presented. this work was sponsored in part by the defense advanced research projects agency under contract n00039-86-k-0431, by the digital equipment corporation, by att information systems and by bell northern research. z. haas d. r. cheriton computing in three dimensions dan teodosiu design of a fair bandwidth allocation policy for vbr traffic in atm networks subir k. biswas rauf izmailov leader election in the presence of link failures (abstract) gurdip singh simulation-based performance evaluation of routing protocols for mobile ad hoc networks in this paper we evaluate several routing protocols for mobile, wireless, ad hoc networks via packet - level simulations. the ad hoc networks are multi - hop wireless networks with dynamically changing network connectivity owing to mobility. the protocol suite includes several routing protocols specifically designed for ad hoc routing, as well as more traditional protocols, such as link state and distance vector, used for dynamic networks. performance is evaluated with respect to fraction of packets delivered, end - to - end delay, and routing load for a given traffic and mobility model. both small 30 nodes and medium sized 60 nodes networks are used. it is observed that the new generation of on - demand routing protocols use much lower routing load, especially with small number of peer - to - peer conversations. however, the traditional link state and distance vector protocols provide, in general, better packet delivery and end - to - end delay performance. samir r. das robert castañeda jiangtao yan cp/m development system (abstract only) one of the most useful tools for a microcomputer laboratory is a development system for the implementation of a variety of microcomputers. such a development system has been implemented for both 8 and 16 bit microprocessors such as the 8085, 8086, 68000, etc. the system incorporates not only the cross assemblers needed but common memory which can be downloaded from the cp/m system and can be tri-stated between the microcomputer in use and the cp/m system. kenneth cooper jeffrey smith performance interactions between p-http and tcp implementations this document describes several performance problems resulting from interactions between implementations of persistent-http (p-http) and tcp. 
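the ad hoc routing study above reports fraction of packets delivered, end-to-end delay and routing load; as an aside, here is a minimal sketch of how such metrics could be computed from a packet-level simulation trace (the trace record format is invented purely for illustration).

# each trace record: ('sent', packet_id, time), ('recv', packet_id, time),
# or ('ctrl',) for one routing-control packet
def routing_metrics(trace):
    sent, recv, ctrl = {}, {}, 0
    for rec in trace:
        if rec[0] == 'sent':
            sent[rec[1]] = rec[2]
        elif rec[0] == 'recv':
            recv[rec[1]] = rec[2]
        else:
            ctrl += 1
    delivered = [p for p in recv if p in sent]
    pdf = len(delivered) / len(sent)                           # packet delivery fraction
    delay = sum(recv[p] - sent[p] for p in delivered) / len(delivered)
    load = ctrl / len(delivered)                               # control packets per delivered packet
    return pdf, delay, load

trace = [('sent', 1, 0.0), ('recv', 1, 0.05), ('sent', 2, 0.1), ('ctrl',)]
print(routing_metrics(trace))   # (0.5, 0.05, 1.0)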
two of these problems tie p-http performance to tcp delayed-acknowledgments, thus adding up to 200ms to each p-http transaction. a third results in multiple slow-starts per tcp connection. unresolved, these problems result in p-http transactions which are 14 times slower than standard http and 20 times slower than potential p-http over a 10 mb/s ethernet. we describe each problem and potential solutions. after implementing our solutions to two of the problems, we observe that p-http performs better than http on a local ethernet. although we observed these problems in specific implementations of http and tcp (apache-1.1b4 and sunos 4.1.3, respectively), we believe that these problems occur more widely. john heidemann call route identification in telephone traffic monitoring systems milen lukanchevski hikolay kostadinov hovanes avakian the distributed double-loop computer network (ddlcn) this paper presents the design of the distributed double-loop computer network (ddlcn), which is a local-area distributed computing system that interconnects midi, mini and micro computers using a fault-tolerant double-loop network. several new features and novel concepts have been incorporated into the design of its subsystems, viz., the reliable communication network, the programming/operating system (p/os), and the distributed loop data base system (dldbs). the interface design is unique in that it employs tri-state control logic and bit-sliced processing, thereby enabling the network to become dynamically reconfigurable and fault tolerant with respect to communication link failure as well as component failure in the interface. three classes of multi-destination communication protocols, each providing a different degree of reliability, have been incorporated into the network to facilitate efficient and reliable exchanges of messages. the p/os is distinguished from other similar research efforts in that its ultimate goal is to support not only communication and cooperation among autonomous, distributed processes running at various nodes, but also to support convenient and correct resource sharing through program generation (semi-automatic programming) for application and systems programmers. a new concurrency control mechanism for dldbs has been developed, which uses distributed control without global locking and is deadlock free. in addition to being simple to implement and having good performance (high throughput and low delay), the mechanism is also robust with respect to failure of both communication links and hosts. ming t. liu sandra a. mamrak jayshree ramanathan adaptive filters in multiuser (mu) cdma detection the primary purpose of this work is to provide a perspective on adaptive code- division multiple- access (cdma) mu receivers that have been proposed for future digital wireless systems. adaptive receivers can potentially adapt to unknown and time-varying environmental parameters such as the number of users, their received powers, spreading codes and time-delays. two adaptive receiver architectures are primarily considered -- one in which the sampled received signal is filtered, and can be used in both the uplink (i.e., at the base station) and downlink (i.e., at the mobile handset), and another in which the spreading codes of users are filtered (assuming knowledge of users' codes and its timing at the receiver) for use in the uplink. 
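the adaptive mu-cdma receiver architectures above adapt a filter applied to the sampled received signal; as one generic illustration of such adaptation, here is an lms (least-mean-squares) update sketch. this is a standard adaptive-filtering building block rather than any of the specific detectors surveyed in the paper, and the signal, noise level and step size are made up.

import random

def lms_filter(received, training, taps=4, mu=0.05):
    # adapt a linear filter so its output tracks the known training symbols
    w = [0.0] * taps
    for n in range(taps - 1, len(received)):
        x = received[n - taps + 1:n + 1]          # current and previous samples
        y = sum(wi * xi for wi, xi in zip(w, x))  # filter output
        e = training[n] - y                       # error against the training symbol
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return w

# toy example: antipodal symbols corrupted by noise; the filter adapts to recover them
symbols = [random.choice([-1.0, 1.0]) for _ in range(2000)]
rx = [s + random.gauss(0.0, 0.2) for s in symbols]
print(lms_filter(rx, symbols))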
relevant issues such as training-based and blind implementations of the adaptive receiver are discussed, as well as (transient) convergence rates and estimation noise in steady-state. teng joon lim sumit roy flexible network support for mobile hosts fueled by the large number of powerful light-weight portable computers, the expanding availability of wireless networks, and the popularity of the internet, there is an increasing demand to connect portable computers to the internet at any time and in any place. however, the dynamic nature of a mobile host's connectivity and its use of multiple network interfaces require more flexible network support than has typically been available for stationary workstations. this paper introduces two flow-oriented mechanisms, in the context of mobile ip [25], to ensure a mobile host's robust and efficient communication with other hosts in a changing environment. one mechanism supports multiple packet delivery methods (such as regular ip or mobile ip) and adaptively selects the most appropriate one to use according to the characteristics of each traffic flow. the other mechanism enables a mobile host to make use of multiple network interfaces simultaneously and to control the selection of the most desirable network interfaces for both outgoing and incoming packets for different traffic flows. we demonstrate the usefulness of these two network layer mechanisms and describe their implementation and performance. xinhua zhao claude castelluccia mary baker superscalar vs. superpipelined machines norman p. jouppi workload models of vbr video traffic and their use in resource allocation policies pietro manzoni paolo cremonesi giuseppe serazzi concurrent common knowledge: a new definition of agreement for asynchronous systems prakash panangaden kim taylor a direct lower bound for k-set consensus hagit attiya improving cisc instruction decoding performance using a fill unit mark smotherman manoj franklin on the performance of loosely coupled multiprocessors a processing element (pe) essentially consists of a processor and a memory module. a loosely coupled multiprocessor is comprised of a set of such pes interconnected through an interconnection network (in). the design of the in is crucial to efficient communication between the pes. this paper presents approximate evaluations of two loosely coupled architectures, each having three types of ins, namely: shared bus, crossbar and a class of multistage interconnection networks (mins) called omega network. probability of acceptance (pa) of a message is considered as a measure of the performance. for a high rate of internal requests, it is shown that an omega network performs close to a crossbar while reducing the cost of interconnection to a large extent. laxmi n. bhuyan wireless research centers: building the future with wireless technology victor bahl analysis of isp ip/atm network traffic measurements raja epsilon jun ke carey williamson session directories and scalable internet multicast address allocation a multicast session directory is a mechanism by which users can discover the existence of multicast sessions. in the mbone, session announcements have also served as multicast address reservations - a dual purpose that is efficient, but which may cause some side-effects as session directories scale. in this paper we examine the scaling of multicast address allocation when it is performed by such a multicast session directory.
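the session-directory approach above couples session announcement with multicast address allocation; the toy sketch below illustrates one such coupling, hashing a session name into the administratively scoped 239/8 range. this is only an illustration of the idea, not the sdr/mbone allocation scheme analysed in the paper.

import hashlib

def session_to_multicast_addr(session_name):
    # map a session name to an address in 239.0.0.0/8 (administratively scoped)
    h = hashlib.sha1(session_name.encode()).digest()
    # use 24 bits of the hash for the low three octets; collisions are possible,
    # which is exactly the kind of scaling issue the paper examines
    return "239.%d.%d.%d" % (h[0], h[1], h[2])

print(session_to_multicast_addr("ietf-plenary-audio"))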
despite our best efforts to make such an approach scale, this analysis ultimately reveals significant scaling problems, and suggests a new approach to multicast address allocation in the internet environment. mark handley web sites michele tepper abstract personal communications manager (apcm) the abstract personal communications manager (apcm) is an application programmers interface (api) for telecommunications, providing simple and powerful interaction between communication applications and protocol software for circuit switched connections and transmission of voice and data. a few generic service primitives are developed through abstraction of the various call control operations. konrad froitzheim on providing support for protocol adaptation in mobile wireless networks pradeep sudame b. r. badrinath optimization flow control - i: basic algorithm and convergence steven h. low david e. lapsley putting it together: the home area network win treese efficient algorithms for performing packet broadcasts in a mesh network eytan modiano anthony ephremides an associative/parallel processor for partial match retrieval using superimposed codes this paper presents the design and implementation of special hardware for effective use of the method of superimposed codes. it is shown that the method of superimposed codes is particularly well suited to easy design and implementation of fast and modular hardware. the implementation has shown that a performance gain of two orders of magnitude over conventional software implementations is obtained by using the special hardware. this makes the method of superimposed codes extremely attractive for data base system requiring partial match retrieval. we also demonstrate that the associative memory design is easily adaptable to large scale integration which would make such an approach very cost effective and lead to even further gains in performance. sudhir r. ahuja charles s. roberts multiwavelength optical networks with limited wavelength conversion rajiv ramaswami galen sasaki corrigendum: "computational algorithms for state dependent queuing networks" charles h. sauer architecture of the symbolics 3600 david a. moon space-time characteristics of aloha protocols in high-speed bidirectional bus networks whay chiou lee pierre a. humblet nifdy: a low overhead, high throughput network interface in this paper we present nifdy, a network interface that uses admission control to reduce congestion and ensures that packets are received by a processor in the order in which they were sent, even if the underlying network delivers the packets out of order. the basic idea behind nifdy is that each processor is allowed to have at most one outstanding packet to any other processor unless the destination processor has granted the sender the right to send multiple unacknowledged packets. further, there is a low upper limit on the number of outstanding packets to all processors.we present results from simulations of a variety of networks (meshes, tori, butterflies, and fat trees) and traffic patterns to verify nifdy's efficacy. our simulations show that nifdy increases throughput and decreases overhead. the utility of nifdy increases as a network's bisection bandwidth decreases. when combined with the increased payload allowed by in- order delivery nifdy increases total bandwidth delivered for all networks. the resources needed to implement nifdy are small and constant with respect to network size. 
timothy callahan seth copen goldstein engineering atm networks for congestion avoidance emad al-hammadi mohammad mehdi shahsavari mocha: a quality adaptive multimedia proxy cache for internet streaming multimedia proxy caching is a client-oriented solution for large-scale delivery of high quality streams over heterogeneous networks such as the internet. existing solutions for multimedia proxy caching are unable to adjust quality of cached streams. thus these solutions either cannot maximize delivered quality or exhibit poor caching efficiency. this paper presents the design and implementation of mocha, a quality adaptive multimedia proxy cache for layered encoded streams. the main contribution of mocha is its ability to adjust quality of cached streams based on their popularity and on the available bandwidth between proxy and interested clients. thus mocha can significantly improve caching efficiency without compromising delivered quality. to perform quality adaptive caching, mocha implements fine-grained replacement and fine-grained prefetching mechanisms. we describe our prototype implementation of mocha on top of squid and address various design challenges such as managing partially cached streams. finally, we validate our implementation and present some of our preliminary results. reza rejaie jussi kangasharju a simple tcp extension for high-speed paths zheng wang jon crowcroft ian wakeman performance comparison of mobile support strategies rieko kadobayashi masahiko tsukamoto cert incident response and the internet katherine fithen barbara fraser local networking by ring, ethernet, broadband, and pabx - perspectives from the field the advocates of various approaches to local networking have largely emerged from the ranks of research and development organizations, and have thus mostly described the prospective advantages of their preferred technologies. often, in the field, the dominating concerns turn out to be different from the ones anticipated by the developers. this panel session will bring together four representatives of organizations that have each chosen different local network technologies, developed them, placed products in the field, and gained some practical experience. jerome h. saltzer the use of service limits for efficient operation of multistation single-medium communication systems sem c. borst onno j. boxma hanoch levy restoration strategies and spare capacity requirements in self-healing atm networks yijun xiong lorne g. mason the performance potential of multiple functional unit processors in this paper, we look at the interaction of pipelining and multiple functional units in single processor machines. when implementing a high performance machine, a number of hardware techniques may be used to improve the performance of the final system. our goal is to gain an understanding of how each of these techniques contributes to performance improvement. as a basis for our studies we use a cray-like processor model and the issue rate (instructions per clock cycle) as the performance measure. we then systematically augment this base non-pipelined machine with more and more hardware features and evaluate the performance impact of each feature. we find, for example, that in non-vector machines, pipelining multiple function units does not provide significant performance improvements. dataflow limits are then derived for our benchmark programs to determine the performance potential of each benchmark.
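the dataflow limit mentioned just above is the performance bound set by true data dependences alone; a small sketch that computes this bound for a toy instruction sequence follows. the instruction format is invented for illustration (each instruction names its destination register and source registers and is assumed to take one cycle).

# dataflow limit: length of the longest chain of read-after-write dependences;
# with unlimited resources, the ideal issue rate is num_instructions / chain length
def dataflow_limit(instrs):
    depth = {}                      # register -> earliest cycle its value is ready
    longest = 0
    for dst, srcs in instrs:
        ready = 1 + max((depth.get(r, 0) for r in srcs), default=0)
        depth[dst] = ready
        longest = max(longest, ready)
    return longest

prog = [("r1", []), ("r2", ["r1"]), ("r3", ["r1"]), ("r4", ["r2", "r3"])]
chain = dataflow_limit(prog)
print(chain, "cycles; ideal issue rate =", len(prog) / chain)   # 3 cycles; ~1.33 ipc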
in addition, other limits are computed which apply more realistic constraints on a computation. based on these more realistic limits, we determine it is worthwhile to investigate the performance improvements that can be achieved from issuing multiple instructions each clock cycle. several hardware approaches are evaluated for issuing multiple instructions each clock cycle. a. r. pleszkun g. s. sohi reworking the rpc paradigm for mobile clients remote procedure call (rpc) is a popular paradigm for designing distributed applications. the existing rpc implementations, however, do not allow special treatment of mobile hosts and wireless links, which can be a cause of degraded performance and service disruptions in the presence of disconnections, moves and wireless errors. in addition, future information oriented and location aware mobile applications will also need the ability to dynamically bind mobile clients to local information servers. current rpc implementations do not support dynamic binding of mobile clients to servers. in this paper we explore an alternate approach for implementing remote procedure calls that is based on a client-agent-server or indirect model. we describe an rpc implementation based on this approach, called m-rpc, which provides a clean way for mobile wireless clients to access existing rpc services on the wired network via their mobility support routers (msrs). m-rpc adds to the rpc layer on the mobile clients such useful features as dynamic binding, support for disconnected operation and call retries from the msr. ajay v. bakre b. r. badrinath probability of heavy traffic period in third generation cdma mobile communication the paper considers a problem of deriving the multidimensional distribution of a segment of long-range dependent traffic in the third generation mobile communication network. an exact expression for the probability is found when a self-similar process from [8] models the traffic. the probability of heavy-traffic period, the outage probability, and the level-crossing probability are found. it is shown that the level crossing probability depends on the average call length only. further, this probability for traffic with dependent samples is lower than for traffic with independent samples. also, it is shown that there is a linear dependence between the average heavy traffic interval and the average call length. b. s. tsybakov an end-to-end approach to host mobility we present the design and implementation of an end-to-end architecture for internet host mobility using dynamic updates to the domain name system (dns) to track host location. existing tcp connections are retained using secure and efficient connection migration, enabling established connections to seamlessly negotiate a change in endpoint ip addresses without the need for a third party. our architecture is secure---name updates are effected via the secure dns update protocol, while tcp connection migration uses a novel set of migrate options---and provides a pure end-system alternative to routing-based approaches such as mobile ip. mobile ip was designed under the principle that fixed internet hosts and applications were to remain unmodified and only the underlying ip substrate should change. our architecture requires no changes to the unicast ip substrate, instead modifying transport protocols and applications at the end hosts.
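the end-to-end mobility architecture above tracks host location with dynamic dns updates; the sketch below shows what such an update looks like using the third-party dnspython package. the zone name, host name, addresses and ttl are placeholders, and a real deployment would sign the update (e.g. with tsig), as the paper's secure-update requirement implies.

# pip install dnspython
import dns.update
import dns.query

def publish_new_address(zone, host, new_ip, dns_server):
    # replace the host's A record so correspondents can re-resolve its current location
    update = dns.update.Update(zone)
    update.replace(host, 60, "A", new_ip)      # short ttl: mobile hosts move often
    return dns.query.tcp(update, dns_server, timeout=5)

# hypothetical values, for illustration only:
# publish_new_address("example.org", "laptop", "192.0.2.45", "198.51.100.1")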
we argue that this is not a hindrance to deployment; rather, in a significant number of cases, it allows for an easier deployment path than mobile ip, while simultaneously giving better performance. we compare and contrast the strengths of end-to-end and network-layer mobility schemes, and argue that end-to-end schemes are better suited to many common mobile applications. our performance experiments show that hand-off times are governed by tcp migrate latencies, and are on the order of a round-trip time of the communicating peers. alex c. snoeren hari balakrishnan link set sizing for networks supporting smds frank y. s. lin packet-switched local area networks using wavelength-selective station couplers adrian grah terence d. todd a priority scheme for the ieee 802.14 mac protocol for hybrid fiber-coax networks mark d. corner jörg liebeherr nada golmie chatschik bisdikian david h. su decoupled access/execute computer architectures an architecture for improving computer performance is presented and discussed. the main feature of the architecture is a high degree of decoupling between operand access and execution. this results in an implementation which has two separate instruction streams that communicate via queues. a similar architecture has been previously proposed for array processors, but in that context the software is called on to do most of the coordination and synchronization between the instruction streams. this paper emphasizes implementation features that remove this burden from the programmer. performance comparisons with a conventional scalar architecture are given, and these show that considerable performance gains are possible. single instruction stream versions, both physical and conceptual, are discussed with the primary goal of minimizing the differences with conventional architectures. this would allow known compilation and programming techniques to be used. finally, the problem of deadlock in such a system is discussed, and one possible solution is given. james e. smith a comparison of layering and stream replication video multicast schemes the heterogeneity of the internet's transmission resources and end system capability makes it difficult to agree on acceptable traffic characteristics among the multiple receivers of a multicast video stream. three basic approaches have been proposed to deal with this problem: 1) multicasting of replicated video streams at different rates, 2) multicasting the video encoded in cumulative layers, and 3) multicasting the video encoded in non-cumulative layers. even though there is a common belief that the layering approach is better than the replicated stream approach, there have been no studies that compare these schemes. this paper is devoted to such a systematic comparison. our starting point is an observation (substantiated by results in the literature) that a bandwidth penalty is incurred by encoding a video stream in layers. we argue that a fair comparison of these schemes needs to take into account this penalty as well as the specifics of the encoding used in each scheme, protocol complexity, and the topological placement of the video source and the receivers relative to each other. our results show that the believed superiority of layered multicast transmission relative to stream replication is not as clear-cut as is widely believed and that there are indeed scenarios where replication is the preferred approach. taehyun kim mostafa h.
ammar load-tolerant differentiation with active queue management current work in the ietf aims at providing service differentiation on the internet. one proposal is to provide loss differentiation by assigning levels of drop precedence to ip packets. in this paper, we evaluate the active queue management (aqm) mechanisms red in and out (rio) and weighted red (wred) in providing levels of drop precedence under different loads. for low drop precedence traffic, rio and wred can be configured to offer sheltering (i.e., low drop precedence traffic is protected from losses caused by higher drop precedence traffic). however, if traffic control fails or is inaccurate, such configurations can cause starvation of traffic at high drop precedence levels. configuring wred to instead offer relative differentiation can eliminate the risk of starvation. however, wred cannot, without reconfiguration, both offer sheltering when low drop precedence traffic is properly controlled and avoid starvation at overload of low drop precedence traffic. to achieve this, we propose a new aqm mechanism, wred with thresholds (wrt). the benefit of wrt is that, without reconfiguration, it offers sheltering when low drop precedence traffic is properly controlled and relative differentiation otherwise. we present simulations showing that wrt has these properties. u. bodin o. schelen s. pink any work-conserving policy stabilizes the ring with spatial re-use leandros tassiulas leonidas georgiadis a technique for reducing synchronization overhead in large scale multiprocessors zhiyuan li walid abu-sufah host groups: a multicast extension for datagram internetworks the extensive use of local networks is beginning to drive requirements for internetwork facilities that connect these local networks. in particular, the availability of multicast addressing in many local networks and its use by sophisticated distributed applications motivates providing multicast across internetworks. in this paper, we propose a model of service for multicast in an internetwork, describe how this service can be used, and describe aspects of its implementation, including how it would fit into one existing internetwork architecture, namely the us dod internet architecture. david r. cheriton stephen e. deering code generation of nested loops for dsp processors with heterogeneous registers and structural pipelining we propose a microcode-optimizing method targeting a programmable dsp processor. efficient generation of microcodes is essential to better utilize the computation power of a dsp processor. since most state-of-the-art dsp processors feature some sort of irregular architecture and most dsp applications have nested loop constructs, their code generation is a nontrivial task. in this paper, we consider two features frequently found in contemporary dsp processors --- structural pipelining and heterogeneous registers. we propose a code generator that performs instruction scheduling and register allocation simultaneously. the proposed approach has been implemented and evaluated using a set of benchmark core algorithms. simulation of the generated codes targeted towards the ti tms320c40 dsp processor shows that our system is indeed more effective compared with a commercial optimizing dsp compiler.
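as a rough illustration of the multi-level red mechanisms (rio/wred/wrt) described above, here is a sketch of a wred-style drop decision with one (min_th, max_th, max_p) profile per drop precedence level; the threshold values are invented, and the sketch ignores red's average-queue smoothing for brevity. giving the lowest precedence the highest thresholds is what produces the sheltering behaviour discussed in the abstract.

import random

# per-precedence profiles: (min_th, max_th, max_p); lower precedence gets
# higher thresholds, so it is sheltered while higher precedence drops first
PROFILES = {0: (30, 60, 0.02), 1: (20, 40, 0.10), 2: (10, 30, 0.20)}

def wred_drop(queue_len, precedence):
    # return True if an arriving packet of this drop precedence should be dropped
    min_th, max_th, max_p = PROFILES[precedence]
    if queue_len < min_th:
        return False
    if queue_len >= max_th:
        return True
    p = max_p * (queue_len - min_th) / (max_th - min_th)   # linear ramp, as in red
    return random.random() < p

print([wred_drop(35, prec) for prec in (0, 1, 2)])   # e.g. [False, False, True]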
wei-kai cheng youn-long lin performance study of access control in wireless lans - ieee 802.11 dfwmac and etsi res 10 hiperlan currently two projects are under way to standardize the physical layer and medium access control for wireless lans---ieee 802.11 and etsi res 10 hiperlan. this paper presents an introduction to both projects focussing on the applied access schemes. further, we present our simulation results, analyzing the performance of both access protocols depending on the number of stations and on the packet size, evaluating them regarding their capability to support qos parameters, regarding the impact of hidden terminals and their range extension strategy. jost weinmiller morten schläger andreas festag adam wolisz the desk area network mark hayter derek mcauley the internet in evolution, and tcp over atm (panel session) teun ott netnews: growing pains dennis fowler fast address lookups using controlled prefix expansion internet (ip) address lookup is a major bottleneck in high-performance routers. ip address lookup is challenging because it requires a longest matching prefix lookup. it is compounded by increasing routing table sizes, increased traffic, higher-speed links, and the migration to 128-bit ipv6 addresses. we describe how ip lookups and updates can be made faster using a set of transformation techniques. our main technique, controlled prefix expansion, transforms a set of prefixes into an equivalent set with fewer prefix lengths. in addition, we use optimization techniques based on dynamic programming, and local transformations of data structures to improve cache behavior. when applied to trie search, our techniques provide a range of algorithms (expanded tries) whose performance can be tuned. for example, using a processor with 1mb of l2 cache, search of the maeeast database containing 38000 prefixes can be done in 3 l2 cache accesses. on a 300mhz pentium ii which takes 4 cycles for accessing the first word of the l2 cacheline, this algorithm has a worst-case search time of 180 nsec., a worst-case insert/delete time of 2.5 msec., and an average insert/delete time of 4 usec. expanded tries provide faster search and faster insert/delete times than earlier lookup algorithms. when applied to binary search on levels, our techniques improve worst-case search times by nearly a factor of 2 (using twice as much storage) for the maeeast database. our approach to algorithm design is based on measurements using the vtune tool on a pentium to obtain dynamic clock cycle counts. our techniques also apply to similar address lookup problems in other network protocols. v. srinivasan g. varghese mimd machine communication using the augmented data manipulator network there have been many multistage interconnection networks proposed in the literature for interconnecting the processors that comprise large parallel processing systems. in this paper, the use of a multistage network in the mimd mode of operation is considered. a tag based routing scheme is proposed for the augmented data manipulator network. also included is a rerouting scheme that allows a message blocked by a busy link in its present path to dynamically make use of a non-busy link and continue, when possible. finally, a tag based broadcasting scheme is introduced that allows one processor to send messages to a power of two other processors. robert j.
mcmillen howard jay siegel hyper-erlang distribution model and its application in wireless mobile networks this paper presents the study of the hyper-erlang distribution model and its applications in wireless networks and mobile computing systems. we demonstrate that the hyper-erlang model provides a very general model for users' mobility and may provide a viable approximation to fat-tailed distributions which lead to self-similar traffic. the significant difference from the traditional approach in the self-similarity study is that we want to provide an approximation model which preserves the markovian property of the resulting queueing systems. we also illustrate that the hyper-erlang distribution is a natural model for the characterization of systems with mixed types of traffic. as an application, we apply the hyper-erlang distribution to model the cell residence time (for users' mobility) and demonstrate the effect on channel holding time. this research may open a new avenue for traffic modeling and performance evaluation for future wireless networks and mobile computing systems, over which multiple types of services (voice, data or multimedia) will be supported. client-server computing in mobile environments recent advances in wireless data networking and portable information appliances have engendered a new paradigm of computing, called mobile computing, in which users carrying portable devices have access to data and information services regardless of their physical location or movement behavior. in the meantime, research addressing information access in mobile environments has proliferated. in this survey, we provide a concrete framework and categorization of the various ways of supporting mobile client-server computing for information access. we examine characteristics of mobility that distinguish mobile client-server computing from its traditional counterpart. we provide a comprehensive analysis of new paradigms and enabler concepts for mobile client-server computing, including mobile-aware adaptation, extended client-server model, and mobile data access. a comparative and detailed review of major research prototypes for mobile information access is also presented. jin jing abdelsalam sumi helal ahmed elmagarmid performance evaluation of packet data services over cellular voice networks in this paper we develop a markov chain modeling framework for throughput/delay analysis of data services over cellular voice networks, using the dynamic channel stealing method. effective approximation techniques are also proposed and verified for simplification of modeling analysis. our study identifies the average voice call holding time as the dominant factor affecting data delay performance. especially in heavy load conditions, namely when the number of free voice channels becomes momentarily small, the data users will experience large network access delay in the range of several minutes or longer on average. the study also reveals that the data delay performance deteriorates as the number of voice channels increases at a fixed voice call blocking probability, due to increased voice trunking efficiency. we also examine the data performance improvement by using the priority data access scheme and speech silence detection technique. young yong kim san-qi li a model for the design of distributed databases (abstract only) this research is addressed towards solving a problem which falls under the general class of file allocation problems (fap).
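the hyper-erlang distribution discussed above is a finite mixture of erlang distributions: pick branch i with probability p_i, then sum k_i exponential stages of rate λ_i. a minimal sampling sketch follows, with made-up mixture parameters, e.g. for generating cell residence times in a simulation.

import random

def sample_hyper_erlang(branches):
    # branches: list of (prob, stages, rate); returns one sample
    r, acc = random.random(), 0.0
    for prob, stages, rate in branches:
        acc += prob
        if r <= acc:
            # an erlang(stages, rate) variate is the sum of `stages` exponentials
            return sum(random.expovariate(rate) for _ in range(stages))
    raise ValueError("branch probabilities must sum to 1")

# illustrative mixture: mostly short residence times, occasionally much longer ones
mix = [(0.8, 2, 1.0), (0.2, 3, 0.1)]
samples = [sample_hyper_erlang(mix) for _ in range(10000)]
print(sum(samples) / len(samples))   # mean ≈ 0.8*(2/1.0) + 0.2*(3/0.1) = 7.6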
the main purpose here is to provide a tool for the designer of a distributed database system. with an increasing number of enterprises tending towards distributed systems, a major research question that has come up is how to allocate resources (databases and programs) in a distributed environment. this question has been addressed by formulating a nonlinear integer programming model. though the same issue has been addressed several times in the past, there is a major difference between the model developed here and all the other models. none of the models in the past have tried to solve the fap by taking into account the concurrency control mechanism being used. this is a major deficiency of these models because allocation of files is partly based on the communication flows in a network of computers. the communication flows in turn are based on the concurrency control mechanism. it is this weakness that the research here has attempted to overcome by designing a more realistic model. sudha ram geneva g. belford architecture of a massively parallel processor kenneth e. batcher optimal allocation of programs and data in distributed systems (abstract only) the problem of allocating programs and data to minimize total cost (communication cost + local processing cost + storage cost) subject to storage, availability, response time and policy/security constraints is formulated and solved. the problem formulation takes into account the complex interactions between data, programs and transactions, and includes the effect of update synchronization algorithms, query processing options, network topologies and queuing delays due to device-busy and locking conflicts. a wide variety of operations research techniques are integrated into a single optimization algorithm which yields, in polynomial time, exact optimal solutions for simplified cases and near optimal solutions for more complex situations. the solution approach lends itself very easily to application of "distributed" optimization, where portions of the algorithm can be executed in parallel by several computers, thus significantly reducing the total optimization time. amjad umar comparing primary-backup and state machines for crash failures jeremy b. sussman keith marzullo sparse communication networks and efficient routing in the plane (extended abstract) traditional approaches to network design separate the issues of designing the network itself and designing its management and control subsystems. this paper proposes an approach termed routing-oriented network design, which is based on designing the network topology and its routing scheme together, attempting to optimize some of the relevant parameters of both simultaneously. this approach is explored by considering the design of communication networks supporting efficient routing in the special case of points located in the euclidean plane. the desirable network parameters considered include low degree and small number of communication links. the desirable routing parameters considered include small routing tables, small number of hops and low routing stretch. two rather different schemes are presented, one based on direct navigation in the plane and the other based on efficient hierarchical tree covers. on a collection of n sites with diameter d, these methods yield networks with maximum degree o(log d) (hence a total of o(n log d) communication links), coupled with routing schemes with constant routing stretch, o(log n log d) memory bits per vertex and routes with at most log n or log d hops. 
yehuda hassin david peleg systems aids in determining local area network performance characteristics at bethesda, maryland, the national library of medicine has a large array of heterogenous data processing equipment dispersed over ten floors in the lister hill center and four floors in the library building. the national library of medicine decided to implement a more flexible, expansible access medium (local area network (lan) to handle the rapid growth in the number of local and remote users and the changing requirements. this is a dual coaxial cable communications system designed using cable television (catv) technology. one cable, the outbound cable, transfers information between the headend and the user locations. the other cable, the inbound cable, transfers information from the user locations to the headend. this system will permit the distribution of visual and digital information on a single medium. on-line devices, computers, and a technical control system network control center are attached to the lan through bus interface units (bius). the technical control system will collect statistical and status information concerning the traffic, bius, and system components. the bius will, at fixed intervals, transmit status information to the technical control. the network control centers (ncc) will provide network directory information for users of the system, descriptions of the services available, etc. a x.25 gateway biu will interface the lan to the public networks (telenet and tymnet) and to x.25 host computer systems. barbara r. sternick a case study on modeling shared memory access effects during performance analysis of hw/sw systems marcello lajolo anand raghunathan sujit dey luciano lavagno alberto sangiovanni-vincentelli corporative networks as base of institution computerization there is proposed approach to create corporative network as basis of institution computerization, and is described the main phases its step-by-step implementation. approach particularly application on institute base has confirmed its efficiency. anatoly sachenko sergiy vozniak sergiy rippa analysis of scalability of parallel algorithms and architectures: a survey vipin kumar anshul gupta metanet: principles of an arbitrary topology lan yoram ofek moti yung design of an atm-fddi gateway sanjay kapoor gurudatta m. parulkar software applications running in local area network (lan) issues nedialko nikolov elena racheva denica petkova dealing with server corruption in weakly consistent replicated data systems mike j. spreitzer marvin m. theimer karin petersen alan j. demers douglas b. terry vertical information technology in the galua's baseÂ-new direction indevelopment of computer machines (abstract only)in the indicated article the stated theoretical basis and structural solutionsof the galua's base for construction of elements of peripheral knotsoperational and microprocessor computer machines. the offered methodology ofrealization of vertical information technology in the galua's base in problemsof programming with a deep level of paralle's data processing inmultiprocessor systems with the associative memory.yaroslav nykolajchuk roman korol macaw: a media access protocol for wireless lan's in recent years, a wide variety of mobile computing devices has emerged, including portables, palmtops, and personal digital assistants. providing adequate network connectivity for these devices will require a new generation of wireless lan technology. 
in this paper we study media access protocols for a single channel wireless lan being developed at xerox corporation's palo alto research center. we start with the maca media access protocol first proposed by karn [9] and later refined by biba [3] which uses an rts-cts-data packet exchange and binary exponential back-off. using packet-level simulations, we examine various performance and design issues in such protocols. our analysis leads to a new protocol, macaw, which uses an rts-cts-ds-data-ack message exchange and includes a significantly different backoff algorithm. vaduvur bharghavan alan demers scott shenker lixia zhang low-loss tcp/ip header compression for wireless networks mikael degermark mathias engan björn nordgren stephen pink a case for now (networks of workstation) david a. patterson david e. culler thomas e. anderson a framework for robust measurement-based admission control matthias grossglauser david n. c. tse adaptive interaction for enabling pervasive services we describe an architecture that allows mobile users to access a variety of services provided by pervasive computing environments. novel to our approach is that the system selects and executes services taking into account arbitrary contextual information (e.g. location or preferences). our architecture is based on an adaptive service interaction scheme; individual service requests are attributed by context constraints, which specify the adaption policy. context constraints may relate to spatial or temporal concerns. we propose a document- based approach; so called context-aware packets (caps) contain context constraints and data for describing an entire service request. intermediary network nodes route and apply the data to services, which match the embedded constraints. services are identified by characterizing attributes, rather then explicit network addresses. the execution of selected services may be deferred, and triggered by a specific event. service requests carry the context of their use, therefore our system works well in environments with intermitted connectivity, less interaction with the issuing requester is required. michael samulowitz florian michahelles claudia linnhoff- popien a simplified approach to the performance evaluation of fdma-cdma systems this work shows it is possible to apply for the performance evaluation of fdma-cdma cellular mobile systems a simple analytical approximated method, previously successfully proposed by two of the authors with reference to fdma- tdma systems. the distinctive feature of the methodology we describe is that it allows for an immediate determination of both the indexes traditionally employed to define system performance, i.e., average bit error probability \overline p_e and outage probability p_{\mathrm{out} at a very low computational cost. the hypothesis required to apply the proposed approximation is that the examined spread spectrum system be characterized by a bandwidth occupancy lower than the coherence bandwidth of the transmission channel. this could be the case of a wireless ds-cdma system envisioned to provide voice service and exploiting a processing gain of the order of a hundred. we apply our methodology to determine the performance improvements in both \overline p_e and p_{\rm out introduced increasing the protection of the transmitted information through error correcting codes and interleaving, in different operating conditions as regards the functioning of the power control loops. 
a comparison is also satisfyingly carried out with some other approximated analytical methods found in literature. we strongly point out that the corresponding results are achieved at a much more modest computational cost than in traditional approaches. gianni immovilli maria luisa merani mohammed kussai shahin distribution over mobile environments frederic le mouÃ"l françoise andre dynamic time windows: packet admission control with feedback we present a feedback congestion control method, dynamic time windows, for use in high speed wide area networks based on controlling source variance. it is part of the two-level integrated congestion control system introduced in our earlier work[1]. the method consists of a packet admission control system and a feedback system to dynamically control source burstiness. source throughput is not modulated as with traditional packet windows, allowing system throughput to remain high while avoiding congestion. furthermore, the admission control bounds congestion times in the network, allowing feedback to be effective in the face of large bandwidth delay products. the basic control mechanisms are analogs to traditional packet windows applied to controlling time windows - a new mechanism which allows switches to modulate source variances. the proposed system is simulated, and the results reported and analyzed. enhancements to the basic system are also proposed and analyzed. we wish to stress that the system described here is the second level of a two- level congestion control. previous work[1] concentrated on the switch queueing mechanism, pulse, while this work is a detailed examination of the feedback system used to adjust time windows to changing network load. theodore faber lawrence h. landweber amarnath mukherjee fairisle: an atm network for the local area ian leslie derek mcauley using the nren testbed to prototype a high-performance multicast application marjory j. johnson matthew chew spence lawrence chao improving the accuracy of static branch prediction using branch correlation recent work in history-based branch prediction uses novel hardware structures to capture branch correlation and increase branch prediction accuracy. we present a profile-based code transformation that exploits branch correlation to improve the accuracy of static branch prediction schemes. our general method encodes branch history information in the program counter through the duplication and placement of program basic blocks. for correlation histories of eight branches, our experimental results achieve up to a 14.7% improvement in prediction accuracy over conventional profile-based prediction without any increase in the dynamic instruction count of our benchmark applications. in the majority of these applications, code duplication increases code size by less than 30%. for the few applications with code segments that exhibit exponential branching paths and no branch correlation, simple compile-time heuristics can eliminate these branches as code- transformation candidates. cliff young michael d. smith xunet 2: lessons from an early wide-area atm testbed charles r. kalmanek srinivasan keshav william t. marshall samuel p. morgan robert c. restrick qos control in wireless atm youssef iraqi raouf boutaba alberto leon-garcia warp architecture and implementation marco annaratone emmanuel arnould thomas gross h. t. kung monica s. lam onat menzilcioglu ken sarocky jon a. webb implementing network protocols at user level chandramohan a. thekkath thu d. nguyen evelyn moy edward d. 
lazowska a real time transport scheme for wireless multimedia communications in wireless communications systems, a mobile station is typically equipped with limited processing capability and buffer space for transmitting and receiving. the radio link is usually found to be noisy and its propagation delay is sometimes non-negligible as compared with the packet transmission delay. and because of the necessity of flow control and packet retransmission upon error, the delay and throughput performance cannot satisfy the need of a particular traffic type, i.e., real-time multimedia. this paper presents a scheme suitable for the above condition, called the _burst-oriented transfer with time-bounded retransmission (bttr)._ the present scheme uses a large transmission window for sending/receiving a burst of time-sensitive data and, within this window, another smaller observation window is repeatedly used for error status feedback via the backward channel. there is time limitation on each retransmission such that the burst of data can be received in a timely manner, however, with some degradation on the packet loss rate. an analysis is given in terms of the expectations of delay, throughput, and packet drop rate. a comparison with an error-free link protocol will also be given. the result shows that the proposed scheme can meet the delay and throughput requirement under reasonable packet drop rate. jon chiung-shien wu a service framework for carrier grade multimedia services using parplay apis over a sip system the implementation of new mobile communication technologiesdeveloped in the third generation partnership project (3gpp) willallow to access the internet not only from a pc but also via mobilephones, palmtops and other devices. new applications will emerge,combining several basic services like voice telephony, e-mail,voice over ip, mobility or web- browsing, and thus wiping out theborders between the fixed telephone network, mobile radio and theinternet. offering those value-added services will be the keyfactor for success of network and service providers in anincreasingly competitive market. in 3gpp's service framework theuse of the parlay apis is proposed that allow applicationdevelopment by third parties in order to speed up service creationand deployment. 3gpp has also adopted sip for session control ofmultimedia communications in an ip network. this paper proposes amapping of sip functionality to parlay services and describes aprototype implementation using the sip servlet api. furthermore anarchitecture of a service platform is presented that offers aframework for the creation, execution and management of carriergrade multimedia services in heterogeneous networks. rudolf pailer johannes stadler author index author index security issues with tcp/ip an introduction to network security , basic definitions and aa brief discussion of the architecture of tcp/ip as well as the open system intercornnection(osi) reference model open the paper. the relationship between tcp/ip and of some osi layers is described. an indepth look is provided to the major protocols in tcp/ip suite and the security features and problems in this suite of protocols. the secutiy problems are discussed in the context ofthe protocol services. renqi li e. a. unger cpu sim 3.1: a tool for simulating computer architectures for computer organization classes cpu sim 3.1 is an educational software package written in java foruse in cs3 courses. 
cpu sim provides students an active learningenvironment in which they can design, modify, and compare variouscomputer architectures at the register-transfer level and higher.they can run assembly language or machine language programs forthose architectures through simulation. cpu sim is a completedevelopment environment, including dialog boxes for designing thecpu architecture, a text editor for editing assembly languageprograms, an assembler, several display windows for viewing theregisters and rams during the execution of programs, and manydebugging features such as the ability to step forward or backwardduring execution, inspecting and optionally changing the values inthe registers and rams after each step. these features andsuggested uses of cpu sim in cs3 classes are discussed. dale skrien benchmarking a vector-processor prototype based on multithreaded streaming/fifo vector (msfv) architecture this paper presents the benchmark results on a vector-processor prototype based on the msfv (multithreaded streaming/fifo vector) architecture. the msfv architecture is single-chip oriented, and thus its main object is to save the off-chip memory bandwidth by exploiting the register bandwidth instead. the register bandwidth is exploited by the synergism of fifo register, chaining, streaming, and multithreading. this paper tries to identify the strength and weakness of those architectural features. the results for basic vector operations and livermore fortran kernels are reported in terms of normalized flopc (floating-point operations per clock cycle) and compared to previously- reported results on the cray x-mp, y-mp, fujitsu vp-200, hitachi s-810/20, nec sx-2, and sx-3. these comparisons show that, for many basic vector operations, the execution rate of the msfv prototype results in worst due to its saving thememory bandwidth. however, for livermore fortran kernels, the msfv prototype results in worst due to its saving the memory bandwidth. however, for livermore fortran kernels, the msfv prototype outperforms the vp-200 by 2.11 times (geometric mean) and one processor of the x-mp by 1.22 times (geometric mean) in terms of flopc. also, it is 0.67 times (geometric mean) faster than the s-810/20, and 0.76 times (geometric mean) faster than the sx-2. the paper concludes that the msfv architecture is successful in saving the memory bandwidth. tetsuo hironaka takashi hashimoto keizo okazaki kazuaki murakami shinji tomita memory consistency and process coordination for sparc v8 multiprocessors (brief announcement) weakening the memory consistency model of a multiprocess system improves its performance and scalability. however, these models sacrifice programmability because they create complex behaviors of shared memory. without the use of expensive, built-in synchronization, these models exhibit poor capabilities to support solutions for fundamental process coordination problems [2]. this leads programmers to aggressively use these forms of synchronization, incurring additional performance burdens on the system. a multiprocessor system constructed from sparc v8 [6] processors is one example of a system with weak memory consistency. since use of synchronization primitives deteriorates performance, we are motivated to study the limitations and capabilities of weak memory consistency models without the use of these primitives. 
specifically, we want to know which weak memory consistency models have the ability to support solutions to fundamental process coordination problems without the use of explicit synchronization. if the use of explicit synchronization is avoidable, then efficient libraries for certain classes of applications can be built. this would ease the job of distributed application programmers and make their applications more efficient. in the full paper [3], we study the capabilities of the two sparc memory consistency models total store ordering (tso) and partial store ordering (pso) to support solutions to fundamental process coordination problems without resorting to expensive synchronization primitives. to do so, a mathematical description of the behavior of these systems in terms of partial order constraints on possible computations is first derived. our definitions for tso and pso are proven to capture the machine description of the manufacturer. also, they are expressed as a natural variant of sequential consistency [5], which facilitates the comparison of the sparc models to several proposed consistency models including processor consistency, causal consistency, and java consistency. the process coordination problems studied in this paper are critical section coordination and producer/consumer coordination. these are fundamental patterns because they exist in a wide range of parallel and distributed applications. we distinguish two variants of producer/consumer coordination whose solution requirements differ: the set and queue variants. our results show that both tso and pso models are incapable of supporting a read/write solution to the critical section problem, but both can support such solutions to some variants of the producer/consumer problem. one earlier attempt to define tso [4] resulted in a definition that is much stronger than what the sparc provides and lead to erroneous conclusions about the coordination capabilities of tso. in fact, we show that any program (with only read/write operations) that is correct for sequential consistency can be compiled into an equivalent program (with only read/write operations) that is correct for this erroneous definition of tso [3]. to the contrary, this paper proves that read/write operations are insufficient to solve certain coordination problems for tso and pso. another previous definition of the sparc models [1] was too complicated to be useful to programmers, whereas the definitions here are simple to understand and use. the original axiomatic specifications of tso and pso [6] are also complex and are not particularly useful for studying the questions addressed in this paper. jalal kawash lisa higham hardware support for fast capability-based addressing traditional methods of providing protection in memory systems do so at the cost of increased context switch time and/or increased storage to record access permissions for processes. with the advent of computers that supported cycle- by-cycle multithreading, protection schemes that increase the time to perform a context switch are unacceptable, but protecting unrelated processes from each other is still necessary if such machines are to be used in non- trusting environments. this paper examines guarded pointers, a hardware technique which uses tagged 64-bit pointer objects to implement capability-based addressing. 
guarded pointers encode a segment descriptor into the upper bits of every pointer, eliminating the indirection and related performance penalties associated with traditional implementations of capabilities. all processes share a single 54-bit virtual address space, and access is limited to the data that can be referenced through the pointers that a process has been issued. only one level of address translation is required to perform a memory reference. sharing data between processes is efficient, and protection states are defined to allow fast protected subsystem calls and create unforgeable data keys. nicholas p. carter stephen w. keckler william j. dally world-wide web and computer science reports with the advent of the world-wide web, computing professionals have eagerly pursued the idea of moving from a paper-based technical report service to one that employs networked information systems. many departments keep some version of their reports on an ftp server to help with this process. to facilitate access to cs reports a number of sites have set up lists of these archives (see sidebar). edward a. fox a survey of energy efficient network protocols for wireless networks wireless networking has witnessed an explosion of interest from consumers in recent years for its applications in mobile and personal communications. as wireless networks become an integral component of the modern communication infrastructure, energy efficiency will be an important design consideration due to the limited battery life of mobile terminals. power conservation techniques are commonly used in the hardware design of such systems. since the network interface is a significant consumer of power, considerable research has been devoted to low-power design of the entire network protocol stack of wireless networks in an effort to enhance energy efficiency. this paper presents a comprehensive summary of recent work addressing energy efficient and low-power design within all layers of the wireless network protocol stack. christine e. jones krishna m. sivalingam prathima agrawal jyh cheng chen client/server benefits, problems, best practices peter duchessi indushobha chengalur-smith editorial s. ramanathan martha steenstrup the hyperdynamic architecture (massively parallel message-passing machine) - architecture and performance hojung cha peter jones modeling rpc performance distributed computing applications are collections of processes allocated across a network that cooperate to accomplish common goals. the applications require the support of a distributed computing runtime environment that provides services to help manage process concurrency and interprocess communication. this support helps to hide much of the inherent complexity of distributed environments via industry standard interfaces and permits developers to create more portable applications. the resource requirements of the runtime services can be significant and may impact application performance and system throughput. this paper describes work done to study the potential benefits of redesigning some aspects of the dce rpc and its current implementation on a specific platform. j. a. rolia m. starkey g. boersma approximate mean value analysis algorithms for queuing networks: existence, uniqueness, and convergence results this paper is concerned with the properties of nonlinear equations associated with the scheweitzer-bard (s-b) approximate mean value analysis (mva) heuristic for closed product-form queuing networks. 
three forms of nonlinear s-b approximate mva equations in multiclass networks are distinguished: schweitzer, minimal, and the nearly decoupled forms. the approximate mva equations have enabled us to: (a) derive bounds on the approximate throughput; (b) prove the existence and uniqueness of the s-b throughput solution, and the convergence of the s-b approximation algorithm for a wide class of monotonic, single-class networks; (c) establish the existence of the s-b solution for multiclass, monotonic networks; and (d) prove the asymptotic (i.e., as the number of customers of each class tends to ∞) uniqueness of the s-b throughput solution, and (e) the convergence of the gradient projection and the primal- dual algorithms to solve the asymptotic versions of the minimal, the schweitzer, and the nearly decoupled forms of mva equations for multiclass networks with single server and infinite server nodes. the convergence is established by showing that the approximate mva equations are the gradient vector of a convex function, and by using results from convex programming and the convex duality theory. k. r. pattipati m. m. kostreva j. l. teele errata errata on the cost of fairness in ring networks ilan kessler arvind krishna speculative execution via address prediction and data prefetching jose gonzalez antonio gonzalez decentralised approaches for network management centralised network management has shown inadequacy for efficient management of large heterogenous networks. as a result, several distributed approaches have been adapted to overcome the problem. this paper is a review of decentralised network management techniques and technologies. we explain distributed architectures for network management, and discuss some of the most important implemented distributed network management systems. a comparison is made between these approaches to show the pitfalls and merits of each. mohsen kahani h. w. peter beadle on computing per-session performance bounds in high-speed multi-hop computer networks we present a technique for computing upper bounds on the distribution of individual per-session performance measures such as delay and buffer occupancy for networks in which sessions may be routed over several "hops." our approach is based on first stochastically bounding the distribution of the number of packets (or cells) which can be generated by each traffic source over various lengths of time and then "pushing" these bounds (which are then shown to hold over new time interval lengths at various network queues) through the network on a per-session basis. session performance bounds can then be computed once the stochastic bounds on the arrival process have been characterized for each session at all network nodes. a numerical example is presented and the resulting distributional bounds compared with simulation as well as with a point-valued worst-case performance bound. jim kurose gprs and umts release 2000 a11-ip option jonne soininen high performance visualization of time-varying volume data over a wide-area network status this paper presents an end-to-end, low-cost solution for visualizing time- varying volume data rendered on a parallel computer located at a remote site. pipelining and careful grouping of processors are used to hide i/o time and to maximize processor utilization. compression is used to significantly cut down the cost of transferring output images from the parallel computer to a display device through a wide-area network. 
this complete rendering pipeline makes possible highly efficient rendering andremote viewing of high-resolution time- varying data sets in the absence of high-speed network and parallel i/o support. to study the performance of this rendering pipeline and to demonstrate high-performance remote visualization, tests were conducted on a pc cluster in japan as well as an sgi origin 2000 operated at the nasa ames research center with the display located at uc davis. kwan-liu ma david m. camp modeling a transport layer protocol using first-order logic we use a hybrid model based on the first-order logic to specify and verify a transport layer protocol. in this model we specify a protocol as a set of state machines. time expressions are used to describe the temporal relations of transitions. given the specification of a protocol, we verify its properties by logical deduction. reasoning techniques such as decomposition and abstraction are used to reduce the verification complexity. the transport protocol consists of an active process, a passive process, and two communication channels. each of these components is specified by this model. an outline of verification of this protocol is given. h p lin a simplified lan protocol for practicing file transfer, resource locating, and elementary distributed problem solving local area networks exist for transferring files and messages and sharing resources, but do not address the areas of resource locating and elementary distributed problem solving. in order to better understand both the basics of blocked asynchronous protocols and distributed problem solving, an experimental protocol has been created which allows a collection of microcomputers not only to transfer files in an error free environment but to also conduct resource locating and elementary distributed problem solving. the students may then apply hands on experience to the understanding of current blocked asynchronous protocols and eventually leading into the newly created ieee 802 standards. curt m. white a scalable, robust network for parallel computing cx, a network-based computational exchange, is presented. the system's design integrates variations of ideas from other researchers, such as work stealing, non- blocking tasks, eager scheduling, and space-based coordination. the object- oriented api is simple, compact, and cleanly separates application logic from the logic that supports interprocess communication and fault tolerance. computations, of course, run to completion in the presence of computational hosts that join and leave the ongoing computation. such hosts, or producers, use task caching and prefetching to overlap computation with interprocessor communication. to break a potential task server bottleneck, a network of task servers is presented. even though task servers are envisioned as reliable, the self-organizing, scalable network of _n_ servers, described as a _sibling-connected fat tree_, tolerates a sequence of _n_ \\--- 1 server failures. tasks are distributed throughout the server network via a simple "diffusion" process. cx is intended as a test bed for research on automated silent auctions, reputation services, authentication services, and bonding services. cx also provides a test bed for algorithm research into network-based parallel computation. peter cappello dimitros mourloukos performance of hypercube routing schemes with or without buffering emmanouel a. varvarigos dimitri p. bertsekas the m-machine multicomputer marco fillo stephen w. keckler william j. 
dally nicholas p. carter andrew chang yevgeny gurevich whay s. lee a system architecture for the concurrent evaluation of applicative program expressions the paper outlines the principles for the concurrent evaluation of applicative programs based on berklings reduction language. the recursive style of program design supported by this language lends itself to a recursive partitioning scheme which, for suitable program expressions, generates dynamically a hierarchy of processes for the concurrent evaluation of subexpressions. this hierarchy can elegantly be mapped onto a system of cooperating reduction machines featuring a stack architecture. a special ticket mechanism enforces an upper limit on the number of processes that, at any time, may exist within the system, which does not significantly exceed the number of the available machines. claudia schmittgen werner kluge multi-access mesh (multimesh) networks terence d. todd ellen l. hahne object-oriented cosynthesis of distributed embedded systems this article describes a new hardware-software cosynthesis algorithm that takes advantage of the structure inherent in an object-oriented specification. the algorithm creates a distributed system implementation with arbitrary topology, using the object-oriented structure to partition functionality in addition to scheduling and allocating processes. process partitioning is an especially important optimization for such systems because the specification will not, in general, take into account the process structure required for efficient execution on the distributed engine. the object-oriented specification naturally provides both coarse-grained and fine-grained partitions of the system. our algorithm uses that multilevel structure to guide synthesis. experimental results show that our algorithm takes advantage of the object- oriented specification to quickly converge on high-quality implementations. wayne wolf imageio: design of an extensible image input/output library parag chandra luis ibanez resource aggregation for fault tolerance in integrated services networks for several real-time applications it is critical that the failure of a network component does not lead to unexpected termination or long disruption of service. in this paper, we propose a scheme called raft (resource aggregation for fault tolerance) that guarantees recovery in a timely and resource- efficient manner. raft is presented in the framework of the reliable back-bone (rbone), a virtual network layered on top of an integrated services network. applications can request fault tolerance against rbone link and node failures. the basic idea of raft is to setup every fault tolerant flow along a secondary path that serves as a backup in case the primary path fails. the secondary path resource reservations are aggregated whenever possible to reduce the overhead of providing fault tolerance. we show that the rsvp resource reservation protocol can support raft with simple extensions. constantinos dovrolis parameswaran ramanathan lower than best effort: a design and implementation in recent years, the internet architecture has been augmented so that better- than-best-effort (bbf) services, in the form of reserved resources for specific flows, can be provided by the network. to date, this has been realized through two different and sequentially developed efforts. the first is known as integrated services and focuses on specific bounds on bandwidth and/or delay for specific flows. 
the differential service model was later introduced, which presented a more aggregated and local perspective regarding the forwarding of traffic. a direction that is missing in today's work on service models is a defined schema used to purposely degrade certain traffic to various levels below that of best effort. in a sense, a new direction that provides a balancing effect in the deployment of bbe service. this is particularly evident with continual and parallel short transaction flows (like that used for web applications) over low bandwidth links that are not subject to any backoff penalty incurred by congestion because state does not persist. in a more indirect perspective, our model correlates degraded service with the application of usage and security policies --- administrative decisions that can operate in tandem or disjointly from conditions of the network. this paper attempts to address these and other issues and presents the design and implementation of such a new degraded service model and queuing mechanism used to support it. ken carlberg panos gevros jon crowcroft acm/kluwer special issue on wireless internet and intranet access r. cáceres l. r. chang r. jain signal design and system operation of globalstar versus is-95 cdma - similarities and differences the globalstar^{tm} system provides telephone and data services to and from mobile and fixed users in the area between ± 70 degrees latitude. connection between user terminals and the pstn is established through fixed terrestrial gateways via a constellation of low earth orbiting leo satellites. globalstar uses an extension of the is - 95 cdma standard that is used in terrestrial digital cellular systems. the leo satellite link is sufficiently different from the terrestrial cellular link that certain departures from is - 95 were needed both in signal design as well as in system operation. this paper describes some of the similarities and differences of globalstar air interface versus is - 95. leonard schiff a. chockalingam multiservices mac-protocol for wireless atm elmar dorner k-stabilization of reactive tasks joffroy beauquier christophe genolini shay kutten effects of ad hoc layer medium access mechanisms under tcp mobile computing is the way of the future, as evident by such initiatives as bluetooth, iceberg and homerf. however, for mobile computing to be successful, an obvious layer, the mac layer, must be efficient in channel access and reservation . therefore, in-dpeth understanding is needed of the wireless mac layerif wireless computing is to takeoff. many random access wireless mac protocols have been proposed and even standardized. however, there has yet been an attempt to understand why certain designs are used and what makes certain protocols better than others. in this paper, we survey several popular, contemporary, wireless, random access mac protocols and determine the effects behind the design choices of these protocols. ken tang mario correa mario gerla usage parameter control and bandwidth allocation methods for atm-based b-isdn naoaki yamanaka youichi sato ken-ichi sato an example risc vector machine architecture martin dowd bayeux: an architecture for scalable and fault-tolerant wide-area data dissemination the demand for streaming multimedia applications is growing at an incr edible rate. in this paper, we propose bayeux, an efficient application-level multicast system that scales to arbitrarily large receiver groups while tolerating failures in routers and network links. 
bayeux also includes specific mechanisms for load-balancing across replicate root nodes and more efficient bandwidth consumption. our simulation results indicate that bayeux maintains these properties while keeping transmission overhead low. to achieve these properties, bayeux leverages the architecture of tapestry, a fault- tolerant, wide-area overlay routing and location network. shelley q. zhuang ben y. zhao anthony d. joseph randy h. katz john d. kubiatowicz broadcast scheduling for information distribution broadcast data delivery is encountered in many applications where there is a need to disseminate information to a large user community in a wireless asymmetric communication environment. in this paper, we consider the problem of scheduling the data broadcast such that average response time experienced by the users is low. in a push-based system, where the users cannot place requests directly to the server and the broadcast schedule should be determined based solely on the access probabilities, we formulate a deterministic dynamic optimization problem, the solution of which provides the optimal broadcast schedule. properties of the optimal solution are obtained and then we propose a suboptimal dynamic policy which achieves average response time close to the lower bound. the policy has low complexity, it is adaptive to changing access statistics, and is easily generalizable to multiple broadcast channels. in a pull-based system where the users may place requests about information items directly to the server, the scheduling can be based on the number of pending requests for each item. suboptimal policies with good performance are obtained in this case as well. finally, it is demonstrated by a numerical study that as the request generation rate increases, the achievable performance of the pull- and push-based systems becomes almost identical. chi-jiun su leandros tassiulas vassilis j. tsotras qos routing in networks with uncertain parameters dean h. lorenz ariel orda efficient algorithms for scheduling data broadcast with the increasing acceptance of wireless technology, mechanisms to efficiently transmit information to wireless clients are of interest. the environment under consideration is asymmetric in that the information server has much more bandwidth available, as compared to the clients. it has been proposed that in such systems the server should broadcast the information periodically. a broadcast schedule determines what is broadcast by the server and when. this paper makes the simple, yet useful, observation that the problem of broadcast scheduling is related to the problem of fair queueing. based on this observation, we present a log-time algorithm for scheduling broadcast, derived from an existing fair queueing algorithm. this algorithm significantly improves the time-complexity over previously proposed broadcast scheduling algorithms. modification of this algorithm for transmissions that are subject to errors is considered. also, for environments where different users may be listening to different number of broadcast channels, we present an algorithm to coordinate broadcasts over different channels. simulation results are presented for proposed algorithms. sohail hameed nitin h. vaidya a model for recentralization of computing: (distributed processing comes home) harold lorin minimum-latency transport protocols with modulo-n incarnation numbers a. 
udaya shankar david lee a heuristic framework for source policing in atm networks catherine rosenbergbruno laguÃ" delayed internet routing convergence this paper examines the latency in internet path failure, failover, and repair due to the convergence properties of interdomain routing. unlike circuit- switched paths which exhibit failover on the order of milliseconds, our experimental measurements show that interdomain routers in the packet-switched internet may take tens of minutes to reach a consistent view of the network topology after a fault. these delays stem temporary routing table fluctuations formed during the operation of the border gateway protocol (bgp) path selection process on internet backbone routers. during these periods delayed convergence, we show that end-to-end internet paths will experience intermittent loss of connectivity, as well as increased packet loss and latency. we present a two-year study of internet routing convergence through the experimental instrumentation of key portions of the internet infrastructure, including both passive data collection and fault-injection machines at internet exchange points. based on data from the injection and measurement of several hundred thousand interdomain routing faults, we describe several unexpected properties of convergence and show that the measured upperbound on internet interdomain routing convergence delay is an order of magnitude slower than previously thought. our analysis also shows that the upper theoretic computational bound on the number of router states and control messages exchanged during the process of bgp convergence is factorial with respect to the number of autonomous systems in the internet. finally, we demonstrate that much of the observed convergence delay stems form specific router vendor implementation decisions and ambiguity in the bgp specification. craig labovitz abha ahuja abhijit bose farnam jahanian the end of architecture burton smith optimal multiphase complete exchange on circuit-switched hypercube architectures the complete-exchange communication primitive on a distributed memory multiprocessor calls for every processor to send a message to every other processor, each such message being unique. for circuit-switched hypercube networks there are two well-known schemes for implementing this primitive. direct exchange minimizes communication volume but maximizes startup costs, while standard exchange minimizes startup costs at the price of higher communication volume. this paper analyzes a hybrid, which can be thought of as a sequence of direct echange phases, applied to variable-sized subcubes. this paper examines the problem of determining the optimal subcube dimension sizes di for every phase. we show that optimal performance is achieved using some equi-partition, where |di dj|≤1 for all phases i and j. we study the behavior of the optimal partition as a function of machine communication parameters, hypercube dimension, and message size, and show that the optimal partition can be determined with no more than 2< ;rcd>d+1& lt;rp post="par"> comparisons. finally we validate the model empirically, and for certain problem instances observe as much as a factor of two improvement over the other methods. david m. nicol shahid h. bokhari mtr: a modified token ring protocol and its performance devising efficient and high performance communication protocols for computer networks is a challenging issue. this paper presents a new modified token ring protocol (mtr) for token ring lans. 
the idea behind proposing this protocol is to improve the performance of the standard (traditional) token ring protocol (str). the advantage of the mtr protocol over the str protocol is its capability of bypassing idle stations in the ring. this is achieved by utilizing two of the reserved bits in the frame status (fs) field. the utilized bits are called transmission reservation bits (tr-bits). the tr-bits operate as an implicit token while they are circulating around the ring with their associated packet. our simulation experiments show that the mtr protocol has higher ring throughput and lower packet delay than that of the str protocol. for high traffic conditions the mtr protocol has shortened the packet delay down to 45% as compared to the str protocol. throughput improvement provided by the mtr can reach about 6% as compared to the str protocol. the mtr protocol is characterized by its fairness, simplicity and cost-effectiveness. m. s. obaidat m. a. al-rousan critical path analysis of tcp transactions improving the performance of data transfers in the internet (such as web transfers) requires a detailed understanding of when and how delays are introduced. unfortunately, the complexity of data transfers like those using http is great enough that identifying the precise causes of delays is difficult. in this paper we describe a method for pinpointing where delays are introduced into applications like http by using critical path analysis. by constructing and profiling the critical path, it is possible to determine what fraction of total transfer latency is due to packet propagation, network variation (e.g., queuing at routers or route fluctuation). packet losses, and delays at the server and at the client. we have implemented our techique in a tool called tcpeval that automates critical path analysis for web transactions. we show that our analysis method is robust enough to analyze traces taken for two different tcp implementations (linux and freebsd). to demonstrate the utility of our approach, we present the results of critical path analysis for a set of web transactions taken over 14 days under a variety of server and network conditions. the results show that critical path analysis can shed considerable light on the causes of delays in web transfers, and can expose subtleties in the behavior of the entire end-to-end system. the numesh: a modular, scalable communications substrate steve ward karim abdalla rajeev dujari michael fetterman frank honore ricardo jenez philippe laffont ken mackenzie chris metcalf milan minsky john nguyen john pezaris gill pratt russell tessier distributed packet switching in arbitrary networks yuval rabani Éva tardos efficient on-line processor scheduling for a class of iris (increasing reward with increasing service) real- time tasks jayanta k. dey james f. kurose don towsley c. m. krishna mahesh girkar scaling video conferencing through spatial tiling we describe an approach to scaling video conferencing, with the use of active agents. such agents tilenvideo frames into one, by modification of their respective meta-data and adjustment of their video frame rate if necessary. the spatial tiling agents are located within a network, and participants in the session unicast video to the "closest" agent. the agent then multicast the tiled video to the group of all participants. 
results show that spatial tiling increases the ability of the end-user to receive large numbers of video streams and reduces network load both in terms of bandwidth and in packets per second. the result is a significant scaling boost to video conferencing systems. ladan gharai colin perkins allison mankin editorial ken korman leave-in-time: a new service discipline for real-time communications in a packet-switching network leave-in-time is a new rate-based service discipline for packet-switching nodes in a connection-oriented data network. leave-in- time provides sessions with upper bounds on end-to-end delay, delay jitter, buffer space requirements, and an upper bound on the probability distribution of end-to-end delays. a leave-in-time session's guarantees are completely determined by the dynamic traffic behavior of that session, without influence from other sessions. this results in the desirable property that these guarantees are expressed as functions derivable simply from a single fixed- rate server (with rate equal to the session's reserved rate) serving only that session. leave-in-time has a non-work-conserving mode of operation for sessions desiring low end-to-end delay jitter. finally, leave-in-time supports the notion of _delay shifting_, whereby the delay bounds of some sessions may be decreased at the expense of increasing those of other sessions. we present a set of admission control algorithms which support the ability to do delay shifting in a systematic way. norival r. figueira joseph pasquale the contribution to performance of instruction set usage in system/370 o. r. lamaire w. w. white on the equivalent bandwidth of self-similar sources this article presents a method for the computation of the equivalent bandwidth of an aggregate of heterogeneous self-similar sources, as well as the time scales of interest for queueing systems fed by a fractal brownian motion (fbm) process. moreover, the fractal leaky bucket, a novel policing mechanism capable of accurately monitoring self-similar sources, is introduced. nelson l. s. fonseca gilberto s. mayor cesar a. v. neto grpc: a communication cooperation mechanism in distributed systems xingwei wang hong zhao jiakeng zhu network locality at the scale of processes jeffrey c. mogul eliminating periodic packet losses in the 4.3-tahoe bsd tcp congestion control algorithm zheng wang jon crowcroft triad david cheriton run-time generation of hps microinstructions from a vax instruction stream the vax architecture is a popular isp architecture that has been implemented in several different technologies targeted to a wide range of performance specifications. however, it has been argued that the vax has specific characteristics which preclude a very high performance implementation. we have developed a microarchitecture (hps) which is specifically intended for implementing very high performance computing engines. our model of execution is a restriction on fine granularity data flow. in this paper, we concentrate on one particular aspect of an hps implementation of the vax architecture: the generation of hps microinstructions (i.e. data flow nodes) from a vax instruction stream. y. n. patt s. w. melvin w. m. hwu m. c. shebanow c. chen inventing the networked home: sun, 3 com, and other companies share their visions of the future at ces brent butterworth unifying self-stabilization and fault-tolerance ajei s. gopal kenneth j. 
perry an efficient mobility management strategy for personal communication systems yigal bejerano israel cidon minimizing access costs in replicated distributed systems michael goldweber donald b. johnson an analysis of correlation and predictability: what makes two-level branch predictors work pipeline flushes due to branch mispredictions is one of the most serious problems facing the designer of a deeply pipelined, superscalar processor. many branch predictors have been proposed to help alleviate this problem, including two-level adaptive branch predictors and hybrid branch predictors.numerous studies have shown which predictors and configurations best predict the branches in a given set of benchmarks. some studies have also investigated effects, such as pattern history table interference, that can be detrimental to the performance of these predictors. however, little research has been done on which characteristics of branch behavior make predictors perform well.in this paper, we investigate and quantify reasons why branches are predictable. we show that some of this predictability is not captured by the two-level adaptive branch predictors. an understanding of the predictability of branches may lead to insights ultimately resulting in better or less complex predictors. we also investigate and quantify what fraction of the branches in each benchmark is predictable using each of the methods described in this paper. marius evers sanjay j. patel robert s. chappell yale n. patt prioritized statistical multiplexing of pcm sources ming h. chan john p. princen why a ring? in a world increasingly populated with ethernets and ethernet-like nets a few sites continue to experiment with rings of active repeaters for local data communication. this paper explores some of the engineering problems involved in designing a ring that has no central control, and then compared the m.i.t.-designed ring with ethernet on a variety of operational and subtle technical grounds, on each of which the ring may possess important or interesting advantages. jerome h. saltzer david d. clark kenneth t. pogran an evaluation of audio-centric cmu wearable computers carnegie mellon's wearable computers project is defining the future for not only computing technologies but also for the use of computers in daily activities. fifteen generations of cmu's wearable computers are evolutionary steps in the quest for new ways to improve and augment the integration of information in the mobile environment. the complexity of their architectures has increased by a factor of over 200, and the complexity of the applications has also increased significantly. in this paper, we provide a taxonomy of their capabilities and evaluate the performance of audio-centric cmu wearable computers. asim smailagic optimal processor interconnection topologies this paper proposes the optimal processor interconnection topologies for parallel processing. the topologies are optimal with respect to the performance/cost ratio under the controlled message transfer delay and can be systematically constructed for an arbitrary number of processors. the addition and the deletion of processors are simple and done with the minimum number of bus reconnections. the message transfer delay, as well as the reliability, can be controlled by changing the degree of the topology. owing to these properties, the optimal interconnection topologies are suitable for many kinds of parallel processing systems and algorithms. 
mamoru maekawa bluesky: a cordless networking solution for palmtop computers pravin bhagwat ibrahim korpeoglu chatschik bisdikian mahmoud naghshineh satish k. tripathi hierarchical registers for scientific computers simulations of scientific programs running on traditional scientific computer architectures show that execution with hundreds of registers can be more than twice as fast as execution with only eight registers. in addition, execution with a small number of fast registers and hundreds of slower registers can be as fast as execution with hundreds of fast registers. a hierarchical organization of fast and slow registers is presented, register-allocation strategies are discussed, and a novel, indirect, register-addressing mechanism is described. j. a. swensen y. n. patt pc clusters applications track (track introduction only) don morton inside risks: observations on risks and risks richard i. cook proteus: a high-performance parallel-architecture simulator eric a. brewer chrysanthos n. dellarocas adrian colbrook william e. weihl tomp a total ordering multicast protocol mahmoud dasser dynamic scheduling on a pc cluster janez brest viljem zumer milan ojstersek aspects of system-level design jonas plantin erik stoy net/one: a commercial local area network net/one is a general purpose, local area communications network which interconnects data and word processing equipment within facilities such as office buildings, factories, laboratories, and computing centers. it is a system designed to maximize modularity, growth, and flexibility. from an initial installation of a few nodes, a network can be incrementally expanded to a communication system for thousands of digital devices. using a high-speed, bit serial bus architecture, net/one supports point-to- point communications for computer systems, terminals, word processors, printers, and other digital devices. it also provides packet switched broadcast communications for intelligent devices and host computers. in addition, net/one allows protocol translation between incompatible devices. no host software changes are required to utilize net/one's virtual circuit capabilities. devices can be connected to net/one in several different ways allowing easy reconfiguration and a free choice of equipment vendors. net/one includes a number of modular components. a network administrative station measures, monitors, and controls network activity. a user programmable network interface unit allows unique management of specific interface functions. a network development system supports custom program development and testing. charlie bass adaptive admission congestion control zygmunt haas mobicast: a multicast scheme for wireless networks in this paper, we propose a multicast scheme known as mobicast that is suitable for mobile hosts in an internetwork environment with small wireless cells. our scheme adopts a hierarchical mobility management approach to isolate the mobility of the mobile hosts from the main multicast delivery tree. each foreign domain has a domain foreign agent. we have simulated our scheme using the network simulator and the measurements show that our multicast scheme is effective in minimizing disruptions to a multicast session due to the handoffs of the mobile group member, as well as reducing packet loss when a mobile host crosses cell boundaries during a multicast session. reconfigurable transputer systems this paper introduces the t800 transputer from inmos, for use as a component in reconfigurable multiple- processor systems. 
the architecture of such systems will be considered together with the limitations; being static and of fixed valence. solutions to these limitations can be found in virtualising the networks, but this introduces inefficiencies and the potential for deadlock. the latter can be eliminated by careful design, but the former requires further chip development. finally the paper will consider programming methodologies and systems for this and other multiple-processor architectures. c. jesshope a characterization of processor performance in the vax-11/780 this paper reports the results of a study of vax-11/780 processor performance using a novel hardware monitoring technique. a micro-pc histogram monitor was built for these measurements. it keeps a count of the number of microcode cycles executed at each microcode location. measurement experiments were performed on live timesharing workloads as well as on synthetic workloads of several types. the histogram counts allow the calculation of the frequency of various architectural events, such as the frequency of different types of opcodes and operand specifiers, as well as the frequency of some implementation-specific events, such as translation buffer misses. the measurement technique also yields the amount of processing time spent in various activities, such as ordinary microcode computation, memory management, and processor stalls of different kinds. this, paper reports in detail the amount of time the "average" vax instruction spends in these activities. joel s. emer douglas w. clark the mbone: the internet's other backbone jay a. kreibich airmail: a link-layer protocol for wireless networks this paper describes the design and performance of a link-layer protocol for indoor and outdoor wireless networks. the protocol is asymmetric to reduce the processing load at the mobile, reliability is established by a combination of automatic repeat request and forward error correction, and link-layer packets are transferred appropriately during handoffs. the protocol is named airmail (asymmetric reliable mobile access in link-layer). the asymmetry is needed in the design because the mobile terminals have limited power and smaller processing capability than the base stations. the key ideas in the asymmetric protocol design consist of placing bulk of the intelligence in the base station as opposed to placing it symmetrically, in requiring the mobile terminal to combine several acknowledgments into a single acknowledgment to conserve power, and in designing the base stations to send periodic status messages, while making the acknowledgment from the mobile terminal event- driven. the asymmetry in the protocol design results in a one-third reduction of compiled code. the forward error correction technique incorporates three levels of channel coding which interact adaptively. the motivation for using a combination of forward error correction and link-layer retransmissions is to obtain better performance in terms of end-to-end throughput and latency by correcting errors in an unreliable wireless channel in addition to end-to-end correction rather than by correcting errors only by end-to-end retransmissions. the coding overhead is changed adaptively so that bandwidth expansion due to forward error correction is minimized. integrity of the link during handoffs (in the face of mobility) is handled by window management and state transfer. the protocol has been implemented. experimental performance results based on the implementation are presented. 
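the airmail entry above attributes part of the protocol's power savings to having the mobile combine several acknowledgments into a single acknowledgment, sent in response to periodic status messages from the base station. the sketch below illustrates only that one idea with an invented message format; it is not the airmail protocol itself.

```python
# Illustrative sketch of combining several acknowledgments into one:
# the mobile records received sequence numbers and, only when the base
# station's periodic status message arrives, replies with a single
# cumulative ack plus the out-of-order sequence numbers it holds.
# Invented message format; not the AIRMAIL specification.

class MobileReceiver:
    def __init__(self):
        self.received = set()      # out-of-order packets held above the gap
        self.expected = 0          # lowest sequence number not yet received

    def on_data(self, seq):
        self.received.add(seq)
        while self.expected in self.received:
            self.received.discard(self.expected)
            self.expected += 1     # advance the cumulative point

    def on_status_poll(self):
        # one combined acknowledgment instead of one ack per packet
        return {"cum_ack": self.expected, "received_above": sorted(self.received)}

if __name__ == "__main__":
    rx = MobileReceiver()
    for seq in [0, 1, 3, 4, 6]:    # packets 2 and 5 lost
        rx.on_data(seq)
    print(rx.on_status_poll())     # {'cum_ack': 2, 'received_above': [3, 4, 6]}
```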
ender ayanoglu sanjoy paul thomas f. laporta krishan k. sabnani richard d. gitlin analysis of the link error monitoring protocols in the common channel signaling network v. ramaswami jonathan l. wang ensuring robust call throughput and fairness for scp overload controls donald e. smith using a wearable computer for continuous learning and support a wearable computer with an electronic performance support system can provide continuous learning and support to mobile workers. this system allows mobile users to ask for advice, receive instruction, access productivity tools, communicate with others, and assess their knowledge on a continuous basis. workers can get this support when they need it, where they need it. compared to traditional training and support, this new technique may provide substantial performance improvements. we are developing an architecture for this type of support system and are currently investigating the use of this system to help mobile factory workers perform their tasks. lawrence j. najjar chris thompson jennifer j. ockerman measured capacity of an ethernet: myths and reality ethernet, a 10 mbit/sec csma/cd network, is one of the most successful lan technologies. considerable confusion exists as to the actual capacity of an ethernet, especially since some theoretical studies have examined operating regimes that are not characteristic of actual networks. based on measurements of an actual implementation, we show that for a wide class of applications, ethernet is capable of carrying its nominal bandwidth of useful traffic, and allocates the bandwidth fairly. we discuss how implementations can achieve this performance, describe some problems that have arisen in existing implementations, and suggest ways to avoid future problems. d. r. boggs j. c. mogul c. a. kent a consistent history link connectivity protocol paul lemahieu johoshua bruck micros vs. mainframes (panel) with the advent of powerful, inexpensive microcomputers, the possibility of performing simulations on equipment other than mainframes has become a reality. is this reality at this point more than just a matter of demonstration of small models or an educational tool? have the mainframe and supermini monoliths lost their place in the simulation community to micros just as so many business applications have been downloaded to small systems? or have we seen equipment and software so far which is not yet capable of taking on the simulation tasks have traditionally been done by large-scale systems? val silbey toward a framework for power control in cellular systems efficiently sharing the spectrum resource is of paramount importance in wireless communication systems, in particular in personal communications where large numbers of wireless subscribers are to be served. spectrum resource sharing involves protecting other users from excessive interference as well as making receivers more tolerant to this interference. transmitter power control techniques fall into the first category. in this paper we describe the power control problem, discuss its major factors, objective criteria, measurable information and algorithm requirements. we attempt to put the problem in a general framework and propose an evolving knowledge-bank to share, study and compare between algorithms. zvi rosberg jens zander optimal buffer management policies for shared-buffer atm switches supriya sharma yannis viniotis a note on the use of timestamps as nonces b. clifford neuman stuart g. 
stubblebine the end-to-end effects of internet path selection the path taken by a packet traveling across the internet depends on a large number of factors, including routing protocols and per-network routing policies. the impact of these factors on the end-to-end performance experienced by users is poorly understood. in this paper, we conduct a measurement-based study comparing the performance seen using the "default" path taken in the internet with the potential performance available using some alternate path. our study uses five distinct datasets containing measurements of "path quality", such as round- trip time, loss rate, and bandwidth, taken between pairs of geographically diverse internet hosts. we construct the set of potential alternate paths by composing these measurements to form new synthetic paths. we find that in 30-80% of the cases, there is an alternate path with significantly superior quality. we argue that the overall result is robust and we explore two hypotheses for explaining it. stefan savage andy collins eric hoffman john snell thomas anderson design and performance of multipath min architectures frederic t. chong thomas f. knight visa: a variable instruction set architecture alessandro de gloria an evaluation of branch architectures branch instructions form a significant fraction of executed instructions, and their design is thus a crucial component of any architecture. this paper examines three alternatives in the design of branch instructions: delayed vs. non-delayed branches, one- vs. two-instruction branches, and the use or non- use of condition codes. simulation and analytical techniques are used to provide quantitative comparisons between these choices. j. a. derosa h. m. levy a unified approach to fault-tolerance in communication protocols based on recovery procedures anjali agarwal j. william atwood distributions of packet delay and interdeparture time in slotted aloha and carrier sense multiple access fouad a. tobagi improving 3d geometry transformations on a simultaneous multithreaded simd processor in this paper we evaluate the performance of an smt processor used as the geometry processor for a 3d polygonal rendering engine. to evaluate this approach, we consider pmesa (a parallel version of mesa) which parallelizes the geometry stage of the 3d pipeline. we show that smt is suitable for 3d geometry and we characterize the execution of the geometry stage in term of memory hierarchy, which is the main bottleneck. the results show that latency is not fully recovered by smt; the use of l2 data prefetching does not succeed in increasing the performance. we show that this problem comes from a pollution of the instruction window by the threads experiencing l2 cache misses, thus reducing the window available for the other threads. we thus propose dcpred, a hardware mechanism to predict l2 misses and control this pollution. coupled with l2 data prefetching, dcpred achieves gains up to 21% over the baseline smt. claude limousin julien sebot alexis vartanian nathalie drach-temam a study of mips programs ishaq h. unwala harvey g. cragon proton: a media access control protocol for optical networks with star topology david a. levine ian f. akyildiz the hcn: a versatile interconnection network based on cubes this paper introduces a family of interconnection networks for loosely-coupled multiprocessors called hierarchical cubic networks (hcns). hcns use the well- known hypercube network as their basic building block. 
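the internet path selection study above builds synthetic alternate paths by composing pairwise measurements and asks whether a detour beats the default route. the sketch below shows just that composition step on made-up round-trip times; the host names and values are placeholders, not data from the study.

```python
# Sketch of the alternate-path idea from the path-selection entry above:
# given measured round-trip times between host pairs, check whether
# relaying via some intermediate host would beat the default path.
# The RTT values below are invented for illustration.

rtt = {                      # measured round-trip times in milliseconds
    ("A", "B"): 180, ("A", "C"): 40, ("C", "B"): 60,
    ("A", "D"): 90,  ("D", "B"): 120,
}

def get_rtt(x, y):
    return rtt.get((x, y)) or rtt.get((y, x))

def best_alternate(src, dst, hosts):
    # compose two measured segments into a synthetic one-hop detour
    candidates = []
    for via in hosts:
        if via in (src, dst):
            continue
        a, b = get_rtt(src, via), get_rtt(via, dst)
        if a is not None and b is not None:
            candidates.append((a + b, via))
    return min(candidates) if candidates else None

if __name__ == "__main__":
    direct = get_rtt("A", "B")
    alt = best_alternate("A", "B", ["A", "B", "C", "D"])
    print("direct:", direct, "ms; best detour:", alt)  # the detour via C wins here
```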
using a considerably lower number of links per node, hcns realize lower network diameters than the hypercube. the performance of several well-known applications on a hypothetical system employing the hcn is identical to their performance on a hypercube. hcns thus enjoy the same advantages as a hypercube, albeit with considerably simpler interconnections. k. ghose k. r. desai an analysis of wide-area name server traffic: a study of the internet domain name system over a million computers implement the internet's domain name system of dns, making it the world's most distributed database and the internet's most significant source of wide-area rpc-like traffic. last year, over eight percent of the packets and four percent of the bytes that traversed the nsfnet were due to dns. we estimate that a third of this wide-area dns traffic was destined to seven root name servers. this paper explores the performance of dns based on two 24-hour traces of traffic destined to one of these root name servers. it considers the effectiveness of name caching and retransmission timeout calculation, shows how algorithms to increase dns's resiliency lead to disastrous behavior when servers fail or when certain implementation faults are triggered, explains the paradoxically high fraction of wide-area dns packets, and evaluates the impact of flaws in various implementations of dns. it shows that negative caching would improve dns performance only marginally in an internet of correctly implemented name servers. it concludes by calling for a fundamental change in the way we specify and implement future name servers and distributed applications. peter b. danzig katia obraczka anant kumar building blocks for data flow prototypes a variety of proposed architectures for data flow computers have been advanced. evaluation of the practical potential of these proposals is being studied through analysis and simulation, but these techniques cannot be used to study a machine design in sufficient detail to make accurate predictions of performance. as a basis for extrapolating cost/performance of these architectures, and for developing a methodology for data flow program preparation, the construction of prototype machines is needed. in this paper we present our plan for realizing experimental data flow machines as packet communication systems using two types of hardware elements: a microprogrammed processing element with provision for packet transmission and reception; and a router unit used to build networks to support packet communication among processing elements. jack b. dennis g. andrew boughton clement k.c. leung an evaluation of quality of service characteristics of pacs packet channel a novel application on the recently emerging wireless personal communications systems (pcs) is the multimedia communication. in this paper, we evaluate multimedia communications capability and quality of service characteristics of one of the pcs standards, personal access communications systems's (pacs) packet channel (ppc) using simulation modeling. the performance of ppc's slot aggregation and data-sense multiple access techniques are studied by considering the downlink and uplink in a single cell and combined uplink/downlink in two cells and changing various parameters such as the number of users and certain protocol parameters. interconnection of ppc with the internet is discussed next. frame rates of mpeg-1 coded images transmitted in a pacs cell as ip datagrams are determined. 
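the wide-area dns study above weighs, among other things, how much negative caching would help. purely as an illustration of the mechanism under discussion, the sketch below adds negative entries with their own ttl to a toy resolver cache; the server stub, ttl values, and names are invented.

```python
# Toy resolver cache illustrating negative caching, as discussed in the
# DNS traffic study above: failed lookups are remembered for a short TTL
# so repeated queries for nonexistent names do not go back to the servers.
# The lookup stub and TTLs are stand-ins, not real DNS behavior.

import time

class Cache:
    def __init__(self, pos_ttl=300.0, neg_ttl=60.0):
        self.entries = {}                   # name -> (answer_or_None, expiry)
        self.pos_ttl, self.neg_ttl = pos_ttl, neg_ttl

    def resolve(self, name, query_server):
        entry = self.entries.get(name)
        if entry and entry[1] > time.time():
            return entry[0]                 # cached answer (possibly negative)
        answer = query_server(name)         # may return None for a missing name
        ttl = self.pos_ttl if answer is not None else self.neg_ttl
        self.entries[name] = (answer, time.time() + ttl)
        return answer

if __name__ == "__main__":
    queries = {"count": 0}
    def fake_server(name):
        queries["count"] += 1
        return "192.0.2.1" if name == "example.com" else None
    c = Cache()
    for _ in range(5):
        c.resolve("example.com", fake_server)
        c.resolve("no-such-host.example", fake_server)
    print("server queries:", queries["count"])  # 2 with negative caching, not 10
```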
handover characteristics of ppc downlink are studied by changing different parameters such as the cell size, the speed of the mobile host and time between handovers. the results clearly establish that multimedia communication on ppc is only feasible at lower bandwidths and frame rates and only a few users per cell can be supported. careful tuning of ppc protocol parameters is required. there is one parameter whose variation gives opposite results on the downlink and uplink. behcet sarikaya mehmet ulema self-similarity through high-variability: statistical analysis of ethernet lan traffic at the source level a number of recent empirical studies of traffic measurements from a variety of working packet networks have convincingly demonstrated that actual network traffic is _self-similar_ or _long-range dependent_ in nature (i.e., bursty over a wide range of time scales) - in sharp contrast to commonly made traffic modeling assumptions. in this paper, we provide a plausible physical explanation for the occurrence of self- similarity in high-speed network traffic. our explanation is based on convergence results for processes that exhibit _high variability_ (i.e., infinite variance) and is supported by detailed statistical analyses of real- time traffic measurements from ethernet lan's at the level of individual sources.our key mathematical result states that the superposition of many on/off sources (also known as _packet trains_) whose on-periods and off- periods exhibit the _noah effect_ (i.e., have high variability or infinite variance) produces aggregate network traffic that features the _joseph effect_ (i.e., is self-similar or long-range dependent). there is, moreover, a simple relation between the parameters describing the intensities of the noah effect (high variability) and the joseph effect (self-similarity). an extensive statistical analysis of two sets of high time-resolution traffic measurements from two ethernet lan's (involving a few hundred active source- destination pairs) confirms that the data at the level of individual sources or source- destination pairs are consistent with the noah effect. we also discuss implications of this simple physical explanation for the presence of self- similar traffic patterns in modern high-speed network traffic for (i) parsimonious traffic modeling (ii) efficient synthetic generation of realistic traffic patterns, and (iii) relevant network performance and protocol analysis. walter willinger murad s. taqqu robert sherman daniel v. wilson instruction level profiling and evaluation of the ibm/6000 chriss stephens bryce cogswell john heinlein gregory palmer john p. shen a manager's guide to integrated services digital network james c. brancheau justus d. naumann identification in computer networks to communicate across a network system, entities within the system need to be able to identify one another. an identifier usually takes the form of a name or an address. conventions for the assignment of these identifiers and their resolution affect many aspects of network design. identification is thus a crucial issue for network architecture and standardization. in this paper, our discussion is first devoted to the semantics of names, addresses, and routes, with emphasis on the relationships among them. we then examine the implications of layered network architecture, syntactic differences of identifiers, and broadcast communication. finally, we turn our attention to naming and addressing in large dynamic networks. 
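the self-similarity entry above explains the construction at the heart of the result: superposing many on/off sources whose on and off periods are heavy-tailed yields aggregate traffic whose burstiness persists across time scales. the sketch below reproduces that construction in miniature with pareto-distributed periods; the parameter values are arbitrary and the variance-of-means printout is only a crude indicator, not the paper's statistical analysis.

```python
# Rough illustration of the on/off construction in the self-similarity
# entry above: many sources with Pareto (heavy-tailed) on and off periods
# are superposed, and the aggregate stays bursty across time scales.
# Parameters (alpha, number of sources, horizon) are arbitrary choices.

import random

def pareto(alpha, xm=1.0):
    return xm / ((1.0 - random.random()) ** (1.0 / alpha))

def aggregate_rate(n_sources=50, horizon=20000, alpha=1.4):
    agg = [0] * horizon
    for _ in range(n_sources):
        t, on = 0, random.random() < 0.5
        while t < horizon:
            length = int(pareto(alpha)) + 1
            if on:
                for u in range(t, min(t + length, horizon)):
                    agg[u] += 1            # source transmits one unit per slot
            t, on = t + length, not on
    return agg

def variance_of_means(series, window):
    means = [sum(series[i:i + window]) / window
             for i in range(0, len(series) - window, window)]
    mu = sum(means) / len(means)
    return sum((m - mu) ** 2 for m in means) / len(means)

if __name__ == "__main__":
    random.seed(1)
    traffic = aggregate_rate()
    for w in (10, 100, 1000):
        # for short-range-dependent traffic this would fall roughly like 1/w;
        # here it falls much more slowly, a signature of self-similarity
        print("window", w, "variance of means %.3f" % variance_of_means(traffic, w))
```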
zaw-sing su efficient hardware for multiway jumps and pre-fetches k. karplus a. nicolau next century challenges: nexus - an open global infrastructure for spatial-aware applications fritz hohl uwe kubach alexander leonhardi kurt rothermel markus schwehm low-loss tcp/ip header compression for wireless networks wireless is becoming a popular way to connect mobile computers to the internet and other networks. the bandwidth of wireless links will probably always be limited due to properties of the physical medium and regulatory limits on the use of frequencies for radio communication. therefore, it is necessary for network protocols to utilize the available bandwidth efficiently. headers of ip packets are growing and the bandwidth required for transmitting headers is increasing. with the coming of ipv6 the address size increases from 4 to 16 bytes and the basic ip header increases from 20 to 40 bytes. moreover, most mobility schemes tunnel packets addressed to mobile hosts by adding an extra ip header or extra routing information, typically increasing the size of tcp/ipv4 headers to 60 bytes and tcp/ipv6 headers to 100 bytes. in this paper, we provide new header compression schemes for udp/ip and tcp/ip protocols. we show how to reduce the size of udp/ip headers by an order of magnitude, down to four to five bytes. our method works over simplex links, lossy links, multi-access links, and supports multicast communication. we also show how to generalize the most commonly used method for header compression for tcp/ipv4, developed by jacobson, to ipv6 and multiple ip headers. the resulting scheme unfortunately reduces tcp throughput over lossy links due to unfavorable interaction with tcp's congestion control mechanisms. however, by adding two simple mechanisms the potential gain from header compression can be realized over lossy wireless networks as well as point-to-point modem links. mikael degermark mathias engan björn nordgren stephen pink scalable best matching prefix lookups marcel waldvogel george varghese jonathan turner bernhard plattner continuum computer architecture for exaflops computation thomas sterling the powerpc performance modeling methodology ali poursepanj advocating a remote socket architecture for internet access using wireless lans m. schläger b. rathke a. wolisz s. bodenstein on the performance of slotted aloha in a spread spectrum environment we present an extension of the slotted aloha protocol for use in a spread spectrum packet radio environment. with spread spectrum, we assume that n distinct codes are available, and that each code can be treated as a separate channel. running an independent copy of the protocol on each of these channels would be undesirable, since each user would have to select one channel to monitor for packets addressed to it, inducing a logical partitioning of the user population into n groups. to preserve the logical connectivity of the network, we examine the effect of separating packet transmissions into two parts: a short preamble, which is sent over a public channel, followed by the body, which is sent over a private channel. we assume that m of the available codes are used as preamble channels, and the remaining n - m codes are used for actual packet transmissions. if m is much smaller than n, then the network can support broadcast and multicast packets, and still make use of all of the available channels. panos economopoulos mart l.
molle the designer's perspective to atomic noncooperative networks lavy libman ariel orda a satellite-augmented cellular network concept some next-generation personal communication systems propose the use of satellite systems for extending geographical coverage of cellular service. we pursue the idea of using satellite capacity to offload congestion within the area serviced by the terrestrial network. an integrated satellite-cellular network configuration is considered. the performance of this system is evaluated by means of an analytical model for a one-dimensional (highway) cellular system overlaid with satellite footprints and by means of simulation for a planar cellular network with satellite spot beam support. under certain re-use assumptions, an improvement is found in the blocking performance of the integrated system over the erlang-b blocking of a purely cellular circuit switched systems. this is achieved by efficient partitioning (static) of the total bandwidth into space and terrestrial segments. major factors that influence performance, such as different reuse considerations on the satellite and cellular systems, cell size to footprint size ratios, admission control and call management policies, and changes in traffic patterns, are also investigated. deepak ayyagari anthony ephremides performance of sna's lu-lu session protocols sna is both an architecture and a set of products built in conformance with the architecture (1,2,3). the architecture is layered and precisely defined; it is both evolutionary and cost effective for implementing products. perhaps the largest component of cost effectiveness is performance: transaction throughput and response times. for sna, this involves data link control protocols (for sdlc and s/370 channel dlc's), routing algorithms, protocols used on the sessions that connect logical units (lu-lu session protocols), and interactions among them. sna's dlc and routing protocols have been discussed elsewhere (4,5,6); this talk examines protocols on sessions between logical units (lu-lu session protocols) and illustrates the results of design choices by comparing the performance of various configurations. james p. gray two issues in reservation establishment this paper addresses two issues related to resource reservation establishment in packet switched networks offering real-time services. the first issue arises out of the natural tension between the local nature of reservations (i.e., they control the service provided on a particular link) and the end-to-end nature of application service requirements. how do reservation establishment protocols enable applications to receive their desired end-to-end service? we review the current one-pass and two-pass approaches, and then propose a new hybrid approach called one-pass-with-advertising. the second issue in reservation establishment we consider arises from the inevitable heterogeneity in network router capabilities. some routers and subnets in the internet will support real-time services and others, such as ethernets, will not. how can a reservation establishment mechanism enable applications to achieve the end-to- end service they desire in the face of this heterogeneity? we propose an approach involving replacement services and advertising to build end-to-end service out of heterogeneous per-link service offerings. 
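the satellite-augmented cellular entry above uses erlang-b blocking as the baseline for a purely cellular circuit-switched system. the erlang-b formula is standard, and the sketch below evaluates it with the usual recurrence; the channel count and offered loads are example values only.

```python
# Erlang-B blocking probability, referenced in the satellite-augmented
# cellular entry above as the baseline for a circuit-switched cell.
# Computed with the standard recurrence
#   B(0, a) = 1,  B(c, a) = a*B(c-1, a) / (c + a*B(c-1, a))
# where a is the offered traffic in erlangs and c the number of channels.

def erlang_b(offered_erlangs, channels):
    b = 1.0
    for c in range(1, channels + 1):
        b = offered_erlangs * b / (c + offered_erlangs * b)
    return b

if __name__ == "__main__":
    # example values only: a cell with 30 channels under increasing load
    for a in (15.0, 20.0, 25.0, 30.0):
        print("offered %.0f erlangs, 30 channels -> blocking %.4f"
              % (a, erlang_b(a, 30)))
```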
scott shenker lee breslau observing tcp dynamics in real networks the behavior of the tcp protocol in simple situations is well-understood, but when multiple connections share a set of network resources the protocol can exhibit surprising phenomena. earlier studies have identified several such phenomena, and have analyzed them using simulation or observation of contrived situations. this paper shows how, by analyzing traces of a busy segment of the internet, it is possible to observe these pheonomena in "real life" and measure both their frequency and their effects on performance. a tcp implementation might use similar techniques to support rate-based congestion control. jeffrey c. mogul virtualclock: a new traffic control algorithm for packet-switched networks one of the challenging research issues in building high-speed packet- switched networks is how to control the transmission rate of statistical data flows. this paper describes a new traffic control algorithm, virtualclock, for high- speed network applications. virtualclock monitors the average transmission rate of statistical data flows and provides every flow with guaranteed throughput and low queueing delay. it provides firewall protection among individual flows, as in a tdm system, while retaining the statistical multiplexing advantages of packet switching. simulation results show that the virtualclock algorithm meets all its design goals. lixia zhang on performance evaluation of fault tolerant multistage interconnection networks youngsong mun hee yong youn the synapse n+1 system: architectural characteristics and performance data of a tightly-coupled multiprocessor system elliot nestle armond inselberg evaluation of load sharing in harts while considering message routing and broadcasting kang g. shin chao-ju hou architecture-cognizant divide and conquer algorithms kang su gatlin larry carter application and evaluation of large deviation techniques for traffic engineering in broadband networks costas courcoubetis vasilios a. siris george d. stamoulis standards and architecture for token-ring local area networks jacalyn winkler jane munn experimental evaluation of throughput performance of irtcp under noisy channels arun k. somani indu peddibhotla resources section: conferences jay blickstein adaptive error coding using channel prediction in this paper, we construct a finite-state markov chain model for a rayleigh fading channel by partitioning the range of the received signal envelope into k intervals. using a simulation of the classic two-ray rayleigh fading model, a markov transition probability matrix is obtained. using this matrix to predict the channel state, we introduce an adaptive forward error correction (fec) coding scheme. simulation results are presented to show that the adaptive fec coding scheme significantly improves the performance of a wireless communication system. r. chen k. c. chua b. t. tan c. s. ng energy-efficient wireless atm design paul j. m. havinga gerard j. m. smit martimus bos tcp extensions for space communications the space communication environment and mobile and wireless communication environments show many similarities when observed from the perspective of a transport protocol. both types of environments exhibit loss caused by data corruption and link outage, in addition to congestion-related loss. the constraints imposed by the two environments are also similar---power, weight, and physical volume of equipment are scarce resources. 
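the virtualclock entry above stamps each flow's packets with a per-flow virtual clock that advances in proportion to packet length divided by the flow's reserved rate, and transmits packets in stamp order. the sketch below is a stripped-down, single-node rendering of that stamping rule, without the monitoring and control machinery of the full algorithm; flow names, rates, and packet sizes are invented.

```python
# Simplified sketch of the VirtualClock idea from the entry above:
# each flow has a virtual clock that advances by (packet length / reserved
# rate) per packet, packets are stamped with the clock value, and the
# switch transmits the packet with the smallest stamp first.
# Flow rates and packet sizes below are invented for illustration.

import heapq

class VirtualClockQueue:
    def __init__(self, reserved_rates):       # bytes per second per flow
        self.rates = reserved_rates
        self.vclock = {f: 0.0 for f in reserved_rates}
        self.heap = []                         # (stamp, seq, flow, size)
        self.seq = 0

    def enqueue(self, flow, size, now):
        # restart an idle flow's clock at real time, then advance it
        self.vclock[flow] = max(self.vclock[flow], now) + size / self.rates[flow]
        heapq.heappush(self.heap, (self.vclock[flow], self.seq, flow, size))
        self.seq += 1

    def dequeue(self):
        return heapq.heappop(self.heap) if self.heap else None

if __name__ == "__main__":
    q = VirtualClockQueue({"audio": 8_000, "bulk": 64_000})
    for i in range(4):                         # both flows send 1 kB packets at t=0
        q.enqueue("audio", 1000, now=0.0)
        q.enqueue("bulk", 1000, now=0.0)
    order = []
    while (pkt := q.dequeue()):
        order.append(pkt[2])
    print(order)   # bulk packets win the early slots, matching its larger reservation
```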
finally, it is not uncommon for communication channel data rates to be severely limited and highly asymmetric. we are working on solutions to these types of problems for space communication environments, and we believe that these solutions may be applicable to the mobile and wireless community. as part of our work, we have defined and implemented the space communications protocol standards-transport protocol (scps-tp), a set of extensions to tcp that address the problems that we have identified. the results of our performance tests, both in the laboratory and on actual satellites, indicate that the scps-tp extensions yield significant improvements in throughput over unmodified tcp on error-prone links. additionally, the scps modifications significantly improve performance over links with highly asymmetric data rates. robert c. durst gregory j. miller eric j. travis rethinking the tcp nagle algorithm modern tcp implementations include a mechanism, known as the nagle algorithm, which prevents the unnecessary transmission of a large number of small packets. this algorithm has proved useful in protecting the internet against excessive packet loads. however, many applications suffer performance problems as a result of the traditional implementation of the nagle algorithm. an interaction between the nagle algorithm and tcp's delayed acknowledgement policy can create an especially severe problem, through a temporary "deadlock." these flaws in the nagle algorithm have prompted many application implementors to disable it, even in cases where this is neither necessary nor wise. we categorize the applications that should and should not disable the nagle algorithm, and we show that for some applications that often disable the nagle algorithm, equivalent performance can be obtained through an improved implementation of the algorithm. we describe five possible modifications, including one novel proposal, and analyze their performance on benchmark tests. we also describe a receiver-side modification that can help in some circumstances. j. c. mogul g. minshall a comparative study of bandwidth reservation and admission control schemes in qos-sensitive cellular networks sunghyun choi kang g. shin on network-aware clustering of web clients being able to identify the groups of clients that are responsible for a significant portion of a web site's requests can be helpful to both the web site and the clients. in a web application, it is beneficial to move content closer to groups of clients that are responsible for large subsets of requests to an origin server. we introduce clusters---a grouping of clients that are close together topologically and likely to be under common administrative control. we identify clusters using a "network-aware" method, based on information available from bgp routing table snapshots. balachander krishnamurthy jia wang a 16-bit microprocessor with multi-register bank architecture hideo maejima hiroyuki kida tan watanabe shiro baba keiichi kurakazu draft (abstract only): dynamically reconfigurable architecture for factoring tests the draft architecture is a 256-bit system design for extended precision integer arithmetic. it is intended primarily for high speed factoring of large integers. major architectural features include a segmentable alu which allows parallel computations in up to eight independent alu segments. each segment operates from a local control store with a common micro-instruction address broadcast to all segments by a single sequencer.
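the nagle-algorithm entry above turns on a single sender-side rule: a less-than-full-size segment may be sent immediately only if no previously sent data remains unacknowledged. the sketch below encodes just that textbook rule so the interaction described in the entry is easier to see; it does not include the modifications the paper proposes, and the mss value is simply a typical choice.

```python
# The classic Nagle decision discussed in the entry above, reduced to a
# single predicate: send a small segment immediately only if nothing sent
# earlier is still unacknowledged; otherwise hold it and coalesce.
# This is the textbook rule only, not the modifications proposed in the paper.

MSS = 1460  # maximum segment size in bytes (typical value, assumed here)

def may_send_now(queued_bytes, unacked_bytes, nagle_enabled=True):
    if queued_bytes >= MSS:
        return True                  # full-sized segments always go out
    if not nagle_enabled:
        return True                  # e.g. the application disabled the algorithm
    return unacked_bytes == 0        # small segment only when the pipe is empty

if __name__ == "__main__":
    # a 10-byte write while 500 bytes are still in flight is held back...
    print(may_send_now(queued_bytes=10, unacked_bytes=500))    # False
    # ...but goes out as soon as the outstanding data is acknowledged
    print(may_send_now(queued_bytes=10, unacked_bytes=0))      # True
```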
execution of the current operation by the segment is conditional on the operation selected and the status of a local condition code. a micro-program development environment including micro-assembler and simulator has been implemented. a prototype draft machine is currently under construction. donald m. chiarulli walter g. rudd duncan a. buell a comparison of dynamic branch predictors that use two levels of branch history recent attention to speculative execution as a mechanism for increasing performance of single instruction streams has demanded substantially better branch prediction than what has been previously available. we [1,2] and pan, so, and rahmeh [4] have both proposed variations of the same aggressive dynamic branch predictor for handling those needs. we call the basic model two-level adaptive branch prediction; pan, so, and rahmeh call it correlation branch prediction. in this paper, we adopt the terminology of [2] and show that there are really nine variations of the same basic model. we compare the nine variations with respect to the amount of history information kept. we study the effects of different branch history lengths and pattern history table configurations. finally, we evaluate the cost effectiveness of the nine variations. tse-yu yeh yale n. patt performance issues in local area networks (tutorial) this tutorial addresses performance problems in local area networks (lans). user level performance measures are affected by both software and communication bottlenecks. techniques for modeling the key components of the performance of a lan will be presented. models will be presented to discuss the throughput and response time characteristics of lans. we also present some measurement data obtained from a lan performance experiment. satish k. tripathi the expandable split window paradigm for exploiting fine-grain parallelism we propose a new processing paradigm, called the expandable split window (esw) paradigm, for exploiting fine-grain parallelism. this paradigm considers a window of instructions (possibly having dependencies) as a single unit, and exploits fine-grain parallelism by overlapping the execution of multiple windows. the basic idea is to connect multiple sequential processors, in a decoupled and decentralized manner, to achieve overall multiple issue. this processing paradigm shares a number of properties of the restricted dataflow machines, but was derived from the sequential von neumann architecture. we also present an implementation of the expandable split window execution model, and preliminary performance results. manoj franklin gurindar s. sohi a contention/reservation access protocol for speech and data integration in tdma-based advanced mobile systems the performance of third generation mobile systems is greatly influenced by the multiple access protocols used in the radio access system. the paper introduces a multiple access protocol, sir (service integration for radio access), which has the potential for accommodating the requirements of speech and bursty data traffic in an efficient way. sir is evolved from an access protocol (prma++) studied within the framework of the tdma-based version of the european evolving standard for third generation mobile systems. in particular, sir uses the same frame structure and in-band signalling but introduces contention-free handling of data bandwidth requests while meeting speech service requirements via basic prma++ mechanisms.
giuseppe anastasi davide grillo luciano lenzini enzo mingozzi packet routing algorithms for integrated switching networks repeated studies have shown that a single switching technique, either circuit or packet switching, cannot optimally support a heterogeneous traffic mix composed of voice, video and data. integrated networks support such heterogeneous traffic by combining circuit and packet switching in a single network. to manage the statistical variations of network traffic, we introduce a new, adaptive routing algorithm called hybrid, weighted routing. simulations show that hybrid, weighted routing is preferable to other adaptive routing techniques for both packet switched networks and integrated networks. daniel a. reed chong-kwon kim persistent storage for distributed applications richard golding john wilkes performance evaluation of the powerpc 620 microarchitecture the powerpc 620 microprocessor is the most recent and performance leading member of the powerpc family. the 64-bit powerpc 620 microprocessor employs a two-phase branch prediction scheme, dynamic renaming for all the register files, distributed multi-entry reservation stations, true out-of-order execution by six execution units, and a completion buffer for ensuring precise exceptions. this paper presents an instruction-level performance evaluation of the 620 microarchitecture. a performance simulator is developed using the vmw (visualization-based microarchitecture workbench) retargetable framework. the vmw-based simulator accurately models the microarchitecture down to the machine cycle level. extensive trace-driven simulation is performed using the spec92 benchmarks. detailed quantitative analyses of the effectiveness of all key microarchitecture features are presented. trung a. diep christopher nelson john paul shen a reduced-power channel reuse scheme for wireless packet cellular networks junyi li ness b. shroff k. p. chong quality of service and mobility for the wireless internet our paper explores the issue of how to provide appropriate quality of service mechanisms closely integrated with flexible mobility management in wireless local area networks. we consider them as access networks of choice for the high performance wireless mobile internet. we present a hierarchical qos architecture that extends _differentiated services (diffserv)_ to mobile hosts in a wireless environment. our approach is based on controlling several parameters of a wireless lan cell: the limited geographical span to ensure the same high bit rate for all hosts, the constrained rate of traffic sources to limit the use of the channel as a function of the required qos, and the limited number of active hosts to keep the load sufficiently low. the qos management is coupled with mobility management at the ip level. we use a micro-mobility scheme implemented in the ipv6 layer with fast hand-offs between adjacent cells. micro-mobility avoids address translation and traffic tunnelling, and enables fast hand-offs. we give some details of experiments to show the quality of service differentiation over the 802.11b network. j. antonio garcia-macias franck rousseau gilles berger-sabbatel leyla toumi andrzej duda books michele tepper the performance impact of incomplete bypassing in processor pipelines pritpal s. ahuja douglas w. clark anne rogers an integrated test center for sl-10 packet networks the sheer scale and complexity of large data networks make testing them a daunting task.
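the wireless-internet qos entry above constrains the rate of traffic sources to limit channel usage as a function of the required qos. a token bucket is the conventional way to express such a rate constraint, so the sketch below shows a generic one; the rate and bucket depth are arbitrary example values and this is not the architecture described in that entry.

```python
# Generic token-bucket rate limiter, the usual way to express the
# "constrained rate of traffic sources" mentioned in the wireless-internet
# QoS entry above. The rate and bucket depth below are arbitrary examples.

class TokenBucket:
    def __init__(self, rate_bps, depth_bytes):
        self.rate = rate_bps / 8.0        # refill rate in bytes per second
        self.depth = float(depth_bytes)   # maximum burst size
        self.tokens = float(depth_bytes)
        self.last = 0.0

    def allow(self, packet_bytes, now):
        # refill proportionally to elapsed time, capped at the bucket depth
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                   # conforming packet
        return False                      # would exceed the contracted rate

if __name__ == "__main__":
    tb = TokenBucket(rate_bps=1_000_000, depth_bytes=3000)   # 1 Mbit/s, 3 kB burst
    sent = sum(tb.allow(1500, now=i * 0.001) for i in range(20))
    print("packets admitted in 20 ms:", sent)
```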
system commissioning, release acceptance, network troubleshooting, performance testing, host conformance testing, and certification are all operational activities that involve testing. packet switching systems typically provide built-in features to help with hardware level test operations such as modem loopback commands, system failure alarms and system selftests. however, testing system and protocol level functions still required the use of separate tool boxes. typical data product test tools are designed to test single data lines, not multiline distributed networks (1,2,3). considering all of the above, the sl-10 test tools group set out to develop network tools (4). tools have been implemented which are well- integrated into the sl-10 product. the same tools can be used alone, independent of the sl-10 network. this tool set makes the job of network testing more manageable for developer, manufacturer and network operator alike. terminals and host computers, as well as total networks, can now be tested for conformance and performance automatically, instead of manually. as a result, major productivity improvements and quality gains have been observed since the tools were introduced. the tool set consists of four tools: an interactive protocol tester (ipt) for checking protocol implementations against specifications; the network load test system (nlts) which generates simulated traffic and measures the network's performance; the network process monitor (npm) which examines operating software at the module level; and the network test system (nts) driver which is an automatic or interactive test sequencer for coordinating and controlling distributed network testing. each tool can be operated individually using a standard data terminal. the ipt, nlts and npm tools run on sl-10 rapid* network processors and can be operated remotely via the network or the nts driver. using these tools in combination, a fully integrated and automatic operational test center can be realized, and the need for separate tool boxes can be eliminated. the design and capabilities of each of these tools are described. this paper describes how these tools meet the special needs of network test operations. m. w. a. hornbeek critical path analysis of tcp transactions improving the performance of data transfers in the internet (such as web transfers) requires a detailed understanding of when and how delays are introduced. unfortunately, the complexity of data transfers like those using http is great enough that identifying the precise causes of delays is difficult. in this paper, we describe a method for pinpointing where delays are introduced into applications like http by using critical path analysis. by constructing and profiling the critical path, it is possible to determine what fraction of total transfer latency is due to packet propagation, network variation (e.g., queueing at routers or route fluctuation), packet losses, and delays at the server and at the client. we have implemented our technique in a tool called tcpeval that automates critical path analysis for web transactions. we show that our analysis method is robust enough to analyze traces taken for two different tcp implementations (linux and freebsd). to demonstrate the utility of our approach, we present the results of critical path analysis for a set of web transactions taken over 14 days under a variety of server and network conditions. 
the results show that critical path analysis can shed considerable light on the causes of delays in web transfers, and can expose subtleties in the behavior of the entire end-to-end system. paul barford mark crovella power-induced time division on asynchronous channels time division multiple access offers certain well-known advantages over methods such as spread spectrum code division. foremost is the interference immunity provided by dedicated time slots. partly offsetting this is tdma's need for network- wide synchronization. viewing arbitrary time intervals as potential tdma time slots, we ask whether it is possible to obtain some of the benefit of time division without incurring the synchronization cost. in particular, we address the question of whether a tdma-like state can be induced on asynchronous channels in such a way as to reduce interference and energy consumption. through analysis and simulation we find conditions under which it is beneficial to use time division, and then show how autonomous power management may be used as a mechanism to induce a form of time division. in this context a backlog-sensitive power management system is presented. john m. rulnick nicholas bambos robust bounded-degree networks with small diameters hisao tamaki cube structures for multiprocessors the exact structural relationship between the hypercube and multistage interconnection networks for multiprocessors is characterized here. by varying the node architecture, structures other than these two interconnection schemes can be derived. krishnan padmanabhan expanded delta networks for very large parallel computers we analyze a generalization of the traditional delta network, dubbed expanded delta network (edn), which provides multiple paths that can be exploited to reduce contention. in massively parallel simd computers, the trend is to put a large number of processors on a chip, but due to i/o constraints only a subset of the processors may have access to the network at any time. this leads to the restricted access expanded delta network of which the maspar mp-1 router network is an example. brian d. alleyne isaac d. scherson multibeam cellular communication systems with dynamic channel assignment across multiple sectors in cellular communication systems, directional multibeam antennas at cell sites can be used to reduce co-channel interference, increase frequency reuse and improve system capacity. when combined with dynamic channel assignment (dca), additional improvement is possible. we propose a multibeam scheme using dynamic channel assignment across multiple sectors. a cell is divided into several sectors, each of which is covered by several directional beams. specific channels are allocated to each sector as in fixed channel assignment (fca). a channel of a sector is dynamically assigned to a wireless user who communicates through one of the several beams of the sector. the assignment is made so that constraints on the allowable co-channel interference are satisfied. limitations due to co-channel interference are analyzed. a tractable analytical model for the proposed scheme is developed using multidimensional birth--death processes. theoretical traffic performance characteristics such as call blocking probability, forced termination probability, hand-off activity, carried traffic and channel rearrangement rate are determined. with the proposed scheme, call blocking probability can be reduced significantly for a fixed offered traffic. 
alternatively, system capacity can be increased while blocking probability is maintained below the required level. smaller forced termination probability is obtainable in comparison with corresponding fca schemes. jung-lin pan stephen s. rappaport petar m. djuric netnews: free for all dennis fowler providing deterministic delay guarantees in atm networks seok-kyu kweon kang g. shin distributed network control for optical networks rajiv ramaswami adrian segall a high performance transparent bridge martina zitterbart ahmed n. tantawy dimitrios n. serpanos the predictability of data values yiannakis sazeides james e. smith a wireless local area network employing distributed radio bridges this paper presents a novel distributed wireless local area network (wlan) architecture, where each wireless terminal (wt) accesses a backbone local area network (lan) segment via multiple radio bridges (rb's). we introduce a self- learning routing algorithm for the rb's, which automatically adapts to changes in terminal locations, and prevents multiple copies of each data frame from being forwarded over the backbone lan segment. the distributed wlan architecture eases the management of network topological changes and terminal mobility, compared to centralized cellular architectures. we consider the use of direct-sequence spread-spectrum (ds/ss) signaling and the slotted aloha medium access control (mac) protocol over the wireless links, with multiple uplink receivers and downlink transmitters at the rb's for each mac frame. simulation results for the uplink show that the multi-receiver site diversity and the capture effect of ds/ss signaling effectively combat multipath and multiaccess interference, resulting in high throughput capacity and stable operation for the channel. under overload traffic, the system is able to maintain a high level of throughput with bounded delays. it is shown that the use of multiple receiver reduces the access fairness problem for wt's at different locations caused by the near-far effect. victor c. m. leung andrew w. y. au a routing algorithm for connection-oriented low earth orbit (leo) satellite networks with dynamic connectivity low earth orbit leo satellites move with respect to a fixed observer on the earth surface. satellites in the polar regions and the seam switch off their intersatellite links to the neighbor satellites. as a result, the connectivity pattern of the network changes. ongoing calls passing through these links need to be rerouted. a large number of simultaneous rerouting attempts would cause excessive signaling load in the network. moreover, the handover calls could be blocked because of the insufficient network resources in the newly established routes or large connection re - establishment delay. in this paper, a routing protocol is introduced to reduce the number of routing attempts resulting from link connectivity change. the protocol does not use the links that will be switched off before the connection is over. since the call durations are not known a priori, the proposed protocol utilizes a probabilistic approach. the performance of the protocol is evaluated through simulation experiments. the experimental results indicate that the routing protocol reduces the number of rerouting attempts resulting from connectivity changes of the network. huseyin uzunalioglu ian f. akyildiz michael d. 
bender slotted aloha and cdpa: a comparison of channel access performance in cellular systems the paper compares the performance of two channel-access schemes suitable for the cellular environment, which, in particular, allow packet capture and can deal with inter-cell interference. the first scheme is the well-known s-aloha, while the second is the recently proposed capture division packet access. the comparison is analytically performed over a common system with a common analytical model. despite the many analyses of s-aloha that have appeared, the one we develop is new because a throughput density uniformly distributed on the plane is considered in a multiple cell environment. the analysis clearly shows the effect of intra-cell and inter-cell interference on the aloha system and quantifies the throughput gain achieved by cdpa, which completely avoids intra-cell interference. our analysis also provides insight about the effectiveness of power control on both systems. flaminio borgonovo michele zorzi a survey of techniques for synchronization and recovery in decentralized computer systems walter h. kohler an error-controlled approximate analysis of a stochastic fluid flow model applied to an atm multiplexer with heterogeneous on-off sources andrea baiocchi nicola blefari-melazzi a wireless public access infrastructure for supporting mobile context-aware ipv6 applications this paper presents a novel wireless access point architecture designed to support the development of next generation mobile context-aware applications over metropolitan scale areas. in addition, once deployed, this network will allow ordinary citizens secure, accountable and convenient access to the internet from their local city and campus environments. the proposed architecture is based on an approach utilising a modified mobile ipv6 protocol stack that uses packet marking and network level packet filtering at the edge of the wired network to achieve this objective. the paper describes this architecture in detail and contrasts it with existing systems to highlight the key benefits of our approach. adrian friday maomao wu stefan schmid joe finney keith cheverst nigel davies a systems approach to prediction, compensation and adaptation in wireless networks j. gomez a. t. campbell h. morikawa performance bounds for flow control protocols rajeev agrawal rene l. cruz clayton okino rajendran rajan filing and printing services on a local-area network this paper describes the design and implementation of filing and printing services in a distributed system based on a token-ring local-area network. the main emphasis is put on the communication aspects of the client/server scenario: the roles of a client and a server in a communication protocol, and the integration of communication protocols with applications. p. janson l. svobodova e. maehle chaos router: architecture and performance s. konstantinidou l. snyder factors in the performance of the an1 computer network an1 (formerly known as autonet) is a local area network composed of crossbar switches interconnected by 100 mbit/second, full-duplex links. in this paper, we evaluate the performance impact of certain choices in the an1 design. these include the use of fifo input buffering in the crossbar switch, the deadlock-avoidance mechanism, cut-through routing, back-pressure for flow control, and multi-path routing. an1's performance goals were to provide low latency and high bandwidth in a lightly loaded network. in this it is successful.
under heavy load, the most serious impediment to good performance is the use of fifo input buffers. the deadlock-avoidance technique has an adverse effect on the performance of some topologies, but it seems to be the best alternative, given the goals and constraints of the an1 design. cut-through switching performs well relative to store-and-forward switching, even under heavy load. back- pressure deals adequately with congestion in a lightly-loaded network; under moderate load, performance is acceptable when coupled with end-to-end flow control for bursts. multi-path routing successfully exploits redundant paths between hosts to improve performance in the face of congestion. susan s. owicki anna r. karlin protocol enhancements in wireless multimedia and multiple-access networks abdel-ghani a. daraiseh an examination of information center implementation and impact the concept of the information center was developed by ibm in 1976 in response to the large backlogs in applications development facing data processing departments. the concept called for a new degree of cooperation and mutual support between the end user and the data processing department. under the information center concept, the end users were to assume a far greater responsibility in both defining and developing non-production data processing applications, while the data processing department would supply computer and data as well as instruction and support in utilizing these computer resources in meeting end user needs. since that time, explosive growth in microcomputers technology, development of local area networks, advances in database systems, and increased computer literacy among the user community has enlarged demand for computer resources by non-data processing personnel. despite the fact that large numbers of organizations have made major commitments to the implementation of information centers, there is little in the literature, beyond anecdotal experiences, about the key issues involved in installing an operational information center. this study examines the techniques used by a number of firms in the san diego area to develop information centers and the impact of the information center concept on the organization. the paper makes the following conclusions: while there a number of similarities in approaches, as of yet, no distinct pattern of management response to the opportunities or problems created by the information center concept has emerged. user demands for resource access will continue to increase and many more organizations will adopt the information center concept. computer science and information systems departments must re-examine their curricula and role, in light of these changing computational support environments. norman e. sondak madelyn c. phillips a hierarchical fair service curve algorithm for link-sharing, real-time, and priority services ion stoica hui zhang t. s. eugene ng a third-party value-added network service approach to reliable multicast kunwadee sripanidkulchai andy myers hui zhang distributed systems growth of distributed systems has attained unstoppable momentum. if we better understood how to think about, analyze, and design distributed systems, we could direct their implementation with more confidence. leonard kleinrock mobile commerce for financial services---killer applications or dead end? since mobile commerce (m-commerce) started to be intensively discussed in the press, financial service companies are said to be the winners of m-commerce. 
but looking at existing m-commerce applications, you will find _really_ interesting information only on a few sites. in addition, many of these applications are still in a prototype state and not yet available to customers. based on the lessons we have learned from building prototype and productive m-commerce applications, this paper briefly discusses some of the problems to be solved by corporations that want to use emerging innovative mobile technologies, such as the wireless application protocol (wap). in particular, we address issues regarding application design and development, security, and infrastructure. michael semrau achim kraiss sacrio: an active buffer management scheme for differentiated service networks in this paper, we propose an active resource management approach for differentiated services networks. the proposed approach, sacrio, employs caching and localized packet remarking within a router. it is shown that sacrio is simple and can be implemented transparently within the diff-serv architecture. it is shown that the packet handling cost remains o(1) with sacrio. sacrio is shown to be quite effective and scalable through both ns-based simulations and real-world trace-driven simulations. saikrishnan gopalkrishnan a. l. narasimha reddy guest editorial: issues in design and deployment of ad hoc networks arun k. somani constructing a reliable test&set bit frank stomp gadi taubenfeld towards a shared-memory massively parallel multiprocessor a set of ultra high throughput (more than one gigabit per second) serial links used as a processor-memory network can lead to the construction of a shared-memory massively parallel multiprocessor. the bandwidth of the network is far beyond values found in present shared-memory multiprocessor networks. to feed this network, the memory must be serially multiported. such a multiprocessor can actually be built with current technologies. this paper analyzes the characteristics of such a novel architecture, presents the solutions that must be considered and the practical problems associated with close of experiments. these results then show the way to effectively build this multiprocessor, taking into account main topics such as data coherency, latency time and scalability. daniel litaize abdelaziz mzoughi christine rochange pascal sainrat predictive and adaptive bandwidth reservation for hand-offs in qos-sensitive cellular networks how to control hand-off drops is a very important quality-of-service (qos) issue in cellular networks. in order to keep the hand-off dropping probability below a pre-specified target value (thus providing a _probabilistic_ qos guarantee), we design and evaluate _predictive_ and _adaptive_ schemes for the bandwidth reservation for the existing connections' handoffs and the admission control of new connections. we first develop a method to estimate user mobility based on an aggregate history of hand-offs observed in each cell. this method is then used to predict (probabilistically) mobiles' directions and hand-off times in a cell. for each cell, the bandwidth to be reserved for hand-offs is calculated by estimating the total sum of fractional bandwidths of the expected hand-offs within a mobility-estimation time window. we also develop an algorithm that controls this window for efficient use of bandwidth and effective response to (1) time-varying traffic/mobility and (2) inaccuracy of mobility estimation.
three different admission-control schemes for new connection requests using this bandwidth reservation are proposed. finally, we evaluate the performance of the proposed schemes to show that they meet our design goal and outperform the static reservation scheme under various scenarios. sunghyun choi kang g. shin the wm computer architecture wm. a. wulf novanet communications network for a control system novanet is a control system oriented fiber optic local area network that was designed to meet the unique and often conflicting requirements of the nova laser control system which will begin operation in 1984. the computers and data acquisition devices that form the distributed control system for a large laser fusion research facility need reliable, high speed communications. both control/status messages and experimental data must be handled. a subset of novanet is currently operating on the two beam novette laser system. j. r. hill j. r. severyn p. j. vanarsdall gaining efficiency in transport services by appropriate design and implementation choices end-to-end transport protocols continue to be an active area of research and development involving (1) design and implementation of special-purpose protocols, and (2) reexamination of the design and implementation of general- purpose protocols. this work is motivated by the perceived low bandwidth and high delay, cpu, memory, and other costs of many current general-purpose transport protocol designs and implementations. this paper examines transport protocol mechanisms and implementation issues and argues that general-purpose transport protocols can be effective in a wide range of distributed applications because (1) many of the mechanisms used in the special-purpose protocols can also be used in general-purpose protocol designs and implementations, (2) special-purpose designs have hidden costs, and (3) very special operating system environments, overall system loads, application response times, and interaction patterns are required before general-purpose protocols are the main system performance bottlenecks. richard w. watson sandy a. mamrak the search for bandwidth continues dennis fowler performance of globally distributed networks in the design and implementation of computer networks one must be concerned with their overall performance and the efficiency of the communication mechanisms chosen. performance is a major issue in the architecture, implementation, and installation of a computer communication network. the architectural design always involves many cost/performance tradeoffs. once implemented, one must verify the performance of the network and locate bottlenecks in the structure. configuration and installation of a network involves the selection of a topology and communication components, channels and nodes of appropriate capacity, satisfying performance requirements. this panel will focus on performance issues involved in the efficient design, implementation, and installation of globally distributed computer communication networks. discussions will include cost/performance tradeoffs of alternative network architecture structures, methods used to measure and isolate implementation performance problems, and configuration tools to select network components of proper capacity. the panel members have all been involved in one or more performance issues related to the architecture, implementation, and/or configuration of the major networks they represent. they will describe their experiences relating to performance issues in these areas. 
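to make the reservation step of the predictive hand-off scheme summarized above (reserving the sum of fractional bandwidths of hand-offs expected within a mobility-estimation window) concrete, here is a minimal sketch; the connection record, the probability values and the window length are hypothetical illustrations, not the paper's actual data structures or estimator.

```python
from dataclasses import dataclass

@dataclass
class ExpectedHandoff:
    bandwidth: float    # bandwidth the connection would need in this cell
    probability: float  # estimated probability that it hands off into this cell
    eta_s: float        # estimated hand-off time, seconds from now

def bandwidth_to_reserve(expected, window_s):
    """sum of fractional bandwidths of hand-offs expected within the
    mobility-estimation time window (a sketch of the idea only)."""
    return sum(h.bandwidth * h.probability
               for h in expected if h.eta_s <= window_s)

# example: two likely hand-offs inside a 10 s window, one outside it
demo = [ExpectedHandoff(64.0, 0.7, 4.0),
        ExpectedHandoff(128.0, 0.4, 8.0),
        ExpectedHandoff(64.0, 0.9, 25.0)]
print(bandwidth_to_reserve(demo, window_s=10.0))  # 64*0.7 + 128*0.4 = 96.0
```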
methodologies and examples will be chosen from these networks in current use. there will be time at the end of the session for questions to the panel. stuart wecker robert gordon james gray james herman raj kanodia dan seligman a unifying methodology for handovers of heterogeneous connections in wireless atm networks the aim of wireless atm is to provide multi-media services to mobile users. while existing research on wireless atm are focussed on handovers of unicast connections, handovers of multicast connections have not been investigated. while conventional multicast join and leave operations occur over the same path, this is not the case during mobile host migrations in a wireless atm network. in this paper, we reveal how handovers of multicast connections can be achieved in a manner irrespective of whether these multicast trees are source-, server- or core-based. more importantly, we demonstrate how the enhanced hybrid handover protocol incorporating crossover switch discovery can be used to support handovers of heterogeneous (i.e., unicast and multicast) connections in a uniform and unified manner, for wireless atm lans employing either the centralised or distributed connection management scheme. c.-k. toh handover management in low earth orbit (leo) satellite networks low earth orbit (leo) satellite networks will play an important role in the evolving information infrastructure. satellites in the low earth orbits provide communication with shorter end-to-end delays and efficient frequency usage. however, some problems need to be solved before leo satellite systems can be successfully deployed. one of these problems is the handover management. the objective of this paper is to survey the basic concepts of leo satellite networks and the handover research. ian f. akyildiz huseyin uzunalioglu michael d. bender the masc/bgmp architecture for inter-domain multicast routing multicast routing enables efficient data distribution to multiple recipients. however, existing work has concentrated on extending single-domain techniques to wide- area networks, rather than providing mechanisms to realize inter-domain multicast on a global scale in the internet.we describe an architecture for inter-domain multicast routing that consists of two complementary protocols. the multicast address-set claim (masc) protocol forms the basis for a hierarchical address allocation architecture. it dynamically allocates to domains multicast address ranges from which groups initiated in the domain get their multicast addresses. the border-gateway multicast protocol (bgmp), run by the border routers of a domain, constructs inter-domain bidirectional shared trees, while allowing any existing multicast routing protocol to be used within individual domains. the resulting shared tree for a group is rooted at the domain whose address range covers the group's address; this domain is typically the group initiator's domain. we demonstrate the feasibility and performance of these complementary protocols through simulation.this architecture, together with existing protocols operating within each domain, is intended as a framework in which to solve the problems facing the current multicast addressing and routing infrastructure. satish kumar pavlin radoslavov david thaler cengiz alaettinoglu deborah estrin mark handley an analysis of mips and sparc instruction set utilization on the spec benchmarks robert f. cmelik shing i. kong david r. ditzel edmund j. 
kelly modeling tcp throughput: a simple model and its empirical validation in this paper we develop a simple analytic characterization of the steady state throughput, as a function of loss rate and round trip time for a bulk transfer tcp flow, i.e., a flow with an unlimited amount of data to send. unlike the models in [6, 7, 10], our model captures not only the behavior of tcp's fast retransmit mechanism (which is also considered in [6, 7, 10]) but also the effect of tcp's timeout mechanism on throughput. our measurements suggest that this latter behavior is important from a modeling perspective, as almost all of our tcp traces contained more time-out events than fast retransmit events. our measurements demonstrate that our model is able to more accurately predict tcp throughput and is accurate over a wider range of loss rates. jitendra padhye victor firoiu don towsley jim kurose improvements to tcp performance in high-speed atm networks michael perloff kurt reiss optical interference produced by artificial light wireless infrared transmission systems for indoor use are affected by noise and interference induced by natural and artificial ambient light. this paper presents a characterisation (through extensive measurements) of the interference produced by artificial light and proposes a simple model to describe it. these measurements show that artificial light can introduce significant in-band components for systems operating at bit rates up to several mbit/s. therefore it is essential to include it as part of the optical wireless indoor channel. the measurements show that fluorescent lamps driven by solid state ballasts produce the wider band interfering signals, and are then expected to be the more important source of degradation in optical wireless systems. adriano j. c. moreira rui t. valadas a. m. de oliveira duarte sna and osi: three strategies for interconnection the rise of the open systems interconnection reference model (osi) in the past five years is forcing ibm to rethink its plans for its proprietary network standard systems network architecture (sna). this article reviews three interconnection strategies to be considered. matthew a. tillman david c.-c. yen congestion-dependent pricing of network services ioannis ch. paschalidis john n. tsitsiklis reducing the cost of branches pipelining is the major organizational technique that computers use to reach higher single-processor performance. a fundamental disadvantage of pipelining is the loss incurred due to branches that require stalling or flushing the pipeline. both hardware solutions and architectural changes have been proposed to overcome these problems. this paper examines a range of schemes for reducing branch cost focusing on both static (compile-time) and dynamic (hardware-assisted) prediction of branches. these schemes are investigated from quantitative performance and implementation viewpoints.1 s. mcfarling j. hennesey proof of a fundamental result in self-similar traffic modeling we state and prove the following key mathematical result in self-similar traffic modeling: the superposition of many _on/off_ sources (also known as _packet trains_) with strictly alternating _on_\\- and _off_-periods and whose _on_-periods or _off_-periods exhibit the _noah effect_ (i.e., have high variability or infinite variance) can produce aggregate network traffic that exhibits the _joseph effect_ (i.e., is self-similar or long-range dependent). 
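the padhye et al. abstract above refers to a closed-form steady-state throughput estimate that accounts for both fast retransmits and timeouts; the sketch below implements the widely cited approximate formula usually associated with that model (loss rate p, round-trip time rtt, retransmission timeout t0, b packets acknowledged per ack, optional window limit wmax), under the assumption that this is the intended form rather than a verified transcription of the paper.

```python
import math

def tcp_throughput(p, rtt, t0, b=2, wmax=None):
    """approximate steady-state tcp throughput in packets/second as a
    function of loss rate p, round-trip time rtt and timeout t0.
    treat this as a sketch of the commonly quoted closed form, not a
    reference implementation of the paper."""
    if p <= 0:
        raise ValueError("loss rate must be positive")
    denom = (rtt * math.sqrt(2 * b * p / 3) +
             t0 * min(1.0, 3 * math.sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p))
    rate = 1.0 / denom
    if wmax is not None:
        rate = min(rate, wmax / rtt)  # receiver-window limit
    return rate

# example: 1% loss, 100 ms round-trip time, 400 ms retransmission timeout
print(round(tcp_throughput(p=0.01, rtt=0.1, t0=0.4), 1), "packets/s")
```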
there is, moreover, a simple relation between the parameters describing the intensities of the noah effect (high variability) and the joseph effect (self-similarity). this provides a simple physical explanation for the presence of self-similar traffic patterns in modern high-speed network traffic that is consistent with traffic measurements at the source level. we illustrate how this mathematical result can be combined with modern high-performance computing capabilities to yield a simple and efficient linear-time algorithm for generating self-similar traffic traces. we also show how to obtain in the limit a lévy stable motion, that is, a process with stationary and independent increments but with infinite variance marginals. while we have presently no empirical evidence that such a limit is consistent with measured network traffic, the result might prove relevant for some future networking scenarios. murad s. taqqu walter willinger robert sherman fast cluster failover using virtual memory-mapped communication yuanyuan zhou peter m. chen kai li models of machines and computation for mapping in multicomputers michael g. norman peter thanisch multiprocessor/multiarchitecture microprocessor design (m3d) the level of support for high level programming languages (hlpls), despite the claim of support by some of the newly introduced microprocessors, is inadequate given the falling (rising) cost of hardware (software). computer systems that are to be used to run programs written exclusively in hlpls have a need for a multiprocessor/multiarchitecture microprocessor design (m3d). samuel aletan william lively joint source/channel coding of statistically multiplexed real-time services on packet networks mark w. garrett martin vetterli apl and halstead's theory: a measuring tool and some experiments we have designed and implemented an algorithm which measures apl-programs in the sense of software science by m.h. halstead /1/. the reader is assumed to be familiar with the theories of software science. our purpose has been to find the best possible algorithm to automatically analyse large quantities of apl-programs. we have also used our measuring tool to make some experiments to find out if apl-programs and workspaces obey the laws of software science or not. because our purpose was to analyse large quantities, i.e., hundreds of programs, we have not implemented an algorithm which gives exactly correct results from the software science point of view, because this would necessitate manual clues to the analysing algorithm and thus an interactive mode of analysis. instead, we have strived for a tool which carries out the analysis automatically and as correctly as possible. in the next section some difficulties encountered in the design of the measuring algorithm and some inherent limitations of it are discussed. section 3 summarises the sources of errors in the analysis carried out by our algorithm, while section 4 gives a more detailed description of the way the analysis is carried out. the remaining sections of this paper report on some experiments we have carried out using our measuring tool. the purpose of these experiments has been to evaluate the explaining power of halstead's theory in connection with apl-programs. however, no attempt has been made to process the results of the experiments statistically. the results of the experiments have been treated here only when 'obvious' (in)compatibilities between the theory and the results have been observed.
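the construction described in the self-similar traffic abstract above (aggregating many on/off sources whose period lengths are heavy-tailed) can be sketched directly; the pareto tail index alpha and the number of sources below are illustrative choices, and the commonly cited link between the two effects, h = (3 - alpha)/2 for 1 < alpha < 2, is quoted as an assumption rather than taken from the abstract itself.

```python
import random

def pareto(alpha: float, xm: float = 1.0) -> float:
    """heavy-tailed pareto sample (infinite variance when alpha < 2)."""
    return xm / (random.random() ** (1.0 / alpha))

def onoff_source(n_slots: int, alpha: float) -> list:
    """one strictly alternating on/off source with pareto period lengths;
    emits 1 per slot while on, 0 while off."""
    out, on = [], random.random() < 0.5
    while len(out) < n_slots:
        length = max(1, int(pareto(alpha)))
        out.extend([1 if on else 0] * length)
        on = not on
    return out[:n_slots]

def aggregate(n_sources: int = 200, n_slots: int = 10_000, alpha: float = 1.4):
    """per-slot sum over many sources; with 1 < alpha < 2 the aggregate is
    expected to be long-range dependent with hurst parameter (3 - alpha)/2."""
    traffic = [0] * n_slots
    for _ in range(n_sources):
        for t, x in enumerate(onoff_source(n_slots, alpha)):
            traffic[t] += x
    return traffic

trace = aggregate()
print("expected hurst parameter ~", (3 - 1.4) / 2)  # 0.8 for alpha = 1.4
```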
possible reasons for the (in)compatibilities are also pointed out. timo laurmaa markku syrjänen a high capacity multihop packet cdma wireless network wireless multihop networks overlaid with cellular structure have the potential to support high data rate internet traffic. in this paper, we consider techniques by which the system capacity of such networks can be increased. first, methods for increasing link capacity in single-user systems are explored. subsequently, we consider a different set of techniques suitable for multiuser systems. we also investigate the effect of traffic dynamics on system capacity and ways to achieve the maximum throughput. finally, we present capacity bounds which illustrate how those techniques help in trading off the conserved power for capacity advantage. ali nabi zadeh bijan jabbari tcp-real: improving real-time capabilities of tcp over heterogeneous networks we present a tcp-compatible and -friendly protocol which abolishes three major shortfalls of tcp for reliable multimedia applications over heterogeneous networks: (i) ineffective bandwidth utilization, (ii) unnecessary congestion-oriented responses to wireless link errors (e.g., fading channels) and operations (e.g., handoffs), and (iii) wasteful window adjustments over asymmetric, low-bandwidth reverse paths. we propose tcp-real, a high-throughput transport protocol that minimizes transmission-rate gaps, thereby enabling better performance and reasonable playback timers. in tcp-real, the receiver decides with better accuracy about the appropriate size of the congestion window. slow start and timeout adjustments are used whenever congestion avoidance fails; however, rate and timeout adjustments are cancelled whenever the receiving rate indicates sufficient availability of bandwidth. we detail the protocol design and we report significant improvements in the performance of the protocol with time-constrained traffic, wireless link errors and asymmetric paths. c. zhang v. tsaoussidis a note on multiple error detection in ascii numeric data communication c. k. chu cnmgraf - graphic presentation services for network management two key problems in the design of systems and network management applications are: how to efficiently manage large quantities of related data, and how to allow the application user to view this data at a user-defined level of detail. this paper describes an architecture for network management facilities that may help solve these problems. it also describes cnmgraf (communications network management graphics facility), which was developed to test the presentation service model defined by this architecture. richard s. gilbert wolfgang b. kleinöder the specifications and design of a distributed workstation (abstract only) this project includes the development of a general system for transparent sharing and access of resources in a distributed is&r environment. the proposed pc-based distributed workstation (pcdws) prototype will give is&r users an integrated pc-based workstation environment for transparent access and sharing of resources available from both local and remote facilities. the pcdws will provide a robust personal computer workstation environment with a comprehensive set of tools as functional components to serve the users, as well as intercommunication and uploading/downloading protocols between workstations and remote mainframes as well as between workstations, thus providing access to multiple local and/or remote dbms and is&r systems.
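for readers unfamiliar with the software-science measures that the apl measuring-tool abstract above relies on, the sketch below computes the standard halstead quantities from already-extracted operator and operand counts; the token-extraction step (the hard part for apl, as that abstract explains) is deliberately left out, and the sample counts are made up.

```python
import math

def halstead(n1: int, n2: int, N1: int, N2: int) -> dict:
    """standard halstead software-science measures from the counts of
    distinct operators (n1), distinct operands (n2), total operators (N1)
    and total operands (N2)."""
    n = n1 + n2  # vocabulary
    N = N1 + N2  # observed program length
    return {
        "vocabulary": n,
        "length": N,
        "estimated_length": n1 * math.log2(n1) + n2 * math.log2(n2),
        "volume": N * math.log2(n),
        "difficulty": (n1 / 2.0) * (N2 / n2),
        "effort": (n1 / 2.0) * (N2 / n2) * N * math.log2(n),
    }

# made-up counts for a small program fragment
print(halstead(n1=12, n2=20, N1=70, N2=55))
```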
frank chum flexible routing and addressing for a next generation ip due to a limited address space and poor scaling of backbone routing information, the internet protocol (ip) is rapidly reaching the end of its useful lifetime. the simple internet protocol plus (sipp), a proposed next generation internet protocol, solves these problems with larger internet layer addresses. in addition, sipp provides a number of advanced routing and addressing capabilities including mobility, extended (variable-length) addressing, provider selection, and certain forms of multicast. these capabilities are all achieved through a single mechanism, a generalization of the ip loose source route. we argue that, for reasons of simplicity and evolvability, a single powerful mechanism to achieve a wide range of routing and addressing functions is preferable to having multiple specific mechanisms, one for each function. paul francis ramesh govindan comments on "carry-over round robin: a simple cell scheduling mechanism for atm networks" saha, mukherjee, and tripathi [1] present and analyze an atm cell scheduling algorithm called carry-over round robin. unfortunately, one of the basic lemmas in their paper, related to the performance of this algorithm, does not hold. this adversely affects the delay and fairness properties that they subsequently derive. to substantiate this, we briefly describe carry-over round robin and present a counterexample to the lemma. we end with some concluding remarks. verus pronk jan korst x-net: a dual bus fiber-optic lan using active switches x-net is a new local area network based on the dual, unidirectional bus topology. stations are connected to two fiber-optic busses using active switches. at light load, x-net behaves as a random access scheme. with increasing load, transmissions are done in an orderly manner, arranged into cycles. the main advantage of x-net is its superior channel utilization and delay performance, compared to other bus schemes. the packet delay is bounded at all loads. performance is shown to be satisfactory over a wide range of a values, where a is the ratio of end-to-end medium propagation delay to the packet transmission time. therefore, x-net is a suitable candidate for operation at high speeds. a. e. kamal b. w. abeysundara an unconventional proposal: using the x86 architecture as the ubiquitous virtual standard architecture jochen liedtke nayeem islam trent jaeger vsevolod panteleenko yoonho park closely coupled asynchronous hierarchical and parallel processing in an open architecture dick naedel completing an mimd multiprocessor taxonomy eric e. johnson data path issues in a highly concurrent machine (abstract) it is desirable to achieve high read bandwidths for the processing elements of highly concurrent sisd computers, both while keeping the register and memory write bandwidths low, and while keeping the interconnection networks simple. solutions to these problems are proposed, simulated, verified, and measured. new techniques called write elimination and write liberation are presented that also greatly reduce the complexity of the data cache. the elimination of addressable hardware registers is also achieved. augustus k. uht darin b. johnson tradeoffs for selection in distributed networks (preliminary version) algorithms are presented for selecting an element of given rank from a set of elements distributed among the nodes of a network. network topologies considered are a ring, a mesh, and a complete binary tree. 
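the carry-over round robin note above concerns a cell scheduler in which unused service credit is carried between rounds; the sketch below is a generic deficit-style round robin over fixed-size cells that illustrates only the carry-over idea, and should not be read as the exact algorithm analysed, or the counterexample given, in that comment.

```python
from collections import deque

def carry_over_round_robin(queues, quanta, rounds, cell_size=1):
    """generic round robin with per-queue carry-over credit: each round a
    queue earns its quantum, sends whole cells while credit suffices, and
    carries the remainder forward. illustration only."""
    credit = [0.0] * len(queues)
    schedule = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            credit[i] += quanta[i]
            while q and credit[i] >= cell_size:
                schedule.append(q.popleft())
                credit[i] -= cell_size
            if not q:
                credit[i] = 0.0  # no carry-over for an empty queue
    return schedule

flows = [deque(f"a{k}" for k in range(4)), deque(f"b{k}" for k in range(4))]
print(carry_over_round_robin(flows, quanta=[1.5, 0.5], rounds=4))
```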
for the ring and the mesh, upper bound tradeoffs are identified between the number of messages transmitted and the total delay due to message transmission. for the tree, an algorithm is presented that uses an asymptotically optimal number of messages. greg n. frederickson the interplanetary internet adrian hooke packet reordering is not pathological network behavior jon c. r. bennett craig partridge nicholas shectman d-prma: a dynamic packet reservation multiple access protocol for wireless communications mehdi alasti nariman farvardin mars: a parallel graph reduction multiprocessor m. castan a. contessa e. cousin c. coustet b. lecussan interworking between digital european cordless telecommunications and a distributed packet switch the digital european cordless telecommunications (dect) standard specifies an air interface. dect requires an external infrastructure to transfer information between wireless terminals, and to transfer information between a wireless terminal and a fixed network. the public switched telephone network, the gsm cellular network, private branch exchanges and mobile data networks are all under investigation as dect backbone networks. in this paper we look to the future and describe interworking between the dect air interface and a wireless network infrastructure referred to as a cellular packet switch. to achieve distributed network control, the cellular packet switch uses an ieee 802.6 metropolitan area network to link base stations, databases and fixed networks. in this paper we specify details of dect information transport on the metropolitan area network. we also give examples of network layer message flows for location updating and handover. the messages for other key procedures including call origination and release appear in a more detailed technical report. sudarshan rao david j. goodman gregory p. pollini kathleen s. meier-hellstern a model for the design of high performance protocols for a networked computing environment the advent of high bandwidth local area networks means that it is now possible to interconnect large numbers of devices with widely differing processing capabilities in such a manner that the various devices may closely interact. before such a system may be realised, it is necessary to define a set of conventions to govern the way in which the network nodes interact with each other. the triadic network model is introduced to describe the interactions between personal computers and high performance computers over a general purpose network by using a third classification of devices, whose purpose is to support the operation of the network as a distributed system. the model is not restricted to any particular network or operating system, but may form the interface between the two. hence, the use of a protocol set based on this model enables the provision of a uniform interface to the applications software. some of the principles of a protocol set developed from the triadic network model are presented in the paper. gary d. law ordering subscribers on cable networks raphael rom pipelining and bypassing in a vliw processor (abstract) this paper describes issues involved in the bypassing mechanism for a vliw processor and its relation to the pipeline structure of the processor. we will first describe the pipeline structure of our processor and analyze its performance and compare it to typical risc-style pipeline structures given the context of a processor with multiple functional units. 
next, we shall study the performance effects of various bypassing schemes in terms of their effectiveness in resolving pipeline data hazards and their effect on the processor cycle time. arthur abnous nader bagherzadeh on ftss-solvable distributed problems joffroy beauquier synnöve kekkonen-moneta wireless ad hoc multicast routing with mobility prediction an ad hoc wireless network is an infrastructureless network composed of mobile hosts. the primary concerns in ad hoc networks are bandwidth limitations and unpredictable topology changes. thus, efficient utilization of routing packets and immediate recovery of route breaks are critical in routing and multicasting protocols. a multicast scheme, on-demand multicast routing protocol (odmrp), has been recently proposed for mobile ad hoc networks. odmrp is a reactive (on-demand) protocol that delivers packets to destination(s) on a mesh topology using scoped flooding of data. we can apply a number of enhancements to improve the performance of odmrp. in this paper, we propose a mobility prediction scheme to help select stable routes and to perform rerouting in anticipation of topology changes. we also introduce techniques to improve transmission reliability and eliminate route acquisition latency. the impact of our improvements is evaluated via simulation. sung-ju lee william su mario gerla developing a fairness metric for packet scheduling algorithms in a single-hop wireless network wesley murry sharing the "cost" of multicast trees: an axiomatic analysis given the need to provide users with reasonable feedback about the "costs" their network usage incurs, and the increasingly commercial nature of the internet, we believe that the allocation of cost among users will play an important role in future networks. this paper discusses cost allocation in the context of multicast flows. the question we discuss is this: when a single data flow is shared among many receivers, how does one split the cost of that flow among the receivers? multicast routing increases network efficiency by using a single shared delivery tree. we address the issue of how these savings are allocated among the various members of the multicast group. we first consider an axiomatic approach to the problem, analyzing the implications of different distributive notions on the resulting allocations. we then consider a one-pass mechanism to implement such allocation schemes and investigate the family of allocation schemes such mechanisms can support. shai herzog scott shenker deborah estrin empirically derived analytic models of wide-area tcp connections vern paxson transport and control issues in multimedia wireless networks it is not an easy task in the umts environment to effectively design the transport and the management of traffic belonging to multimedia teleservices among those defined by itu recommendations, due to the demanding communication requirements that this kind of application can call for. in this paper the results of an overall research work, dealing with an effective management of "multimedia" and "multi-requirement" services in enhanced third-generation mobile radio systems, are presented. the simultaneous use of several bearers, one for each traffic component of the service, is proposed as a reference scenario for the transport of multimedia services in future mobile radio environments.
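the odmrp abstract above mentions predicting topology changes so that rerouting can happen in advance; a standard way to do this, assuming gps-style positions and constant velocities (our assumption here, not necessarily that paper's exact formulation), is to estimate when two nodes will drift out of radio range:

```python
import math

def link_expiration_time(p1, v1, p2, v2, radio_range):
    """time until two nodes moving with constant velocities drift apart
    beyond radio_range. positions p and velocities v are (x, y) tuples.
    returns math.inf if the relative velocity is zero."""
    dx, dy = p1[0] - p2[0], p1[1] - p2[1]
    vx, vy = v1[0] - v2[0], v1[1] - v2[1]
    a = vx * vx + vy * vy
    if a == 0:
        return math.inf
    b = dx * vx + dy * vy
    c = dx * dx + dy * dy - radio_range * radio_range
    disc = b * b - a * c  # roots of a*t^2 + 2*b*t + c = 0
    return max(0.0, (-b + math.sqrt(disc)) / a) if disc >= 0 else 0.0

# nodes 100 m apart, one closing at 5 m/s then pulling away; range 250 m
print(round(link_expiration_time((0, 0), (5, 0), (100, 0), (0, 0), 250), 1))  # 70.0 s
```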
the effectiveness of this choice is guaranteed by providing for innovative control techniques, on top of a prma-based access protocol, developed ad hoc to recognise and jointly manage the different parts of a single multimedia traffic flow. to this end, a two-level priority (static and dynamic priorities) mechanism is proposed to be adopted by the higher protocol levels of umts for the adaptation of call set-up, channel access, handover, and admission control procedures to the nature of multimedia services and for optimising the sharing of radio channel resources and the management of the reservation buffer. the results achieved demonstrate that the priority-based mechanism shows good performance, especially during periods in which the system traffic load is high, and reacts well to the worsening of multimedia service quality, both in terms of information loss and synchronisation of its different traffic components. antonio iera salvatore marano antonella molinaro improving tcp/ip performance over wireless networks hari balakrishnan srinivasan seshan elan amir randy h. katz window selection in flow controlled networks the end-to-end window scheme is a popular mechanism for flow and congestion control in packet switched networks. the window scheme may be implemented in the transport layer protocol (like in tcp), or in the source-to-destination protocol (like in arpanet), or in the network layer protocol (like x.25). in many implementations, the window size is chosen at connection set-up time. in this paper, we provide guidelines for window selection. specifically, we show that if the network becomes overloaded, that is, the offered load exceeds network capacity, then the selection of user windows has a critical impact on individual user throughputs. thus, user windows should be chosen judiciously, so as to satisfy a well defined "fairness" criterion. we formulate the optimal window assignment as a mathematical programming problem, and show that the exact solution is computationally impractical because of the combinatorial nature of the problem and the complexity of the underlying multiple-chain, closed network of queues model. we then develop a heuristic approach which is computationally very efficient and provides nearly optimal solutions. numerical results are provided to illustrate and validate the method. mario gerla h. w. chan architectural tradeoffs in the design of mips-x the design of a risc processor requires a careful analysis of the tradeoffs that can be made between hardware complexity and software. as new generations of processors are built to take advantage of more advanced technologies, new and different tradeoffs must be considered. we examine the design of a second generation vlsi risc processor, mips-x. mips-x is the successor to the mips project at stanford university and like mips, it is a single-chip 32-bit vlsi processor that uses a simplified instruction set, pipelining and a software code reorganizer. however, in the quest for higher performance, mips-x uses a deeper pipeline, a much simpler instruction set and achieves the goal of single cycle execution using a 2-phase, 20 mhz clock. this has necessitated the inclusion of an on-chip instruction cache and careful consideration of the control of the machine. many tradeoffs were made during the design of mips-x and this paper examines several key areas. they are: the organization of the on-chip instruction cache, the coprocessor interface, branches and the resulting branch delay, and exception handling.
for each issue we present the most promising alternatives considered for mips-x and the approach finally selected. working parts have been received and this gives us a firm basis upon which to evaluate the success of our design. p. chow m. horowitz computational methods for performance evaluation of a statistical multiplexer supporting bursty traffic guo-liang wu jon w. mark design of a high-performance atm firewall a router-based packet- filtering firewall is an effective way of protecting an enterprise network from unauthorized access. however, it will not work efficiently in an atm network because it requires the termination of end-to- end atm connections at a packet-filtering router, which incurs huge overhead of sar (segmentation and reassembly). very few approaches to this problem have been proposed in the literature, and none is completely satisfactory. in this paper we present the hardware design of a high-speed atm firewall that does not require the termination of an end-to-end connection in the middle. we propose a novel firewall design philosophy, called quality of firewalling (qof), that applies security measures of different strength to traffic with different risk levels and show how it can be implemented in our firewall. compared with the traditional firewalls, this atm firewall performs exactly the same packet- level filtering without compromising the performance and has the same "look and feel" by sitting at the chokepoint between the trusted atm lan and untrusted atm wan. it is also easy to manage and flexible to use. jun xu mukesh singhal pre-allocation media access control protocols for multiple access wdm photonic networks media access control protocols for an optically interconnected star-coupled system with pre-allocated wdma channels are introduced and compared. the photonic network is based on a passive star- coupled wdm--based configuration with high topological connectivity and low complexity. the channels are pre- allocated to the nodes with this approach, where each node has a home channel that it uses either for all data packet transmissions or all data packet receptions. a home channel may be shared if the number of nodes exceeds the number of channels in the system. this approach does not require both tunable transmitters and tunable receivers. the performance of a generalized random access protocol is compared to a protocol based on interleaved time multiplexing. both protocols are designed to operate in a multiple-channel multiple-access environment and require each node to possess a tunable transmitter and a fixed (or slow tunable) receiver. semi-markov analytic models are developed to investigate the performance of the two protocols. the analytic models are validated through simulation and performance is evaluated in terms of network throughput and packet delay with variations in system parameters. krishna m. sivalingam kalyani bogineni patrick w. dowd a distributed mechanism for power saving in ieee 802.11 wireless lans the finite battery power of mobile computers represents one of the greatest limitations to the utility of portable computers. furthermore, portable computers often need to perform power consuming activities, such as transmitting and receiving data by means of a random-access, wireless channel. the amount of power consumed to transfer the data on the wireless channel is negatively affected by the channel congestion level, and significantly depends on the mac protocol adopted. 
this paper illustrates the design and the performance evaluation of a new mechanism that, by controlling the accesses to the shared transmission channel of a wireless lan, leads each station to an optimal power consumption level. specifically, we considered the standard ieee 802.11 distributed coordination function (dcf) access scheme for wlans. for this protocol we analytically derived the optimal average power consumption levels required for a frame transmission. by exploiting these analytical results, we define a power save, distributed contention control (ps-dcc) mechanism that can be adopted to enhance the performance of the standard ieee 802.11 dcf protocol from a power saving standpoint. the performance of an ieee 802.11 network enhanced with the ps-dcc mechanism has been investigated by simulation. results show that the enhanced protocol closely approximates the optimal power consumption level, and provides a channel utilization close to the theoretical upper bound for the ieee 802.11 protocol capacity. in addition, even in low load situations, the enhanced protocol does not introduce additional overheads with respect to the standard protocol. luciano bononi marco conti lorenzo donatiello guest editorial: mobile multimedia communications andrew t. campbell george c. polyzos a reliable multicast session protocol for collaborative continuous-feed applications walid mostafa mukesh singhal consistency management in deno we describe a new replicated - object protocol designed for use in mobile and weakly - connected environments. the protocol differs from previous protocols in combining epidemic information propagation with voting, and in using fixed per - object currencies for voting. the advantage of epidemic protocols is that data movement only requires pair - wise communication. hence, there is no need for a majority quorum to be available and simultaneously connected at any single time. the protocols increase availability by using voting, rather than primary - copy or primary - commit schemes. finally, the use of per - object currencies allows voting to take place in an entirely decentralized fashion, without any server having complete knowledge of group membership. we show that currency allocation can be used to implement diverse policies. for example, uniform currency distributions emulate traditional voting schemes, while allocating all currency to a single server emulates a primary - copy scheme. we present simulation results showing both schemes, as well as the performance advantages of using currency proxies to temporarily reallocate currency during planned disconnections. furthermore, we discuss an initial design of the underlying replicated - object system and present a basic api. peter j. keleher ugur cetintemel active basestations & nodes for a mobile environment athanassios boulis paul lettieri mani b. srivastava supporting service discovery, querying and interaction in ubiquitous computing environments future computing environments will consist of a wide range of network based appliances, applications and services interconnected using both wired and wireless networks. in order to encourage the development of applications in such environments and remove the need for complex administration and configuration tasks, researchers have recently developed a range of service discovery and interaction platforms. examples of such platforms include slp, havi, upnp and jini. 
while these platforms share a number of common attributes, they each have distinguishing features, and hence future networked environments are likely to present developers with a heterogeneous environment composed of multiple specialised support platforms. however, careful analysis of these platforms reveals shortcomings that we believe will inhibit the development of applications that exploit service-rich environments. in this paper we discuss these shortcomings and propose a new unifying architecture that brings together the advantages of current service discovery and interaction technologies and provides a new api that we consider to be better suited to the development of service-based applications. this work is specifically targeted towards mobile environments, where applications will be required to interact with a wide range of services and devices with minimal user intervention. adrian friday nigel davies elaine catterall concert: design of a multiprocessor development system concert is a shared-memory multiprocessor testbed intended to facilitate experimentation with parallel programs and programming languages. it consists of up to eight clusters, with 4-8 processors in each cluster. the processors in each cluster communicate using a shared bus, but each processor also has a private path to some memory. the novel feature of concert is the ringbus, a segmented bus in the shape of a ring that permits communication between clusters at relatively low cost. efficient arbitration among requests to use the ringbus is a major challenge, which is met by a novel hardware organization, the criss-cross arbiter. simulation of the concert ringbus and arbiter show their performance to lie between that of a crossbar switch and a simple shared intercluster bus. r. r. halstead t. l. anderson r. b. osborne t. l. sterling evaluating design alternatives for reliable communication on high-speed networks raoul a. f. bhoedjang kees verstoep tim r henri e. bal rutger f. h. hofman an efficient routing control for the sigma network (4) when processing vectors on simd computers, the interconnection network may become the bottleneck for performance if it lacks an efficient routing control unit. in the past, many multistage networks have been designed, but general algorithms to control them cannot be used at execution time: they are too time consuming. this has led many manufacturers to use crossbar networks in the design of simd computers. in [se84a],[se84b], we defined the sigma network (n) and we gave realistic algorithms to control it for performing families of permutations covering standard needs in vector processing. here, we present the design of a very efficient control unit for the sigma network (4) (16 entries, 16 exits). a. seznec performance analysis using the mips r10000 performance counters tuning supercomputer application performance often requires analyzing the interaction of the application and the underlying architecture. in this paper, we describe support in the mips r10000 for non-intrusively monitoring a variety of processor events -- support that is particularly useful for characterizing the dynamic behavior of multi-level memory hierarchies, hardware-based cache coherence, and speculative execution. we first explain how performance data is collected using an integrated set of hardware mechanisms, operating system abstractions, and performance tools.
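the service-discovery abstract above argues for a unifying lookup api on top of platforms such as slp, upnp and jini; the toy registry below only illustrates the register-and-lookup-by-interface idea, using a purely hypothetical in-memory api rather than the architecture actually proposed in that work or any real platform's interface.

```python
class ServiceRegistry:
    """toy in-memory registry illustrating discovery by interface name;
    real platforms add leases, scopes, events and remote invocation."""
    def __init__(self):
        self._services = {}  # interface name -> list of advertised endpoints

    def register(self, interface: str, endpoint: str) -> None:
        self._services.setdefault(interface, []).append(endpoint)

    def lookup(self, interface: str) -> list:
        return list(self._services.get(interface, []))

reg = ServiceRegistry()
reg.register("printer", "ipp://10.0.0.7")
reg.register("printer", "ipp://10.0.0.9")
print(reg.lookup("printer"))  # both advertised endpoints
print(reg.lookup("display"))  # [] -> nothing advertised under that interface
```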
we then describe several examples drawn from scientific applications that illustrate how the counters and profiling tools provide information that helps developers analyze and tune applications. marco zagha brond larson steve turner marty itzkowitz mobile networking in the internet computers capable of attaching to the internet from many places are likely to grow in popularity until they dominate the population of the internet. consequently, protocol research has shifted into high gear to develop appropriate network protocols for supporting mobility. this introductory article attempts to outline some of the many promising and interesting research directions. the papers in this special issue indicate the diversity of viewpoints within the research community, and it is part of the purpose of this introduction to frame their place within the overall research area. charles e. perkins tcp/ip performance with random loss and bidirectional congestion t. v. lakshman upamanyu madhow bernhard suter computer structures: what have we learned from the pdp-11? gordon bell william d. strecker editorial a. k. salkintzis c. chamzas p. t. mathiopoulas on the time-complexity of broadcast in radio networks: an exponential gap between determinism and randomization reuven bar-yehuda oded goldreich alon itai proposed automated information management at nasa: its performance measurement and evaluation rebecca r. bogart throughput in a counterflow pipeline processor the counterflow pipeline processor, or cfpp, is a unique form of pipelined risc architecture whose goal is to obtain regular and modular performance from a bi-directional pipeline. in this pipeline, instructions and results move in opposite directions in a counterflow fashion. a basic synchronous model of the cfpp was created and used to study configuration options which affect the flow of instructions and results through the pipeline. these options, covering instruction execution, pipeline movement arbitration, and result movements, were varied in order to find the configuration which maximized throughput for a set of benchmarks. aimee severson brent nelson development of a tcp/ip for the ibm/370 this paper describes the design and implementation decisions that have been made in developing software to support the darpa tcp/ip protocols for the ibm os/370 environment at the university of california division of library automation. the implementation is designed to support over 100 concurrent tcp connections, all of which are managed by a single program, which acts as a specialized sub-operating system. the system is optimized for line-by-line or screen-by-screen terminal traffic rather than character-by-character traffic. in addition, this tcp is designed to exploit the availability of the large main storage and processor speed available on the ibm/370. tcp/ip is generally considered to be a mature protocol specification; however, in the course of our implementation we found several parts to be either ambiguous or problematic --- in particular, error handling and notification, icmp and its relationship to other protocols, and synchronization of data flow with tcp callers. we also discuss problems encountered in trying to replace hardwired terminals in a public access environment with tcp and telnet, and some protocol changes that would make these protocols more hospitable to our environment. robert k. brandriff clifford a. lynch mark h.
needleman a mobility-transparent deterministic broadcast mechanism for ad hoc networks stefano basagni imrich chlamtac danilo bruschi the erica switch algorithm for abr traffic management in atm networks shivkumar kalyanaraman raj jain sonia fahmy rohit goyal bobby vandalore x.75 internetworking of datapac and telenet in 1980, agreement was reached by ccitt defining a standard for the interconnection of public data networks. as a result, networks have been able to proceed with the establishment of international services using ccitt recommendation x.75. this paper discusses the methodology used and the operating experience gained by the introduction of an x.75 interface between the gte telenet and datapac public packet switching networks. mehmet s. unsoy theresa a. shanahan distributed reconfiguration of metamorphic robot chains the problem we address is the distributed reconfiguration of a metamorphic robotic system composed of any number of two dimensional hexagonal modules from specific initial to specific goal configurations. we present a distributed algorithm for reconfiguring a straight chain of hexagonal modules at one location to any intersecting straight chain configuration at some other location in the plane. we prove our algorithm is correct, and show that it is either optimal or asymptotically optimal in the number of moves and asymptotically optimal in the time required for parallel reconfiguration. we then consider the distributed reconfiguration of straight chains of modules to a more general class of goal configurations. jennifer e. walter jennifer l. welch nancy m. amato an unslotted multichannel channel-access protocol for distributed direct-sequence networks a multichannel reservation-based channel-access protocol is investigated in this paper. the available system bandwidth is divided into distinct frequency channels. under the protocol, one channel (the control channel) is used to exchange reservation messages and the remaining channels (the traffic channels) are used for information-bearing traffic. the performance of this scheme is compared to that of a single-channel reservation-based protocol. a simple contention-based slotted-aloha protocol is also considered. performance results take into account the effects of multiple-access interference on acquisition and packet errors. results show that the reservation-based approach is advantageous under conditions of high traffic. in addition, a pacing mechanism that mitigates multiple-access interference and promotes fairness is described, and results are presented that demonstrate its effectiveness. arvind r. raghavan carl w. baum scoq: a fast packet switch with shared concentration and output queueing david x. chen jon w. mark the gould np1 system interconnection the gould np1 is a multicomputer multiprocessing system designed for high performance and parallel processing required in diverse scientific and engineering applications. the np1's basic building block is a dual-processor single bus system which can be expanded up to eight processors over four system buses. this paper discusses the overall design and implementation of the np1 system interconnection, in particular the inter-system bus link which interconnects four system buses to form a tightly coupled eight processor system. such interconnectivity provides the np1 with flexibility in system expansion, capable of addressing a full four gigabytes of physical memory with the least communications delay.
index terms - gould np1, dual-cpu, multiprocessor, processor farm, inter-system bus link. d. j. vianney j. h. thomas v. rabaza on testing hierarchies for protocols deepinder p. sidhu howard motteler raghu vallurupalli business: the 8th layer: the 'big pipe' theory of network integration kate gerwig causal ordering in reliable group communications rosario aiello elena pagani gian paolo rossi web sites michele tepper jay blickstein sensitivity analysis of reliability and performability measures for multiprocessor systems traditional evaluation techniques for multiprocessor systems use markov chains and markov reward models to compute measures such as mean time to failure, reliability, performance, and performability. in this paper, we discuss the extension of markov models to include parametric sensitivity analysis. using such analysis, we can guide system optimization, identify parts of a system model sensitive to error, and find system reliability and performability bottlenecks. as an example we consider three models of a 16-processor, 16-memory system. a network provides communication between the processors and the memories. two crossbar-network models and the omega network are considered. for these models, we examine the sensitivity of the mean time to failure, unreliability, and performability to changes in component failure rates. we use the sensitivities to identify bottlenecks in the three system models. j. t. blake a. l. reibman k. s. trivedi a product location framework for mobile commerce environment recent advances in wireless networking, mobile technologies, and applications have led to the emergence of mobile commerce. as many of these applications require location tracking of products, users, and services, support for location management has become a major issue in m-commerce. although some progress has been made in adding location support in business-to-consumer (b2c) m-commerce, very little work has been done towards addressing these issues in the business-to-business (b2b) environment. in this paper, we discuss location management and present a new location-aware framework for the b2b m-commerce environment. karlene cousins upkar varshney packet network simulation: speedup and accuracy versus timing granularity jong suk ahn peter b. danzig verification/validation of performance simulation using ansi x3.102 measurements m. b. hidalgo w. s. kelly t. n. washburn experimental testing of transport protocol o rafiq c chraïbi r castanet crawler-friendly web servers onn brandman junghoo cho hector garcia-molina narayanan shivakumar the effect of mobile ip handoffs on the performance of tcp mobile ip is a standard for handling routing for hosts that have moved from their home network. this paper studies the costs of the mobile ip handoff that occurs when a mobile host moves between networks. experiments were carried out with mobile ip and tcp over varying network conditions to observe the effect of handoffs on the transmission. this paper shows that although mobile ip may be appropriate for current applications, its long handoff periods make it unsuitable for the future.
anne fladenmuller ranil de silva dataspace - querying and monitoring deeply networked collections in physical space tomasz imielinski samir goel performance of checksums and crc's over real data jonathan stone michael greenwald craig partridge james hughes the network computer (abstract) andy hopper a workload characterization pipeline for models of parallel systems the same application implemented on different systems will necessarily present different workloads to the systems. characterizations of workloads intended to represent the same application, but input to models of different systems, must also differ in analogous ways. we present a hierarchical method for characterizing a workload at increasing levels of detail such that every characterization at a lower level still accurately represents the workload at higher levels. we discuss our experience in using the method to feed the same application through a workload characterization "pipeline" to two different models of two different systems, a conventional relational database system and a logic-based distributed database system. we have developed programs that partially automate the characterization changes that are required when the system to be modeled changes. william alexander tom w. keller ellen e. boughter a robust, optimization-based approach for approximate answering of aggregate queries the ability to approximately answer aggregation queries accurately and efficiently is of great benefit for decision support and data mining tools. in contrast to previous sampling-based studies, we treat the problem as an _optimization_ problem whose goal is to minimize the error in answering queries in the given workload. a key novelty of our approach is that we can tailor the choice of samples to be robust even for workloads that are "similar" but not necessarily identical to the given workload. finally, our techniques recognize the importance of taking into account the variance in the data distribution in a principled manner. we show how our solution can be implemented on a database system, and present results of extensive experiments on microsoft sql server 2000 that demonstrate the superior quality of our method compared to previous work. surajit chaudhuri gautam das vivek narasayya inexact agreement: accuracy, precision, and graceful degradation stephen r. mahaney fred b. schneider very long instruction word architectures and the eli-512 by compiling ordinary scientific applications programs with a radical technique called trace scheduling, we are generating code for a parallel machine that will run these programs faster than an equivalent sequential machine---we expect 10 to 30 times faster. trace scheduling generates code for machines called very long instruction word architectures. in very long instruction word machines, many statically scheduled, tightly coupled, fine- grained operations execute in parallel within a single instruction stream. vliws are more parallel extensions of several current architectures. these current architectures have never cracked a fundamental barrier. the speedup they get from parallelism is never more than a factor of 2 to 3. not that we couldn't build more parallel machines of this type; but until trace scheduling we didn't know how to generate code for them. trace scheduling finds sufficient parallelism in ordinary code to justify thinking about a highly parallel vliw. at yale we are actually building one. 
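the approximate-aggregation abstract above builds on the basic idea of answering an aggregate from a sample and scaling up; the minimal sketch below shows only that baseline uniform-sampling estimator with its scale-up factor, not the workload-tuned, variance-aware sample selection the paper actually proposes, and the table and sampling fraction are made up for illustration.

```python
import random

def approximate_sum(table, value_of, sample_fraction=0.01, seed=0):
    """estimate sum(value) by scanning a uniform sample and scaling by the
    inverse sampling rate. baseline illustration only; the paper instead
    chooses the sample to minimize error for a target query workload."""
    rng = random.Random(seed)
    sample = [row for row in table if rng.random() < sample_fraction]
    if not sample:
        return 0.0
    scale = len(table) / len(sample)
    return scale * sum(value_of(row) for row in sample)

rows = [{"price": p} for p in range(1, 100_001)]
est = approximate_sum(rows, lambda r: r["price"], sample_fraction=0.01)
exact = sum(r["price"] for r in rows)
print(f"estimate {est:,.0f} vs exact {exact:,.0f}")
```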
our machine, the eli-512, has a horizontal instruction word of over 500 bits and will do 10 to 30 risc-level operations per cycle [patterson 82]. eli stands for enormously longword instructions; 512 is the size of the instruction word we hope to achieve. (the current design has a 1200-bit instruction word.) once it became clear that we could actually compile code for a vliw machine, some new questions appeared, and answers are presented in this paper. how do we put enough tests in each cycle without making the machine too big? how do we put enough memory references in each cycle without making the machine too slow? joseph a. fisher algorithms for allocating wavelength converters in all-optical networks gaoxi xiao yiu-wing leung authentication, confidentiality, and integrity extensions to the xns protocol suite r. housley optimizing the end-to-end performance of reliable flows over wireless links reiner ludwig almudena konrad anthony d. joseph error control schemes for networks: an overview in this paper, we investigate the issue of error control in wireless communication networks. we review the alternative error control schemes available for providing reliable end-to-end communication in wireless environments. through case studies, the performance and tradeoffs of these schemes are shown. based on the application environments and qos requirements, the design issues of error control are discussed to achieve the best solution. hang liu hairuo ma magda el zarki sanjay gupta chairman's remarks (panel session, abstract only) from its experimental 1968 inception in arpanet, followed by ccitt standardization of the x.25 protocol in 1976, packet switching now provides inexpensive worldwide data communications capabilities for its subscribers. customer connections to public packet switched network services are increasing at a phenomenal rate, typically 30-40% year over year with no foreseen slowdown. this 2-3 year doubling effect presents numerous challenges to network operators in trying to effectively support and evolve their packet switched networks while simultaneously increasing customer service levels. this panel presents an overview of current customer packet switching applications and then focuses on how network operators are innovatively meeting the challenges presented by high growth on the operation, control, support, configuration and evolution of their packet switched networks. t. c. buttle technical correspondence corporate tech correspondence integration through intermediary system networks usage of online bibliographic databases has been hampered by the twin problems of (1) retrieval system complexity and (2) the heterogeneity among the many different systems and their hundreds of databases. these problems have generally limited operation of these systems to human expert information specialists who act as intermediaries for the "end users" who need the information in the online bibliographic databases. although convenient physical access to these disparate systems is now possible through telecommunications networks, there is still considerable difficulty of logical access in such areas as identification of suitable available systems and handling login protocols. the (partial) solutions to these problems of creating a single good system or standardizing among many systems tend to be very costly and/or difficult to implement. 
as another means for surmounting these problems, we have experimented with a series of increasingly more sophisticated computer intermediary systems \\--- under the generic name conit (for connector for networked information transfer) --- which attempt to allow computer-inexperienced end users to search these databases themselves. conit talks to the users in a common, easy- to-learn and easy-to-use language. it first aids the user identify databases appropriate for his problem. it then automatically connects to a system that has the given database and translates the user's request into an appropriate series of commands for that system. responses from the retrieval system are translated back to a common format for the user. conit also assists the user in reformulating his search strategy in order to find more relevant documents, or fewer irrelevant documents. (bibliographic searching generally differs from numerical data searching in its greater ambiguity and need for dynamic reformulation through interaction with the database.) conit interfaces users to the three main bibliographic retrieval systems: dialog, sdc orbit, and national library of medicine medline. in a recent series of experiments it was demonstrated that one version of conit succeeded in achieving retrieval effectiveness for end users who had not previously used any of the three retrieval systems which was as great in terms of numbers of relevant documents retrieved as that the same users could achieve working with human expert intermediaries on the same problems in the same retrieval systems. the online session time was longer for the end users on conit but the total person time was approximately the same for the two modes. those conit techniques that seem to be most important in achieving these results include: (1) a simple, common command-argument request language with english-like features (pure menus are considered too restrictive in this application and natural language too confusing for the user); (2) extensive cai for teaching the command language and for assisting users in search strategy reformulation; (3) automation of much of the "mechanics" usually required for searching: e.g., selection of and translation for the different retrieval systems (a "virtual system" concept is achieved); handling physical connection and login protocols; and remembering, clearing and regenerating retrieved sets as needed; (4) selection of only the basic, core retrieval functions for teaching to new users; and (5) a search strategy based on automatic extraction of keyword stems from a user-given natural-language topic phrase. current and future research center on making the computer intermediary a more truly "expert" assistant which is based on a developing quantitative model of indexing and retrieval for text-based information and which would: (1) incorporate a dialog-mediated mode into the command language mode; (2) assist users to develop a conceptual formalization of their problems which will aid both the computer and the user in search strategy formulation and reformulation; and (3) estimate the number of relevant documents found and missed so far and suggest specific search strategy reformulations including estimates of incremental costs and benefits. 
additional research questions include: (1) the proper emphasis and contexts for computer versus human directed control in mixed-initiative intermediary modes; (2) the appropriate distribution of intermediary functions among mainframe, mini, and micro hardware at retrieval system, network, and terminal sites (currently conit resides in a mainframe computer: mit multics); (3) the appropriate software structure for increasingly complex intermediaries (currently for conit we use a hybrid scheme where our own production rule interpreter is augmented by calls on pli compiler code); and (4) the appropriate structure for expert assistants as application areas are extended; e.g., a single integrated expert or a multiplicity of coordinated experts. richard s. marcus president's letter bryan kocher acm forum robert l. ashenhurst news track rosalie steier viewpoint brian reid seminar: safe concurrent programming in java with csp christopher h. nevison chikid voices: it's too bad they don't let you be a kid! allison druin news track rosalie steier software runaways - some surprising findings robert l. glass a compendium of multimedia tools for all budgets dean sanders janet hartman design representation in cad tools (session overview) design is an evolutionary process which transforms a set of requirements into a working, manufacturable product. to mechanically assist this process, a design must be represented in one or more formal languages which unambiguously convey the intent of the engineer to (and between) the design tools. this session addresses the relationship of computer aided design (cad) tools for digital systems and design representation. there are three papers in this session and each discusses a facet of the representation problem. drongowski. a qualitative theory for design representation is presented which provides an overview for the other papers. the discussion focuses on the importance of formal semantics for representation and problems in the translation from one representation to another. bammi. in this work, a behavioral hardware description language, isp', is extended to include structural information (e.g., the interconnection of registers, combinatorial logic, control, etc.). the application of the language to synthesis is discussed. iyengar. representation is an especially critical issue in the application of expert systems to cad. this paper explores the representation and rule base for an expert system that synthesizes combinational logic. this work is being applied to the development of a logic synthesis system called agent which takes a register transfer level description to a control graph and coarse grain datapath network. paul j. drongowski theory of computing: a scientific perspective (extended abstract) oded goldreich avi wigderson legally speaking rosalie steier standards work intensifies with creation of business teams george s. carson computing perspectives maurice v. wilkes legally speaking pamela samuelson authors rosalie steier acm forum robert l. ashenhurst authors rosalie steier a report on acm sigcse/sigcue '96 integrating technology into computer science education bonnie mitchell scott owen nan schaller president's letter bryan kocher president's letter bryan kocher workshop report: online communities: supporting sociability, designing usability peter j. 
wasilko legally speaking rosalie steier vice-president's letter dennis frailey news track rosalie steier reaction to nrc recommendations ralph crafts report on the financial status of acm sigchi clare-marie karat forging interactive experiences (panel session) (abstract only): a content perspective chris marrin rob myers linda hahner jim stewartson leonard daly acm forum robert l. ashenhurst acm forum robert l. ashenhurst news track rosalie steier viewpoint john mccarthy technical correspondence corporate tech correspondence personal computing larry press legally speaking rosalie steier chi ten year view: creating and sustaining common ground catherine r. marshall david novick acm forum peter j. jenning what makes a modeling and simulation professional?: the consensus view from one workshop ralph v. rogers a tribute to presper eckert maurice v. wilkes acm forum robert l. ashenhurst what's new?: talking with inventor bob olodort john gehl the chi '95 conference electronic publication: introduction to an experiment robert mack linn marks dave collins keith instone from the publisher open source developer day: a report on a series of panels held at the end of o'reilly's perl conference phil hughes authors rosalie steier news track rosalie steier authors rosalie steier professional chapters annual report every year, each director on the siggraph executive committee compiles an annual report for the annual siggraph organization report. to keep the overall report from becoming a book, the siggraph chair must pick highlights from each area. this issue's _computer graphics_ column contains my full annual report as presented to the siggraph executive committee. i hope that it helps to give you a better understanding of the structure of the professional chapters committee, the events we sponsor at the annual conference and what the chapters do over the course of a typical year. please keep in mind that this report is for the 1997-98 year so some information dates back to siggraph 97. as always, if you have any questions or comments, you can email me at lang@siggraph.org. scott lang collen cleary interacting with tvs: carnegie mellon design students win awards william hefley keyword index to titles authors rosalie steier web design & development '97: san francisco, california peter morville president's letter adele goldberg editorial pointers james maurer software portability annotated bibliography g. deshpande t. pearse p. oman information for authors diane crawford president's letter bryan kocher another view of software (panel session) before we can assess the roles and values of ai and the tools of logic in the domain of software we must be sure that we appreciate the nature of this domain. unfortunately there is a great deal of truth to the statement "software is to the computer as life is to the planet". thus we know that there are many kinds of software created in many kinds of ways and serving myriad purposes. we can appreciate some of the key issues of life by examining the human being but not, in a completely satisfactory way, by examining his tools. to understand software we must go beyond the techniques and methodologies we have so carefully and painfully crafted, e.g., high level languages, structured programming, data structures and types, semaphores, functional programming, etc. adding a few more tools will not change things very much and will probably not tell us much about why software is the way it is. 
our dissatisfaction with software surfaced with operating systems and their offspring. why? i think it was because they were created to provide an open set of services from a pool of loosely cooperating functions. furthermore the set of services and the pool of functions would not tolerate a bound in number, intricacy of communication, and efficiency of computer use. i believe that the word "software" obscures the issues that dominate our concerns and i choose the word "organithm" to identify that class of programs we study in software engineering. operating systems are the archetype organithms. paraphrasing bernal, an organithm is a partial, continuous, progressive, multiform, and conditionally interactive realization of the potentiality of human thought expressed as computer program. thought, being what it is, an organithm is a large collection of other perhaps more limiting programs held together by useful traffic patterns. while we may say that some of the parts are perfect for their purpose, the collection is never more than adequate and thus always in a state of evolution. supporting this evolution is the major goal of software engineering. put another way, the dynamics of organithm development set the locus of the concerns of software engineering. organithms model mental abstractions. as models they are approximations and never have enough state to serve as "uniform" approximants. mental abstractions thrive on deduction and induction to increase their set of accessible states and sooner or later, like the interpolants they are, they turn sour and must change, often in unanticipated ways, in order to remain of use. so it is with the organithms that model them: organithms are not maintained, they are reared. like living matter they are continual consumers of energy and continual producers of waste. the creation of an organithm implies the support and act of husbandry. thus research in the processes of organithm husbandry is, and has been, a vital concern of software engineering. since organithms compete in their own biosphere---the computer---improvement in performance is eternally sought and, as a result, most of an organithm's state is concerned with the management of its own internals: as an organithm develops, an ever increasing concern of its logic is its own internal management and significantly, a smaller fraction of its logic is devoted to serving its external utility. organithms tend to take advantage of their own purposes: as they develop, the internal functions of an organithm tend to maximize their use of the external functions the organithm exists to supply- this has been called closing the loop. often these internal uses intensify traffic to such a degree that organithms are created to manage the use of the external functions, e.g., a mail system must be capable of supporting organithms that generate and read mail. every external purpose of an organithm can be replaced by an organithmic surrogate. since organithms are models it takes little modification for them to model unanticipated abstractions. in the course of their development organithms suggest abstractions that are both valuable and that they can support. organithms fuel the expansion of abstract modeling as a technological activity. unlike life, in which reproduction dominates mutation, our organithms are so simple that mutation still dominates reproduction, but this will change. 
a major role of ai and logic is to help in the creation of organithms that increasingly extricate us from direct involvement in the internal growth of organithms. the major issue of software engineering environments is not how to ease our task of programming systems but how to accelerate the rate at which improvement in internal function can be obtained without unduly jeopardizing the external functions of an organithm and with maximizing the use of organithms to perform reorganization. insofar as ai and logic support autonomy in these activities of development and response will they be of value in the software enterprise. alan j. perlis vice-president's letter j. a. n. lee message from the chairman david e. siegel nifty assignments panel nick parlante owen astrachan mike clancy richard e. pattis julie zelenski stuart reges news track robert fox president's letter paul abrahams from washington rosalie steier in recognition of the 25th anniversary of computing reviews: selected reviews 1960 - 1984 jean e. sammet robert w. rector acm forum robert l. ashenhurst news track robert fox president's letter paul abrahams news track rosalie steier carto bof meets at siggraph 99 theresa-marie rhyne authors rosalie steier from washington rosalie steier news track rosalie steier information systems performance measurement and evaluation (session overview) the importance of performance measurement and evaluation has long been recognized in the field of computer science. it is used in the analysis of existing systems, to make projections on the performance of new or modified systems, and in the design and selection of new hardware and software. the tools and methods used in performance measurement and evaluation are varied. they include such things as timings, benchmarks, simulations, analytic modeling, and both hardware and software monitors. the ones used in a given instance depend upon the goals of the investigator and the system being studied. in "proposed automated information management at nasa: its performance measurement and evaluation" rebecca r. bogart presents performance measurements that are being considered for use in evaluating an information system that is still in the planning stages. this system will be an automated information system for a network that includes nasa headquarters and ten nasa centers. the paper describes the goals and objectives of this system and an evaluation technique that can be used to determine how well the goals and objectives are met. the primary tools proposed are software monitors embedded in the system which will gather system performance statistics. the use of user questionnaires is planned to augment the evaluation effort. john e. tolle used transaction log analysis and stochastic processes in the study described in his paper, "performance measurement and evaluation of online information systems." transaction logs, records of user commands and system responses, were gathered from several different online information systems and analyzed. this paper describes how desired data were obtained from these logs and how they were used in the stochastic processes. the primary objective of this study was to discover the extent to which online systems were used and to determine the patterns of user commands when conducting information searches. this information could then be used to determine how well the designs of the systems support the demands placed on them by the users. in "workload models for dbms performance evaluation" evans j. 
adams defines a hierarchy of workload models to support performance measurement and evaluation of database management systems. each successive layer in the hierarchy is characterized in progressively greater detail going from a user's view of the conceptual data model down to the underlying machine at the lowest level. he describes techniques for deriving the workload models at each level in the hierarchy and identifies, for each level, measurement parameters and performance metrics. this hierarchical model is proposed as a framework for constructing a dbms performance analyst's workbench and the incorporation of the workbench into future dbmss is suggested. j. wayne anderson conference review stuart lowry president's letter paul abrahams president's letter john r. white from washington rosalie steier forum: more on debugging diane crawford acm forum robert l. ashenhurst from washington rosalie steier field oriented design techniques: case studies and organizing dimensions this workshop, held at chi 95, focused on field research methods that allow us to incorporate a holistic understanding of users and their work and its context into the design process at the earliest stages. although these techniques hold great promise and have attracted attention and interest in the chi community, they have not been widely adopted or systematically discussed in the published literature. this workshop was developed to address this deficiency. we were fortunate in receiving a large set of position papers in response to our call for participation; we accepted 12 of about 20 papers submitted (and we both had a case to contribute, for a total of 14 cases).we planned the workshop activities to allow for maximum interaction among the participants. so that we could all arrive already acquainted with each others' studies, we circulated the participants' position papers ahead of time. our goals for the day-and-a- half workshop (extended by popular demand to include working lunches and extra time on the second day) were to develop a common understanding of the work done in each of the studies, to map the work done in each study to the stages of the design process where it fit best or had the greatest impact, and to come away with a systematic model of the terminology, methodology, and effectiveness of field research as represented in this set of cases. these goals were naturally larger than we could achieve in the short time allotted us, but the enthusiasm and dedication of the group has enabled us to continue to work on them since chi 95 and will yield a published collection of case studies and essays this fall. dennis wixon judy ramey editorial peter j. denning greetings from the guest editor r. s. heller key directions in parallelism and distribution in database system (panel session) umeshwar dayal forum diane crawford a database management system for the federal courts a judicial systems laboratory has been established and several large-scale information management systems projects have been undertaken within the federal judicial center in washington, d.c. the newness of the court application area, together with the experimental nature of the initial prototypes, required that the system building tools be as flexible and efficient as possible for effective software design and development. the size of the databases, the expected transaction volumes, and the long-term value of the court records required a data manipulation system capable of providing high performance and integrity. 
the resulting design criteria, the programming capabilities developed, and their use in system construction are described herein. this database programming facility has been especially designed as a technical management tool for the database administrator, while providing the applications programmer with a flexible database software interface for high productivity. specifically, a network-type database management system using sail as the data manipulation host language is described. generic data manipulation verb formats using sail's macro facilities and dynamic data structuring facilities allowing in-core database representations have been developed to achieve a level of flexibility not usually attained in conventional database systems. jack r. buchanan richard d. fennell forum diane crawford authors rosalie steier book preview: the design of computer supported cooperative work and groupware systems d. shapiro m. tauber r. traunmuller news track rosalie steier distributed file access (session overview) distributed file access is a key aspect of distributed system design. the ease and the efficiency with which files on remote machines can be accessed has a very pervasive influence on the overall success and acceptance of a distributed system. careful design is required to achieve these goals. techniques for efficient remote file access are therefore an area of active research. the papers in this session provide quite a diverse sampling of recent efforts in the construction and the evaluation of distributed file systems. the paper by david cheriton and paul roy describes the design and the performance of a file server supporting a collection of diskless workstations connected by a local area network. a much more loosely coupled environment is described by peter weinberger: his paper discusses the integration of a number of independently administered unix machines into a network-wide file system. finally, paul leach presents an evaluation of a commercial system based on a network-wide single level store. willy zwaenepoel sphere packings and generative programming in 1998, the oldest problem in discrete geometry, the 400-year-old kepler conjecture, was solved. the conjecture asserts that the familiar cannonball packing of balls achieves the greatest density of any possible packing. the proof of the conjecture was unusually long, requiring nearly 300 pages of careful reasoning, 3 gigabytes of stored data, and 40,000 lines of specialized computer code. the computer verifications required for the proof were carried out over a period of years. this lecture will propose a new, vastly simplified, intuitive solution of the kepler conjecture based on ideas from the field of generative programming. thomas c. hales from the guest editor: introduction to the history project carl machover authors rosalie steier acm forum robert l. ashenhurst acm forum robert l. ashenhurst index president's letter paul abrahams on the foundations of the universal relation model the universal relation model aims at achieving complete access-path independence in relational databases by relieving the user of the need for logical navigation among relations. we clarify the assumptions underlying it and explore the approaches suggested for implementing it. the essential idea of the universal relation model is that access paths are embedded in attribute names. thus attribute names must play unique "roles." furthermore, it assumes that for every set of attributes there is a basic relationship that the user has in mind. the user's queries refer to these basic relationships rather than to the underlying database. two fundamentally different approaches to the universal relation model have been taken. according to the first approach, the user's view of the database is a universal relation or many universal relations, about which the user poses queries. the second approach sees the model as having query-processing capabilities that relieve the user of the need to specify the logical access path. thus, while the first approach gives a denotational semantics to query answering, the second approach gives it an operational semantics. we investigate the relationship between these two approaches. david maier jeffrey d. ullman moshe y. vardi 
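to make the access-path-independence idea above concrete, here is a minimal sketch (not taken from the paper) of the first approach: the query names attributes only, and a toy query processor picks the base relations that mention them, natural-joins them, and projects the answer. the relation names, contents, and join strategy are invented for illustration.

```python
# toy universal-relation query: the user mentions attributes, never join paths.
# relations are lists of dict-tuples; names and data are made up for the example.

def natural_join(r, s):
    """natural join of two relations on their shared attributes."""
    out = []
    for t in r:
        for u in s:
            shared = set(t) & set(u)
            if all(t[a] == u[a] for a in shared):
                out.append({**t, **u})
    return out

def query(attrs, relations):
    """answer a query over `attrs` without the user naming any access path."""
    relevant = [r for r in relations if r and set(r[0]) & set(attrs)]
    result = relevant[0]
    for r in relevant[1:]:
        result = natural_join(result, r)
    rows = {tuple(t[a] for a in attrs) for t in result}   # project and deduplicate
    return [dict(zip(attrs, row)) for row in sorted(rows)]

emp  = [{"emp": "ann", "dept": "toys"}, {"emp": "bob", "dept": "books"}]
dept = [{"dept": "toys", "mgr": "eve"}, {"dept": "books", "mgr": "dan"}]

# "which manager does each employee work for?" -- no join is spelled out.
print(query(["emp", "mgr"], [emp, dept]))
# [{'emp': 'ann', 'mgr': 'eve'}, {'emp': 'bob', 'mgr': 'dan'}]
```

a real system would also have to choose a lossless join path among the base relations; this sketch simply joins every relation that mentions a query attribute.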
notes about the authors acm forum robert l. ashenhurst implications of certain assumptions in database performance evaluation the assumptions of uniformity and independence of attribute values in a file, uniformity of queries, constant number of records per block, and random placement of qualifying records among the blocks of a file are frequently used in database performance evaluation studies. in this paper we show that these assumptions often result in predicting only an upper bound of the expected system cost. we then discuss the implications of nonrandom placement, nonuniformity, and dependencies of attribute values on database design and database performance evaluation. s. christodoulakis simulation in research and research in simulation: a telecommunications perspective ward whitt legally speaking rosalie steier forum: the power of interactive computing diane crawford acm forum robert l. ashenhurst editorial peter j. denning acm forum robert l. ashenhurst news track robert fox acm forum robert l. ashenhurst news track rosalie steier an interview with diane darrow steven pemberton domain names (panel session, abstract only): hierarchy in need of organization the darpa domain name project is an attempt to move from a centralized naming authority to a distributed one. in essence, the domain system computes a single binding from a character string known as a host name to another string known as the host address (a 32-bit integer). besides the syntactic changes in names, the chief difference between the old-style names and domain names is that in the old system, a database containing the name-to-address bindings was distributed to each host where bindings were performed locally, while in the domain system computation of the name-to-address binding uses the internet to search the hierarchy of domains until a server is found that can perform the binding. unfortunately, to make the system efficient, servers cache bindings they look up, and clients begin a search by contacting the nearest "leaf" server to avoid searching the hierarchy. if sufficient memory is available, the cache at a server corresponds exactly to old-style host tables. the domain scheme raises other questions. in principle, the hierarchical structure of domain names permits easy delegation of naming authority because it allows names within independent subtrees to be assigned by independent organizations. in practice, however, the hierarchical scheme collects together organizations under fixed top-level domains. because extremely small organizations cannot afford to maintain a domain name server, they must be supported by (and obey the authority of) a domain server for the subhierarchy under which they lie. a hierarchical organization of names also imposes restrictions on the use of the system. for example, even though domain name servers contain (name, address) pairs, it is impossible to locate the desired binding given only the address. douglas e. comer 
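as a concrete (and deliberately simplified) illustration of the binding computation described above, here is a sketch of a leaf-first lookup with per-server caching; the zone names, hosts, addresses, and the upward-only search are invented for the sketch and do not model real dns delegation.

```python
# toy name-to-address binding: ask the nearest "leaf" server first; each server
# caches what it learns, so a warm cache behaves like an old-style host table.
# hosts, addresses, and the shape of the hierarchy are invented for this sketch.

AUTHORITATIVE = {
    "host1.cs.example.edu": "10.0.0.1",
    "mail.example.com": "10.0.1.7",
}

class NameServer:
    def __init__(self, zone, parent=None):
        self.zone = zone          # "" for the root
        self.parent = parent
        self.cache = {}

    def resolve(self, name):
        if name in self.cache:                    # cached binding: no search
            return self.cache[name]
        if name.endswith(self.zone) and name in AUTHORITATIVE:
            addr = AUTHORITATIVE[name]            # this server knows the answer
        elif self.parent is not None:
            addr = self.parent.resolve(name)      # otherwise search the hierarchy
        else:
            raise KeyError(name)
        self.cache[name] = addr                   # remember it for next time
        return addr

root = NameServer("")
edu = NameServer("edu", parent=root)
leaf = NameServer("cs.example.edu", parent=edu)   # the server a client would ask

print(leaf.resolve("host1.cs.example.edu"))       # 10.0.0.1, answered locally
print(leaf.resolve("mail.example.com"))           # found via the root, then cached
```

real resolution delegates downward from the root with referrals; the upward walk here is just the shortest way to show caching plus hierarchy in one place.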
virtual identities in computer mediated communication ioannis paniaras military keynote address and panel discussion: military simulation and modeling - today - tomorrow joseph j. redden concurrency control in a dynamic search structure a design of a data structure and efficient algorithms for concurrent manipulations of a dynamic search structure by independent user processes is presented in this paper. the algorithms include updating data, inserting new elements, and deleting elements. the algorithms support a high level of concurrency. each of the operations listed above requires only a constant amount of locking. in order to make the system even more efficient for the user processes, maintenance processes are introduced. the maintenance processes operate independently in the background to reorganize the data structure and "clean up" after the (more urgent) user processes. a proof of correctness of the algorithms is given and some experimental results and extensions are examined. udi manber richard e. ladner about this issue anthony i. wasserman acm forum robert l. ashenhurst news track rosalie steier conferences marissa campbell acm forum robert l. ashenhurst president's letter paul abrahams acm forum robert l. ashenhurst an interview with douglas adams gordon cameron acm forum robert l. ashenhurst acm forum jon bentley authors rosalie steier perspectives on visual explanations nancy allen empirical research in software engineering: a workshop susan s. brilliant john c. knight editorial peter j. denning author index author index president's letter john r. white news track rosalie steier subwavelength lithography (panel): how will it affect your design flow? andrew b. kahng y. c. pati warren grobman robert pack lance glasser explore the fascinating fusion of computer vision and computer graphics gordon cameron technical correspondence corporate tech correspondence technical correspondence corporate tech correspondence authors rosalie steier from washington diane crawford authors rosalie steier authors rosalie steier web sites ken korman analysis of dynamic hashing with deferred splitting dynamic hashing with deferred splitting is a file organization scheme which increases storage utilization, as compared to standard dynamic hashing. in this scheme, splitting of a bucket is deferred if the bucket is full but its brother can accommodate new records. the performance of the scheme is analyzed. in a typical case the expected storage utilization increases from 69 to 76 percent. eugene veklerov 
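the deferral rule above is easy to state in code; the following is a schematic sketch under assumed simplifications (buckets addressed by binary hash prefixes, an invented capacity of four, and a brother defined as the bucket whose prefix differs in the last bit), not the file organization analyzed in the paper.

```python
# schematic deferred splitting: when a bucket overflows, first try its brother;
# split only when both are full. capacity, hashing, and layout are assumptions.

CAPACITY = 4

def prefix_of(key, depth):
    """first `depth` bits of a 32-bit hash of the key."""
    return format(hash(key) & 0xFFFFFFFF, "032b")[:depth]

class Bucket:
    def __init__(self, prefix):
        self.prefix = prefix
        self.keys = []

class DeferredHashFile:
    def __init__(self):
        self.buckets = {"0": Bucket("0"), "1": Bucket("1")}

    def _home(self, key):
        """the leaf bucket whose prefix matches the key's hash."""
        bits = prefix_of(key, 32)
        for depth in range(1, 33):
            if bits[:depth] in self.buckets:
                return self.buckets[bits[:depth]]
        raise RuntimeError("no bucket for key")

    def _brother(self, bucket):
        flipped = bucket.prefix[:-1] + ("1" if bucket.prefix[-1] == "0" else "0")
        return self.buckets.get(flipped)

    def insert(self, key):
        home = self._home(key)
        if len(home.keys) < CAPACITY:
            home.keys.append(key)
            return
        brother = self._brother(home)
        if brother is not None and len(brother.keys) < CAPACITY:
            brother.keys.append(key)      # defer the split: borrow brother's space
            return
        self._split(home)                 # both full: split now and retry
        self.insert(key)

    def contains(self, key):
        home = self._home(key)
        brother = self._brother(home)
        return key in home.keys or (brother is not None and key in brother.keys)

    def _split(self, bucket):
        depth = len(bucket.prefix)
        moved = list(bucket.keys)
        brother = self._brother(bucket)
        if brother is not None:           # reclaim keys parked in the brother
            parked = [k for k in brother.keys if prefix_of(k, depth) == bucket.prefix]
            brother.keys = [k for k in brother.keys if k not in parked]
            moved += parked
        del self.buckets[bucket.prefix]
        for bit in "01":
            self.buckets[bucket.prefix + bit] = Bucket(bucket.prefix + bit)
        for k in moved:                   # redistribute through normal insertion
            self.insert(k)

keys = [(i * 2654435761) % 2**32 for i in range(20)]   # spread-out demo keys
f = DeferredHashFile()
for k in keys:
    f.insert(k)
assert all(f.contains(k) for k in keys)
```

the point of the deferral is visible in the storage accounting: a bucket and its brother are both filled before either is split, which is where the higher utilization reported above comes from.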
president's letter bryan kocher datapac (panel session, abstract only) packet switched networks are well accepted and provide cost-effective data communications for a wide range of applications. this paper will examine the growth of packet switching by profiling telecom canada's packet switched network, datapac. it will focus on the following: the emergence of applications by new users of data communications, development of new applications by existing users, migration to packet switching networks from dedicated leased line networks, the development and introduction of new service features to continue the phenomenal growth rates achieved in earlier years. a. dobson news track robert fox author index author index authors rosalie steier new siggraph conference fee structure scott owen parasitic extraction accuracy; how much is enough? (panel) paul franzon mark basel aki fujimara sharad mehrotra ron preston robin c. sarma marty walker authors rosalie steier news track robert fox the chi 96 basic research symposium alan dix francesmary modugno president's letter bryan kocher pioneering artists joan truckenbrod icse 2000 opportunity carlo ghezzi authors rosalie steier acm forum robert l. ashenhurst acm forum robert l. ashenhurst news track rosalie steier some thoughts on running a conference barbara simons computer graphics laboratories robert j. mcdermott dateline: san antonio, monday, june 5, 2000 adrian miles president's letter bryan kocher oliver strimpel engineering education (panel): trends and needs s. w. director j. allen j. duley book reviews lynellen d. s. perry erika orrick acm forum robert l. ashenhurst perspectives in computing: a magazine for the academic community donald t. sanders on code reuse david lorge parnas authors rosalie steier what's happening marisa campbell president's letter john r. white authors rosalie steier acm forum robert l. ashenhurst author index w. bruce croft remote procedure calls versus problem-oriented protocols (panel session, title only) l. j. miller p. leach a. freier r. lyon f. parr acm forum robert l. ashenhurst president's letter bryan kocher sigsoft policy statement for conferences and workshops sponsored by sigsoft will tracz design and use of muds for serious purposes: report from a workshop at the cscw conference in boston, 16th november '96 christer garbis yvonne wærn news track rosalie steier book reviews karen t. sutherland editorial peter j. denning viewpoint daniel d. mccracken nsf report: theory of computing program zeke zalcstein authors rosalie steier authors rosalie steier forum diane crawford authors rosalie steier introduction: mid-summer 2000 kick-off lynellen d. s. perry jessica ledbetter technical correspondence corporate tech correspondence widening the net: workshop report on the theory and practice of physical and network communities steve whittaker ellen isaacs vicki o'day design: building vocal space: expressing identity in the radically collaborative workplace kate ehrlich austin henderson on site: global perceptions of is journals nikolaos a. mylonopoulos vasilis theoharakis erratum: "two heads are better than two tapes" tao jiang joel i. seiferas paul m. b. vitanyi what's happening: call for entries: the liberty mutual prize in ergonomics and occupational safety jennifer bruer president's letter paul abrahams how we made siggraph 98 walt bransford acm forum robert l. ashenhurst authors rosalie steier editorial peter j. denning president's letter bryan kocher technology for managing large packet switching networks (panel session, abstract only) the evolution of packet switching network technology over the past ten years has been strongly influenced by the dramatic growth of these networks. the ability to control and manage large networks has required many innovations and has been most successful where the growing network itself has provided the mechanisms needed for network management. this discussion will cover the increasing use of decentralization and parallel operation of functions required to monitor the network, to remotely test and maintain subscriber facilities, to collect billing and statistical data, and to distribute and install new software. specific problems which require special attention in large network management will be covered; these include time synchronization, routing optimization, and network security. 
the sl-10 packet switching system from northern telecom is referenced throughout this discussion as an example of a system which has built on the facilities and technology of the network switching equipment to provide the management function. since all of the attributes of capacity, availability, reliable communication, configuration flexibility, and security are the same for both packet switching subscribers and network management, using a common base provides real advantages. the discussion will show how the exclusive use of network terminal interfaces, virtual circuits, and x.25 host interfaces for all management data transfer functions provides the required function with virtually unlimited growth potential. support of both packet switching and management functions on the same base of processor, storage devices, and software, further extends the benefits of commonality. d. jeannes report on building and using test collections panel, sigir 1996 donna harman technical correspondence corporate tech correspondence viewpoint rosalie steier interaction in 3d graphics karen sullivan authors rosalie steier legally speaking pamela samuelson robert j. glushko president's letter john r. white editorial peter j. denning viewpoint don libes elliott i. organick (1925-1985) gary lindstrom announcing emacspeak-97 t. v. raman viewpoint paul saffo authors rosalie steier syntactic analysis and design environments (session overview) gary e. swinson authors rosalie steier viewpoint alfred bork acm forum robert l. ashenhurst selections from the siggraph 96 educator's program rosalee wolfe technical correspondence corporate tech correspondence computer graphics are brittle with the entertainment industry focus of this issue, i decided to ask a selection of artists for their thoughts on the broad (and deliberately vague) theme: art, computer graphics and entertainment. the authors come from a variety of artistic backgrounds, each with their own unique perspective on computer graphics. the results constitute this month's artist's view column. i hope you find these collective musings as stimulating and thought-provoking as i did. iara lee and taylor dupree's invited visual contribution appears on the back cover of this issue. many thanks to those who agreed to participate with very short notice, and to the many people who helped out with suggestions. bruce sterling forum: letters robert l. ashenhurst hci in australasia: towards interact'97 rachel croft susan wolfe extended terms for s/g officers: an editorial g. w. gorline forums for mis scholars bill c. hardgrave kent a. walstrom list of authors plunge da: a case study clive rosen authors rosalie steier author index acm forum robert l. ashenhurst there's no place like home: the doors of perception 2 conference, amsterdam, the netherlands, nov. 3 - 5, 1994 shannon ford measurement, management and optimization (session overview): session overview the abundance of literature devoted to the topics of measurement, management and optimization of digital systems is ample evidence of their vital importance in computing research. the papers in this session treat the issues of processor utilization measurement and file partitioning to enhance search performance. richard w. moulton presents a method for the characterization of processor utilization. the approach is designed to be self-calibrating, thereby providing independence from the many variables that have an impact on processor performance. 
an implementation of the method in a real-time control system is also described. on a slightly different note, yuan y. sung discusses the issue of distributing a file across a set of devices to minimize the query response time. a restriction of this general problem is defined and an accompanying distribution method is presented. this method is analyzed with respect to query resolution performance and contrasted with several other approaches. randy michelson acm forum robert l. ashenhurst news track rosalie steier conference preview: avi '98 and apchi '98 jennifer bruer sigchi needs you! robert mack allison druin david riederman jean scholtz cathleen wharton sigchi annual report mike atwood a whiter shade of pale mike milne from washington rosalie steier president's letter bryan kocher reprints from computing reviews joel seiferas president's letter bryan kocher bruce h. bruemmer news track rosalie steier news track rosalie steier acm forum james maurer the importance of detail maria g. wadlow knowledge management column mark jones president's letter robert aiken conference preview: iui 2001 marisa campbell authors rosalie steier personal computing larry press news track rosalie steier memex and beyond rosemary michelle simpson toward an informational model of the organization: communication transmission metrics john h. lundin lawrence l. schkade chi 97, looking to the future: a conference preview peter stevens forecasting the next 50 years in information technology varun grover integrated services digital network (isdn, panel) d. clark s. newman d. eigen a. clark j. mackie j. kaufeld acm forum robert l. ashenhurst 9th workshop on institutionalizing software reuse (wisr '9) workshop summary what has the field of reuse accomplished? where can we declare success? where should we take the blame for failure? the invitees to wisr recently met in austin, texas to address these questions and to help determine the direction of future reuse research and practice. jeffrey s. poulin don batory larry latour technical correspondence corporate tech correspondence standards for multimedia, accessibility, and the information infrastructure harry e. blanchard acm forum robert l. ashenhurst authors rosalie steier the annual siggraph conference: 25 years of leadership in computer graphics and interactive techniques steve cunningham invitation to ssr mehdi jazayeri acm forum robert l. ashenhurst acm forum robert l. ashenhurst conferences marisa campbell president's letter adele goldberg president's letter paul abrahams legally speaking rosalie steier visual meaning: commentaries on the continuing influence of edward r. tufte robert r. johnson workshop report: 4th hypertext writers' workshop: the messenger morphs the media deena larsen how does your job fit? lynellen d. s. perry a brief summary of ohs6 - the 6th workshop on open hypermedia systems siegfried reich forum diane crawford authors rosalie steier viewpoint rosalie steier president's letter paul abrahams high performance clusters (abstract): state of the art and challenges ahead david e. 
culler authors rosalie steier editorial pointers diane crawford from the guest editor bill buxton president's letter paul abrahams from washington rosalie steier the role of logic in software enterprise (panel paper) the history of advances in programming - the little that there is of it - is the history of successful formalisation: by inventing and studying formalism, by extracting rigorous procedures, we progressed from programming in machine code to programming in high level languages (hlls). note that before hlls were sufficiently formalised compilers used to be unreliable and expensive to use, programs could not be ported between machines, and the very use of hlls was more or less restricted to academia. observe also that if today a piece of software is still difficult to port it is almost invariably due to a deviation from strict formality. for many application domains hlls provide an acceptable linguistic level for program (and system) specification. one does not hear horror stories about computer applications in such domains (physics, astronomy). the expert views of such domains, their descriptive theories, can be easily expressed on the linguistic level of hll, thus becoming prescriptive theories (specifications) for computer software. if it is desired to use computers reliably in domains where this is not the case, then we must try to narrow the gap between the linguistic level on which domain-descriptive theories are formulated and the linguistic level on which programs are prescribed (specified). the programs, as ultimately run, are always formal objects. a rigorous, calculably verifiable relationship may be defined only between formal systems (scientific theories of nature are not calculably verified; they are tested by experiments!). therefore, if we want the programs to reliably satisfy their specifications, the latter must be formal objects too. or, put it differently: from a natural domain to a formal system there cannot lead an unbroken chain consisting of calculably verified steps only. there must be an interface at which the informality of a natural domain (or its description) comes into contact with the formality of a computer program (or its specification). such a watershed interface can be avoided only if the application domain is itself formal (e.g. mathematical). the problem statement at the watershed interface is crucial for the eventual success of any computer application: if it is formal, we stand a chance of developing and implementing a provably correct solution, but we cannot prove its validity in application domain. the informality of problem statement does not help a bit: it is still impossible to prove the validity and we cannot guarantee correctness of eventually implemented software. taking the lesser of two evils, we choose a formal problem statement and call it the specification. basically, there are two policies with respect to selection of the watershed linguistic level. they can be roughly characterised by the size and structure of the extra logic component (ec) of that level. we may opt for as big an ec as we can handle in our formal system, or for an ec not larger than necessary to validate the problem statement in application domain. 
the large ec policy corresponds to axiomatisation of individual elementary observations (as exemplified by horn clause and other data-based specifications); the small ec, to axiomatisation of 'laws' abstracted from and validated by elementary observations within the application domain (as illustrated by physical sciences). notice, however, that the choice of ec is almost entirely domain- dependent, determined by the availability (or otherwise) of well-tested developed theories for each individual domain. invention, formulation and validation of domain-descriptive theories is, of course, the main activity of scientifically-minded experts. nothing in a software engineer's education entitles him to claim the necessary knowledge and experience. similarly, no results of computing science entitle computing scientists to claim that theory-forming research in application domains can be reduced to collecting elementary observations. and logic does not recognise the notion of intra-domain validity. logic - a calculus of formal systems - plays an important role in software development from specification to implementation. it also can play a minor role in validation of specifications, insofar as the validation may be seen to include derivation of consequences. that logic plays no discernible role in descriptive theory formation - an act always serendipitous and contingent on invention - should not worry us too much, as long as we do not wish to claim the possession of the philosopher's stone. w. m. turski from washington diane crawford software complexity, program synthesis, and data flow analysis (session overview): session overview the papers in this session address three different themes - software complexity measurement, program synthesis, and data flow analysis. research results in each area should have significant impact in the software industry. kearney, et al's paper describes the difficult problems that must be solved by software complexity measurement research. researchers in this area are looking for the structural properties of software that make programs difficult to understand and debug. much software measures research has been plagued with theoretical and methodological problems which the paper describes. because software complexity measurement research results promise to benefit the software industry through lower development and maintenance costs, the software industry is often too ready to apply measures that are still of questionable value. as the paper points out, users of complexity measures should be aware of the current limitations. james m bieman some research issues in the field of cscw volker wulf news track rosalie steier technical correspondence corporate tech correspondence icse 97 program update richard taylor technical correspondence corporate tech correspondence editorial pointers james maurer from washington diane crawford best of technical support corporate linux journal staff authors rosalie steier information system modeling and management (session overview) as information systems and their applications become more complex, automated tools and techniques for modeling and managing such information system environments become more critically needed. this session deals with tools and techniques for modeling and management of information systems. horndasch and studer propose a formal model for office systems analysis and modeling in their paper "thm-net: an approach to office systems modeling". 
they use concepts from the temporal-hierarchic data model (thm) for semantic data modeling together with petri nets for modeling the parallel, asynchronous aspects of office systems. the paper introduces the temporal-hierarchic data model and defines thm-net with examples from the ifip working conference organization problem. various thm-net and thm data scheme concepts are illustrated by modeling part of the ifip problem. in the paper "xcp: an experimental tool for managing cooperative activity," sluizer and cashman present an experimental coordinator tool, xcp, which allows organizations to develop, maintain and implement plans of cooperative activity, called protocols. its goals are to reduce the costs of communication, coordination and decision by executing formal plans of cooperative activity in partnership with its users, and to aid new staff to learn a procedure. the paper describes the tool and its architecture. an annotated example is shown, and the status of the xcp project is given. in the paper by kao, "an automated scheduling system for project management," he presents a scheduling tool, the flight operations planning schedule system (fopss), used by the scheduling team of the johnson space center of nasa to support the space shuttle flight operations. the paper describes the fopss software, design goals and considerations, some special features and possible enhancement of fopss. frank y. chum technical correspondence corporate tech correspondence infrastructure issues related to theory of computing research faith fich letter from the chair jeffrey m. bradshaw editorial steven pemberton from washington diane crawford trip report for the seventh acm conference on hypertext marjorie c. luesebrink esec97/fse5 invitation helmut schauer nsf news john c. cherniavsky authors index hypertext and the www: a view from the trenches at hypertext'96 michael bieber john b. smith president's letter paul abrahams from washington diane crawford author index news track robert fox news track robert fox authors rosalie steier president's letter bryan kocher report on pods'97 z. meral ozsoyoglu annotated bibliography of the proceedings of the annual simulation symposium (1968-1991) ross a. gagliano martin d. fraser authors rosalie steier programming pearls jon bentley news track robert fox cscl report: human computer interaction and educational tools (hci-et) conference report: may 18 - 27, 1997 sozopol, bulgaria margaret m. mcmanus authoring multi-user virtual spaces jim larson solving growth problems in a rapidly expanding pdn (panel session, abstract only) tymnet's public data network has been expanding in size and revenues at about a 35% per year rate since its inception. maintaining this growth has put more strain on the organization than on the technology. i will discuss several dimensions of the problem of managing prolonged rapid growth, including capacity management (bandwidth, switching, interfaces, and floor-space), topological complexity, job specialization, order processing, forecasting, and deploying new versions of code. the topic selection will favor issues that are likely to be common to other operations and the discussion will be biased toward solutions and pro-active management. stephen c. poppe president's letter adele goldberg acm forum robert l. ashenhurst editorial steven pemberton reflections steven cherry where have all the women gone? (panel session) virginia eaton sharon bell nell dale susie gallagher helen gigley cindy hanchey news track rosalie steier acm forum robert l. 
ashenhurst presentation presentation is intended to encompass notations and languages for expressing models. this session will focus on the linguistic and notational choices made in particular approaches. emphasis will be placed on common ideas. for example, there have been some assertions from proponents of the predicate calculus that it is a notation that is capable of expressing essentially all the interesting and important concepts that are encountered in other notations. emphasis will also be placed on why the design choices were made, why things are being represented a certain way, and what the effects of those choices were. the purpose of this workshop is to try to bring us all closer together. trying to bring us to a common terminology is something i don't have much hope for at this point. however, getting us to recognize in other areas problems and solutions we have encountered in our own work is something that i think is very possible, and i think it has happened already to a certain extent. i see that as a goal of this session. introduction jerry l. archibald mark c. wilkes editorial: a bill of rights and responsibilities joe halpern acm forum robert l. ashenhurst president's letter paul abrahams acm forum robert l. ashenhurst viewpoint gary chapman gathering and disseminating good practice at teaching and learning conferences ian utting president's letter bryan kocher workshop report: information doors --- where information search and hypertext link einat amitay letter from the chair jeffrey m. bradshaw news track rosalie steier join and semijoin algorithms for a multiprocessor database machine this paper presents and analyzes algorithms for computing joins and semijoins of relations in a multiprocessor database machine. first, a model of the multiprocessor architecture is described, incorporating parameters defining i/o, cpu, and message transmission times that permit calculation of the execution times of these algorithms. then, three join algorithms are presented and compared. it is shown that, for a given configuration, each algorithm has an application domain defined by the characteristics of the operand and result relations. since a semijoin operator is useful for decreasing i/o and transmission times in a multiprocessor system, we present and compare two equi-semijoin algorithms and one non-equi-semijoin algorithm. the execution times of these algorithms are generally linearly proportional to the size of the operand and result relations, and inversely proportional to the number of processors. we then compare a method which consists of joining two relations to a method whereby one joins their semijoins. finally, it is shown that the latter method, using semijoins, is generally better. the various algorithms presented are implemented in the sabre database system; an evaluation model selects the best algorithm for performing a join according to the results presented here. a first version of the sabre system is currently operational at inria. patrick valduriez georges gardarin 
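the reason the semijoin-first method tends to win is easy to see in a minimal sketch (not the sabre algorithms themselves): reducing a relation by a semijoin before shipping it to the join site cuts the transmitted data without changing the answer. the relation contents and join column below are invented for the example.

```python
# r semijoin s keeps only the r-tuples that will actually find a partner in s,
# so less data has to be moved before the final join. data here is made up.

def semijoin(r, s, attr):
    keys = {u[attr] for u in s}
    return [t for t in r if t[attr] in keys]

def join(r, s, attr):
    index = {}
    for u in s:
        index.setdefault(u[attr], []).append(u)
    return [{**t, **u} for t in r for u in index.get(t[attr], [])]

orders = [{"cust": c, "item": i} for c, i in [(1, "a"), (2, "b"), (3, "c"), (4, "d")]]
customers = [{"cust": 1, "city": "lyon"}, {"cust": 3, "city": "nice"}]

# method 1: ship all of `orders` to the site holding `customers`, then join.
full = join(orders, customers, "cust")

# method 2: ship only the join column, semijoin-reduce, then ship the survivors.
reduced = semijoin(orders, customers, "cust")      # 2 tuples instead of 4
assert join(reduced, customers, "cust") == full    # same answer, less traffic
```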
upa and chi surveys on usability processes thyra rauch tom wilson operational experience with packet switching networks (panel session, title only) k. joseph a. dobson g. d. white d. jeannes r. stubbs s. poppe d. dingus viewpoint john shore from washington diane crawford information system performance measurement - revisited w. david penniman hermeneutic computer science dave west editorial james maurer fse 96 special session: novel notions, wild ideas, and fun flames david notkin acm forum robert l. ashenhurst parallel algorithms column: on the search for suitable models yossi matias authors rosalie steier natural language querying (session overview) the interface between a user and a software system should match the needs and background of the user. many query languages are built to aid clerks and managers in the performance of their jobs. therefore, a good query language would require no low level programming skills of its users. usually very high level nonprocedural languages are used because they eliminate low level artifacts such as loops. an appealing high level language would be the user's natural language. natural language should require no training or refresher courses for successful use. unfortunately, natural language interfaces have their own problems. ambiguity is a major problem and can arise in a variety of ways. ambiguous modifiers (e.g., "list the growers of orange trees.") and ambiguous pronoun references ("who was doug flutie's high school coach? also, what are his statistics?") are just two examples. a casual user of the system may also assume a shared background with the system. the doug flutie query assumes the system knows that flutie is a football player and that his (or his coach's) statistics refer to football achievement, not academic, physical, or other measures. resolving these and other problems requires an extremely complex and, as yet, unachieved interactive interface. this complexity can be reduced if we settle for something less than unrestricted natural language. one simplification is to restrict the domain of discourse to the specific database being queried. other possible restrictions would be on the allowable form of a query, vocabulary, pronoun references, etc. many questions arise about restricted natural language query systems. some of the possible questions are: how do various types of restrictions affect use of a natural language query system? how can we combine the strengths of restricted natural language, menus and formal language in an integrated query system? how does a restricted natural language query system perform in the "real world" compared to a formal language query system? the three papers in this session address themselves to these questions, respectively. charles welty book preview jennifer bruer technical correspondence corporate tech correspondence current research directions in software engineering for parallel and distributed systems innes jelly ian gorton editorial peter l. denning what's happening marisa campbell acm president's letters adele goldberg president's letter bryan kocher sigsoft'96 post mortem david garlan viewpoint lynn arthur steen forum diane crawford authors rosalie steier minutes of the 1996 acm sigir business meeting david d. lewis news track rosalie steier report from basic research symposium at chi 99 yvonne wærn john mcgrew taking advantage of national science foundation funding opportunities: part 2: mock panels (panel session) attendees will participate in a mock review panel using actual funded proposals in order to get a deeper understanding of the process. the goal is to help prospective proposers develop their ideas and put them into a form that maximizes the competitiveness of their proposals. andrew bernat harriet taylor announcements amruth kumar database states and their tableaux recent work considers a database state to satisfy a set of dependencies if there exists a satisfying universal relation whose projections contain each of the relations in the state. such relations are called weak instances for the state. we propose the set of all weak instances for a state as an embodiment of the information represented by the state. we characterize states that have the same set of weak instances by the equivalence of their associated tableaux. we apply this notion to the comparison of database schemes and characterize all pairs of schemes such that for every legal state of one of them there exists an equivalent legal state of the other one. we use this approach to provide a new characterization of boyce-codd normal form relation schemes. alberto o. mendelzon
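as an illustrative aside on the weak-instance notion summarized above, a minimal sketch in python (hypothetical schemes and tuples, not mendelzon's tableau construction): it checks whether a candidate universal relation is a weak instance for a two-relation state by projecting it onto each scheme and testing containment.

```python
def project(universal, attrs):
    """project a universal relation (a list of dicts keyed by attribute) onto attrs."""
    return {tuple(row[a] for a in attrs) for row in universal}

def is_weak_instance(universal, state):
    """true if the projection of `universal` onto every scheme of `state`
    contains (not necessarily equals) that scheme's relation."""
    return all(relation <= project(universal, attrs)
               for attrs, relation in state.items())

# hypothetical state with two relation schemes (a, b) and (b, c)
state = {
    ("a", "b"): {("a1", "b1")},
    ("b", "c"): {("b1", "c1"), ("b2", "c2")},
}
universal = [
    {"a": "a1", "b": "b1", "c": "c1"},
    {"a": "a2", "b": "b2", "c": "c2"},
]
print(is_weak_instance(universal, state))   # True: every relation appears in a projection
```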
from washington rosalie steier report: workshop on object-oriented re-engineering (woor'97) serge demeyer harald hall distributed database/file systems (session overview): introduction during the last fifteen years database management has grown so quickly that today it is found on machines of every size, from the largest mainframes to the smallest microcomputers. almost without exception, the production information systems which use database technology store the data at a single site, usually local to the main processor. the reasons for this fact are: (1) the most expensive component of a computing system has traditionally been the central processing unit; duplicating central processors at multiple sites, until recently, was cost prohibitive. (2) relatively slow transmission speed and resulting poor performance made distribution infeasible. (3) algorithms and other techniques for solving the problems in (2) above have been slow in coming. (4) commercially available software has been slow in reaching the marketplace because of (1), (2), and (3) above. with the relatively recent advancements in price performance of new machines offered by every vendor, distribution of data to multiple sites is rapidly becoming commonplace. no longer is the cost of the central processing unit an obstacle to distributing the processing associated with distributed databases. the processing power needed for distributed database systems is already in place in most big corporations. microcomputers are being used in almost every department for word processing, spreadsheets and other local computational needs. local area networks connecting these microcomputers allow rapid access to data stored at different sites. john c. peck personal computing rosalie steier cscw '94 aaron tay the interdisciplinary challenge of building virtual worlds randy pausch technical correspondence corporate tech correspondence new sigops dues structure hank levy from washington rosalie steier dms (transformational software maintenance by reuse): a production research system? (keynote talk) ira baxter news track robert fox computer graphics as a discipline (panel session) jeffrey j. mcconnell steve cunningham barbara mones-hattal deborah sokolove technical report column emil volcheck line operations network growth issues (panel session, abstract only) the task of keeping customer impact to a minimum while attempting to grow, modernize and improve the network in today's packet switching environment is becoming increasingly difficult. as the network evolves and the customer demand for higher network availability increases, the need for line operations personnel to reduce customer impact has presented a unique challenge for innovative ideas to accomplish this objective.
although the equipment supplier may provide excellent technical documentation for the provisioning, testing and maintenance of their products, they cannot provide the means to administer, manage and control a large network. this is the sole area where the network provider is responsible for deciding how his network will impact the customer. this presentation will deal with some of the operational problems encountered in growing a network from 5 nodes to 55 nodes over a period of 9 years. some of the subject areas that will be covered are: the maintenance window - is it required?; administrative documentation; utilization of network statistics; hardware retrofits; and software updates - major and minor. these are but a few of the problem areas challenging packet switching network providers today. their resolution is limited only by the imagination of the personnel charged with the responsibility of resolving them. g. d. white can mobile internet be all pervasive (panel session) (title only) kalyan basu cell libraries - build vs. buy; static vs. dynamic (panel) kurt keutzer kurt wolf david pietromonaco jay maxey jeff lewis martin lefebvre jeff burns viewpoint clifford stoll from washington rosalie steier acm forum robert l. ashenhurst president's letter paul abrahams how to get a ph.d. and have a life, too richard e. baker impact of the computer revolution: 1985 - 2001 edward yourdon introduction to the special issue on speech as data chris schmandt nichole yankelovich author index on concurrency control by multiple versions we examine the problem of concurrency control when the database management system supports multiple versions of the data. we characterize the limit of the parallelism achievable by the multiversion approach and demonstrate the resulting space-parallelism trade-off. christos h. papadimitriou paris c. kanellakis president's letter bryan kocher from washington rosalie steier letters and updates jennifer bruer acm forum robert l. ashenhurst technical correspondence corporate tech correspondence acm forum robert l. ashenhurst acm forum robert l. ashenhurst burnout in user services john e. bucher the proposed new computing reviews classification scheme anthony ralston conference preview: siggraph 98: celebrating 25 years of discovery william hefley acm forum robert l. ashenhurst product-line architectures: is 10lb. test enough? (featured talk) will tracz book previews jennifer bruer viewpoint anthony ralston news track robert fox news track rosalie steier trading cards carl machover acm forum robert l. ashenhurst a visit to the new bibliothèque nationale de france and the opening exhibition: tous les savoirs du monde, encyclopédies et bibliothèques de sumer au xxie siècle paul kahn acm forum robert l. ashenhurst technical correspondence corporate technical correspondence president's letter bryan kocher tods---the first three years (1976--1978) david k. hsiao technical correspondence corporate technical correspondence introduction to session r2 (session overview): advanced computer architectures mead and conway wrote in their pioneering book [1]: "many lsi chips, such as microprocessors, now consist of multiple complex subsystems, and thus are really integrated systems rather than integrated circuits." gone are the days when the design of integrated circuits (ics) was under the sole purview of electrical engineers or, even more so, of solid-state device physicists.
in those days, computer system designers were primarily responsible for designing their computer systems based on the standard chips available in the market. but now computer architects are involved in the chip design process right from the beginning. the close interaction between engineers and computer scientists has resulted in increased automation of the whole design and fabrication process. this in turn has led to substantial reductions in cost and in the turnaround time for ic chips. it is predicted that by the late 1980s it will be possible to fabricate chips containing millions of transistors. the devices and interconnections in such very large scale integrated (vlsi) systems will have linear dimensions smaller than the wavelength of visible light [1]. these advances in technology have had a tremendous impact in the area of computer architecture. the long-standing semantic gap between computer software and hardware now seems to be narrowing. several innovative ideas have been developed and implemented in vlsi; examples are the reduced instruction set computer (risc) [2], systolic arrays [3], and the chip computer [4]. ultimately, the circuits for these systems will encompass an entire wafer. these super chips will then be called wafer scale integrated (wsi) systems. at present, a wafer ranging from 2 to 8 inches in diameter can hold the equivalent of 25 to 100 microprocessors, such as the intel 8086. the same wafer can also hold 4 to 20 megabits of dynamic memory if a 0.8-micrometer complementary mos (cmos) process is used. there exists a large difference between the time taken for communication inside a chip and communication across chip boundaries. much of the time in a general computer system is wasted in data movement between various modules. the performance of a computer system can be tremendously improved if as many components as possible can be placed inside a single chip. vlsi offers this possibility to the system designers. simple and regular interconnections lead to cheap implementations, high densities and good performance. algorithms that have simple and regular data flows are particularly suitable for vlsi implementation; examples of such algorithms are matrix manipulations, the fast fourier transform (fft), etc. moreover, the concepts of pipelining and parallelism can be effectively employed to improve the overall execution speed. systolic arrays [3] are a vivid example of a special purpose high performance system that exploits these opportunities offered by vlsi. in this session, we have three carefully refereed papers that exemplify the impact of vlsi on computer architecture. the first one, by kondo et al., describes a simd cellular array processor called the adaptive array processor (aap-1). the system was designed and developed by the authors at the nippon telegraph and telephone public corporation of japan. the aap-1 consists of a 256×256 array of bit-organized processing elements built using 1024 custom n-mos lsis. extensive parallelism offers ultra-high throughput for various types of two-dimensional data processing. the processing speed of aap-1 is shown to exceed that of a 1 mips sequential computer by a factor of approximately 100 for certain applications. the second paper, by hurson and shirazi, is on the design and performance issues of a special purpose hardware recognizer capable of performing pattern matching and text retrieval operations.
because of the similarities between the scanning process during compilation and the pattern matching operations, the proposed module can be used as a hardware scanner. the vlsi design and the space and time complexities of the proposed organization are discussed. the third paper, by p.l. mills, describes the design of a bit-parallel systolic system for matrix-vector and matrix-matrix multiplication. all of the circuits described here can be extended to any specified word length. the required modifications of the circuits for two's complement operation are also outlined in this paper. i truly appreciate the efforts of the authors, without whom this session would not have been possible. i also thank all other authors who submitted papers in this area. i am indebted to the referees who spent a considerable amount of their time in selecting the papers for this session. finally, my sincere thanks are due to terry m. walker and wayne d. dominick for giving me the opportunity to chair this session on advanced computer architectures. l. n. bhuyan acm forum robert l. ashenhurst acm forum robert l. ashenhurst distributed and parallel computing issues in data warehousing (abstract) hector garcia-molina wilburt j. labio janet l. wiener yue zhuge acm forum robert l. ashenhurst authors rosalie steier news track robert fox acm forum diane crawford list of reviewers book preview marisa campbell editorial pointers james maurer president's letter john r. white president's letter john r. white forum diane crawford new publications of interest karen l. mcgraw forum diane crawford what is the proper system-on-chip design methodology (panel) richard goering pierre bricaud james g. dougherty steve glaser michael keating robert payne davoud samani newstrack robert fox president's letter bryan kocher president's letter bryan kocher acm forum robert l. ashenhurst editorial pointers james maurer editorial: information overload ken korman president's letter paul abrahams introduction jerry l. archibald 2nd workshop on open hypermedia systems uffe kock wiil president's letter paul abrahams esp6 (or, snowbound during the great storm of '96): 6th workshop on empirical studies of programmers, alexandria, virginia, usa, jan 5 - 7 deborah a. boehm-davis wayne d. gray the ariane 5 software failure mark dowson a delicate issue: what to do when the state of the practice leads the state of the art robert l. glass editorial pointers james maurer acm forum robert l. ashenhurst technical correspondence corporate tech correspondence president's letter paul abrahams avi'96: an international workshop peter pirolli integrating pc's into the information center (session overview) since the concept of the information center was developed at ibm canada in 1976, it has often raised as many questions as it answered. as originally conceived, the information center was to provide training, support, and data access to selected members of the user community so that they could develop idiosyncratic applications or produce new one-time reports. the goal was to reduce the programming load on an understaffed data processing department by cutting demands for program development. the widespread introduction of low-cost, powerful personal desktop computers had a tremendous impact on the nature and function of traditional data processing departments.
the information center appeared to offer an ideal way to integrate the burgeoning numbers of microcomputers into the more conventional mainframe computer environments of most large business organizations. this session addresses several of the underlying factors involved in both the theory and practice of information center utilization. the first paper examines the background of the information center concept and the ways in which information centers are actually being used by industrial organizations in the san diego area. the investigation points out that information centers are, in fact, very popular, but there is a diversity of management responses about the best way to use microcomputers within the information center framework. the paper also stresses that the shift in computer power to the user can have profound effects on the role of computer science and information center departments in preparing professionals for their business and scientific careers. the second paper examines the use of microcomputers in the information center in a different way. it considers the "best" programming language aspect of the problem. the feasibility of using microcomputer-based implementations of prolog within the information center environment is studied. several distinct applications are examined and examples of prolog programs are presented. the paper concludes that prolog is particularly well suited to both business and scientific problems often faced by microcomputer users. the last paper considers the problem of training the heterogeneous user population that is associated with the typical information center. training must cover the entire spectrum, ranging from computer operations to application programming, to data security. the paper reviews microcomputer-based training packages as well as more traditional training devices. norman e. sondak policy discussion peter j. denning author index authors rosalie steier preamble mark rettig simplicity - a way out of the chasm (keynote talk) mary e. s. loomis the acm scholastic programming contest - 1977 to 1990 (special panel session) william b. poucher james comer richard rinewalt patrick ryan technical correspondence corporate tech correspondence conference review: ifip tc12 stuart c. shapiro public policy issues featured at numerous conferences this column again covers several items: a final update on acm's policy 98 conference and presentations i made at a visualization 97 workshop and a computers, freedom and privacy 98 birds-of-a-feather (bof) meeting. bob ellis backtracking: and the winner is... christopher welty from washington rosalie steier the best is yet to come: an interview with max d. hopper leon kappelman john windsor from washington rosalie steier the grid file: an adaptable, symmetric multikey file structure traditional file structures that provide multikey access to records, for example, inverted files, are extensions of file structures originally designed for single-key access. they manifest various deficiencies, in particular for multikey access to highly dynamic files. we study the dynamic aspects of file structures that treat all keys symmetrically, that is, file structures which avoid the distinction between primary and secondary keys. we start from a bitmap approach and treat the problem of file design as one of data compression of a large sparse matrix. this leads to the notions of a grid partition of the search space and of a grid directory, which are the keys to a dynamic file structure called the grid file. this file system adapts gracefully to its contents under insertions and deletions, and thus achieves an upper bound of two disk accesses for single record retrieval; it also handles range queries and partially specified queries efficiently. we discuss in detail the design decisions that led to the grid file, present simulation results of its behavior, and compare it to other multikey access file structures. j. nievergelt hans hinterberger kenneth c. sevcik
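as an illustrative aside on the grid file abstract above, a minimal sketch in python of the two-step access pattern it describes (locate a grid cell via linear scales and the grid directory, then read the single bucket that can contain the record); the scales, directory layout, and bucket contents are hypothetical simplifications, not the nievergelt, hinterberger, and sevcik design.

```python
from bisect import bisect_right

class GridFileSketch:
    """two-attribute grid file: linear scales partition each axis; every grid cell
    of the directory points at one data bucket (a disk block in the real design)."""

    def __init__(self, x_scale, y_scale, directory, buckets):
        self.x_scale = x_scale      # sorted split points on the first attribute
        self.y_scale = y_scale      # sorted split points on the second attribute
        self.directory = directory  # directory[i][j] -> bucket id
        self.buckets = buckets      # bucket id -> list of records

    def lookup(self, x, y):
        # the (in-core) linear scales give the cell coordinates; in the real grid file,
        # reading the directory element is one disk access and reading the bucket is
        # the second, which is where the two-access bound comes from.
        i = bisect_right(self.x_scale, x)
        j = bisect_right(self.y_scale, y)
        bucket = self.buckets[self.directory[i][j]]
        return [rec for rec in bucket if rec[:2] == (x, y)]

gf = GridFileSketch(
    x_scale=[10, 20], y_scale=[5],
    directory=[[0, 0], [1, 2], [1, 2]],
    buckets={0: [(3, 2, "r1")], 1: [(15, 3, "r2")], 2: [(15, 8, "r3")]},
)
print(gf.lookup(15, 8))   # [(15, 8, 'r3')]
```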
news track bryan kocher editorial peter j. denning cgw founder shares his experiences one of the intentions of this column is to let you share the experiences of some of the computer graphics pioneers. this month, we are most fortunate to present the following column from a friend --- and one of our industry's "movers and shakers" --- randy stickrod, whom many of you will recognize from his days at computer graphics world magazine (cgw). enjoy! randall stickrod carl machover chairman's column annie claybrook envisioning the future art exhibition gordon cameron dealing with change richard anderson news track robert fox siggraph 98 activities and acm/policy 98 wrap-up this column provides a summary of our public policy activities at the siggraph 98 conference as well as a wrap-up of the acm/policy 98 conference. see also http://www.siggraph.org/othercom/pubpolicy.html. bob ellis acm president's letter: $$? & $$! peter j. denning acm forum robert l. ashenhurst president's letter bryan kocher literate programming christopher j. van wyk editorial peter j. denning extreme programming: a discipline of software development (invited paper) (abstract only) you can look at software development as a system with inputs and outputs. as with any system, software development needs negative feedback loops to keep it from oscillating. the negative feedback loops traditionally used --- separate testing groups, documentation, lengthy release cycles, reviews --- succeed at keeping certain aspects under control, but they tend to have only long term benefits. what if we could find a set of negative feedback loops that kept software development under control, but that people wanted to do, even under stress, and that contributed to productivity both short and long term? kent beck selected rationale for nrc recommendations tucker taft authors rosalie steier acm forum robert l. ashenhurst viewpoint: choosing appropriate information systems research methodologies robert d. galliers frank f. land on becoming editor-in-chief of jacm joseph halpern president's letter paul abrahams vice-president's letter dennis frailey news track rosalie steier acm past president's letters david brandin object-oriented software product metrics (tutorial) clark b. archer michael c. stinson technical correspondence corporate tech correspondence interview with sameer parekh james t. dennis author index author index acm forum: letters robert l. ashenhurst why the pc will be the most pervasive visualization platform in 2001 (panel session) hanspeter pfister michael cox peter n. glaskowsky bill lorensen richard greco performance evaluation and control of distributed systems (session overview) distributed systems are modeled with three interesting techniques in the papers of this session. "a task-partitioning problem for a distributed real-time control data processing system" is presented by shi, hu and wang, all of the beijing institute of aeronautics and astronautics. milutinovic, from purdue, and crnkovic, from the university of miami, are the authors of "state transition times for limited access contention multiple access schemes." "performance evaluation of concurrent systems using timed petri nets" is the subject of zuberek's paper. zuberek is from the university of newfoundland. four major factors --- 1) interprocessor messages, 2) cpu rate, 3) memory capacity, and 4) number of processors --- are analyzed by shi, hu and wang. their examination leads to four partitioning principles which, in turn, are used to derive a model of partitioning. while confronting the typical computationally intractable problems, the authors present a static algorithm for a good, if not optimal, partitioning. milutinovic and crnkovic examine limited contention schemes for bus topology local area networks. limited contention schemes attempt to provide the advantages of both contention-based and conflict-free access strategies. the simulation results reported support the authors' recommendations for robust binary adaptive access schemes for certain applications. performance of concurrent systems is modeled by zuberek using m-timed petri nets and the finite state markov chains isomorphic to such nets. m-timed petri nets are petri nets in which firing times are 1) assigned to transitions and 2) exponentially distributed random variables. an example application is presented by the author. stephen nemecek
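as an illustrative aside on the m-timed petri nets mentioned in the session overview above, a toy sketch in python of race semantics with exponentially distributed firing times attached to transitions; the two-place net and the rates are hypothetical and do not come from zuberek's paper.

```python
import random

# marking and transitions of a tiny two-place net; each transition carries a firing
# rate, and its firing time is an exponentially distributed random variable.
places = {"p1": 1, "p2": 0}
transitions = {
    "t1": (["p1"], ["p2"], 2.0),   # (input places, output places, rate)
    "t2": (["p2"], ["p1"], 1.0),
}

def enabled(name):
    inputs, _, _ = transitions[name]
    return all(places[p] > 0 for p in inputs)

clock = 0.0
random.seed(1)
for _ in range(6):
    candidates = [t for t in transitions if enabled(t)]
    if not candidates:
        break
    # race semantics: sample a firing delay for each enabled transition; smallest wins
    delays = {t: random.expovariate(transitions[t][2]) for t in candidates}
    winner = min(delays, key=delays.get)
    clock += delays[winner]
    inputs, outputs, _ = transitions[winner]
    for p in inputs:
        places[p] -= 1
    for p in outputs:
        places[p] += 1
    print(f"t={clock:.3f} fired {winner} marking {places}")
```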
reuse r&d: gap between theory and practice mansour zand vic basili ira baxter martin griss even-andre karlsson dewayne perry introduction the intended theme of this workshop was that of "experience using software process models". this experience was expected to reveal a number of crucial issues in the prescriptive and descriptive use of those models and their underlying support. for example: suitability of existing models --- a model should be effective in describing and prescribing a wide range of actual software processes; it must also address a wide range of activities within each process including, in particular, individual and group interactions. experience using the model should answer the following questions: how usable are the models for guiding and controlling software development and maintenance; how well do the models scale from processes-in-the-small to processes-in-the-large; what activities are covered in the development and maintenance life-cycle; what policies of individual and group coordination and cooperation are prescribed by the models; and what mechanisms (tools) and structures are needed for supporting and/or enforcing these policies? instantiation of process models --- a software process model must be fully or partially instantiated to obtain a specific process definition for all or part of a project. generic process models, which can be customized for particular projects by being instantiated in various ways, enhance the potential for process model reuse. what are the characteristics of widely applicable process models? how is an appropriate model (or appropriate customization of a generic model) selected to meet the specific needs of a particular organization or project? what are the mechanisms needed for customizing and creating enactable model instances?
dynamics of process models and model instances --- process models will evolve over time as experience accumulates, the requirements change, and the applicability of a given model is extended; process model instances may change during their enactment, either adding detail not prescribed by the model (refinement) or modifying (and possibly creating) detail to accommodate unpredictable contingencies. to what extent can software process models be evolved or adapted? how are the models verified and validated, and what are the mechanisms for gaining and using feedback on model usage? how are instances refined and modified dynamically? the basic agenda for the workshop was to begin with the current state of the art, distill the experience that we have gained to date, and then generalize from that the directions we ought to pursue. review of the state of the art (keynoter: bill curtis; reporter: watts humphrey). a summary discussion of the current state of the art in enactable software process models. mechanisms (keynoter: takuya katayama; reporter: gail kaiser). a discussion of experience with notation in actual models --- the syntactic concerns of process models. what is right/wrong with our current facilities? what are the lessons learned? policies (keynoter: mark dowson; reporter: peter feiler). a discussion of experience with domains in actual models --- the semantic concerns of process models. what are the lessons to be learned about model-specific semantics? model-independent semantics? data (keynoter: jack wileden; reporter: lolo penedo). generalizing and synthesizing from our experience, what are the object base and typing issues that need to be addressed for process modeling? control (keynoter: bob balzer; reporter: dewayne perry). generalizing and synthesizing from our experience, what are the dynamism issues that need to be addressed for process modeling? emerging issues --- summary (keynoter: sam redwine; reporter: david garlan). as a result of the discussions, what are the emerging issues given our base of some experience? just as the position papers seemed to establish an agenda of their own, independent of the call for papers, so too did the discussions establish an agenda of their own, independent of the published one. a number of important general topics and issues arose from the discussions (as you will read more about in the ensuing session summaries). in reviewing the state of the art, the discussion delineated the following general objectives of process modeling: to provide a precise framework for understanding, experimenting with, and reasoning about the process; to facilitate process automation; and to provide a basis for process control. with this as the basic context, the discussion centered around the notions of activities, commitments, and policies. what is an activity? is it an atomic or a decomposable unit? how is it described? what are the meaningful units of commitment? what is the result of breaking a commitment? are there meaningful consequences? what level of constrainedness should policies have? should they be viewed as restrictions, or as scopes of autonomy? how are conflicts among policies handled or resolved? one thing that became abundantly clear in the experience discussions was that we had very little experience. we find ourselves suffering from the rather frustrating "chicken and egg" syndrome: we need more experience in order to determine the notation and underlying abstractions that we need, but we need notations and underlying abstractions in order to get that experience.
the important question, in generalizing from the syntactic and semantic details of existing process models, is "what is new here?" some of the new wrinkles are constraints, multiple views, and multiple levels of context (or, perhaps, intent). on the data side, we need to develop a type model appropriate to process modeling that inherently reflects the assumptions, presumptions and observations found in modeling software processes. on the control side, we need to be able to express various aspects of a model at different levels depending on the intended use of those descriptions (for example, we may find a particular aspect better expressed as a goal in one context and as a recipe in another). in summary, we need more experience, we need more self-application (that is, we need to practice what we preach), and we need to look to other disciplines to see how they have solved their process problems. despite this lack of experience, the workshop was successful in that we made some progress in further clarifying some of the important issues and we broadened the areas of concern. we continue to have a rich and open field of research. dewayne e. perry authors rosalie steier news track robert fox programming pearls jon bentley the john henry syndrome (panel session) (abstract only): humans vs. machines as fpga designers human designers have done amazing things with fpgas. these designs challenge our assumptions about the speeds and densities achievable by programmable hardware. but with multi-million gate designs and increasingly complex fpga architectures, is there really any place for the hand-crafted design anymore? is there a way that cad tools can incorporate the techniques and knowledge of designers to create high-density, high-performance implementations automatically? or will the tools and architectures always lag the applications, thereby guaranteeing abundant job opportunities for fpga design experts? herman schmit ray andraka philip friedin satnam singh tim southgate editorial peter wegner high level design automation tools (session overview) as vlsi circuits become more complex, traditional design methods and tools have become inadequate to handle the job of assisting a designer in producing a correct design in a reasonable amount of time. new tools that support top-down design techniques must be used. checking the functionality of a large vlsi design is analogous to checking the correctness of a large software design, but other factors, such as testability and manufacturability, must also be considered. this session presents some work done in designing tools to support the higher levels of this design process. first we identify some of the stages in the design of vlsi. the first paper, by agrawal and poon, gives one view of the vlsi design process, with emphasis on the manufacturability of the resulting design. once the functionality of the system has been specified, the architecture of the system is specified as an interconnection of high level blocks described behaviorally. this architectural level design is then refined into a functional level design. a design at the functional level consists of behaviorally defined blocks usually smaller than blocks at the architectural level. examples are memories, registers, alus, etc. (the reader should be warned that there is no generally accepted terminology to describe these refinement levels.)
blocks at the functional level are well defined in the sense that they can easily be refined into a design at a lower level, often through use of a library of descriptions of functional level components. simulation is extremely important in assessing the correctness of a functional level design. the functionality of the simulated circuit can be checked against higher level specifications, and some high level timing analysis can be done. an example of a functional simulator is helix, described by coelho in the second paper. the next step is gate level design, using either logic gates or gate array macros as primitives. gate level simulation done here offers a more detailed look at the function and timing of the circuit. stick level and detailed timing simulation is done after gate level design. we consider these, as well as layout and routing, as low level design issues beyond the scope of this session. more and more of the cost of a digital product comes from testing, making sure that what is manufactured works according to specification. this is done by applying test vectors to the product. these vectors are evaluated by doing fault simulation of the design. fault simulation has traditionally been done at the gate level. for complex designs this has become impractical because of memory requirements and extremely long simulation times. the solution to this problem is to do fault simulation at a level higher than the gate level. however, only gate level fault models have been widely accepted as accurate. simulating some areas of the circuit at the gate level and the rest at the functional level allows detailed fault simulation of regions of interest to be done quickly. the third paper, by rogers and abraham, presents a fault simulator that allows this simulation technique. scott davidson keynote speaker jerrier a. haddad author index author index news track rosalie steier minutes of 3 december 1996 asiswg/asisrg meeting with tri-ada'96 clyde roby president's letter bryan kocher news track robert fox results introduction daniel m. roy acm forum robert l. ashenhurst news of the profession t. r. girill viewpoint james h. morris a conversation with steve jobs next, inc. president and ceo steve jobs (left), and vp of sales and marketing, dan'l lewin, discuss the goals of the new company, and the next computer system itself. peter j. denning karen a. frenkel technical correspondence corporate tech correspondence editorial peter j. denning on code reuse: a response don gotterbarn keith miller simon rogerson acm forum robert l. ashenhurst president's letter bryan kocher first isew cleanroom workshop summary graeme smith acm forum robert l. ashenhurst editorial peter j. denning programming pearls jon bentley guest editors' introduction: special issue on uniform random number generation raymond couture pierre l'ecuyer authors rosalie steier from washington diane crawford cebit '97 belinda frasier acm forum robert l. ashenhurst chikids: a look back and a look forward allison druin computing perspectives maurice v. wilkes association for computing machinery rosalie steier reverse engineering and system renovation - an annotated bibliography to facilitate research in the field of reverse engineering and system renovation we have compiled an annotated bibliography. we put the contributions not only in alphabetical order but also grouped by topic, so that readers focusing on a certain topic can read their annotations in the alphabetical listing.
we also compiled an annotated list of pointers to information about reverse engineering and system renovation that can be reached via the internet. for the sake of ease we also incorporated a brief introduction to the field of reverse engineering. m. g. j. van den brand p. klint c. verhoef acm forum robert l. ashenhurst computers and playfulness: humorous, cognitive, and social playfulness in real and virtual workplaces julie e. kendall jane webster authors rosalie steier protein geometry as a function of time the title of this talk is intentionally obscure since the range of time to be discussed is not given. the talk will start with decades (i.e., history). what did a protein look like in the 1940s? what were the methods that provided the "pictures"? what mathematical support was useful? how did the problems of protein structure develop in the ensuing decades? protein stability - reduce the time scale from decades to days or minutes. what did "denaturation" mean in the '30s through the '50s? what does it mean today? what did the molecular biology revolution do to our views of the structure of proteins? a multiple answer question. the genes and their immediate products are linear polymers. the comparison and analysis of simple linear lists of characters has absolutely required the development of new mathematical tools to even get off the starting blocks. data acquisition, data banks, and data analysis are all moving along in high gear. the problems produced by the division of science into smaller and smaller sections, each with its own developing vocabulary, are increasing exponentially. today chemical investigations can be carried out at the femtosecond time scale. do we on the biological side really care? most of biology is in the micro to multisecond range or longer, but even macromolecules have important functions in the picosecond region. refinement of x-ray structures is not as good as it should be. where is the problem? nmr, like all other spectroscopic procedures, has an intrinsic time base. x-ray diffraction is better for static structures; nmr should spend much more of its effort on time specifications where, in principle, it beats the x-ray procedures hands down. cryoelectron microscopy is pushing spatial resolution to lower and lower limits. bridging the gaps between the em and the x-ray/nmr regions in both space and time requires major mathematical help. as solutions to some of these problems appear, the problems are simply made worse. frederic m. richards acm forum robert l. ashenhurst domain names (panel session, abstract only): more questions than answers the darpa domain name project raises several interesting questions about the underlying assumptions and issues involved in providing a hierarchical naming scheme in an environment the size of the internet. for example, the project was motivated by the growth of the internet. however, because the proliferation of hosts took place within the sites belonging to the internet (due in part to an increase in the number of workstations and other small machines), and not necessarily in the number of sites, a less general solution that maintained a centralized and fully distributed site table, with each site running a name server for its hosts, would have been sufficient. in addition, the distributed implementation of the name-to-address database uncovered several fundamental naming and addressing issues that had been ignored by the original centralized approach.
for example, name servers and resolvers must be able to select a reasonable internet address for hosts connected to the internet at multiple points. the decentralized solution also raises questions about authority over host names. larry l. peterson programming pearls jon bentley a note from the editor art karshmer reuse research: contributions, problems and non-problems murali sitaraman maggie davis premkumar devanbu jeffrey poulin alexander ran bruce weide acm forum robert l. ashenhurst from washington rosalie steier chairman's introduction ed hassler the market environment for database machines and servers (keynote address) since the early '70s, the industrial research community has pursued the elusive dream of a commercially successful database machine: a dedicated-function computer based on an architecture specialized and optimized for database functions, with price and performance characteristics substantially beyond what can be achieved with general purpose software and hardware. during this period, several forces have conspired to frustrate achievement of this goal - forces which are for the most part independent of the dbm research itself. chief among these is the accelerating pace of advances in microelectronics, which simultaneously creates a moving target for database machine vendors while focusing the beleaguered computer manufacturer's r&d resources on trying to keep up with protracted product life cycles. meanwhile, successive generations of relational database software products are incorporating sophisticated performance techniques that further challenge the database hardware vendor. the database server, in contrast, finds itself in a far more hospitable environment. at one time, the database machine (say, in the role of a "backend") and the database server in a network were viewed as minor variations on a common theme. now the differences are understood to be essential, bringing the server concept in tune with prevailing trends as surely as the backend is in conflict with them. the opportunity for database servers is fueled by the growth of distributed computing and the strength of the "open systems" movement, leading to standards at multiple levels of the relational database architecture. as database server interface standards (de facto or otherwise) are established, a market for these subsystems will emerge which is both very large and broadly based. but, if the market for database machines is to expand beyond narrowly defined niches, product suppliers must overcome far greater obstacles. gene lowenthal keynote address: computers versus common sense douglas b. lenat forum: the date crisis diane crawford performance systems technology (pst) and computer-based instruction (cbi): tools for instructional designers in the 21st century, part i gloria a. reece technical correspondence corporate tech correspondence authors rosalie steier end user applications (panel session, title only) n. shacham a. maurer j. schiller president's letter adele goldberg implicit surfaces '98 jules bloomenthal chairman's column: business news annie claybrook viewpoint john dobson brian randell personal computing rosalie steier technical correspondence corporate technical correspondence response time analysis of multiprocessor computers for database support comparison of three multiprocessor computer architectures for database support is made possible through evaluation of response time expressions.
these expressions are derived by parameterizing the algorithms performed by each machine to execute a relational algebra query. parameters represent properties of the database and components of the machines. studies of particular parameter values exhibit response times for conventional machine technology; for low selectivity, high duplicate occurrence, and parallel disk access; for increasing numbers of processors; and for improved communication and processing technology. roger k. shultz roy j. zingg introduction scott lang colleen cleary news track rosalie steier technical correspondence corporate tech correspondence designing a portable natural language database query system one barrier to the acceptance of natural language database query systems is the substantial installation effort required for each new database. much of this effort involves the encoding of semantic knowledge for the domain of discourse, necessary to correctly interpret and respond to natural language questions. for such systems to be practical, techniques must be developed to increase their portability to new domains. this paper discusses several issues involving the portability of natural language interfaces to database systems, and presents the approach taken in co-op --- a natural language database query system that provides cooperative responses to english questions and operates with a typical codasyl database system. co-op derives its domain-specific knowledge from a lexicon (the list of words known to the system) and the information already present in the structure and content of the underlying database. experience with the implementation suggests that strategies that are not directly derivative of cognitive or linguistic models may nonetheless play an important role in the development of practical natural language systems. s. jerrold kaplan viewpoint lawrence snyder the jcd table of contents experiment t. r. girill nsf workshop on a software research program for the 21st century victor r. basili laszlo belady barry boehm frederick brooks james browne richard demillo stuart i. feldman cordell green butler lampson duncan lawrie nancy leveson nancy lynch mark weiser jeannette wing authors rosalie steier authors rosalie steier simulation in the analog days dave sieg user/system interfaces and natural language processing (session overview) this session consists of three papers that discuss various aspects of user/system interfaces and natural language processing. topics to be addressed will include rapid prototyping, windowing, and natural language processing for a relational database management system. the first paper, by j. reese et al., presents a description of the design of the graphical user interface design system (guides) for rapid prototyping and testing of user interfaces. guides utilizes a graphics editor and a user interface management system (uims) for the design and maintenance of user interfaces. users can execute the designed interfaces and perform interface-related tasks that are monitored by the guides system for subsequent analysis. an example illustrating the rapid prototyping capabilities of guides is also presented. an overview and evaluation of windowing is presented in the next paper, by richard holcomb and alan l. tharp. windowing and windowing operations are first defined and then evaluated with respect to how four specific task-aid criteria and five memory-aid criteria are met utilizing windowing.
the paper also includes a comparison of windowing, menus, and command languages with respect to the specified task and memory aids. the last paper, by david w. embley and roy e. kimbrell, discusses the implementation of a schema-driven natural-language processor. this processor, mnemos, utilizes an augmented schema to process natural language queries, thereby enhancing its portability and ease of implementation. schema augmentation is achieved through the assumption that the database schema is relational and derived from an entity-relationship model, as well as the use of data frames (a higher construct) for attribute, entity, and relationship descriptions. initial experimental results and efficiency considerations are also presented. christie d. michelson corrigendum: editorial: taking stock joseph halpern the siggraph 98 art gallery: touchware lynn pocock developing teaching resources for reuse and publishing in the cstc deborah knox scott grissom acm forum robert l. ashenhurst editorial: international browser day steven pemberton authors rosalie steier the state of the union - siggraph's financial status nan c. schaller information for authors diane crawford thomas lambert robert fox andrew rosenbloom funding for experimental computer science for almost a decade funding in experimental computer science has faced a problem similar to the weather --- everyone talked about it but no one did anything about it. fortunately, there are now some signs of relief. many industrial and government agencies are trying to find effective ways to provide environments in which experimental computer science can thrive in academia. this session brings together two leaders from government, one from industry and one from academia. each is uniquely qualified to address the issue of funding for experimental computer science. dr. adrion heads the national science foundation's funding efforts in this area. dr. druffel heads the department of defense's newly created software technology initiative. dr. matsa is a leader in ibm's effort to support extramural research. and, finally, dr. ritchie heads the university of washington's computer science department, which was the recipient of nsf's first experimental computer science award. richard adrion larry druffel samuel matsa robert ritchie equipment exhibits carl machover "seminole" graphics rosalee wolfe the politics of design: representing work liam j. bannon viewpoint donald a. norman technical correspondence corporate tech correspondence acm forum robert l. ashenhurst acm forum robert l. ashenhurst acm forum robert l. ashenhurst newsletter of the west coast working group on fortran language development loren p. meissner news track rosalie steier the networks course: old problems, new solutions shakil akhtar nizar al-holou mark fienup gail t. finley robert s. roos sam tannouri operations considerations in a large private packet switching network in early 1977, southern bell began to study ways to reduce terminal and leased line costs associated with internal computer communications. because a large volume of these communications was of an asynchronous, character-mode nature, packet switching emerged as the most attractive solution to reducing these costs. in may 1979, the first application went on-line with one switch, one host concentrator and three terminal concentrators. operation of this network was simple, with operations and planning residing in the same work group.
today, the network consists of twenty tandem switches, 185 local switches and 85 packet assembler/disassemblers, interconnected by 15,928 miles of trunk facilities. the network architecture has evolved to geographically distributed "principal switching points" interconnected through a single major switching center in atlanta, georgia. each "principal switching point" serves an area of company interest by supporting a number of local switches and concentrators. as the complexity of the network has increased, so has the complexity of network operation. operations have been both functionally and geographically differentiated to keep up with the evolution of the network, and to stay consistent with changing packet switch architecture. this talk will cover the network operations and maintenance considerations associated with the operation of today's very large network. r. d. stubbs l. dee swymer t. l. quinn news track rosalie steier editorial: introducing the acm journal on resources in computing lillian n. cassel edward a. fox acm forum robert l. ashenhurst conference preview: cscw 2000 marisa campbell acm forum robert l. ashenhurst acm president's letter paul abrahams acm forum robert l. ashenhurst the average time until bucket overflow it is common for file structures to be divided into equal-length partitions, called buckets, into which records arrive for insertion and from which records are physically deleted. we give a simple algorithm which permits calculation of the average time until overflow for a bucket of capacity n records, assuming that record insertions and deletions can be modeled as a stochastic process in the usual manner of queueing theory. we present some numerical examples, from which we make some general observations about the relationships among insertion and deletion rates, bucket capacity, initial fill, and average time until overflow. in particular, we observe that it makes sense to define the stable point as the product of the arrival rate and the average residence time of the records; then a bucket tends to fill up to its stable point quickly, in an amount of time almost independent of the stable point, but the average time until overflow increases rapidly with the difference between the bucket capacity and the stable point. robert b. cooper martin k. solomon technical correspondence corporate tech correspondence president's letter paul abrahams acm forum robert l. ashenhurst transcript of the popular interactive television talk show j'accuse: should virtual advertising be regulated? richard epstein acm forum robert l. ashenhurst language and models robert g. babb
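as a closing illustrative aside on the cooper and solomon bucket model summarized above, a minimal monte carlo sketch in python (hypothetical rates and capacities, standing in for the authors' analytic algorithm): insertions arrive as a poisson stream, each record stays an exponentially distributed time, the stable point is the arrival rate times the mean residence time, and the estimated mean time until the bucket first holds n records grows quickly as n moves above that stable point.

```python
import random

def mean_time_until_overflow(capacity, arrival_rate, mean_residence, trials=1000):
    """monte carlo estimate of the mean time until a bucket first holds `capacity`
    records, with poisson insertions and exponentially distributed residence times."""
    total = 0.0
    for _ in range(trials):
        t, occupancy = 0.0, 0
        while occupancy < capacity:
            # competing exponential clocks: one insertion stream plus one deletion
            # clock per resident record (total deletion rate = occupancy / mean_residence)
            insert = random.expovariate(arrival_rate)
            delete = (random.expovariate(occupancy / mean_residence)
                      if occupancy else float("inf"))
            if insert < delete:
                t, occupancy = t + insert, occupancy + 1
            else:
                t, occupancy = t + delete, occupancy - 1
        total += t
    return total / trials

arrival_rate, mean_residence = 5.0, 2.0
print("stable point:", arrival_rate * mean_residence)   # 10.0
for capacity in (12, 16, 20):
    print(capacity, round(mean_time_until_overflow(capacity, arrival_rate, mean_residence), 1))
```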